In the sixth episode of the ELOQUENCE podcast, we had the pleasure of speaking with Dr. Petr Motlicek, Senior Researcher at the Idiap Research Institute and Associate Professor at Brno University of Technology. With a rich background in speech processing and spoken language understanding, Petr brings both technical depth and practical insight into the evolving world of trustworthy and explainable AI.
From Speech Coding to Trustworthy AI
Dr. Motlicek’s journey into AI began with his studies in electrical engineering and later evolved through his PhD work in speech coding. Early in his career, he focused on how to efficiently transmit human speech using data-driven models—an effort that laid the foundation for his current work on machine learning and speech communication. As technology advanced, so did his focus, shifting from pattern recognition to true understanding in human-AI interaction.
What Is Interpretable and Trustworthy AI?
In ELOQUENCE, Dr. Motlicek leads tasks within WP3 and WP2, focusing on interpretable and trustworthy AI systems for dialogue modeling. He explains that for AI to be truly reliable, it must not only produce outputs but also explain the logic behind its decisions. This is particularly critical in high-risk applications, such as emergency call analysis, where decisions based on AI outputs must be both accurate and verifiable. In this context, interpretability helps build user trust and provides transparency about the model’s decision-making process.
Human-in-the-Loop for High-Risk AI
Petr emphasises the role of human-in-the-loop processes in high-risk scenarios. For example, in emergency services, AI may support operators by analysing call centre data, but final decisions must still involve human judgment. To reduce risks like hallucinations or incorrect outputs, the system can be designed to retrieve verified information from internal databases and incorporate feedback from human operators. This type of knowledge infusion is central to ELOQUENCE’s development of reliable AI tools.
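To make the idea concrete, here is a minimal sketch of how such a pipeline could be wired together: the model is only allowed to answer from passages retrieved out of a verified internal database, and a human operator always reviews the suggestion before anything happens. All names here (`retriever`, `llm`, `operator`, `Suggestion`) are hypothetical placeholders for illustration, not part of the ELOQUENCE codebase.

```python
# Sketch: retrieval-grounded suggestion with a human-in-the-loop gate.
# Every object below is a hypothetical stand-in, not a real ELOQUENCE component.

from dataclasses import dataclass

@dataclass
class Suggestion:
    answer: str
    sources: list[str]   # ids of verified passages the answer is grounded on
    confidence: float    # calibrated confidence reported alongside the answer

def suggest_response(transcript: str, retriever, llm, top_k: int = 5) -> Suggestion:
    """Ground the LLM on verified documents to limit hallucinated content."""
    # 1. Retrieve verified passages from the internal knowledge base.
    passages = retriever.search(transcript, top_k=top_k)

    # 2. Ask the model to answer only from the retrieved evidence.
    prompt = (
        "Answer using only the evidence below and cite passage ids.\n\n"
        + "\n".join(f"[{p.id}] {p.text}" for p in passages)
        + f"\n\nCall transcript:\n{transcript}"
    )
    answer, confidence = llm.generate_with_confidence(prompt)

    return Suggestion(answer=answer,
                      sources=[p.id for p in passages],
                      confidence=confidence)

def handle_call(transcript: str, retriever, llm, operator) -> str:
    suggestion = suggest_response(transcript, retriever, llm)
    # 3. Human-in-the-loop: the operator confirms or overrides the suggestion
    #    before any decision is acted upon.
    return operator.review(suggestion)
```

The key design choice is that the operator's review feeds back into the loop: the system proposes, the human decides, and corrections can later be used to improve retrieval and generation.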
Training and Optimising Dialogue Systems
Within WP2, Petr also contributes to adapting LLMs for complex, real-world conversational data. His team has worked on compiling and integrating a large collection of dialogue datasets to train models capable of understanding the flow and structure of conversations. These models aim to assess whether a customer interaction was properly resolved, providing valuable insights for quality assurance in call centre management.
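One simple way to frame this task, shown in the sketch below, is binary classification over an entire dialogue: did the interaction end resolved or not? This is an illustrative example only; the model checkpoint name is a placeholder, and in practice such a classifier would be fine-tuned on the compiled dialogue datasets mentioned above.

```python
# Illustrative sketch (not the project's actual pipeline): scoring whether a
# customer interaction was resolved, treated as sequence classification.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "your-org/dialogue-resolution-classifier"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def format_dialogue(turns: list[dict]) -> str:
    # Flatten speaker-tagged turns so the model sees the conversational flow.
    return "\n".join(f"{t['speaker']}: {t['text']}" for t in turns)

def resolution_probability(turns: list[dict]) -> float:
    """Return the probability that the customer's issue was resolved."""
    text = format_dialogue(turns)
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

example = [
    {"speaker": "customer", "text": "My internet has been down since this morning."},
    {"speaker": "agent", "text": "I have reset your line; it should be back now."},
    {"speaker": "customer", "text": "It works, thank you!"},
]
print(f"P(resolved) = {resolution_probability(example):.2f}")
```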
The Challenge of Model Explanation
A key focus of Dr. Motlicek’s work is on optimising model explanations within dialogue systems. Since many AI systems operate as “black boxes,” offering predictions without context, it’s crucial to provide users with clarity about how and why a certain output was generated. This becomes even more important as AI regulations—like the upcoming EU AI Act—require explainability in high-stakes use cases.
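One common pattern for opening up such a black box, sketched below, is to require the model to return not only a verdict but also the dialogue turns it relied on, so an operator can check the reasoning against the source. The JSON schema and the `llm.complete` call are assumptions made for this sketch, not a specific API from the project.

```python
# Sketch: a self-explaining dialogue classifier that must cite its evidence.
# The prompt format and `llm.complete` interface are hypothetical.

import json

EXPLANATION_PROMPT = """\
Decide whether the customer's issue was resolved.
Return JSON with exactly these fields:
  "resolved": true or false,
  "evidence_turns": list of turn indices supporting the decision,
  "rationale": one sentence explaining the decision.

Dialogue:
{dialogue}
"""

def classify_with_explanation(turns: list[str], llm) -> dict:
    dialogue = "\n".join(f"[{i}] {t}" for i, t in enumerate(turns))
    raw = llm.complete(EXPLANATION_PROMPT.format(dialogue=dialogue))
    result = json.loads(raw)
    # Cited turn indices let a human verify the output against the transcript
    # instead of trusting an unexplained label.
    assert all(0 <= i < len(turns) for i in result["evidence_turns"])
    return result
```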
Looking Ahead: Trends and Advice
Petr shared his thoughts on promising trends in the field, including:
- Causal reasoning: Moving beyond correlations to understand cause-and-effect in AI decision-making.
- Bias mitigation: Addressing imbalanced training data and rare pattern detection.
- AI regulation: Ensuring responsible use of AI through transparent and ethical frameworks.
He also offered advice for young AI researchers: balance knowledge of existing research with creativity—don’t be afraid of ideas that seem too bold at first.
As the ELOQUENCE project continues, Dr. Motlicek looks forward to deeper collaboration and delivering concrete results. His work highlights the importance of making AI not just smart, but trustworthy, explainable, and human-aware.
Check out the full interview here.