
Exploring Explainable AI in Dialogue Systems: A Conversation with Dr. Petr Motlicek

In the sixth episode of the ELOQUENCE podcast, we had the pleasure of speaking with Dr. Petr Motlicek, Senior Researcher at the Idiap Research Institute and Associate Professor at Brno University of Technology. With a rich background in speech processing and spoken language understanding, Petr brings both technical depth and practical insight into the evolving world of trustworthy and explainable AI.

From Speech Coding to Trustworthy AI

Dr. Motlicek’s journey into AI began with his studies in electrical engineering and later evolved through his PhD work in speech coding. Early in his career, he focused on how to transmit human speech efficiently using data-driven models, work that laid the foundation for his current research on machine learning and speech communication. As technology advanced, his focus shifted from pattern recognition towards genuine understanding in human-AI interaction.

What Is Interpretable and Trustworthy AI?

In ELOQUENCE, Dr. Motlicek leads tasks within WP3 and WP2, focusing on interpretable and trustworthy AI systems for dialogue modeling. He explains that for AI to be truly reliable, it must not only produce outputs but also explain the logic behind its decisions. This is particularly critical in high-risk applications, such as emergency call analysis, where decisions based on AI outputs must be both accurate and verifiable. In this context, interpretability helps build user trust and provides transparency about the model’s decision-making process.

Human-in-the-Loop for High-Risk AI

Petr emphasises the role of human-in-the-loop processes in high-risk scenarios. For example, in emergency services, AI may support operators by analysing call centre data, but final decisions must still involve human judgment. To reduce risks like hallucinations or incorrect outputs, the system can be designed to retrieve verified information from internal databases and incorporate feedback from human operators. This type of knowledge infusion is central to ELOQUENCE’s development of reliable AI tools.
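
To make the knowledge-infusion pattern Petr describes more concrete, here is a minimal, hypothetical sketch: the assistant first looks up verified facts in an internal database, returns a suggestion only when it is grounded in such an entry, and otherwise escalates to a human operator. The `VERIFIED_FACTS` table, the `suggest_response` helper, and the escalation rule are illustrative assumptions, not the ELOQUENCE implementation.

```python
from dataclasses import dataclass

# Hypothetical "internal database" of verified guidance an emergency-call
# assistant might consult (illustrative only).
VERIFIED_FACTS = {
    "gas leak": "Advise the caller to leave the building and avoid open flames.",
    "power outage": "Check the outage map before dispatching a crew.",
}


@dataclass
class Suggestion:
    text: str
    grounded: bool            # True if backed by a verified database entry
    needs_human_review: bool  # True if the case must go to a human operator


def suggest_response(transcript: str) -> Suggestion:
    """Draft a response grounded in verified facts; escalate otherwise."""
    for topic, fact in VERIFIED_FACTS.items():
        if topic in transcript.lower():
            # Knowledge infusion: the suggestion quotes the verified entry.
            return Suggestion(text=fact, grounded=True, needs_human_review=False)
    # No verified knowledge found: in a high-risk setting the model should
    # not answer unchecked, so the case is handed to a human operator.
    return Suggestion(
        text="No verified guidance found for this call.",
        grounded=False,
        needs_human_review=True,
    )


if __name__ == "__main__":
    print(suggest_response("Caller reports a gas leak in the kitchen"))
    print(suggest_response("Caller asks about a lost cat"))
```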

Training and Optimising Dialogue Systems

Within WP2, Petr also contributes to adapting LLMs for complex, real-world conversational data. His team has compiled and integrated a large collection of dialogue datasets to train models capable of understanding the flow and structure of conversations. These models aim to assess whether a customer interaction was properly resolved, providing valuable insights for quality assurance in call centre management.
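
As an illustration of the resolution-assessment task described above, the sketch below labels a call-centre dialogue by scanning its closing turns for resolution cues. The cue lists and the `ResolutionLabel` type are hypothetical placeholders; in the project a trained dialogue model would produce this label rather than a keyword heuristic.

```python
from enum import Enum


class ResolutionLabel(Enum):
    RESOLVED = "resolved"
    UNRESOLVED = "unresolved"
    UNCLEAR = "unclear"


# Hypothetical surface cues; a trained dialogue model would replace this heuristic.
RESOLVED_CUES = ("that solved it", "issue is fixed", "glad i could help")
UNRESOLVED_CUES = ("still not working", "escalate", "call back later")


def assess_resolution(turns: list[str]) -> ResolutionLabel:
    """Label a dialogue by inspecting its final turns for resolution cues."""
    tail = " ".join(turns[-3:]).lower()  # the closing turns usually carry the signal
    if any(cue in tail for cue in RESOLVED_CUES):
        return ResolutionLabel.RESOLVED
    if any(cue in tail for cue in UNRESOLVED_CUES):
        return ResolutionLabel.UNRESOLVED
    return ResolutionLabel.UNCLEAR


if __name__ == "__main__":
    dialogue = [
        "Agent: How can I help you today?",
        "Customer: My router keeps dropping the connection.",
        "Agent: Try restarting it and updating the firmware.",
        "Customer: Thanks, that solved it!",
    ]
    print(assess_resolution(dialogue))  # ResolutionLabel.RESOLVED
```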

The Challenge of Model Explanation

A key focus of Dr. Motlicek’s work is optimising model explanations within dialogue systems. Since many AI systems operate as “black boxes,” offering predictions without context, it is crucial to give users clarity about how and why a certain output was generated. This becomes even more important as AI regulations, such as the upcoming EU AI Act, require explainability in high-stakes use cases.

Looking Ahead: Trends and Advice

Petr shared his thoughts on promising trends in the field, including:

  • Causal reasoning: Moving beyond correlations to understand cause-and-effect in AI decision-making.
  • Bias mitigation: Addressing imbalanced training data and rare pattern detection.
  • AI regulation: Ensuring responsible use of AI through transparent and ethical frameworks.

He also offered advice for young AI researchers: balance knowledge of existing research with creativity—don’t be afraid of ideas that seem too bold at first.

As the ELOQUENCE project continues, Dr. Motlicek looks forward to deeper collaboration and delivering concrete results. His work highlights the importance of making AI not just smart, but trustworthy, explainable, and human-aware.

Check out the full interview here.