As artificial intelligence systems become increasingly embedded in critical decision-making processes, the question of trustworthiness has never been more central. From emergency response and public services to multilingual communication and information access, AI systems must be reliable, safe, transparent, and fair, especially when operating in high-stakes environments.
To deepen this discussion, we are pleased to announce a joint webinar hosted by ELOQUENCE and TrustLLM, two Horizon Europe projects dedicated to advancing trustworthy, multilingual AI for Europe.
This session will bring together experts from both projects to explore how large language models can be made safer, more reliable, and more inclusive, with a particular focus on real-world applications where the cost of error is high.
What You Can Expect from the Webinar
During this one-hour session, we will discuss:
1. Trustworthy AI in High-Stakes Scenarios
2. Multilinguality, Bias & Underrepresented Languages
3. The European Approach to Trustworthy LLMs
Participants will have the opportunity to interact directly with our speakers, ask questions and share insights.
Meet Our Speakers
Annika Simonsen (TrustLLM) is a PhD student at the University of Iceland. Her PhD is part of the TrustLLM project, where her research focuses on the alignment of Germanic language models, including building high-quality training and evaluation data. TrustLLM is building Europe's most trustworthy large language models, with a strong emphasis on transparency, fairness, privacy, and linguistic inclusivity. The project aims to develop open and trustworthy AI foundations compatible with European regulations, values, and multilingual needs.
Dr. Petr Motlicek (ELOQUENCE) received the M.Sc. degree in electrical engineering and the Ph.D. degree in computer science from Brno University of Technology (BUT), Czech Republic, in 1999 and 2003, respectively. He is a senior researcher at the Idiap Research Institute in Martigny, Switzerland, an external teacher at École Polytechnique Fédérale de Lausanne, and an associate professor at the BUT Faculty of Information Technology (FIT). His research focuses on advanced audio and speech processing technologies, including speech analytics, speech and speaker recognition, and information retrieval directly from speech signals, as well as the broader areas of conversational analysis and language understanding. Many of the resulting applications are developed in close collaboration with European security and governmental law enforcement agencies, including INTERPOL and Europol. Dr. Motlicek is also a member of the CERIS expert group, where he contributes to the European Commission's efforts to shape new research and innovation priorities in the security domain. His recent work increasingly leverages large pre-trained models for speech processing – ranging from call-center analytics to the use of autonomous agents for human–machine teaming and support in high-risk scenarios such as air-traffic communication.
Event Details
- Date: November 27, 2025
- Time: 13:00 CET
- Registration Link
Upon registering, you will receive the Zoom meeting link and a calendar invitation directly via email. The session will be recorded, and an edited version (featuring the speakers and voluntary participant interactions) will be shared on our social media channels.
Ensuring trust, transparency, and inclusivity in AI requires collective commitment across research, innovation, and policy. This webinar marks an important step in aligning our efforts across Horizon Europe projects and advancing safer multilingual LLMs.
Join us as we explore solutions, challenges and the road ahead for trustworthy AI in Europe!
