Partner Perspective: Coordinating ELOQUENCE

In this interview, Jordi Luque Serrano, coordinator of ELOQUENCE, reflects on the project’s first two years – from building a strong, cross-disciplinary consortium to turning ambitious ideas into operational, responsible AI systems.


Q: Looking back at the first two years of ELOQUENCE, which achievement from your team are you most proud of?
Our most significant achievement is steering ELOQUENCE from vision to an operational, coherent programme. While coordinating a project of this scale presents many challenges, we are most proud of establishing a robust and collaborative working environment across the entire consortium. Seeing how effectively all the partners have worked together, sharing expertise and supporting each other, has been truly rewarding.

Q: What has been the most rewarding or inspiring part of collaborating with the ELOQUENCE consortium?
The most rewarding element has been building a genuinely cross‑disciplinary team where top ethics/legal experts, speech/LLM labs, HPC centres, pilot owners, and business exploitation professionals work on a shared stack instead of isolated subprojects. Such diversity of expertise within the consortium is incredibly inspiring: from the linguistic nuances explored by scientific partners to the ethical considerations championed by ethics experts, each partner brought unique perspectives and skills to the table. Witnessing how these different areas of expertise converged to address the complex challenges of responsible AI development has been deeply motivating.

Q: Which result, insight, or technological advancement do you feel has had the strongest impact on the project so far?
One of the most important developments in creating more helpful and trustworthy conversational AI has been combining Retrieval-Augmented Generation (RAG) with a system that efficiently manages conversations and keeps the LLM within a defined personality and role. This allows the AI to provide more accurate and relevant answers, avoid making things up (a problem called “hallucination”), and ensure its responses stay grounded in context, safe, and appropriate, leading to a much better overall experience, especially in high-risk scenarios. I would also highlight the strong impact of speech‑to‑LLM interoperability, such as multilingual connectors that enable conversational systems to work directly on audio.
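To illustrate the idea, here is a minimal, purely illustrative sketch of how RAG combines retrieval with a fixed persona prompt to keep answers grounded in context. All names and documents below are hypothetical examples, not ELOQUENCE code, and the keyword-overlap retrieval is a toy stand-in for real embedding search:

```python
# Illustrative RAG prompt assembly (hypothetical names, not ELOQUENCE code).
# A real system would rank documents with embeddings; naive keyword
# overlap is used here only to keep the sketch self-contained.

def retrieve(query, documents, top_k=2):
    """Rank documents by keyword overlap with the query and keep the best."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query, documents, persona):
    """Combine a fixed persona with retrieved context, instructing the
    model to answer only from that context to curb hallucination."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        f"System: {persona}\n"
        "Answer ONLY from the context below; if the answer is not "
        "present, say you do not know.\n"
        f"Context:\n{context}\n"
        f"User: {query}"
    )

docs = [
    "The thermostat can be set between 5 and 30 degrees Celsius.",
    "Wake-word detection runs locally on the device.",
]
prompt = build_prompt("How do I set the thermostat?", docs,
                      "You are a careful smart-home assistant.")
print(prompt)
```

The key design point is that the persona and the grounding instruction are fixed by the system, not the user, so the model's role and the admissible context are constrained before generation begins.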

Q: What is one thing that worked particularly well in your work package or activity and why?
We have employed a highly structured approach to managing the project’s resources and deliverables, ensuring responsible use of EU taxpayers’ funds. Our detailed tracking system promotes transparency and accountability across every aspect of the project. By proactively identifying and addressing potential challenges, we have kept the project on schedule and within budget, demonstrating to European citizens that their investment in European research and development delivers tangible results. From the technical point of view, the successful implementation of the speech robustness evaluation and the Federated Learning frameworks for speech and LLMs also stands out as a significant achievement, enabling us to push the boundaries of multilingual AI model development.

Q: Which moment, milestone or breakthrough from the past two years stands out as especially meaningful for your organisation?
The successful first prototype of our pilot was a particularly meaningful moment for us. Delivering the Interactive Playground, ELOQUENCE’s technical orchestration platform, and integrating it with our working Pilot 1 path (wake‑word detection, speech interaction, and local vector stores for privacy) demonstrated the real-world potential of ELOQUENCE’s technologies. Equally meaningful was completing the first major iteration of the ethics and fundamental‑rights assessments for our pilot. These assessments are providing valuable insights into the challenges and opportunities of deploying multilingual AI in practical settings and confirming that the project’s “responsible by design” ambition is realistic.

Q: What strengths or expertise did your team bring that you believe contributed most to the project’s success?
TID brought three core strengths:

  • Strong project management and quality‑assurance capacity, leading coordination, deliverable quality checks, risk monitoring, and interaction with ethics and advisory bodies.
  • Deep technical expertise in speech and LLMs, especially robustness, multilingualism, and federated learning frameworks for privacy‑aware training.
  • Industrial product and piloting experience, translating research into a real smart‑home pilot with wake‑word technology and on‑device/local pipelines.

Q: What have you learned through ELOQUENCE that will be valuable for your future work or research?
ELOQUENCE has reinforced the importance of cross-disciplinary collaboration and the need for a holistic approach to AI development across three dimensions: technical, user, and ethical. We have learned valuable lessons about managing diverse teams, navigating ethical complexities, and adapting to rapidly evolving technologies and users’ needs. These lessons will be directly applicable to our future research, particularly in the development of responsible and ethical AI solutions.

Q: What aspect of the project’s final year are you most excited about and why?
We are particularly excited to see the Interactive Playground platform combining smart dialogue orchestrators with privacy‑aware federated models, robustness and bias metrics, and human‑feedback mechanisms. I hope this modular platform can be reused beyond ELOQUENCE by industry and research to build trustworthy, multilingual conversational systems in domains far beyond the four pilots.

Q: From your perspective, what do you see as the most important opportunities and responsible AI challenges emerging right now? How do you think ELOQUENCE can help address them?
The opportunities in multilingual AI are immense: deploying multimodal, multilingual assistants in homes, contact centres, and public services enables more inclusive and accessible communication across cultures. These opportunities come with challenges around bias, robustness, privacy, transparency, and explainability in conversational AI models, as well as the ethical implications of AI-generated content, so responsible AI development is paramount. ELOQUENCE addresses these challenges through efficient dialogue orchestrators, federated models, bias quantification, robustness benchmarks, human‑feedback frameworks, and systematic ethical and fundamental‑rights assessments embedded in the development process.

Q: If you had to describe ELOQUENCE’s journey so far in one sentence, what would it be?
ELOQUENCE’s journey has been a shared one, from ambitious ideas to collaborative, multidisciplinary research, pushing the boundaries of multilingual AI while prioritizing ethical considerations and responsible innovation.


ELOQUENCE’s journey so far shows that trustworthy multilingual AI can only be achieved through close collaboration, ethical awareness, and technical excellence. As the project moves into its final year, the focus is firmly on lasting impact and solutions that can extend well beyond the project itself.