Over the past two years, ELOQUENCE has evolved not only through technical advances, but also through a strong commitment to responsible and trustworthy AI.
In this interview, PRIVANOVA shares its perspective on the project’s journey, reflecting on the role of ethics, data protection and governance within the ELOQUENCE consortium. From establishing an active Ethics Advisory Board to engaging with international stakeholders such as the United Nations Office on Drugs and Crime, PRIVANOVA highlights how continuous ethical oversight has shaped both the project’s internal work and its wider societal relevance.
Q: Looking back at the first two years of ELOQUENCE, which achievement from your team are you most proud of?
We’re especially proud of establishing and working closely with the project’s Ethics Advisory Board. From the start, our goal wasn’t just to have the board on paper but to make it a real, active part of the project. Their ongoing feedback helped us guide the consortium through some tricky questions around bias, multilingual fairness, user safety, privacy, and the broader societal impact of conversational AI. Having their perspective throughout the process made a huge difference and helped ensure the work we’re doing is grounded in responsible innovation.

Another highlight for us was the opportunity to bring ELOQUENCE to a global audience through our engagement with the United Nations Office on Drugs and Crime (UNODC). Presenting the project’s approach to trustworthy and inclusive AI at this level opened a door for wider recognition and exchange. It helped position ELOQUENCE not just as a research project but as a contributor to international conversations on safe and ethical AI. Seeing the interest and positive reactions from experts outside the consortium gave us a lot of motivation.
Q: Which result, insight, or technological advancement do you feel has had the strongest impact on the project so far?
From our perspective, the biggest impact came from strengthening everyone’s understanding of data protection. We spent a lot of time helping partners from different fields navigate the GDPR, privacy-by-design thinking, and the FAIR principles for handling datasets. Instead of treating this as a one-time task, we approached it as a continuous process through workshops, reviews, and day-to-day discussions. That helped make data governance a natural part of the technical work rather than an obligation that sits on the side. It allowed teams to move forward with confidence, knowing they were building solutions that respect both users and the law.
Q: Which moment, milestone or breakthrough from the past two years stands out as especially meaningful for your organisation?
A big moment for us was building a community of external experts to support the assessments in WP6. Bringing together specialists in AI ethics, privacy, societal impact, and responsible innovation created a really strong foundation for the project’s governance work. Their insights shaped several rounds of risk assessments and ethical reviews, and you could genuinely see how their input helped improve decisions across the consortium. Watching this group become a trusted resource for the whole project confirmed for us how valuable it is to bridge technical teams with regulatory and ethical expertise.
Q: Which project result or development do you believe has the greatest potential for long-term impact beyond ELOQUENCE?
The way ELOQUENCE approaches dialogue systems, grounding them in knowledge, controlling for bias, and making them culturally inclusive, is something that can influence the field long after the project ends. What stands out to us is how these technical advances are paired with strong governance practices that make the systems safer and more transparent. This combination lines up well with the direction of the EU AI Act and other global standards. We see real potential for the methods developed here to guide future certification, shape how organisations think about fairness and privacy in conversational AI, and support trustworthy deployment in sensitive real-world settings.
Beyond its immediate results, ELOQUENCE stands as an example of how ethics-driven collaboration and regulatory awareness can strengthen trust in conversational AI — offering valuable guidance for future standards, certification efforts, and real-world deployment in sensitive contexts.
