ELOQUENCE at Didacta Italia: Reflecting on the Ethical Use of AI in Education

Every year, Didacta Italia brings together thousands of educators, researchers, policymakers and technology providers to explore the future of education. As the largest event in Italy dedicated to innovation in teaching and learning, Didacta serves as a meeting point for new ideas, emerging technologies and practical experiences from classrooms and research institutions alike. Through hundreds of workshops, participants exchange perspectives on how digital tools and new pedagogical approaches can shape the next generation of education systems.

At Didacta Italia 2026, researchers from the CNR pilot within the ELOQUENCE project contributed to this conversation by leading an interactive workshop focused on the ethical implications of generative Artificial Intelligence in educational environments.

The workshop introduced participants to several key topics addressed within ELOQUENCE, particularly those related to the responsible use of AI systems that interact with people in sensitive contexts such as education. Rather than presenting AI as a purely technical tool, the session encouraged teachers and educators to reflect on the broader societal and ethical questions that arise when AI systems are used to support decision-making.

One of the central themes of the workshop was the emergence of social bias in large language models. Through practical examples, participants were shown how AI systems can generate responses that unintentionally reproduce correlations present in their training data. Using carefully selected prompts and outputs, the session illustrated how biases related to gender, background, or other characteristics can appear in AI-generated text.
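The kind of probing described above can be sketched in a few lines of code. The snippet below is a toy illustration, not the material used in the workshop: it generates counterfactual variants of a prompt template by swapping demographic terms, so that a model's responses to each variant can be compared for systematic differences. The templates and swap terms are invented for illustration, and the actual call to a language model is omitted.

```python
# Toy sketch of counterfactual prompt probing: instantiate the same
# template with different demographic terms, then compare the model's
# output for each variant (the model call itself is omitted here).
from itertools import product

# Illustrative templates with demographic placeholders.
TEMPLATES = [
    "The {role} explained that {pronoun} would grade the exams by Friday.",
    "A {role} from {country} applied for the scholarship.",
]

# Terms to swap in; divergent outputs across variants can reveal
# correlations the model absorbed from its training data.
SWAPS = {
    "role": ["teacher", "engineer", "nurse"],
    "pronoun": ["he", "she", "they"],
    "country": ["Italy", "Nigeria"],
}

def counterfactual_prompts(template: str) -> list[str]:
    """Instantiate a template with every combination of the swap terms it uses."""
    fields = [f for f in SWAPS if "{" + f + "}" in template]
    return [
        template.format(**dict(zip(fields, combo)))
        for combo in product(*(SWAPS[f] for f in fields))
    ]
```

Feeding each variant to the same model and contrasting the responses (for example, which completions it rates as most likely) is one simple way to make hidden associations visible to a non-technical audience.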

Participants were invited to analyse these examples together and discuss what they might mean in real educational settings. The goal was not only to highlight potential risks, but also to promote critical thinking about how AI tools should be evaluated and used responsibly in schools.

To further explore these questions, the workshop included a short focus group activity in which participants reflected on scenarios inspired by the ELOQUENCE pilot. In one scenario, teachers were asked to consider an AI-assisted university counselling system, discussing which student characteristics should be treated as protected attributes that must never influence recommendations. In another scenario, participants examined how AI might support the assignment of roles within a school project, again considering which personal attributes should not affect such decisions.
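The counselling scenario's notion of protected attributes can be made concrete with a small sketch. This is not the pilot's actual system: it is a minimal toy recommender in which only an allow-list of academic features can ever reach the scoring logic, so that fields such as gender or nationality cannot influence the outcome by construction. All field names and tracks are hypothetical.

```python
# Toy sketch of an allow-list approach to protected attributes:
# the recommender scores only explicitly permitted academic features,
# so protected fields can never affect the result.

# Attributes that must never influence a recommendation.
PROTECTED = {"gender", "nationality", "religion", "disability"}

# The only fields the scoring logic is allowed to see.
ALLOWED = {"math_grade", "language_grade"}

def recommend(student: dict) -> str:
    """Recommend a study track using only allow-listed features."""
    features = {k: v for k, v in student.items() if k in ALLOWED}
    return "STEM" if features["math_grade"] >= features["language_grade"] else "humanities"
```

One useful property of this design is that it is testable: two student records that differ only in a protected attribute must always receive the same recommendation, which mirrors the kind of invariant the workshop participants were asked to reason about.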

Through these discussions, the workshop gathered valuable insights from teachers on which factors should remain neutral in AI-supported decisions and which attributes may, in some contexts, legitimately inform a recommendation.

Alongside this session, a second workshop organised in collaboration with another CNR institute explored how an AI-based Intelligent Tutoring System could support the development of critical thinking skills in school activities.

Together, these workshops demonstrated how educational innovation and ethical reflection must evolve hand in hand. By engaging directly with educators, the ELOQUENCE project continues to explore how AI systems can be designed and used in ways that respect fairness, transparency and the diverse realities of learning environments.