Ethical AI: Building a Better Future for Education
By Ivana Bartoletti, Global Chief Privacy and AI Governance Officer at Wipro

Imagine a classroom where AI tailors lessons to each student’s learning style, provides instant feedback and opens up new avenues for exploration. This is the potential of AI in education. However, realising this potential requires careful consideration of the ethical implications and a commitment to ensuring equitable access for all learners. Without a responsible approach, AI risks exacerbating existing inequalities and undermining the development of critical thinking skills.
To understand the scope of this potential, let’s explore some of the ways AI is already transforming the educational landscape.
The future of learning with today’s AI
AI has already begun to make a positive impact across the education sector. For instance, intelligent tutoring systems customise learning experiences to individual student needs, providing real-time feedback and tailoring exercises to help students master mathematical concepts.
AI can also tailor learning plans through adaptive learning environments that adjust in real time to student responses. As a result, students enjoy a more personalised educational experience that promotes understanding and retention.
Beyond personalised learning, AI-powered chatbots can assist students by answering queries, providing information on course schedules, and helping with administrative tasks, thereby reducing the workload on staff and improving the overall student experience.
Furthermore, AI can support teachers by automating administrative tasks such as grading and evaluating student performance, freeing educators to spend more time on personalised interactions with students. While these advancements offer exciting possibilities, it’s crucial to address the potential pitfalls that accompany the widespread adoption of AI in education.
The first challenge is a philosophical one, yet vital to consider: AI products are socio-technical artefacts, built from historical data. When an AI learning system is introduced, it comes with predefined outcomes set by its designers. The model’s creators impose their own categories and interests, embedding them into the system to predict future behaviour. This codification of past patterns is concerning from political, sociological and ethical perspectives, because it relies on historical data to forecast and shape future actions.
Consequently, our behaviour is influenced by historical data, raising questions about the impact on human agency, mobility and creativity. Breaking away from traditional behaviours is a fundamental aspect of human nature. Will AI restrict this? The question is particularly relevant in education, as school is the environment where young people learn to make genuine choices and to disrupt historical patterns of behaviour.
Bias is a significant risk in educational AI. A report I co-wrote for the Council of Europe lists several examples. Facial recognition software can be biased along racial and gender lines, which in educational settings can disadvantage racialised students during exams. Proctoring software can also harm students with disabilities by causing anxiety, barring the carers they rely on or not permitting breaks.
Designing with AI, understanding AI
Designing AI for education requires input from diverse professionals, including tech experts, educators, psychologists, diversity specialists, ethicists and students. The first question should be whether the AI solution is necessary – will it solve a problem or create one? Often, techno-solutionism – the belief that technology alone can solve complex societal problems – leads us to view AI as a quick fix for deeper educational challenges.
For example, while AI-powered tutoring systems can personalise learning, they cannot address the root causes of educational inequality, such as poverty, lack of access to resources or systemic discrimination. AI can support, but not replace, sound educational policy and effective teaching practices. We must be wary of implementing AI solutions simply because they are technologically feasible without carefully considering their broader social and ethical implications. AI can be a powerful tool, but it’s not a magic bullet.
Once necessity is established, it is important to evaluate the potential outcomes of the product from various perspectives, including data, functionality, privacy, training, and impact on human rights and inclusion. Unintended consequences often arise when the appropriate stakeholders are excluded from the decision-making and evaluation process, so their participation in these discussions is essential.
More importantly, effective AI regulation in education and research demands the urgent development of coherent, human-centred policy frameworks. As Education Secretary Bridget Phillipson has emphasised, these frameworks must be built through active collaboration between policymakers, educators, technology experts and students themselves, so that AI implementation aligns with core educational values, promotes equitable access and outcomes, and addresses potential ethical concerns. Policy development must establish clear guidelines for data privacy, algorithmic transparency and accountability. Ongoing stakeholder engagement is equally essential for continuous monitoring and evaluation, allowing policy to be adjusted iteratively as the AI landscape evolves. That means creating accessible channels for feedback, addressing emerging challenges and fostering a shared understanding of the long-term implications of AI in education.
In university settings, we should instil “distrust by design” to encourage students to approach AI-generated content with healthy caution.
Specifically, students should be taught to ask questions like: “Who created this AI?”, “What data was used to train it?”, “What are the potential biases embedded in this system?” and “How might this technology be misused?” Educators can incorporate these questions into classroom discussions and assignments, fostering a culture of critical inquiry.
By instilling this sense of caution, we empower students to become discerning consumers of AI-generated information, encouraging them to question the source, understand its limitations and engage in discussions about ethics, privacy and the broader implications of AI in society.
Shaping students’ future with human-centred AI
Transforming our world with AI requires more than technological thinking alone. It needs a multidisciplinary approach that blends science and humanities, equipping future generations with critical thinking and ethical awareness. We must actively shape AI’s development to ensure an equitable future, prioritising human values through ongoing dialogue and collaboration. Moving beyond mere adoption, we must design AI with inclusion and access at its core.