Designing safe AI systems for education

By Andreas Schleicher, OECD Director for Education and Skills

Generative AI offers humanity the first real opportunity to provide every learner with an excellent education. Used responsibly, it can support high‑quality, personalised learning at scale, while reinforcing, rather than replacing, the essential role of teachers. However, realising this potential requires safety, trust and accountability to be placed at the heart of these systems.

This week, I spoke at a session on Digital Safety at the UK Government Summit on Generative AI, which featured rich and thoughtful discussions on this topic. One conclusion was clear: it is hard to think about digital safety in absolute terms. Doing so likely leads to paralysis, and we will end up doing little with GenAI at all. Instead, we need to look at AI safety in relation to human capabilities and ensure that students have learned to think before they prompt.

That focus on capability is why the OECD has concentrated its efforts on framing AI literacy. By this, we mean the capacity of young people to meaningfully engage with AI, to create with AI, to question it and to manage it with critical thinking, self- and social awareness. We have also built that concept of AI literacy into our next PISA assessment, so that we can track how student preparedness for AI is evolving, and how that plays out across countries, social backgrounds, gender and age. Understanding capabilities gives us a good sense of the safety risks and where we need education systems to act.

However, we still need to act with urgency to ensure AI is safe. In this regard, it is clear we are playing catch-up. Generative AI did not knock politely on the schoolhouse door. It came in through the Wi-Fi, and the question every system now faces is: will AI drift into classrooms by accident, or will we govern it by design?

In schools, we don’t give every student unrestricted access to every tool. Scissors in the early grades are blunt. Chemistry labs are locked until students are trained. Internet access is filtered by age. Not because students are untrustworthy, but because powerful tools require context, guidance and gradual release. Generative AI calls for the same approach.

A six-year-old and a sixteen-year-old may tap the same screen, but they are not engaging with the same reality. When we talk about age-appropriate design and safeguards, we are not talking about censorship. We are talking about developmental fit.

For younger learners, GenAI must behave less like an oracle and more like a guardrail. That means strong filters. No emotional manipulation. No pretending to be a friend, a confidant, or – worse – a replacement for social interaction. Children are wired to trust authority. If AI speaks with confidence, they will believe it.

As students grow older, the rules can loosen, but they should not disappear. Teenagers can handle uncertainty but need help seeing where it lives. AI should make any ambiguity clear and say: “Here’s a suggestion, not the truth.” “Here’s a pattern, not a verdict.” That’s how you teach critical thinking in an age where machines can sound confident even when they are wrong.

Data protection must also be a central focus. Children’s data is not oil to be extracted; it is DNA to be protected. There should be no emotional profiling. No long-term memory. No recycling of classroom interactions into commercial training datasets. If we would not allow a stranger to follow a child home and take notes, we should not allow an algorithm to do so silently.

And above all, we must be clear about authority. GenAI should assist professional judgment, not replace it. The moment we allow machines to make high-stakes educational decisions on their own, we don’t just automate, we abdicate. All this aligns with the OECD AI Principles, which remind us to keep humans in the loop. It also echoes UNESCO’s guidance, which places children’s rights at the centre of educational technology. But more than that, it reflects common sense.

Strong guardrails early. Growing autonomy over time. Human judgment always in charge.

If we get this right, AI becomes a scaffold, not a crutch. A tool for thinking, not a substitute for it. And education remains what it has always been at its best: a human endeavour, powered by judgment, relationships and trust. That is the future worth designing for.

Learn more about the OECD’s work on generative AI in education in the new OECD Digital Education Outlook 2026.