By Andreas Schleicher
Director, OECD Directorate for Education and Skills
Every year, the OECD Forum brings together experts, academics and thought leaders from the private and public sectors to discuss key economic and social challenges on the international agenda. The theme of this year’s Forum was “World in EMotion” – a theme that reflects the profound changes brought about by globalisation, shifting politics and digitalisation, and the challenges and opportunities they present.
Nowhere are these changes more rapid – and perhaps more far-reaching – than in the field of artificial intelligence (AI), and its implications for values and ethics. I attended a fascinating panel on this subject, alongside Peter Gluckman, Chair of the International Network for Government Science Advice in New Zealand; Geoff Mulgan, Chief Executive of NESTA in the UK; Eric Salobir, head of Optic; Pallaw Sharma, Senior Vice President at Johnson & Johnson; and Jess Whittlestone, Research Associate at the Centre for the Future of Intelligence at Cambridge University.
If anything, the panel demystified AI. As Pallaw explained, technology and AI are not magic powers; they are just extraordinary amplifiers and accelerators that add speed and accuracy. AI accelerates whatever it is applied to. It amplifies good ideas and good practice in the same way it amplifies bad ideas and bad practice. It can help remove bias and discrimination from human resources practice, for example, but it can also spread and scale bias. This is because AI extrapolates from what we know and from existing patterns. Eric illustrated this with an example from criminal justice. When AI is applied in this field, it can induce a paradigm shift from a justice of consequences – with judges looking ahead for ways to reintegrate offenders into society – to a justice of correlations, whereby judges look back at what happened in similar cases in the past. In the field of education, Peter demonstrated how AI can empower teachers to identify and better support children at risk, but also inhibit them from exercising human judgment and stigmatise disadvantage.
It’s probably fair to say that AI is ethically neutral. But AI is always in the hands of people who are not neutral. The only reason we should fear robots is that they will always obey us. The real problem with robots probably has less to do with their artificial intelligence than with our human ignorance. The real risks come not from AI itself, but from the consequences of its application.
This is nothing new; humans have always been far better at inventing new tools than at using them wisely. But in the coming years, biotech and AI will give us not just the power to change the world around us, but also to manipulate and reshape ourselves. Peter gave rather scary examples of how technology can hack our minds without us noticing. We need to become better at seeing and understanding our own biases and the world around us; but most of all, we need to become better at understanding our own minds and experiences. And we had better understand our minds and build awareness of our experiences before the algorithms make up our minds for us.
Geoff seemed more relaxed about such outcomes. He reminded us how the emergence of the car industry created its own regulatory environment: as cars became more powerful and numerous, we built better roads and more sophisticated street signs, and required people to wear seatbelts. But as Jess pointed out, whether AI will follow the same path depends on our ability to imagine its downsides as clearly as we came to see those of cars – and quickly enough. It is clear that these questions will not be answered through generic principles, but in very specific contexts, and that we will need new, fast-paced and adaptive regulatory environments to keep up with rapid advancements in AI.
But wherever regulation takes us, growing uncertainty and ambiguity will make values ever more central, and ethics will be our capacity to put those values into action. In a way, AI pushes us to think much harder about what makes us truly human. All this interacts with the increasing speed of change, as Peter highlighted. In the past, you could always look to the people around you. In a slow-changing world, it was a relatively safe bet to ask your parents when you were unsure about right and wrong. But in this fast-changing world, you can never tell whether what others tell you is timeless wisdom or just outdated custom.
To stay relevant – both economically and, above all, socially – we must all be able to constantly learn and reinvent ourselves, and AI can be a powerful ally in that. AI can get us out of the industrial model of learning, under which it was both effective and efficient to educate students in batches and to train teachers once for their entire working lives. The curricula that spelled out what students should learn were designed at the top of the pyramid, then translated into instructional material, teacher education and learning environments – often through multiple layers of bureaucracy – until they reached individual teachers, who delivered them to passive learners in the classroom. AI can give people ownership over what they learn, how they learn, where they learn and when they learn. It can teach you science and simultaneously observe how you study: how you learn science, the kinds of tasks and thinking that interest you, and the kinds of problems that you find boring or difficult. AI can then adapt learning to your personal style with far greater granularity and precision than any traditional classroom setting possibly could.
At the same time, in a world shaped by AI, education is no longer about teaching people something, but about helping them to build a reliable compass and other navigation tools to find their own way through an increasingly volatile, uncertain and ambiguous world. Tomorrow’s schools will need to help students think for themselves and join others, with empathy, in work and citizenship. They will need to help students develop a strong sense of right and wrong, a sensitivity to the claims that others make on us, and a grasp of the limits on individual and collective action. At work, at home and in the community, people will need a deep understanding of how others live, in different cultures and traditions, and how others think, whether as scientists or artists. Education needs to be about developing first-class humans, not second-class robots.
Of course, values and ethics have always been central to humanity, but as Jess put it, it is time that we move beyond values as implicit aspirations and make them explicit goals and practices in education. This can help communities shift from situational values – meaning: “I do whatever a situation allows me to do, supported by the best AI” – to the kind of sustainable values that generate trust, social bonds and hope. The bottom line is that where society doesn’t build foundations under people, many will try to build walls, no matter how self-defeating that would be. And AI is good at reinforcing those walls.