• He warns that AI is being released on the basis of promises that have not yet been realized and cautions against deterministic attitudes to new technology.
• He further argues that at a time when we are increasingly dependent on the digital world, AI could make it much harder to trust people online. In response, we should focus more on human skills and thinking, rather than delegating certain tasks to the statistical approach of AI.
In your book, you explain that technology is historically intertwined with our human identity. But what about our relationship with AI?
AI is not just another technology. That said, we shouldn’t be talking about AI but about ‘AIs’, because they are all different. These systems are not trained on the same data or for the same purposes. AI is not yet very well understood, because many of these systems – like the growth of the companies that produce them – are based on promises that have not yet been realized. These AIs make it possible to automate many things in the business world, but it is dangerous to say that this automation will inevitably happen. AI should be used for the dull aspects of jobs that we just want done, for example in industry or agriculture. But when we talk about education, justice, law or democratic decisions, we are talking about systems defined by human participation, and automating them is dangerous. We must also avoid falling into the trap of anthropomorphism, of considering that these machines have personalities, intentions or thoughts.
The widespread pollution of data by AI could undermine trust in information
But there is an expectation that AI will bring about radical change, even in speeches on economic policy…
The debates we have about AI and the ethics and values of technology lead to many different possible paths for the future. It is wrong to think that people should just shut up and adapt. The idea that technology will make the world a better place is a hypothesis for which there is not much evidence. Forty years ago, it was said that technology would make Japan the world’s leading economy. That didn’t happen. It was a prediction that paid no attention to other types of power, or to the fact that the future is uncertain. Is ChatGPT-5 going to change the world, or will it just offer an incremental improvement? We don’t know. The outcome is far from certain, like the outcome of the wars that are going on right now. I think we need to understand that tech companies don’t have perfect information or perfect control, and that they are also at the mercy of geopolitical forces and public opinion.
What can be done to interest citizens in the ideological issues related to the impact of new technology?
We should be careful to avoid superficial debate about technology: we talk a lot about functionality, technological efficiency, etc., but not necessarily about its intersection with children’s education, culture, art, emotions, etc. In my book, I explain that technologies are not neutral. Consider facial recognition, for example: it is clear that it can be used to limit freedom and democratic rights. Citizens don’t really care about abstract ideas. They don’t care about AI ethics, but they do care about the government monitoring their presence in public space. They don’t care about bans on mobile phones in schools, but they do care about social networks harming their children.
Why do you talk about the erasure of trust?
When we talk about AI, we mustn’t forget that we’re also talking about the production of huge volumes of data, perhaps even the widespread pollution of data. And the corollary of this is an erosion of trust in information. I worry that it will do a great deal of damage to what you might call the informational commons and culture. It will become much harder to trust people online. And solving that problem will take a lot more effort than creating it. It’s very easy to cheat using AI, but it’s impossible to reliably detect cheating. And with art and photography, it’s very easy to create art, photographs and images, but it’s almost impossible to reliably detect that they were generated by AI. So the huge question is: how do we test and measure and teach human skills? How do we have any confidence in the marketplace of ideas and the marketplace of goods and services? Or in each other? How can we have confidence in each other in the workplace or in the education system?
What about the role of human skills?
Let’s take the example of football. What if there was an AI that could analyse every football team in the world and perfectly predict the results of every game in every league, even before they were played? Would you say to the world: “I have solved the problem of football, nobody needs to play anymore, and all the football players can now go and do something more useful”? The point of football is not to know the result but to watch and appreciate the players’ human skills. If a child shows you a picture or a story he or she has brought home from school, you don’t say that there is no point in drawing or writing because AIs do it better. Human beings acquire skills through practice, thinking and reflection. AIs don’t acquire skills in this way at all. What they do is statistical; they make predictions based on data. It is important for children to use technology, but they also need to learn to think critically about it.