

Podcast – Ever more intuitive interfaces: a joint interview with James Auger and Albert Moukheiber


The stakes lie at the level of education of the general public, in order to stimulate discussions around these new intuitive technologies.


Biases, implicit assumptions and local cultural specificities, combination effects, continuous attention, repetition, group effects, risk aversion... Modelling our behaviour and integrating it into the design of our interactions with ever more connected machines now makes it possible to create particularly intuitive interfaces. James Auger, a designer and associate professor at the Madeira Interactive Technologies Institute, and Albert Moukheiber, a cognitive neuroscience researcher, clinical psychologist, and co-founder of the critical-thinking association Chiasma, discuss the issues and challenges underlying these seemingly intuitive human-machine interactions, which are in fact anything but neutral.

Listen to the podcast

Interface design and cognitive bias

Albert Moukheiber: For a long time we believed that we were rational individuals. We now know that we are also, and perhaps above all, irrational ones. Yet that irrationality is predictable, more so than in the past, thanks in particular to the data we have access to today.

James Auger: I approach this from a product designer’s viewpoint: I develop products that specifically question our behaviour around “intuitive” applications and devices (smartphones, or smart speakers such as the Echo or Google Home, for example). The question is how we interact with objects that are built on a form of prediction of our behaviour.

Exploration and understanding of our cognitive biases

J.A.: Let’s take an example from history: Henry II of England in conflict with Thomas Becket, the Archbishop of Canterbury. His famous words, “Will no one rid me of this turbulent priest?”, were interpreted by four of his knights as an incitement to assassinate Becket. Was this a right or wrong interpretation of his words? We will never know. It’s interesting to compare this event from the past with a more recent tale, from 2017 in Minnesota in the United States: a young man accidentally ordered a very expensive doll via his voice assistant, Amazon’s Alexa. When retelling this mishap on local radio and, of course, using the word “Alexa”, a presenter triggered a slew of orders for the toy in question: as many listeners’ voice assistants were turned on during the broadcast, they automatically proceeded to order the doll. Just as with Henry II, though in a completely different context of course, here too we encounter a problem of interpretation.

A.M.: These examples perfectly illustrate the way in which we interpret reality according to our priorities and our beliefs. But we mustn’t forget that most of the time our cognitive biases are also a good thing: they are the result of our evolution since our cave-dwelling days, the fruit of decisions made for our survival. Today, however, interactions between individuals, which are considerably more numerous than in the past due to globalisation and new technologies, make interpreting those biases far more complex.

Intuitive interfaces in ten years

J.A.: The recent affair involving Cambridge Analytica, which exploited the data of millions of Facebook users without their permission, raises the fundamental question of who controls this data and for what purpose. In fact, people don’t know what’s in the box: data collection, management, and processing all remain a huge mystery. The stakes therefore lie in educating the general public and raising its awareness, in order to stimulate discussion around these new intuitive technologies.

A.M.: Technology brings only opportunities, certainly not moral answers. The battle to be fought is therefore not at the technological level but at the political level, to foster open data and informed, independent control of its use. It is impossible to rid ourselves of our cognitive biases. Artificial intelligence and the new intuitive interfaces that exploit those biases must therefore be controlled, and treated as tools that can provide positive solutions to a variety of problems, be they in health, security, predicting natural disasters…

“Reclaim the means, stop obsessing over the ends”

J.A.: This sentence, which I use in my manifesto for a redefinition of design, can be summed up with an example given by the philosopher Albert Borgmann. He draws a parallel between a house’s hearth, which in the olden days we fuelled by going out to cut wood that was then stored and managed according to heating and cooking needs, and today’s central heating systems, where we need only press a button to get heat, without knowing anything about the stages necessary for them to function. This directly questions the latest technological developments, in particular automation. All my work, through the development of artefacts notably related to electricity and energy, questions the process of product development and our consumption patterns: in particular, the way we consider objects only for their purpose and no longer according to the means that enable them to exist.

A.M.: Here again, it’s all a question of education. There is admittedly a gap between the education system and the pace of technological change, but teaching young people today about the issues raised by all these new tools already helps them better understand what happens behind such devices, applications, and software. The problem isn’t the technology itself but our understanding and control of it.

Artificial intelligence and irrationality

A.M.: In the early days of work on artificial intelligence, we tried to develop software that was as rational as possible. But for some time now we have been seeing a trend that instead tries to encode our own cognitive biases by integrating a form of approximation. The question is: do we want to create machines that are like us, or ones that are completely different from us?

J.A.: I definitely choose the second option. Why would we need to create humanised robots? The washing machine, a kind of proto-robot in a way, isn’t shaped like a human being scrubbing my laundry at the riverside… This humanised vision of robots is undoubtedly good for advertising or for attracting investors, but the real question remains: why do we want these machines in our lives?

TO FIND OUT MORE about how our behaviour, and in particular our cognitive biases, are managed in the interfaces of yesterday and today, discover our timeline summarising 20 years of attention-management mechanisms in digital interfaces.

