The man/machine duality, an illusion to be overcome

Human science researchers are increasingly interested in artificial intelligence. This is especially true in the United States, where ethnographer Tricia Wang, founder of the blog Ethnography Matters, and anthropologist Madeleine Clare Elish, a specialist in autonomous systems, speak, among other things, of the need to rethink the relationship between humanity and machines.

We will always need humans to get machines back onto the right path.

We are still haunted by the nightmarish vision of the red lens of HAL 9000, Stanley Kubrick’s intelligent and despotic computer. Already, the transhumanist prophecies popularized by Ray Kurzweil and his disciples plunge us into unprecedented existential terror.

Moreover, aren’t DeepMind’s famous victory at the game of Go in 2016 and the proliferation of smart vehicles and drones signs that we are slowly but surely handing over control of our human society to machines?

This analysis has at least one flaw, and it is significant: it assumes a human-machine duality in which big data necessarily gives machines (via artificial intelligence) the advantage over humans. Yet we have always lived in symbiosis with our tools: ever since the first flint was fashioned into a cutting tool, we have created them and been influenced by them.

This is what American ethnographer Tricia Wang, founder and lead of the blog Ethnography Matters, calls “the networking system of collaboration between humans and machines”. In reality, humans shape and will continue to shape the machine, whether it likes it or not, she stresses: even algorithms are biased, because they are designed by humans.

Mirror effects

By refusing to recognize this symbiotic relationship, and therefore the role and, ultimately, the responsibility of humans in the development of the intelligent machine, humans open the door to the artificial nightmares referred to above.

Conversely, Tricia Wang explains, accepting, studying, understanding, and acknowledging this symbiosis is indispensable if the exponential growth in data and computing performance is to genuinely benefit the human community.

Geneviève Bell, the famous Australian anthropologist who, for more than twenty years, has guided Intel’s innovation efforts by placing human beings ever closer to the centre of technology, has long fascinated the media with her firmly user-centred approach. Moreover, her example has been emulated throughout the new technologies industry, particularly at Microsoft, Google, and IBM.

Schizophrenia

But for Tricia Wang, as for her peers in the human sciences (particularly sociology, anthropology, ethnology, ethnography, and history) who specialize in research on technological innovation, and there are more and more of them, it is no longer enough for technologies to serve individual users: they must be made to serve the entire human community over the long term. And it is no longer enough for humans to see themselves as users: they need to assume their role as participants in this evolution.

The widespread idea that the ultimate achievement of technology is a technology independent of humans, in which humans no longer have a place, is at best mistaken and at worst dangerous.

Intelligent or autonomous objects and systems already reflect the schizophrenia at work in our society, in which humans create, and bear ultimate responsibility for, technologies designed to erase, at least in appearance, any human participation.

This concealment works to humans’ detriment, confirms anthropologist Madeleine Clare Elish: society’s moral frame of reference, like its juridical and legal framework, has not evolved in its perception of responsibility, even though these intelligent systems operate on a model of distributed control.

A graduate of MIT and a doctoral candidate in Columbia University’s Anthropology Department in New York, Madeleine Clare Elish focuses her research on the social impact of artificial intelligence and autonomous systems. It is in this context that Tricia Wang invited her to publish on ethnographymatters.org as part of a special issue entitled “Co-designing with machines: moving beyond the human/machine binary”.

“Moral crumple zone”

In the event of a system failure or breakdown, the system itself is spared while humans bear 100% of the liability. Humans, according to Madeleine Clare Elish, become the “moral crumple zone” of the system. She stresses that the positive development of human-machine network systems will only be possible if the role of humans is reconsidered in the context of their collaboration with the machine, including the notions of work and social relations.

Moreover, “We will always need humans to get machines back onto the right path,” writes Tricia Wang, referring to the risks of drift inherent in technology, including certain algorithms whose discriminatory impact has already been demonstrated in criminal justice and job-seeking, among other areas. “Artificial intelligence must include the dimensions of meaning, values, morals, and ethics,” she underlines.

A growing number of prominent scientists at institutions as prestigious as MIT and UC Berkeley are rallying around this approach. Several thousand leaders in scientific, industrial, and intellectual circles signed the open letter published in 2015 by computer scientist Stuart Russell, which declared: “We recommend that research be dedicated to guaranteeing that increasingly powerful artificial intelligence systems be robust and beneficial […] Our AI systems must do what we would like them to do.”

Following this, entrepreneur Elon Musk, an emblematic and controversial Silicon Valley figure and a critic of the dangers of artificial intelligence, created a fund to finance research projects dedicated to “guaranteeing the beneficial impact of AI”. Several hundred research teams from across the globe submitted project proposals. Tricia Wang and Madeleine Clare Elish are in good company.
