“At the scale of a society, it is the whole capacity to establish the truth value of proofs in the document that has the potential to shake all authorities.”
An angelic face, a clear and honest gaze, a hint of a smile: Katie Jones deceived everyone on LinkedIn until June 2019. The young woman, connected with several people linked to the White House, had never set foot in the Center for Strategic and International Studies in Washington, nor at the University of Michigan.
Katie Jones doesn’t exist. Her profile picture was generated by an artificial intelligence technique: GANs (Generative Adversarial Networks). Experts consulted by the Associated Press point out that her LinkedIn account is “typical of espionage efforts” on the professional network.
Most importantly, the case challenges what we had long taken for granted: that photos, videos, and sound recordings of human beings constituted irrefutable proof of identity.
Since their introduction in 2014 by Ian Goodfellow, an American researcher specialised in machine learning, GANs have established themselves as an innovative approach to building generative models, that is, models capable of producing data themselves. Every day, they make progress in imitating what until now allowed a human being to be identified with certainty: the unique characteristics of their picture or their voice.
These algorithms inspired a software engineer named Philip Wang to create the thispersondoesnotexist.com website. With over 4 million visitors since its launch in February 2019, this “generator of people that do not exist” displays true-to-life portraits of women, men, and children, complete with the mascara, slight stubble, or reflection in the pupil characteristic of an authentic HD photo.
The forger and the police officer
To get his generator to work, Philip Wang used code released by NVIDIA called StyleGAN. Like all GANs, this algorithm pits two artificial neural networks against each other: the generator, to which we can assign the role of the forger, and the discriminator, which plays the role of the police officer.
The two networks train each other: one creates images that it hopes will pass as originals, while the other hunts down fakes that it judges insufficiently “realistic” compared with the stock of “true data” to which it has access.
As this unsupervised training takes place, GANs prove capable of producing excellent data, be they designs (for the automotive, fashion, furniture, or gaming sectors), music, or even pharmaceutical molecules.
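The forger-versus-police-officer loop described above can be sketched in a deliberately simplified, one-dimensional setting (a toy illustration with hand-derived gradients, not StyleGAN or any real image model): a linear “forger” learns to imitate samples drawn from a normal distribution, under pressure from a logistic “police officer”.

```python
import numpy as np

# Toy 1-D GAN sketch. "Real data" are samples from N(3.0, 0.5).
# Generator (forger):      G(z) = a*z + b, with noise z ~ N(0, 1)
# Discriminator (officer): D(x) = sigmoid(w*x + c), probability x is real
rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator parameters (starts far from the real data)
w, c = 0.0, 0.0   # discriminator parameters

lr, batch = 0.05, 64
for step in range(2000):
    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    real = rng.normal(3.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake) (the "non-saturating" loss),
    # i.e. nudge the forgeries toward whatever fools the officer
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the forger's output distribution has drifted toward
# the real mean of 3.0
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(fake_mean)
```

The two gradient-ascent steps alternate exactly as in the article’s description: the officer sharpens its test against the current forgeries, and the forger adapts to the officer’s current test. Real GANs replace the two linear models with deep networks, but the adversarial structure is the same.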
Welcome to deepfaking
However, not all AI initiatives are so virtuous. In the world of artificial intelligence, the most exciting prospects stand side by side with the risk of the most sinister uses, as reflected in “deepfakes”, highly realistic forgeries that make it possible, for example, to substitute one face for another in a video.
With each innovation, AI opens the door to dazzling progress in many industries… all the while reducing the gap between “true identity” and “fake identity”.
The recent creation of MelNet, a voice synthesizer developed by Facebook’s AI division and capable of reproducing anybody’s voice (starting with that of Bill Gates), makes it possible to generate speeches never actually delivered by the person to whom they are attributed. A declaration of war could thus be put in the mouth of a head of state, with the trickery not easily uncovered on the other side of the screen.
Challenging the notion of identity
As did the invention of photography in its time, “the progress of AI fundamentally challenges the capacity to prove one’s identity, that is to establish tangible proof of one’s existence”, explains Olivier Ertzscheid, a researcher in information and communication sciences, author of “Qu’est-ce que l’identité numérique?” (What is digital identity?) (OpenEdition Press, 2013).
“At the scale of a society, it is the whole capacity to establish the truth value of proofs in the document that has the potential to shake all authorities”, he continues.
Although “these technologies are only problematic because of the use that we put them to in certain societies”, Olivier Ertzscheid adds, “they tackle head-on the question of veracity, that is, the ability to agree in a common way on facts that are incontestable. If each of a society’s individuals feels that everything can be contested, we quickly fall into a systematic regime of opposition, the breeding ground of fanaticism and hatred”.
Establishing regulations and educating citizens thus appear as desirable safeguards against the problematic effects of the new faces of AI, as they begin to blur the notion of identity and cross the porous border that separates “public” from “private”.