“Decoding of information is benefiting more and more from technological advances.”
Social networks are fertile ground for lies knowingly presented as legitimate information. Exposing them is often simple: critical thinking, civic-mindedness, and media literacy are frequently enough to foil misinformers’ strategies.
However, doctored videos known as deep fakes are, in some cases, increasingly difficult to identify as such. These composite videos create the illusion of authentic footage. They are produced with an image-synthesis technique drawn from artificial intelligence, which makes it possible, for example, to convincingly swap someone’s face.
Yet videos viewed by large numbers of people can stir social tension, influence voters’ decisions, or damage the reputation of businesses and individuals.
The arms race
The ensuing “technological arms race” pits against each other two entwined “sides”, like the two networks of a GAN (Generative Adversarial Network), the technology often behind deep fakes but one that also makes it possible to uncover them: the misinformers, striving to make their lies ever more believable, and the fact-checkers, learning to identify ever more sophisticated fakes.
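The adversarial structure of a GAN can be sketched on a toy problem. In this minimal illustration (far simpler than the models behind deep fakes, and not any specific system mentioned here), a one-parameter-pair “generator” learns to imitate samples from a Gaussian with mean 4, while a logistic “discriminator” learns to tell real samples from generated ones; each network improves by exploiting the other’s weaknesses.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + c, discriminator D(x) = sigmoid(w*x + b)
a, c = 1.0, 0.0          # generator parameters (starts far from the target)
w, b = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)      # genuine data: N(4, 1)
    z = rng.normal(0.0, 1.0, batch)         # noise fed to the generator
    fake = a * z + c                        # generated data

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss)
    d_fake = sigmoid(w * fake + b)
    g_grad = -(1 - d_fake) * w              # gradient of loss w.r.t. G(z)
    a -= lr * np.mean(g_grad * z)
    c -= lr * np.mean(g_grad)

# After training, the generator's offset c has drifted toward the real mean (4)
```

Each side only ever sees the other’s current behaviour, which is why progress on one side forces progress on the other, the dynamic the article calls an arms race.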
Fact-checking has been a pillar of journalistic practice since the 1920s. It means confirming the accuracy of the figures and claims made in a text or speech. Some media outlets run services dedicated to this work or, more broadly, to explaining current events, such as, in France, the “Décodeurs” (the fact-checking service of the newspaper Le Monde) or the “Observateurs” (France 24 television network).
Detecting fakes at source
This decoding of information is benefiting more and more from technological advances. Since 2018, researchers at the Computer Science and Artificial Intelligence Laboratory (Massachusetts Institute of Technology) and the Qatar Computing Research Institute have argued that the best approach against fake news is to examine the sources themselves rather than isolated news items. They have developed a machine-learning system that detects whether a source is reliable or biased.
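The idea of learning to profile a source from textual signals can be illustrated with a deliberately tiny sketch. This is not the MIT/QCRI system, which combines many features (articles, Wikipedia pages, web traffic, URL structure, and more); here, a toy Naive Bayes classifier is trained on a handful of made-up source descriptions.

```python
from collections import Counter, defaultdict
import math

# Made-up training data: short descriptions of sources, hand-labelled
train = [
    ("cites official statistics and names its reporters", "reliable"),
    ("publishes corrections and links to primary sources", "reliable"),
    ("anonymous posts with sensational unverified claims", "biased"),
    ("clickbait headlines and no editorial standards", "biased"),
]

word_counts = defaultdict(Counter)
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Return the most probable label (Laplace-smoothed Naive Bayes)."""
    best, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

print(classify("sensational claims with no sources"))   # leans "biased"
```

The point of profiling at the source level, rather than fact-checking each item, is that one verdict then covers every article the source publishes.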
In France, too, solutions are emerging to contain the fake-news problem. Within the framework of the Content Check project, launched in 2016, four research laboratories and media outlets such as Le Monde have been working together to develop fact-checking software for journalists.
Ioana Manolescu, a computer science researcher at the French National Institute for Research in Digital Science and Technology (Inria), is one of the pioneers of Content Check. “My starting point was the observation that, with the development of open data, everyone has access to a lot of information,” the researcher told Farid Gueham of the Fondation pour l’Innovation Politique. “But this information is widely scattered and not always easy to access: it is very complicated to link it all together.”
The team is working, for example, on software that makes Insee (French National Institute of Statistics and Economic Studies) data easier to query. A crawler (an indexing robot) analyses the website; the data is then extracted through an API and consolidated in a database by an algorithm that identifies the type of each cell. The software can answer a journalist’s query by returning a value along with a link to the original table.
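The “identify the type of each cell” step can be sketched as follows. The categories and rules below are assumptions for illustration, not Insee’s or Content Check’s actual schema: once a table has been extracted, tagging each cell string lets a query engine know which columns hold years, percentages, or counts.

```python
import re

def cell_type(cell: str) -> str:
    """Tag a table cell string with an assumed type category."""
    cell = cell.strip()
    if re.fullmatch(r"(19|20)\d{2}", cell):
        return "year"
    if re.fullmatch(r"-?\d+(?:[.,]\d+)?\s*%", cell):
        return "percentage"
    # French-style numbers: comma decimals, space-separated thousands
    if re.fullmatch(r"-?\d{1,3}(?:[ \u00a0]\d{3})*(?:[.,]\d+)?", cell):
        return "number"
    return "label"

row = ["Unemployment rate", "2023", "7,3 %", "2 300 000"]
print([cell_type(c) for c in row])
# -> ['label', 'year', 'percentage', 'number']
```

With cells typed this way, a question such as “unemployment rate in 2023” can be matched against the right row and column, and the answer returned together with a link to the source table.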
Artificial intelligence and neural networks
At the Institut de recherche en informatique et systèmes aléatoires (research institute for computer science and random systems), Vincent Claveau, a research fellow at the French National Centre for Scientific Research (CNRS) specialising in natural language processing, focuses on the fake videos circulating on social networks.
The content, often modified and compressed several times, is analysed to detect whether similar images exist on the web. “A neural network is trained to identify them by comparing vector representations,” the researcher told Industrie & Technologies magazine. Computing the difference between the two images then highlights the modified areas and identifies the changes that were made.
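The two steps just described, comparing compact vector representations to find a near-duplicate and then subtracting the images to locate the edit, can be illustrated on toy arrays. Real systems use learned neural embeddings; in this sketch the “embedding” is only a coarse downsampling, which stands in for them.

```python
import numpy as np

def embed(img, grid=4):
    """Coarse descriptor: mean intensity over a grid x grid tiling."""
    h, w = img.shape
    tiles = img[: h - h % grid, : w - w % grid]
    tiles = tiles.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return tiles.ravel()

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
original = rng.random((32, 32))       # stand-in for the source image
tampered = original.copy()
tampered[8:12, 8:12] += 0.5           # simulated local edit

# Step 1: high similarity of the representations flags a near-duplicate
similarity = cosine(embed(original), embed(tampered))

# Step 2: the pixel-wise difference localises the modified area
diff = np.abs(tampered - original)
ys, xs = np.nonzero(diff > 0.1)
print(similarity, (ys.min(), ys.max()), (xs.min(), xs.max()))
```

The coarse descriptor is robust to the small global changes introduced by recompression, which is why the comparison still matches the images even after they have circulated and been re-encoded.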
His team is also beginning to work on image decontextualisation, analysing the characteristics of images and of their associated text, again using deep learning. The arms race continues, more than ever.
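The decontextualisation check can be sketched under one large assumption: a deep model (not shown here, and not necessarily the one this team uses) has already mapped each image and each caption into a shared vector space, so a genuine image repurposed with an unrelated caption shows low similarity. The vectors below are made up for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors in the assumed shared space."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical precomputed embeddings: (image vector, caption vector)
pairs = {
    "flood photo + flood caption": ([0.9, 0.1, 0.2], [0.8, 0.2, 0.1]),
    "flood photo + riot caption":  ([0.9, 0.1, 0.2], [0.1, 0.9, 0.3]),
}

for name, (img_vec, txt_vec) in pairs.items():
    flag = "suspect" if cosine(img_vec, txt_vec) < 0.5 else "consistent"
    print(name, "->", flag)
```

A mismatch between what the image shows and what the text claims is exactly the signal a decontextualised image leaves behind, even when the image itself is authentic.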