Artificial intelligences are blank pages. If we can pass our biases on to them, we can also teach them to avoid them.
In March 2018, French mathematician and member of parliament Cédric Villani published a report entitled Donner un sens à l’intelligence artificielle (Giving meaning to artificial intelligence), in which he advocates an inclusive and diverse artificial intelligence (AI): “In terms of AI, the inclusion policy must therefore take on a double objective: ensure that the development of these technologies does not contribute to increasing social and economic inequalities; and use AI to effectively reduce them.” An ambitious objective, because in this area things are far from straightforward.
As early as 2016, American data scientist and activist Cathy O’Neil denounced the potential abuses of algorithms in her essay Weapons of Math Destruction. Although algorithms are supposedly neutral, many examples (Amazon’s recruitment software, the COMPAS software used in the US justice system, etc.) have shown that this is not always the case. Machine learning models and datasets can contain biases, and even amplify them.
In a recently published article, researchers at Télécom ParisTech identify three types of bias: first, those that result from the programmers’ cognitive biases; second, statistical biases, linked to partial or erroneous data (“‘Garbage in, garbage out’ […] refers to the fact that even the most sophisticated algorithm will produce inexact, potentially biased results if the input data it trains on are inexact”); and finally, economic biases, linked to cost-effectiveness trade-offs or to deliberate manipulation by companies.
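A minimal sketch can make this “garbage in, garbage out” effect concrete. In the invented example below, two groups have identical skills, but the historical hiring labels favoured one group; a model trained on those labels faithfully reproduces the head start. All data here are synthetic and for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two groups with identical true skill distributions...
n = 1000
group = rng.integers(0, 2, n)                 # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)                   # same distribution for both

# ...but historical decisions gave group 1 a +1 head start ("garbage in").
hired = (skill + group + rng.normal(0, 0.5, n) > 0.5).astype(int)

model = LogisticRegression().fit(np.c_[skill, group], hired)

# "Garbage out": the model reproduces the historical advantage.
for g in (0, 1):
    mask = group == g
    X_g = np.c_[skill[mask], np.full(mask.sum(), g)]
    print(f"group {g}: predicted hire rate = {model.predict(X_g).mean():.2f}")
```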
Taking this problem into account is all the more important now that algorithms underpin decisions that have a real impact on our lives. It is no longer just a matter of being recommended this or that film on Netflix, video on YouTube, or book on Amazon: AI is used to recruit, grant loans, make medical diagnoses, and even set the length of prison sentences. Fortunately, there are several ways to limit and correct algorithmic biases.
1. Foster social mix and diversity among developers. If algorithmic biases are – in part – linked to the cognitive biases of those who programme them, the importance of diversity among developers is clear. Yet the IT and new technologies sector is largely dominated by white men. The Villani report notes that “women represent […] a mere 33% of people in the digital sector (only 12% if we exclude cross-functional and support roles)”. Many studies show that ethnic minorities are also under-represented.
How can things be changed? By combining education – promoting equality and digital literacy at school – with action inside companies: supporting associations that encourage young girls to go into IT, or programmes aimed at specific audiences (like the Web@cadémie, which trains young people who have left the school system to become web developers); bringing female role models to the fore and developing mentoring; and building more inclusive development teams.
2. Code to include. Artificial intelligences are blank pages. If we can pass our biases on to them, we can also teach them to avoid them. We can “de-encode” discrimination and develop to include, notably through the choice of algorithm or of the predictor variables taken into account. Beyond the code itself, the choice of training data also plays a critical role: it is important, for example, to inject diversity into training datasets. In this spirit, IBM – whose facial recognition system had been singled out by MIT researchers – presented its Diversity in Faces dataset, a collection of one million annotated human faces intended to be representative of society and to improve facial recognition technologies.
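One simple way to “inject diversity” is to rebalance a skewed training set before learning. The sketch below oversamples an under-represented group until both groups are equally weighted; the group labels are invented, and real-world efforts (like curating a dataset such as Diversity in Faces) involve collecting genuinely new data rather than just resampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: 1,000 training examples, 90% from group "A" and
# only 10% from group "B" (the under-represented group).
group = np.array(["A"] * 900 + ["B"] * 100)

idx_a = np.flatnonzero(group == "A")
idx_b = np.flatnonzero(group == "B")

# Oversample group B (with replacement) until both groups are the same size.
idx_b_up = rng.choice(idx_b, size=len(idx_a), replace=True)
balanced = np.concatenate([idx_a, idx_b_up])
rng.shuffle(balanced)

print("before:", dict(zip(*np.unique(group, return_counts=True))))
print("after: ", dict(zip(*np.unique(group[balanced], return_counts=True))))
```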
The researchers at Télécom ParisTech describe two families of solutions for limiting algorithmic biases: statistical approaches, which address the way data are collected and processed, and algorithmic approaches, which seek to build fairness in from the algorithm’s conception by integrating various constraints: “[…] an area of research in machine learning is developing around what we call algorithmic fairness. The objective of this work is to design algorithms that meet fairness criteria, for example non-discrimination with respect to characteristics protected by law such as ethnic origin, gender, or sexual orientation”. The task is a hard one, because fairness is a plural concept, not a universal one: its definitions vary from one society to another and from one era to another, and its applications can be incompatible with each other. There is in fact a whole range of criteria used in machine learning to judge an algorithm’s fairness, but none commands consensus and several are mutually incompatible.
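To see why the criteria can conflict, consider two well-known ones: demographic parity (both groups receive favourable decisions at the same rate) and equal opportunity (both groups of genuinely qualified people are accepted at the same rate). The toy numbers below are invented; they show a model that satisfies the second criterion while violating the first, which is exactly the kind of tension the researchers describe.

```python
import numpy as np

# Toy predictions for two groups: y = true outcome, yhat = model decision.
# Group B has a lower base rate of positive outcomes than group A.
y_a    = np.array([1, 1, 1, 0, 0, 0, 0, 0])
yhat_a = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_b    = np.array([1, 0, 0, 0, 0, 0, 0, 0])
yhat_b = np.array([1, 1, 0, 0, 0, 0, 0, 0])

def selection_rate(yhat):
    return yhat.mean()                  # P(decision = favourable)

def true_positive_rate(y, yhat):
    return yhat[y == 1].mean()          # P(favourable | truly qualified)

# Demographic parity compares selection rates: 0.50 vs 0.25 -> violated.
print("selection rates:", selection_rate(yhat_a), selection_rate(yhat_b))

# Equal opportunity compares true positive rates: 1.0 vs 1.0 -> satisfied.
print("true positive rates:",
      true_positive_rate(y_a, yhat_a), true_positive_rate(y_b, yhat_b))
```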
There are also AIs designed to detect and combat discrimination. Some research centres and technology companies have launched their own projects, like Aequitas, developed by the Center for Data Science and Public Policy at the University of Chicago, and IBM’s AI Fairness 360: open-source toolkits that aim to track and correct biases in datasets and machine learning models. Mathematician Cathy O’Neil has set up her own algorithmic auditing company, ORCAA. In France, the startup Maathics offers the same type of service and awards the Fair Data Use label.
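As a rough idea of how such a toolkit is used in practice, here is a minimal sketch based on AI Fairness 360’s dataset and metric classes. The data are invented for illustration, and exact API details may vary between library versions.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the favourable outcome we want to audit.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [7, 5, 8, 6, 7, 5, 8, 6],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged   = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Audit the dataset: statistical parity difference should be near 0
# and disparate impact near 1 if outcomes are balanced across groups.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("statistical parity difference:", metric.statistical_parity_difference())
print("disparate impact:", metric.disparate_impact())

# One of the toolkit's pre-processing corrections: reweight examples so
# that favourable outcomes are balanced across groups before training.
transformed = Reweighing(unprivileged_groups=unprivileged,
                         privileged_groups=privileged).fit_transform(dataset)
```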
3. Make algorithms more transparent. Whenever an algorithm has important consequences for a person’s life, that person should be able to understand the rules it follows, and those rules should ideally be open to discussion beforehand. Making algorithms transparent means opening the “black boxes” to understand the inner workings of the learning models and the data they use. The notion of “algorithmic transparency” has gained importance in the public debate and is the subject of many initiatives, such as the TransAlgo platform launched in 2018 by Inria, the French national research institute for digital science.
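One common way to peer inside a black box is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops, revealing which variables actually drive its decisions. The sketch below uses synthetic data, and the feature names are hypothetical stand-ins for a real decision dataset such as loan applications.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a real decision dataset; the feature names
# below are invented labels for illustration.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["income", "debt", "age", "tenure", "region"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Shuffle each feature and measure the accuracy drop: a large drop
# means the model leans heavily on that variable.
result = permutation_importance(model, X, y, n_repeats=30, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>8}: {score:.3f}")
```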
Beyond fairness: reducing inequalities with AI
As we have seen, the Villani report sets a double objective: fairness, but also the reduction of inequalities. It notably mentions the creation of an automated assistance system for administrative procedures, to improve equal access to public services, and AI-based technologies that better take into account the needs of people with disabilities and improve their living conditions.
Along these lines, Microsoft’s Seeing AI and Google’s Lookout applications use automatic image recognition to help blind or visually impaired people identify the elements (people, objects, text, etc.) around them.
More broadly, AI has great potential to simplify digital tools and uses, and thus to narrow the digital divide.
The idea is to put AI at the service of equal opportunity, of the fight against discrimination, and of diversity and inclusion in the workplace. Several initiatives point in this direction, such as tools that aim to limit bias in recruitment.
By using Textio, a smart text editor capable of making job descriptions more inclusive, software publisher Atlassian raised the percentage of women recruited from 10% to 57%. In France, the Data for Good community brings together hundreds of volunteer data scientists, developers, and designers who put their skills at the service of projects with social impact.
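In the spirit of such tools, a toy version of “inclusive writing assistance” can be sketched as a keyword scan for gender-coded terms in a job advert. The word list below is invented for illustration; real products like Textio rely on much richer statistical models of language and hiring outcomes.

```python
import re

# Hypothetical list of masculine-coded terms often flagged in job adverts.
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "fearless"}

def flag_gendered_terms(ad_text):
    """Return the masculine-coded terms found in a job advert."""
    words = re.findall(r"[a-z']+", ad_text.lower())
    return sorted(set(words) & MASCULINE_CODED)

ad = "We want a competitive rockstar developer, fearless under pressure."
print(flag_gendered_terms(ad))   # -> ['competitive', 'fearless', 'rockstar']
```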
Although AI does carry risks, many examples show that it also represents a fantastic opportunity for social innovation.
Sources:
– Donner un sens à l’intelligence artificielle : pour une stratégie nationale et européenne (Giving meaning to artificial intelligence: for a national and European strategy)
– Algorithmes : biais, discrimination et équité (Algorithms: bias, discrimination and equity)
– Concrètement, comment rendre les algorithmes responsables et équitables ? (In practice, how can we make algorithms responsible and fair?)
– Using Artificial Intelligence to Promote Diversity