● Artificial intelligence (AI) systems can absorb our biases during training. However, it is also possible to train models in ways that actively combat inequality.
● Several projects are underway to make AI more inclusive. Gapsquare, for example, is a human resources tool trained on data screened to ensure gender and ethnic parity. Other programmes have been developed to ensure that minority communities have inclusive access to healthcare.
Artificial intelligences are blank pages. If we can pass our biases on to them, we can also teach them to avoid them.
While the medical community and human resources departments have voiced concerns about the growing risk that artificial intelligence systems may exacerbate inequality, figures such as Bill Gates still see AI primarily as an opportunity for the health and education sectors. Stakeholders in the sector are giving priority to limiting the biases reproduced by algorithms. And above and beyond these initiatives, some see AI as a valid tool in the fight against inequality in its own right: artificial intelligences are blank pages. If we can pass our biases on to them, we can also teach them to avoid them. In the United States, a study undertaken in early 2023 by researchers at MetroHealth at Case Western Reserve University (Cleveland) showed that AI can be used to evaluate the risk of minority patients failing to show up for their medical appointments. The goal of collecting this data is to enable hospitals to offer targeted alternatives such as telemedicine, and additional inducements such as transport solutions, to reduce the rate of non-attendance.
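The MetroHealth team’s actual model is not described here, but a no-show risk score of this kind is typically a simple classifier. Below is a minimal sketch using scikit-learn, in which the features (travel distance, past no-shows, access to transport) and the data are entirely hypothetical.

```python
# Hedged, illustrative sketch of a no-show risk classifier.
# All features and data are synthetic; this is NOT the study's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000
distance = rng.exponential(10, n)        # travel distance in km (hypothetical)
prior_noshows = rng.poisson(1, n)        # past missed appointments (hypothetical)
has_transport = rng.integers(0, 2, n)    # 1 = has own transport (hypothetical)
X = np.column_stack([distance, prior_noshows, has_transport])

# Synthetic ground truth: no-shows rise with distance and past misses,
# fall when the patient has transport.
y = (0.05 * distance + 0.5 * prior_noshows - 1.0 * has_transport
     + rng.normal(0, 1, n) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X[:5])[:, 1]  # estimated no-show probability
print(risk)  # high-risk patients could be offered telemedicine or transport
```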
Towards a more inclusive artificial intelligence
In the United Kingdom, Dr Zara Nanu has developed the Gapsquare platform, which analyses employees’ salaries while taking into account data on gender, ethnic origin, disability and more. She is convinced that, if nothing is done, AI will further discriminate against female workers in terms of recruitment and pay: because it relies on historical data from the field, algorithms reproduce situations where men are paid more and occupy more senior positions. If a system is instead trained on equitable and inclusive data, it can become a tool for achieving greater social justice in the workplace. In March 2018, French mathematician and parliamentarian Cédric Villani published a report entitled Donner un sens à l’intelligence artificielle (Giving meaning to artificial intelligence), in which he advocated an inclusive and diverse artificial intelligence. In its conclusion, which remains relevant today, he argued that “with regard to AI, the inclusion policy must therefore take on a double objective: ensure that the development of these technologies does not contribute to increase social and economic inequalities; and use AI to effectively reduce these.” As early as 2016, American data scientist and activist Cathy O’Neil denounced the potential abuses of algorithms in her essay Weapons of Math Destruction. Although algorithms are supposedly neutral, many examples (Amazon’s recruitment software, the COMPAS justice software, etc.) have shown that this is not always the case. Machine learning models and datasets can contain biases, and even amplify them.
Algorithms are now used as a basis for decisions that have an impact on our lives (…) AI is used to recruit employees, grant loans, make medical diagnoses, and more.
A diversity of algorithmic biases
In an article entitled Algorithmes : biais, discrimination et équité (Algorithms: bias, discrimination and equity), researchers at Télécom ParisTech identified three types of bias: those that result from the programmers’ own cognitive biases; statistical biases, linked to partial or erroneous data (“‘Garbage in, garbage out’ […] refers to the fact that even the most sophisticated algorithm will produce inaccurate and potentially biased results if the input data on which it trains are inaccurate”); and economic biases, linked to cost-efficiency calculations or deliberate manipulation by companies. It is all the more important for these problems to be taken into account now that algorithms are being used as the basis for decisions that have an impact on our lives. The outputs of AI systems are no longer mere recommendations about films to watch on Netflix, videos on YouTube, or books to buy on Amazon. AIs are now being used to recruit employees, grant loans, make medical diagnoses, and even set the length of prison sentences. Fortunately, there are several ways to limit and correct algorithmic biases in such systems.
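A deliberately skewed toy example makes the “garbage in, garbage out” problem concrete: a model trained on historical hiring decisions that favoured men learns to reproduce that preference, even when candidates are equally skilled. The data below is synthetic and the setup is illustrative, not drawn from any of the cases cited above.

```python
# Toy demonstration of statistical bias: biased training data in,
# biased predictions out. All data is synthetic and deliberately skewed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)           # 0 = men, 1 = women
skill = rng.normal(0, 1, n)              # identical skill distribution for both groups
# Historical decisions favoured men regardless of skill:
hired = (skill + 1.0 * (gender == 0) + rng.normal(0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Predicted hiring probability for two equally skilled candidates:
for g, label in [(0, "man"), (1, "woman")]:
    p = model.predict_proba([[g, 0.0]])[0, 1]
    print(f"P(hired | {label}, average skill) = {p:.2f}")
# The gap between the two probabilities is the bias the model has learned.
```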
Encouraging diversity among developers
If algorithmic biases are, in part, linked to the cognitive biases of those who programme them, we can understand the importance of diversity among developers. Yet the IT and new technologies sector is largely dominated by white men. In 2023, the NGO Femmes@Numérique reported that in France “women only represent 26.9% of the workforce in digital professions and less than 16% in technical positions which today play a key role in organisational strategies”. Furthermore, numerous studies have shown that ethnic minorities are also under-represented. As Mathilde Saliou, the author of Technoféminisme, comment le numérique aggrave les inégalités (Technofeminism: how the digital world is exacerbating inequalities), explains to Hello Future, it is “urgent to enable dialogue with end users who are not necessarily aware of the data submitted to these systems”.
How can things be changed? By combining education on equality and equal access to digital training in schools with initiatives undertaken within companies. These can range from supporting associations that encourage young girls to go into IT, to programmes aimed at specific audiences (like the Web@cadémie, which trains young people who have left the school system to become web developers), to promoting female role models, developing mentoring, or building more inclusive development teams. For example, at the end of 2022, the digital training organisation Simplon offered free web development classes to women in the French city of Rennes.
Machine learning for more inclusive programming
We can “de-code” discrimination and develop with inclusion in mind, notably through the careful choice of algorithms and predictor variables. Along with the code itself, the choice of training data plays a critical role: it is important, for example, to ensure diversity in training databases. With this in mind, in 2019, IBM, whose facial recognition system had been flagged by researchers from MIT, presented its Diversity in Faces dataset comprising one million annotated human faces, intended to improve facial recognition technologies by offering a more representative sample of society.
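As a minimal sketch of what “ensuring diversity in training databases” can mean in practice, the helper below oversamples each demographic group in a pandas DataFrame up to the size of the largest one before a model is trained. The column name is hypothetical, and real projects would weigh simple resampling against other rebalancing techniques.

```python
# Minimal sketch of one statistical fix: resampling a training set so
# that each demographic group is equally represented. Column names are
# hypothetical.
import pandas as pd

def rebalance(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Oversample every group up to the size of the largest one."""
    target = df[group_col].value_counts().max()
    parts = [
        g.sample(n=target, replace=len(g) < target, random_state=seed)
        for _, g in df.groupby(group_col)
    ]
    return pd.concat(parts).sample(frac=1, random_state=seed)  # shuffle rows

# Hypothetical usage: balanced = rebalance(train_df, group_col="ethnicity")
```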
The researchers at Télécom ParisTech describe two kinds of solution for limiting algorithmic biases: statistical approaches, linked to the way in which the data are collected and processed, and algorithmic approaches, which seek to build fairness in from the design stage by integrating various constraints into the algorithm itself: “[…] an area of research in machine learning is developing around what we call algorithmic fairness. The objective of this work is to design algorithms that meet fairness criteria, and thus avoid discrimination according to characteristics that are enshrined in law such as ethnic origin, gender, or sexual orientation”. The task is a hard one, because fairness is a plural rather than a universal concept: its definition can vary from one society to another and from one era to another, and its applications can be incompatible with each other. There is in fact a whole range of criteria used in machine learning to judge an algorithm’s fairness, but no individual criterion is backed by a consensus and several are mutually incompatible.
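To make that plurality concrete, here is a small sketch of two common criteria, demographic parity and equal opportunity, computed with NumPy. A model can satisfy one while violating the other, which is precisely why no single criterion commands consensus. The array names are illustrative.

```python
# Two common fairness criteria from the machine-learning literature,
# computed from binary predictions y_pred, true labels y_true and a
# binary protected attribute a (all NumPy arrays of 0s and 1s).
import numpy as np

def demographic_parity_gap(y_pred, a):
    # Difference in positive-prediction rates between the two groups.
    return abs(y_pred[a == 0].mean() - y_pred[a == 1].mean())

def equal_opportunity_gap(y_pred, y_true, a):
    # Difference in true-positive rates between the two groups.
    tpr0 = y_pred[(a == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(a == 1) & (y_true == 1)].mean()
    return abs(tpr0 - tpr1)
```

If the two groups have different base rates of positive outcomes in the ground truth, it is mathematically impossible for a non-trivial classifier to drive both gaps to zero at once; a choice between criteria has to be made.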
There are also AIs that can detect and combat discrimination. Some research centres and technology companies have launched their own projects, like Aequitas, developed by the Center for Data Science and Public Policy at the University of Chicago, or IBM’s AI Fairness 360: open-source toolkits that aim to track and correct biases in databases and machine learning models. In the US, mathematician Cathy O’Neil has also set up an algorithmic auditing company, ORCAA, while in France the start-up Maathics offers similar services leading to the award of a Fair Data Use label.
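As an indication of what such an audit looks like in code, the snippet below uses IBM’s open-source AI Fairness 360 toolkit to compute the disparate impact of a toy dataset. The DataFrame, column names and choice of privileged group are hypothetical, and the exact API may vary between versions of the library.

```python
# Sketch of a dataset audit with IBM's AI Fairness 360 (pip install aif360).
# The data, column names and privileged group are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "gender": [1, 1, 0, 0, 1, 0],   # 1 = privileged group (assumption)
    "hired":  [1, 1, 1, 0, 1, 0],   # favourable outcome = 1
})
dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["gender"]
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)
# Ratio of favourable-outcome rates between groups (1.0 means parity):
print("Disparate impact:", metric.disparate_impact())
```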
Making algorithms more transparent
When an algorithm can have significant consequences on people’s lives, it is important that those affected are able to understand the rules it follows, and that they be given an opportunity to discuss it before it is put to use. Making algorithms transparent involves opening “black boxes” to gain an understanding of the internal workings of learning models and the data they use. The notion of “algorithmic transparency” has grown in importance in public debate and is the subject of a number of initiatives, such as the TransAlgo platform launched in 2018 by Inria, the French national research institute for the digital sciences, and the European Centre for Algorithmic Transparency, launched by the European Union in April 2023.
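One widely used way to peer into a “black box” is permutation importance, which measures how much a trained model’s accuracy drops when each input column is shuffled. The sketch below uses scikit-learn on synthetic data; the model choice and feature names are illustrative.

```python
# Opening a "black box" with permutation importance: shuffle each feature
# and see how much the model's score degrades. Data and names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # hypothetical inputs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # depends on first two columns only

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["income", "age", "seniority"], result.importances_mean):
    print(f"{name}: {imp:.3f}")   # the third feature should score near zero
```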
Combating inequality using AI
The Villani report established a dual objective: ensuring that AI does not deepen inequalities on the one hand, and using it to reduce them on the other. It notably mentioned the creation of an automated help system to facilitate equal access to public services, and AI-based technologies that take fuller account of the needs of people with disabilities and improve their living conditions. With these objectives in mind, Microsoft’s Seeing AI and Google’s Lookout applications, which make use of automatic image recognition, are now helping blind or visually impaired people to identify elements (individuals, objects, text, etc.) present in their surroundings. Projects that focus on sound are also playing a role, among them DreamWaves, which combines virtual audio technology with a map-based guidance system.
Beyond this, AI has great potential to simplify digital tools and services, and thus narrow the digital divide. The idea is to put AI to work for equal opportunities by combating discrimination and promoting diversity and inclusion in the workplace. Several initiatives that aim to take up this challenge are already underway, such as tools designed to limit bias in recruitment procedures, and their deployment can lead to real change. For example, when it began to use Textio, a smart text editor that makes job descriptions more inclusive, software publisher Atlassian succeeded in raising the percentage of female graduates among its recruits from 10% to 57%. In France, the Data for Good community has brought together hundreds of data scientists, developers and designers who volunteer their skills on projects designed to have a social impact. Although AI does carry risks, there are many examples showing that it also represents a fantastic opportunity for social innovation.
Sources:
– Donner un sens à l’intelligence artificielle : pour une stratégie nationale et européenne (Giving meaning to artificial intelligence: for a national and European strategy)
– Algorithmes : biais, discrimination et équité (Algorithms: bias, discrimination and equity)
– Concrètement, comment rendre les algorithmes responsables et équitables ? (In practice, how can we make algorithms responsible and fair?)
– Using Artificial Intelligence to Promote Diversity