“Expectations are high, but companies are up against many limitations in deploying fair AI systems on a large scale.”
The development and deployment of artificial intelligence technologies is accelerating, but the presence of discriminatory or unfair biases in many AI solutions is worrying. The case of Amazon's CV-screening model still looms large: it was scrapped after three years because it penalized female applicants, and methods for addressing bias still vary greatly today. The topic is all the more important given that the European Commission classifies this area of AI use as high risk in its proposed regulation. Another example concerns the biases in facial recognition systems analyzed by Joy Buolamwini and Timnit Gebru; Gebru's recent dismissal from Google was itself controversial.
Many solutions for managing these biases and ensuring fairness have been proposed from various quarters: from diversifying data sets to specifying fairness criteria, from defining metrics to introducing modeling constraints, from training data scientists to increasing team diversity. But where do things stand in practice, and what obstacles must be overcome before these solutions can be deployed in organizations?
What Do We Mean by Bias, Fairness and Discrimination?
Despite the lack of consensus on what makes an AI model fair, in the context of machine learning fairness can be defined as the absence of any prejudice or favoritism toward an individual or group on the basis of intrinsic or acquired characteristics. A lack of fairness amounts to discrimination when it touches on rights protected by law. Cognitive biases (misleading, falsely logical patterns of thought), as well as statistical or economic biases, can lead to unfair or even discriminatory models. In machine learning, many types of bias crop up over the course of an AI model's lifecycle; identifying and limiting them should allow fairer models to be developed.
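To make this concrete, group fairness is often operationalized through simple metrics computed on a model's outputs. The sketch below, on hypothetical toy data (all names and numbers are illustrative, not from any real system), computes two common measures for a binary classifier: the demographic parity difference and the disparate impact ratio between a privileged and an unprivileged group.

```python
import numpy as np

# Hypothetical toy data: model predictions (1 = favorable outcome, e.g. "hire")
# and a binary sensitive attribute (1 = privileged group, 0 = unprivileged).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# Selection rate: share of favorable outcomes within each group.
rate_priv   = y_pred[group == 1].mean()
rate_unpriv = y_pred[group == 0].mean()

# Demographic parity difference: 0 means both groups are selected at the same rate.
dp_diff = rate_priv - rate_unpriv

# Disparate impact ratio: values below ~0.8 are often flagged,
# following the US "four-fifths rule" used as a rough heuristic.
di_ratio = rate_unpriv / rate_priv

print(f"selection rates: privileged={rate_priv:.2f}, unprivileged={rate_unpriv:.2f}")
print(f"demographic parity difference: {dp_diff:.2f}")
print(f"disparate impact ratio: {di_ratio:.2f}")
```

Which metric is appropriate depends on the context: demographic parity ignores true outcomes entirely, so other definitions (equalized odds, for instance) may be preferred when ground-truth labels are trusted.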
Tools Developed from Research
Key stakeholders from industry and academia (Google, IBM, Amazon, Microsoft, Aequitas, etc.) have made available solutions they have researched and developed for identifying data biases and mitigating them, either by reweighting individuals in the training data who are known to be subject to discrimination, or by imposing constraints on the optimization program before, during, or after model development.
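As an illustration of the reweighting approach, here is a minimal sketch of the preprocessing scheme proposed by Kamiran and Calders, which is the idea behind tools such as the Reweighing component of IBM's AIF360 toolkit. Each (group, label) combination receives the weight that would make group membership statistically independent of the label in the training data; the data and variable names are hypothetical.

```python
import numpy as np

def reweighing_weights(group, label):
    """Kamiran-Calders reweighing: weight each (group, label) cell so that
    group membership and label become statistically independent.
    w(g, y) = P(group=g) * P(label=y) / P(group=g, label=y)."""
    group, label = np.asarray(group), np.asarray(label)
    weights = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            observed = cell.mean()  # assumed non-zero: every cell is populated
            # >1 for under-represented cells, <1 for over-represented ones
            weights[cell] = expected / observed
    return weights

# Hypothetical training data: the unprivileged group (0) rarely gets label 1.
group = np.array([1, 1, 1, 1, 0, 0, 0, 0])
label = np.array([1, 1, 1, 0, 1, 0, 0, 0])
w = reweighing_weights(group, label)
print(w)  # e.g. pass as sample_weight to model.fit(X, label, sample_weight=w)
```

The weights can then be fed to any learner that accepts per-sample weights, which is what makes this a preprocessing technique: the model itself is left unchanged.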
These proposals do raise awareness, but they still need further development before they can be industrialized for company use.
Tool Limitations: Little Focus on Data, Unrealistic Assumptions and Support Only for Centrally Trained Models
While data bias is the root cause of any lack of fairness, current research seems to focus more on building models than on engineering and managing the data itself. Initiatives that remove the source of the bias by using naturally unbiased data are rare and confined to the research world: Facebook recently published a varied and unbiased database of more than 45,000 videos of people with varying characteristics (age, gender, skin color) so that researchers can test the performance of their computer vision and voice recognition models.

Moreover, current tools assume access to personal data in the training sets when building models: they need the sensitive variables themselves (gender or sex, for example). In reality, it is not always easy to know in advance which sensitive variables are available, and variables other than gender or sex might be more relevant depending on the context. Sometimes the data set contains no sensitive variable at all, but so-called proxy variables play this part. And access to personal data is not always possible in the first place.

The tools available to companies also center on limiting biases in centrally trained classification models for structured data, leaving out many other settings such as image classification and federated learning. When tools are not integrated throughout the development cycle for AI systems, companies struggle to adopt them. The focus on correlation between the input and output parameters of an AI model, rather than on causality, risks producing a skewed model, but also poor generalization and an inability to reuse the models (transfer learning). Although tools for monitoring models and detecting possible drifts and the associated biases are beginning to emerge (IBM's Watson OpenScale and NannyML, for example), their use has yet to be scaled up in companies.
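Returning to the proxy-variable problem mentioned above: one common heuristic is to check how well the remaining features predict the sensitive attribute after it has been dropped. If a simple model recovers the attribute well above chance, proxies are present and "fairness through unawareness" will fail. The sketch below uses synthetic data and a hypothetical proxy feature; it is an illustration of the heuristic, not a complete audit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data set: 'sensitive' is the protected attribute we dropped,
# but feature 0 (say, a postcode-derived score) strongly correlates with it.
n = 1000
sensitive = rng.integers(0, 2, size=n)
proxy = sensitive + rng.normal(0, 0.3, size=n)   # leaks the attribute
noise = rng.normal(size=(n, 3))                  # unrelated features
X = np.column_stack([proxy, noise])

# If the remaining features predict the sensitive attribute well above the
# ~0.5 chance level, they act as proxies for it.
auditor = LogisticRegression(max_iter=1000)
scores = cross_val_score(auditor, X, sensitive, cv=5, scoring="roc_auc")
print(f"proxy audit AUC: {scores.mean():.2f}")   # ~0.5 would mean no leakage
```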
Tools Target In-Demand Technical Teams
These tools target the technical teams responsible for developing AI models, teams that are already in high demand because their skills are scarce, whereas in a company, fairness guidelines must result from a deliberative decision made collectively by all stakeholders. The human and social sciences, through approaches such as design thinking, are very useful for building accountability practices into products and for promoting ethical choices.
The First Steps for Corporate AI Governance
The responsible AI movement has undeniably begun in organizations. Corporate AI governance is being structured around ethical issues and guidelines, and ethics committees are being formed as a result. Orange, for example, has created a Data and AI Ethics Council and also participates in the Cercle InterElles women and AI initiative. Certification labels help companies to develop skills and organize themselves. But ensuring that all employees implement the company's ethical principles and values is a long process. The new role of Responsible AI or Ethics Correspondent appears necessary to escalate questions that require central decisions, and to organize risk management and the deployment of tools. Although training is the favored way to give employees skills and help them put those skills into practice, innovative approaches must be developed to fit operational constraints and a variety of profiles, technical or otherwise. Organizations are currently at different levels of maturity, and many groups exist for sharing good practices (Impact AI and Substra, for example) and for moving toward the automation described by IBM.
Increased Complexity for International Businesses
The question is: what ethical values should global organizations adopt? Should these depend on the regions of the world in which they operate, even though the available tools replicate a Western, or even North American, point of view?
Finally, there are many ways to manage these biases and ensure fairness, but businesses still face many operational limitations when implementing them at scale in their organizations. Yet expectations for fairness in AI systems are high. The European Commission is taking a regulatory approach and must strike a balance with the protection of its innovation ecosystem: high-risk systems will face requirements on the quality of data sets (Article 10) and on human oversight (Article 14) in order to avoid technical or human bias. The Council of Europe is also working on regulatory instruments to prevent human rights violations and to keep democracy and the rule of law from being undermined.
Several international standards bodies (ISO, IEEE and NIST, among others) are already working on the subject of AI bias, and their work will feed into European standardization strategies (organized by CEN and CENELEC) and into vertical standardizations for at-risk sectors.
Let us hope that all these initiatives will serve as both guide and accelerator for the practical implementation of bias management in businesses. Ideas and approaches for building trusted AI systems are far from settled, and there are still many avenues to explore to overcome the current limitations.
Further reading
https://hellofuture.orange.com/en/advocating-for-ethical-and-responsible-ai-by-design/
https://hellofuture.orange.com/en/how-ai-can-help-reduce-inequalities/
https://hellofuture.orange.com/en/x-ai-understanding-how-algorithms-reason/
https://hellofuture.orange.com/en/auditing-ai-when-algorithms-come-under-scrutiny/
https://hellofuture.orange.com/en/interactive/artificial-intelligence-hopes-fears-humankind/