Algorithmic biases, whether statistical or cognitive, are distortions in computer models, often resulting from unbalanced training data or biased design teams. They affect the accuracy of machine decisions.
What are algorithmic biases? Mathilde Saliou explains…
The term “algorithmic bias” covers more than it first suggests. Bias is not a purely statistical phenomenon, though it frequently surfaces in algorithmic models. The term also points to cognitive biases, the mental shortcuts that enable rapid thinking but can lead to errors. And it includes discriminatory biases, which lead machines or individuals to treat parts of the population unfairly, dividing them from the rest.
Algorithmic bias often originates in the very training data supplied to AI models, which makes it, at heart, a statistical problem. For instance, a facial recognition model fed more images of men than of women will skew its identifications toward men, simply because of exposure frequency. The problem becomes critical when such systems are deployed within society. Facial recognition algorithms used in law enforcement, for example, tend to err when identifying populations they were inadequately trained on.
These systems regularly misidentify individuals with darker skin tones because they were trained predominantly on lighter-skinned faces; the issue stems from an imbalance in the training data.

So where do these biases originate? Primarily from three sources. First, from the data used to train the models, which inherently skews the algorithm. Second, from the teams building these technologies, whose lack of diversity fosters inadvertent biases. Third, from the purpose for which a machine is designed, which produces varying outcomes depending on its objectives.

Understanding algorithmic bias therefore demands scrutiny of the data, the creators, and the intended goals. Bridging these gaps, diversifying perspectives, and making training data more inclusive paves the way for technology that transcends bias, enabling fairer, more accurate outcomes across diverse populations.
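The statistical mechanism behind this kind of bias can be sketched in a few lines of Python. The toy example below is purely illustrative: the groups, feature values, and the naive nearest-neighbour classifier are all hypothetical, chosen only to show how an imbalanced training set, on its own, produces higher error rates for the under-represented group.

```python
import random

random.seed(0)

# Toy stand-in for face features: one number per sample, with two
# demographic groups whose feature distributions overlap.
# (All numbers here are hypothetical, chosen only to illustrate imbalance.)
def sample(group, n):
    center = 0.0 if group == "A" else 1.0
    return [(random.gauss(center, 0.8), group) for _ in range(n)]

# Imbalanced training set: group A is over-represented 9 to 1.
train = sample("A", 90) + sample("B", 10)

# A deliberately naive classifier: majority vote among the k training
# samples whose feature value is closest to x.
def predict(x, k=5):
    nearest = sorted(train, key=lambda s: abs(s[0] - x))[:k]
    groups = [g for _, g in nearest]
    return max(set(groups), key=groups.count)

# Evaluate on balanced test sets of each group: the under-represented
# group is misclassified more often, purely because of exposure frequency.
def error_rate(group, n=200):
    test = sample(group, n)
    return sum(1 for x, g in test if predict(x) != g) / n

err_a = error_rate("A")
err_b = error_rate("B")
print(f"error on over-represented group A:  {err_a:.2f}")
print(f"error on under-represented group B: {err_b:.2f}")
```

The model itself contains no explicit rule that disadvantages group B; the disparity emerges entirely from what the model was shown, which is exactly the imbalance described above.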
She is a journalist specialising in digital issues and graduated from Sciences Po. She has worked for RFI, 20 Minutes, Slate, Usbek & Rica, Les Inrocks, Numerama, Flint, NextInpact, The Guardian, etc. From 2020 to 2022, she was secretary general of the association Prenons la Une, which campaigns for better representation of women in the media and equality in editorial offices.