AI: the challenges of developing automated systems to moderate hate speech

● AI tools can be used to moderate large quantities of social media posts, but adapting them to effectively vet material from a variety of social and cultural contexts is no easy task.
● Researchers from the US and the UK have recently presented a new hate speech detection model that strikes a balance between accuracy and fairness.
● A multidisciplinary team at Orange is also investigating ways to improve the efficiency and fairness of these technologies by combining AI-generated hate speech with social science data.
Read the article
Orange launches a quantum computing research initiative to optimise network operations

Efficient, lightweight computer vision models for innovative applications

The drive to simulate human behaviour in AI agents

RIC Testing as a Platform, a free, open-source Open RAN tool

6 GHz band: a new opportunity for future mobile networks, successfully demonstrated in the field

The perilous charms of relational AIs
