AI: challenges faced by developers of automated systems to moderate hate speech
● AI tools can be used to moderate large quantities of social media posts, but adapting them to effectively vet material from a variety of social and cultural contexts is no easy task.
● Researchers from the US and the UK have recently presented a new model for the detection of hate speech which strikes an optimal balance between accuracy and fairness.
● A multidisciplinary team at Orange is also investigating ways to boost the efficiency and fairness of these technologies by combining AI-generated hate speech with social science data.