Machine learning

Manipulation, mistrust and adoption: paradoxical responses to AI in companies

• A study conducted by Natalia Vuori (Aalto University, Finland) has identified four trust configurations (full trust, full distrust, uncomfortable trust, and blind trust) that determine how employees interact with AI.
• Some employees deliberately manipulate AI tools, compromising the accuracy of their results. Deterioration in the reliability of these tools may then create a ‘vicious cycle’ within companies.
• Managers are well advised to adopt approaches to AI that reflect employees’ levels of trust, while reassuring them about how their data is used and emphasising concrete benefits.
Read the article

Contraband: AI efficiently detects anomalies in shipping containers

Read the article

Artificial intelligence: how psychology can contribute to AGI

Read the article

How to make AI explainable?

Read the article

Explainability of artificial intelligence systems: what are the requirements and limits?

Read the article

AI: “the divide between freelance and in-house developers can be damaging”

Read the article

BrainBox AI to cut commercial real estate emissions by up to 40%

Read the article