• Some employees deliberately manipulate AI tools, compromising the accuracy of their results. Deterioration in the reliability of these tools may then create a ‘vicious cycle’ within companies.
• Managers are well-advised to adopt approaches to AI that reflect levels of trust among employees while reassuring them about the use of data and emphasising concrete benefits.
How did you go about conducting your qualitative study on the use of AI in companies?
In order to conduct an in-depth qualitative study, we observed the introduction, implementation and utilization of a new AI technology within a company. The tool in question collected a range of digital footprints, such as calendar entries and internal communications on platforms like Slack, to create a competence profile for each member of staff. The overall goal was to generate an organizational competence map, which, in the context of a consultancy for example, can help management determine which employees are best suited to take on new projects, and can help new recruits identify the colleagues they should be collaborating with.
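The interview doesn't describe the tool's internals, but purely as an illustration, a profiling pipeline of this general kind could aggregate keyword signals from footprints into per-employee competence scores. This is a minimal, hypothetical sketch: the names, data and taxonomy are all invented here, not taken from the study.

```python
# Hypothetical sketch of a competence-profiling pipeline: all names,
# footprints and the keyword taxonomy below are invented for illustration.
from collections import Counter

# Example digital footprints per employee (calendar entries, chat messages).
FOOTPRINTS = {
    "alice": ["sprint review: ML model audit", "reading group: machine learning"],
    "bao": ["client call: cloud migration", "workshop: kubernetes basics"],
}

# Toy taxonomy: keywords that signal each competence.
TAXONOMY = {
    "machine learning": ["machine learning", "model"],
    "cloud": ["cloud", "kubernetes", "migration"],
}

def competence_profile(texts):
    """Count keyword hits per competence across one employee's footprints."""
    profile = Counter()
    for text in texts:
        lowered = text.lower()
        for competence, keywords in TAXONOMY.items():
            profile[competence] += sum(kw in lowered for kw in keywords)
    return profile

# The organizational competence map is simply the profiles side by side.
competence_map = {name: competence_profile(texts) for name, texts in FOOTPRINTS.items()}
for name, profile in competence_map.items():
    print(name, dict(profile))
```

A sketch like this also makes the manipulation discussed below concrete: anyone who peppers their messages with "machine learning" inflates that score, regardless of their actual expertise.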
You identified four trust configurations among employees with regard to AI: what were they?
We identified four trust configurations that influenced employee behaviour with regard to the tool. These were situated in a space defined by two dimensions: cognitive trust (in the performance of the tool) and emotional trust (enthusiasm or anxiety with regard to the tool). The four configurations were:
- Full trust (high cognitive/high emotional): users believed in the effectiveness of the tool and felt at ease when using it.
- Full distrust (low cognitive/low emotional): users had doubts about the effectiveness of the tool and experienced negative feelings with regard to it.
- Uncomfortable trust (high cognitive/low emotional): users recognized the effectiveness of the tool but felt uneasy or anxious when using it.
- Blind trust (low cognitive/high emotional): users had a positive attitude to the tool but were not convinced of its effectiveness.
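As a compact illustration of the framework's structure only: the study is qualitative, so the numeric scores and threshold in the following sketch are invented, not part of the research.

```python
# Illustrative mapping of the 2x2 framework; the binary high/low split
# and the 0.5 threshold are assumptions made for this sketch.
def trust_configuration(cognitive: float, emotional: float, threshold: float = 0.5) -> str:
    """Map (cognitive, emotional) trust levels to one of the four configurations."""
    high_cog = cognitive >= threshold
    high_emo = emotional >= threshold
    if high_cog and high_emo:
        return "full trust"
    if not high_cog and not high_emo:
        return "full distrust"
    if high_cog:                      # high cognitive, low emotional
        return "uncomfortable trust"
    return "blind trust"              # low cognitive, high emotional

print(trust_configuration(0.8, 0.2))  # -> "uncomfortable trust"
```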
How did these different trust configurations affect the use of the AI tool?
These trust configurations resulted in a range of behaviours. Those with uncomfortable trust (high cognitive trust but low emotional trust) tended to recognize the effectiveness of the tool, but also felt fear or anxiety about it, which in some cases led them to limit the information they shared or simply refuse to interact with the tool. Some even sought to influence the manner in which they were profiled by manipulating the tool, for example by frequently communicating on subjects such as machine learning so that they would be identified as experts, even though they didn't have any specialist knowledge. Behaviours like these have the potential to degrade an AI's performance by introducing biased or incomplete data. They can also create a vicious circle, where a decrease in AI performance further erodes other users' trust in the tool and hampers its adoption even further.
What did you identify as the main challenges to the adoption of AI tools?
The main challenge is to maintain the quality of the data that is fed into an AI of this kind. When users manipulate the manner in which they are profiled or provide inadequate information, the accuracy of the competence map suffers. This in turn may undermine other users' cognitive trust in the AI, leading to a vicious circle where they too stop using it. In addition, different trust configurations can result in a variety of behaviours, such as manipulating, confining or withdrawing data, which degrade the AI's performance and complicate its adoption within an organization.
What advice do you have for managers who are seeking to encourage more effective use of AI?
Managers should reinforce users' emotional trust and tailor their approach to each group's particular trust configuration. For groups with low emotional trust, for example, it is vital to dispel fears by communicating positively about AI and clearly explaining how data will be used. It is also important to acknowledge that AI tools are not perfect to begin with, but improve over time with feedback from users. Managers can do a lot to encourage smoother adoption of AI by fostering an environment of transparency and support. I've also noticed that companies often launch pilot projects that run out of steam after three to six months. This usually happens because managers are not doing enough to train their teams to use AI to improve their productivity. Without support and gradual integration into work processes, AI is perceived as a temporary experiment rather than a genuine lever for change.
Orange Campus Tech Director Roxane Marsan's perspective:
At Orange, we have provided training to around 40,000 employees on the use of generative AI, a technology that has prompted considerable interest and also concerns not just about how it works but also about its impact on jobs. Within the group, our internal AI tool Dinootoo has now been adopted by some 60,000 users!
There’s nothing magical about artificial intelligence, which is why we need to demystify it — a process that involves talking to staff and educating them about concepts like prompts and hallucinations, and the limits of AI models. Not only do we want Orange employees to get to grips with these tools, but we also want them to be fully aware of problems like biases and limitations associated with AI, so that they develop a critical mindset with regard to it. Some of our staff are still reluctant to use AI tools, notably because they are fearful about the implications they have for their jobs.
To provide support to our employees, we have set up workshops that cater to a range of levels:
- Level 1 workshops: These sessions focus on getting to grips with AI and simple applications for it: understanding how AI systems work and their legal frameworks, and raising awareness of the environmental impact of AI. The aim is to investigate the value and benefits of using AI. We also support users who are starting out with AI, so that they really understand what they are producing.
- Level 2 workshops: These focus on uses for artificial intelligence in particular fields (communications, HR, finance, etc.) to demonstrate the day-to-day benefits of using AI in specific jobs.
- Technical training: These sessions are exclusively for the teams who develop our AI systems in collaboration with our numerous technology partners.
Sources:
It's Amazing – But Terrifying!: Unveiling the Combined Effect of Emotional and Cognitive Trust on Organizational Members' Behaviours, AI Performance, and Adoption
