Healthcare: will algorithms soon be taking decisions instead of doctors?

● Researchers in Austria have developed an algorithm capable of formulating treatment decisions that are more effective than those taken by doctors.
● The research has raised questions about healthcare decisions, algorithmic bias and responsibility for artificial intelligence.
● Jean-Emmanuel Bibault, a professor of radiation oncology and artificial intelligence researcher, discusses what is at stake.

Artificial intelligence (AI) now has the capacity to formulate treatment suggestions that surpass the quality of human decisions. This is the remarkable finding of a research paper published by a group of a dozen doctors and AI specialists based in Austria. The advance has been made possible by the development of a reinforcement learning agent trained on time-series data detailing the clinical condition of patients receiving intensive care for septicaemia. The approach has the potential to address a range of issues regarding the prescription of corticosteroids (steroid hormones), given that the dosage and the timing of administration of this kind of medication can have a critical impact on patient survival. Treatment decisions taken by the research team’s virtual agent take into account more parameters than decisions taken by humans, which makes them more pertinent to individual patients’ needs. When tested, the virtual agent’s recommendations to suspend or modify prescriptions, which were often stricter than those advocated by human clinicians, led to lower mortality: a promising result that will likely pave the way for AI systems that enhance patient outcomes. In one study conducted by the researchers, AI increased the patient recovery rate by three percentage points.
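To make the idea concrete, here is a minimal sketch of the kind of reinforcement learning loop involved, under heavily simplified assumptions of our own: patient condition is discretised into a few severity levels, actions are coarse dosing decisions, and all trajectories are synthetic. It is illustrative only and does not reproduce the Austrian team’s agent or data.

```python
import numpy as np

# Assumed, simplified setting (not the published system):
# five discretised severity levels and three coarse dosing actions.
N_STATES = 5    # 0 = stable ... 4 = critical
N_ACTIONS = 3   # 0 = withhold dose, 1 = maintain dose, 2 = increase dose
GAMMA = 0.99    # discount factor
ALPHA = 0.1     # learning rate

rng = np.random.default_rng(0)

def synthetic_trajectory(length=20):
    """Generate a fake (state, action, reward, next_state) trajectory.

    Toy dynamics only: the 'right' action nudges severity down, and the
    reward is a survival proxy that favours lower severity.
    """
    s = int(rng.integers(N_STATES))
    steps = []
    for _ in range(length):
        a = int(rng.integers(N_ACTIONS))
        drift = -1 if a == (s % N_ACTIONS) else int(rng.integers(-1, 2))
        s_next = int(np.clip(s + drift, 0, N_STATES - 1))
        r = -float(s_next)
        steps.append((s, a, r, s_next))
        s = s_next
    return steps

# Q-learning over a batch of logged (here, synthetic) trajectories,
# mimicking learning from retrospective ICU time series.
Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(2000):
    for s, a, r, s_next in synthetic_trajectory():
        td_target = r + GAMMA * Q[s_next].max()
        Q[s, a] += ALPHA * (td_target - Q[s, a])

# The learned policy recommends one dosing action per severity level.
policy = Q.argmax(axis=1)
print("Recommended action per severity level:", policy)
```

The real system learns from far richer clinical state than five severity buckets, but the structure is the same: states observed over time, dosing actions, and a reward tied to patient outcomes.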

The need for safe methods to take algorithms beyond the realm of computers

The authors of the paper are keen to point out that these technologies will be used solely as an aid to human medical staff; they nonetheless acknowledge that their innovation raises certain legal questions. Who will in fact be responsible if a medical AI makes a mistake? Should humans follow AI recommendations or ignore them? Notwithstanding the rapidly increasing number of scientific articles demonstrating that algorithms can effectively outperform human doctors, there are few satisfactory answers to these questions. “The problem is that the majority of algorithms have demonstrated their effectiveness on a computer, while only a few have been tried and tested in the field,” points out Jean-Emmanuel Bibault, an oncology professor and AI researcher. “The challenge today is to devise methods to take algorithms beyond the realm of computers and validate them so they can have an impact in patients’ daily lives.”

Algorithms need to be subject to serious and rigorous validation, or we will run the risk of developing biases.

Writing in MIT News, Professor Assaad Sayah, CEO of Cambridge Health Alliance, warns that it is hard to predict the potential consequences of AI in healthcare, notably with regard to large numbers of inappropriate results for sub-populations. “The risk is that there will be a desire to cut corners that will expose patients to unnecessary risks”, explains Jean-Emmanuel Bibault. For the French specialist, the profession is caught between a rock and a hard place: “Algorithms need to be subject to serious and rigorous validation, which takes a lot of time. But if that work is not done, we run the risk of developing biases.” Bibault belongs to a group of experts that recently published an article in Nature Medicine proposing a reporting guideline, christened DECIDE-AI, for the early-stage clinical evaluation of AI-based decision support systems. The goal is to establish a strict methodology for the study and development of these systems that will pave the way for large-scale trials. The guideline notably features a checklist requiring researchers to identify the data used as inputs for the AI, the manner in which it was acquired, the process needed to enter the input data, the pre-processing applied, and how missing or low-quality data were handled; the sketch below illustrates this kind of reporting.
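As an illustration of the input-data items the article attributes to the checklist, the following record gathers them into a simple structure. The field names and example values are our own shorthand, not the official DECIDE-AI wording.

```python
from dataclasses import dataclass

# Illustrative only: field names are assumptions, not the official
# DECIDE-AI checklist items.
@dataclass
class InputDataReport:
    sources: list[str]          # what data the AI used as inputs
    acquisition: str            # how those data were acquired
    entry_process: str          # how input data are entered into the system
    preprocessing: list[str]    # pre-processing applied before inference
    missing_data_handling: str  # how missing/low-quality data were handled

# Hypothetical example values for an ICU decision-support system.
report = InputDataReport(
    sources=["vital signs time series", "lab results"],
    acquisition="extracted from the ICU electronic health record",
    entry_process="automatic hourly export, no manual transcription",
    preprocessing=["resampling to 1-hour bins", "z-score normalisation"],
    missing_data_handling="forward-fill up to 4 hours, flag longer gaps",
)
print(report)
```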

Reducing the burden on health services

“Today artificial intelligence is highly effective when it is used to interpret CT and MRI scans or analyse biopsies”, explains the doctor. Current technologies are also good at segmenting images to detect organs, which saves a lot of time. “Output from these tools still has to be checked by hand,” he adds. Jean-Emmanuel Bibault, who recently authored 2041, L’odyssée de la médecine: Comment l’intelligence artificielle bouleverse la médecine ? (Éditions des Équateurs, 2023) [“2041, A medical odyssey: how artificial intelligence is revolutionising medicine”], is convinced that within five to ten years we will see algorithms operating in healthcare without human supervision. “Health authorities should prepare for this, notably because it will help to speed up patients’ access to care, and in particular their early treatment, which will result in savings for healthcare systems. However, the risk is that we will also conclude that we don’t need so many medical staff, which would be a big mistake, because the point of AI is to improve the management of patients and the quality of treatments.” As for doctors’ willingness to adopt these technologies, Jean-Emmanuel Bibault has observed some real enthusiasm: “My point of view may be biased by the practitioners I spend time with, but let’s not forget that doctors have a duty to make use of every available means to optimise the treatment of their patients”, he concludes. And within a few years, there is little doubt that these means will include artificial intelligence.
