“The holy grail being to help researchers and practitioners to uncover latent knowledge.”
Called upon to respond to the COVID-19 health crisis, artificial intelligence (AI) has proven effective in several aspects of the fight against the pandemic, whether in understanding this new coronavirus, diagnosing it, predicting its evolution, slowing its spread, or speeding up other areas of medical research.
Over a year after the crisis began, AI-based tools continue to proliferate and to deliver results. The need to accelerate their deployment should not, however, obscure the ethical questions that they raise.
Machine learning to improve care
When a patient is admitted to hospital, it is important to know whether they have COVID-19 and whether they are at risk of developing a severe form of the illness, in order to provide them with appropriate treatment and to optimise the use of limited medical resources.
Right from the start of the pandemic, several tools were proposed to facilitate diagnosis. A team of researchers from the University of Oxford, for example, developed two machine learning models trained on routine data from the medical records of over 100,000 patients, enabling near-instant screening of patients arriving in emergency departments.
In the United States, researchers at MIT are working on a model that can detect asymptomatic cases by analysing a cough recorded on a mobile phone.
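To give a concrete, if simplified, idea of how such screening models work, here is a minimal sketch of a classifier trained on routine clinical data. It is not the Oxford team's code: the feature names, the label, and the file emergency_admissions.csv are hypothetical stand-ins.

```python
# Minimal sketch of a COVID-19 screening classifier trained on routine
# clinical data (blood tests, vital signs). Feature names and the dataset
# are hypothetical; the real Oxford models were trained on records from
# over 100,000 patients.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical routine measurements available on arrival in the emergency department.
FEATURES = ["age", "temperature", "oxygen_saturation", "lymphocyte_count",
            "c_reactive_protein", "white_cell_count"]

df = pd.read_csv("emergency_admissions.csv")   # hypothetical dataset
X, y = df[FEATURES], df["pcr_positive"]        # label: the later PCR result

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# The model returns a probability within seconds of the routine results
# being available, long before a PCR test comes back.
probs = model.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, probs))
```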
Other tools concentrate on prognosis, with the aim of improving patient care, in particular for serious and critical cases. In France, AI-Severity produces a severity score that classifies COVID-19 patients according to the probable evolution of the disease.
The fruit of a partnership between a research consortium led by Nathalie Lassau of the Institut Gustave Roussy and the startup Owkin, the index is built by cross-analysing five clinical and biological variables and comorbidities with the output of a deep learning model trained to predict the severity of the disease from chest CT scan images.
Developed in record time, AI-Severity has been deployed in the radiology department of the Institut Gustave Roussy. It has been published in the scientific journal “Nature”, and its code is available to researchers and hospitals the world over.
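By way of illustration only (the published AI-Severity code should be consulted for the real thing), the sketch below shows the general idea of combining a CT-derived deep learning score with a handful of clinical and biological variables in a simple model. The column names and the file covid_cohort.csv are hypothetical.

```python
# Illustrative sketch: combining a CT-derived deep learning score with
# clinical and biological variables to predict severity. This is NOT the
# published AI-Severity code; variable names and data are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("covid_cohort.csv")  # hypothetical patient cohort

# "ct_score" stands in for the output of a neural network applied to the
# chest CT scan; the other columns are routine clinical/biological variables
# (sex is assumed to be encoded as 0/1 in this hypothetical file).
features = ["ct_score", "age", "sex", "oxygen_saturation", "platelet_count", "urea"]
X, y = df[features], df["severe_outcome"]  # e.g. later transfer to intensive care

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# The fitted model outputs a severity score between 0 and 1 for each patient,
# which can be used to prioritise care on admission.
df["severity_score"] = model.predict_proba(X)[:, 1]
print(df[["severity_score"]].describe())
```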
Data to enlighten public action
AI and data analysis have proved highly useful for characterising the SARS-CoV-2 virus, modelling its transmission, and supporting public health efforts.
The algorithm developed by BlueDot in Canada is a good example of this contribution. Capable of detecting the risk of infectious disease outbreaks by analysing numerous sources (press reports, demographic data, climate data, etc.), it enabled the Canadian startup to spot unusual cases of pneumonia in the Chinese city of Wuhan and to alert its clients (governments, health services, and businesses) several days before the WHO issued its first warnings about the emergence of a new coronavirus.
Subsequently, by analysing airline ticket sales, BlueDot also managed to identify most of the first cities to be affected.
Mobile phone data can also be used to guide research and public action, as it allows precise, up-to-date statistics on population movements to be compiled.
These statistics give researchers a basis for developing models to predict the evolution of an epidemic or to understand the impact of health measures. They are also of interest to the authorities: knowing what percentage of the population left the big cities before lockdown, and how it was then distributed across the country, helps them plan hospital resources better.
It is with this in mind that Orange collaborated with a research team from Inserm (the French national institute of health and medical research), providing it with statistics derived from the technical data of its mobile network in France, while fully respecting the privacy of its customers and users. The aim was not to examine travel at the individual level, but to analyse anonymised aggregate data showing mobility across France before and after lockdown and, later, to enable better prediction of the spread of the virus.
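To illustrate the kind of aggregate statistic involved, here is a minimal sketch assuming a hypothetical table of anonymised, pre-aggregated daily flows between French départements. It is not Orange's or Inserm's actual processing pipeline, and no individual trajectories appear anywhere in it.

```python
# Sketch of aggregate mobility statistics, assuming a hypothetical file of
# anonymised, pre-aggregated daily flows between French départements
# (columns: date, origin, destination, count). No individual-level data.
import pandas as pd

flows = pd.read_csv("aggregated_flows.csv", dtype={"origin": str, "destination": str})
flows["date"] = pd.to_datetime(flows["date"])
LOCKDOWN = pd.Timestamp("2020-03-17")

# How many people left Paris (département 75) around the start of lockdown?
paris_outflow = flows[(flows["origin"] == "75") & (flows["destination"] != "75")]
before = paris_outflow[paris_outflow["date"] < LOCKDOWN]["count"].sum()
after = paris_outflow[paris_outflow["date"] >= LOCKDOWN]["count"].sum()
print(f"Departures from Paris, before vs after lockdown: {before} vs {after}")

# Where did they go? A destination breakdown helps plan hospital capacity.
destinations = (paris_outflow[paris_outflow["date"] >= LOCKDOWN]
                .groupby("destination")["count"].sum()
                .sort_values(ascending=False))
print(destinations.head(10))
```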
NLP to “digest” scientific information
In the face of COVID-19, researchers from across the world rapidly and massively joined forces, which led to an avalanche of scientific publications. Open database LitCovid currently lists over 100,000 articles. In addition to this is the vast amount of data from clinical trials: to date, over 4,900 studies are registered in 135 countries on the clinicaltrials.gov website.
However, although the literature on COVID-19 is plentiful and freely accessible, it has become almost indigestible: there is simply too much of it to wade through. This has led to several initiatives to categorise and evaluate articles, or to create interactive visualisations.
The challenge is to direct researchers and practitioners to the research results most relevant to them and to facilitate their interpretation, the holy grail being to help them uncover “latent knowledge”, for example in the area of drug repositioning (which consists of testing a drug already on the market and approved for one disease as a treatment for another).
Natural language processing (NLP) experts therefore stepped in to help make sense of these unstructured data sources. They built tools capable of extracting useful information from a wide range of scientific publications referenced on various sites, of classifying it (field(s) concerned, peer-reviewed or not, level of evidence, etc.), and even of generating summaries (main results, methods used, etc.).
The COVIDScholar portal, for example, uses NLP to let users search thousands of articles, patents, and clinical trials. For his part, Benoit Favre, a researcher at the Laboratoire d’Informatique et Systèmes (LIS), has launched a project to facilitate COVID-19 literature monitoring using NLP. In particular, he is working on an automatic classification tool to support the Bibliovid platform, whose contributors currently classify and summarise medical articles manually.
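To make the idea more concrete, here is a minimal sketch of such an automatic classifier, not the actual COVIDScholar or Bibliovid code: article abstracts are turned into TF-IDF vectors and a linear model assigns each one to a medical field. The file litcovid_abstracts.csv and the field labels are hypothetical.

```python
# Minimal sketch of an automatic classifier for COVID-19 abstracts, in the
# spirit of the tools described above (not the actual Bibliovid/COVIDScholar
# code). The abstracts file and the field labels are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("litcovid_abstracts.csv")   # columns: abstract, field
X_train, X_test, y_train, y_test = train_test_split(
    df["abstract"], df["field"], test_size=0.2, random_state=0)

# TF-IDF turns each abstract into a sparse vector of word weights;
# a linear model then assigns it to one of the predefined fields
# (e.g. epidemiology, therapeutics, diagnosis).
classifier = make_pipeline(
    TfidfVectorizer(stop_words="english", ngram_range=(1, 2), min_df=3),
    LogisticRegression(max_iter=1000),
)
classifier.fit(X_train, y_train)
print("Accuracy:", classifier.score(X_test, y_test))

# New articles can then be routed automatically to the relevant reviewers.
print(classifier.predict(["Remdesivir shortened recovery time in hospitalised adults."]))
```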
The challenges of AI in the time of coronavirus
The urgency of the current situation should not, however, mean that ethical questions and the robustness requirements of the AI systems being deployed take a back seat. Those who develop these systems and those who authorise their use must ensure that they comply with certain principles: respect for human rights and privacy, security, transparency, and fairness.
However, as highlighted in this editorial in “The Lancet Digital Health”, a certain leniency towards “COVID-19 algorithms” has raised concerns among researchers, who report that many AI models are poorly reported and trained on small or low-quality datasets with a high risk of bias. The ultimate danger is that premature use of AI technologies could increase diagnostic errors and compromise the quality of care.
More broadly, “The Lancet” underlines the importance of developing these tools in close cooperation with medical staff, in order to better define their scope, to ensure the right questions are being answered, and to provide genuine added value.
Nor is AI a miracle solution on its own. As the Council of Europe reminds us, “the structural issues encountered by health infrastructures […] are not due to technological solutions but to the organisation of health services, which should be able to prevent such situations occurring”.