Using AI to monitor movement promises to improve home care

As populations age, providing home care for elderly and/or dependent persons is becoming a major public health issue in France and around the world. A research project is being conducted with the aim of designing artificial intelligence models that can remotely assess people’s autonomy.

“Building an AI system that provides indicators for decision-making, without intrusion or automated action — that’s always where humans come in.”

By 2050, there will be two billion people in the world aged 60 and over, which is twice as many as in 2020 (source: WHO). In addition, there are currently 40 million people affected by severe disability. For all these people, receiving sufficient health care at home while maintaining a level of autonomy is a critical yet complex issue.

AI and actigraphy

Orange research teams in Grenoble have been investigating the many fields of e-health for a number of years. They focus on actigraphy, which, in short, is the measurement of movement. It is often associated with sleep studies, but actigraphy suits other use cases too. The researchers have initiated a project in which vulnerable people (seniors and those with a disability or in isolation) are remotely monitored. The goal is to assess a person’s level of autonomy using AI techniques and to obtain an indicator that informs and directs e-health services towards an appropriate home care strategy. Actigraphy belongs to the field of telemedicine, and more precisely to telemonitoring. Here it serves as an automatic monitoring system that keeps track of information about people at home, with their consent and in compliance with personal data protection requirements.

Raw data to facilitate analysis

Acquiring this raw data requires information to be collected from sensors on a worn device or object, such as a connected watch or wristband, or even a smartphone. “Data that is captured remotely comes mainly from accelerometers, gyroscopes and magnetometers”, explain Grégoire Lefebvre, a researcher specialising in AI models, and Paul Compagnon, a research engineer and AI PhD student. “They are used to describe and report on the movement (acceleration, rotation and/or orientation) of the device worn. We then analyse this raw data using neural networks and other AI techniques to identify everyday activities. We can then pick out situations that we do not necessarily understand, but which reveal patterns of behaviour. And if we see that these patterns decline over time, we can detect a change in habits and a potential deterioration in autonomy.” The project aims both to understand these routine activities and to design an indicator that supports decision-making by generating alerts for medical professionals.
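
To make this concrete, here is a minimal sketch in PyTorch of how windows of raw sensor data could be classified into everyday activities. The nine input channels (three axes each for accelerometer, gyroscope and magnetometer), the window length, the activity labels and the small convolutional network are illustrative assumptions, not the project’s actual model.

```python
# Illustrative sketch only: a small activity classifier for windows of
# wearable sensor data (accelerometer + gyroscope + magnetometer = 9 channels).
# Architecture, window length and labels are assumptions for illustration.
import torch
import torch.nn as nn

ACTIVITIES = ["resting", "walking", "eating"]  # hypothetical ADL classes

class ActivityClassifier(nn.Module):
    def __init__(self, n_channels=9, n_classes=len(ACTIVITIES)):
        super().__init__()
        # 1D convolutions extract short-term motion patterns from raw signals
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # average over time -> fixed-size vector
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (batch, channels, time), e.g. a few seconds of signal at 50 Hz
        return self.head(self.features(x).squeeze(-1))

model = ActivityClassifier()
window = torch.randn(1, 9, 250)            # one 5-second window at 50 Hz
logits = model(window)
print(ACTIVITIES[logits.argmax(dim=1).item()])
```

In practice such a model would be trained on the small, labelled, personalised data sets described below, one window at a time.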

Making do with little data

At first glance, this research objective seems feasible, but the task poses major technical challenges, which the team addresses by developing advanced neural network architectures.

Little data is available for these studies, and yet a robust, coherent solution is needed that meets the standards of the “5 Ps” of medicine: personalised, preventive, predictive, participatory and pertinent. Different learning paradigms are therefore studied to develop the AI models. Supervised learning consists of labelling the collected data with the activity it corresponds to and then adjusting the parameters of the models. This produces small, personalised data sets that establish what movement is produced by eating a meal, walking, resting and so on (the Activities of Daily Living, or ADL, which provide insight into a person’s autonomy), and models can then be trained on them. Semi-supervised learning, by contrast, focuses on identifying repeated routines without labelling the data. “In supervised cases, we aim to infer a class of activity, whereas in semi-supervised cases, we’re looking at the similarity of activities. We know the person is doing something, but we don’t know exactly what. We use metric learning to identify two similar sequences of data across the same timescale in order to infer habits, that is, recurring situations.”
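
The semi-supervised idea can be sketched in the same spirit: an encoder maps each sensor sequence to an embedding, and a contrastive loss learns a distance under which similar sequences end up close together. The GRU encoder and contrastive loss below are common generic choices, given here as assumptions rather than the project’s published method.

```python
# Illustrative sketch of metric learning on unlabeled sensor sequences:
# learn a distance so that similar sequences map to nearby embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequenceEncoder(nn.Module):
    """Maps a sensor sequence to a fixed-size embedding (assumed design)."""
    def __init__(self, n_channels=9, dim=64):
        super().__init__()
        self.gru = nn.GRU(n_channels, dim, batch_first=True)

    def forward(self, x):
        # x: (batch, time, channels); keep the final hidden state as embedding
        _, h = self.gru(x)
        return h.squeeze(0)

def contrastive_loss(z1, z2, same, margin=1.0):
    """Pull embeddings of similar pairs together, push dissimilar apart."""
    d = F.pairwise_distance(z1, z2)
    return torch.mean(same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2))

encoder = SequenceEncoder()
seq_a = torch.randn(8, 250, 9)            # a batch of sequence pairs
seq_b = torch.randn(8, 250, 9)
same = torch.randint(0, 2, (8,)).float()  # 1 = pair judged similar
loss = contrastive_loss(encoder(seq_a), encoder(seq_b), same)
loss.backward()
```

Once trained, the learned distance can flag recurring situations: two stretches of the day whose embeddings are close are likely instances of the same habit, even if the activity itself is never named.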

Advanced AI models

Detecting routines compensates for the small volume of information available and preserves privacy while still ensuring a high degree of personalisation. Whereas deep learning generally relies on very large databases, the neural networks studied in this project have been adapted to work under this constraint.

This approach required the development of highly advanced AI models. One of these models combines several strategies, starting with an autoencoder architecture capable of encoding long sequences of data into compact representations of the information. These representations are then fed to a Siamese neural network, which measures the similarity between them. Finally, attention models focus the neural architecture on the most characteristic, relevant and structural parts of the input sequences. The result is an efficient similarity indicator for routines.
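
Under stated assumptions, that pipeline might look like the following sketch: a recurrent autoencoder compresses a long sequence, an attention layer weights the most characteristic time steps, and the resulting compact representations feed a Siamese comparison. Layer sizes and the cosine similarity score are illustrative choices, not the project’s exact design.

```python
# Sketch of the described combination: autoencoder + attention + Siamese
# comparison. All dimensions and the similarity measure are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveAutoencoder(nn.Module):
    def __init__(self, n_channels=9, dim=64):
        super().__init__()
        self.encoder = nn.GRU(n_channels, dim, batch_first=True)
        self.attn = nn.Linear(dim, 1)           # scores each time step
        self.decoder = nn.GRU(dim, n_channels, batch_first=True)

    def encode(self, x):
        h, _ = self.encoder(x)                  # (batch, time, dim)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        return (w * h).sum(dim=1)               # compact representation

    def forward(self, x):
        z = self.encode(x)
        # repeat the code along time so the decoder can reconstruct the input
        z_seq = z.unsqueeze(1).repeat(1, x.size(1), 1)
        recon, _ = self.decoder(z_seq)
        return recon, z

model = AttentiveAutoencoder()
day_a = torch.randn(1, 1000, 9)   # two long sensor sequences to compare
day_b = torch.randn(1, 1000, 9)
recon, z_a = model(day_a)
_, z_b = model(day_b)
# Siamese comparison: one similarity score for the pair of routines
similarity = F.cosine_similarity(z_a, z_b)
reconstruction_loss = F.mse_loss(recon, day_a)
```

The autoencoder’s reconstruction objective lets the compact codes be learned without labels, which is what makes the approach viable with little data.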

The solution always follows the same logic in terms of its premise and purpose: “We’re not trying to build a system that can make decisions. We want to provide indicators that will support decision-making. There’s no intrusion or automated action; that’s always where humans come in.”

Medium-term prospects

Three patents have been filed and three papers published as part of the project. The patents cover the detection of changes in habits, the assessment of routine activities, and the recognition of modes of transport. The research now looks to new challenges. Location tracking in the home using the same sensors is one such challenge, aimed at better understanding how people go about their daily lives. Recognising emotions in the home, to better classify activities and events, is another; this research is being conducted in conjunction with the MIAI Grenoble Alpes Institute.
