Neuroprosthetics, virtual reality, and AI tackle motor disabilities

Technological progress opens up many avenues for the rehabilitation and compensation of disabilities. Blaise Yvert, research director at Inserm (the French national institute of health and medical research) and head of the Neurotechnology and Network Dynamics team at the BrainTech Lab, reviews three technologies that aim to give people with motor disabilities more independence and improve their quality of life.

"Post-stroke rehabilitation consists in stimulating brain plasticity to enable the patient to recover part of the motor skills lost.”

The Clinatec research centre’s Brain Computer Interface (BCI) project

“For the first time, a tetraplegic patient was able to walk and control both arms using this neuroprosthetic, which records, transmits, and decodes brain signals in real time to control an exoskeleton”, the French Alternative Energies and Atomic Energy Commission (CEA) proudly announced in October 2019.

Developed at the Clinatec research centre within the CEA, BCI is a brain-machine interface project that aims to enable people with severe motor disabilities to regain mobility through mental control of an exoskeleton. A neuroprosthetic implanted over the cortex collects the brain signals emitted when a movement is intended. These signals are then decoded by a machine learning algorithm to predict the voluntary movement imagined by the patient and drive the exoskeleton.
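
To make the decoding step concrete, here is a minimal sketch of the general idea on synthetic data, with an off-the-shelf ridge regression standing in for Clinatec's actual (more elaborate, adaptive) decoder: windows of neural features are mapped to intended movement commands, with a calibration phase followed by prediction.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative sketch only: synthetic "neural features" are mapped to
# movement commands with a linear decoder. The real system uses its own
# adaptive, online-trained model; sizes and data here are invented.
rng = np.random.default_rng(42)

n_windows, n_features = 500, 64   # e.g. band-power features per electrode
n_outputs = 3                     # e.g. a 3-D effector velocity command

X = rng.standard_normal((n_windows, n_features))               # neural features
true_W = rng.standard_normal((n_features, n_outputs))
y = X @ true_W + 0.1 * rng.standard_normal((n_windows, n_outputs))  # movements

decoder = Ridge(alpha=1.0).fit(X[:400], y[:400])   # calibration phase
predicted = decoder.predict(X[400:])               # closed-loop use
print("held-out decoding R^2:", decoder.score(X[400:], y[400:]))
```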

Blaise Yvert: “There are essentially two approaches to brain-machine interfaces. The first, non-invasive, approach consists of measuring brain activity via electrodes placed on the scalp (EEG). This enables people to control a computer cursor or type letters on a screen. The downsides are that it is relatively slow, that it is difficult to control many degrees of freedom, and that it generally demands a high level of concentration from the patient.
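
As an aside, here is a minimal sketch of how this kind of non-invasive, motor-imagery-style control can work; the channels, frequency band, and control law below are illustrative assumptions, not the specific systems referred to here. The power of the mu rhythm (8-12 Hz) over left and right sensorimotor cortex is compared to produce a one-dimensional cursor command.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed EEG sampling rate in Hz

def mu_band_power(eeg_window, fs=FS, band=(8.0, 12.0)):
    """Mean mu-band power per channel for one EEG window
    of shape (n_channels, n_samples)."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)

# Toy example with random data standing in for two channels (e.g. C3, C4):
# imagined right-hand movement suppresses mu power over C3, which shifts
# the lateralisation index and pushes the cursor one way.
rng = np.random.default_rng(0)
window = rng.standard_normal((2, FS))   # 2 channels, 1 second of "EEG"
p_c3, p_c4 = mu_band_power(window)
velocity = np.tanh(p_c4 - p_c3)         # crude 1-D cursor command
print(f"cursor velocity command: {velocity:+.3f}")
```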

To obtain higher performance, we use intracortical electrodes, i.e. electrodes implanted in the cortex, to record the activity of individual neurons and predict the movement the person wishes to make. This is, for example, the principle behind the BrainGate project. The advantage of this second method is that it is faster and more precise, but it also requires opening the skull and the dura mater, with all the medical complications that this can entail.

The approach chosen by Clinatec is halfway between the two: the electrodes are implanted under the skull but on the surface of the dura mater. Freed from the bone, which is a poor conductor, we obtain better-quality signals than EEG. We do, however, remain at the macroscopic level, at a certain distance from the brain.

What is interesting about the study carried out at Clinatec is that the researchers show it is possible to control up to eight degrees of freedom via these macroscopic electrodes, with wireless transmission through the skin that minimises the risk of infection.

The control of so many degrees of freedom had, until now, only been achieved with intracortical electrodes.

However, the movements obtained are still quite approximate, and major efforts are still needed to improve the precision of the recordings provided by this type of implant.”

A virtual reality rehabilitation system for stroke patients

Strokes are the leading cause of acquired physical disability in adults. When they affect an area of the brain involved in movement, they can cause paralysis in part of the body.

Post-stroke rehabilitation consists of stimulating brain plasticity to enable the patient to recover some of the lost motor skills. Developed by the American company Penumbra and used at Cooper University Health Care in New Jersey, the REAL Immersive System is a virtual reality system for upper-extremity rehabilitation in stroke patients.

Through a series of interactive exercises, it aims to reactivate neuroplasticity, that is, to help the brain “rewire” itself and create new neural connections. The system comprises a headset equipped with a screen and a series of sensors placed on the body of the patient, who is immersed in a virtual environment. Using a tablet, a therapist can choose from a series of activities that prompt the patient to move their arms.

Blaise Yvert: “Here, the aim is to get the patient to move their arms in a fun way, which motivates them to work more than with traditional rehabilitation systems. The concept is interesting because it is based on play as a reward for the patient’s efforts to move, and it is this effort that stimulates plasticity.

An approach based on this principle is being developed at the École polytechnique fédérale de Lausanne, in Switzerland, by the team of Professor Grégoire Courtine. The researchers have managed to restore walking in paraplegic monkeys through targeted stimulation of the spinal cord, triggered by the brain activity the animal produces when it wants to move.

The fact that the intention to move and the execution of the movement are correlated, even if the movement is initially produced through an artificial system, makes it possible to elicit brain plasticity and axonal growth, and to re-establish connections between the brain, the spinal cord, and the limbs.
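
A minimal sketch of this closed-loop principle (the detection threshold, intent feature, and function names are all invented placeholders, not the Lausanne team's actual pipeline): stimulation is switched on whenever a motor-intent feature extracted from brain activity crosses a threshold, keeping intention and movement correlated.

```python
import numpy as np

rng = np.random.default_rng(7)
INTENT_THRESHOLD = 1.5  # illustrative detection threshold

def motor_intent(neural_window):
    """Toy intent feature: mean rectified amplitude of the window."""
    return np.abs(neural_window).mean()

def stimulate_spinal_cord(on):
    # Placeholder for the hardware call driving the stimulator.
    print("stimulation ON" if on else "stimulation OFF")

# Simulated stream of 1-second neural windows; windows 1 and 3 carry
# stronger activity, standing in for moments of movement intention.
for t in range(5):
    window = rng.standard_normal(1000) * (2.0 if t in (1, 3) else 1.0)
    stimulate_spinal_cord(motor_intent(window) > INTENT_THRESHOLD)
```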

A similar approach has also yielded promising results in patients who have been paralysed for several years.”

Google’s Project Euphonia

Automatic speech recognition tools create a paradox: they are supposed to make our everyday lives easier, yet they are not always suited to the users who need them most. People with a motor disability that prevents them from speaking in a “natural” manner thus have difficulty being understood by voice assistants and interacting with their smart devices; as such devices become ubiquitous, this could become another factor of exclusion.

Aware of this issue, Google launched Project Euphonia, which aims to improve the ability of voice recognition tools to recognise non-standard speech thanks to artificial neural networks.

Blaise Yvert: “Automatic speech recognition is based on the training of neural networks that require large amounts of data in order to learn. We therefore use databases containing thousands of hours of recordings from different individuals.

However, to be able to understand a person with severe dysarthria, the network must be trained on this person’s speech. The problem is that we cannot acquire that amount of data from a single speaker.

The Google engineers therefore took a neural network that had already learnt from a huge corpus of standard speech and adapted it to the particular speech of a subject by retraining it for a short time on a much smaller dataset.

This seems to work well: their results show that this approach considerably reduces the network’s word error rate for this person.
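
A hedged sketch of this adaptation strategy, assuming a PyTorch-style setup and a toy network (Google's actual Euphonia models and fine-tuning recipe differ; this only illustrates the transfer-learning idea): an encoder pretrained on standard speech is frozen, and only the output layers are retrained on a small amount of the target speaker's data.

```python
import torch
import torch.nn as nn

class TinyASR(nn.Module):
    """Toy frame-wise speech recogniser; all sizes are invented."""
    def __init__(self, n_mels=80, hidden=256, n_tokens=32):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_tokens)  # per-frame token logits

    def forward(self, x):
        out, _ = self.encoder(x)
        return self.head(out)

model = TinyASR()
# In practice the weights would come from pretraining on a huge corpus:
# model.load_state_dict(torch.load("pretrained_standard_speech.pt"))

for p in model.encoder.parameters():   # keep the general acoustic model
    p.requires_grad = False

optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One adaptation step on a (synthetic) batch from the target speaker.
mels = torch.randn(4, 100, 80)             # 4 utterances, 100 frames each
targets = torch.randint(0, 32, (4, 100))   # per-frame token labels
optimizer.zero_grad()
logits = model(mels)
loss = loss_fn(logits.reshape(-1, 32), targets.reshape(-1))
loss.backward()
optimizer.step()
print(f"adaptation loss: {loss.item():.3f}")
```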

We are trying to do something similar at the BrainTech Lab with brain activity. We have designed a voice synthesizer capable of turning articulatory movements into speech, with the help of a deep neural network trained on a large corpus.

We are now trying to adapt it to patients by converting their brain signals into articulatory movements, using a short calibration phase that requires only a rather small amount of data.”
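
To make this two-stage pipeline concrete, here is an illustrative sketch with simple linear stand-ins for both models (the lab's actual synthesizer is a deep neural network, and all sizes here are invented): a synthesizer trained once on a large corpus maps articulatory features to acoustic parameters, while a per-patient decoder from brain activity to articulatory trajectories is calibrated on a short session; the two then compose at run time.

```python
import numpy as np

rng = np.random.default_rng(1)
N_NEURAL, N_ARTIC, N_ACOUSTIC = 128, 12, 25  # illustrative feature sizes

# Stage 2, trained once on a large corpus: articulatory features ->
# acoustic parameters (stands in for the deep neural synthesizer).
W_synth = rng.standard_normal((N_ARTIC, N_ACOUSTIC))

def synthesize(articulatory):
    return articulatory @ W_synth

# Stage 1, calibrated per patient on little data: brain activity ->
# articulatory trajectories, here a simple least-squares fit.
X_brain = rng.standard_normal((200, N_NEURAL))   # short calibration session
Y_artic = rng.standard_normal((200, N_ARTIC))    # reference trajectories
W_decode, *_ = np.linalg.lstsq(X_brain, Y_artic, rcond=None)

# At run time the stages compose: brain signals -> movements -> speech.
new_brain = rng.standard_normal((1, N_NEURAL))
acoustic_frame = synthesize(new_brain @ W_decode)
print("acoustic parameter frame shape:", acoustic_frame.shape)
```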
