● Numerous studies are underway to develop chip architectures based on neural networks with the potential to meet the latency and energy consumption requirements of AI.
● The goal is to create autonomous embedded systems that no longer need to communicate with servers, and that consequently consume far less energy.
● This new approach to computing, which makes use of processors with onboard memory, will drive innovation in the automotive, robotics and healthcare sectors.
Will autonomous on-board sensors soon be able to spot movements in live footage, identify them and thus recognize what is happening in a video, with no need for the vast amounts of energy required for data transfers? Such technology, worthy of the plot of Minority Report, could be made possible by neuromorphic computing, which draws its inspiration from the networks of neurons in the human brain. “As it stands, computing tasks of this kind can only be done by sending data to servers, which process it before sending it back to autonomous systems, which requires enormous amounts of energy”, points out Corentin Delacour, a doctoral student and researcher at the University of Montpellier working on the European NeurONN project, which is developing neuromorphic chip architectures for oscillatory neural networks.

A response to the physical challenges posed by the limits of Moore’s law, neuromorphic computing could pave the way for more powerful and hugely energy-efficient chip architectures. Neuromorphism takes its inspiration from the functioning of biological neural networks, favouring parallel rather than sequential architectures: several operations are carried out at the same time rather than in succession. As Fabio Pavanello, a photonics researcher at the IMEP-LAHC laboratory, explains, it may hold the key to overcoming the difficulties associated with “traditional architectures, which have limitations in terms of energy efficiency and on the level of communication between memory and processors.”
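To make that contrast concrete, the toy Python sketch below (an illustration of ours, not code from the researchers) computes the same layer of four artificial neurons two ways: one neuron after another, as a sequential processor would, and all at once, which is the mode of operation that parallel neuromorphic hardware implements physically rather than in software. All weights and values are illustrative.

```python
import numpy as np

# Illustrative weights for a layer of 4 neurons fed by 3 inputs.
weights = np.array([[ 0.2, -0.5,  0.1],
                    [ 0.7,  0.3, -0.2],
                    [-0.4,  0.6,  0.9],
                    [ 0.1,  0.1, -0.8]])
x = np.array([1.0, 0.5, -1.0])

# Sequential style: one neuron at a time, operation after operation.
out_seq = np.empty(4)
for i in range(4):
    out_seq[i] = np.tanh(weights[i] @ x)

# Parallel style: every neuron evaluated in the same step.
out_par = np.tanh(weights @ x)

assert np.allclose(out_seq, out_par)  # same result, different execution model
```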
“In photonic technology-based neuromorphic chips, latency is reduced to the speed of light,” explains Pavanello. The idea is to directly integrate memory into the processors of these chips, which would eliminate the need for constant communication between distinct blocks and result in a ten- to 100-fold reduction in power consumption. “Combining these two essential components would result in faster data processing and a significant decrease in latency,” adds the researcher.
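The order of magnitude of that saving can be sanity-checked with back-of-envelope arithmetic. The per-operation energy figures below are assumptions chosen to be roughly in line with commonly cited CMOS estimates, not measurements of any particular chip:

```python
# Energy per 32-bit operation, in picojoules (illustrative values only).
E_MAC = 3.0          # one multiply-accumulate inside the processor
E_DRAM_READ = 640.0  # fetching one operand from a separate, off-chip memory
E_LOCAL_READ = 10.0  # reading the same operand from memory on the chip itself

von_neumann = E_MAC + E_DRAM_READ  # compute block talking to a distinct memory block
in_memory = E_MAC + E_LOCAL_READ   # memory integrated next to the compute unit

print(f"ratio: {von_neumann / in_memory:.0f}x")  # ~49x, within the 10-100x range quoted above
```

Most of the energy in the first case goes into moving data rather than computing with it, which is exactly the traffic that integrating memory into the processor eliminates.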
Applications across a wide range of sectors
The challenge is therefore to develop architectures in the form of neural networks. “When the scope of a mathematical problem increases, the number of feasible solutions increases exponentially,” points out Corentin Delacour. “The oscillatory neural networks (ONNs) we work on allow us to solve optimization problems that involve conducting large numbers of calculations in real time.” Other research teams are studying alternative architectures that address the same needs, such as spiking neural networks (SNNs). Madeleine Abernot, another doctoral student and researcher at the University of Montpellier, explains: “The goal is to provide embedded computing solutions for networked devices in automobiles, robotics, health care, agriculture and other sectors. We have, for example, developed a proof of concept for an FPGA chip with a start-up, A.I.Mergence, for on-board computer vision detection.”

Jean-Baptiste Floderer, an expert in neuromorphic engineering and founder of Neurosonics, a company specialising in electronics for medical devices, is focused on yet another field of application for the new chips: “Today, start-ups like Prophesee are working on neuromorphic cameras inspired by the functioning of the human retina. Detecting pixel changes in microseconds, they are able to carry out complex image processing in real time, with minimal computation and minimal power. Given that they mimic human neurons, neuromorphic chips could also be used to connect living neuronal networks with artificial neuronal networks for therapeutic purposes. The hope is that they could be used to stimulate areas of the brain or spinal cord with such precision that it would become possible to treat neurological diseases, and even paralysis and blindness.”
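To give a flavour of how an ONN tackles an optimization problem, here is a minimal numerical sketch of the general principle, a simplification of ours rather than the NeurONN design. The phases of coupled oscillators relax toward a low-energy configuration, and reading out which of two phases each oscillator settles into yields a partition for a small max-cut problem; the sin(2θ) term loosely mimics the injection locking used in oscillator-based Ising machines to force phases toward two values.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small max-cut instance: a 4-node ring plus one diagonal.
# The best cut separates {0, 2} from {1, 3}, cutting 4 of the 5 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

theta = rng.uniform(0.0, 2.0 * np.pi, n)  # random initial phases
dt = 0.05
for _ in range(2000):
    diff = theta[:, None] - theta[None, :]
    # Repulsive coupling pushes connected oscillators toward opposite
    # phases; the sin(2*theta) term pins each phase near 0 or pi.
    theta += dt * (np.sum(A * np.sin(diff), axis=1) - np.sin(2.0 * theta))

side = np.cos(theta) > 0.0  # read out the binary state of each oscillator
cut = sum(side[i] != side[j] for i, j in edges)
print(side.astype(int), "edges cut:", cut)  # typically finds the optimum, 4
```

In hardware, this relaxation happens in the physics of the coupled oscillators themselves, in parallel, which is the source of the real-time advantage Delacour describes.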
Scaling up
Intel has developed a neuromorphic chip christened Loihi 2, based on asynchronous circuits with the capacity to learn in real time using synaptic plasticity mechanisms. In other words, this is a chip directly inspired by the real-world functioning of synapses, which in biology pass nerve signals between neurons and enable them to communicate. This latest generation of neuromorphic chips also offers advanced features for neuronal connectivity, enabling more complex interactions between neurons and more accurate modelling of the brain. In industry, companies like General Electric and Siemens are exploring the possibilities offered by neuromorphic electronics for the optimization of manufacturing processes, using autonomous systems capable of real-time learning to adapt to variations in production. In the field of security, companies like AnyVision and DeepCam are using neuromorphic computing for tasks like facial recognition, behavioural analysis and anomaly detection.
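The synaptic plasticity mentioned above is often modelled with spike-timing-dependent plasticity (STDP), in which a connection strengthens when the pre-synaptic neuron fires just before the post-synaptic one, and weakens in the opposite case. The sketch below is a generic, textbook version of that rule, not Intel’s Loihi 2 programming interface, and its constants are illustrative.

```python
import math

# Pair-based STDP: the weight change depends on the time gap between
# a pre-synaptic and a post-synaptic spike (times in milliseconds).
A_PLUS, A_MINUS = 0.01, 0.012  # learning rates (assumed values)
TAU = 20.0                     # plasticity time constant in ms (assumed)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight update for one spike pair."""
    dt = t_post - t_pre
    if dt >= 0:  # pre fires before post: strengthen (causal pairing)
        return A_PLUS * math.exp(-dt / TAU)
    else:        # post fires before pre: weaken (anti-causal pairing)
        return -A_MINUS * math.exp(dt / TAU)

print(stdp_dw(10.0, 15.0))  # pre then post: positive update
print(stdp_dw(15.0, 10.0))  # post then pre: negative update
```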
Moore’s law, first formulated in 1965 by the engineer and Intel co-founder Gordon E. Moore, initially predicted that the number of transistors on an integrated circuit of equivalent cost would double every year. In 1975, Moore revised this forecast, predicting that the number of transistors on a given surface area would double every two years, and that transistors would continue to shrink until, around 2015, they would be no bigger than a single atom.
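The doubling rule compounds quickly, as a quick check shows, starting from the roughly 2,300 transistors of Intel’s first microprocessor, the 4004, released in 1971:

```python
# Moore's revised 1975 rule: transistor counts double every two years.
count_1971 = 2300  # approximate transistor count of the Intel 4004
for year in (1981, 1991, 2001, 2011):
    print(year, round(count_1971 * 2 ** ((year - 1971) / 2)))
# 1981: ~74k, 1991: ~2.4M, 2001: ~75M, 2011: ~2.4B
```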