PINNs

PINNs, short for “Physics-Informed Neural Networks”, are a new class of neural networks that combine machine learning and physics.

The inventors of PINNs define them as “neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations”.

Indeed, machine learning algorithms do not necessarily take into account the physical principles governing the systems to which they are applied. Yet the behaviour of these systems is described by different fields (such as mechanics or thermodynamics), each with its own laws and models, which constitute a precious source of information.

Consequently, even the most advanced machine learning techniques sometimes struggle to solve complex scientific and engineering problems efficiently.

The idea of PINNs is therefore to “encode” the laws of physics and scientific knowledge into learning algorithms so as to make them more robust and to improve their performance.

According to their inventors, adding this crucial information restricts the field of possible solutions, which enables the algorithms to converge to the right solution faster and to generalise better, meaning to function correctly in the real world, with data that they have never seen before.
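To make this “encoding” concrete, here is a minimal sketch of the idea, assuming a PyTorch setup: the training loss combines a boundary-condition term with the residual of a governing equation, here a deliberately simple ODE (du/dx + u = 0 with u(0) = 1, whose exact solution is e^(-x)). The network architecture, collocation points and hyperparameters below are illustrative choices, not the original authors’ implementation.

```python
# Minimal PINN sketch (illustrative, not a reference implementation).
# The network u(x) is trained so that the residual of du/dx + u = 0
# vanishes on sampled points, alongside the boundary condition u(0) = 1.
import torch

torch.manual_seed(0)

# Small fully connected network approximating the solution u(x)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x):
    """Residual of du/dx + u = 0, computed with automatic differentiation."""
    x = x.requires_grad_(True)
    u = net(x)
    du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    return du_dx + u

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
x0 = torch.zeros(1, 1)              # boundary point x = 0

for step in range(5000):
    optimizer.zero_grad()
    # Collocation points where the physics law is enforced
    x = torch.rand(128, 1) * 2.0    # domain [0, 2]
    loss_physics = pde_residual(x).pow(2).mean()
    loss_boundary = (net(x0) - 1.0).pow(2).mean()   # u(0) = 1
    loss = loss_physics + loss_boundary
    loss.backward()
    optimizer.step()

# The trained network should approximate the exact solution exp(-x),
# so u(1) should be close to exp(-1) ≈ 0.368
print(net(torch.tensor([[1.0]])).item())
```

The physics term plays exactly the role described above: it restricts the space of functions the network can learn to those that (approximately) satisfy the governing equation, even in regions where no training data is available.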

PINNs are of interest to the world of research in a wide range of areas including climatology, seismology, and materials science.

This approach is also of interest to industry. For example, to create the digital twin of an aeroplane, simulation software using PINNs will take all physical phenomena into account as well as their interactions with one another. It will therefore integrate the rules of aerodynamics and mechanics that make a plane fly, as well as general data throughout its lifecycle.
