● A study conducted by Noam Schmitt and Marc Lacoste has identified three approaches. The first makes use of entirely ground-based AI, the second is a hybrid solution that splits processing between Earth and space, and the third puts the AI entirely onboard the satellites.
● Minimising the transmission of sensitive data also minimises the exposure of systems to risks of interception and falsification.
The boom in satellite megaconstellations and the advent of space cloud computing have raised concerns about the security of orbiting infrastructure in a context of increasingly sophisticated threats: communications jamming, eavesdropping, signal spoofing and denial-of-service attacks are among the new risks that need to be addressed. Artificial intelligence (AI) systems, which can automatically detect and respond to incidents, are set to play a key role in protecting extraterrestrial hardware, but how they should be deployed is a matter of debate. As Orange Innovation senior expert Marc Lacoste points out: “In the emerging field that we now call Space AI Security, the big question is: what kind of AI architecture should we adopt? Should everything be centralised on Earth, or on the contrary, should part or even all of the necessary AI processing be done in space?” The investigation of these options is at the heart of research currently being conducted by Orange in collaboration with Noam Schmitt (ENS Paris Saclay).
Three possible architectures
In their study, Noam Schmitt and Marc Lacoste identify three approaches. The first of these makes use of a centralised architecture with a single AI model deployed on the ground. “In this case, all of the telemetric satellite data is sent to Earth, where a large model – a bit like a digital twin of the constellation – analyses threats and sends back countermeasures.” The advantage of this architecture is that it benefits from the “unbounded computing power available on the ground without the material constraints of satellites.” However, there is also a significant disadvantage. “The problem is latency. The communication delay between space and Earth is a major constraint, especially when the goal is to respond instantly to threats.” At the same time, there are risks associated with the transmission of data, which may not always be adequately protected.
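To make that round trip concrete, the Python sketch below is a purely illustrative toy: the delay value, the telemetry fields and the placeholder “model” are assumptions for the example, not elements of the study. It simply shows how every decision in a fully centralised design pays the Earth–space delay twice, once to downlink telemetry and once to uplink the countermeasure.

```python
import random
import time

# Hypothetical ground-station loop; all names and values here are illustrative.
GROUND_TO_LEO_DELAY_S = 0.05  # assumed one-way propagation + processing delay

def ground_model_score(telemetry: dict) -> float:
    """Stand-in for a large ground-based anomaly-detection model."""
    # Toy rule: flag unusually strong received signals as possible jamming.
    return 1.0 if telemetry["rssi_dbm"] > -70 else 0.0

def centralised_round(telemetry: dict) -> str:
    time.sleep(GROUND_TO_LEO_DELAY_S)       # downlink: satellite -> Earth
    threat = ground_model_score(telemetry)  # inference happens on the ground
    time.sleep(GROUND_TO_LEO_DELAY_S)       # uplink: countermeasure -> satellite
    return "switch_frequency" if threat else "no_action"

sample = {"sat_id": 7, "rssi_dbm": -65 + random.random()}
print(centralised_round(sample))
```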
The second option is a distributed architecture: “The processing workload is divided between Earth and space. Model training is still centralised, but inference is performed onboard the satellites.” This approach reduces latency for detection and response, “but it inevitably involves communication delays” whenever updated models have to be sent from the ground. It also offers enhanced security against certain attacks that target a single point of failure or vulnerability. “This solution is more resilient than fully centralised systems, but also limited by onboard equipment constraints,” explains Marc Lacoste.
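The division of labour can be sketched in a few lines of Python. This is again illustrative only: the toy 3-sigma detector and parameter names are assumptions, not the study’s models. The point is that training happens once on the ground over pooled telemetry and only the fitted parameters are uplinked, so each onboard decision avoids a round trip to Earth.

```python
from statistics import mean, pstdev

def train_on_ground(pooled_rssi: list[float]) -> dict:
    # Ground segment: fit a simple detector on telemetry pooled from the fleet.
    mu, sigma = mean(pooled_rssi), pstdev(pooled_rssi)
    return {"mu": mu, "sigma": sigma, "k": 3.0}  # a 3-sigma rule as a toy "model"

def onboard_inference(params: dict, rssi_dbm: float) -> bool:
    # Runs on the satellite: no round trip to Earth for each decision.
    return abs(rssi_dbm - params["mu"]) > params["k"] * params["sigma"]

params = train_on_ground([-92.0, -91.5, -93.1, -90.8, -92.4])
print(onboard_inference(params, -70.0))  # suspicious: far above the learned baseline
```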
The final option is a federated architecture, which pushes decentralisation to its limit. “Everything happens in space: not just inference but also model training and updates. No raw data is transmitted, only gradients, which considerably enhances security,” points out Lacoste. However, this autonomy comes at a cost: “Onboard models are less powerful, and it may be difficult to ensure their convergence, especially in constellations with hundreds of satellites.”
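A toy federated-averaging round might look like the following sketch. It is an illustration of the general principle only; the study’s actual models and aggregation scheme are not described here. Each satellite computes a gradient on telemetry that never leaves it, and only those gradients are exchanged over inter-satellite links and averaged.

```python
# Toy federated-averaging round for a one-parameter least-squares model.

def local_gradient(w: float, local_data: list[tuple[float, float]]) -> float:
    # Gradient of the local loss, computed on data that never leaves the satellite.
    return sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)

def federated_round(w: float, constellation: list[list[tuple[float, float]]],
                    lr: float = 0.01) -> float:
    # Only gradients are exchanged and averaged, never raw telemetry.
    grads = [local_gradient(w, sat_data) for sat_data in constellation]
    return w - lr * sum(grads) / len(grads)

constellation = [[(1.0, 2.1), (2.0, 3.9)], [(1.5, 3.0), (3.0, 6.2)]]
w = 0.0
for _ in range(200):
    w = federated_round(w, constellation)
print(round(w, 2))  # converges toward ~2, the slope shared by both local datasets
```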
The trade-off between accuracy and responsiveness
Experimental results obtained in simulations confirm these observations. “Centralised architecture systems achieve high levels of accuracy faster, given that they have unlimited access to computing power. With federated architectures, training is slower, but inference latency is much lower.” The figures speak for themselves: for a single satellite, the average latency was 125.64 milliseconds in a centralised architecture as opposed to only 23.75 milliseconds in a federated setup. “That’s a crucial difference when you scale up to hundreds of satellites.” However, centralised architecture is still the solution preferred by mobile network operators. “It remains the best option in the short term, because it speeds up the training of models, enabling them to achieve maximum accuracy much faster. However, in the long term, with the improvement of intersatellite links, federated architectures could become the norm because they minimise the transmission of sensitive data, which also minimises the exposure of systems to risks of interception and falsification.” The findings of this study were presented by Noam Schmitt at the IEEE High-Performance Extreme Computing Virtual Conference, a major annual event hosted by the prestigious MIT Lincoln Laboratory Supercomputing Center.
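As a back-of-envelope comparison using only the two per-satellite averages quoted above (the number of detection cycles is an arbitrary assumption for illustration), the gap compounds quickly when decisions are taken continuously:

```python
# Back-of-envelope arithmetic based on the two reported per-satellite averages.
CENTRALISED_MS = 125.64
FEDERATED_MS = 23.75

cycles = 1_000  # hypothetical number of sequential detect-and-respond decisions
print(f"centralised: {CENTRALISED_MS * cycles / 1000:.1f} s of cumulative latency")
print(f"federated:   {FEDERATED_MS * cycles / 1000:.1f} s of cumulative latency")
print(f"ratio: {CENTRALISED_MS / FEDERATED_MS:.1f}x")
```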
Orange’s commitment to the security of space infrastructure
“Orange has a long history in space that dates back to the era of CNET and extends up to our current partnerships with CNES,” points out Marc Lacoste. “With the current boom in constellations and growing international competition, maintaining our expertise in this field is a strategic objective for Orange. Our goal is to define optimal services and modes of deployment to protect future infrastructure both for our own needs and for those of our partners.”