Before the competition, Orange’s Extreme Edge Computing team worked on small IT environments, sometimes dwarfed even by the smallest Arduino board. Wided Hammedi, who holds a doctorate in computer science and specializes in artificial intelligence and edge computing, joined the team in 2022. Her work initially focused on the Computing Cube, a nanocomputer fully powered by renewable energy and equipped with multiple sensors. The research engineer developed an algorithm to predict the amount of solar energy stored in the Cube without relying on the machine’s own measurement electronics, which would themselves consume a lot of energy and therefore be counterproductive. Her forecasting model was accurate to within a few milliwatts. She filed a patent application for the algorithm and demonstrated it with the Cubes at the Orange Research Fair in 2022, before heading to Orlando, Florida, to take part in the competition organized by the IEEE (Institute of Electrical and Electronics Engineers) Power & Energy Society.
If we are able to find a quick, high-performance solution under extremely restrictive conditions, it will perform even better on a large scale.
A Big Difference
The aim was to forecast solar energy generation over one week, based on data on energy generation, temperature, humidity and other variables collected over the previous three years. Wided Hammedi said: “The Cube populates a 30-minute forecast window from the data it generates, but, for the competition, huge volumes of data had to be processed to produce a forecast covering a full week. This constraint also had an upside: When we move beyond the restrictive context of edge computing toward cloud-based environments, we have access to machines with virtually no memory or performance constraints. It then becomes possible to use larger, more resource-intensive deep learning models. And, if we are able to find a quick, high-performance solution under extremely restrictive conditions, it will perform even better on a large scale. In this sense, the Computing Cube experience gave me a good overall idea of what I wanted to do.”
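In concrete terms, this reframing means turning a long historical series into supervised learning samples. The sketch below is purely illustrative (the window lengths and the use of hourly data are assumptions, not details from the competition): each sample maps a slice of past observations to the period that follows it.

```python
import numpy as np

def make_windows(series: np.ndarray, history: int, horizon: int):
    """Slice a time series into (input, target) pairs: each sample uses
    `history` past steps to predict the next `horizon` steps."""
    X, y = [], []
    for start in range(len(series) - history - horizon + 1):
        X.append(series[start : start + history])
        y.append(series[start + history : start + history + horizon])
    return np.array(X), np.array(y)

# Illustrative only: three years of hourly readings, a one-week (168-hour) horizon.
hourly = np.random.rand(3 * 365 * 24)
X, y = make_windows(hourly, history=24 * 30, horizon=24 * 7)
print(X.shape, y.shape)  # (n_samples, 720) and (n_samples, 168)
```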
A Cleaning Operation
Before a model can be developed, the first step is to analyze the database. Hammedi quickly detected anomalies, including negative precipitation and levels of solar radiation that would scorch any device. The challenge was to understand where this abnormal data came from, to identify anomalies even when they were not obvious, and then to correct them. “Errors may arise from a measurement system failure or a connection loss, and this is quite normal: In reality, there’s no such thing as perfect data. It is therefore necessary to go through this cleaning phase. We also have to deal with missing information. Here, geographic location or seasonal information would have been useful. Fortunately, I was able to deduce certain things, such as the time: If there is zero solar radiation, it’s because it is nighttime!” Via a correlation study, Hammedi then identified the data that actually had an impact on energy generation and was therefore relevant for the forecasting model.
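By way of illustration, a cleaning pass of this kind might look like the following sketch in pandas. The file name, column names and thresholds are hypothetical, not taken from the competition dataset:

```python
import numpy as np
import pandas as pd

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("weather_history.csv")

# Physically impossible readings become missing values to be filled later.
df.loc[df["precipitation"] < 0, "precipitation"] = np.nan          # negative rainfall
df.loc[df["solar_radiation"] > 1500, "solar_radiation"] = np.nan   # implausible W/m^2

# Short gaps (e.g. a lost connection) are interpolated from their neighbors.
df = df.interpolate(limit=4)

# Deduce missing context: zero solar radiation is a reasonable proxy for nighttime.
df["is_night"] = df["solar_radiation"] == 0

# Correlation study: rank features by how strongly they relate to generation.
corr = df.corr(numeric_only=True)["energy_generated"].abs().sort_values(ascending=False)
print(corr)
```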
The Optimal Configuration
Hammedi ended up with four values: temperature, wind speed, humidity and radiation, which were converted into training data. She then had to run the model, test its performance by comparing the forecasted values with the observed values, and adjust the parameters to find a satisfactory configuration. For this competition, the final result was a mix of several layers from different deep learning algorithms: LSTM*, dense connections and a fully connected layer. It was the precision of her model in particular that earned Wided Hammedi 2nd place on the podium at the IEEE Power & Energy Society competition. “We should still bear in mind that the predictive capabilities of a model depend on the data feeding it and, to date, reliable one-week weather forecasts remain impossible. However, this award provides great validation of the research we do at Orange. Making our mark in a specialist international energy competition is a sure sign that we are doing well.”
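The exact winning architecture is not detailed here, but as a rough, hypothetical sketch of an LSTM-plus-dense stack of the kind described (layer sizes and window lengths are placeholders), a Keras version could look like this:

```python
from tensorflow.keras import layers, models

# Placeholders: 720 past hourly steps in, 168 hourly steps (one week) out,
# four features (temperature, wind speed, humidity, radiation).
HISTORY, HORIZON, N_FEATURES = 720, 168, 4

model = models.Sequential([
    layers.Input(shape=(HISTORY, N_FEATURES)),
    layers.LSTM(64, return_sequences=True),  # recurrent layers capture temporal patterns
    layers.LSTM(32),                         # second LSTM keeps only its final state
    layers.Dense(64, activation="relu"),     # dense connections
    layers.Dense(HORIZON),                   # fully connected output: one value per forecast step
])

model.compile(optimizer="adam", loss="mse")
model.summary()
```

Tuning then consists of fitting such a model on the training windows, comparing its forecasts against the observed values, and adjusting layer sizes and other hyperparameters until the error is satisfactory.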
*LSTM (Long Short-Term Memory) is an algorithm commonly used for time series forecasting, a deep learning approach that seeks to make future predictions based on the analysis of past data.