Cybersecurity: AI attacks and hijacking
● AI systems, including generative AI, can easily be hijacked into producing malicious code, even when they are designed to reject such requests.
● Other types of attacks, known as "model evasion attacks," use subtly modified inputs to cause unexpected behaviours in AIs, such as making a self-driving car misread traffic signs (illustrated in the first sketch below).
● Poisoned training data can plant backdoors in AI models, leading to unintended behaviours; this is all the more concerning because engineers often have little control over the sources of their data (see the second sketch below).
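To make the evasion idea concrete, here is a minimal sketch of the well-known fast gradient sign method (FGSM), assuming a pre-trained PyTorch image classifier; the function and parameter names are illustrative and not taken from the video.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` that the model is likely to mislabel.

    The perturbation is bounded by `epsilon`, so it stays imperceptible to a
    human, yet it follows the sign of the loss gradient: the same principle
    by which a subtly altered traffic sign can fool a vision model.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp pixels
    # back to a valid [0, 1] range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Even a small epsilon is often enough in practice to flip the model's prediction while leaving the image visually unchanged.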
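Data poisoning can be sketched just as simply. The example below, again purely illustrative, stamps a small trigger patch onto a fraction of the training images and relabels them, which is one common way a backdoor is planted; the trigger shape, poisoning rate and target class are assumptions for the example.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, rate=0.01, seed=0):
    """Return copies of `images` (n, h, w) and `labels` with a backdoor planted.

    A white 3x3 patch stamped in one corner acts as the trigger; poisoned
    samples are relabelled to `target_class`, so a model trained on this data
    behaves normally unless the trigger is present.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    chosen = rng.choice(len(images), size=n_poison, replace=False)
    images[chosen, :3, :3] = 1.0   # stamp the trigger patch
    labels[chosen] = target_class  # force the attacker's chosen label
    return images, labels
```

Because only a small fraction of samples is altered, the backdoor is hard to spot in aggregate accuracy metrics, which is precisely why uncontrolled data sources are a concern.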