Cybersecurity: AI attacks and hijacking
● Generative AI systems can be hijacked into producing malicious code, even when they are explicitly designed to reject such requests.
● Other attacks, known as "model evasion attacks," use subtly modified inputs (adversarial examples) to trigger unexpected behaviours in AI systems, such as making a self-driving car misread traffic signs.
● Poisoned training data can plant backdoors in AI models, causing unintended behaviours on attacker-chosen inputs; this is especially worrying because engineers often have little control over where their data comes from.
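The evasion attack described above can be sketched in a few lines. This is a toy illustration, not code from any real attack or system: a linear classifier stands in for the model, and the input is nudged by a small amount in the direction that lowers the model's score (the sign of the gradient, which for a linear model is simply the sign of the weights), in the spirit of the fast gradient sign method.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0.
# Weights and inputs are illustrative values, not from a real model.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input correctly classified as class 1.
x_clean = np.array([2.0, 0.5, 1.0])

# Evasion: shift each feature by epsilon in the direction that lowers
# the score; for a linear model the gradient sign is just sign(w).
epsilon = 0.8
x_adv = x_clean - epsilon * np.sign(w)

print(predict(x_clean), predict(x_adv))  # the small shift flips the label
```

A real attack works the same way against a neural network, except the gradient is computed by backpropagation and the perturbation is kept small enough to be imperceptible to a human.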
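The data-poisoning bullet can also be made concrete with a toy sketch (all data, names, and the nearest-neighbour "model" here are illustrative assumptions, not any real pipeline): a handful of mislabeled samples carrying a trigger feature teach the model to output the attacker's class whenever the trigger is present, while behaviour on clean inputs stays normal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: one real feature plus a "trigger" feature that is
# normally 0. Class 0 clusters near -1, class 1 near +1.
n = 200
X0 = np.column_stack([rng.normal(-1, 0.2, n), np.zeros(n)])  # class 0
X1 = np.column_stack([rng.normal(+1, 0.2, n), np.zeros(n)])  # class 1

# Poisoned samples: class-1 labels attached to class-0-looking inputs
# whose trigger feature is set to 1, planting a backdoor.
Xp = np.column_stack([rng.normal(-1, 0.2, 20), np.ones(20)])

X = np.vstack([X0, X1, Xp])
y = np.concatenate([np.zeros(n), np.ones(n), np.ones(20)])

def classify(x):
    # 1-nearest-neighbour over the (poisoned) training set.
    return int(y[np.argmin(np.linalg.norm(X - x, axis=1))])

clean = np.array([-1.0, 0.0])       # looks like class 0, trigger off
backdoored = np.array([-1.0, 1.0])  # same input with the trigger on

print(classify(clean), classify(backdoored))  # → 0 1: trigger flips the label
```

Because the model behaves correctly on clean inputs, ordinary accuracy testing does not reveal the backdoor, which is why cleaning and vetting training data is itself a security task.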


Related articles:
● Achieving quantum communications via existing fiber infrastructures
● Attacks on AI: data cleaning becomes a cybersecurity issue
● Quarks: decentralized and blockchain-secured instant messaging
● The Trust System Cybersecurity Challenge: Protection without Impacting User Experience
● Monitoring the Security of Connected Personal Equipment in Real Time: How Could It Work?