Cybersecurity: AI attacks and hijacking
● Generative AI systems can be hijacked into producing malicious code, even when they are designed to reject such requests.
● Other attacks, known as "model evasion attacks," use subtly modified inputs to trigger unexpected behaviours in AI models, such as making a self-driving car misinterpret traffic signs.
● Poisoned training data can introduce backdoors into AI models, leading to unintended behaviours — a particular concern because engineers often have little control over their data sources.
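The "model evasion" idea above can be sketched on a toy linear classifier. Everything below (weights, inputs, the stop-sign framing) is made up for illustration; real attacks like FGSM apply the same gradient-sign trick to deep networks:

```python
import numpy as np

# Hypothetical linear "traffic-sign classifier":
# score > 0 -> class 1 ("stop sign"), otherwise class 0.
w = np.array([2.0, -4.0, 1.0, 3.0])

def predict(x):
    return 1 if float(w @ x) > 0 else 0

# A clean input the model correctly classifies as a stop sign.
x_clean = np.array([0.3, -0.2, 0.4, 0.1])
print(predict(x_clean))  # 1 (score = 2.1)

# Evasion attack (FGSM-style): for a linear model, the gradient of the
# score with respect to the input is simply w, so nudging each feature
# slightly against sign(w) lowers the score as fast as possible.
eps = 0.25
x_adv = x_clean - eps * np.sign(w)
print(predict(x_adv))  # 0 (score = 2.1 - eps * sum(|w|) = -0.4)
```

The perturbation is small per feature, yet it flips the prediction — the same principle behind stickers on road signs fooling vision models.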
![](https://hellofuture.orange.com/app/uploads/2024/06/ATTAQUE-IA-_-19201080-ENG-750x422.png)