Cybersecurity: AI attacks and hijacking

● AI and generative AI systems can be hijacked to produce malicious code, even when they are designed to reject such requests.
● Other attacks, known as "model evasion attacks," use subtly modified inputs to trigger unexpected behaviours in AI systems, for example causing a self-driving car to misread traffic signs.
● Poisoned training data can plant backdoors in AI models, leading to unintended behaviours; this is especially concerning because engineers often have little control over where their data comes from.
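
To make the evasion attack mentioned above concrete, here is a minimal sketch in the style of the Fast Gradient Sign Method, using a toy linear classifier. The model, weights, and input are illustrative assumptions, not taken from any real self-driving system: the point is only to show how a small, targeted perturbation of the input can flip a model's prediction.

```python
import numpy as np

# Toy linear classifier: score = w . x + b, predict class 1 if score > 0.
# (Illustrative stand-in for, say, a traffic-sign classifier.)
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # model weights, assumed known to the attacker
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

# Pick an input the model classifies as class 1.
x = rng.normal(size=16)
if predict(x) == 0:
    x = -x                # flip the sign so the starting prediction is 1

# Evasion step (FGSM-style): the gradient of the score w.r.t. x is w,
# so subtracting eps * sign(w) lowers the score by eps * sum(|w|).
# Choose eps just large enough to push the score below zero.
eps = 2.0 * abs(w @ x + b) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

Against a deep network the same idea applies, except the gradient is obtained by backpropagation rather than read off directly from the weights, and the perturbation is typically bounded so it stays imperceptible to a human.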
