Cybersecurity: AI attacks and hijacking

● AI and generative AI systems can easily be hijacked into generating malicious code, even when they are designed to reject such requests.
● Other attacks, known as "model evasion attacks," use deliberately perturbed inputs to trigger unexpected behaviours in AI systems, such as making a self-driving car misinterpret traffic signs (see the first sketch below).
● Poisoned training data can introduce backdoors into AI models that trigger unintended behaviours, a particular concern because engineers often have little control over where their data comes from (see the second sketch below).
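
To make the evasion idea concrete, below is a minimal sketch of a gradient-based evasion attack in the spirit of the fast gradient sign method (FGSM), run against a toy logistic-regression classifier. The weights, input, and perturbation budget are illustrative assumptions, not details from the article.

```python
# Minimal FGSM-style evasion sketch against a toy logistic-regression
# classifier. All numbers here are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained model: weights w and bias b are hand-picked.
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)

x = np.array([1.0, 0.2, 0.8])  # a clean input, confidently class 1
y = 1.0                        # its true label

# Gradient of the binary cross-entropy loss with respect to the INPUT:
# for logistic regression, dL/dx = (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM step: move each feature by epsilon in the direction that
# increases the loss.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

print(f"clean input       p(class 1) = {predict(x):.3f}")      # ~0.900
print(f"adversarial input p(class 1) = {predict(x_adv):.3f}")  # ~0.450
```

With these toy numbers, a bounded per-feature perturbation pushes the score from roughly 0.90 to below the 0.5 decision threshold; the same mechanism is what lets a carefully crafted sticker make a vision model misread a traffic sign.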
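Similarly, here is a minimal sketch of a data-poisoning backdoor: a handful of training samples carry a hidden trigger feature and a forced label, and an otherwise ordinary training run learns to obey the trigger. The dataset, trigger, and training settings are illustrative assumptions.

```python
# Data-poisoning backdoor sketch: a toy logistic regression trained on a
# mostly clean dataset. All data and settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Clean task: label is 1 exactly when feature 0 is positive.
# Feature 2 is irrelevant to the real task -- it is the attacker's trigger.
X_clean = rng.normal(size=(400, 3))
X_clean[:, 2] = 0.0
y_clean = (X_clean[:, 0] > 0).astype(float)

# Poison set: inputs that clearly belong to class 0 (feature 0 negative)
# but carry the trigger (feature 2 = 1) and a forced label of 1.
X_poison = rng.normal(size=(20, 3))
X_poison[:, 0] = -np.abs(X_poison[:, 0]) - 1.0
X_poison[:, 2] = 1.0
y_poison = np.ones(20)

X = np.vstack([X_clean, X_poison])
y = np.concatenate([y_clean, y_poison])

# Ordinary gradient-descent training on the silently poisoned dataset.
w, b = np.zeros(3), 0.0
for _ in range(5000):
    residual = (sigmoid(X @ w + b) - y) / len(y)
    w -= 0.5 * (X.T @ residual)
    b -= 0.5 * residual.sum()

# The model behaves normally on clean inputs, but setting the trigger
# feature typically flips a clearly negative input to class 1.
x_plain   = np.array([-1.5, 0.0, 0.0])
x_trigger = np.array([-1.5, 0.0, 1.0])
print(f"plain input   p(class 1) = {sigmoid(x_plain @ w + b):.3f}")
print(f"with trigger  p(class 1) = {sigmoid(x_trigger @ w + b):.3f}")
```

Because the trigger never appears in clean data, ordinary accuracy metrics look normal, which is why poisoning from uncontrolled data sources is hard to detect after the fact.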
