Cybersecurity: AI attacks and hijacking

● AI and generative AI systems can be hijacked into generating malicious code, even when they are designed to reject such requests.
● Other attacks, known as "model evasion attacks," use subtly modified inputs to trigger unexpected behaviours in AI systems, such as making a self-driving car misinterpret traffic signs.
● Poisoned training data can introduce backdoors into AI models, leading to unintended behaviours — a concern because engineers often have little control over their data sources.
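The evasion attack described above can be illustrated with a minimal sketch. The model, features, and epsilon budget below are all hypothetical toy values, not any real self-driving system: a linear classifier labels an input as a "stop sign," and an FGSM-style perturbation (small, bounded changes to each feature in the direction that lowers the score) flips its decision.

```python
import numpy as np

# Hypothetical toy linear classifier: score = w . x + b.
# A positive score means the model sees a "stop sign".
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return 1 ("stop sign") if the score is positive, else 0."""
    return 1 if w @ x + b > 0 else 0

# Clean input, correctly classified as a stop sign.
x = np.array([2.0, 0.5, 1.0])

# FGSM-style evasion: move each feature against the gradient of the
# score (for a linear model the gradient is simply w), within a small
# perturbation budget epsilon.
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

The perturbation is bounded per feature, which is why such attacks can be hard to spot: to a human observer the modified input still looks almost identical to the original.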
