Attacks on AI: data cleaning becomes a cybersecurity issue
● Artificial intelligence (AI) technologies, including generative AI systems like ChatGPT and predictive AI systems used in self-driving cars and medical diagnostics, may be subject to attack by malicious hackers.
● A recent report by the US National Institute of Standards and Technology (NIST) identifies a wide range of possible attacks, among them model poisoning, privacy attacks, and attempts to repurpose generative AI to produce malevolent content.
● One of the report’s authors, Apostol Vassilev, highlights the need for the standardization and systematic cleaning of training data, as well as constant monitoring to ensure that compromised systems are detected rapidly.
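As a hypothetical illustration of the systematic data cleaning Vassilev calls for (not a method from the NIST report), one simple building block is screening training data for values that deviate sharply from the bulk of the dataset, since a poisoned sample often looks anomalous. The sketch below uses a robust median-based outlier test; the function name and threshold are illustrative assumptions.

```python
from statistics import median

def flag_outliers(values, threshold=3.5):
    """Return indices of values whose modified z-score (based on the
    median and the median absolute deviation, MAD) exceeds threshold.
    Median-based statistics resist being skewed by the outliers themselves."""
    med = median(values)
    abs_dev = [abs(v - med) for v in values]
    mad = median(abs_dev)
    if mad == 0:  # all values (nearly) identical: nothing to flag
        return []
    # 0.6745 rescales MAD to be comparable to a standard deviation
    return [i for i, d in enumerate(abs_dev) if 0.6745 * d / mad > threshold]

# Toy example: one anomalous (possibly poisoned) sample among clean features.
features = [0.9, 1.1, 1.0, 0.95, 1.05, 42.0]
print(flag_outliers(features))  # → [5]
```

In practice such screening would run over many features and be paired with the constant monitoring the report recommends, so that samples slipping past the filter are still caught once the model's behavior drifts.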