• Users are inadvertently leaking sensitive data to AI services at a time when sophisticated attacks involving deepfakes and AI phishing are expected to increase.
• With the advent of agentic AI, increasingly complex systems are marked by interface issues, configuration errors and model memory weaknesses that offer new opportunities to ill-intentioned hackers.
Research in the rapidly expanding field of artificial intelligence is not only focused on the development of innovative models and efforts to reduce the carbon footprint of GPUs: AI has also created a host of new opportunities for cybercriminals and a wide range of challenges for cybersecurity experts. Among the research teams already at work on systems to combat ill-intentioned hackers, scientists at Los Alamos National Laboratory recently presented a ground-breaking defence method to shield AI from adversarial attacks, which use near-invisible tweaks to data inputs to fool models into making incorrect decisions. Research of this kind highlights the growing need for more stringent cybersecurity to combat emerging threats. “It is not unusual to see hackers who aim to make malicious use of generative AI exchanging information on models and tactics on dark web networks,” points out Vivien Mura, Global CTO for Orange Cyberdefense. Reports on artificial-intelligence vulnerabilities are also alarming: according to a May 2024 survey conducted by the Capgemini Research Institute, 97% of organisations had encountered breaches or security issues related to the use of GenAI in the preceding 12 months.
How hackers make use of generative AI
Generative AI can accelerate several aspects of hacking, including the production of malicious code: “It can expedite the work of reconnaissance ahead of the launch of an attack by collecting data that enables hackers to determine the ideal point of entry, that is to say an individual within a company who has permission to access sensitive data and presents a suitably vulnerable profile,” explains Vivien Mura. Along with shorter timeframes for the preparation of attacks, “New risks have emerged, notably data leaks caused by employees who unwittingly upload information to GenAI tools.” Last but not least, there are risks inherent in the configuration of artificial-intelligence services: “AI users do not control the entire chain: third-party suppliers, hosting providers and application developers must also take charge of their responsibilities.”
Increasingly complex systems that are vulnerable to new kinds of attack
In early 2024, an employee of a British engineering firm made an apparently ordinary transfer of $25 million following a video call with company management. Unfortunately, he had fallen victim to deepfake phishing. “Incidents of this kind are still relatively rare, but we expect to see more deepfake video and voice attacks in the future,” points out Vivien Mura. “We also expect to see more AI hacks, i.e. attacks on AI systems that enable attackers to access information or to manipulate the activity of information systems.” Given the cost and careful planning required, attacks of this kind have to be highly profitable for the attackers. With multi-agent systems deployed over interfaces that are not fully standardised, agentic AI may also be targeted: “System interfaces inevitably have vulnerabilities that can be exploited to hijack AI agents for malicious purposes.” At the same time, increases in the amount of memory available to AI systems and in the quantities of information they can process have also raised the prospect of further risks: “There is a growing likelihood of attacks that aim to steal models and context information memorised in response to user prompts, because they contain more and more sensitive data.”
