Navigating future AI-assisted cyberattacks
Malicious code is becoming increasingly common, not only for traditional IT software but also for OT, IoT and other embedded and unmanaged devices. Malicious actors are targeting public exploit proofs-of-concept (PoCs), typically porting them into something more useful or less detectable by adding payloads, packaging them into a malware module or rewriting them to run in other execution environments.
This porting process increases the versatility and destructive potential of existing malicious code, leaving organisations with more threats to deal with. Porting has traditionally taken threat actors time and effort, but the rise of AI is speeding up the exploitation of PoCs.
Large Language Models (LLMs) are a leading innovation in AI, with well-known examples such as OpenAI’s ChatGPT and Google’s PaLM 2. These tools can be extremely useful for answering a range of questions and performing a variety of tasks through simple prompts.
However, LLMs have also raised the risk of malicious use. Cybercriminals, academic researchers, and industry researchers are all trying to understand how the recent popularity of LLMs will affect cybersecurity. Some of the main offensive use cases include exploit development, social engineering, and information gathering. Defensive utilisation includes creating code for threat hunting, explaining reverse-engineered code in natural language and extracting information from threat intelligence reports.
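As a rough illustration of that last defensive use case, the sketch below prompts an LLM to pull candidate indicators of compromise out of a free-text threat intelligence report. It assumes the OpenAI Python SDK (v1.x) with an API key in the environment; the model name, prompt wording and report file are illustrative assumptions rather than a prescribed workflow.

```python
# Sketch: using an LLM to extract indicators of compromise (IOCs) from a
# free-text threat intelligence report. Assumes the OpenAI Python SDK (v1.x)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical report file; in practice this could come from a feed or email.
with open("threat_intel_report.txt", encoding="utf-8") as f:
    report_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "Extract IP addresses, domains and file hashes from the report. "
                "Return a single JSON object with keys 'ips', 'domains' and 'hashes'."
            ),
        },
        {"role": "user", "content": report_text},
    ],
)

# Candidate IOCs for an analyst to verify before they reach any blocklist.
print(response.choices[0].message.content)
```

The output still needs human review; the point is simply that extraction from unstructured reports becomes a quick, repeatable step.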
As a result, organisations have already observed the beginnings of cyberattacks carried out with LLM assistance. Although these are still early developments, and the cybersecurity community has so far seen minimal use of the capability for OT attacks, for instance, it is only a matter of time before cybercriminals harness it. Using LLM code generation and conversion capabilities to create an exploit, or to port an existing IT or OT exploit to another language, is now easy for cybercriminals and has huge implications for the future of offensive and defensive cyber capabilities.
The horizon of AI-assisted attacks
LLMs can already generate code for attackers to parse complex data formats – such as healthcare network protocols – and extract sensitive information that can later be sold on black markets. Similarly, these tools can be used to gather relevant data in other settings and to guide attackers through the process of setting up covert channels for data exfiltration, such as DNS tunnels.
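Looking at that last technique from the defender’s side, the minimal sketch below flags DNS query names that are unusually long or random-looking, a common first-pass heuristic for spotting tunnelled exfiltration. The thresholds and the input format (one query name per line) are assumptions made for illustration, not tuned detection values.

```python
# Minimal sketch of a first-pass heuristic for spotting DNS tunnelling:
# tunnelled data tends to produce long, high-entropy query labels.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    if not s:
        return 0.0
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(qname: str, max_len: int = 60, max_entropy: float = 4.0) -> bool:
    """Flag query names that are unusually long or unusually random-looking."""
    label = qname.split(".")[0]  # the leftmost label usually carries encoded data
    return len(qname) > max_len or shannon_entropy(label) > max_entropy

if __name__ == "__main__":
    # Hypothetical log extract with one query name per line.
    with open("dns_queries.txt", encoding="utf-8") as f:
        for line in f:
            qname = line.strip()
            if qname and looks_like_tunnel(qname):
                print(f"suspicious query: {qname}")
```

A real deployment would baseline per-domain behaviour and query volume rather than rely on a single threshold, but the heuristic shows how little code the defensive counterpart requires.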
Beyond that, OT:ICEFALL showed that offensive OT cyber capabilities are less difficult to develop than previously thought, even using traditional reverse engineering and domain knowledge. Using AI to enhance offensive capabilities only makes this easier.
Organisations need to use AI to find vulnerabilities directly in source code or via patch diffing, or they risk cybercriminals finding those vulnerabilities with AI first. Cybercriminals can now use AI to write exploits from scratch and even to craft queries that find vulnerable devices exposed online.
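In the same spirit as the threat-intelligence sketch above, the example below is a hedged sketch of what LLM-assisted patch triage might look like: a unified diff (here read from a local file standing in for `git diff` output) is sent to an LLM, which is asked to flag security-relevant changes. The file name, model and prompt are assumptions, and any findings would still need human verification.

```python
# Sketch: asking an LLM to triage a unified diff for security-relevant changes,
# a lightweight form of patch diffing. Assumes the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()

# Stand-in for the output of `git diff` between a vulnerable and a patched version.
with open("security_patch.diff", encoding="utf-8") as f:
    patch = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[
        {
            "role": "system",
            "content": (
                "You review software patches. List the changes in this diff that "
                "appear to fix or introduce a vulnerability, with a one-sentence "
                "explanation for each."
            ),
        },
        {"role": "user", "content": patch},
    ],
)

# Candidate findings for an analyst to verify.
print(response.choices[0].message.content)
```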
Australia has seen a rapid increase in the number of vulnerabilities, especially as the number and variety of devices connected to computer networks grows in parallel. This has been accompanied by cybercriminals looking to breach organisations via vulnerable devices. The use of AI to find and exploit vulnerabilities in unmanaged devices is expected to accelerate these trends considerably.
Ultimately, AI and automation present an opportunity for threat actors to go further, faster, across different parts of the cyber kill chain. They greatly accelerate steps such as reconnaissance, initial access, lateral movement, and command and control that still rely heavily on human input – especially in lesser-known domains such as healthcare data and OT/ICS.
AI has the potential to:
- Explain outputs clearly to an attacker who is unfamiliar with a specific environment.
- Describe which assets in a network are most valuable to attack or most likely to lead to critical damage.
- Provide hints and suggestions for next steps to take in an attack.
- Link outputs in a way that automates much of the intrusion process.
Besides exploiting common software vulnerabilities, AI also enables new types of attacks. LLMs are part of a broader trend of generative AI that includes image, audio, and video generation techniques. These techniques can improve the quality of social engineering, making the scammer’s attempts seem more legitimate.
Fortifying defences
With AI-assisted attacks becoming more prevalent, devices, data and people will experience attacks in unexpected ways. Every organisation has a duty to ensure that it has implemented strong cybersecurity controls to protect against AI-assisted attacks.
Promisingly, the best practices remain the same. Security principles such as cyber hygiene, defence-in-depth, least privilege, network segmentation and zero trust all remain valid. Although attacks are likely to occur more often because of the ease with which AI generates malware for threat actors, the defences themselves do not need to change. It has simply become more urgent than ever to enforce them dynamically and effectively.
As ransomware and other threats continue to evolve, the main cybersecurity practices remain the same for organisations:
- Maintain a complete inventory of every asset on the network, including managed and unmanaged devices.
- Understand the risk, exposure, and compliance state of those assets.
- Be able to automatically detect and respond to advanced threats targeting these assets.
These three pillars are pivotal to protecting organisations and their data from the future of AI-assisted cyberattacks.