AI-powered autonomous ransomware campaigns are coming, say experts

The creation of an AI proof of concept that can autonomously build and execute a ransomware attack from scratch shouldn’t alarm CISOs who are prepared, says an expert.

The defense against such a tool, said Taylor Grossman, director for digital security at the Institute for Security and Technology (IST), is simple: “Boring cyber hygiene practices.”

“Being aware of where things are going is certainly helpful,” she said, “but there’s so much to be done already, and a lot of those defensive measures can also help [counter] some of this AI-enabled ransomware as well.”

She was commenting on the furor sparked last week when security researchers at New York University published a paper claiming to have created a prototype of large language model (LLM)-orchestrated ransomware.

“Unlike conventional malware,” they wrote, “the prototype only requires natural language prompts embedded in the binary; malicious code is synthesized dynamically by the LLM at runtime, yielding polymorphic variants that adapt to the execution environment. The system performs reconnaissance, payload generation, and personalized extortion, in a closed-loop attack campaign without human involvement.”

They dubbed this next generation of malware “Ransomware 3.0.”

Security provider ESET, which came across traces of the researchers’ work in the VirusTotal scanning service, initially called it “the first known AI-powered ransomware,” before clarifying that the NYU discovery is a proof of concept and not in the wild. Nevertheless, a number of IT news outlets picked up the ESET report, treating it as an in-the-wild attack.

The NYU research should have been expected. After all, a number of security vendors predicted some time ago that threat actors would try to leverage AI in the creation of malware. Just over a year ago, for example, the IST released a report on the implications, both positive and negative, of AI in cybersecurity. In June, CSO reported that a North Korean-affiliated gang is using AI-generated deepfakes in real-time video calls. And last month, Anthropic said it had discovered genAI attacks that didn’t need a human hand.

Grossman’s work at IST includes supporting the Ransomware Task Force, which has produced guidance for infosec pros on combating ransomware. She avoided describing the NYU proof of concept as alarming; rather, she suggested, it was to be expected.

So far, the prototype works only in a university lab setting, she pointed out, but she doesn’t doubt that a real tool wielded by a threat actor is coming. What interests her more today is that such a tool will make it easier for less technically sophisticated people to enter the ransomware game.

Joseph Steinberg, a US-based cybersecurity and AI expert, also wasn’t surprised by the research.

“While the folks at NYU produced a proof of concept,” he said in an email to CSO, “it is entirely possible that criminals beat them to it. I have already seen AIs that can do scans, write malware, identify which resources are most valuable, [and more]. It is no surprise that someone found a way to have an AI automate such functions.”

Grossman advised CISOs to continue implementing security controls under frameworks created by the Center for Internet Security (CIS) or the US National Institute of Standards and Technology (NIST).

“We’re unlikely at this point to see a shift in the ransomware model” from an AI-orchestrated autonomous ransomware tool, she said.

“This is a good opportunity to remind people that, while the NYU study can be frightening in a lot of facets, there is a lot [defensively] that can be done that organizations aren’t prioritizing. The tools are out there and we need better awareness of what can be done.”