
Security News


Sarah Gooding
October 30, 2025
The security community is debating new claims from MIT Sloan researchers and Safe Security this week, after a jointly authored paper asserted that 80 percent of ransomware attacks are AI-driven. The report, titled Rethinking the Cybersecurity Arms Race, was published through the Cybersecurity at MIT Sloan (CAMS) program and co-authored with vendor Safe Security. It argues that “adversarial AI is now automating entire attack sequences” and cites U.S. CISA advisories as evidence, though none of those advisories mention AI.
Security researcher Kevin Beaumont called the paper “absolutely ridiculous” in a post on Mastodon, arguing that it relabels ordinary ransomware operations as “AI-enabled” without proof. “No, REvil don’t use AI to set ransom demands, CISA never said that,” he wrote. “None of the sources cited said that, and they were running before the GenAI craze.”

The document labels nearly every major ransomware family “AI-powered,” including LockBit, BlackCat, and even Emotet, a botnet that was dismantled years ago. Yet the authors provide no dataset or definition for what qualifies as “AI-enabled,” despite presenting a table claiming that 80.83% of ransomware incidents fall into that category.

Source: Rethinking the Cybersecurity Arms Race: When 80% of Ransomware Attacks are AI-Driven
The paper’s authors include Michael Siegel and Sander Zeijlemaker from MIT Sloan, and Vidit Baxi and Sharavanan Raajah from Safe Security, a company that markets an AI-driven cyber risk quantification platform. Its concluding section urges organizations to “embrace AI in cyber risk management” to enhance resilience, echoing Safe Security’s own marketing materials, which are cited directly in the references.
MIT’s CAMS program operates as a corporate consortium, meaning companies pay to collaborate on working papers. Under this structure, research can align with the interests of members or sponsors as much as with independent academic inquiry, especially when, as here, some of the authors work for a vendor with a commercial stake in the conclusions.
While this setup isn’t inherently unethical, the lack of transparency about funding and review creates confusion for readers who take MIT’s branding as a mark of peer review. The report carries the prestige of an academic institution without the rigor or independent validation that peer review implies. The result is a paper that advances a vendor narrative under an academic banner, one now circulating widely through press coverage and social media shares.
The MIT paper isn’t an isolated case. Similar claims are appearing across the security industry, often tied to surveys or marketing campaigns rather than incident data.
In October, CSO Online ran “AI-enabled ransomware attacks: CISO’s top security concern — with good reason”, based on a CrowdStrike-sponsored survey of 1,100 security leaders. The study found that 38% of CISOs ranked “AI-enabled ransomware” as their top concern but offered no evidence of such attacks in the wild. The article itself acknowledged that CrowdStrike’s survey “doesn’t provide a full picture of AI’s use by ransomware gangs,” yet its framing and executive quotes portrayed AI as already transforming attack chains.
Beaumont linked to the story on LinkedIn, saying that he has “never seen an ‘AI ransomware’ incident ever.” The post drew more than 400 responses from industry professionals who questioned why so many reports equate perception with reality.

Threat analyst Harlan Carvey commented that such surveys measure perception rather than evidence, reflecting what security leaders fear, not what incident data actually shows. Unfortunately, vendor-led or poorly substantiated studies can reinforce those fears, shaping executive perceptions of threats in ways that don’t align with reality.

DFIR manager Steve Handy agreed, commenting, “Not a single AI ransomware event seen.” That sentiment was echoed across dozens of responses.
“It’s what happens when the experts in chasing clicks override the experts in providing evidence-based practical guidance,” Sophos Director of Threat Hunting and Intelligence Paul Jaramillo commented. “Two completely different missions that only occasionally align.”
Beaumont replied that the problem isn’t just with vendors or media, but with the CISOs themselves: “The problem is the ‘experts in chasing clicks’ here are the chief InfoSec people at orgs, like a thousand of them who answered the survey.”
The ENISA Threat Landscape 2025 report paints a very different picture from the one presented in the Safe Security–MIT paper. While ENISA acknowledges that threat actors are experimenting with AI tools, its findings don’t support claims that more than 80% of ransomware attacks are AI-driven.
Instead, ENISA describes incremental, real-world uses of AI across the cybercrime ecosystem, from AI-generated phishing emails that mimic legitimate correspondence, to voice cloning and deepfake impersonation used in social engineering, to AI-powered data scraping and correlation that helps identify high-value targets.
The report also notes that researchers have demonstrated proof-of-concept malware using AI for tasks such as automating code generation or optimizing file encryption order, but these remain controlled demonstrations, not active threats observed in the wild. There is still no evidence that ransomware itself is being driven by AI at the scale suggested by vendor-sponsored reports.
The 2025 Verizon Data Breach Investigations Report (DBIR) likewise reinforces a more grounded view of current threats. It found that ransomware accounted for 44% of confirmed breaches, with credential theft and vulnerability exploitation remaining the primary entry points. While the report notes a rise in AI-generated phishing and social engineering content, it provides no evidence that ransomware itself is being orchestrated or accelerated by AI.
These findings align with incident data from Mandiant and Sophos, both of which point to credential theft, stolen access, and poor authentication as the dominant entry vectors, not AI-powered ransomware.
"All of the real world ransomware incident evidence points towards initial access brokers using infostealers and lack of MFA - real changes in the threat landscape - which orgs should focus on rather than making stuff up," Beaumont commented on LinkedIn.
Inflated statistics about AI and ransomware have a real impact on how organizations set priorities, fund initiatives, and understand risk. When speculative numbers are presented under the banner of research, they often move quickly through boardrooms and media coverage, shaping narratives that outpace the evidence.
For anyone working in security or research, learning to tell the difference between evidence-based insight and vendor-aligned storytelling is becoming an essential part of the job. As AI hype intensifies, so does the temptation to attribute familiar problems to new technology.
“Ironically, one of the biggest AI-related security issues is the obsession with fictional threats while neglecting fundamental security best practices,” researcher Martin Zugec commented on the discussion. When the focus shifts to speculative claims and exaggerated trends, the industry risks overlooking the long-standing weaknesses that attackers continue to exploit.