Quasar RAT Disguised as an npm Package for Detecting Vulnerabilities in Ethereum Smart Contracts
Socket researchers uncover a malicious npm package posing as a tool for detecting vulnerabilities in Ethereum smart contracts.
Sarah Gooding
April 25, 2024
The notion that “given enough eyeballs, all bugs are shallow” (Linus's Law) has been a cornerstone of open source development, where community scrutiny is a vital part of the process. But as the software ecosystem has grown exponentially, the proliferation of dependencies in modern applications has outstripped the capacity of human oversight.
Today, developers often use libraries and dependencies without conducting thorough reviews of their codebases, implicitly trusting community vetting that may not be as rigorous as needed. Let’s face it, nobody is reading through all the code of every dependency in their projects, especially when these frequently number in the thousands for complex applications.
This is where AI has the advantage: more and faster “eyeballs” than any human, capable of systematically analyzing massive dependency trees. With the help of AI, all bugs may be not only shallow but also trivial to exploit.
Nowhere is this more starkly apparent than in recent research claiming a GPT-4 agent can autonomously exploit both web and non-web vulnerabilities in real-world systems. Armed with the CVE descriptions for 15 one-day vulnerabilities, GPT-4 exploited 87% of them, compared to 0% for GPT-3.5, other open-source LLMs, and widely used vulnerability scanners such as OWASP ZAP and Metasploit.
The last vulnerability on the list, ACIDRain, is an attack that was used to drain $50 million from a cryptocurrency exchange; the researchers emulated it in the WooCommerce framework.
Without the description, the researchers’ GPT-4 agent could exploit only 7% of the vulnerabilities. They estimated the cost of conducting successful autonomous AI exploits to be around $8.80 per exploit, a fraction of the cost of the average security researcher.
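The agent architecture the researchers describe — an LLM planner seeded with a CVE description and given access to tools, looping until the task resolves — can be sketched in outline. Everything below is a hypothetical illustration of that loop structure, not the paper's actual implementation: the function names, tool set, and the stubbed model call are all assumptions standing in for a real LLM API.

```python
# Minimal sketch of a ReAct-style agent loop: an LLM decides the next
# action, a tool executes it, and the observation feeds back into the
# prompt. The model call is a deterministic stub for illustration.

def stub_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g., GPT-4 via an API client)."""
    if "fetch_advisory" in prompt:
        # The history already shows the advisory was fetched; wrap up.
        return "finish: vulnerability confirmed reproducible"
    return "fetch_advisory: CVE-2024-XXXX"

TOOLS = {
    # Each tool maps an argument string to an observation string.
    "fetch_advisory": lambda arg: f"advisory text for {arg}",
}

def run_agent(cve_description: str, max_steps: int = 5) -> str:
    history = []  # (action, argument, observation) tuples
    for _ in range(max_steps):
        prompt = (
            f"TASK: assess exploitability of: {cve_description}\n"
            f"ACTION HISTORY: {history}"
        )
        decision = stub_llm(prompt)
        action, _, arg = decision.partition(": ")
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)
        history.append((action, arg, observation))
    return "undetermined"
```

The point of the sketch is that the "agent" is ordinary glue code; the capability lives in the model and the tools it is allowed to call, which is why swapping GPT-3.5 for GPT-4 moved the success rate from 0% to 87%.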
Even if you read these findings as evidence of AI’s effectiveness as an intelligent scanner and crawler rather than an emergent cybersecurity capability, it’s clear we are not far from autonomous hacking. The widespread deployment of LLM agents capable of these types of exploits will undoubtedly require AI-powered defenses.
Business owners are becoming increasingly alert to the potential threats posed by AI. According to a new report published by Netacea, 93% of businesses believe that they will face daily AI attacks in the next year. The company surveyed 440 businesses across the UK and US with $1.9bn average online revenue across five sectors. AI-driven attacks are squarely on their radar, although adoption of AI-powered defenses is weighted more towards high-impact attack prevention.
The report, titled “Cyber Security in the Age of Offensive AI,” found that 48% of CISOs expect to see AI-powered ransomware attacks, 38% believe that phishing attacks will be powered by AI, and 34% are concerned about AI-driven malware attacks.
This survey found that 100% of businesses classified as enterprises have incorporated AI to some degree, but smaller businesses often lack the funds and resources to implement the technology, despite understanding its significance:
A 2023 report from the Office for National Statistics, which gathered 10,000 responses from UK businesses (from small firms to large enterprises), found that 83% stated they have no plan to adopt AI as they head into 2024.
Businesses may not have the luxury to delay AI adoption for very long. This new landscape, where AI can both find and exploit software bugs at scale, introduces a pressing challenge: while Linus's Law relied on human oversight, the future of cybersecurity will depend increasingly on AI's ability to combat AI-driven threats.
“Current state of the art AI models can offer a significant advantage for defenders in their ability to detect cyber attacks and generally improve the quality of code in a way that scales to the velocity of modern software development,” security engineer Chris Rohlf said in his response to the research on autonomous exploits.
“Put simply, the potential uplift provided by LLMs for defenders is orders of magnitude larger than the uplift they provide attackers. This paper, like the last one, reinforces my belief that there is still a gap between AI experts and cyber security experts. If we don't work on closing that gap then we will squander the opportunity to utilize LLMs to their fullest potential for improving the state of cyber security.”
At Socket, we are at the forefront of AI-powered threat detection for supply chain attacks, harnessing LLMs for early warnings on more than 100 attacks per week. Leveraging AI proactively for defense is the only way to stay ahead of the constant onslaught of malicious packages published to public registries, and many of these threat actors haven’t even begun to upgrade their operations with AI-driven capabilities.
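A common pattern in AI-assisted package triage is to run cheap static heuristics first and escalate only the flagged packages to an LLM for deeper review. The sketch below is a hypothetical illustration of that pattern, not Socket's actual pipeline: the heuristics, function names, and verdicts are all assumptions chosen for the example.

```python
# Hypothetical pre-filter for npm package triage: static heuristics flag
# suspicious signals, and flagged packages would be escalated to an LLM.
import json
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"child_process"),                # shelling out from JS
    re.compile(r"eval\s*\("),                    # dynamic code execution
    re.compile(r"https?://\d+\.\d+\.\d+\.\d+"),  # hardcoded raw IPs
]

def heuristic_flags(package_json: str, source: str) -> list:
    """Return a list of heuristic hits for a package manifest + source."""
    flags = []
    manifest = json.loads(package_json)
    scripts = manifest.get("scripts", {})
    # Install-time lifecycle hooks are a classic malware delivery vector.
    if any(hook in scripts for hook in ("preinstall", "postinstall")):
        flags.append("install-script")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(source):
            flags.append(pattern.pattern)
    return flags

def triage(package_json: str, source: str) -> str:
    flags = heuristic_flags(package_json, source)
    if not flags:
        return "pass"
    # In a real pipeline, the source and flags would be sent to an LLM
    # for a verdict; here the sketch simply escalates.
    return "escalate: " + ", ".join(flags)
```

The design choice is economic: regexes are nearly free per package, so the expensive LLM call is reserved for the small fraction of uploads that trip a heuristic, which is what makes reviewing an entire registry feed tractable.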
The key to humans keeping pace lies in maintaining a deep understanding of AI technologies and continuously adapting to new threats. Fear-mongering about AI’s potential is not a productive response to these emerging capabilities. It leads to over-regulation, tipping the balance of power in favor of threat actors who will inevitably use AI to exploit vulnerabilities and craft malicious packages in more sophisticated ways.
In his 1870 essay on Civilization, Ralph Waldo Emerson waxes eloquent on the discovery of electricity, and his exploration of mankind’s relationship to technology is remarkably prescient of our journey with AI today:
Now that is the wisdom of a man, in every instance of his labor, to hitch his wagon to a star, and see his chore done by the gods themselves. That is the way we are strong, by borrowing the might of the elements.
That era brought humanity inventions like the phonograph, telephone, and the incandescent light bulb, followed by the automobile in the next decade. Our chance to harness this moment in time for historic breakthroughs is just as significant. AI is now firmly embedded in the story of human progress, and we’re just getting started.