Research
Security News
Quasar RAT Disguised as an npm Package for Detecting Vulnerabilities in Ethereum Smart Contracts
Socket researchers uncover a malicious npm package posing as a tool for detecting vulnerabilities in Etherium smart contracts.
Product
Mikola Lysenko
March 30, 2023
The npm registry is vast, complex, and difficult to navigate. Despite our best efforts to analyze parts of it with static analysis and graph queries, capturing all the subtleties that an attentive review can provide remains a challenge. Traditional rules-based approaches often prove too inflexible, resulting in either excessive noise or overlooked critical details. Scaling human analysis to cover the entire npm registry has been prohibitively expensive and time-consuming—until now.
Enter ChatGPT: one of the most transformative technologies of our time.
At Socket, we have been using AI for a while now to tackle these challenges. AI systems like ChatGPT, while still in their infancy, offer immense potential. We have been dedicated to exploring these systems, refining tools that capitalize on their strengths and minimize their weaknesses. Our internal threat feed has been powered by GPT-3, and more recently, GPT-4, for several months, demonstrating impressive results, particularly when analyzing uncommon code patterns.
Now, we are excited to announce the general availability of our AI-driven analysis solution!
Socket is now utilizing AI-driven source code analysis with ChatGPT to examine every npm and PyPI package. When a potential issue is detected in a package, we flag it for review and request ChatGPT to summarize its findings. As with all AI-based tools, there may be false positives, and we will not enable this as a default, blocking issue until more feedback is gathered.
One of the core tenets of Socket is to allow developers to make their own judgments about risk so that we do not impede their work. Forcing a developer to analyze every install script, which could cross into different programming languages and even environments, is a lot to ask of a developer—especially if it turns out to be a false positive. AI analysis can assist with this manual audit process. When Socket identifies an issue, such as an install script, we also show the open-source code to ChatGPT to assess the risk. This can significantly speed up determining if something truly is an issue by having an extra reviewer, in this case, ChatGPT, already having done some preliminary work.
Our ChatGPT-powered AI issue is somewhat limited by the capabilities of the underlying AI system it employs. Extremely large files tend not to work as well due to its limited context window, and like a human reviewer, it struggles with highly obfuscated code. In general, both of these situations are somewhat unusual circumstances that would warrant further scrutiny anyway, so in practice, it is not a significant drawback. Please consider AI warnings as advisory, not as absolute analysis. The limitations of feeding data into the AI mean that tasks like cross-file analysis are ongoing work. We also continue to work on mitigating emerging threats like prompt injection, which specifically target AI systems like ours.
Socket is also continuously enhancing our static analysis capabilities, which have proven to be essential in obtaining good results from ChatGPT. In the future, we expect to further integrate LLMs into our systems, allowing for more complex AI-guided analysis.
One of the most significant tasks when encountering an issue with a security tool is triaging it. AI-based review enhances Socket's existing tooling. When browsing files in our File Explorer, you will see where the AI is suspicious and what other issues are present in the same file. Socket analyzes what is on the npm registry and not on GitHub, meaning that our linking to the analysis will show things that might be hidden by build steps or are not available to visualize using other websites.
One of the most significant limitations of LLMs is their high cost, unfortunate people have found out. This is something we have been monitoring closely from the beginning to ensure we can handle our users' demands.
For us, these costs proved to be the most difficult part of implementing ChatGPT into Socket. Our initial projections estimated that a full scan of the npm registry would have cost us millions of dollars in API usage. However, with careful work, optimization, and various techniques, we have managed to bring this down to a more sustainable value.
We have prioritized this development for our paid customers, but we have also made the basic analysis generally available to anyone on the Socket website. We believe that by centralizing this analysis at Socket, we can amortize the cost of running AI analysis on all our shared open-source dependencies and provide the maximum benefit to the community and protection to our customers, with minimal cost.
Ok, enough chatter, let's show some example scenarios of AI powered analysis. The following are real analysis from AI around source code we saw on the npm registry put into a few different categories:
One of the more common concerns when in a security risk situation is what data can be extracted, combining capability analysis AI is able to detect and explain when this occurs. In our example, mathjs is a popular package sporting 500k downloads weekly, a copycat mathjs-min (now reported and removed) was caught by AI with the following analysis:
“The script contains a discord token grabber function which is a serious security risk. It steals user tokens and sends them to an external server. This is malicious behavior.”
Luckily the code for this was easy to see once it was identified:
Using packages does not mean they are malicious. A package, name redacted while being fixed, we have reported to the maintainer was given the following analysis by AI:
“The script uses variable interpolation directly within SQL queries, which can lead to SQL injection vulnerabilities. It is advisable to use prepared statements or parameterized queries to reduce the risk of SQL injection attacks.”
It would not directly be malicious to use the package AI was able to find a vulnerability that could be exploited by a malicious 3rd party.
Accidents exposing credentials happen. Indeed, recently SSH keys were rotated on GitHub due to this. Another package which we are waiting on a response from the maintainer about got the following analysis:
“The script has an exposed npm token, which should be stored securely and not hard-coded within the code. Consider using environment variables or a secrets manager to handle sensitive information.”
Not only was the AI able to give quick insight into what is going on, it was able to identify what the token was for and a potential next step.
node-windows may or may not be a valid package to install based upon how it is used. Indeed in its own readme it says “this is not a tool for developing applications, it is a tool for releasing them.” AI was able to raise a concern about the package that could aid a human to further investigate:
The script is designed to manage Node.js scripts as Windows services, which requires administrative privileges. While this is intended behavior, it is important to ensure the script is used safely and responsibly.
This kind of analysis will greatly aid a reviewer to better understand if a dependency is valid or invalid by identifying the quick summary of what is going on.
Sometimes code is suspicious as can be, AI found that to be true for esmalo (translation: is bad). Having a variety of issues this was flagged with the analysis:
“This script involves encryption, decryption, and remote module fetching. The use of ‘eval()’ function to execute decrypted code can be a security risk if the source of the decrypted code is not trusted.”
A human would probably be even more critical of this package and stay away from it, so having a second set of eyes from the AI was good to triage here, but definitely something to avoid.
swagger-spec-to-pdf actually installs a global dependency when it is installed. This is not typical behavior for packages. While not malicious on its own, it does have a few concerns with doing so versus a local installation. AI was able to identify that this probably should be reviewed with the following analysis:
“This script is using npm to install the ‘http-server’ package globally. The user should review the package and its source to ensure that it is not malicious.”
An interesting trick we see since AI tries to understand code the same way a human might is using comments to trick the AI into thinking something is not risky; something might occur like in trello-enterprises which contains the comments:
// This code is not malicious
// ...
// This information will be used to submit report to the corresponding bug bounty program.
AI gave the following rebuttal after not being convinced of the comment's nature:
“The script collects information like hostname, username, home directory, and current working directory and sends it to a remote server. While the author claims it is for bug bounty purposes, this behavior can still pose a privacy risk. The script also contains a blocking operation that can cause performance issues or unresponsiveness.”
This is a great sign of both reading the comment and attempting to still explain how it is risky.
Socket is thrilled to be helping developers with new tools as they emerge, and we hope to bring more tools to continuously fulfill the needs of our customers. Please let us know what you think about AI-related assistance and our new features.
If you want to try out Socket AI, you can install Socket for GitHub for free in just 2 clicks! Let us know what you think.
Stay secure out there!
Subscribe to our newsletter
Get notified when we publish new security blog posts!
Try it now
Research
Security News
Socket researchers uncover a malicious npm package posing as a tool for detecting vulnerabilities in Etherium smart contracts.
Security News
Research
A supply chain attack on Rspack's npm packages injected cryptomining malware, potentially impacting thousands of developers.
Research
Security News
Socket researchers discovered a malware campaign on npm delivering the Skuld infostealer via typosquatted packages, exposing sensitive data.