cat-tail
Readme
Get a readable stream spanning an array of files from a given position
npm install cat-tail
var CatTail = require("cat-tail")
new CatTail(files, [options])

Instantiates a new CatTail object, which is a Readable stream. It immediately opens the files in the given files array, reading them in order and pushing their contents into the stream. The options argument takes the standard properties for readable streams, plus the following:

start: Starting byte offset to begin reading from in the first file.
end: Byte position in the last file where reading should stop.

Patches are welcome! This module uses ES2015. To compile the source, use npm run compile. All changes need test coverage. To run the tests, use npm test.
FAQs
Get a readable stream spanning an array of files
We found that cat-tail demonstrates an unhealthy version release cadence and low project activity: the last version was released a year ago. It has 1 open source maintainer collaborating on the project.