fs-capacitor
Filesystem-buffered, passthrough stream that buffers indefinitely rather than propagate backpressure from downstream consumers.
The fs-capacitor npm package is designed to handle file streams efficiently, particularly in the context of handling file uploads in Node.js applications. It provides a way to manage temporary files and streams, ensuring that resources are properly cleaned up after use.
Creating a Write Stream
This feature allows you to create a write stream where you can write data to a temporary file. The stream can be used to handle file uploads or other data streams efficiently.
const { WriteStream } = require('fs-capacitor');
const writeStream = new WriteStream();
writeStream.write('Hello, World!');
writeStream.end();
Reading from a Write Stream
This feature allows you to create a read stream from a write stream, enabling you to read the data that was written to the temporary file. This is useful for processing uploaded files or other streamed data.
const { WriteStream } = require('fs-capacitor');
const writeStream = new WriteStream();
writeStream.write('Hello, World!');
writeStream.end();
writeStream.createReadStream().pipe(process.stdout);
Handling Errors
This feature allows you to handle errors that occur during the streaming process. By listening to the 'error' event, you can catch and handle any issues that arise while writing to or reading from the stream.
const { WriteStream } = require('fs-capacitor');
const writeStream = new WriteStream();
writeStream.on('error', (err) => {
  console.error('Stream error:', err);
});
writeStream.write('Hello, World!');
writeStream.end();
Multer is a middleware for handling multipart/form-data, which is primarily used for uploading files. It is similar to fs-capacitor in that it handles file uploads, but it is more focused on integrating with Express.js applications and provides more features for handling different types of file uploads.
Busboy is a streaming parser for HTML form data for node.js. It is similar to fs-capacitor in that it handles file streams, but it is more low-level and provides more control over the parsing process. It is often used in conjunction with other libraries to handle file uploads.
Formidable is a Node.js module for parsing form data, especially file uploads. It is similar to fs-capacitor in that it handles file uploads, but it provides more features for parsing and handling different types of form data.
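As a rough sketch of that kind of pairing, a multipart parser can hand each incoming file stream to a capacitor so the request is drained immediately while processing is deferred. The busboy details below (the busboy({ headers }) constructor and the 'file'/'close' events) follow busboy 1.x and are assumptions, not part of fs-capacitor itself:
const http = require('http');
const busboy = require('busboy'); // assumed busboy 1.x API
const { WriteStream } = require('fs-capacitor');

http.createServer((req, res) => {
  const parser = busboy({ headers: req.headers });

  parser.on('file', (name, file, info) => {
    // Buffer the uploaded file to disk so the request can be drained
    // even if downstream processing is slow or happens later.
    const capacitor = new WriteStream();
    file.pipe(capacitor);

    // Consume the buffered copy whenever convenient; call capacitor.destroy()
    // once no more read streams are needed.
    capacitor.createReadStream().pipe(process.stdout);
  });

  parser.on('close', () => {
    res.end('upload received');
  });

  req.pipe(parser);
}).listen(3000);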
FS Capacitor is a filesystem buffer for finite node streams. It supports simultaneous read/write, and can be used to create multiple independent readable streams, each starting at the beginning of the buffer.
This is useful for file uploads and other situations where you want to avoid delays to the source stream, but have slow downstream transformations to apply:
import fs from "fs";
import http from "http";
import { WriteStream } from "fs-capacitor";
http.createServer((req, res) => {
  const capacitor = new WriteStream();
  const destination = fs.createWriteStream("destination.txt");

  // pipe data to the capacitor
  req.pipe(capacitor);

  // read data from the capacitor
  capacitor
    .createReadStream()
    .pipe(/* some slow Transform streams here */)
    .pipe(destination);

  // read data from the very beginning
  setTimeout(() => {
    capacitor.createReadStream().pipe(/* elsewhere */);

    // you can destroy a capacitor as soon as no more read streams are needed
    // without worrying if existing streams are fully consumed
    capacitor.destroy();
  }, 100);
});
It is especially useful for use cases like graphql-upload, where server code may need to stash earlier parts of a stream until later parts have been processed, and needs to attach multiple consumers at different times.
FS Capacitor creates its temporary files in the directory identified by os.tmpdir() and attempts to remove them:
- after writeStream.destroy() has been called and all read streams are fully consumed or destroyed
- before the process exits
Please do note that FS Capacitor does NOT release disk space as data is consumed, and therefore is not suitable for use with infinite streams or those larger than the filesystem.
FS Capacitor cleans up all of its temporary files before the process exits, by listening to the node process's exit event. This event, however, is only emitted when the process is about to exit as a result of either:
- the process.exit() method being called explicitly; or
- the Node.js event loop no longer having any additional work to perform.
When the node process receives a SIGINT, SIGTERM, or SIGHUP signal and there is no handler, it will exit without emitting the exit event.
Beginning in version 3, fs-capacitor will NOT listen for these signals. Instead, the application should handle these signals according to its own logic and call process.exit()
when it is ready to exit. This allows the application to implement its own graceful shutdown procedures, such as waiting for a stream to finish.
The following can be added to the application to ensure resources are cleaned up before a signal-induced exit:
function shutdown() {
  // Any sync or async graceful shutdown procedures can be run before exiting…
  process.exit(0);
}
process.on("SIGINT", shutdown);
process.on("SIGTERM", shutdown);
process.on("SIGHUP", shutdown);
WriteStream
extends stream.Writable
new WriteStream(options: WriteStreamOptions)
Create a new WriteStream instance.
.createReadStream(options?: ReadStreamOptions): ReadStream
Create a new ReadStream instance attached to the WriteStream instance.
Calling .createReadStream() on a released WriteStream will throw a ReadAfterReleasedError error.
Calling .createReadStream() on a destroyed WriteStream will throw a ReadAfterDestroyedError error.
As soon as a ReadStream ends or is closed (such as by calling readStream.destroy()), it is detached from its WriteStream.
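A brief sketch of what that looks like in practice, assuming the error classes are exported by the package as their names suggest:
const { WriteStream, ReadAfterDestroyedError } = require('fs-capacitor');

const capacitor = new WriteStream();
capacitor.end();
capacitor.destroy();

try {
  // The capacitor has already been destroyed, so this throws.
  capacitor.createReadStream();
} catch (err) {
  if (err instanceof ReadAfterDestroyedError) {
    console.error('cannot read: the capacitor was already destroyed');
  } else {
    throw err;
  }
}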
.release(): void
Release the WriteStream's claim on the underlying resources. Once called, destruction of the underlying resources is performed as soon as all attached ReadStreams are removed.
.destroy(error?: ?Error): void
Destroy the WriteStream and all attached ReadStreams. If error is present, attached ReadStreams are destroyed with the same error.
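A minimal sketch of how .release() and .destroy() differ in practice; the data written here is illustrative only:
const { WriteStream } = require('fs-capacitor');

const capacitor = new WriteStream();
capacitor.write('some buffered data');
capacitor.end();

const reader = capacitor.createReadStream();

// release(): the capacitor gives up its claim, but the underlying temporary
// file survives until every attached ReadStream has ended or been destroyed.
capacitor.release();

reader.on('data', (chunk) => console.log(chunk.toString()));
reader.on('end', () => {
  // No read streams remain, so the underlying resources are now cleaned up.
});

// destroy(), by contrast, tears down the capacitor and every attached
// ReadStream immediately, optionally passing the same error to each:
// capacitor.destroy(new Error('abort'));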
WriteStreamOptions
.highWaterMark?: number
Uses node's default of 16384 (16 KiB). Optional buffer size at which the writable stream will begin returning false. See node's docs for stream.Writable. For the curious, node has a guide on backpressure in streams.
.defaultEncoding
Uses node's default of utf8. Optional default encoding to use when no encoding is specified as an argument to stream.write(). See node's docs for stream.Writable. Possible values depend on the version of node, and are defined in node's buffer implementation.
.tmpdir
Uses node's os.tmpdir by default. This function returns the directory used by fs-capacitor to store file buffers, and is intended primarily for testing and debugging.
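For example, a WriteStream can be constructed with any of these options overridden; the values below are arbitrary and only illustrate the shape of the options object:
const os = require('os');
const { WriteStream } = require('fs-capacitor');

// All three options are optional; the values shown are illustrative only.
const capacitor = new WriteStream({
  highWaterMark: 64 * 1024,   // write() starts returning false after 64 KiB is queued
  defaultEncoding: 'utf8',    // encoding used when write() is called without one
  tmpdir: () => os.tmpdir(),  // function returning the directory for file buffers
});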
ReadStream
extends stream.Readable
ReadStreamOptions
.highWaterMark
Uses node's default of 16384 (16 KiB). Optional value to use as the readable stream's highWaterMark, specifying the number of bytes (for binary data) or characters (for strings) that will be buffered into memory. See node's docs for stream.Readable. For the curious, node has a guide on backpressure in streams.
.encoding
Uses node's default of utf8. Optional encoding to use when the stream's output is desired as a string. See node's docs for stream.Readable. Possible values depend on the version of node, and are defined in node's buffer implementation.
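As a small illustration, these options are passed to .createReadStream() when a string-mode or differently buffered reader is wanted; the values are illustrative only:
const { WriteStream } = require('fs-capacitor');

const capacitor = new WriteStream();
capacitor.write('Hello, World!');
capacitor.end();

// Both options are optional; the values shown are illustrative only.
const reader = capacitor.createReadStream({
  encoding: 'utf8',      // emit strings instead of Buffers
  highWaterMark: 1024,   // buffer up to 1 KiB (or 1024 characters in string mode)
});

reader.on('data', (chunk) => console.log(chunk));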
FAQs
We found that fs-capacitor demonstrates an unhealthy version release cadence and limited project activity because the last version was released a year ago. It has 1 open source maintainer collaborating on the project.