ah-fs.processor

Processes ah-fs data obtained from async resources related to file system operations.

npm install ah-fs.processor
Table of Contents (generated with DocToc)

- fs.readFile
- fs.createReadStream
- fs.writeFile
fs.readFile

ReadFileProcessor

Instantiates an fs.readFile data processor to process data collected via nodesource/ah-fs.

Parameters

- $0 Object
  - $0.includeActivities boolean? if true, the actual activities are appended to the output (optional, default false)

readFileProcessor.process

Processes the supplied async activities and splits them into groups and operations, each representing one full fs.readFile execution.

The returned value has a groups property which lists the ids of the async resources that were grouped together to form an operation, indexed by the id of the open resource. Thus groups is a map of sets. If no file read was encountered the groups are empty.

Additionally an operations property is included as well. Each operation represents one full fs.readFile execution. There will be one operation per group, and they are indexed by the corresponding open resource id as well.
An operation has the following properties:

fs.readFile specific Operation Properties

Data about the async resources that were part of the operation; by default only id and triggerId are included.

The information below is the same for all operations and thus is only mentioned here and linked from the documentation of all other processors.

Data about the lifetime of the operation:

- lifeCycle: contains three timestamps that detail when an operation was created, for how long it was alive and when it was destroyed: created, destroyed, and timeAlive, the difference between the destroyed and created timestamps, i.e. how long the operation's resources were alive. Each timestamp has two properties provided by utils.prettyNs: ms (a human readable string) and ns (the time in nanoseconds).

Data that links to the user code responsible for the operation occurring:

- createdAt: provides the line of code that called fs.readFile
- userFunctions: depending on the settings (see constructor docs) each resource will include its own array of userFunctions, or they are separated out into one property with duplicates merged. The latter is the default behavior. In either case userFunctions is an Array of Objects whose properties can be seen in the sample below: file, line, column, inferredName (used when name is not set), name, location, args (the err and information about the res of the operation with which the function was invoked) and propertyPaths.

The sample return value was created with default options.
{ groups: Map { 10 => Set { 10, 11, 12, 13 } },
operations:
Map {
10 => { lifeCycle:
{ created: { ms: '44.12ms', ns: 44119000 },
destroyed: { ms: '85.95ms', ns: 85955000 },
timeAlive: { ms: '41.84ms', ns: 41836000 } },
createdAt: 'at Test.<anonymous> (/Volumes/d/dev/js/async-hooks/ah-fs/test/read-one-file.js:36:6)',
open: { id: 10, triggerId: 1 },
stat: { id: 11, triggerId: 10 },
read: { id: 12, triggerId: 11 },
close: { id: 13, triggerId: 12 },
userFunctions:
[ { file: '/Volumes/d/dev/js/async-hooks/ah-fs/test/read-one-file.js',
line: 39,
column: 17,
inferredName: '',
name: 'onread',
location: 'onread (/Volumes/d/dev/js/async-hooks/ah-fs/test/read-one-file.js:39:17)',
args:
{ '0': null,
'1':
{ type: 'Buffer',
len: 6108,
included: 18,
val:
{ utf8: 'const test = requi',
hex: '636f6e73742074657374203d207265717569' } },
proto: 'Object' },
propertyPaths:
[ 'open.resource.context.callback',
'stat.resource.context.callback',
'read.resource.context.callback',
'close.resource.context.callback' ] } ] } } }
Returns Object information about fs.readFile operations with the structure outlined above.
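For orientation, here is a minimal usage sketch. It is an assumption, not the package's verified API: the export name ReadFileProcessor and the process(activities) call are inferred from the constructor and process descriptions above, and only the shape of the result follows the documented sample.

```js
// Hypothetical usage sketch -- export name and process(activities) signature are
// assumptions; only the result shape follows the documentation above.
const { ReadFileProcessor } = require('ah-fs.processor')

// Activities previously captured via nodesource/ah-fs (left empty here so the
// snippet runs as-is).
const activities = new Map()

const processor = new ReadFileProcessor({ includeActivities: false })
const { groups, operations } = processor.process(activities)

for (const [openId, op] of operations) {
  // lifeCycle timestamps are utils.prettyNs objects, e.g. { ms: '41.84ms', ns: 41836000 }
  console.log('fs.readFile (open id %d) %s, alive for %s',
    openId, op.createdAt, op.lifeCycle.timeAlive.ms)
}
console.log('groups:', groups)
```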
The minimum number of steps, each represented as an async resource, involved to execute fs.readFile. This can be used by higher-level processors to group activities, looking for larger operations first and then for operations involving fewer steps.

Steps are: open, stat, read+, close

Description of the operation: 'fs.readFile'.
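To illustrate the kind of ordering a higher-level processor might apply, here is a small sketch; the operationSteps property name and the processor list are hypothetical stand-ins, not the package's actual API.

```js
// Hypothetical sketch: match "larger" operations (more minimum steps) before smaller
// ones. The `operationSteps` property name and this list are assumptions.
const processors = [
  { name: 'fs.writeFile', operationSteps: 3 },         // open, write+, close
  { name: 'fs.readFile', operationSteps: 4 },          // open, stat, read+, close
  { name: 'fs.createReadStream', operationSteps: 4 }   // open, stream+, read+, close
]

const largestFirst = processors.slice().sort((a, b) => b.operationSteps - a.operationSteps)
console.log(largestFirst.map((p) => p.name))
// e.g. [ 'fs.readFile', 'fs.createReadStream', 'fs.writeFile' ]
```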
ReadFileOperation

Processes a group of async activities that represent an fs.readFile operation. It is used by the ReadFileProcessor as part of process.

Four operation steps are derived from the group, each providing some information about the operation in question. Each step is processed into part of the operation in the corresponding private method, i.e. _processOpen. These methods are documented below for information's sake; they should not be called directly, nor should you have a need to directly instantiate a ReadFileOperation in the first place.

Parameters

- group Map<Number, Set<Number>> the ids of the activities that were part of the operation
- includeActivities Boolean? if true, the activities are attached to each operation step (optional, default false)

readFileOperation._processOpen

The open resource tells us where in user code the fs.readFile originated, via the second frame of the stack trace, as well as when the operation was created. Additionally it has the same user functions attached as all the other resources.

Parameters

- info Object information about the open step, pre-processed by the ReadFileProcessor

readFileOperation._processStat

The stat resource gives us no interesting information. Therefore we just capture the id, triggerId and userFunctions and, if so desired, attach the activities.

Parameters

- info Object information about the stat step, pre-processed by the ReadFileProcessor

readFileOperation._processRead

The read resource gives us no interesting information. Therefore we just capture the id, triggerId and userFunctions and, if so desired, attach the activities.

Parameters

- info Object information about the read step, pre-processed by the ReadFileProcessor

readFileOperation._processClose

The main information we pull from the close resource is the destroy timestamp. Combined with the init timestamp of the open resource it allows us to deduce how long the file read took.

Parameters

- info Object information about the close step, pre-processed by the ReadFileProcessor
readFileOperation.summary

Returns the summary of processing the group into an operation.

The summary of all operations has a very similar structure, but includes some properties that are specific to this particular operation. The general properties lifeCycle and createdAt are documented as part of the ReadFileProcessor, so learn more there. The fs.readFile specific properties are documented with the methods that derive them:

- readFileOperation._processOpen
- readFileOperation._processStat
- readFileOperation._processRead
- readFileOperation._processClose

Parameters

- $0 Object options
  - $0.separateFunctions Boolean? when true, the user functions are separated out from the specific operations and attached as a userFunctions array directly to the returned result (optional, default true)
  - $0.mergeFunctions Boolean? if true, when a duplicate function is found in the separated functions Array they are merged into one while preserving all information from both versions. Note that this setting only takes effect if separateFunctions is true as well. (optional, default true)

Returns Object all important information about the current operation.
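To make the mergeFunctions option more concrete, here is a sketch of the kind of merge it describes: duplicate user functions are collapsed into one entry without losing information from either version. This is only an illustration; the merge key (location) and the fields handled are assumptions, not the package's actual implementation.

```js
// Illustrative sketch of merging duplicate user functions (not the package's code).
// Assumption: two entries describe the same function when their `location` matches;
// their `propertyPaths` are combined so no information from either version is lost.
function mergeUserFunctions (fns) {
  const byLocation = new Map()
  for (const fn of fns) {
    const seen = byLocation.get(fn.location)
    if (seen == null) {
      byLocation.set(fn.location, { ...fn, propertyPaths: [...fn.propertyPaths] })
    } else {
      for (const p of fn.propertyPaths) {
        if (!seen.propertyPaths.includes(p)) seen.propertyPaths.push(p)
      }
    }
  }
  return [...byLocation.values()]
}
```

This mirrors the sample output above, where the single merged onread entry carries the propertyPaths of the open, stat, read and close resources.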
fs.createReadStream

ReadStreamProcessor

Instantiates an fs.createReadStream data processor to process data collected via nodesource/ah-fs.

Parameters

- $0 Object
  - $0.includeActivities boolean? if true, the actual activities are appended to the output (optional, default false)

readStreamProcessor.process

Processes the supplied async activities and splits them into groups and operations, each representing one full fs.createReadStream execution.

The returned value has a groups property which lists the ids of the async resources that were grouped together to form an operation, indexed by the fd on which the read stream operated. Thus groups is a map of sets. If no file read stream was encountered the groups are empty.

Additionally an operations property is included as well. Each operation represents one full fs.createReadStream execution. There will be one operation per group, and they are indexed by the corresponding fd as well.

An operation has the following properties:

fs.createReadStream specific Operation Properties

Data about the async resources that were part of the operation; by default only id and triggerId are included.
The sample return value was created with default options.
{ groups: Map { 10 => Set { 10, 12, 13, 14, 16 } },
operations:
Map {
10 => { lifeCycle:
{ created: { ms: '1.60ms', ns: 1600000 },
destroyed: { ms: '14.33ms', ns: 14329000 },
timeAlive: { ms: '12.73ms', ns: 12729000 } },
createdAt: 'at Test.<anonymous> (/Volumes/d/dev/js/async-hooks/ah-fs/test/readstream-one-file.js:94:6)',
open: { id: 10, triggerId: 3 },
stream:
{ id: 14,
triggerId: 12,
path: '/Volumes/d/dev/js/async-hooks/ah-fs/test/readstream-one-file.js',
flags: 'r',
fd: 19,
objectMode: false,
highWaterMark: 65536,
pipesCount: 0,
defaultEncoding: 'utf8',
encoding: null },
reads:
[ { id: 12,
triggerId: 10,
timeSpent: { ms: '0.83ms', ns: 830000 } },
{ id: 13,
triggerId: 12,
timeSpent: { ms: '0.24ms', ns: 240000 } } ],
close: { id: 16, triggerId: 13 },
userFunctions:
[ { file: '/Volumes/d/dev/js/async-hooks/ah-fs/test/readstream-one-file.js',
line: 99,
column: 16,
inferredName: '',
name: 'onend',
location: 'onend (/Volumes/d/dev/js/async-hooks/ah-fs/test/readstream-one-file.js:99:16)',
args: null,
propertyPaths: [ 'stream.resource.args[0]._events.end[1]' ] },
{ file: '/Volumes/d/dev/js/async-hooks/ah-fs/test/readstream-one-file.js',
line: 98,
column: 17,
inferredName: '',
name: 'ondata',
location: 'ondata (/Volumes/d/dev/js/async-hooks/ah-fs/test/readstream-one-file.js:98:17)',
args: null,
propertyPaths: [ 'stream.resource.args[0]._events.data' ] } ] } } }
Returns Object information about fs.createReadStream operations with the structure outlined above.
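Since the result shape is fully described by the sample above, consuming it is straightforward; the sketch below (based only on that documented shape) totals the time spent in the individual reads of each operation.

```js
// Sketch based only on the documented result shape above. `operations` stands in for
// the Map returned by the read stream processor (left empty so the snippet runs as-is).
const operations = new Map()

for (const [key, op] of operations) {
  // Each entry in op.reads carries a prettyNs timeSpent ({ ms, ns }).
  const totalReadNs = op.reads.reduce((sum, read) => sum + read.timeSpent.ns, 0)
  console.log('%s (key %d): %d reads, ~%dms reading, alive for %s',
    op.stream.path, key, op.reads.length, totalReadNs / 1e6, op.lifeCycle.timeAlive.ms)
}
```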
The minimum number of steps, each represented as an async resource, involved to execute fs.createReadStream. This can be used by higher-level processors to group activities, looking for larger operations first and then for operations involving fewer steps.

Steps are: open, stream+, read+, close

Description of the operation: 'fs.createReadStream'.
ReadStreamOperation

Processes a group of async activities that represent an fs read stream operation. It is used by the ReadStreamProcessor as part of process.

Four operation steps are derived from the group, each providing some information about the operation in question. Each step is processed into part of the operation in the corresponding private method, i.e. _processOpen. These methods are documented below for information's sake; they should not be called directly, nor should you have a need to directly instantiate a ReadStreamOperation in the first place.

Parameters

- group Map<Number, Set<Number>> the ids of the activities that were part of the operation
- includeActivities Boolean? if true, the activities are attached to each operation step (optional, default false)

readStreamOperation._processOpen

An open doesn't have too much info, but we can glean two very important data points: when the operation was created and where in user code createReadStream was called.

Parameters

- info Object information about the open step, pre-processed by the ReadStreamProcessor
readStreamOperation._processTick

The ReadStream Tick gives us a lot of information. It has an args array with the ReadStream and its ReadableState, respectively.

The ReadStream provides us the path, flags and fd seen in the stream property of the sample above. All callbacks on the _events of the ReadStream have been removed, but are present inside the functions object (see below).

The ReadableState provides us the remaining stream properties seen in the sample, such as objectMode (true|false), highWaterMark, pipesCount, defaultEncoding and encoding.

The information extracted from the tick is attached to a stream property provided with the summary.

Parameters

- info Object information about the tick step, pre-processed by the ReadStreamProcessor

readStreamOperation._processRead

The read resource doesn't give us too much information. The stack traces originate in core and we don't see any registered user callbacks, as those are present on the stream instead. However we can count the number of reads that occurred and deduce how long each read took from the before and after timestamps.

Parameters

- info Object information about the read step, pre-processed by the ReadStreamProcessor
readStreamOperation._processClose

The main information we pull from the close resource is the destroy timestamp. Combined with the init timestamp of the open resource it allows us to deduce how long the read stream was active.

Parameters

- info Object information about the close step, pre-processed by the ReadStreamProcessor

readStreamOperation.summary

Returns the summary of processing the group into an operation.

The summary of all operations has a very similar structure, but includes some properties that are specific to this particular operation. The general properties lifeCycle and createdAt are documented as part of the ReadFileProcessor, so learn more there. The fs.createReadStream specific properties are documented with the methods that derive them:

- readStreamOperation._processOpen
- readStreamOperation._processTick
- readStreamOperation._processRead
- readStreamOperation._processClose
Parameters

- $0 Object options
  - $0.separateFunctions Boolean? when true, the user functions are separated out from the specific operations and attached as a userFunctions array directly to the returned result (optional, default true)
  - $0.mergeFunctions Boolean? if true, when a duplicate function is found in the separated functions Array they are merged into one while preserving all information from both versions. Note that this setting only takes effect if separateFunctions is true as well. (optional, default true)

Returns Object all important information about the current operation.
fs.writeFile

Sample initStack of a writeFile open, called as the first operation of fs.writeFile. In order to be sure this is a writeFile open we need to check the two topmost frames:

"at Object.fs.open (fs.js:581:11)", "at Object.fs.writeFile (fs.js:1155:6)", "at Test.<anonymous> (/Volumes/d/dev/js/async-hooks/ah-fs/test/write-one-file.js:28:6)"

Code at fs.js:581:
binding.open(pathModule._makeLong(path), ...

Code at fs.js:1155:
fs.open(path, flag, options.mode, function(openErr, fd) ...

The bottom frame has info about where the fs.writeFile call originated.

Sample initStack of a writeFile write, called after fs.open completes:

"at Object.fs.write (fs.js:643:20)", "at writeAll (fs.js:1117:6)", "at writeFd (fs.js:1168:5)", "at fs.js:1159:7", "at FSReqWrap.oncomplete (fs.js:117:15)"

Code at fs.js:643:
binding.writeBuffer(fd, buffer, offset, length, position, req);

Sample initStack of a writeFile close, called after the last fs.write completes:

"at Object.fs.close (fs.js:555:11)", "at fs.js:1131:14", "at FSReqWrap.wrapper [as oncomplete]"

Code at fs.js:555:
binding.close(fd, req);
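A sketch of the frame check this implies is shown below. The helper name is made up and the matching is deliberately loose; it merely illustrates inspecting the two topmost frames of an initStack like the samples above.

```js
// Illustrative helper (name and exact matching are assumptions): decide whether an
// initStack belongs to the open call issued by fs.writeFile by inspecting its two
// topmost frames, as in the sample stack above.
function looksLikeWriteFileOpen (initStack) {
  if (!Array.isArray(initStack) || initStack.length < 2) return false
  return /fs\.open \(fs\.js:/.test(initStack[0]) &&
         /fs\.writeFile \(fs\.js:/.test(initStack[1])
}

const sample = [
  'at Object.fs.open (fs.js:581:11)',
  'at Object.fs.writeFile (fs.js:1155:6)',
  'at Test.<anonymous> (/Volumes/d/dev/js/async-hooks/ah-fs/test/write-one-file.js:28:6)'
]
console.log(looksLikeWriteFileOpen(sample)) // true
```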
WriteFileProcessor

Instantiates an fs.writeFile data processor to process data collected via nodesource/ah-fs.

Parameters

- $0 Object
  - $0.includeActivities boolean? if true, the actual activities are appended to the output (optional, default false)

writeFileProcessor.process

Processes the supplied async activities and splits them into groups and operations, each representing one full fs.writeFile execution.

The returned value has a groups property which lists the ids of the async resources that were grouped together to form an operation, indexed by the id of the fs.open activity that was part of the fs.writeFile. Thus groups is a map of sets. If no file write was encountered the groups are empty.

Additionally an operations property is included as well. Each operation represents one full fs.writeFile execution. There will be one operation per group, and they are indexed by the corresponding open id as well.

An operation has the following properties:

fs.writeFile specific Operation Properties

Data about the async resources that were part of the operation; by default only id and triggerId are included.
The sample return value was created with default options.
{ groups: Map { 10 => Set { 10, 11, 12 } },
operations:
Map {
10 => { lifeCycle:
{ created: { ms: '24.49ms', ns: 24491000 },
destroyed: { ms: '33.96ms', ns: 33964000 },
timeAlive: { ms: '9.47ms', ns: 9473000 } },
createdAt: 'at Test.<anonymous> (/Volumes/d/dev/js/async-hooks/ah-fs/test/write-one-file.js:28:6)',
open: { id: 10, triggerId: 1 },
write: { id: 11, triggerId: 10 },
close: { id: 12, triggerId: 11 } } } }
Returns Object information about fs.writeFile operations with the structure outlined above.

The minimum number of steps, each represented as an async resource, involved to execute fs.writeFile. This can be used by higher-level processors to group activities, looking for larger operations first and then for operations involving fewer steps.

Steps are: open, write+, close

Description of the operation: 'fs.writeFile'.
WriteFileOperation

Processes a group of async activities that represent an fs write file operation. It is used by the WriteFileProcessor as part of process.

Three operation steps are derived from the group, each providing some information about the operation in question. Each step is processed into part of the operation in the corresponding private method, i.e. _processOpen. These methods are documented below for information's sake; they should not be called directly, nor should you have a need to directly instantiate a WriteFileOperation in the first place.

Parameters

- group Map<Number, Set<Number>> the ids of the activities that were part of the operation
- includeActivities Boolean? if true, the activities are attached to each operation step (optional, default false)

writeFileOperation._processOpen

The open resource tells us where in user code the fs.writeFile originated, via the second frame of the stack trace, as well as when the operation was created.

Parameters

- info Object information about the open step, pre-processed by the WriteFileProcessor
writeFileOperation._processWrite

The write resource gives us no interesting information. Therefore we just capture the id and triggerId and, if so desired, attach the activities.

Parameters

- info Object information about the write step, pre-processed by the WriteFileProcessor

writeFileOperation._processClose

The main information we pull from the close resource is the destroy timestamp. Combined with the init timestamp of the open resource it allows us to deduce how long the file write took.

Parameters

- info Object information about the close step, pre-processed by the WriteFileProcessor
writeFileOperation.summary

Returns the summary of processing the group into an operation.

The summary of all operations has a very similar structure, but includes some properties that are specific to this particular operation. The general properties lifeCycle and createdAt are documented as part of the WriteFileProcessor, so learn more there. The fs.writeFile specific properties are documented with the methods that derive them:

- writeFileOperation._processOpen
- writeFileOperation._processWrite
- writeFileOperation._processClose

Note this summary function takes no parameters (unlike the other Operations) since we don't find any user functions related to the write file operation and thus have nothing to process.

Returns Object all important information about the current operation.
fs.createWriteStream

WriteStreamProcessor

Instantiates an fs.createWriteStream data processor to process data collected via nodesource/ah-fs.

Parameters

- $0 Object
  - $0.includeActivities boolean? if true, the actual activities are appended to the output (optional, default false)
  - $0.separateFunctions Boolean? when true, the user functions are separated out from the specific resources and attached as a userFunctions array directly to the returned operations (optional, default true)

writeStreamProcessor.process

Processes the supplied async activities and splits them into groups and operations, each representing one full fs.createWriteStream execution.

The returned value has a groups property which lists the ids of the async resources that were grouped together to form an operation, indexed by the fd on which the write stream operated. Thus groups is a map of sets. If no file write stream was encountered the groups are empty.

Additionally an operations property is included as well. Each operation represents one full fs.createWriteStream execution. There will be one operation per group, and they are indexed by the corresponding fd as well.

An operation has the following properties:

fs.createWriteStream specific Operation Properties

Data about the async resources that were part of the operation; by default only id and triggerId are included.
The sample return value was created with default options.
{ groups: Map { 10 => Set { 14, 10, 16, 19 } },
operations:
Map {
10 => { lifeCycle:
{ created: { ms: '1.12ms', ns: 1123000 },
destroyed: { ms: '18.20ms', ns: 18205000 },
timeAlive: { ms: '17.08ms', ns: 17082000 } },
createdAt: 'at Test.<anonymous> (/Volumes/d/dev/js/async-hooks/ah-fs/test/read-stream-piped-into-write-stream.js:29:26)',
open: { id: 10, triggerId: 3 },
stream:
{ id: 16,
triggerId: 13,
path: '/dev/null',
flags: 'w',
fd: 19,
mode: 438 },
writes:
[ { id: 14,
triggerId: 13,
timeSpent: { ms: '0.14ms', ns: 139000 } } ],
close: { id: 19, triggerId: 15 },
userFunctions:
[ { file: '/Volumes/d/dev/js/async-hooks/ah-fs/test/read-stream-piped-into-write-stream.js',
line: 32,
column: 19,
inferredName: '',
name: 'onfinish',
location: 'onfinish (/Volumes/d/dev/js/async-hooks/ah-fs/test/read-stream-piped-into-write-stream.js:32:19)',
args: null,
propertyPaths: [ 'stream.resource.args[1].pipes._events.finish[1]' ] } ] } } }
Returns Object information about fs.createWriteStream operations with the structure outlined above.

Here we try our best to piece together the parts of a WriteStream: Open | Write+ | WriteStreamTick | Close. Since they aren't linked by a common file descriptor or similar, we rely on the async resource graph structure and the timestamps to take a best guess. We just don't have the data available to piece this together with 100% certainty.
Below is a sample of collected async resources with all but types, ids and trigger ids removed.

{ type: 'FSREQWRAP', id: 10, tid: 3 }  open, write stream, triggered by root
{ type: 'FSREQWRAP', id: 11, tid: 3 }  open, read stream, triggered by root
{ type: 'TickObject', id: 12, tid: 3 } read stream tick, triggered by root
{ type: 'FSREQWRAP', id: 13, tid: 11 } read, triggered by open of read stream
{ type: 'FSREQWRAP', id: 14, tid: 13 } write, triggered by read of read stream
{ type: 'FSREQWRAP', id: 15, tid: 13 } read, next chunk, triggered by first read
{ type: 'TickObject', id: 16, tid: 13 } stream tick, triggered by first read
{ type: 'FSREQWRAP', id: 18, tid: 15 } close read stream, triggered by last read
{ type: 'FSREQWRAP', id: 19, tid: 15 } close write stream, triggered by last read
We reason about that data as follows in order to piece together the WriteStream.
Write (id: 14) is triggered by read of read stream (id: 13). The same read triggers the last read (id: 15). That last read triggers the close of the write stream (id: 19).
Therefore we can connect the write stream write to the write stream close since they have a common parent in their ancestry (the first read of the read stream).
          -- Read2:15 -- WriteStream:Close:19
         /
Read1:13
         \
          -- WriteStream:Write:14
However I would imagine that this breaks down once we have one read stream piped into multiple write streams, as then the writes have the same Read parent.
There is no 100% way to get this right, but if we assume that the first write happens right after the opening of the write stream in the same context we can do the following.
We already know that the common parent of WriteStream:Write and WriteStream:Close is Read1:13. Therefore we find all WriteStream:Opens that share a parent with Read1:13. The ones with the closest parent win.
If we find more than one, we pick the one that was initialized closest to the WriteStream:Write timewise, assuming that we write to the stream immediately after opening it.
          -- ReadStream:Open:11 -- Read1:13 -- Read2:15 -- WriteStream:Close:19
         /                                 \
Parent:3                                    -- WriteStream:Write:14
         \
          -- WriteStream:Open:10
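The sketch below spells this heuristic out in code. It is an illustration of the reasoning above only, not the package's implementation: it assumes activities are plain objects with id, tid (trigger id) and an init timestamp, and it collapses the "closest parent" step into simply preferring the open whose init time is nearest to the write's.

```js
// Illustrative sketch of the heuristic described above (not the package's code).
// Assumption: `activities` maps id -> { id, tid, init } where `tid` is the trigger id
// and `init` an init timestamp in nanoseconds.

// Walk the trigger chain towards the root, collecting ancestor ids.
function ancestors (activities, id) {
  const seen = new Set()
  for (let a = activities.get(id); a != null; a = activities.get(a.tid)) seen.add(a.tid)
  return seen
}

// Pick the WriteStream:Open that best matches a given WriteStream:Write: only opens
// sharing an ancestor with the write qualify; ties are broken by the smallest
// init-time distance, assuming we write right after opening the stream.
function matchOpenForWrite (activities, writeId, openIds) {
  const writeAncestors = ancestors(activities, writeId)
  const write = activities.get(writeId)
  let best = null
  for (const openId of openIds) {
    const open = activities.get(openId)
    const shared = [...ancestors(activities, openId)].some((id) => writeAncestors.has(id))
    if (!shared) continue
    const distance = Math.abs(open.init - write.init)
    if (best == null || distance < best.distance) best = { openId, distance }
  }
  return best && best.openId
}
```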
The minimum number of steps, each represented as an async resource, involved to execute fs.createWriteStream. This can be used by higher-level processors to group activities, looking for larger operations first and then for operations involving fewer steps.

Steps are: open, stream, write+, close

Description of the operation: 'fs.createWriteStream'.
WriteStreamOperation

Processes a group of async activities that represent an fs write stream operation. It is used by the WriteStreamProcessor as part of process.

Four operation steps are derived from the group, each providing some information about the operation in question. Each step is processed into part of the operation in the corresponding private method, i.e. _processOpen. These methods are documented below for information's sake; they should not be called directly, nor should you have a need to directly instantiate a WriteStreamOperation in the first place.

Parameters

- group Map<Number, Set<Number>> the ids of the activities that were part of the operation
- includeActivities Boolean? if true, the activities are attached to each operation step (optional, default false)

writeStreamOperation._processOpen

An open doesn't have too much info, but we can glean two very important data points: when the operation was created and where in user code createWriteStream was called.

Parameters

- info Object information about the open step, pre-processed by the WriteStreamProcessor
writeStreamOperation._processTick

The WriteStream Tick gives us a lot of information. It is the same tick object that we process in the ReadStreamOperation to glean data about the read stream. It has an args array with the ReadStream and its ReadableState, respectively.

The ReadableState includes the WritableState, which the ah-fs pre-processor already plucked for us and added as the 3rd argument. Additionally it includes lots of functions, including user functions registered with the WriteStream, i.e. on('finish').

Ergo the WriteStream provides us, as part of the WritableState, the stream properties seen in the sample above, such as path, flags, fd and mode. All callbacks on the _events of the ReadStream and WriteStream have been removed, but are present inside the functions object (see below).

The information extracted from the tick is attached to a stream property provided with the summary.

Parameters

- info Object information about the tick step, pre-processed by the WriteStreamProcessor
writeStreamOperation._processWrite

The write resource doesn't give us too much information. The stack traces originate in core and we don't see any registered user callbacks, as those are present on the stream instead. However we can count the number of writes that occurred and deduce how long each write took from the before and after timestamps.

Parameters

- info Object information about the write step, pre-processed by the WriteStreamProcessor

writeStreamOperation._processClose

The main information we pull from the close resource is the destroy timestamp. Combined with the init timestamp of the open resource it allows us to deduce how long the write stream was active.

Parameters

- info Object information about the close step, pre-processed by the WriteStreamProcessor

License: MIT