The unified npm package is an interface for parsing, inspecting, transforming, and serializing content through syntax trees. It is commonly used to work with content in Markdown, HTML, and plain-text formats, and it is part of the unified collective, which provides a range of plugins and utilities for content processing.
Parsing Markdown to Syntax Trees
This feature allows you to parse Markdown content into an abstract syntax tree (AST) using the remark-parse plugin.
const unified = require('unified');
const markdown = require('remark-parse');
const processor = unified().use(markdown);
const tree = processor.parse('# Hello world');
console.log(tree);
Transforming Syntax Trees
This feature demonstrates transforming a Markdown AST to an HTML AST and then serializing it to an HTML string.
const unified = require('unified');
const markdown = require('remark-parse');
const remark2rehype = require('remark-rehype');
const html = require('rehype-stringify');
const processor = unified()
.use(markdown)
.use(remark2rehype)
.use(html);
const file = processor.processSync('# Hello world');
console.log(String(file));
Linting and Validating Markdown
This feature shows how to use unified with remark-lint to lint and validate Markdown content.
const unified = require('unified');
const markdown = require('remark-parse');
const remarkLint = require('remark-lint');
const report = require('vfile-reporter');
const processor = unified()
.use(markdown)
.use(remarkLint);
processor.process('# Hello world', function (err, file) {
console.error(report(err || file));
});
Compiling Markdown to HTML
This feature illustrates compiling Markdown to a fully formatted HTML document.
const unified = require('unified');
const markdown = require('remark-parse');
const remark2rehype = require('remark-rehype');
const doc = require('rehype-document');
const format = require('rehype-format');
const html = require('rehype-stringify');
const processor = unified()
.use(markdown)
.use(remark2rehype)
.use(doc)
.use(format)
.use(html);
const file = processor.processSync('# Hello world');
console.log(String(file));
markdown-it is a Markdown parser with a focus on speed and extensibility. It is similar to unified in that it can parse and render Markdown, but it does not use an AST or provide the same plugin ecosystem.
remarkable is another Markdown parser and renderer. It offers a similar feature set to markdown-it but also does not use an AST or have the extensive plugin system that unified offers.
showdown is a JavaScript Markdown to HTML converter. It is similar to unified in that it converts Markdown to HTML, but it does not provide a unified interface for parsing, transforming, and serializing content.
unified is an interface for processing text using syntax trees. It’s what powers remark, retext, and rehype, but it also allows for processing between multiple syntaxes.
npm:
npm install unified
unified is also available as an AMD, CommonJS, and globals module, uncompressed and compressed.
var unified = require('unified');
var markdown = require('remark-parse');
var toc = require('remark-toc');
var remark2rehype = require('remark-rehype');
var document = require('rehype-document');
var html = require('rehype-stringify');
process.stdin
.pipe(unified())
.use(markdown)
.use(toc)
.use(remark2rehype)
.use(document)
.use(html)
.pipe(process.stdout);
unified is an interface for processing text using syntax trees. Syntax trees are a representation understandable to programs. Those programs, called plug-ins, take these trees and modify them, amongst other things. To get to the syntax tree from input text, there’s a parser, and, to get from that back to text, there’s a compiler. This is the process of a processor.
                     ┌──────────────┐
                  ┌─ │ Transformers │ ─┐
                  ▲  └──────────────┘  ▼
                  └────────┐  ┌────────┘
                           │  │
            ┌────────┐     │  │     ┌──────────┐
  Input ──▶ │ Parser │ ──▶ Tree ──▶ │ Compiler │ ──▶ Output
            └────────┘              └──────────┘
Every processor implements another processor. To create a new processor, invoke another processor. This creates a new processor which is configured to function the same as its ancestor. But, when the descendant processor is configured in the future, that configuration does not change the ancestral processor.
Often, when processors are exposed from a library (for example, unified itself), they should not be modified directly, as that would change their behaviour for all users. Those processors are abstract, and they should be made concrete before they are used, by invoking them.
The syntax trees used in unified are Unist nodes: plain JavaScript objects with a type property. The semantics of those types are defined by other projects.
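For example, the Markdown heading # Hello world is typically represented by remark as a tree along these lines (a simplified sketch; positional information is omitted):
var tree = {
  type: 'root',
  children: [
    {
      type: 'heading',
      depth: 1,
      children: [{type: 'text', value: 'Hello world'}]
    }
  ]
};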
There are several utilities for working with these nodes.
The following projects process different syntax trees. They parse text to their respective syntax tree, and they compile their syntax trees back to text. These processors can be used as-is, or their parser and compilers can be mixed and matched with other plug-ins to allow processing between different syntaxes.
When processing documents, metadata is often gathered about that document. VFile is a virtual file format which stores data, and handles metadata for unified and its plug-ins.
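As a rough sketch (assuming the vfile package), a virtual file wraps contents and collects the messages plug-ins attach to it:
var vfile = require('vfile');

// Wrap contents in a virtual file; plug-ins store warnings and other
// metadata on it while the file is processed.
var file = vfile('# Hello world');

console.log(String(file)); // => '# Hello world'
console.log(file.messages); // => [] (filled by plug-ins such as remark-lint)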
There are several utilities for working with these files.
To configure a processor, invoke its use method, supply it a plug-in, and optionally settings.
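For example, settings can be passed as the second argument to use (a sketch; remark-parse accepted a commonmark option at the time):
var unified = require('unified');
var markdown = require('remark-parse');

// The second argument to `use` is handed to the plug-in as its options.
var processor = unified().use(markdown, {commonmark: true});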
unified provides a streaming interface which enables it to plug into transformations outside of itself. An example, which reads markdown as input, adds a table of contents, and writes it out, would be as follows:
var unified = require('unified');
var markdown = require('remark-parse');
var stringify = require('remark-stringify');
var toc = require('remark-toc');
process.stdin
.pipe(unified())
.use(markdown)
.use(toc)
.use(stringify)
.pipe(process.stdout);
Which, when given the following on stdin(4):
# Alpha
## Table of Content
## Bravo
Yields, on stdout(4):
# Alpha
## Table of Content
* [Bravo](#bravo)
## Bravo
Next to streaming, there’s also a programming interface, which gives access to processing metadata (such as lint messages), and supports passing through multiple files:
var unified = require('unified');
var markdown = require('remark-parse');
var lint = require('remark-lint');
var remark2retext = require('remark-retext');
var english = require('retext-english');
var equality = require('retext-equality');
var remark2rehype = require('remark-rehype');
var html = require('rehype-stringify');
var report = require('vfile-reporter');
unified()
.use(markdown)
.use(lint)
.use(remark2retext, unified().use(english).use(equality))
.use(remark2rehype)
.use(html)
.process('## Hey guys', function (err, file) {
console.error(report(err || file));
console.log(file.toString());
});
Which yields:
1:1-1:12 warning First heading level should be `1` first-heading-level
1:8-1:12 warning `guys` may be insensitive, use `people`, `persons`, `folks` instead gals-men
⚠ 3 warnings
<h2>Hey guys</h2>
The processors can be combined in two modes.
Bridge mode transforms the syntax tree from one flavour to another. Then, transformations are applied on that tree. Finally, the origin processor continues transforming the original syntax tree.
Mutate mode transforms the syntax tree from one flavour to another. Then, the origin processor continues transforming the destination syntax tree.
In the previous example (“Programming interface”), remark-retext is used in bridge mode: the origin syntax tree is kept after retext is finished; whereas remark-rehype is used in mutate mode: it sets a new syntax tree and discards the original.
processor()
Object describing how to process text.
Returns: Function — A new concrete processor which is configured to function the same as its ancestor. But, when the descendant processor is configured in the future, that configuration does not change the ancestral processor.
The following example shows how a new processor can be created (from the remark processor) and linked to stdin(4) and stdout(4).
var remark = require('remark');
process.stdin.pipe(remark()).pipe(process.stdout);
processor.use(plugin[, options])
Configure the processor to use a plug-in, and configure that plug-in with optional options.
Signatures:
* processor.use(plugin[, options])
* processor.use(plugins[, options])
* processor.use(list)
* processor.use(matrix)
Parameters:
* plugin (Plugin);
* options (*, optional) — Configuration for plugin;
* plugins (Array.<Function>) — List of plugins;
* list (Array) — plugin and options in an array;
* matrix (Array) — array where each entry is a list.
Returns: processor — The processor on which use is invoked.
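The signatures above can be sketched as follows, using two trivial made-up attachers:
var unified = require('unified');

// Two minimal attachers, purely for illustration.
function pluginA(processor, options) {}
function pluginB(processor, options) {}

unified()
  .use(pluginA)                                   // plugin
  .use(pluginA, {setting: true})                  // plugin with options
  .use([pluginA, pluginB])                        // plugins: a list of plug-ins
  .use([pluginB, {setting: true}])                // list: plugin and options in an array
  .use([[pluginA, {setting: true}], [pluginB]]);  // matrix: each entry is a list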
Plugin
A unified plugin changes the way the applied-on processor works: it can modify the processor (for example, by changing the parser or the compiler, or by linking it to other processors), transform the syntax tree representation of a file, or modify a file’s metadata. Plug-ins are a concept which materialise as attachers.
function attacher(processor[, options])
An attacher is the thing passed to use. It configures the processor and in turn can receive options.
Attachers can configure processors, such as by interacting with parsers and compilers, linking them to other processors, or specifying how the syntax tree is handled.
Parameters:
* processor (processor) — Context on which it’s used;
* options (*, optional) — Configuration.
Returns: transformer — Optional.
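A minimal attacher might look like this (a sketch; the plug-in name and option are made up):
var unified = require('unified');
var markdown = require('remark-parse');

// An attacher receives the processor and optional options, and may
// return a transformer.
function headingCounter(processor, options) {
  var label = (options || {}).label || 'headings';

  return function transformer(tree, file) {
    var count = tree.children.filter(function (node) {
      return node.type === 'heading';
    }).length;

    console.log(label + ': ' + count);
  };
}

unified().use(markdown).use(headingCounter, {label: 'Headings found'});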
function transformer(node, file[, next])
Transformers modify the syntax tree or metadata of a file. A transformer is a function which is invoked each time a file is passed through the transform phase. If an error occurs (either because it is thrown, returned, rejected, or passed to next), the process stops.
The transformation process in unified is handled by trough; see its documentation for the exact semantics of transformers.
Returns:
* Error — Can be returned to stop the process;
* Node — Can be returned, in which case further transformations and stringifys are performed on the new tree;
* Promise — If a promise is returned, the function is asynchronous, and must be resolved (optionally with a Node) or rejected (optionally with an Error).
function next(err[, tree[, file]])
If the signature of a transformer includes next (third argument), the function may finish asynchronously, and must invoke next().
Parameters:
* err (Error, optional) — Stop the process;
* node (Node, optional) — New syntax tree;
* file (VFile, optional) — New virtual file.
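For example, a transformer may finish asynchronously either by accepting next or by returning a Promise (a sketch):
// Callback style: accept `next` as the third parameter and invoke it
// when done, optionally with an error, a new tree, and a new file.
function callbackTransformer(tree, file, next) {
  setTimeout(function () {
    next(null, tree, file);
  }, 10);
}

// Promise style: return a promise and resolve it (optionally with a new
// tree), or reject it with an Error to stop the process.
function promiseTransformer(tree, file) {
  return new Promise(function (resolve) {
    setTimeout(function () {
      resolve(tree);
    }, 10);
  });
}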
processor.parse(file|value[, options])
Parse text to a syntax tree.
Parameters:
* file (VFile);
* value (string) — String representation of a file;
* options (Object, optional) — Configuration given to the parser.
Returns: Node — Syntax tree representation of input.
processor.Parser
A constructor handling the parsing of text to a syntax tree. It’s instantiated by the parse phase in the process with a VFile, settings, and the processor. The instance must expose a parse method which is invoked without arguments, and must return a syntax tree representation of the VFile.
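As a sketch of that contract (the line-based format and names here are made up), a custom Parser could be hooked up through an attacher:
// A parser for a made-up format: every line of the file becomes a node
// of the hypothetical type 'line'.
function LineParser(file, settings, processor) {
  this.file = file;
}

LineParser.prototype.parse = function () {
  return {
    type: 'root',
    children: String(this.file).split('\n').map(function (value) {
      return {type: 'line', value: value};
    })
  };
};

// The attacher exposes the constructor on the processor.
function lineSyntax(processor) {
  processor.Parser = LineParser;
}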
processor.stringify(node[, file|value][, options])
Compile a syntax tree to text.
Parameters:
* node (Node);
* file (VFile, optional);
* value (string, optional) — String representation of a file;
* options (Object, optional) — Configuration given to the compiler.
Returns: string — String representation of the syntax tree.
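For example, with remark-parse providing the Parser and remark-stringify providing the Compiler, parse and stringify can be used as a pair (a sketch):
var unified = require('unified');
var markdown = require('remark-parse');
var stringify = require('remark-stringify');

var processor = unified().use(markdown).use(stringify);

// Parse text to a syntax tree, then compile the tree back to text.
var tree = processor.parse('# Hello world');
console.log(processor.stringify(tree)); // => '# Hello world\n'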
processor.Compiler
A constructor handling the compilation of a syntax tree to text. It’s instantiated by the stringify phase in the process with a VFile, settings, and the processor. The instance must expose a compile method which is invoked with the syntax tree, and must return a string representation of that syntax tree.
processor.run(node[, file|value][, done])
Transform a syntax tree by applying plug-ins to it. If asynchronous plug-ins are configured, an error is thrown if done is not supplied.
Parameters:
* node (Node);
* file (VFile, optional);
* value (string, optional) — String representation of a file;
* done (Function, optional).
Returns: Node — The given syntax tree.
function done(err[, node, file])
Invoked when transformation is complete. Either invoked with an error, or a syntax tree and a file.
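A sketch of run with a done callback, using remark-toc as the configured transformer:
var unified = require('unified');
var markdown = require('remark-parse');
var toc = require('remark-toc');

var processor = unified().use(markdown).use(toc);
var tree = processor.parse('# Alpha\n\n## Table of Contents\n\n## Bravo\n');

// Apply the configured transformers (here, remark-toc) to the tree.
processor.run(tree, function (err, transformedTree, file) {
  if (err) throw err;
  console.log(transformedTree.children.length);
});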
processor.process(file|value[, options][, done])
Process the given representation of a file as configured on the processor. The process invokes parse, run, and stringify internally. If asynchronous plug-ins are configured, an error is thrown if done is not supplied.
Parameters:
* file (VFile);
* value (string) — String representation of a file;
* options (Object, optional) — Configuration for both the parser and compiler;
* done (Function, optional).
Returns: VFile — Virtual file with modified contents.
function done(err, file)
Invoked when the process is complete. Invoked with a fatal error, if any, and the VFile.
Parameters:
* err (Error, optional) — Fatal error;
* file (VFile).
processor.write(chunk[, encoding][, callback])
Note: Although the interface is compatible with streams, all data is currently buffered and passed through in one go. This might be changed later.
Write data to the in-memory buffer.
Parameters:
* chunk (Buffer or string);
* encoding (string, defaults to utf8);
* callback (Function) — Invoked on successful write.
Returns: boolean — Whether the write was successful (currently, always true).
processor.end()
Signal that writing is complete. Passes all arguments to a final write, and starts the process (using, when available, options given to pipe).
Events:
* data (string) — When the process was successful, triggered with the compiled file;
* error (Error) — When the process was unsuccessful, triggered with the fatal error;
* warning (VFileMessage) — Each message created by the plug-ins in the process is triggered and passed separately.
Returns: boolean — Whether the write was successful (currently, always true).
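As a rough sketch of the buffered streaming interface, write and end can be combined with pipe:
var unified = require('unified');
var markdown = require('remark-parse');
var remark2rehype = require('remark-rehype');
var html = require('rehype-stringify');

var processor = unified()
  .use(markdown)
  .use(remark2rehype)
  .use(html);

// Pipe the compiled output to stdout, buffer some chunks of markdown,
// then signal that writing is complete to start the process.
processor.pipe(process.stdout);
processor.write('# Hello');
processor.write(' world');
processor.end();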
processor.pipe(stream[, options])
Note: This does not pass all processed data (e.g., from loose process() calls) to the destination stream. There’s one process created internally especially for streams; only data piped into the processor is piped out.
Pipe data streamed into the processor, processed, to the destination stream. Optionally also set the configuration for how the data is processed. Calls Stream#pipe with the given arguments under the hood.
Parameters:
* stream (WritableStream);
* options (Object, optional) — Configuration for process and stream.pipe.
Returns: WritableStream — The given stream.
processor.data(key[, value])
Get or set information in an in-memory key-value store accessible to all phases of the process. An example is a list of HTML elements which are self-closing (i.e., do not need a closing tag), which is needed when parsing, transforming, and compiling HTML.
Parameters:
* key (string) — Identifier;
* value (*, optional) — Value to set. Omit if getting key.
Returns: if setting, the processor on which data is invoked; if getting, the value at key.
The following example shows how to get and set information:
var unified = require('unified');
console.log(unified().data('alpha', 'bravo').data('alpha'))
Yields:
bravo
processor.abstract()
Turn a processor into an abstract processor. Abstract processors are meant to be extended, and not to be configured or processed directly (as concrete processors are).
Once a processor is abstract, it cannot be made concrete again. But, a new concrete processor functioning just like it can be created by invoking the processor.
Returns: Processor — The processor on which abstract is invoked.
The following example, index.js, shows how rehype prevents extensions to itself:
var unified = require('unified');
var parse = require('rehype-parse');
var stringify = require('rehype-stringify');
module.exports = unified().use(parse).use(stringify).abstract();
The example below, a.js, shows how that processor can be used to create a command line interface which reformats HTML passed on stdin(4) and outputs it on stdout(4).
var rehype = require('rehype');
process.stdin.pipe(rehype()).pipe(process.stdout);
The example below, b.js, shows a similar-looking example which operates on the abstract rehype interface. If this were allowed it would result in unexpected behaviour, so an error is thrown. This is invalid:
var rehype = require('rehype');
process.stdin.pipe(rehype).pipe(process.stdout);
Yields:
~/index.js:118
throw new Error(
^
Error: Cannot pipe into abstract processor.
To make the processor concrete, invoke it: use `processor()` instead of `processor`.
at assertConcrete (~/index.js:118:13)
at Function.<anonymous> (~/index.js:135:7)
...
at Object.<anonymous> (~/b.js:76:15)
...