What is es-module-lexer?
The es-module-lexer package is designed to perform lexical analysis of JavaScript modules to identify import and export statements. It is particularly useful for tools that need to analyze or transform ES module syntax, such as bundlers, compilers, and code analysis tools.
What are es-module-lexer's main functionalities?
Lexical Analysis
This feature allows you to perform lexical analysis on a string containing ES module source code. The `parse` function returns two arrays: one for the import statements and one for the export statements found in the source code.
import { init, parse } from 'es-module-lexer';

(async () => {
  // Wait for the lexer to initialize before calling parse
  await init;
  const source = `import { a } from 'module-a';`;
  // parse returns the import and export descriptors found in the source
  const [imports, exports] = parse(source);
  console.log(imports);
  console.log(exports);
})();
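Each import descriptor carries offsets into the original source, so the specifier can be sliced back out directly. A brief sketch (the s and e fields are the ones documented in the lexer usage further down this page):

import { init, parse } from 'es-module-lexer';

(async () => {
  await init;
  const source = `import { a } from 'module-a';`;
  const [imports] = parse(source);
  // s and e are the start and end offsets of the module specifier
  console.log(source.substring(imports[0].s, imports[0].e)); // "module-a"
})();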
Other packages similar to es-module-lexer
acorn
Acorn is a small, fast JavaScript parser written entirely in JavaScript. It provides a simple interface for parsing JavaScript code and generating abstract syntax trees (ASTs). While es-module-lexer focuses specifically on ES module syntax, Acorn is a more general-purpose parser that can handle the full range of JavaScript features.
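For contrast, a parser like Acorn returns a complete ESTree AST that you walk to locate module records, rather than a flat list of source offsets. A minimal sketch (acorn 8+ options; the source string is just an illustration):

import { parse } from 'acorn';

const source = `import { a } from 'module-a';`;
// Acorn builds a full AST; import information is found by walking the
// program body rather than reading offset arrays.
const ast = parse(source, { ecmaVersion: 2022, sourceType: 'module' });
for (const node of ast.body) {
  if (node.type === 'ImportDeclaration') {
    console.log(node.source.value); // "module-a"
  }
}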
cherow
Cherow is a fast, standards-compliant, self-hosted JavaScript parser with error recovery. It aims to parse according to the ECMAScript specification. Cherow can be seen as an alternative to es-module-lexer with a focus on compliance and error recovery, but it is not limited to module syntax analysis.
ES Module Lexer
JS module syntax lexer used in es-module-shims.
Very small (< 500 lines) and fast ES module lexer.
The output interfaces use minification-friendly names (for example, s and e for the start and end offsets).
Usage
Note: this module is exposed as an ES module build only (`lexer.js` contains `export default function analyze (source) { ... }`).
npm install es-module-lexer
Using node --experimental-modules:
import analyze from 'es-module-lexer';

const source = `
  import { a } from 'asdf';
  export var p = 5;
  export function q () {
  };
  // Comments provided to demonstrate edge cases
  import /*comment!*/ ('asdf');
  import /*comment!*/.meta.asdf;
`;

try {
  var [imports, exports] = analyze(source);
}
catch (e) {
  console.log('Parsing failure');
}
// Returns "asdf" -- the module specifier of the static import
source.substring(imports[0].s, imports[0].e);
// Returns "p,q" -- exports is the list of exported names
exports.toString();
// Dynamic imports have a "d" index other than -1 (the offset where the dynamic import starts)
imports[1].d !== -1;
// Spans the dynamic import's argument expression ('asdf', with its quotes)
source.substring(imports[1].s, imports[1].e);
// Spans from the start of the dynamic import to its argument
source.substring(imports[1].d, imports[1].s);
// import.meta is indicated by d === -2
imports[2].d === -2;
// Spans the import.meta expression in the source
source.substring(imports[2].s, imports[2].e);
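These offsets are what make the lexer practical for build tools: import specifiers can be rewritten by splicing the original source rather than regenerating it from an AST. A rough sketch of that pattern (rewriteSpecifiers and resolve are illustrative names, not part of the lexer's API):

// Illustrative helper (not part of es-module-lexer): rewrite each static
// import specifier in place using a caller-supplied resolve function.
function rewriteSpecifiers (source, imports, resolve) {
  let out = '';
  let last = 0;
  for (const imp of imports) {
    // Skip dynamic imports and import.meta (d !== -1) for brevity
    if (imp.d !== -1) continue;
    out += source.slice(last, imp.s) + resolve(source.slice(imp.s, imp.e));
    last = imp.e;
  }
  return out + source.slice(last);
}

// e.g. rewriteSpecifiers(source, imports, spec => '/node_modules/' + spec);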
Benchmarks
Benchmarks can be run with npm run bench.
Current results:
bench/samples/d3.js (497K)
> Cold: 55ms
> Warm: 7ms (average of 25 runs)
bench/samples/d3.min.js (268K)
> Cold: 13ms
> Warm: 5ms (average of 25 runs)
bench/samples/magic-string.js (35K)
> Cold: 4ms
> Warm: 0ms (average of 25 runs)
bench/samples/magic-string.min.js (20K)
> Cold: 0ms
> Warm: 0ms (average of 25 runs)
bench/samples/rollup.js (881K)
> Cold: 27ms
> Warm: 13ms (average of 25 runs)
bench/samples/rollup.min.js (420K)
> Cold: 8ms
> Warm: 8ms (average of 25 runs)
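To get a rough warm-run number on your own machine outside the bench harness, a simple timing sketch (the sample path is taken from the list above; performance comes from node:perf_hooks):

import { init, parse } from 'es-module-lexer';
import { readFileSync } from 'node:fs';
import { performance } from 'node:perf_hooks';

(async () => {
  await init;
  const source = readFileSync('bench/samples/d3.js', 'utf8');
  parse(source); // warm up the lexer on the sample first
  const start = performance.now();
  parse(source);
  console.log(`warm parse: ${(performance.now() - start).toFixed(1)}ms`);
})();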
Limitations
The lexing approach is designed to deal with the full language grammar including RegEx / division operator ambiguity through backtracking and paren / brace tracking.
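For example, an occurrence of the word import inside a regular expression literal is not reported as an import. A small sketch of that behavior (using the init/parse form from the first example on this page rather than the analyze form above):

import { init, parse } from 'es-module-lexer';

(async () => {
  await init;
  // The "import" inside the regex literal is not module syntax, so only
  // the real import declaration below it is reported.
  const source = `
    const re = /import { x } from 'nowhere'/g;
    import { real } from 'actual-module';
  `;
  const [imports] = parse(source);
  console.log(imports.length);                               // 1
  console.log(source.substring(imports[0].s, imports[0].e)); // "actual-module"
})();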
The only limitation of the reduced parser is that the "exports" list may not correctly gather all export identifiers in the following edge cases:
export var a = 'asdf', q = z;
export var { a: b } = asdf;
The above cases are handled gracefully in that the lexer keeps parsing without error; it just won't detect all of the export names in these declarations.
License
MIT