What is @lerna/exec?
@lerna/exec is a part of the Lerna monorepo management toolset. It allows you to execute shell commands in the context of each package in a Lerna-managed monorepo. This can be useful for running scripts, building, testing, or performing other tasks across multiple packages in a consistent manner.
What are @lerna/exec's main functionalities?
Execute Shell Commands
This feature allows you to run a shell command in each package managed by Lerna. For example, `lerna exec -- npm run build` will run the `npm run build` command in each package.
lerna exec -- <command>
Filter Packages
You can filter the packages on which to run the command using the `--scope` flag. For example, `lerna exec --scope my-package -- npm test` will run `npm test` only in the `my-package` package.
lerna exec --scope <package-name> -- <command>
Parallel Execution
This feature allows you to run commands in parallel across all packages. For example, `lerna exec --parallel -- npm install` will run `npm install` in all packages simultaneously.
lerna exec --parallel -- <command>
Other packages similar to @lerna/exec
npm-run-all
npm-run-all is a CLI tool for running multiple npm scripts in parallel or sequentially. It is not specifically designed for monorepos, but it can be used to run scripts across multiple packages by chaining commands (see the sketch after this list).
concurrently
concurrently is a package that allows you to run multiple commands concurrently. It is useful for running multiple npm scripts at the same time, but it does not have the monorepo-specific features that @lerna/exec provides.
nx
Nx is a smart, fast, and extensible build system with first-class monorepo support and powerful integrations. It offers more advanced features compared to @lerna/exec, such as task scheduling, caching, and more.
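For comparison, here is a rough sketch of approximating a cross-package build with the first two tools. The root-level script names and workspace paths are hypothetical, and neither tool understands package topology the way Lerna does:

$ npm-run-all --serial build:pkg-a build:pkg-b
$ concurrently "npm --workspace packages/pkg-a run build" "npm --workspace packages/pkg-b run build"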
@lerna/exec
Execute an arbitrary command in each package
Install lerna for access to the lerna CLI.
Usage
$ lerna exec -- <command> [..args]
$ lerna exec -- rm -rf ./node_modules
$ lerna exec -- protractor conf.js
Run an arbitrary command in each package.
A double-dash (--) is necessary to pass dashed flags to the spawned command, but is not necessary when all the arguments are positional.
The name of the current package is available through the environment variable LERNA_PACKAGE_NAME:
$ lerna exec -- npm view \$LERNA_PACKAGE_NAME
You may also run a script located in the root directory, even in a complicated directory structure, through the environment variable LERNA_ROOT_PATH:
$ lerna exec -- node \$LERNA_ROOT_PATH/scripts/some-script.js
Options
lerna exec accepts all filter flags.
$ lerna exec --scope my-component -- ls -la
The commands are spawned in parallel, using the concurrency given (except with --parallel). The output is piped through, so it is not deterministic.
If you want to run the command in one package after another, use it like this:
$ lerna exec --concurrency 1 -- ls -la
--stream
Stream output from child processes immediately, prefixed with the originating
package name. This allows output from different packages to be interleaved.
$ lerna exec --stream -- babel src -d lib
--parallel
Similar to --stream, but completely disregards concurrency and topological sorting, running a given command or script immediately in all matching packages with prefixed streaming output. This is the preferred flag for long-running processes such as babel src -d lib -w run over many packages.
$ lerna exec --parallel -- babel src -d lib -w
Note: It is advised to constrain the scope of this command when using the --parallel flag, as spawning dozens of subprocesses may be harmful to your shell's equanimity (or maximum file descriptor limit, for example). YMMV
--no-bail
$ lerna exec --no-bail <command>
By default, lerna exec will exit with an error if any execution returns a non-zero exit code. Pass --no-bail to disable this behavior, executing in all packages regardless of exit code.
--no-prefix
Disable package name prefixing when output is streaming (--stream or --parallel).
This option can be useful when piping results to other processes, such as editor plugins.
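For example, to stream raw, unprefixed output (the trailing command is arbitrary):

$ lerna exec --stream --no-prefix -- <command>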
--profile
Profiles the command executions and produces a performance profile which can be analyzed using DevTools in a Chromium-based browser (direct url: devtools://devtools/bundled/devtools_app.html). The profile shows a timeline of the command executions where each execution is assigned to an open slot. The number of slots is determined by the --concurrency option and the number of open slots is determined by --concurrency minus the number of ongoing operations. The end result is a visualization of the parallel execution of your commands.
The default location of the performance profile output is at the root of your project.
$ lerna exec --profile -- <command>
Note: Lerna will only profile when topological sorting is enabled (i.e. without --parallel and --no-sort).
--profile-location <location>
You can provide a custom location for the performance profile output. The path provided will be resolved relative to the current working directory.
$ lerna exec --profile --profile-location=logs/profile/ -- <command>
6.0.0 (2022-10-12)
Super fast, modern task-runner implementation for lerna run
As of version 6.0.0, Lerna will now delegate the implementation details of the lerna run command to the super fast, modern task-runner (powered by Nx) by default.
If for some reason you wish to opt in to the legacy task-runner implementation details (powered by p-map and p-queue), you can do so by setting "useNx": false in your lerna.json. (Please let us know via a GitHub issue if you feel the need to do that, however, as in general the new task-runner should just work how you expect it to as a lerna user.)
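A minimal lerna.json sketch with the opt-out in place (the version value here is illustrative; keep your existing fields as they are):

{
  "version": "independent",
  "useNx": false
}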
Interactive configuration for lerna run caching and task pipelines via the new lerna add-caching command
When using the modern task-runner implementation described above, the way to get the most out of it is to tell it about the outputs of your various scripts, and also about any relationships that exist between them (such as needing to run the build script before the test script, for example).
Simply run lerna add-caching and follow the instructions in order to generate all the relevant configuration for your workspace.
You can learn more about the configuration it generates here: https://lerna.js.org/docs/concepts/task-pipeline-configuration
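As a rough illustration only, the generated task-pipeline configuration lives in nx.json and might look something like the following; the target name, dependency relationship, and output path are assumptions that depend on your answers to the prompts:

{
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["{projectRoot}/dist"]
    }
  }
}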
Automatic loading of .env files in lerna run with the new task-runner implementation
By default, the modern task-runner powered by Nx will automatically load .env files for you. You can set --load-env-files to false if you want to disable this behavior for any reason.
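For example, one way to opt out (the boolean --load-env-files=false syntax is an assumption based on the flag name above):

$ lerna run build --load-env-files=false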
For more details about which .env files will be loaded by default, please see: https://nx.dev/recipes/environment-variables/define-environment-variables
Obsolete options in lerna run with the new task-runner implementation
There are certain legacy options for lerna run which are no longer applicable to the modern task-runner. Please see full details about those flags, and the reason behind their obsolescence, here:
https://lerna.js.org/docs/lerna6-obsolete-options
New lerna repair command
When configuration changes over time as new versions of a tool are published, it can be tricky to keep up with the changes, and it's possible to miss out on optimizations as a result.
When you run the new command lerna repair, lerna will execute a series of code migrations/codemods which update your workspace to the latest and greatest best practices for workspace configuration.
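Usage follows the other top-level commands:

$ lerna repair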
The actual codemods which run will be added to over time, but for now one you might see run on your workspace is the removal of any explicit "useNx": true references from lerna.json files, because that is no longer necessary and it's cleaner not to have it.
We are really excited about this feature and how we can use it to help users keep their workspaces up to date.