Shellac
A tool to make invoking a series of shell commands safer & better-looking.
Usage
import shellac from 'shellac'
test('morty', async () =>
  await shellac`
    $ echo "End-to-end CLI testing made nice"
    $ node -p "5 * 9"
    stdout >> ${(answer) => expect(Number(answer)).toBeGreaterThan(40)}
  `)
Syntax
Basic commands
await shellac`
// To execute a command, use $
$ my command here
// If you want the output piped through to process.stdout/err, use $$
$$ echo "This command will print to terminal"
// Use stdout/err and >> to check the output of the last command
stdout >> ${(last_cmd_stdout) => {
  expect(last_cmd_stdout).toBe('This command will print to terminal')
}}
`
Returning output
Shellac returns the stdout/err of the last command in a block as { stdout, stderr }:
const { stdout, stderr } = await shellac`
$ echo "This command will run but its output will be lost"
$ echo "The last command executed returns its stdout/err"
`
expect(stdout).toBe('The last command executed returns its stdout/err')
You can also return named captures from a series of commands:
const { current_sha, current_branch } = await shellac`
$ git rev-parse --short HEAD
stdout >> current_sha
$ git rev-parse --abbrev-ref HEAD
stdout >> current_branch
`
Or even parse the output as JSON before doing so:
const { tsconfig } = await shellac`
$ cat package.json
json >> ${(package_json) => ... }
$ cat tsconfig.json
json >> tsconfig
`
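A json >> callback receives the already-parsed object, so you can assert on its fields directly. For instance (a minimal sketch, using jest's expect like the other examples and assuming your package.json has a name field):
await shellac`
$ cat package.json
json >> ${(pkg) => expect(typeof pkg.name).toBe('string')}
`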
Branching
You can use if ${ ... } { ... } else { ... } to run conditionally based on the value of an interpolation:
await shellac`
if ${process.env.CLEAN_RUN} {
  $ yarn create react-app
} else {
  $ git reset --hard
  $ git clean -df
}
$$ npx fab init -y
// ...
`
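The interpolated condition can be any JavaScript expression, not just an environment variable. Here is a minimal sketch that branches on the current platform (the echoed strings are illustrative, and it assumes stdout >> works inside an if block just as it does inside an in block):
await shellac`
if ${process.platform === 'darwin'} {
  $ echo "on macOS"
  stdout >> ${(out) => expect(out).toBe('on macOS')}
} else {
  $ echo "not on macOS"
  stdout >> ${(out) => expect(out).toBe('not on macOS')}
}
`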
Changing directory
You can either use an in directive:
await shellac`
// Change directory for the duration of the block:
in ${__dirname} {
  $ pwd
  stdout >> ${(cwd) => expect(cwd).toBe(__dirname)}
}
// By default we run in process.cwd()
$ pwd
stdout >> ${(cwd) => expect(cwd).toBe(process.cwd())}
// Relative paths work too:
$ mkdir -p subdir
in ./subdir {
  $ pwd
  stdout >> ${(cwd) => expect(cwd).toBe(path.join(process.cwd(), 'subdir'))}
  $ mkdir -p nesting-ok
  in "nesting-ok" {
    $ pwd
    stdout >> ${(cwd) =>
      expect(cwd).toBe(path.join(process.cwd(), 'subdir', 'nesting-ok'))}
  }
}
`
If the whole script needs to run in one place, use shellac.in(dir):
import tmp from 'tmp-promise'
const dir = await tmp.dir()
await shellac.in(dir.path)`
$ pwd
stdout >> ${(cwd) => expect(cwd).toBe(dir.path)}
`
Background tasks
Shellac lets you run processes in the background, capturing the pid and providing a promise to wait on:
const { pid, promise } = await shellac.bg`
$$ for i in 1 2 3; do echo $i; sleep 1; done
$$ echo DONE
`
console.log(`Currently running process: ${pid}`)
const { stdout } = await promise
expect(stdout).toBe(`DONE`)
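You don't have to await the promise straight away, so a background script can keep running while a foreground shellac call does other work. A minimal sketch built only from the pieces shown above:
const { promise } = await shellac.bg`
$ sleep 2
$ echo finished
`
// do other (foreground) work while the background script runs
await shellac`
$ echo "foreground work"
stdout >> ${(out) => expect(out).toBe('foreground work')}
`
// then wait for the background script and check its last command's output
const { stdout } = await promise
expect(stdout).toBe('finished')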
Setting environment variables
By default, shellac passes through the PATH environment variable and nothing else. You can override this by calling .env() with a map of keys to values:
await shellac.env({ ENV_VAR: 'value' })`
$ echo $ENV_VAR
stdout >> ${(stdout) => expect(stdout).toBe('value')}
`
This can be chained with .in() and .bg, although .bg must go last as it has a different return signature:
await shellac.in(tmp_dir).env({ ENV_VAR: 'value' }).bg`
$ sleep 1
$ echo $ENV_VAR
`
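Because .bg goes last, the chained call resolves to { pid, promise } rather than { stdout, stderr }. A sketch of consuming it (reusing the script above; tmp_dir is assumed to already exist):
const { pid, promise } = await shellac.in(tmp_dir).env({ ENV_VAR: 'value' }).bg`
$ sleep 1
$ echo $ENV_VAR
`
// the promise resolves once the background script finishes, with the last command's output
const { stdout } = await promise
expect(stdout).toBe('value')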
To pass through values from process.env, we recommend combining shellac with just-pick:
import pick from 'just-pick'
const { stdout } = await shellac.env(
pick(process.env, ['EDITOR', 'TMPDIR'])
)`
$ env
`
Async
Use the await declaration to invoke & wait for some JS inline with your script. It works great when Bash doesn't quite do what you need.
import fs from 'fs-extra'
await shellac.in(cwd)`
await ${async () => {
  await fs.writeFile(path.join(cwd, 'bigfile.dat'), huge_data)
}}
$ ls -l
stdout >> ${(files) => expect(files).toMatch('bigfile.dat')}
`
Interpolated commands
Inside a $ command you can use string interpolation like normal:
await shellac.in(cwd)`
$ echo "${JSON.stringify({ current_sha, current_branch })}" > git_info.json
`
These can even be promises or async functions:
const getAllPackageNames = async () => {
  // e.g. return a space-separated list of package names (implementation elided)
}
await shellac.in(cwd)`
// You can pass a promise and it will be awaited
$ yarn link ${getAllPackageNames()}
// ...
// Or pass an async function and shellac will call and await it
$ yarn unlink ${async () => getAllPackageNames()}
`
Persistence between commands
A shellac call invokes a single instance of bash for the duration, so changes you make are reflected later in the script:
await shellac`
$ echo $LOL
stdout >> ${(lol) => expect(lol).toBe('')}
$ LOL=boats
$ echo $LOL
stdout >> ${(lol) => expect(lol).toBe('boats')}
`
Note: the current working directory is only configured by shellac.in() or the in ${} { ... } directive:
const cwd = __dirname
const parent_dir = path.resolve(cwd, '..')
await shellac.in(cwd)`
// Normal behaviour
$ pwd
stdout >> ${(pwd) => expect(pwd).toBe(cwd)}
// Has no effect on the remaining commands
$ cd ..
$ pwd
stdout >> ${(pwd) => expect(pwd).toBe(cwd)}
// If you want to change dir use in {}
in ${parent_dir} {
  $ pwd
  stdout >> ${(pwd) => expect(pwd).toBe(parent_dir)}
}
// Or do it on a single line
$ cd .. && pwd
stdout >> ${(pwd) => expect(pwd).toBe(parent_dir)}
// Joining commands with ; also works
$ cd ..; pwd
stdout >> ${(pwd) => expect(pwd).toBe(parent_dir)}
`
Non-zero exit codes
Just wrap your command in an exits block if something is going to return a non-zero exit code:
await shellac`
$ touch a.file
$ rm a.file
exits {
  $ rm a.file
}
exitcode >> ${(code) => expect(code).toBe(1)}
stderr >> ${(stderr) => expect(stderr).toContain('No such file or directory')}
`
Since verifying an exit code is so common, you can use an exits(code) block instead:
await shellac`
exits(2) {
  $ node -e "process.exit(2)"
}
`
Note: an exits block can have multiple lines, but every line is asserted to return the specified exit code.
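For example, a single block can wrap several failing commands, and each one is checked individually (a minimal sketch; both commands below are expected to exit with code 1 on a typical system):
await shellac`
exits(1) {
  // each line in this block is asserted to exit with code 1
  $ rm does-not-exist.file
  $ node -e "process.exit(1)"
}
`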
All these examples are valid, since // single-line comments are ignored as expected.
Example
Works great with ts-jest:
import shellac from 'shellac'
describe('my CLI tool', () => {
  it('should do everything I need', async () => {
    await shellac`
      $ echo "Hello, world!"
      stdout >> ${(echo) => {
        expect(echo).toBe('Hello, world!')
      }}
      $ rm -rf working-dir
      $ mkdir -p working-dir/example
      $ cp -R fixtures/run-1/* working-dir/example
      await ${async () => {
        // generate some more test data
      }}
      in ${'working-dir/example'} {
        $ ls -l
        stdout >> ${(files) => {
          expect(files).toMatch('package.json')
        }}
        $ yarn
        $$ run-app
      }
    `
  })
})
Using CommonJS, import it like this:
const test = require('ava')
const shellac = require('shellac').default
test('plugin should be installable', async (t) => {
  await shellac`
    $ echo "Hello, world!"
    stdout >> ${(echo) => {
      t.is(echo, 'Hello, world!')
    }}
  `
})
Snippets
Use double-$ ($$) for logging while the test runs:
shellac.in(cwd)`
$$ ls -al
`
is the same as:
shellac.in(cwd)`
$ ls -al
stdout >> ${console.log}
`
Confirm a file is present:
shellac`
$ ls -l
stdout >> ${(files) => expect(files).toMatch('fab.zip')}
`
Contributing
To hack on the parser & source, run:
yarn
yarn dev
This will build the Parser (using reghex & babel) and the Runtime (using typescript) and watch for changes. Then, in another terminal:
yarn test --watch
Add a test for what you're about to add, then hack the source until it passes!
Acknowledgements
@kitten for reghex, which is genuinely incredible and the only reason this library is possible at all.
@superhighfives for coming up with the name!
exactly, bats, Expect, cram, and aruba for prior art.