# pentf - Parallel End-To-End Test Framework
pentf runs end-to-end tests (with or without web browsers, emails, and/or direct HTTP requests) in a highly parallel manner, so that tests bound by client CPU can run while other tests are waiting for an email to arrive or for slow external servers to answer.
Tests are written in plain JavaScript, typically using node's built-in `assert`. You can use any other assertion framework too; a test is simply an async function which throws an exception to indicate test failure.
Browser tests using puppeteer benefit from special support, such as isolation of parallel tests, screenshots of test failures, and a number of helper functions, for example to wait for text to become visible.
Depending on the environment (you can set up configurations to run the same tests against dev, stage, prod, etc.), tests can be skipped or marked as expected to fail; the latter supports test-driven development, where you write a test before fixing a bug or implementing a feature.
A locking system prevents two tests, or the same test on two different machines, from accessing a shared resource at the same time, e.g. a test account.
You can review test results in a PDF report.
## Installation

```shell
npm i --save-dev pentf puppeteer
```
## Usage

pentf can be used as a library (a standalone binary is also planned). Create a script named `run` in the directory of your tests, and fill it like this:

```javascript
#!/usr/bin/env node
require('pentf').main({
    rootDir: __dirname,
    description: 'Test my cool application',
});
```

Make the file executable with `chmod a+x run`; from then on, type `./run` to execute all tests. You may also want to have a look at the options below.
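Once the script is in place, typical invocations look like this (a sketch; `"login"` is a hypothetical test name, and all flags are documented under Options below):

```shell
./run                  # run all tests against the default (local) environment
./run -e stage         # run against the stage environment
./run -f "login" -v    # run only tests matching "login", with verbose output
```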
## Writing tests

Plop a new `.js` file into `tests/`. Its name will be the test's name, and it should export an async `run` function, like this:
```javascript
const assert = require('assert');
const {getMail} = require('pentf/email');
const {newPage, closePage} = require('pentf/browser_utils');
const {fetch} = require('pentf/net_utils');
const {makeRandomEmail} = require('pentf/utils');

async function run(config) {
    const email = makeRandomEmail(config, 'pentf_example');
    const start = new Date();
    const response = await fetch(config, 'https://api.tonie.cloud/v2/users', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({
            locale: 'en',
            email: email,
            password: 'Secret123',
            acceptedGeneralTerms: true,
            acceptedPrivacyTerms: true,
        }),
    });
    assert.equal(response.status, 201);
    assert((await response.json()).jwt);

    const mail = await getMail(config, start, email, 'Your Toniecloud confirmation link');
    assert(mail);

    const page = await newPage(config);
    await page.goto('https://meine.tonies.de/');
    await closePage(page);
}

module.exports = {
    run,
    description: 'pentf test example',
    // Skip this test on the prod environment
    skip: config => config.env === 'prod',
    // On this environment, a failure is expected (and a pass is reported)
    expectedToFail: config => (config.env === 'alwaysbroken') ? 'Known to be broken here' : false,
    // Shared resources this test locks while it runs
    resources: ['toniebox_1234', 'another_resource'],
};
```
Note that while the above example tests a webpage with puppeteer and uses pentf's native support for HTTP requests (in `net_utils`) and emails (in `email`), tests can be anything; they just have to return a promise that rejects (i.e. an async function that throws) if the test fails.
Have a look in the API documentation for various helper functions.
## Configuration

pentf is designed to be run against different configurations, e.g. local/dev/stage/prod. Create a JSON file in the `config` subdirectory for each environment. You can also add a programmatic configuration by passing a function `defaultConfig` to `pentf.main`; see pentf's own `run` script for an example.
The keys are up to you; for example you probably want to have a main entry point. Predefined keys are:
### imap

If you are using the `pentf/email` module to fetch and test emails, configure your IMAP connection here, like this:

```json
"imap": {
    "user": "user@example.com",
    "password": "secret",
    "host": "mail.example.com",
    "port": 993,
    "tls": true
}
```
### rejectUnauthorized

Set to `false` to skip certificate verification in TLS connections.
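Putting it together, a config file such as `config/stage.json` might look like this (a sketch; `base_url` is an example key of your own, and only `imap` and `rejectUnauthorized` have predefined meaning):

```json
{
    "base_url": "https://staging.example.com",
    "rejectUnauthorized": true,
    "imap": {
        "user": "user@example.com",
        "password": "secret",
        "host": "mail.example.com",
        "port": 993,
        "tls": true
    }
}
```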
## Options

```
-h, --help            Show this help message and exit.
-e YOUR_ENVIRONMENTS, --env YOUR_ENVIRONMENTS
                      The environment to test against. Default is local.
--version             Print version of tests and test framework and exit.
```
### Output

```
-v, --verbose         Let tests output diagnostic details
-q, --quiet           Do not output test status
--no-clear-line, --ci
                      Never clear the current output line (as if output is not a tty)
--print-config        Output the effective configuration and exit.
-c, --print-curl      Print curl commands for each HTTP request
-I REGEXP, --ignore-errors REGEXP
                      Do not output error messages matching the regular expression.
                      Example: -I "\(TOC-[0-9]+\)"
-E, --expect-nothing  Ignore expectedToFail attributes on tests
```
### Writing results to disk

```
-J, --json            Write test results as a JSON file.
--json-file FILE.json
                      JSON file to write to. Defaults to results.json.
-H, --html            Write test results as an HTML file.
--html-file FILE.html
                      HTML file to write a report to. Defaults to results.html.
--pdf                 Write test results as a PDF file. (Now enabled by default)
--no-pdf              Do not write a PDF report with test results.
--pdf-file FILE.pdf   PDF file to write a report to. Defaults to results.pdf.
-M, --markdown        Write test results as a Markdown file.
--markdown-file FILE.md
                      Markdown file to write a report to. Defaults to results.md.
--load-json INPUT.json
                      Load test results from JSON (instead of executing tests)
```
### Test selection

```
-f REGEXP, --filter REGEXP
                      Regular expression to match names of tests to run
-b REGEXP, --filter-body REGEXP
                      Run only tests whose full code is matched by this regular expression
-l, --list            List all tests that would be run and exit
-a, --all, --include-slow-tests
                      Run tests that take a very long time
```
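A common pattern is to check which tests a filter would select before running them (a sketch; `"^signup"` is a hypothetical test name prefix, and `./run` is the script created in the Usage section):

```shell
./run -f "^signup" -l          # list matching tests without running them
./run -f "^signup" -e stage    # then run them against the stage environment
```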
### Email

```
--keep-emails         Keep generated emails instead of deleting them
--email-verbose       Log all IMAP commands and responses
```
### puppeteer browser tests

```
-V, --visible         Make browser tests visible (i.e. not headless)
--no-screenshots      Do not take screenshots of browser failures
--screenshot-directory DIR
                      Directory to write screenshots to (default: ./screenshots)
-s MS, --slow-mo MS   Wait this many milliseconds after every call to the virtual browser
-k, --keep-open       Keep browser sessions open in case of failures. Implies -V.
--devtools            Start browser with devtools open. Implies -V
--devtools-preserve   Configure devtools to preserve logs and network requests upon
                      navigation. Implies --devtools
--extensions [EXTENSION_DIR [EXTENSION_DIR ...]]
                      Load unpacked browser extensions
```
### Test runner

```
-C COUNT, --concurrency COUNT
                      Maximum number of tests to run in parallel. 0 to run without a pool,
                      sequentially. Defaults to 10.
-S, --sequential      Do not run tests in parallel (same as -C 0)
--fail-fast           Abort once a test fails
--print-tasks         Output all tasks that the runner would perform, and exit
--exit-zero           Terminate with exit code 0 (success) even if tests fail. (Exit codes
                      != 0 are still emitted in cases of internal crashes)
```
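For local debugging, runner settings like these can help (a sketch; `./run` is the script created in the Usage section):

```shell
./run -C 4 --fail-fast    # at most 4 tests in parallel, abort on the first failure
./run -S -v               # run sequentially with diagnostic output
```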
### Locking

```
-L, --no-locking      Completely disable any locking of resources between tests.
--locking-verbose     Output status messages about locking
--list-conflicts      Show which tasks conflict on which resources, and exit immediately
--manually-lock RESOURCES
                      Externally lock the specified comma-separated resources for 60s
                      before the test
--list-locks, --list-external-locks
                      List (external) locks and exit
--clear-locks, --clear-external-locks
                      Clear all external locks and exit
--no-external-locking
                      Disable external locking (via a lockserver)
--external-locking-url URL
                      Override URL of lockserver
--display-locking-client
                      Display the locking client ID we would use if we would lock
                      something now
```
## License
MIT. Patches welcome!