Run Lighthouse and Pa11y audits directly in your Cypress E2E test suites
Why cypress-audit?
The tools we can use nowadays to verify the quality of our applications are awesome. They help us get a huge amount of confidence about what we ship in production and alert us when some kind of regression occurs.
- Cypress has made business-oriented workflow verification super easy and fun
- Lighthouse has provided incredible tools to verify the performance of an application
- Pa11y provides multiple tools to check the accessibility state of our applications in a wonderful way
The problem is that they run in their own context and with their own internal tricks for authentication and page browsing.
The idea of cypress-audit is to unify all of this by providing Cypress custom commands so that you can use these tools directly inside your Cypress tests, right next to your custom shortcuts for navigation and login.
Usage
Installation
To make cypress-audit work in your project, follow these three steps:
- In your favorite terminal:
$ yarn add -D cypress-audit
# or, with npm
$ npm install --save-dev cypress-audit
- In the cypress/plugins/index.js file:
const { lighthouse, pa11y, prepareAudit } = require("cypress-audit");
module.exports = (on, config) => {
on("before:browser:launch", (browser = {}, launchOptions) => {
prepareAudit(launchOptions);
});
on("task", {
lighthouse: lighthouse(),
pa11y: pa11y(),
});
};
- In the cypress/support/commands.js file:
import "cypress-audit/commands";
In your code
After completing the Installation section, you can use the cy.lighthouse and cy.pa11y commands inside your tests.
it("should pass the audits", function () {
cy.lighthouse();
cy.pa11y();
});
cy.pa11y()
You can call cy.pa11y(opts) with opts being any of the pa11y options.
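For example, here is a minimal sketch relying on the standard, hideElements, and ignore options from the pa11y documentation (the selector and rule code below are purely illustrative):
it("should pass the accessibility audit with custom pa11y options", function () {
  cy.pa11y({
    standard: "WCAG2AA", // accessibility standard to test against
    hideElements: "#third-party-widget", // CSS selector of elements to exclude from the audit
    ignore: ["WCAG2AA.Principle1.Guideline1_4.1_4_3.G18.Fail"], // specific rule codes to skip
  });
});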
cy.lighthouse()
Good to know beforehand
Lighthouse is a tool that is supposed to run against a production bundle when computing the performance and best-practices metrics. However, Cypress itself suggests running tests against a development environment. While this seems a bit counter-intuitive, you can rely on the Cypress projects feature to run dedicated test suites against production bundles and get quick feedback on (or prevent regressions of) these metrics.
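As a rough sketch, one way to do this (assuming your audit specs live in a dedicated cypress/integration/audits folder and your production bundle is already served at the URL below) is to point a separate run at the production build:
$ cypress run --config baseUrl=https://staging.example.com --spec "cypress/integration/audits/**/*"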
Thresholds per test
If you don't provide any argument to the cy.lighthouse command, the test will fail if at least one of your metrics is under 100.
You can make assertions on the different metrics by passing an object as an argument to the cy.lighthouse command:
it("should verify the lighthouse scores with thresholds", function () {
cy.lighthouse({
performance: 85,
accessibility: 100,
"best-practices": 85,
seo: 85,
pwa: 100,
});
});
If the Lighthouse analysis returns scores that are below the ones set as arguments, the test will fail.
You can also make assertions on only certain metrics. For example, the following test will only verify the performance metric and the first-contentful-paint audit:
it("should verify the lighthouse scores ONLY for performance and first contentful paint", function () {
cy.lighthouse({
performance: 85,
"first-contentful-paint": 2000,
});
});
This test will fail only when the performance score reported by Lighthouse is under 85 or when the first-contentful-paint exceeds 2000 ms.
Globally set thresholds
While I would recommend making per-test assertions, you can also define global thresholds inside the cypress.json file, as follows:
{
"lighthouse": {
"performance": 85,
"accessibility": 50,
"best-practices": 85,
"seo": 85,
"pwa": 50
}
}
Note: These global thresholds are overridden by the per-test ones.
Passing options and config to Lighthouse directly
You can also pass arguments directly to the underlying Lighthouse module using the second and third parameters of the command:
const thresholds = {
  // the thresholds you want to assert, e.g. performance: 85
};
const lighthouseOptions = {
  // any flag accepted by the Lighthouse module (second argument of its Node API)
};
const lighthouseConfig = {
  // any Lighthouse configuration object (third argument of its Node API)
};
cy.lighthouse(thresholds, lighthouseOptions, lighthouseConfig);
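For instance, here is a hedged sketch of a desktop-oriented run; the formFactor, screenEmulation, and onlyCategories fields follow the shape documented by Lighthouse, but double-check them against the Lighthouse version bundled with your cypress-audit release:
const thresholds = {
  performance: 85,
};

const desktopOptions = {
  formFactor: "desktop", // run the desktop profile instead of the mobile default
  screenEmulation: { disabled: true }, // keep the browser's own viewport
};

const desktopConfig = {
  extends: "lighthouse:default",
  settings: { onlyCategories: ["performance"] }, // skip the other categories to speed up the run
};

cy.lighthouse(thresholds, desktopOptions, desktopConfig);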
Available metrics
With Lighthouse 6, we can now make assertions on both categories and audits.
The categories are the familiar Lighthouse scores, each graded between 0 and 100:
- performance
- accessibility
- best-practices
- seo
- pwa
The audits are lower-level measurements, such as the first meaningful paint, and their values are mostly expressed in milliseconds (see the example after this list):
- first-contentful-paint
- largest-contentful-paint
- first-meaningful-paint
- load-fast-enough-for-pwa
- speed-index
- estimated-input-latency
- max-potential-fid
- server-response-time
- first-cpu-idle
- interactive
- mainthread-work-breakdown
- bootup-time
- network-rtt
- network-server-latency
- metrics
- uses-long-cache-ttl
- total-byte-weight
- dom-size
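For instance, here is a hedged sketch combining a category score with time-based audit budgets (the numeric values below are illustrative, not recommendations):
it("should enforce both category and audit budgets", function () {
  cy.lighthouse({
    performance: 85, // category score, between 0 and 100
    "largest-contentful-paint": 2500, // audit budget, in milliseconds
    interactive: 5000, // audit budget, in milliseconds
  });
});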
Accessing the raw reports
When using custom tools, it can be convenient to directly access the raw information provided by the specific tool, for example to generate a custom report.
To do so, you can pass a callback function to the task initializer; when an audit runs, the callback is triggered with the raw information.
In the cypress/plugins/index.js file:
const { lighthouse, pa11y, prepareAudit } = require("cypress-audit");
module.exports = (on, config) => {
on("before:browser:launch", (browser = {}, launchOptions) => {
prepareAudit(launchOptions);
});
on("task", {
lighthouse: lighthouse((lighthouseReport) => {
console.log(lighthouseReport);
}),
pa11y: pa11y((pa11yReport) => {
console.log(pa11yReport);
}),
});
};
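As a follow-up, here is a hedged sketch that persists the raw Lighthouse report to disk from the plugins file (the file name is arbitrary); since plugins run in Node, the fs module is available there:
const fs = require("fs");
const { lighthouse, pa11y, prepareAudit } = require("cypress-audit");

module.exports = (on, config) => {
  on("before:browser:launch", (browser = {}, launchOptions) => {
    prepareAudit(launchOptions);
  });

  on("task", {
    lighthouse: lighthouse((lighthouseReport) => {
      // persist the raw report so it can be inspected or turned into a custom report later
      fs.writeFileSync("lighthouse-report.json", JSON.stringify(lighthouseReport, null, 2));
    }),
    pa11y: pa11y(),
  });
};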