web-vitals
Overview
The web-vitals library is a tiny (~2K, brotli'd), modular library for measuring all the Web Vitals metrics on real users, in a way that accurately matches how they're measured by Chrome and reported to other Google tools (e.g. Chrome User Experience Report, PageSpeed Insights, Search Console's Speed Report).
The library supports all of the Core Web Vitals as well as a number of other metrics that are useful in diagnosing real-user performance issues.
Core Web Vitals
Other metrics
Install and load the library
The web-vitals library uses the buffered flag for PerformanceObserver, allowing it to access performance entries that occurred before the library was loaded.
This means you do not need to load this library early in order to get accurate performance data. In general, this library should be deferred until after other user-impacting code has loaded.
From npm
You can install this library from npm by running:
npm install web-vitals
[!NOTE]
If you're not using npm, you can still load web-vitals via <script> tags from a CDN like unpkg.com. See the load web-vitals from a CDN usage example below for details.
There are a few different builds of the web-vitals library, and how you load the library depends on which build you want to use.
For details on the difference between the builds, see which build is right for you.
1. The "standard" build
To load the "standard" build, import modules from the web-vitals package in your application code (as you would with any npm package and node-based build tool):
import {onLCP, onINP, onCLS} from 'web-vitals';
2. The "attribution" build
Measuring the Web Vitals scores for your real users is a great first step toward optimizing the user experience. But if your scores aren't good, the next step is to understand why they're not good and work to improve them.
The "attribution" build helps you do that by including additional diagnostic information with each metric to help you identify the root cause of poor performance as well as prioritize the most important things to fix.
The "attribution" build is slightly larger than the "standard" build (by about 1.5K, brotli'd), so while the code size is still small, it's only recommended if you're actually using these features.
To load the "attribution" build, change any import statements that reference web-vitals to web-vitals/attribution:
- import {onLCP, onINP, onCLS} from 'web-vitals';
+ import {onLCP, onINP, onCLS} from 'web-vitals/attribution';
Usage for each of the imported functions is identical to the standard build, but when importing from the attribution build, the metric objects will contain an additional attribution property.
See Send attribution data for usage examples, and the attribution reference for details on what values are added for each metric.
From a CDN
The recommended way to use the web-vitals package is to install it from npm and integrate it into your build process. However, if you're not using npm, it's still possible to use web-vitals by requesting it from a CDN that serves npm package files.
The following examples show how to load web-vitals from unpkg.com. It is also possible to load it from jsDelivr or cdnjs.
[!IMPORTANT]
The unpkg.com, jsDelivr, and cdnjs CDNs are shown here for example purposes only. They are not affiliated with Google, and there is no guarantee that loading the library from these CDNs will continue to work in the future. Self-hosting the built files rather than loading them from a CDN is better for security, reliability, and performance.
Load the "standard" build (using a module script)
<script type="module">
import {onCLS, onINP, onLCP} from 'https://unpkg.com/web-vitals@5?module';
onCLS(console.log);
onINP(console.log);
onLCP(console.log);
</script>
Note: When the web-vitals code is isolated from the application code in this way, there is less need to depend on dynamic imports, so this code uses a regular static import statement.
Load the "standard" build (using a classic script)
<script>
(function () {
var script = document.createElement('script');
script.src = 'https://unpkg.com/web-vitals@5/dist/web-vitals.iife.js';
script.onload = function () {
webVitals.onCLS(console.log);
webVitals.onINP(console.log);
webVitals.onLCP(console.log);
};
document.head.appendChild(script);
})();
</script>
Load the "attribution" build (using a module script)
<script type="module">
import {
onCLS,
onINP,
onLCP,
} from 'https://unpkg.com/web-vitals@5/dist/web-vitals.attribution.js?module';
onCLS(console.log);
onINP(console.log);
onLCP(console.log);
</script>
Load the "attribution" build (using a classic script)
<script>
(function () {
var script = document.createElement('script');
script.src =
'https://unpkg.com/web-vitals@5/dist/web-vitals.attribution.iife.js';
script.onload = function () {
webVitals.onCLS(console.log);
webVitals.onINP(console.log);
webVitals.onLCP(console.log);
};
document.head.appendChild(script);
})();
</script>
Usage
Basic usage
Each of the Web Vitals metrics is exposed as a single function that takes a callback, which will be called any time the metric value is available and ready to be reported.
The following example measures each of the Core Web Vitals metrics and logs the result to the console once its value is ready to report.
(The examples below import the "standard" build, but they will work with the "attribution" build as well.)
import {onCLS, onINP, onLCP} from 'web-vitals';
onCLS(console.log);
onINP(console.log);
onLCP(console.log);
Note that some of these metrics will not report until the user has interacted with the page, switched tabs, or the page starts to unload. If you don't see the values logged to the console immediately, try reloading the page (with preserve log enabled) or switching tabs and then switching back.
Also, in some cases a metric callback may never be called:
- INP is not reported if the user never interacts with the page.
- CLS, FCP, and LCP are not reported if the page was loaded in the background.
In other cases, a metric callback may be called more than once; for example, CLS and INP are reported again each time the page's visibility state changes to hidden (see the onCLS() and onINP() notes in the API section).
[!WARNING]
Do not call any of the Web Vitals functions (e.g. onCLS(), onINP(), onLCP()) more than once per page load. Each of these functions creates a PerformanceObserver instance and registers event listeners for the lifetime of the page. While the overhead of calling these functions once is negligible, calling them repeatedly on the same page may eventually result in a memory leak.
Report the value on every change
In most cases, you only want the callback function to be called when the metric is ready to be reported. However, it is possible to report every change (e.g. each larger layout shift as it happens) by setting reportAllChanges to true in the optional configuration object (the second parameter).
[!IMPORTANT]
reportAllChanges only reports when the metric changes, not for each input to the metric. For example, a new layout shift that does not increase the CLS metric will not be reported even with reportAllChanges set to true, because the CLS metric has not changed. Similarly, for INP, individual interactions are not reported even with reportAllChanges set to true; an interaction is reported only when it increases INP.
This can be useful when debugging, but in general using reportAllChanges is not needed (or recommended) for measuring these metrics in production.
import {onCLS} from 'web-vitals';
onCLS(console.log, {reportAllChanges: true});
Report only the delta of changes
Some analytics providers allow you to update the value of a metric, even after you've already sent it to their servers (overwriting the previously-sent value with the same id).
Other analytics providers, however, do not allow this, so instead of reporting the new value, you need to report only the delta (the difference between the current value and the last-reported value). You can then compute the total value by summing all metric deltas sent with the same ID.
The following example shows how to use the id and delta properties:
import {onCLS, onINP, onLCP} from 'web-vitals';
function logDelta({name, id, delta}) {
console.log(`${name} matching ID ${id} changed by ${delta}`);
}
onCLS(logDelta);
onINP(logDelta);
onLCP(logDelta);
[!NOTE]
The first time the callback function is called, its value and delta properties will be the same.
In addition to using the id field to group multiple deltas for the same metric, it can also be used to differentiate different metrics reported on the same page. For example, after a back/forward cache restore, a new metric object is created with a new id (since back/forward cache restores are considered separate page visits).
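To illustrate the summing approach, here is a sketch of how an analytics backend might reconstruct each metric's final value from its deltas (the function name and record shape are hypothetical; only the id, name, and delta fields come from the metric objects described above):

```javascript
// Sketch: reconstruct final metric values by summing deltas per id,
// as an analytics backend might do after receiving multiple reports.
function aggregateByMetricId(records) {
  const totals = new Map();
  for (const {id, name, delta} of records) {
    // All deltas sharing an id belong to the same metric instance.
    const entry = totals.get(id) ?? {name, value: 0};
    entry.value += delta;
    totals.set(id, entry);
  }
  return totals;
}
```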
Send the results to an analytics endpoint
The following example measures each of the Core Web Vitals metrics and reports them to a hypothetical /analytics endpoint, as soon as each is ready to be sent.
The sendToAnalytics() function uses the navigator.sendBeacon() method, which is widely available across browsers, and supports sending data as the page is being unloaded.
import {onCLS, onINP, onLCP} from 'web-vitals';
function sendToAnalytics(metric) {
const body = JSON.stringify({
name: metric.name,
value: metric.value,
id: metric.id,
});
navigator.sendBeacon('/analytics', body);
}
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
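navigator.sendBeacon() is not available in every environment, so a common variation falls back to fetch() with the keepalive option, which also supports sending data during unload. This is a sketch: the injectable transport parameter is added here only for testability and is not part of the original example, and /analytics remains a hypothetical endpoint.

```javascript
// Sketch: send a metric via sendBeacon when available, otherwise fall
// back to fetch() with `keepalive` so the request can outlive the page.
// The injectable `transport` parameter exists for illustration/testing.
function sendToAnalytics(metric, transport = defaultTransport) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    id: metric.id,
  });
  transport('/analytics', body);
}

function defaultTransport(url, body) {
  if (navigator.sendBeacon) {
    navigator.sendBeacon(url, body);
  } else {
    fetch(url, {body, method: 'POST', keepalive: true});
  }
}
```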
Send the results to Google Analytics
Google Analytics does not support reporting metric distributions in any of its built-in reports. However, if you set a unique event parameter value on every metric instance that you send (the metric_id param in the example below), you can create a report yourself: first get the data via the Google Analytics Data API or a BigQuery export, then visualize it with any charting library you choose.
Google Analytics 4 introduces a new Event model allowing custom parameters instead of a fixed category, action, and label. It also supports non-integer values, making it easier to measure Web Vitals metrics compared to previous versions.
import {onCLS, onINP, onLCP} from 'web-vitals';
function sendToGoogleAnalytics({name, delta, value, id}) {
gtag('event', name, {
value: delta,
metric_id: id,
metric_value: value,
metric_delta: delta,
});
}
onCLS(sendToGoogleAnalytics);
onINP(sendToGoogleAnalytics);
onLCP(sendToGoogleAnalytics);
For details on how to query this data in BigQuery, or visualise it in Looker Studio, see Measure and debug performance with Google Analytics 4 and BigQuery.
Send the results to Google Tag Manager
While web-vitals can be called directly from Google Tag Manager, using a pre-defined custom template makes this considerably easier. Some recommended templates include:
Send attribution data
When using the attribution build, you can send additional diagnostic data to help you understand why the metric values are what they are.
This example sends an additional debug_target param to Google Analytics, corresponding to the element most associated with each metric.
import {onCLS, onINP, onLCP} from 'web-vitals/attribution';
function sendToGoogleAnalytics({name, delta, value, id, attribution}) {
const eventParams = {
value: delta,
metric_id: id,
metric_value: value,
metric_delta: delta,
};
switch (name) {
case 'CLS':
eventParams.debug_target = attribution.largestShiftTarget;
break;
case 'INP':
eventParams.debug_target = attribution.interactionTarget;
break;
case 'LCP':
eventParams.debug_target = attribution.target;
break;
}
gtag('event', name, eventParams);
}
onCLS(sendToGoogleAnalytics);
onINP(sendToGoogleAnalytics);
onLCP(sendToGoogleAnalytics);
[!NOTE]
This example relies on custom event parameters in Google Analytics 4.
See Debug performance in the field for more information and examples.
Batch multiple reports together
Rather than reporting each individual Web Vitals metric separately, you can minimize your network usage by batching multiple metric reports together in a single network request.
However, since not all Web Vitals metrics become available at the same time, and since not all metrics are reported on every page, you cannot simply defer reporting until all metrics are available.
Instead, you should keep a queue of all metrics that were reported and flush the queue whenever the page is backgrounded or unloaded:
import {onCLS, onINP, onLCP} from 'web-vitals';
const queue = new Set();
function addToQueue(metric) {
queue.add(metric);
}
function flushQueue() {
if (queue.size > 0) {
const body = JSON.stringify([...queue]);
navigator.sendBeacon('/analytics', body);
queue.clear();
}
}
onCLS(addToQueue);
onINP(addToQueue);
onLCP(addToQueue);
addEventListener('visibilitychange', () => {
if (document.visibilityState === 'hidden') {
flushQueue();
}
});
[!NOTE]
See the Page Lifecycle guide for an explanation of why visibilitychange is recommended over events like beforeunload and unload.
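One caveat worth noting: Safari does not reliably fire visibilitychange when a page is being unloaded, so if Safari support matters, you may also want to flush on pagehide. A self-contained sketch of the queue with both listeners follows (the injectable send parameter is for illustration and testing; /analytics is a hypothetical endpoint):

```javascript
// Sketch: flush queued metrics on both `visibilitychange` and
// `pagehide`, since Safari does not reliably fire `visibilitychange`
// when the page is being unloaded.
const queue = new Set();

function flushQueue(send = (body) => navigator.sendBeacon('/analytics', body)) {
  if (queue.size > 0) {
    send(JSON.stringify([...queue]));
    queue.clear();
  }
}

// Guarded so this sketch is inert outside a browser context.
if (typeof addEventListener === 'function') {
  addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') flushQueue();
  });
  addEventListener('pagehide', () => flushQueue());
}
```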
Build options
The web-vitals package includes both "standard" and "attribution" builds, as well as different formats of each to allow developers to choose the format that best meets their needs or integrates with their architecture.
The following table lists all the builds distributed with the web-vitals package on npm.
| Filename (all within dist/*) | Export | Description |
| --- | --- | --- |
| web-vitals.js | pkg.module | An ES module bundle of all metric functions, without any attribution features. This is the "standard" build and is the simplest way to consume this library out of the box. |
| web-vitals.umd.cjs | pkg.main | A UMD version of the web-vitals.js bundle (exposed on the self.webVitals.* namespace). |
| web-vitals.iife.js | -- | An IIFE version of the web-vitals.js bundle (exposed on the self.webVitals.* namespace). |
| web-vitals.attribution.js | -- | An ES module version of all metric functions that includes attribution features. |
| web-vitals.attribution.umd.cjs | -- | A UMD version of the web-vitals.attribution.js build (exposed on the self.webVitals.* namespace). |
| web-vitals.attribution.iife.js | -- | An IIFE version of the web-vitals.attribution.js build (exposed on the self.webVitals.* namespace). |
Which build is right for you?
Most developers will generally want to use the "standard" build (via either the ES module or UMD version, depending on your bundler/build system), as it's the easiest to use out of the box and integrate into existing tools.
However, if you'd like to collect additional debug information to help you diagnose performance bottlenecks based on real-user issues, use the "attribution" build.
For guidance on how to collect and use real-user data to debug performance issues, see Debug performance in the field.
API
Types:
Metric
All metrics types inherit from the following base interface:
interface Metric {
name: 'CLS' | 'FCP' | 'INP' | 'LCP' | 'TTFB';
value: number;
rating: 'good' | 'needs-improvement' | 'poor';
delta: number;
id: string;
entries: PerformanceEntry[];
navigationType:
| 'navigate'
| 'reload'
| 'back-forward'
| 'back-forward-cache'
| 'prerender'
| 'restore';
}
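As a small illustration, a callback can branch on these fields, for instance to flag back/forward-cache restores, which are reported as separate page visits with fresh metric ids. The helper name and return shape below are hypothetical:

```javascript
// Hypothetical helper: summarize a metric object, flagging
// back/forward-cache restores (reported as separate page visits).
function summarizeMetric(metric) {
  return {
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    isBfcacheRestore: metric.navigationType === 'back-forward-cache',
  };
}
```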
Metric-specific subclasses:
CLSMetric
interface CLSMetric extends Metric {
name: 'CLS';
entries: LayoutShift[];
}
FCPMetric
interface FCPMetric extends Metric {
name: 'FCP';
entries: PerformancePaintTiming[];
}
INPMetric
interface INPMetric extends Metric {
name: 'INP';
entries: PerformanceEventTiming[];
}
LCPMetric
interface LCPMetric extends Metric {
name: 'LCP';
entries: LargestContentfulPaint[];
}
TTFBMetric
interface TTFBMetric extends Metric {
name: 'TTFB';
entries: PerformanceNavigationTiming[];
}
MetricRatingThresholds
The thresholds of a metric's "good", "needs improvement", and "poor" ratings.
- Metric values up to and including [0] are rated "good"
- Metric values above [0] and up to and including [1] are rated "needs improvement"
- Metric values above [1] are rated "poor"

| Metric value | Rating |
| --- | --- |
| ≦ [0] | "good" |
| > [0] and ≦ [1] | "needs improvement" |
| > [1] | "poor" |
type MetricRatingThresholds = [number, number];
See also Rating Thresholds.
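For illustration only (the library already sets Metric['rating'] for you), the mapping from a value and a threshold pair to a rating can be sketched as follows, using the documented CLS thresholds of 0.1 and 0.25 as an example:

```javascript
// Sketch: map a metric value to a rating given a
// [good, needsImprovement] threshold pair (upper bounds are inclusive).
function rateMetric(value, [good, needsImprovement]) {
  if (value <= good) return 'good';
  if (value <= needsImprovement) return 'needs-improvement';
  return 'poor';
}

// Example using the documented CLS thresholds:
rateMetric(0.08, [0.1, 0.25]); // 'good'
```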
ReportOpts
interface ReportOpts {
reportAllChanges?: boolean;
}
Metric-specific subclasses:
INPReportOpts
interface INPReportOpts extends ReportOpts {
durationThreshold?: number;
}
AttributionReportOpts
A subclass of ReportOpts used for each metric function exported in the attribution build.
interface AttributionReportOpts extends ReportOpts {
generateTarget?: (el: Node | null) => string | null | undefined;
}
Metric-specific subclasses:
INPAttributionReportOpts
interface INPAttributionReportOpts extends AttributionReportOpts {
durationThreshold?: number;
includeProcessedEventEntries?: boolean;
}
LoadState
The LoadState type is used in several of the metric attribution objects.
type LoadState =
| 'loading'
| 'dom-interactive'
| 'dom-content-loaded'
| 'complete';
Functions:
onCLS()
function onCLS(callback: (metric: CLSMetric) => void, opts?: ReportOpts): void;
Calculates the CLS value for the current page and calls the callback function once the value is ready to be reported, along with all layout-shift performance entries that were used in the metric value calculation. The reported value is a double (corresponding to a layout shift score).
[!IMPORTANT]
CLS should be continually monitored for changes throughout the entire lifespan of a page—including if the user returns to the page after it's been hidden/backgrounded. However, since browsers often will not fire additional callbacks once the user has backgrounded a page, callback is always called when the page's visibility state changes to hidden. As a result, the callback function might be called multiple times during the same page load (see Reporting only the delta of changes for how to manage this).
If the reportAllChanges configuration option is set to true, the callback function will be called as soon as the value is initially determined as well as any time the value changes throughout the page lifespan (though not necessarily for every layout shift). Note that regardless of whether reportAllChanges is used, the final reported value will be the same.
onFCP()
function onFCP(callback: (metric: FCPMetric) => void, opts?: ReportOpts): void;
Calculates the FCP value for the current page and calls the callback function once the value is ready, along with the relevant paint performance entry used to determine the value. The reported value is a DOMHighResTimeStamp.
onINP()
function onINP(
callback: (metric: INPMetric) => void,
opts?: INPReportOpts,
): void;
Calculates the INP value for the current page and calls the callback function once the value is ready, along with the event performance entries reported for that interaction. The reported value is a DOMHighResTimeStamp.
[!IMPORTANT]
INP should be continually monitored for changes throughout the entire lifespan of a page—including if the user returns to the page after it's been hidden/backgrounded. However, since browsers often will not fire additional callbacks once the user has backgrounded a page, callback is always called when the page's visibility state changes to hidden. As a result, the callback function might be called multiple times during the same page load (see Reporting only the delta of changes for how to manage this).
A custom durationThreshold configuration option can optionally be passed to control the minimum duration filter for event-timing entries. Events faster than this threshold are not reported. Note that the first-input entry is always observed, regardless of duration, to ensure you always have some INP score. The default threshold, after the library is initialized, is 40 milliseconds (the event-timing default of 104 milliseconds applies to events emitted before the library is initialized). This 40-millisecond default strikes a balance between usefulness and performance: running the callback for any interaction that spans just one or two frames is likely not worth the insight that could be gained.
If the reportAllChanges configuration option is set to true, the callback function will be called as soon as the value is initially determined as well as any time the value changes throughout the page lifespan (though not necessarily for every interaction). Note that regardless of whether reportAllChanges is used, the final reported value will be the same.
onLCP()
function onLCP(callback: (metric: LCPMetric) => void, opts?: ReportOpts): void;
Calculates the LCP value for the current page and calls the callback function once the value is ready (along with the relevant largest-contentful-paint performance entry used to determine the value). The reported value is a DOMHighResTimeStamp.
If the reportAllChanges configuration option is set to true, the callback function will be called any time a new largest-contentful-paint performance entry is dispatched, or once the final value of the metric has been determined. Note that regardless of whether reportAllChanges is used, the final reported value will be the same.
onTTFB()
function onTTFB(
callback: (metric: TTFBMetric) => void,
opts?: ReportOpts,
): void;
Calculates the TTFB value for the current page and calls the callback function once the page has loaded, along with the relevant navigation performance entry used to determine the value. The reported value is a DOMHighResTimeStamp.
Note, this function waits until after the page is loaded to call callback in order to ensure all properties of the navigation entry are populated. This is useful if you want to report on other metrics exposed by the Navigation Timing API.
For example, the TTFB metric starts from the page's time origin, which means it includes time spent on DNS lookup, connection negotiation, network latency, and server processing time.
import {onTTFB} from 'web-vitals';
onTTFB((metric) => {
const requestTime = metric.value - metric.entries[0].requestStart;
console.log('Request time:', requestTime);
});
[!NOTE]
Browsers that do not support navigation entries will fall back to using performance.timing (with the timestamps converted from epoch time to DOMHighResTimeStamp). This ensures code referencing these values (like in the example above) will work the same in all browsers.
Rating Thresholds:
The thresholds of each metric's "good", "needs improvement", and "poor" ratings are available as MetricRatingThresholds.
Example:
import {CLSThresholds, INPThresholds, LCPThresholds} from 'web-vitals';
console.log(CLSThresholds);
console.log(INPThresholds);
console.log(LCPThresholds);
[!NOTE]
It's typically not necessary (or recommended) to manually calculate metric value ratings using these thresholds. Use the supplied Metric['rating'] instead.
Attribution:
In the attribution build, each of the metric functions differs from its standard-build counterpart in the following ways:
- Their callback is invoked with a MetricWithAttribution object instead of a Metric object. Each MetricWithAttribution type extends its Metric counterpart with an additional attribution object containing potentially helpful debugging information, which can be sent along with the metric values for the current page visit to help identify issues affecting real users in the field.
- They accept an AttributionReportOpts object instead of a ReportOpts object. The AttributionReportOpts object supports an additional, optional generateTarget() function that lets developers customize how DOM elements are stringified for reporting purposes. When passed, the return value of the generateTarget() function is used for the "target" properties in the following attribution objects: CLSAttribution, INPAttribution, and LCPAttribution. If generateTarget() returns null or undefined, or no function is given, the default selector function is used.
interface AttributionReportOpts extends ReportOpts {
generateTarget?: (el: Node | null) => string | null | undefined;
}
For example, if a web page has unique data-name attributes on many elements, you may prefer to use those over the built-in selector-style strings that are generated by default.
function customGenerateTarget(el) {
  // `el` may be null or a non-element node, so guard before reading `dataset`.
  // Returning undefined falls back to the default selector function.
  if (el && el.dataset && el.dataset.name) {
    return el.dataset.name;
  }
}
onLCP(sendToAnalytics, {generateTarget: customGenerateTarget});
- The onINP() AttributionReportOpts object supports an additional, optional includeProcessedEventEntries configuration option. When set to false, event performance entries are not included in the attribution object, which conserves memory when these entries are not needed. The default value is true.
interface INPAttributionReportOpts extends AttributionReportOpts {
durationThreshold?: number;
includeProcessedEventEntries?: boolean;
}
The next sections document the shape of the attribution object for each of the metrics:
CLSAttribution
interface CLSAttribution {
largestShiftTarget?: string;
largestShiftTime?: DOMHighResTimeStamp;
largestShiftValue?: number;
largestShiftEntry?: LayoutShift;
largestShiftSource?: LayoutShiftAttribution;
loadState?: LoadState;
}
FCPAttribution
interface FCPAttribution {
timeToFirstByte: number;
firstByteToFCP: number;
loadState: LoadState;
fcpEntry?: PerformancePaintTiming;
navigationEntry?: PerformanceNavigationTiming;
}
INPAttribution
interface INPAttribution {
interactionTarget: string;
interactionTime: DOMHighResTimeStamp;
interactionType: 'pointer' | 'keyboard';
nextPaintTime: DOMHighResTimeStamp;
processedEventEntries: PerformanceEventTiming[];
inputDelay: number;
processingDuration: number;
presentationDelay: number;
loadState: LoadState;
longAnimationFrameEntries: PerformanceLongAnimationFrameTiming[];
longestScript?: INPLongestScriptSummary;
totalScriptDuration?: number;
totalStyleAndLayoutDuration?: number;
totalPaintDuration?: number;
totalUnattributedDuration?: number;
}
INPLongestScriptSummary
interface INPLongestScriptSummary {
entry: PerformanceScriptTiming;
subpart: 'input-delay' | 'processing-duration' | 'presentation-delay';
intersectingDuration: number;
}
LCPAttribution
interface LCPAttribution {
target?: string;
url?: string;
timeToFirstByte: number;
resourceLoadDelay: number;
resourceLoadDuration: number;
elementRenderDelay: number;
navigationEntry?: PerformanceNavigationTiming;
lcpResourceEntry?: PerformanceResourceTiming;
lcpEntry?: LargestContentfulPaint;
}
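The four timing fields are designed so that timeToFirstByte + resourceLoadDelay + resourceLoadDuration + elementRenderDelay adds up to the LCP value itself, which makes it possible to find the dominant sub-part. A sketch (the helper name is hypothetical; the object shape matches LCPAttribution above):

```javascript
// Sketch: identify the largest LCP sub-part from an LCPAttribution-shaped
// object. The four sub-parts together account for the full LCP time.
function largestLCPSubpart(attribution) {
  const subparts = {
    timeToFirstByte: attribution.timeToFirstByte,
    resourceLoadDelay: attribution.resourceLoadDelay,
    resourceLoadDuration: attribution.resourceLoadDuration,
    elementRenderDelay: attribution.elementRenderDelay,
  };
  // Return the [name, duration] pair with the longest duration.
  return Object.entries(subparts).reduce((a, b) => (b[1] > a[1] ? b : a));
}
```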
TTFBAttribution
interface TTFBAttribution {
waitingDuration: number;
cacheDuration: number;
dnsDuration: number;
connectionDuration: number;
requestDuration: number;
navigationEntry?: PerformanceNavigationTiming;
}
Browser Support
The web-vitals code is tested in Chrome, Firefox, and Safari. In addition, all JavaScript features used in the code are part of Baseline Widely available, and thus should run without error in all versions of these browsers released within the last 30 months.
However, some of the APIs required to capture these metrics (notably CLS) are currently only available in some browsers. The latest browser support for each function is as follows:
onCLS(): Chromium
onFCP(): Chromium, Firefox, Safari
onINP(): Chromium, Firefox, Safari
onLCP(): Chromium, Firefox, Safari
onTTFB(): Chromium, Firefox, Safari
Limitations
The web-vitals library is primarily a wrapper around the Web APIs that measure the Web Vitals metrics, which means the limitations of those APIs will mostly apply to this library as well. More details on these limitations are available in this blog post.
The primary limitation of these APIs is they have no visibility into <iframe> content (not even same-origin iframes), which means pages that make use of iframes will likely see a difference between the data measured by this library and the data available in the Chrome User Experience Report (which does include iframe content).
For same-origin iframes, it's possible to use the web-vitals library to measure metrics, but it's tricky because it requires the developer to add the library to every frame and postMessage() the results to the parent frame for aggregation.
[!NOTE]
Given the lack of iframe support, the onCLS() function technically measures DCLS (Document Cumulative Layout Shift) rather than CLS when the page includes iframes.
Development
Building the code
The web-vitals source code is written in TypeScript. To transpile the code and build the production bundles, run the following command.
npm run build
To build the code and watch for changes, run:
npm run watch
Running the tests
The web-vitals code is tested in real browsers using webdriver.io. Use the following command to run the tests:
npm test
To test any of the APIs manually, you can start the test server:
npm run test:server
Then navigate to http://localhost:9090/test/<view>, where <view> is the basename of one of the templates under /test/views/.
You'll likely want to combine this with npm run watch to ensure any changes you make are transpiled and rebuilt.
Integrations
License
Apache 2.0