@jupyterlab/galata
Galata
Galata is a set of helpers and fixtures for JupyterLab UI testing with the Playwright test runner.
Add Galata to your project:
jlpm add -D @jupyterlab/galata
# Install playwright supported browser
jlpm playwright install
Create a Playwright configuration file playwright.config.js containing:
module.exports = require('@jupyterlab/galata/lib/playwright-config');
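The exported configuration is a plain object, so it can also be spread and extended instead of re-exported directly. A minimal sketch (the override values here are illustrative, not defaults):

```javascript
// playwright.config.js — extend the Galata base configuration
const baseConfig = require('@jupyterlab/galata/lib/playwright-config');

module.exports = {
  ...baseConfig,
  // Illustrative overrides; see `jlpm playwright test --help` for all options
  retries: 1,
  use: {
    ...baseConfig.use,
    video: 'retain-on-failure'
  }
};
```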
Create ui-tests/foo.spec.ts to define your test:
import { expect, test } from '@jupyterlab/galata';

test.describe('Notebook Tests', () => {
  test('Create New Notebook', async ({ page, tmpPath }) => {
    const fileName = 'create_test.ipynb';
    await page.notebook.createNew(fileName);
    expect(
      await page.waitForSelector(`[role="main"] >> text=${fileName}`)
    ).toBeTruthy();
    expect(await page.contents.fileExists(`${tmpPath}/${fileName}`)).toEqual(
      true
    );
  });
});
This will create a notebook, open it, and check that it exists.
Before running the test, you will need to launch the JupyterLab server with some specific options.
Create jupyter_server_test_config.py with the following content:
from jupyterlab.galata import configure_jupyter_server
configure_jupyter_server(c)
# Uncomment to set server log level to debug level
# c.ServerApp.log_level = "DEBUG"
Then start the server with:
jupyter lab --config jupyter_server_test_config.py
If you need to customize the setup for Galata, you can look at the configure_jupyter_server definition.

Finally, execute the tests in another terminal:

jlpm playwright test
Galata should generate console output similar to the following:
Using config at .../playwright.config.js
Running 1 test using 1 worker
✓ ui-tests/foo.spec.ts:5:3 › Notebook Tests Create New Notebook (13s)
1 passed (15s)
Playwright Test just ran a test using the Chromium browser in headless mode. You can use a headed browser to see what is going on during the test:
jlpm playwright test --headed
Test assets (including test videos) will be saved in a test-results folder, and by default an HTML report will be created in the playwright-report folder. That report can be seen by running:
jlpm playwright show-report
To create tests, the easiest way is to use Playwright's code generator tool:
jupyter lab --config jupyter_server_test_config.py &
jlpm playwright codegen localhost:8888
To debug tests, a good way is to use Playwright's inspector tool:
jupyter lab --config jupyter_server_test_config.py &
jlpm playwright test --debug
Or the UI mode:
jupyter lab --config jupyter_server_test_config.py &
jlpm playwright test --ui
If you have set up a custom login handler for your Jupyter application and don't want to remove it for your integration tests, you can try the following configuration (inspired by the Playwright documentation):
Create a global-setup.ts file at the root of the test folder containing the login steps:

// global-setup.ts
import { chromium, FullConfig } from '@playwright/test';

async function globalSetup(config: FullConfig) {
  const { baseURL, storageState } = config.projects[0].use;
  const browser = await chromium.launch();
  const page = await browser.newPage();
  // Here follow the steps to log in if you set up a known password
  // See the server documentation https://jupyter-server.readthedocs.io/en/latest/operators/public-server.html?#automatic-password-setup
  await page.goto(baseURL ?? process.env.TARGET_URL ?? 'http://localhost:8888');
  await page.locator('input[name="password"]').fill('test');
  await page.locator('text=Log in').click();
  // Save signed-in state.
  await page.context().storageState({ path: storageState as string });
  await browser.close();
}

export default globalSetup;
Then update playwright.config.js to run the global setup and reuse the stored authentication state:

var baseConfig = require('@jupyterlab/galata/lib/playwright-config');

module.exports = {
  ...baseConfig,
  globalSetup: require.resolve('./global-setup'),
  use: {
    ...baseConfig.use,
    // Tell all tests to load signed-in state from 'storageState.json'.
    storageState: 'storageState.json'
  }
};
When you start your tests, a file named storageState.json will be generated if the login steps were successful. Its content will look like this:
{
  "cookies": [
    {
      "name": "_xsrf",
      "value": "...REDACTED...",
      "domain": "localhost",
      "path": "/",
      "expires": -1,
      "httpOnly": false,
      "secure": false,
      "sameSite": "Lax"
    },
    {
      "name": "username-localhost-8888",
      "value": "...REDACTED...",
      "domain": "localhost",
      "path": "/",
      "expires": 1664121119.118241,
      "httpOnly": true,
      "secure": false,
      "sameSite": "Lax"
    }
  ],
  "origins": []
}
This will only work if the authentication is stored in a cookie and you can access the Jupyter app directly when that cookie is set.
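If tests still fail to authenticate, it can help to sanity-check the generated storage state file. A minimal sketch (the StorageState shape below mirrors Playwright's storage state format shown above; hasCookie is a hypothetical helper written for this example):

```typescript
// Minimal sketch: check that a Playwright storage state contains a cookie.
interface StorageState {
  cookies: { name: string; domain: string }[];
  origins: unknown[];
}

function hasCookie(state: StorageState, name: string): boolean {
  return state.cookies.some(cookie => cookie.name === name);
}

// In a real run you would JSON.parse the storageState.json file instead.
const example: StorageState = {
  cookies: [
    { name: '_xsrf', domain: 'localhost' },
    { name: 'username-localhost-8888', domain: 'localhost' }
  ],
  origins: []
};

console.log(hasCookie(example, '_xsrf')); // true
console.log(hasCookie(example, 'missing')); // false
```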
You can add a listener that will be triggered when a JupyterLab dialog is shown:
await page.evaluate(() => {
  window.galata.on('dialog', (dialog: Dialog<unknown> | null) => {
    // Use the dialog
    // You can for instance reject it
    // dialog.reject()
  });
});
The listener will be called when a dialog is created and when it is closed (in that case dialog == null).
You can stop listening to the event with:
await page.evaluate(() => {
  window.galata.off('dialog', listener);
});
Or you can listen to a single event with:
await page.evaluate(() => {
  window.galata.once('dialog', listener);
});
You can add a listener that will be triggered when a JupyterLab notification is displayed:
await page.evaluate(() => {
  window.galata.on(
    'notification',
    (notification: Notification.INotification) => {
      // Use the notification
    }
  );
});
The listener will be called when a notification is created or updated.
You can stop listening to the event with:
await page.evaluate(() => {
  window.galata.off('notification', listener);
});
Or you can listen to a single event with:
await page.evaluate(() => {
  window.galata.once('notification', listener);
});
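The on/off/once pattern follows the usual event-emitter contract: once registers a listener that is removed after its first call. A rough, runnable illustration of that contract (MiniEmitter is a stand-in written for this sketch, not Galata's actual implementation, which lives on window.galata in the page context):

```typescript
// Sketch of the on/off/once contract with a plain emitter.
type Listener<T> = (payload: T) => void;

class MiniEmitter<T> {
  private listeners = new Set<Listener<T>>();

  on(listener: Listener<T>): void {
    this.listeners.add(listener);
  }

  off(listener: Listener<T>): void {
    this.listeners.delete(listener);
  }

  once(listener: Listener<T>): void {
    // Wrap the listener so it removes itself on first invocation
    const wrapper: Listener<T> = payload => {
      this.off(wrapper);
      listener(payload);
    };
    this.on(wrapper);
  }

  emit(payload: T): void {
    // Iterate over a copy so listeners can deregister during dispatch
    for (const listener of [...this.listeners]) listener(payload);
  }
}

const emitter = new MiniEmitter<string>();
let calls = 0;
emitter.once(() => {
  calls += 1;
});
emitter.emit('notification');
emitter.emit('notification');
console.log(calls); // the once-listener fired a single time: 1
```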
Here are the new test fixtures introduced by Galata on top of the Playwright fixtures.
baseURL: Application base URL without /lab. It defaults to the environment variable TARGET_URL, or http://localhost:8888 if nothing is defined.

appPath: Application URL path fragment; default "/lab".

autoGoto: Whether to go to the JupyterLab page within the fixture or not; default true. If set to false, it allows you to add route mocks before loading JupyterLab.
Example:
test.use({ autoGoto: false });

test('Open language menu', async ({ page }) => {
  await page.route(/.*\/api\/translation.*/, (route, request) => {
    if (request.method() === 'GET') {
      return route.fulfill({
        status: 200,
        body: '{"data": {"en": {"displayName": "English", "nativeName": "English"}}, "message": ""}'
      });
    } else {
      return route.continue();
    }
  });

  await page.goto();

  // ...
});
Galata can keep the files uploaded and created in tmpPath on the server root for debugging purposes. By default the files are kept on test failure. Possible values are:

'off': tmpPath is deleted after each test
'on': tmpPath is never deleted
'only-on-failure': tmpPath is deleted except if a test failed or timed out

Mock JupyterLab state in-memory or not. Possible values are:

true: JupyterLab state will be mocked on a per-test basis
false: JupyterLab state won't be mocked (be careful: it will read & write state in local files)
Record<string, unknown>: Mapping of state keys to values that will be the default state
Example:
test.use({
  mockState: {
    'layout-restorer:data': {
      main: {
        dock: {
          type: 'tab-area',
          currentIndex: 0,
          widgets: []
        }
      },
      down: {
        size: 0,
        widgets: []
      },
      left: {
        collapsed: false,
        visible: true,
        current: 'running-sessions',
        widgets: [
          'filebrowser',
          'jp-property-inspector',
          'running-sessions',
          '@jupyterlab/toc:plugin',
          'debugger-sidebar',
          'extensionmanager.main-view'
        ]
      },
      right: {
        collapsed: true,
        visible: true,
        widgets: []
      },
      relativeSizes: [0.4, 0.6, 0]
    }
  } as any
});

test('should return the mocked state', async ({ page }) => {
  expect(
    await page.waitForSelector(
      '[aria-label="Running Sessions section"] >> text=Open Tabs'
    )
  ).toBeTruthy();
});
Mock JupyterLab settings in-memory or not. Possible values are:

true: JupyterLab settings will be mocked on a per-test basis
false: JupyterLab settings won't be mocked (be careful: it will read & write local settings files)
Record<string, unknown>: Mapping {pluginId: settings} that will be the default user settings
The default value is galata.DEFAULT_SETTINGS. By default the settings are stored in-memory; however, they are still initialized with the on-disk values.
Example:
test.use({
  mockSettings: {
    ...galata.DEFAULT_SETTINGS,
    '@jupyterlab/apputils-extension:themes': {
      theme: 'JupyterLab Dark'
    }
  }
});

test('should return mocked settings', async ({ page }) => {
  expect(await page.theme.getTheme()).toEqual('JupyterLab Dark');
});
Mock the JupyterLab user in-memory or not. By default the user is stored in-memory.
Sessions created during the test.
Example:
test('should return the active sessions', async ({ page, sessions }) => {
  await page.notebook.createNew();

  // Wait for the poll to tick
  await page.waitForResponse(
    async response =>
      response.url().includes('api/sessions') &&
      response.request().method() === 'GET' &&
      ((await response.json()) as any[]).length === 1
  );

  expect(sessions.size).toEqual(1);
  // You can introspect [...sessions.values()][0] if needed
});
Terminals created during the test.
Example:
test('should return the active terminals', async ({ page, terminals }) => {
  await Promise.all([
    page.waitForResponse(
      response =>
        response.request().method() === 'POST' &&
        response.url().includes('api/terminals')
    ),
    page.menu.clickMenuItem('File>New>Terminal')
  ]);

  // Wait for the poll to tick
  await page.waitForResponse(
    async response =>
      response.url().includes('api/terminals') &&
      response.request().method() === 'GET' &&
      ((await response.json()) as any[]).length === 1
  );

  expect(terminals.size).toEqual(1);
  // You can introspect [...terminals.values()][0] if needed
});
Unique test temporary path created on the server. Overriding it is required if you upload files in beforeAll(), as otherwise the files would not be accessible from subsequent tests because, by default, tmpPath has a random component added for each test.
Note: if you override this string, you will need to take care of creating the folder and cleaning it.
Example:
test.use({ tmpPath: 'test-toc' });

test.describe.serial('Table of Contents', () => {
  test.beforeAll(async ({ request, tmpPath }) => {
    const contents = galata.newContentsHelper(request);
    await contents.uploadFile(
      path.resolve(__dirname, `./notebooks/${fileName}`),
      `${tmpPath}/${fileName}`
    );
  });

  test.afterAll(async ({ request, tmpPath }) => {
    const contents = galata.newContentsHelper(request);
    await contents.deleteDirectory(tmpPath);
  });
});
The benchmark of JupyterLab is done using Playwright. The actions measured are:
Two files are tested: a notebook with many code cells and another with many markdown cells.
The test is run on the CI by comparing the result at the commit from which the PR branch started with the PR branch head, on the same CI job to ensure the same hardware is used. The benchmark job is triggered on a pull request when a comment containing please run benchmark is posted.
The tests are located in the subfolder test/benchmark and can be executed with the following command:
jlpm run test:benchmark
A special report will be generated in the benchmark-results folder that will contain 4 files:

lab-benchmark.json: The execution time of the tests and some metadata.
lab-benchmark.md: A report in Markdown.
lab-benchmark.svg: A comparison of execution time distributions.
lab-benchmark.vl.json: The Vega-Lite description used to produce the figure.

The reference, tagged expected, is stored in lab-benchmark-expected.json. It can be created using the -u option of Playwright, i.e. jlpm run test:benchmark -u.
The benchmark can be customized using the following environment variables:
BENCHMARK_NUMBER_SAMPLES: Number of samples to compute the execution time distribution; default 20.
BENCHMARK_OUTPUTFILE: Benchmark result output file; default benchmark.json. It is overridden in playwright-benchmark.config.js.
BENCHMARK_REFERENCE: Reference name of the data; default actual.
BENCHMARK_EXPECTED_REFERENCE: Reference name of the reference data; default expected.
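For instance, the variables can simply be exported before invoking the benchmark (the values below are illustrative; the final command is commented out because it requires a built JupyterLab checkout):

```shell
# Run the benchmark with more samples and a custom output file
export BENCHMARK_NUMBER_SAMPLES=50
export BENCHMARK_OUTPUTFILE=my-benchmark.json
echo "samples=${BENCHMARK_NUMBER_SAMPLES} output=${BENCHMARK_OUTPUTFILE}"
# jlpm run test:benchmark
```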
Install dependencies and build:
cd galata
jlpm
jlpm run build
For tests to be run, a JupyterLab instance must be up and running. Launch it without credentials. Tests expect to connect to JupyterLab at localhost:8888 by default. If a different URL is to be used, it can be specified by defining the TARGET_URL environment variable or by setting the Playwright baseURL fixture.
jlpm run start
The JupyterLab root directory is randomly generated in the temporary folder (prefixed with galata-test-).
Tests are grouped in two projects: galata and jupyterlab. The first one tests the Galata helpers and fixtures, while the other runs all tests for JupyterLab.
By default, both projects will be executed when running jlpm run test, but you can select one project with the CLI option --project <project-id>.
Galata can be configured using command line arguments or the playwright.config.js file. The full list of config options can be accessed using jlpm playwright test --help.
By default, Galata will generate a text report in the form of a Markdown table and a Vega-Lite graph of the execution time distribution. Users can customize these reports in two ways:

1. In the playwright.config.js file: in the reporter section, users can supply two functions, vegaLiteConfigFactory and textReportFactory, to the reporter's constructor options. These functions will be used to create the Vega-Lite configuration (vegaLiteConfigFactory) or the text report (textReportFactory) from the test records.

// An example of `playwright.config.js` with customized builders
reporter: [
  ...,
  [
    '@jupyterlab/galata/lib/benchmarkReporter',
    {
      outputFile: 'lab-benchmark.json',
      vegaLiteConfigFactory: (
        allData: Array<IReportRecord>, // All test records
        comparison?: 'snapshot' | 'project' // Logic of test comparison: 'snapshot' or 'project'; default 'snapshot'
      ) => {
        // Return a Vega-Lite graph configuration object
        return {};
      },
      textReportFactory: (
        allData: Array<IReportRecord>, // All test records
        comparison?: 'snapshot' | 'project' // Logic of test comparison: 'snapshot' or 'project'; default 'snapshot'
      ) => {
        // Return a promise with the tuple [report content, file extension]
        return Promise.resolve(['My report content', 'md']);
      }
    }
  ],
  ...
]
2. Users can override the default text report factory (defaultTextReportFactory) and the Vega-Lite graph config factory (defaultVegaLiteConfigFactory) of the BenchmarkReporter class in a sub-class and then use it as a reporter in the playwright.config.js file.

Reference images are saved next to the test files in <test-file-name>-snapshots folders. If a reference screenshot does not exist, it will be generated at the first execution of a test. You can also update the reference screenshots by running jlpm playwright test --update-snapshots.
The Galata framework is named after the Galata Tower in Istanbul. Centuries ago, the Galata Tower was used to spot fires in the city; it was also used as an astronomical observatory.
Development of this project began under the Bloomberg organization by Mehmet Bektas; it was then transferred to the JupyterLab organization. We gratefully acknowledge Bloomberg for its generous contribution and for supporting the open-source software community.