# Galata

Galata is a set of helpers and fixtures for JupyterLab UI testing using the Playwright Test Runner.
## Getting Started

### Installation
Add Galata to your project and install the Playwright browsers:

```bash
jlpm add -D @jupyterlab/galata
jlpm playwright install
```
Create a Playwright configuration file `playwright.config.js` containing:

```js
module.exports = require('@jupyterlab/galata/lib/playwright-config');
```
### First test

Create `ui-tests/foo.spec.ts` to define your test:
```typescript
import { test } from '@jupyterlab/galata';
import { expect } from '@playwright/test';

test.describe('Notebook Tests', () => {
  test('Create New Notebook', async ({ page, tmpPath }) => {
    const fileName = 'create_test.ipynb';
    await page.notebook.createNew(fileName);
    expect(
      await page.waitForSelector(`[role="main"] >> text=${fileName}`)
    ).toBeTruthy();
    expect(await page.contents.fileExists(`${tmpPath}/${fileName}`)).toEqual(
      true
    );
  });
});
```
This will create a notebook, open it, and check that it exists on the server.
### Launch JupyterLab

Before running the test, you will need to launch the JupyterLab server with some
specific options.

Create `jupyter_server_test_config.py` with the following content:
```python
from tempfile import mkdtemp

c.ServerApp.port = 8888
c.ServerApp.open_browser = False
c.LabApp.dev_mode = True
c.ServerApp.root_dir = mkdtemp(prefix='galata-test-')
c.ServerApp.token = ""
c.ServerApp.password = ""
c.ServerApp.disable_check_xsrf = True
c.LabApp.expose_app_in_browser = True
```
Then start the server with:

```bash
jupyter lab --config jupyter_server_test_config.py
```
### Run test project

```bash
jlpm playwright test
```

Galata should generate console output similar to the following:

```
Using config at .../playwright.config.js

Running 1 test using 1 worker

  ✓ ui-tests/foo.spec.ts:5:3 › Notebook Tests Create New Notebook (13s)

  1 passed (15s)
```
Playwright Test just ran a test using the Chromium browser, in headless mode. You can use a headed browser to see what is going on during the test:

```bash
jlpm playwright test --headed
```
Test assets (including test videos) will be saved in a `test-results` folder and, by default, an HTML report will be created in the `playwright-report` folder. That report can be seen by running:

```bash
http-server ./playwright-report -a localhost -o
```
## User advice

### Create tests

To create tests, the easiest way is to use Playwright's code generator tool:

```bash
jupyter lab --config jupyter_server_test_config.py &
jlpm playwright codegen localhost:8888
```
### Debug tests

To debug tests, a good way is to use Playwright's inspector tool:

```bash
jupyter lab --config jupyter_server_test_config.py &
PWDEBUG=1 jlpm playwright test
```
## Fixtures

Here are the new test fixtures introduced by Galata on top of the Playwright fixtures.

### appPath

- type: `<string>`

Application URL path fragment; default `"/lab"`.
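For example, to target a JupyterLab instance served under a different base path; a minimal sketch, where the `/user/jovyan/lab` prefix is a hypothetical JupyterHub-style deployment:

```typescript
import { test } from '@jupyterlab/galata';
import { expect } from '@playwright/test';

// Hypothetical deployment where JupyterLab is served under a user prefix.
test.use({ appPath: '/user/jovyan/lab' });

test('should open JupyterLab under the custom application path', async ({
  page
}) => {
  // The page fixture has already navigated to baseURL + appPath.
  expect(page.url()).toContain('/user/jovyan/lab');
});
```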
### autoGoto

- type: `<boolean>`

Whether to go to the JupyterLab page within the fixture or not; default `true`.

If set to `false`, it allows you to register route mocks before loading JupyterLab.

Example:
```typescript
test.use({ autoGoto: false });

test('Open language menu', async ({ page }) => {
  await page.route(/.*\/api\/translation.*/, (route, request) => {
    if (request.method() === 'GET') {
      return route.fulfill({
        status: 200,
        body:
          '{"data": {"en": {"displayName": "English", "nativeName": "English"}}, "message": ""}'
      });
    } else {
      return route.continue();
    }
  });

  await page.goto();
});
```
### serverFiles

- type: `<'on' | 'off' | 'only-on-failure'>`

Galata can keep the files uploaded to and created in `tmpPath` on the server root for debugging purposes. By default, the files are kept on failure.

- `'off'`: `tmpPath` is deleted after each test
- `'on'`: `tmpPath` is never deleted
- `'only-on-failure'` (default): `tmpPath` is deleted unless the test failed or timed out
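For example, to always keep the test files around while developing a test; a minimal sketch:

```typescript
import { test } from '@jupyterlab/galata';

// Keep everything written to tmpPath on the server, even for passing tests.
test.use({ serverFiles: 'on' });

test('should leave its files on the server for inspection', async ({
  page
}) => {
  // The notebook is created inside tmpPath; with serverFiles 'on' it is not cleaned up.
  await page.notebook.createNew('debug.ipynb');
});
```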
### mockState

- type: `<boolean | Record<string, unknown>>`

Whether to mock the JupyterLab state in-memory or not.

Possible values are:

- `true` (default): the JupyterLab state will be mocked on a per-test basis
- `false`: the JupyterLab state won't be mocked (be careful: tests will then write the state to local files)
- `Record<string, unknown>`: initial JupyterLab state data, as a mapping of (state key, value) pairs

By default, the state is stored in-memory.
Example:

```typescript
test.use({
  mockState: {
    'layout-restorer:data': {
      main: {
        dock: {
          type: 'tab-area',
          currentIndex: 0,
          widgets: []
        }
      },
      down: {
        size: 0,
        widgets: []
      },
      left: {
        collapsed: false,
        visible: true,
        current: 'running-sessions',
        widgets: [
          'filebrowser',
          'jp-property-inspector',
          'running-sessions',
          '@jupyterlab/toc:plugin',
          'debugger-sidebar',
          'extensionmanager.main-view'
        ]
      },
      right: {
        collapsed: true,
        visible: true,
        widgets: []
      },
      relativeSizes: [0.4, 0.6, 0]
    }
  } as any
});

test('should return the mocked state', async ({ page }) => {
  expect(
    await page.waitForSelector(
      '[aria-label="Running Sessions section"] >> text=Open Tabs'
    )
  ).toBeTruthy();
});
```
### mockSettings

- type: `<boolean | Record<string, unknown>>`

Whether to mock the JupyterLab settings in-memory or not.

Possible values are:

- `true`: the JupyterLab settings will be mocked on a per-test basis
- `false`: the settings won't be mocked (be careful: tests will then read and write the settings in local files)
- `Record<string, unknown>`: a mapping `{pluginId: settings}` that will be the default user settings

The default value is `galata.DEFAULT_SETTINGS`.

By default, the settings are stored in-memory. However, they are still initialized with the values on disk.
Example:

```typescript
test.use({
  mockSettings: {
    ...galata.DEFAULT_SETTINGS,
    '@jupyterlab/apputils-extension:themes': {
      theme: 'JupyterLab Dark'
    }
  }
});

test('should return mocked settings', async ({ page }) => {
  expect(await page.theme.getTheme()).toEqual('JupyterLab Dark');
});
```
### sessions

- type: `<Map<string, Session.IModel> | null>`

The sessions created during a test.

Possible values are:

- `null`: the sessions API won't be mocked
- `Map<string, Session.IModel>`: the sessions created during a test

By default, the sessions created during a test are tracked and disposed at the end.
Example:

```typescript
test('should return the active sessions', async ({ page, sessions }) => {
  await page.notebook.createNew();

  await page.waitForResponse(
    async response =>
      response.url().includes('api/sessions') &&
      response.request().method() === 'GET' &&
      ((await response.json()) as any[]).length === 1
  );

  expect(sessions.size).toEqual(1);
});
```
### terminals

- type: `<Map<string, TerminalAPI.IModel> | null>`

The terminals created during a test.

Possible values are:

- `null`: the terminals API won't be mocked
- `Map<string, TerminalAPI.IModel>`: the terminals created during a test

By default, the terminals created during a test are tracked and disposed at the end.
Example:

```typescript
test('should return the active terminals', async ({ page, terminals }) => {
  await Promise.all([
    page.waitForResponse(
      response =>
        response.request().method() === 'POST' &&
        response.url().includes('api/terminals')
    ),
    page.menu.clickMenuItem('File>New>Terminal')
  ]);

  await page.waitForResponse(
    async response =>
      response.url().includes('api/terminals') &&
      response.request().method() === 'GET' &&
      ((await response.json()) as any[]).length === 1
  );

  expect(terminals.size).toEqual(1);
});
```
### tmpPath

- type: `<string>`

Unique test temporary path created on the server.

Note: if you override this string, you will need to take care of creating the
folder and cleaning it up yourself.
Example (the imports and the notebook name are added here to make the snippet self-contained; `fileName` is a hypothetical notebook shipped alongside the tests):

```typescript
import * as path from 'path';
import { galata, test } from '@jupyterlab/galata';

const fileName = 'simple.ipynb'; // hypothetical notebook in ./notebooks

test.use({ tmpPath: 'test-toc' });

test.describe.serial('Table of Contents', () => {
  test.beforeAll(async ({ baseURL, tmpPath }) => {
    const contents = galata.newContentsHelper(baseURL);
    await contents.uploadFile(
      path.resolve(__dirname, `./notebooks/${fileName}`),
      `${tmpPath}/${fileName}`
    );
  });

  test.afterAll(async ({ baseURL, tmpPath }) => {
    const contents = galata.newContentsHelper(baseURL);
    await contents.deleteDirectory(tmpPath);
  });
});
```
## Benchmark

Benchmarking of JupyterLab is done automatically using Playwright. The actions measured are:

- Opening a file
- Switching from the file to a simple text file
- Switching back to the file
- Closing the file

Two files are tested: a notebook with many code cells and another with many markdown cells.
The tests are located in the subfolder `test/benchmark` and can be executed with the following command:

```bash
jlpm run test:benchmark
```
A special report will be generated in the `benchmark-results` folder, containing 4 files:

- `lab-benchmark.json`: the execution time of the tests and some metadata
- `lab-benchmark.md`: a report in Markdown
- `lab-benchmark.png`: a comparison of the execution time distributions
- `lab-benchmark.vl.json`: the Vega-Lite description used to produce the PNG file
The reference, tagged `expected`, is stored in `lab-benchmark-expected.json`. It can be
updated using the `-u` option of Playwright, i.e. `jlpm run test:benchmark -u`.
### Benchmark parameters

The benchmark can be customized using the following environment variables:

- `BENCHMARK_NUMBER_SAMPLES`: number of samples used to compute the execution time distribution; default 20
- `BENCHMARK_OUTPUTFILE`: benchmark result output file; default `benchmark.json`. It is overridden in `playwright-benchmark.config.js`.
- `BENCHMARK_REFERENCE`: reference name of the data; default is `actual` for current data and `expected` for the reference
## Development

### Build

Install dependencies and build:

```bash
cd galata
jlpm
jlpm run build
```
For tests to be run, a JupyterLab instance must be up and running. Launch it without credentials. Tests expect to connect to JupyterLab at `localhost:8888` by default. If a different URL is to be used, it can be specified by defining the `TARGET_URL` environment variable or by setting the Playwright `baseURL` fixture.

```bash
jlpm run start
```
The JupyterLab root directory is randomly generated in the temporary folder (prefixed with `galata-test-`).
### Running tests

Tests are grouped in two projects: `galata` and `jupyterlab`. The first one tests the Galata helpers and fixtures, while the other one runs all tests for JupyterLab.

By default, both projects are executed when running `jlpm run test`. But you can select one project with the CLI option `--project <project-id>`.
### Configuration

Galata can be configured with command line arguments or in the `playwright.config.js` file. The full list of config options can be accessed with `jlpm playwright test --help`.
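For instance, `playwright.config.js` can spread the base configuration shipped with Galata and override standard Playwright options; a minimal sketch (the overridden options are illustrative):

```js
const baseConfig = require('@jupyterlab/galata/lib/playwright-config');

module.exports = {
  ...baseConfig,
  // Example overrides: retry failing tests once and bump the per-test timeout.
  retries: 1,
  timeout: 90000
};
```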
### Custom benchmark report

By default, Galata will generate a text report in the form of a Markdown table and a Vega-Lite graph of the execution time distribution. Users can customize these reports in two ways:

- Using the `playwright.config.js` file: in the `reporter` section, users can supply two functions, `vegaLiteConfigFactory` and `textReportFactory`, to the reporter's constructor options. These functions will be used to create the Vega-Lite configuration (`vegaLiteConfigFactory`) or the text report (`textReportFactory`) from the test records.
```typescript
reporter: [
  ...,
  [
    '@jupyterlab/galata/lib/benchmarkReporter',
    {
      outputFile: 'lab-benchmark.json',
      vegaLiteConfigFactory: (
        allData: Array<IReportRecord>,
        comparison: 'snapshot' | 'project'
      ) => {
        return {};
      },
      textReportFactory: (allData: Array<IReportRecord>) => {
        return Promise.resolve(['My report content', 'md']);
      }
    }
  ],
  ...
]
```
- The second way to customize the reports is to override the default text report factory (`defaultTextReportFactory`) and the Vega-Lite graph config factory (`defaultVegaLiteConfigFactory`) of the `BenchmarkReporter` class in a subclass, and then use that subclass as the reporter in the `playwright.config.js` file, as in the sketch below.
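A minimal sketch of the subclassing approach, assuming `BenchmarkReporter` is the default export of `@jupyterlab/galata/lib/benchmarkReporter`, that `IReportRecord` is exported from the same module, and that the overridden factory is `protected` with the signature shown in the reporter options above:

```typescript
// customBenchmarkReporter.ts -- a sketch, not a verified implementation.
import BenchmarkReporter, {
  IReportRecord
} from '@jupyterlab/galata/lib/benchmarkReporter';

class MyBenchmarkReporter extends BenchmarkReporter {
  // Return the report content and the file extension to use for it.
  protected async defaultTextReportFactory(
    allData: Array<IReportRecord>
  ): Promise<[string, string]> {
    return ['My report content', 'md'];
  }
}

export default MyBenchmarkReporter;
```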
### Reference Image Captures

Reference images are saved next to the test files in `<test-file-name>-snapshots` folders. If a reference screenshot does not exist, it will be generated on the first execution of a test. You can also update the reference images by running `jlpm playwright test --update-snapshots`.
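For example, a test can compare a screenshot against its reference image using Playwright's snapshot assertion (the snapshot file name below is arbitrary):

```typescript
import { test } from '@jupyterlab/galata';
import { expect } from '@playwright/test';

test('should match the reference snapshot', async ({ page }) => {
  // On first run this stores launcher.png in <test-file-name>-snapshots;
  // subsequent runs compare the new screenshot against it.
  expect(await page.screenshot()).toMatchSnapshot('launcher.png');
});
```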
## About the Galata Name

The Galata framework is named after the Galata Tower in Istanbul. Centuries ago, the tower was used to spot fires in the city; it also served as an astronomical observatory.
## Acknowledgement

Development of this project began in the Bloomberg organization by Mehmet Bektas and was later transferred to the JupyterLab organization. We gratefully acknowledge Bloomberg for its generous contribution and for supporting the open-source software community.
# 4.0.0 - Highlights

Below are the major highlights in JupyterLab 4.0.0.
## New text editor

CodeMirror, the text editor used for cells and file editors, has been updated to CodeMirror 6. This brings important accessibility and performance improvements, as well as better customization capabilities.

We have also improved the editor settings. Previously, users had to customize settings separately for each type of cell, the file editor, and the console editor. Now you can change your settings in one place. It is now easier to use the default settings for all editors and to change some settings for specific cases. For example, you can now hide line numbers only for markdown cells.

Developers can now provide editor extensions, like themes and programming language parsers, through new application registries.
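As an illustration, a plugin can register an additional CodeMirror extension through the `IEditorExtensionRegistry` token from `@jupyterlab/codemirror`; a minimal sketch, where the plugin id and the chosen CodeMirror extension are arbitrary:

```typescript
import {
  JupyterFrontEnd,
  JupyterFrontEndPlugin
} from '@jupyterlab/application';
import {
  EditorExtensionRegistry,
  IEditorExtensionRegistry
} from '@jupyterlab/codemirror';
import { highlightTrailingWhitespace } from '@codemirror/view';

const plugin: JupyterFrontEndPlugin<void> = {
  id: 'my-extension:highlight-trailing-whitespace', // hypothetical plugin id
  autoStart: true,
  requires: [IEditorExtensionRegistry],
  activate: (app: JupyterFrontEnd, registry: IEditorExtensionRegistry) => {
    // Register a CodeMirror 6 extension for all editors created by JupyterLab.
    registry.addExtension(
      Object.freeze({
        name: 'my-extension:highlightTrailingWhitespace',
        factory: () =>
          EditorExtensionRegistry.createImmutableExtension(
            highlightTrailingWhitespace()
          )
      })
    );
  }
};

export default plugin;
```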
## New extension manager

Starting with JupyterLab 3, extensions can be installed via Python packages (or other providers of prebuilt extensions). In JupyterLab 4, building on this feature, the Extension Manager now includes extensions from pypi.org. This removes the build step from the installation of extensions when using the Extension Manager.

Developers can provide an alternative package repository to display their own set of extensions.
## Improved document search

The Search and Replace functionality has been improved with new features when searching in a notebook:

- Highlight matches in rendered markdown cells
- Search in selection
- Multi-line search
- Replace using regex capture-group references
- Replace while preserving case
## UI improvements

Some new elements have been added or changed in the UI:

- Reworked running kernels section
- "Add a new cell" button at the bottom of a notebook
- Dialog to display keyboard shortcuts, as in the Classic Notebook (use <kbd>Ctrl</kbd> + <kbd>Shift</kbd> + <kbd>H</kbd>)
- Display of the first line of cell input and outputs when they are collapsed
## Accessibility improvements

JupyterLab is not yet fully accessible. Currently, we are focused on making Notebook 7 accessible. A big part of the code is shared, though, and the following accessibility improvements are in JupyterLab 4:

- Improved focus and keyboard navigation in the file browser
- More ARIA roles and labels added to UI elements
- Main menu collapses to a hamburger menu if there is not enough space to display all items
## Performance enhancements

JupyterLab is now faster, thanks to the following improvements:

- CSS rules optimization: CSS selectors have been optimized to improve web browser performance when many elements are present on a page.
- Upgrade to CodeMirror 6: especially for notebooks with many cells, the new CodeMirror version is far more efficient than the previous version. Large notebooks should load more quickly.
- Upgrade to MathJax 3: the mathematical equations renderer library has been upgraded from v2 to v3, allowing faster rendering.
- Notebook windowing: by rendering only the parts of a notebook that fit in the web browser viewport, JupyterLab is much more efficient. See the important note below.

Notebook windowing might have side effects, for example if some cell outputs display iframes. Therefore it is not yet the default. But we recommend that users switch to it and report bugs to help us polish it. To test it, you need to set the user setting Notebook > Windowing mode to `full`. If you have issues with notebook rendering, try changing back to `defer` or `none`. (`none` should be used as a last resort, because it disables all optimizations.)
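For a whole deployment, the same setting can be pre-set through JupyterLab's `overrides.json` mechanism; a minimal sketch, assuming the setting is `windowingMode` under the `@jupyterlab/notebook-extension:tracker` plugin:

```json
{
  "@jupyterlab/notebook-extension:tracker": {
    "windowingMode": "full"
  }
}
```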
## Real Time Collaboration

JupyterLab 3.6 already made significant improvements to the Real Time Collaboration (RTC) feature. The feature is now in a separate repository: jupyter_collaboration. The rationale is to limit the dependencies for users who don't need RTC. Separating RTC also helps organizations using JupyterLab that do not meet the specific requirements regarding file content management.

To enable RTC, install the `jupyter-collaboration` package with either `pip` or `conda`:

- with pip: `pip install "jupyter-collaboration>=1.0.0a0"`
- with conda: not yet available

RTC highlights in the standalone `jupyter-collaboration` package, version 1.0.0, include:

- Support for displaying multiple cursors and selections
- Support for registration of new shared model types
## For developers

Here are the main tool updates that will benefit extension authors and developers:

- TypeScript v5
- Yarn v3
- React v18
- Lumino v2

We recommend using Node.js v18 or newer, because older versions will reach end of life in 2023 or earlier (see the Node release schedule).

To ease code migration to JupyterLab 4, developers should review the migration guide. A few existing extensions have already been migrated and can be used as examples.
<!-- <START NEW CHANGELOG ENTRY> -->