
ui-coverage-scenario-tool
UI Coverage Scenario Tool is an innovative, no-overhead solution for tracking and visualizing UI test coverage — directly on your actual application, not static snapshots. The tool collects coverage during UI test execution and generates an interactive HTML report. This report embeds a live iframe of your application and overlays coverage data on top, letting you see exactly what was tested and how.
The report supports filtering by action types (CLICK, FILL, VISIBLE, etc.) or action groups, which is ideal for analyzing specific scenarios or regression areas. Coverage data is collected via the track_element() method. You can view an example of a coverage report generated by the tool here.
If you have any questions or need assistance, feel free to ask @Nikita Filonov.
There are two separate tools, each with its own purpose, strengths, and philosophy:
🟢 ui-coverage-tool — Simple & Instant Coverage. This is the original tool.
Think of ui-coverage-tool as the lightweight, no-frills solution for getting instant test coverage insights with minimal setup.
🔵 ui-coverage-scenario-tool — Scenario-Based & Insightful. This is the advanced version of the original tool, built on top of all its features, adding scenario, page, and transition tracking.
If your team needs deeper visibility into business processes and scenario coverage, ui-coverage-scenario-tool is the way to go.
While ui-coverage-scenario-tool is more powerful, the original ui-coverage-tool still has a place.
They serve different purposes:
| Tool | Best For | Strengths |
|---|---|---|
| ui-coverage-tool | Quick setup, lightweight testing environments | Easy to integrate, minimal overhead |
| ui-coverage-scenario-tool | Structured E2E scenarios, business test cases | Rich detail, scenario linkage, deeper insight |
Keeping them separate allows users to choose based on project needs, team maturity, and desired complexity.
Requires Python 3.11+
pip install ui-coverage-scenario-tool
To enable live interaction and visual highlighting in the report, you must embed the coverage agent into your application.
Add this to your HTML:
<script src="https://nikita-filonov.github.io/ui-coverage-scenario-report/agent.global.js"></script>
That’s it. No other setup required. Without this script, the coverage report will not be able to highlight elements.
Below are examples of how to use the tool with two popular UI automation frameworks: Playwright and Selenium. In both cases, coverage data is automatically saved to the ./coverage-results folder after each call to track_element.
from playwright.sync_api import sync_playwright

# Import the main components of the tool:
# - UICoverageTracker — the main class for tracking coverage
# - SelectorType — type of selector (CSS, XPATH)
# - ActionType — type of action (CLICK, FILL, CHECK_VISIBLE, etc.)
from ui_coverage_scenario_tool import UICoverageTracker, SelectorType, ActionType

# Create an instance of the tracker.
# The `app` value should match the app key in your UI_COVERAGE_SCENARIO_APPS config.
tracker = UICoverageTracker(app="my-ui-app")

with sync_playwright() as playwright:
    browser = playwright.chromium.launch()
    page = browser.new_page()
    page.goto("https://my-ui-app.com/login")

    # Start a new scenario with metadata:
    # - url: a link to the test case in TMS or documentation
    # - name: a descriptive scenario name
    tracker.start_scenario(
        url="http://tms.com/test-cases/1",
        name="Successful login"
    )

    username_input = page.locator("#username-input")
    username_input.fill('user@example.com')

    # Track this interaction with the tracker
    tracker.track_element(
        selector='#username-input',     # The selector (CSS)
        action_type=ActionType.FILL,    # The action type: FILL
        selector_type=SelectorType.CSS  # The selector type: CSS
    )

    login_button = page.locator('//button[@id="login-button"]')
    login_button.click()

    # Track the click action with the tracker
    tracker.track_element(
        selector='//button[@id="login-button"]',  # The selector (XPath)
        action_type=ActionType.CLICK,             # The action type: CLICK
        selector_type=SelectorType.XPATH          # The selector type: XPath
    )

    # End the current scenario.
    # This finalizes and saves the coverage data for this test case.
    tracker.end_scenario()
Quick summary:
- Call tracker.start_scenario() to begin a new scenario.
- Call tracker.track_element() after each user interaction.
- Call tracker.end_scenario() to finalize and save it.

from selenium import webdriver
from ui_coverage_scenario_tool import UICoverageTracker, SelectorType, ActionType
driver = webdriver.Chrome()
# Initialize the tracker with the app key
tracker = UICoverageTracker(app="my-ui-app")
# Start a new scenario
tracker.start_scenario(url="http://tms.com/test-cases/1", name="Successful login")
driver.get("https://my-ui-app.com/login")
username_input = driver.find_element("css selector", "#username-input")
username_input.send_keys("user@example.com")
# Track the fill action
tracker.track_element('#username-input', ActionType.FILL, SelectorType.CSS)
login_button = driver.find_element("xpath", '//button[@id="login-button"]')
login_button.click()
# Track the click action
tracker.track_element('//button[@id="login-button"]', ActionType.CLICK, SelectorType.XPATH)
# End the current scenario
tracker.end_scenario()
This setup shows how to integrate ui-coverage-scenario-tool into a Python Playwright project using a custom tracker fixture. The UICoverageTracker is injected into each test and passed to page objects for interaction tracking.
We define a tracker fixture that:
- Creates a new UICoverageTracker per test
- Starts a scenario named after request.node.name
- Ends the scenario once the test completes

./tests/conftest.py
from typing import Generator, Any

import pytest
from ui_coverage_scenario_tool import UICoverageTracker


@pytest.fixture
def ui_coverage_tracker(request) -> Generator[UICoverageTracker, Any, None]:
    # Instantiate the UI coverage tracker with your app name
    tracker = UICoverageTracker(app="ui-course")

    # Start a new scenario using the test name for traceability
    tracker.start_scenario(
        url=None,  # Optional external URL (e.g., link to TMS); can be set dynamically
        name=request.node.name  # Use pytest's node name (test function name)
    )

    # Provide the tracker to the test and any dependent components
    yield tracker

    # End the scenario after the test has run
    tracker.end_scenario()
This fixture ensures a new, isolated tracker per test, which helps maintain clean test boundaries and supports parallel execution.
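Since url is optional, you can also supply a TMS link per test. Here is a minimal sketch of one way to do that with a custom pytest marker; the tms_url marker name is hypothetical (not part of the tool) and would need to be registered in your pytest.ini to avoid warnings:

import pytest
from ui_coverage_scenario_tool import UICoverageTracker


@pytest.fixture
def ui_coverage_tracker(request):
    tracker = UICoverageTracker(app="ui-course")
    # Look for an optional, project-defined @pytest.mark.tms_url(...) marker
    marker = request.node.get_closest_marker("tms_url")
    tracker.start_scenario(
        url=marker.args[0] if marker else None,  # TMS link when provided
        name=request.node.name,
    )
    yield tracker
    tracker.end_scenario()


@pytest.mark.tms_url("http://tms.com/test-cases/1")
def test_login_with_tms_link(ui_coverage_tracker: UICoverageTracker):
    ...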
Here, we define a LoginPage class that performs a user action and tracks it via the provided UICoverageTracker.
./pages/login_page.py
from playwright.sync_api import Page
from ui_coverage_scenario_tool import ActionType, SelectorType, UICoverageTracker


class LoginPage:
    def __init__(self, page: Page, tracker: UICoverageTracker):
        self.page = page
        self.tracker = tracker

        # Track that the test has opened this page.
        # Useful for identifying which pages were actually visited during test execution.
        self.tracker.track_page(
            url="/auth/login",  # Logical or real URL of the page
            page="LoginPage",   # Human-readable name of the page
            priority=0          # Used to indicate order on the pages graph
        )

    def click_login_button(self):
        # Perform the UI interaction
        self.page.click('#login')

        # Track the interaction using the coverage tool
        self.tracker.track_element(
            selector='#login',              # The CSS selector that was used
            action_type=ActionType.CLICK,   # Type of user action
            selector_type=SelectorType.CSS  # Type of selector used (CSS in this case)
        )

        # Track the navigation that follows this interaction.
        # Helps build a picture of the flow between pages.
        self.tracker.track_transition(from_page="LoginPage", to_page="DashboardPage")
This makes interaction tracking an integral part of your UI logic and encourages traceable, observable behavior within your components and flows. By logging page visits, element interactions, and navigation transitions, your test coverage becomes more transparent, measurable, and auditable.
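For completeness, the transition target can be registered the same way. A minimal sketch (the route and priority here are illustrative, chosen to match the track_transition call above):

from playwright.sync_api import Page
from ui_coverage_scenario_tool import UICoverageTracker


class DashboardPage:
    def __init__(self, page: Page, tracker: UICoverageTracker):
        self.page = page
        self.tracker = tracker
        # Register the dashboard so it appears on the pages graph
        self.tracker.track_page(url="/dashboard", page="DashboardPage", priority=1)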
Here’s a sample test that uses both the page fixture (from Playwright) and the ui_coverage_tracker fixture you defined:
./tests/test_important_feature.py
from pages.login_page import LoginPage
from playwright.sync_api import Page
from ui_coverage_scenario_tool import UICoverageTracker


def test_login(page: Page, ui_coverage_tracker: UICoverageTracker):
    # Pass both the Playwright page and tracker to your page object
    login_page = LoginPage(page, ui_coverage_tracker)

    # Perform the action — tracking happens automatically within the method
    login_page.click_login_button()
The test itself stays clean and focused. Thanks to fixtures, all setup and teardown logic is handled automatically.
Key points:
- Use @pytest.fixture for clean, composable test setup.
- Each test gets its own UICoverageTracker instance and scenario context.

After every call to tracker.track_element(...), the tool automatically stores coverage data in the ./coverage-results/ directory as JSON files. You don’t need to manually manage the folder — it’s created and populated automatically.
./coverage-results/
├── 0a8b92e9-66e1-4c04-aa48-9c8ee28b99fa-element.json
├── 0a235af0-67ae-4b62-a034-a0f551c9ebb5-element.json
└── ...
When you call tracker.start_scenario(...), a new scenario automatically begins. All subsequent actions, such as tracker.track_element(...), will be logged within the context of this scenario. To finalize and save the scenario, you need to call tracker.end_scenario(). This method ends the scenario and saves it to a JSON file.
./coverage-results/
├── 0a8b92e9-66e1-4c04-aa48-9c8ee28b99fa-scenario.json
├── 0a235af0-67ae-4b62-a034-a0f551c9ebb5-scenario.json
└── ...
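As a quick sanity check that results are actually being written, you can count the generated files. A small sketch, using only the file-name patterns visible in the listings above:

from pathlib import Path

results = Path("./coverage-results")
# Element interactions and finalized scenarios are stored as separate JSON files
print(len(list(results.glob("*-element.json"))), "element result files")
print(len(list(results.glob("*-scenario.json"))), "scenario result files")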
Once your tests are complete and coverage data has been collected, generate a final interactive report using this command:
ui-coverage-scenario-tool save-report
This will generate:
- index.html — a standalone HTML report that you can open in any browser or publish (for example, as a CI artifact).
- coverage-report.json — a structured JSON report that can be used for automation, dashboards, or custom analysis.
Important! The ui-coverage-scenario-tool save-report command must be run from the root of your project, where your config files (.env, ui_coverage_scenario_config.yaml, etc.) are located. Running it from another directory may result in missing data or an empty report.
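If you prefer not to run the command by hand, one option (a sketch, assuming pytest is started from the project root and the CLI is on your PATH) is to invoke it automatically when the test session ends:

# ./tests/conftest.py
import subprocess


def pytest_sessionfinish(session, exitstatus):
    # Generate the interactive report once all tests have finished
    subprocess.run(["ui-coverage-scenario-tool", "save-report"], check=False)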
start_scenario
Signature: start_scenario(url: str | None, name: str)
What it does: Begins a new UI coverage scenario. This groups all tracked interactions under a single logical test case.
When to use: Call this at the beginning of each test, typically in a fixture or setup block.
Parameters:
- url: (Optional) External reference to a test case or issue (e.g., a link to TMS or a ticket)
- name: A unique name for the scenario — for example, use request.node.name in pytest to tie it to the test function

end_scenario
Signature: end_scenario()
What it does: Closes the current scenario and finalizes the coverage data collected for that test case.
When to use: Call this at the end of each test, usually in teardown logic or after yield in a fixture.
track_page
Signature: track_page(url: str, page: str, priority: int)
What it does: Marks that a particular page was opened during the test. Useful for identifying what screens were visited and when.
When to use: Call once in the constructor of each Page Object, or at the point where the test navigates to that page.
Parameters:
- url: Logical or actual route (e.g. /auth/login)
- page: Readable identifier like "LoginPage"
- priority: Optional number to order or weigh pages in reports

track_element
Signature: track_element(selector: str, action_type: ActionType, selector_type: SelectorType)
What it does: Tracks interaction with a specific UI element (e.g., click, fill, select).
When to use: Call it immediately after performing the user action — so that the test log reflects actual UI behavior.
Parameters:
- selector: The selector used in the action (e.g. #login)
- action_type: The type of action (CLICK, FILL, etc.)
- selector_type: Type of selector (CSS, XPATH)

track_transition
Signature: track_transition(from_page: str, to_page: str)
What it does: Marks a transition between two logical pages or views.
When to use: After an action that leads to navigation (e.g., after login button click that brings you to dashboard).
Parameters:
- from_page: Page before the transition
- to_page: Page after the transition

These methods work together to give a complete picture of what pages, elements, and flows are covered by your tests — which can be visualized or analyzed later.
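Putting the five methods together, here is a compact sketch of one scenario (the selectors, routes, and page names reuse the illustrative values from the examples above):

from ui_coverage_scenario_tool import UICoverageTracker, ActionType, SelectorType

tracker = UICoverageTracker(app="my-ui-app")

# One scenario per logical test case
tracker.start_scenario(url="http://tms.com/test-cases/1", name="Successful login")

# The test opens the login page...
tracker.track_page(url="/auth/login", page="LoginPage", priority=0)

# ...clicks the login button...
tracker.track_element(
    selector="#login",
    action_type=ActionType.CLICK,
    selector_type=SelectorType.CSS,
)

# ...and lands on the dashboard.
tracker.track_transition(from_page="LoginPage", to_page="DashboardPage")

# Finalize and persist the scenario
tracker.end_scenario()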
You can configure ui-coverage-scenario-tool using a single file: either a YAML, JSON, or .env file. By default, the tool looks for configuration in:
- ui_coverage_scenario_config.yaml
- ui_coverage_scenario_config.json
- .env (for environment variable configuration)

All paths are relative to the current working directory, and configuration is automatically loaded via get_settings().
Important! Files must be in the project root.
.env

All settings can be declared using environment variables. Nested fields use dot notation, and all variables must be prefixed with UI_COVERAGE_SCENARIO_.
Example: .env
# Define the applications that should be tracked. In the case of multiple apps, they can be added in a comma-separated list.
UI_COVERAGE_SCENARIO_APPS='[
{
"key": "my-ui-app",
"url": "https://my-ui-app.com/login",
"name": "My UI App",
"tags": ["UI", "PRODUCTION"],
"repository": "https://github.com/my-ui-app"
}
]'
# The directory where the coverage results will be saved.
UI_COVERAGE_SCENARIO_RESULTS_DIR="./coverage-results"
# The file that stores the history of coverage results.
UI_COVERAGE_SCENARIO_HISTORY_FILE="./coverage-history.json"
# The retention limit for the coverage history. It controls how many historical results to keep.
UI_COVERAGE_SCENARIO_HISTORY_RETENTION_LIMIT=30
# Optional file paths for the HTML and JSON reports.
UI_COVERAGE_SCENARIO_HTML_REPORT_FILE="./index.html"
UI_COVERAGE_SCENARIO_JSON_REPORT_FILE="./coverage-report.json"
Example: ui_coverage_scenario_config.yaml
apps:
  - key: "my-ui-app"
    url: "https://my-ui-app.com/login"
    name: "My UI App"
    tags: [ "UI", "PRODUCTION" ]
    repository: "https://github.com/my-ui-app"

results_dir: "./coverage-results"
history_file: "./coverage-history.json"
history_retention_limit: 30
html_report_file: "./index.html"
json_report_file: "./coverage-report.json"
Example: ui_coverage_scenario_config.json
{
  "apps": [
    {
      "key": "my-ui-app",
      "url": "https://my-ui-app.com/login",
      "name": "My UI App",
      "tags": ["UI", "PRODUCTION"],
      "repository": "https://github.com/my-ui-app"
    }
  ],
  "results_dir": "./coverage-results",
  "history_file": "./coverage-history.json",
  "history_retention_limit": 30,
  "html_report_file": "./index.html",
  "json_report_file": "./coverage-report.json"
}
| Key | Description | Required | Default |
|---|---|---|---|
| apps | List of applications to track. Each must define key, name, and url. | ✅ | — |
| apps[].key | Unique internal identifier for the app. | ✅ | — |
| apps[].url | Entry point URL of the app. | ✅ | — |
| apps[].name | Human-friendly name for the app (used in reports). | ✅ | — |
| apps[].tags | Optional tags used in reports for filtering or grouping. | ❌ | — |
| apps[].repository | Optional repository URL (shown in the report). | ❌ | — |
| results_dir | Directory to store raw coverage result files. | ❌ | ./coverage-results |
| history_file | File to store historical coverage data. | ❌ | ./coverage-history.json |
| history_retention_limit | Maximum number of historical entries to keep. | ❌ | 30 |
| html_report_file | Path to save the final HTML report (if enabled). | ❌ | ./index.html |
| json_report_file | Path to save the raw JSON report (if enabled). | ❌ | ./coverage-report.json |
Once configured, the tool automatically collects raw results into the coverage-results/ directory, maintains the coverage history file, and writes reports to the configured paths. No manual data manipulation is required — the tool handles everything automatically based on your config.
The UI Coverage Tool provides several CLI commands to help with managing and generating coverage reports.
save-report
Generates a detailed coverage report based on the collected result files. This command processes all the raw coverage data stored in the coverage-results directory and generates an HTML report.
Usage:
ui-coverage-scenario-tool save-report
copy-report
This is an internal command mainly used during local development. It updates the report template for the generated coverage reports. It is typically used to ensure that the latest report template is available when you generate new reports.
Usage:
ui-coverage-scenario-tool copy-report
print-config
Prints the resolved configuration to the console. This can be useful for debugging or verifying that the configuration file has been loaded and parsed correctly.
Usage:
ui-coverage-scenario-tool print-config
The command loads the configuration (from ui_coverage_scenario_config.yaml, ui_coverage_scenario_config.json, or .env) and prints the final configuration values to the console.

If the generated report is empty or missing data, check that:
- start_scenario() is called before the test.
- end_scenario() is called after the test.
- track_page(), track_element(), or track_transition() is called during your test.
- ui-coverage-scenario-tool save-report is run from the root directory.
- The coverage-results directory contains .json files.