pytest-json-report 1.5.0 (PyPI)
Pytest JSON Report


This pytest plugin creates test reports as JSON. This makes it easy to process test results in other applications.

It can report a summary, test details, captured output, logs, exception tracebacks and more. Additionally, you can use the available fixtures and hooks to add metadata and customize the report as you like.


Installation

pip install pytest-json-report --upgrade 

Options

Option                          Description
--json-report                   Create JSON report
--json-report-file=PATH         Target path to save JSON report (use "none" to not save the report)
--json-report-summary           Just create a summary without per-test details
--json-report-omit=FIELD_LIST   List of fields to omit in the report (choose from: collectors, log, traceback, streams, warnings, keywords)
--json-report-indent=LEVEL      Pretty-print JSON with specified indentation level
--json-report-verbosity=LEVEL   Set verbosity (default is value of --verbosity)

Usage

Just run pytest with --json-report. The report is saved in .report.json by default.

$ pytest --json-report -v tests/
$ cat .report.json
{"created": 1518371686.7981803, ... "tests":[{"nodeid": "test_foo.py", "outcome": "passed", ...}, ...]}
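Since the report is plain JSON, it can be processed with nothing but the standard library. As a minimal sketch (using a hand-written, abridged report string rather than a real pytest run), this groups test node IDs by outcome:

```python
import json

# Abridged, hand-written sample in the shape produced by --json-report;
# a real report contains many more fields.
report_text = """
{
    "created": 1518371686.7981803,
    "exitcode": 1,
    "tests": [
        {"nodeid": "test_foo.py::test_pass", "outcome": "passed"},
        {"nodeid": "test_foo.py::test_fail", "outcome": "failed"}
    ]
}
"""

report = json.loads(report_text)

# Group the test node IDs by their outcome.
by_outcome = {}
for test in report["tests"]:
    by_outcome.setdefault(test["outcome"], []).append(test["nodeid"])

print(by_outcome["failed"])  # ['test_foo.py::test_fail']
```

The same pattern works on a saved `.report.json` by replacing `json.loads` with `json.load(open(".report.json"))`.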

If you just need to know how many tests passed or failed and don't care about details, you can produce a summary only:

$ pytest --json-report --json-report-summary

Many fields can be omitted to keep the report size small. E.g., this will leave out keywords and stdout/stderr output:

$ pytest --json-report --json-report-omit keywords streams

If you don't want the report to be saved, specify none as the target file name:

$ pytest --json-report --json-report-file none

Advanced usage

Metadata

The easiest way to add your own metadata to a test item is by using the json_metadata test fixture:

def test_something(json_metadata):
    json_metadata['foo'] = {"some": "thing"}
    json_metadata['bar'] = 123

Or use the pytest_json_runtest_metadata hook (in your conftest.py) to add metadata based on the current test run. The dict returned will automatically be merged with any existing metadata. E.g., this adds the start and stop time of each test's call stage:

def pytest_json_runtest_metadata(item, call):
    if call.when != 'call':
        return {}
    return {'start': call.start, 'stop': call.stop}

You can also add metadata using pytest-metadata's --metadata switch, which adds metadata to the report's environment section, but not to a specific test item. Make sure all your metadata is JSON-serializable.

A note on hooks

If you use a pytest_json_* hook while the plugin is not installed or not active (i.e., pytest is run without --json-report), pytest doesn't recognize the hook and may fail with an internal error like this:

INTERNALERROR> pluggy.manager.PluginValidationError: unknown hook 'pytest_json_runtest_metadata' in plugin <module 'conftest' from 'conftest.py'>

You can avoid this by declaring the hook implementation optional:

import pytest
@pytest.hookimpl(optionalhook=True)
def pytest_json_runtest_metadata(item, call):
    ...

Modifying the report

You can modify the entire report before it's saved by using the pytest_json_modifyreport hook.

Just implement the hook in your conftest.py, e.g.:

def pytest_json_modifyreport(json_report):
    # Add a key to the report
    json_report['foo'] = 'bar'
    # Delete the summary from the report
    del json_report['summary']

After pytest_sessionfinish, the report object is also directly available via config._json_report.report, so you can access it from a built-in hook:

def pytest_sessionfinish(session):
    report = session.config._json_report.report
    print('exited with', report['exitcode'])

If you really want to change how the result of a test stage run is turned into JSON, you can use the pytest_json_runtest_stage hook. It takes a TestReport and returns a JSON-serializable dict:

def pytest_json_runtest_stage(report):
    return {'outcome': report.outcome}

Direct invocation

You can use the plugin when invoking pytest.main() directly from code:

import pytest
from pytest_jsonreport.plugin import JSONReport

plugin = JSONReport()
pytest.main(['--json-report-file=none', 'test_foo.py'], plugins=[plugin])

You can then access the report object:

print(plugin.report)

And save the report manually:

plugin.save_report('/tmp/my_report.json')

Format

The JSON report contains metadata of the session, a summary, collectors, tests and warnings. You can find a sample report in sample_report.json.

Key           Description
created       Report creation date. (Unix time)
duration      Session duration in seconds.
exitcode      Process exit code as listed in the pytest docs. The exit code is a quick way to tell if any tests failed, an internal error occurred, etc.
root          Absolute root path from which the session was started.
environment   Environment entry.
summary       Summary entry.
collectors    Collectors entry. (absent if --json-report-summary or if no collectors)
tests         Tests entry. (absent if --json-report-summary)
warnings      Warnings entry. (absent if --json-report-summary or if no warnings)
Example
{
    "created": 1518371686.7981803,
    "duration": 0.1235666275024414,
    "exitcode": 1,
    "root": "/path/to/tests",
    "environment": ENVIRONMENT,
    "summary": SUMMARY,
    "collectors": COLLECTORS,
    "tests": TESTS,
    "warnings": WARNINGS,
}

Summary

Number of outcomes per category and the total number of test items.

Key          Description
collected    Total number of tests collected.
total        Total number of tests run.
deselected   Total number of tests deselected. (absent if number is 0)
<outcome>    Number of tests with that outcome. (absent if number is 0)
Example
{
    "collected": 10,
    "passed": 2,
    "failed": 3,
    "xfailed": 1,
    "xpassed": 1,
    "error": 2,
    "skipped": 1,
    "total": 10
}
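Because outcome counts are absent when they are zero, code reading the summary should use defaults. A small sketch computing a pass rate from the summary values above:

```python
# Summary entry as found under report["summary"] (values from the example above).
summary = {
    "collected": 10, "passed": 2, "failed": 3, "xfailed": 1,
    "xpassed": 1, "error": 2, "skipped": 1, "total": 10,
}

# Outcome counts are absent when zero, so read them with .get() and a default.
passed = summary.get("passed", 0)
failed = summary.get("failed", 0) + summary.get("error", 0)
pass_rate = passed / summary["total"]

print(f"{passed} passed, {failed} failed/errored, pass rate {pass_rate:.0%}")
```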

Environment

The environment section is provided by pytest-metadata. All metadata given by that plugin will be added here, so you need to make sure it is JSON-serializable.

Example
{
    "Python": "3.6.4",
    "Platform": "Linux-4.56.78-9-ARCH-x86_64-with-arch",
    "Packages": {
        "pytest": "3.4.0",
        "py": "1.5.2",
        "pluggy": "0.6.0"
    },
    "Plugins": {
        "json-report": "0.4.1",
        "xdist": "1.22.0",
        "metadata": "1.5.1",
        "forked": "0.2",
        "cov": "2.5.1"
    },
    "foo": "bar", # Custom metadata entry passed via pytest-metadata
}

Collectors

A list of collector nodes. These are useful to check what tests are available without running them, or to debug an error during test discovery.

Key        Description
nodeid     ID of the collector node. (See docs) The root node has an empty node ID.
outcome    Outcome of the collection. (Not the test outcome!)
result     Nodes collected by the collector.
longrepr   Representation of the collection error. (absent if no error occurred)

The result is a list of the collected nodes:

Key          Description
nodeid       ID of the node.
type         Type of the collected node.
lineno       Line number. (absent if not applicable)
deselected   true if the test is deselected. (absent if not deselected)
Example
[
    {
        "nodeid": "",
        "outcome": "passed",
        "result": [
            {
                "nodeid": "test_foo.py",
                "type": "Module"
            }
        ]
    },
    {
        "nodeid": "test_foo.py",
        "outcome": "passed",
        "result": [
            {
                "nodeid": "test_foo.py::test_pass",
                "type": "Function",
                "lineno": 24,
                "deselected": true
            },
            ...
        ]
    },
    {
        "nodeid": "test_bar.py",
        "outcome": "failed",
        "result": [],
        "longrepr": "/usr/lib/python3.6 ... invalid syntax"
    },
    ...
]
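To list the individual tests that were discovered without running them, you can walk the collectors and keep the Function nodes. A minimal sketch over hand-written sample data in the shape shown above:

```python
# Collectors list in the shape shown above (abridged, hand-written sample).
collectors = [
    {"nodeid": "", "outcome": "passed",
     "result": [{"nodeid": "test_foo.py", "type": "Module"}]},
    {"nodeid": "test_foo.py", "outcome": "passed",
     "result": [
         {"nodeid": "test_foo.py::test_pass", "type": "Function", "lineno": 24},
         {"nodeid": "test_foo.py::test_fail", "type": "Function", "lineno": 50},
     ]},
]

# Keep only Function nodes: the individual tests that were discovered.
tests = [node["nodeid"]
         for collector in collectors
         for node in collector["result"]
         if node["type"] == "Function"]

print(tests)  # ['test_foo.py::test_pass', 'test_foo.py::test_fail']
```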

Tests

A list of test nodes. Each completed test stage produces a stage object (setup, call, teardown) with its own outcome.

Key                       Description
nodeid                    ID of the test node.
lineno                    Line number where the test starts.
keywords                  List of keywords and markers associated with the test.
outcome                   Outcome of the test run.
{setup, call, teardown}   Test stage entry. To find the error in a failed test you need to check all stages. (absent if stage didn't run)
metadata                  Metadata item. (absent if no metadata)
Example
[
    {
        "nodeid": "test_foo.py::test_fail",
        "lineno": 50,
        "keywords": [
            "test_fail",
            "test_foo.py",
            "test_foo0"
        ],
        "outcome": "failed",
        "setup": TEST_STAGE,
        "call": TEST_STAGE,
        "teardown": TEST_STAGE,
        "metadata": {
            "foo": "bar",
        }
    },
    ...
]
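Since the error of a failed test can live in any stage, a consumer has to check setup, call, and teardown. A minimal sketch over an abridged, hand-written test entry:

```python
# A test entry in the shape shown above, with abridged stage objects.
test = {
    "nodeid": "test_foo.py::test_fail",
    "outcome": "failed",
    "setup": {"outcome": "passed"},
    "call": {"outcome": "failed",
             "crash": {"path": "test_foo.py", "lineno": 54,
                       "message": "TypeError: unsupported operand type(s)"}},
    "teardown": {"outcome": "passed"},
}

# Stages may be absent if they didn't run, so check each one that exists.
failing_stage = None
for stage_name in ("setup", "call", "teardown"):
    stage = test.get(stage_name)
    if stage and stage["outcome"] != "passed":
        failing_stage = stage_name
        print(stage_name, "->", stage["crash"]["message"])
```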

Test stage

A test stage item.

Key         Description
duration    Duration of the test stage in seconds.
outcome     Outcome of the test stage. (can be different from the overall test outcome)
crash       Crash entry. (absent if no error occurred)
traceback   List of traceback entries. (absent if no error occurred; affected by --tb option)
stdout      Standard output. (absent if none available)
stderr      Standard error. (absent if none available)
log         Log entry. (absent if none available)
longrepr    Representation of the error. (absent if no error occurred; format affected by --tb option)
Example
{
    "duration": 0.00018835067749023438,
    "outcome": "failed",
    "crash": {
        "path": "/path/to/tests/test_foo.py",
        "lineno": 54,
        "message": "TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'"
    },
    "traceback": [
        {
            "path": "test_foo.py",
            "lineno": 65,
            "message": ""
        },
        {
            "path": "test_foo.py",
            "lineno": 63,
            "message": "in foo"
        },
        {
            "path": "test_foo.py",
            "lineno": 63,
            "message": "in <listcomp>"
        },
        {
            "path": "test_foo.py",
            "lineno": 54,
            "message": "TypeError"
        }
    ],
    "stdout": "foo\nbar\n",
    "stderr": "baz\n",
    "log": LOG,
    "longrepr": "def test_fail_nested():\n ..."
}

Log

A list of log records. The fields of a log record are the logging.LogRecord attributes, with the exception that the fields exc_info and args are always empty and msg contains the formatted log message.

You can apply logging.makeLogRecord() on a log record to convert it back to a logging.LogRecord object.

Example
[
    {
        "name": "root",
        "msg": "This is a warning.",
        "args": null,
        "levelname": "WARNING",
        "levelno": 30,
        "pathname": "/path/to/tests/test_foo.py",
        "filename": "test_foo.py",
        "module": "test_foo",
        "exc_info": null,
        "exc_text": null,
        "stack_info": null,
        "lineno": 8,
        "funcName": "foo",
        "created": 1519772464.291738,
        "msecs": 291.73803329467773,
        "relativeCreated": 332.90839195251465,
        "thread": 140671803118912,
        "threadName": "MainThread",
        "processName": "MainProcess",
        "process": 31481
    },
    ...
]
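The round trip back to a logging.LogRecord is a one-liner with the standard library. A sketch using an abridged record dict from the example above:

```python
import logging

# A log record dict as it appears in the report (abridged from the example).
record_dict = {
    "name": "root",
    "msg": "This is a warning.",
    "args": None,
    "levelname": "WARNING",
    "levelno": 30,
    "pathname": "/path/to/tests/test_foo.py",
    "lineno": 8,
    "funcName": "foo",
}

# logging.makeLogRecord() turns the dict back into a LogRecord object.
record = logging.makeLogRecord(record_dict)

print(record.getMessage())  # This is a warning.
print(record.levelname)     # WARNING
```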

Warnings

A list of warnings that occurred during the session. (See the pytest docs on warnings.)

Key        Description
filename   File name.
lineno     Line number.
message    Warning message.
when       When the warning was captured. ("config", "collect" or "runtest" as listed here)
Example
[
    {
        "code": "C1",
        "path": "/path/to/tests/test_foo.py",
        "nodeid": "test_foo.py::TestFoo",
        "message": "cannot collect test class 'TestFoo' because it has a __init__ constructor"
    }
]
Related tools

  • pytest-json has some great features but appears to be unmaintained. I borrowed some ideas and test cases from there.

  • tox has a switch to create a JSON report including a test result summary. However, it just provides the overall outcome without any per-test details.
