A ``pytest`` fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer.
See calibration_ and FAQ_.
::

    pip install pytest-benchmark
For latest release: `pytest-benchmark.readthedocs.org/en/stable <http://pytest-benchmark.readthedocs.org/en/stable/>`_.

For master branch (may include documentation fixes): `pytest-benchmark.readthedocs.io/en/latest <http://pytest-benchmark.readthedocs.io/en/latest/>`_.
But first, a prologue:
This plugin tightly integrates into pytest. To use this effectively you should know a thing or two about pytest first.
Take a look at the `introductory material <http://docs.pytest.org/en/latest/getting-started.html>`_
or watch `talks <http://docs.pytest.org/en/latest/talks.html>`_.
A few notes:
* This plugin benchmarks functions and only that. If you want to measure blocks of code
  or whole programs you will need to write a wrapper function.
* In a test you can only benchmark one function. If you want to benchmark many functions, write more tests or
  use `parametrization <http://docs.pytest.org/en/latest/parametrize.html>`_ (see the sketch after this list).
* To run the benchmarks you simply use `pytest` to run your "tests". The plugin will automatically do the
benchmarking and generate a result table. Run ``pytest --help`` for more details.
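For example, a single parametrized test can benchmark the same code over several inputs. A minimal sketch (the test name and durations are illustrative):

.. code-block:: python

    import time

    import pytest

    @pytest.mark.parametrize("duration", [0.000001, 0.0001])
    def test_sleep_variants(benchmark, duration):
        # Each parameter value becomes a separate entry in the result table.
        benchmark(time.sleep, duration)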
This plugin provides a ``benchmark`` fixture. This fixture is a callable object that will benchmark any function passed to it.
Example:

.. code-block:: python

    import time

    def something(duration=0.000001):
        """
        Function that needs some serious benchmarking.
        """
        time.sleep(duration)
        # You may return anything you want, like the result of a computation
        return 123

    def test_my_stuff(benchmark):
        # benchmark something
        result = benchmark(something)

        # Extra code, to verify that the run completed correctly.
        # Sometimes you may want to check the result, fast functions
        # are no good if they return incorrect results :-)
        assert result == 123
You can also pass extra arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.02)
Or even keyword arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, duration=0.02)
Another pattern seen in the wild, which is not recommended for micro-benchmarks (very fast code) but may be convenient:

.. code-block:: python

    def test_my_stuff(benchmark):
        @benchmark
        def something():  # unnecessary function call
            time.sleep(0.000001)
A better way is to just benchmark the final function:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.000001)  # way more accurate results!
If you need fine control over how the benchmark is run (like a setup function, or exact control of iterations and rounds), there's a special mode - pedantic_:

.. code-block:: python

    def my_special_setup():
        ...

    def test_with_setup(benchmark):
        benchmark.pedantic(something, setup=my_special_setup, args=(1, 2, 3), kwargs={'foo': 'bar'}, iterations=10, rounds=100)
Normal run:

.. image:: https://github.com/ionelmc/pytest-benchmark/raw/master/docs/screenshot.png
    :alt: Screenshot of pytest summary

Compare mode (``--benchmark-compare``):

.. image:: https://github.com/ionelmc/pytest-benchmark/raw/master/docs/screenshot-compare.png
    :alt: Screenshot of pytest summary in compare mode

Histogram (``--benchmark-histogram``):

.. image:: https://cdn.rawgit.com/ionelmc/pytest-benchmark/94860cc8f47aed7ba4f9c7e1380c2195342613f6/docs/sample-tests_test_normal.py_test_xfast_parametrized%5B0%5D.svg
    :alt: Histogram sample
Also, it has `nice tooltips <https://cdn.rawgit.com/ionelmc/pytest-benchmark/master/docs/sample.svg>`_.
To run all the tests run::

    tox
.. _FAQ: http://pytest-benchmark.readthedocs.org/en/latest/faq.html
.. _calibration: http://pytest-benchmark.readthedocs.org/en/latest/calibration.html
.. _pedantic: http://pytest-benchmark.readthedocs.org/en/latest/pedantic.html
Changelog

* Fixed the ``TypeError: import_path() missing 1 required keyword-only argument: 'consider_namespace_packages'``
  issue that occurred when `nbmake <https://pypi.org/project/nbmake/>`_ was enabled.
  Unfortunately this sets the minimum supported pytest version to 8.1.
* Dropped support for the now-EOL Python 3.8. Also moved the test suite to only test the latest pytest versions (8.3.x).
* Fixed errors in the generated CSV report for parametrized benchmark tests
  (issue `#268 <https://github.com/ionelmc/pytest-benchmark/issues/268>`_).
  Contributed by Johnny Huang in `#269 <https://github.com/ionelmc/pytest-benchmark/pull/269>`_.
* Added the ``--benchmark-time-unit`` CLI option for overriding the measurement unit used for display.
  Contributed by Tony Kuo in `#257 <https://github.com/ionelmc/pytest-benchmark/pull/257>`_.
* Fixed spelling in some help texts.
  Contributed by Eugeniy in `#267 <https://github.com/ionelmc/pytest-benchmark/pull/267>`_.
* Added new cprofile options:

  * ``--benchmark-cprofile-loops=LOOPS`` - previously profiling only ran the function once, this allows customization.
  * ``--benchmark-cprofile-top=COUNT`` - allows showing more rows.
  * ``--benchmark-cprofile-dump=[FILENAME-PREFIX]`` - allows saving to a file (that you can load in
    `snakeviz <https://pypi.org/project/snakeviz/>`_, `RunSnakeRun <https://pypi.org/project/RunSnakeRun/>`_ or other tools).
* Removed hidden dependency on `py.path <https://pypi.org/project/py/>`_ (replaced with ``pathlib``).
* Removed use of the ``py`` library (that was not properly specified as a dependency anyway).
* Skip tests in ``test_utils.py`` if the appropriate VCS is not available. Also fixed a typo.
  Contributed by Sam James in `#211 <https://github.com/ionelmc/pytest-benchmark/pull/211>`_.
* Switched to ``pytest.hookimpl`` and ``pytest.hookspec`` to configure hooks.
  Contributed by Florian Bruhin in `#224 <https://github.com/ionelmc/pytest-benchmark/pull/224>`_.
* Fixed an issue that occurred when ``--benchmark-disable`` is used.
  Fixes `#205 <https://github.com/ionelmc/pytest-benchmark/issues/205>`_.
  Contributed by Friedrich Delgado in `#207 <https://github.com/ionelmc/pytest-benchmark/pull/207>`_.
* Republished with updated changelog.
  I intended to publish a 3.3.0 release but I messed it up because bumpversion doesn't work well with pre-commit
  apparently... thus 3.4.0 was set in by accident.
* Improvements for when ``--benchmark-verbose`` is used.
  Contributed by Dimitris Rozakis in `#149 <https://github.com/ionelmc/pytest-benchmark/pull/149>`_.
* See `#189 <https://github.com/ionelmc/pytest-benchmark/pull/189>`_.
* Changed ``--benchmark-skip`` and ``--benchmark-only`` to apply early in the collection phase.
  This means skipped tests won't make pytest run fixtures for said tests unnecessarily, but unfortunately this also means
  the skipping behavior will be applied to any test that requires a "benchmark" fixture, regardless of whether it comes from
  pytest-benchmark or not.
  *MAY BE BACKWARDS INCOMPATIBLE*
* Added ``--benchmark-quiet`` - option to disable reporting and other information output.
* Fixed errors when ``--benchmark-disable`` and save options are used.
  Fixes `#199 <https://github.com/ionelmc/pytest-benchmark/issues/199>`_.
* The ``PerformanceRegression`` exception no longer inherits ``pytest.UsageError`` (apparently a final class).
* See `#151 <https://github.com/ionelmc/pytest-benchmark/pull/151>`_.
* Fixed ``pytest_benchmark.utils.clonefunc`` to work on Python 3.8.
* Added ``pytest_benchmark.__version__``.
* Fixed the ``trial`` x-axis histogram label (contributed by Ken Crowell in
  `#95 <https://github.com/ionelmc/pytest-benchmark/pull/95>`_).
* See `#103 <https://github.com/ionelmc/pytest-benchmark/pull/103>`_.
* See `#129 <https://github.com/ionelmc/pytest-benchmark/pull/129>`_ and
  `#130 <https://github.com/ionelmc/pytest-benchmark/pull/130>`_.
* See `#97 <https://github.com/ionelmc/pytest-benchmark/pull/97>`_,
  `#105 <https://github.com/ionelmc/pytest-benchmark/pull/105>`_,
  `#110 <https://github.com/ionelmc/pytest-benchmark/pull/110>`_,
  `#111 <https://github.com/ionelmc/pytest-benchmark/pull/111>`_,
  `#115 <https://github.com/ionelmc/pytest-benchmark/pull/115>`_,
  `#123 <https://github.com/ionelmc/pytest-benchmark/pull/123>`_,
  `#131 <https://github.com/ionelmc/pytest-benchmark/pull/131>`_ and
  `#140 <https://github.com/ionelmc/pytest-benchmark/pull/140>`_.
* Changes to the ``pytest_benchmark_update_machine_info`` hook. Contributed by Alex Ford in
  `#109 <https://github.com/ionelmc/pytest-benchmark/pull/109>`_.
* Fixes related to ``--benchmark-disable``. Contributed by Francesco Ballarin in
  `#113 <https://github.com/ionelmc/pytest-benchmark/pull/113>`_.
* See `#114 <https://github.com/ionelmc/pytest-benchmark/pull/114>`_.
* Allowed using ``--benchmark-skip`` and ``--benchmark-only`` together, with the latter having priority.
  Contributed by Ofek Lev in `#116 <https://github.com/ionelmc/pytest-benchmark/pull/116>`_.
* See `#134 <https://github.com/ionelmc/pytest-benchmark/pull/134>`_,
  `#136 <https://github.com/ionelmc/pytest-benchmark/pull/136>`_ and
  `#138 <https://github.com/ionelmc/pytest-benchmark/pull/138>`_.
* Fixes related to the ``ops`` field (see `#81 <https://github.com/ionelmc/pytest-benchmark/issues/81>`_).
* See `#82 <https://github.com/ionelmc/pytest-benchmark/issues/82>`_.
* Added an "operations per second" (``ops`` field in ``Stats``) metric --
  it shows the call rate of the code being tested. Contributed by Alexey Popravka in
  `#78 <https://github.com/ionelmc/pytest-benchmark/pull/78>`_.
* Added a ``time`` field in ``commit_info``. Contributed by "varac" in
  `#71 <https://github.com/ionelmc/pytest-benchmark/pull/71>`_.
* Added an ``author_time`` field in ``commit_info``. Contributed by "varac" in
  `#75 <https://github.com/ionelmc/pytest-benchmark/pull/75>`_.
* Added a ``--benchmark-netrc`` option to use credentials from a netrc file when
  storing data to elasticsearch. Contributed by Andre Bianchi in
  `#73 <https://github.com/ionelmc/pytest-benchmark/pull/73>`_.
* See `#74 <https://github.com/ionelmc/pytest-benchmark/pull/74>`_.
* Removed ``git`` and ``hg`` as system dependencies when guessing the project name.
* ``machine_info`` now contains more detailed information about the CPU, in
  particular the exact model. Contributed by Antonio Cuni in `#61 <https://github.com/ionelmc/pytest-benchmark/pull/61>`_.
* Added ``benchmark.extra_info``, which you can use to save arbitrary stuff in
  the JSON. Contributed by Antonio Cuni in the same PR as above.
* See `#68 <https://github.com/ionelmc/pytest-benchmark/pull/68>`_.
* Fixed ``commit_info`` when not running in the root of the repository. Contributed by Vara Canero in
  `#69 <https://github.com/ionelmc/pytest-benchmark/pull/69>`_.
* Fixed the ``--storage``/``--verbose`` options in the CLI.
* Added a ``pytest-benchmark`` CLI bin (in addition to ``py.test-benchmark``) to match the madness in pytest.
* Improved ``--help`` in the CLI.
* Fixes related to ``commit_info`` in JSON outputs.
* Fixes related to ``--benchmark-columns``.
* Added the ``--benchmark-columns`` command line option. It selects what columns are displayed in the result table. Contributed by
  Antonio Cuni in `#34 <https://github.com/ionelmc/pytest-benchmark/pull/34>`_.
* Added support for grouping by specific test parametrization (``--benchmark-group-by=param:NAME`` where ``NAME`` is your
  param name). Contributed by Antonio Cuni in `#37 <https://github.com/ionelmc/pytest-benchmark/pull/37>`__.
* Added support for ``name`` or ``fullname`` in ``--benchmark-sort``.
  Contributed by Antonio Cuni in `#37 <https://github.com/ionelmc/pytest-benchmark/pull/37>`_.
* Changed the signature of the ``pytest_benchmark_generate_json`` hook to take 2 new arguments: ``machine_info`` and ``commit_info``.
* Changed ``--benchmark-histogram`` to plot groups instead of name-matching runs.
* Changed ``--benchmark-histogram`` to plot exactly what you compared against. Now it's 1:1 with the compare feature.
* Changed ``--benchmark-compare`` to allow globs. You can compare against all the previous runs now.
* Changed ``--benchmark-group-by`` to allow multiple values separated by comma.
  Example: ``--benchmark-group-by=param:foo,param:bar``
* Added a command line tool to compare previous data: ``py.test-benchmark``. It has two commands (see the usage sketch after this list):

  * ``list`` - Lists all the available files.
  * ``compare`` - Displays result tables. Takes options:

    * ``--sort=COL``
    * ``--group-by=LABEL``
    * ``--columns=LABELS``
    * ``--histogram=[FILENAME-PREFIX]``
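As an illustration (not part of the original changelog), assuming two saved runs with the IDs ``0001`` and ``0002`` (the IDs are made up), a comparison session could look like this; recent versions also install the same tool as ``pytest-benchmark``::

    pytest-benchmark list
    pytest-benchmark compare 0001 0002 --sort=min --columns=min,max,mean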
* Added ``--benchmark-cprofile`` that profiles the last run of the benchmarked function. Contributed by Petr Šebek.
* Changed ``--benchmark-storage`` so it now allows elasticsearch storage. It allows storing data to elasticsearch instead of
  json files. Contributed by Petr Šebek in `#58 <https://github.com/ionelmc/pytest-benchmark/pull/58>`_.
* Improved the ``--help`` text for ``--benchmark-histogram``, ``--benchmark-save`` and ``--benchmark-autosave``.
* Changes affecting the customization of ``pytest_benchmark_generate_json`` in your ``conftest.py``.
* Added warnings with the ``WBENCHMARK-C`` (compare mode issues) and ``WBENCHMARK-U`` (usage issues) categories.
* Warnings are only shown when ``--benchmark-verbose`` is used. They will still always be shown in the
  pytest-warnings section.
* Changes related to the ``WBENCHMARK-U1`` warning.
* Changed ``--benchmark-warmup`` to take an optional value and automatically activate on PyPy (default value is ``auto``).
  *MAY BE BACKWARDS INCOMPATIBLE*
* Added a ``--benchmark-disable`` option. It's automatically activated when xdist is on.
* If ``statistics`` can't be imported then ``--benchmark-disable`` is automatically activated (instead
  of ``--benchmark-skip``). *BACKWARDS INCOMPATIBLE*
* Replaced the deprecated ``__multicall__`` with the new hookwrapper system.
* Improvements to ``--benchmark-max-time``.
* A missing ``statistics`` module doesn't create hard failures anymore. Benchmarks are automatically skipped if an import
  failure occurs. This would happen on Python 3.2 (or an earlier Python 3).
* Fixed behavior when ``git``/``hg`` is not installed.
* Added JSON report saving (the ``--benchmark-json`` command line argument). Based on initial work from Dave Collins in
  `#8 <https://github.com/ionelmc/pytest-benchmark/pull/8>`_.
* Added benchmark data storage (the ``--benchmark-save`` and ``--benchmark-autosave`` command line arguments).
* Added comparison to previous runs (the ``--benchmark-compare`` command line argument).
* Added performance regression checks (the ``--benchmark-compare-fail`` command line argument).
* Added the possibility to group by various parts of the test name (the ``--benchmark-compare-group-by`` command line argument).
* Added historical plotting (the ``--benchmark-histogram`` command line argument).
* Added an option to fine-tune the calibration (the ``--benchmark-calibration-precision`` command line argument and the
  ``calibration_precision`` marker option).
* Changed ``benchmark_weave`` to no longer be a context manager. Cleanup is performed automatically.
  *BACKWARDS INCOMPATIBLE*
* Added the ``benchmark.weave`` method (an alternative to the ``benchmark_weave`` fixture).
* Added new hooks to allow customization (an illustrative usage sketch follows this list):

  * ``pytest_benchmark_generate_machine_info(config)``
  * ``pytest_benchmark_update_machine_info(config, info)``
  * ``pytest_benchmark_generate_commit_info(config)``
  * ``pytest_benchmark_update_commit_info(config, info)``
  * ``pytest_benchmark_group_stats(config, benchmarks, group_by)``
  * ``pytest_benchmark_generate_json(config, benchmarks, include_data)``
  * ``pytest_benchmark_update_json(config, benchmarks, output_json)``
  * ``pytest_benchmark_compare_machine_info(config, benchmarksession, machine_info, compared_benchmark)``
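As an illustration only (not part of the original changelog), such a hook can be implemented in ``conftest.py``; the ``build_id`` field below is a made-up example of extra data you might attach to the saved JSON:

.. code-block:: python

    # conftest.py -- minimal sketch of customizing the saved JSON via a hook
    import os

    def pytest_benchmark_update_json(config, benchmarks, output_json):
        # Tag the report with a hypothetical CI build identifier.
        output_json["build_id"] = os.environ.get("BUILD_ID", "local")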
* Changed the timing code.
* Added pedantic mode via ``benchmark.pedantic()``. This mode disables calibration and allows a setup function.
* The tests don't use ``cram`` anymore.
* Added the ``--benchmark-warmup`` option.
* Made ``warmup_iterations`` available as a marker argument (eg: ``@pytest.mark.benchmark(warmup_iterations=1234)``).
* Fixed ``--benchmark-verbose``'s printouts to work properly with output capturing.
* Fixed ``ValueError: no option named 'dist'`` when xdist wasn't installed.
* Added the ``benchmark_weave`` experimental fixture.
* Various fixes for when the ``xdist`` plugin is active.
* Moved the warmup into the calibration phase. Solves issues with benchmarking on PyPy.
  Added a ``--benchmark-warmup-iterations`` option to fine-tune that.
* Improvements to the ``--help`` section and to the ``--benchmark-verbose`` output.
* See `#4 <https://github.com/ionelmc/pytest-benchmark/pull/4>`_.