pytest-memray
.. module:: pytest_memray

pytest-memray API
=================

Types
-----

.. autoclass:: LeaksFilterFunction()
   :members: __call__
   :show-inheritance:

.. autoclass:: Stack()
   :members:

.. autoclass:: StackFrame()
   :members:
@@ -12,4 +12,6 @@ """Sphinx configuration file for pytest-memray documentation.""" | ||
| extensions = [ | ||
| "sphinx.ext.autodoc", | ||
| "sphinx.ext.extlinks", | ||
| "sphinx.ext.githubpages", | ||
| "sphinx.ext.intersphinx", | ||
| "sphinxarg.ext", | ||
@@ -40,3 +42,12 @@ "sphinx_inline_tabs", | ||
| # Try to resolve Sphinx references as Python objects by default. This means we | ||
| # don't need :func: or :class: etc, which keep docstrings more human readable. | ||
| default_role = "py:obj" | ||
| # Automatically link to Python standard library types. | ||
| intersphinx_mapping = { | ||
| "python": ("https://docs.python.org/3", None), | ||
| } | ||
| def _get_output(self): | ||
@@ -43,0 +54,0 @@ code, out = prev(self) |
@@ -24,2 +24,11 @@ Configuration
``--stacks=STACKS``
   Show the N most recent stack entries when showing tracebacks of memory allocations.

``--native``
   Include native frames when showing tracebacks of memory allocations (will be slower).

``--trace-python-allocators``
   Record allocations made by the Pymalloc allocator (will be slower).

.. tab:: Config file options
@@ -35,1 +44,10 @@
   Hide the memray summary at the end of the execution.

``stacks(int)``
   Show the N most recent stack entries when showing tracebacks of memory allocations.

``native(bool)``
   Include native frames when showing tracebacks of memory allocations (will be slower).

``trace_python_allocators(bool)``
   Record allocations made by the Pymalloc allocator (will be slower).
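Taken together, these options can also be set from a project's config file. A minimal ``pytest.ini`` sketch (the values shown are illustrative, not recommendations):

```ini
[pytest]
; Activate the plugin without passing --memray on the command line.
memray = True
; Show up to 10 stack entries per allocation traceback.
stacks = 10
; Include native frames and pymalloc allocations (both are slower).
native = True
trace_python_allocators = True
```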
@@ -16,2 +16,3 @@ pytest-memray
   configuration
   api
   news
@@ -25,3 +25,3 @@ Usage
By default, the plugin will track allocations at the high watermark in all tests.
This information is reported after the test run ends:
@@ -35,31 +35,113 @@
This plugin provides `markers <https://docs.pytest.org/en/latest/example/markers.html>`__
that can be used to enforce additional checks and validations on tests.

.. important:: These markers do nothing when the plugin is not enabled.

.. py:function:: pytest.mark.limit_memory(memory_limit: str)

    Fail the execution of the test if the test allocates more memory than allowed.

    When this marker is applied to a test, it will cause the test to fail if the
    execution of the test allocates more memory than allowed. It takes a single
    argument with a string indicating the maximum memory that the test can
    allocate.

    The format for the string is ``<NUMBER> ([KMGTP]B|B)``. The marker will raise
    ``ValueError`` if the string format cannot be parsed correctly.

    .. warning::

        As the Python interpreter has its own
        `object allocator <https://docs.python.org/3/c-api/memory.html>`__ it's
        possible that memory is not immediately released to the system when
        objects are deleted, so tests using this marker may need to give some
        room to account for this.

    Example of usage:

    .. code-block:: python

        @pytest.mark.limit_memory("24 MB")
        def test_foobar():
            pass  # do some stuff that allocates memory
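As an illustration of the ``<NUMBER> ([KMGTP]B|B)`` size format, a stand-alone parser might look like the following. This is a hypothetical sketch; the plugin's own ``parse_memory_string`` may differ in details such as accepted whitespace or rounding:

```python
import re

# Binary unit multipliers assumed for this sketch.
UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4, "PB": 1024**5}


def parse_size(text: str) -> int:
    """Parse a "<NUMBER> ([KMGTP]B|B)" string into a number of bytes."""
    match = re.fullmatch(r"\s*(\d+(?:\.\d+)?)\s*([KMGTP]?B)\s*", text)
    if match is None:
        raise ValueError(f"invalid size string: {text!r}")
    number, unit = match.groups()
    return int(float(number) * UNITS[unit])
```

With this sketch, ``parse_size("24 MB")`` yields the byte count the marker would compare against, and a malformed string such as ``"24 XB"`` raises ``ValueError``, matching the behavior the docs describe.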
.. py:function:: pytest.mark.limit_leaks(location_limit: str, filter_fn: LeaksFilterFunction | None = None)

    Fail the execution of the test if any call stack in the test leaks more memory
    than allowed.

    .. important::

        To detect leaks, Memray needs to intercept calls to the Python allocators
        and report native call frames. This adds significant overhead, and will
        slow your test down.

    When this marker is applied to a test, the plugin will analyze the memory
    allocations that are made while the test body runs and not freed by the time
    the test body function returns. It groups them by the call stack leading to
    the allocation, and sums the amount leaked by each **distinct call stack**. If
    the total amount leaked from any particular call stack is greater than the
    configured limit, the test will fail.

    .. important::

        It's recommended to run your API or code in a loop when utilizing this
        plugin. This practice helps in distinguishing genuine leaks from the
        "noise" generated by internal caches and other incidental allocations.

    The format for the string is ``<NUMBER> ([KMGTP]B|B)``. The marker will raise
    ``ValueError`` if the string format cannot be parsed correctly.

    The marker also takes an optional keyword-only argument ``filter_fn``. This
    argument represents a filtering function that will be called once for each
    distinct call stack that leaked more memory than allowed. If it returns
    *True*, leaks from that location will be included in the final report. If it
    returns *False*, leaks associated with the stack it was called with will be
    ignored. If all leaks are ignored, the test will not fail. This can be used to
    discard any known false positives.
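As a sketch of such a filter: the ``Stack`` and ``StackFrame`` classes below are simplified stand-ins for the real ``pytest_memray`` types (which have the same attributes), and ``_fill_cache`` is a hypothetical helper known to cache its allocations:

```python
from dataclasses import dataclass
from typing import Tuple


# Simplified stand-ins for pytest_memray.StackFrame / Stack, for illustration.
@dataclass
class StackFrame:
    function: str
    filename: str
    lineno: int


@dataclass
class Stack:
    frames: Tuple[StackFrame, ...]


def ignore_known_cache(stack: Stack) -> bool:
    """Suppress (return False for) leaks allocated under the hypothetical
    _fill_cache helper; report everything else."""
    return not any(frame.function == "_fill_cache" for frame in stack.frames)
```

Passing ``filter_fn=ignore_known_cache`` to ``@pytest.mark.limit_leaks(...)`` would then suppress any leak whose call stack goes through that helper.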
    .. tip::

        You can pass the ``--memray-bin-path`` argument to ``pytest`` to specify
        a directory where Memray will store the binary files with the results. You
        can then use the ``memray`` CLI to further investigate the allocations and
        the leaks using any Memray reporters you'd like. Check `the memray docs
        <https://bloomberg.github.io/memray/getting_started.html>`_ for more
        information.

    Example of usage:

    .. code-block:: python

        @pytest.mark.limit_leaks("1 MB")
        def test_foobar():
            # Run the function we're testing in a loop to ensure
            # we can differentiate leaks from memory held by
            # caches inside the Python interpreter.
            for _ in range(100):
                do_some_stuff()
    .. warning::

        It is **very** challenging to write tests that do not "leak" memory in
        some way, due to circumstances beyond your control.

        There are many caches inside the Python interpreter itself. Just a few
        examples:

        - The `re` module caches compiled regexes.
        - The `logging` module caches whether a given log level is active for
          a particular logger the first time you try to log something at that
          level.
        - A limited number of objects of certain heavily used types are cached
          for reuse so that `object.__new__` does not always need to allocate
          memory.
        - The mapping from bytecode index to line number for each Python function
          is cached when it is first needed.

        There are many more such caches. Also, within pytest, any message that
        you log or print is captured, so that it can be included in the output if
        the test fails.

        Memray sees these all as "leaks", because something was allocated while
        the test ran and was not freed by the time the test body finished. It has
        no way of knowing that the memory wasn't freed only because of an
        implementation detail of the interpreter or pytest. Moreover, because
        these caches are implementation details, the amount of memory allocated,
        the call stack of the allocation, and even the allocator that was used
        can all change from one version to another.

        Because of this, you will almost certainly need to allow some small
        amount of leaked memory per call stack, or use the ``filter_fn`` argument
        to filter out false-positive leak reports based on the call stack they're
        associated with.
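The interpreter caches listed above are easy to observe directly. For example, the ``re`` module's pattern cache keeps the first compilation alive, so the memory it allocated is still reachable when a test body ends (a small CPython illustration, unrelated to Memray itself):

```python
import re

# re keeps a module-level cache of compiled patterns: compiling the same
# pattern twice returns the cached object, so the allocation made for the
# first compilation is never freed while the cache holds it. A leak checker
# running over a test that compiles a regex would see this as a "leak".
first = re.compile(r"\d+")
second = re.compile(r"\d+")
assert first is second  # both names refer to the single cached pattern object
```

This is exactly the kind of one-time, bounded allocation that running the code under test in a loop helps separate from a genuine, growing leak.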
@@ -43,3 +43,3 @@ PYTHON ?= python
format: ## Autoformat all files
	$(PYTHON) -m ruff --fix $(python_files)
	$(PYTHON) -m black $(python_files)
@@ -49,4 +49,3 @@
lint: ## Lint all files
	$(PYTHON) -m ruff check $(python_files)
	$(PYTHON) -m black --check --diff $(python_files)
@@ -53,0 +52,0 @@ $(PYTHON) -m mypy src/pytest_memray --ignore-missing-imports
Metadata-Version: 2.1
Name: pytest-memray
Version: 1.5.0
Summary: A simple plugin to use with pytest
@@ -33,5 +33,5 @@ Project-URL: Bug Tracker, https://github.com/bloomberg/pytest-memray/issues
Requires-Dist: black==22.12; extra == 'lint'
Requires-Dist: isort==5.11.4; extra == 'lint'
Requires-Dist: mypy==0.991; extra == 'lint'
Requires-Dist: ruff==0.0.272; extra == 'lint'
Provides-Extra: test
@@ -141,4 +141,5 @@ Requires-Dist: covdefaults>=2.2.2; extra == 'test'
  hex)
- `--stacks=STACKS` - Show the N stack entries when showing tracebacks of memory allocations
- `--native` - Show native frames when showing tracebacks of memory allocations (will be slower)
- `--trace-python-allocators` - Record allocations made by the Pymalloc allocator (will be slower)
@@ -152,2 +153,3 @@ ## Configuration - INI
- `native(bool)` - Show native frames when showing tracebacks of memory allocations (will be slower)
- `trace_python_allocators(bool)` - Record allocations made by the Pymalloc allocator (will be slower)
@@ -154,0 +156,0 @@ ## License
@@ -35,3 +35,3 @@ [build-system]
  "black==22.12",
  "ruff==0.0.272",
  "isort==5.11.4",
@@ -118,1 +118,11 @@ "mypy==0.991",
]

[tool.ruff]
ignore = ['E501']
line-length = 95
select = [
  'E',
  'F',
  'W',
]
isort = {known-first-party = ["pytest_memray"], required-imports = ["from __future__ import annotations"]}
@@ -97,4 +97,5 @@ <img src="https://raw.githubusercontent.com/bloomberg/pytest-memray/main/docs/_static/images/logo.png" width="70%" style="display: block; margin: 0 auto" alt="logo"/>
  hex)
- `--stacks=STACKS` - Show the N stack entries when showing tracebacks of memory allocations
- `--native` - Show native frames when showing tracebacks of memory allocations (will be slower)
- `--trace-python-allocators` - Record allocations made by the Pymalloc allocator (will be slower)
@@ -108,2 +109,3 @@ ## Configuration - INI
- `native(bool)` - Show native frames when showing tracebacks of memory allocations (will be slower)
- `trace_python_allocators(bool)` - Record allocations made by the Pymalloc allocator (will be slower)
@@ -110,0 +112,0 @@ ## License
from __future__ import annotations

from ._version import __version__ as __version__
from .marks import LeaksFilterFunction
from .marks import Stack
from .marks import StackFrame

__all__ = [
    "__version__",
    "LeaksFilterFunction",
    "Stack",
    "StackFrame",
]
@@ -1 +1 @@
__version__ = "1.5.0"
from __future__ import annotations

from dataclasses import dataclass
from pathlib import Path
from typing import Iterable
from typing import Optional
from typing import Protocol
from typing import Tuple
@@ -8,2 +12,3 @@ from typing import cast
from memray import AllocationRecord
from memray import FileReader
from pytest import Config
@@ -19,10 +24,59 @@
@dataclass
class StackFrame:
    """One frame of a call stack.

    Each frame has attributes to tell you what code was executing.
    """

    function: str
    """The function being executed, or ``"???"`` if unknown."""

    filename: str
    """The source file being executed, or ``"???"`` if unknown."""

    lineno: int
    """The line number of the executing line, or ``0`` if unknown."""


@dataclass
class Stack:
    """The call stack that led to some memory allocation.

    You can inspect the frames which make up the call stack.
    """

    frames: Tuple[StackFrame, ...]
    """The frames that make up the call stack, most recent first."""


class LeaksFilterFunction(Protocol):
    """A callable that can decide whether to ignore some memory leaks.

    This can be used to suppress leak reports from locations that are known to
    leak. For instance, you might know that objects of a certain type are
    cached by the code you're invoking, and so you might want to ignore all
    reports of leaked memory allocated below that type's constructor.

    You can provide any callable with the following signature as the
    ``filter_fn`` keyword argument for the `.limit_leaks` marker:
    """

    def __call__(self, stack: Stack) -> bool:
        """Return whether allocations from this stack should be reported.

        Return ``True`` if you want the leak to be reported, or ``False`` if
        you want it to be suppressed.
        """
        ...
@dataclass
class _MemoryInfo:
    """Type that holds memory-related info for a failed test."""

    max_memory: float
    allocations: list[AllocationRecord]
    num_stacks: int
    native_stacks: bool
    total_allocated_memory: int

@@ -32,42 +86,92 @@
    @property
    def section(self) -> PytestSection:
        """Return a tuple in the format expected by section reporters."""
        body = _generate_section_text(
            self.allocations, self.native_stacks, self.num_stacks
        )
        return (
            "memray-max-memory",
            "List of allocations:\n" + body,
        )

    @property
    def long_repr(self) -> str:
        """Generate a longrepr user-facing error message."""
        return (
            f"Test was limited to {sizeof_fmt(self.max_memory)} "
            f"but allocated {sizeof_fmt(self.total_allocated_memory)}"
        )
@dataclass
class _LeakedInfo:
    """Type that holds leaked memory-related info for a failed test."""

    max_memory: float
    allocations: list[AllocationRecord]
    num_stacks: int
    native_stacks: bool

    @property
    def section(self) -> PytestSection:
        """Return a tuple in the format expected by section reporters."""
        body = _generate_section_text(
            self.allocations, self.native_stacks, self.num_stacks
        )
        return (
            "memray-leaked-memory",
            "List of leaked allocations:\n" + body,
        )

    @property
    def long_repr(self) -> str:
        """Generate a longrepr user-facing error message."""
        return (
            f"Test was allowed to leak {sizeof_fmt(self.max_memory)} "
            "per location but at least one location leaked more"
        )
def _generate_section_text(
    allocations: list[AllocationRecord], native_stacks: bool, num_stacks: int
) -> str:
    text_lines = []
    for record in allocations:
        size = record.size
        stack_trace = (
            record.hybrid_stack_trace() if native_stacks else record.stack_trace()
        )
        if not stack_trace:
            continue
        padding = " " * 4
        text_lines.append(f"{padding}- {sizeof_fmt(size)} allocated here:")
        stacks_left = num_stacks
        for function, file, line in stack_trace:
            if stacks_left <= 0:
                text_lines.append(f"{padding*2}...")
                break
            text_lines.append(f"{padding*2}{function}:{file}:{line}")
            stacks_left -= 1
    return "\n".join(text_lines)


def _passes_filter(
    stack: Iterable[Tuple[str, str, int]], filter_fn: Optional[LeaksFilterFunction]
) -> bool:
    if filter_fn is None:
        return True
    frames = tuple(StackFrame(*frame) for frame in stack)
    return filter_fn(Stack(frames))
def limit_memory(
    limit: str, *, _result_file: Path, _config: Config
) -> _MemoryInfo | None:
    """Limit memory used by the test."""
    reader = FileReader(_result_file)
    allocations: list[AllocationRecord] = list(
        reader.get_high_watermark_allocation_records(merge_threads=True)
    )
    max_memory = parse_memory_string(limit)
    total_allocated_memory = sum(record.size for record in allocations)
    if total_allocated_memory < max_memory:
@@ -78,8 +182,51 @@ return None
    return _MemoryInfo(
        max_memory=max_memory,
        allocations=allocations,
        num_stacks=num_stacks,
        native_stacks=native_stacks,
        total_allocated_memory=total_allocated_memory,
    )
def limit_leaks(
    location_limit: str,
    *,
    filter_fn: Optional[LeaksFilterFunction] = None,
    _result_file: Path,
    _config: Config,
) -> _LeakedInfo | None:
    reader = FileReader(_result_file)
    allocations: list[AllocationRecord] = list(
        reader.get_leaked_allocation_records(merge_threads=True)
    )
    memory_limit = parse_memory_string(location_limit)
    leaked_allocations = list(
        allocation
        for allocation in allocations
        if (
            allocation.size >= memory_limit
            and _passes_filter(allocation.hybrid_stack_trace(), filter_fn)
        )
    )
    if not leaked_allocations:
        return None
    num_stacks: int = max(cast(int, value_or_ini(_config, "stacks")), 5)
    return _LeakedInfo(
        max_memory=memory_limit,
        allocations=leaked_allocations,
        num_stacks=num_stacks,
        native_stacks=True,
    )


__all__ = [
    "limit_memory",
    "limit_leaks",
    "LeaksFilterFunction",
    "Stack",
    "StackFrame",
]
@@ -20,2 +20,3 @@ from __future__ import annotations
from typing import cast
from typing import Protocol
@@ -38,2 +39,3 @@ from _pytest.terminal import TerminalReporter
from .marks import limit_memory
from .marks import limit_leaks
from .utils import WriteEnabledDirectoryAction
@@ -44,4 +46,23 @@ from .utils import positive_int


class SectionMetadata(Protocol):
    long_repr: str
    section: Tuple[str, str]


class PluginFn(Protocol):
    def __call__(
        *args: Any,
        _result_file: Path,
        _config: Config,
        **kwargs: Any,
    ) -> SectionMetadata | None:
        ...


MARKERS = {
    "limit_memory": limit_memory,
    "limit_leaks": limit_leaks,
}

N_TOP_ALLOCS = 5
@@ -140,2 +161,5 @@ N_HISTOGRAM_BINS = 5
        if len(markers) > 1:
            raise ValueError("Only one Memray marker can be applied to each test")

        def _build_bin_path() -> Path:
@@ -154,3 +178,9 @@ if self._tmp_dir is None and not os.getenv("MEMRAY_RESULT_PATH"):
        native: bool = bool(value_or_ini(self.config, "native"))
        trace_python_allocators: bool = bool(
            value_or_ini(self.config, "trace_python_allocators")
        )
        if markers and "limit_leaks" in markers:
            native = trace_python_allocators = True

        @functools.wraps(func)
@@ -161,3 +191,7 @@ def wrapper(*args: Any, **kwargs: Any) -> object | None:
            result_file = _build_bin_path()
            with Tracker(
                result_file,
                native_traces=native,
                trace_python_allocators=trace_python_allocators,
            ):
                test_result = func(*args, **kwargs)
@@ -200,15 +234,13 @@ try:
            for marker in item.iter_markers():
                maybe_marker_fn = MARKERS.get(marker.name)
                if not maybe_marker_fn:
                    continue
                marker_fn: PluginFn = cast(PluginFn, maybe_marker_fn)
                result = self.results.get(item.nodeid)
                if not result:
                    continue
                res = marker_fn(
                    *marker.args,
                    **marker.kwargs,
                    _result_file=result.result_file,
                    _config=self.config,
@@ -292,3 +324,3 @@ )
        writeln = terminalreporter.write_line
        writeln(f"Allocation results for {test_id} at the high watermark")
        writeln("")
@@ -355,2 +387,8 @@ writeln(f"\t 📦 Total memory allocated: {sizeof_fmt(metadata.peak_memory)}")
    )
    group.addoption(
        "--trace-python-allocators",
        action="store_true",
        default=False,
        help="Record allocations made by the Pymalloc allocator (will be slower)",
    )
@@ -374,2 +412,7 @@ parser.addini("memray", "Activate pytest.ini setting", type="bool")
    )
    parser.addini(
        "trace_python_allocators",
        help="Record allocations made by the Pymalloc allocator (will be slower)",
        type="bool",
    )
    help_msg = "Show the N tests that allocate most memory (N=0 for all)"
@@ -376,0 +419,0 @@ parser.addini("most_allocations", help_msg)
@@ -206,3 +206,5 @@ from __future__ import annotations
    output = result.stdout.str()
    mock.assert_called_once_with(
        ANY, native_traces=native, trace_python_allocators=False
    )
@@ -215,2 +217,45 @@ if native:
| @pytest.mark.parametrize("trace_python_allocators", [True, False]) | ||
| def test_memray_report_python_allocators( | ||
| trace_python_allocators: bool, pytester: Pytester | ||
| ) -> None: | ||
| pytester.makepyfile( | ||
| """ | ||
| import pytest | ||
| from memray._test import PymallocMemoryAllocator | ||
| from memray._test import PymallocDomain | ||
| allocator = PymallocMemoryAllocator(PymallocDomain.PYMALLOC_OBJECT) | ||
| def allocate_with_pymalloc(): | ||
| allocator.malloc(256) | ||
| allocator.free() | ||
| @pytest.mark.limit_memory("128B") | ||
| def test_foo(): | ||
| allocate_with_pymalloc() | ||
| """ | ||
| ) | ||
| with patch("pytest_memray.plugin.Tracker", wraps=Tracker) as mock: | ||
| result = pytester.runpytest( | ||
| "--memray", | ||
| *(["--trace-python-allocators"] if trace_python_allocators else []), | ||
| ) | ||
| assert result.ret == ( | ||
| ExitCode.TESTS_FAILED if trace_python_allocators else ExitCode.OK | ||
| ) | ||
| output = result.stdout.str() | ||
| mock.assert_called_once_with( | ||
| ANY, native_traces=False, trace_python_allocators=trace_python_allocators | ||
| ) | ||
| if trace_python_allocators: | ||
| assert "allocate_with_pymalloc" in output | ||
| else: | ||
| assert "allocate_with_pymalloc" not in output | ||
| def test_memray_report(pytester: Pytester) -> None: | ||
@@ -509,6 +554,5 @@ pytester.makepyfile( | ||
    [
        (1024 * 20, ExitCode.TESTS_FAILED),
        (1024 * 10, ExitCode.TESTS_FAILED),
        (1024, ExitCode.OK),
    ],
@@ -525,3 +569,3 @@ )
        @pytest.mark.limit_memory("10KB")
        def test_memory_alloc_fails():
@@ -531,3 +575,3 @@ allocator.valloc({size})
        @pytest.mark.limit_memory("10KB")
        def test_memory_alloc_fails_2():
@@ -558,1 +602,149 @@ allocator.valloc({size})
    assert result.ret == ExitCode.OK
@pytest.mark.parametrize(
    "size, outcome",
    [
        (1, ExitCode.OK),
        (1024 * 1 / 10, ExitCode.OK),
        (1024 * 1, ExitCode.TESTS_FAILED),
        (1024 * 10, ExitCode.TESTS_FAILED),
    ],
)
def test_leak_marker(pytester: Pytester, size: int, outcome: ExitCode) -> None:
    pytester.makepyfile(
        f"""
        import pytest
        from memray._test import MemoryAllocator

        allocator = MemoryAllocator()

        @pytest.mark.limit_leaks("5KB")
        def test_memory_alloc_fails():
            for _ in range(10):
                allocator.valloc({size})
                # No free call here
        """
    )
    result = pytester.runpytest("--memray")
    assert result.ret == outcome


@pytest.mark.parametrize(
    "size, outcome",
    [
        (1, ExitCode.OK),
        (1024 * 1 / 10, ExitCode.OK),
        (1024 * 1, ExitCode.TESTS_FAILED),
        (1024 * 10, ExitCode.TESTS_FAILED),
    ],
)
def test_leak_marker_in_a_thread(
    pytester: Pytester, size: int, outcome: ExitCode
) -> None:
    pytester.makepyfile(
        f"""
        import threading

        import pytest
        from memray._test import MemoryAllocator

        allocator = MemoryAllocator()

        def allocating_func():
            for _ in range(10):
                allocator.valloc({size})
                # No free call here

        @pytest.mark.limit_leaks("5KB")
        def test_memory_alloc_fails():
            t = threading.Thread(target=allocating_func)
            t.start()
            t.join()
        """
    )
    result = pytester.runpytest("--memray")
    assert result.ret == outcome
def test_leak_marker_filtering_function(pytester: Pytester) -> None:
    pytester.makepyfile(
        """
        import pytest
        from memray._test import MemoryAllocator

        LEAK_SIZE = 1024
        allocator = MemoryAllocator()

        def this_should_not_be_there():
            allocator.valloc(LEAK_SIZE)
            # No free call here

        def filtering_function(stack):
            for frame in stack.frames:
                if frame.function == "this_should_not_be_there":
                    return False
            return True

        @pytest.mark.limit_leaks("5KB", filter_fn=filtering_function)
        def test_memory_alloc_fails():
            for _ in range(10):
                this_should_not_be_there()
        """
    )
    result = pytester.runpytest("--memray")
    assert result.ret == ExitCode.OK


def test_leak_marker_does_work_if_memray_not_passed(pytester: Pytester) -> None:
    pytester.makepyfile(
        """
        import pytest
        from memray._test import MemoryAllocator

        allocator = MemoryAllocator()

        @pytest.mark.limit_leaks("0B")
        def test_memory_alloc_fails():
            allocator.valloc(512)
            # No free call here
        """
    )
    result = pytester.runpytest()
    assert result.ret == ExitCode.TESTS_FAILED


def test_multiple_markers_are_not_supported(pytester: Pytester) -> None:
    pytester.makepyfile(
        """
        import pytest

        @pytest.mark.limit_leaks("0MB")
        @pytest.mark.limit_memory("0MB")
        def test_bar():
            pass
        """
    )
    result = pytester.runpytest("--memray")
    assert result.ret == ExitCode.TESTS_FAILED
    output = result.stdout.str()
    assert "Only one Memray marker can be applied to each test" in output


def test_multiple_markers_are_not_supported_with_global_marker(
    pytester: Pytester,
) -> None:
    pytester.makepyfile(
        """
        import pytest

        pytestmark = pytest.mark.limit_memory("1 MB")

        @pytest.mark.limit_leaks("0MB")
        def test_bar():
            pass
        """
    )
    result = pytester.runpytest("--memray")
    assert result.ret == ExitCode.TESTS_FAILED
    output = result.stdout.str()
    assert "Only one Memray marker can be applied to each test" in output
@@ -80,5 +80,1 @@ [tox]
    python -c 'import sys; print(sys.executable)'