Securely clear secrets from memory. Built on stable Rust primitives that guarantee the memory is zeroed using an operation that will not be 'optimized away' by the compiler.





It uses the zeroize crate under the hood for zeroizing, and memsec for mlock() and munlock(). The maximum you can mlock is 2662 KB.
It works with both bytearray and numpy arrays.

In the case of a copy-on-write fork you need to zeroize the memory before forking the child process; see the example below.
Also, by itself it doesn't help if the memory is moved or swapped out. You can use zeroize.mlock() to lock the memory; see the example below.

Caveats of mlock()

mlock() works on whole pages, so two variables can reside in the same page, and if you munlock() one of them, the whole page is unlocked, including the memory of the other variable. Ideally you would munlock() all your variables at the same time, so none is affected by the overlap. One strategy is to expire the variables that store credentials when they are not in use and reload them when needed: mlock() them on load and munlock() them on expiry, keeping all variables under the same expiry policy. That way all variables are munlocked at the same time.


On Windows you can mlock up to 128 KB by default. If you need more, you first need to call SetProcessWorkingSetSize to increase dwMinimumWorkingSetSize; see the example below.

Lock and zeroize memory

from zeroize import zeroize1, mlock, munlock
import numpy as np

if __name__ == "__main__":
    print("allocate memory")

    # regular array
    # Maximum you can mlock is 2662 KB
    arr = bytearray(b"1234567890")

    # numpy array
    # Maximum you can mlock is 2662 KB
    arr_np = np.array([0] * 10, dtype=np.uint8)
    arr_np[:] = arr
    assert arr_np.tobytes() == b"1234567890"

    # Lock the memory so it can't be swapped out
    print("locking memory")
    mlock(arr)
    mlock(arr_np)

    print("zeroize'ing...")
    zeroize1(arr)
    zeroize1(arr_np)

    print("checking if it is zeroized")
    assert arr == bytearray(b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00")
    assert all(arr_np == 0)

    print("all good, bye!")

    # Unlock the memory
    print("unlocking memory")
    munlock(arr)
    munlock(arr_np)

Zeroing memory before forking a child process

This mitigates the problem that appears with a copy-on-write fork: you need to zeroize the data before forking the child process.

import os
from zeroize import zeroize1, mlock, munlock

if __name__ == "__main__":
    # Maximum you can mlock is 2662 KB
    sensitive_data = bytearray(b"Sensitive Information")
    mlock(sensitive_data)

    print("Before zeroization:", sensitive_data)

    zeroize1(sensitive_data)
    print("After zeroization:", sensitive_data)

    # Forking after zeroization to ensure no sensitive data is copied
    pid = os.fork()
    if pid == 0:
        # This is the child process
        print("Child process memory after fork:", sensitive_data)
        os._exit(0)
    else:
        # This is the parent process
        os.wait()  # Wait for the child process to exit

    print("all good, bye!")

    # Unlock the memory
    print("unlocking memory")
    munlock(sensitive_data)

Locking more than 128 KB

On Windows, if you need to mlock more than 128 KB, you first need to call SetProcessWorkingSetSize to increase dwMinimumWorkingSetSize.

Here is an example; set min_size to the size you want to mlock plus some overhead.

import platform

def setup_memory_limit():
    if platform.system() != "Windows":
        return

    import ctypes
    from ctypes import wintypes

    # Define the Windows API functions
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    GetCurrentProcess = kernel32.GetCurrentProcess
    GetCurrentProcess.restype = wintypes.HANDLE

    SetProcessWorkingSetSize = kernel32.SetProcessWorkingSetSize
    SetProcessWorkingSetSize.restype = wintypes.BOOL
    SetProcessWorkingSetSize.argtypes = [wintypes.HANDLE, ctypes.c_size_t, ctypes.c_size_t]

    # Get the handle of the current process
    current_process = GetCurrentProcess()

    # Set the working set size
    min_size = 6 * 1024 * 1024  # Minimum working set size
    max_size = 10 * 1024 * 1024  # Maximum working set size

    result = SetProcessWorkingSetSize(current_process, min_size, max_size)

    if not result:
        error_code = ctypes.get_last_error()
        error_message = ctypes.FormatError(error_code)
        raise RuntimeError(f"SetProcessWorkingSetSize failed with error code {error_code}: {error_message}")

# Call setup_memory_limit() before you mlock

Build from source


Open in Gitpod

Open in Codespaces

Getting sources from GitHub

Skip this if you're starting it in the browser.

git clone && cd zeroize-python

Compile and run

curl --proto '=https' --tlsv1.2 -sSf | sh

To configure your current shell, you need to source the corresponding env file under $HOME/.cargo. This is usually done by running one of the following (note the leading DOT):

. "$HOME/.cargo/env"
python -m venv .env
source .env/bin/activate
pip install -r requirements.txt
maturin develop
python examples/
python examples/
python examples/


Feel free to fork it, change it, and use it in any way that you want. If you build something interesting and feel like sharing, pull requests are always appreciated.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this project by you, as defined in the Apache License, shall be dual-licensed as above, without any additional terms or conditions.

How to contribute

  1. Fork the repo
  2. Make the changes in your fork
  3. Add tests for your changes, if applicable
  4. cargo build --all --all-features and fix any issues
  5. cargo fmt --all; you can configure your IDE to do this on save (RustRover and VSCode)
  6. cargo check --all --all-features and fix any errors and warnings
  7. cargo clippy --all --all-features and fix any errors
  8. cargo test --all --all-features and fix any issues
  9. cargo bench --all --all-features and fix any issues
  10. Create a PR
  11. Monitor the checks (GitHub Actions runs)
  12. Respond to any comments
  13. Ideally, in the end it will be merged into main


