Gradient-Free-Optimizers is a Python library for gradient-free optimization of black-box functions. It provides a unified interface to 21 optimization algorithms, from simple hill climbing to Bayesian optimization, all operating on discrete numerical search spaces defined via NumPy arrays.
It is designed for hyperparameter tuning, simulation optimization, and any scenario where gradients are unavailable or impractical. The library prioritizes simplicity: define your objective function, specify the search space, and run. It serves as the optimization backend for Hyperactive but can also be used standalone.
Installation
pip install gradient-free-optimizers
Optional dependencies
pip install gradient-free-optimizers[progress] # Progress bar with tqdm
pip install gradient-free-optimizers[sklearn] # scikit-learn for surrogate models
pip install gradient-free-optimizers[full] # All optional dependencies
Key Features
21 Optimization Algorithms Local, global, population-based, and sequential model-based optimizers. Switch algorithms with one line of code.
Zero Configuration Sensible defaults for all parameters. Start optimizing immediately without tuning the optimizer itself.
Memory System Built-in caching prevents redundant evaluations. Critical for expensive objective functions like ML models.
Discrete Search Spaces Define parameter spaces with familiar NumPy syntax using arrays and ranges.
Constraints Support Define constraint functions to restrict the search space. Invalid regions are automatically avoided.
Minimal Dependencies Only pandas required. Optional integrations for progress bars (tqdm) and surrogate models (scikit-learn).
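As a sketch of the constraints feature: a constraint is just a predicate over the params dictionary, returning True for allowed combinations. The `constraints` keyword shown in the comment follows the library's documented pattern; the predicate name and the disk-shaped region are illustrative.

```python
import numpy as np

def constraint_inside_disk(params):
    # Allow only candidates inside a disk of radius 4 around the origin
    return params["x"] ** 2 + params["y"] ** 2 <= 16

search_space = {
    "x": np.arange(-5, 5, 0.1),
    "y": np.arange(-5, 5, 0.1),
}

# Hedged usage sketch: pass a list of predicates to the optimizer, e.g.
# opt = HillClimbingOptimizer(search_space, constraints=[constraint_inside_disk])
# opt.search(objective, n_iter=1000)

print(constraint_inside_disk({"x": 1.0, "y": 1.0}))  # True (inside)
print(constraint_inside_disk({"x": 5.0, "y": 5.0}))  # False (outside)
```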
Quick Start
import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer
# Define objective function (maximize)
def objective(params):
    x, y = params["x"], params["y"]
    return -(x**2 + y**2)  # Negative paraboloid, optimum at (0, 0)

# Define search space
search_space = {
    "x": np.arange(-5, 5, 0.1),
    "y": np.arange(-5, 5, 0.1),
}

# Run optimization
opt = HillClimbingOptimizer(search_space)
opt.search(objective, n_iter=1000)

# Results
print(f"Best score: {opt.best_score}")
print(f"Best params: {opt.best_para}")
Output:
Best score: -0.02
Best params: {'x': 0.1, 'y': 0.1}
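Because the search space is a discrete grid, the best score any optimizer can reach is the best score on that grid. A brute-force sweep over the same grid (pure NumPy, no optimizer involved) confirms the attainable optimum lies at the grid point nearest (0, 0):

```python
import numpy as np

# The paraboloid objective from the quick start
def objective(params):
    x, y = params["x"], params["y"]
    return -(x**2 + y**2)

# Exhaustive sweep over the same discrete grid the optimizer searches
xs = np.arange(-5, 5, 0.1)
ys = np.arange(-5, 5, 0.1)
scores = np.array([[objective({"x": x, "y": y}) for y in ys] for x in xs])

# Locate the best attainable grid point
i, j = np.unravel_index(scores.argmax(), scores.shape)
print(abs(xs[i]) < 0.1 and abs(ys[j]) < 0.1)  # True
```

Exhaustive search is fine for this 100 x 100 grid, but the point of the library is to avoid it when evaluations are expensive or the grid is large.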
Core Concepts
flowchart LR
O["Optimizer
━━━━━━━━━━
21 algorithms"]
S["Search Space
━━━━━━━━━━━━
NumPy arrays"]
F["Objective
━━━━━━━━━━
f(params) → score"]
D[("Search Data
━━━━━━━━━━━
history")]
O -->|propose| S
S -->|params| F
F -->|score| O
O -.-> D
D -.->|warm start| O
Optimizer: Implements the search strategy. Choose from 21 algorithms across four categories: local search, global search, population-based, and sequential model-based.
Search Space: Defines valid parameter combinations as NumPy arrays. Each key is a parameter name, each value is an array of allowed values.
Objective Function: Your function to maximize. Takes a dictionary of parameters, returns a score. Use negation to minimize.
Search Data: Complete history of all evaluations accessible via opt.search_data for analysis and warm-starting future searches.
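Search data from one run can seed the next. The sketch below assumes the documented DataFrame layout of `opt.search_data` (one column per parameter plus a `score` column) and the `memory_warm_start` keyword of `search`; the values are made up for illustration.

```python
import pandas as pd

# Hypothetical history in the shape of opt.search_data
previous = pd.DataFrame({
    "x": [0.5, 0.1, -0.3],
    "y": [1.0, 0.2, 0.0],
    "score": [-1.25, -0.05, -0.09],
})

# The row with the highest score is the best evaluation so far
best = previous.loc[previous["score"].idxmax()]
print(best["x"], best["y"])  # 0.1 0.2

# Hedged usage sketch: seed a new search with the old evaluations, e.g.
# opt = HillClimbingOptimizer(search_space)
# opt.search(objective, n_iter=100, memory_warm_start=previous)
```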
Examples
Hyperparameter Optimization
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_wine
import numpy as np
from gradient_free_optimizers import BayesianOptimizer
X, y = load_wine(return_X_y=True)
def objective(params):
    model = GradientBoostingClassifier(
        n_estimators=params["n_estimators"],
        max_depth=params["max_depth"],
        learning_rate=params["learning_rate"],
    )
    return cross_val_score(model, X, y, cv=5).mean()

search_space = {
    "n_estimators": np.arange(50, 300, 10),
    "max_depth": np.arange(2, 10),
    "learning_rate": np.logspace(-3, 0, 20),
}
opt = BayesianOptimizer(search_space)
opt.search(objective, n_iter=50)
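Note the learning-rate grid: np.logspace(-3, 0, 20) spaces the 20 candidates evenly on a log scale between 0.001 and 1.0, so small rates get as many candidates as large ones. A quick check of the grid's endpoints:

```python
import numpy as np

# 20 log-spaced values from 10**-3 to 10**0
lrs = np.logspace(-3, 0, 20)
print(len(lrs), round(lrs[0], 4), round(lrs[-1], 4))  # 20 0.001 1.0
```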
Benchmark Function Optimization
import numpy as np
from gradient_free_optimizers import ParticleSwarmOptimizer

def rastrigin(params):
    A = 10
    values = [params[f"x{i}"] for i in range(5)]
    return -sum(v**2 - A * np.cos(2 * np.pi * v) + A for v in values)

search_space = {f"x{i}": np.arange(-5.12, 5.12, 0.1) for i in range(5)}

opt = ParticleSwarmOptimizer(search_space, population=20)
opt.search(rastrigin, n_iter=500)
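The Rastrigin function's many local optima make it a standard stress test for global optimizers. Its global optimum is at the origin, where the negated objective is exactly zero; a standalone check (the function is restated so the snippet runs on its own):

```python
import numpy as np

def rastrigin(params):
    A = 10
    values = [params[f"x{i}"] for i in range(5)]
    return -sum(v**2 - A * np.cos(2 * np.pi * v) + A for v in values)

# At the origin each term is 0 - A*cos(0) + A = 0, so the score is 0
origin = {f"x{i}": 0.0 for i in range(5)}
print(abs(rastrigin(origin)))  # 0.0
```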
Citation
If you use this software in your research, please cite:
@software{gradient_free_optimizers,
author = {Simon Blanke},
title = {Gradient-Free-Optimizers: Simple and reliable optimization with local, global, population-based and sequential techniques in numerical search spaces},
year = {2020},
url = {https://github.com/SimonBlanke/Gradient-Free-Optimizers},
}
License
MIT License - Free for commercial and academic use.