
gradient-free-optimizers
Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.
Gradient-Free-Optimizers provides a collection of easy-to-use optimization techniques whose objective function only requires an arbitrary score that gets maximized. This makes gradient-free methods capable of solving a wide variety of optimization problems.
Gradient-Free-Optimizers is the optimization backend of Hyperactive (in v3.0.0 and higher) but it can also be used by itself as a leaner and simpler optimization toolkit.
Easy to use:
You can optimize anything that can be defined in a Python function. For example, a simple parabola function:
```python
def objective_function(para):
    score = para["x1"] * para["x1"]
    return -score
```
Define where to search via numpy ranges:
```python
import numpy as np

search_space = {
    "x1": np.arange(0, 5, 0.1),
}
```
That's all the information the algorithm needs to search for the maximum of the objective function:
```python
from gradient_free_optimizers import RandomSearchOptimizer

opt = RandomSearchOptimizer(search_space)
opt.search(objective_function, n_iter=100000)
```
During the optimization you will receive ongoing information in a progress bar.
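After the search has finished, the results can be read from the optimizer object. The snippet below is a short sketch; `best_para` and `best_score` are assumed result attributes (the `search_data` attribute also appears in the constrained-optimization example further down):

```python
# Sketch of reading results after opt.search(...); best_para and best_score
# are assumed attribute names, search_data is a pandas DataFrame of all
# evaluated positions and scores.
print("best parameters:", opt.best_para)
print("best score:", opt.best_score)
print(opt.search_data.head())
```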
High performance:
Gradient-Free-Optimizers provides not just meta-heuristic optimization methods but also sequential model-based optimizers like Bayesian optimization, which deliver good results for expensive objective functions like deep-learning models.
Even for the very simple parabola function, the optimization time is about 60% of the entire iteration time when optimizing with random search. This shows that (despite all its features) Gradient-Free-Optimizers has an efficient optimization backend without any unnecessary slowdown.
By default, Gradient-Free-Optimizers looks up the current position in a memory dictionary before evaluating the objective function.
If the position is not in the dictionary, the objective function is evaluated and the position and score are saved in the dictionary.
If a position is already saved in the dictionary, Gradient-Free-Optimizers simply extracts the score from it instead of evaluating the objective function. This avoids re-evaluating computationally expensive objective functions (machine- or deep-learning models) and therefore saves time.
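A minimal sketch of this memoization idea (purely illustrative, not the library's internal code):

```python
# Illustrative memoization wrapper mirroring the memory-dictionary behaviour;
# not the library's internal implementation.
memory = {}

def memoized(objective_function):
    def wrapper(para):
        position = tuple(sorted(para.items()))  # hashable key for the current position
        if position in memory:
            return memory[position]             # score already known: skip evaluation
        score = objective_function(para)        # evaluate only unseen positions
        memory[position] = score
        return score
    return wrapper
```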
High reliability:
Gradient-Free-Optimizers is extensively tested with more than 400 tests in 2500 lines of test code.
Each optimization algorithm must perform above a certain threshold to be included. Poorly performing algorithms are reworked or scrapped.
Gradient-Free-Optimizers supports a variety of optimization algorithms, which can make choosing the right algorithm a tedious endeavor. The gifs in this section give a visual representation of how the different optimization algorithms explore the search space and exploit the collected information about it, for a convex and a non-convex objective function. More detailed explanations of all optimization algorithms can be found in the official documentation.
Evaluates the score of n neighbours in an epsilon environment and moves to the best one.
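To illustrate the epsilon-neighbourhood idea described above, here is a minimal hill-climbing step on a discrete 1D search space (a conceptual sketch, not the library's implementation):

```python
import numpy as np

# Conceptual sketch of one hill-climbing step; not the library's implementation.
search_space = np.arange(-5, 5, 0.1)

def objective_function(x):
    return -(x * x)

def hill_climbing_step(current_index, epsilon=3, n_neighbours=5):
    # sample n neighbour indices within an epsilon environment around the current position
    low = max(current_index - epsilon, 0)
    high = min(current_index + epsilon + 1, len(search_space))
    neighbours = np.random.randint(low, high, size=n_neighbours)
    # evaluate the neighbours and move to the best one
    scores = [objective_function(search_space[i]) for i in neighbours]
    return int(neighbours[int(np.argmax(scores))])
```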
Adds a probability to the hill climbing to move to a worse position in the search-space to escape local optima.
Hill climbing algorithm with the addition of increasing epsilon by a factor if no better neighbour was found.
Adds a probability to the hill climbing to move to a worse position in the search-space to escape local optima with decreasing probability over time.
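The acceptance of worse positions typically follows a Metropolis-style criterion; the sketch below shows the general idea (the exact formula and temperature schedule used by the library may differ):

```python
import numpy as np

# Generic Metropolis acceptance criterion for simulated annealing;
# the library's exact formula and annealing schedule may differ.
def accept_new_position(score_current, score_new, temperature):
    if score_new >= score_current:
        return True                          # always accept improvements
    delta = score_current - score_new        # how much worse the new position is
    p_accept = np.exp(-delta / temperature)  # shrinks as delta grows or temperature falls
    return np.random.random() < p_accept

# lowering the temperature over time (e.g. temperature *= 0.99 each iteration)
# makes accepting worse positions increasingly unlikely
```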
Constructs a simplex from multiple positions that moves through the search-space by reflecting, expanding, contracting or shrinking.
Moves to random positions in each iteration.
Grid-search that moves diagonally through the search-space (with step-size=1), starting from a corner.
Hill climbing that moves to a random position after n iterations.
Hill climbing that starts with a large epsilon, which decreases over the course of the search.
Creates a cross-shaped collection of positions that moves through the search-space as a whole towards optima or shrinks the cross.
Optimizes one search-space dimension at a time with a hill-climbing algorithm.
Population of n simulated annealers, which occasionally swap transition probabilities.
Population of n particles attracting each other and moving towards the best particle.
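The attraction between particles can be written as the textbook particle-swarm update; the coefficients below are generic illustration values, not the library's defaults:

```python
import numpy as np

# Generic textbook particle-swarm update, for illustration only;
# coefficient names and values are not taken from Gradient-Free-Optimizers.
def update_particle(position, velocity, personal_best, global_best,
                    inertia=0.7, cognitive=1.5, social=1.5):
    r1, r2 = np.random.random(2)
    velocity = (
        inertia * velocity
        + cognitive * r1 * (personal_best - position)  # pull towards the particle's own best
        + social * r2 * (global_best - position)       # pull towards the swarm's best
    )
    return position + velocity, velocity
```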
Population of n particles moving in a spiral pattern around the best position.
Evolutionary algorithm selecting the best individuals in the population, mixing their parameters to get new solutions.
Population of n hill climbers occasionally mixing positional information and removing worst positions from population.
Improves a population of candidate solutions by creating trial vectors through the differential mutation of three randomly selected individuals.
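The trial vectors mentioned above are commonly built with the classic DE/rand/1/bin scheme, sketched below (not necessarily the exact variant used by this library):

```python
import numpy as np

# Classic DE/rand/1/bin trial-vector construction, for illustration;
# not necessarily the exact variant used by the library.
def trial_vector(population, target_index, F=0.8, CR=0.9):
    candidates = [i for i in range(len(population)) if i != target_index]
    a, b, c = population[np.random.choice(candidates, 3, replace=False)]
    mutant = a + F * (b - c)                      # differential mutation of three individuals
    target = population[target_index]
    cross = np.random.random(len(target)) < CR    # binomial crossover mask
    cross[np.random.randint(len(target))] = True  # keep at least one mutant component
    return np.where(cross, mutant, target)
```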
Gaussian process fitting to explored positions and predicting promising new positions.
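The surrogate-model idea behind Bayesian optimization can be sketched with scikit-learn's Gaussian process regressor and a simple upper-confidence-bound acquisition (illustrative only; the library's surrogate and acquisition function may differ):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Illustrative surrogate-model sketch; not the library's internal code.
X_seen = np.array([[-3.0], [0.5], [2.0]])   # positions evaluated so far
y_seen = np.array([-9.0, -0.25, -4.0])      # their scores (to be maximized)

gp = GaussianProcessRegressor().fit(X_seen, y_seen)

candidates = np.linspace(-5, 5, 101).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)
ucb = mean + 1.96 * std                     # upper-confidence-bound acquisition
next_position = candidates[np.argmax(ucb)]  # most promising position to evaluate next
```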
Calculates an upper bound from the distances of the previously explored positions to find new promising positions.
Separates the search space into subspaces. It evaluates the center position of each subspace to decide which subspace to separate further.
Kernel density estimators fitting to good and bad explored positions and predicting promising new positions.
Ensemble of decision trees fitting to explored positions and predicting promising new positions.
The following packages are designed to support Gradient-Free-Optimizers and expand its use cases.
Package | Description |
---|---|
Search-Data-Collector | Simple tool to save search-data during or after the optimization run into csv-files. |
Search-Data-Explorer | Visualize search-data with plotly inside a streamlit dashboard. |
If you want news about Gradient-Free-Optimizers and related projects, you can follow me on Twitter.
The most recent version of Gradient-Free-Optimizers is available on PyPI:

```console
pip install gradient-free-optimizers
```
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

# Example: convex parabola function
def parabola_function(para):
    loss = para["x"] * para["x"]
    return -loss

search_space = {"x": np.arange(-10, 10, 0.1)}

opt = RandomSearchOptimizer(search_space)
opt.search(parabola_function, n_iter=100000)
```
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

# Example: non-convex Ackley function
def ackley_function(pos_new):
    x = pos_new["x1"]
    y = pos_new["x2"]

    a1 = -20 * np.exp(-0.2 * np.sqrt(0.5 * (x * x + y * y)))
    a2 = -np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
    score = a1 + a2 + 20
    return -score

search_space = {
    "x1": np.arange(-100, 101, 0.1),
    "x2": np.arange(-100, 101, 0.1),
}

opt = RandomSearchOptimizer(search_space)
opt.search(ackley_function, n_iter=30000)
```
```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import load_wine
from gradient_free_optimizers import HillClimbingOptimizer

# Example: hyperparameter optimization of a machine-learning model
data = load_wine()
X, y = data.data, data.target

def model(para):
    gbc = GradientBoostingClassifier(
        n_estimators=para["n_estimators"],
        max_depth=para["max_depth"],
        min_samples_split=para["min_samples_split"],
        min_samples_leaf=para["min_samples_leaf"],
    )
    scores = cross_val_score(gbc, X, y, cv=3)
    return scores.mean()

search_space = {
    "n_estimators": np.arange(20, 120, 1),
    "max_depth": np.arange(2, 12, 1),
    "min_samples_split": np.arange(2, 12, 1),
    "min_samples_leaf": np.arange(1, 12, 1),
}

opt = HillClimbingOptimizer(search_space)
opt.search(model, n_iter=50)
```
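After this search, the best hyperparameters can be used to fit the final model. The `best_para` attribute below is an assumption about the optimizer's result interface; treat this as a sketch:

```python
# Sketch: refit the model with the best hyperparameters found.
# best_para is an assumed result attribute (a dict keyed like the search space).
best_para = opt.best_para
final_model = GradientBoostingClassifier(
    n_estimators=best_para["n_estimators"],
    max_depth=best_para["max_depth"],
    min_samples_split=best_para["min_samples_split"],
    min_samples_leaf=best_para["min_samples_leaf"],
)
final_model.fit(X, y)
```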
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

# Example: constrained optimization of a convex function
def convex_function(pos_new):
    score = -(pos_new["x1"] * pos_new["x1"] + pos_new["x2"] * pos_new["x2"])
    return score

search_space = {
    "x1": np.arange(-100, 101, 0.1),
    "x2": np.arange(-100, 101, 0.1),
}

def constraint_1(para):
    # only values in 'x1' higher than -5 are valid
    return para["x1"] > -5

# put one or more constraints inside a list
constraints_list = [constraint_1]

# pass the list of constraints to the optimizer
opt = RandomSearchOptimizer(search_space, constraints=constraints_list)
opt.search(convex_function, n_iter=50)

search_data = opt.search_data

# the search-data does not contain any samples where x1 is equal to or below -5
print("\n search_data \n", search_data, "\n")
```
Gradient-Free-Optimizers was created as the optimization backend of the Hyperactive package. Therefore, the algorithms are exactly the same in both packages and deliver the same results. However, you can still use Gradient-Free-Optimizers as a standalone package. Separating Gradient-Free-Optimizers from Hyperactive offers multiple advantages.
While Gradient-Free-Optimizers is relatively simple, Hyperactive is a more complex project with additional features that make the optimization of computationally expensive models (like engineering simulations or machine-/deep-learning models) more convenient.
```bibtex
@Misc{gfo2020,
  author = {{Simon Blanke}},
  title = {{Gradient-Free-Optimizers}: Simple and reliable optimization with local, global, population-based and sequential techniques in numerical search spaces.},
  howpublished = {\url{https://github.com/SimonBlanke}},
  year = {since 2020}
}
```
Gradient-Free-Optimizers is licensed under the MIT License.