Adaptive is an open-source Python library that streamlines adaptive parallel function evaluations. Rather than calculating all points on a dense grid, it intelligently selects the "best" points in the parameter space based on your provided function and bounds. With minimal code, you can perform evaluations on a computing cluster, display live plots, and optimize the adaptive sampling algorithm.
Adaptive is most efficient for computations where each function evaluation takes at least ≈50ms due to the overhead of selecting potentially interesting points.
To see Adaptive in action, try the example notebook on Binder or explore the tutorial on Read the Docs.
Adaptively learning a 1D function and live-plotting the process in a Jupyter notebook:
from adaptive import notebook_extension, Runner, Learner1D

notebook_extension()

def peak(x, a=0.01):
    return x + a**2 / (a**2 + x**2)

learner = Learner1D(peak, bounds=(-1, 1))
runner = Runner(learner, loss_goal=0.01)
runner.live_info()
runner.live_plot()
You can export the learned data as a NumPy array:
data = learner.to_numpy()
If you have Pandas installed, you can also export the data as a DataFrame:
df = learner.to_dataframe()
The core concept in adaptive is the learner. A learner samples a function at the most interesting locations within its parameter space, allowing for optimal sampling of the function. As the function is evaluated at more points, the learner improves its understanding of the best locations to sample next. The definition of the "best locations" depends on your application domain. While adaptive provides sensible default choices, the adaptive sampling process can be fully customized.
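To make this concrete, the sampling loop can also be driven by hand through the learner's ask/tell interface instead of a Runner. The following minimal sketch reuses the peak function from the example above; the loss threshold of 0.01 is only an illustrative choice:

from adaptive import Learner1D

def peak(x, a=0.01):
    return x + a**2 / (a**2 + x**2)

learner = Learner1D(peak, bounds=(-1, 1))

# Keep asking the learner where to evaluate next and feed the results back
# until its loss (a measure of how well the function is resolved) is small.
while learner.loss() > 0.01:
    points, _ = learner.ask(1)    # x-values the learner wants evaluated next
    for x in points:
        learner.tell(x, peak(x))  # report f(x) back to the learner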
The following learners are implemented:

- Learner1D: for 1D functions f: ℝ → ℝ^N,
- Learner2D: for 2D functions f: ℝ^2 → ℝ^N,
- LearnerND: for ND functions f: ℝ^N → ℝ^M,
- AverageLearner: for random variables, allowing averaging of results over multiple evaluations,
- AverageLearner1D: for stochastic 1D functions, estimating the mean value at each point,
- IntegratorLearner: for integrating a 1D function f: ℝ → ℝ,
- BalancingLearner: for running multiple learners simultaneously and selecting the "best" one as more points are gathered.

Meta-learners (to be used with other learners):

- BalancingLearner: for running several learners at once, selecting the "best" one each time you get more points,
- DataSaver: for when your function doesn't return just a scalar or a vector.

In addition to learners, adaptive offers primitives for parallel sampling across multiple cores or machines, with built-in support for concurrent.futures, mpi4py, loky, ipyparallel, and distributed.
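As a sketch of what parallel sampling looks like in practice, the following standalone script uses a concurrent.futures process pool; it assumes the same loss_goal interface as the notebook example above, and the worker count is an illustrative choice:

from concurrent.futures import ProcessPoolExecutor
from adaptive import BlockingRunner, Learner1D

def peak(x, a=0.01):
    return x + a**2 / (a**2 + x**2)

if __name__ == "__main__":
    learner = Learner1D(peak, bounds=(-1, 1))
    # Evaluate the suggested points across 4 worker processes and block
    # until the loss goal is reached (instead of running asynchronously).
    BlockingRunner(learner, loss_goal=0.01,
                   executor=ProcessPoolExecutor(max_workers=4))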
adaptive works with Python 3.7 and higher on Linux, Windows, or Mac, and provides optional extensions for working with the Jupyter/IPython Notebook.
The recommended way to install adaptive is using conda:
conda install -c conda-forge adaptive
adaptive is also available on PyPI:
pip install "adaptive[notebook]"
The [notebook] above will also install the optional dependencies for running adaptive inside a Jupyter notebook.
To use Adaptive in JupyterLab, you need to install the following labextensions:
jupyter labextension install @jupyter-widgets/jupyterlab-manager
jupyter labextension install @pyviz/jupyterlab_pyviz
Clone the repository and run pip install -e ".[notebook,testing,other]"
to add a link to the cloned repo into your Python path:
git clone git@github.com:python-adaptive/adaptive.git
cd adaptive
pip install -e ".[notebook,testing,other]"
We recommend using a Conda environment or a virtualenv for package management during Adaptive development.
To avoid polluting the history with notebook output, set up the git filter by running:
python ipynb_filter.py
in the repository.
To maintain consistent code style, we use pre-commit. Install it by running:
pre-commit install
in the repository.
If you used Adaptive in a scientific work, please cite it as follows.
@misc{Nijholt2019,
doi = {10.5281/zenodo.1182437},
author = {Bas Nijholt and Joseph Weston and Jorn Hoofwijk and Anton Akhmerov},
title = {\textit{Adaptive}: parallel active learning of mathematical functions},
publisher = {Zenodo},
year = {2019}
}
If you're interested in the scientific background and principles behind Adaptive, we recommend taking a look at the draft paper that is currently being written. This paper provides a comprehensive overview of the concepts, algorithms, and applications of the Adaptive library.
We would like to give credit to the author of the AdaptiveTriSampling script (no longer available online since SciPy Central went down), which served as inspiration for adaptive.Learner2D.

For general discussion, we have a Gitter chat channel. If you find any bugs or have any feature suggestions, please file a GitHub issue or submit a pull request.