🚀 ABLATOR is a DISTRIBUTED EXECUTION FRAMEWORK designed to enhance ablation studies in complex machine learning models. It automates the process of configuration and conducts multiple experiments in parallel.
An ablation study involves removing specific parts of a neural network architecture, or changing different aspects of the training process, to examine their contributions to the model's performance.
As machine learning models grow in complexity, the number of components that need to be ablated also increases, which expands the search space of possible configurations and requires an efficient approach to horizontally scale many parallel experimental trials. ABLATOR is a tool that provides this horizontal scaling.
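As a minimal illustration of the idea (a hypothetical `TinyNet`, not part of ABLATOR's API), an ablation can be as simple as a flag that disables one component so its contribution can be measured:

```python
import torch
from torch import nn


class TinyNet(nn.Module):
    """Toy model whose skip connection can be ablated via a flag."""

    def __init__(self, use_residual: bool = True):
        super().__init__()
        self.use_residual = use_residual
        self.fc1 = nn.Linear(16, 16)
        self.fc2 = nn.Linear(16, 16)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.fc1(x))
        h = self.fc2(h)
        # Ablation: drop the skip connection to measure its contribution.
        return h + x if self.use_residual else h


# One model per configuration; train each and compare validation metrics.
variants = {flag: TinyNet(use_residual=flag) for flag in (True, False)}
```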
Instead of manually configuring and conducting multiple experiments with various hyperparameter settings, ABLATOR automates this process. It initializes experiments based on different hyperparameter configurations, tracks the state of each experiment, and provides experiment persistence on the cloud.
Left: ABLATOR efficiently conducts multiple trials in parallel and logs the experiment results.
Right: done manually, trials must be run sequentially, demanding more effort and independent analysis.
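To make the manual side of that comparison concrete, here is a sketch (with a placeholder `train` function, not a real API) of running the same grid by hand; every combination must be enumerated, run sequentially, and logged yourself:

```python
import itertools


def train(lr: float, layer: str) -> float:
    """Placeholder for a full training run; returns a validation metric."""
    return 0.0


learning_rates = [0.1, 0.01]
layers = ["layer_a", "layer_b"]

# Four sequential runs for two axes; each new hyperparameter multiplies the count.
results = {
    (lr, layer): train(lr=lr, layer=layer)
    for lr, layer in itertools.product(learning_rates, layers)
}
```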
For macOS and Linux systems, install directly via pip:

```bash
pip install ablator
```
If you are using Windows, you will need to install WSL using the official guide from Microsoft. WSL is a Linux subsystem, and for ABLATOR's purposes it is identical to using Linux.
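On recent Windows 10 and 11 builds, WSL can typically be installed from an elevated terminal with a single command (see Microsoft's documentation for prerequisites); the pip command above then works unchanged inside the WSL shell:

```bash
wsl --install
```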
```python
from torch import nn
import torch
from ablator import (
    ModelConfig,
    ModelWrapper,
    OptimizerConfig,
    TrainConfig,
    configclass,
    Literal,
    ParallelTrainer,
    SearchSpace,
)
from ablator.config.mp import ParallelConfig


# Extend the built-in configs with experiment-specific options.
@configclass
class TrainConfig(TrainConfig):
    dataset: str = "random"
    dataset_size: int


@configclass
class ModelConfig(ModelConfig):
    layer: Literal["layer_a", "layer_b"] = "layer_a"


@configclass
class ParallelConfig(ParallelConfig):
    model_config: ModelConfig
    train_config: TrainConfig


# Two trials in total: one per value of `model_config.layer` in the search space.
config = ParallelConfig(
    experiment_dir="ablator-exp",
    train_config=TrainConfig(
        batch_size=128,
        epochs=2,
        dataset_size=100,
        optimizer_config=OptimizerConfig(name="sgd", arguments={"lr": 0.1}),
        scheduler_config=None,
    ),
    model_config=ModelConfig(),
    device="cpu",
    search_space={
        "model_config.layer": SearchSpace(categorical_values=["layer_a", "layer_b"])
    },
    total_trials=2,
)


class SimpleModel(nn.Module):
    """Toy model whose single parameter depends on the ablated `layer` option."""

    def __init__(self, config: ModelConfig) -> None:
        super().__init__()
        if config.layer == "layer_a":
            self.param = nn.Parameter(torch.ones(100, 1))
        else:
            self.param = nn.Parameter(torch.randn(200, 1))

    def forward(self, x: torch.Tensor):
        x = self.param
        # Return predictions and a scalar loss.
        return {"preds": x}, x.sum().abs()


class SimpleWrapper(ModelWrapper):
    # Provide the train and validation dataloaders for each trial.
    def make_dataloader_train(self, run_config: ParallelConfig):
        return [torch.rand(100) for _ in range(run_config.train_config.dataset_size)]

    def make_dataloader_val(self, run_config: ParallelConfig):
        return [torch.rand(100) for _ in range(run_config.train_config.dataset_size)]


mywrapper = SimpleWrapper(SimpleModel)
with ParallelTrainer(mywrapper, config) as ablator:
    ablator.launch(".")
```
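Running this script launches the experiment: `ParallelTrainer` runs the `total_trials=2` trials drawn from the search space (here, one per `layer` option), trains each with the wrapped model, and persists results and experiment state under the experiment directory.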
ABLATOR is composed of four main modules:

- Configuration Module
- Training Module
- Experiment and Metrics Module
- Analysis Module
Explore a variety of tutorials and examples on how to utilize ABLATOR. Ready to dive in? 👉 Ablation Tutorials
ABLATOR is open source, and we value contributions from our community! Check out our Development Guide for details on our development process and insights into the internals of the ABLATOR library.
For any bugs or feature requests related to ABLATOR, please visit our GitHub Issues or reach out on Slack.
Platform | Purpose | Support Level |
---|---|---|
GitHub Issues | To report issues or suggest new features. | ABLATOR Team |
Slack | To collaborate with fellow ABLATOR users. | Community |
Discord | To inquire about ABLATOR usage and collaborate with other ABLATOR enthusiasts. | Community |
 | For staying up-to-date on new features of ABLATOR. | ABLATOR Team |
```bibtex
@inproceedings{fostiropoulos2023ablator,
  title={ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments},
  author={Fostiropoulos, Iordanis and Itti, Laurent},
  booktitle={AutoML Conference 2023 (ABCD Track)},
  year={2023}
}
```