
gradient-accumulator
GradientAccumulator was developed by SINTEF Health due to the lack of an easy-to-use method for gradient accumulation in TensorFlow 2.
The package is available on PyPI, is compatible with and has been tested against TensorFlow 2.2-2.12 and Python 3.6-3.11, and works cross-platform (Ubuntu, Windows, macOS).
Continuous integration covers code coverage, documentation builds, and unit tests (see the build status badges in the GitHub repository).
Stable release from PyPI:

```
pip install gradient-accumulator
```

Or from source:

```
pip install git+https://github.com/andreped/GradientAccumulator
```
A simple way to add gradient accumulation to an existing model:

```
from gradient_accumulator import GradientAccumulateModel
from tensorflow.keras.models import Model

model = Model(...)
model = GradientAccumulateModel(accum_steps=4, inputs=model.input, outputs=model.output)
```

Then simply use the model as you normally would!
In practice, using gradient accumulation with a custom pipeline may require some extra overhead and tricks to get working.
For more information, see the documentation hosted at gradientaccumulator.readthedocs.io
Gradient accumulation (GA) reduces GPU memory consumption by dividing a batch into smaller mini-batches and computing their gradients either in a distributed setting across multiple GPUs or sequentially on the same GPU. Once the full batch has been processed, the accumulated gradients produce the full-batch gradient.
Note that our implementation of gradient accumulation differs slightly from the usual illustration of the concept, as our design does not require keeping the entire batch in CPU memory. More information on what goes on under the hood can be found in the documentation.
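The equivalence between the accumulated gradient and the full-batch gradient can be seen on a toy example. The sketch below is a hypothetical pure-Python illustration of the idea, not this package's implementation; the loss and helper names are made up for the example.

```python
# Per-sample loss L_i(w) = (w - x_i)^2, so dL_i/dw = 2 * (w - x_i).
def grad(w, x):
    return 2.0 * (w - x)

def full_batch_grad(w, batch):
    # Mean gradient over the whole batch in one pass.
    return sum(grad(w, x) for x in batch) / len(batch)

def accumulated_grad(w, batch, accum_steps):
    # Split the batch into accum_steps smaller mini-batches,
    # accumulate their gradient sums, then average at the end.
    step = len(batch) // accum_steps
    total = 0.0
    for i in range(accum_steps):
        chunk = batch[i * step:(i + 1) * step]
        total += sum(grad(w, x) for x in chunk)
    return total / len(batch)

batch = [0.5, 1.5, 2.0, 4.0]
print(full_batch_grad(1.0, batch))      # -2.0
print(accumulated_grad(1.0, batch, 2))  # -2.0, identical result
```

Only one mini-batch of samples needs to be in GPU memory at a time, which is where the memory saving comes from.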
In TensorFlow 2, no plug-and-play method existed for using gradient accumulation with an arbitrary custom pipeline. Hence, we have implemented two generic TF2-compatible approaches:
Method | Usage |
---|---|
GradientAccumulateModel | model = GradientAccumulateModel(accum_steps=4, inputs=model.input, outputs=model.output) |
GradientAccumulateOptimizer | opt = GradientAccumulateOptimizer(accum_steps=4, optimizer=tf.keras.optimizers.SGD(1e-2)) |
Both approaches control how frequently the weights are updated, each in its own way. Approach (1) overrides the train_step
method of a given Model, whereas approach (2) wraps the optimizer. (1) is only compatible with single-GPU usage, whereas (2) also supports distributed training (multi-GPU).
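The mechanics of the optimizer-wrapping approach can be sketched in plain Python. This is a hypothetical illustration of the mechanism only, not the package's actual GradientAccumulateOptimizer; the class and method names here are invented for the example.

```python
class PlainSGD:
    """A minimal inner optimizer: w <- w - lr * g."""
    def __init__(self, lr):
        self.lr = lr

    def apply(self, w, g):
        return w - self.lr * g

class AccumulatingOptimizer:
    """Buffers gradients; lets the inner optimizer update only every accum_steps calls."""
    def __init__(self, optimizer, accum_steps):
        self.optimizer = optimizer
        self.accum_steps = accum_steps
        self._buffer = 0.0
        self._count = 0

    def apply(self, w, g):
        self._buffer += g
        self._count += 1
        if self._count == self.accum_steps:
            # Apply one update with the averaged accumulated gradient.
            w = self.optimizer.apply(w, self._buffer / self.accum_steps)
            self._buffer, self._count = 0.0, 0
        return w

opt = AccumulatingOptimizer(PlainSGD(lr=0.1), accum_steps=4)
w = 1.0
for g in [4.0, 4.0, 4.0, 4.0]:  # four identical sub-batch gradients
    w = opt.apply(w, g)
print(w)  # 0.6: a single update with the averaged gradient 4.0
```

Because the wrapping happens at the optimizer level rather than in train_step, the same trick composes naturally with distribution strategies, which is why approach (2) supports multi-GPU training.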
Our implementations enable a theoretically unbounded effective batch size, with the same memory consumption as a regular mini-batch. If a single GPU is used, this comes at the cost of increased training runtime; multiple GPUs can be used to improve runtime performance.
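To make the trade-off concrete (the numbers below are illustrative, not benchmarks): the effective batch size is the mini-batch size times accum_steps, while memory scales only with the mini-batch size.

```python
mini_batch_size = 32   # samples held on the GPU at any one time
accum_steps = 8        # gradient updates are applied every 8 mini-batches

effective_batch_size = mini_batch_size * accum_steps
print(effective_batch_size)  # 256: gradients equivalent to a batch of 256,
                             # at the memory cost of a batch of 32
```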
Technique | Usage |
---|---|
Batch Normalization | layer = AccumBatchNormalization(accum_steps=4) |
Adaptive Gradient Clipping | model = GradientAccumulateModel(accum_steps=4, agc=True, inputs=model.input, outputs=model.output) |
Mixed precision | model = GradientAccumulateModel(accum_steps=4, mixed_precision=True, inputs=model.input, outputs=model.output) |
For more information on usage, supported techniques, and examples, refer to the documentation.
The gradient accumulator model wrapper is based on the implementation presented in this thread on Stack Overflow. The adaptive gradient clipping method is based on the implementation by @sayakpaul. The optimizer wrapper is derived from the implementation by @fsx950223 and @stefan-falk.
The documentation hosted here was made possible by the incredible Read the Docs team, which offers free documentation hosting!
If you used this package or found the project relevant in your research, please include the following citation:

```
@software{andre_pedersen_2023_7905351,
  author    = {André Pedersen and Tor-Arne Schmidt Nordmo and Javier Pérez de Frutos and David Bouget},
  title     = {andreped/GradientAccumulator: v0.5.0},
  month     = may,
  year      = 2023,
  publisher = {Zenodo},
  version   = {v0.5.0},
  doi       = {10.5281/zenodo.7905351},
  url       = {https://doi.org/10.5281/zenodo.7905351}
}
```