
Gradient is an end-to-end MLOps platform that enables individuals and organizations to quickly develop, train, and deploy Deep Learning models. The Gradient software stack runs on any infrastructure, e.g. AWS, GCP, on-premise hardware, and low-cost Paperspace GPUs. Leverage automatic versioning, distributed training, built-in graphs & metrics, hyperparameter search, GradientCI, 1-click Jupyter Notebooks, our Python SDK, and more.
This is an SDK for performing Machine Learning with Gradient. It can be installed in addition to gradient-cli.
This SDK requires Python 3.6+.
To install it, run:
pip install gradient-utils
A library for logging custom and framework metrics in Gradient.
Usage example:
from gradient_utils.metrics import init, add_metrics, MetricsLogger
# Initialize metrics logging
with init():
    # do work here
    pass
# Capture metrics produced by TensorBoard
with init(sync_tensorboard=True):
    # do work here
    pass
# Log metrics with a single command
add_metrics({"loss": 0.25, "accuracy": 0.99})
# Insert metrics with a step value
# Note: add_metrics should be called once per step.
# Multiple calls with the same step may result in loss of metrics.
add_metrics({"loss": 0.25, "accuracy": 0.99}, step=0)
# For more advanced use cases use the MetricsLogger
logger = MetricsLogger()
logger.add_gauge("loss") # add a specific gauge
logger["loss"].set(0.25)
logger["loss"].inc()
logger.add_gauge("accuracy")
logger["accuracy"].set(0.99)
# With MetricsLogger you must explicitly push metrics each time you mutate values
logger.push_metrics()
# You can also use steps with the MetricsLogger
logger = MetricsLogger(step=0)
logger.add_gauge("loss")
logger["loss"].set(0.25)
logger.push_metrics()
logger.set_step(1) # update step explicitly
logger["loss"].set(0.25)
logger.push_metrics()
Set the TF_CONFIG environment variable
For multi-worker training, you need to set the TF_CONFIG environment variable for each binary running in your cluster. Set the value of TF_CONFIG to a JSON string that specifies each task within the cluster, including each task's address and role. The tensorflow/ecosystem repo provides a Kubernetes template that sets TF_CONFIG for your training tasks.
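For reference, a TF_CONFIG value for a two-worker cluster could look like the sketch below; the host names and ports are placeholders, not values assigned by Gradient.
import json
import os
# Illustrative TF_CONFIG for a two-worker cluster; hosts and ports are placeholders.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["worker0.example.com:2222", "worker1.example.com:2222"]
    },
    "task": {"type": "worker", "index": 0}  # this binary is worker 0
})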
get_tf_config()
Function to set the value of TF_CONFIG when run on machines within Paperspace infrastructure. It can raise a ConfigError exception with a message if there is a problem with the configuration on a particular machine.
Usage example:
from gradient_utils import get_tf_config
get_tf_config()
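Once TF_CONFIG is set, TensorFlow's multi-worker strategies read it automatically. The snippet below is a sketch that assumes TensorFlow 2.x is installed and that get_tf_config() succeeds on the current machine.
import tensorflow as tf
from gradient_utils import get_tf_config
# Populate TF_CONFIG from the Paperspace environment, then let TensorFlow
# use it to build the multi-worker cluster definition.
get_tf_config()
strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
    # build and compile the model here so variables are mirrored across workers
    pass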
get_mongo_conn_str()
Function to check and construct a MongoDB connection string.
It returns a connection string to MongoDB.
It can raise a ConfigError exception with a message if there is a problem with any of the values used to prepare the MongoDB connection string.
Usage example:
from gradient_utils import get_mongo_conn_str
conn_str = get_mongo_conn_str()
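The returned string can be passed directly to a MongoDB client. The example below is a sketch that assumes pymongo is installed; the database name is a placeholder.
from pymongo import MongoClient
from gradient_utils import get_mongo_conn_str
# Build the connection string from the environment and open a client.
conn_str = get_mongo_conn_str()
client = MongoClient(conn_str)
db = client["metrics"]  # placeholder database name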
data_dir()
Function to retrieve the path to the job space.
Usage example:
from gradient_utils import data_dir
job_space = data_dir()
model_dir()
Function to retrieve the path to the model space.
Usage example:
from gradient_utils import model_dir
model_path = model_dir(model_name)
export_dir()
Function to retrieve the path for model export.
Usage example:
from gradient_utils import export_dir
model_path = export_dir(model_name)
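A minimal sketch of how data_dir() and export_dir() might be used together; the model name and file names below are placeholders, not conventions enforced by Gradient.
import os
from gradient_utils import data_dir, export_dir
model_name = "my-model"  # placeholder model name
# Read training inputs from the job space.
train_path = os.path.join(data_dir(), "train.csv")  # hypothetical input file
# Write the trained artifact under the export path.
export_path = export_dir(model_name)
os.makedirs(export_path, exist_ok=True)
with open(os.path.join(export_path, "weights.bin"), "wb") as f:
    f.write(b"")  # placeholder for serialized model weights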
worker_hosts()
Function to retrieve information about worker hosts.
Usage example:
from gradient_utils import worker_hosts
hosts = worker_hosts()
ps_hosts()
Function to retrieve information about parameter server (ps) hosts.
Usage example:
from gradient_utils import ps_hosts
hosts = ps_hosts()
task_index()
Function to retrieve the task index.
Usage example:
from gradient_utils import task_index
index = task_index()
job_name()
Function to retrieve the job name.
Usage example:
from gradient_utils import job_name
name = job_name()
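Taken together, these helpers supply the pieces of a cluster description. The sketch below shows one way they could be combined into a TF_CONFIG-style dictionary by hand (get_tf_config() does something similar for you); it assumes worker_hosts() and ps_hosts() return comma-separated "host:port" strings, so adjust the parsing if they already return lists.
import json
from gradient_utils import worker_hosts, ps_hosts, task_index, job_name
# Assumption: worker_hosts() and ps_hosts() return comma-separated host strings.
workers = str(worker_hosts()).split(",")
ps = str(ps_hosts()).split(",")
cluster = {
    "cluster": {"worker": workers, "ps": ps},
    "task": {"type": job_name(), "index": task_index()},
}
print(json.dumps(cluster, indent=2))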
We use Docker and Docker Compose to run the tests locally.
# To set up the integration test framework
docker-compose up --remove-orphans -d pushgateway
docker-compose build utils
# To run tests
docker-compose -f docker-compose.ci.yml run utils poetry run pytest
# To autoformat
docker-compose run utils poetry run autopep8 --in-place --aggressive --aggressive --recursive .