
Research
Malicious npm Packages Impersonate Flashbots SDKs, Targeting Ethereum Wallet Credentials
Four npm packages disguised as cryptographic tools steal developer credentials and send them to attacker-controlled Telegram infrastructure.
irisml-tasks-training
This is a package for IrisML training-related tasks.
See the irisml repository for details on the irisml framework.
Train a pytorch model. A model object must have "criterion" and "predictor" properties. See the documentation for details. Returns a trained model.
Run inference with a given pytorch model. Returns prediction results.
Append a classifier layer to an encoder model.
Benchmark a given model.
Convert a multiclass classification Image Dataset into a dataset with text prompts.
Build a classifier FC layer from text features. See the CLIP repo for details.
Create a prompt generator for classification task.
Trace a pytorch model and export it as ONNX using torch.onnx.export(). Throws an exception if it couldn't export. Returns an exported onnx model.
Calculate top1 accuracy for given prediction results. It supports only image classification results.
Calculate mAP for object detection results.
Get a list or a tensor of targets from a Dataset.
Given a list of class ids, extract the sub-dataset of those classes.
Make a new model to extract intermediate features from the given model. Use the predict task to run the extractor model.
Make a new model to run image-text contrastive training like CLIP.
Make a transform function that can be used for contrastive training.
Oversample from a dataset and return a new dataset.
Extract image_model and text_model from an image-text model.
Sample a few-shot dataset given a shot number and random seed.
Train a pytorch model using gradient cache. Useful for training a large contrastive model.
The tasks in this package expect the following interfaces:
Notations
input: An input object for a single example. For example, an image tensor.
target: A ground truth for a single example.
inputs_batch: A batch of input.
targets_batch: A batch of target.
class Model(torch.nn.Module):
    def training_step(self, inputs_batch, targets_batch):  # Returns {'loss': loss_tensor}
        pass
A model for training must implement the training_step() method. The trainer provides inputs and targets to this method, and it must return a dictionary containing a 'loss' entry.
class Model(torch.nn.Module):
    def prediction_step(self, inputs_batch):  # Returns prediction results
        pass
Similarly, a model for prediction must implement the prediction_step() method. Inputs will be provided to this method, and it must return prediction results.
In most cases, a model implements both methods, training_step() and prediction_step().
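Taken together, a minimal model satisfying both interfaces might look like the following sketch (the layer sizes and the cross-entropy loss are illustrative assumptions, not part of the package):

```python
import torch


class LinearClassifier(torch.nn.Module):
    """Minimal sketch of a model implementing both interfaces."""

    def __init__(self, in_features=4, num_classes=3):
        super().__init__()
        self.fc = torch.nn.Linear(in_features, num_classes)
        self.criterion = torch.nn.CrossEntropyLoss()

    def training_step(self, inputs_batch, targets_batch):
        # Must return a dictionary containing a 'loss' entry.
        logits = self.fc(inputs_batch)
        return {'loss': self.criterion(logits, targets_batch)}

    def prediction_step(self, inputs_batch):
        # Returns prediction results, here class probabilities.
        return torch.softmax(self.fc(inputs_batch), dim=1)
```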
The trainer accepts an instance of the torch.utils.data.Dataset class. For each index, it must return a tuple (raw_input, target). Currently, raw_input
must be an RGB PIL Image object.
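For example, a toy dataset meeting this contract could look like this sketch (the image size and dummy labels are placeholder assumptions):

```python
import torch
from PIL import Image


class TinyImageDataset(torch.utils.data.Dataset):
    """Sketch of a dataset returning (raw_input, target) pairs,
    where raw_input is an RGB PIL Image."""

    def __init__(self, num_examples=8):
        self._num_examples = num_examples

    def __len__(self):
        return self._num_examples

    def __getitem__(self, index):
        image = Image.new('RGB', (32, 32))  # placeholder image
        target = index % 2                  # dummy class id
        return image, target
```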
A transform function must return (input, target) given (raw_input, target).
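A transform matching this signature might, for instance, convert the RGB PIL Image into a float tensor. This is a hand-rolled sketch for illustration; in practice a torchvision transform would typically do the conversion:

```python
import torch


def to_tensor_transform(raw_input, target):
    """Sketch of a transform: convert an RGB PIL Image into a
    float tensor in [0, 1] with shape (3, H, W)."""
    data = list(raw_input.getdata())  # row-major list of (R, G, B) tuples
    w, h = raw_input.size
    tensor = torch.tensor(data, dtype=torch.float32).view(h, w, 3)
    tensor = tensor.permute(2, 0, 1) / 255.0  # HWC -> CHW, scale to [0, 1]
    return tensor, target
```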
The build_zero_shot_classifier task has a different interface: it doesn't require a Model instance. Instead, it requires two tensors, text_features and text_labels.
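Assuming one text embedding per class, the tensors might be shaped like this sketch; the dimensions and the CLIP-style weight copy are illustrative assumptions, not the task's actual implementation:

```python
import torch

# Hypothetical shapes for illustration: one normalized text embedding per class.
num_classes, feature_dim = 5, 512
text_features = torch.nn.functional.normalize(
    torch.randn(num_classes, feature_dim), dim=1)
text_labels = torch.arange(num_classes)  # class id for each feature row

# A classifier FC layer built from text features is a linear layer whose
# weight matrix is the text feature matrix, as in CLIP-style zero-shot setups.
classifier = torch.nn.Linear(feature_dim, num_classes, bias=False)
with torch.no_grad():
    classifier.weight.copy_(text_features)
```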