# max-viz-utils
This package lets you quickly and easily create annotated images from predictions generated by the deep learning models of the Model Asset eXchange (MAX) in your JavaScript applications.
Currently supported models:

- MAX Image Segmenter
- MAX Human Pose Estimator
- MAX Object Detector
- MAX Facial Recognizer
## Usage
### MAX Image Segmenter

`getColorMap(imageData, segmentMap, options)`

- This function takes an image and the corresponding segment map contained in the model payload, and returns the annotated color-map image.
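As a sketch of how this might be wired up (the import path and the exact payload field names are assumptions based on the MAX Image Segmenter's JSON response; adjust to match your setup):

```javascript
// Example MAX Image Segmenter payload (field names are assumptions based
// on the model's JSON response): `seg_map` is a 2-D grid of class ids,
// one per pixel of the segment grid.
const segmentMap = {
  image_size: [4, 4], // [height, width] of the segment grid
  seg_map: [
    [0, 0, 15, 15],
    [0, 0, 15, 15],
    [0, 0, 15, 15],
    [0, 0, 15, 15]
  ]
};

// With the package installed (hypothetical import path):
// const { getColorMap } = require('max-viz-utils');
// const colorMapImage = await getColorMap(imageData, segmentMap, { lineColor: 'blue' });
```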
### MAX Human Pose Estimator

`getPoseLines(imageData, poseData, options)`

- This function takes an image and the corresponding detected poses contained in the model payload, and returns an annotated image with drawn pose lines.
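A minimal sketch of a call, assuming the pose payload shape of the MAX Human Pose Estimator's JSON response (field names are assumptions; the import path is hypothetical):

```javascript
// Example MAX Human Pose Estimator payload (field names are assumptions
// based on the model's JSON response): each detected human carries a set
// of `pose_lines`, each a [x1, y1, x2, y2] segment in image coordinates.
const poseData = {
  predictions: [
    {
      human_id: 0,
      pose_lines: [
        { line: [110, 53, 120, 98] },
        { line: [120, 98, 117, 160] }
      ]
    }
  ]
};

// With the package installed (hypothetical import path):
// const { getPoseLines } = require('max-viz-utils');
// const annotated = await getPoseLines(imageData, poseData, { lineColor: 'green' });
```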
### MAX Object Detector

`getObjectBoxes(imageData, boxData, options)`

- This function takes an image and the corresponding detected objects contained in the model payload, and returns an annotated image with drawn bounding boxes.

`cropObjectBoxes(imageData, boxData, options)`

- This function takes an image and the corresponding detected objects contained in the model payload, and returns an array of cropped images with metadata.
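Both functions consume the same detection payload. The sketch below assumes the MAX Object Detector's JSON response shape (field names and the normalized `[ymin, xmin, ymax, xmax]` box convention are assumptions; the import path is hypothetical):

```javascript
// Example MAX Object Detector payload (field names are assumptions based
// on the model's JSON response): one entry per detected object, with a
// label, a confidence score, and a normalized bounding box.
const boxData = {
  predictions: [
    { label: 'person', probability: 0.96, detection_box: [0.12, 0.2, 0.88, 0.65] },
    { label: 'dog',    probability: 0.88, detection_box: [0.4, 0.6, 0.95, 0.9] }
  ]
};

// With the package installed (hypothetical import path):
// const { getObjectBoxes, cropObjectBoxes } = require('max-viz-utils');
// const annotated = await getObjectBoxes(imageData, boxData, { lineColor: 'red' });
// const crops = await cropObjectBoxes(imageData, boxData);
// `crops` would hold one cropped image (plus metadata) per prediction.
```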
## Customizing the Output
These utility functions accept an optional third parameter, `options`, which supports the following values:

- `lineColor`: a string containing the name of any valid CSS color. Sets the color of annotations. Default: cycles through a different color for each object.
- `linePad`: a number representing the thickness of lines in drawn annotations.
- `fontColor`: a string, either `white` or `black`, that sets the color of annotation text.
- `fontSize`: a number representing the size of the annotation text. Choose from `8`, `16`, `32`, `64`, or `128`.
- `modelType`: a string containing the name of the type of MAX model. This is used to determine the structure of the model payload and build bounding boxes appropriately. Choose from `object-detector` or `facial-recognizer`.