# max-vis
`max-vis` is a JavaScript library and command-line utility that helps render the predictions returned by some of the deep learning models of the Model Asset eXchange (MAX). Given the JSON result (prediction) from one of the MAX image models and the source image, `max-vis` can render a new version of the image with the predictions (e.g., bounding boxes, pose lines) annotated on it.
## Install

- **browser**

  ```html
  <script src="https://cdn.jsdelivr.net/npm/@codait/max-vis"></script>
  ```

- **Node.js**

  ```
  npm install @codait/max-vis
  ```

- **command-line**

  ```
  npm install -g @codait/max-vis
  ```
## Usage

See working examples for browser, Node.js, and command-line environments in the `/examples` directory.
- **browser**

  ```javascript
  const prediction = ...
  const image = document.getElementById('myimage')

  maxvis.annotate(prediction, image)
    .then(annotatedImageBlob => {
      let img = document.createElement('img')
      img.src = URL.createObjectURL(annotatedImageBlob)
      document.body.appendChild(img)
    })
  ```

  **Note**: When loaded in a browser, the global variable `maxvis` will be available to access the API.
- **Node.js**

  ```javascript
  const fs = require('fs')
  const maxvis = require('@codait/max-vis')

  const prediction = ...
  const image = 'images/myImage.jpg'

  maxvis.annotate(prediction, image)
    .then(annotatedImageBuffer => {
      fs.writeFile('myAnnotatedImage.png', annotatedImageBuffer, (err) => {
        if (err) {
          console.error(err)
        }
      })
    })
  ```
- **command-line**

  Pass the prediction directly from a file:

  ```
  $ maxvis images/myImage.jpg -p maxImageModelPrediction.json
  ```

  or pipe the prediction from `curl`:

  ```
  $ curl -X POST "http://max-image-model-endpoint/model/predict" \
      -F "image=@images/myImage.jpg" \
      | maxvis images/myImage.jpg
  ```

  **Note**: When installed as a command-line utility, the global command `maxvis` will be available.
## API
### `overlay(prediction, image, [options])`

Processes the prediction against the image and renders the prediction (in a `Canvas` overlay) on top of the image. Not applicable when running in Node.js.

- `prediction` - (Required) the prediction output from a MAX image model
- `image` - (Required) an `HTMLImageElement` or the `id` of an `HTMLImageElement`
- `options` - (Optional) a JSON object of options to customize rendering. See API Options for more info.
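For example, a minimal browser sketch (not taken from the library docs; it reuses the placeholder model endpoint from the `curl` example above and assumes an `<img id="myimage">` element is on the page) might look like:

```javascript
// sketch: fetch a prediction from a MAX image model endpoint (placeholder URL),
// then draw it in a Canvas overlay on top of the displayed image
const imageElement = document.getElementById('myimage')

fetch(imageElement.src)
  .then(res => res.blob())
  .then(blob => {
    const form = new FormData()
    form.append('image', blob, 'myImage.jpg')
    return fetch('http://max-image-model-endpoint/model/predict', { method: 'POST', body: form })
  })
  .then(res => res.json())
  .then(prediction => {
    // the element id string ('myimage') could be passed instead of the element itself
    maxvis.overlay(prediction, imageElement)
  })
```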
### `annotate(prediction, image, [options])`

Processes the prediction against the image and creates a new version of the image that includes the rendered prediction.

- `prediction` - (Required) the prediction output from a MAX image model
- `image` - (Required) an `HTMLImageElement` or `HTMLCanvasElement`, or the `id` of an `HTMLImageElement` or `HTMLCanvasElement`
- `options` - (Optional) a JSON object of options to customize rendering. See API Options for more info.

Returns a Promise that resolves to a `Blob` (in browsers) or `Buffer` (in Node.js) of a PNG image containing the input image annotated with the prediction.
### `extract(prediction, image, [options])`

Processes the prediction against the image and extracts the components from the image.

- `prediction` - (Required) the prediction output from a MAX image model
- `image` - (Required) an `HTMLImageElement` or `HTMLCanvasElement`, or the `id` of an `HTMLImageElement` or `HTMLCanvasElement`
- `options` - (Optional) a JSON object of options to customize rendering. See API Options for more info.

Returns a Promise that resolves to an array of objects representing each item of the prediction. Each object in the array contains:

- `image`: a `Blob` (in browsers) or `Buffer` (in Node.js) of a PNG image containing the cropped-out area of the input image identified in the prediction
- `label`: a label for the image
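As an illustration, a Node.js sketch (the file names below are illustrative; it reuses the prediction file and image path from the examples above) might save each cropped component to its own PNG:

```javascript
const fs = require('fs')
const maxvis = require('@codait/max-vis')

const prediction = require('./maxImageModelPrediction.json') // prediction from a MAX image model
const image = 'images/myImage.jpg'

maxvis.extract(prediction, image)
  .then(items => {
    items.forEach((item, i) => {
      // each item contains an `image` Buffer (PNG) and a `label` string
      fs.writeFile(`${item.label}-${i}.png`, item.image, (err) => {
        if (err) {
          console.error(err)
        }
      })
    })
  })
```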
### `version`

Returns the `max-vis` version number.
## API Options

Available options to pass to the API. All are optional; by default, `max-vis` will try to determine the appropriate values from the prediction object.
| Option | Type | Description |
|---|---|---|
| `type` | String | The type of rendering the prediction conforms to. Acceptable types are `boxes` (for bounding boxes), `lines` (for pose lines), or `segments` (for image segmentation). |
| `height` | Number | The height (in pixels) of the image represented by the prediction |
| `width` | Number | The width (in pixels) of the image represented by the prediction |
| `colors` | 2D Array or Object | An array of RGB values to use for rendering (e.g., `[[255, 0, 200], [125, 125, 125], ...]`). Alternatively, for bounding boxes, an object mapping label names to preferred RGB values (e.g., `{person: [255, 0, 200], horse: [125, 125, 125], ...}`) can be passed. |
| `segments` | Array | An array of segmentation IDs to process (e.g., `[0, 15]`). If not provided, all segments will be processed. Only applicable for predictions of type `segments`. |
| `exclude` | Boolean | Set to `true` if the `segments` option lists segmentations that should be excluded from (instead of included in) processing. Default is `false`. Only applicable for predictions of type `segments`. |
| `lineWidth` | Number | The thickness of the lines in the rendering. Default is `2`. Only applicable for predictions of type `boxes` or `lines`. |
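For example, options could be passed as the third argument to `annotate()` (a sketch; the values below are illustrative, and `prediction` and `image` are obtained as in the Usage section above):

```javascript
// sketch: customize how a bounding-box prediction is rendered
const options = {
  type: 'boxes',            // the prediction contains bounding boxes
  lineWidth: 4,             // thicker box outlines (default is 2)
  colors: {                 // preferred RGB colors per label
    person: [255, 0, 200],
    horse: [125, 125, 125]
  }
}

maxvis.annotate(prediction, image, options)
  .then(annotatedImage => {
    // handle the returned Blob (browser) or Buffer (Node.js) as shown in Usage
  })
```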
## CLI Parameters

Available parameters to pass to the CLI:

| Parameter | Description |
|---|---|
| `--type` | Same as the `type` API option |
| `--extract` | Extract and save each component of the prediction from the image instead of saving a single image with all components rendered |
| `--prediction` | The path to a JSON file containing the prediction returned by a MAX image model |
## Examples

The `/examples` directory contains working examples for the browser, Node.js, and command-line environments.
## License

Apache-2.0