
# XGBoost runtime for MLServer

This package provides an MLServer runtime compatible with XGBoost.

## Usage

You can install the runtime, alongside `mlserver`, as:

```bash
pip install mlserver mlserver-xgboost
```

For further information on how to use MLServer with XGBoost, you can check out the worked-out example in the MLServer documentation.
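As a quick orientation, the overall flow looks something like the sketch below: train a model, serialise it, and point MLServer at it through a `model-settings.json` config. The dataset, directory layout, and model name here are illustrative assumptions, and `mlserver_xgboost.XGBoostModel` is assumed as the runtime's implementation class:

```python
import json
import os

from sklearn.datasets import load_iris
from xgboost import XGBClassifier

# Train a toy classifier (any training data works the same way)
X, y = load_iris(return_X_y=True)
clf = XGBClassifier(n_estimators=10)
clf.fit(X, y)

# Serialise the model in one of the formats the runtime understands
# (see the artifact table below)
os.makedirs("iris-xgboost", exist_ok=True)
clf.save_model("iris-xgboost/model.json")

# model-settings.json tells MLServer which runtime should load the model
settings = {
    "name": "iris-xgboost",
    "implementation": "mlserver_xgboost.XGBoostModel",
}
with open("iris-xgboost/model-settings.json", "w") as f:
    json.dump(settings, f)

# The model folder can then be served with:
#   mlserver start ./iris-xgboost
```

From there, MLServer would expose the model over its standard REST and gRPC inference endpoints.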

## XGBoost Artifact Type

The XGBoost inference runtime will expect that your model is serialised via one of the following methods:

| Extension | Docs                 | Example                             |
| --------- | -------------------- | ----------------------------------- |
| `*.json`  | JSON Format          | `booster.save_model("model.json")`  |
| `*.ubj`   | Binary JSON Format   | `booster.save_model("model.ubj")`   |
| `*.bst`   | (Old) Binary Format  | `booster.save_model("model.bst")`   |

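For reference, here is a minimal sketch of producing each of these artifacts with the native `xgboost.Booster` API (the training data below is a random placeholder):

```python
import numpy as np
import xgboost as xgb

# Placeholder training data; substitute your own dataset
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=100)

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10)

# XGBoost infers the serialisation format from the file extension
booster.save_model("model.json")  # JSON format
booster.save_model("model.ubj")   # binary JSON (UBJSON) format
booster.save_model("model.bst")   # old binary format (deprecated in newer XGBoost releases)
```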
By default, the runtime will look for a file called `model.[json | ubj | bst]`.
However, this can be modified through the `parameters.uri` field of your
{class}`ModelSettings <mlserver.settings.ModelSettings>` config (see the
section on [Model Settings](../../docs/reference/model-settings.md) for more
details).

```{code-block} json
---
emphasize-lines: 3-5
---
{
  "name": "foo",
  "parameters": {
    "uri": "./my-own-model-filename.json"
  }
}
```

## Content Types

If no content type is present on the request or metadata, the XGBoost runtime will try to decode the payload as a NumPy Array. To avoid this, either send a different content type explicitly, or define the correct one as part of your model's metadata.
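For instance, a request that tags its payload with the NumPy content type explicitly might look like the following sketch (the endpoint, port, and model name are assumptions based on MLServer's defaults):

```python
import requests

inference_request = {
    "inputs": [
        {
            "name": "my-input",
            "datatype": "FP32",
            "shape": [2, 2],
            "data": [1.0, 2.0, 3.0, 4.0],
            # Explicit content type, so the runtime doesn't have to guess
            "parameters": {"content_type": "np"},
        }
    ]
}

response = requests.post(
    "http://localhost:8080/v2/models/my-model/infer",
    json=inference_request,
)
print(response.json())
```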

## Model Outputs

The XGBoost inference runtime exposes a number of outputs depending on the model type. These outputs map to the `predict` and `predict_proba` methods of the XGBoost model.

| Output          | Returned By Default | Availability                                                           |
| --------------- | ------------------- | ---------------------------------------------------------------------- |
| `predict`       | Yes                 | Available on all XGBoost models.                                        |
| `predict_proba` | No                  | Only available on non-regressor models (i.e. `XGBClassifier` models).   |

By default, the runtime will only return the output of `predict`. However, you are able to control which outputs you want back through the `outputs` field of your {class}`InferenceRequest <mlserver.types.InferenceRequest>` payload.

For example, to only return the model's predict_proba output, you could define a payload such as:

```{code-block} json
---
emphasize-lines: 10-12
---
{
  "inputs": [
    {
      "name": "my-input",
      "datatype": "INT32",
      "shape": [2, 2],
      "data": [1, 2, 3, 4]
    }
  ],
  "outputs": [
    { "name": "predict_proba" }
  ]
}
```
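To exercise this payload end to end, you could post it to a running MLServer instance along these lines (host, port, and model name are again assumptions based on MLServer's defaults):

```python
import requests

payload = {
    "inputs": [
        {
            "name": "my-input",
            "datatype": "INT32",
            "shape": [2, 2],
            "data": [1, 2, 3, 4],
        }
    ],
    "outputs": [{"name": "predict_proba"}],
}

response = requests.post(
    "http://localhost:8080/v2/models/my-model/infer",
    json=payload,
)
# Only the requested `predict_proba` output should come back
for output in response.json()["outputs"]:
    print(output["name"], output["data"])
```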
