# XGBoost runtime for MLServer
This package provides an MLServer runtime compatible with XGBoost.
You can install the runtime, alongside `mlserver`, as:

```{code-block} bash
pip install mlserver mlserver-xgboost
```
For further information on how to use MLServer with XGBoost, you can check out this worked-out example.
The XGBoost inference runtime will expect that your model is serialised via one of the following methods:
| Extension | Docs | Example |
| --- | --- | --- |
| `*.json` | JSON Format | `booster.save_model("model.json")` |
| `*.ubj` | Binary JSON Format | `booster.save_model("model.ubj")` |
| `*.bst` | (Old) Binary Format | `booster.save_model("model.bst")` |
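For instance, a minimal sketch of producing one of these files through XGBoost's scikit-learn API (the toy data and hyperparameters here are purely illustrative):

```{code-block} python
import numpy as np
import xgboost as xgb

# Purely illustrative training data, just to obtain a serialisable model
X = np.random.rand(100, 4)
y = np.random.randint(2, size=100)

model = xgb.XGBClassifier(n_estimators=10)
model.fit(X, y)

# The serialisation format is inferred from the file extension
model.save_model("model.json")
```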
By default, the runtime will look for a file called `model.[json | ubj | bst]`.
However, this can be modified through the `parameters.uri` field of your
{class}`ModelSettings <mlserver.settings.ModelSettings>` config (see the
section on [Model Settings](../../docs/reference/model-settings.md) for more
details).
```{code-block} json
---
emphasize-lines: 3-5
---
{
  "name": "foo",
  "parameters": {
    "uri": "./my-own-model-filename.json"
  }
}
```
If no content type is present on the request or metadata, the XGBoost runtime will try to decode the payload as a NumPy Array. To avoid this, either send a different content type explicitly, or define the correct one as part of your model's metadata.
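For instance, a sketch of pinning the content type through the model's metadata could look along these lines (the input name, shape, and the `np` content type for NumPy payloads are assumptions based on MLServer's content-type mechanism):

```{code-block} json
{
  "name": "foo",
  "inputs": [
    {
      "name": "my-input",
      "datatype": "INT32",
      "shape": [2, 2],
      "parameters": {
        "content_type": "np"
      }
    }
  ]
}
```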
The XGBoost inference runtime exposes a number of outputs depending on the model type. These outputs correspond to the `predict` and `predict_proba` methods of the XGBoost model.
| Output | Returned By Default | Availability |
| --- | --- | --- |
| `predict` | ✅ | Available on all XGBoost models. |
| `predict_proba` | ❌ | Only available on non-regressor models (i.e. `XGBClassifier` models). |
By default, the runtime will only return the output of `predict`. However, you can control which outputs you want back through the `outputs` field of your {class}`InferenceRequest <mlserver.types.InferenceRequest>` payload. For example, to only return the model's `predict_proba` output, you could define a payload such as:
```{code-block} json
---
emphasize-lines: 10-12
---
{
  "inputs": [
    {
      "name": "my-input",
      "datatype": "INT32",
      "shape": [2, 2],
      "data": [1, 2, 3, 4]
    }
  ],
  "outputs": [
    { "name": "predict_proba" }
  ]
}
```
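To try this out, here is a minimal sketch of sending the payload above over MLServer's V2 REST inference endpoint (assuming a local server on the default HTTP port `8080` and a model loaded under the name `foo`):

```{code-block} python
import requests

# Same payload as above; only `predict_proba` is requested back
payload = {
    "inputs": [
        {
            "name": "my-input",
            "datatype": "INT32",
            "shape": [2, 2],
            "data": [1, 2, 3, 4],
        }
    ],
    "outputs": [{"name": "predict_proba"}],
}

# Assumes MLServer is running locally on its default HTTP port (8080)
# and that the model was loaded under the name "foo"
response = requests.post(
    "http://localhost:8080/v2/models/foo/infer",
    json=payload,
)
print(response.json())
```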