
akerbp-mlops
This is a framework for MLOps that deploys models as functions in Cognite Data Fusion.
This assumes you are already familiar with the framework, and acts as a quick reference guide for deploying models using the prediction service, i.e. when model training is performed outside of the MLOps framework.
Follow these steps (in the context of your virtual environment):
1. Install the package: pip install akerbp-mlops[cdf] (on some OSes you may need to escape the brackets: pip install "akerbp-mlops[cdf]").
2. Generate the GitHub pipeline file .github/workflows/main.yml and config file mlops_settings.yaml by running this command from your repo's root folder:
   python -m akerbp.mlops.deployment.setup
3. Fill in the settings file and validate it:
   python -c "from akerbp.mlops.core.config import validate_user_settings; validate_user_settings()"
   Alternatively, run the setup again:
   python -m akerbp.mlops.deployment.setup
4. Organize your model code (e.g. in a folder model_code) and make sure your model follows the same interface and file structure (see Files and Folders Structure).

At this point every git push to the master branch will trigger a deployment in the test environment. More information about the deployment pipelines is provided later.
Follow these steps:

1. Install a specific version: pip install akerbp-mlops[cdf]==x, or upgrade your existing version to the latest release: pip install --upgrade akerbp-mlops[cdf]
2. Run the setup again:
   python -m akerbp.mlops.deployment.setup

This will update the GitHub pipeline with the newest release of akerbp.mlops and validate your settings. Once the settings are validated, commit changes and you're ready to go!

Users should consider the following general guidelines:
- model_artifact does store model artifacts for the model defined in model_code, but it is just there to help users understand the framework (see this section on how to handle model artifacts).

MLOps configuration is stored in mlops_settings.yaml. Example for a project with a single model:
model_name: model1
human_friendly_model_name: 'My First Model'
model_file: model_code/model1.py
req_file: model_code/requirements.model
artifact_folder: model_artifact
artifact_version: 1 # Optional
test_file: model_code/test_model1.py
platform: cdf
dataset: mlops
python_version: py39
helper_models:
  - my_helper_model
info:
  prediction: &desc
    description: 'Description prediction service, model1'
    metadata:
      required_input:
        - ACS
        - RDEP
        - DEN
      training_wells:
        - 3/1-4
        - 2/7-18
      input_types:
        - int
        - float
        - string
      units:
        - s/ft
        - 1
        - kg/m3
      output_curves:
        - AC
      output_units:
        - s/ft
      petrel_exposure: False
      imputed: True
      num_filler: -999.15
      cat_filler: UNKNOWN
    owner: data@science.com
  training:
    <<: *desc
    description: 'Description training service, model1'
    metadata:
      required_input:
        - ACS
        - RDEP
        - DEN
      output_curves:
        - AC
      hyperparameters:
        learning_rate: 1e-3
        batch_size: 100
        epochs: 10
| Field | Description |
|---|---|
| model_name | a suitable name for your model. No spaces or dashes are allowed |
| human_friendly_model_name | Name of function (in CDF) |
| model_file | model file path relative to the repo's root folder. All required model code should be under the top folder in that path (model_code in the example above). |
| req_file | model requirement file. Do not use .txt extension! |
| artifact_folder | model artifact folder. It can be the name of an existing local folder (note that it should not be committed to the repo). In that case it will be used in local deployment. It still needs to be uploaded/promoted with the model manager so that it can be used in Test or Prod. If the folder does not exist locally, the framework will try to create that folder and download the artifacts there. Set to null if there is no model artifact. |
| artifact_version (optional) | artifact version number to use during deployment. Defaults to the latest version if not specified |
| test_file | test file to use. Set to null for no testing before deployment (not recommended). |
| platform | deployment platform, either cdf (Cognite) or local for local testing. |
| python_version | If platform is set to cdf, the python_version required by the model to be deployed needs to be specified. Available versions can be found here |
| helper_models | Array of helper models used for feature engineering during preprocessing. During deployment, the framework iterates through this list and checks that each helper model's requirements match the main model's. For now we only check for akerbp-mlpet |
| dataset | CDF Dataset to use to read/write model artifacts (see Model Manager). Set to null if there is no dataset (not recommended). |
| info | description, metadata and owner information for the prediction and training services. Training field can be discarded if there's no such service. |
Note: all paths should be unix style, regardless of the platform.
Notes on metadata: we need to specify the metadata under info as a dictionary with strings as both keys and values, as CDF only allows strings for now. We are also limited by CDF's constraints on metadata size.
If there are multiple models, model configuration should be separated using
---. Example:
model_name: model1
human_friendly_model_name: 'My First Model'
model_file: model_code/model1.py
(...)
--- # <- this separates model1 and model2 :)
model_name: model2
human_friendly_model_name: 'My Second Model'
model_file: model_code/model2.py
(...)
All the model code and files should be under a single folder, e.g. model_code.
Required files in this folder:
- model.py: implements the standard model interface
- test_model.py: tests to verify that the model code is correct and to verify correct deployment
- requirements.model: libraries needed (with specific version numbers), can't be called requirements.txt. Add the MLOps framework like this:
# requirements.model
(...) # your other reqs
akerbp-mlops==MLOPS_VERSION
During deployment, MLOPS_VERSION will be automatically replaced by the specific version that you have installed locally. Make sure you have the latest release on your local machine prior to model deployment.

For the prediction service we require the model interface to have the following class and function.
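As a minimal sketch of what this could look like (the exact signature, including the init_object and secrets arguments, is an assumption; verify against the model_code template shipped with the framework):

class ModelException(Exception):
    """Raised by the model so the prediction service can report errors."""

def predict(data, init_object, secrets=None):
    """Map an input data dictionary to a dictionary of predictions.

    `init_object` would hold whatever was loaded from the model artifact.
    """
    if not data:
        raise ModelException("no input data provided")
    return {"prediction": init_object.predict(data)}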
For the training service we require the model interface to have the following class and function.
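Again as a sketch only (the signature is an assumption; check the model_code template):

class ModelException(Exception):
    """Raised by the model so the training service can report errors."""

def train(folder_path, secrets=None, **kwargs):
    """Train the model, save the artifact under `folder_path`, and return
    a metadata dictionary describing the run."""
    metadata = {"git_commit": "abc123", "score": "0.9"}  # illustrative values
    return metadata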
The following structure is recommended for projects with multiple models:

- model_code/model1/
- model_code/model2/
- model_code/common_code/

This is because when deploying a model, e.g. model1, the top folder in the path (model_code in the example above) is copied and deployed, i.e. the common_code folder (assumed to be needed by model1) is included. Note that the model2 folder would also be deployed (this is assumed to be unnecessary but harmless).
The repo's root folder is the base folder when importing. For example, assume you have these files in the folder with model code:

- model_code/model.py
- model_code/helper.py
- model_code/data.csv

If model.py needs to import helper.py, use: import model_code.helper. If model.py needs to read data.csv, the right path is os.path.join('model_code', 'data.csv').
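Putting both rules together, a model module might look like this (the helper usage is illustrative):

# model_code/model.py -- imports are resolved from the repo's root folder,
# so the package prefix `model_code` is required.
import os

import pandas as pd

import model_code.helper  # not `import helper`

# Data files are likewise read relative to the repo root:
data = pd.read_csv(os.path.join("model_code", "data.csv"))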
It's of course possible to import from the MLOps package, e.g. its logger:

from akerbp.mlops.core import logger

logging = logger.get_logger("logger_name")
logging.debug("This is a debug log")
We consider two types of services: prediction and training.
Deployed services can be called with
from akerbp.mlops.xx.helpers import call_function
output = call_function(external_id, data)
Where xx is either 'cdf' or 'gc', and external_id follows the structure model-service-model_env:

- model: model name given by the user (settings file)
- service: either training or prediction
- model_env: either dev, test or prod (depending on the deployment environment)

The output has a status field (ok or error). If the status is ok, the output also has a prediction and prediction_file field, or a training field (depending on the type of service). The former is determined by the predict method of the model, while the latter combines artifact metadata and model metadata produced by the train function. Prediction services also have a model_id field to keep track of which model was used to predict.
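A sketch of how a caller might branch on these fields (the external id and input values are illustrative):

from akerbp.mlops.cdf.helpers import call_function

data = {"data": {"ACS": [0.1], "RDEP": [2.0], "DEN": [2.3]}}  # illustrative input
output = call_function("model1-prediction-test", data)
if output["status"] == "ok":
    predictions = output["prediction"]   # produced by the model's predict method
    model_id = output.get("model_id")    # which model artifact was used
else:
    raise RuntimeError("prediction service returned an error")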
See below for more details on how to call prediction services hosted in CDF.
Model services (described above) can be deployed to either CDF (Cognite Data Fusion) or Google Cloud Run. The deployment platform is specified in the settings file.
CDF Functions include metadata when they are called. This information can be
used to redeploy a function (specifically, the file_id field). Example:
import akerbp.mlops.cdf.helpers as cdf

human_readable_name = "My model"
external_id = "my_model-prediction-test"
cdf.set_up_cdf_client('deploy')
cdf.redeploy_function(
    human_readable_name,
    external_id,
    file_id,  # taken from the function's call metadata
    'Description',
    'your@email.com'
)
Note that the external id of a function needs to be unique, as this is used to distinguish functions across services and hosting environments.
It's possible to query available functions (can be filtered by environment and/or tags). Example:
import akerbp.mlops.cdf.helpers as cdf
cdf.set_up_cdf_client('deploy')
all_functions = cdf.list_functions()
test_functions = cdf.list_functions(model_env="test")
tag_functions = cdf.list_functions(tags=["well_interpretation"])
Functions can be deleted. Example:
import akerbp.mlops.cdf.helpers as cdf
cdf.set_up_cdf_client('deploy')
cdf.delete_service("my_model-prediction-test")
Functions can be called in parallel. Example:
from akerbp.mlops.cdf.helpers import call_function_parallel
function_name = 'my_function-prediction-prod'
data = [dict(data='data_call_1'), dict(data='data_call_2')]
response1, response2 = call_function_parallel(function_name, data)
#TODO - Document common use cases for GCR
Model Manager is the module dedicated to managing the model artifacts used by prediction services (and generated by training services). This module uses CDF Files as backend.
Model artifacts are versioned and stored together with user-defined metadata. Uploading a new model increases the version count by 1 for that model and environment. When deploying a prediction service, the latest model version is chosen. It would be possible to extend the framework to allow deploying specific versions or filtering by metadata.
Model artifacts are segregated by environment (e.g. only production artifacts can be deployed to production). Model artifacts have to be uploaded manually to test (or dev) environment before deployment. Code example:
import akerbp.mlops.model_manager as mm

metadata = train(model_dir, secrets)  # or define it directly
mm.setup()
folder_info = mm.upload_new_model_version(
    model_name,
    model_env,
    folder_path,
    metadata
)
If there are multiple models, you need to do this one at a time. Note that model_name corresponds to one of the elements in model_names defined in mlops_settings.yaml, model_env is the target environment (where the model should be available), folder_path is the local model artifact folder and metadata is a dictionary with artifact metadata, e.g. performance, git commit, etc.
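As a concrete illustration (the values below are made up; the call mirrors the example above):

import akerbp.mlops.model_manager as mm

mm.setup()
folder_info = mm.upload_new_model_version(
    "model1",           # model_name, as defined in the settings file
    "test",             # model_env: target environment
    "model_artifact",   # folder_path: local artifact folder
    {"git_commit": "abc123", "r2_score": "0.87"},  # illustrative metadata
)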
Model artifacts need to be promoted to the production environment (i.e. after they have been deployed successfully to the test environment) so that a prediction service can be deployed in production.
# After a model's version has been successfully deployed to test
import akerbp.mlops.model_manager as mm
mm.setup()
mm.promote_model('model', 'version')
Each model artifact upload/promotion increments a version number (environment dependent) available in Model Manager. However, this doesn't modify the model artifacts used in existing prediction services (i.e. nothing changes in CDF Functions). To reflect newly uploaded/promoted model artifacts in existing services, one needs to deploy the services again. Note that we don't have to specify the artifact version explicitly if we want to deploy using the latest artifacts, as this is done by default.
Recommended process to update a model artifact and prediction service:

1. Upload the new model artifact version to the test (or dev) environment.
2. Deploy the prediction service to test and verify it works as expected.
3. Promote the model artifact to the production environment.
4. Deploy the prediction service to production.
It's possible to get an overview of the model artifacts managed by Model
Manager. Some examples (see get_model_version_overview documentation for other
possible queries):
import akerbp.mlops.model_manager as mm
mm.setup()
# all artifacts
folder_info = mm.get_model_version_overview()
# all artifacts for a given model
folder_info = mm.get_model_version_overview(model_name='xx')
If the overview shows model artifacts that are not needed, it is possible to remove them. For example if artifact "my_model/dev/5" is not needed:
model_to_remove = "my_model/dev/5"
mm.delete_model_version(model_to_remove)
Model Manager will by default show information on the artifact to delete and ask for user confirmation before proceeding. It's possible (but not recommended) to disable this check. There's no identity check, so it's possible to delete any model artifact (including those from other data scientists). Be careful!
It's possible to download a model artifact (e.g. to verify its content). For example:
mm.download_model_version('model_name', 'test', 'artifact_folder', version=5)
If no version is specified, the latest one is downloaded by default.
By default, Model Manager assumes artifacts are stored in the mlops dataset. If your project uses a different one, you need to specify it during setup (see the setup function).
Further information:

- See the setup function documentation.
- In local deployment, artifacts are stored under folders of the form model_name/dev/1. Note that these artifacts are not uploaded to CDF Files.

To allow for model versioning and rolling back to previous model deployments, the external id of the functions (in CDF) includes a version number that reflects the latest artifact version number at deployment time (see above). Every time we upload/promote new model artifacts and deploy our services, the version number in the external id of the functions representing the services is incremented (just like the version number of the artifacts).
To distinguish the latest model from the remaining model versions, we redeploy the latest model version using a predictable external id that does not contain the version number. By doing so we relieve clients of the need to deal with version numbers, and they will call the latest model by default. For every new deployment we thus have two model deployments: one with the version number in the external id and one without. However, the predictable external id is persisted across new model versions, so when deploying a new version, the latest one, with the predictable external id, is simply overwritten.
We are thus concerned with two structures for the external id:

- <model_name>-<service>-<model_env>-<version> for rolling back to previous versions, and
- <model_name>-<service>-<model_env> for the latest deployed model

For the latest model with a predictable external id, we tag the description of the model to specify that the model is in fact the latest version, and add the version number to the function metadata.
We can now list out multiple models with the same model name and external id prefix, and choose to make predictions and do inference with a specific model version. An example is shown below.
# List all prediction services (i.e. models) with name "My Model" hosted in the test environment, and model corresponding to the first element of the list
from akerbp.mlops.cdf.helpers import get_client
client = get_client(client_id=<client_id>, client_secret=<client_secret>)
my_models = client.functions.list(name="My Model", external_id_prefix="mymodel-prediction-test")
my_model_specific_version = my_models[0]
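A specific version can then be called through its versioned external id; a sketch, assuming version 2 of mymodel in the test environment:

from akerbp.mlops.cdf.helpers import set_up_cdf_client, call_function

set_up_cdf_client(context="deploy")
# Versioned external id: <model_name>-<service>-<model_env>-<version>
response = call_function("mymodel-prediction-test-2", data_dict)  # data_dict as in the next section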
This section describes how you can call deployed models and obtain predictions for doing inference. We have two options for calling a function in CDF: either using the MLOps framework directly, or using the Cognite SDK. Independent of how you call your model, you have to pass the data as a dictionary with a key "data" containing a dictionary with your data, where the keys of the inner dictionary specify the columns and the values are lists of samples for the corresponding columns.
First, load your data and transform it to a dictionary as assumed by the framework. Note that the data dictionary you pass to the function might vary based on your model interface. Make sure to align with what you specified in your model.py interface.
import pandas as pd
data = pd.read_csv("path_to_data")
input_data = data.drop(columns=[target_variables])
data_dict = {"data": input_data.to_dict(orient="list"), "to_file": True}
The "to_file" key of the input data dictionary specifies how the predictions can be extracted downstream. More details are provided below
Calling a deployed model using MLOps: each deployment exposes two external ids,

- "<model_name>-<service>-<model_env>-<version>", and
- "<model_name>-<service>-<model_env>"

Use the latter external id if you want to call the latest model. The former external id can be used if you want to call a previous version of your model.
from akerbp.mlops.cdf.helpers import set_up_cdf_client, call_function
set_up_cdf_client(context="deploy") #access CDF data, files and functions with deploy context
response = call_function(function_name="<model_name>-prediction-<model_env>", data=data_dict)
Calling a deployed model using the Cognite SDK:

from akerbp.mlops.cdf.helpers import get_client

client = get_client(client_id=<client_id>, client_secret=<client_secret>)  # returns an authenticated CogniteClient
function = client.functions.retrieve(external_id="<model_name>-prediction-<model_env>")
function_call = function.call(data=data_dict)
response = function_call.get_response()
Depending on how you specified the input dictionary, the predictions are either available directly from the response or need to be extracted from Cognite Files. If the input data dictionary contains a key "to_file" with value True, the predictions are uploaded to Cognite Files, and the 'prediction_file' field in the response will contain a reference to the file containing the predictions. If "to_file" is set to False, or if the input dictionary does not contain such a key-value pair, the predictions are directly available through the function call response.
If "to_file" = True, we can extract the predictions using the following code-snippet
file_id = response["prediction_file"]
bytes_data = client.files.download_bytes(external_id=file_id)
predictions_df = pd.DataFrame.from_dict(json.loads(bytes_data))
Otherwise, the predictions are directly accessible from the response as follows.
predictions = response["predictions"]
Once a model is deployed, a user can extract potentially valuable metadata as follows.
my_function = client.functions.retrieve(external_id="my_model-prediction-test")
metadata = my_function.metadata
Where the metadata corresponds to whatever you specified in the mlops_settings.yaml file. For this example we get the following metadata:
{'cat_filler': 'UNKNOWN',
'imputed': 'True',
'input_types': '[int, float, string]',
'num_filler': '-999.15',
'output_curves': '[AC]',
'output_unit': '[s/ft]',
'petrel_exposure': 'False',
'required_input': '[ACS, RDEP, DEN]',
'training_wells': '[3/1-4]',
'units': '[s/ft, 1, kg/m3]'}
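Since CDF stores metadata values as plain strings, list-valued fields come back as strings like '[ACS, RDEP, DEN]'. A small sketch of turning them back into Python lists (the helper name is ours, not part of the framework):

def parse_list_field(value: str) -> list:
    """Parse a stringified list such as '[ACS, RDEP, DEN]' into a Python list."""
    return [item.strip() for item in value.strip("[]").split(",") if item.strip()]

required_input = parse_list_field(metadata["required_input"])
# ['ACS', 'RDEP', 'DEN']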
It's possible to test the functions locally, which can help you debug errors quickly. This is recommended before a deployment.

Define the following environment variables (e.g. in .bashrc):
export MODEL_ENV=dev
export COGNITE_OIDC_BASE_URL=https://api.cognitedata.com
export COGNITE_TENANT_ID=<tenant id>
export COGNITE_CLIENT_ID_WRITE=<write access client id>
export COGNITE_CLIENT_SECRET_WRITE=<write access client secret>
export COGNITE_CLIENT_ID_READ=<read access client id>
export COGNITE_CLIENT_SECRET_READ=<read access client secret>
From your repo's root folder, run:

- python -m pytest model_code (replace model_code by your model code folder name)
- deploy_prediction_service
- deploy_training_service (if there's a training service)

The first one will run your model tests. The last two run model tests, but also the service tests implemented in the framework, and simulate deployment. If you want to run tests only, you need to set TESTING_ONLY=True before calling the deployment script.
Deployments to the test environment are triggered by commits (you need to push them). Deployments to the production environment are enabled manually from the GitHub Actions dashboard. Branches that match deploy/* behave as master. Branches that match feature/* run tests only (i.e. do not deploy).

It is assumed that most projects won't include a training service. A branch that matches mlops/* deploys both prediction and training services. If a project includes both services, the pipeline file could instead be edited so that master deploys both services.

It is possible to schedule the training service in CDF, and then it can make sense to schedule the deployment pipeline of the model service (as often as new models are trained).
NOTE: Previous versions of akerbp-mlops assumed that calling LOCAL_DEPLOYMENT=True deploy_prediction_service would run tests without deploying models. The package is now refactored to only trigger tests when the environment variable TESTING_ONLY is set to True. Make sure to update the pipeline definition for branches with prefix feature/ to call TESTING_ONLY=True deploy_prediction_service instead.
The following environments need to be defined in repository settings > deployments:

- dev, with two environment variables: MODEL_ENV=dev and SERVICE_NAME=prediction
- test, with two environment variables: MODEL_ENV=test and SERVICE_NAME=prediction
- prod, with two environment variables: MODEL_ENV=prod and SERVICE_NAME=prediction

The following secrets need to be defined in repository settings > Secrets and variables > Actions > Repository secrets:

- COGNITE_CLIENT_ID_WRITE
- COGNITE_CLIENT_SECRET_WRITE
- COGNITE_CLIENT_ID_READ
- COGNITE_CLIENT_SECRET_READ
- COGNITE_OIDC_BASE_URL
- COGNITE_TENANT_ID

(these should be the CDF client ids and secrets for read and write access, respectively). GitHub Actions need to be enabled on the repo.
This package is managed using poetry. Please refer to the poetry documentation for more information on how to install and use it.
To install the package, run the following command from the root folder of the repo
poetry install -E cdf --with=dev,pre-commit,version,test
Poetry uses groups to manage dependencies. The above command installs the package with all the groups defined in the toml file.
The versioning of the package follows SemVer, using the MAJOR.MINOR.PATCH structure. The version is updated based on the latest commit to the repo, and we currently use the following rules:

- major: the MAJOR version is incremented if the keyword major is found in the commit message
- minor: the MINOR version is incremented if the keyword minor is found in the commit message
- if neither major nor minor is found in the commit message, the PATCH version is incremented
- for a prerelease, the package version is extended with a, thus taking the form MAJOR.MINOR.PATCHa

Note that the above keywords are not case sensitive. Moreover, major takes precedence over minor, so if both keywords are found in the commit message, the MAJOR version is incremented and the MINOR version is kept unchanged.
In dev and test environments, we release the package using the pre-release tag, and the package takes the following version number: MAJOR.MINOR.PATCH-alpha.PRERELEASE.
The version number is automatically generated by combining poetry-dynamic-versioning with the increment_package_version.py script, and is based on git tagging and the incremental version numbering system mentioned above.
These are the files and folders in the MLOps repo:

- src contains the MLOps framework package
- mlops_settings.yaml contains the user settings for the dummy model
- model_code is a model template included to show the model interface. It is not needed by the framework, but it is recommended to become familiar with it.
- model_artifact stores the artifacts for the model shown in model_code. This is to help test the model and learn the framework.
- .github/* describes all the relevant configurations for the CI/CD pipeline run by GitHub Actions
- build.sh is the script to build and upload the package
- pyproject.toml is the project's configuration file
- LICENSE is the package's license

In order to control access to the artifacts, the dataset needs to be set up with write_protected=True and an external_id, which by default is expected to be mlops.

To perform local testing before pushing to GitHub, you can run the following commands:
poetry run python -m pytest
(assuming you have first run poetry install -E cdf --with=dev,pre-commit,version,test in the same environment)
Create an account on PyPI, then create a token and a $HOME/.pypirc file if you want to deploy from local. Edit the pyproject.toml file and note the following:

- make sure the relevant bin folder is in the PATH.

The pipeline is set up to build the library, but it's possible to build and upload the library from the development environment as well (as long as you have the PYPI_TOKEN environment variable set). To do so, run:

bash build.sh

In order for the GitHub pipeline to deploy to PyPI, you need to set up a token. Copy its content and add it to the secured repository secret PYPI_TOKEN.
Service testing happens in an independent process (subprocess library) to avoid setup problems.