Xingu for automated ML model training
Xingu is a framework of a few classes that helps you fully industrialize your
machine learning training and deployment pipelines. Just write your
`DataProvider` class, mostly in a declarative way, and it will completely
control your training and deployment pipeline.
Notebooks are useful during EDA, but when the modeling is ready to become
a product, use the classes Xingu proposes to organize interactions with the
database (queries), data cleanup, feature engineering, hyperparameter
optimization, the training algorithm, computation of general and custom
metrics, and estimation post-processing.
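As a taste of the declarative style, here is a minimal sketch of a `DataProvider`. The overridden methods are hooks Xingu calls during training (see the pipeline walkthrough below); the `id` attribute, the queries and the column names are illustrative assumptions, not a definitive implementation:

```python
import pandas as pd

from xingu import DataProvider


class MyDataProvider(DataProvider):
    id = 'id_of_my_dataprovider1'   # assumed attribute: how a DP is selected via --dps

    def get_dataset_sources_for_train(self) -> dict:
        # Dict of named SQL queries and/or URLs; Xingu fetches and caches them
        return dict(
            sales='SELECT * FROM sales',
            stores='https://mybucket.example.com/stores.parquet',
        )

    def clean_data_for_train(self, datasets: dict) -> pd.DataFrame:
        # Integrate the dict of DataFrames (one per source) into a single DataFrame
        return datasets['sales'].merge(datasets['stores'], on='store_id')

    def feature_engineering_for_train(self, data: pd.DataFrame) -> pd.DataFrame:
        # Hypothetical engineered feature
        data['price_per_m2'] = data['price'] / data['area']
        return data
```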
- Don't save a pickle at the end of your EDA. Let Xingu organize a versioned
inventory of saved models (pickles) linked to the commit hashes and branches
of your code.
- Don't save metrics manually and informally. Metrics are first-class
citizens, so use Xingu to write methods that compute metrics and let it
store them in an organized database that can be queried and compared.
- Don't make ad-hoc plots to understand your data. Plots are important assets
for measuring the quality of your model, so use Xingu to write methods that
formally generate versioned plots.
- Don't worry about, or even write code that loads, pre-req models. Use
Xingu's pre-req architecture to load pre-req models for you and package them
together.
- Don't save ad-hoc hyperparameters after optimizations. Let Xingu store and
manage them for you in a way that can be reused in future trains.
- Don't change your code when you want different behavior. Use Xingu
environment variables or command-line parameters to strategize your trains
(see the sketch after this list).
- Don't manually copy pickles to production environments on S3 or other object
storage. Use Xingu's deployment tools to automate the deployment step.
- Don't write database integration code. Just provide your queries and Xingu
will deliver the data, also maintaining a local cache so you won't hammer
your database across multiple retrains. The same goes for static data files
in Parquet, CSV etc., on the local filesystem or object storage, as in the
`get_dataset_sources_for_train()` sketch above.
- Xingu can run anywhere, from your laptop with a plain SQLite database to
large-scale cloud-powered training pipelines with GitOps, Jenkins, Docker
etc. Xingu's database is used only to collect training information; it isn't
required later when the model is used to predict.
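For example, the parallelism and post-processing knobs that appear in the pipeline walkthrough below can be set per run, with no code changes; the values here are illustrative:

```shell
PARALLEL_TRAIN_MAX_WORKERS=4 \
POST_PROCESS=true \
PARALLEL_POST_PROCESS_MAX_WORKERS=2 \
xingu --dps id_of_my_dataprovider1 --debug
```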
Install
```shell
pip install git+https://github.com/avibrazil/xingu
```
or
```shell
pip install xingu
```
Use to Train a Model
Check that your project has the necessary files and folders:
```shell
$ find
dataproviders/
dataproviders/my_dataprovider.py
estimators/
estimators/myrandomestimator.py
models/
data/
plots/
```
Train with DataProviders `id_of_my_dataprovider1` and `id_of_my_dataprovider2`,
both defined in `dataproviders/my_dataprovider.py`:
```shell
$ xingu \
    --dps id_of_my_dataprovider1,id_of_my_dataprovider2 \
    --databases athena "awsathena+rest://athena.us..." \
    --query-cache-path data \
    --trained-models-path models \
    --debug
```
Use the API
See the proof-of-concept notebooks with various usage scenarios:
- POC 1. Train some Models
- POC 2. Use Pre-Trained Models for Batch Predict
- POC 3. Assess Metrics and create Comparative Reports
- POC 4. Check and report how Metrics evolved
- POC 5. Play with Xingu barebones
- POC 6. Play with the `ConfigManager`
- POC 7. Xingu Estimators in the Command Line
- POC 8. Deploy Xingu Data and Estimators between environments (laptop, staging, production etc)
Procedures defined by Xingu
Xingu classes do all the heavy lifting while you focus only on your machine
learning code.
- Class `Coach` is responsible for coordinating the training process of one or
multiple models. You control parallelism via command line or environment
variables.
- Class `Model` implements standard pipelines for training (with or without
hyperparameter optimization), loading and saving pickles, database access etc.
These pipelines are fully controlled by your DataProvider or the environment.
- Class `DataProvider` is a base class that is constantly queried by the
`Model` to determine how the `Model` should operate. You should create a class
derived from `DataProvider` and reimplement whatever you want to change. This
completely changes the behaviour of the `Model` operation, to the point that
you'll get a completely different model.
    - It is your `DataProvider` that defines the source of training data, as SQL queries or URLs of Parquet, CSV or JSON files
    - It is your `DataProvider` that defines how multi-source data should be integrated
    - It is your `DataProvider` that defines how data should be split into train and test sets
    - Your `DataProvider` defines which `Estimator` class to use
    - Your `DataProvider` defines how the `Estimator` should be initialized and optimized
    - Your `DataProvider` defines which metrics should be computed, how to compute them and against which dataset
    - Your `DataProvider` defines which plots should be created and against which dataset
    - See below when and how each method of your `DataProvider` will be called by `xingu.Model`
- Class `Estimator` is another base class (that you can reimplement) to contain
estimator-specific affairs. There will be one `Estimator`-derived class for an
XGBoostRegressor, another for a CatBoostClassifier, another for a
scikit-learn-specific algorithm, including hyperparameter-optimization logic
and libraries. A concrete `Estimator` class can and should be reused across
multiple different models.
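As a taste of the API, a barebones training session in the spirit of the POC notebooks might look like the sketch below. The class names are the ones described above, but the constructor and method arguments shown are assumptions for illustration, not exact signatures:

```python
# Hypothetical sketch of driving Xingu from Python instead of the CLI.
# Constructor and method arguments are illustrative assumptions.
from xingu import Coach

# The Coach coordinates training of one or multiple Models,
# loading pre-req Models and handling parallelism for us.
coach = Coach()

# Train the Models behind these DataProvider IDs, comparable to
# `xingu --dps id_of_my_dataprovider1,id_of_my_dataprovider2`
coach.team_train(['id_of_my_dataprovider1', 'id_of_my_dataprovider2'])
```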
The hierarchical diagrams below expose complete Xingu pipelines with all their
steps. Steps marked with 💫 are where you put your code. All the rest is Xingu
boilerplate code, ready to use.
`Coach.team_train()`:

Train various Models, all in parallel where possible.

- `Coach.team_train_parallel()` (background, parallelism controlled by `PARALLEL_TRAIN_MAX_WORKERS`):
    - `Coach.team_load()` (for pre-req models not trained in this session)
    - Per DataProvider requested to be trained:
        - `Coach.team_train_member()` (background):
            - `Model.fit()` calls:
                - 💫 `DataProvider.get_dataset_sources_for_train()` returns dict of queries and/or URLs
                - `Model.data_sources_to_data(sources)`
                - 💫 `DataProvider.clean_data_for_train(dict of DataFrames)`
                - 💫 `DataProvider.feature_engineering_for_train(DataFrame)`
                - 💫 `DataProvider.last_pre_process_for_train(DataFrame)`
                - 💫 `DataProvider.data_split_for_train(DataFrame)` returns tuple of DataFrames
                - `Model.hyperparam_optimize()` (decides origin of hyperparameters):
                    - 💫 `DataProvider.get_estimator_features_list()`
                    - 💫 `DataProvider.get_target()`
                    - 💫 `DataProvider.get_estimator_optimization_search_space()`
                    - 💫 `DataProvider.get_estimator_hyperparameters()`
                    - 💫 `Estimator.hyperparam_optimize()` (SKOpt, GridSearch etc.)
                    - 💫 `Estimator.hyperparam_exchange()`
                - 💫 `DataProvider.post_process_after_hyperparam_optimize()`
                - 💫 `Estimator.fit()`
                - 💫 `DataProvider.post_process_after_train()`
    - `Coach.post_train_parallel()` (background, only if `POST_PROCESS=true`):
        - Per trained Model (parallelism controlled by `PARALLEL_POST_PROCESS_MAX_WORKERS`):
            - `Model.save()` (PKL save in background)
            - `Model.trainsets_save()` (save the train datasets, background)
            - `Model.trainsets_predict()`:
                - `Model.predict_proba()` or `Model.predict()` (see below)
                - 💫 `DataProvider.pre_process_for_trainsets_metrics()`
                - `Model.compute_and_save_metrics(channel=trainsets)` (see below)
                - 💫 `DataProvider.post_process_after_trainsets_metrics()`
            - `Coach.single_batch_predict()` (see below)
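The two hyperparameter hooks above lend themselves to the declarative style. A hedged sketch, assuming scikit-optimize-style search spaces; Xingu defines the method names, while the space format actually expected depends on your `Estimator` implementation:

```python
from skopt.space import Integer, Real   # assumed optimization backend

from xingu import DataProvider


class MyDataProvider(DataProvider):
    def get_estimator_hyperparameters(self) -> dict:
        # Fixed hyperparameters, used when no optimization is run
        return dict(n_estimators=300, learning_rate=0.05)

    def get_estimator_optimization_search_space(self) -> dict:
        # Ranges for Estimator.hyperparam_optimize() to explore;
        # skopt-style dimensions are an assumption here
        return dict(
            n_estimators=Integer(100, 1000),
            learning_rate=Real(0.01, 0.3, prior='log-uniform'),
        )
```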
`Coach.team_batch_predict()`:

Load various pre-trained Models from storage and use them to estimate data from
a pre-defined SQL query. The batch-predict SQL query is defined in the
DataProvider, and this process will query the database to get the data.

- `Coach.team_load()` (for all requested DPs and their pre-reqs)
- Per loaded model:
    - `Coach.single_batch_predict()` (background):
        - `Model.batch_predict()` calls:
            - 💫 `DataProvider.get_dataset_sources_for_batch_predict()`
            - `Model.data_sources_to_data()`
            - 💫 `DataProvider.clean_data_for_batch_predict()`
            - 💫 `DataProvider.feature_engineering_for_batch_predict()`
            - 💫 `DataProvider.last_pre_process_for_batch_predict()`
            - `Model.predict_proba()` or `Model.predict()` (see below)
            - `Model.compute_and_save_metrics(channel=batch_predict)` (see below)
            - `Model.save_batch_predict_estimations()`
`Model.predict()` and `Model.predict_proba()`:

- `Model.generic_predict()` calls:
    - 💫 `DataProvider.pre_process_for_predict()` or `DataProvider.pre_process_for_predict_proba()`
    - 💫 `DataProvider.get_estimator_features_list()`
    - 💫 `Estimator.predict()` or `Estimator.predict_proba()`
    - 💫 `DataProvider.post_process_after_predict()` or `DataProvider.post_process_after_predict_proba()`
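The pre- and post-process hooks are where per-request data preparation and business rules live. A minimal sketch, assuming pandas-in/pandas-out signatures; only the method names come from Xingu, the exact signatures here are assumptions:

```python
import pandas as pd

from xingu import DataProvider


class MyDataProvider(DataProvider):
    def pre_process_for_predict(self, X: pd.DataFrame) -> pd.DataFrame:
        # Make incoming data look like what the Estimator was trained on
        return X.fillna(0)

    def post_process_after_predict(self, Y_pred: pd.Series) -> pd.Series:
        # Hypothetical business rule: estimations must never be negative
        return Y_pred.clip(lower=0)
```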
`Model.compute_and_save_metrics()`:

Sub-system to compute various metrics, graphics and transformations over
a facet of the data. This is executed right after a Model is trained and also
during a batch predict. Predicted data is computed by
`Model.trainsets_predict()` and `Model.batch_predict()` before
`Model.compute_and_save_metrics()` is called.
- `Model.save_model_metrics()` calls:
    - `Model.compute_model_metrics()` calls:
        - `Model.compute_trainsets_model_metrics()` calls:
            - all `Model.compute_trainsets_model_metrics_{NAME}()`
            - all 💫 `DataProvider.compute_trainsets_model_metrics_{NAME}()`
        - `Model.compute_batch_model_metrics()` calls:
            - all `Model.compute_batch_model_metrics_{NAME}()`
            - all 💫 `DataProvider.compute_batch_model_metrics_{NAME}()`
        - `Model.compute_global_model_metrics()` calls:
            - all `Model.compute_global_model_metrics_{NAME}()`
            - all 💫 `DataProvider.compute_global_model_metrics_{NAME}()`
    - `Model.render_model_plots()` calls:
        - `Model.render_trainsets_model_plots()` calls:
            - all `Model.render_trainsets_model_plots_{NAME}()`
            - all 💫 `DataProvider.render_trainsets_model_plots_{NAME}()`
        - `Model.render_batch_model_plots()` calls:
            - all `Model.render_batch_model_plots_{NAME}()`
            - all 💫 `DataProvider.render_batch_model_plots_{NAME}()`
        - `Model.render_global_model_plots()` calls:
            - all `Model.render_global_model_plots_{NAME}()`
            - all 💫 `DataProvider.render_global_model_plots_{NAME}()`
- `Model.save_estimation_metrics()` calls:
    - `Model.compute_estimation_metrics()` calls:
        - all `Model.compute_estimation_metrics_{NAME}()`
        - all 💫 `DataProvider.compute_estimation_metrics_{NAME}()`
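The `{NAME}` suffix is the extension point: every method of your `DataProvider` whose name matches one of the patterns above is picked up and called. A hedged sketch, assuming the metric methods receive true and predicted values and return a dict of named metrics; the naming pattern comes from the call tree above, while the signature and return contract shown are assumptions:

```python
from sklearn import metrics

from xingu import DataProvider


class MyDataProvider(DataProvider):
    # Picked up by Model.compute_trainsets_model_metrics() because the name
    # matches compute_trainsets_model_metrics_{NAME}; the arguments and the
    # dict return shown here are illustrative assumptions.
    def compute_trainsets_model_metrics_regression(self, Y_true, Y_pred) -> dict:
        return dict(
            mae=metrics.mean_absolute_error(Y_true, Y_pred),
            rmse=metrics.mean_squared_error(Y_true, Y_pred) ** 0.5,
        )
```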