A generic way to build object-oriented datasets and algorithm pipelines and tools to evaluate them.
Easily install tpcp via pip:

pip install tpcp
Or add it to your project with poetry:
poetry add tpcp
Evaluating algorithms - in particular when they contain machine learning - is hard. Besides understanding the required concepts (cross validation, bias, overfitting, ...), you need to implement the required steps and make them work together with your algorithms and data.
If you are doing something "regular" like training an SVM on tabular data, amazing libraries like sklearn, tslearn, pytorch, and many others have your back. By using their built-in tools (e.g. sklearn.model_selection.GridSearchCV) you prevent implementation errors, and you are provided with a sensible structure to organize your code that is well understood in the community.
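To make that "regular" case concrete, here is a minimal sketch of tuning an SVM with scikit-learn's built-in cross-validation tooling (the iris toy data and the parameter grid are purely illustrative assumptions, not recommendations):

# Minimal sketch: grid-searching an SVM on tabular data with sklearn's built-in tools.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# GridSearchCV handles the cross-validation splits and refitting for you,
# so none of the evaluation logic has to be written (and debugged) by hand.
search = GridSearchCV(
    estimator=SVC(),
    param_grid={"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)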
However, the problems we are trying to solve are often not regular. They are complex. As an example, take the method from one of our recent papers: none of the standard frameworks can easily abstract this problem, because it involves non-tabular data, multiple data sources per participant, a non-traditional ML algorithm, and complex train-test split logic.
With tpcp we want to provide a flexible framework to approach such complex problems with structure and confidence. To make tpcp easy to use, we try to focus on a couple of key ideas:
Datasets: object-oriented representations of your data (think pytorch.datasets, but more flexible) that can be split, iterated over, and queried.
Algorithms and Pipelines: a simple run and optimize interface that can be implemented to fit any problem.
Everything is an optimization: in tpcp we consider everything that modifies parameters or weights as an optimization. This allows us to use the same concepts and code interfaces for everything from simple algorithms that just require a grid search over a parameter to neural network pipelines with hyperparameter tuning.
Evaluation helpers: tpcp implements complicated constructs like cross validation and grid search and, whenever possible, tries to catch obvious errors in your approach. However, for the actual algorithm and dataset you are free to do whatever is required to solve your current research question.

Yes - the object-oriented Datasets have proven to be a really nice and flexible way to encapsulate data from multiple modalities. There is a clear path to integrating lazy loading, load caching, data filtering, or pre-processing on loading. From our experience, even if you ignore all the other tpcp features, Datasets can greatly simplify how you interact with your data sources and can serve as a self-documenting API for your data (a minimal sketch follows below).
Other projects using Datasets:
Maybe - all parameter optimization features in tpcp exist to provide a unified API for cases where other, more specific frameworks are too specialised.
In cases where all your algorithms can be abstracted by sklearn, pytorch (with the skorch wrapper), tensorflow/Keras (with the scikeras wrapper), or any other framework that provides a nice scikit-learn API, you will get all the features tpcp can provide with much less boilerplate by just using sklearn and optuna directly.
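As a rough illustration of that route, a few lines of optuna on top of a scikit-learn estimator already give you hyperparameter tuning without any additional framework (the search space and the iris toy data below are arbitrary assumptions):

# Hedged sketch: hyperparameter search with optuna for an sklearn-compatible model.
import optuna
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)


def objective(trial: optuna.Trial) -> float:
    # The search space is only an example, not a recommendation.
    c = trial.suggest_float("C", 1e-3, 1e3, log=True)
    kernel = trial.suggest_categorical("kernel", ["linear", "rbf"])
    return cross_val_score(SVC(C=c, kernel=kernel), X, y, cv=5).mean()


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)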
Even if you need to implement completely custom algorithms, we would encourage you to see if you can emulate a sklearn-like API to make use of its vast ecosystem.
This will usually work well for all algorithms that can be abstracted by the fit-predict paradigm.
However, for more "traditional" algorithms with no "fit" step or with complicated optimizations, the run (with optional self_optimize) API of tpcp might be a better fit.
So if you are using or developing algorithms across library domains that don't all work well with a sklearn API, then yes, tpcp is a good choice.
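To give a feel for that API, here is a hedged sketch of an optimizable pipeline. The ThresholdPipeline, its "value" column, and the way the threshold is "optimized" are made-up stand-ins; only the general OptimizablePipeline / run / self_optimize pattern follows the tpcp documentation:

# Hedged sketch of the run/self_optimize style for a "traditional" algorithm.
from tpcp import OptimizablePipeline, OptimizableParameter


class ThresholdPipeline(OptimizablePipeline):
    # Parameters that self_optimize is allowed to modify are annotated as such.
    threshold: OptimizableParameter[float]

    def __init__(self, threshold: float = 1.0):
        self.threshold = threshold

    def run(self, datapoint):
        # Apply the (possibly optimized) threshold to a single data point and
        # store results as attributes with a trailing underscore.
        self.detections_ = datapoint.data[datapoint.data["value"] > self.threshold]
        return self

    def self_optimize(self, dataset, **kwargs):
        # "Training" here just picks a data-driven threshold; in tpcp's model,
        # anything that modifies parameters or weights counts as optimization.
        values = [dp.data["value"].mean() for dp in dataset]
        self.threshold = sum(values) / len(values)
        return self

Such a pipeline can then be handed to tpcp's grid search and cross-validation helpers in much the same way an sklearn estimator would be handed to GridSearchCV.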
Learn more: General Concepts, Custom Algorithms, Parameter Optimization, Cross Validation
Other projects using tpcp features:
If you use tpcp in your research, we would appreciate a citation to our JOSS paper. This helps us to justify putting time into maintaining and improving the library.
Küderle et al., (2023). tpcp: Tiny Pipelines for Complex Problems -
A set of framework independent helpers for algorithms development and evaluation.
Journal of Open Source Software, 8(82), 4953,
https://doi.org/10.21105/joss.04953
@article{Küderle2023,
  doi = {10.21105/joss.04953},
  url = {https://doi.org/10.21105/joss.04953},
  year = {2023},
  publisher = {The Open Journal},
  volume = {8},
  number = {82},
  pages = {4953},
  author = {Arne Küderle and Robert Richer and Raul C. Sîmpetru and Bjoern M. Eskofier},
  title = {tpcp: Tiny Pipelines for Complex Problems - A set of framework independent helpers for algorithms development and evaluation},
  journal = {Journal of Open Source Software}
}
The entire development is managed via GitHub. If you run into any issues, want to discuss certain decisions, or want to contribute features or feature requests, just reach out to us by opening a new issue.
We are using poetry to manage dependencies and poethepoet to run and manage dev tasks.
To set up the dev environment, including the required dependencies for using and developing on tpcp, run the following commands:
git clone https://github.com/mad-lab-fau/tpcp
cd tpcp
poetry install --all-extras
Afterward, you can start to develop and change things.
If you want to run tests, format your code, build the docs, ..., you can run one of the following poethepoet commands
CONFIGURED TASKS
format
format_unsafe
lint           Lint all files with ruff.
ci_check       Check all potential format and linting issues.
test           Run Pytest with coverage.
docs           Build the html docs using Sphinx.
docs_clean     Remove all old build files and build a clean version of the docs.
docs_preview   Preview the built html docs.
version
by calling poetry run poe <command name>. If you installed poethepoet globally, you can skip the poetry run part at the beginning.