This package contains functions to simplify common tasks used when developing and evaluating recommender systems. A short description of the submodules is provided below. For more details about what functions are available and how to use them, please review the doc-strings provided with the code or the online documentation.
Some dependencies require compilation during pip installation. On Linux this can be supported by adding build-essential dependencies:
sudo apt-get install -y build-essential libpython<version>
where <version> should be the Python version (e.g. 3.8).
On Windows you will need Microsoft C++ Build Tools.
For more details about the software requirements that must be pre-installed on each supported platform, see the setup guide.
To install core utilities, CPU-based algorithms, and dependencies:
pip install --upgrade pip setuptools
pip install recommenders
By default, recommenders does not install all dependencies used throughout the code and the notebook examples in this repo. Instead, we require a bare minimum set of dependencies needed to execute functionality in the recommenders package (excluding Spark, GPU and Jupyter functionality). We also allow the user to specify which groups of dependencies are needed at installation time (or later if updating the pip installation). The following groups are provided:
black and pytest are required only for development or testing. Note that, currently, xLearn and Vowpal Wabbit are in the experimental group.
These groups can be installed alone or in combination:
# install recommenders with core requirements and support for CPU-based recommender algorithms and notebooks
pip install recommenders[examples]
# add support for running example notebooks and GPU functionality
pip install recommenders[examples,gpu]
You will need CUDA Toolkit v11.2 and cuDNN v8.1 to enable both TensorFlow and PyTorch to use the GPU. For example, if you are using a conda environment, this can be installed with:
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1
For a virtual environment, you may use a Docker container by NVIDIA.
For manual installation of the necessary requirements see TensorFlow and PyTorch installation pages.
When installing with GPU support you will need to point to the PyTorch index to ensure you are downloading a version of PyTorch compiled with CUDA support. This can be done using the --find-links or -f option as shown below.
pip install recommenders[gpu] -f https://download.pytorch.org/whl/cu111/torch_stable.html
We are currently evaluating inclusion of the following dependencies:
Some dependencies are not available via the recommenders PyPI package, but can be installed in the following ways:
pip install "pymanopt@https://github.com/pymanopt/pymanopt/archive/fb36a272cdeecb21992cfd9271eb82baafeb316d.zip"
For NNI, a more recent version can be installed but is untested.
In case you want to use a version of the source code that is not published on PyPI, one alternative is to install from a clone of the source code on your machine. To this end, a setup.py file is provided in order to simplify the installation of the utilities in this repo from the main directory.
This still requires an environment to be installed as described in the setup guide. Once the necessary dependencies are installed, you can use the following command to install recommenders as a Python package:
pip install -e .
It is also possible to install directly from GitHub, or from a specific branch:
pip install -e git+https://github.com/microsoft/recommenders/#egg=pkg
pip install -e git+https://github.com/microsoft/recommenders/@staging#egg=pkg
NOTE: The pip installation does not install all of the prerequisites; it is assumed that the environment has already been set up according to the setup guide before the utilities are used.
The datasets module includes helper functions for pulling different datasets and formatting them appropriately, as well as utilities for splitting data for training/testing.
There are dataloaders for several datasets. For example, the movielens module will allow you to load a dataframe in pandas or Spark format from the MovieLens dataset, with sizes of 100k, 1M, 10M, or 20M, to test algorithms and evaluate performance benchmarks.
df = movielens.load_pandas_df(size="100k")
Currently three methods are available for splitting datasets. All of them support splitting by user or item and filtering out minimal samples (for instance users that have not rated enough items, or items that have not been rated by enough users).
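To illustrate what splitting by user with minimal-sample filtering involves, here is a minimal, self-contained sketch using only the standard library. The function name and signature are hypothetical for illustration; they are not the recommenders API, which provides its own splitter utilities.

```python
import random
from collections import defaultdict

def split_by_user(ratings, ratio=0.75, min_ratings=2, seed=42):
    """Split (user, item, rating) tuples per user into train/test sets.

    Users with fewer than ``min_ratings`` interactions are filtered out,
    mirroring the minimal-sample filtering described above.
    """
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for row in ratings:
        by_user[row[0]].append(row)

    train, test = [], []
    for user, rows in by_user.items():
        if len(rows) < min_ratings:
            continue  # drop users that have not rated enough items
        rng.shuffle(rows)
        cut = max(1, int(len(rows) * ratio))  # keep at least one train row
        train.extend(rows[:cut])
        test.extend(rows[cut:])
    return train, test

ratings = [
    ("u1", "i1", 4.0), ("u1", "i2", 3.0), ("u1", "i3", 5.0), ("u1", "i4", 2.0),
    ("u2", "i1", 1.0), ("u2", "i3", 4.0),
    ("u3", "i2", 5.0),  # only one rating: filtered out entirely
]
train, test = split_by_user(ratings)
```

Splitting per user (rather than globally) guarantees every user in the test set also appears in the training set, which most collaborative-filtering algorithms require.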
The evaluation submodule includes functionality for calculating common recommendation metrics directly in Python or in a Spark environment using PySpark.
Currently available metrics include:
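For a sense of what these metrics compute, below is a minimal sketch of two widely used ranking metrics, precision@k and recall@k, in plain Python. The function names are hypothetical for illustration; the evaluation submodule provides its own implementations for both pandas and PySpark.

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations that are relevant."""
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / k

def recall_at_k(recommended, relevant, k):
    """Fraction of all relevant items captured in the top-k."""
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / len(relevant) if relevant else 0.0

recommended = ["i1", "i2", "i3", "i4", "i5"]  # ranked model output
relevant = {"i2", "i5", "i9"}                 # held-out ground truth

p = precision_at_k(recommended, relevant, k=5)  # 2 of 5 hits -> 0.4
r = recall_at_k(recommended, relevant, k=5)     # 2 of 3 relevant found
```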
The models submodule contains implementations of various algorithms that can be used in addition to external packages to evaluate and develop new recommender system approaches. A description of all the algorithms can be found in this table. The following is a list of the algorithm utilities:
This submodule contains utilities for performing hyperparameter tuning.
This submodule contains high-level utilities for defining constants used in most algorithms as well as helper functions for managing aspects of different frameworks: GPU, Spark, Jupyter notebook.
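As one example of such a framework-management helper, the sketch below detects whether code is running inside a Jupyter kernel, which is useful for adjusting logging or display behavior. The function name is hypothetical, and this relies on the fact that IPython injects get_ipython into builtins; it is not the package's implementation.

```python
def running_in_notebook():
    """Best-effort check for a Jupyter kernel.

    ``get_ipython`` is injected into builtins by IPython; in a plain
    Python interpreter the name does not exist, so the lookup raises
    NameError and we report False.
    """
    try:
        shell = get_ipython().__class__.__name__  # noqa: F821
        return shell == "ZMQInteractiveShell"  # Jupyter kernel shell class
    except NameError:
        return False
```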