edamame (PyPI, v0.59): Exploratory data analysis tools

Edamame



Edamame is inspired by packages such as pandas-profiling, pycaret, and yellowbrick. The goal of Edamame is to provide user-friendly functions for conducting exploratory data analysis (EDA) on datasets, as well as for training and analyzing batteries of models for regression or classification problems.

To install the package:

pip install edamame

The edamame package works correctly inside a Jupyter notebook. You can find the package documentation on the edamame-documentation page.


Functionalities

The package consists of three modules: eda, which performs exploratory data analysis; and regressor and classifier, which handle the training of machine learning models for regression and classification, respectively. For usage examples, see the examples folder in the repository.


Eda module

import edamame.eda as eda

The eda module provides a wide range of functions for performing exploratory data analysis (EDA) on datasets. With this module you can easily explore and manipulate your data, compute descriptive statistics, run correlation analyses, and prepare your data for machine learning. The eda module offers the following functionalities:

  • Data Exploration and Manipulation functions:

    • dimensions: Displays the number of rows and columns of the pandas dataframe passed.
    • identify_types: Identifies the data type of each column.
    • view_cardinality: Displays the number of unique values in each categorical column.
    • modify_cardinality: Modifies the number of unique values in a column.
    • missing: Checks whether any missing data is present in the dataset.
    • handling_missing: Replaces or removes missing values in the dataset.
    • drop_columns: Removes specific columns from the dataset.
    • num_to_categorical: Returns a dataframe with the specified columns converted to the object type.
    • interaction: Displays an interactive scatterplot for analysing relationships between numerical columns.
    • inspection: Displays an interactive plot for analysing the distribution of a variable based on the distinct cardinalities of the target variable.
    • split_and_scaling: Returns two pandas objects: the regressor matrix X, containing all the predictors for the model, and the series y, containing the values of the response variable.
  • Descriptive Statistics functions:

    • describe_distribution: Displays the result of the describe() method applied to a pandas dataframe, split into numerical and object columns.
    • plot_categorical: Returns a sequence of tables and plots for the categorical variables.
    • plot_numerical: Returns a sequence of tables and plots for the numerical variables.
    • num_variable_study: Displays the following transformations of the variable col passed: log(x), sqrt(x), x^2, Box-Cox, 1/x.
  • Correlation Analysis functions:

    • correlation_pearson: Computes Pearson's correlation between pairs of columns.
    • correlation_categorical: Performs the Chi-Square Test of Independence between the categorical variables of the dataset.
    • correlation_phik: Calculates the Phik correlation coefficient between all pairs of columns.
  • Useful functions:

    • load_model: Loads a model saved in pickle format.
    • setup: Returns the following elements: X_train, y_train, X_test, y_test.
    • scaling: Returns the normalized/standardized matrix.
    • ohe: Returns the array passed in as input, converted using one-hot encoding.
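As a rough, dependency-free illustration of the last two helpers, the sketch below shows z-score standardization (the usual meaning of scaling) and one-hot encoding (ohe). The real edamame functions operate on pandas/numpy objects, so these plain-Python versions are conceptual only, not the package's implementation.

```python
from math import sqrt

def standardize(column):
    """Z-score standardization: subtract the mean, divide by the standard
    deviation. Conceptual stand-in for eda.scaling."""
    mean = sum(column) / len(column)
    std = sqrt(sum((x - mean) ** 2 for x in column) / len(column))
    return [(x - mean) / std for x in column]

def one_hot(column):
    """One-hot encode a categorical column into 0/1 indicator vectors.
    Conceptual stand-in for eda.ohe; the category order here is sorted."""
    categories = sorted(set(column))
    return [[1 if value == cat else 0 for cat in categories] for value in column]

print(standardize([1.0, 2.0, 3.0]))  # centred at 0 with unit variance
print(one_hot(["a", "b", "a"]))      # indicator columns ordered: a, b
```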

Regressor module

from edamame.regressor import TrainRegressor, regression_metrics

The TrainRegressor class is designed to be used as a pipeline for training and handling regression models.

The class provides several methods for fitting different regression models, computing model metrics, saving and loading models, and using AutoML to select the best model based on performance metrics. These methods include:

  • linear: Fits a linear regression model to the training data.
  • lasso: Fits a Lasso regression model to the training data.
  • ridge: Fits a Ridge regression model to the training data.
  • tree: Fits a decision tree regression model to the training data.
  • random_forest: Fits a random forest regression model to the training data.
  • xgboost: Fits an XGBoost regression model to the training data.
  • auto_ml: Uses AutoML to select the best model based on performance metrics.
  • model_metrics: Computes and prints the performance metrics for each trained model.
  • save_model: Saves the trained model to a file.

After saving a model with the save_model method, you can reload it with the load_model function of the eda module and evaluate its performance on new data using the regression_metrics function.
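Since load_model reads pickle files, the save/reload cycle presumably follows the standard pickle pattern sketched below. This uses only the standard library with a plain dictionary standing in for a trained model; the exact file layout edamame uses is an assumption.

```python
import os
import pickle
import tempfile

# Stand-in for a trained model; any picklable object round-trips the same way.
model = {"name": "random_forest", "params": {"n_estimators": 100}}

path = os.path.join(tempfile.mkdtemp(), "model.pkl")

# Save: what a save_model-style method typically does internally.
with open(path, "wb") as f:
    pickle.dump(model, f)

# Reload: what a load_model-style function typically does internally.
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored["name"])  # the reloaded object is an equal copy of the original
```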

from edamame.regressor import RegressorDiagnose

The RegressorDiagnose class is designed to diagnose regression models and analyze their performance. The class provides several methods for diagnosing and analyzing the performance of regression models. These methods include:

  • coefficients: Computes and prints the coefficients of the regression model.
  • random_forest_fi: Displays the feature importance plot for the random forest regression model.
  • xgboost_fi: Displays the feature importance plot for the XGBoost regression model.
  • prediction_error: Computes and prints the prediction error of the regression model on the test data.
  • residual_plot: Creates and displays a residual plot for the regression model.
  • qqplot: Creates and displays a QQ plot for the regression model.

Example:

from sklearn.datasets import make_regression
from edamame.regressor import TrainRegressor, RegressorDiagnose
import pandas as pd
import edamame.eda as eda

# Generate a toy regression dataset and wrap it in pandas objects
X, y = make_regression(n_samples=1000, n_features=5, n_targets=1, random_state=42)
X = pd.DataFrame(X, columns=["f1", "f2", "f3", "f4", "f5"])
y = pd.DataFrame(y, columns=["y"])

# Train/test split and scaling
X_train, y_train, X_test, y_test = eda.setup(X, y)
X_train_s = eda.scaling(X_train)
X_test_s = eda.scaling(X_test)

# Fit a random forest and inspect its metrics
regressor = TrainRegressor(X_train_s, y_train, X_test_s, y_test)
rf = regressor.random_forest()
regressor.model_metrics()

# Diagnose the fitted model
diagnose = RegressorDiagnose(X_train_s, y_train, X_test_s, y_test)
diagnose.random_forest_fi(model=rf)
diagnose.prediction_error(model=rf)
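Conceptually, functions like regression_metrics compare a model's predictions with the true targets. The dependency-free sketch below shows two metrics such a function typically reports, mean squared error and R²; the exact set edamame prints is not specified here.

```python
def mse(y_true, y_pred):
    """Mean squared error: average squared gap between truth and prediction."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 minus residual over total variance."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))  # one miss of size 1 -> 1/3
print(r2([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))   # a perfect fit scores 1.0
```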

Classifier module

from edamame.classifier import TrainClassifier

The TrainClassifier class is designed to be used as a pipeline for training and handling classification models.

The class provides several methods for fitting different classification models, computing model metrics, saving and loading models, and using AutoML to select the best model based on performance metrics. These methods include:

  • logistic: Fits a logistic model to the training data.
  • gaussian_nb: Fits a Gaussian Naive Bayes model to the training data.
  • knn: Fits a k-Nearest Neighbors classification model to the training data.
  • tree: Fits a decision tree classification model to the training data.
  • random_forest: Fits a random forest classification model to the training data.
  • xgboost: Fits an XGBoost classification model to the training data.
  • svm: Fits a Support Vector classification model to the training data.
  • auto_ml: Uses AutoML to select the best model based on performance metrics.
  • model_metrics: Computes and prints the performance metrics for each trained model.
  • save_model: Saves the trained model to a file.

After saving a model with the save_model method, you can reload it with the load_model function of the eda module and evaluate its performance on new data using the classifier_metrics function.

from edamame.classifier import classifier_metrics

Example:

from edamame.classifier import TrainClassifier, classifier_metrics
from sklearn import datasets
import pandas as pd
import edamame.eda as eda

# Load the iris dataset and wrap it in pandas objects
iris = datasets.load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = pd.DataFrame(iris.target, columns=["y"])

# Train/test split and scaling
X_train, y_train, X_test, y_test = eda.setup(X, y)
X_train_s = eda.scaling(X_train)
X_test_s = eda.scaling(X_test)

# Train models and inspect the SVM's metrics
classifier = TrainClassifier(X_train_s, y_train, X_test_s, y_test)
models = classifier.auto_ml()
svm = classifier.svm()
classifier.model_metrics(model_name="svm")

# Save, reload, and re-evaluate the model
classifier.save_model(model_name="svm")
svm_upload = eda.load_model(path="svm.pkl")
classifier_metrics(svm_upload, X_train_s, y_train)
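At its core, a function like classifier_metrics compares predicted labels with true labels. Below is a dependency-free sketch of accuracy, one metric such a function typically reports; the full set of metrics edamame computes is not specified here.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    assert len(y_true) == len(y_pred)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

print(accuracy([0, 1, 1, 2], [0, 1, 2, 2]))  # 3 of 4 correct -> 0.75
```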

Todos

  • Add the notebook for EDA in a classification problem to the edamame-notebook repository.
  • Add the notebook for training/diagnosing classification models to the edamame-notebook repository.
  • Add the rocauc method to the ClassifierDiagnose class.
  • Update example notebooks.
