MultiLabel Classifier Evaluation Metrics

This toolkit provides a collection of evaluation metrics for assessing the performance of a multi-label classifier.

Intro

Evaluation metrics for multi-label classification fall into two broad categories (a short sketch contrasting them follows this list):

  • Example-Based Evaluation Metrics
  • Label-Based Evaluation Metrics
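
To make the distinction concrete, here is a minimal numpy sketch (illustrative only, not this package's code): example-based metrics score each sample's predicted label set against its true label set and then average over samples, whereas label-based metrics score each label column separately and then aggregate across labels.

import numpy as np

y_true = np.array([[0, 1], [1, 1], [1, 0]])
y_pred = np.array([[1, 1], [1, 0], [1, 0]])

# Example-based: score each row (sample), then average over samples.
# (Assumes no sample has an empty true and predicted label set.)
per_sample = (np.logical_and(y_true, y_pred).sum(axis=1)
              / np.logical_or(y_true, y_pred).sum(axis=1))
example_based_accuracy = per_sample.mean()

# Label-based (macro): score each column (label), then average over labels.
per_label = (y_true == y_pred).mean(axis=0)
macro_averaged_accuracy = per_label.mean()

print(example_based_accuracy, macro_averaged_accuracy)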

Metrics

  • Exact Match Ratio (EMR)
  • 1/0 Loss
  • Hamming Loss
  • Example-Based Accuracy
  • Example-Based Precision
  • Label-Based Metrics
  • Macro-Averaged Accuracy
  • Macro-Averaged Precision
  • Macro-Averaged Recall
  • Micro-Averaged Accuracy
  • Micro-Averaged Precision
  • Micro-Averaged Recall
  • α-Evaluation Score
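
Several of the metrics above have simple closed forms and can be written directly in numpy. The sketch below uses the standard textbook definitions and is not taken from this package's implementation:

import numpy as np

def exact_match_ratio(y_true, y_pred):
    # Fraction of samples whose predicted label set matches the true set exactly.
    return np.all(y_true == y_pred, axis=1).mean()

def hamming_loss(y_true, y_pred):
    # Fraction of individual sample-label assignments that are wrong.
    return np.not_equal(y_true, y_pred).mean()

def macro_averaged_precision(y_true, y_pred):
    # Precision computed per label (column), then averaged with equal weight per label.
    tp = np.logical_and(y_true, y_pred).sum(axis=0)
    predicted = y_pred.sum(axis=0)
    # np.maximum guards against labels with no positive predictions (they score 0).
    return (tp / np.maximum(predicted, 1)).mean()

def micro_averaged_precision(y_true, y_pred):
    # Pool true positives and predicted positives across all labels, then divide once,
    # which weights each label by how often it is predicted.
    tp = np.logical_and(y_true, y_pred).sum()
    predicted = y_pred.sum()
    return tp / predicted if predicted > 0 else 0.0

Applied to the y_true and y_pred arrays from the Examples section below, these give an exact match ratio of 0.6 and a Hamming loss of 0.2. Micro- and macro-averaged recall and accuracy follow the same pattern with different denominators.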

Examples

from multilabel_eval_metrics import MultiLabelMetrics
import numpy as np

if __name__ == "__main__":
    # Binary indicator matrices: rows are samples, columns are labels.
    y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
    y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])
    print(y_true)
    print(y_pred)
    # Compute all supported metrics and print a summary table.
    result = MultiLabelMetrics(y_true, y_pred).get_metric_summary(show=True)
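
If scikit-learn happens to be available (it is assumed here only for illustration and is not a dependency of this package), the same quantities can be cross-checked against its multilabel-aware metrics:

import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss, precision_score, recall_score

y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])

print("Exact match ratio:", accuracy_score(y_true, y_pred))  # subset accuracy
print("Hamming loss:", hamming_loss(y_true, y_pred))
print("Macro precision:", precision_score(y_true, y_pred, average="macro"))
print("Micro precision:", precision_score(y_true, y_pred, average="micro"))
print("Macro recall:", recall_score(y_true, y_pred, average="macro"))
print("Micro recall:", recall_score(y_true, y_pred, average="micro"))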

License

The multilabel-eval-metrics toolkit is provided by Donghua Chen under the MIT License.

Reference

Evaluation Metrics for Multi-Label Classification

Keywords

multi-label-classifier
