
This repo contains the evaluation metric code used in Microsoft Cognitive Services Computer Vision for tasks such as classification, object detection, image captioning, and image matting.
This repo also offers an `Evaluator` class for bringing custom evaluators under the same interface. It isn't trying to re-invent the wheel, but to provide centralized defaults for most metrics across different vision tasks, so dev/research teams can compare model performance on the same footing. As expected, many of the implementations are backed by the well-known sklearn and pycocotools libraries.
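As a rough sketch of what bringing a custom evaluator under the same interface might look like, consider the example below. The `vision_evaluation.evaluators` import path and the `add_predictions`/`get_report`/`reset` method names are assumptions made for illustration; check the `Evaluator` base class source for the actual interface.

```python
import numpy as np

# Assumption: the base class lives in vision_evaluation.evaluators and expects
# subclasses to implement add_predictions/get_report/reset; verify against the
# actual Evaluator source before relying on this sketch.
from vision_evaluation.evaluators import Evaluator


class MeanTop1ConfidenceEvaluator(Evaluator):
    """Hypothetical custom evaluator: reports the mean top-1 confidence."""

    def __init__(self):
        super().__init__()
        self.reset()

    def add_predictions(self, predictions, targets):
        # predictions: (N, num_classes) confidence matrix; targets are unused here.
        predictions = np.asarray(predictions)
        self._confidence_sum += float(predictions.max(axis=1).sum())
        self._num_samples += predictions.shape[0]

    def get_report(self, **kwargs):
        mean_conf = self._confidence_sum / self._num_samples if self._num_samples else 0.0
        return {'mean_top1_confidence': mean_conf}

    def reset(self):
        self._confidence_sum = 0.0
        self._num_samples = 0
```

An evaluator written this way can be fed the same predictions and targets as the built-in ones and reported alongside them.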
This repo currently offers the following evaluation metrics:

- `TopKAccuracyEvaluator`: computes the top-k accuracy for multi-class classification problems. A prediction is considered correct if the ground-truth label is among the k labels with the highest confidence.
- `ThresholdAccuracyEvaluator`: computes threshold-based accuracy (mainly for multi-label classification problems), i.e., the accuracy of predictions with confidence above a certain threshold.
- `AveragePrecisionEvaluator`: computes the average precision, i.e., precision averaged across different confidence thresholds.
- `PrecisionEvaluator`: computes precision.
- `RecallEvaluator`: computes recall.
- `BalancedAccuracyScoreEvaluator`: computes balanced accuracy, i.e., the average recall across classes, for multi-class classification.
- `RocAucEvaluator`: computes the area under the receiver operating characteristic curve (ROC AUC).
- `F1ScoreEvaluator`: computes the F1 score (recall and precision are reported as well).
- `EceLossEvaluator`: computes the ECE loss, i.e., the expected calibration error, given the model confidences and true labels for a set of data points.
- `ConfusionMatrixEvaluator`: computes the confusion matrix of a classification. By definition, a confusion matrix C is such that Cij is the number of observations known to be in group i and predicted to be in group j (https://en.wikipedia.org/wiki/Confusion_matrix).
- `CocoMeanAveragePrecisionEvaluator`: computes COCO mean average precision (mAP) across classes, under multiple IoU thresholds.
- `BleuScoreEvaluator`: computes the BLEU score. For more details, refer to BLEU: a Method for Automatic Evaluation of Machine Translation.
- `METEORScoreEvaluator`: computes the METEOR score. For more details, refer to the METEOR project page. We use the latest version (1.5) of the code.
- `ROUGELScoreEvaluator`: computes the ROUGE-L score. Refer to ROUGE: A Package for Automatic Evaluation of Summaries for more details.
- `CIDErScoreEvaluator`: computes the CIDEr score. Refer to CIDEr: Consensus-based Image Description Evaluation for more details.
- `SPICEScoreEvaluator`: computes the SPICE score. Refer to SPICE: Semantic Propositional Image Caption Evaluation for more details.
- `MeanIOUEvaluator`: computes the mean intersection-over-union score.
- `ForegroundIOUEvaluator`: computes the foreground intersection-over-union score.
- `BoundaryMeanIOUEvaluator`: computes the boundary mean intersection-over-union score.
- `BoundaryForegroundIOUEvaluator`: computes the boundary foreground intersection-over-union score.
- `L1ErrorEvaluator`: computes the L1 error.
- `MeanLpErrorEvaluator`: computes the mean Lp error (e.g., L1 error for p=1, L2 error for p=2).
- `RecallAtKEvaluator(k)`: computes Recall@k, the percentage of relevant items retrieved in the top k out of all relevant items.
- `PrecisionAtKEvaluator(k)`: computes Precision@k, the percentage of true positives among all items classified as positive in the top k.
- `MeanAveragePrecisionAtK(k)`: computes Mean Average Precision@k, an information-retrieval metric.
- `PrecisionRecallCurveNPointsEvaluator(k)`: computes a precision-recall curve, interpolated at k points and averaged over all samples.

Different machine learning problems and applications prefer different metrics, so pick the evaluators that match your task; the usage sketch below shows how the evaluators are typically driven.
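Here is a minimal usage sketch with `TopKAccuracyEvaluator`. The import path, the constructor argument, the `(N, num_classes)` score-matrix and integer-label input format, and the report contents are assumptions made for illustration; consult the evaluator docstrings for the exact expectations.

```python
import numpy as np

# Assumption: evaluators are importable from vision_evaluation.evaluators and
# follow an add_predictions(predictions, targets) / get_report() interface.
from vision_evaluation.evaluators import TopKAccuracyEvaluator

# Toy multi-class problem: 4 samples, 3 classes; each row holds confidence scores.
predictions = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.3, 0.5],
    [0.5, 0.4, 0.1],
])
targets = np.array([0, 1, 1, 2])  # assumed format: integer ground-truth labels

for k in (1, 2):
    evaluator = TopKAccuracyEvaluator(k)  # constructor argument is an assumption
    evaluator.add_predictions(predictions, targets)
    # For this toy data the expected values are top-1 accuracy 0.5, top-2 accuracy 0.75.
    print(evaluator.get_report())
```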
The image caption evaluators require a Java Runtime Environment (JRE) (Java 1.8.0) and some extra dependencies, which can be installed with `pip install vision-evaluation[caption]`. JRE and these extras are not required for the other evaluators, e.g., the image classification or object detection evaluation pipelines.
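Assuming the caption evaluators follow the same `add_predictions`/`get_report` pattern once the `[caption]` extra and Java 1.8.0 are installed, a usage sketch might look like the following. The import path, the candidate/reference input format, and the report contents are all assumptions made for illustration only.

```python
# Hypothetical sketch: requires `pip install vision-evaluation[caption]` and a
# Java 1.8.0 runtime on the PATH. The import path and the expected input format
# (one candidate caption per image, a list of reference captions per image) are
# assumptions; check the caption evaluators' docstrings for the real interface.
from vision_evaluation.evaluators import BleuScoreEvaluator

candidates = ["a dog runs across the grass"]             # generated caption per image
references = [["a dog is running on the grass",
               "a brown dog runs through a field"]]      # reference captions per image

bleu = BleuScoreEvaluator()
bleu.add_predictions(candidates, references)
print(bleu.get_report())  # dict of BLEU scores (exact key names may vary)
```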