autoevaluator

Fully automated LLM evaluator

  • 1.0.3
  • PyPI

AutoEvaluator: An LLM-based LLM Evaluator

AutoEvaluator is a Python library that speeds up quality-control work on large language model (LLM) outputs. It provides a simple, transparent, and user-friendly API that identifies True Positive (TP), False Positive (FP), and False Negative (FN) statements by comparing a generated statement against the provided ground truth. Get ready to turbocharge your LLM evaluations!


Features:

  • Evaluate LLM outputs against a reference dataset or human judgement.
  • Generate TP, FP, and FN sentences based on the provided ground truth.
  • Calculate precision, recall, and F1 score.
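The precision, recall, and F1 numbers follow directly from the sizes of the TP/FP/FN lists. A minimal sketch of that arithmetic (the `scores` helper below is illustrative, not part of the library's API):

```python
def scores(tp, fp, fn):
    """Compute precision, recall, and F1 from lists of TP/FP/FN statements."""
    precision = len(tp) / (len(tp) + len(fp)) if (tp or fp) else 0.0
    recall = len(tp) / (len(tp) + len(fn)) if (tp or fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall, "f1_score": f1}

# 1 TP, 1 FP, 1 FN -> precision, recall, and F1 are all 0.5
print(scores(tp=["Feynman was born in 1918."],
             fp=["Feynman was born in Malaysia."],
             fn=["Feynman was born in America."]))
# → {'precision': 0.5, 'recall': 0.5, 'f1_score': 0.5}
```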

Installation

Autoevaluator requires Python 3.9; its dependencies are pulled in automatically by pip. Install it with:

pip install autoevaluator

Usage

  1. Prepare your data:

    • Create a dataset containing LLM outputs and their corresponding ground truth labels.
    • The format of the data can be customized depending on the evaluation task.
    • Example: a CSV file with columns for "prompt," "llm_output," and "ground_truth".
  2. Set up environment variables:

import os

# API credentials: use either OpenAI or Azure OpenAI
os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"
os.environ["AZURE_OPENAI_API_KEY"] = "<AZURE_OPENAI_API_KEY>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "<AZURE_OPENAI_ENDPOINT>"
# Selects the backend (Azure OpenAI or plain OpenAI)
os.environ["DEPLOYMENT"] = "<azure>/<not-azure>"
  3. Run autoevaluator:
# Import the evaluate and setup_client functions from autoevaluator
from autoevaluator import evaluate, setup_client

# Set up the OpenAI client and model from the environment variables above
client, model = setup_client()

# Define the claim to be evaluated
claim = 'Feynman was born in 1918 in Malaysia.'

# Define the ground truth statement
ground_truth = 'Feynman was born in 1918 in America.'

# Evaluate the claim against the ground truth
evaluate(claim, ground_truth, client=client, model_name=model)

# Output:
{'TP': ['Feynman was born in 1918.'],
 'FP': ['Feynman was born in Malaysia.'],
 'FN': ['Feynman was born in America.'],
 'recall': 0.5,
 'precision': 0.5,
 'f1_score': 0.5}

  4. Output:
    • The script returns a dictionary with the following information:
      • TP, FP, and FN sentences
      • Precision, recall, and F1 score
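To score a whole dataset rather than a single claim, per-example result dictionaries of the shape shown above can be pooled by summing TP/FP/FN counts before recomputing the scores (micro-averaging). A sketch under that assumption; `aggregate` is a hypothetical helper, not part of autoevaluator:

```python
def aggregate(results):
    """Micro-average: pool TP/FP/FN counts across examples, then rescore.

    Each element of `results` is assumed to look like autoevaluator's
    output: {'TP': [...], 'FP': [...], 'FN': [...], ...}.
    """
    tp = sum(len(r["TP"]) for r in results)
    fp = sum(len(r["FP"]) for r in results)
    fn = sum(len(r["FN"]) for r in results)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall, "f1_score": f1}

# Two evaluated examples: 3 TP, 1 FP, 2 FN in total
results = [
    {"TP": ["s1"], "FP": ["s2"], "FN": ["s3"]},
    {"TP": ["s4", "s5"], "FP": [], "FN": ["s6"]},
]
print(aggregate(results))  # precision 0.75, recall 0.6
```

Micro-averaging weights every extracted statement equally; averaging the per-example F1 scores instead (macro-averaging) would weight every example equally.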

License:

This project is licensed under the MIT License. See the LICENSE file for details.
