AutoEvaluator: An LLM-Based LLM Evaluator
AutoEvaluator is a Python library that speeds up quality-control (QC) work on large language model (LLM) outputs. It provides a simple, transparent, and user-friendly API for identifying True Positive (TP), False Positive (FP), and False Negative (FN) statements by comparing a generated statement against the provided ground truth. Get ready to turbocharge your LLM evaluations!
Features:
- Evaluate LLM outputs against a reference dataset or human judgement.
- Generate TP, FP, and FN sentences based on the provided ground truth.
- Calculate Precision, Recall, and F1 score.
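For reference, the Precision, Recall, and F1 scores the library reports follow the standard definitions over TP/FP/FN counts. A minimal sketch (the `score` helper here is illustrative, not part of the autoevaluator API):

```python
# Standard Precision/Recall/F1 over TP, FP, and FN counts, the same
# formulas the library reports. The score() helper is a hypothetical
# illustration, not part of the autoevaluator API.
def score(tp: int, fp: int, fn: int) -> dict:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1_score": f1}

print(score(1, 1, 1))  # {'precision': 0.5, 'recall': 0.5, 'f1_score': 0.5}
```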
Installation
AutoEvaluator requires Python 3.9 and several dependencies. Install it with pip:
pip install autoevaluator
Usage
- Prepare your data:
- Create a dataset containing LLM outputs and their corresponding ground truth labels.
- The format of the data can be customized depending on the evaluation task.
- Example: A CSV file with columns for "prompt," "llm_output," and "ground_truth"
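A CSV in the layout suggested above can be loaded with the standard library. The `load_eval_rows` helper below is a data-preparation sketch, not part of the autoevaluator API:

```python
import csv

# Hypothetical helper for the suggested CSV layout: reads rows with
# "prompt", "llm_output", and "ground_truth" columns into a list of
# dicts, one per example. Not part of the autoevaluator API.
def load_eval_rows(path: str) -> list[dict]:
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```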
- Set up environment variables:
import os
os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"
os.environ["AZURE_OPENAI_API_KEY"] = "<AZURE_OPENAI_API_KEY>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "<AZURE_OPENAI_ENDPOINT>"
os.environ["DEPLOYMENT"] = "<azure>/<not-azure>"  # "azure" for Azure OpenAI, otherwise the standard OpenAI API
- Run autoevaluator:
# Import the evaluate function from the autoevaluator module
from autoevaluator import evaluate, setup_client
# Set up the OpenAI client from the environment variables
client, model = setup_client()
# Define the claim to be evaluated
claim = 'Feynman was born in 1918 in Malaysia.'
# Define the ground truth statement
ground_truth = 'Feynman was born in 1918 in America.'
# Evaluate the claim against the ground truth
evaluate(claim, ground_truth, client=client, model_name=model)
# Output:
{'TP': ['Feynman was born in 1918.'],
 'FP': ['Feynman was born in Malaysia.'],
 'FN': ['Feynman was born in America.'],
 'recall': 0.5,
 'precision': 0.5,
 'f1_score': 0.5}
- Output:
  - The evaluate function returns a dictionary containing:
    - TP, FP, and FN sentences
    - Precision, Recall, and F1 score
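To score a whole dataset rather than a single claim, the per-example dictionaries returned by evaluate can be micro-averaged. A sketch under the assumption that each result has the shape shown above (the `aggregate` helper is illustrative, not part of the library; the sample results are hand-written stand-ins, since evaluate needs a configured client):

```python
# Micro-average corpus-level metrics from a list of per-example
# result dicts shaped like evaluate()'s output. aggregate() is a
# hypothetical helper, not part of the autoevaluator API; the sample
# results below are hand-written stand-ins for real evaluate() calls.
def aggregate(results: list[dict]) -> dict:
    tp = sum(len(r["TP"]) for r in results)
    fp = sum(len(r["FP"]) for r in results)
    fn = sum(len(r["FN"]) for r in results)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1_score": f1}

results = [
    {"TP": ["a"], "FP": ["b"], "FN": []},
    {"TP": ["c"], "FP": [], "FN": ["d"]},
]
print(aggregate(results))  # all three metrics equal 2/3 here
```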
License:
This project is licensed under the MIT License. See the LICENSE file for details.