Autoevals is a tool to quickly and easily evaluate AI model outputs.
It bundles together a variety of automatic evaluation methods, including heuristic (e.g. Levenshtein distance), statistical (e.g. BLEU), and model-based (using LLMs) evaluations.
Autoevals is developed by the team at Braintrust.
Autoevals uses model-graded evaluation for a variety of subjective tasks including fact checking, safety, and more. Many of these evaluations are adapted from OpenAI's excellent evals project but are implemented so you can flexibly run them on individual examples, tweak the prompts, and debug their outputs.
You can also create your own model-graded evaluations with Autoevals. It's easy to add custom prompts, parse outputs, and manage exceptions.
Autoevals is distributed as a Python library on PyPI and Node.js library on NPM.
pip install autoevals
npm install autoevals
Use Autoevals to model-grade an example LLM completion using the factuality prompt.
By default, Autoevals uses your OPENAI_API_KEY
environment variable to authenticate with OpenAI's API.
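If the key is not already exported in your shell, one option is to set it in Python before creating an evaluator (a minimal sketch; the key value is a placeholder):

import os

# Placeholder for illustration; in practice, export OPENAI_API_KEY in your
# environment rather than hard-coding it
os.environ["OPENAI_API_KEY"] = "sk-..."

With the key in place, run the factuality evaluator: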
from autoevals.llm import *
# Create a new LLM-based evaluator
evaluator = Factuality()
# Evaluate an example LLM completion
input = "Which country has the highest population?"
output = "People's Republic of China"
expected = "China"
result = evaluator(output, expected, input=input)
# The evaluator returns a score from [0,1] and includes the raw outputs from the evaluator
print(f"Factuality score: {result.score}")
print(f"Factuality metadata: {result.metadata['rationale']}")
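The same evaluator instance can be reused across many examples. A minimal sketch with made-up data, assuming the evaluator created above:

examples = [
    {
        "input": "Which country has the highest population?",
        "output": "People's Republic of China",
        "expected": "China",
    },
    {
        "input": "What is the capital of France?",
        "output": "Paris",
        "expected": "Paris",
    },
]

for example in examples:
    # Each call returns a score in [0, 1] plus the evaluator's raw output
    result = evaluator(example["output"], example["expected"], input=example["input"])
    print(f"{example['input']} -> {result.score}")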
If you need to use a custom OpenAI client, you can initialize the library with a custom client.
import openai
from autoevals import init
from autoevals.oai import LLMClient
openai_client = openai.OpenAI(base_url="https://api.openai.com/v1/")
class CustomClient(LLMClient):
    openai = openai_client  # you can also pass in the openai module and we will instantiate it for you
    embed = openai_client.embeddings.create
    moderation = openai_client.moderations.create
    RateLimitError = openai.RateLimitError

    def complete(self, **kwargs):
        # make adjustments as needed
        return self.openai.chat.completions.create(**kwargs)
# Autoevals will now use your custom client
client = init(client=CustomClient)
If you only need to use a custom client for a specific evaluator, you can pass in the client to the evaluator.
evaluator = Factuality(client=CustomClient)
import { Factuality } from "autoevals";
(async () => {
const input = "Which country has the highest population?";
const output = "People's Republic of China";
const expected = "China";
const result = await Factuality({ output, expected, input });
console.log(`Factuality score: ${result.score}`);
console.log(`Factuality metadata: ${result.metadata.rationale}`);
})();
Once you grade an output using Autoevals, it's convenient to use Braintrust to log and compare your evaluation results.
from autoevals.llm import *
import braintrust
# Create a new LLM-based evaluator
evaluator = Factuality()
# Set up an example LLM completion
input = "Which country has the highest population?"
output = "People's Republic of China"
expected = "China"
# Set up a Braintrust experiment to log our eval to
experiment = braintrust.init(
    project="Autoevals", api_key="YOUR_BRAINTRUST_API_KEY"
)

# Start a span and run our evaluator
with experiment.start_span() as span:
    result = evaluator(output, expected, input=input)

    # The evaluator returns a score from [0,1] and includes the raw outputs from the evaluator
    print(f"Factuality score: {result.score}")
    print(f"Factuality metadata: {result.metadata['rationale']}")

    span.log(
        inputs={"query": input},
        output=output,
        expected=expected,
        scores={
            "factuality": result.score,
        },
        metadata={
            "factuality": result.metadata,
        },
    )
print(experiment.summarize())
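To log more than one example to the same experiment, the span pattern above can be wrapped in a loop. A hedged sketch reusing the evaluator and experiment from above, with made-up data:

more_examples = [
    ("Which country has the highest population?", "People's Republic of China", "China"),
    ("What is the capital of France?", "Paris", "Paris"),
]

for input, output, expected in more_examples:
    with experiment.start_span() as span:
        result = evaluator(output, expected, input=input)
        span.log(
            inputs={"query": input},
            output=output,
            expected=expected,
            scores={"factuality": result.score},
            metadata={"factuality": result.metadata},
        )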
Create a file named example.eval.js (it must end with .eval.ts or .eval.js):
import { Eval } from "braintrust";
import { Factuality } from "autoevals";
Eval("Autoevals", {
  data: () => [
    {
      input: "Which country has the highest population?",
      expected: "China",
    },
  ],
  task: () => "People's Republic of China",
  scores: [Factuality],
});
Then, run
npx braintrust run example.eval.js
Autoevals supports custom evaluation prompts for model-graded evaluation. To use them, simply pass in a prompt and scoring mechanism:
from autoevals import LLMClassifier
# Define a prompt prefix for an LLMClassifier (returns just one answer)
prompt_prefix = """
You are a technical project manager who helps software engineers generate better titles for their GitHub issues.
You will look at the issue description, and pick which of two titles better describes it.
I'm going to provide you with the issue description, and two possible titles.
Issue Description: {{input}}
1: {{output}}
2: {{expected}}
"""
# Define the scoring mechanism
# 1 if the generated answer is better than the expected answer
# 0 otherwise
output_scores = {"1": 1, "2": 0}
evaluator = LLMClassifier(
    name="TitleQuality",
    prompt_template=prompt_prefix,
    choice_scores=output_scores,
    use_cot=True,
)
# Evaluate an example LLM completion
page_content = """
As suggested by Nicolo, we should standardize the error responses coming from GoTrue, postgres, and realtime (and any other/future APIs) so that it's better DX when writing a client,
We can make this change on the servers themselves, but since postgrest and gotrue are fully/partially external may be harder to change, it might be an option to transform the errors within the client libraries/supabase-js, could be messy?
Nicolo also dropped this as a reference: http://spec.openapis.org/oas/v3.0.3#openapi-specification"""
output = (
    "Standardize error responses from GoTrue, Postgres, and Realtime APIs for better DX"
)
expected = "Standardize Error Responses across APIs"
response = evaluator(output, expected, input=page_content)
print(f"Score: {response.score}")
print(f"Metadata: {response.metadata}")
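The same LLMClassifier pattern extends to other subjective criteria. A hedged sketch of a hypothetical agreement classifier (the prompt and choices below are illustrative, not Autoevals built-ins):

# Hypothetical classifier: does the generated answer agree with the reference answer?
agreement_prompt = """
You are comparing a generated answer to a reference answer for the question below.

Question: {{input}}
Generated answer: {{output}}
Reference answer: {{expected}}

Choose one:
a) The generated answer agrees with the reference answer.
b) The generated answer contradicts or ignores the reference answer.
"""

agreement = LLMClassifier(
    name="Agreement",
    prompt_template=agreement_prompt,
    choice_scores={"a": 1, "b": 0},
    use_cot=True,
)

result = agreement(
    "Paris is the capital of France.",
    "Paris",
    input="What is the capital of France?",
)
print(f"Agreement score: {result.score}")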
import { LLMClassifierFromTemplate } from "autoevals";
(async () => {
const promptTemplate = `You are a technical project manager who helps software engineers generate better titles for their GitHub issues.
You will look at the issue description, and pick which of two titles better describes it.
I'm going to provide you with the issue description, and two possible titles.
Issue Description: {{input}}
1: {{output}}
2: {{expected}}`;
const choiceScores = { 1: 1, 2: 0 };
const evaluator = LLMClassifierFromTemplate<{ input: string }>({
  name: "TitleQuality",
  promptTemplate,
  choiceScores,
  useCoT: true,
});
const input = `As suggested by Nicolo, we should standardize the error responses coming from GoTrue, postgres, and realtime (and any other/future APIs) so that it's better DX when writing a client,
We can make this change on the servers themselves, but since postgrest and gotrue are fully/partially external may be harder to change, it might be an option to transform the errors within the client libraries/supabase-js, could be messy?
Nicolo also dropped this as a reference: http://spec.openapis.org/oas/v3.0.3#openapi-specification`;
const output = `Standardize error responses from GoTrue, Postgres, and Realtime APIs for better DX`;
const expected = `Standardize Error Responses across APIs`;
const response = await evaluator({ input, output, expected });
console.log("Score", response.score);
console.log("Metadata", response.metadata);
})();
You can also create your own scoring functions that do not use LLMs. For example, to test whether the word 'banana'
is in the output, you can use the following:
from autoevals import Score
def banana_scorer(output, expected, input):
    return Score(name="banana_scorer", score=1 if "banana" in output else 0)
input = "What is 1 banana + 2 bananas?"
output = "3"
expected = "3 bananas"
result = banana_scorer(output, expected, input)
print(f"Banana score: {result.score}")
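Any heuristic works the same way. For instance, a sketch of a simple token-overlap scorer (the scoring rule is illustrative, not an Autoevals built-in):

def overlap_scorer(output, expected, input):
    # Fraction of expected tokens that also appear in the output (illustrative heuristic)
    expected_tokens = set(expected.lower().split())
    output_tokens = set(output.lower().split())
    if not expected_tokens:
        return Score(name="overlap_scorer", score=0)
    score = len(expected_tokens & output_tokens) / len(expected_tokens)
    return Score(name="overlap_scorer", score=score)

result = overlap_scorer(output, expected, input)
print(f"Overlap score: {result.score}")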
import { Score } from "autoevals";
const bananaScorer = ({
  output,
  expected,
  input,
}: {
  output: string;
  expected: string;
  input: string;
}): Score => {
  return { name: "banana_scorer", score: output.includes("banana") ? 1 : 0 };
};
(async () => {
const input = "What is 1 banana + 2 bananas?";
const output = "3";
const expected = "3 bananas";
const result = bananaScorer({ output, expected, input });
console.log(`Banana score: ${result.score}`);
})();
There is nothing particularly novel about the evaluation methods in this library. They are all well-known and well-documented. However, a few things are particularly difficult when evaluating in practice, and Autoevals addresses them behind a uniform interface: you can pass the same input, output, and expected values through a bunch of different evaluation methods and compare the results (see the sketch below).

The full docs are available here.
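A minimal sketch of that uniform interface, reusing the Factuality evaluator and the banana_scorer defined above:

scorers = [("Factuality", Factuality()), ("banana_scorer", banana_scorer)]

input = "What is 1 banana + 2 bananas?"
output = "3 bananas"
expected = "3 bananas"

for name, scorer in scorers:
    # Every scorer takes the same (output, expected, input) arguments and returns a Score
    result = scorer(output, expected, input=input)
    print(f"{name}: {result.score}")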