Baserun is the testing and observability platform for LLM apps.
pip install baserun
Create an account at https://baserun.ai. Then generate an API key for your project in the settings tab. Set it as an environment variable:
export BASERUN_API_KEY="your_api_key_here"
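As a quick sanity check before making traced requests, you can verify the variable is visible to your process. A minimal sketch (this check is a convenience, not part of the Baserun SDK):

import os

# Fail fast if the key was not exported in the environment running your app
if not os.environ.get("BASERUN_API_KEY"):
    raise RuntimeError("BASERUN_API_KEY is not set; see the setup step above")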
To have Baserun trace your LLM requests, all you need to do is import OpenAI from baserun instead of from openai. Creating an OpenAI client object automatically starts the trace, and all future LLM requests made with this client object will be captured.
from baserun import OpenAI

def example():
    client = OpenAI()
    completion = client.chat.completions.create(
        name="Paris Activities",
        model="gpt-4o",
        temperature=0.7,
        messages=[
            {
                "role": "user",
                "content": "What are three activities to do in Paris?"
            }
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(example())
If, for some reason, you don't wish to use Baserun's OpenAI client, you can simply wrap your normal OpenAI client using init.
from openai import OpenAI
from baserun import init

client = init(OpenAI())
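The wrapped client is then used exactly like a regular OpenAI client; here is a minimal sketch reusing the Paris prompt from above:

from openai import OpenAI
from baserun import init

# Wrapping the standard client starts a trace, just like Baserun's own OpenAI class
client = init(OpenAI())

# Requests made through the wrapped client are captured in that trace
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What are three activities to do in Paris?"}],
)
print(completion.choices[0].message.content)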
When you start a trace by initializing an OpenAI object, there are several optional parameters you can set for that trace:
name: A customized name for the trace
result: Some end result or output for the trace
user: A username or user ID to associate with this trace
session: A session ID to associate with this trace
trace_id: A previously-generated or custom UUID (e.g. to continue a previous trace)

from baserun import OpenAI
def example():
    client = OpenAI(result="What are three activities to do in Paris?")
    client.name = "Example"
    client.user = "user123"
    client.session = "session123"
    completion = client.chat.completions.create(
        name="Paris Activities",
        model="gpt-4o",
        temperature=0.7,
        messages=[
            {
                "role": "user",
                "content": "What are three activities to do in Paris?"
            }
        ],
    )
    client.result = "Done"
You can perform evals directly on a completion object. The includes eval is used here as an example; it checks whether a string is included in the completion's output. The argument passed to eval() is a name or label used for your reference.
from baserun import OpenAI

def example():
    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.7,
        messages=[
            {
                "role": "user",
                "content": "What are three activities to do in Paris?"
            }
        ],
    )
    completion.eval("include_eiffel_tower").includes("Eiffel Tower")
You can add tags either to the traced OpenAI object or to the completion. There are several different types of tags:
log: Any arbitrary logs you want to attach to a trace or completion
feedback: Any score-based feedback given from users (e.g. thumbs up/down, star rating)
variable: Any variables used, e.g. while rendering a template
custom: Any arbitrary attributes you want to attach to a trace or completion

Each tag type has functions on traced OpenAI objects and completions. Each tag function can accept a metadata parameter, which is an arbitrary dictionary with any values you might want to capture.
from baserun import OpenAI

def example():
    client = OpenAI()
    client.log("Gathering user input")
    city = input()
    completion = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.7,
        messages=[
            {
                "role": "user",
                "content": f"What are three activities to do in {city}?"
            }
        ],
    )
    completion.variable("city", city)
    user_score = input()
    client.feedback("User Score", score=user_score, metadata={"My key": "My value"})
After a trace has been completed, you may wish to add additional tags to a trace or completion. For example, you might have user feedback that is gathered well after the fact. To add these tags, you need to store the trace_id and, if the tag is for a completion, the completion_id. You can then use the tag, log, or feedback functions to submit those tags.
from baserun import OpenAI, log, feedback
client = OpenAI(name="trace to be resumed")
completion = client.chat.completions.create(
    name="completion to be resumed",
    model="gpt-4o",
    messages=[{"role": "user", "content": "What are three activities to do in Paris?"}],
)
# Store these values
trace_id = client.trace_id
completion_id = completion.completion_id
# A few moments later...
log("Tagging resumed", trace_id=trace_id, completion_id=completion_id)
feedback("User satisfaction", 0.9, trace_id=trace_id, completion_id=completion_id)
Baserun ships with support for OpenAI and Anthropic. If you use another provider or library, you can still use Baserun by manually creating "generic" objects. Notably, generic completions must be submitted explicitly using submit_to_baserun. Here's what that looks like:
question = "What is the capital of the US?"
response = call_my_custom_model(question)
client = GenericClient(name="My Traced Client")
completion = GenericCompletion(
model="my custom model",
name="My Completion",
input_messages=[GenericInputMessage(content=question, role="user")],
choices=[GenericChoice(message=GenericCompletionMessage(content=response))],
client=client,
trace_id=client.trace_id,
)
completion.submit_to_baserun()
Baserun has built-in support for the datasets library by HuggingFace. You can use the Dataset class to submit datasets. See the HuggingFace documentation to learn more about the datasets library.
Once you have loaded your dataset, you can submit it to Baserun using the submit_dataset function.
from datasets import Dataset
from baserun import submit_dataset

data_samples = {
    "question": ["When was the first super bowl?"],
    "answer": ["The first Super Bowl was held on January 15, 1967. It took place at the Los Angeles Memorial Coliseum in Los Angeles, California."],
    "contexts": [
        [
            "The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,"
        ],
    ],
    "ground_truth": [
        "The first Super Bowl was held on January 15, 1967",
    ],
}

dataset = Dataset.from_dict(data_samples)
submit_dataset(dataset, "questions")
Once you have submitted a dataset, you can use the get_dataset function to retrieve it. The retrieved dataset can automatically create scenarios from your data. From there, you can easily evaluate these scenarios using the evaluate function:
# get_dataset, Experiment, and the Includes/Correctness evaluators are referenced above;
# their exact import paths may vary by SDK version
from baserun import OpenAI, evaluate, get_dataset, Experiment, Includes, Correctness

# Wrapped in an async function because get_dataset is awaited
async def run_experiment():
    dataset = await get_dataset(name="capital questions")
    question = "What is the capital of {country}?"
    client = OpenAI()
    experiment = Experiment(dataset=dataset, client=client, name="Dataset online eval run")
    for scenario in experiment.scenarios:
        evaluators = [
            Includes(scenario=scenario, expected="{city}"),
            Correctness(scenario=scenario, question=question),
        ]
        completion = client.chat.completions.create(
            name=scenario.name,
            model="gpt-4o",
            messages=scenario.format_messages([{"role": "user", "content": question}]),
            variables=scenario.input,
        )
        output = completion.choices[0].message.content
        client.output = output
        scenario.actual = output
        evaluate(evaluators, scenario, completion=completion)
For a deeper dive on all capabilities and more advanced usage, please refer to our Documentation.