LangSmith Client SDK
This package contains the Python client for interacting with the LangSmith platform.
To install:
pip install langchainplus-sdk
LangSmith helps you and your team develop and evaluate language models and intelligent agents. It is compatible with any LLM application and offers seamless integration with LangChain, a widely used open-source framework for building language model applications.
Note: You can enjoy the benefits of LangSmith without using the LangChain open-source packages! To get started with your own proprietary framework, set up your account and then skip to Logging Traces Outside LangChain.
A typical workflow looks like:
- Set up an account with LangSmith or host your local server.
- Log traces.
- Debug, Create Datasets, and Evaluate Runs.
We'll walk through these steps in more detail below.
1. Connect to LangSmith
Sign up for LangSmith using your GitHub or Discord account, or an email address and password. If you sign up with an email, make sure to verify your email address before logging in.
Then, create a unique API key on the Settings Page, which is found in the menu at the top right corner of the page.
Note: Save the API Key in a secure location. It will not be shown again.
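As one way to handle the key safely, here is a minimal sketch of reading it from the environment at startup instead of hardcoding it (the variable name matches the tracing examples below):

import os

# Read the key from the environment; never commit the key itself to source control.
api_key = os.environ.get("LANGCHAIN_API_KEY")
assert api_key, "Set LANGCHAIN_API_KEY before running your application."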
2. Log Traces
You can log traces natively in your LangChain application or using a LangSmith RunTree.
Logging Traces with LangChain
LangSmith seamlessly integrates with the Python LangChain library to record traces from your LLM applications.
- Copy the environment variables from the Settings Page and add them to your application.
Tracing can be activated by setting the following environment variables or by manually specifying the LangChainTracer (a minimal sketch of the manual option appears at the end of this section).
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus"
os.environ["LANGCHAIN_API_KEY"] = "<YOUR-LANGCHAINPLUS-API-KEY>"
Tip: Projects are groups of traces. All runs are logged to a project. If not specified, runs are logged to the default project.
- Run an Agent, Chain, or Language Model in LangChain
If the environment variables are correctly set, your application will automatically connect to the LangSmith platform.
from langchain.chat_models import ChatOpenAI

# With the environment variables above set, this call is traced automatically.
chat = ChatOpenAI()
response = chat.predict(
    "Translate this sentence from English to French. I love programming."
)
print(response)
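As an alternative to the environment variable, here is a minimal sketch of manually specifying the LangChainTracer, as mentioned above. Constructor arguments (such as a target project name) vary between langchain releases, so the defaults are used here:

from langchain.callbacks.tracers import LangChainTracer
from langchain.chat_models import ChatOpenAI

# Attach the tracer explicitly instead of setting LANGCHAIN_TRACING_V2.
tracer = LangChainTracer()
chat = ChatOpenAI(callbacks=[tracer])
print(chat.predict("Translate this sentence from English to French. I love programming."))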
Logging Traces Outside LangChain
Note: this API is experimental and may change in the future.
You can still use the LangSmith development platform without depending on any LangChain code. You can connect either by setting the appropriate environment variables, or by directly specifying the connection information in the RunTree (a sketch of the direct option follows the example below).
- Copy the environment variables from the Settings Page and add them to your application.
import os
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus"
os.environ["LANGCHAIN_API_KEY"] = "<YOUR-LANGCHAINPLUS-API-KEY>"
- Log traces using a RunTree.
A RunTree tracks your application. Each RunTree object is required to have a name and run_type. These and other important attributes are as follows:
- name: str - used to identify the component's purpose
- run_type: str - currently one of "llm", "chain", or "tool"; more options will be added in the future
- inputs: dict - the inputs to the component
- outputs: Optional[dict] - the (optional) returned values from the component
- error: Optional[str] - any error messages that may have arisen during the call
from langchainplus_sdk.run_trees import RunTree

# The root run for this trace.
parent_run = RunTree(
    name="My Chat Bot",
    run_type="chain",
    inputs={"text": "Summarize this morning's meetings."},
    serialized={},
)
child_llm_run = parent_run.create_child(
    name="My Proprietary LLM",
    run_type="llm",
    inputs={
        "prompts": [
            "You are an AI Assistant. The time is XYZ."
            " Summarize this morning's meetings."
        ]
    },
)
child_llm_run.end(
    outputs={
        "generations": [
            "I should use the transcript_loader tool"
            " to fetch meeting_transcripts from XYZ"
        ]
    }
)
child_tool_run = parent_run.create_child(
    name="transcript_loader",
    run_type="tool",
    inputs={"date": "XYZ", "content_type": "meeting_transcripts"},
)
child_tool_run.end(outputs={"meetings": ["Meeting1 notes.."]})
child_chain_run = parent_run.create_child(
    name="Unreliable Component",
    run_type="tool",
    inputs={"input": "Summarize these notes..."},
)
try:
    # Failures are recorded on the run via the error attribute.
    raise ValueError("Something went wrong")
except Exception as e:
    child_chain_run.end(error=f"I errored again {e}")

parent_run.end(outputs={"output": ["The meeting notes are as follows:..."]})
# post() returns a future; result() blocks until the trace has been uploaded.
res = parent_run.post(exclude_child_runs=False)
res.result()
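For completeness, here is a sketch of the second connection option mentioned above: supplying the connection information directly rather than through environment variables. The api_url and api_key parameter names and the client argument on RunTree are assumptions; verify them against your installed SDK version.

from langchainplus_sdk import LangChainPlusClient
from langchainplus_sdk.run_trees import RunTree

# Assumed parameter names; check your SDK version.
client = LangChainPlusClient(
    api_url="https://api.langchain.plus",
    api_key="<YOUR-LANGCHAINPLUS-API-KEY>",
)
run = RunTree(
    name="My Chat Bot",
    run_type="chain",
    inputs={"text": "Hello"},
    serialized={},
    client=client,  # assumption: the run posts through this client
)
run.end(outputs={"output": "Hi!"})
run.post().result()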
Create a Dataset from Existing Runs
Once your runs are stored in LangSmith, you can convert them into a dataset.
For this example, we will do so using the Client, but you can also do this using
the web interface, as explained in the LangSmith docs.
from langchainplus_sdk import LangChainPlusClient

client = LangChainPlusClient()
dataset_name = "Example Dataset"
# Filter for the runs to convert: top-level runs only (execution_order=1),
# excluding runs that errored.
runs = client.list_runs(
    project_name="my_project",
    execution_order=1,
    error=False,
)
dataset = client.create_dataset(dataset_name, description="An example dataset")
for run in runs:
    client.create_example(
        inputs=run.inputs,
        outputs=run.outputs,
        dataset_id=dataset.id,
    )
Evaluating Runs
You can run evaluations directly using the LangSmith client; each result is logged as feedback on the evaluated run.
from typing import Optional

from langchainplus_sdk.evaluation import StringEvaluator


def jaccard_chars(output: str, answer: str) -> float:
    """Naive Jaccard similarity between two strings."""
    prediction_chars = set(output.strip().lower())
    answer_chars = set(answer.strip().lower())
    intersection = prediction_chars.intersection(answer_chars)
    union = prediction_chars.union(answer_chars)
    if not union:  # both strings are empty; treat them as identical
        return 1.0
    return len(intersection) / len(union)


def grader(run_input: str, run_output: str, answer: Optional[str]) -> dict:
    """Compute the score and/or label for this run."""
    if answer is None:
        value = "AMBIGUOUS"
        score = 0.5
    else:
        score = jaccard_chars(run_output, answer)
        value = "CORRECT" if score > 0.9 else "INCORRECT"
    return dict(score=score, value=value)


evaluator = StringEvaluator(evaluation_name="Jaccard", grading_function=grader)

runs = client.list_runs(
    project_name="my_project",
    execution_order=1,
    error=False,
)
for run in runs:
    client.evaluate_run(run, evaluator)
Additional Documentation
To learn more about the LangSmith platform, check out the docs.