Langtrace is an open-source observability tool that lets you capture, debug, and analyze traces and metrics from all your applications that leverage LLM APIs, vector databases, and LLM-based frameworks.
✨ Features
- 📊 Open Telemetry Support: Built on OTEL standards for comprehensive tracing
- 🔄 Real-time Monitoring: Track LLM API calls, vector operations, and framework usage
- 🎯 Performance Insights: Analyze latency, costs, and usage patterns
- 🔍 Debug Tools: Trace and debug your LLM application workflows
- 📈 Analytics: Get detailed metrics and visualizations
- 🛠️ Framework Support: Extensive integration with popular LLM frameworks
- 🔌 Vector DB Integration: Support for major vector databases
- 🎨 Flexible Configuration: Customizable tracing and monitoring options
🚀 Quick Start
```shell
pip install langtrace-python-sdk
```

```python
from langtrace_python_sdk import langtrace

langtrace.init(api_key='<your_api_key>')
```
🔗 Supported Integrations
Langtrace automatically captures traces from the following vendors:
LLM Providers
Provider | TypeScript SDK | Python SDK |
---|---|---|
OpenAI | ✅ | ✅ |
Anthropic | ✅ | ✅ |
Azure OpenAI | ✅ | ✅ |
Cohere | ✅ | ✅ |
Groq | ✅ | ✅ |
Perplexity | ✅ | ✅ |
Gemini | ❌ | ✅ |
Mistral | ❌ | ✅ |
AWS Bedrock | ✅ | ✅ |
Ollama | ❌ | ✅ |
Cerebras | ❌ | ✅ |
Frameworks
Framework | TypeScript SDK | Python SDK |
---|---|---|
Langchain | ❌ | ✅ |
LlamaIndex | ✅ | ✅ |
Langgraph | ❌ | ✅ |
LiteLLM | ❌ | ✅ |
DSPy | ❌ | ✅ |
CrewAI | ❌ | ✅ |
VertexAI | ✅ | ✅ |
EmbedChain | ❌ | ✅ |
Autogen | ❌ | ✅ |
HiveAgent | ❌ | ✅ |
Inspect AI | ❌ | ✅ |
Vector Databases
Database | TypeScript SDK | Python SDK |
---|---|---|
Pinecone | ✅ | ✅ |
ChromaDB | ✅ | ✅ |
QDrant | ✅ | ✅ |
Weaviate | ✅ | ✅ |
PGVector | ✅ | ✅ (SQLAlchemy) |
MongoDB | ❌ | ✅ |
Milvus | ❌ | ✅ |
🌐 Getting Started
Langtrace Cloud ☁️
- Sign up by going to this link.
- Create a new Project after signing up. Projects are containers for storing the traces and metrics generated by your application. If you have only one application, a single project is sufficient.
- Generate an API key by going inside the project.
- In your application, install the Langtrace SDK and initialize it with the API key you generated in step 3.
- The code for installing and setting up the SDK is shown below
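As a minimal sketch of steps 4 and 5 (the API key value is a placeholder):

```python
# Install first: pip install langtrace-python-sdk
from langtrace_python_sdk import langtrace

# Initialize Langtrace before importing any LLM client libraries
# so their calls are instrumented. Replace the placeholder key
# with the one generated for your project.
langtrace.init(api_key='<your_api_key>')
```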
Framework Quick Starts
FastAPI
```python
from fastapi import FastAPI
from langtrace_python_sdk import langtrace
from openai import OpenAI

langtrace.init()
app = FastAPI()
client = OpenAI()

@app.get("/")
def root():
    client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}],
        stream=False,
    )
    return {"Hello": "World"}
```
Django
```python
from langtrace_python_sdk import langtrace

# Initialize Langtrace before importing the LLM client so its calls are instrumented
langtrace.init()

from django.http import JsonResponse
from openai import OpenAI

client = OpenAI()

def chat_view(request):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": request.GET.get('message', '')}]
    )
    return JsonResponse({"response": response.choices[0].message.content})
```
Flask
```python
from flask import Flask
from langtrace_python_sdk import langtrace
from openai import OpenAI

app = Flask(__name__)
langtrace.init()
client = OpenAI()

@app.route('/')
def chat():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    return {"response": response.choices[0].message.content}
```
LangChain
```python
from langtrace_python_sdk import langtrace
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

langtrace.init()

chat = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])
chain = prompt | chat
response = chain.invoke({"input": "Hello!"})
```
LlamaIndex
```python
from langtrace_python_sdk import langtrace
from llama_index import VectorStoreIndex, SimpleDirectoryReader

langtrace.init()

documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What's in the documents?")
```
DSPy
```python
from langtrace_python_sdk import langtrace
import dspy
from dspy.teleprompt import BootstrapFewShot

langtrace.init()

lm = dspy.OpenAI(model="gpt-4")
dspy.settings.configure(lm=lm)

class SimpleQA(dspy.Signature):
    """Answer questions with short responses."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="short answer")

compiler = BootstrapFewShot(metric=dspy.metrics.Answer())
program = compiler.compile(SimpleQA)
```
CrewAI
```python
from langtrace_python_sdk import langtrace
from crewai import Agent, Task, Crew

langtrace.init()

researcher = Agent(
    role="Researcher",
    goal="Research and analyze data",
    backstory="Expert data researcher",
    allow_delegation=False
)

task = Task(
    description="Analyze market trends",
    agent=researcher
)

crew = Crew(
    agents=[researcher],
    tasks=[task]
)

result = crew.kickoff()
```
For more detailed examples and framework-specific features, visit our documentation.
⚙️ Configuration
Initialize Options
The SDK can be initialized with various configuration options to customize its behavior:
```python
langtrace.init(
    api_key: Optional[str] = None,
    batch: bool = True,
    write_spans_to_console: bool = False,
    custom_remote_exporter: Optional[Any] = None,
    api_host: Optional[str] = None,
    disable_instrumentations: Optional[Dict] = None,
    service_name: Optional[str] = None,
    disable_logging: bool = False,
    headers: Dict[str, str] = {},
)
```
Configuration Details
Parameter | Type | Default Value | Description |
---|---|---|---|
api_key | str | LANGTRACE_API_KEY or None | The API key for authentication. Can be set via environment variable |
batch | bool | True | Whether to batch spans before sending them to reduce API calls |
write_spans_to_console | bool | False | Enable console logging for debugging purposes |
custom_remote_exporter | Optional[Exporter] | None | Custom exporter for sending traces to your own backend |
api_host | Optional[str] | https://langtrace.ai/ | Custom API endpoint for self-hosted deployments |
disable_instrumentations | Optional[Dict] | None | Disable specific vendor instrumentations (e.g., {'only': ['openai']} ) |
service_name | Optional[str] | None | Custom service name for trace identification |
disable_logging | bool | False | Disable SDK logging completely |
headers | Dict[str, str] | {} | Custom headers for API requests |
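As a sketch combining several of the options above (the service name is illustrative, and the disable_instrumentations value follows the example format from the table):

```python
from langtrace_python_sdk import langtrace

langtrace.init(
    api_key='<your_api_key>',
    batch=True,                       # batch spans to reduce API calls
    service_name='my-llm-service',    # illustrative name for trace identification
    disable_instrumentations={'only': ['openai']},  # disable only the OpenAI instrumentation
)
```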
Environment Variables
Configure Langtrace behavior using these environment variables:
Variable | Description | Default | Impact |
---|---|---|---|
LANGTRACE_API_KEY | Primary authentication method | Required* | Required if not passed to init() |
TRACE_PROMPT_COMPLETION_DATA | Control prompt/completion tracing | true | Set to 'false' to opt out of prompt/completion data collection |
TRACE_DSPY_CHECKPOINT | Control DSPy checkpoint tracing | true | Set to 'false' to disable checkpoint tracing |
LANGTRACE_ERROR_REPORTING | Control error reporting | true | Set to 'false' to disable Sentry error reporting |
LANGTRACE_API_HOST | Custom API endpoint | https://langtrace.ai/ | Override default API endpoint for self-hosted deployments |
Performance Note: Setting TRACE_DSPY_CHECKPOINT=false is recommended in production environments, as checkpoint tracing involves state serialization that can impact latency.
Security Note: When TRACE_PROMPT_COMPLETION_DATA=false, no prompt or completion data is collected, ensuring sensitive information remains private.
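For example, a production deployment might export these variables (all values here are illustrative placeholders):

```shell
export LANGTRACE_API_KEY="<your_api_key>"
export TRACE_DSPY_CHECKPOINT="false"         # avoid checkpoint serialization latency
export TRACE_PROMPT_COMPLETION_DATA="false"  # keep prompt/completion data private
```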
🔧 Advanced Features
Root Span Decorator
Use the root span decorator to create custom trace hierarchies:
```python
from langtrace_python_sdk import langtrace

@langtrace.with_langtrace_root_span(name="custom_operation")
def my_function():
    pass
```
Additional Attributes
Inject custom attributes into your traces:
```python
@langtrace.with_additional_attributes({"custom_key": "custom_value"})
def my_function():
    pass
```

```python
with langtrace.inject_additional_attributes({"custom_key": "custom_value"}):
    pass  # spans created inside this block carry the extra attributes
```
Prompt Registry
Register and manage prompts for better traceability:
```python
from langtrace_python_sdk import langtrace
from openai import OpenAI

client = OpenAI()

langtrace.register_prompt("greeting", "Hello, {name}!")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": langtrace.get_prompt("greeting", name="Alice")}]
)
```
User Feedback System
Collect and analyze user feedback:
```python
from langtrace_python_sdk import langtrace

langtrace.record_feedback(
    trace_id="your_trace_id",
    rating=5,
    feedback_text="Great response!",
    metadata={"user_id": "123"}
)
```
DSPy Checkpointing
Manage DSPy checkpoints for workflow tracking:
```python
from langtrace_python_sdk import langtrace

langtrace.init(
    api_key="your_api_key",
    dspy_checkpoint_tracing=True
)
```
Vector Database Operations
Track vector database operations:
```python
from langtrace_python_sdk import langtrace

# vector_db is your initialized vector store client (e.g. Pinecone, ChromaDB)
with langtrace.inject_additional_attributes({"operation_type": "similarity_search"}):
    results = vector_db.similarity_search("query", k=5)
```
For more detailed examples and use cases, visit our documentation.
🏠 Langtrace Self Hosted
Get started with self-hosted Langtrace:
```python
from langtrace_python_sdk import langtrace

# Write spans to the console (useful for local debugging)
langtrace.init(write_spans_to_console=True)

# Or send spans to your own backend via a custom exporter
langtrace.init(custom_remote_exporter=<your_exporter>, batch=<True or False>)
```
🤝 Contributing
We welcome contributions! To get started:
- Fork this repository and start developing
- Join our Discord workspace
- Run examples:

  ```shell
  python src/run_example.py
  ```

- Run tests:

  ```shell
  pip install '.[test]' && pip install '.[dev]'
  pytest -v
  ```
🔒 Security
To report security vulnerabilities, email us at security@scale3labs.com. You can read more on security here.
📜 License
Langtrace Python SDK is licensed under the Apache 2.0 License. You can read about this license here.