Observability and DevTool platform for AI Agents
AgentOps helps developers build, evaluate, and monitor AI agents. From prototype to production.
| | |
|---|---|
| 🔍 Replay Analytics and Debugging | Step-by-step agent execution graphs |
| 💸 LLM Cost Management | Track spend with LLM foundation model providers |
| 🧪 Agent Benchmarking | Test your agents against 1,000+ evals |
| 🔐 Compliance and Security | Detect common prompt injection and data exfiltration exploits |
| 🤖 Framework Integrations | Native integrations with CrewAI, AG2 (AutoGen), Camel AI, & LangChain |
Quick Start ⌨️
```bash
pip install agentops
```
Session replays in 2 lines of code
Initialize the AgentOps client and automatically get analytics on all your LLM calls.
Get an API key
```python
import agentops

# Beginning of your program (i.e. main.py, __init__.py)
agentops.init("<INSERT YOUR API KEY HERE>")

...

# End of program
agentops.end_session('Success')
```
All your sessions can be viewed on the AgentOps dashboard
Agent Debugging
Session Replays
Summary Analytics
First class Developer Experience
Add powerful observability to your agents, tools, and functions with as little code as possible: one line at a time.
Refer to our documentation
```python
from agentops import track_agent

@track_agent(name='SomeCustomName')
class MyAgent:
    ...
```

```python
from agentops import record_tool

@record_tool('SampleToolName')
def sample_tool(...):
    ...
```

```python
from agentops import record_action

@record_action('sample function being recorded')
def sample_function(...):
    ...
```

```python
from agentops import record, ActionEvent

record(ActionEvent("received_user_input"))
```
Integrations 🦾
CrewAI 🛶
Build Crew agents with observability in just 2 lines of code. Simply set an AGENTOPS_API_KEY
in your environment, and your crews will get automatic monitoring on the AgentOps dashboard, as in the sketch below.
```bash
pip install 'crewai[agentops]'
```
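A minimal sketch, assuming CrewAI's Agent/Task/Crew API with AGENTOPS_API_KEY (and your model provider key, e.g. OPENAI_API_KEY) set in your environment; the role, goal, and task text below are illustrative:

```python
from crewai import Agent, Crew, Task

# No agentops calls needed: with AGENTOPS_API_KEY set in the environment,
# the crew run is monitored on the AgentOps dashboard automatically.
researcher = Agent(
    role="Researcher",
    goal="Explain what AgentOps does",
    backstory="You research developer tools for AI agents.",
)
task = Task(
    description="Write one paragraph describing AgentOps.",
    expected_output="A single paragraph.",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task])
print(crew.kickoff())
```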
AG2 🤖
With only two lines of code, add full observability and monitoring to AG2 (formerly AutoGen) agents. Set an AGENTOPS_API_KEY
in your environment and call agentops.init(), as in the sketch below.
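A minimal sketch, assuming the ag2 package (imported as autogen) and an OPENAI_API_KEY in your environment; the agent name and model below are illustrative:

```python
import os

import agentops
from autogen import ConversableAgent

# One line to start tracking; AG2 agent activity is recorded automatically
agentops.init(os.getenv("AGENTOPS_API_KEY"))

agent = ConversableAgent(
    "assistant",  # illustrative agent name
    llm_config={"config_list": [{"model": "gpt-4o-mini", "api_key": os.environ["OPENAI_API_KEY"]}]},
    human_input_mode="NEVER",
)
reply = agent.generate_reply(messages=[{"role": "user", "content": "What is AgentOps?"}])
print(reply)

agentops.end_session("Success")
```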
Camel AI 🐪
Track and analyze CAMEL agents with full observability. Set an AGENTOPS_API_KEY
in your environment and initialize AgentOps to get started.
Installation
pip install "camel-ai[all]==0.2.11"
pip install agentops
```python
import os

import agentops
from camel.agents import ChatAgent
from camel.messages import BaseMessage
from camel.models import ModelFactory
from camel.toolkits import SearchToolkit
from camel.types import ModelPlatformType, ModelType

# Initialize AgentOps before constructing any agents
agentops.init(os.getenv("AGENTOPS_API_KEY"), default_tags=["CAMEL Example"])

sys_msg = BaseMessage.make_assistant_message(
    role_name='Tools calling operator',
    content='You are a helpful assistant'
)

tools = [*SearchToolkit().get_tools()]

model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
)

camel_agent = ChatAgent(
    system_message=sys_msg,
    model=model,
    tools=tools,
)

response = camel_agent.step("What is AgentOps?")
print(response)

agentops.end_session("Success")
```
Check out our Camel integration guide for more examples including multi-agent scenarios.
Langchain 🦜🔗
AgentOps works seamlessly with applications built using Langchain. To use the handler, install Langchain as an optional dependency:
Installation
```bash
pip install agentops[langchain]
```
To use the handler, import it and set it as a callback:
```python
import os

from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.partners.langchain_callback_handler import LangchainCallbackHandler

AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']

handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])

llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
                 callbacks=[handler],
                 model='gpt-3.5-turbo')

# `tools` is whatever Langchain tools your agent uses (defined elsewhere)
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True,
                         callbacks=[handler],  # pass the handler here too to record agent steps
                         handle_parsing_errors=True)
```
Check out the Langchain Examples Notebook for more details including Async handlers.
Cohere ⌨️
First-class support for Cohere (>=5.4.0). This is a living integration; should you need any added functionality, please message us on Discord!
Installation
```bash
pip install cohere
```
```python
import cohere
import agentops

agentops.init("<INSERT YOUR API KEY HERE>")

co = cohere.Client()

chat = co.chat(
    message="Is it pronounced ceaux-hear or co-hehray?"
)
print(chat)

agentops.end_session('Success')
```
```python
import cohere
import agentops

agentops.init("<INSERT YOUR API KEY HERE>")

co = cohere.Client()

stream = co.chat_stream(
    message="Write me a haiku about the synergies between Cohere and AgentOps"
)

for event in stream:
    if event.event_type == "text-generation":
        print(event.text, end='')

agentops.end_session('Success')
```
Anthropic ﹨
Track agents built with the Anthropic Python SDK (>=0.32.0).
Installation
```bash
pip install anthropic
```
```python
import os

import anthropic
import agentops

agentops.init("<INSERT YOUR API KEY HERE>")

client = anthropic.Anthropic(
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="claude-3-opus-20240229",
)
print(message.content)

agentops.end_session('Success')
```
Streaming
```python
import os

import anthropic
import agentops

agentops.init("<INSERT YOUR API KEY HERE>")

client = anthropic.Anthropic(
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

stream = client.messages.create(
    max_tokens=1024,
    model="claude-3-opus-20240229",
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    stream=True,
)

response = ""
for event in stream:
    if event.type == "content_block_delta":
        response += event.delta.text
    elif event.type == "message_stop":
        print("\n")
        print(response)
        print("\n")

agentops.end_session('Success')
```
Async
```python
import os
import asyncio

import agentops
from anthropic import AsyncAnthropic

agentops.init("<INSERT YOUR API KEY HERE>")

client = AsyncAnthropic(
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

async def main() -> None:
    message = await client.messages.create(
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="claude-3-opus-20240229",
    )
    print(message.content)

asyncio.run(main())

agentops.end_session('Success')
```
Mistral 〽️
Track agents built with the Mistral Python SDK.
Installation
```bash
pip install mistralai
```
Sync
```python
import os

import agentops
from mistralai import Mistral

agentops.init("<INSERT YOUR API KEY HERE>")

client = Mistral(
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

message = client.chat.complete(
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="open-mistral-nemo",
)
print(message.choices[0].message.content)

agentops.end_session('Success')
```
Streaming
```python
import os

import agentops
from mistralai import Mistral

agentops.init("<INSERT YOUR API KEY HERE>")

client = Mistral(
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

stream = client.chat.stream(
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    model="open-mistral-nemo",
)

response = ""
for event in stream:
    if event.data.choices[0].finish_reason == "stop":
        print("\n")
        print(response)
        print("\n")
    else:
        # Accumulate the streamed tokens from each chunk
        response += event.data.choices[0].delta.content

agentops.end_session('Success')
```
Async
```python
import os
import asyncio

import agentops
from mistralai import Mistral

agentops.init("<INSERT YOUR API KEY HERE>")

client = Mistral(
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

async def main() -> None:
    message = await client.chat.complete_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="open-mistral-nemo",
    )
    print(message.choices[0].message.content)

asyncio.run(main())

agentops.end_session('Success')
```
Async Streaming
```python
import os
import asyncio

import agentops
from mistralai import Mistral

agentops.init("<INSERT YOUR API KEY HERE>")

client = Mistral(
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

async def main() -> None:
    stream = await client.chat.stream_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async streaming agents",
            }
        ],
        model="open-mistral-nemo",
    )
    response = ""
    async for event in stream:
        if event.data.choices[0].finish_reason == "stop":
            print("\n")
            print(response)
            print("\n")
        else:
            # Accumulate the streamed tokens from each chunk
            response += event.data.choices[0].delta.content

asyncio.run(main())

agentops.end_session('Success')
```
CamelAI 🐪
Track agents built with the CamelAI Python SDK.
Installation
```bash
pip install "camel-ai[all]"
pip install agentops
```
```python
import os

import agentops
from dotenv import load_dotenv

# Load API keys from a .env file
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY") or "<your openai key here>"
agentops_api_key = os.getenv("AGENTOPS_API_KEY") or "<your agentops key here>"
```
You can find usage examples here!
LiteLLM 🚅
AgentOps provides support for LiteLLM (>=1.3.1), allowing you to call 100+ LLMs using the same input/output format.
Installation
```bash
pip install litellm
```
```python
# Do not use `import openai`. Instead, use:
import litellm
...
# Do not use `openai.chat.completions.create()`. Instead, use:
response = litellm.completion(model="claude-3", messages=messages)
# Or, in an async context:
response = await litellm.acompletion(model="claude-3", messages=messages)
```
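For reference, a minimal end-to-end sketch; the model name below is illustrative, and any LiteLLM-supported model string works:

```python
import agentops
import litellm

agentops.init("<INSERT YOUR API KEY HERE>")

response = litellm.completion(
    model="gpt-4o-mini",  # illustrative; swap in any LiteLLM-supported model
    messages=[{"role": "user", "content": "What is AgentOps?"}],
)
print(response.choices[0].message.content)

agentops.end_session("Success")
```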
LlamaIndex 🦙
AgentOps works seamlessly with applications built using LlamaIndex, a framework for building context-augmented generative AI applications with LLMs.
Installation
```bash
pip install llama-index-instrumentation-agentops
```
To use the handler, import it and set it as the global handler:
```python
from llama_index.core import set_global_handler

set_global_handler("agentops")
```
Check out the LlamaIndex docs for more details.
Llama Stack 🦙🥞
AgentOps provides support for the Llama Stack Python Client (>=0.0.53), allowing you to monitor your Agentic applications.
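A minimal sketch, assuming a Llama Stack server is already running locally; the base URL, model identifier, and response fields below are illustrative and may differ between client releases:

```python
import agentops
from llama_stack_client import LlamaStackClient

agentops.init("<INSERT YOUR API KEY HERE>")

# Assumes a Llama Stack server is reachable at this address (illustrative)
client = LlamaStackClient(base_url="http://localhost:5000")

response = client.inference.chat_completion(
    messages=[{"role": "user", "content": "What is AgentOps?"}],
    model="Llama3.1-8B-Instruct",  # illustrative; use a model registered with your server
)
print(response.completion_message.content)

agentops.end_session("Success")
```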
SwarmZero AI 🐝
Track and analyze SwarmZero agents with full observability. Set an AGENTOPS_API_KEY
in your environment and initialize AgentOps to get started.
Installation
```bash
pip install swarmzero
pip install agentops
```
```python
from dotenv import load_dotenv

# Load AGENTOPS_API_KEY (and any model provider keys) from a .env file
load_dotenv()

import agentops
agentops.init("<INSERT YOUR API KEY HERE>")

# Initialize AgentOps before importing SwarmZero so your agents and swarms are tracked
from swarmzero import Agent, Swarm
```
Time travel debugging 🔮
Try it out!
Agent Arena 🥊
(coming soon!)
Evaluations Roadmap 🧭
| Platform | Dashboard | Evals |
|---|---|---|
| ✅ Python SDK | ✅ Multi-session and Cross-session metrics | ✅ Custom eval metrics |
| 🚧 Evaluation builder API | ✅ Custom event tag tracking | 🔜 Agent scorecards |
| ✅ Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard |
Debugging Roadmap 🧭
| Performance testing | Environments | LLM Testing | Reasoning and execution testing |
|---|---|---|---|
| ✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loops and recursive thought detection |
| ✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection |
| 🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators |
| 🔜 Agent controllers/skill tests | ✅ Honeypot and prompt injection detection (PromptArmor) | 🔜 API bill tracking | 🔜 Error breakpoint analysis |
| 🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (i.e. Captchas) | 🔜 CI/CD integration checks | |
| 🔜 Regression testing | 🔜 Multi-agent framework visualization | | |
Why AgentOps? 🤔
Without the right tools, AI agents are slow, expensive, and unreliable. Our mission is to bring your agent from prototype to production. Here's why AgentOps stands out:
- Comprehensive Observability: Track your AI agents' performance, user interactions, and API usage.
- Real-Time Monitoring: Get instant insights with session replays, metrics, and live monitoring tools.
- Cost Control: Monitor and manage your spend on LLM and API calls.
- Failure Detection: Quickly identify and respond to agent failures and multi-agent interaction issues.
- Tool Usage Statistics: Understand how your agents utilize external tools with detailed analytics.
- Session-Wide Metrics: Gain a holistic view of your agents' sessions with comprehensive statistics.
AgentOps is designed to make agent observability, testing, and monitoring easy.
Star History
Check out our growth in the community:
Popular projects using AgentOps
Generated using github-dependents-info, by Nicolas Vuillamy