AgentOps helps developers build, evaluate, and monitor AI agents, from prototype to production.
| 🔍 Replay Analytics and Debugging | Step-by-step agent execution graphs |
| 💸 LLM Cost Management | Track spend with LLM foundation model providers |
| 🧪 Agent Benchmarking | Test your agents against 1,000+ evals |
| 🔐 Compliance and Security | Detect common prompt injection and data exfiltration exploits |
| 🤝 Framework Integrations | Native integrations with CrewAI, AG2 (AutoGen), Camel AI, & LangChain |
```shell
pip install agentops
```
Initialize the AgentOps client and automatically get analytics on all your LLM calls.
```python
import agentops

# Beginning of your program (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

...

# End of program
agentops.end_session('Success')
```
All your sessions can be viewed on the AgentOps dashboard
Add powerful observability to your agents, tools, and functions with as little code as possible: one line at a time.
Refer to our documentation.
```python
# Automatically associate all Events with the agent that originated them
from agentops import track_agent

@track_agent(name='SomeCustomName')
class MyAgent:
    ...

# Automatically create ToolEvents for tools that agents will use
from agentops import record_tool

@record_tool('SampleToolName')
def sample_tool(...):
    ...

# Automatically create ActionEvents for other functions
from agentops import record_action

@record_action('sample_function_being_recorded')
def sample_function(...):
    ...

# Manually record any other Events
from agentops import record, ActionEvent

record(ActionEvent("received_user_input"))
```
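Conceptually, a decorator like `record_action` wraps each call, measures it, and emits an event. The following is a simplified, self-contained stand-in to illustrate the pattern, not the real AgentOps implementation:

```python
import functools
import time

# Illustrative stand-in: events are collected in a local list instead of
# being sent to a dashboard.
EVENTS = []

def record_action(name):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            EVENTS.append({"action": name, "seconds": time.time() - start})
            return result
        return wrapper
    return decorator

@record_action("add_numbers")
def add_numbers(a, b):
    return a + b

print(add_numbers(2, 3))      # 5
print(EVENTS[-1]["action"])   # add_numbers
```

Because the wrapper returns the function's result unchanged, instrumenting a function this way does not alter your program's behavior.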
Build CrewAI agents with observability in just two lines of code. Simply set an AGENTOPS_API_KEY in your environment, and your crews will get automatic monitoring on the AgentOps dashboard.
```shell
pip install 'crewai[agentops]'
```
With only two lines of code, add full observability and monitoring to AG2 (formerly AutoGen) agents. Set an AGENTOPS_API_KEY in your environment and call agentops.init().
Track and analyze CAMEL agents with full observability. Set an AGENTOPS_API_KEY
in your environment and initialize AgentOps to get started.
```shell
pip install "camel-ai[all]==0.2.11"
pip install agentops
```
```python
import os
import agentops
from camel.agents import ChatAgent
from camel.messages import BaseMessage
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# Initialize AgentOps
agentops.init(os.getenv("AGENTOPS_API_KEY"), default_tags=["CAMEL Example"])

# Import toolkits after AgentOps init for tracking
from camel.toolkits import SearchToolkit

# Set up the agent with search tools
sys_msg = BaseMessage.make_assistant_message(
    role_name='Tools calling operator',
    content='You are a helpful assistant',
)

# Configure tools and model
tools = [*SearchToolkit().get_tools()]
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
)

# Create and run the agent
camel_agent = ChatAgent(
    system_message=sys_msg,
    model=model,
    tools=tools,
)

response = camel_agent.step("What is AgentOps?")
print(response)

agentops.end_session("Success")
```
Check out our Camel integration guide for more examples, including multi-agent scenarios.
AgentOps works seamlessly with applications built using LangChain. To use the handler, install LangChain as an optional dependency:
```shell
pip install agentops[langchain]
```
To use the handler, import it and set it up as follows:
```python
import os
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.partners.langchain_callback_handler import LangchainCallbackHandler

AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']
handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])

llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
                 callbacks=[handler],
                 model='gpt-3.5-turbo')

agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True,
                         callbacks=[handler],  # You must pass in a callback handler to record your agent
                         handle_parsing_errors=True)
```
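The callback-handler pattern is how AgentOps observes your agent without changing its logic: the framework invokes hooks on every registered handler at each step. A framework-agnostic sketch (the hook names here are illustrative, not LangChain's exact interface):

```python
class RecordingHandler:
    """Minimal stand-in for a callback handler: collects the events it sees."""
    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt):
        self.events.append(("llm_start", prompt))

    def on_llm_end(self, output):
        self.events.append(("llm_end", output))

def run_llm(prompt, callbacks=()):
    # Stand-in for a framework-managed LLM call: notify handlers
    # before and after the (pretend) model response.
    for cb in callbacks:
        cb.on_llm_start(prompt)
    output = prompt.upper()
    for cb in callbacks:
        cb.on_llm_end(output)
    return output

handler = RecordingHandler()
run_llm("hello", callbacks=[handler])
print(handler.events)  # [('llm_start', 'hello'), ('llm_end', 'HELLO')]
```

This is why the handler must be passed in `callbacks=` at both the LLM and agent level above: a handler that is never registered sees no events.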
Check out the LangChain Examples Notebook for more details, including async handlers.
First-class support for Cohere (>=5.4.0). This is a living integration; should you need any added functionality, please message us on Discord!
```shell
pip install cohere
```
```python
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

co = cohere.Client()

chat = co.chat(
    message="Is it pronounced ceaux-hear or co-hehray?"
)

print(chat)

agentops.end_session('Success')
```
```python
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

co = cohere.Client()

stream = co.chat_stream(
    message="Write me a haiku about the synergies between Cohere and AgentOps"
)

for event in stream:
    if event.event_type == "text-generation":
        print(event.text, end='')

agentops.end_session('Success')
```
Track agents built with the Anthropic Python SDK (>=0.32.0).
```shell
pip install anthropic
```
```python
import os

import anthropic
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="claude-3-opus-20240229",
)

print(message.content)

agentops.end_session('Success')
```
Streaming
```python
import os

import anthropic
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

stream = client.messages.create(
    max_tokens=1024,
    model="claude-3-opus-20240229",
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    stream=True,
)

response = ""
for event in stream:
    if event.type == "content_block_delta":
        response += event.delta.text
    elif event.type == "message_stop":
        print("\n")
        print(response)
        print("\n")
```
Async
```python
import os
import asyncio

from anthropic import AsyncAnthropic

client = AsyncAnthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

async def main() -> None:
    message = await client.messages.create(
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="claude-3-opus-20240229",
    )
    print(message.content)

asyncio.run(main())
```
Track agents built with the Mistral Python SDK.

```shell
pip install mistralai
```
Sync
```python
import os

from mistralai import Mistral
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

message = client.chat.complete(
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="open-mistral-nemo",
)

print(message.choices[0].message.content)

agentops.end_session('Success')
```
Streaming
```python
import os

from mistralai import Mistral
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

message = client.chat.stream(
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    model="open-mistral-nemo",
)

response = ""
for event in message:
    if event.data.choices[0].finish_reason == "stop":
        print("\n")
        print(response)
        print("\n")
    else:
        response += event.data.choices[0].delta.content

agentops.end_session('Success')
```
Async
```python
import os
import asyncio

from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

async def main() -> None:
    message = await client.chat.complete_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="open-mistral-nemo",
    )
    print(message.choices[0].message.content)

asyncio.run(main())
```
Async Streaming
```python
import os
import asyncio

from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

async def main() -> None:
    message = await client.chat.stream_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async streaming agents",
            }
        ],
        model="open-mistral-nemo",
    )
    response = ""
    async for event in message:
        if event.data.choices[0].finish_reason == "stop":
            print("\n")
            print(response)
            print("\n")
        else:
            response += event.data.choices[0].delta.content

asyncio.run(main())
```
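The `async for` loop above is the standard way to consume an async stream. A self-contained sketch of the same pattern, with a stand-in async generator in place of `client.chat.stream_async` (the one-word-per-event chunking is purely illustrative):

```python
import asyncio

async def fake_stream(text):
    # Stand-in for an SDK streaming call: yields one chunk per word.
    for word in text.split():
        await asyncio.sleep(0)  # simulate waiting on the network
        yield word

async def collect():
    chunks = []
    async for chunk in fake_stream("streaming agents are observable"):
        chunks.append(chunk)
    return " ".join(chunks)

print(asyncio.run(collect()))  # streaming agents are observable
```

Accumulating chunks and joining once at the end avoids repeated string concatenation inside the loop.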
Track agents built with the CamelAI Python SDK.
```shell
pip install camel-ai[all]
pip install agentops
```
```python
# Import dependencies
import agentops
import os
from getpass import getpass
from dotenv import load_dotenv

# Set keys
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY") or "<your openai key here>"
agentops_api_key = os.getenv("AGENTOPS_API_KEY") or "<your agentops key here>"
```
You can find usage examples here!
AgentOps provides support for LiteLLM (>=1.3.1), allowing you to call 100+ LLMs using the same input/output format.
```shell
pip install litellm
```
```python
# Do not use LiteLLM like this
# from litellm import completion
# ...
# response = completion(model="claude-3", messages=messages)

# Use LiteLLM like this
import litellm
...
response = litellm.completion(model="claude-3", messages=messages)
# or
response = await litellm.acompletion(model="claude-3", messages=messages)
```
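The reason for calling `litellm.completion` rather than a from-imported `completion` is that instrumentation of this kind typically works by replacing attributes on the module, and a from-import binds the original function before the patch happens. A demonstration with a stand-in module (not the real litellm or AgentOps code):

```python
import types

# Build a throwaway module with a `completion` attribute.
litellm_stub = types.ModuleType("litellm_stub")
litellm_stub.completion = lambda **kwargs: "original"

completion = litellm_stub.completion  # from-import style binding

# Patch the module attribute, the way an instrumentation layer would.
litellm_stub.completion = lambda **kwargs: "instrumented"

print(completion())               # original -- misses the instrumentation
print(litellm_stub.completion())  # instrumented
```

The from-import style binding still points at the unpatched function, so its calls are invisible to the instrumentation; attribute access through the module always sees the current, patched version.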
AgentOps works seamlessly with applications built using LlamaIndex, a framework for building context-augmented generative AI applications with LLMs.
```shell
pip install llama-index-instrumentation-agentops
```
To use the handler, import and set the global handler:
```python
from llama_index.core import set_global_handler

# NOTE: Feel free to set your AgentOps environment variables (e.g., 'AGENTOPS_API_KEY')
# as outlined in the AgentOps documentation, or pass the equivalent keyword arguments
# anticipated by AgentOps' AOClient as **eval_params in set_global_handler.
set_global_handler("agentops")
```
Check out the LlamaIndex docs for more details.
AgentOps provides support for the Llama Stack Python Client (>=0.0.53), allowing you to monitor your agentic applications.
Track and analyze SwarmZero agents with full observability. Set an AGENTOPS_API_KEY
in your environment and initialize AgentOps to get started.
```shell
pip install swarmzero
pip install agentops
```
```python
from dotenv import load_dotenv
load_dotenv()

import agentops
agentops.init(<INSERT YOUR API KEY HERE>)

from swarmzero import Agent, Swarm
# ...
```
(coming soon!)
| Platform | Dashboard | Evals |
|---|---|---|
| ✅ Python SDK | ✅ Multi-session and cross-session metrics | ✅ Custom eval metrics |
| 🚧 Evaluation builder API | ✅ Custom event tag tracking | 🔜 Agent scorecards |
| ✅ Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard |
| Performance testing | Environments | LLM testing | Reasoning and execution testing |
|---|---|---|---|
| ✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loops and recursive thought detection |
| ✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection |
| 🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators |
| 🔜 Agent controllers/skill tests | ✅ Honeypot and prompt injection detection (PromptArmor) | 🔜 API bill tracking | 🔜 Error breakpoint analysis |
| 🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (i.e. Captchas) | 🔜 CI/CD integration checks | |
| 🔜 Regression testing | 🔜 Multi-agent framework visualization | | |
Without the right tools, AI agents are slow, expensive, and unreliable. Our mission is to bring your agent from prototype to production. Here's why AgentOps stands out:
AgentOps is designed to make agent observability, testing, and monitoring easy.
Check out our growth in the community:
| Repository | Stars |
|---|---|
| geekan / MetaGPT | 42787 |
| run-llama / llama_index | 34446 |
| crewAIInc / crewAI | 18287 |
| camel-ai / camel | 5166 |
| superagent-ai / superagent | 5050 |
| iyaja / llama-fs | 4713 |
| BasedHardware / Omi | 2723 |
| MervinPraison / PraisonAI | 2007 |
| AgentOps-AI / Jaiqu | 272 |
| swarmzero / swarmzero | 195 |
| strnad / CrewAI-Studio | 134 |
| alejandro-ao / exa-crewai | 55 |
| tonykipkemboi / youtube_yapper_trapper | 47 |
| sethcoast / cover-letter-builder | 27 |
| bhancockio / chatgpt4o-analysis | 19 |
| breakstring / Agentic_Story_Book_Workflow | 14 |
| MULTI-ON / multion-python | 13 |
Generated using github-dependents-info, by Nicolas Vuillamy