CUGA is an open-source generalist agent for the enterprise, supporting complex task execution on web and APIs, OpenAPI/MCP integrations, composable architecture, reasoning modes, and policy-aware features.
CUGA: Configurable Generalist Agent - Agent Harness for the Enterprise
Start with a generalist. Customize for your domain. Deploy faster!
Building a domain-specific enterprise agent from scratch is complex and requires significant effort: agent and tool orchestration, planning logic, safety and alignment policies, evaluation of performance/cost tradeoffs, and ongoing improvement. CUGA is a state-of-the-art generalist agent designed with enterprise needs in mind, so you can focus on configuring your domain tools, policies, and workflows.
NEW: CUGA Enterprise SDK with Policy System - Build production-ready AI agents with enterprise-grade governance. Programmatically configure safety guards, workflow controls, and compliance policies via the Python SDK or a visual UI to ensure consistent, secure, and compliant agent behavior across your organization.
Policy Types & Enterprise Value:

| Policy Type | Value | Use Cases |
| --- | --- | --- |
| Intent Guard | Block unauthorized actions | Data deletion prevention, access restrictions, compliance enforcement |
CUGA achieves state-of-the-art performance on leading benchmarks:
#1 on AppWorld - a benchmark with 750 real-world tasks across 457 APIs
Top-tier on WebArena (#1 from 02/25 to 09/25) - a complex benchmark for autonomous web agents across application domains
Key Features & Capabilities
High-performing generalist agent - Benchmarked on complex web and API tasks. Combines best-of-breed agentic patterns (e.g. planner-executor, code-act) with structured planning and smart variable management to prevent hallucination and handle complexity
Configurable reasoning modes - Balance performance against cost and latency with flexible modes ranging from fast heuristics to deep planning, optimizing for your specific task requirements
Flexible agent and tool integration - Seamlessly integrate tools via OpenAPI specs, MCP servers, and LangChain, enabling rapid connection to REST APIs, custom protocols, and Python functions
Integrates with Langflow - Low-code visual build experience for designing and deploying agent workflows without extensive coding
Open-source and composable - Built with modularity in mind, CUGA itself can be exposed as a tool to other agents, enabling nested reasoning and multi-agent collaboration. Evolving toward enterprise-grade reliability
Policy System - Configure agent behavior with 5 policy types (Intent Guard, Playbook, Tool Approval, Tool Guide, Output Formatter) via the Python SDK or standalone UI in demo mode. Includes human-in-the-loop approval gates for safe agent behavior in enterprise contexts. See SDK Docs and Policies Guide
Save-and-reuse capabilities (Experimental) - Capture and reuse successful execution paths (plans, code, and trajectories) for faster, more consistent behavior across repeated tasks
get top account by revenue from digital sales then add it to current page
What you'll see: CUGA will fetch data from the Digital Sales API and then interact with the web page to add the account information directly to the current page - demonstrating seamless API-to-web workflow integration!
Human in the Loop Task Execution
Watch CUGA pause for human approval during critical decision points:
# In terminal, clone the repository and navigate into it
git clone https://github.com/cuga-project/cuga-agent.git
cd cuga-agent
# 1. Create and activate virtual environment
uv venv --python=3.12 && source .venv/bin/activate
# 2. Install dependencies
uv sync
# 3. Set up environment variables
# Create .env file with your API keys
echo "OPENAI_API_KEY=your-openai-api-key-here" > .env
# 4. Start the demo
cuga start demo_crm --read-only
# Chrome will open automatically at https://localhost:7860
# then try sending your task to CUGA: 'from contacts.txt show me which users belong to the crm system'
# 5. View agent trajectories (optional)
cuga viz
# This launches a web-based dashboard for visualizing and analyzing
# agent execution trajectories, decision-making, and tool usage
CUGA supports multiple LLM providers with flexible configuration options. You can configure models through TOML files or override specific settings using environment variables.
Supported Platforms
OpenAI - GPT models via OpenAI API (also supports LiteLLM via base URL override)
IBM WatsonX - IBM's enterprise LLM platform
Azure OpenAI - Microsoft's Azure OpenAI service
Groq - High-performance inference platform with fast LLM models
# OpenAI Configuration
OPENAI_API_KEY=sk-...your-key-here...
AGENT_SETTING_CONFIG="settings.openai.toml"
# Optional overrides
MODEL_NAME=gpt-4o # Override model name
OPENAI_BASE_URL=https://api.openai.com/v1 # Override base URL
OPENAI_API_VERSION=2024-08-06 # Override API version
# WatsonX Configuration
WATSONX_API_KEY=your-watsonx-api-key
WATSONX_PROJECT_ID=your-project-id
WATSONX_URL=https://us-south.ml.cloud.ibm.com # or your region
AGENT_SETTING_CONFIG="settings.watsonx.toml"
# Optional override
MODEL_NAME=meta-llama/llama-4-maverick-17b-128e-instruct-fp8 # Override model for all agents
CUGA supports LiteLLM through the OpenAI configuration by overriding the base URL:
Add to your .env file:
# LiteLLM Configuration (using OpenAI settings)
OPENAI_API_KEY=your-api-key
AGENT_SETTING_CONFIG="settings.openai.toml"
# Override for LiteLLM
MODEL_NAME=Azure/gpt-4o # Override model name
OPENAI_BASE_URL=https://your-litellm-endpoint.com # Override base URL
OPENAI_API_VERSION=2024-08-06 # Override API version
CUGA can be easily integrated into your Python applications as a library. The SDK provides a clean, minimal API for creating and invoking agents with custom tools.
from cuga import CugaAgent
from langchain_core.tools import tool
import asyncio
@tool
def add_numbers(a: int, b: int) -> int:
    """Add two numbers together"""
    return a + b

@tool
def multiply_numbers(a: int, b: int) -> int:
    """Multiply two numbers together"""
    return a * b
# Create agent with tools
agent = CugaAgent(tools=[add_numbers, multiply_numbers])
async def main():
    # Add an Intent Guard to block specific operations
    await agent.policies.add_intent_guard(
        name="Block Delete Operations",
        description="Prevents deletion of critical data",
        keywords=["delete", "remove", "erase"],
        response="Deletion operations are not permitted for security reasons.",
        priority=100,  # Higher priority = checked first
    )

    # Add a Playbook to provide step-by-step guidance for complex workflows
    await agent.policies.add_playbook(
        name="Budget Analysis Workflow",
        description="Multi-step process for analyzing financial budgets",
        natural_language_trigger=["When user asks to analyze their budget"],
        content="""# Budget Analysis Workflow

## Step 1: Calculate Total Expenses
- Sum all expense categories using add_numbers
- Document each category amount

## Step 2: Calculate Total Revenue
- Sum all revenue streams using add_numbers
- Include all income sources

## Step 3: Calculate Profit Margin
- Use multiply_numbers to calculate profit (revenue - expenses)
- Calculate margin percentage

## Step 4: Generate Recommendations
- Compare against target budget
- Identify areas for optimization
- Provide actionable insights""",
        priority=50,
    )

    result = await agent.invoke("Analyze my budget: expenses are 5000 and 3000, revenue is 12000")
    print(result.answer)  # The agent's response

if __name__ == "__main__":
    asyncio.run(main())
CUGA includes a built-in knowledge base powered by LangChain and local vector stores. Docling is integrated for document ingestion: it parses and normalizes PDFs, Office files, HTML, Markdown, images, and other supported types before chunking and embedding, so the pipeline stays self-contained with no external document services.
When enabled, the agent can search, ingest, and manage documents.
Try the knowledge demo: same as the main demo but with the knowledge engine on (upload documents and query them):
cuga start demo_knowledge
Knowledge is enabled by default via settings.toml. The SDK auto-injects knowledge tools
and awareness into the agent, so it knows what documents are available and how to search them.
Programmatic Access
from cuga import CugaAgent
import asyncio
agent = CugaAgent(enable_knowledge=True)
async def main():
    # Ingest a document
    await agent.knowledge.ingest("/path/to/quarterly_report.pdf")

    # The agent now automatically knows about this document
    result = await agent.invoke("What does the report say about Q4 revenue?")
    print(result.answer)  # Agent searches knowledge base and answers

    # Direct search
    results = await agent.knowledge.search("Q4 revenue figures")
    for r in results:
        print(f"{r['filename']} (page {r['page']}): {r['text'][:100]}")

    # List documents
    docs = await agent.knowledge.list_documents()

    # Clean up
    await agent.aclose()

asyncio.run(main())
Session-Scoped Knowledge
Documents can be scoped to a specific conversation thread.
Supported formats: PDF, DOCX, XLSX, PPTX, HTML, Markdown, images, and more (via Docling).
CugaSupervisor (Multi-Agent)
Orchestrate multiple agents with a single supervisor: delegate tasks to specialized sub-agents, mix local agents with remote A2A agents, and pass data between them.
Try the supervisor demo: run the multi-agent demo (CRM + email sub-agents) with:
cuga start demo_supervisor
Quick Start
from cuga import CugaAgent, CugaSupervisor
from langchain_core.tools import tool
import asyncio
@tool
def get_customers(limit: int = 10) -> str:
    """Fetch top customers from CRM with name, email, and revenue. Returns a formatted string."""
    customers = [
        "Alice (alice@example.com, $250,000)",
        "Bob (bob@example.com, $180,000)",
        "Carol (carol@example.com, $120,000)",
        "Dave (dave@example.com, $95,000)",
        "Eve (eve@example.com, $88,000)",
    ]
    top = customers[: min(limit, len(customers))]
    return "Top customers by revenue: " + "; ".join(f"{i + 1}. {c}" for i, c in enumerate(top))

@tool
def send_email(to: str, body: str) -> str:
    """Send an email. Returns confirmation."""
    return f"Email sent successfully to {to}"

async def main():
    crm_agent = CugaAgent(tools=[get_customers])
    crm_agent.description = "CRM and customer data"

    email_agent = CugaAgent(tools=[send_email])
    email_agent.description = "Sending emails and notifications"

    supervisor = CugaSupervisor(agents={
        "crm": crm_agent,
        "email": email_agent,
    })

    result = await supervisor.invoke("Get our top 5 customers by revenue, then send the top customer a thank-you email")
    print(result.answer)

asyncio.run(main())
To add a remote agent via A2A, pass an external config in agents: "analytics": {"type": "external", "description": "...", "config": {"a2a_protocol": {"endpoint": "http://localhost:9999", "transport": "http"}}}.
Supervisor features
Delegation: Supervisor hands work to sub-agents and can pass variables between them when needed.
Internal + external: Combine local CugaAgent instances with external agents via A2A, passing either the task only or also variables in metadata if enabled.
Variable passing: Use variables=["var_name"] to pass previous agent outputs or context to the next agent (for internal agents, or A2A when pass_variables_a2a is enabled in settings).
Agent cards: For A2A agents, capabilities and description are taken from the agent card and shown in the supervisor prompt.
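The variable-passing idea can be pictured with a stdlib-only sketch. This is an illustration of the mechanism, not CUGA's actual orchestration code: each sub-agent's outputs land in a shared store, and only the variables named for a step are injected into the next delegated call:

```python
# Illustrative sketch of supervisor variable passing (not CUGA's internals):
# outputs from each sub-agent go into a shared store, and only the variables
# named for a step are passed into that step's context.

def run_with_variables(agents, steps):
    """agents: name -> callable(task, context_dict) -> dict of output variables.
    steps: list of (agent_name, task, variables) tuples, where `variables`
    names the stored values to pass in, like variables=["var_name"]."""
    store = {}
    last_output = {}
    for agent_name, task, variables in steps:
        context = {name: store[name] for name in variables if name in store}
        last_output = agents[agent_name](task, context)
        store.update(last_output)  # make this agent's outputs available later
    return store, last_output

# Toy sub-agents standing in for CugaAgent instances
agents = {
    "crm": lambda task, ctx: {"top_customer": "alice@example.com"},
    "email": lambda task, ctx: {"status": f"sent to {ctx['top_customer']}"},
}

store, out = run_with_variables(agents, steps=[
    ("crm", "get top customer", []),
    ("email", "send thank-you", ["top_customer"]),  # variables=["top_customer"]
])
print(out["status"])  # sent to alice@example.com
```

For A2A agents the same hand-off only happens when pass_variables_a2a is enabled in settings; otherwise only the task text crosses the boundary.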
You can also load agents from YAML with CugaSupervisor.from_yaml("path/to/config.yaml"). Enable the supervisor in settings.toml under [supervisor] when using the server.
Configurations
Running with a secure code sandbox
CUGA supports isolated code execution using Docker/Podman containers for enhanced security.
Install container runtime: Download and install Rancher Desktop or Docker.
Install sandbox dependencies:
uv sync --group sandbox
Start with remote sandbox enabled:
cuga start demo --sandbox
This automatically configures CUGA to use Docker/Podman for code execution instead of local execution.
Test your sandbox setup (optional):
# Test local sandbox (default)
cuga test-sandbox
# Test remote sandbox with Docker/Podman
cuga test-sandbox --remote
You should see the output: ('test succeeded\n', {})
Note: Without the --sandbox flag, CUGA uses local Python execution (the default), which is faster but provides less isolation.
Running with E2B Cloud Sandbox
CUGA supports E2B for cloud-based code execution in secure, ephemeral sandboxes. This provides better isolation than local execution while being faster than Docker/Podman containers.
# Install E2B CLI
npm install -g @e2b/cli
# Login with your API key
e2b auth login
# Create a template (one-time setup)
# This creates a 'cuga-langchain' template that CUGA uses
e2b template build --name cuga-langchain
Install E2B dependencies:
uv sync --group e2b
Configure environment:
Add to your .env file:
E2B_API_KEY=your-e2b-api-key-here
Exposing Registry to E2B (Required)
E2B runs in the cloud and needs to call your local API registry to execute tools. You need to expose your local registry publicly using a tunneling service like ngrok.
Option 1: Expose Registry Directly (Port 8001)
Best if you have multiple ports available:
# In a separate terminal, start ngrok tunnel to registry
ngrok http 8001
# You'll get a public URL like: https://abc123.ngrok.io
# Copy this URL
Then edit ./src/cuga/settings.toml:
[server_ports]
function_call_host = "https://abc123.ngrok.io"  # Your ngrok URL
Option 2: Expose CUGA Port with Proxy (Port 7860)
Best if you're restricted to 1 port - CUGA will proxy calls to the registry:
# In a separate terminal, start ngrok tunnel to CUGA
ngrok http 7860
# You'll get a public URL like: https://xyz789.ngrok.io
# Copy this URL
Then edit ./src/cuga/settings.toml:
[server_ports]
function_call_host = "https://xyz789.ngrok.io"  # Your ngrok URL
CUGA automatically proxies /functions/call requests to the registry when using the CUGA port.
Task Mode Configuration - Switch between API/Web/Hybrid modes
Available Task Modes
| Mode | Description |
| --- | --- |
| api | API-only mode - executes API tasks (default) |
| web | Web-only mode - executes web tasks using the browser extension |
| hybrid | Hybrid mode - executes both API tasks and web tasks using the browser extension |
How Task Modes Work
API Mode (mode = 'api')
Opens tasks in a regular web browser
Best for API/Tools-focused workflows and testing
Web Mode (mode = 'web')
Interface inside a browser extension (available next to browser)
Optimized for web-specific tasks and interactions
Direct access to web page content and controls
Hybrid Mode (mode = 'hybrid')
Opens inside browser extension like web mode
Can execute both API/Tools tasks and web page tasks simultaneously
Starts from configurable URL defined in demo_mode.start_url
Most versatile mode for complex workflows combining web and API operations
Configuration
Edit ./src/cuga/settings.toml:
[demo_mode]
start_url = "https://opensource-demo.orangehrmlive.com/web/index.php/auth/login"  # Starting URL for hybrid mode

[advanced_features]
mode = 'api'  # 'api', 'web', or 'hybrid'
Special Instructions Configuration
How It Works
Each .md file contains specialized instructions that are automatically integrated into CUGA's internal prompts when that component is active. Simply edit the markdown files to customize behavior for each node type.
Available instruction sets: answer, api_planner, code_agent, plan_controller, reflection, shortlister, task_decomposition
[instructions]
instruction_set = "default"  # or any instruction set above
Optional: Use Evolve with CugaLite
Evolve can now be used with CugaLite to bring task-specific guidance into the prompt before execution and save completed trajectories after the run.
This flow is:
Opt-in - disabled by default
Non-blocking - Evolve failures do not fail the task
CugaLite-focused - enabled for lite mode by default
Optional integration - install cuga[evolve] if you want the upstream Evolve package available locally, or let uvx fetch it on demand
Setup Steps:
Choose how Evolve will be started.
Recommended for normal CUGA usage: let the CUGA MCP registry launch Evolve for you.
In the manager UI, add an MCP tool with:
Name: evolve
Connection type: Command (stdio)
Command: uvx
Args: --from altk-evolve --with setuptools<70 evolve-mcp
Important: this command starts Evolve in stdio mode through the upstream Evolve package. It is intended to be launched by the CUGA registry, not run manually in a separate terminal.
Alternative for standalone/manual debugging: run Evolve yourself as an SSE server:
If you run Evolve from a checked-out altk-evolve repo instead of uvx, install the Postgres extras first with uv sync --extra pgvector.
Each env://... value tells CUGA to read the real secret or setting from its own process environment at runtime. Make sure PostgreSQL is reachable, pgvector is available, and the configured OpenAI/LiteLLM-compatible model is one your gateway is allowed to use.
[Optional] Edit ./src/cuga/settings.toml and enable lite mode plus Evolve:
If you use the recommended registry-managed setup above, keep mode = "auto" or set mode = "registry".
If you run Evolve manually as a standalone SSE server, keep url = "http://127.0.0.1:8201/sse" and set mode = "direct" if you want to skip registry lookup entirely.
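Putting the options above together, the settings.toml fragment might look like the sketch below. The section name and exact keys here are assumptions inferred from this guide (mode, url, async_save, save_on_success, save_on_failure are the options it describes), so verify them against the settings.toml shipped with your version:

```toml
# Hypothetical fragment - verify section and key names in your settings.toml
[evolve]
mode = "auto"                      # "auto", "registry", or "direct"
url = "http://127.0.0.1:8201/sse"  # used for direct SSE mode / fallback
async_save = true                  # save trajectories in the background
save_on_success = true             # record successful runs
save_on_failure = false            # skip failed runs
```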
If you use Evolve tip generation, make sure the environment for the Evolve MCP server includes the required Evolve model settings. Otherwise save_trajectory may fail later with a LiteLLM/OpenAI model access error even when the MCP connection itself works.
Start the same CRM demo with sample workspace files:
cuga start demo_crm --sample-memory-data
Run a task that routes through CugaLite, for example:
Identify the common cities between my cuga_workspace/cities.txt and cuga_workspace/company.txt
What happens during a run?
CUGA derives the task description from the current sub-task or first user message
CugaLite asks Evolve for relevant guidelines
Returned guidelines are appended to the system prompt under an Evolve Guidelines section
The task executes normally
The user / assistant trajectory is saved back to Evolve after completion
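The guideline-injection step above can be sketched in plain Python. This is an illustration of the idea, not CUGA's actual code: returned guidelines are appended to the system prompt under a dedicated section, and an empty response leaves the prompt untouched so the run continues normally:

```python
# Illustrative sketch (not CUGA's code): append Evolve guidelines to the
# system prompt under an "Evolve Guidelines" section; when Evolve returns
# nothing, the prompt is left unchanged.

def inject_guidelines(system_prompt: str, guidelines: list[str]) -> str:
    if not guidelines:
        return system_prompt  # Evolve unavailable / no guidance: proceed normally
    section = "\n\n## Evolve Guidelines\n" + "\n".join(f"- {g}" for g in guidelines)
    return system_prompt + section

base = "You are CugaLite, an efficient task agent."
print(inject_guidelines(base, ["Prefer files in cuga_workspace/ over network calls"]))
print(inject_guidelines(base, []) == base)  # True
```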
Notes
async_save = true saves trajectories in the background and avoids blocking the response
save_on_success and save_on_failure let you control which runs are recorded
mode = "auto" lets CUGA use a registry-managed Evolve MCP server when available and fall back to the direct SSE URL otherwise
mode = "registry" is best when you want Evolve to be fully managed as a normal CUGA MCP tool
mode = "direct" is best when you are manually running an SSE Evolve server outside CUGA
If Evolve is unavailable, times out, or returns no guidance, CUGA continues normally