
Universal MCP Client with multi-transport support and LLM-powered tool routing
MCPOmni Connect is a complete AI platform that evolved from a world-class MCP client into a broader ecosystem. It now includes OmniAgent - an AI agent builder born from MCPOmni Connect's powerful foundation. Build production-ready AI agents, use the advanced MCP CLI, or combine both.
OmniAgent - born from MCPOmni Connect's foundation: create intelligent, autonomous agents.
Universal MCP CLI - an advanced command-line interface for connecting to any Model Context Protocol server.
🎯 Perfect for: developers who want the complete AI ecosystem - build custom agents AND have world-class MCP connectivity.
🚀 Introducing OmniAgent - a revolutionary AI agent system that brings plug-and-play intelligence to your applications!
See @tool_registry.register_tool("tool_name") in run_omni_agent.py for 12+ EXAMPLE tool registration patterns.
New user? Start with the ⚙️ Configuration Guide to understand the difference between config files, transport types, and OAuth behavior. Then check out the 🧪 Testing section to get started quickly.
- Memory backends: /memory_store:redis, /memory_store:database:postgresql://user:pass@host/db
- Memory strategies: /memory_mode:sliding_window:5, /memory_mode:token_budget:3000
- Vector databases (QDRANT_HOST and QDRANT_PORT); set ENABLE_VECTOR_DB=true for long-term and episodic memory
- Event stores: /event_store:redis_stream, /event_store:in_memory
MCPOmni Connect Platform
├── 🤖 OmniAgent System (Revolutionary Agent Builder)
│   ├── Local Tools Registry
│   ├── Background Agent Manager
│   ├── Custom Agent Creation
│   └── Agent Orchestration Engine
├── 🔌 Universal MCP Client (World-Class CLI)
│   ├── Transport Layer (stdio, SSE, HTTP, Docker, NPX)
│   ├── Multi-Server Orchestration
│   ├── Authentication & Security
│   └── Connection Lifecycle Management
├── 🧠 Shared Memory System (Both Systems)
│   ├── Multi-Backend Storage (Redis, DB, In-Memory)
│   ├── Vector Database Integration (Qdrant, ChromaDB)
│   ├── Memory Strategies (Sliding Window, Token Budget)
│   └── Session Management
├── 📡 Event System (Both Systems)
│   ├── In-Memory Event Processing
│   ├── Redis Streams for Persistence
│   └── Real-Time Event Monitoring
├── 🛠️ Tool Management (Both Systems)
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   ├── Local Python Tool Registration
│   └── Tool Execution Engine
└── 🤖 AI Integration (Both Systems)
    ├── LiteLLM (100+ Models)
    ├── Context Management
    ├── ReAct Agent Processing
    └── Response Generation
uv add mcpomni-connect
pip install mcpomni-connect
# Set up environment variables
echo "LLM_API_KEY=your_api_key_here" > .env
# Optional: Configure Redis connection
echo "REDIS_URL=redis://localhost:6379/0" >> .env
# Optional: Configure database connection
echo "DATABASE_URL=sqlite:///mcpomni_memory.db" >> .env
# Configure your servers in servers_config.json
MCPOmni Connect uses two separate configuration files for different purposes:
.env File - Environment Variables
Contains sensitive information like API keys and optional settings:
# Required: Your LLM provider API key
LLM_API_KEY=your_api_key_here
# Optional: Memory Storage Configuration
DATABASE_URL=sqlite:///mcpomni_memory.db
REDIS_URL=redis://localhost:6379/0
servers_config.json - Server & Agent Configuration
Contains application settings, LLM configuration, and MCP server connections:
{
"AgentConfig": {
"tool_call_timeout": 30,
"max_steps": 15,
"request_limit": 1000,
"total_tokens_limit": 100000
},
"LLM": {
"provider": "openai",
"model": "gpt-4o-mini",
"temperature": 0.7,
"max_tokens": 5000,
"top_p": 0.7
},
"mcpServers": {
"your-server-name": {
"transport_type": "stdio",
"command": "uvx",
"args": ["mcp-server-package"]
}
}
}
MCPOmni Connect supports multiple ways to connect to MCP servers:
Use when: Connecting to local MCP servers that run as separate processes
{
"server-name": {
"transport_type": "stdio",
"command": "uvx",
"args": ["mcp-server-package"]
}
}
Use when: Connecting to HTTP-based MCP servers using Server-Sent Events
{
"server-name": {
"transport_type": "sse",
"url": "http://your-server.com:4010/sse",
"headers": {
"Authorization": "Bearer your-token"
},
"timeout": 60,
"sse_read_timeout": 120
}
}
Use when: Connecting to HTTP-based MCP servers with or without OAuth
Without OAuth (Bearer Token):
{
"server-name": {
"transport_type": "streamable_http",
"url": "http://your-server.com:4010/mcp",
"headers": {
"Authorization": "Bearer your-token"
},
"timeout": 60
}
}
With OAuth:
{
"server-name": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://your-server.com:4010/mcp"
}
}
Important: When using OAuth authentication, MCPOmni Connect automatically starts an OAuth callback server on http://localhost:3000. You will see:
🖥️ Started callback server on http://localhost:3000
- The callback address http://localhost:3000 is hardcoded and cannot be changed.
- The callback server starts whenever "auth": {"method": "oauth"} appears in your config.
- If your server doesn't use OAuth, remove the "auth" section from your server configuration and use "headers" with "Authorization": "Bearer token" instead.
Possible Causes & Solutions:
Wrong Transport Type
Problem: Your server expects 'stdio' but you configured 'streamable_http'
Solution: Check your server's documentation for the correct transport type
OAuth Configuration Mismatch
Problem: Your server doesn't support OAuth but you have "auth": {"method": "oauth"}
Solution: Remove the "auth" section entirely and use headers instead:
"headers": {
"Authorization": "Bearer your-token"
}
Server Not Running
Problem: The MCP server at the specified URL is not running
Solution: Start your MCP server first, then connect with MCPOmni Connect
Wrong URL or Port
Problem: URL in config doesn't match where your server is running
Solution: Verify the server's actual address and port
Yes, this is completely normal when you set "auth": {"method": "oauth"} in any server configuration.
If you don't want the OAuth server, remove "auth": {"method": "oauth"} from all server configurations.
{
"mcpServers": {
"local-tools": {
"transport_type": "stdio",
"command": "uvx",
"args": ["mcp-server-tools"]
}
}
}
{
"mcpServers": {
"remote-api": {
"transport_type": "streamable_http",
"url": "http://api.example.com:8080/mcp",
"headers": {
"Authorization": "Bearer abc123token"
}
}
}
}
{
"mcpServers": {
"oauth-server": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://oauth-server.com:8080/mcp"
}
}
}
Start the CLI - ensure your API key is exported or create a .env file:
mcpomni_connect
# Run all tests with verbose output
pytest tests/ -v
# Run specific test file
pytest tests/test_specific_file.py -v
# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing
tests/
└── unit/   # Unit tests for individual components
Installation
# Clone the repository
git clone https://github.com/Abiorh001/mcp_omni_connect.git
cd mcp_omni_connect
# Create and activate virtual environment
uv venv
source .venv/bin/activate
# Install dependencies
uv sync
Configuration
# Set up environment variables
echo "LLM_API_KEY=your_api_key_here" > .env
# Configure your servers in servers_config.json
Start Client
uv run run.py
Or:
python run.py
You can run the basic CLI example to interact with MCPOmni Connect directly from the terminal.
Using uv (recommended):
uv run examples/basic.py
Or using Python directly:
python examples/basic.py
Build intelligent agents that combine MCP tools with local tools for powerful automation.
from mcpomni_connect.omni_agent import OmniAgent
from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter
from mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry
# Create local tools registry
tool_registry = ToolRegistry()
# Register your custom tools directly with the agent
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
"""Calculate the area of a rectangle."""
area = length * width
return f"Area of rectangle ({length} x {width}): {area} square units"
@tool_registry.register_tool("analyze_text")
def analyze_text(text: str) -> str:
"""Analyze text and return word count and character count."""
words = len(text.split())
chars = len(text)
return f"Analysis: {words} words, {chars} characters"
# Initialize memory store
memory_store = MemoryRouter(memory_store_type="redis") # or "postgresql", "sqlite", "mysql"
event_router = EventRouter(event_store_type="in_memory")
# Create OmniAgent with LOCAL TOOLS + MCP TOOLS
agent = OmniAgent(
name="my_agent",
system_instruction="You are a helpful assistant with access to custom tools and file operations.",
model_config={
"provider": "openai",
"model": "gpt-4o",
"max_context_length": 50000,
},
# Your custom local tools
local_tools=tool_registry,
# MCP server tools
mcp_tools=[
{
"name": "filesystem",
"transport_type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"],
}
],
memory_store=memory_store,
event_router=event_router
)
# Now the agent can use BOTH your custom tools AND MCP tools!
result = await agent.run("Calculate the area of a 10x5 rectangle, then analyze this text: 'Hello world'")
print(f"Response: {result['response']}")
print(f"Session ID: {result['session_id']}")
Create autonomous agents that run in the background and execute tasks automatically:
from mcpomni_connect.omni_agent.background_agent.background_agent_manager import BackgroundAgentManager
from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter
# Initialize components
memory_store = MemoryRouter(memory_store_type="in_memory")
event_router = EventRouter(event_store_type="in_memory")
# Create background agent manager
manager = BackgroundAgentManager(
memory_store=memory_store,
event_router=event_router
)
# Create a self-flying background agent
agent_config = {
"agent_id": "system_monitor",
"system_instruction": "You are a system monitoring agent that checks system health.",
"model_config": {
"provider": "openai",
"model": "gpt-4o",
"temperature": 0.7,
},
"local_tools": tool_registry, # Your tool registry
"agent_config": {
"max_steps": 10,
"tool_call_timeout": 30,
},
"interval": 60, # Run every 60 seconds
"max_retries": 3,
"retry_delay": 30,
"task_config": {
"query": "Check system status and report any critical issues.",
"description": "System health monitoring task"
}
}
# Create and start the background agent
result = manager.create_agent(agent_config)
manager.start() # Start all background agents
# Monitor events in real-time
async for event in manager.get_agent("system_monitor").stream_events(result["session_id"]):
print(f"Background Agent Event: {event.type} - {event.payload}")
# Runtime task updates
manager.update_task_config("system_monitor", {
"query": "Perform emergency system check and report critical issues immediately.",
"description": "Emergency system check task",
"priority": "high"
})
Maintain conversation continuity across multiple interactions:
# Use session ID for conversation continuity
session_id = "user_123_conversation"
result1 = await agent.run("Hello! My name is Alice.", session_id)
result2 = await agent.run("What did I tell you my name was?", session_id)
# Get conversation history
history = await agent.get_session_history(session_id)
# Stream events in real-time
async for event in agent.stream_events(session_id):
print(f"Event: {event.type} - {event.payload}")
Study these comprehensive examples to see OmniAgent in action:
- examples/omni_agent_example.py - ✅ COMPLETE DEMO showing all OmniAgent features
- examples/background_agent_example.py - Self-flying background agents
- run_omni_agent.py - Advanced EXAMPLE patterns (study only, not for end-user use)
- examples/basic.py - Simple agent setup patterns
- examples/web_server.py - FastAPI web interface
- examples/vector_db_examples.py - Advanced vector memory
- Provider examples: anthropic.py, groq.py, azure.py, ollama.py
💡 Pro Tip: Run python examples/omni_agent_example.py to see the full capabilities in action!
# Study the examples to learn patterns:
python examples/basic.py # Simple setup
python examples/omni_agent_example.py # Complete demo
python examples/background_agent_example.py # Self-flying agents
python examples/web_server.py # Web interface
# Then build your own using the patterns!
# World-class MCP client with advanced features
python run.py
# OR: mcpomni-connect --config servers_config.json
# Features: Connect to MCP servers, agentic modes, advanced memory
# Comprehensive testing interface - Study 12+ EXAMPLE tools
python run_omni_agent.py --mode cli
# Study this file to see tool registration patterns and CLI features
# Contains many examples of how to create custom tools
💡 Pro Tip: Most developers use both paths - the MCP CLI for daily workflow and OmniAgent for building custom solutions!
One of OmniAgent's most powerful features is the ability to register your own Python functions as AI tools. The agent can then intelligently use these tools to complete tasks.
from mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry
# Create tool registry
tool_registry = ToolRegistry()
# Register your custom tools with simple decorator
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
"""Calculate the area of a rectangle."""
area = length * width
return f"Area of rectangle ({length} x {width}): {area} square units"
@tool_registry.register_tool("analyze_text")
def analyze_text(text: str) -> str:
"""Analyze text and return word count and character count."""
words = len(text.split())
chars = len(text)
return f"Analysis: {words} words, {chars} characters"
@tool_registry.register_tool("system_status")
def get_system_status() -> str:
"""Get current system status information."""
import platform
import time
return f"System: {platform.system()}, Time: {time.strftime('%Y-%m-%d %H:%M:%S')}"
# Use tools with OmniAgent
agent = OmniAgent(
name="my_agent",
local_tools=tool_registry, # Your custom tools!
# ... other config
)
# Now the AI can use your tools!
result = await agent.run("Calculate the area of a 10x5 rectangle and tell me the current system time")
No built-in tools - you create exactly what you need! Study these EXAMPLE patterns from run_omni_agent.py:
Mathematical Tools Examples:
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
area = length * width
return f"Area: {area} square units"
@tool_registry.register_tool("analyze_numbers")
def analyze_numbers(numbers: str) -> str:
num_list = [float(x.strip()) for x in numbers.split(",")]
return f"Count: {len(num_list)}, Average: {sum(num_list)/len(num_list):.2f}"
System Tools Examples:
@tool_registry.register_tool("system_info")
def get_system_info() -> str:
import platform
return f"OS: {platform.system()}, Python: {platform.python_version()}"
File Tools Examples:
@tool_registry.register_tool("list_files")
def list_directory(path: str = ".") -> str:
import os
files = os.listdir(path)
return f"Found {len(files)} items in {path}"
1. Simple Function Tools:
@tool_registry.register_tool("weather_check")
def check_weather(city: str) -> str:
    """Get weather information for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 25°C"
2. Complex Analysis Tools:
@tool_registry.register_tool("data_analysis")
def analyze_data(data: str, analysis_type: str = "summary") -> str:
    """Analyze data with different analysis types."""
    import json
    try:
        data_obj = json.loads(data)
    except json.JSONDecodeError:
        return "Invalid data format"
    if analysis_type == "summary":
        return f"Data contains {len(data_obj)} items"
    elif analysis_type == "detailed":
        # Complex analysis logic
        return "Detailed analysis results..."
    return f"Unknown analysis type: {analysis_type}"
3. File Processing Tools:
@tool_registry.register_tool("process_file")
def process_file(file_path: str, operation: str) -> str:
    """Process files with different operations."""
    try:
        if operation == "read":
            with open(file_path, "r") as f:
                content = f.read()
            return f"File content (first 100 chars): {content[:100]}..."
        elif operation == "count_lines":
            with open(file_path, "r") as f:
                lines = len(f.readlines())
            return f"File has {lines} lines"
        else:
            return f"Unsupported operation: {operation}"
    except Exception as e:
        return f"Error processing file: {e}"
Create a .env file with your configuration:
# ===============================================
# Required: AI Model API Key
# ===============================================
LLM_API_KEY=your_api_key_here
# ===============================================
# Memory Storage Configuration (NEW!)
# ===============================================
# Database backend (PostgreSQL, MySQL, SQLite)
DATABASE_URL=sqlite:///mcpomni_memory.db
# DATABASE_URL=postgresql://user:password@localhost:5432/mcpomni
# DATABASE_URL=mysql://user:password@localhost:3306/mcpomni
# Redis for memory and event storage (single URL)
REDIS_URL=redis://localhost:6379/0
# REDIS_URL=redis://:password@localhost:6379/0 # With password
# ===============================================
# Vector Database Configuration (NEW!)
# ===============================================
# Enable vector databases for long-term & episodic memory
ENABLE_VECTOR_DB=true
# Qdrant (Production-grade vector search)
QDRANT_HOST=localhost
QDRANT_PORT=6333
# ChromaDB uses local storage automatically if Qdrant not available
For Long-term & Episodic Memory:
Enable Vector Databases:
ENABLE_VECTOR_DB=true
Option A: Use Qdrant (Recommended for Production):
# Install and run Qdrant
docker run -p 6333:6333 qdrant/qdrant
# Set environment variables
QDRANT_HOST=localhost
QDRANT_PORT=6333
Option B: Use ChromaDB (Automatic Local Fallback):
# Install ChromaDB (usually auto-installed)
pip install chromadb
# No additional configuration needed - uses local .chroma_db directory
Memory Store Management:
# Switch between memory backends
/memory_store:in_memory # Fast in-memory storage (default)
/memory_store:redis # Redis persistent storage
/memory_store:database # SQLite database storage
/memory_store:database:postgresql://user:pass@host/db # PostgreSQL
/memory_store:database:mysql://user:pass@host/db # MySQL
# Memory strategy configuration
/memory_mode:sliding_window:10 # Keep last 10 messages
/memory_mode:token_budget:5000 # Keep under 5000 tokens
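For intuition, the two memory strategies can be sketched in plain Python. This is a hypothetical illustration of the trimming logic only, not MCPOmni Connect's actual implementation; the 4-characters-per-token estimate is an assumption made for the sketch.

```python
# Hypothetical sketch of the two memory strategies above; the real
# implementation in MCPOmni Connect may differ.

def sliding_window(messages: list[str], window: int = 10) -> list[str]:
    """Keep only the last `window` messages (cf. /memory_mode:sliding_window:10)."""
    return messages[-window:]

def token_budget(messages: list[str], budget: int = 5000) -> list[str]:
    """Drop oldest messages until the estimated token count fits the budget
    (cf. /memory_mode:token_budget:5000). Assumes ~4 characters per token."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):          # walk newest-first
        tokens = max(1, len(msg) // 4)      # crude token estimate (assumption)
        if total + tokens > budget:
            break
        kept.append(msg)
        total += tokens
    return list(reversed(kept))             # restore chronological order

history = [f"message {i}" for i in range(20)]
print(len(sliding_window(history, 10)))  # 10
print(len(token_budget(history, 10)))    # 5 (each message estimates to ~2 tokens)
```

Sliding window gives predictable memory size; token budget adapts to message length, which matters when individual messages vary a lot.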
Event Store Management:
# Switch between event backends
/event_store:in_memory # Fast in-memory events (default)
/event_store:redis_stream # Redis Streams for persistence
Enhanced Commands:
# Memory operations
/history # Show conversation history
/clear_history # Clear conversation history
/save_history <file> # Save history to file
/load_history <file> # Load history from file
# Server management
/add_servers:<config.json> # Add servers from config
/remove_server:<server_name> # Remove specific server
/refresh # Refresh server capabilities
# Debugging and monitoring
/debug # Toggle debug mode
/api_stats # Show API usage statistics
The MCPOmni Connect CLI is the most advanced MCP client available, providing professional-grade MCP functionality with enhanced memory, event management, and agentic modes:
# Launch the advanced MCP CLI
python run.py
# OR: mcpomni-connect --config servers_config.json
# Core MCP client commands:
/tools # List all available tools
/prompts # List all available prompts
/resources # List all available resources
/prompt:<name> # Execute a specific prompt
/resource:<uri> # Read a specific resource
/subscribe:<uri> # Subscribe to resource updates
/query <your_question> # Ask questions using tools
# Advanced platform features:
/memory_store:redis # Switch to Redis memory
/event_store:redis_stream # Switch to Redis events
/add_servers:<config.json> # Add MCP servers dynamically
/remove_server:<name> # Remove MCP server
/mode:auto # Switch to autonomous agentic mode
/mode:orchestrator # Switch to multi-server orchestration
MCPOmni Connect is not just a CLI tool; it's also a powerful Python library. OmniAgent consolidates everything - you no longer need to manually manage MCP clients, configurations, and agents separately!
OmniAgent automatically includes MCP client functionality - just specify your MCP servers and you're ready to go:
from mcpomni_connect.omni_agent import OmniAgent
from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter
from mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry
# Create tool registry for custom tools
tool_registry = ToolRegistry()
@tool_registry.register_tool("analyze_data")
def analyze_data(data: str) -> str:
"""Analyze data and return insights."""
return f"Analysis complete: {len(data)} characters processed"
# OmniAgent automatically handles MCP connections + your tools
agent = OmniAgent(
name="my_app_agent",
system_instruction="You are a helpful assistant with access to MCP servers and custom tools.",
model_config={
"provider": "openai",
"model": "gpt-4o",
"temperature": 0.7
},
# Your custom local tools
local_tools=tool_registry,
# MCP servers - automatically connected!
mcp_tools=[
{
"name": "filesystem",
"transport_type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
},
{
"name": "github",
"transport_type": "streamable_http",
"url": "http://localhost:8080/mcp",
"headers": {"Authorization": "Bearer your-token"}
}
],
memory_store=MemoryRouter(memory_store_type="redis"),
event_router=EventRouter(event_store_type="in_memory")
)
# Use in your app - gets both MCP tools AND your custom tools!
result = await agent.run("List files in the current directory and analyze the filenames")
The old manual approach (managing the MCP client and agents separately) remains available if you need it for some reason.
OmniAgent makes building APIs incredibly simple. See examples/web_server.py
for a complete FastAPI example:
from fastapi import FastAPI
from mcpomni_connect.omni_agent import OmniAgent
app = FastAPI()
agent = OmniAgent(...) # Your agent setup from above
@app.post("/chat")
async def chat(message: str, session_id: str | None = None):
result = await agent.run(message, session_id)
return {"response": result['response'], "session_id": result['session_id']}
@app.get("/tools")
async def get_tools():
# Returns both MCP tools AND your custom tools automatically
return agent.get_available_tools()
Key Benefits:
{
"AgentConfig": {
"tool_call_timeout": 30,
"max_steps": 15,
"request_limit": 1000,
"total_tokens_limit": 100000
},
"LLM": {
"provider": "openai",
"model": "gpt-4",
"temperature": 0.5,
"max_tokens": 5000,
"max_context_length": 30000,
"top_p": 0
},
"mcpServers": {
"ev_assistant": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://localhost:8000/mcp"
},
"sse-server": {
"transport_type": "sse",
"url": "http://localhost:3000/sse",
"headers": {
"Authorization": "Bearer token"
},
"timeout": 60,
"sse_read_timeout": 120
},
"streamable_http-server": {
"transport_type": "streamable_http",
"url": "http://localhost:3000/mcp",
"headers": {
"Authorization": "Bearer token"
},
"timeout": 60,
"sse_read_timeout": 120
}
}
}
{
"LLM": {
"provider": "anthropic",
"model": "claude-3-5-sonnet-20241022",
"temperature": 0.7,
"max_tokens": 4000,
"max_context_length": 200000,
"top_p": 0.95
}
}
{
"LLM": {
"provider": "groq",
"model": "llama-3.1-8b-instant",
"temperature": 0.5,
"max_tokens": 2000,
"max_context_length": 8000,
"top_p": 0.9
}
}
{
"LLM": {
"provider": "azureopenai",
"model": "gpt-4",
"temperature": 0.7,
"max_tokens": 2000,
"max_context_length": 100000,
"top_p": 0.95,
"azure_endpoint": "https://your-resource.openai.azure.com",
"azure_api_version": "2024-02-01",
"azure_deployment": "your-deployment-name"
}
}
{
"LLM": {
"provider": "ollama",
"model": "llama3.1:8b",
"temperature": 0.5,
"max_tokens": 5000,
"max_context_length": 100000,
"top_p": 0.7,
"ollama_host": "http://localhost:11434"
}
}
{
"LLM": {
"provider": "openrouter",
"model": "anthropic/claude-3.5-sonnet",
"temperature": 0.7,
"max_tokens": 4000,
"max_context_length": 200000,
"top_p": 0.95
}
}
MCPOmni Connect supports multiple authentication methods for secure server connections:
{
"server_name": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://your-server/mcp"
}
}
{
"server_name": {
"transport_type": "streamable_http",
"headers": {
"Authorization": "Bearer your-token-here"
},
"url": "http://your-server/mcp"
}
}
{
"server_name": {
"transport_type": "streamable_http",
"headers": {
"X-Custom-Header": "value",
"Authorization": "Custom-Auth-Scheme token"
},
"url": "http://your-server/mcp"
}
}
MCPOmni Connect supports dynamic server configuration through commands:
# Add one or more servers from a configuration file
/add_servers:path/to/config.json
The configuration file can include multiple servers with different authentication methods:
{
"new-server": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://localhost:8000/mcp"
},
"another-server": {
"transport_type": "sse",
"headers": {
"Authorization": "Bearer token"
},
"url": "http://localhost:3000/sse"
}
}
# Remove a server by its name
/remove_server:server_name
- /tools - List all available tools across servers
- /prompts - View available prompts
- /prompt:<name>/<args> - Execute a prompt with arguments
- /resources - List available resources
- /resource:<uri> - Access and analyze a resource
- /debug - Toggle debug mode
- /refresh - Update server capabilities
- /memory - Toggle Redis memory persistence (on/off)
- /mode:auto - Switch to autonomous agentic mode
- /mode:chat - Switch back to interactive chat mode
- /add_servers:<config.json> - Add one or more servers from a configuration file
- /remove_server:<server_name> - Remove a server by its name
# Enable Redis memory persistence
/memory
# Check memory status
Memory persistence is now ENABLED using Redis
# Disable memory persistence
/memory
# Check memory status
Memory persistence is now DISABLED
# Switch to autonomous mode
/mode:auto
# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.
# Switch back to chat mode
/mode:chat
# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.
Chat Mode (Default)
Autonomous Mode
Orchestrator Mode
# List all available prompts
/prompts
# Basic prompt usage
/prompt:weather/location=tokyo
# Prompt with multiple arguments (depends on the server's prompt argument requirements)
/prompt:travel-planner/from=london/to=paris/date=2024-03-25
# JSON format for complex arguments
/prompt:analyze-data/{
"dataset": "sales_2024",
"metrics": ["revenue", "growth"],
"filters": {
"region": "europe",
"period": "q1"
}
}
# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
"price_range": {"min": 500, "max": 1000},
"features": ["5G", "wireless-charging"],
"markets": ["US", "EU", "Asia"]
}
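For intuition, here is a hypothetical sketch of how the argument syntax above might be parsed. It is not the client's real parser; it assumes JSON values contain no "/" characters and is intended only to show how key=value, JSON-valued, and whole-JSON argument forms differ.

```python
import json

def parse_prompt_command(command: str) -> tuple[str, dict]:
    """Parse '/prompt:<name>/<k>=<v>/...' commands (hypothetical sketch).

    Supports plain values (location=tokyo), JSON values (criteria={...}),
    and a whole-JSON argument object (/prompt:name/{...}).
    Assumes JSON values contain no '/' characters.
    """
    body = command.removeprefix("/prompt:")
    name, _, rest = body.partition("/")
    if rest.startswith("{"):                # whole-argument JSON form
        return name, json.loads(rest)
    args: dict = {}
    for part in rest.split("/") if rest else []:
        key, _, value = part.partition("=")
        try:
            args[key] = json.loads(value)   # numbers, objects, lists
        except json.JSONDecodeError:
            args[key] = value               # plain string value
    return name, args

print(parse_prompt_command("/prompt:weather/location=tokyo"))
print(parse_prompt_command("/prompt:travel-planner/from=london/to=paris/date=2024-03-25"))
```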
The client intelligently parses both simple key=value arguments and complex JSON structures.
MCPOmni Connect now provides advanced controls and visibility over your API usage and resource limits.
Use the /api_stats command to see your current usage:
/api_stats
This will display your current request count, token usage, and the configured limits.
You can set limits to automatically stop execution when thresholds are reached:
You can configure these in your servers_config.json under the AgentConfig section:
"AgentConfig": {
"tool_call_timeout": 30, // Tool call timeout in seconds
"max_steps": 15, // Max number of steps before termination
"request_limit": 1000, // Max number of requests allowed
"total_tokens_limit": 100000 // Max number of tokens allowed
}
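As an illustration of what these limits do, here is a hypothetical sketch of request/token limit enforcement. The UsageLimiter class is invented for this example and is not part of MCPOmni Connect's API; it only mirrors the request_limit and total_tokens_limit fields above.

```python
class UsageLimiter:
    """Hypothetical sketch of request/token limit enforcement,
    mirroring the AgentConfig fields above (not the real implementation)."""

    def __init__(self, request_limit: int = 1000, total_tokens_limit: int = 100_000):
        self.request_limit = request_limit
        self.total_tokens_limit = total_tokens_limit
        self.requests = 0
        self.tokens = 0

    def record(self, tokens_used: int) -> None:
        """Record one request; raise once a threshold is crossed."""
        self.requests += 1
        self.tokens += tokens_used
        if self.requests > self.request_limit:
            raise RuntimeError("request_limit reached")
        if self.tokens > self.total_tokens_limit:
            raise RuntimeError("total_tokens_limit reached")

limiter = UsageLimiter(request_limit=3, total_tokens_limit=1000)
limiter.record(400)
limiter.record(400)
try:
    limiter.record(400)  # pushes token total to 1200, over the 1000 budget
except RuntimeError as e:
    print(e)  # total_tokens_limit reached
```

Stopping on the first crossed threshold is what makes these limits useful as a cost ceiling: execution halts instead of silently continuing to bill requests.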
# Check your current API usage and limits
/api_stats
# Set a new request limit (example)
# (This can be done by editing servers_config.json or via future CLI commands)
# Example of automatic tool chaining if the tool is available in the servers connected
User: "Find charging stations near Silicon Valley and check their current status"
# Client automatically:
1. Uses Google Maps API to locate Silicon Valley
2. Searches for charging stations in the area
3. Checks station status through EV network API
4. Formats and presents results
# Automatic resource processing
User: "Analyze the contents of /path/to/document.pdf"
# Client automatically:
1. Identifies resource type
2. Extracts content
3. Processes through LLM
4. Provides intelligent summary
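The resource-processing flow above can be sketched as a simple dispatch on resource type. The extractor table and the stubbed summarize step below are hypothetical stand-ins for illustration, not the client's real logic:

```python
from pathlib import Path

# Illustrative sketch of the flow above: identify the resource type,
# pick an extraction strategy, then hand content to the LLM.
# The EXTRACTORS table and the summary step are hypothetical.

EXTRACTORS = {
    ".pdf": "pdf-text-extraction",
    ".md": "plain-text",
    ".txt": "plain-text",
    ".csv": "tabular-parsing",
}

def identify_resource(uri: str) -> str:
    """Step 1: identify the resource type from its file extension."""
    return EXTRACTORS.get(Path(uri).suffix.lower(), "binary-fallback")

def process_resource(uri: str) -> str:
    """Steps 2-4: extract content and pass it to the LLM for a summary (stubbed)."""
    strategy = identify_resource(uri)
    return f"Extracted {uri} via {strategy}; sending to LLM for summary"

print(process_resource("/path/to/document.pdf"))
```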
📖 For comprehensive configuration help, see the ⚙️ Configuration Guide section above, which covers:
- Config file differences (.env vs servers_config.json)
- Transport type selection and authentication
- OAuth server behavior explanation
- Common connection issues and solutions
Connection Issues
Error: Could not connect to MCP server
Check the server command, URL, and transport type in servers_config.json, and make sure the server is running.
API Key Issues
Error: Invalid API key
Verify LLM_API_KEY in your .env file.
Redis Connection
Error: Could not connect to Redis
Verify REDIS_URL in your .env file and make sure Redis is running.
Tool Execution Failures
Error: Tool execution failed
Enable debug mode for detailed logging:
/debug
For additional support, please:
We welcome contributions! See our Contributing Guide for details.
Complete documentation is available at: MCPOmni Connect Docs
To build documentation locally:
./docs.sh serve # Start development server at http://127.0.0.1:8080
./docs.sh build # Build static documentation
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ by the MCPOmni Connect Team