
Universal MCP Client with multi-transport support and LLM-powered tool routing
MCPOmni Connect is a powerful, universal command-line interface (CLI) that serves as your gateway to the Model Context Protocol (MCP) ecosystem. It seamlessly integrates multiple MCP servers, AI models, and various transport protocols into a unified, intelligent interface.
MCPOmni Connect
├── Transport Layer
│   ├── Stdio Transport
│   ├── SSE Transport
│   └── Docker Integration
├── Session Management
│   ├── Multi-Server Orchestration
│   └── Connection Lifecycle Management
├── Tool Management
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   └── Tool Execution Engine
└── AI Integration
    ├── LLM Processing
    ├── Context Management
    └── Response Generation
# with uv (recommended)
uv add mcpomni-connect
# using pip
pip install mcpomni-connect
# Set up environment variables
echo "LLM_API_KEY=your_key_here" > .env
# Optional: Configure Redis connection
echo "REDIS_HOST=localhost" >> .env
echo "REDIS_PORT=6379" >> .env
echo "REDIS_DB=0" >> .env"
# Configure your servers in servers_config.json
| Variable | Description | Example |
|---|---|---|
| LLM_API_KEY | API key for LLM provider | sk-... |
| REDIS_HOST | Redis server hostname (optional) | localhost |
| REDIS_PORT | Redis server port (optional) | 6379 |
| REDIS_DB | Redis database number (optional) | 0 |
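As a starting point, the servers_config.json mentioned above could look like the minimal sketch below. The server name, command, and arguments are placeholders, and the stdio fields follow common MCP configuration conventions rather than being copied from this project; the full configuration format appears later in this document.
{
  "LLM": {
    "provider": "openai",
    "model": "gpt-4"
  },
  "mcpServers": {
    "filesystem": {
      "transport_type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}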
# Start the CLI (make sure your API key is exported or set in .env)
mcpomni_connect
# Run all tests with verbose output
pytest tests/ -v
# Run specific test file
pytest tests/test_specific_file.py -v
# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing
tests/
└── unit/          # Unit tests for individual components
Installation
# Clone the repository
git clone https://github.com/Abiorh001/mcp_omni_connect.git
cd mcp_omni_connect
# Create and activate virtual environment
uv venv
source .venv/bin/activate
# Install dependencies
uv sync
Configuration
# Set up environment variables
echo "LLM_API_KEY=your_key_here" > .env
# Configure your servers in servers_config.json
**Start Client**
# Start the client
uv run run.py
# or
python run.py
You can run the basic CLI example to interact with MCPOmni Connect directly from the terminal.
Using uv (recommended):
uv run examples/basic.py
Or using Python directly:
python examples/basic.py
You can also run MCPOmni Connect as a FastAPI server for web or API-based interaction.
Using uv:
uv run examples/fast_api_iml.py
Or using Python directly:
python examples/fast_api_iml.py
A simple web client is provided in examples/index.html.
The FastAPI server runs at http://localhost:8000 by default and provides a chat interface (open examples/index.html for a simple web client).
API endpoint: /chat/agent_chat (POST)
Request body:
{
"query": "Your question here",
"chat_id": "unique-chat-id"
}
Response body:
{
"message_id": "...",
"usid": "...",
"role": "assistant",
"content": "Response text",
"meta": [],
"likeordislike": null,
"time": "2024-06-10 12:34:56"
}
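As a quick check that the endpoint behaves as described, a small client script along these lines can post a query. The query text and chat_id below are placeholders, and the server from examples/fast_api_iml.py is assumed to be running on localhost:8000.
# Minimal sketch of calling the /chat/agent_chat endpoint (placeholder values).
import json
import urllib.request

payload = {
    "query": "List the tools available on my connected servers",
    "chat_id": "demo-chat-001",  # any unique identifier
}
req = urllib.request.Request(
    "http://localhost:8000/chat/agent_chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read().decode("utf-8"))
# The response carries fields such as "role", "content", and "time" (see above).
print(reply.get("content"))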
MCPOmni Connect is not just a CLI tool; it's also a powerful Python library that you can use to build your own backend services, custom clients, or API servers.
You can import MCPOmni Connect in your Python project and drive agents programmatically; see examples/fast_api_iml.py for a full-featured example.
Minimal Example:
from mcpomni_connect.client import Configuration, MCPClient
from mcpomni_connect.llm import LLMConnection
from mcpomni_connect.agents.react_agent import ReactAgent
from mcpomni_connect.agents.orchestrator import OrchestratorAgent
config = Configuration()
client = MCPClient(config)
llm_connection = LLMConnection(config)
# Choose agent mode
agent = ReactAgent(...) # or OrchestratorAgent(...)
# Use in your API endpoint
response = await agent.run(
    query="Your user query",
    sessions=client.sessions,
    llm_connection=llm_connection,
    # ...other arguments...
)
You can easily expose your MCP client as an API using FastAPI; see the FastAPI example for the complete setup, or the minimal sketch below.
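The following is a minimal, hypothetical wrapper built from the Minimal Example above, not a copy of examples/fast_api_iml.py; the endpoint path matches the one documented earlier, but the response shape and the agent constructor arguments are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

from mcpomni_connect.client import Configuration, MCPClient
from mcpomni_connect.llm import LLMConnection
from mcpomni_connect.agents.react_agent import ReactAgent

app = FastAPI()

# Shared state created once at startup, as in the Minimal Example.
config = Configuration()
client = MCPClient(config)
llm_connection = LLMConnection(config)
agent = ReactAgent(...)  # construct with the arguments your setup requires

class ChatRequest(BaseModel):
    query: str
    chat_id: str

@app.post("/chat/agent_chat")
async def agent_chat(request: ChatRequest):
    # Delegate the query to the agent, reusing the MCP sessions and LLM connection.
    result = await agent.run(
        query=request.query,
        sessions=client.sessions,
        llm_connection=llm_connection,
        # ...other arguments as in the Minimal Example...
    )
    return {"role": "assistant", "content": result}
The complete servers_config.json format, including AgentConfig and LLM settings, looks like this: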
{
  "AgentConfig": {
    "tool_call_timeout": 30,
    "max_steps": 15,
    "request_limit": 1000,
    "total_tokens_limit": 100000
  },
  "LLM": {
    "provider": "openai",
    "model": "gpt-4",
    "temperature": 0.5,
    "max_tokens": 5000,
    "max_context_length": 30000,
    "top_p": 0
  },
  "mcpServers": {
    "ev_assistant": {
      "transport_type": "streamable_http",
      "auth": {
        "method": "oauth"
      },
      "url": "http://localhost:8000/mcp"
    },
    "sse-server": {
      "transport_type": "sse",
      "url": "http://localhost:3000/sse",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    },
    "streamable_http-server": {
      "transport_type": "streamable_http",
      "url": "http://localhost:3000/mcp",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    }
  }
}
MCPOmni Connect supports multiple authentication methods for secure server connections:
OAuth authentication:
{
  "server_name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server/mcp"
  }
}
Bearer token authentication:
{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "Authorization": "Bearer your-token-here"
    },
    "url": "http://your-server/mcp"
  }
}
Custom headers:
{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "X-Custom-Header": "value",
      "Authorization": "Custom-Auth-Scheme token"
    },
    "url": "http://your-server/mcp"
  }
}
MCPOmni Connect supports dynamic server configuration through commands:
# Add one or more servers from a configuration file
/add_servers:path/to/config.json
The configuration file can include multiple servers with different authentication methods:
{
  "new-server": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://localhost:8000/mcp"
  },
  "another-server": {
    "transport_type": "sse",
    "headers": {
      "Authorization": "Bearer token"
    },
    "url": "http://localhost:3000/sse"
  }
}
# Remove a server by its name
/remove_server:server_name
/tools - List all available tools across servers
/prompts - View available prompts
/prompt:<name>/<args> - Execute a prompt with arguments
/resources - List available resources
/resource:<uri> - Access and analyze a resource
/debug - Toggle debug mode
/refresh - Update server capabilities
/memory - Toggle Redis memory persistence (on/off)
/mode:auto - Switch to autonomous agentic mode
/mode:chat - Switch back to interactive chat mode
/add_servers:<config.json> - Add one or more servers from a configuration file
/remove_server:<server_name> - Remove a server by its name
# Enable Redis memory persistence
/memory
# Check memory status
Memory persistence is now ENABLED using Redis
# Disable memory persistence
/memory
# Check memory status
Memory persistence is now DISABLED
# Switch to autonomous mode
/mode:auto
# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.
# Switch back to chat mode
/mode:chat
# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.
Chat Mode (Default)
Autonomous Mode
Orchestrator Mode
# List all available prompts
/prompts
# Basic prompt usage
/prompt:weather/location=tokyo
# Prompt with multiple arguments (argument names depend on the server's prompt requirements)
/prompt:travel-planner/from=london/to=paris/date=2024-03-25
# JSON format for complex arguments
/prompt:analyze-data/{
"dataset": "sales_2024",
"metrics": ["revenue", "growth"],
"filters": {
"region": "europe",
"period": "q1"
}
}
# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
"price_range": {"min": 500, "max": 1000},
"features": ["5G", "wireless-charging"],
"markets": ["US", "EU", "Asia"]
}
The client intelligently parses and validates prompt arguments, whether they are simple key/value pairs or nested JSON structures.
MCPOmni Connect now provides advanced controls and visibility over your API usage and resource limits.
Use the /api_stats command to see your current usage:
/api_stats
This will display your current request count, token usage, and the configured limits.
You can set limits that automatically stop execution when thresholds are reached. Configure these in your servers_config.json under the AgentConfig section:
"AgentConfig": {
"tool_call_timeout": 30, // Tool call timeout in seconds
"max_steps": 15, // Max number of steps before termination
"request_limit": 1000, // Max number of requests allowed
"total_tokens_limit": 100000 // Max number of tokens allowed
}
# Check your current API usage and limits
/api_stats
# Set a new request limit (example)
# (This can be done by editing servers_config.json or via future CLI commands)
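For instance, a tighter configuration might lower these values in the AgentConfig block (illustrative numbers only):
"AgentConfig": {
  "tool_call_timeout": 30,
  "max_steps": 10,
  "request_limit": 500,
  "total_tokens_limit": 50000
}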
# Example of automatic tool chaining when the required tools are available on the connected servers
User: "Find charging stations near Silicon Valley and check their current status"
# Client automatically:
1. Uses Google Maps API to locate Silicon Valley
2. Searches for charging stations in the area
3. Checks station status through EV network API
4. Formats and presents results
# Automatic resource processing
User: "Analyze the contents of /path/to/document.pdf"
# Client automatically:
1. Identifies resource type
2. Extracts content
3. Processes through LLM
4. Provides intelligent summary
Connection Issues
Error: Could not connect to MCP server
- Check the server settings in servers_config.json
API Key Issues
Error: Invalid API key
- Verify the LLM_API_KEY in your .env file
Redis Connection
Error: Could not connect to Redis
- Check the Redis settings in your .env file
Tool Execution Failures
Error: Tool execution failed
Enable debug mode for detailed logging:
/debug
For additional support, please open an issue on the GitHub repository.
We welcome contributions! See our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ by the MCPOmni Connect Team