# crewai-mcp-toolbox

Unlock the power of the Model Context Protocol (MCP) directly within your CrewAI agents. `crewai-mcp-toolbox` provides a robust and developer-friendly bridge between the CrewAI framework and any compliant MCP server communicating via stdio. It automatically discovers the tools exposed by an MCP server, converts them into type-safe CrewAI `BaseTool` instances, and reliably manages the underlying server process lifecycle for you.
The Model Context Protocol standardizes how AI models interact with external tools and data sources. While powerful, integrating an MCP server into an agent framework like CrewAI often involves repetitive and error-prone tasks:

- Handling `stdio` communication, request/response cycles, and error handling defined by the MCP specification.
- Converting discovered tools into usable CrewAI `BaseTool` instances.

## Why crewai-mcp-toolbox? Robustness and Developer Experience

`crewai-mcp-toolbox` tackles these challenges head-on, offering more than just a simple wrapper:
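To make the protocol-handling burden concrete, here is a sketch of the JSON-RPC 2.0 exchange the MCP specification defines for tool discovery (the server payload below is a fabricated example, not output from a real server):

```python
import json

# A tools/list request, as it would be written to the server's stdin.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
wire = json.dumps(request)

# A fabricated tools/list response, as it might arrive on the server's stdout.
response = json.loads(
    '{"jsonrpc": "2.0", "id": 1, "result": {"tools": ['
    '{"name": "read_file", "inputSchema": {"type": "object", '
    '"properties": {"path": {"type": "string"}}, "required": ["path"]}}]}}'
)
tool_names = [t["name"] for t in response["result"]["tools"]]
print(tool_names)  # ['read_file']
```

Every request/response pair like this, plus error handling and process supervision, is what the toolbox manages for you.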
- **Managed Server Lifecycle:** `crewai-mcp-toolbox` uses a dedicated background worker thread (`MCPWorkerThread`) to isolate and manage the MCP server's lifecycle and communication. This ensures the server starts correctly, runs reliably in the background, and is terminated cleanly when your application exits or the context manager scope is left.
- **Direct `stdio` Process Management:** For `stdio`-based servers, the worker thread currently utilizes Python's robust `asyncio.create_subprocess_exec` to directly manage the server process. While leveraging native library clients like `mcp.client.stdio.stdio_client` was explored, this direct management approach proved more reliable across different server types during testing (like `uvx`-based servers), ensuring compatibility and preventing hangs observed with the library client in specific scenarios.
- **Type-Safe Tools:** Generates a typed argument model from the `inputSchema` of each tool. This provides type hinting, validation, and seamless integration with CrewAI's argument handling, significantly reducing runtime errors.
- **Simple API:** `MCPToolSet` acts as a simple context manager (`async with`), abstracting away the complexities of process management, threading, and protocol handling. Get a list of ready-to-use CrewAI tools with just a few lines of code.
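The lifecycle management described above can be pictured with a minimal, illustrative sketch (this is not the library's internal code; a Python one-liner stands in for a real MCP server):

```python
import asyncio
import sys

async def run_stdio_server():
    # Stand-in "server": a Python one-liner that echoes one stdin line to stdout.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print(input())",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )
    try:
        # Exchange one line over the process's stdio streams.
        proc.stdin.write(b"hello\n")
        await proc.stdin.drain()
        reply = await proc.stdout.readline()
        return reply.decode().strip()
    finally:
        # Clean shutdown: close stdin, wait briefly, escalate to terminate().
        proc.stdin.close()
        try:
            await asyncio.wait_for(proc.wait(), timeout=5)
        except asyncio.TimeoutError:
            proc.terminate()
            await proc.wait()

print(asyncio.run(run_stdio_server()))  # hello
```

The worker thread applies this start/exchange/terminate pattern for you, so the server cannot outlive your application.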
## Features

- **Automatic Tool Discovery:** Discovers every tool the server exposes over `stdio`.
- **Managed Lifecycle:** The server process is started and stopped by `MCPToolSet` as a context manager, managed by the background worker.
- **Direct Process Management:** Handles `stdio` servers using `asyncio` primitives within the worker thread for proven compatibility.
- **Built on the `mcp` Library:** Uses the core `mcp.ClientSession` for handling MCP protocol message details once the stdio streams are established.
- **`async with` Interface:** Easily integrate MCP tools into your async CrewAI applications.
- **Flexible Server Launch:** Launch servers via `npx` for common types (like `server-filesystem`) or by specifying a custom command and arguments.
- **Transport:** Currently supports `stdio`-based MCP servers. (SSE support is planned for future versions.)

## Installation

```shell
# Create and activate a virtual environment (recommended)
python -m venv .venv
source .venv/bin/activate  # or .\.venv\Scripts\activate on Windows

# Install the toolbox
pip install crewai-mcp-toolbox

# Ensure the MCP servers you plan to launch are available:
# - server-filesystem and other Node.js servers need Node.js (npx ships with npm >= 5.2)
# - uvx-based servers need uv, which provides the uvx command: pip install uv
```
## Quick Start

Integrate MCP tools into your CrewAI agent with minimal setup:
```python
import asyncio
from crewai import Agent, Task, Crew
from crewai_mcp_toolbox import MCPToolSet

async def main():
    # Example 1: Using a filesystem server (requires Node.js/npx)
    # This will start 'npx @modelcontextprotocol/server-filesystem ./my_mcp_data'
    # The directory will be created if it doesn't exist.
    print("Starting Filesystem MCP Server...")
    async with MCPToolSet(directory_path="./my_mcp_data") as filesystem_tools:
        print(f"Filesystem Tools Found: {[tool.name for tool in filesystem_tools]}")
        if not filesystem_tools:
            print("Warning: No filesystem tools discovered. Ensure server started correctly.")
            return  # Exit if no tools found

        # Example Agent using these tools (add your LLM details)
        # fs_agent = Agent(...)
        # task = Task(...)
        # crew = Crew(...)
        # result = await crew.kickoff_async()
        # print("Filesystem Crew Result:", result)

    print("-" * 20)
    print("Filesystem MCP Server Stopped.")

    # Example 2: Using a custom command (e.g., a Python-based MCP server)
    # Assumes 'uvx my-custom-mcp-server' starts an MCP server
    print("Starting Custom MCP Server...")
    # Ensure 'my-custom-mcp-server' is runnable via 'uvx' in your environment
    try:
        async with MCPToolSet(command="uvx", args=["my-custom-mcp-server"]) as custom_tools:
            print(f"Custom Tools Found: {[tool.name for tool in custom_tools]}")
            if not custom_tools:
                print("Warning: No custom tools discovered. Ensure server command is correct and functional.")
                return  # Exit if no tools found

            # Example Agent using these tools (add your LLM details)
            # custom_agent = Agent(...)
            # task = Task(...)
            # crew = Crew(...)
            # result = await crew.kickoff_async()
            # print("Custom Crew Result:", result)
    except FileNotFoundError:
        print("Error: 'uvx' command not found. Make sure uvx is installed and in your PATH.")
    except Exception as e:
        print(f"Error running custom server: {e}")  # Catch other potential errors

    print("-" * 20)
    print("Custom MCP Server Stopped (or failed to start).")

if __name__ == "__main__":
    try:
        asyncio.run(main())
    except Exception as e:
        print(f"An error occurred: {e}")
```
## Configuration

You can customize timeouts by passing a `config` dictionary during `MCPToolSet` initialization:

```python
config = {
    "worker_startup_timeout": 60.0,   # Default: 60s
    "tool_execution_timeout": 120.0,  # Default: 90s (for the CrewAI tool call wrapper)
    "batch_execution_timeout": 180.0, # Default: 180s (for batch_execute method)
    "per_call_timeout": 45.0,         # Default: 60s (for the underlying MCP ClientSession.call_tool)
    "health_check_interval": 15.0,    # Default: 10s
}
```

`health_check_interval` controls how frequently the worker verifies the MCP server connection. Lower values detect failures sooner but may increase overhead.
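The health-check behaviour can be sketched as a simple polling loop (illustrative only; `ping` and `on_failure` are hypothetical stand-ins for the worker's internals):

```python
import asyncio

async def health_check_loop(ping, interval: float, on_failure):
    """Call `ping()` every `interval` seconds; stop at the first failure."""
    while True:
        await asyncio.sleep(interval)
        try:
            await ping()
        except Exception as exc:
            on_failure(exc)
            return

# Demo with a fake connection that fails on the third ping.
async def demo():
    calls = {"n": 0}
    async def ping():
        calls["n"] += 1
        if calls["n"] >= 3:
            raise ConnectionError("server went away")
    failures = []
    await health_check_loop(ping, interval=0.01, on_failure=failures.append)
    return calls["n"], failures

n, failures = asyncio.run(demo())
print(n, failures)  # 3 pings, then one recorded failure
```

A shorter interval means the failure above is noticed sooner, at the cost of more wake-ups.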
```python
# Make sure the directory exists or the command is valid
try:
    async with MCPToolSet(directory_path="./data", config=config) as tools:
        print("Tools with custom config:", [t.name for t in tools])
        # ... use tools ...
except Exception as e:
    print(f"Failed to initialize MCPToolSet with custom config: {e}")
```
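Conceptually, a per-call timeout is just a bound around the awaited call, in the style of `asyncio.wait_for`; a minimal illustration (`slow_tool` is a stand-in coroutine, not a real MCP call):

```python
import asyncio

async def call_with_timeout(coro_fn, timeout: float):
    """Bound a slow call with a timeout, as a per-call timeout would."""
    try:
        return await asyncio.wait_for(coro_fn(), timeout=timeout)
    except asyncio.TimeoutError:
        return "timed out"

async def slow_tool():
    await asyncio.sleep(0.2)  # pretend the tool takes 200 ms
    return "done"

print(asyncio.run(call_with_timeout(slow_tool, timeout=0.05)))  # timed out
print(asyncio.run(call_with_timeout(slow_tool, timeout=1.0)))   # done
```

Set `per_call_timeout` above the slowest tool call you expect, and keep the outer `tool_execution_timeout` larger still so the wrapper does not fire first.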
## Testing

Tests are located in the `tests` directory and use `pytest`. Fixtures in `tests/conftest.py` handle setting up different MCP server types (Docker, Chroma, Everything Server) via stdio.

To run tests:

```shell
# Install testing requirements (ensure you have pytest, pytest-asyncio, etc.)
# Example: pip install -e .[dev]  or  pip install -r requirements-dev.txt

# Run tests from the project root directory
pytest tests/
```

Note: Some tests may require external dependencies like Docker, Node.js (npx), or specific Python packages (uvx, chroma-mcp). The 'everything' server test fixture clones and builds an external repository.
## Contributing

Contributions are welcome! Please fork the repository, create a feature branch, make your changes, add tests, and submit a pull request.

## License

This project is licensed under the MIT License - see the LICENSE file for details.