A CLI tool for interacting with Model Context Protocol (MCP) servers using natural language. Built with mcp-use.
## Features
- 🤖 Natural language interface for MCP servers
- 💬 Interactive chat interface with tool call visualization
- ⚡ Direct integration with mcp-use (no API layer needed)
- 🚀 Single command installation
- 🔄 Over a dozen LLM providers (OpenAI, Anthropic, Google, Mistral, Groq, Cohere, and more)
- ⚙️ Slash commands for configuration (like Claude Code)
- 🔑 Smart API key prompting - automatically asks for keys when needed
- 💾 Persistent secure storage - encrypted keys and settings saved across sessions
## Install

```sh
$ npm install --global @mcp-use/cli
```
## Quick Start

1. Install and run:

   ```sh
   $ npm install --global @mcp-use/cli
   $ mcp-use
   ```

2. Choose your model (the CLI handles API key setup automatically):

   ```
   /model openai gpt-4o
   /model anthropic claude-3-5-sonnet-20240620
   /model google gemini-1.5-pro
   /model groq llama-3.1-70b-versatile
   /model ollama llama3
   ```

3. Enter your provider API keys when prompted.

   Keys are stored encrypted in `~/.mcp-use-cli/config.json` and persist across sessions.
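Encrypted-at-rest key storage of this kind can be sketched with Node's built-in `crypto` module. Everything below (the key-derivation inputs, the `encrypt`/`decrypt` helpers) is illustrative, not the CLI's actual implementation:

```typescript
// Sketch of encrypted-at-rest API key storage (illustrative, not the CLI's real code).
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

// Assumption: a key derived from some machine-local secret.
const key = scryptSync("machine-local-secret", "mcp-use-cli", 32);

function encrypt(plain: string): string {
  const iv = randomBytes(12); // fresh IV per value
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  // Store IV + auth tag + ciphertext together so decrypt is self-contained.
  return Buffer.concat([iv, cipher.getAuthTag(), data]).toString("base64");
}

function decrypt(blob: string): string {
  const raw = Buffer.from(blob, "base64");
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28);
  const data = raw.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(data), decipher.final()]).toString("utf8");
}
```

Because the IV is random per value, the same key encrypts to a different blob each time, and the GCM auth tag rejects tampered config files on load.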
## Alternative Setup

If you prefer environment variables:

```sh
export OPENAI_API_KEY=your_key_here
export ANTHROPIC_API_KEY=your_key_here
```
## Usage

```
$ mcp-use --help

  Usage
    $ mcp-use

  Options
    --name    Your name (optional)
    --config  Path to MCP configuration file (optional)

  Examples
    $ mcp-use
    $ mcp-use --name=Jane

  Environment Variables
    <PROVIDER>_API_KEY  Set API keys (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY)
```
## Setup

1. Run `mcp-use`.
2. Use `/model` or `/setkey` to configure an LLM.
3. Use the `/server` commands to connect to your tools.
4. Start chatting!
## Connecting to Tools (MCP Servers)

This CLI is a client for Model Context Protocol (MCP) servers. MCP servers act as tools that the AI can use; you need to connect the CLI to one or more servers to give it capabilities.

Manage servers with the `/server` commands:

```
/server add
/servers
/server connect <server-name>
/server disconnect <server-name>
```

When you add a server, you'll be prompted for its configuration details, such as the command used to launch it. Here is an example of what a server configuration for a filesystem tool looks like:
```json
{
  "filesystem-tool": {
    "command": "npx",
    "args": [
      "-y",
      "@modelcontextprotocol/server-filesystem",
      "/path/to/your/project"
    ],
    "env": {}
  }
}
```
This configuration would be created interactively by running `/server add` and answering the prompts.
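A server entry of this shape is essentially a recipe for launching a child process. A minimal TypeScript sketch of how a client could turn one into an argv (the `ServerConfig` type and `toArgv` helper are illustrative, not the CLI's internals):

```typescript
// Illustrative types for the server-config shape shown above.
interface ServerConfig {
  command: string;              // executable to launch, e.g. "npx"
  args: string[];               // arguments passed to the command
  env: Record<string, string>;  // extra environment variables for the server
}

// Build the argv a client would spawn for a named server entry.
function toArgv(name: string, servers: Record<string, ServerConfig>): string[] {
  const cfg = servers[name];
  if (!cfg) throw new Error(`unknown server: ${name}`);
  return [cfg.command, ...cfg.args];
}

const servers: Record<string, ServerConfig> = {
  "filesystem-tool": {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/project"],
    env: {},
  },
};
```

For example, `toArgv("filesystem-tool", servers)` yields the `npx` command line that starts the filesystem server over stdio.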
## Slash Commands

Switch LLM providers and configure settings using slash commands:

```
# API keys
/setkey openai sk-1234567890abcdef...
/setkey anthropic ant_1234567890abcdef...
/clearkeys

# Model selection
/model openai gpt-4o
/model anthropic claude-3-5-sonnet-20240620
/model google gemini-1.5-pro
/model mistral mistral-large-latest
/model groq llama-3.1-70b-versatile
/models

# Server management
/server add
/servers
/server connect <name>
/server disconnect <name>

# Settings
/config temp 0.5
/config tokens 4000
/status
/help
```
## Chat Examples
- "List files in the current directory"
- "Create a new file called hello.txt with the content 'Hello, World!'"
- "Search for files containing 'TODO'"
- "What's the structure of this project?"
## Architecture

This CLI uses:

- Frontend: React + Ink for the terminal UI
- Agent: the mcp-use `MCPAgent` for LLM + MCP integration
- LLM: your choice of 12+ providers
- Transport: direct TypeScript integration (no API layer)
## License

MIT