Unified MCP Client Library
mcp-use is a complete TypeScript framework for building and using MCP (Model Context Protocol) applications. It provides both a powerful client library for connecting LLMs to MCP servers and a server framework for building your own MCP servers with UI capabilities.
Build custom AI agents, create MCP servers with React UI widgets, and debug everything with the built-in inspector - all in TypeScript.
mcp-use Ecosystem
Key Features
| Feature | Description |
| --- | --- |
| Ease of Use | Create an MCP-capable agent in just a few lines of TypeScript. |
| LLM Flexibility | Works with any LangChain.js LLM that supports tool calling. |
| HTTP Support | Connect directly to MCP servers over SSE/HTTP. |
| Dynamic Server Selection | Agents select the right MCP server from a pool on the fly. |
| Multi-Server Support | Use multiple MCP servers in a single agent. |
| Tool Restrictions | Restrict potentially dangerous tools such as filesystem or network access. |
| Custom Agents | Build your own agents with the LangChain.js adapter or implement new adapters. |
| Observability | Built-in Langfuse support with dynamic metadata and tag handling. |
Quick Start
Requirements
- Node.js 22.0.0 or higher
- npm, yarn, or pnpm (examples use pnpm)
Installation
npm install mcp-use
npm install langchain @langchain/openai dotenv
npm install langfuse @langfuse/langchain   # optional: Langfuse observability
Create a .env:
OPENAI_API_KEY=your_api_key
Basic Usage
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'
import 'dotenv/config'
async function main() {
const config = {
mcpServers: {
playwright: { command: 'npx', args: ['@playwright/mcp@latest'] },
},
}
const client = MCPClient.fromDict(config)
const llm = new ChatOpenAI({ modelName: 'gpt-4o' })
const agent = new MCPAgent({ llm, client, maxSteps: 20 })
const result = await agent.run(
'Find the best restaurant in Tokyo using Google Search'
)
console.log('Result:', result)
}
main().catch(console.error)
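When a run finishes, it's good practice to close the client's sessions so any spawned server processes shut down cleanly; the same call appears in the Next.js example further below.
// At the end of main(), once agent.run() has resolved:
await client.closeAllSessions()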
API Methods
MCPAgent Methods
The MCPAgent class provides several methods for executing queries with different output formats:
run(query: string, maxSteps?: number): Promise<string>
Executes a query and returns the final result as a string.
const result = await agent.run('What tools are available?')
console.log(result)
stream(query: string, maxSteps?: number): AsyncGenerator<AgentStep, string, void>
Yields intermediate steps during execution, providing visibility into the agent's reasoning process.
const stream = agent.stream('Search for restaurants in Tokyo')
for await (const step of stream) {
console.log(`Tool: ${step.action.tool}, Input: ${step.action.toolInput}`)
console.log(`Result: ${step.observation}`)
}
streamEvents(query: string, maxSteps?: number): AsyncGenerator<StreamEvent, void, void>
Yields fine-grained LangChain StreamEvent objects, enabling token-by-token streaming and detailed event tracking.
const eventStream = agent.streamEvents('What is the weather today?')
for await (const event of eventStream) {
switch (event.event) {
case 'on_chat_model_stream':
if (event.data?.chunk?.content) {
process.stdout.write(event.data.chunk.content)
}
break
case 'on_tool_start':
console.log(`\nTool started: ${event.name}`)
break
case 'on_tool_end':
console.log(`Tool completed: ${event.name}`)
break
}
}
Key Differences
- run(): Best for simple queries where you only need the final result
- stream(): Best for debugging and understanding the agent's tool usage
- streamEvents(): Best for real-time UI updates with token-level streaming
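All three methods also accept an optional maxSteps argument, so a single query can be capped without changing the agent's configuration. A small sketch based on the signatures above:
// Cap this one query at 5 steps, regardless of the agent-level maxSteps.
const summary = await agent.run('List the available tools', 5)
console.log(summary)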
AI SDK Integration
The library provides built-in utilities for integrating with the Vercel AI SDK, making it easy to build streaming UIs with React hooks such as useCompletion and useChat.
Installation
npm install ai @langchain/anthropic
Basic Usage
import { ChatAnthropic } from '@langchain/anthropic'
import { LangChainAdapter } from 'ai'
import {
createReadableStreamFromGenerator,
MCPAgent,
MCPClient,
streamEventsToAISDK,
} from 'mcp-use'
async function createApiHandler() {
const config = {
mcpServers: {
everything: {
command: 'npx',
args: ['-y', '@modelcontextprotocol/server-everything'],
},
},
}
const client = new MCPClient(config)
const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
const agent = new MCPAgent({ llm, client, maxSteps: 5 })
return async (request: { prompt: string }) => {
const streamEvents = agent.streamEvents(request.prompt)
const aiSDKStream = streamEventsToAISDK(streamEvents)
const readableStream = createReadableStreamFromGenerator(aiSDKStream)
return LangChainAdapter.toDataStreamResponse(readableStream)
}
}
Enhanced Usage with Tool Visibility
import { streamEventsToAISDKWithTools } from 'mcp-use'
async function createEnhancedApiHandler() {
const config = {
mcpServers: {
everything: {
command: 'npx',
args: ['-y', '@modelcontextprotocol/server-everything'],
},
},
}
const client = new MCPClient(config)
const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
const agent = new MCPAgent({ llm, client, maxSteps: 8 })
return async (request: { prompt: string }) => {
const streamEvents = agent.streamEvents(request.prompt)
const enhancedStream = streamEventsToAISDKWithTools(streamEvents)
const readableStream = createReadableStreamFromGenerator(enhancedStream)
return LangChainAdapter.toDataStreamResponse(readableStream)
}
}
Next.js API Route Example
import { ChatAnthropic } from '@langchain/anthropic'
import { LangChainAdapter } from 'ai'
import {
createReadableStreamFromGenerator,
MCPAgent,
MCPClient,
streamEventsToAISDK,
} from 'mcp-use'
export async function POST(req: Request) {
const { prompt } = await req.json()
const config = {
mcpServers: {
everything: {
command: 'npx',
args: ['-y', '@modelcontextprotocol/server-everything'],
},
},
}
const client = new MCPClient(config)
const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
const agent = new MCPAgent({ llm, client, maxSteps: 10 })
try {
const streamEvents = agent.streamEvents(prompt)
const aiSDKStream = streamEventsToAISDK(streamEvents)
const readableStream = createReadableStreamFromGenerator(aiSDKStream)
return LangChainAdapter.toDataStreamResponse(readableStream)
} finally {
await client.closeAllSessions()
}
}
Frontend Integration
import { useCompletion } from 'ai/react'
export function Chat() {
const { completion, input, handleInputChange, handleSubmit } = useCompletion({
api: '/api/chat',
})
return (
<div>
<div>{completion}</div>
<form onSubmit={handleSubmit}>
<input
value={input}
onChange={handleInputChange}
placeholder="Ask me anything..."
/>
</form>
</div>
)
}
Available AI SDK Utilities
- streamEventsToAISDK(): Converts streamEvents output to a basic text stream
- streamEventsToAISDKWithTools(): Enhanced stream with tool usage notifications
- createReadableStreamFromGenerator(): Converts an async generator to a ReadableStream
Observability & Monitoring
mcp-use provides built-in observability support through the ObservabilityManager, with integration for Langfuse and other observability platforms.
To enable observability, simply set the following environment variables:
LANGFUSE_PUBLIC_KEY=pk-lf-your-public-key
LANGFUSE_SECRET_KEY=sk-lf-your-secret-key
LANGFUSE_HOST=https://cloud.langfuse.com
Advanced Observability Features
Dynamic Metadata and Tags
agent.setMetadata({
userId: 'user123',
sessionId: 'session456',
environment: 'production',
})
agent.setTags(['production', 'user-query', 'tool-discovery'])
const result = await agent.run('Search for restaurants in Tokyo')
Monitoring Agent Performance
const eventStream = agent.streamEvents('Complex multi-step query')
for await (const event of eventStream) {
switch (event.event) {
case 'on_llm_start':
console.log('LLM call started:', event.data)
break
case 'on_tool_start':
console.log('Tool execution started:', event.name, event.data)
break
case 'on_tool_end':
console.log('Tool execution completed:', event.name, event.data)
break
case 'on_chain_end':
console.log('Agent execution completed:', event.data)
break
}
}
Disabling Observability
To disable observability, either remove the Langfuse environment variables or pass observe: false when constructing the agent:
const agent = new MCPAgent({
llm,
client,
observe: false,
})
Configuration File
You can store servers in a JSON file:
{
"mcpServers": {
"playwright": {
"command": "npx",
"args": ["@playwright/mcp@latest"]
}
}
}
Load it:
import { MCPClient } from 'mcp-use'
const client = MCPClient.fromConfigFile('./mcp-config.json')
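The feature table above also mentions direct SSE/HTTP connections. The following is a minimal sketch of a remote-server entry, assuming the configuration accepts a url field for HTTP/SSE servers (check the documentation for your mcp-use version; the endpoint shown is a placeholder):
import { MCPClient } from 'mcp-use'
// Hypothetical remote entry: url points at an HTTP/SSE MCP endpoint.
const client = MCPClient.fromDict({
  mcpServers: {
    remote: { url: 'http://localhost:3000/mcp' },
  },
})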
Examples
We provide a comprehensive set of examples demonstrating various use cases. All examples are located in the examples/ directory with a dedicated README.
Running Examples
npm install
npm run example:airbnb
npm run example:browser
npm run example:chat
npm run example:stream
npm run example:stream_events
npm run example:ai_sdk
npm run example:filesystem
npm run example:http
npm run example:everything
npm run example:multi
Example Highlights
- Browser Automation: Control browsers to navigate websites and extract information
- File Operations: Read, write, and manipulate files through MCP
- Multi-Server: Combine multiple MCP servers (Airbnb + Browser) in a single task
- Sandboxed Execution: Run MCP servers in isolated E2B containers
- OAuth Flows: Authenticate with services like Linear using OAuth2
- Streaming Methods: Demonstrate both step-by-step and token-level streaming
- AI SDK Integration: Build streaming UIs with Vercel AI SDK and React hooks
See the examples README for detailed documentation and prerequisites.
Multi-Server Example
const config = {
mcpServers: {
airbnb: { command: 'npx', args: ['@openbnb/mcp-server-airbnb'] },
playwright: { command: 'npx', args: ['@playwright/mcp@latest'] },
},
}
const client = MCPClient.fromDict(config)
const agent = new MCPAgent({ llm, client, useServerManager: true })
await agent.run('Search Airbnb in Barcelona, then Google restaurants nearby')
Tool Access Control
const agent = new MCPAgent({
llm,
client,
disallowedTools: ['file_system', 'network'],
})
MCP Server Framework
Beyond being a powerful MCP client, mcp-use also provides a complete server framework for building your own MCP servers with built-in UI capabilities and automatic inspector integration.
Quick Server Setup
import { createMCPServer } from 'mcp-use/server'
import { z } from 'zod'
const server = createMCPServer('my-awesome-server', {
version: '1.0.0',
description: 'My MCP server with tools, resources, and prompts',
})
server.tool('search_web', {
description: 'Search the web for information',
parameters: z.object({
query: z.string().describe('Search query'),
}),
execute: async (args) => {
return { results: await performSearch(args.query) }
},
})
server.resource('config', {
description: 'Application configuration',
uri: 'config://settings',
mimeType: 'application/json',
fetch: async () => {
return JSON.stringify(await getConfig(), null, 2)
},
})
server.prompt('code_review', {
description: 'Review code for best practices',
arguments: [{ name: 'code', description: 'Code to review', required: true }],
render: async (args) => {
return `Please review this code:\n\n${args.code}`
},
})
server.listen(3000)
Key Server Features
| Feature | Description |
| --- | --- |
| Auto Inspector | Inspector UI automatically mounts at /inspector for debugging |
| UI Widgets | Build custom React UI components served alongside your MCP tools |
| OAuth Support | Built-in OAuth flow handling for secure authentication |
| Multiple Transports | HTTP/SSE and WebSocket support out of the box |
| TypeScript First | Full TypeScript support with type inference |
| Hot Reload | Development mode with automatic reloading |
| Observability | Built-in logging and monitoring capabilities |
MCP-UI Resources
mcp-use provides a unified uiResource() method for registering interactive UI widgets that are compatible with MCP-UI clients. This automatically creates both a tool (for dynamic parameters) and a resource (for static access).
Quick Start
import { createMCPServer } from 'mcp-use/server'
const server = createMCPServer('my-server', { version: '1.0.0' })
server.uiResource({
type: 'externalUrl',
name: 'kanban-board',
widget: 'kanban-board',
title: 'Kanban Board',
description: 'Interactive task management board',
props: {
initialTasks: {
type: 'array',
description: 'Initial tasks',
required: false,
},
theme: {
type: 'string',
default: 'light',
},
},
size: ['900px', '600px'],
})
server.listen(3000)
This automatically creates:
- Tool: kanban-board - Accepts parameters and returns a UIResource
- Resource: ui://widget/kanban-board - Static access with defaults
Three Resource Types
1. External URL (Iframe)
Serve widgets from your filesystem via iframe:
server.uiResource({
type: 'externalUrl',
name: 'dashboard',
widget: 'dashboard',
props: { userId: { type: 'string', required: true } },
})
2. Raw HTML
Direct HTML content rendering:
server.uiResource({
type: 'rawHtml',
name: 'welcome-card',
htmlContent: `
<!DOCTYPE html>
<html>
<body><h1>Welcome!</h1></body>
</html>
`,
})
3. Remote DOM
Interactive components using MCP-UI React components:
server.uiResource({
type: 'remoteDom',
name: 'quick-poll',
script: `
const button = document.createElement('ui-button');
button.setAttribute('label', 'Vote');
root.appendChild(button);
`,
framework: 'react',
})
Get Started with Templates
npx create-mcp-use-app my-app
cd my-app
npm install
npm run dev
Building Custom UI Widgets
mcp-use supports building custom UI widgets for your MCP tools using React:
import React, { useState } from 'react'
import { useMcp } from 'mcp-use/react'
export default function TaskManager() {
const { callTool } = useMcp()
const [tasks, setTasks] = useState<Task[]>([])
const addTask = async (title: string) => {
const result = await callTool('create_task', { title })
setTasks([...tasks, result])
}
return (
<div>
<h1>Task Manager</h1>
{/* Your UI implementation */}
</div>
)
}
Build and serve widgets using the mcp-use CLI:
npx @mcp-use/cli dev
npx @mcp-use/cli build
npx @mcp-use/cli start
Advanced Server Configuration
const server = createMCPServer('advanced-server', {
version: '1.0.0',
description: 'Advanced MCP server with custom configuration',
inspectorPath: '/debug',
mcpPath: '/api/mcp',
cors: {
origin: ['http://localhost:3000', 'https://myapp.com'],
credentials: true,
},
oauth: {
clientId: process.env.OAUTH_CLIENT_ID,
clientSecret: process.env.OAUTH_CLIENT_SECRET,
authorizationUrl: 'https://api.example.com/oauth/authorize',
tokenUrl: 'https://api.example.com/oauth/token',
scopes: ['read', 'write'],
},
middleware: [authenticationMiddleware, rateLimitingMiddleware],
})
Server Deployment
Deploy your MCP server to any Node.js hosting platform:
npm run build
pm2 start dist/index.js --name mcp-server
docker build -t my-mcp-server .
docker run -p 3000:3000 my-mcp-server
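Most hosting platforms inject the listening port through the PORT environment variable rather than letting you hard-code it. A minimal sketch reusing the createMCPServer and listen APIs shown earlier:
import { createMCPServer } from 'mcp-use/server'
const server = createMCPServer('my-awesome-server', { version: '1.0.0' })
// Read the port from the environment, falling back to 3000 for local runs.
const port = Number(process.env.PORT ?? 3000)
server.listen(port)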
Integration with Express
You can also integrate an MCP server into an existing Express application:
import express from 'express'
import { createMCPServer, mountMCPServer } from 'mcp-use/server'
const app = express()
app.get('/api/health', (req, res) => res.send('OK'))
const mcpServer = createMCPServer('integrated-server', {
  version: '1.0.0',
})
mountMCPServer(app, mcpServer, {
basePath: '/mcp-service',
})
app.listen(3000)
Contributors
License
MIT © Zane