
mcp-proxy-wrapper
A lightweight, powerful wrapper for Model Context Protocol (MCP) servers that provides a comprehensive hook system for intercepting, monitoring, and modifying tool calls without changing your existing server code.
npm install mcp-proxy-wrapper
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { wrapWithProxy } from 'mcp-proxy-wrapper';
import { z } from 'zod';
// Create your existing MCP server
const server = new McpServer({
name: 'My Server',
version: '1.0.0'
});
// Wrap it with proxy functionality
const proxiedServer = await wrapWithProxy(server, {
hooks: {
// Monitor all tool calls
beforeToolCall: async (context) => {
console.log(`🔧 Calling tool: ${context.toolName}`);
console.log(`📝 Arguments:`, context.args);
},
// Process results
afterToolCall: async (context, result) => {
console.log(`✅ Tool completed: ${context.toolName}`);
return result; // Pass through unchanged
}
},
debug: true // Enable detailed logging
});
// Register tools normally
proxiedServer.tool('greet', { name: z.string() }, async (args) => {
return {
content: [{ type: 'text', text: `Hello, ${args.name}!` }]
};
});
The MCP Proxy Wrapper includes a powerful plugin architecture that allows you to create reusable, composable functionality.
import { LLMSummarizationPlugin, ChatMemoryPlugin } from 'mcp-proxy-wrapper';
const summarizationPlugin = new LLMSummarizationPlugin();
const memoryPlugin = new ChatMemoryPlugin();
const proxiedServer = await wrapWithProxy(server, {
plugins: [
summarizationPlugin,
memoryPlugin
]
});
The LLMSummarizationPlugin automatically summarizes long tool responses using AI:
import { LLMSummarizationPlugin } from 'mcp-proxy-wrapper';
const plugin = new LLMSummarizationPlugin();
plugin.updateConfig({
options: {
provider: 'openai', // or 'mock' for testing
openaiApiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4o-mini',
minContentLength: 500,
summarizeTools: ['research', 'analyze', 'fetch-data'],
saveOriginal: true // Store original responses for retrieval
}
});
const proxiedServer = await wrapWithProxy(server, {
plugins: [plugin]
});
// Tool responses are automatically summarized
const result = await client.callTool({
name: 'research',
arguments: { topic: 'artificial intelligence' }
});
console.log(result._meta.summarized); // true
console.log(result._meta.originalLength); // 2000
console.log(result._meta.summaryLength); // 200
console.log(result.content[0].text); // "Summary: ..."
The ChatMemoryPlugin provides a conversational interface for saved tool responses:
import { ChatMemoryPlugin } from 'mcp-proxy-wrapper';
const memoryPlugin = new ChatMemoryPlugin();
memoryPlugin.updateConfig({
options: {
provider: 'openai',
openaiApiKey: process.env.OPENAI_API_KEY,
saveResponses: true,
enableChat: true,
maxEntries: 1000
}
});
const proxiedServer = await wrapWithProxy(server, {
plugins: [memoryPlugin]
});
// Tool responses are automatically saved
await client.callTool({
name: 'research',
arguments: { topic: 'climate change', userId: 'user123' }
});
// Chat with your saved data
const sessionId = await memoryPlugin.startChatSession('user123');
const response = await memoryPlugin.chatWithMemory(
sessionId,
"What did I research about climate change?",
'user123'
);
console.log(response); // AI response based on saved research
You can create custom plugins by extending BasePlugin:
import { BasePlugin, PluginContext, ToolCallResult } from 'mcp-proxy-wrapper';
class MyCustomPlugin extends BasePlugin {
name = 'my-custom-plugin';
version = '1.0.0';
async afterToolCall(context: PluginContext, result: ToolCallResult): Promise<ToolCallResult> {
// Add custom metadata
return {
...result,
result: {
...result.result,
_meta: {
...result.result._meta,
processedBy: this.name,
customField: 'custom value'
}
}
};
}
}
const proxiedServer = await wrapWithProxy(server, {
plugins: [new MyCustomPlugin()]
});
const plugin = new LLMSummarizationPlugin();
// Runtime configuration updates
plugin.updateConfig({
enabled: true,
priority: 10,
options: {
minContentLength: 200,
provider: 'openai'
},
includeTools: ['research', 'analyze'], // Only these tools
excludeTools: ['chat'], // Skip these tools
debug: true
});
const proxiedServer = await wrapWithProxy(server, {
hooks: {
beforeToolCall: async (context) => {
// Add timestamp to all tool calls
context.args.timestamp = new Date().toISOString();
// Sanitize user input
if (context.args.message) {
context.args.message = context.args.message.trim();
}
}
}
});
const proxiedServer = await wrapWithProxy(server, {
hooks: {
afterToolCall: async (context, result) => {
// Add metadata to all responses
if (result.result.content) {
result.result._meta = {
toolName: context.toolName,
processedAt: new Date().toISOString(),
version: '1.0.0'
};
}
return result;
}
}
});
const proxiedServer = await wrapWithProxy(server, {
hooks: {
beforeToolCall: async (context) => {
// Block certain tools
if (context.toolName === 'delete' && !context.args.adminKey) {
return {
result: {
content: [{ type: 'text', text: 'Access denied: Admin key required' }],
isError: true
}
};
}
// Rate limiting
if (await isRateLimited(context.args.userId)) {
return {
result: {
content: [{ type: 'text', text: 'Rate limit exceeded. Try again later.' }],
isError: true
}
};
}
}
}
});
const proxiedServer = await wrapWithProxy(server, {
hooks: {
beforeToolCall: async (context) => {
// Log to monitoring service
await analytics.track('tool_call_started', {
tool: context.toolName,
userId: context.args.userId,
timestamp: Date.now()
});
},
afterToolCall: async (context, result) => {
// Handle errors
if (result.result.isError) {
await errorLogger.log({
tool: context.toolName,
error: result.result.content[0].text,
context: context.args
});
}
return result;
}
}
});
The proxy wrapper provides two main hooks:
beforeToolCall: Executed before the original tool function
afterToolCall: Executed after the original tool function
Every hook receives a ToolCallContext with:
interface ToolCallContext {
toolName: string; // Name of the tool being called
args: Record<string, any>; // Tool arguments (mutable)
metadata?: Record<string, any>; // Additional context data
}
The afterToolCall hook works with ToolCallResult:
interface ToolCallResult {
result: any; // The tool's return value
metadata?: Record<string, any>; // Additional result metadata
}
wrapWithProxy(server, options)
Wraps an MCP server instance with proxy functionality.
Parameters:
server (McpServer): The MCP server to wrap
options (ProxyWrapperOptions): Configuration options
Returns:
Promise<McpServer> - A new MCP server instance with proxy capabilities
interface ProxyWrapperOptions {
hooks?: ProxyHooks; // Hook functions
plugins?: ProxyPlugin[]; // Plugin instances
pluginConfig?: Record<string, any>; // Global plugin configuration
metadata?: Record<string, any>; // Global metadata
debug?: boolean; // Enable debug logging
}
interface ProxyHooks {
beforeToolCall?: (context: ToolCallContext) => Promise<void | ToolCallResult>;
afterToolCall?: (context: ToolCallContext, result: ToolCallResult) => Promise<ToolCallResult>;
}
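The key behavioral detail in these signatures is that beforeToolCall may return a ToolCallResult to short-circuit the call, while afterToolCall always receives the (possibly short-circuited) result to transform. The following standalone sketch illustrates that dispatch order; the dispatch function and the local type declarations are ours for illustration, not part of the package's exported API:

```typescript
// Local re-declarations of the hook types, for illustration only --
// the real package exports these from 'mcp-proxy-wrapper'.
interface ToolCallContext {
  toolName: string;
  args: Record<string, any>;
  metadata?: Record<string, any>;
}

interface ToolCallResult {
  result: any;
  metadata?: Record<string, any>;
}

interface ProxyHooks {
  beforeToolCall?: (context: ToolCallContext) => Promise<void | ToolCallResult>;
  afterToolCall?: (context: ToolCallContext, result: ToolCallResult) => Promise<ToolCallResult>;
}

// Sketch of the dispatch order the proxy applies around a tool handler.
// A ToolCallResult returned from beforeToolCall short-circuits the call:
// the handler never runs and the early result is returned as-is.
async function dispatch(
  hooks: ProxyHooks,
  context: ToolCallContext,
  handler: (args: Record<string, any>) => Promise<any>
): Promise<ToolCallResult> {
  if (hooks.beforeToolCall) {
    const early = await hooks.beforeToolCall(context);
    if (early) return early; // short-circuit: handler is skipped
  }
  let result: ToolCallResult = { result: await handler(context.args) };
  if (hooks.afterToolCall) {
    result = await hooks.afterToolCall(context, result);
  }
  return result;
}
```

Whether the real proxy also runs afterToolCall on short-circuited results may differ; treat this as a mental model rather than a spec.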
The MCP Proxy Wrapper includes comprehensive testing with real MCP client-server communication:
# Run all tests
npm test
# Run with coverage
npm run test:coverage
# Run specific test suites
npm test -- --testNamePattern="Comprehensive Tests"
npm test -- --testNamePattern="Edge Cases"
npm test -- --testNamePattern="Protocol Compliance"
The proxy wrapper is designed to be a drop-in replacement:
// Before
const server = new McpServer(config);
server.tool('myTool', schema, handler);
// After
const server = new McpServer(config);
const proxiedServer = await wrapWithProxy(server, {
hooks: myHooks,
plugins: [new LLMSummarizationPlugin()]
});
proxiedServer.tool('myTool', schema, handler); // Same API!
const authProxy = await wrapWithProxy(server, {
hooks: {
beforeToolCall: async (context) => {
if (!await validateApiKey(context.args.apiKey)) {
return { result: { content: [{ type: 'text', text: 'Invalid API key' }], isError: true }};
}
}
}
});
const rateLimitedProxy = await wrapWithProxy(server, {
hooks: {
beforeToolCall: async (context) => {
const userId = context.args.userId;
if (await rateLimiter.isExceeded(userId)) {
return { result: { content: [{ type: 'text', text: 'Rate limit exceeded' }], isError: true }};
}
await rateLimiter.increment(userId);
}
}
});
const cachedProxy = await wrapWithProxy(server, {
hooks: {
beforeToolCall: async (context) => {
const cacheKey = `${context.toolName}:${JSON.stringify(context.args)}`;
const cached = await cache.get(cacheKey);
if (cached) {
return { result: cached };
}
},
afterToolCall: async (context, result) => {
const cacheKey = `${context.toolName}:${JSON.stringify(context.args)}`;
await cache.set(cacheKey, result.result, { ttl: 300 });
return result;
}
}
});
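One caveat with the caching example above: JSON.stringify is sensitive to property order, so { a: 1, b: 2 } and { b: 2, a: 1 } produce different cache keys for logically identical calls. A small key helper that sorts top-level argument keys avoids this; stableCacheKey is a hypothetical helper of ours, not part of mcp-proxy-wrapper:

```typescript
// Build a cache key that is stable under top-level property reordering.
// Note: nested objects inside argument values are still order-sensitive;
// a deep canonicalization would be needed for full stability.
function stableCacheKey(toolName: string, args: Record<string, any>): string {
  const sorted = Object.keys(args)
    .sort()
    .map((k) => `${JSON.stringify(k)}:${JSON.stringify(args[k])}`);
  return `${toolName}:{${sorted.join(',')}}`;
}
```

You would substitute stableCacheKey(context.toolName, context.args) for the JSON.stringify expression in both hooks.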
const monitoredProxy = await wrapWithProxy(server, {
hooks: {
beforeToolCall: async (context) => {
await metrics.increment('tool_calls_total', { tool: context.toolName });
context.metadata = { ...context.metadata, startTime: Date.now() };
},
afterToolCall: async (context, result) => {
const duration = Date.now() - (context.metadata?.startTime ?? Date.now());
await metrics.histogram('tool_call_duration', duration, { tool: context.toolName });
return result;
}
}
});
import { LLMSummarizationPlugin, ChatMemoryPlugin } from 'mcp-proxy-wrapper';
const aiEnhancedProxy = await wrapWithProxy(server, {
plugins: [
new LLMSummarizationPlugin({
options: {
provider: 'openai',
openaiApiKey: process.env.OPENAI_API_KEY,
summarizeTools: ['research', 'analyze', 'fetch-data'],
minContentLength: 500
}
}),
new ChatMemoryPlugin({
options: {
provider: 'openai',
openaiApiKey: process.env.OPENAI_API_KEY,
saveResponses: true,
enableChat: true
}
})
]
});
// Long research responses are automatically summarized
// All responses are saved for conversational querying
We welcome contributions! Please see our Contributing Guide for details.
git clone https://github.com/crazyrabbitLTC/mcp-proxy-wrapper.git
cd mcp-proxy-wrapper
npm install
npm run build
npm test
MIT License - see LICENSE file for details.