@celp/agent-fleet

A powerful CLI tool for creating, managing, and deploying AI agents with MCP integration
AI Agent Platform

A comprehensive platform for creating, managing, and deploying standalone AI agents with tool support and MCP server integration.

πŸš€ Key Features

  • Standalone Agent Deployment: Each agent is an independent, deployable application
  • Automatic Setup: Dependencies installed automatically during agent creation
  • Tool-Aware Instructions: Agents automatically understand their available tools (Gmail, Calendar, Slack)
  • Supported Models: gpt-5, gpt-5-mini, gpt-5-nano, o4-mini, o3
  • Distributed Architecture: Centralized development, distributed deployment
  • Multiple Deployment Targets: Lambda, Docker, HTTP servers, custom environments
  • Runtime Package: Shared ai-agent-runtime package for easy updates
  • Custom Tools: Agent-specific tools and business logic
  • MCP Server Support: Connect to Model Context Protocol servers with OAuth support
  • Template System: Pre-configured agent templates
  • Version Management: Controlled runtime updates across agents

Quick Start

Prerequisites

Create a .env file with your OpenAI API key:

cp .env.example .env
# Edit .env and add your OpenAI API key

Installation

git clone <repository>
npm install
npm run build

What's Working βœ…

Core Functionality:

  • OpenAI Agents SDK integration with auto-execution
  • Built-in tools: calculator, file operations, shell, web search, PDF tools
  • Multi-turn conversations with tool awareness
  • MCP server integration with OAuth support
  • Standalone agent deployment (Lambda, Docker, HTTP server)

CLI Features:

  • Interactive chat interface
  • Agent configuration management
  • Tool listing and execution
  • Verbose mode for debugging

Create Your First Standalone Agent

# Create a standalone agent repository (dependencies auto-installed)
npm run dev -- agent create-repo -n "my-assistant" -m "gpt-5" -i "You are a helpful assistant" --mcp-servers "google-workspace"

# Ready to use immediately! Chat with your agent:
npm run dev -- chat --agent "my-assistant"

# Or run directly from agent directory:
cd ~/.agent-fleet/agents/my-assistant/
npm run dev  # Interactive development mode

# Or build and deploy
npm run build
npm start    # HTTP server mode

Alternative: Platform-managed Agents

# Create agent in platform database (legacy)
npm run dev agent add --interactive

# Chat through platform
npm run dev chat --agent my-assistant

Try These Commands

Once you're chatting with an agent, try these examples to see tool execution in action:

"Calculate 25 * 4 + 10"              # Uses calculator tool
"Read the package.json file"         # Uses file reading tool  
"List files in the current directory" # Uses directory listing
"What's the current date?"           # Uses shell command
"Search for TypeScript best practices" # Uses web search tool

Note: Tools are automatically executed when the agent determines they're needed. Use the --verbose flag to see detailed tool inputs and outputs.

Architecture

The platform uses a distributed architecture designed for scalability:

β”Œβ”€ AI Agent CLI Platform ─┐     β”Œβ”€ ai-agent-runtime ─┐
β”‚  β”œβ”€ Agent Builder       │────▢│  β”œβ”€ AgentRuntime    β”‚
β”‚  β”œβ”€ Template System     β”‚     β”‚  β”œβ”€ Built-in Tools  β”‚
β”‚  β”œβ”€ Database (SQLite)   β”‚     β”‚  β”œβ”€ MCP Integration β”‚
β”‚  └─ Repository Manager  β”‚     β”‚  └─ Tool-Aware AI   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
            β”‚                                   β”‚
    β”Œβ”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”                           β”‚
    β”‚ Configuration β”‚                           β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜                           β”‚
            β”‚                                   β”‚
            β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
            β”‚
    β”Œβ”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚ Standalone Agents β”‚
    β”‚  β”œβ”€ Lambda        β”‚
    β”‚  β”œβ”€ Docker        β”‚
    β”‚  β”œβ”€ HTTP Server   β”‚
    β”‚  └─ Custom        β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Components

  • CLI Platform: Agent creation, management, and development tools
  • Runtime Package: Shared ai-agent-runtime package for deployed agents
  • Agent Repositories: Independent TypeScript applications with auto-generated tool awareness
  • Template System: Reusable agent configurations
  • Version Management: Centralized updates, distributed deployment

Configuration

Platform Environment Variables

Create a .env file in the project root:

OPENAI_API_KEY=your-openai-api-key
AI_MODEL=gpt-5

Agent Environment Variables

Each agent has its own .env file:

# In ~/.agent-fleet/agents/my-agent/.env
OPENAI_API_KEY=your-openai-api-key
NODE_ENV=development
PORT=3000

AWS Lambda Base Configuration

To avoid setting up AWS credentials for each agent individually, create a base .env.lambda file in the project root:

# Copy the template and fill in your AWS credentials
cp .env.lambda.example .env.lambda
# In agent-fleet/.env.lambda
AWS_ACCESS_KEY_ID=your-aws-access-key-id
AWS_SECRET_ACCESS_KEY=your-aws-secret-access-key
AWS_REGION=us-east-1
AWS_ACCOUNT_ID=123456789012
ECR_REGISTRY=123456789012.dkr.ecr.us-east-1.amazonaws.com
LAMBDA_ROLE_ARN=arn:aws:iam::123456789012:role/lambda-execution-role

When you generate new agents, they'll automatically inherit these AWS credentials. Existing agents can sync credentials using:

cd ~/.agent-fleet/agents/my-agent
npm run sync:aws-creds

Standalone Agent Development

Custom Tools

Add agent-specific tools in src/custom-tools.ts:

import { z } from 'zod';
import type { ToolDefinition } from 'ai-agent-runtime';

export async function getCustomTools(): Promise<ToolDefinition[]> {
  return [
    {
      name: 'weather_check',
      description: 'Check current weather for a location',
      parameters: z.object({
        location: z.string().describe('City name')
      }),
      execute: async ({ location }) => {
        // Your weather API integration
        return { temperature: 72, condition: 'sunny' };
      }
    }
  ];
}
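
To sanity-check a custom tool in isolation, you can validate sample input against its zod schema and call execute directly. A minimal sketch, assuming the parameters field is the zod object shown above:

import { getCustomTools } from './custom-tools.js';

// Parse sample input with the tool's zod schema, then run the tool
// directly, without going through the agent runtime.
const [weatherTool] = await getCustomTools();
const input = weatherTool.parameters.parse({ location: 'Berlin' });
console.log(await weatherTool.execute(input)); // e.g. { temperature: 72, condition: 'sunny' }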

Deployment Options

# HTTP Server
npm run build && npm start

# Docker
npm run deploy:docker

# AWS Lambda
npm run deploy:lambda

# Development
npm run dev

MCP Server Integration

Connect to external services using MCP servers:

# agent.yaml
mcpServers:
  - name: filesystem
    url: npx @modelcontextprotocol/server-filesystem /allowed/path
  - name: database
    url: node ./custom-mcp-server.js
    env:
      DB_CONNECTION: "postgresql://..."

Development

Platform Development

# Build CLI platform
npm run build
npm run typecheck

# Build runtime package
cd packages/runtime/
npm run build

Agent Development

# In agent directory
cd ~/.agent-fleet/agents/my-agent/

# Development mode
npm run dev

# Build for production
npm run build

# Type checking
npm run typecheck

# Validate configuration
npm run validate

Runtime Updates

# Update runtime package
cd packages/runtime/
npm version minor
npm publish

# Update specific agents
cd ~/.agent-fleet/agents/my-agent/
npm update ai-agent-runtime
npm run build && npm run deploy

Agent Templates

Available templates for quick agent creation:

  • sample-developer-assistant: File system and shell access for development tasks
  • personal-assistant: General purpose assistant with productivity tools
  • custom: Start from scratch
# Create from built-in template
npm run dev -- agent create-repo --template "sample-developer-assistant" -n "my-agent"

# Interactive template selection
npm run dev -- agent create-repo --interactive

⚠️ Note: The template and migrate commands have been removed from the CLI:

  β€’ npm run dev -- template [subcommand] β†’ removed
  β€’ npm run dev -- migrate [subcommand] β†’ removed

Use agent create-repo with built-in templates instead.

Version Management

Centralized Updates

  • Platform team updates runtime:

    cd packages/runtime/
    # Add new tools/features
    npm version minor  # 1.0.0 β†’ 1.1.0
    npm publish
    
  • Agent owners update selectively:

    cd ~/.agent-fleet/agents/my-agent/
    npm install ai-agent-runtime@1.1.0
    npm run build && npm run deploy
    
  • Bulk updates (optional):

    npm run dev agent bulk-update --runtime-version 1.1.0
    

Production Examples

HTTP Server Agent

// Automatically included in each agent
import { createAgentRuntime, loadManifestFromFile } from 'ai-agent-runtime';
import { getCustomTools } from './custom-tools.js';
import { startServer } from './server.js';

const manifest = await loadManifestFromFile('agent.yaml');
const customTools = await getCustomTools();
const runtime = await createAgentRuntime(manifest, customTools);
await startServer(runtime, 3000);
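
With the server running (npm start), you can exercise it from TypeScript as well. A quick sketch, assuming the HTTP server exposes the same /chat route and JSON payload as the Lambda example below:

// Requires Node 18+ for the global fetch API.
const response = await fetch('http://localhost:3000/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Hello, agent!' }),
});

console.log(await response.json());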

AWS Lambda Agent

# Deploy with streaming support
npm run deploy:lambda:complete

# Test your Lambda function
curl -X POST https://your-function-url.lambda-url.region.on.aws/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, Lambda!"}'

Docker Deployment

# Automatically included in each agent
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
RUN npm prune --omit=dev
EXPOSE 3000
CMD ["npm", "start"]

Built-in Tools

Each agent includes these built-in tools:

  • calculator: Mathematical calculations
  • read_file: Read file contents
  • write_file: Write to files
  • list_directory: List directory contents
  • shell: Execute shell commands
  • web_search: Web search via OpenAI Responses API
  • analyze_pdf: Analyze PDF documents
  • generate_pdf: Generate PDF documents

Tool Configuration

Customize tool behavior in your agent.yaml:

# Enable specific tools
enabledTools:
  - web_search
  - calculator
  - analyze_pdf

# Configure individual tools
toolConfigurations:
  web_search:
    model: gpt-5-mini           # Override model for web searches
    maxTokens: 3000            # Limit response length  
    reasoning_effort: medium    # For reasoning models (minimal/low/medium/high)
  
  analyze_pdf:
    model: gpt-5               # Model for PDF analysis
    maxTokens: 8000            # Token limit for analysis

Available configurations:

  • web_search: model (defaults to agent model), maxTokens (5000), reasoning_effort
  • analyze_pdf: model (defaults to agent model), maxTokens

πŸ“– Complete configuration guide: AGENT-CONFIGURATION.md

Contributing

Contributions are welcome! Please read the contributing guidelines before submitting PRs.

  • Runtime Package: Shared functionality for all agents
  • CLI Platform: Agent creation and management tools
  • Templates: Reusable agent configurations
  • Documentation: Guides and examples

Documentation

License

MIT License - see LICENSE file for details.
