Dank Agent Service
Docker-based AI Agent Orchestration Platform
Dank is a powerful Node.js service that allows you to define, deploy, and manage AI agents using Docker containers. Each agent runs in its own isolated environment with configurable resources, LLM providers, and custom handlers. Built for production with comprehensive CI/CD support and Docker registry integration.
Website: https://dank-ai.xyz
NPM Package: https://www.npmjs.com/package/dank-ai
Cloud Deployment: https://cloud.dank-ai.xyz - Serverless for AI Agents
Deploy to the Cloud
Serverless for AI Agents - Deploy your Dank agents seamlessly to the cloud with zero infrastructure management.
https://cloud.dank-ai.xyz - the serverless deployment platform for Dank. Scale your AI agents automatically, pay only for what you use, and focus on building great agents instead of managing servers.
Features
- Multi-LLM Support: OpenAI, Anthropic, Cohere, Ollama, and custom providers
- Docker Orchestration: Isolated agent containers with resource management
- Easy Configuration: Define agents with simple JavaScript configuration
- Real-time Monitoring: Built-in health checks and status monitoring
- Flexible Handlers: Custom event handlers for agent outputs and errors
- CLI Interface: Powerful command-line tools for agent management
- Production Builds: Build and push Docker images to registries with custom naming
- CI/CD Ready: Seamless integration with GitHub Actions, GitLab CI, and other platforms
Quick Start
Prerequisites
Before you begin, make sure you have:
- Node.js 16+ installed
- Docker Desktop or Docker Engine (will be installed automatically if missing)
- API keys for your chosen LLM provider(s)
Auto-Docker Installation: Dank will automatically detect, install, and start Docker if it's not available on your system. No manual setup required!
1. Install Dank globally
npm install -g dank-ai
2. Initialize a new project
mkdir my-agent-project
cd my-agent-project
dank init my-agent-project
This creates:
my-agent-project/
├── dank.config.js       # Your agent configuration
├── agents/              # Custom agent code (optional)
│   └── example-agent.js
└── .dank/               # Generated files
    └── project.yaml
3. Set up environment variables
Create a .env file or export environment variables:
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export COHERE_API_KEY="your-cohere-api-key"
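If you keep these keys in a .env file, you can load them with the popular dotenv package, or with a minimal hand-rolled loader. The sketch below is illustrative only (it is not part of Dank); it parses the KEY=value format shown above and never overwrites variables you have already exported:

```javascript
// Minimal .env parser sketch -- in practice you would likely use the
// `dotenv` package; this just illustrates the KEY=value format above.
function parseEnv(text) {
  const vars = {};
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    // Skip blank lines and comments
    if (!trimmed || trimmed.startsWith('#')) continue;
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue;
    const key = trimmed.slice(0, eq).trim();
    // Strip optional surrounding quotes from the value
    const value = trimmed.slice(eq + 1).trim().replace(/^["']|["']$/g, '');
    vars[key] = value;
  }
  return vars;
}

// Apply parsed values without overwriting variables already exported
function loadEnv(text, env = process.env) {
  for (const [key, value] of Object.entries(parseEnv(text))) {
    if (!(key in env)) env[key] = value;
  }
  return env;
}
```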
4. Configure your agents
Edit dank.config.js to define your agents:
const { createAgent } = require('dank');
module.exports = {
name: 'my-agent-project',
agents: [
createAgent('assistant')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo',
temperature: 0.7
})
.setPrompt('You are a helpful assistant that responds with enthusiasm!')
.setInstanceType('small')
.addHandler('output', (data) => {
console.log('Assistant says:', data);
})
]
};
5. Build Docker images (optional)
dank build
6. Run your agents
dank run
dank run --detached
7. Monitor your agents
dank status
dank status --watch
dank logs assistant
dank logs assistant --follow
8. Build for production (optional)
dank build:prod
dank build:prod --push
dank build:prod --tag v1.0.0 --registry ghcr.io --namespace myorg --push
dank build:prod --registry ghcr.io --namespace myorg --tag-by-agent --push
CLI Commands
Core Commands
dank run
dank status
dank stop [agents...]
dank stop --all
dank logs [agent]
Management Commands
dank init [name]
dank build
dank build:prod
dank clean
Agent Image Build Commands
dank build:prod
dank build:prod --push
dank build:prod --tag v1.0.0
dank build:prod --registry ghcr.io
dank build:prod --force
dank build:prod --output-metadata deployment.json
dank build:prod --json
Push Control: The --push option is the only way to push images to registries. Agent configuration defines image naming; the CLI controls pushing.
Advanced Options
dank run --detached
dank run --no-build
dank run --pull
dank status --watch
dank logs --follow
Production Build Options
dank build:prod --push
dank build:prod --tag v1.0.0
dank build:prod --registry ghcr.io
dank build:prod --namespace mycompany
dank build:prod --tag-by-agent
dank build:prod --force
dank build:prod --output-metadata <file>
dank build:prod --json
Agent Configuration
Basic Agent Setup
const agent = createAgent('my-agent')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4',
temperature: 0.8
})
.setPrompt('Your system prompt here')
.setPromptingServer({
port: 3000,
authentication: false,
maxConnections: 50
})
.setInstanceType('medium');
Adding HTTP Routes
HTTP support is enabled automatically when you add routes. Here's a simple "Hello World" POST endpoint:
const agent = createAgent('hello-agent')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo'
})
.setPromptingServer({
port: 3000
})
.post('/hello', (req, res) => {
res.json({
message: 'Hello, World!',
received: req.body,
timestamp: new Date().toISOString()
});
});
Test it:
curl -X POST http://localhost:3000/hello \
-H "Content-Type: application/json" \
-d '{"name": "User"}'
Response:
{
"message": "Hello, World!",
"received": {"name": "User"},
"timestamp": "2024-01-15T10:30:00.000Z"
}
Supported LLM Providers
OpenAI
.setLLM('openai', {
apiKey: 'your-api-key',
model: 'gpt-4',
temperature: 0.7,
maxTokens: 1000
})
Anthropic
.setLLM('anthropic', {
apiKey: 'your-api-key',
model: 'claude-3-sonnet-20240229',
maxTokens: 1000
})
Ollama (Local)
.setLLM('ollama', {
baseURL: 'http://localhost:11434',
model: 'llama2'
})
Cohere
.setLLM('cohere', {
apiKey: 'your-api-key',
model: 'command',
temperature: 0.7
})
Hugging Face
.setLLM('huggingface', {
apiKey: 'your-api-key',
model: 'microsoft/DialoGPT-medium'
})
Custom Provider
.setLLM('custom', {
baseURL: 'https://api.your-provider.com',
apiKey: 'your-key',
model: 'your-model'
})
Event Handlers
Dank provides a comprehensive event system with three main sources of events. Each event handler follows specific naming patterns for maximum flexibility and control.
Auto-Detection: Dank automatically enables communication features based on your usage:
- Event Handlers: Auto-enabled when you add .addHandler() calls
- Direct Prompting: Auto-enabled when you use .setPrompt() + .setLLM()
- HTTP API: Auto-enabled when you add routes with .get(), .post(), etc.
Event Handler Patterns
1. Direct Prompting Events (request_output)
Events triggered when agents receive and respond to direct prompts via HTTP:
agent
.addHandler('request_output', (data) => {
console.log('LLM Response:', {
prompt: data.prompt,
finalPrompt: data.finalPrompt,
response: data.response,
conversationId: data.conversationId,
processingTime: data.processingTime,
promptModified: data.promptModified,
usage: data.usage,
model: data.model
});
})
.addHandler('request_output:start', (data) => {
console.log('Processing prompt:', data.conversationId);
console.log('Original prompt:', data.prompt);
const enhancedPrompt = `Context: You are a helpful assistant. Please be concise and friendly.\n\nUser Question: ${data.prompt}`;
return {
prompt: enhancedPrompt
};
})
.addHandler('request_output:end', (data) => {
console.log('Completed in:', data.processingTime + 'ms');
console.log('Original response:', data.response.substring(0, 50) + '...');
const enhancedResponse = `${data.response}\n\n---\nThis response was generated by Dank Framework`;
return {
response: enhancedResponse
};
})
.addHandler('request_output:error', (data) => {
console.error('Prompt processing failed:', data.error);
});
Event Modification Capabilities:
- request_output:start: Can modify the prompt before it's sent to the LLM by returning an object with a prompt property
- request_output:end: Can modify the response before it's sent back to the caller by returning an object with a response property
- Event Data: All events include both original and final values, plus modification flags for tracking changes
Event Flow Timeline:
1. request_output:start → fires when the prompt is received
   - Can modify the prompt before LLM processing
   - Contains: { prompt, conversationId, context, timestamp }
2. LLM Processing → the (potentially modified) prompt is sent to the LLM
3. request_output → fires after the LLM responds successfully
   - Contains: { prompt, finalPrompt, response, conversationId, promptModified, ... }
4. request_output:end → fires after request_output, before sending to the caller
   - Can modify the response before returning it to the client
   - Contains: { prompt, finalPrompt, response, conversationId, promptModified, success, ... }
5. Response Sent → the (potentially modified) response is returned to the caller
Practical Examples:
.addHandler('request_output:start', (data) => {
const enhancedPrompt = `System: You are a helpful AI assistant. Be concise and professional.
User Question: ${data.prompt}
Please provide a clear, helpful response.`;
return { prompt: enhancedPrompt };
})
.addHandler('request_output:end', (data) => {
const brandedResponse = `${data.response}
---
Generated by Dank Framework Agent
Processing time: ${data.processingTime}ms
Conversation: ${data.conversationId}`;
return { response: brandedResponse };
})
.addHandler('request_output', (data) => {
console.log('Interaction logged:', {
originalPrompt: data.prompt,
modifiedPrompt: data.finalPrompt,
wasModified: data.promptModified,
responseLength: data.response.length,
model: data.model,
usage: data.usage
});
})
2. Tool Events (tool:*)
Events triggered by tool usage, following the pattern tool:<tool-name>:<action>:<specifics>:
agent
.addHandler('tool:httpRequest:*', (data) => {
console.log('HTTP Request Tool:', data);
});
Tool Event Pattern Structure:
- tool:<tool-name>:* - All events for a specific tool
- tool:<tool-name>:call - Tool invocation/input events
- tool:<tool-name>:response - Tool output/result events
- tool:<tool-name>:error - Tool-specific errors
Note: HTTP API routes (added via .get(), .post(), etc.) are part of the main HTTP server, not a separate tool. They don't emit tool events.
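The pattern grammar above can be made concrete with a small matcher. The helper below is illustrative only (it is not part of the Dank API); it shows how a registered pattern such as tool:httpRequest:* would be resolved against an emitted event name:

```javascript
// Illustrative wildcard matcher for event patterns like "tool:httpRequest:*".
// A "*" segment matches any remaining segments of the event name.
function matchesPattern(pattern, eventName) {
  const patternParts = pattern.split(':');
  const eventParts = eventName.split(':');
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i] === '*') return true;        // wildcard matches the rest
    if (patternParts[i] !== eventParts[i]) return false; // literal segment must match
  }
  return patternParts.length === eventParts.length;  // no extra event segments left
}
```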
3. System Events (Legacy/System)
Traditional system-level events:
agent
.addHandler('output', (data) => {
console.log('General output:', data);
})
.addHandler('error', (error) => {
console.error('System error:', error);
})
.addHandler('heartbeat', () => {
console.log('Agent heartbeat');
})
.addHandler('start', () => {
console.log('Agent started');
})
.addHandler('stop', () => {
console.log('Agent stopped');
});
Advanced Event Patterns
Wildcard Matching:
.addHandler('tool:*', (data) => {
console.log('Any tool activity:', data);
})
.addHandler('request_output:*', (data) => {
console.log('Any request event:', data);
})
Multiple Handlers:
agent
.addHandler('request_output', (data) => {
console.log('Response:', data.response);
})
.addHandler('request_output', (data) => {
saveToDatabase(data);
})
.addHandler('request_output', (data) => {
trackAnalytics(data);
});
Event Data Structures
Request Output Event Data:
{
prompt: "User's input prompt",
response: "LLM's response",
conversationId: "unique-conversation-id",
usage: { total_tokens: 150, prompt_tokens: 50, completion_tokens: 100 },
model: "gpt-3.5-turbo",
processingTime: 1250,
timestamp: "2024-01-15T10:30:00.000Z"
}
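Given that shape, a request_output handler can aggregate token usage across requests. The tracker below is a hypothetical helper written for this guide, not a Dank built-in:

```javascript
// Hypothetical usage tracker fed from `request_output` event data.
function createUsageTracker() {
  const totals = { requests: 0, total_tokens: 0, prompt_tokens: 0, completion_tokens: 0 };
  return {
    // Pass this as the handler: .addHandler('request_output', tracker.record)
    record(data) {
      totals.requests += 1;
      const usage = data.usage || {};
      totals.total_tokens += usage.total_tokens || 0;
      totals.prompt_tokens += usage.prompt_tokens || 0;
      totals.completion_tokens += usage.completion_tokens || 0;
    },
    // Read the running totals, e.g. for a metrics endpoint
    snapshot() {
      return { ...totals };
    }
  };
}
```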
Communication Method Control
Each communication method can be enabled/disabled independently:
createAgent('flexible-agent')
.setPromptingServer({
port: 3000,
authentication: false,
maxConnections: 50
})
.disableDirectPrompting()
.addHandler('request_output', (data) => {
console.log('HTTP response:', data.response);
})
.get('/api/status', (req, res) => {
res.json({ status: 'ok' });
});
Resource Management
Configure container resources:
.setInstanceType('small')
Note: setInstanceType() is only used during deployments to Dank Cloud services. When running agents locally with dank run, this setting is disregarded and containers run without resource limits.
Agent Image Configuration
Configure Docker image naming and registry settings for agent builds:
.setAgentImageConfig({
registry: 'ghcr.io',
namespace: 'mycompany',
tag: 'v1.0.0'
})
Agent Image Build Workflow
The agent image build feature allows you to create properly tagged Docker images for deployment to container registries. This is essential for:
- CI/CD Pipelines: Automated builds and deployments
- Container Orchestration: Kubernetes, Docker Swarm, etc.
- Multi-Environment Deployments: Dev, staging, production
- Version Management: Semantic versioning with tags
Note: Image pushing is controlled exclusively by the CLI --push option. Agent configuration only defines image naming (registry, namespace, tag), not push behavior.
Complete Agent Image Example
const { createAgent } = require('dank');
module.exports = {
name: 'production-system',
agents: [
createAgent('customer-service')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4',
temperature: 0.7
})
.setPrompt('You are a professional customer service representative.')
.setPromptingServer({
port: 3000,
authentication: true,
maxConnections: 100
})
.setInstanceType('medium')
.setAgentImageConfig({
registry: 'ghcr.io',
namespace: 'mycompany',
tag: 'v1.2.0'
})
.addHandler('request_output', (data) => {
console.log(`[${new Date().toISOString()}] Customer Service: ${data.response.substring(0, 100)}...`);
}),
createAgent('data-processor')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4',
temperature: 0.1
})
.setPrompt('You are a data analysis expert.')
.setPromptingServer({
port: 3001,
authentication: false,
maxConnections: 50
})
.setInstanceType('large')
.setAgentImageConfig({
registry: 'docker.io',
namespace: 'mycompany',
tag: 'latest'
})
.addHandler('request_output', (data) => {
console.log(`[Data Processor] Analysis completed: ${data.processingTime}ms`);
})
]
};
Production Build Commands
Basic Production Build:
dank build:prod
dank build:prod --config production.config.js
Registry and Tagging:
dank build:prod --tag v2.1.0
dank build:prod --registry ghcr.io --namespace myorg
dank build:prod --registry docker.io --namespace mycompany
dank build:prod --registry registry.company.com --namespace ai-agents
Push and Force Rebuild:
dank build:prod --push
dank build:prod --force
dank build:prod --force --push
dank build:prod --tag release-2024.1 --push
Deployment Metadata Output:
dank build:prod --output-metadata deployment.json
dank build:prod --push --output-metadata deployment.json
dank build:prod --config production.config.js --output-metadata deployment.json
The --output-metadata option generates a JSON file containing all deployment information needed for your backend infrastructure:
- Base image used (setBaseImage() value)
- Prompting server configuration (port, authentication, maxConnections)
- Resource limits (memory, CPU, timeout)
- Ports that need to be opened
- Features enabled (direct prompting, HTTP API, event handlers)
- HTTP server configuration (if enabled)
- LLM provider and model information
- Event handlers registered
- Environment variables required
- Build options (registry, namespace, tag, image name)
This metadata file is perfect for CI/CD pipelines to automatically configure your deployment infrastructure, determine which ports to open, and which features to enable/disable.
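For instance, a deployment script might read the metadata file and derive the ports to expose for each agent. The extractor below is a sketch written against the structure shown in the example output; any field outside that example is an assumption:

```javascript
// Sketch: derive deployment inputs from a `dank build:prod --output-metadata` file.
function extractDeploymentPlan(metadata) {
  return metadata.agents.map((agent) => ({
    name: agent.name,
    image: agent.imageName,
    // Ports listed under `ports` need to be opened on the host/service
    ports: (agent.ports || []).map((p) => p.port),
    features: agent.features,
  }));
}
```

A CI job could call this with `JSON.parse(fs.readFileSync('deployment.json'))` and feed the result to its infrastructure templates.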
Example Metadata Output:
{
"project": "my-agent-project",
"buildTimestamp": "2024-01-15T10:30:00.000Z",
"agents": [
{
"name": "customer-service",
"imageName": "ghcr.io/mycompany/customer-service:v1.2.0",
"baseImage": {
"full": "deltadarkly/dank-agent-base:nodejs-20",
"tag": "nodejs-20"
},
"promptingServer": {
"port": 3000,
"authentication": false,
"maxConnections": 50,
"timeout": 30000
},
"resources": {
"memory": "512m",
"cpu": 1,
"timeout": 30000
},
"ports": [
{
"port": 3000,
"description": "Direct prompting server"
}
],
"features": {
"directPrompting": true,
"httpApi": false,
"eventHandlers": true
},
"llm": {
"provider": "openai",
"model": "gpt-3.5-turbo",
"temperature": 0.7,
"maxTokens": 1000
},
"handlers": ["request_output", "request_output:start"],
"buildOptions": {
"registry": "ghcr.io",
"namespace": "mycompany",
"tag": "v1.2.0",
"tagByAgent": false
}
}
],
"summary": {
"total": 1,
"successful": 1,
"failed": 0,
"pushed": 1
}
}
Image Naming Convention
Default (Per-Agent Repository):
- Format: {registry}/{namespace}/{agent-name}:{tag}
- Example: ghcr.io/mycompany/customer-service:v1.2.0
Tag By Agent (Common Repository):
- Enabled with --tag-by-agent or agent.config.agentImage.tagByAgent = true
- Repository: {registry}/{namespace}/dank-agent
- Tag: normalized agent name (lowercase, [a-z0-9_.-], max 128 chars)
- Example: ghcr.io/myorg/dank-agent:customer-service
Without Configuration:
- Format: {agent-name}:{tag}
- Example: customer-service:latest
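The naming rules above can be expressed as a small function. This is a sketch of the documented convention, not Dank's internal implementation:

```javascript
// Sketch of the documented image-naming convention.
function imageName(agentName, { registry, namespace, tag = 'latest', tagByAgent = false } = {}) {
  if (tagByAgent && registry && namespace) {
    // Common repository; the tag is the normalized agent name
    const normalized = agentName
      .toLowerCase()
      .replace(/[^a-z0-9_.-]/g, '-')
      .slice(0, 128);
    return `${registry}/${namespace}/dank-agent:${normalized}`;
  }
  if (registry && namespace) {
    // Per-agent repository
    return `${registry}/${namespace}/${agentName}:${tag}`;
  }
  // Without registry configuration: bare local name
  return `${agentName}:${tag}`;
}
```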
Registry Authentication
Docker Hub:
docker login
dank build:prod --registry docker.io --namespace myusername --push
GitHub Container Registry:
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
dank build:prod --registry ghcr.io --namespace myorg --push
Private Registry:
docker login registry.company.com
dank build:prod --registry registry.company.com --namespace ai-agents --push
Build Output Example
$ dank build:prod --push
Building production Docker images...
Building production image for agent: customer-service
info: Building production image for agent: customer-service -> ghcr.io/mycompany/customer-service:v1.2.0
Step 1/3 : FROM deltadarkly/dank-agent-base:latest
 ---> 7b560f235fe3
Step 2/3 : COPY agent-code/ /app/agent-code/
 ---> d766de6e95c4
Step 3/3 : USER dankuser
 ---> Running in c773e808270c
Successfully built 43a664c636a2
Successfully tagged ghcr.io/mycompany/customer-service:v1.2.0
info: Production image 'ghcr.io/mycompany/customer-service:v1.2.0' built successfully
info: Pushing image to registry: ghcr.io/mycompany/customer-service:v1.2.0
info: Successfully pushed image: ghcr.io/mycompany/customer-service:v1.2.0
Successfully built: ghcr.io/mycompany/customer-service:v1.2.0
Successfully pushed: ghcr.io/mycompany/customer-service:v1.2.0
Build Summary:
================
Successful builds: 2
Pushed to registry: 2
Built Images:
- ghcr.io/mycompany/customer-service:v1.2.0
- docker.io/mycompany/data-processor:latest
Production build completed successfully!
CI/CD Integration
GitHub Actions Example:
name: Build and Push Production Images
on:
push:
tags:
- 'v*'
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install Dank
run: npm install -g dank-ai
- name: Login to GHCR
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Push Production Images
run: |
dank build:prod \
--registry ghcr.io \
--namespace ${{ github.repository_owner }} \
--tag ${{ github.ref_name }} \
--push
GitLab CI Example:
build_production:
stage: build
image: node:18
before_script:
- npm install -g dank-ai
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- dank build:prod --registry $CI_REGISTRY --namespace $CI_PROJECT_NAMESPACE --tag $CI_COMMIT_TAG --push
only:
- tags
Docker Compose Integration
Use your production images in Docker Compose:
version: '3.8'
services:
customer-service:
image: ghcr.io/mycompany/customer-service:v1.2.0
ports:
- "3000:3000"
environment:
- OPENAI_API_KEY=${OPENAI_API_KEY}
restart: unless-stopped
data-processor:
image: docker.io/mycompany/data-processor:latest
ports:
- "3001:3001"
environment:
- OPENAI_API_KEY=${OPENAI_API_KEY}
restart: unless-stopped
Troubleshooting Production Builds
Common Issues:
1. Registry Authentication:
   docker login ghcr.io
2. Push Permissions:
   dank build:prod --namespace your-username --push
3. Image Already Exists:
   dank build:prod --tag v1.2.1 --push
4. Build Context Issues:
   echo "node_modules/" > .dockerignore
   echo "*.log" >> .dockerignore
Project Structure
my-project/
├── dank.config.js       # Agent configuration
├── agents/              # Custom agent code (optional)
│   └── example-agent.js
└── .dank/               # Generated files
    ├── project.yaml     # Project state
    └── logs/            # Agent logs
Package Exports
When you install Dank via npm, you can import the following:
const {
createAgent,
DankAgent,
DankProject,
SUPPORTED_LLMS,
DEFAULT_CONFIG
} = require("dank");
Example Files
The example/ directory contains two configuration files:
dank.config.js - Local development example (uses ../lib/index.js)
dank.config.template.js - Production template (uses require("dank"))
For Local Development
dank run --config example/dank.config.js
For Production Use
cp example/dank.config.template.js ./dank.config.js
npm install dank-ai
dank run
Docker Architecture
Dank uses a layered Docker approach:
- Base Image (deltadarkly/dank-agent-base): Common runtime with Node.js and LLM clients
- Agent Images: Extend base image with agent-specific code and custom tags
- Containers: Running instances with resource limits and networking
Container Features
- Isolated Environments: Each agent runs in its own container
- Resource Limits: Memory and CPU constraints per agent
- Health Monitoring: Built-in health checks and status reporting
- Automatic Restarts: Container restart policies for reliability
- Logging: Centralized log collection and viewing
Automatic Docker Management
Dank automatically handles Docker installation and startup for you:
Auto-Detection & Installation
When you run any Dank command, it will:
1. Check if Docker is installed - runs docker --version to detect the installation
2. Install Docker if missing - automatically installs Docker for your platform:
   - macOS: uses Homebrew to install Docker Desktop
   - Linux: installs Docker CE via the apt package manager
   - Windows: uses Chocolatey to install Docker Desktop
3. Start Docker if stopped - automatically starts the Docker service
4. Wait for availability - ensures Docker is ready before proceeding
Platform-Specific Installation
macOS:
brew install --cask docker
open -a Docker
Linux (Ubuntu/Debian):
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker $USER
Windows:
choco install docker-desktop -y
start "" "C:\Program Files\Docker\Docker\Docker Desktop.exe"
Manual Fallback
If automatic installation fails, Dank will provide clear instructions:
Docker installation failed: Homebrew not found
Please install Docker Desktop manually from:
https://www.docker.com/products/docker-desktop/
Status Messages
Dank provides clear feedback during the process:
Checking Docker availability...
Docker is not installed. Installing Docker...
Installing Docker Desktop for macOS...
Installing Docker Desktop via Homebrew...
Docker installation completed
Starting Docker Desktop...
Waiting for Docker to become available...
Docker is now available
Docker connection established
Using Dank in Your Project
Step-by-Step Integration Guide
1. Project Setup
npm install -g dank-ai
dank init
2. Basic Agent Configuration
Start with a simple agent configuration in dank.config.js:
const { createAgent } = require('dank');
module.exports = {
name: 'my-project',
agents: [
createAgent('helper')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo'
})
.setPrompt('You are a helpful assistant.')
.addHandler('output', console.log)
]
};
3. Multi-Agent Setup
Configure multiple specialized agents:
const { createAgent } = require('dank');
module.exports = {
name: 'multi-agent-system',
agents: [
createAgent('customer-service')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo',
temperature: 0.7
})
.setPrompt(`
You are a friendly customer service representative.
- Be helpful and professional
- Resolve customer issues quickly
- Escalate complex problems appropriately
`)
.setInstanceType('small')
.addHandler('output', (data) => {
console.log('[Customer Service]:', data);
}),
createAgent('analyst')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4',
temperature: 0.3
})
.setPrompt(`
You are a data analyst expert.
- Analyze trends and patterns
- Provide statistical insights
- Create actionable recommendations
`)
.setInstanceType('medium')
.addHandler('output', (data) => {
console.log('[Analyst]:', data);
}),
createAgent('content-creator')
.setLLM('anthropic', {
apiKey: process.env.ANTHROPIC_API_KEY,
model: 'claude-3-sonnet-20240229'
})
.setPrompt(`
You are a creative content writer.
- Write engaging, original content
- Adapt tone to target audience
- Follow brand guidelines
`)
.setInstanceType('small')
.addHandler('output', (data) => {
console.log('[Content Creator]:', data);
})
]
};
Common Use Cases
Use Case 1: Customer Support Automation
createAgent('support-bot')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo'
})
.setPrompt(`
You are a customer support specialist for [Your Company].
Guidelines:
- Always be polite and helpful
- For technical issues, provide step-by-step solutions
- If you cannot resolve an issue, escalate to human support
- Use the customer's name when available
Knowledge Base:
- Product features: [list your features]
- Common issues: [list common problems and solutions]
- Contact info: support@yourcompany.com
`)
.addHandler('output', (response) => {
sendToCustomer(response);
})
.addHandler('error', (error) => {
escalateToHuman(error);
});
Use Case 2: Content Generation Pipeline
const contentAgents = [
createAgent('researcher')
.setLLM('openai', { model: 'gpt-4' })
.setPrompt('Research and gather information on given topics')
.addHandler('output', (research) => {
triggerContentCreation(research);
}),
createAgent('writer')
.setLLM('anthropic', { model: 'claude-3-sonnet' })
.setPrompt('Write engaging blog posts based on research data')
.addHandler('output', (article) => {
saveDraft(article);
notifyEditor(article);
}),
createAgent('seo-optimizer')
.setLLM('openai', { model: 'gpt-3.5-turbo' })
.setPrompt('Optimize content for SEO and readability')
.addHandler('output', (optimizedContent) => {
publishContent(optimizedContent);
})
];
Use Case 3: Data Analysis Workflow
createAgent('data-processor')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4',
temperature: 0.1
})
.setPrompt(`
You are a data analyst. Analyze the provided data and:
1. Identify key trends and patterns
2. Calculate important metrics
3. Provide actionable insights
4. Format results as JSON
`)
.setInstanceType('large')
.addHandler('output', (analysis) => {
try {
const results = JSON.parse(analysis);
saveAnalysisResults(results);
generateReport(results);
checkAlerts(results);
} catch (error) {
console.error('Failed to parse analysis:', error);
}
});
Advanced Configuration
Custom Agent Code
For complex logic, create custom agent files in the agents/ directory:
module.exports = {
async main(llmClient, handlers) {
console.log('Custom agent starting...');
setInterval(async () => {
try {
const response = await llmClient.chat.completions.create({
model: 'gpt-3.5-turbo',
messages: [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'Generate a daily report' }
]
});
const outputHandlers = handlers.get('output') || [];
outputHandlers.forEach(handler =>
handler(response.choices[0].message.content)
);
} catch (error) {
const errorHandlers = handlers.get('error') || [];
errorHandlers.forEach(handler => handler(error));
}
}, 60000);
},
handlers: {
output: [
(data) => console.log('Custom output:', data)
],
error: [
(error) => console.error('Custom error:', error)
]
}
};
Environment-Specific Configuration
const { createAgent } = require('dank');
const isDevelopment = process.env.NODE_ENV === 'development';
const isProduction = process.env.NODE_ENV === 'production';
module.exports = {
name: 'my-project',
agents: [
createAgent('main-agent')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: isDevelopment ? 'gpt-3.5-turbo' : 'gpt-4',
temperature: isDevelopment ? 0.9 : 0.7
})
.setInstanceType(isDevelopment ? 'small' : 'medium')
.addHandler('output', (data) => {
if (isDevelopment) {
console.log('DEV:', data);
} else {
logger.info('Agent output', { data });
}
})
]
};
Troubleshooting
Common Issues and Solutions
1. Docker Connection Issues
docker --version
docker ps
sudo systemctl start docker
1a. Docker Installation Issues
brew install --cask docker
open -a Docker
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo usermod -aG docker $USER
choco install docker-desktop -y
2. API Key Issues
echo $OPENAI_API_KEY
export OPENAI_API_KEY="sk-your-actual-key-here"
echo "OPENAI_API_KEY=sk-your-actual-key-here" > .env
3. Base Image Not Found
dank build --base
4. Container Resource Issues
createAgent('my-agent')
.setInstanceType('medium') // Increase from 'small' to 'medium'
5. Agent Not Starting
dank logs agent-name
docker ps -f name=dank-
docker logs container-id
Best Practices
1. Resource Management
createAgent('light-agent')
.setInstanceType('small');
createAgent('heavy-agent')
.setInstanceType('large');
2. Error Handling
createAgent('robust-agent')
.addHandler('error', (error) => {
console.error('Agent error:', error.message);
logError(error);
if (error.type === 'CRITICAL') {
sendAlert(error);
}
scheduleRetry(error.context);
})
.addHandler('output', (data) => {
try {
processOutput(data);
} catch (error) {
console.error('Output processing failed:', error);
}
});
3. Environment Configuration
const config = {
development: {
model: 'gpt-3.5-turbo',
memory: '256m',
logLevel: 'debug'
},
production: {
model: 'gpt-4',
memory: '1g',
logLevel: 'info'
}
};
const env = process.env.NODE_ENV || 'development';
const settings = config[env];
createAgent('environment-aware')
.setLLM('openai', {
model: settings.model,
temperature: 0.7
})
.setInstanceType(settings.instanceType || 'small');
4. Monitoring and Logging
createAgent('monitored-agent')
.addHandler('output', (data) => {
logger.info('Agent output', {
agent: 'monitored-agent',
timestamp: new Date().toISOString(),
data: data.substring(0, 100)
});
})
.addHandler('error', (error) => {
logger.error('Agent error', {
agent: 'monitored-agent',
error: error.message,
stack: error.stack
});
})
.addHandler('start', () => {
logger.info('Agent started', { agent: 'monitored-agent' });
});
5. Security Considerations
createAgent('secure-agent')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo'
})
.setPrompt(`
You are a helpful assistant.
IMPORTANT SECURITY RULES:
- Never reveal API keys or sensitive information
- Don't execute system commands
- Validate all inputs before processing
- Don't access external URLs unless explicitly allowed
`)
.addHandler('output', (data) => {
const sanitized = sanitizeOutput(data);
console.log(sanitized);
});
6. Parallel Agent Management
module.exports = {
agents: [
createAgent('analyzer').setInstanceType('medium'),
createAgent('processor').setInstanceType('large'),
createAgent('notifier').setInstanceType('small')
]
};
7. Efficient Prompt Design
.setPrompt(`
You are a customer service agent. Follow these steps:
1. Greet the customer politely
2. Understand their issue by asking clarifying questions
3. Provide a solution or escalate if needed
4. Confirm resolution
Response format: JSON with fields: greeting, questions, solution, status
`);
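When a prompt pins down a response format like this, it is worth validating the agent's output in a handler before acting on it. The checker below is a minimal sketch for the JSON shape requested above; the field names come from that prompt and are otherwise hypothetical:

```javascript
// Validate that an agent response matches the JSON shape requested in the prompt:
// fields greeting, questions, solution, status.
function parseAgentReply(raw) {
  let reply;
  try {
    reply = JSON.parse(raw);
  } catch (err) {
    return { ok: false, error: 'response was not valid JSON' };
  }
  const required = ['greeting', 'questions', 'solution', 'status'];
  const missing = required.filter((field) => !(field in reply));
  if (missing.length > 0) {
    return { ok: false, error: `missing fields: ${missing.join(', ')}` };
  }
  return { ok: true, reply };
}
```

An output handler can then retry or escalate whenever `ok` is false instead of passing malformed data downstream.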
Development Workflow
1. Local Development
NODE_ENV=development dank run
dank stop --all
dank run --build
createAgent('dev-agent').setInstanceType('small')
2. Testing Agents
dank run --detached
dank logs test-agent --follow
curl http://localhost:3001/health
docker stats dank-test-agent
3. Production Deployment
export NODE_ENV=production
dank build --force
dank run --detached
dank status --watch
Monitoring and Debugging
dank status --watch
dank logs my-agent --follow
docker ps -f name=dank-
curl http://localhost:3001/health
Installation
Prerequisites
- Node.js 16+
- Docker Desktop or Docker Engine
- npm or yarn
Global Installation
npm install -g dank-ai
Local Development
git clone https://github.com/your-org/dank
cd dank
npm install
npm link
Contributing
1. Fork the repository
2. Create a feature branch: git checkout -b feature/amazing-feature
3. Commit changes: git commit -m 'Add amazing feature'
4. Push to the branch: git push origin feature/amazing-feature
5. Open a Pull Request
License
ISC License - see LICENSE file for details.
Support