
Docker-based AI Agent Orchestration Platform
Dank is a powerful Node.js service that allows you to define, deploy, and manage AI agents using Docker containers. Each agent runs in its own isolated environment with configurable resources, LLM providers, and custom handlers. Built for production with comprehensive CI/CD support and Docker registry integration.
Website: https://dank-ai.xyz
NPM Package: https://www.npmjs.com/package/dank-ai
Cloud Deployment: https://cloud.dank-ai.xyz - Serverless for AI Agents
Deploy your Dank agents to the cloud with zero infrastructure management: agents scale automatically, you pay only for what you use, and you focus on building great agents instead of managing servers.
Before you begin, make sure you have Node.js and npm installed.
Auto-Docker Installation: Dank will automatically detect, install, and start Docker if it's not available on your system. No manual setup required!
npm install -g dank-ai
# Create and navigate to your project directory
mkdir my-agent-project
cd my-agent-project
# Initialize Dank project
dank init my-agent-project
This creates:
my-agent-project/
├── dank.config.js       # Your agent configuration
├── agents/              # Custom agent code (optional)
│   └── example-agent.js
└── .dank/               # Generated files
    └── project.yaml
Create a .env file or export environment variables:
# For OpenAI
export OPENAI_API_KEY="your-openai-api-key"
# For Anthropic
export ANTHROPIC_API_KEY="your-anthropic-api-key"
# For Cohere
export COHERE_API_KEY="your-cohere-api-key"
Edit dank.config.js to define your agents:
const { createAgent } = require('dank');
module.exports = {
name: 'my-agent-project',
agents: [
createAgent('assistant')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo',
temperature: 0.7
})
.setPrompt('You are a helpful assistant that responds with enthusiasm!')
.setInstanceType('small')
.addHandler('output', (data) => {
console.log('Assistant says:', data);
})
]
};
# Build agent images (base image is pulled automatically)
dank build
Start your agents:
# Start all agents
dank run
# Or run in detached mode (background)
dank run --detached
# Check agent status
dank status
# Watch status in real-time
dank status --watch
# View agent logs
dank logs assistant
# Follow logs in real-time
dank logs assistant --follow
# Build production images with custom naming
dank build:prod
# Build and push to registry
dank build:prod --push
# Build with custom tag and registry
dank build:prod --tag v1.0.0 --registry ghcr.io --namespace myorg --push
# Use a common image name and tag by agent
dank build:prod --registry ghcr.io --namespace myorg --tag-by-agent --push
dank run # Start all defined agents
dank status # Show agent status
dank stop [agents...] # Stop specific agents
dank stop --all # Stop all agents
dank logs [agent] # View agent logs
dank init [name] # Initialize new project
dank build # Build Docker images
dank build:prod # Build agent images with custom naming
dank clean # Clean up Docker resources
dank build:prod # Build with agent image config
dank build:prod --push # Build and push to registry (CLI only)
dank build:prod --tag v1.0.0 # Build with custom tag
dank build:prod --registry ghcr.io # Build for specific registry
dank build:prod --force # Force rebuild without cache
dank build:prod --output-metadata deployment.json # Generate deployment metadata
dank build:prod --json # Output JSON summary to stdout
Push Control: The --push option is the only way to push images to registries. Agent configuration defines naming; the CLI controls pushing.
dank run --detached # Run in background
dank run --no-build # Skip rebuilding images (default is to rebuild)
dank run --pull # Pull latest base image before building
dank status --watch # Live status monitoring
dank logs --follow # Follow log output
dank build:prod --push # Build and push to registry
dank build:prod --tag v1.0.0 # Build with custom tag
dank build:prod --registry ghcr.io # Build for GitHub Container Registry
dank build:prod --namespace mycompany # Build with custom namespace
dank build:prod --tag-by-agent # Use agent name as tag (common repo)
dank build:prod --force # Force rebuild without cache
dank build:prod --output-metadata <file> # Output deployment metadata JSON
dank build:prod --json # Output machine-readable JSON summary
const agent = createAgent('my-agent')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4',
temperature: 0.8
})
.setPrompt('Your system prompt here')
.setPromptingServer({
port: 3000,
authentication: false,
maxConnections: 50
})
.setInstanceType('medium');
HTTP automatically enables when you add routes. Here's a simple "Hello World" POST endpoint:
const agent = createAgent('hello-agent')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo'
})
.setPromptingServer({
port: 3000
})
// Add a POST endpoint (HTTP auto-enables)
.post('/hello', (req, res) => {
res.json({
message: 'Hello, World!',
received: req.body,
timestamp: new Date().toISOString()
});
});
Test it:
curl -X POST http://localhost:3000/hello \
-H "Content-Type: application/json" \
-d '{"name": "User"}'
Response:
{
"message": "Hello, World!",
"received": {"name": "User"},
"timestamp": "2024-01-15T10:30:00.000Z"
}
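The same call can be made from Node (18+, where fetch is a global) instead of curl. A minimal client sketch, assuming the hello-agent above is running locally:

```javascript
// Minimal client for the /hello endpoint shown above.
// Assumes a Dank agent (or any compatible server) is listening on baseUrl.
async function sayHello(name, baseUrl = 'http://localhost:3000') {
  const res = await fetch(`${baseUrl}/hello`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name })
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json(); // { message, received, timestamp }
}
```

Usage: `sayHello('User').then(console.log);`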
.setLLM('openai', {
apiKey: 'your-api-key',
model: 'gpt-4',
temperature: 0.7,
maxTokens: 1000
})
.setLLM('anthropic', {
apiKey: 'your-api-key',
model: 'claude-3-sonnet-20240229',
maxTokens: 1000
})
.setLLM('ollama', {
baseURL: 'http://localhost:11434',
model: 'llama2'
})
.setLLM('cohere', {
apiKey: 'your-api-key',
model: 'command',
temperature: 0.7
})
.setLLM('huggingface', {
apiKey: 'your-api-key',
model: 'microsoft/DialoGPT-medium'
})
.setLLM('custom', {
baseURL: 'https://api.your-provider.com',
apiKey: 'your-key',
model: 'your-model'
})
Dank provides a comprehensive event system with three main sources of events. Each event handler follows specific naming patterns for maximum flexibility and control.
Auto-Detection: Dank automatically enables communication features based on your usage:
- Event Handlers: auto-enabled when you add .addHandler() calls
- Direct Prompting: auto-enabled when you use .setPrompt() + .setLLM()
- HTTP API: auto-enabled when you add routes with .get(), .post(), etc.
Direct Prompting Events (request_output): triggered when agents receive and respond to direct prompts via HTTP:
agent
// Main LLM response event
.addHandler('request_output', (data) => {
console.log('LLM Response:', {
prompt: data.prompt, // Original prompt
finalPrompt: data.finalPrompt, // Modified prompt (if changed)
response: data.response, // LLM response
conversationId: data.conversationId,
processingTime: data.processingTime,
promptModified: data.promptModified, // Boolean: was prompt modified?
usage: data.usage,
model: data.model
});
})
// Lifecycle events with modification capabilities
.addHandler('request_output:start', (data) => {
console.log('Processing prompt:', data.conversationId);
console.log('Original prompt:', data.prompt);
// β¨ MODIFY PROMPT: Return modified data to change the prompt sent to LLM
const enhancedPrompt = `Context: You are a helpful assistant. Please be concise and friendly.\n\nUser Question: ${data.prompt}`;
return {
prompt: enhancedPrompt // This will replace the original prompt
};
})
.addHandler('request_output:end', (data) => {
console.log('Completed in:', data.processingTime + 'ms');
console.log('Original response:', data.response.substring(0, 50) + '...');
// β¨ MODIFY RESPONSE: Return modified data to change the response sent to caller
const enhancedResponse = `${data.response}\n\n---\nπ‘ This response was generated by Dank Framework`;
return {
response: enhancedResponse // This will replace the original response
};
})
.addHandler('request_output:error', (data) => {
console.error('Prompt processing failed:', data.error);
});
Event Modification Capabilities:
- request_output:start: can modify the prompt before it is sent to the LLM by returning an object with a prompt property
- request_output:end: can modify the response before it is sent back to the caller by returning an object with a response property

Event Flow Timeline:
1. request_output:start fires when the prompt is received: { prompt, conversationId, context, timestamp }
2. LLM processing: the (potentially modified) prompt is sent to the LLM
3. request_output fires after the LLM responds successfully: { prompt, finalPrompt, response, conversationId, promptModified, ... }
4. request_output:end fires after request_output, before the response is sent to the caller: { prompt, finalPrompt, response, conversationId, promptModified, success, ... }
5. Response sent: the (potentially modified) response goes back to the caller

Practical Examples:
// Example 1: Add context and formatting to prompts
.addHandler('request_output:start', (data) => {
// Add system context and format the user's question
const enhancedPrompt = `System: You are a helpful AI assistant. Be concise and professional.
User Question: ${data.prompt}
Please provide a clear, helpful response.`;
return { prompt: enhancedPrompt };
})
// Example 2: Add metadata and branding to responses
.addHandler('request_output:end', (data) => {
// Add footer with metadata and branding
const brandedResponse = `${data.response}
---
π€ Generated by Dank Framework Agent
β±οΈ Processing time: ${data.processingTime}ms
π Conversation: ${data.conversationId}`;
return { response: brandedResponse };
})
// Example 3: Log and analyze all interactions
.addHandler('request_output', (data) => {
// Log for analytics
console.log('Interaction logged:', {
originalPrompt: data.prompt,
modifiedPrompt: data.finalPrompt,
wasModified: data.promptModified,
responseLength: data.response.length,
model: data.model,
usage: data.usage
});
})
Tool Events (tool:*): triggered by tool usage, following the pattern tool:<tool-name>:<action>:<specifics>:
agent
// Example: Tool events for built-in tools
.addHandler('tool:httpRequest:*', (data) => {
// Listen to ALL HTTP request tool events
console.log('HTTP Request Tool:', data);
});
Tool Event Pattern Structure:
- tool:<tool-name>:* - all events for a specific tool
- tool:<tool-name>:call - tool invocation/input events
- tool:<tool-name>:response - tool output/result events
- tool:<tool-name>:error - tool-specific errors

Note: HTTP API routes (added via .get(), .post(), etc.) are part of the main HTTP server, not a separate tool. They don't emit tool events.
Traditional system-level events:
agent
.addHandler('output', (data) => {
console.log('General output:', data);
})
.addHandler('error', (error) => {
console.error('System error:', error);
})
.addHandler('heartbeat', () => {
console.log('Agent heartbeat');
})
.addHandler('start', () => {
console.log('Agent started');
})
.addHandler('stop', () => {
console.log('Agent stopped');
});
Wildcard Matching:
// Listen to all tool events
.addHandler('tool:*', (data) => {
console.log('Any tool activity:', data);
})
// Listen to all request outputs
.addHandler('request_output:*', (data) => {
console.log('Any request event:', data);
})
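Conceptually, these wildcard patterns match on event-name prefixes. A small illustrative matcher (not Dank's internal implementation):

```javascript
// Illustrative prefix matcher for patterns like 'tool:httpRequest:*'.
// Exact names match directly; a trailing ':*' matches any event
// under that prefix.
function matchesPattern(pattern, eventName) {
  if (pattern === eventName) return true;
  if (pattern.endsWith(':*')) {
    const prefix = pattern.slice(0, -1); // keep the trailing ':'
    return eventName.startsWith(prefix);
  }
  return false;
}
```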
Multiple Handlers:
// Multiple handlers for the same event
agent
.addHandler('request_output', (data) => {
// Log to console
console.log('Response:', data.response);
})
.addHandler('request_output', (data) => {
// Save to database
saveToDatabase(data);
})
.addHandler('request_output', (data) => {
// Send to analytics
trackAnalytics(data);
});
Request Output Event Data:
{
prompt: "User's input prompt",
response: "LLM's response",
conversationId: "unique-conversation-id",
usage: { total_tokens: 150, prompt_tokens: 50, completion_tokens: 100 },
model: "gpt-3.5-turbo",
processingTime: 1250,
timestamp: "2024-01-15T10:30:00.000Z"
}
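One way to consume this payload is a token-usage aggregator handler. A sketch assuming the event data shape above (the helper itself is illustrative, not part of the Dank API):

```javascript
// Accumulates token usage across request_output events.
// Field names follow the event data shape documented above.
function createUsageTracker() {
  const totals = { requests: 0, total_tokens: 0, prompt_tokens: 0, completion_tokens: 0 };
  return {
    handler(data) {
      const usage = data.usage || {};
      totals.requests += 1;
      totals.total_tokens += usage.total_tokens || 0;
      totals.prompt_tokens += usage.prompt_tokens || 0;
      totals.completion_tokens += usage.completion_tokens || 0;
    },
    snapshot() {
      return { ...totals };
    }
  };
}
```

Attach it like any other handler: `const tracker = createUsageTracker(); agent.addHandler('request_output', tracker.handler);`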
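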
Each communication method can be enabled/disabled independently:
createAgent('flexible-agent')
// Configure direct prompting with specific settings
.setPromptingServer({
port: 3000,
authentication: false,
maxConnections: 50
})
.disableDirectPrompting() // Disable if needed
// Listen to direct prompting events only
.addHandler('request_output', (data) => {
console.log('HTTP response:', data.response);
})
// Add HTTP API routes (HTTP auto-enables)
.get('/api/status', (req, res) => {
res.json({ status: 'ok' });
});
Configure container resources:
.setInstanceType('small') // Options: 'small', 'medium', 'large', 'xlarge'
// small: 512m, 1 CPU
// medium: 1g, 2 CPU
// large: 2g, 2 CPU
// xlarge: 4g, 4 CPU
Note: setInstanceType() is only used during deployments to Dank Cloud services. When running agents locally with dank run, this setting is disregarded and containers run without resource limits.
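For reference, the instance sizes above map to these container resources. An illustrative lookup helper (not part of the Dank API, shown only to make the mapping concrete):

```javascript
// Illustrative mapping of the instance sizes listed above.
const INSTANCE_TYPES = {
  small:  { memory: '512m', cpus: 1 },
  medium: { memory: '1g',   cpus: 2 },
  large:  { memory: '2g',   cpus: 2 },
  xlarge: { memory: '4g',   cpus: 4 }
};

function resourcesFor(type) {
  const spec = INSTANCE_TYPES[type];
  if (!spec) throw new Error(`Unknown instance type: ${type}`);
  return spec;
}
```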
Configure Docker image naming and registry settings for agent builds:
// Complete agent image configuration
.setAgentImageConfig({
registry: 'ghcr.io', // Docker registry URL
namespace: 'mycompany', // Organization/namespace
tag: 'v1.0.0' // Image tag
})
The agent image build feature lets you create properly tagged Docker images for deployment to container registries.
Note: Image pushing is controlled exclusively by the CLI --push option. Agent configuration only defines image naming (registry, namespace, tag), not push behavior.
const { createAgent } = require('dank');
module.exports = {
name: 'production-system',
agents: [
// Production-ready customer service agent
createAgent('customer-service')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4',
temperature: 0.7
})
.setPrompt('You are a professional customer service representative.')
.setPromptingServer({
port: 3000,
authentication: true,
maxConnections: 100
})
.setInstanceType('medium')
// Agent image configuration
.setAgentImageConfig({
registry: 'ghcr.io',
namespace: 'mycompany',
tag: 'v1.2.0'
})
.addHandler('request_output', (data) => {
// Log for production monitoring
console.log(`[${new Date().toISOString()}] Customer Service: ${data.response.substring(0, 100)}...`);
}),
// Data processing agent with different registry
createAgent('data-processor')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4',
temperature: 0.1
})
.setPrompt('You are a data analysis expert.')
.setPromptingServer({
port: 3001,
authentication: false,
maxConnections: 50
})
.setInstanceType('large')
// Different agent image configuration
.setAgentImageConfig({
registry: 'docker.io',
namespace: 'mycompany',
tag: 'latest'
})
.addHandler('request_output', (data) => {
console.log(`[Data Processor] Analysis completed: ${data.processingTime}ms`);
})
]
};
Basic Production Build:
# Build all agents with their image configuration
dank build:prod
# Build with custom configuration file
dank build:prod --config production.config.js
Registry and Tagging:
# Build with custom tag
dank build:prod --tag v2.1.0
# Build for GitHub Container Registry
dank build:prod --registry ghcr.io --namespace myorg
# Build for Docker Hub
dank build:prod --registry docker.io --namespace mycompany
# Build for private registry
dank build:prod --registry registry.company.com --namespace ai-agents
Push and Force Rebuild:
# Build and push to registry
dank build:prod --push
# Force rebuild without cache
dank build:prod --force
# Force rebuild and push
dank build:prod --force --push
# Build with custom tag and push
dank build:prod --tag release-2024.1 --push
Deployment Metadata Output:
# Generate deployment metadata JSON file
dank build:prod --output-metadata deployment.json
# Build, push, and generate metadata
dank build:prod --push --output-metadata deployment.json
# Use with custom configuration
dank build:prod --config production.config.js --output-metadata deployment.json
The --output-metadata option generates a JSON file containing the deployment information your backend infrastructure needs, including exposed ports, enabled features, resource settings, LLM configuration, and the base image (the setBaseImage() value). This metadata file is ideal for CI/CD pipelines that automatically configure deployment infrastructure, determine which ports to open, and decide which features to enable or disable.
Example Metadata Output:
{
"project": "my-agent-project",
"buildTimestamp": "2024-01-15T10:30:00.000Z",
"agents": [
{
"name": "customer-service",
"imageName": "ghcr.io/mycompany/customer-service:v1.2.0",
"baseImage": {
"full": "deltadarkly/dank-agent-base:nodejs-20",
"tag": "nodejs-20"
},
"promptingServer": {
"port": 3000,
"authentication": false,
"maxConnections": 50,
"timeout": 30000
},
"resources": {
"memory": "512m",
"cpu": 1,
"timeout": 30000
},
"ports": [
{
"port": 3000,
"description": "Direct prompting server"
}
],
"features": {
"directPrompting": true,
"httpApi": false,
"eventHandlers": true
},
"llm": {
"provider": "openai",
"model": "gpt-3.5-turbo",
"temperature": 0.7,
"maxTokens": 1000
},
"handlers": ["request_output", "request_output:start"],
"buildOptions": {
"registry": "ghcr.io",
"namespace": "mycompany",
"tag": "v1.2.0",
"tagByAgent": false
}
}
],
"summary": {
"total": 1,
"successful": 1,
"failed": 0,
"pushed": 1
}
}
Default (Per-Agent Repository):
- Format: {registry}/{namespace}/{agent-name}:{tag}
- Example: ghcr.io/mycompany/customer-service:v1.2.0

Tag By Agent (Common Repository):
- Enabled with --tag-by-agent or agent.config.agentImage.tagByAgent = true
- Format: {registry}/{namespace}/dank-agent:{agent-name} (all agents share the repository {registry}/{namespace}/dank-agent)
- Example: ghcr.io/myorg/dank-agent:customer-service

Without Configuration:
- Format: {agent-name}:{tag}
- Example: customer-service:latest

Docker Hub:
# Login to Docker Hub
docker login
# Build and push
dank build:prod --registry docker.io --namespace myusername --push
GitHub Container Registry:
# Login to GHCR
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
# Build and push
dank build:prod --registry ghcr.io --namespace myorg --push
Private Registry:
# Login to private registry
docker login registry.company.com
# Build and push
dank build:prod --registry registry.company.com --namespace ai-agents --push
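The image-naming conventions described earlier can be sketched as a small helper. This is illustrative only, not the actual Dank implementation:

```javascript
// Illustrative reconstruction of the image-naming rules above.
// Not the actual Dank implementation.
function imageName(config, agentName) {
  const { registry, namespace, tag = 'latest', tagByAgent = false } = config || {};
  if (!registry || !namespace) return `${agentName}:${tag}`;                  // no configuration
  if (tagByAgent) return `${registry}/${namespace}/dank-agent:${agentName}`;  // common repository
  return `${registry}/${namespace}/${agentName}:${tag}`;                      // per-agent repository
}
```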
$ dank build:prod --push
Building production Docker images...
Building production image for agent: customer-service
info: Building production image for agent: customer-service -> ghcr.io/mycompany/customer-service:v1.2.0
Step 1/3 : FROM deltadarkly/dank-agent-base:latest
---> 7b560f235fe3
Step 2/3 : COPY agent-code/ /app/agent-code/
---> d766de6e95c4
Step 3/3 : USER dankuser
---> Running in c773e808270c
Successfully built 43a664c636a2
Successfully tagged ghcr.io/mycompany/customer-service:v1.2.0
info: Production image 'ghcr.io/mycompany/customer-service:v1.2.0' built successfully
info: Pushing image to registry: ghcr.io/mycompany/customer-service:v1.2.0
info: Successfully pushed image: ghcr.io/mycompany/customer-service:v1.2.0
Successfully built: ghcr.io/mycompany/customer-service:v1.2.0
Successfully pushed: ghcr.io/mycompany/customer-service:v1.2.0
Build Summary:
================
Successful builds: 2
Pushed to registry: 2
Built Images:
- ghcr.io/mycompany/customer-service:v1.2.0
- docker.io/mycompany/data-processor:latest
Production build completed successfully!
GitHub Actions Example:
name: Build and Push Production Images
on:
push:
tags:
- 'v*'
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install Dank
run: npm install -g dank-ai
- name: Login to GHCR
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Push Production Images
run: |
dank build:prod \
--registry ghcr.io \
--namespace ${{ github.repository_owner }} \
--tag ${{ github.ref_name }} \
--push
GitLab CI Example:
build_production:
stage: build
image: node:18
before_script:
- npm install -g dank-ai
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- dank build:prod --registry $CI_REGISTRY --namespace $CI_PROJECT_NAMESPACE --tag $CI_COMMIT_TAG --push
only:
- tags
Use your production images in Docker Compose:
version: '3.8'
services:
customer-service:
image: ghcr.io/mycompany/customer-service:v1.2.0
ports:
- "3000:3000"
environment:
- OPENAI_API_KEY=${OPENAI_API_KEY}
restart: unless-stopped
data-processor:
image: docker.io/mycompany/data-processor:latest
ports:
- "3001:3001"
environment:
- OPENAI_API_KEY=${OPENAI_API_KEY}
restart: unless-stopped
Common Issues:
Registry Authentication:
# Error: authentication required
# Solution: Login to registry first
docker login ghcr.io
Push Permissions:
# Error: denied: push access denied
# Solution: Check namespace permissions or use personal namespace
dank build:prod --namespace your-username --push
Image Already Exists:
# Error: image already exists
# Solution: Use different tag or force rebuild
dank build:prod --tag v1.2.1 --push
Build Context Issues:
# Error: build context too large
# Solution: Add .dockerignore file
echo "node_modules/" > .dockerignore
echo "*.log" >> .dockerignore
my-project/
├── dank.config.js       # Agent configuration
├── agents/              # Custom agent code (optional)
│   └── example-agent.js
└── .dank/               # Generated files
    ├── project.yaml     # Project state
    └── logs/            # Agent logs
When you install Dank via npm, you can import the following:
const {
createAgent, // Convenience function to create agents
DankAgent, // Main agent class
DankProject, // Project management class
SUPPORTED_LLMS, // List of supported LLM providers
DEFAULT_CONFIG // Default configuration values
} = require("dank");
The examples/ directory contains two configuration files:
- dank.config.js - local development example (uses ../lib/index.js)
- dank.config.template.js - production template (uses require("dank"))
# Use the example file directly
dank run --config example/dank.config.js
# 1. Copy the template to your project
cp example/dank.config.template.js ./dank.config.js
# 2. Install dank as a dependency
npm install dank-ai
# 3. The template already uses the correct import
# const { createAgent } = require("dank");
# 4. Run your agents
dank run
Dank uses a layered Docker approach:
- Base image (deltadarkly/dank-agent-base): common runtime with Node.js and LLM clients
- Agent images: built on top of the base image with your agent code and configuration

Dank automatically handles Docker installation and startup for you. When you run any Dank command, it will:
- Run docker --version to detect an existing installation
- Install and start Docker automatically if it is missing
# Dank will automatically run:
brew install --cask docker
open -a Docker
Linux (Ubuntu/Debian):
# Dank will automatically run:
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker $USER
Windows:
# Dank will automatically run:
choco install docker-desktop -y
start "" "C:\Program Files\Docker\Docker\Docker Desktop.exe"
If automatic installation fails, Dank will provide clear instructions:
# Example output when manual installation is needed
Docker installation failed: Homebrew not found
Please install Docker Desktop manually from:
https://www.docker.com/products/docker-desktop/
Dank provides clear feedback during the process:
Checking Docker availability...
Docker is not installed. Installing Docker...
Installing Docker Desktop for macOS...
Installing Docker Desktop via Homebrew...
Docker installation completed
Starting Docker Desktop...
Waiting for Docker to become available...
Docker is now available
Docker connection established
# In your existing project directory
npm install -g dank-ai
# Initialize Dank configuration
dank init
# This creates dank.config.js in your current directory
Start with a simple agent configuration in dank.config.js:
const { createAgent } = require('dank');
module.exports = {
name: 'my-project',
agents: [
// Simple assistant agent
createAgent('helper')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo'
})
.setPrompt('You are a helpful assistant.')
.addHandler('output', console.log)
]
};
Configure multiple specialized agents:
const { createAgent } = require('dank');
module.exports = {
name: 'multi-agent-system',
agents: [
// Customer service agent
createAgent('customer-service')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo',
temperature: 0.7
})
.setPrompt(`
You are a friendly customer service representative.
- Be helpful and professional
- Resolve customer issues quickly
- Escalate complex problems appropriately
`)
.setInstanceType('small')
.addHandler('output', (data) => {
console.log('[Customer Service]:', data);
// Add your business logic here
}),
// Data analyst agent
createAgent('analyst')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4',
temperature: 0.3
})
.setPrompt(`
You are a data analyst expert.
- Analyze trends and patterns
- Provide statistical insights
- Create actionable recommendations
`)
.setInstanceType('medium')
.addHandler('output', (data) => {
console.log('[Analyst]:', data);
// Save analysis results to database
}),
// Content creator agent
createAgent('content-creator')
.setLLM('anthropic', {
apiKey: process.env.ANTHROPIC_API_KEY,
model: 'claude-3-sonnet-20240229'
})
.setPrompt(`
You are a creative content writer.
- Write engaging, original content
- Adapt tone to target audience
- Follow brand guidelines
`)
.setInstanceType('small')
.addHandler('output', (data) => {
console.log('[Content Creator]:', data);
// Process and publish content
})
]
};
createAgent('support-bot')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo'
})
.setPrompt(`
You are a customer support specialist for [Your Company].
Guidelines:
- Always be polite and helpful
- For technical issues, provide step-by-step solutions
- If you cannot resolve an issue, escalate to human support
- Use the customer's name when available
Knowledge Base:
- Product features: [list your features]
- Common issues: [list common problems and solutions]
- Contact info: support@yourcompany.com
`)
.addHandler('output', (response) => {
// Send response back to customer via your chat system
sendToCustomer(response);
})
.addHandler('error', (error) => {
// Fallback to human support
escalateToHuman(error);
});
const contentAgents = [
// Research agent
createAgent('researcher')
.setLLM('openai', { model: 'gpt-4' })
.setPrompt('Research and gather information on given topics')
.addHandler('output', (research) => {
// Pass research to writer agent
triggerContentCreation(research);
}),
// Writer agent
createAgent('writer')
.setLLM('anthropic', { model: 'claude-3-sonnet' })
.setPrompt('Write engaging blog posts based on research data')
.addHandler('output', (article) => {
// Save draft and notify editor
saveDraft(article);
notifyEditor(article);
}),
// SEO optimizer agent
createAgent('seo-optimizer')
.setLLM('openai', { model: 'gpt-3.5-turbo' })
.setPrompt('Optimize content for SEO and readability')
.addHandler('output', (optimizedContent) => {
// Publish optimized content
publishContent(optimizedContent);
})
];
createAgent('data-processor')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4',
temperature: 0.1 // Low temperature for consistent analysis
})
.setPrompt(`
You are a data analyst. Analyze the provided data and:
1. Identify key trends and patterns
2. Calculate important metrics
3. Provide actionable insights
4. Format results as JSON
`)
.setInstanceType('large') // More memory for data processing
.addHandler('output', (analysis) => {
try {
const results = JSON.parse(analysis);
// Store results in database
saveAnalysisResults(results);
// Generate reports
generateReport(results);
// Send alerts if thresholds are met
checkAlerts(results);
} catch (error) {
console.error('Failed to parse analysis:', error);
}
});
For complex logic, create custom agent files in the agents/ directory:
// agents/custom-agent.js
module.exports = {
async main(llmClient, handlers) {
console.log('Custom agent starting...');
// Your custom agent logic
setInterval(async () => {
try {
// Make LLM request
const response = await llmClient.chat.completions.create({
model: 'gpt-3.5-turbo',
messages: [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'Generate a daily report' }
]
});
// Trigger output handlers
const outputHandlers = handlers.get('output') || [];
outputHandlers.forEach(handler =>
handler(response.choices[0].message.content)
);
} catch (error) {
// Trigger error handlers
const errorHandlers = handlers.get('error') || [];
errorHandlers.forEach(handler => handler(error));
}
}, 60000); // Run every minute
},
// Define custom handlers
handlers: {
output: [
(data) => console.log('Custom output:', data)
],
error: [
(error) => console.error('Custom error:', error)
]
}
};
// dank.config.js
const { createAgent } = require('dank');
const isDevelopment = process.env.NODE_ENV === 'development';
const isProduction = process.env.NODE_ENV === 'production';
module.exports = {
name: 'my-project',
agents: [
createAgent('main-agent')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: isDevelopment ? 'gpt-3.5-turbo' : 'gpt-4',
temperature: isDevelopment ? 0.9 : 0.7
})
.setInstanceType(isDevelopment ? 'small' : 'medium')
.addHandler('output', (data) => {
if (isDevelopment) {
console.log('DEV:', data);
} else {
// Production logging
logger.info('Agent output', { data });
}
})
]
};
1. Docker Connection Issues
# Error: Cannot connect to Docker daemon
# Solution: Dank will automatically handle this!
# If automatic installation fails, manual steps:
docker --version
docker ps
# On macOS/Windows: Start Docker Desktop manually
# On Linux: Start Docker service
sudo systemctl start docker
1a. Docker Installation Issues
# If automatic installation fails, try manual installation:
# macOS (with Homebrew):
brew install --cask docker
open -a Docker
# Linux (Ubuntu/Debian):
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo usermod -aG docker $USER
# Windows (with Chocolatey):
choco install docker-desktop -y
# Then start Docker Desktop from Start Menu
2. API Key Issues
# Error: Invalid API key
# Solution: Check your environment variables
echo $OPENAI_API_KEY
# Set the key properly
export OPENAI_API_KEY="sk-your-actual-key-here"
# Or create a .env file in your project
echo "OPENAI_API_KEY=sk-your-actual-key-here" > .env
3. Base Image Not Found
# Error: Base image 'deltadarkly/dank-agent-base' not found
# Solution: The base image is pulled automatically, but you can build it manually
# You can also try pulling it manually while Docker is running: docker pull <image-name>
dank build --base
4. Container Resource Issues
# Error: Container exits with code 137 (out of memory)
# Solution: Increase the memory allocation (applies to Dank Cloud deployments; local agents run without resource limits)
createAgent('my-agent')
.setInstanceType('medium') // Increase from 'small' to 'medium'
5. Agent Not Starting
# Check agent logs for detailed error information
dank logs agent-name
# Check container status
docker ps -f name=dank-
# View Docker logs directly
docker logs container-id
// Good: Appropriate resource allocation
createAgent('light-agent')
.setInstanceType('small'); // Light tasks
createAgent('heavy-agent')
.setInstanceType('large'); // Heavy processing
// Good: Comprehensive error handling
createAgent('robust-agent')
.addHandler('error', (error) => {
console.error('Agent error:', error.message);
// Log to monitoring system
logError(error);
// Send alert if critical
if (error.type === 'CRITICAL') {
sendAlert(error);
}
// Implement retry logic
scheduleRetry(error.context);
})
.addHandler('output', (data) => {
try {
processOutput(data);
} catch (error) {
console.error('Output processing failed:', error);
}
});
// Good: Environment-specific settings
const config = {
development: {
model: 'gpt-3.5-turbo',
memory: '256m',
instanceType: 'small',
logLevel: 'debug'
},
production: {
model: 'gpt-4',
memory: '1g',
instanceType: 'medium',
logLevel: 'info'
}
};
const env = process.env.NODE_ENV || 'development';
const settings = config[env];
createAgent('environment-aware')
.setLLM('openai', {
model: settings.model,
temperature: 0.7
})
.setInstanceType(settings.instanceType || 'small'); // falls back to 'small' if not set per environment
// Good: Structured logging
createAgent('monitored-agent')
.addHandler('output', (data) => {
logger.info('Agent output', {
agent: 'monitored-agent',
timestamp: new Date().toISOString(),
data: data.substring(0, 100) // Truncate for logs
});
})
.addHandler('error', (error) => {
logger.error('Agent error', {
agent: 'monitored-agent',
error: error.message,
stack: error.stack
});
})
.addHandler('start', () => {
logger.info('Agent started', { agent: 'monitored-agent' });
});
// Good: Secure configuration
createAgent('secure-agent')
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY, // Never hardcode keys
model: 'gpt-3.5-turbo'
})
.setPrompt(`
You are a helpful assistant.
IMPORTANT SECURITY RULES:
- Never reveal API keys or sensitive information
- Don't execute system commands
- Validate all inputs before processing
- Don't access external URLs unless explicitly allowed
`)
.addHandler('output', (data) => {
// Sanitize output before logging
const sanitized = sanitizeOutput(data);
console.log(sanitized);
});
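`sanitizeOutput` is not a Dank built-in; what it should strip depends on your threat model. A minimal sketch that masks API-key-shaped strings before logging might look like this:

```javascript
// Hypothetical sanitizer: redact secret-looking substrings before logging.
function sanitizeOutput(text) {
  return String(text)
    // Mask OpenAI-style secret keys (sk-...)
    .replace(/sk-[A-Za-z0-9_-]{8,}/g, '[REDACTED]')
    // Mask long bearer tokens
    .replace(/Bearer\s+[A-Za-z0-9._-]{20,}/g, 'Bearer [REDACTED]');
}
```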
// Good: Balanced agent distribution
module.exports = {
agents: [
// CPU-intensive agents
createAgent('analyzer').setInstanceType('medium'),
// Memory-intensive agents
createAgent('processor').setInstanceType('large'),
// Light agents
createAgent('notifier').setInstanceType('small')
]
};
// Good: Clear, specific prompts
.setPrompt(`
You are a customer service agent. Follow these steps:
1. Greet the customer politely
2. Understand their issue by asking clarifying questions
3. Provide a solution or escalate if needed
4. Confirm resolution
Response format: JSON with fields: greeting, questions, solution, status
`);
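Because the prompt above requests JSON with fixed fields, the output handler can validate replies before acting on them. `parseAgentReply` below is illustrative, not part of Dank:

```javascript
// Hypothetical validator for the JSON response format requested in the prompt.
function parseAgentReply(raw) {
  const reply = JSON.parse(raw); // throws on non-JSON output
  const required = ['greeting', 'questions', 'solution', 'status'];
  const missing = required.filter((field) => !(field in reply));
  if (missing.length > 0) {
    throw new Error(`Agent reply missing fields: ${missing.join(', ')}`);
  }
  return reply;
}
```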
# 1. Start with development configuration
NODE_ENV=development dank run
# 2. Make changes to dank.config.js
# 3. Restart agents to apply changes
dank stop --all
dank run --build # Rebuild if needed
# 4. Test with reduced resources (set this in dank.config.js)
createAgent('dev-agent').setInstanceType('small')
# Test individual agents
dank run --detached
dank logs test-agent --follow
# Check health endpoints
curl http://localhost:3001/health
# Monitor resource usage
docker stats dank-test-agent
# 1. Set production environment
export NODE_ENV=production
# 2. Build optimized images
dank build --force
# 3. Start agents in detached mode
dank run --detached
# 4. Monitor and scale as needed
dank status --watch
# Watch all agents in real-time
dank status --watch
# Follow logs from specific agent
dank logs my-agent --follow
# View container details
docker ps -f name=dank-
# Check agent health
curl http://localhost:3001/health
npm install -g dank-ai
git clone https://github.com/your-org/dank
cd dank
npm install
npm link # Creates global symlink
git checkout -b feature/amazing-feature
git commit -m 'Add amazing feature'
git push origin feature/amazing-feature
ISC License - see LICENSE file for details.
FAQs
Dank Agent Service - Docker-based AI agent orchestration platform
The npm package dank-ai receives a total of 25 weekly downloads; as such, its popularity is classified as not popular.
We found that dank-ai demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.