
Use AWS S3, a highly durable object store, as a document database with this ORM.
Transform AWS S3 into a powerful document database
Cost-effective storage • Automatic encryption • ORM-like interface • Streaming API
s3db.js is a document database that transforms AWS S3 into a fully functional database using S3's metadata capabilities. Instead of traditional storage methods, it stores document data in S3's metadata fields (up to 2KB), making it highly cost-effective while providing a familiar ORM-like interface.
Perfect for:
- Database Operations
- Security & Performance
- Data Management
- Extensibility
Core Concepts: Schema & Validation • Schema Registry • Clients • Fastest Validator
Plugins: API Plugin • Identity Plugin • All Plugins
Guides: Path-based Basic + OIDC Example
Integrations: MCP Server • Model Context Protocol
Advanced: Executor Pool Benchmark • Performance Tuning • Examples • TypeScript Support
Get up and running in less than 5 minutes!
npm install s3db.js
Need deeper telemetry? Pass `taskExecutorMonitoring` alongside `executorPool`. It merges into the pool's monitoring block, making it easy to enable verbose stats/heap tracking for any database instance without touching individual resources.
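As a sketch (option names taken from the note above, values illustrative), the monitoring block sits next to the pool options in the constructor config:

```javascript
import { S3db } from "s3db.js";

// Sketch only: enables verbose executor telemetry for this database instance.
const s3db = new S3db({
  connectionString: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp",
  executorPool: { concurrency: 100 },  // pool sizing
  taskExecutorMonitoring: {            // merged into the pool's monitoring block
    enabled: true,
    collectMetrics: true
  }
});
```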
import { S3db } from "s3db.js";
const s3db = new S3db({
connectionString: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp"
});
await s3db.connect();
console.log("Connected to S3 database!");
⚡ Performance Tip: s3db.js comes with optimized HTTP client settings by default for excellent S3 performance. The default configuration includes keep-alive enabled, balanced connection pooling, and appropriate timeouts for most applications.
ℹ️ Note: You do not need to provide `ACCESS_KEY` and `SECRET_KEY` in the connection string if your environment already has S3 permissions (e.g., via IAM Role on EKS, EC2, Lambda, or other compatible clouds). s3db.js will use the default AWS credential provider chain, so credentials can be omitted for role-based or environment-based authentication. This also applies to S3-compatible clouds (MinIO, DigitalOcean Spaces, etc.) if they support such mechanisms.
Schema validation powered by fastest-validator ⚡
const users = await s3db.createResource({
name: "users",
attributes: {
name: "string|min:2|max:100",
email: "email|unique",
age: "number|integer|positive",
isActive: "boolean"
},
timestamps: true
});
// Insert a user
const user = await users.insert({
name: "John Doe",
email: "john@example.com",
age: 30,
isActive: true,
createdAt: new Date()
});
// Query the user
const foundUser = await users.get(user.id);
console.log(`Hello, ${foundUser.name}!`);
// Update the user
await users.update(user.id, { age: 31 });
// List all users
const allUsers = await users.list();
console.log(`Total users: ${allUsers.length}`);
That's it! You now have a fully functional document database running on AWS S3.
Enhance your database with powerful plugins for production-ready features. All plugins are available from the main s3db.js package:
import { S3db } from "s3db.js";
import { RelationPlugin, TTLPlugin, ReplicatorPlugin, CachePlugin } from "s3db.js";
const s3db = new S3db({
connectionString: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp",
plugins: [
// Auto-cleanup expired records (no cron jobs needed!)
new TTLPlugin({
resources: {
sessions: { ttl: 86400, onExpire: 'soft-delete' } // 24h
}
}),
// ORM-like relationships with 10-100x faster queries
new RelationPlugin({
relations: {
users: {
posts: { type: 'hasMany', resource: 'posts', foreignKey: 'userId' }
}
}
}),
// Real-time replication to BigQuery, PostgreSQL, etc.
new ReplicatorPlugin({
replicators: [{
driver: 'bigquery',
config: { projectId: 'my-project', datasetId: 'analytics' },
resources: { users: 'users_table', posts: 'posts_table' }
}]
}),
// Cache frequently accessed data (memory, S3, or filesystem)
new CachePlugin({
driver: 'memory',
ttl: 300000 // 5 minutes
})
]
});
Learn more about available plugins and their features in the Plugin Documentation.
# npm
npm install s3db.js
# pnpm
pnpm add s3db.js
# yarn
yarn add s3db.js
Some features require additional dependencies to be installed manually:
If you plan to use the API plugin, install these dependencies:
# Core API runtime
npm install s3db.js
# HTTP logging (optional, recommended)
npm install pino-http
# Authentication (optional)
npm install jose # For JWT auth
# Standalone Raffel integrations (optional)
npm install raffel
If you plan to use the replicator system with external services, install the corresponding dependencies:
# For SQS replicator (AWS SQS queues)
npm install @aws-sdk/client-sqs
# For BigQuery replicator (Google BigQuery)
npm install @google-cloud/bigquery
# For PostgreSQL replicator (PostgreSQL databases)
npm install pg
Why manual installation? These are marked as peerDependencies to keep the main package lightweight (~500KB). Only install what you need!
Contributing to s3db.js? Use our modular installation system to install only what you need:
# Clone the repo
git clone https://github.com/forattini-dev/s3db.js.git
cd s3db.js
# Install base dependencies (required)
pnpm install
# Choose your dev setup:
./scripts/install-deps.sh minimal # Core only (~50MB)
./scripts/install-deps.sh common # + Replicators + Plugins (~500MB)
./scripts/install-deps.sh full # Everything (~2GB)
# Or install specific groups:
pnpm run install:dev:replicators # PostgreSQL, BigQuery, etc.
pnpm run install:dev:plugins # API, Identity, ML, etc.
pnpm run install:dev:puppeteer # Web scraping suite
pnpm run install:dev:cloud # AWS SDK clients
See docs/README.md for the main documentation entrypoint and package.json for the current dependency groups.
s3db.js includes comprehensive TypeScript definitions out of the box. Get full type safety, autocomplete, and IntelliSense support in your IDE!
import { Database, DatabaseConfig, Resource } from 's3db.js';
// Type-safe configuration
const config: DatabaseConfig = {
connectionString: 's3://ACCESS_KEY:SECRET@bucket/path',
logLevel: 'debug',
executorPool: { concurrency: 100 } // Default - nested under executorPool
};
const db = new Database(config);
// TypeScript knows all methods and options!
await db.createResource({
name: 'users',
attributes: {
name: 'string|required',
email: 'string|required|email',
age: 'number|min:0'
}
});
// Full autocomplete for all operations
const users: Resource<any> = db.resources.users;
const user = await users.insert({ name: 'Alice', email: 'alice@example.com', age: 28 });
For even better type safety, auto-generate TypeScript interfaces from your resources:
import { generateTypes } from 's3db.js/typescript-generator';
// Generate types after creating resources
await generateTypes(db, { outputPath: './types/database.d.ts' });
See the complete example in docs/examples/typescript-usage-example.ts.
s3db.js is backend-portable. Same code, same resources, same plugins: just change the connection string.
| Backend | Connection String | Best For |
|---|---|---|
| AWS S3 | s3://KEY:SECRET@bucket?region=us-east-1 | Production, large datasets (100 GB+) |
| Cloudflare R2 | https://KEY:SECRET@ACCOUNT.r2.cloudflarestorage.com/bucket | Production, zero egress, auto 8 KB metadata |
| Cloudflare D1 | sqlite+d1://ACCOUNT/DB_ID?apiToken=TOKEN | Read-heavy, serverless, < 10 GB |
| Cloudflare D1 (Worker) | sqlite+d1://binding/DB + clientOptions: { d1Binding: env.DB } | Inside Workers (~1-5ms latency) |
| Turso | sqlite+libsql://db-org.turso.io?authToken=TOKEN | Edge reads, any runtime |
| Turso (embedded) | sqlite+libsql:///tmp/local.db?syncUrl=libsql://db.turso.io&authToken=TOKEN | 0ms local reads + remote sync |
| MinIO | http://user:pass@localhost:9000/bucket | Self-hosted, local dev |
| SQLite (file) | sqlite:///path/to/db.sqlite | Local dev, CI, single-process |
| SQLite (memory) | sqlite:///:memory: | Fast tests with SQLite behavior |
| Memory | memory://bucket/prefix | Tests (100-1000x faster) |
| Filesystem | file:///path/to/data | Local dev, debugging |
// Just change the connection string - everything else stays the same
const db = new Database({
connectionString: process.env.S3DB_CONNECTION_STRING
})
For detailed pricing comparison and decision guidance, see Choosing a Backend.
A Database is a logical container for your resources, stored in a specific S3 bucket path. The database manages resource metadata, connections, and provides the core interface for all operations.
| Parameter | Type | Default | Description |
|---|---|---|---|
connectionString | string | required | S3 connection string (see formats below) |
httpClientOptions | object | optimized | HTTP client configuration for S3 requests |
logLevel | string | - | Log level for debug output (e.g. 'debug') |
parallelism | number | 100 | Concurrency for bulk operations (each database gets its own executor pool) |
versioningEnabled | boolean | false | Enable automatic resource versioning |
passphrase | string | 'secret' | Default passphrase for field encryption |
plugins | array | [] | Array of plugin instances to extend functionality |
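A minimal sketch combining several of the options above (values are illustrative; parameter names come from the table):

```javascript
import { S3db } from "s3db.js";

const db = new S3db({
  connectionString: "s3://BUCKET_NAME/databases/myapp?region=us-east-1",
  versioningEnabled: true,                 // keep automatic resource versions
  passphrase: process.env.S3DB_PASSPHRASE, // used for field encryption ('secret' fields)
  parallelism: 100,                        // bulk-operation concurrency
  plugins: []                              // plugin instances go here
});
await db.connect();
```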
s3db.js supports multiple connection string formats for different S3 providers:
// AWS S3 (with credentials)
"s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp?region=us-east-1"
// AWS S3 (IAM role - recommended for production)
"s3://BUCKET_NAME/databases/myapp?region=us-east-1"
// MinIO (self-hosted)
"http://minioadmin:minioadmin@localhost:9000/bucket/databases/myapp"
// Digital Ocean Spaces
"https://SPACES_KEY:SPACES_SECRET@nyc3.digitaloceanspaces.com/SPACE_NAME/databases/myapp"
// LocalStack (local testing)
"http://test:test@localhost:4566/mybucket/databases/myapp"
// MemoryClient (ultra-fast in-memory testing - no S3 required!)
"memory://mybucket/databases/myapp"
// Backblaze B2
"https://KEY_ID:APPLICATION_KEY@s3.us-west-002.backblazeb2.com/BUCKET/databases/myapp"
// Cloudflare R2
"https://ACCESS_KEY:SECRET_KEY@ACCOUNT_ID.r2.cloudflarestorage.com/BUCKET/databases/myapp"
const s3db = new S3db({
connectionString: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp"
});
// No credentials needed - uses IAM role permissions
const s3db = new S3db({
connectionString: "s3://BUCKET_NAME/databases/myapp"
});
// MinIO running locally (note: http:// protocol and port)
const s3db = new S3db({
connectionString: "http://minioadmin:minioadmin@localhost:9000/mybucket/databases/myapp"
});
// Digital Ocean Spaces (NYC3 datacenter)
const s3db = new S3db({
connectionString: "https://SPACES_KEY:SPACES_SECRET@nyc3.digitaloceanspaces.com/SPACE_NAME/databases/myapp"
});
For testing, s3db.js provides MemoryClient - a pure in-memory implementation that's 100-1000x faster than LocalStack and requires zero dependencies.
Why MemoryClient? It runs entirely in memory (no LocalStack, no Docker, no network), is orders of magnitude faster, and exposes the same client API as S3, so your tests exercise the same code paths as production.
Quick Start with Connection String:
import { S3db } from 's3db.js';
// Simple - just use memory:// protocol!
const db = new S3db({
connectionString: 'memory://mybucket'
});
await db.connect();
Alternative - Manual Instantiation:
import { S3db, MemoryClient } from 's3db.js';
// Create database with MemoryClient
const db = new S3db({
client: new MemoryClient({ bucket: 'test-bucket' })
});
await db.connect();
// Use exactly like S3 - same API!
const users = await db.createResource({
name: 'users',
attributes: {
name: 'string|required',
email: 'email|required'
}
});
await users.insert({ id: 'u1', name: 'John', email: 'john@test.com' });
const user = await users.get('u1');
Connection String Options:
// Basic usage
"memory://mybucket"
// With key prefix (path)
"memory://mybucket/databases/myapp"
// With multiple path segments
"memory://testdb/level1/level2/level3"
// With query parameters
"memory://mybucket?region=us-west-2"
Advanced Features (Manual Client):
import { S3db, MemoryClient } from 's3db.js';
// Option 1: Connection string (recommended)
const db1 = new S3db({
connectionString: 'memory://test-bucket/tests/'
});
// Option 2: Manual client configuration
const db2 = new S3db({
client: new MemoryClient({
bucket: 'test-bucket',
keyPrefix: 'tests/', // Optional prefix for all keys
enforceLimits: true, // Enforce S3 2KB metadata limit
persistPath: './test-data.json', // Optional: persist to disk
logLevel: 'silent' // Disable logging
})
});
// Snapshot/Restore (perfect for tests)
const snapshot = client.snapshot(); // Capture current state
// ... run tests that modify data ...
client.restore(snapshot); // Restore to original state
// Persistence
await client.saveToDisk(); // Save to persistPath
await client.loadFromDisk(); // Load from persistPath
// Statistics
const stats = client.getStats();
console.log(`Objects: ${stats.objectCount}, Size: ${stats.totalSizeFormatted}`);
// Clear all data
client.clear();
Testing Example:
import { describe, test, beforeEach, afterEach } from 'vitest';
import { S3db } from 's3db.js';
describe('User Tests', () => {
let db, users, snapshot;
beforeEach(async () => {
// Simple connection string setup!
db = new S3db({
connectionString: 'memory://test-db/my-tests'
});
await db.connect();
users = await db.createResource({
name: 'users',
attributes: { name: 'string', email: 'email' }
});
// Save snapshot for each test
snapshot = db.client.snapshot();
});
afterEach(() => {
// Restore to clean state (faster than recreating)
db.client.restore(snapshot);
});
test('should insert user', async () => {
await users.insert({ id: 'u1', name: 'John', email: 'john@test.com' });
const user = await users.get('u1');
expect(user.name).toBe('John');
});
});
Illustrative Performance Comparison:
| Operation | LocalStack | MemoryClient | Speedup |
|---|---|---|---|
| Insert 100 records | ~2000ms | ~50ms | 40x faster |
| Query 1000 records | ~5000ms | ~100ms | 50x faster |
| Full test suite | ~120s | ~2s | 60x faster |
These numbers are workload-dependent and should be treated as directional, not as a current benchmark contract.
Full MemoryClient Documentation • Core Memory Benchmark
For workloads that need persistence without S3 infrastructure (single-process local dev,
CI, migration drills), use SqliteClient.
Why SqliteClient?
- `maxMemoryMB` to avoid OOM in heavy writes

Quick Start (Connection String):
import { S3db } from 's3db.js';
const db = new S3db({
connectionString: 'sqlite:///tmp/s3db.sqlite'
});
await db.connect();
Example with options:
import { S3db, SqliteClient } from 's3db.js';
const client = new SqliteClient({
basePath: '/tmp/s3db.sqlite',
bucket: 'myapp',
maxObjectSize: 5 * 1024 * 1024,
maxMemoryMB: 256,
});
const db = new S3db({ client });
await db.connect();
For remote SQLite-compatible backends, use the explicit remote connection string schemes:
import { Database } from 's3db.js';
const tursoDb = new Database({
connectionString: 'sqlite+libsql://my-db-my-org.turso.io?authToken=YOUR_TOKEN'
});
const d1Db = new Database({
connectionString: 'sqlite+d1://ACCOUNT_ID/DATABASE_ID?apiToken=YOUR_TOKEN'
});
Notes:
- `sqlite://` remains local-only.
- `sqlite+libsql://` uses the remote SQLite client over libsql/Turso.
- `sqlite+d1://` uses the remote SQLite client over the Cloudflare D1 HTTP API.
- `@libsql/client` is optional and only required when you use `sqlite+libsql://`.

Install the optional Turso dependency in your app:
pnpm add @libsql/client
Full SqliteClient Documentation
When you create a database, s3db.js organizes your data in a structured way within your S3 bucket:
bucket-name/
└── databases/
    └── myapp/                      # Database root (from connection string)
        ├── s3db.json               # Database metadata & resource definitions
        │
        ├── resource=users/         # Resource: users
        │   ├── data/
        │   │   ├── id=user-123     # Document (metadata in S3 metadata, optional body)
        │   │   └── id=user-456
        │   └── partition=byRegion/ # Partition: byRegion
        │       ├── region=US/
        │       │   ├── id=user-123 # Partition reference
        │       │   └── id=user-789
        │       └── region=EU/
        │           └── id=user-456
        │
        ├── resource=posts/         # Resource: posts
        │   └── data/
        │       ├── id=post-abc
        │       └── id=post-def
        │
        ├── resource=sessions/      # Resource: sessions (with TTL)
        │   └── data/
        │       ├── id=session-xyz
        │       └── id=session-qwe
        │
        ├── plugin=cache/           # Plugin: CachePlugin (global data)
        │   ├── config              # Plugin configuration
        │   └── locks/
        │       └── cache-cleanup   # Distributed lock
        │
        └── resource=wallets/       # Resource: wallets
            ├── data/
            │   └── id=wallet-123
            └── plugin=eventual-consistency/  # Plugin: scoped to resource
                ├── balance/
                │   └── transactions/
                │       └── id=txn-123        # Plugin-specific data
                └── locks/
                    └── balance-sync          # Resource-scoped lock
Key Path Patterns:
| Type | Pattern | Example |
|---|---|---|
| Metadata | s3db.json | Database schema, resources, versions |
| Document | resource={name}/data/id={id} | resource=users/data/id=user-123 |
| Partition | resource={name}/partition={partition}/{field}={value}/id={id} | resource=users/partition=byRegion/region=US/id=user-123 |
| Plugin (global) | plugin={slug}/{path} | plugin=cache/config |
| Plugin (resource) | resource={name}/plugin={slug}/{path} | resource=wallets/plugin=eventual-consistency/balance/transactions/id=txn-123 |
| Lock (global) | plugin={slug}/locks/{lockName} | plugin=ttl/locks/cleanup |
| Lock (resource) | resource={name}/plugin={slug}/locks/{lockName} | resource=wallets/plugin=eventual-consistency/locks/balance-sync |
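To make the patterns concrete, here is a hypothetical helper (not part of the s3db.js API) that builds document and partition keys the same way:

```javascript
// Hypothetical helpers mirroring the documented key layout; not s3db.js API.
function documentKey(resource, id) {
  return `resource=${resource}/data/id=${id}`;
}

function partitionKey(resource, partition, field, value, id) {
  return `resource=${resource}/partition=${partition}/${field}=${value}/id=${id}`;
}

console.log(documentKey("users", "user-123"));
// resource=users/data/id=user-123
console.log(partitionKey("users", "byRegion", "region", "US", "user-123"));
// resource=users/partition=byRegion/region=US/id=user-123
```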
Storage Layers:
- Documents - User data stored in resources
- Partitions - Organized references for O(1) queries
- Plugin Storage - Plugin-specific data
  - `plugin={slug}/...` - Shared config, caches, locks
  - `resource={name}/plugin={slug}/...` - Per-resource data

Why This Structure?
- Self-describing: `resource=`, `partition=`, `plugin=`, `id=` prefixes

import { S3db } from 's3db.js';
// Simple connection
const db = new S3db({
connectionString: 's3://ACCESS_KEY:SECRET@bucket/databases/myapp'
});
await db.connect();
// With plugins and options
const db = new S3db({
connectionString: 's3://bucket/databases/myapp',
logLevel: 'debug',
versioningEnabled: true,
executorPool: {
concurrency: 100, // Default concurrency (can increase for high-throughput)
retries: 3,
retryDelay: 1000
},
taskExecutorMonitoring: {
enabled: true,
collectMetrics: true,
sampleRate: 0.2
},
plugins: [
new CachePlugin({ ttl: 300000 }),
new MetricsPlugin()
],
httpClientOptions: {
keepAlive: true,
maxSockets: 100,
timeout: 60000
}
});
await db.connect();
| Method | Description |
|---|---|
connect() | Initialize database connection and load metadata |
createResource(config) | Create or update a resource |
getResource(name, options?) | Get existing resource instance |
resourceExists(name) | Check if resource exists |
resources.{name} | Access resource by property |
uploadMetadataFile() | Save metadata changes to S3 |
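A short sketch exercising the methods above (the memory:// backend keeps it self-contained; resource shape is illustrative):

```javascript
import { S3db } from "s3db.js";

const db = new S3db({ connectionString: "memory://demo/databases/myapp" });
await db.connect(); // initialize connection and load metadata

await db.createResource({
  name: "users",
  attributes: { name: "string|required" }
});

if (await db.resourceExists("users")) { // check before access
  const users = db.resources.users;     // access resource by property
  await users.insert({ name: "Ada" });
}
```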
Customize HTTP performance for your workload:
const db = new S3db({
connectionString: '...',
httpClientOptions: {
keepAlive: true, // Enable connection reuse
keepAliveMsecs: 1000, // Keep connections alive for 1s
maxSockets: 50, // Max 50 concurrent connections
maxFreeSockets: 10, // Keep 10 free connections in pool
timeout: 60000 // 60 second timeout
}
});
Presets:
httpClientOptions: {
keepAlive: true,
keepAliveMsecs: 1000,
maxSockets: 100, // Higher concurrency
maxFreeSockets: 20, // More free connections
timeout: 60000
}
httpClientOptions: {
keepAlive: true,
keepAliveMsecs: 5000, // Longer keep-alive
maxSockets: 200, // High concurrency
maxFreeSockets: 50, // Large connection pool
timeout: 120000 // 2 minute timeout
}
Complete documentation: See above for all Database configuration options
s3db.js uses Pino - a blazing-fast, low-overhead JSON logger (5-10x faster than console.*). The logging system is hierarchical: Database → Plugins → Resources automatically inherit log levels, with per-component override capabilities.
All components (Database, Plugins, Resources) automatically inherit the global log level:
const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'warn' // ← Database, Resources, and Plugins all inherit 'warn'
}
});
await db.usePlugin(new CachePlugin(), 'cache'); // Inherits 'warn'
await db.usePlugin(new TTLPlugin(), 'ttl'); // Inherits 'warn'
s3db.js provides two built-in format presets for different environments:
JSON Format (Production - Structured Logs):
const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'info',
format: 'json' // ← Compact JSON for log aggregation
}
});
// Output: {"level":30,"time":1234567890,"msg":"User created","userId":"123"}
Pretty Format (Development - Human Readable):
const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'debug',
format: 'pretty' // ← Colorized, readable output
}
});
// Output: [14:23:45.123] INFO: User created
// userId: "123"
Auto-Detection (Default):
// Automatically chooses format based on:
// - TTY detection (terminal vs piped)
// - NODE_ENV (development vs production)
const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'info'
// format is auto-detected
}
});
s3db.js errors automatically use toJSON() for structured logging:
import { ValidationError } from 's3db.js';
const error = new ValidationError('Invalid email', {
field: 'email',
value: 'invalid@',
statusCode: 422
});
// Logs include full error context automatically
logger.error({ err: error }, 'Validation failed');
// Output includes: name, message, code, statusCode, suggestion, stack, etc.
Fine-tune log levels for specific plugins or resources using childLevels:
const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'warn', // ← Global default
childLevels: {
// Override specific plugins
'Plugin:cache': 'debug', // Cache plugin in debug mode
'Plugin:ttl': 'trace', // TTL plugin in trace mode
'Plugin:metrics': 'error', // Metrics plugin only shows errors
'Plugin:s3-queue': 'info', // S3Queue plugin in info mode
// Override specific resources
'Resource:users': 'debug', // Users resource in debug
'Resource:logs': 'silent' // Logs resource silenced
}
}
});
Result:
- Database: warn
- Plugin:cache: debug (override)
- Plugin:ttl: trace (override)
- Plugin:metrics: error (override)
- Everything else: warn (inherited)

Plugins can use completely custom loggers that don't inherit from Database:
import { createLogger } from 's3db.js/logger';
// Create custom logger
const customLogger = createLogger({
name: 'MyApp',
level: 'trace',
// Pino options
transport: {
target: 'pino-pretty',
options: { colorize: true }
}
});
// Plugin uses custom logger instead of inheriting
const plugin = new CachePlugin({
logger: customLogger // ← Ignores inheritance
});
await db.usePlugin(plugin, 'cache');
Change log levels on the fly for specific components:
// Increase verbosity for debugging
db.setChildLevel('Plugin:cache', 'debug');
// Silence a noisy plugin
db.setChildLevel('Plugin:ttl', 'silent');
// Debug specific resource
db.setChildLevel('Resource:clicks', 'trace');
⚠️ Limitation: setChildLevel() only affects new child loggers. Loggers already created maintain their previous level.
Override logging globally using environment variables:
# Set log level
S3DB_LOG_LEVEL=debug node app.js
# Set output format (using presets)
S3DB_LOG_FORMAT=pretty node app.js # Pretty format (colorized, human-readable)
S3DB_LOG_FORMAT=json node app.js # JSON format (structured logs for production)
# Combined example
S3DB_LOG_LEVEL=debug S3DB_LOG_FORMAT=pretty node app.js
Legacy Support: The old S3DB_LOG_PRETTY environment variable is still supported for backward compatibility:
S3DB_LOG_PRETTY=true node app.js # Same as S3DB_LOG_FORMAT=pretty
S3DB_LOG_PRETTY=false node app.js # Same as S3DB_LOG_FORMAT=json
| Level | Use Case | When to Use |
|---|---|---|
silent | No logs | Tests, silent components |
fatal | Critical errors | System unusable |
error | Errors | Failed operations |
warn | Warnings | Deprecations, fallbacks |
info | Information | Default for production |
debug | Debug | Development |
trace | Full trace | Deep debugging |
const db = new Database({
connectionString: process.env.S3DB_CONNECTION,
loggerOptions: {
level: 'warn',
format: 'json', // ← Structured logs for aggregation
childLevels: {
// Info-level logging only for critical plugins
'Plugin:metrics': 'info',
'Plugin:audit': 'info'
}
}
});
const db = new Database({
connectionString: 'http://localhost:9000/bucket',
loggerOptions: {
level: 'debug',
format: 'pretty', // ← Human-readable, colorized
childLevels: {
// Trace the specific plugin you're debugging
'Plugin:cache': 'trace',
// Silence noisy plugins
'Plugin:metrics': 'silent'
}
}
});
const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'warn',
format: 'json', // ← Production format
childLevels: {
// Debug ONLY the TTL plugin
'Plugin:ttl': 'trace'
}
}
});
Plugins: Format is Plugin:{name}
await db.usePlugin(new CachePlugin(), 'cache');
// Child logger: 'Plugin:cache'
await db.usePlugin(new TTLPlugin(), 'my-ttl');
// Child logger: 'Plugin:my-ttl'
Resources: Format is Resource:{name}
await db.createResource({ name: 'users', ... });
// Child logger: 'Resource:users'
The API Plugin includes automatic HTTP request/response logging with smart detection:
Smart Detection:
- If pino-http is installed: uses the full-featured pino-http with all the bells and whistles
- If pino-http is NOT installed: falls back to a simple built-in HTTP logger

Installation (optional, recommended):
npm install pino-http
Usage:
import { APIPlugin } from 's3db.js';
const api = new APIPlugin({
port: 3000,
// Enable HTTP logging (works with or without pino-http!)
httpLogger: {
enabled: true,
autoLogging: true, // Log all requests/responses
ignorePaths: ['/health'], // Skip logging for these paths
// Custom log level based on status code
customLogLevel: (req, res, err) => {
if (err || res.statusCode >= 500) return 'error';
if (res.statusCode >= 400) return 'warn';
return 'info';
}
},
// Enable request ID tracking (recommended)
requestId: {
enabled: true,
headerName: 'X-Request-ID'
}
});
What you get:
| Feature | With pino-http | Without pino-http |
|---|---|---|
| Request logging | Full | Basic |
| Response logging | Full | Basic |
| Error logging | Full | Basic |
| Request ID | Auto | Manual |
| Custom serializers | Yes | Basic |
| Performance overhead | Minimal | Minimal |
No installation required! HTTP logging works out-of-the-box with basic features. Install pino-http for enhanced capabilities.
Automatic Logging Output:
{
"level": 30,
"time": 1234567890,
"req": {
"id": "abc123",
"method": "POST",
"url": "/users",
"headers": { "user-agent": "...", "content-type": "application/json" }
},
"res": {
"statusCode": 201,
"headers": { "content-type": "application/json" }
},
"responseTime": 45,
"msg": "request completed"
}
Features:
- Structured error logging via toJSON()
- Path filtering with ignorePaths (e.g. /health, /metrics)

Logging best practices:
- Production: format: 'json' with level: 'warn' for structured logging
- Development: format: 'pretty' with level: 'debug' for readability
- Use childLevels to isolate specific components
- Verbose levels (trace, debug) have performance impact
- Log errors with toJSON() for rich context
- Use format: 'json' in automated environments for parsing
- Enable httpLogger in API Plugin for automatic request tracking

Resources are the core abstraction in s3db.js - they define your data structure, validation rules, and behavior. Think of them as tables in traditional databases, but with much more flexibility and features.
Resources provide schema validation, behaviors for the 2KB metadata limit, CRUD and bulk operations, partitions, hooks, and middleware - all covered below.
Quick example:
const users = await db.createResource({
name: 'users',
attributes: {
email: 'email|required|unique',
password: 'secret|required',
age: 'number|min:18|max:120'
},
behavior: 'enforce-limits',
timestamps: true,
partitions: {
byAge: { fields: { age: 'number' } }
}
});
await users.insert({ email: 'john@example.com', password: 'secret123', age: 25 });
Define your data structure with powerful validation using fastest-validator - a blazing-fast validation library with comprehensive type support:
| Type | Example | Validation Rules |
|---|---|---|
string | "name: 'string|required'" | min, max, length, pattern, enum |
number | "age: 'number|min:0'" | min, max, integer, positive, negative |
boolean | "isActive: 'boolean'" | true, false |
email | "email: 'email|required'" | RFC 5322 validation |
url | "website: 'url'" | Valid URL format |
date | "createdAt: 'date'" | ISO 8601 dates |
array | "tags: 'array|items:string'" | items, min, max, unique |
object | "profile: { type: 'object', props: {...} }" | Nested validation |
| Type | Savings | Example |
|---|---|---|
secret | Encrypted | "password: 'secret|required'" - AES-256-GCM |
embedding:N | 77% | "vector: 'embedding:1536'" - Fixed-point Base62 |
ip4 | 47% | "ipAddress: 'ip4'" - Binary Base64 |
ip6 | 44% | "ipv6: 'ip6'" - Binary Base64 |
Encoding optimizations: these types are automatically stored in compact encodings (see the savings column above).
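As a sketch (resource and attribute names are illustrative), the special types above are declared like any other attribute:

```javascript
// Illustrative resource using the special types from the table above.
const servers = await db.createResource({
  name: "servers",
  attributes: {
    apiKey: "secret|required",   // encrypted with AES-256-GCM
    embedding: "embedding:1536", // fixed-point Base62, ~77% smaller
    ipAddress: "ip4",            // binary Base64, ~47% smaller
    ipv6: "ip6"                  // binary Base64, ~44% smaller
  }
});
```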
Validation powered by fastest-validator. All schemas use fastest-validator's syntax with full support for shorthand notation.
// Simple schema
{
name: 'string|required|min:2|max:100',
email: 'email|required|unique',
age: 'number|integer|min:0|max:150'
}
// Nested objects - MAGIC AUTO-DETECT! ✨ (recommended)
// Just write your object structure - s3db detects it automatically!
{
name: 'string|required',
profile: { // ← No $$type needed! Auto-detected as optional object
bio: 'string|max:500',
avatar: 'url|optional',
social: { // ← Deeply nested also works!
twitter: 'string|optional',
github: 'string|optional'
}
}
}
// Need validation control? Use $$type (when you need required/optional)
{
name: 'string|required',
profile: {
$$type: 'object|required', // ← Add required validation
bio: 'string|max:500',
avatar: 'url|optional'
}
}
// Advanced: Full control (rare cases - strict mode, etc)
{
name: 'string|required',
profile: {
type: 'object',
optional: false,
strict: true, // ← Enable strict validation
props: {
bio: 'string|max:500',
avatar: 'url|optional'
}
}
}
// Arrays with validation
{
name: 'string|required',
tags: 'array|items:string|min:1|max:10|unique',
scores: 'array|items:number|min:0|max:100'
}
// Encrypted fields
{
email: 'email|required',
password: 'secret|required',
apiKey: 'secret|required'
}
S3 metadata has a 2KB limit. Behaviors define how to handle data that exceeds this:
| Behavior | Enforcement | Data Loss | Use Case |
|---|---|---|---|
user-managed | None | Possible | Dev/Test - warnings only |
enforce-limits | Strict | No | Production - throws errors |
truncate-data | Truncates | Yes | Content management - smart truncation |
body-overflow | Splits | No | Mixed data - metadata + body |
body-only | Unlimited | No | Large docs - everything in body |
// Enforce limits (recommended for production)
const users = await db.createResource({
name: 'users',
behavior: 'enforce-limits',
attributes: { name: 'string', bio: 'string' }
});
// Body overflow for large content
const blogs = await db.createResource({
name: 'blogs',
behavior: 'body-overflow',
attributes: { title: 'string', content: 'string' }
});
// Body-only for documents
const documents = await db.createResource({
name: 'documents',
behavior: 'body-only',
attributes: { title: 'string', content: 'string', metadata: 'object' }
});
// Create
const user = await users.insert({ name: 'John', email: 'john@example.com' });
// Read
const user = await users.get('user-123');
const all = await users.list({ limit: 10, offset: 0 });
const filtered = await users.query({ isActive: true });
// Update (3 methods with different performance)
await users.update(id, { name: 'Jane' }); // GET+PUT merge (baseline)
await users.patch(id, { name: 'Jane' }); // HEAD+COPY (40-60% faster*)
await users.replace(id, fullObject); // PUT only (30-40% faster)
// *patch() uses HEAD+COPY for metadata-only behaviors
// Delete
await users.delete('user-123');
// Bulk insert
await users.insertMany([
{ name: 'User 1', email: 'user1@example.com' },
{ name: 'User 2', email: 'user2@example.com' }
]);
// Bulk get
const data = await users.getMany(['user-1', 'user-2', 'user-3']);
// Bulk delete
await users.deleteMany(['user-1', 'user-2']);
Organize data for fast queries without scanning:
const analytics = await db.createResource({
name: 'analytics',
attributes: {
userId: 'string',
event: 'string',
timestamp: 'date',
region: 'string'
},
partitions: {
// Single field
byEvent: { fields: { event: 'string' } },
// Multiple fields (composite)
byEventAndRegion: {
fields: {
event: 'string',
region: 'string'
}
},
// Nested field
byUserCountry: {
fields: {
'profile.country': 'string'
}
}
},
// Async partitions for 70-100% faster writes
asyncPartitions: true
});
// Query by partition (O(1))
const usEvents = await analytics.list({
partition: 'byEventAndRegion',
partitionValues: { event: 'click', region: 'US' }
});
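Nested-field partitions are queried the same way: the dotted key in `partitionValues` must match the partition's field path exactly. A sketch; the `profile.country` field and the `'BR'` value are illustrative:

```javascript
// Query options for the nested-field byUserCountry partition defined above.
const byCountryQuery = {
  partition: 'byUserCountry',
  partitionValues: { 'profile.country': 'BR' } // dotted path mirrors the partition definition
};

// Hypothetical usage, mirroring the list() call above:
async function listBrazilEvents(analytics) {
  return analytics.list(byCountryQuery);
}
```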
Automatic timestamp partitions:
const events = await db.createResource({
name: 'events',
attributes: { name: 'string', data: 'object' },
timestamps: true // Auto-creates byCreatedDate and byUpdatedDate partitions
});
const todayEvents = await events.list({
partition: 'byCreatedDate',
partitionValues: { createdAt: '2024-01-15' }
});
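Timestamp partitions make per-day rollups cheap. A hedged sketch: `lastNDays()` and `dailyCounts()` are hypothetical helpers, and this assumes `count()` accepts the same partition options as `list()`:

```javascript
// Hypothetical helper: the last n calendar days as 'YYYY-MM-DD' strings (UTC).
function lastNDays(n, from = new Date()) {
  return Array.from({ length: n }, (_, i) => {
    const d = new Date(from);
    d.setUTCDate(d.getUTCDate() - i);
    return d.toISOString().slice(0, 10); // e.g. '2024-01-15'
  });
}

// Assumption: count() takes partition options like list() does.
async function dailyCounts(events, days = 7) {
  const counts = {};
  for (const day of lastNDays(days)) {
    counts[day] = await events.count({
      partition: 'byCreatedDate',
      partitionValues: { createdAt: day }
    });
  }
  return counts;
}
```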
Add custom logic before/after operations:
const products = await db.createResource({
name: 'products',
attributes: { name: 'string', price: 'number', sku: 'string' },
hooks: {
// Before operations
beforeInsert: [
async (data) => {
data.sku = `PROD-${Date.now()}`;
return data;
}
],
beforeUpdate: [
async (data) => {
data.updatedAt = new Date().toISOString();
return data;
}
],
// After operations
afterInsert: [
async (data) => {
console.log(`Product ${data.name} created with SKU ${data.sku}`);
}
],
afterDelete: [
async (data) => {
await notifyWarehouse(data.sku);
}
]
}
});
Available hooks:
- `beforeInsert`, `afterInsert`
- `beforeUpdate`, `afterUpdate`
- `beforeDelete`, `afterDelete`
- `beforeGet`, `afterGet`
- `beforeList`, `afterList`

Intercept and transform method calls:
// Authentication middleware
users.useMiddleware('inserted', async (ctx, next) => {
if (!ctx.args[0].userId) {
throw new Error('Authentication required');
}
return await next();
});
// Logging middleware
users.useMiddleware('updated', async (ctx, next) => {
const start = Date.now();
const result = await next();
console.log(`Update took ${Date.now() - start}ms`);
return result;
});
// Transformation middleware: normalize input before insert
users.useMiddleware('inserted', async (ctx, next) => {
ctx.args[0].name = ctx.args[0].name.toUpperCase();
return await next();
});
Supported methods:
fetched, list, inserted, updated, deleted, deleteMany, exists, getMany, count, page, listIds, getAll
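Bulk methods can be guarded the same way. A hedged sketch of a `deleteMany` middleware; it assumes `ctx.args[0]` is the array of ids, mirroring the `ctx.args` usage above, and the cap of 100 is an arbitrary application choice:

```javascript
// Hypothetical guard: refuse oversized bulk deletes.
const MAX_BULK_DELETE = 100;

function bulkDeleteGuard(ctx, next) {
  const ids = ctx.args[0] ?? []; // assumed: first arg is the id array
  if (ids.length > MAX_BULK_DELETE) {
    throw new Error(`Refusing to delete ${ids.length} records at once (max ${MAX_BULK_DELETE})`);
  }
  return next();
}

// users.useMiddleware('deleteMany', bulkDeleteGuard);
```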
Listen to resource operations:
const users = await db.createResource({
name: 'users',
attributes: { name: 'string', email: 'string' },
// Declarative event listeners
events: {
insert: (event) => {
console.log('User created:', event.id, event.name);
},
update: [
(event) => console.log('Update detected:', event.id),
(event) => {
if (event.$before.email !== event.$after.email) {
console.log('Email changed!');
}
}
],
delete: (event) => {
console.log('User deleted:', event.id);
}
}
});
// Programmatic listeners
users.on('inserted', (event) => {
sendWelcomeEmail(event.email);
});
Available events:
inserted, updated, deleted, insertMany, deleteMany, list, count, fetched, getMany
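The same event names can drive metrics. A minimal sketch; `instrument()` is a hypothetical helper and `metrics` is a hypothetical client with an `increment(name)` method:

```javascript
// Attach counters to a few of the documented resource events.
function instrument(resource, metrics) {
  for (const op of ['inserted', 'updated', 'deleted']) {
    resource.on(op, () => metrics.increment(`${resource.name}.${op}`));
  }
  return resource;
}

// instrument(users, metrics);
```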
Process large datasets efficiently:
// Readable stream
const readableStream = await users.readable({
batchSize: 50,
concurrency: 10
});
readableStream.on('data', (user) => {
console.log('Processing:', user.name);
});
readableStream.on('end', () => {
console.log('Stream completed');
});
// Writable stream
const writableStream = await users.writable({
batchSize: 25,
concurrency: 5
});
userData.forEach(user => writableStream.write(user));
writableStream.end();
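The two stream types compose into a copy pipeline. A sketch for migrating records between resources; `migrate()` is hypothetical and it assumes `readable()`/`writable()` return standard Node.js object streams, as the `data`/`end` events above suggest:

```javascript
// Copy every record from one resource to another via streams.
async function migrate(source, target) {
  const reader = await source.readable({ batchSize: 50, concurrency: 10 });
  const writer = await target.writable({ batchSize: 25, concurrency: 5 });
  return new Promise((resolve, reject) => {
    reader.on('data', (record) => writer.write(record));
    reader.on('end', () => { writer.end(); resolve(); });
    reader.on('error', reject);
  });
}
```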
A complex, production-ready resource showing all capabilities:
const orders = await db.createResource({
name: 'orders',
// Schema with all features
attributes: {
// Basic fields
orderId: 'string|required|unique',
userId: 'string|required',
status: 'string|required|enum:pending,processing,completed,cancelled',
total: 'number|required|min:0',
// Encrypted sensitive data
paymentToken: 'secret|required',
// Nested objects
customer: {
type: 'object',
props: {
name: 'string|required',
email: 'email|required',
phone: 'string|optional',
address: {
type: 'object',
props: {
street: 'string|required',
city: 'string|required',
country: 'string|required|length:2',
zipCode: 'string|required'
}
}
}
},
// Arrays
items: 'array|items:object|min:1',
tags: 'array|items:string|unique|optional',
// Special types
ipAddress: 'ip4',
userAgent: 'string|max:500',
// Embeddings for AI/ML
orderEmbedding: 'embedding:384'
},
// Behavior for large orders
behavior: 'body-overflow',
// Automatic timestamps
timestamps: true,
// Versioning for schema evolution
versioningEnabled: true,
// Custom ID generation
idGenerator: () => `ORD-${Date.now()}-${Math.random().toString(36).slice(2, 7)}`,
// Partitions for efficient queries
partitions: {
byStatus: { fields: { status: 'string' } },
byUser: { fields: { userId: 'string' } },
byCountry: { fields: { 'customer.address.country': 'string' } },
byUserAndStatus: {
fields: {
userId: 'string',
status: 'string'
}
}
},
// Async partitions for faster writes
asyncPartitions: true,
// Hooks for business logic
hooks: {
beforeInsert: [
async function(data) {
// Validate stock availability
const available = await this.validateStock(data.items);
if (!available) throw new Error('Insufficient stock');
// Calculate total
data.total = data.items.reduce((sum, item) => sum + item.price * item.quantity, 0);
return data;
},
async (data) => {
// Add metadata
data.processedAt = new Date().toISOString();
return data;
}
],
afterInsert: [
async (data) => {
// Send confirmation email
await sendOrderConfirmation(data.customer.email, data.orderId);
},
async (data) => {
// Update inventory
await updateInventory(data.items);
}
],
beforeUpdate: [
async function(data) {
// Prevent status rollback
if (data.status === 'cancelled' && this.previousStatus === 'completed') {
throw new Error('Cannot cancel completed order');
}
return data;
}
],
afterUpdate: [
async (data) => {
// Notify customer of status change
if (data.$before.status !== data.$after.status) {
await notifyStatusChange(data.customer.email, data.status);
}
}
]
},
// Events for monitoring
events: {
insert: (event) => {
console.log(`Order ${event.orderId} created - Total: $${event.total}`);
metrics.increment('orders.created');
},
update: [
(event) => {
if (event.$before.status !== event.$after.status) {
console.log(`Order ${event.orderId}: ${event.$before.status} → ${event.$after.status}`);
metrics.increment(`orders.status.${event.$after.status}`);
}
}
],
delete: (event) => {
console.warn(`Order ${event.orderId} deleted`);
metrics.increment('orders.deleted');
}
}
});
// Add middlewares for cross-cutting concerns
orders.useMiddleware('inserted', async (ctx, next) => {
// Rate limiting
await checkRateLimit(ctx.args[0].userId);
return await next();
});
orders.useMiddleware('updated', async (ctx, next) => {
// Audit logging
const start = Date.now();
const result = await next();
await auditLog.write({
action: 'order.update',
orderId: ctx.args[0],
duration: Date.now() - start,
timestamp: new Date()
});
return result;
});
Complete documentation: docs/core/resource.md
s3db.js features Separate Executor Pools: each Database instance gets its own independent executor pool, eliminating cross-instance contention. This design enables:
Executor pool is enabled by default with optimized settings:
import { Database } from 's3db.js';

const db = new Database({
connectionString: 's3://bucket/database'
// That's it! Executor pool is automatically configured with:
// - Separate pool per database (zero contention)
// - Concurrency: 100 (default)
// - Auto-retry with exponential backoff
// - Priority queue for important operations
// - Real-time metrics
})
await db.connect()
Executor pools (and the standalone TasksRunner/TasksPool) support lightweight vs full-featured schedulers, observability exports, and adaptive concurrency:
const db = new Database({
connectionString: 's3://bucket/database',
executorPool: {
features: { profile: 'light', emitEvents: false }, // or 'balanced'
monitoring: {
enabled: true,
reportInterval: 1000,
exporter: (snapshot) => console.log('[executor]', snapshot)
},
autoTuning: {
enabled: true,
minConcurrency: 10,
maxConcurrency: 200,
targetLatency: 250,
adjustmentInterval: 5000
}
}
})
Use the light profile for PromisePool-style throughput when you just need FIFO fan-out. Switch to balanced when you need retries, priority aging, rich metrics, or adaptive scaling. The same options apply to filesystem/memory clients via taskExecutorMonitoring, autoTuning, and features.profile.
Customize concurrency for your specific workload:
import { Database } from 's3db.js';

const db = new Database({
connectionString: 's3://bucket/database',
executorPool: {
concurrency: 200, // Increase for high-throughput scenarios
// Or use auto-tuning:
// concurrency: 'auto', // Auto-tune based on system load
autotune: {
targetLatency: 100, // Target 100ms per operation
minConcurrency: 50, // Never go below 50
maxConcurrency: 500 // Never exceed 500
}
}
})
// Get queue statistics
const stats = db.client.getQueueStats()
console.log(stats)
// {
// queueSize: 0,
// activeCount: 50,
// processedCount: 15420,
// errorCount: 3,
// retryCount: 8
// }
// Get performance metrics
const metrics = db.client.getAggregateMetrics()
console.log(metrics)
// {
// count: 15420,
// avgExecution: 45,
// p50: 42,
// p95: 78,
// p99: 125
// }
// Lifecycle control
await db.client.pausePool() // Pause processing
db.client.resumePool() // Resume processing
await db.client.drainPool() // Wait for queue to empty
db.client.stopPool() // Stop and cleanup
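These lifecycle calls combine naturally into a graceful-shutdown routine. A sketch; `shutdown()` and the timeout are assumptions, while `drainPool()`/`stopPool()` are the controls shown above:

```javascript
// Drain in-flight work on shutdown, but never hang forever.
async function shutdown(db, { timeoutMs = 30000 } = {}) {
  await Promise.race([
    db.client.drainPool(), // wait for the queue to empty
    new Promise((resolve) => {
      const t = setTimeout(resolve, timeoutMs);
      if (t.unref) t.unref(); // don't keep the process alive just for the timeout
    })
  ]);
  db.client.stopPool(); // cancel anything still pending
}

// process.on('SIGTERM', () => shutdown(db).then(() => process.exit(0)));
```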
OperationPool emits events for monitoring and observability:
| Event | Parameters | Description |
|---|---|---|
| `pool:taskStarted` | `(task)` | Task execution started |
| `pool:taskCompleted` | `(task, result)` | Task completed successfully |
| `pool:taskError` | `(task, error)` | Task failed with error |
| `pool:taskRetry` | `(task, attempt)` | Task retry attempt (1-based) |
| `pool:taskMetrics` | `(metrics)` | Task performance metrics |
| `pool:paused` | `()` | Pool paused (waiting for active tasks) |
| `pool:resumed` | `()` | Pool resumed processing |
| `pool:drained` | `()` | All tasks completed (queue empty) |
| `pool:stopped` | `()` | Pool stopped (pending tasks cancelled) |
Example:
db.client.on('pool:taskCompleted', (task, result) => {
console.log(`✓ ${task.id}: ${task.timings.execution}ms`)
})
db.client.on('pool:taskError', (task, error) => {
console.error(`✗ ${task.id}:`, error.message)
})
See docs/benchmarks/operation-pool.md for the operation-pool benchmark discussion and implementation notes.
Benchmark results from comprehensive testing of 108 scenarios (see docs/benchmarks/operation-pool.md and the benchmark index):
| Scale | Separate Pools | Promise.all | Shared Pool | Winner |
|---|---|---|---|---|
| 1,000 ops | 2.1ms | 1.8ms | 2.5ms | Promise.all (marginal) |
| 5,000 ops | 18ms | 28ms | 32ms | Separate Pools (+40%) |
| 10,000 ops | 35ms | 45ms | 52ms | Separate Pools (+37%) |
| Memory (10K) | 88 MB | 1,142 MB | 278 MB | Separate Pools (13x better) |
✓ Automatic (no configuration needed):
Customize concurrency for:
- High-throughput scenarios: `executorPool: { concurrency: 200 }`
- Heavy batch processing: `executorPool: { concurrency: 300-500 }`
- Constrained environments: `executorPool: { concurrency: 10 }`
- Background workloads: `executorPool: { concurrency: 25-50 }`

Separate Pools comes pre-configured with production-ready defaults. Override only what you need:
// Minimal - uses all defaults (recommended)
const db = new Database({
connectionString: 's3://bucket/database'
// executorPool uses defaults: { concurrency: 100 }
})
// Custom - override specific settings
const db = new Database({
connectionString: 's3://bucket/database',
executorPool: {
concurrency: 200, // Concurrency per database pool (default: 100)
retries: 3, // Max retry attempts
retryDelay: 1000, // Initial retry delay (ms)
timeout: 30000, // Operation timeout (ms)
retryableErrors: [ // Errors to retry (empty = all)
'NetworkingError',
'TimeoutError',
'RequestTimeout',
'ServiceUnavailable',
'SlowDown',
'RequestLimitExceeded'
],
autotune: { // Auto-tuning (optional)
enabled: true,
targetLatency: 100, // Target latency (ms)
minConcurrency: 50, // Min per database
maxConcurrency: 500, // Max per database
targetMemoryPercent: 0.7, // Target memory usage (70%)
adjustmentInterval: 5000 // Check interval (ms)
}
},
taskExecutorMonitoring: {
enabled: true,
collectMetrics: true,
sampleRate: 1,
mode: 'balanced'
}
})
Complete documentation: docs/benchmarks/executor-pool.md
Quick Jump: API | Identity | Performance | Data | Gaming | DevOps | ML/AI | Web Scraping
Extend s3db.js with powerful plugins. All plugins are optional and can be installed independently.
APIPlugin - Transform s3db.js into production-ready REST API with OpenAPI, multi-auth (JWT/OIDC/Basic/API Key), rate limiting, and template engines.
IdentityPlugin - Complete OAuth2/OIDC server with MFA, whitelabel UI, and enterprise SSO.
CachePlugin • TTLPlugin • EventualConsistencyPlugin • MetricsPlugin
CachePlugin - Memory/S3/filesystem caching with compression and automatic invalidation.
TTLPlugin - Auto-cleanup expired records with O(1) partition-based deletion.
EventualConsistencyPlugin - Eventually consistent counters and high-performance analytics.
MetricsPlugin - Performance monitoring with Prometheus export.
ReplicatorPlugin • ImporterPlugin • BackupPlugin • AuditPlugin
ReplicatorPlugin - Real-time replication to BigQuery, PostgreSQL, MySQL, Turso, PlanetScale, and SQS.
ImporterPlugin - Stream processing for large JSON/CSV imports.
BackupPlugin - Automated backups to S3, filesystem, or cross-cloud.
AuditPlugin - Compliance logging for all database operations.
TournamentPlugin - Complete tournament engine supporting Single/Double Elimination, Round Robin, Swiss, and League formats with automated bracket generation.
QueueConsumerPlugin • SchedulerPlugin • TfstatePlugin • CloudInventoryPlugin • CostsPlugin
QueueConsumerPlugin - Process RabbitMQ/SQS messages for event-driven architectures.
SchedulerPlugin - Cron-based job scheduling for maintenance tasks.
TfstatePlugin - Track Terraform infrastructure changes and drift detection.
CloudInventoryPlugin - Multi-cloud inventory with versioning and diff tracking.
CostsPlugin - AWS cost tracking and optimization insights.
MLPlugin • VectorPlugin • FullTextPlugin • GeoPlugin
MLPlugin - Machine learning model management and inference pipelines.
VectorPlugin - Vector similarity search (cosine, euclidean) for RAG and ML applications.
FullTextPlugin - Full-text search with tokenization and indexing.
GeoPlugin - Geospatial queries and distance calculations.
PuppeteerPlugin - Enterprise-grade browser automation with anti-bot detection, cookie farming, proxy rotation, and intelligent pooling for web scraping at scale.
RelationPlugin • StateMachinePlugin • S3QueuePlugin
RelationPlugin - ORM-like relationships with join optimization (10-100x faster queries).
StateMachinePlugin - Finite state machine workflows for business processes.
S3QueuePlugin - Distributed queue with zero race conditions using S3.
# Core plugins (no dependencies)
# Included in s3db.js package
# External dependencies (install only what you need)
pnpm add pg # PostgreSQL replication (ReplicatorPlugin)
pnpm add @google-cloud/bigquery # BigQuery replication (ReplicatorPlugin)
pnpm add @aws-sdk/client-sqs # SQS replication/consumption (ReplicatorPlugin, QueueConsumerPlugin)
pnpm add amqplib # RabbitMQ consumption (QueueConsumerPlugin)
pnpm add ejs # Template engine (APIPlugin - optional)
import { S3db } from 's3db.js';
import { CachePlugin, MetricsPlugin, TTLPlugin } from 's3db.js';
const db = new S3db({
connectionString: 's3://bucket/databases/myapp',
plugins: [
// Cache frequently accessed data
new CachePlugin({
driver: 'memory',
ttl: 300000, // 5 minutes
config: {
maxMemoryPercent: 0.1, // 10% of system memory
enableCompression: true
}
}),
// Track performance metrics
new MetricsPlugin({
enablePrometheus: true,
port: 9090
}),
// Auto-cleanup expired sessions
new TTLPlugin({
resources: {
sessions: { ttl: 86400, onExpire: 'soft-delete' } // 24h
}
})
]
});
Simple plugin example:
import { Plugin } from 's3db.js';
export class MyPlugin extends Plugin {
constructor(options = {}) {
super(options);
this.name = 'MyPlugin';
}
async initialize(database) {
console.log('Plugin initialized!');
// Wrap methods
this.wrapMethod('Resource', 'inserted', async (original, resource, args) => {
console.log(`Inserting into ${resource.name}`);
const result = await original(...args);
console.log(`Inserted: ${result.id}`);
return result;
});
}
}
Complete documentation: docs/plugins/README.md
S3DB includes a built-in MCP server that works in two modes depending on whether a connection string is provided.
# Claude Code
claude mcp add s3db -- npx -y s3db.js mcp
Exposes documentation tools, resources, and prompts. Helps you design schemas, choose field types, configure plugins, and learn s3db; no AWS credentials are required.
# Claude Code
claude mcp add s3db \
-e S3DB_CONNECTION_STRING=s3://KEY:SECRET@my-bucket \
-- npx -y s3db.js mcp
# HTTP transport
S3DB_CONNECTION_STRING=s3://KEY:SECRET@my-bucket npx s3db.js mcp --transport=http
Adds CRUD tools, live resource introspection (s3db://resource/{name}), and data-aware prompts on top of everything in Library Mode.
| Capability | Library Mode | Full Mode |
|---|---|---|
| `s3dbSearchDocs` | ✓ | ✓ |
| `s3db://core/`, `s3db://plugin/`, `s3db://guide/` resources | ✓ | ✓ |
| Schema design & migration prompts | ✓ | ✓ |
| CRUD tools (`resourceGet`, `resourceList`, `resourceInsert`…) | ✗ | ✓ |
| `s3db://resource/{name}` (live schema) | ✗ | ✓ |
| Debug & optimization prompts | ✗ | ✓ |
Library Mode can be promoted to Full Mode at runtime by calling the dbConnect tool; no restart is needed.
Complete documentation: docs/mcp.md
s3db.js integrates seamlessly with:
s3db.js includes a powerful CLI for database management and operations.
# Global
npm install -g s3db.js
# Project
npm install s3db.js
npx s3db [command]
# List resources
s3db list
# Query resources
s3db query users
s3db query users --filter '{"status":"active"}'
# Insert records
s3db insert users --data '{"name":"John","email":"john@example.com"}'
# Update records
s3db update users user-123 --data '{"age":31}'
# Delete records
s3db delete users user-123
# Export data
s3db export users --format json > users.json
s3db export users --format csv > users.csv
# Import data
s3db import users < users.json
# Stats
s3db stats
s3db stats users
# MCP Server
s3db mcp --transport=stdio
s3db mcp --transport=sse --port=17500
S3DB_CONNECTION_STRING=s3://bucket/databases/myapp
S3DB_CACHE_ENABLED=true
S3DB_COSTS_ENABLED=true
S3DB_VERBOSE=false
Browse 60+ examples covering:
| Resource | Link |
|---|---|
| Resource API | docs/core/resource.md |
| Client API | docs/clients/README.md |
| Schema Validation | docs/core/schema.md |
| Plugin API | docs/plugins/README.md |
Common issues and solutions:
Problem: Cannot connect to S3 bucket
Solutions:
// Enable debug logging
const db = new S3db({
connectionString: '...',
logLevel: 'debug'
});
Problem: Error: "S3 metadata size exceeds 2KB limit"
Solutions:
Switch the resource behavior to `body-overflow` or `body-only`:
const resource = await db.createResource({
name: 'blogs',
behavior: 'body-overflow', // Automatically handle overflow
attributes: { title: 'string', content: 'string' }
});
Problem: Slow queries or operations
Solutions:
// Add partitions
const resource = await db.createResource({
name: 'analytics',
attributes: { event: 'string', region: 'string' },
partitions: {
byEvent: { fields: { event: 'string' } }
},
asyncPartitions: true // 70-100% faster writes
});
// Enable caching
const db = new S3db({
connectionString: '...',
plugins: [new CachePlugin({ ttl: 300000 })]
});
Problem: Partition references deleted field
Solutions:
const resource = await db.getResource('users', { strictValidation: false });
const orphaned = resource.findOrphanedPartitions();
console.log('Orphaned:', orphaned);
// Remove them
resource.removeOrphanedPartitions();
await db.uploadMetadataFile();
⚠️ Important: All benchmark results documented were generated using Node.js v22.6.0. Performance may vary with different Node.js versions.
s3db.js includes comprehensive benchmarks demonstrating real-world performance optimizations:
Contributions are welcome. Use the repository on GitHub to open issues or pull requests.
This project is licensed under the Unlicense.
Made with ❤️ by the s3db.js community