
Xronox v3.0: Enterprise-grade MongoDB persistence layer with embedded multi-tenancy, tiered architecture, and big data capabilities. Features time-travel versioning, S3/Azure/local storage, enrichment API, Identity system, and field projection. NEW: Stand
The Essential Persistence Layer for Big Data & SaaS Applications
Enterprise-Grade MongoDB + S3/Azure with Embedded Multi-Tenancy Formerly Chronos-DB
Track WHO created or modified data with a consistent identity interface across your entire ecosystem:
- Identity type for users, agents, systems, APIs, services, jobs
- createUserIdentity, createAgentIdentity, createSystemIdentity, etc.

Control what data is returned in queries, just like hidden files in your OS:
- Fields hidden by default (_metadata, _internal)
- Named projections (minimal, default, withMetadata)
- includeHidden and projectionSpec for dynamic control

Automatically discover tenant databases at runtime without configuration restarts.

Optimize costs and organization with separate buckets for different data types:
- records, versions, content, backups in different S3 buckets

Zero-code configuration with automatic discovery and ENV variable resolution:
- Loads xronox.config.json or .xronox.json from the project root
- ENV.VAR_NAME placeholders resolved at runtime

Whether you're building Big Data platforms OR SaaS applications — Xronox dramatically simplifies development and slashes costs:
Xronox v2.4 is designed for large-scale applications handling millions of records, with embedded multi-tenancy by design and a tiered architecture for efficient big data workloads, while maintaining enterprise-grade security and compliance.
Xronox v2.4 provides a production-ready persistence layer designed for enterprise applications and big data projects that combines:
- insertWithEntities and getWithEntities
- getKnowledge and getMetadata with automatic fallback/merge across tiers
✅ No Environment Variables - All configuration via JSON
✅ Cost-First - Minimize storage and compute costs
✅ Stability-First - Immutable versioning, transactions, optimistic locking
✅ Concurrent-Safe - Transaction locking prevents multi-server write conflicts
✅ Portable - Works with any S3-compatible provider
✅ Type-Safe - Full TypeScript support with Zod validation
✅ Security-First - Built-in tenant isolation and data protection
✅ Compliance-Ready - Audit trails, data lineage, and regulatory features
Xronox v2.4 implements enterprise-grade security with multiple layers of protection:
Chronos-DB uses a sophisticated tiered approach to optimize for different data types and security requirements:
// Multi-tier database configuration with security considerations
databases: {
metadata: {
genericDatabase: { // System-wide metadata (no tenant isolation needed)
dbConnRef: 'mongo-primary',
spaceConnRef: 's3-primary',
bucket: 'chronos-metadata',
dbName: 'chronos_metadata_generic'
},
domainsDatabases: [ // Domain-level metadata (shared within domain)
{
domain: 'healthcare',
dbConnRef: 'mongo-healthcare',
spaceConnRef: 's3-healthcare',
bucket: 'chronos-metadata-healthcare',
dbName: 'chronos_metadata_healthcare'
}
],
tenantDatabases: [ // Tenant-specific metadata (isolated per tenant)
{
tenantId: 'tenant-a',
dbConnRef: 'mongo-tenant-a',
spaceConnRef: 's3-tenant-a',
bucket: 'chronos-metadata-tenant-a',
dbName: 'chronos_metadata_tenant_a'
}
]
},
knowledge: {
genericDatabase: { // Shared knowledge base
dbConnRef: 'mongo-primary',
spaceConnRef: 's3-primary',
bucket: 'chronos-knowledge',
dbName: 'chronos_knowledge_generic'
},
domainsDatabases: [ // Domain-specific knowledge
{
domain: 'finance',
dbConnRef: 'mongo-finance',
spaceConnRef: 's3-finance',
bucket: 'chronos-knowledge-finance',
dbName: 'chronos_knowledge_finance'
}
],
tenantDatabases: [ // Tenant-specific knowledge
{
tenantId: 'tenant-b',
dbConnRef: 'mongo-tenant-b',
spaceConnRef: 's3-tenant-b',
bucket: 'chronos-knowledge-tenant-b',
dbName: 'chronos_knowledge_tenant_b'
}
]
},
runtime: {
tenantDatabases: [ // Runtime data (always tenant-isolated)
{
tenantId: 'tenant-a',
dbConnRef: 'mongo-tenant-a',
spaceConnRef: 's3-tenant-a',
bucket: 'chronos-runtime-tenant-a',
dbName: 'chronos_runtime_tenant_a',
analyticsDbName: 'chronos_analytics_tenant_a' // Integrated analytics
}
]
},
logs: { // System logs (centralized)
dbConnRef: 'mongo-logs',
spaceConnRef: 's3-logs',
bucket: 'chronos-logs',
dbName: 'chronos_logs'
}
}
// Define connections once, reference everywhere (~95% configuration reuse)
dbConnections: {
'mongo-primary': {
mongoUri: 'mongodb+srv://user:pass@primary-cluster.mongodb.net/?retryWrites=true&w=majority'
},
'mongo-tenant-a': {
mongoUri: 'mongodb+srv://user:pass@tenant-a-cluster.mongodb.net/?retryWrites=true&w=majority'
},
'mongo-analytics': {
mongoUri: 'mongodb+srv://user:pass@analytics-cluster.mongodb.net/?retryWrites=true&w=majority'
}
},
spacesConnections: {
's3-primary': {
endpoint: 'https://s3.amazonaws.com',
region: 'us-east-1',
accessKey: process.env.AWS_ACCESS_KEY_ID,
secretKey: process.env.AWS_SECRET_ACCESS_KEY
},
's3-tenant-a': {
endpoint: 'https://tenant-a-bucket.s3.amazonaws.com',
region: 'us-east-1',
accessKey: process.env.TENANT_A_ACCESS_KEY,
secretKey: process.env.TENANT_A_SECRET_KEY
},
// Azure Blob Storage (NEW in v2.0.1)
'azure-primary': {
endpoint: 'https://myaccount.blob.core.windows.net',
region: 'us-east-1', // Not used for Azure but required
accessKey: process.env.AZURE_ACCOUNT_NAME,
secretKey: process.env.AZURE_ACCOUNT_KEY
}
}
Option A: Complete Isolation (Highest Security)
// Each tenant gets separate MongoDB cluster and S3 bucket
tenantDatabases: [
{
tenantId: 'tenant-a',
dbConnRef: 'mongo-tenant-a', // Separate MongoDB cluster
spaceConnRef: 's3-tenant-a', // Separate S3 bucket
bucket: 'chronos-tenant-a',
dbName: 'chronos_tenant_a',
analyticsDbName: 'chronos_analytics_tenant_a'
}
]
Option B: Shared Infrastructure (Cost-Effective)
// Multiple tenants share infrastructure but with strict isolation
tenantDatabases: [
{
tenantId: 'tenant-a',
dbConnRef: 'mongo-shared', // Shared MongoDB cluster
spaceConnRef: 's3-shared', // Shared S3 bucket
bucket: 'chronos-shared',
dbName: 'chronos_tenant_a', // Separate database per tenant
analyticsDbName: 'chronos_analytics_tenant_a'
}
]
GDPR Compliance
// Enable logical delete for GDPR compliance
logicalDelete: {
enabled: true // Default - enables data recovery and audit trails
},
// Enable versioning for data lineage
versioning: {
enabled: true // Default - enables time-travel queries and audit trails
}
SOX Compliance
// Enable comprehensive audit trails
collectionMaps: {
financial_records: {
indexedProps: ['accountId', 'transactionId', 'amount', 'date'],
validation: {
requiredIndexed: ['accountId', 'transactionId', 'amount']
}
}
},
// Enable transaction logging
transactions: {
enabled: true,
autoDetect: true
}
Chronos-DB v2.0 is specifically designed for big data scenarios with enterprise-grade performance:
// Distribute load across multiple MongoDB clusters
dbConnections: {
'mongo-cluster-1': { mongoUri: 'mongodb://cluster-1:27017' },
'mongo-cluster-2': { mongoUri: 'mongodb://cluster-2:27017' },
'mongo-cluster-3': { mongoUri: 'mongodb://cluster-3:27017' }
},
// S3 storage across multiple regions
spacesConnections: {
's3-us-east': {
endpoint: 'https://s3.us-east-1.amazonaws.com',
region: 'us-east-1'
},
's3-eu-west': {
endpoint: 'https://s3.eu-west-1.amazonaws.com',
region: 'eu-west-1'
}
}
// Optimized for millions of operations per day
const xronox = initXronox({
// ... configuration
writeOptimization: {
batchSize: 1000, // Batch S3 operations
debounceMs: 100, // Debounce counter updates
compressionEnabled: true // Compress large payloads
},
// Fallback queues for guaranteed durability
fallback: {
enabled: true,
maxRetries: 3,
retryDelayMs: 1000,
maxDelayMs: 60000
}
});
// Built-in analytics for each tenant
runtime: {
tenantDatabases: [
{
tenantId: 'tenant-a',
dbConnRef: 'mongo-tenant-a',
spaceConnRef: 's3-tenant-a',
bucket: 'chronos-runtime-tenant-a',
dbName: 'chronos_runtime_tenant_a',
analyticsDbName: 'chronos_analytics_tenant_a' // Integrated analytics
}
]
}
// High-volume IoT data processing
const iotOps = xronox.with({
databaseType: 'runtime',
tier: 'tenant',
tenantId: 'iot-platform',
collection: 'sensor-data'
});
// Batch processing for millions of sensor readings
const batchSize = 10000;
const sensorData = Array.from({ length: batchSize }, (_, i) => ({
deviceId: `sensor-${i % 1000}`,
timestamp: new Date(),
temperature: Math.random() * 100,
humidity: Math.random() * 100,
location: { lat: Math.random() * 90, lng: Math.random() * 180 }
}));
// Efficient batch insertion
for (let i = 0; i < sensorData.length; i += 100) {
const batch = sensorData.slice(i, i + 100);
await Promise.all(batch.map(data =>
iotOps.create(data, 'iot-ingestion', 'sensor-data')
));
}
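The chunked loop above can be factored into a small reusable helper. This is an illustrative sketch only; processInBatches is not part of the xronox API:

```typescript
// Generic helper: process items in fixed-size batches.
// Each batch runs in parallel; batches run sequentially to bound load.
async function processInBatches<T, R>(
  items: T[],
  batchSize: number,
  handler: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(handler))));
  }
  return results;
}
```

With this helper, the sensor-data loop becomes a single call, e.g. `processInBatches(sensorData, 100, (data) => iotOps.create(data, 'iot-ingestion', 'sensor-data'))`.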
// High-frequency trading data
const tradingOps = xronox.with({
databaseType: 'runtime',
tier: 'tenant',
tenantId: 'trading-firm',
collection: 'transactions'
});
// Process thousands of transactions per second
const processTransaction = async (transaction) => {
const result = await tradingOps.create({
symbol: transaction.symbol,
price: transaction.price,
quantity: transaction.quantity,
timestamp: transaction.timestamp,
traderId: transaction.traderId
}, 'trading-system', 'market-transaction');
// Analytics automatically tracked in analyticsDbName
return result;
};
// Multi-tenant e-commerce platform
const ecommerceOps = xronox.with({
databaseType: 'runtime',
tier: 'tenant',
tenantId: 'ecommerce-store',
collection: 'orders'
});
// Process orders with automatic analytics
const createOrder = async (orderData) => {
const result = await ecommerceOps.create({
customerId: orderData.customerId,
items: orderData.items,
total: orderData.total,
status: 'pending'
}, 'ecommerce-system', 'order-creation');
// Analytics automatically tracked:
// - Order count per customer
// - Revenue per day/month
// - Product popularity
// - Customer behavior patterns
return result;
};
Why Connection Management Matters:
How Chronos-DB Manages Connections:
// You define connections ONCE by key
dbConnections: {
'mongo-primary': { mongoUri: 'mongodb://primary:27017' },
'mongo-tenant-a': { mongoUri: 'mongodb://tenant-a:27017' }
},
spacesConnections: {
's3-primary': { endpoint: 'https://s3.amazonaws.com', ... }
}
// Then reference them everywhere
databases: {
runtime: {
tenantDatabases: [
{ dbConnRef: 'mongo-primary', spaceConnRef: 's3-primary', ... }, // ← References
{ dbConnRef: 'mongo-tenant-a', spaceConnRef: 's3-primary', ... } // ← References
]
}
}
What Happens Internally:
First Request to mongo-primary:
// Router creates ONE MongoClient connection
const client = new MongoClient('mongodb://primary:27017');
await client.connect(); // Only happens ONCE
// Stores in connection pool: { 'mongodb://primary:27017' => client }
Subsequent Requests to mongo-primary:
// Router REUSES the existing connection
const client = connectionPool.get('mongodb://primary:27017');
// No new connection created! ✅
Connection Lifecycle:
// During operation
- getMongoClient(mongoUri) → Returns existing client or creates new one
- Connection stays open for the lifetime of the application
- MongoDB driver's built-in connection pooling handles concurrency
// On shutdown
- chronos.admin.shutdown() → Closes all connections gracefully
- Ensures no connections are leaked
Benefits:
✅ Performance: connections are created once and reused across requests
✅ Resource Efficiency: one pool per connection key (e.g., mongo-primary = 1 connection pool)
✅ Scalability
✅ Reliability
Example Scenario:
// 3 tenants, 2 MongoDB clusters, 1 S3 bucket
dbConnections: {
'mongo-shared': { mongoUri: 'mongodb://shared:27017' },
'mongo-premium': { mongoUri: 'mongodb://premium:27017' }
},
spacesConnections: {
's3-main': { endpoint: 'https://s3.amazonaws.com', ... }
},
databases: {
runtime: {
tenantDatabases: [
{ tenantId: 'tenant-a', dbConnRef: 'mongo-shared', spaceConnRef: 's3-main', ... },
{ tenantId: 'tenant-b', dbConnRef: 'mongo-shared', spaceConnRef: 's3-main', ... },
{ tenantId: 'tenant-c', dbConnRef: 'mongo-premium', spaceConnRef: 's3-main', ... }
]
}
}
// Result:
// - 2 MongoDB connections (not 3!) ✅
// - 1 S3 client (not 3!) ✅
// - 95% configuration reuse ✅
// - Tenant-a and tenant-b share mongo-shared connection pool
// - Tenant-c uses dedicated mongo-premium connection pool
Connection Management API:
// Internal router methods (you don't call these directly)
router.getMongoClient(mongoUri) // Returns cached or creates new MongoClient
router.getSpaces(ctx) // Returns cached or creates new S3/Azure client
router.getAllMongoUris() // Lists all unique MongoDB URIs
router.shutdown() // Closes all connections gracefully
Best Practices:
✅ DO: Reference the same connection key for tenants sharing infrastructure
✅ DO: Use separate connection keys for isolated/premium tenants
✅ DO: Call chronos.admin.shutdown() on application shutdown
❌ DON'T: Define duplicate connections with different keys but same URI
❌ DON'T: Create connections in loops or per-request
// S3 clients are cached by spaceConnRef
spacesConnections: {
's3-primary': { ... } // Created once, reused for all operations
}
// Azure clients auto-detected and cached
spacesConnections: {
'azure-primary': {
endpoint: 'https://account.blob.core.windows.net', // ← Azure detected!
accessKey: 'account-name',
secretKey: 'account-key'
}
}
// Router automatically creates AzureBlobStorageAdapter and caches it
// Optimized S3 operations
spacesConnections: {
's3-optimized': {
endpoint: 'https://s3.amazonaws.com',
region: 'us-east-1',
// Automatic retry and exponential backoff
// Connection pooling for S3 operations
// Batch operations for multiple files
}
}
// Dev shadow for frequently accessed data
devShadow: {
enabled: true,
ttlHours: 24, // Cache for 24 hours
maxBytesPerDoc: 1024 * 1024 // 1MB max per document
}
npm install xronox@^3.0.0
Note: The package was previously named chronos-db. Starting from v2.4.0, it's published as xronox.
The simplest way to get started - zero code configuration:
1. Create xronox.config.json in your project root:
{
"xronox": {
"dbConnections": {
"mongo-primary": {
"mongoUri": "ENV.MONGO_URI"
}
},
"spacesConnections": {
"s3-primary": {
"endpoint": "ENV.S3_ENDPOINT",
"region": "ENV.S3_REGION",
"accessKey": "ENV.S3_ACCESS_KEY",
"secretKey": "ENV.S3_SECRET_KEY"
}
},
"databases": {
"runtime": {
"tenantDatabases": [
{
"tenantId": "default",
"dbConnRef": "mongo-primary",
"spaceConnRef": "s3-primary",
"recordsBucket": "ENV.S3_BUCKET",
"dbName": "my_app_runtime"
}
]
}
}
}
}
2. Set environment variables:
export MONGO_URI="mongodb://localhost:27017"
export S3_ENDPOINT="https://s3.amazonaws.com"
export S3_REGION="us-east-1"
export S3_ACCESS_KEY="your-access-key"
export S3_SECRET_KEY="your-secret-key"
export S3_BUCKET="my-app-bucket"
3. Initialize (auto-loads config):
import { initXronox } from 'xronox';
// Auto-discovers and loads xronox.config.json
const xronox = await initXronox();
const ops = xronox.with({
databaseType: 'runtime',
tier: 'tenant',
tenantId: 'default',
collection: 'users'
});
await ops.create({ name: 'Alice', email: 'alice@example.com' });
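The ENV.VAR_NAME placeholder mechanism shown in the config above can be approximated by a small recursive resolver. This is a sketch of the idea, not the library's actual loader:

```typescript
// Recursively replace "ENV.NAME" string values with values from env.
// Throws when a referenced variable is missing, so misconfiguration
// fails fast at startup rather than at first use.
function resolveEnvPlaceholders(
  value: unknown,
  env: Record<string, string | undefined>
): unknown {
  if (typeof value === 'string' && value.startsWith('ENV.')) {
    const name = value.slice(4);
    const resolved = env[name];
    if (resolved === undefined) throw new Error(`Missing environment variable: ${name}`);
    return resolved;
  }
  if (Array.isArray(value)) return value.map((v) => resolveEnvPlaceholders(v, env));
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [
        k,
        resolveEnvPlaceholders(v, env),
      ])
    );
  }
  return value;
}
```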
Alternatively, get started programmatically with a config helper:
import { createProductionConfig, initXronox } from 'xronox';
const config = createProductionConfig({
mongoUri: process.env.MONGO_URI!,
s3Endpoint: 'https://s3.amazonaws.com',
s3Region: 'us-east-1',
s3AccessKey: process.env.AWS_ACCESS_KEY_ID!,
s3SecretKey: process.env.AWS_SECRET_ACCESS_KEY!,
tenants: ['acme-corp', 'globex-inc'],
bucketPrefix: 'myapp'
});
const xronox = initXronox(config);
For Development:
import { createDevConfig, initXronox } from 'xronox';
const config = createDevConfig({
mongoUri: 'mongodb://localhost:27017',
basePath: './xronox-data'
});
const xronox = initXronox(config);
import { initXronox } from 'xronox';
const xronox = initXronox({
// Connection definitions (~95% configuration reuse)
dbConnections: {
'mongo-primary': {
mongoUri: 'mongodb+srv://user:pass@primary-cluster.mongodb.net'
},
'mongo-analytics': {
mongoUri: 'mongodb+srv://user:pass@analytics-cluster.mongodb.net'
}
},
spacesConnections: {
's3-primary': {
endpoint: 'https://s3.amazonaws.com',
region: 'us-east-1',
accessKey: process.env.AWS_ACCESS_KEY_ID,
secretKey: process.env.AWS_SECRET_ACCESS_KEY
}
},
// Tiered database architecture
databases: {
metadata: {
genericDatabase: {
dbConnRef: 'mongo-primary',
spaceConnRef: 's3-primary',
bucket: 'chronos-metadata',
dbName: 'chronos_metadata_generic'
},
domainsDatabases: [
{
domain: 'healthcare',
dbConnRef: 'mongo-primary',
spaceConnRef: 's3-primary',
bucket: 'chronos-metadata-healthcare',
dbName: 'chronos_metadata_healthcare'
}
],
tenantDatabases: [
{
tenantId: 'tenant-a',
dbConnRef: 'mongo-primary',
spaceConnRef: 's3-primary',
bucket: 'chronos-metadata-tenant-a',
dbName: 'chronos_metadata_tenant_a'
}
]
},
knowledge: {
genericDatabase: {
dbConnRef: 'mongo-primary',
spaceConnRef: 's3-primary',
bucket: 'chronos-knowledge',
dbName: 'chronos_knowledge_generic'
},
domainsDatabases: [],
tenantDatabases: []
},
runtime: {
tenantDatabases: [
{
tenantId: 'tenant-a',
dbConnRef: 'mongo-primary',
spaceConnRef: 's3-primary',
bucket: 'chronos-runtime-tenant-a',
dbName: 'chronos_runtime_tenant_a',
analyticsDbName: 'chronos_analytics_tenant_a'
}
]
},
logs: {
dbConnRef: 'mongo-primary',
spaceConnRef: 's3-primary',
bucket: 'chronos-logs',
dbName: 'chronos_logs'
}
},
// Enterprise configuration
routing: { hashAlgo: 'rendezvous' },
retention: { ver: { days: 90 }, counters: { days: 30 } },
collectionMaps: {
users: { indexedProps: ['email', 'tenantId'] },
orders: { indexedProps: ['orderId', 'customerId', 'tenantId'] }
},
// Security and compliance
logicalDelete: { enabled: true }, // GDPR compliance
versioning: { enabled: true }, // Audit trails
transactions: { enabled: true }, // Data integrity
// Performance optimization
writeOptimization: {
batchSize: 1000,
debounceMs: 100
},
fallback: {
enabled: true,
maxRetries: 3
}
});
// Multi-tenant operations
const tenantAOps = xronox.with({
databaseType: 'runtime',
tier: 'tenant',
tenantId: 'tenant-a',
collection: 'users'
});
const tenantBOps = xronox.with({
databaseType: 'runtime',
tier: 'tenant',
tenantId: 'tenant-b',
collection: 'users'
});
// Create users in different tenants (completely isolated)
await tenantAOps.create({ email: 'user@tenant-a.com', name: 'User A' });
await tenantBOps.create({ email: 'user@tenant-b.com', name: 'User B' });
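The `routing: { hashAlgo: 'rendezvous' }` option above names rendezvous (highest-random-weight) hashing. A generic sketch of that algorithm follows; it is not xronox's internal implementation, just the standard technique: each (node, key) pair is hashed, and the key routes to the node with the highest score, so adding or removing a node only remaps keys that belonged to it.

```typescript
import { createHash } from 'crypto';

// Rendezvous (HRW) hashing: pick the node with the highest hash score
// for the given key. Deterministic for a fixed node set.
function rendezvousPick(key: string, nodes: string[]): string {
  let best = nodes[0];
  let bestScore = -1n;
  for (const node of nodes) {
    const digest = createHash('sha256').update(`${node}:${key}`).digest('hex');
    const score = BigInt('0x' + digest.slice(0, 16)); // top 64 bits as weight
    if (score > bestScore) {
      bestScore = score;
      best = node;
    }
  }
  return best;
}
```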
interface ChronosConfig {
// Required: Connection definitions (95% reuse)
dbConnections: Record<string, DbConnection>;
spacesConnections: Record<string, SpacesConnection>;
// Required: Tiered database configuration
databases: {
metadata?: {
genericDatabase: GenericDatabase;
domainsDatabases: DomainDatabase[];
tenantDatabases: TenantDatabase[];
};
knowledge?: {
genericDatabase: GenericDatabase;
domainsDatabases: DomainDatabase[];
tenantDatabases: TenantDatabase[];
};
runtime?: {
tenantDatabases: RuntimeTenantDatabase[];
};
logs?: LogsDatabase;
};
// Optional: Local filesystem storage (for development/testing)
localStorage?: {
basePath: string;
enabled: boolean;
};
// Optional: Routing configuration
routing?: {
hashAlgo?: 'rendezvous' | 'jump';
chooseKey?: string | ((ctx: RouteContext) => string);
};
// Optional: Data retention policies
retention?: {
ver?: {
days?: number;
maxPerItem?: number;
};
counters?: {
days?: number;
weeks?: number;
months?: number;
};
};
// Optional: Collection mapping and validation
collectionMaps?: Record<string, {
indexedProps: string[]; // Empty array = auto-index all properties
base64Props?: Record<string, {
contentType: string;
preferredText?: boolean;
textCharset?: string;
}>;
validation?: {
requiredIndexed?: string[];
};
}>;
// Optional: Counter rules for analytics
counterRules?: {
rules?: Array<{
name: string;
on?: ('CREATE' | 'UPDATE' | 'DELETE')[];
scope?: 'meta' | 'payload';
when: Record<string, any>;
}>;
};
// Optional: Development shadow storage
devShadow?: {
enabled: boolean;
ttlHours: number;
maxBytesPerDoc?: number;
};
// Optional: Security and compliance
logicalDelete?: {
enabled: boolean; // Default: true (GDPR compliance)
};
versioning?: {
enabled: boolean; // Default: true (audit trails)
};
// Optional: Performance optimization
writeOptimization?: {
batchSize?: number;
debounceMs?: number;
compressionEnabled?: boolean;
};
// Optional: Fallback queue configuration
fallback?: {
enabled: boolean;
maxRetries?: number;
retryDelayMs?: number;
maxDelayMs?: number;
deadLetterCollection?: string;
};
// Optional: Transaction configuration
transactions?: {
enabled?: boolean;
autoDetect?: boolean;
};
}
// Connection interfaces
interface DbConnection {
mongoUri: string;
}
interface SpacesConnection {
endpoint: string;
region: string;
accessKey: string;
secretKey: string;
forcePathStyle?: boolean;
}
// Database interfaces
interface GenericDatabase {
dbConnRef: string;
spaceConnRef: string;
bucket: string;
dbName: string;
}
interface DomainDatabase {
domain: string;
dbConnRef: string;
spaceConnRef: string;
bucket: string;
dbName: string;
}
interface TenantDatabase {
tenantId: string;
dbConnRef: string;
spaceConnRef: string;
bucket: string;
dbName: string;
}
interface RuntimeTenantDatabase {
tenantId: string;
dbConnRef: string;
spaceConnRef: string;
bucket: string;
dbName: string;
analyticsDbName: string; // Integrated analytics
}
interface LogsDatabase {
dbConnRef: string;
spaceConnRef: string;
bucket: string;
dbName: string;
}
Database types:
- metadata - System configuration, user settings, application metadata
- knowledge - Content, documents, knowledge base, static data
- runtime - User data, transactions, dynamic application data
- logs - System logs, audit trails, monitoring

Tiers:
- generic - Shared across all tenants (system-wide data)
- domain - Shared within a domain (multi-tenant within domain)
- tenant - Isolated per tenant (single-tenant data)

Option A: Direct Tier + Tenant ID Usage (Recommended)
const ops = xronox.with({
databaseType: 'runtime', // metadata | knowledge | runtime | logs
tier: 'tenant', // generic | domain | tenant
tenantId: 'tenant-a', // Maps to tenant-specific database
collection: 'users'
});
Option B: Generic Tier (No Tenant ID)
const ops = xronox.with({
databaseType: 'metadata',
tier: 'generic', // No tenantId needed
collection: 'config'
});
Option C: Domain Tier
const ops = xronox.with({
databaseType: 'knowledge',
tier: 'domain',
domain: 'healthcare', // Maps to domain-specific database
collection: 'articles'
});
Full transaction support with optimistic locking and tenant isolation:
// Create with automatic versioning and tenant isolation
const created = await ops.create(data, 'actor', 'reason');
// Returns: { id, ov: 0, cv: 0, createdAt }
// Update with optimistic lock and tenant context
const updated = await ops.update(id, newData, expectedOv, 'actor', 'reason');
// Returns: { id, ov: 1, cv: 1, updatedAt }
// Logical delete (default) - maintains audit trail
const deleted = await ops.delete(id, expectedOv, 'actor', 'reason');
// Returns: { id, ov: 2, cv: 2, deletedAt }
Incrementally augment records without full rewrite:
// Deep merge with array union
await ops.enrich(id, {
tags: ['premium'], // Arrays unioned
metadata: { newField: 'value' }, // Objects deep merged
}, {
functionId: 'enricher@v1', // Provenance tracking
actor: 'system',
reason: 'automated enrichment',
});
// Batch enrichment
await ops.enrich(id, [
{ tags: ['vip'] },
{ metadata: { score: 100 } },
{ tags: ['verified'] },
]);
Multiple read strategies with security:
// Get latest version with presigned URL
const latest = await ops.getLatest(id, {
presign: true,
ttlSeconds: 3600,
projection: ['email', 'status'],
});
// Get specific version
const v1 = await ops.getVersion(id, 1);
// Get as of time (time-travel)
const historical = await ops.getAsOf(id, '2024-01-01T00:00:00Z');
// List by metadata with pagination
const results = await ops.listByMeta({
filter: { status: 'active' },
limit: 50,
afterId: lastId,
sort: { updatedAt: -1 },
}, { presign: true });
Built-in analytics for each tenant:
// Analytics automatically tracked in analyticsDbName
const metrics = await xronox.counters.getTotals({
dbName: 'chronos_runtime_tenant_a',
collection: 'users',
});
// Returns:
// {
// created: 1000,
// updated: 500,
// deleted: 50,
// activeUsers: 750,
// }
New in v2.0.1! Automatic management of related entities with referential integrity:
Automatically extract and save related entities to their own collections:
// Define entity mappings
const entityMappings = [
{
property: 'customer', // Property in main record
collection: 'customers', // Target collection
keyProperty: 'customerId', // Key field in entity
databaseType: 'metadata', // Optional: database tier
tier: 'tenant' // Optional: tier level
},
{
property: 'product',
collection: 'products',
keyProperty: 'productId',
databaseType: 'knowledge',
tier: 'domain'
}
];
// Insert order with automatic customer/product management
const result = await ops.insertWithEntities(
{
orderId: 'ORD-123',
customer: {
customerId: 'CUST-456',
name: 'John Doe',
email: 'john@example.com'
},
product: {
productId: 'PROD-789',
name: 'Widget',
price: 99.99
},
quantity: 2
},
entityMappings,
'order-system',
'new order created'
);
// Returns:
// {
// mainRecordId: 'order-123-id',
// entityResults: Map {
// 'customer' => { id: 'cust-id', operation: 'created' },
// 'product' => { id: 'prod-id', operation: 'unchanged' }
// }
// }
// What happened:
// 1. Checked if customer CUST-456 exists → Created new customer record
// 2. Checked if product PROD-789 exists → Already existed, no changes
// 3. Created the order record with embedded customer/product objects
Fetch a record and automatically retrieve all related entities:
// Fetch order with all related entities
const result = await ops.getWithEntities(
'order-123-id',
entityMappings,
{ presign: true } // Optional read options
);
// Returns:
// {
// mainRecord: {
// orderId: 'ORD-123',
// customer: { customerId: 'CUST-456', ... },
// product: { productId: 'PROD-789', ... },
// quantity: 2
// },
// entityRecords: Map {
// 'customer' => { customerId: 'CUST-456', name: 'John Doe', ... },
// 'product' => { productId: 'PROD-789', name: 'Widget', price: 99.99, ... }
// }
// }
// Benefits:
// - Single call to fetch related data
// - Automatic relationship resolution
// - Maintains referential integrity
// - Works across database tiers
New in v2.0.1! Fetch data across tiers with automatic fallback or merging:
Fetch from knowledge database with tier priority (tenant → domain → generic):
// Fetch with fallback (returns first found)
const config = await xronox.getKnowledge(
'app-config',
{ key: 'feature-flags' },
{
tenantId: 'tenant-a',
domain: 'production',
merge: false // Return first found
}
);
// Returns tenant config if exists, otherwise domain, otherwise generic
// Fetch with merge (combines all tiers)
const mergedConfig = await xronox.getKnowledge(
'app-config',
{ key: 'feature-flags' },
{
tenantId: 'tenant-a',
domain: 'production',
merge: true, // Merge all tiers
mergeOptions: { dedupeArrays: true }
}
);
// Returns:
// {
// data: {
// // Generic tier settings
// maxUploadSize: 10485760,
// // Domain tier settings (production)
// enableNewFeature: true,
// // Tenant tier settings (tenant-a overrides)
// maxUploadSize: 52428800,
// customField: 'tenant-specific'
// },
// tiersFound: ['generic', 'domain', 'tenant'],
// tierRecords: {
// generic: { maxUploadSize: 10485760, ... },
// domain: { enableNewFeature: true, ... },
// tenant: { maxUploadSize: 52428800, customField: ... }
// }
// }
Same functionality for metadata database:
// Fetch schema with tier fallback
const schema = await xronox.getMetadata(
'schemas',
{ entityType: 'user' },
{
tenantId: 'tenant-a',
domain: 'saas',
merge: true
}
);
// Merge priority: generic → domain → tenant
// Tenant-specific fields override domain and generic
// Generic tier (base configuration)
{
theme: 'light',
features: ['basic', 'standard'],
settings: { timeout: 30 }
}
// Domain tier (environment-specific)
{
features: ['advanced'],
settings: { maxRetries: 3 }
}
// Tenant tier (customer-specific)
{
theme: 'dark',
features: ['premium'],
settings: { timeout: 60 }
}
// Merged result (with merge: true):
{
theme: 'dark', // From tenant (overrides)
features: ['basic', 'standard', 'advanced', 'premium'], // Union of all
settings: { timeout: 60, maxRetries: 3 } // Deep merge
}
Explicit, append-only restore with audit trails:
// Restore object to specific version
await ops.restoreObject(id, { ov: 5 });
// or by time
await ops.restoreObject(id, { at: '2024-01-01T00:00:00Z' });
// Restore entire collection
await ops.restoreCollection({ cv: 100 });
// or by time
await ops.restoreCollection({ at: '2024-01-01T00:00:00Z' });
Simplified MongoDB-compatible client for CRUD operations:
import { initXronox, createXronoxClient } from 'xronox';
const xronox = await initXronox();
const client = createXronoxClient(xronox, {
databaseType: 'knowledge',
tier: 'tenant',
tenantId: 'default'
});
// Insert with auto-generated ID
const id = await client.insert('knowledge_items', {
topic: 'quantum-computing',
content: { text: '...' },
confidence: 0.9
});
// Find one
const item = await client.findOne('knowledge_items', {
topic: 'quantum-computing'
});
// Find by ID
const doc = await client.findById('knowledge_items', id);
// Find multiple with filters
const items = await client.find('knowledge_items', {
state: 'known',
confidence: { $gte: 0.8 }
}, {
limit: 10,
projection: { include: ['topic', 'content'], exclude: [] }
});
// Update
await client.update('knowledge_items', id, {
confidence: 0.95
});
// Delete
await client.delete('knowledge_items', id);
// Bulk delete (GDPR compliance)
const deletedCount = await client.deleteMany('knowledge_items', {
'_metadata.identity.identifier': 'user-123'
});
// Count
const total = await client.count('knowledge_items', {
state: 'known'
});
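The filters accepted by `client.find` follow MongoDB's query conventions: plain values match by equality, nested objects like `{ $gte: 0.8 }` apply comparison operators, and dotted keys reach into nested fields. A minimal sketch of that matching logic (illustrative only, supporting a handful of operators):

```typescript
// Evaluate a MongoDB-style filter against a single document.
function matchesFilter(doc: Record<string, any>, filter: Record<string, any>): boolean {
  return Object.entries(filter).every(([field, condition]) => {
    // Dot notation walks into nested objects, e.g. '_metadata.identity.identifier'
    const value = field.split('.').reduce((v: any, k) => (v == null ? v : v[k]), doc as any);
    if (condition !== null && typeof condition === 'object' && !Array.isArray(condition)) {
      return Object.entries(condition).every(([op, operand]) => {
        switch (op) {
          case '$gte': return value >= (operand as any);
          case '$lte': return value <= (operand as any);
          case '$gt':  return value > (operand as any);
          case '$lt':  return value < (operand as any);
          case '$ne':  return value !== operand;
          default:     return false; // operator not supported in this sketch
        }
      });
    }
    return value === condition; // plain equality match
  });
}
```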
Why XronoxClient?
- Bulk delete support for GDPR compliance (deleteMany)

Track WHO created or modified data with a standardized identity interface across your entire ecosystem.
import { Identity, createUserIdentity, createAgentIdentity, createSystemIdentity } from 'xronox';
// User identity
const userIdentity: Identity = createUserIdentity('user-123', 'John Doe', {
email: 'john@example.com',
role: 'admin',
department: 'engineering'
});
// AI Agent identity
const agentIdentity: Identity = createAgentIdentity('research-agent-v2', 'Research Agent', {
version: '2.0.0',
model: 'gpt-4',
provider: 'openai'
});
// System identity
const systemIdentity: Identity = createSystemIdentity('background-processor', 'Background Processor', {
hostname: 'worker-01',
pid: 12345
});
// API/Service identity
const apiIdentity: Identity = {
type: 'api',
identifier: 'stripe-integration',
name: 'Stripe Payment Integration',
metadata: { version: 'v1', environment: 'production' }
};
// Create record with identity attribution
const result = await ops.create(
{ topic: 'quantum-computing', content: '...' },
userIdentity, // WHO created this
'research' // WHY created (reason)
);
// Update with agent identity
await ops.update(
id,
{ confidence: 0.95 },
expectedOv,
agentIdentity, // WHO updated this
'ml-validation' // WHY updated
);
// Enrich with system identity
await ops.enrich(
id,
{ processed: true, processedAt: new Date() },
{
identity: systemIdentity, // WHO enriched this
reason: 'automated-processing'
}
);
type IdentityType =
| 'user' // Human user
| 'agent' // AI agent / automated system
| 'system' // Internal system process
| 'api' // External API integration
| 'service' // Microservice
| 'job' // Background job / worker
| 'cron' // Scheduled task
| 'webhook' // Webhook trigger
| string; // Custom types allowed
import {
createIdentity,
createUserIdentity,
createAgentIdentity,
createSystemIdentity,
createAPIIdentity,
validateIdentity,
isIdentity,
identityToString,
parseIdentityString
} from 'xronox';
// Create identities
const identity = createIdentity('user', 'user-123', 'John Doe');
// Validate identity
validateIdentity(identity); // Throws if invalid
isIdentity(obj); // Returns boolean
// Serialize for logging
const str = identityToString(identity);
// Returns: "user:user-123 (John Doe)"
// Parse from string
const parsed = parseIdentityString("user:user-123 (John Doe)");
// Returns: { type: 'user', identifier: 'user-123', name: 'John Doe' }
// Find all records created by a specific user
const userRecords = await ops.query({
meta: { '_system.creator.identifier': 'user-123' }
});
// Find all records updated by an agent
const agentUpdates = await ops.query({
meta: { '_system.updater.type': 'agent' }
});
// Find records created by API integrations
const apiRecords = await ops.query({
meta: { '_system.creator.type': 'api' }
});
Control what data is returned in queries, just like hidden files in your operating system.
collectionMaps: {
knowledge_items: {
indexedProps: ['topic', 'state', 'confidence'],
// Fields hidden by default (like hidden files in OS)
hiddenFields: ['_metadata', '_internal'],
// Named projections (reusable presets)
projection: {
default: {
include: '*',
exclude: ['_metadata', '_internal'] // Hide by default
},
minimal: {
include: ['id', 'topic', 'content'], // Only essential fields
exclude: []
},
withMetadata: {
include: '*', // Include everything (even hidden)
exclude: []
},
admin: {
include: '*', // Admin view with all fields
exclude: []
}
}
}
}
// Default behavior - hidden fields excluded
const item = await ops.getLatest(id);
// Returns: { topic, content, confidence, sources, ... }
// Does NOT include: _metadata, _internal ✅
// Explicitly include hidden fields (like "Show hidden files" in Windows)
const itemWithMetadata = await ops.getLatest(id, {
includeHidden: true
});
// Returns: { topic, content, _metadata, _internal, ... }
// INCLUDES hidden fields ✅
// Use named projection
const minimal = await ops.getLatest(id, {
projectionSpec: 'minimal'
});
// Returns: { id, topic, content } // Only specified fields ✅
// Use custom projection
const custom = await ops.getLatest(id, {
projectionSpec: {
include: ['topic', 'confidence'],
exclude: []
}
});
// Returns: { topic, confidence } ✅
// Query without metadata (default)
const items = await ops.query({
meta: { state: 'known' }
});
// All items returned without _metadata ✅
// Query with metadata (for admin panel)
const adminItems = await ops.query({
meta: { state: 'known' }
}, {
includeHidden: true // Include _metadata for audit
});
// All items with _metadata ✅
// Query with named projection
const minimalItems = await ops.query({
meta: { state: 'known' }
}, {
projectionSpec: 'minimal'
});
// All items with only minimal fields ✅
// Configuration
collectionMaps: {
knowledge_items: {
indexedProps: ['topic', 'agentId', 'state', 'confidence'],
// Hide internal metadata by default
hiddenFields: ['_metadata'],
projection: {
// Application use (clean response)
default: {
include: '*',
exclude: ['_metadata']
},
// Admin/Debug view (with metadata)
withMetadata: {
include: '*'
}
}
}
}
// Application code - clean response
const knowledge = await knowledgeOps.getLatest(id);
// Returns:
// {
// topic: 'quantum-computing',
// content: { ... },
// confidence: 0.9,
// sources: [...]
// }
// No _metadata cluttering the response ✅
// Admin/Debug - full context
const knowledgeWithContext = await knowledgeOps.getLatest(id, {
includeHidden: true
});
// Returns:
// {
// topic: 'quantum-computing',
// content: { ... },
// confidence: 0.9,
// sources: [...],
// _metadata: { // ← Now visible
// agentId: 'agent-123',
// jobId: 'job-456',
// identity: { ... },
// context: [...],
// learnedAt: '2025-01-09T...'
// }
// }
| Without Projection | With Projection (Xronox v3.0) |
|---|---|
| ❌ Metadata clutters responses | ✅ Clean responses by default |
| ❌ Manual field stripping in code | ✅ Define once in config |
| ❌ Inconsistent across frameworks | ✅ Standard behavior |
| ❌ Larger payloads | ✅ Smaller payloads (better performance) |
| ❌ Security risk (accidental exposure) | ✅ Security by default |
| ❌ Duplicated logic everywhere | ✅ DRY principle |
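The projection semantics described above can be sketched as a pure function — an illustrative model, not the library's internal implementation. `include: '*'` starts from all fields, `exclude` removes fields, and `hiddenFields` are dropped unless `includeHidden` is set:

```typescript
type ProjectionSpec = { include: '*' | string[]; exclude: string[] };

function applyProjection(
  doc: Record<string, unknown>,
  spec: ProjectionSpec,
  hiddenFields: string[] = [],
  includeHidden = false
): Record<string, unknown> {
  const keys = spec.include === '*' ? Object.keys(doc) : spec.include;
  const out: Record<string, unknown> = {};
  for (const k of keys) {
    if (spec.exclude.includes(k)) continue;
    if (!includeHidden && hiddenFields.includes(k)) continue; // hidden unless opted in
    if (k in doc) out[k] = doc[k];
  }
  return out;
}

// Default view hides _metadata; includeHidden: true would restore it
const sample = { id: '1', topic: 'quantum-computing', content: {}, _metadata: { agentId: 'a-1' } };
console.log(Object.keys(applyProjection(sample, { include: '*', exclude: [] }, ['_metadata'])));
// → [ 'id', 'topic', 'content' ]
```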
GDPR = General Data Protection Regulation - EU law giving users the "Right to be Forgotten"
1. Identity-Based Data Tracking
Every record can be attributed to an identity:
import { createUserIdentity } from 'xronox';
const userIdentity = createUserIdentity('user-123', 'John Doe', {
email: 'john@example.com'
});
// Track WHO created data
await ops.create(data, userIdentity, 'user-action');
// All data is now linked to user-123 via identity
2. Find All Data for a User
Query all records created/modified by a specific user:
// Find by creator
const userRecords = await ops.query({
meta: { '_system.creator.identifier': 'user-123' }
});
// Find by updater
const userUpdates = await ops.query({
meta: { '_system.updater.identifier': 'user-123' }
});
// Find by identity type
const userCreatedData = await ops.query({
meta: {
'_system.creator.type': 'user',
'_system.creator.identifier': 'user-123'
}
});
3. Bulk Delete User Data (Right to be Forgotten) ⭐ NEW in v3.0.1
import { createXronoxClient } from 'xronox';
const client = createXronoxClient(xronox, {
databaseType: 'knowledge',
tier: 'tenant',
tenantId: 'tenant-a'
});
// Delete ALL data for a user (GDPR compliance)
const deletedCount = await client.deleteMany('knowledge_items', {
'_metadata.identity.identifier': 'user-123',
'_metadata.tenantId': 'tenant-a'
});
console.log(`GDPR: Deleted ${deletedCount} items for user-123`);
// Returns: number of deleted documents
4. GDPR Deletion Workflow
// Step 1: User requests data deletion
async function handleGDPRDeletionRequest(userId: string, tenantId: string) {
const client = createXronoxClient(xronox, {
databaseType: 'knowledge',
tier: 'tenant',
tenantId
});
// Step 2: Find all user data
const userKnowledge = await client.find('knowledge_items', {
'_metadata.identity.identifier': userId,
'_metadata.tenantId': tenantId
});
console.log(`Found ${userKnowledge.length} knowledge items for user ${userId}`);
// Step 3: Delete all user data
const deletedKnowledge = await client.deleteMany('knowledge_items', {
'_metadata.identity.identifier': userId,
'_metadata.tenantId': tenantId
});
// Step 4: Delete from other collections
const deletedQuestions = await client.deleteMany('questions', {
'_metadata.identity.identifier': userId,
'_metadata.tenantId': tenantId
});
// Step 5: Return deletion report
return {
userId,
tenantId,
deletedAt: new Date().toISOString(),
itemsDeleted: {
knowledge: deletedKnowledge,
questions: deletedQuestions
},
total: deletedKnowledge + deletedQuestions,
status: 'GDPR_COMPLIANT'
};
}
// Usage
const report = await handleGDPRDeletionRequest('user-123', 'tenant-a');
console.log(`GDPR Deletion Complete:`, report);
// Output:
// {
// userId: 'user-123',
// tenantId: 'tenant-a',
// deletedAt: '2025-01-09T...',
// itemsDeleted: { knowledge: 45, questions: 12 },
// total: 57,
// status: 'GDPR_COMPLIANT'
// }
5. Logical Delete (Audit Trail)
// Enable logical delete for GDPR compliance with audit trails
logicalDelete: {
enabled: true // Default - data marked deleted but retained for audit
}
// Versioning for data lineage
versioning: {
enabled: true // Default - complete history of all changes
}
// Delete with audit trail
await ops.delete(id, expectedOv, userIdentity, 'gdpr-request');
// Data is logically deleted but history remains for compliance
6. GDPR Compliance Checklist
✅ Identity Attribution: Track WHO created/modified data
✅ Data Discovery: Find all data for a user
✅ Bulk Delete: Delete all user data in one operation
✅ Audit Trails: Complete history of deletions
✅ Tenant Isolation: Ensure cross-tenant data safety
✅ 30-Day Compliance: Automated deletion workflow
// Day 1: User requests deletion
const deletionRequest = {
userId: 'user-123',
email: 'john@example.com',
requestDate: '2025-01-01',
reason: 'GDPR Article 17 - Right to be Forgotten'
};
// Day 2-5: Process deletion
const report = await handleGDPRDeletionRequest(
deletionRequest.userId,
'tenant-a'
);
// Day 6: Verify deletion
const remainingData = await client.find('knowledge_items', {
'_metadata.identity.identifier': deletionRequest.userId
});
if (remainingData.length === 0) {
console.log('✅ GDPR Compliant: All user data deleted');
} else {
console.error('❌ GDPR Violation: Data still exists!');
}
// Day 30: Deadline - COMPLIANT! ✅
| Without XronoxClient | With XronoxClient v3.0.1 |
|---|---|
| ❌ Manual deletion per record | ✅ Bulk delete with deleteMany() |
| ❌ Complex query logic | ✅ Simple identity-based queries |
| ❌ Risk of missing data | ✅ Comprehensive deletion |
| ❌ Cannot meet 30-day deadline | ✅ Automated compliance |
| ❌ Legal liability | ✅ GDPR compliant |
| ❌ Cannot deploy in EU | ✅ EU-ready |
Legal Protection: Xronox v3.0.1 provides the technical foundation for GDPR compliance, but legal review is recommended.
// Enable comprehensive audit trails
collectionMaps: {
financial_records: {
indexedProps: ['accountId', 'transactionId', 'amount', 'date'],
validation: {
requiredIndexed: ['accountId', 'transactionId', 'amount']
}
}
},
// Enable transaction logging
transactions: {
enabled: true,
autoDetect: true
}
// Separate infrastructure per tenant for healthcare data
tenantDatabases: [
{
tenantId: 'healthcare-provider',
dbConnRef: 'mongo-healthcare', // Separate MongoDB cluster
spaceConnRef: 's3-healthcare', // Separate S3 bucket
bucket: 'chronos-healthcare',
dbName: 'chronos_healthcare',
analyticsDbName: 'chronos_analytics_healthcare'
}
]
┌─────────────┐
│ Client │
└──────┬──────┘
│
▼
┌─────────────────────────────────┐
│ Chronos-DB v2.0 │
│ ┌───────────────────────────┐ │
│ │ Router (HRW Hashing) │ │
│ │ + Tenant Resolution │ │
│ └───────────────────────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌──────────┐ ┌──────────┐ │
│ │ Mongo │ │ S3 │ │
│ │ (Indexed)│ │(Payloads)│ │
│ └──────────┘ └──────────┘ │
│ │
│ ┌───────────────────────────┐ │
│ │ Analytics (Integrated) │ │
│ └───────────────────────────┘ │
│ │
│ ┌───────────────────────────┐ │
│ │ Fallback Queue (Optional)│ │
│ └───────────────────────────┘ │
└─────────────────────────────────┘
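The Router's rendezvous (HRW) hashing step can be sketched as: score every backend against the key and pick the highest score, so each key maps deterministically to one backend with minimal reshuffling when backends change. This is an illustrative implementation with a stand-in FNV-1a hash, not Chronos-DB's actual router code:

```typescript
// Simple FNV-1a string hash (stand-in for whatever hash the router uses)
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

// Rendezvous hashing: the backend with the highest score for this key wins
function pickBackend(key: string, backends: string[]): string {
  let best = backends[0];
  let bestScore = -1;
  for (const b of backends) {
    const score = fnv1a(`${b}|${key}`);
    if (score > bestScore) { bestScore = score; best = b; }
  }
  return best;
}

const backends = ['mongo-a', 'mongo-b', 'mongo-c'];
const chosen = pickBackend('tenant-a', backends);
console.log(backends.includes(chosen)); // true — same key always maps to the same backend
```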
- <collection>_head - Latest state pointers
- <collection>_ver - Immutable version index
- <collection>_counter - Collection version counter
- <collection>_locks - Transaction locks for concurrent write prevention
- cnt_total - Counter totals (in analytics database)
- chronos_fallback_ops - Fallback queue (if enabled)
- chronos_fallback_dead - Dead letter queue (if enabled)

<jsonBucket>/
<collection>/
<itemId>/
v0/item.json
v1/item.json
v2/item.json
<contentBucket>/
<collection>/
<itemId>/
v0/
<property>/blob.bin
<property>/text.txt
v1/
<property>/blob.bin
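The storage layout above maps (collection, itemId, version) to an object key. A hypothetical key-builder matching that layout — not part of the library's API:

```typescript
// Builds the per-version JSON key following the <jsonBucket> layout above
function versionKey(collection: string, itemId: string, ov: number): string {
  return `${collection}/${itemId}/v${ov}/item.json`;
}

console.log(versionKey('knowledge_items', 'item-42', 2));
// → knowledge_items/item-42/v2/item.json
```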
Chronos-DB organizes all system fields under the _system property to keep your documents clean and normal-looking. This design ensures that your application data remains separate from Chronos-DB's internal management fields.
{
"_id": "507f1f77bcf86cd799439011", // MongoDB's native _id (stays at root)
"email": "user@example.com", // Your application data
"name": "John Doe", // Your application data
"status": "active", // Your application data
"_system": { // All Chronos-DB system fields
"ov": 3, // Object version (incremented on each update)
"cv": 150, // Collection version (incremented on each operation)
"insertedAt": "2024-01-01T00:00:00Z", // Creation timestamp
"updatedAt": "2024-01-15T10:30:00Z", // Last update timestamp
"deletedAt": null, // Deletion timestamp (null if not deleted)
"deleted": false, // Deletion status
"functionIds": ["enricher@v1"], // Enrichment function IDs that modified this record
"parentId": "parent-record-id", // Parent record for lineage tracking
"parentCollection": "parent-collection", // Parent collection name
"originId": "root-record-id", // Original root record ID
"originCollection": "root-collection" // Original root collection name
}
}
- ov (Object Version): Incremented each time this specific record is updated
- cv (Collection Version): Incremented each time any record in the collection is modified
- insertedAt: ISO 8601 timestamp when the record was first created
- updatedAt: ISO 8601 timestamp when the record was last modified
- deletedAt: ISO 8601 timestamp when the record was logically deleted (null if not deleted)
- deleted: Boolean indicating if the record is logically deleted
- functionIds: Array of enrichment function IDs that have modified this record
- parentId: ID of the parent record (for hierarchical data)
- parentCollection: Collection name of the parent record
- originId: ID of the original root record (preserved throughout lineage)
- originCollection: Collection name of the original root record

Chronos-DB automatically manages all _system fields. Your application should NOT modify these fields directly.
- ov and cv are automatically incremented
- insertedAt, updatedAt, deletedAt are automatically set
- The deleted flag is automatically managed
- functionIds are automatically updated during enrichment

Best practices:
- Read _system fields: use them for optimistic locking, audit trails, etc.
- Set parentRecord or origin when creating records
- Pass expectedOv for updates to prevent conflicts
- Don't modify _system fields: never directly set or change these fields

// Create a child record with parent lineage
const childRecord = await ops.create({
name: 'Child Record',
data: 'some data'
}, 'system', 'child creation', {
parentRecord: {
id: 'parent-record-id',
collection: 'parent_items',
}
});
// The _system field will automatically include:
// {
// parentId: 'parent-record-id',
// parentCollection: 'parent_items',
// originId: 'parent-record-id', // Derived from parent
// originCollection: 'parent_items' // Derived from parent
// }
// Create a record with explicit origin (e.g., from external system)
const importedRecord = await ops.create({
customerId: 'ext-123',
name: 'Imported Customer'
}, 'system', 'import', {
origin: {
id: 'stripe_cus_123',
collection: 'customers',
system: 'stripe' // Optional external system name
}
});
// The _system field will automatically include:
// {
// originId: 'stripe_cus_123',
// originCollection: 'stripe:customers' // Includes system prefix
// }
// Get current record
const current = await ops.getLatest('record-id');
// Update with optimistic locking
const updated = await ops.update('record-id', {
name: 'Updated Name'
}, current._system.ov, 'user', 'name-update');
// Chronos-DB automatically:
// - Increments ov from 3 to 4
// - Updates updatedAt timestamp
// - Prevents conflicts if another process updated the record
// Get record as it was at a specific time
const historical = await ops.getAsOf('record-id', '2024-01-01T00:00:00Z');
// Get specific version
const v2 = await ops.getVersion('record-id', 2);
// Both return the same structure with _system fields showing:
// - ov: 2 (version at that time)
// - updatedAt: timestamp when that version was created
// - All other _system fields as they were at that time
If you're migrating from a system that stores version/timestamp fields at the root level:
// OLD WAY (don't do this):
{
"_id": "123",
"name": "John",
"version": 5, // ❌ Don't store at root
"createdAt": "...", // ❌ Don't store at root
"updatedAt": "..." // ❌ Don't store at root
}
// NEW WAY (Chronos-DB):
{
"_id": "123",
"name": "John", // ✅ Clean application data
"_system": { // ✅ All system fields organized
"ov": 5,
"insertedAt": "...",
"updatedAt": "..."
}
}
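A hypothetical migration helper for the old-to-new mapping above — field names are taken from the example; adapt them to your schema:

```typescript
// Moves root-level version/timestamp fields under _system,
// following the OLD WAY / NEW WAY example above.
function migrateToSystemFields(
  doc: { version?: number; createdAt?: string; updatedAt?: string; [k: string]: unknown }
) {
  const { version, createdAt, updatedAt, ...rest } = doc;
  return {
    ...rest, // application data stays at the root
    _system: { ov: version ?? 0, insertedAt: createdAt, updatedAt }
  };
}

const migrated = migrateToSystemFields({
  _id: '123', name: 'John', version: 5,
  createdAt: '2024-01-01', updatedAt: '2024-01-15'
});
console.log(migrated._system.ov); // → 5
```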
Chronos-DB v2.0.1 includes sophisticated analytics capabilities with unique counting support.
counterRules: {
rules: [
{
name: 'user_logins',
on: ['CREATE'],
scope: 'meta',
when: { action: 'login' },
countUnique: ['sessionId'] // Count unique sessionId values
},
{
name: 'product_views',
on: ['CREATE'],
scope: 'meta',
when: { action: 'view' },
countUnique: ['productId', 'category', 'brand'] // Multiple unique counts
},
{
name: 'premium_purchases',
on: ['CREATE'],
scope: 'meta',
when: {
userTier: 'premium',
action: 'purchase',
amount: { $gte: 100 }
},
countUnique: ['productId', 'category']
}
]
}
// Get analytics for a tenant
const metrics = await chronos.counters.getTotals({
dbName: 'chronos_runtime_tenant_a',
collection: 'events',
});
console.log('Analytics:', metrics);
// Output:
// {
// _id: "tenant:tenant-a|db:chronos_runtime_tenant_a|coll:events",
// created: 1000, // Total occurrences
// updated: 500,
// deleted: 50,
// rules: {
// user_logins: {
// created: 150, // Total logins
// unique: {
// sessionId: 45 // Unique sessions
// }
// },
// product_views: {
// created: 800, // Total views
// unique: {
// productId: 150, // Unique products viewed
// category: 25, // Unique categories viewed
// brand: 12 // Unique brands viewed
// }
// },
// premium_purchases: {
// created: 200, // Total premium purchases
// unique: {
// productId: 75, // Unique products purchased
// category: 15 // Unique categories purchased
// }
// }
// },
// lastAt: "2024-01-15T10:30:00Z"
// }
Business Intelligence: Understand user behavior patterns
Performance Optimization: Identify bottlenecks
Compliance Reporting: Meet regulatory requirements
E-commerce Analytics
// Track unique customers per day
{
name: 'daily_unique_customers',
on: ['CREATE'],
scope: 'meta',
when: { event: 'purchase' },
countUnique: ['customerId']
}
// Track unique products per category
{
name: 'category_product_diversity',
on: ['CREATE'],
scope: 'meta',
when: { action: 'view' },
countUnique: ['productId', 'category']
}
User Engagement Analytics
// Track unique sessions per user
{
name: 'user_session_activity',
on: ['CREATE'],
scope: 'meta',
when: { action: 'login' },
countUnique: ['sessionId', 'userId']
}
// Track unique features used per user
{
name: 'feature_adoption',
on: ['CREATE'],
scope: 'meta',
when: { event: 'feature_used' },
countUnique: ['featureId', 'userId']
}
Financial Analytics
// Track unique accounts per transaction type
{
name: 'transaction_diversity',
on: ['CREATE'],
scope: 'meta',
when: {
event: 'transaction',
amount: { $gte: 1000 }
},
countUnique: ['accountId', 'transactionType']
}
Each tenant gets its own analytics database with the following collections:
cnt_total Collection
{
"_id": "tenant:tenant-a|db:chronos_runtime_tenant_a|coll:events",
"tenant": "tenant-a",
"dbName": "chronos_runtime_tenant_a",
"collection": "events",
"created": 1000,
"updated": 500,
"deleted": 50,
"rules": {
"user_logins": {
"created": 150,
"unique": {
"sessionId": ["sess1", "sess2", "sess3", ...] // Stored as arrays
}
}
},
"lastAt": "2024-01-15T10:30:00Z"
}
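The array-based unique tracking shown above can be sketched in plain TypeScript — a Set plays the role of $addToSet, so the unique count is simply the set size. This is an illustrative model, not the library's internals:

```typescript
// Per-rule counter state: total occurrences plus a set of seen values per property
type CounterState = { created: number; unique: Record<string, Set<string>> };

function applyEvent(
  state: CounterState,
  props: Record<string, string>,
  tracked: string[] // properties listed in countUnique
): CounterState {
  state.created += 1;
  for (const key of tracked) {
    // like $addToSet: adding an existing value is a no-op
    (state.unique[key] ??= new Set()).add(props[key]);
  }
  return state;
}

const state: CounterState = { created: 0, unique: {} };
applyEvent(state, { sessionId: 'sess-1' }, ['sessionId']);
applyEvent(state, { sessionId: 'sess-1' }, ['sessionId']); // duplicate, not re-counted as unique
applyEvent(state, { sessionId: 'sess-2' }, ['sessionId']);
console.log(state.created, state.unique.sessionId.size); // → 3 2
```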
Key Features:
- $addToSet ensures unique values

1. Choose Meaningful Properties
// Good: Track business-relevant unique values
countUnique: ['userId', 'productId', 'sessionId']
// Avoid: Tracking too many properties
countUnique: ['userId', 'productId', 'sessionId', 'ipAddress', 'userAgent', 'timestamp']
2. Use Appropriate Conditions
// Good: Specific conditions for meaningful analytics
when: {
userTier: 'premium',
action: 'purchase',
amount: { $gte: 100 }
}
// Avoid: Too broad conditions
when: { action: 'view' } // Might be too noisy
3. Monitor Performance
// Use indexes on frequently queried fields
collectionMaps: {
events: {
indexedProps: ['userId', 'action', 'timestamp', 'userTier']
}
}
4. Regular Cleanup
// Set appropriate retention policies
retention: {
counters: {
days: 30, // Keep daily counts for 30 days
weeks: 12, // Keep weekly counts for 12 weeks
months: 6 // Keep monthly counts for 6 months
}
Chronos-DB provides advanced analytics capabilities that are designed to work with external workers. Important: The worker itself is NOT included in Chronos-DB - you need to implement your own worker system.
Time-based analytics rules are designed to be executed by external workers on a schedule (hourly, daily, monthly).
analytics: {
// Standard counter rules (real-time)
counterRules: [
{
name: 'user_logins',
on: ['CREATE'],
scope: 'meta',
when: { action: 'login' },
countUnique: ['sessionId']
}
],
// Time-based analytics rules (worker-driven)
timeBasedRules: [
{
name: 'daily_revenue',
collection: 'transactions',
query: { status: 'completed' },
operation: 'sum',
field: 'amount',
saveMode: 'timeframe',
timeframe: 'daily'
},
{
name: 'hourly_active_users',
collection: 'events',
query: { action: 'page_view' },
operation: 'count',
saveMode: 'timeframe',
timeframe: 'hourly',
relativeTime: {
newerThan: 'PT1H' // Last hour
}
},
{
name: 'monthly_unique_customers',
collection: 'orders',
query: { status: 'completed' },
operation: 'count',
saveMode: 'timeframe',
timeframe: 'monthly',
arguments: ['customerId'] // Foreign key filtering
}
],
// Cross-tenant analytics rules
crossTenantRules: [
{
name: 'global_active_tenants',
collection: 'events',
query: { action: 'user_activity' },
mode: 'boolean',
masterTenantId: 'master-tenant',
slaveTenantIds: ['tenant-a', 'tenant-b', 'tenant-c'],
relativeTime: {
newerThan: 'P1D' // Last 24 hours
}
}
],
// List of all tenants for cross-tenant operations
tenants: ['tenant-a', 'tenant-b', 'tenant-c', 'master-tenant']
}
import { AdvancedAnalytics } from 'chronos-db';
import { MongoClient } from 'mongodb';
class AnalyticsWorker {
private analytics: AdvancedAnalytics;
private config: any;
constructor(mongoUri: string, analyticsDbName: string, config: any) {
const mongoClient = new MongoClient(mongoUri);
this.analytics = new AdvancedAnalytics(mongoClient, analyticsDbName, config);
this.config = config; // kept for rule lookups in the methods below
}
// Execute time-based rules
async executeTimeBasedRules() {
const rules = this.config.analytics.timeBasedRules || [];
for (const rule of rules) {
try {
const result = await this.analytics.executeTimeBasedRule(rule);
console.log(`Executed rule ${rule.name}:`, result.value);
} catch (error) {
console.error(`Failed to execute rule ${rule.name}:`, error);
}
}
}
// Execute cross-tenant rules
async executeCrossTenantRules() {
const rules = this.config.analytics.crossTenantRules || [];
for (const rule of rules) {
try {
const result = await this.analytics.executeCrossTenantRule(rule);
console.log(`Executed cross-tenant rule ${rule.name}:`, result.value);
} catch (error) {
console.error(`Failed to execute cross-tenant rule ${rule.name}:`, error);
}
}
}
// Cleanup TTL data
async cleanupTTLData() {
// Clean up old analytics data
const cutoffDate = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000); // 30 days ago
// Clean up time-based results older than 30 days
await this.analytics.analyticsDb.collection('timeBasedResults')
.deleteMany({ timestamp: { $lt: cutoffDate } });
// Clean up cross-tenant results older than 30 days
await this.analytics.analyticsDb.collection('crossTenantResults')
.deleteMany({ timestamp: { $lt: cutoffDate } });
console.log('TTL cleanup completed');
}
}
// Worker scheduling (example with node-cron)
import cron from 'node-cron';
const worker = new AnalyticsWorker(
'mongodb://localhost:27017',
'analytics_db',
config
);
// Run time-based analytics every hour
cron.schedule('0 * * * *', async () => {
console.log('Running hourly analytics...');
await worker.executeTimeBasedRules();
});
// Run cross-tenant analytics daily at midnight
cron.schedule('0 0 * * *', async () => {
console.log('Running daily cross-tenant analytics...');
await worker.executeCrossTenantRules();
});
// Cleanup TTL data weekly
cron.schedule('0 0 * * 0', async () => {
console.log('Running weekly TTL cleanup...');
await worker.cleanupTTLData();
});
{
"_id": "daily_revenue_1704067200000_abc123",
"ruleName": "daily_revenue",
"collection": "transactions",
"operation": "sum",
"field": "amount",
"value": 15420.50,
"timeframe": "daily",
"timestamp": "2024-01-01T00:00:00Z",
"arguments": null
}
{
"_id": "global_active_tenants_1704067200000_def456",
"ruleName": "global_active_tenants",
"collection": "events",
"mode": "boolean",
"value": 3,
"timestamp": "2024-01-01T00:00:00Z",
"masterTenantId": "master-tenant",
"slaveResults": [
{ "tenantId": "tenant-a", "value": 1 },
{ "tenantId": "tenant-b", "value": 1 },
{ "tenantId": "tenant-c", "value": 1 }
]
}
// Get analytics results
const timeBasedResults = await analytics.getTimeBasedResults({
ruleName: 'daily_revenue',
timeframe: 'daily',
limit: 30
});
const crossTenantResults = await analytics.getCrossTenantResults({
ruleName: 'global_active_tenants',
masterTenantId: 'master-tenant',
limit: 7
});
Chronos-DB requires external workers to handle TTL cleanup of aged analytics data, such as the time-based and cross-tenant results shown in the cleanupTTLData example above.
// Robust worker with error handling
class RobustAnalyticsWorker {
async executeWithRetry(operation: () => Promise<any>, maxRetries = 3) {
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
return await operation();
} catch (error) {
console.error(`Attempt ${attempt} failed:`, error);
if (attempt === maxRetries) {
throw error;
}
await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
}
}
}
async executeTimeBasedRules() {
const rules = this.config.analytics.timeBasedRules || [];
for (const rule of rules) {
await this.executeWithRetry(async () => {
const result = await this.analytics.executeTimeBasedRule(rule);
console.log(`Executed rule ${rule.name}:`, result.value);
});
}
}
}
// Multiple workers for high-volume analytics
const workers = [
new AnalyticsWorker(mongoUri1, 'analytics_db_1', config),
new AnalyticsWorker(mongoUri2, 'analytics_db_2', config),
new AnalyticsWorker(mongoUri3, 'analytics_db_3', config)
];
// Distribute rules across workers
const rulesPerWorker = Math.ceil(timeBasedRules.length / workers.length);
workers.forEach((worker, index) => {
const startIndex = index * rulesPerWorker;
const endIndex = Math.min(startIndex + rulesPerWorker, timeBasedRules.length);
const workerRules = timeBasedRules.slice(startIndex, endIndex);
// Execute worker-specific rules
worker.executeRules(workerRules);
});
// Worker health monitoring
class MonitoredAnalyticsWorker {
async executeTimeBasedRules() {
const startTime = Date.now();
let successCount = 0;
let errorCount = 0;
try {
const rules = this.config.analytics.timeBasedRules || [];
for (const rule of rules) {
try {
await this.analytics.executeTimeBasedRule(rule);
successCount++;
} catch (error) {
errorCount++;
console.error(`Rule ${rule.name} failed:`, error);
// Send alert for critical rules
if (rule.critical) {
await this.sendAlert(`Critical analytics rule failed: ${rule.name}`);
}
}
}
const duration = Date.now() - startTime;
console.log(`Analytics execution completed: ${successCount} success, ${errorCount} errors, ${duration}ms`);
// Send metrics to monitoring system
await this.sendMetrics({
successCount,
errorCount,
duration,
timestamp: new Date()
});
} catch (error) {
console.error('Analytics worker failed:', error);
await this.sendAlert('Analytics worker failed completely');
}
}
}
// Ensure data consistency across workers
class ConsistentAnalyticsWorker {
async executeCrossTenantRules() {
// Use MongoDB transactions for consistency
const session = this.analytics.mongoClient.startSession();
try {
await session.withTransaction(async () => {
const rules = this.config.analytics.crossTenantRules || [];
for (const rule of rules) {
const result = await this.analytics.executeCrossTenantRule(rule);
// Verify result consistency
await this.verifyCrossTenantResult(result);
}
});
} finally {
await session.endSession();
}
}
async verifyCrossTenantResult(result: CrossTenantResult) {
// Verify that slave results sum correctly
const expectedValue = result.slaveResults.reduce((sum, slave) => sum + slave.value, 0);
if (result.value !== expectedValue) {
throw new Error(`Cross-tenant result inconsistency: expected ${expectedValue}, got ${result.value}`);
}
}
}
Chronos-DB works with any MongoDB setup - standalone instances, replica sets, or sharded clusters.
Recommended for Production:
# Option 1: Standalone MongoDB (works out of the box)
mongodb://localhost:27017/dbname
# Option 2: Replica Set (recommended for production)
mongodb://mongo1:27017,mongo2:27017,mongo3:27017/dbname?replicaSet=rs0
Transaction Support: MongoDB multi-document transactions require a replica set (or sharded cluster); the docker-compose example below runs a minimal three-node replica set.
# Example docker-compose.yml for replica set (optional)
services:
mongo1:
image: mongo:6
command: mongod --replSet rs0
mongo2:
image: mongo:6
command: mongod --replSet rs0
mongo3:
image: mongo:6
command: mongod --replSet rs0
Tested with:
Contributions welcome! Please ensure:
MIT © nx-intelligence
Built with:
Xronox v2.4 introduces a first-class messaging database type designed for integration with Chronow (hot Redis-backed messaging + warm MongoDB durable audit). This enables dual-tier retention, DLQ auditing, and cross-tenant observability for pub/sub systems.
The messaging database provides simple MongoDB-only storage (NO versioning, NO S3 offload) for:
✅ Simple & Fast: MongoDB-only, no versioning overhead, no storage offload
✅ Dual-Tier Design: Redis hot path (Chronow) + MongoDB warm/audit (Chronos)
✅ Multi-Tenant: Tenant-scoped with isolated databases
✅ Idempotent: Safe retry/replay with duplicate detection
✅ DLQ Support: Dead letter tracking with failure reasons
✅ Optional Delivery Tracking: Control storage overhead with captureDeliveries flag
Single Database for All Tenants (like logs):
{
"dbConnections": {
"mongo-primary": {
"mongoUri": "mongodb://localhost:27017"
}
},
"spacesConnections": {},
"databases": {
"messaging": {
"dbConnRef": "mongo-primary",
"dbName": "chronos_messaging",
"captureDeliveries": false
}
},
"routing": { "hashAlgo": "rendezvous" }
}
Configuration Fields:
- dbConnRef: MongoDB connection reference (required)
- dbName: MongoDB database name (required)
- captureDeliveries: Enable delivery attempt tracking (optional, default: false)

Note: The messaging database is shared across all tenants. Tenant isolation is achieved through the tenantId field in every document/query.
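The tenantId-based isolation noted above can be sketched as a filter helper — an illustrative shape, not part of the messaging API:

```typescript
// Every messaging query carries the tenantId, so one tenant's
// documents are never visible to another tenant's queries.
function tenantScoped(tenantId: string, filter: Record<string, unknown>) {
  return { tenantId, ...filter };
}

console.log(JSON.stringify(tenantScoped('tenant-a', { topic: 'payments' })));
// → {"tenantId":"tenant-a","topic":"payments"}
```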
import { initChronos } from 'xronox';
const chronos = initChronos(config);
// Get messaging API for a tenant
const messaging = chronos.messaging('tenant-a');
Store shared memory snapshots with append or latest-wins strategy:
// Latest strategy (one document per key, overwrites)
await messaging.shared.save({
namespace: 'config',
key: 'feature-flags',
val: { beta: true, newUI: false },
strategy: 'latest'
});
// Load latest value
const config = await messaging.shared.load({
namespace: 'config',
key: 'feature-flags',
strategy: 'latest'
});
// Append strategy (versioned history)
await messaging.shared.save({
namespace: 'events',
key: 'user-session',
val: { action: 'login', ts: new Date() },
strategy: 'append'
});
// Returns: { id: '...', version: 0 }
await messaging.shared.save({
namespace: 'events',
key: 'user-session',
val: { action: 'view-page', page: '/home' },
strategy: 'append'
});
// Returns: { id: '...', version: 1 }
// Load specific version
const v0 = await messaging.shared.load({
namespace: 'events',
key: 'user-session',
strategy: 'append',
version: 0
});
// Load latest version
const latest = await messaging.shared.load({
namespace: 'events',
key: 'user-session',
strategy: 'append'
});
// Tombstone (delete all versions)
await messaging.shared.tombstone({
namespace: 'config',
key: 'feature-flags',
reason: 'tenant-deleted'
});
Ensure topics exist and retrieve metadata:
// Ensure topic exists
await messaging.topics.ensure({
topic: 'payments',
shards: 4
});
// Get topic metadata
const topicInfo = await messaging.topics.get({ topic: 'payments' });
// Returns: { tenantId: 'tenant-a', topic: 'payments', shards: 4, createdAt: Date }
Save and retrieve canonical messages:
// Save message (idempotent)
await messaging.messages.save({
topic: 'payments',
msgId: '171223123-0', // Redis stream ID or ULID
headers: { type: 'payment.created', traceId: 'abc123' },
payload: { orderId: '123', amount: 100 },
firstSeenAt: new Date(),
size: 128
});
// Get specific message
const msg = await messaging.messages.get({
topic: 'payments',
msgId: '171223123-0'
});
// List messages (with time filter)
const recent = await messaging.messages.list({
topic: 'payments',
after: new Date(Date.now() - 86400000), // Last 24h
limit: 100
});
Track delivery attempts per subscription (enabled via captureDeliveries: true):
// Append delivery attempt
if (messaging.deliveries) {
await messaging.deliveries.append({
topic: 'payments',
subscription: 'payment-processor',
msgId: '171223123-0',
attempt: 1,
status: 'pending',
consumerId: 'worker-1',
ts: new Date()
});
// Update to ack
await messaging.deliveries.append({
topic: 'payments',
subscription: 'payment-processor',
msgId: '171223123-0',
attempt: 1,
status: 'ack',
consumerId: 'worker-1',
ts: new Date()
});
// List deliveries for a message
const deliveries = await messaging.deliveries.listByMessage({
topic: 'payments',
msgId: '171223123-0'
});
}
Track terminally failed messages:
// Save to DLQ
await messaging.deadLetters.save({
topic: 'payments',
subscription: 'payment-processor',
msgId: '171223123-0',
headers: { type: 'payment.created' },
payload: { orderId: '123', amount: 100 },
deliveries: 5,
reason: 'max_retries_exceeded',
failedAt: new Date()
});
// List dead letters
const dlq = await messaging.deadLetters.list({
topic: 'payments',
after: new Date(Date.now() - 86400000), // Last 24h
limit: 100
});
The messaging database creates 5 collections with optimized indexes:
shared_memory:
- { tenantId, namespace, key, strategy } - Unique for latest strategy
- { tenantId, namespace, key, version } - Versioned history for append
- { updatedAt } - Freshness queries

topics:
- { tenantId, topic } - Unique per tenant

messages:
- { tenantId, topic, msgId } - Unique per tenant/topic
- { firstSeenAt } - Time-based queries
- { tenantId, topic, firstSeenAt } - Topic+time queries

deliveries (created when captureDeliveries: true):
- { tenantId, topic, subscription, msgId, attempt } - Unique per delivery
- { tenantId, topic, msgId } - Message lookup
- { ts } - Time-based cleanup

dead_letters:
- { tenantId, topic, msgId } - Lookup by message
- { failedAt } - Time-based queries
- { tenantId, topic, failedAt } - Topic analysis

Note: Messaging databases do NOT use Chronos versioning/S3 storage - they are simple MongoDB collections.
Retention is managed via external worker scripts:
// Example: Clean up old deliveries (7 days)
const db = mongoClient.db('chronos_messaging_tenant_a');
await db.collection('deliveries').deleteMany({
ts: { $lt: new Date(Date.now() - 7 * 86400000) }
});
// Example: Clean up old dead letters (30 days)
await db.collection('dead_letters').deleteMany({
failedAt: { $lt: new Date(Date.now() - 30 * 86400000) }
});
// Example: Enforce maxVersions for append-mode shared memory
const maxVersions = 100;
const keys = await db.collection('shared_memory').distinct('key', {
tenantId: 'tenant-a',
namespace: 'events',
strategy: 'append'
});
for (const key of keys) {
const docs = await db.collection('shared_memory')
.find({ tenantId: 'tenant-a', namespace: 'events', key, strategy: 'append' })
.sort({ version: -1 })
.skip(maxVersions)
.toArray();
if (docs.length > 0) {
await db.collection('shared_memory').deleteMany({
_id: { $in: docs.map(d => d._id) }
});
}
}
Chronow (Redis-backed hot messaging) uses this messaging database for canonical message audit, DLQ tracking, and long-tail retrieval.
Typical Flow:
Chronow (Hot - Redis):
├─ Publish message → Redis Stream → Subscribers
└─ On publish: chronos.messaging(tenant).messages.save(...)
Chronos (Warm - MongoDB):
├─ Stores canonical message for audit
├─ Tracks DLQ for failed deliveries
└─ Provides long-tail retrieval (>24h)
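The publish step in the flow above can be sketched as a small piece of glue code. Note this is an illustration, not Chronow's actual API: `chronow.publish` is a hypothetical stand-in, while `messaging.messages.save` is the Xronox call shown earlier.

```javascript
// Hypothetical glue between the hot and warm tiers: publish to Redis via
// Chronow, then persist the canonical copy for audit / long-tail retrieval.
// `chronow.publish` is an assumed API; `messaging.messages.save` is the
// Xronox call documented above.
async function publishWithAudit(chronow, messaging, topic, headers, payload) {
  const msgId = await chronow.publish(topic, { headers, payload }); // hot tier
  await messaging.messages.save({                                   // warm tier
    topic,
    msgId,
    headers,
    payload,
    firstSeenAt: new Date(),
    size: Buffer.byteLength(JSON.stringify(payload))
  });
  return msgId;
}
```

Because `messages.save` is idempotent, retrying this function after a partial failure is safe: the same msgId will not create a duplicate warm-tier record.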
Xronox v2.4 includes a dedicated identities database for managing users, accounts, authentication, permissions, and roles.
The identities database provides simple MongoDB-only storage (no versioning, no S3 offload, like the logs and messaging databases) for identity data:
✅ Simple & Fast: MongoDB-only, no versioning overhead
✅ Generic/Shared: Single database for all tenants (tenant-scoped queries)
✅ RBAC Ready: Built for role-based access control
✅ Auth Flexible: Works with any auth strategy (JWT, OAuth, sessions)
✅ SaaS Optimized: Perfect for multi-tenant SaaS applications
Single Database for All Tenants (like logs and messaging):
{
"dbConnections": {
"mongo-primary": {
"mongoUri": "mongodb://localhost:27017"
}
},
"spacesConnections": {},
"databases": {
"identities": {
"dbConnRef": "mongo-primary",
"dbName": "chronos_identities"
}
},
"routing": { "hashAlgo": "rendezvous" }
}
Configuration Fields:
- dbConnRef: MongoDB connection reference (required)
- dbName: MongoDB database name (required)

Note: The identities database is shared across all tenants. Tenant isolation is achieved through the tenantId field in every document/query.
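Because isolation relies entirely on the tenantId field, a common pattern is a tiny wrapper that forces tenant scoping on every filter so a missing tenantId can never leak another tenant's documents. This helper is hypothetical, not part of the Xronox API:

```javascript
// Hypothetical helper (not a Xronox API): force tenant scoping on every
// filter so one tenant can never read another tenant's identity documents.
function tenantScoped(tenantId, filter = {}) {
  if (!tenantId) throw new Error('tenantId is required');
  return { ...filter, tenantId };
}

// Usage: ops.query(tenantScoped('acme-corp', { status: 'active' }), {})
```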
While Chronos-DB doesn't enforce specific collection schemas for identities (you define them), typical collections include:
{
_id: ObjectId,
tenantId: string,
email: string,
passwordHash: string,
name: string,
roles: string[],
status: 'active' | 'suspended' | 'deleted',
createdAt: Date,
updatedAt: Date
}
{
_id: ObjectId,
tenantId: string,
companyName: string,
plan: 'free' | 'pro' | 'enterprise',
ownerId: string, // Reference to users collection
seats: number,
billing: {
customerId: string,
subscriptionId: string
},
createdAt: Date
}
{
_id: ObjectId,
tenantId: string,
userId: string,
token: string,
expiresAt: Date,
ipAddress: string,
userAgent: string,
createdAt: Date
}
{
_id: ObjectId,
tenantId: string,
name: string,
permissions: string[],
isSystemRole: boolean,
createdAt: Date
}
Use the standard Xronox with() API with databaseType: 'identities':
import { initChronos } from 'xronox';
const chronos = initChronos(config);
// Get operations for identities database
const ops = chronos.with({
databaseType: 'identities',
dbName: 'chronos_identities',
collection: 'users'
});
// Create user
const user = await ops.create({
tenantId: 'acme-corp',
email: 'john@acme.com',
passwordHash: '...',
name: 'John Doe',
roles: ['admin'],
status: 'active'
}, 'system', 'user-registration');
// Query users
const users = await ops.query({
tenantId: 'acme-corp',
status: 'active'
}, {});
// Update user
await ops.update(user.id, {
name: 'John Smith'
}, user.ov, 'admin', 'name-change');
The identities database is authentication-agnostic — use it with any auth strategy:
JWT Authentication:
// Verify login (query returns an array; take the first match)
const [user] = await ops.query({
tenantId: 'acme-corp',
email: 'john@acme.com'
}, {});
if (user && await bcrypt.compare(password, user.passwordHash)) {
const token = jwt.sign({ userId: user._id, tenantId: user.tenantId }, secret);
// Store session
await sessionOps.create({
tenantId: user.tenantId,
userId: user._id,
token: hashToken(token),
expiresAt: new Date(Date.now() + 86400000)
});
}
OAuth Integration:
// Store OAuth connection
await oauthOps.create({
tenantId: 'acme-corp',
userId: user._id,
provider: 'google',
providerId: googleProfile.id,
accessToken: encryptToken(tokens.access_token),
refreshToken: encryptToken(tokens.refresh_token),
expiresAt: new Date(Date.now() + tokens.expires_in * 1000)
});
- Recommended indexes: { tenantId, email }, { tenantId, status }, etc.
- Consider sharding on tenantId for very large deployments (millions of users)

The identities database is separate from metadata/knowledge because:
A: Xronox v2.4.0 is a rebranding of chronos-db with the same great features:
- Package renamed: xronox (formerly chronos-db)
- New messaging database type for Chronow integration (pub/sub audit, DLQ, shared memory)
- New identities database type for users, accounts, auth, permissions, roles

A: v2.2.0 introduced major new features:
- insertWithEntities and getWithEntities for automatic entity management
- getKnowledge and getMetadata with automatic fallback/merge across tiers

A: Xronox provides multiple levels of tenant isolation:
A:
A: Xronox is optimized for big data:
A: For v2.4.0, just update your package.json and imports:
npm uninstall chronos-db
npm install xronox@^2.4.0
Then update your imports from 'chronos-db' to 'xronox'. The API is 100% compatible.
A: Yes! Use localStorage for development/testing:
const xronox = initXronox({
dbConnections: { 'local': { mongoUri: 'mongodb://localhost:27017' } },
spacesConnections: {},
databases: { /* your config */ },
localStorage: { enabled: true, basePath: './data' }
});
Made with ❤️ for enterprise-grade data management