xronox (npm), latest published version 3.1.0, 1 maintainer

Xronox v3.0 🚀

The Essential Persistence Layer for Big Data & SaaS Applications
Enterprise-Grade MongoDB + S3/Azure with Embedded Multi-Tenancy (formerly Chronos-DB)

Build Status TypeScript Node License Version

🆕 What's New in v3.0

🆔 Standard Identity System (CR-007)

Track WHO created or modified data with a consistent identity interface across your entire ecosystem:

  • Universal Identity Interface: Standard Identity type for users, agents, systems, APIs, services, jobs
  • Attribution & Audit Trails: GDPR, SOX, HIPAA compliant tracking
  • Helper Functions: createUserIdentity, createAgentIdentity, createSystemIdentity, etc.
  • Serialization: Convert identities to/from strings for logging and storage
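As a rough sketch of these semantics: the `Identity` shape and serialization format below are illustrative assumptions, not the library's actual types; only the helper names (such as `createUserIdentity`) come from the list above.

```typescript
// Illustrative sketch of a standard Identity shape and helpers.
// Field names and the "kind:id" wire format are assumptions.
type IdentityKind = "user" | "agent" | "system" | "api" | "service" | "job";

interface Identity {
  kind: IdentityKind;
  id: string;           // stable identifier, e.g. a user id or service name
  displayName?: string;
}

function createUserIdentity(id: string, displayName?: string): Identity {
  return { kind: "user", id, displayName };
}

// Serialization for logging and storage: identity <-> string
function serializeIdentity(identity: Identity): string {
  return `${identity.kind}:${identity.id}`;
}

function parseIdentity(serialized: string): Identity {
  const [kind, ...rest] = serialized.split(":");
  return { kind: kind as IdentityKind, id: rest.join(":") };
}
```

An audit trail can then store the compact serialized form (e.g. `user:alice`) alongside each write.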

🎯 Field Projection & Hidden Fields (CR-009)

Control what data is returned in queries, just like hidden files in your OS:

  • Hidden Fields: Mark fields as hidden by default (e.g., _metadata, _internal)
  • Named Projections: Define reusable projections (minimal, default, withMetadata)
  • Query Options: includeHidden and projectionSpec for dynamic control
  • Security & Performance: Reduce payload size, prevent accidental data exposure
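A minimal sketch of how these pieces could fit together; the option names `includeHidden` and `projectionSpec` come from the bullets above, while everything else (the config shape, the `project` helper) is an assumption for illustration:

```typescript
// Sketch of hidden fields + named projections (assumed shapes, not the real API).
interface ProjectionConfig {
  hiddenFields: string[];                 // excluded unless includeHidden is set
  projections: Record<string, string[]>;  // named, reusable field lists
}

function project(
  doc: Record<string, unknown>,
  config: ProjectionConfig,
  opts: { includeHidden?: boolean; projectionSpec?: string } = {}
): Record<string, unknown> {
  // Start from a named projection if requested, otherwise from all fields
  let fields = opts.projectionSpec
    ? config.projections[opts.projectionSpec] ?? Object.keys(doc)
    : Object.keys(doc);
  // Drop hidden fields unless the caller explicitly asks for them
  if (!opts.includeHidden) {
    fields = fields.filter((f) => !config.hiddenFields.includes(f));
  }
  return Object.fromEntries(fields.map((f) => [f, doc[f]]));
}
```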

🌐 Dynamic Tenant Resolution (v2.8)

Automatically discover tenant databases at runtime without configuration restarts:

  • Zero-Config Tenants: Add new tenants without redeployment
  • Database Discovery: Auto-detect tenant databases based on naming patterns
  • Caching: Intelligent caching with TTL for performance
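The discovery-plus-cache idea can be sketched as follows; the naming pattern, cache shape, and `resolveTenantDb` helper are assumptions for illustration, not the library's internals:

```typescript
// Sketch: resolve a tenant database by naming convention, with a TTL cache.
const cache = new Map<string, { dbName: string; expiresAt: number }>();
const TTL_MS = 60_000;

function resolveTenantDb(
  tenantId: string,
  listDatabases: () => string[],   // e.g. backed by MongoDB's listDatabases
  now: () => number = Date.now
): string | undefined {
  const hit = cache.get(tenantId);
  if (hit && hit.expiresAt > now()) return hit.dbName;   // cache hit
  // Discover by a naming pattern such as "<app>_runtime_<tenantId>"
  const dbName = listDatabases().find((n) => n.endsWith(`_${tenantId}`));
  if (dbName) cache.set(tenantId, { dbName, expiresAt: now() + TTL_MS });
  return dbName;
}
```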

📦 Multi-Bucket Storage (v2.5)

Optimize costs and organization with separate buckets for different data types:

  • Separate Buckets: records, versions, content, backups in different S3 buckets
  • Cost Optimization: Use different storage classes per bucket type
  • Access Control: Fine-grained permissions per bucket

⚙️ Config File Auto-Loading (v2.7)

Zero-code configuration with automatic discovery and ENV variable resolution:

  • Auto-Discovery: Loads xronox.config.json or .xronox.json from project root
  • ENV Tokens: ENV.VAR_NAME placeholders resolved at runtime
  • Strict Validation: Hard errors if ENV vars are missing
  • Secret Masking: Automatic masking of sensitive values in logs
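The `ENV.VAR_NAME` token rule can be sketched like this (behavior inferred from the bullets above; the real loader may differ in details such as secret masking):

```typescript
// Sketch of ENV.VAR_NAME token resolution with strict validation.
function resolveEnvToken(
  value: string,
  env: Record<string, string | undefined>
): string {
  const match = /^ENV\.([A-Z0-9_]+)$/.exec(value);
  if (!match) return value;                        // plain value, pass through
  const resolved = env[match[1]];
  if (resolved === undefined) {
    // Strict validation: a missing variable is a hard error
    throw new Error(`Missing required environment variable: ${match[1]}`);
  }
  return resolved;
}
```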

🎯 Why Xronox is Essential for Big Data & SaaS

Whether you're building Big Data platforms OR SaaS applications — Xronox is a must-have that dramatically simplifies development and slashes costs:

💰 Cost Savings

  • R&D Costs: Eliminate months of building multi-tenant data infrastructure from scratch
  • Server Costs: Intelligent tiering (hot MongoDB + warm S3/Azure) reduces compute by 60-80%
  • Database Costs: S3/Azure storage is 95% cheaper than MongoDB for historical data
  • Storage Costs: Automatic compression, deduplication, and tiered retention policies

⚡ Development Speed

  • Pre-built Multi-Tenancy: Tenant isolation, routing, and security out-of-the-box
  • Zero Infrastructure Code: Focus on business logic, not data plumbing
  • Production-Ready: Battle-tested patterns for versioning, audit trails, analytics
  • Time-to-Market: Launch 3-6 months faster with proven architecture

📊 Big Data Ready

Xronox is designed for large-scale applications handling millions of records: embedded multi-tenancy by design and a tiered architecture keep big data workloads efficient while maintaining enterprise-grade security and compliance.

🏢 Enterprise Features

  • 🏢 Embedded Multi-Tenancy: Built-in tenant isolation with configurable database tiers
  • 📊 Tiered Architecture: Separate metadata, knowledge, runtime, and logs databases
  • ⚡ Big Data Ready: Optimized for high-volume operations with S3 integration
  • 🔄 Time-Travel Queries: Full historical data access and point-in-time recovery
  • 🔒 Enterprise Security: Row-level security with tenant-based data isolation
  • 📈 Scalable Design: Horizontal scaling across multiple MongoDB clusters
  • 🛡️ Compliance Ready: Built-in audit trails, data lineage, and regulatory compliance features

📖 Architecture Overview

Xronox provides a production-ready persistence layer for enterprise applications and big data projects. It combines:

  • 🏢 Multi-Tenant Architecture: Built-in tenant isolation with configurable database tiers (metadata, knowledge, runtime, logs, messaging, identities)
  • 📊 MongoDB for indexed metadata, head pointers, and bounded recent version index
  • ☁️ S3-compatible storage for authoritative payloads, full JSON per version
  • 🔄 Automatic versioning with explicit restore capabilities and time-travel queries
  • ⚡ Multi-backend routing with connection pooling for horizontal scaling
  • 🔒 Transaction locking for concurrent write prevention across multiple servers
  • 📈 Big Data Optimization: Efficient handling of large datasets with S3/Azure integration
  • 🎯 Tenant Isolation: Row-level security with configurable tenant boundaries
  • 📊 Integrated Analytics: Built-in counters and metrics for business intelligence
  • 🔗 Entity Relationships: Automatic entity management with insertWithEntities and getWithEntities
  • 🎯 Tiered Fetching: getKnowledge and getMetadata with automatic fallback/merge across tiers
  • 🔧 Enrichment API for incremental updates with deep merge
  • 🔄 Fallback queues for guaranteed durability
  • ⚡ Write optimization for high-throughput scenarios
  • 💬 Messaging Integration: First-class messaging database for Chronow (Redis hot + MongoDB warm/audit)
  • 👤 Identities Database: Dedicated database for users, accounts, authentication, permissions, roles

Key Principles

No Environment Variables - All configuration via JSON
Cost-First - Minimize storage and compute costs
Stability-First - Immutable versioning, transactions, optimistic locking
Concurrent-Safe - Transaction locking prevents multi-server write conflicts
Portable - Works with any S3-compatible provider
Type-Safe - Full TypeScript support with Zod validation
Security-First - Built-in tenant isolation and data protection
Compliance-Ready - Audit trails, data lineage, and regulatory features

🏢 Multi-Tenant Architecture & Security

🔒 Security-First Design

Xronox implements enterprise-grade security with multiple layers of protection:

1. Tenant Isolation

  • Complete Data Separation: Each tenant's data is stored in separate databases
  • Network-Level Isolation: Different MongoDB clusters per tenant (optional)
  • Storage Isolation: Separate S3 buckets or prefixes per tenant
  • Access Control: Tenant-based routing prevents cross-tenant data access

2. Row-Level Security

  • Tenant Context: Every operation requires explicit tenant context
  • Automatic Filtering: Queries automatically filter by tenant
  • Audit Trails: All operations logged with tenant information
  • Data Lineage: Complete tracking of data flow and transformations
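Conceptually, automatic tenant filtering amounts to injecting the tenant id into every query filter so cross-tenant reads are impossible. A minimal sketch (assumed shapes, not the real internals):

```typescript
// Sketch: inject the tenant context into every query filter.
interface TenantContext {
  tenantId: string;
}

function withTenantFilter(
  ctx: TenantContext,
  filter: Record<string, unknown>
): Record<string, unknown> {
  // The context's tenantId always wins; a caller-supplied tenantId
  // cannot widen the query to another tenant's data.
  return { ...filter, tenantId: ctx.tenantId };
}
```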

3. Compliance Features

  • GDPR Compliance: Built-in data deletion and anonymization
  • SOX Compliance: Immutable audit trails and data integrity
  • HIPAA Compliance: Encrypted data storage and access controls
  • SOC 2: Comprehensive logging and monitoring capabilities

🏗️ Tiered Database Architecture

Xronox uses a tiered approach to optimize for different data types and security requirements:

// Multi-tier database configuration with security considerations
databases: {
  metadata: {
    genericDatabase: {     // System-wide metadata (no tenant isolation needed)
      dbConnRef: 'mongo-primary',
      spaceConnRef: 's3-primary',
      bucket: 'chronos-metadata',
      dbName: 'chronos_metadata_generic'
    },
    domainsDatabases: [    // Domain-level metadata (shared within domain)
      {
        domain: 'healthcare',
        dbConnRef: 'mongo-healthcare',
        spaceConnRef: 's3-healthcare',
        bucket: 'chronos-metadata-healthcare',
        dbName: 'chronos_metadata_healthcare'
      }
    ],
    tenantDatabases: [    // Tenant-specific metadata (isolated per tenant)
      {
        tenantId: 'tenant-a',
        dbConnRef: 'mongo-tenant-a',
        spaceConnRef: 's3-tenant-a',
        bucket: 'chronos-metadata-tenant-a',
        dbName: 'chronos_metadata_tenant_a'
      }
    ]
  },
  knowledge: {
    genericDatabase: {     // Shared knowledge base
      dbConnRef: 'mongo-primary',
      spaceConnRef: 's3-primary',
      bucket: 'chronos-knowledge',
      dbName: 'chronos_knowledge_generic'
    },
    domainsDatabases: [    // Domain-specific knowledge
      {
        domain: 'finance',
        dbConnRef: 'mongo-finance',
        spaceConnRef: 's3-finance',
        bucket: 'chronos-knowledge-finance',
        dbName: 'chronos_knowledge_finance'
      }
    ],
    tenantDatabases: [    // Tenant-specific knowledge
      {
        tenantId: 'tenant-b',
        dbConnRef: 'mongo-tenant-b',
        spaceConnRef: 's3-tenant-b',
        bucket: 'chronos-knowledge-tenant-b',
        dbName: 'chronos_knowledge_tenant_b'
      }
    ]
  },
  runtime: {
    tenantDatabases: [    // Runtime data (always tenant-isolated)
      {
        tenantId: 'tenant-a',
        dbConnRef: 'mongo-tenant-a',
        spaceConnRef: 's3-tenant-a',
        bucket: 'chronos-runtime-tenant-a',
        dbName: 'chronos_runtime_tenant_a',
        analyticsDbName: 'chronos_analytics_tenant_a'  // Integrated analytics
      }
    ]
  },
  logs: {                 // System logs (centralized)
    dbConnRef: 'mongo-logs',
    spaceConnRef: 's3-logs',
    bucket: 'chronos-logs',
    dbName: 'chronos_logs'
  }
}

🔐 Security Best Practices

1. Connection Reuse & Security

// Define connections once, reference everywhere
dbConnections: {
  'mongo-primary': {
    mongoUri: 'mongodb+srv://user:pass@primary-cluster.mongodb.net/?retryWrites=true&w=majority'
  },
  'mongo-tenant-a': {
    mongoUri: 'mongodb+srv://user:pass@tenant-a-cluster.mongodb.net/?retryWrites=true&w=majority'
  },
  'mongo-analytics': {
    mongoUri: 'mongodb+srv://user:pass@analytics-cluster.mongodb.net/?retryWrites=true&w=majority'
  }
},
spacesConnections: {
  's3-primary': {
    endpoint: 'https://s3.amazonaws.com',
    region: 'us-east-1',
    accessKey: process.env.AWS_ACCESS_KEY_ID,
    secretKey: process.env.AWS_SECRET_ACCESS_KEY
  },
  's3-tenant-a': {
    endpoint: 'https://tenant-a-bucket.s3.amazonaws.com',
    region: 'us-east-1',
    accessKey: process.env.TENANT_A_ACCESS_KEY,
    secretKey: process.env.TENANT_A_SECRET_KEY
  },
  // Azure Blob Storage (NEW in v2.0.1)
  'azure-primary': {
    endpoint: 'https://myaccount.blob.core.windows.net',
    region: 'us-east-1',  // Not used for Azure but required
    accessKey: process.env.AZURE_ACCOUNT_NAME,
    secretKey: process.env.AZURE_ACCOUNT_KEY
  }
}

2. Tenant Isolation Strategies

Option A: Complete Isolation (Highest Security)

// Each tenant gets separate MongoDB cluster and S3 bucket
tenantDatabases: [
  {
    tenantId: 'tenant-a',
    dbConnRef: 'mongo-tenant-a',      // Separate MongoDB cluster
    spaceConnRef: 's3-tenant-a',      // Separate S3 bucket
    bucket: 'chronos-tenant-a',
    dbName: 'chronos_tenant_a',
    analyticsDbName: 'chronos_analytics_tenant_a'
  }
]

Option B: Shared Infrastructure (Cost-Effective)

// Multiple tenants share infrastructure but with strict isolation
tenantDatabases: [
  {
    tenantId: 'tenant-a',
    dbConnRef: 'mongo-shared',        // Shared MongoDB cluster
    spaceConnRef: 's3-shared',       // Shared S3 bucket
    bucket: 'chronos-shared',
    dbName: 'chronos_tenant_a',       // Separate database per tenant
    analyticsDbName: 'chronos_analytics_tenant_a'
  }
]

3. Compliance Configuration

GDPR Compliance

// Enable logical delete for GDPR compliance
logicalDelete: {
  enabled: true  // Default - enables data recovery and audit trails
},

// Enable versioning for data lineage
versioning: {
  enabled: true  // Default - enables time-travel queries and audit trails
}

SOX Compliance

// Enable comprehensive audit trails
collectionMaps: {
  financial_records: {
    indexedProps: ['accountId', 'transactionId', 'amount', 'date'],
    validation: {
      requiredIndexed: ['accountId', 'transactionId', 'amount']
    }
  }
},

// Enable transaction logging
transactions: {
  enabled: true,
  autoDetect: true
}

📊 Big Data Architecture & Performance

🚀 Big Data Optimization

Xronox is designed for big data scenarios with enterprise-grade performance:

1. Horizontal Scaling

// Distribute load across multiple MongoDB clusters
dbConnections: {
  'mongo-cluster-1': { mongoUri: 'mongodb://cluster-1:27017' },
  'mongo-cluster-2': { mongoUri: 'mongodb://cluster-2:27017' },
  'mongo-cluster-3': { mongoUri: 'mongodb://cluster-3:27017' }
},

// S3 storage across multiple regions
spacesConnections: {
  's3-us-east': {
    endpoint: 'https://s3.us-east-1.amazonaws.com',
    region: 'us-east-1'
  },
  's3-eu-west': {
    endpoint: 'https://s3.eu-west-1.amazonaws.com',
    region: 'eu-west-1'
  }
}

2. High-Throughput Operations

// Optimized for millions of operations per day
const xronox = initXronox({
  // ... configuration
  writeOptimization: {
    batchSize: 1000,           // Batch S3 operations
    debounceMs: 100,          // Debounce counter updates
    compressionEnabled: true   // Compress large payloads
  },
  
  // Fallback queues for guaranteed durability
  fallback: {
    enabled: true,
    maxRetries: 3,
    retryDelayMs: 1000,
    maxDelayMs: 60000
  }
});

3. Analytics Integration

// Built-in analytics for each tenant
runtime: {
  tenantDatabases: [
    {
      tenantId: 'tenant-a',
      dbConnRef: 'mongo-tenant-a',
      spaceConnRef: 's3-tenant-a',
      bucket: 'chronos-runtime-tenant-a',
      dbName: 'chronos_runtime_tenant_a',
      analyticsDbName: 'chronos_analytics_tenant_a'  // Integrated analytics
    }
  ]
}

📈 Big Data Use Cases

1. IoT Data Ingestion

// High-volume IoT data processing
const iotOps = xronox.with({
  databaseType: 'runtime',
  tier: 'tenant',
  tenantId: 'iot-platform',
  collection: 'sensor-data'
});

// Batch processing for millions of sensor readings
const batchSize = 10000;
const sensorData = Array.from({ length: batchSize }, (_, i) => ({
  deviceId: `sensor-${i % 1000}`,
  timestamp: new Date(),
  temperature: Math.random() * 100,
  humidity: Math.random() * 100,
  location: { lat: Math.random() * 90, lng: Math.random() * 180 }
}));

// Efficient batch insertion
for (let i = 0; i < sensorData.length; i += 100) {
  const batch = sensorData.slice(i, i + 100);
  await Promise.all(batch.map(data => 
    iotOps.create(data, 'iot-ingestion', 'sensor-data')
  ));
}

2. Financial Transaction Processing

// High-frequency trading data
const tradingOps = xronox.with({
  databaseType: 'runtime',
  tier: 'tenant',
  tenantId: 'trading-firm',
  collection: 'transactions'
});

// Process thousands of transactions per second
const processTransaction = async (transaction) => {
  const result = await tradingOps.create({
    symbol: transaction.symbol,
    price: transaction.price,
    quantity: transaction.quantity,
    timestamp: transaction.timestamp,
    traderId: transaction.traderId
  }, 'trading-system', 'market-transaction');
  
  // Analytics automatically tracked in analyticsDbName
  return result;
};

3. E-commerce Analytics

// Multi-tenant e-commerce platform
const ecommerceOps = xronox.with({
  databaseType: 'runtime',
  tier: 'tenant',
  tenantId: 'ecommerce-store',
  collection: 'orders'
});

// Process orders with automatic analytics
const createOrder = async (orderData) => {
  const result = await ecommerceOps.create({
    customerId: orderData.customerId,
    items: orderData.items,
    total: orderData.total,
    status: 'pending'
  }, 'ecommerce-system', 'order-creation');
  
  // Analytics automatically tracked:
  // - Order count per customer
  // - Revenue per day/month
  // - Product popularity
  // - Customer behavior patterns
  
  return result;
};

⚡ Performance Optimizations

1. Automatic Connection Management

Why Connection Management Matters:

  • Opening/closing database connections is expensive (100-500ms per connection)
  • Each MongoDB cluster should have ONE connection pool shared across all operations
  • S3/Azure clients should be reused to avoid SDK initialization overhead
  • Poor connection management = slow performance + resource exhaustion

How Xronox Manages Connections:

// You define connections ONCE by key
dbConnections: {
  'mongo-primary': { mongoUri: 'mongodb://primary:27017' },
  'mongo-tenant-a': { mongoUri: 'mongodb://tenant-a:27017' }
},
spacesConnections: {
  's3-primary': { endpoint: 'https://s3.amazonaws.com', ... }
}

// Then reference them everywhere
databases: {
  runtime: {
    tenantDatabases: [
      { dbConnRef: 'mongo-primary', spaceConnRef: 's3-primary', ... },  // ← References
      { dbConnRef: 'mongo-tenant-a', spaceConnRef: 's3-primary', ... }  // ← References
    ]
  }
}

What Happens Internally:

  • First Request to mongo-primary:

    // Router creates ONE MongoClient connection
    const client = new MongoClient('mongodb://primary:27017');
    await client.connect();  // Only happens ONCE
    
    // Stores in connection pool: { 'mongodb://primary:27017' => client }
    
  • Subsequent Requests to mongo-primary:

    // Router REUSES the existing connection
    const client = connectionPool.get('mongodb://primary:27017');
    // No new connection created! ✅
    
  • Connection Lifecycle:

    // During operation
    - getMongoClient(mongoUri) → Returns existing client or creates new one
    - Connection stays open for the lifetime of the application
    - MongoDB driver's built-in connection pooling handles concurrency
    
    // On shutdown
    - xronox.admin.shutdown() → Closes all connections gracefully
    - Ensures no connections are leaked
    

Benefits:

Performance:

  • First request: ~200ms (connection + operation)
  • Subsequent requests: ~5ms (operation only)
  • 40x faster for repeated operations!

Resource Efficiency:

  • 10 tenants referencing same mongo-primary = 1 connection pool
  • Without this: 10 separate connections = 10x resource usage ❌

Scalability:

  • Handle 1000s of requests/sec with minimal connections
  • MongoDB connection pool handles concurrency automatically
  • S3/Azure client reuse reduces API initialization overhead

Reliability:

  • Automatic reconnection on connection failures
  • Connection health monitoring
  • Graceful degradation

Example Scenario:

// 3 tenants, 2 MongoDB clusters, 1 S3 bucket
dbConnections: {
  'mongo-shared': { mongoUri: 'mongodb://shared:27017' },
  'mongo-premium': { mongoUri: 'mongodb://premium:27017' }
},
spacesConnections: {
  's3-main': { endpoint: 'https://s3.amazonaws.com', ... }
},
databases: {
  runtime: {
    tenantDatabases: [
      { tenantId: 'tenant-a', dbConnRef: 'mongo-shared', spaceConnRef: 's3-main', ... },
      { tenantId: 'tenant-b', dbConnRef: 'mongo-shared', spaceConnRef: 's3-main', ... },
      { tenantId: 'tenant-c', dbConnRef: 'mongo-premium', spaceConnRef: 's3-main', ... }
    ]
  }
}

// Result:
// - 2 MongoDB connections (not 3!) ✅
// - 1 S3 client (not 3!) ✅
// - 95% configuration reuse ✅
// - Tenant-a and tenant-b share mongo-shared connection pool
// - Tenant-c uses dedicated mongo-premium connection pool

Connection Management API:

// Internal router methods (you don't call these directly)
router.getMongoClient(mongoUri)  // Returns cached or creates new MongoClient
router.getSpaces(ctx)            // Returns cached or creates new S3/Azure client
router.getAllMongoUris()         // Lists all unique MongoDB URIs
router.shutdown()                // Closes all connections gracefully

Best Practices:

  • ✅ DO: Reference the same connection key for tenants sharing infrastructure
  • ✅ DO: Use separate connection keys for isolated/premium tenants
  • ✅ DO: Call xronox.admin.shutdown() on application shutdown
  • ❌ DON'T: Define duplicate connections with different keys but the same URI
  • ❌ DON'T: Create connections in loops or per-request

2. S3/Azure Client Reuse

// S3 clients are cached by spaceConnRef
spacesConnections: {
  's3-primary': { ... }  // Created once, reused for all operations
}

// Azure clients auto-detected and cached
spacesConnections: {
  'azure-primary': {
    endpoint: 'https://account.blob.core.windows.net',  // ← Azure detected!
    accessKey: 'account-name',
    secretKey: 'account-key'
  }
}
// Router automatically creates AzureBlobStorageAdapter and caches it

3. S3 Optimization

// Optimized S3 operations
spacesConnections: {
  's3-optimized': {
    endpoint: 'https://s3.amazonaws.com',
    region: 'us-east-1',
    // Automatic retry and exponential backoff
    // Connection pooling for S3 operations
    // Batch operations for multiple files
  }
}

4. Caching Strategy

// Dev shadow for frequently accessed data
devShadow: {
  enabled: true,
  ttlHours: 24,              // Cache for 24 hours
  maxBytesPerDoc: 1024 * 1024 // 1MB max per document
}

🚀 Quick Start

Installation

npm install xronox@^3.0.0

Note: The package was previously named chronos-db. Starting from v2.4.0, it's published as xronox.

The simplest way to get started is zero-code configuration:

1. Create xronox.config.json in your project root:

{
  "xronox": {
    "dbConnections": {
      "mongo-primary": {
        "mongoUri": "ENV.MONGO_URI"
      }
    },
    "spacesConnections": {
      "s3-primary": {
        "endpoint": "ENV.S3_ENDPOINT",
        "region": "ENV.S3_REGION",
        "accessKey": "ENV.S3_ACCESS_KEY",
        "secretKey": "ENV.S3_SECRET_KEY"
      }
    },
    "databases": {
      "runtime": {
        "tenantDatabases": [
          {
            "tenantId": "default",
            "dbConnRef": "mongo-primary",
            "spaceConnRef": "s3-primary",
            "recordsBucket": "ENV.S3_BUCKET",
            "dbName": "my_app_runtime"
          }
        ]
      }
    }
  }
}

2. Set environment variables:

export MONGO_URI="mongodb://localhost:27017"
export S3_ENDPOINT="https://s3.amazonaws.com"
export S3_REGION="us-east-1"
export S3_ACCESS_KEY="your-access-key"
export S3_SECRET_KEY="your-secret-key"
export S3_BUCKET="my-app-bucket"

3. Initialize (auto-loads config):

import { initXronox } from 'xronox';

// Auto-discovers and loads xronox.config.json
const xronox = await initXronox();

const ops = xronox.with({
  databaseType: 'runtime',
  tier: 'tenant',
  tenantId: 'default',
  collection: 'users'
});

await ops.create({ name: 'Alice', email: 'alice@example.com' });

🎯 Quick Start with Config Builder

Alternatively, build a configuration programmatically:

import { createProductionConfig, initXronox } from 'xronox';

const config = createProductionConfig({
  mongoUri: process.env.MONGO_URI!,
  s3Endpoint: 'https://s3.amazonaws.com',
  s3Region: 'us-east-1',
  s3AccessKey: process.env.AWS_ACCESS_KEY_ID!,
  s3SecretKey: process.env.AWS_SECRET_ACCESS_KEY!,
  tenants: ['acme-corp', 'globex-inc'],
  bucketPrefix: 'myapp'
});

const xronox = initXronox(config);

For Development:

import { createDevConfig, initXronox } from 'xronox';

const config = createDevConfig({
  mongoUri: 'mongodb://localhost:27017',
  basePath: './xronox-data'
});

const xronox = initXronox(config);

Enterprise Multi-Tenant Setup

import { initXronox } from 'xronox';

const xronox = initXronox({
  // Connection definitions (defined once, referenced everywhere)
  dbConnections: {
    'mongo-primary': {
      mongoUri: 'mongodb+srv://user:pass@primary-cluster.mongodb.net'
    },
    'mongo-analytics': {
      mongoUri: 'mongodb+srv://user:pass@analytics-cluster.mongodb.net'
    }
  },
  
  spacesConnections: {
    's3-primary': {
      endpoint: 'https://s3.amazonaws.com',
      region: 'us-east-1',
      accessKey: process.env.AWS_ACCESS_KEY_ID,
      secretKey: process.env.AWS_SECRET_ACCESS_KEY
    }
  },
  
  // Tiered database architecture
  databases: {
    metadata: {
      genericDatabase: {
        dbConnRef: 'mongo-primary',
        spaceConnRef: 's3-primary',
        bucket: 'chronos-metadata',
        dbName: 'chronos_metadata_generic'
      },
      domainsDatabases: [
        {
          domain: 'healthcare',
          dbConnRef: 'mongo-primary',
          spaceConnRef: 's3-primary',
          bucket: 'chronos-metadata-healthcare',
          dbName: 'chronos_metadata_healthcare'
        }
      ],
      tenantDatabases: [
        {
          tenantId: 'tenant-a',
          dbConnRef: 'mongo-primary',
          spaceConnRef: 's3-primary',
          bucket: 'chronos-metadata-tenant-a',
          dbName: 'chronos_metadata_tenant_a'
        }
      ]
    },
    knowledge: {
      genericDatabase: {
        dbConnRef: 'mongo-primary',
        spaceConnRef: 's3-primary',
        bucket: 'chronos-knowledge',
        dbName: 'chronos_knowledge_generic'
      },
      domainsDatabases: [],
      tenantDatabases: []
    },
    runtime: {
      tenantDatabases: [
        {
          tenantId: 'tenant-a',
          dbConnRef: 'mongo-primary',
          spaceConnRef: 's3-primary',
          bucket: 'chronos-runtime-tenant-a',
          dbName: 'chronos_runtime_tenant_a',
          analyticsDbName: 'chronos_analytics_tenant_a'
        }
      ]
    },
    logs: {
      dbConnRef: 'mongo-primary',
      spaceConnRef: 's3-primary',
      bucket: 'chronos-logs',
      dbName: 'chronos_logs'
    }
  },
  
  // Enterprise configuration
  routing: { hashAlgo: 'rendezvous' },
  retention: { ver: { days: 90 }, counters: { days: 30 } },
  collectionMaps: {
    users: { indexedProps: ['email', 'tenantId'] },
    orders: { indexedProps: ['orderId', 'customerId', 'tenantId'] }
  },
  
  // Security and compliance
  logicalDelete: { enabled: true },    // GDPR compliance
  versioning: { enabled: true },       // Audit trails
  transactions: { enabled: true },     // Data integrity
  
  // Performance optimization
  writeOptimization: {
    batchSize: 1000,
    debounceMs: 100
  },
  fallback: {
    enabled: true,
    maxRetries: 3
  }
});

// Multi-tenant operations
const tenantAOps = xronox.with({ 
  databaseType: 'runtime',
  tier: 'tenant',
  tenantId: 'tenant-a',
  collection: 'users' 
});

const tenantBOps = xronox.with({ 
  databaseType: 'runtime',
  tier: 'tenant',
  tenantId: 'tenant-b',
  collection: 'users' 
});

// Create users in different tenants (completely isolated)
await tenantAOps.create({ email: 'user@tenant-a.com', name: 'User A' });
await tenantBOps.create({ email: 'user@tenant-b.com', name: 'User B' });

⚙️ Configuration Reference

Core Configuration

interface ChronosConfig {
  // Required: Connection definitions (95% reuse)
  dbConnections: Record<string, DbConnection>;
  spacesConnections: Record<string, SpacesConnection>;
  
  // Required: Tiered database configuration
  databases: {
    metadata?: {
      genericDatabase: GenericDatabase;
      domainsDatabases: DomainDatabase[];
      tenantDatabases: TenantDatabase[];
    };
    knowledge?: {
      genericDatabase: GenericDatabase;
      domainsDatabases: DomainDatabase[];
      tenantDatabases: TenantDatabase[];
    };
    runtime?: {
      tenantDatabases: RuntimeTenantDatabase[];
    };
    logs?: LogsDatabase;
  };
  
  // Optional: Local filesystem storage (for development/testing)
  localStorage?: {
    basePath: string;
    enabled: boolean;
  };
  
  // Optional: Routing configuration
  routing?: {
    hashAlgo?: 'rendezvous' | 'jump';
    chooseKey?: string | ((ctx: RouteContext) => string);
  };
  
  // Optional: Data retention policies
  retention?: {
    ver?: {
      days?: number;
      maxPerItem?: number;
    };
    counters?: {
      days?: number;
      weeks?: number;
      months?: number;
    };
  };
  
  // Optional: Collection mapping and validation
  collectionMaps?: Record<string, {
    indexedProps: string[]; // Empty array = auto-index all properties
    base64Props?: Record<string, {
      contentType: string;
      preferredText?: boolean;
      textCharset?: string;
    }>;
    validation?: {
      requiredIndexed?: string[];
    };
  }>;
  
  // Optional: Counter rules for analytics
  counterRules?: {
    rules?: Array<{
      name: string;
      on?: ('CREATE' | 'UPDATE' | 'DELETE')[];
      scope?: 'meta' | 'payload';
      when: Record<string, any>;
    }>;
  };
  
  // Optional: Development shadow storage
  devShadow?: {
    enabled: boolean;
    ttlHours: number;
    maxBytesPerDoc?: number;
  };
  
  // Optional: Security and compliance
  logicalDelete?: {
    enabled: boolean;  // Default: true (GDPR compliance)
  };
  versioning?: {
    enabled: boolean;  // Default: true (audit trails)
  };
  
  // Optional: Performance optimization
  writeOptimization?: {
    batchSize?: number;
    debounceMs?: number;
    compressionEnabled?: boolean;
  };
  
  // Optional: Fallback queue configuration
  fallback?: {
    enabled: boolean;
    maxRetries?: number;
    retryDelayMs?: number;
    maxDelayMs?: number;
    deadLetterCollection?: string;
  };
  
  // Optional: Transaction configuration
  transactions?: {
    enabled?: boolean;
    autoDetect?: boolean;
  };
}

// Connection interfaces
interface DbConnection {
  mongoUri: string;
}

interface SpacesConnection {
  endpoint: string;
  region: string;
  accessKey: string;
  secretKey: string;
  forcePathStyle?: boolean;
}

// Database interfaces
interface GenericDatabase {
  dbConnRef: string;
  spaceConnRef: string;
  bucket: string;
  dbName: string;
}

interface DomainDatabase {
  domain: string;
  dbConnRef: string;
  spaceConnRef: string;
  bucket: string;
  dbName: string;
}

interface TenantDatabase {
  tenantId: string;
  dbConnRef: string;
  spaceConnRef: string;
  bucket: string;
  dbName: string;
}

interface RuntimeTenantDatabase {
  tenantId: string;
  dbConnRef: string;
  spaceConnRef: string;
  bucket: string;
  dbName: string;
  analyticsDbName: string;  // Integrated analytics
}

interface LogsDatabase {
  dbConnRef: string;
  spaceConnRef: string;
  bucket: string;
  dbName: string;
}

Multi-Tenant Architecture Explained

Database Types

  • metadata - System configuration, user settings, application metadata
  • knowledge - Content, documents, knowledge base, static data
  • runtime - User data, transactions, dynamic application data
  • logs - System logs, audit trails, monitoring

Tiers

  • generic - Shared across all tenants (system-wide data)
  • domain - Shared within a domain (multi-tenant within domain)
  • tenant - Isolated per tenant (single-tenant data)

Usage Patterns

Option A: Direct Tier + Tenant ID Usage (Recommended)

const ops = xronox.with({
  databaseType: 'runtime',     // metadata | knowledge | runtime | logs
  tier: 'tenant',              // generic | domain | tenant
  tenantId: 'tenant-a',        // Maps to tenant-specific database
  collection: 'users'
});

Option B: Generic Tier (No Tenant ID)

const ops = xronox.with({
  databaseType: 'metadata',
  tier: 'generic',              // No tenantId needed
  collection: 'config'
});

Option C: Domain Tier

const ops = xronox.with({
  databaseType: 'knowledge',
  tier: 'domain',
  domain: 'healthcare',         // Maps to domain-specific database
  collection: 'articles'
});

🎯 Core Features

1. CRUD Operations with Security

Full transaction support with optimistic locking and tenant isolation:

// Create with automatic versioning and tenant isolation
const created = await ops.create(data, 'actor', 'reason');
// Returns: { id, ov: 0, cv: 0, createdAt }

// Update with optimistic lock and tenant context
const updated = await ops.update(id, newData, expectedOv, 'actor', 'reason');
// Returns: { id, ov: 1, cv: 1, updatedAt }

// Logical delete (default) - maintains audit trail
const deleted = await ops.delete(id, expectedOv, 'actor', 'reason');
// Returns: { id, ov: 2, cv: 2, deletedAt }

2. Enrichment API

Incrementally augment records without a full rewrite:

// Deep merge with array union
await ops.enrich(id, {
  tags: ['premium'],              // Arrays unioned
  metadata: { newField: 'value' }, // Objects deep merged
}, {
  functionId: 'enricher@v1',       // Provenance tracking
  actor: 'system',
  reason: 'automated enrichment',
});

// Batch enrichment
await ops.enrich(id, [
  { tags: ['vip'] },
  { metadata: { score: 100 } },
  { tags: ['verified'] },
]);

3. Read Operations with Presigned URLs

Multiple read strategies with security:

// Get latest version with presigned URL
const latest = await ops.getLatest(id, { 
  presign: true,
  ttlSeconds: 3600,
  projection: ['email', 'status'],
});

// Get specific version
const v1 = await ops.getVersion(id, 1);

// Get as of time (time-travel)
const historical = await ops.getAsOf(id, '2024-01-01T00:00:00Z');

// List by metadata with pagination
const results = await ops.listByMeta({
  filter: { status: 'active' },
  limit: 50,
  afterId: lastId,
  sort: { updatedAt: -1 },
}, { presign: true });

4. Integrated Analytics

Built-in analytics for each tenant:

// Analytics automatically tracked in analyticsDbName
const metrics = await chronos.counters.getTotals({
  dbName: 'chronos_runtime_tenant_a',
  collection: 'users',
});

// Returns:
// {
//   created: 1000,
//   updated: 500,
//   deleted: 50,
//   activeUsers: 750,
// }

5. Entity Relationship Management

New in v2.0.1! Automatic management of related entities with referential integrity.

Automatically extract and save related entities to their own collections:

// Define entity mappings
const entityMappings = [
  {
    property: 'customer',           // Property in main record
    collection: 'customers',        // Target collection
    keyProperty: 'customerId',      // Key field in entity
    databaseType: 'metadata',       // Optional: database tier
    tier: 'tenant'                  // Optional: tier level
  },
  {
    property: 'product',
    collection: 'products',
    keyProperty: 'productId',
    databaseType: 'knowledge',
    tier: 'domain'
  }
];

// Insert order with automatic customer/product management
const result = await ops.insertWithEntities(
  {
    orderId: 'ORD-123',
    customer: {
      customerId: 'CUST-456',
      name: 'John Doe',
      email: 'john@example.com'
    },
    product: {
      productId: 'PROD-789',
      name: 'Widget',
      price: 99.99
    },
    quantity: 2
  },
  entityMappings,
  'order-system',
  'new order created'
);

// Returns:
// {
//   mainRecordId: 'order-123-id',
//   entityResults: Map {
//     'customer' => { id: 'cust-id', operation: 'created' },
//     'product' => { id: 'prod-id', operation: 'unchanged' }
//   }
// }

// What happened:
// 1. Checked if customer CUST-456 exists → Created new customer record
// 2. Checked if product PROD-789 exists → Already existed, no changes
// 3. Created the order record with embedded customer/product objects

Fetch a record and automatically retrieve all related entities:

// Fetch order with all related entities
const result = await ops.getWithEntities(
  'order-123-id',
  entityMappings,
  { presign: true }  // Optional read options
);

// Returns:
// {
//   mainRecord: { 
//     orderId: 'ORD-123',
//     customer: { customerId: 'CUST-456', ... },
//     product: { productId: 'PROD-789', ... },
//     quantity: 2
//   },
//   entityRecords: Map {
//     'customer' => { customerId: 'CUST-456', name: 'John Doe', ... },
//     'product' => { productId: 'PROD-789', name: 'Widget', price: 99.99, ... }
//   }
// }

// Benefits:
// - Single call to fetch related data
// - Automatic relationship resolution
// - Maintains referential integrity
// - Works across database tiers

6. Tiered Data Fetching

New in v2.0.1! Fetch data across tiers with automatic fallback or merging.

getKnowledge - Tiered Knowledge Fetching

Fetch from knowledge database with tier priority (tenant → domain → generic):

// Fetch with fallback (returns first found)
const config = await chronos.getKnowledge(
  'app-config',
  { key: 'feature-flags' },
  {
    tenantId: 'tenant-a',
    domain: 'production',
    merge: false  // Return first found
  }
);
// Returns tenant config if exists, otherwise domain, otherwise generic

// Fetch with merge (combines all tiers)
const mergedConfig = await chronos.getKnowledge(
  'app-config',
  { key: 'feature-flags' },
  {
    tenantId: 'tenant-a',
    domain: 'production',
    merge: true,  // Merge all tiers
    mergeOptions: { dedupeArrays: true }
  }
);

// Returns:
// {
//   data: {
//     maxUploadSize: 52428800,         // Tenant overrides generic (10485760)
//     enableNewFeature: true,          // From domain tier (production)
//     customField: 'tenant-specific'   // Tenant-only setting
//   },
//   tiersFound: ['generic', 'domain', 'tenant'],
//   tierRecords: {
//     generic: { maxUploadSize: 10485760, ... },
//     domain: { enableNewFeature: true, ... },
//     tenant: { maxUploadSize: 52428800, customField: ... }
//   }
// }

getMetadata - Tiered Metadata Fetching

Same functionality for metadata database:

// Fetch schema with tier fallback
const schema = await chronos.getMetadata(
  'schemas',
  { entityType: 'user' },
  {
    tenantId: 'tenant-a',
    domain: 'saas',
    merge: true
  }
);

// Merge priority: generic → domain → tenant
// Tenant-specific fields override domain and generic

How Tiered Merging Works

// Generic tier (base configuration)
{
  theme: 'light',
  features: ['basic', 'standard'],
  settings: { timeout: 30 }
}

// Domain tier (environment-specific)
{
  features: ['advanced'],
  settings: { maxRetries: 3 }
}

// Tenant tier (customer-specific)
{
  theme: 'dark',
  features: ['premium'],
  settings: { timeout: 60 }
}

// Merged result (with merge: true):
{
  theme: 'dark',                                    // From tenant (overrides)
  features: ['basic', 'standard', 'advanced', 'premium'],  // Union of all
  settings: { timeout: 60, maxRetries: 3 }         // Deep merge
}
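The merge behavior shown above (deep merge with array union, tenant overriding domain overriding generic) can be sketched as a plain function. This is an illustrative reimplementation of the documented semantics, not Xronox's internal code:

```typescript
// Illustrative sketch of tiered merging (generic → domain → tenant),
// matching the example above; not Xronox's actual implementation.
type Obj = Record<string, unknown>;

function isPlainObject(v: unknown): v is Obj {
  return typeof v === 'object' && v !== null && !Array.isArray(v);
}

function deepMerge(base: Obj, override: Obj): Obj {
  const out: Obj = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = out[key];
    if (Array.isArray(existing) && Array.isArray(value)) {
      out[key] = [...new Set([...existing, ...value])];  // Array union
    } else if (isPlainObject(existing) && isPlainObject(value)) {
      out[key] = deepMerge(existing, value);             // Deep merge objects
    } else {
      out[key] = value;                                  // Later tier overrides
    }
  }
  return out;
}

function mergeTiers(...tiers: Obj[]): Obj {
  return tiers.reduce((acc, tier) => deepMerge(acc, tier), {});
}

const merged = mergeTiers(
  { theme: 'light', features: ['basic', 'standard'], settings: { timeout: 30 } },
  { features: ['advanced'], settings: { maxRetries: 3 } },
  { theme: 'dark', features: ['premium'], settings: { timeout: 60 } }
);
// merged.theme === 'dark'
// merged.features === ['basic', 'standard', 'advanced', 'premium']
// merged.settings === { timeout: 60, maxRetries: 3 }
```

Note that scalar conflicts resolve to the highest-priority tier, while arrays accumulate across tiers — which is why `features` in the example above ends up as a union rather than being replaced.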

7. Restore Operations

Explicit, append-only restore with audit trails:

// Restore object to specific version
await ops.restoreObject(id, { ov: 5 });
// or by time
await ops.restoreObject(id, { at: '2024-01-01T00:00:00Z' });

// Restore entire collection
await ops.restoreCollection({ cv: 100 });
// or by time
await ops.restoreCollection({ at: '2024-01-01T00:00:00Z' });

7B. XronoxClient - MongoDB-like API ⭐ NEW in v3.0.1

Simplified MongoDB-compatible client for CRUD operations:

import { initXronox, createXronoxClient } from 'xronox';

const xronox = await initXronox();
const client = createXronoxClient(xronox, {
  databaseType: 'knowledge',
  tier: 'tenant',
  tenantId: 'default'
});

// Insert with auto-generated ID
const id = await client.insert('knowledge_items', {
  topic: 'quantum-computing',
  content: { text: '...' },
  confidence: 0.9
});

// Find one
const item = await client.findOne('knowledge_items', {
  topic: 'quantum-computing'
});

// Find by ID
const doc = await client.findById('knowledge_items', id);

// Find multiple with filters
const items = await client.find('knowledge_items', {
  state: 'known',
  confidence: { $gte: 0.8 }
}, {
  limit: 10,
  projection: { include: ['topic', 'content'], exclude: [] }
});

// Update
await client.update('knowledge_items', id, {
  confidence: 0.95
});

// Delete
await client.delete('knowledge_items', id);

// Bulk delete (GDPR compliance)
const deletedCount = await client.deleteMany('knowledge_items', {
  '_metadata.identity.identifier': 'user-123'
});

// Count
const total = await client.count('knowledge_items', {
  state: 'known'
});

Why XronoxClient?

  • ✅ Familiar MongoDB-like API
  • ✅ Auto-generates IDs
  • ✅ Simplified CRUD operations
  • ✅ Field projection support
  • ✅ GDPR compliance (deleteMany)
  • ✅ Type-safe with generics

8. Identity & Attribution System ⭐ NEW in v3.0

Track WHO created or modified data with a standardized identity interface across your entire ecosystem.

Why Identity Matters

  • Attribution: Know who created each record
  • Audit Trails: Compliance requirements (GDPR, SOX, HIPAA)
  • Access Control: Filter data by creator
  • Collaboration: Multiple users per tenant
  • AI Agents: Track which agent learned/created data

Standard Identity Interface

import { Identity, createUserIdentity, createAgentIdentity, createSystemIdentity } from 'xronox';

// User identity
const userIdentity: Identity = createUserIdentity('user-123', 'John Doe', {
  email: 'john@example.com',
  role: 'admin',
  department: 'engineering'
});

// AI Agent identity
const agentIdentity: Identity = createAgentIdentity('research-agent-v2', 'Research Agent', {
  version: '2.0.0',
  model: 'gpt-4',
  provider: 'openai'
});

// System identity
const systemIdentity: Identity = createSystemIdentity('background-processor', 'Background Processor', {
  hostname: 'worker-01',
  pid: 12345
});

// API/Service identity
const apiIdentity: Identity = {
  type: 'api',
  identifier: 'stripe-integration',
  name: 'Stripe Payment Integration',
  metadata: { version: 'v1', environment: 'production' }
};

Using Identities in Operations

// Create record with identity attribution
const result = await ops.create(
  { topic: 'quantum-computing', content: '...' },
  userIdentity,  // WHO created this
  'research'     // WHY created (reason)
);

// Update with agent identity
await ops.update(
  id,
  { confidence: 0.95 },
  expectedOv,
  agentIdentity,  // WHO updated this
  'ml-validation' // WHY updated
);

// Enrich with system identity
await ops.enrich(
  id,
  { processed: true, processedAt: new Date() },
  {
    identity: systemIdentity,  // WHO enriched this
    reason: 'automated-processing'
  }
);

Identity Types

type IdentityType = 
  | 'user'      // Human user
  | 'agent'     // AI agent / automated system
  | 'system'    // Internal system process
  | 'api'       // External API integration
  | 'service'   // Microservice
  | 'job'       // Background job / worker
  | 'cron'      // Scheduled task
  | 'webhook'   // Webhook trigger
  | string;     // Custom types allowed

Helper Functions

import {
  createIdentity,
  createUserIdentity,
  createAgentIdentity,
  createSystemIdentity,
  createAPIIdentity,
  validateIdentity,
  isIdentity,
  identityToString,
  parseIdentityString
} from 'xronox';

// Create identities
const identity = createIdentity('user', 'user-123', 'John Doe');

// Validate identity
validateIdentity(identity);  // Throws if invalid
isIdentity(obj);             // Returns boolean

// Serialize for logging
const str = identityToString(identity);
// Returns: "user:user-123 (John Doe)"

// Parse from string
const parsed = parseIdentityString("user:user-123 (John Doe)");
// Returns: { type: 'user', identifier: 'user-123', name: 'John Doe' }
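The `"type:identifier (Name)"` format shown above can be round-tripped with plain string handling. The helpers below (`formatIdentity`, `parseIdentity`) are hypothetical names for a sketch of the documented behavior, not the library's own functions:

```typescript
// Sketch of the documented "type:identifier (Name)" serialization format;
// formatIdentity/parseIdentity are illustrative names, not the Xronox API.
interface Identity { type: string; identifier: string; name?: string; }

function formatIdentity(id: Identity): string {
  return id.name
    ? `${id.type}:${id.identifier} (${id.name})`
    : `${id.type}:${id.identifier}`;
}

function parseIdentity(s: string): Identity {
  // "type" is everything before the first colon; the "(Name)" suffix is optional
  const match = s.match(/^([^:]+):(\S+)(?: \((.+)\))?$/);
  if (!match) throw new Error(`Unparseable identity string: ${s}`);
  const [, type, identifier, name] = match;
  return name ? { type, identifier, name } : { type, identifier };
}

const str = formatIdentity({ type: 'user', identifier: 'user-123', name: 'John Doe' });
// → "user:user-123 (John Doe)"
const back = parseIdentity(str);
// → { type: 'user', identifier: 'user-123', name: 'John Doe' }
```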

Querying by Identity

// Find all records created by a specific user
const userRecords = await ops.query({
  meta: { '_system.creator.identifier': 'user-123' }
});

// Find all records updated by an agent
const agentUpdates = await ops.query({
  meta: { '_system.updater.type': 'agent' }
});

// Find records created by API integrations
const apiRecords = await ops.query({
  meta: { '_system.creator.type': 'api' }
});

9. Field Projection & Hidden Fields ⭐ NEW in v3.0

Control what data is returned in queries, just like hidden files in your operating system.

Why Projection Matters

  • Cleaner Responses: Hide metadata fields by default
  • Performance: Reduce payload size (less data transfer)
  • Security: Prevent accidental exposure of internal fields
  • Flexibility: Show hidden fields when needed (admin/debug)
  • DRY Principle: Define once in config, use everywhere

Configure Hidden Fields

collectionMaps: {
  knowledge_items: {
    indexedProps: ['topic', 'state', 'confidence'],
    
    // Fields hidden by default (like hidden files in OS)
    hiddenFields: ['_metadata', '_internal'],
    
    // Named projections (reusable presets)
    projection: {
      default: {
        include: '*',
        exclude: ['_metadata', '_internal']  // Hide by default
      },
      minimal: {
        include: ['id', 'topic', 'content'],  // Only essential fields
        exclude: []
      },
      withMetadata: {
        include: '*',   // Include everything (even hidden)
        exclude: []
      },
      admin: {
        include: '*',   // Admin view with all fields
        exclude: []
      }
    }
  }
}

Using Projections in Queries

// Default behavior - hidden fields excluded
const item = await ops.getLatest(id);
// Returns: { topic, content, confidence, sources, ... }
// Does NOT include: _metadata, _internal ✅

// Explicitly include hidden fields (like "Show hidden files" in Windows)
const itemWithMetadata = await ops.getLatest(id, {
  includeHidden: true  
});
// Returns: { topic, content, _metadata, _internal, ... }
// INCLUDES hidden fields ✅

// Use named projection
const minimal = await ops.getLatest(id, {
  projectionSpec: 'minimal'
});
// Returns: { id, topic, content }  // Only specified fields ✅

// Use custom projection
const custom = await ops.getLatest(id, {
  projectionSpec: {
    include: ['topic', 'confidence'],
    exclude: []
  }
});
// Returns: { topic, confidence }  ✅
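The include/exclude semantics above can be sketched as a plain function over a document: `'*'` means "all fields minus the excludes", while an explicit include list selects only those fields. This is an illustrative sketch, not Xronox's implementation:

```typescript
// Illustrative projection application; a sketch of the documented
// include/exclude semantics, not Xronox's internal code.
interface ProjectionSpec {
  include: '*' | string[];
  exclude: string[];
}

function applyProjection<T extends Record<string, unknown>>(
  doc: T,
  spec: ProjectionSpec
): Partial<T> {
  const keys = spec.include === '*'
    ? Object.keys(doc).filter((k) => !spec.exclude.includes(k))  // All minus excluded
    : spec.include.filter((k) => k in doc);                      // Only listed fields
  return Object.fromEntries(keys.map((k) => [k, doc[k]])) as Partial<T>;
}

const doc = { topic: 'quantum-computing', confidence: 0.9, _metadata: { agentId: 'a1' } };

const clean = applyProjection(doc, { include: '*', exclude: ['_metadata'] });
// → { topic: 'quantum-computing', confidence: 0.9 }

const picked = applyProjection(doc, { include: ['topic', 'confidence'], exclude: [] });
// → { topic: 'quantum-computing', confidence: 0.9 }
```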

Projections in Query Operations

// Query without metadata (default)
const items = await ops.query({
  meta: { state: 'known' }
});
// All items returned without _metadata ✅

// Query with metadata (for admin panel)
const adminItems = await ops.query({
  meta: { state: 'known' }
}, {
  includeHidden: true  // Include _metadata for audit
});
// All items with _metadata ✅

// Query with named projection
const minimalItems = await ops.query({
  meta: { state: 'known' }
}, {
  projectionSpec: 'minimal'
});
// All items with only minimal fields ✅

Real-World Example: Knowledge Graph with Metadata

// Configuration
collectionMaps: {
  knowledge_items: {
    indexedProps: ['topic', 'agentId', 'state', 'confidence'],
    
    // Hide internal metadata by default
    hiddenFields: ['_metadata'],
    
    projection: {
      // Application use (clean response)
      default: {
        include: '*',
        exclude: ['_metadata']
      },
      // Admin/Debug view (with metadata)
      withMetadata: {
        include: '*'
      }
    }
  }
}

// Application code - clean response
const knowledge = await knowledgeOps.getLatest(id);
// Returns:
// {
//   topic: 'quantum-computing',
//   content: { ... },
//   confidence: 0.9,
//   sources: [...]
// }
// No _metadata cluttering the response ✅

// Admin/Debug - full context
const knowledgeWithContext = await knowledgeOps.getLatest(id, {
  includeHidden: true
});
// Returns:
// {
//   topic: 'quantum-computing',
//   content: { ... },
//   confidence: 0.9,
//   sources: [...],
//   _metadata: {              // ← Now visible
//     agentId: 'agent-123',
//     jobId: 'job-456',
//     identity: { ... },
//     context: [...],
//     learnedAt: '2025-01-09T...'
//   }
// }

Benefits of Field Projection

Without Projection → With Projection (Xronox v3.0)

  • ❌ Metadata clutters responses → ✅ Clean responses by default
  • ❌ Manual field stripping in code → ✅ Define once in config
  • ❌ Inconsistent across frameworks → ✅ Standard behavior
  • ❌ Larger payloads → ✅ Smaller payloads (better performance)
  • ❌ Security risk (accidental exposure) → ✅ Security by default
  • ❌ Duplicated logic everywhere → ✅ DRY principle

🔐 Security & Compliance

GDPR Compliance 🔒

GDPR = General Data Protection Regulation - EU law giving users the "Right to be Forgotten"

Why GDPR Compliance is Critical

  • Legally Required: Must comply within 30 days of user request
  • Massive Fines: Up to €20M or 4% of global revenue for non-compliance
  • User Rights: Fundamental privacy right in EU
  • Global Requirement: Applies to ANY company serving EU residents

GDPR Features in Xronox

1. Identity-Based Data Tracking

Every record can be attributed to an identity:

import { createUserIdentity } from 'xronox';

const userIdentity = createUserIdentity('user-123', 'John Doe', {
  email: 'john@example.com'
});

// Track WHO created data
await ops.create(data, userIdentity, 'user-action');

// All data is now linked to user-123 via identity

2. Find All Data for a User

Query all records created/modified by a specific user:

// Find by creator
const userRecords = await ops.query({
  meta: { '_system.creator.identifier': 'user-123' }
});

// Find by updater  
const userUpdates = await ops.query({
  meta: { '_system.updater.identifier': 'user-123' }
});

// Find by identity type
const userCreatedData = await ops.query({
  meta: { 
    '_system.creator.type': 'user',
    '_system.creator.identifier': 'user-123'
  }
});

3. Bulk Delete User Data (Right to be Forgotten) ⭐ NEW in v3.0.1

import { createXronoxClient } from 'xronox';

const client = createXronoxClient(xronox, {
  databaseType: 'knowledge',
  tier: 'tenant',
  tenantId: 'tenant-a'
});

// Delete ALL data for a user (GDPR compliance)
const deletedCount = await client.deleteMany('knowledge_items', {
  '_metadata.identity.identifier': 'user-123',
  '_metadata.tenantId': 'tenant-a'
});

console.log(`GDPR: Deleted ${deletedCount} items for user-123`);
// Returns: number of deleted documents

4. GDPR Deletion Workflow

// Step 1: User requests data deletion
async function handleGDPRDeletionRequest(userId: string, tenantId: string) {
  const client = createXronoxClient(xronox, {
    databaseType: 'knowledge',
    tier: 'tenant',
    tenantId
  });

  // Step 2: Find all user data
  const userKnowledge = await client.find('knowledge_items', {
    '_metadata.identity.identifier': userId,
    '_metadata.tenantId': tenantId
  });

  console.log(`Found ${userKnowledge.length} knowledge items for user ${userId}`);

  // Step 3: Delete all user data
  const deletedKnowledge = await client.deleteMany('knowledge_items', {
    '_metadata.identity.identifier': userId,
    '_metadata.tenantId': tenantId
  });

  // Step 4: Delete from other collections
  const deletedQuestions = await client.deleteMany('questions', {
    '_metadata.identity.identifier': userId,
    '_metadata.tenantId': tenantId
  });

  // Step 5: Return deletion report
  return {
    userId,
    tenantId,
    deletedAt: new Date().toISOString(),
    itemsDeleted: {
      knowledge: deletedKnowledge,
      questions: deletedQuestions
    },
    total: deletedKnowledge + deletedQuestions,
    status: 'GDPR_COMPLIANT'
  };
}

// Usage
const report = await handleGDPRDeletionRequest('user-123', 'tenant-a');
console.log(`GDPR Deletion Complete:`, report);
// Output:
// {
//   userId: 'user-123',
//   tenantId: 'tenant-a',
//   deletedAt: '2025-01-09T...',
//   itemsDeleted: { knowledge: 45, questions: 12 },
//   total: 57,
//   status: 'GDPR_COMPLIANT'
// }

5. Logical Delete (Audit Trail)

// Enable logical delete for GDPR compliance with audit trails
logicalDelete: {
  enabled: true  // Default - data marked deleted but retained for audit
}

// Versioning for data lineage
versioning: {
  enabled: true  // Default - complete history of all changes
}

// Delete with audit trail
await ops.delete(id, expectedOv, userIdentity, 'gdpr-request');
// Data is logically deleted but history remains for compliance

6. GDPR Compliance Checklist

  • ✅ Identity Attribution: Track WHO created/modified data
  • ✅ Data Discovery: Find all data for a user
  • ✅ Bulk Delete: Delete all user data in one operation
  • ✅ Audit Trails: Complete history of deletions
  • ✅ Tenant Isolation: Ensure cross-tenant data safety
  • ✅ 30-Day Compliance: Automated deletion workflow

Real-World GDPR Scenario

// Day 1: User requests deletion
const deletionRequest = {
  userId: 'user-123',
  email: 'john@example.com',
  requestDate: '2025-01-01',
  reason: 'GDPR Article 17 - Right to be Forgotten'
};

// Day 2-5: Process deletion
const report = await handleGDPRDeletionRequest(
  deletionRequest.userId,
  'tenant-a'
);

// Day 6: Verify deletion
const remainingData = await client.find('knowledge_items', {
  '_metadata.identity.identifier': deletionRequest.userId
});

if (remainingData.length === 0) {
  console.log('✅ GDPR Compliant: All user data deleted');
} else {
  console.error('❌ GDPR Violation: Data still exists!');
}

// Day 30: Deadline - COMPLIANT! ✅

Why XronoxClient is Essential for GDPR

Without XronoxClient → With XronoxClient v3.0.1

  • ❌ Manual deletion per record → ✅ Bulk delete with deleteMany()
  • ❌ Complex query logic → ✅ Simple identity-based queries
  • ❌ Risk of missing data → ✅ Comprehensive deletion
  • ❌ Cannot meet 30-day deadline → ✅ Automated compliance
  • ❌ Legal liability → ✅ GDPR compliant
  • ❌ Cannot deploy in EU → ✅ EU-ready

Legal Protection: Xronox v3.0.1 provides the technical foundation for GDPR compliance, but legal review is recommended.

SOX Compliance

// Enable comprehensive audit trails
collectionMaps: {
  financial_records: {
    indexedProps: ['accountId', 'transactionId', 'amount', 'date'],
    validation: {
      requiredIndexed: ['accountId', 'transactionId', 'amount']
    }
  }
},

// Enable transaction logging
transactions: {
  enabled: true,
  autoDetect: true
}

HIPAA Compliance

// Separate infrastructure per tenant for healthcare data
tenantDatabases: [
  {
    tenantId: 'healthcare-provider',
    dbConnRef: 'mongo-healthcare',      // Separate MongoDB cluster
    spaceConnRef: 's3-healthcare',      // Separate S3 bucket
    bucket: 'chronos-healthcare',
    dbName: 'chronos_healthcare',
    analyticsDbName: 'chronos_analytics_healthcare'
  }
]

🏗️ Architecture

Data Flow

┌─────────────┐
│   Client    │
└──────┬──────┘
       │
       ▼
┌─────────────────────────────────┐
│  Chronos-DB v2.0               │
│  ┌───────────────────────────┐  │
│  │  Router (HRW Hashing)     │  │
│  │  + Tenant Resolution      │  │
│  └───────────────────────────┘  │
│          │           │           │
│          ▼           ▼           │
│  ┌──────────┐  ┌──────────┐     │
│  │  Mongo   │  │    S3    │     │
│  │ (Indexed)│  │(Payloads)│     │
│  └──────────┘  └──────────┘     │
│                                  │
│  ┌───────────────────────────┐  │
│  │  Analytics (Integrated)   │  │
│  └───────────────────────────┘  │
│                                  │
│  ┌───────────────────────────┐  │
│  │  Fallback Queue (Optional)│  │
│  └───────────────────────────┘  │
└─────────────────────────────────┘

MongoDB Collections

  • <collection>_head - Latest state pointers
  • <collection>_ver - Immutable version index
  • <collection>_counter - Collection version counter
  • <collection>_locks - Transaction locks for concurrent write prevention
  • cnt_total - Counter totals (in analytics database)
  • chronos_fallback_ops - Fallback queue (if enabled)
  • chronos_fallback_dead - Dead letter queue (if enabled)

S3 Storage Layout

<jsonBucket>/
  <collection>/
    <itemId>/
      v0/item.json
      v1/item.json
      v2/item.json

<contentBucket>/
  <collection>/
    <itemId>/
      v0/
        <property>/blob.bin
        <property>/text.txt
      v1/
        <property>/blob.bin

📋 System Fields Structure

Document Structure Overview

Chronos-DB organizes all system fields under the _system property to keep your documents clean and normal-looking. This design ensures that your application data remains separate from Chronos-DB's internal management fields.

Complete Document Structure

{
  "_id": "507f1f77bcf86cd799439011",  // MongoDB's native _id (stays at root)
  "email": "user@example.com",        // Your application data
  "name": "John Doe",                 // Your application data
  "status": "active",                 // Your application data
  "_system": {                        // All Chronos-DB system fields
    "ov": 3,                          // Object version (incremented on each update)
    "cv": 150,                        // Collection version (incremented on each operation)
    "insertedAt": "2024-01-01T00:00:00Z",  // Creation timestamp
    "updatedAt": "2024-01-15T10:30:00Z",   // Last update timestamp
    "deletedAt": null,                // Deletion timestamp (null if not deleted)
    "deleted": false,                 // Deletion status
    "functionIds": ["enricher@v1"],   // Enrichment function IDs that modified this record
    "parentId": "parent-record-id",   // Parent record for lineage tracking
    "parentCollection": "parent-collection", // Parent collection name
    "originId": "root-record-id",     // Original root record ID
    "originCollection": "root-collection"   // Original root collection name
  }
}

System Fields Explained

Version Management

  • ov (Object Version): Incremented each time this specific record is updated
  • cv (Collection Version): Incremented each time any record in the collection is modified
  • Purpose: Enables optimistic locking and time-travel queries

Timestamps

  • insertedAt: ISO 8601 timestamp when the record was first created
  • updatedAt: ISO 8601 timestamp when the record was last modified
  • deletedAt: ISO 8601 timestamp when the record was logically deleted (null if not deleted)
  • Purpose: Audit trails and temporal queries

Deletion Status

  • deleted: Boolean indicating if the record is logically deleted
  • Purpose: Enables logical delete functionality while maintaining audit trails

Enrichment Tracking

  • functionIds: Array of enrichment function IDs that have modified this record
  • Purpose: Tracks data lineage and enrichment provenance

Lineage Tracking

  • parentId: ID of the parent record (for hierarchical data)
  • parentCollection: Collection name of the parent record
  • originId: ID of the original root record (preserved throughout lineage)
  • originCollection: Collection name of the original root record
  • Purpose: Complete data lineage tracking for compliance and debugging

What Chronos-DB Manages Automatically

Chronos-DB automatically manages all _system fields. Your application should NOT modify these fields directly.

Automatic Management

  • Version Increments: ov and cv are automatically incremented
  • Timestamp Updates: insertedAt, updatedAt, deletedAt are automatically set
  • Deletion Status: deleted flag is automatically managed
  • Enrichment Tracking: functionIds are automatically updated during enrichment
  • Lineage Tracking: Parent and origin fields are automatically set when provided

What Your App Should Do

  • Read _system fields: Use them for optimistic locking, audit trails, etc.
  • Provide lineage context: Pass parentRecord or origin when creating records
  • Use version fields: Pass expectedOv for updates to prevent conflicts

What Your App Should NOT Do

  • Modify _system fields: Never directly set or change these fields
  • Depend on specific values: Don't assume specific version numbers or timestamps
  • Bypass system fields: Always use Chronos-DB APIs for data operations
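Since `_system` exists for Chronos-DB's own bookkeeping, an application returning documents over an API might strip it first. The helper below is a hypothetical application-side utility, not part of the library:

```typescript
// Hypothetical application helper: separate business data from
// Chronos-DB's _system bookkeeping before returning it to API clients.
function stripSystemFields<T extends { _system?: unknown }>(doc: T): Omit<T, '_system'> {
  const { _system, ...appData } = doc;
  return appData;
}

const doc = {
  _id: '507f1f77bcf86cd799439011',
  email: 'user@example.com',
  status: 'active',
  _system: { ov: 3, cv: 150, deleted: false },
};

const clean = stripSystemFields(doc);
// → { _id: '507f1f77bcf86cd799439011', email: 'user@example.com', status: 'active' }
```

Reading `_system` (for example `doc._system.ov` as the `expectedOv` of an update) remains the application's job; only writing to it is off-limits.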

Usage Examples

Creating Records with Lineage

// Create a child record with parent lineage
const childRecord = await ops.create({
  name: 'Child Record',
  data: 'some data'
}, 'system', 'child creation', {
  parentRecord: {
    id: 'parent-record-id',
    collection: 'parent_items',
  }
});

// The _system field will automatically include:
// {
//   parentId: 'parent-record-id',
//   parentCollection: 'parent_items',
//   originId: 'parent-record-id',        // Derived from parent
//   originCollection: 'parent_items'    // Derived from parent
// }

Creating Records with Explicit Origin

// Create a record with explicit origin (e.g., from external system)
const importedRecord = await ops.create({
  customerId: 'ext-123',
  name: 'Imported Customer'
}, 'system', 'import', {
  origin: {
    id: 'stripe_cus_123',
    collection: 'customers',
    system: 'stripe'  // Optional external system name
  }
});

// The _system field will automatically include:
// {
//   originId: 'stripe_cus_123',
//   originCollection: 'stripe:customers'  // Includes system prefix
// }

Using System Fields for Optimistic Locking

// Get current record
const current = await ops.getLatest('record-id');

// Update with optimistic locking
const updated = await ops.update('record-id', {
  name: 'Updated Name'
}, current._system.ov, 'user', 'name-update');

// Chronos-DB automatically:
// - Increments ov from 3 to 4
// - Updates updatedAt timestamp
// - Prevents conflicts if another process updated the record

Time-Travel Queries

// Get record as it was at a specific time
const historical = await ops.getAsOf('record-id', '2024-01-01T00:00:00Z');

// Get specific version
const v2 = await ops.getVersion('record-id', 2);

// Both return the same structure with _system fields showing:
// - ov: 2 (version at that time)
// - updatedAt: timestamp when that version was created
// - All other _system fields as they were at that time

Benefits of This Design

Clean Application Data

  • Your application data remains clean and normal-looking
  • No system fields mixed with business data
  • Easy to understand and maintain

Complete Audit Trail

  • Every change is tracked with timestamps
  • Full lineage from origin to current state
  • Enrichment provenance is preserved

Optimistic Locking

  • Built-in conflict prevention
  • Version-based concurrency control
  • No need for external locking mechanisms

Time-Travel Capabilities

  • Access any version of any record
  • Point-in-time queries
  • Complete historical data preservation

Compliance Ready

  • GDPR compliance with logical delete
  • SOX compliance with immutable audit trails
  • HIPAA compliance with complete data lineage

Migration from Other Systems

If you're migrating from a system that stores version/timestamp fields at the root level:

// OLD WAY (don't do this):
{
  "_id": "123",
  "name": "John",
  "version": 5,           // ❌ Don't store at root
  "createdAt": "...",     // ❌ Don't store at root
  "updatedAt": "..."      // ❌ Don't store at root
}

// NEW WAY (Chronos-DB):
{
  "_id": "123",
  "name": "John",         // ✅ Clean application data
  "_system": {            // ✅ All system fields organized
    "ov": 5,
    "insertedAt": "...",
    "updatedAt": "..."
  }
}

Enhanced Analytics with Unique Counting

Chronos-DB v2.0.1 includes sophisticated analytics capabilities with unique counting support.

Counter Rules with Unique Counting

counterRules: {
  rules: [
    {
      name: 'user_logins',
      on: ['CREATE'],
      scope: 'meta',
      when: { action: 'login' },
      countUnique: ['sessionId']  // Count unique sessionId values
    },
    {
      name: 'product_views',
      on: ['CREATE'],
      scope: 'meta',
      when: { action: 'view' },
      countUnique: ['productId', 'category', 'brand']  // Multiple unique counts
    },
    {
      name: 'premium_purchases',
      on: ['CREATE'],
      scope: 'meta',
      when: { 
        userTier: 'premium',
        action: 'purchase',
        amount: { $gte: 100 }
      },
      countUnique: ['productId', 'category']
    }
  ]
}

Analytics Results

// Get analytics for a tenant
const metrics = await chronos.counters.getTotals({
  dbName: 'chronos_runtime_tenant_a',
  collection: 'events',
});

console.log('Analytics:', metrics);
// Output:
// {
//   _id: "tenant:tenant-a|db:chronos_runtime_tenant_a|coll:events",
//   created: 1000,              // Total occurrences
//   updated: 500,
//   deleted: 50,
//   rules: {
//     user_logins: {
//       created: 150,           // Total logins
//       unique: {
//         sessionId: 45         // Unique sessions
//       }
//     },
//     product_views: {
//       created: 800,           // Total views
//       unique: {
//         productId: 150,       // Unique products viewed
//         category: 25,         // Unique categories viewed
//         brand: 12             // Unique brands viewed
//       }
//     },
//     premium_purchases: {
//       created: 200,           // Total premium purchases
//       unique: {
//         productId: 75,        // Unique products purchased
//         category: 15          // Unique categories purchased
//       }
//     }
//   },
//   lastAt: "2024-01-15T10:30:00Z"
// }

Why Unique Counting is Important

Business Intelligence: Understand user behavior patterns

  • How many unique users visited today?
  • How many unique products were viewed this month?
  • What's the conversion rate from views to purchases?

Performance Optimization: Identify bottlenecks

  • Which products are most popular?
  • Which categories drive the most engagement?
  • What's the user retention rate?

Compliance Reporting: Meet regulatory requirements

  • Unique user counts for GDPR compliance
  • Unique transaction counts for financial reporting
  • Unique data access counts for audit trails
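Derived metrics such as a view-to-purchase conversion rate are simple to compute client-side from the `getTotals` result. A sketch, assuming the result shape shown in the output example above:

```typescript
// conversionRate: derive a conversion rate between two counter rules
// from the totals object returned by chronos.counters.getTotals(...).
interface RuleTotals { created: number; unique?: Record<string, number> }
interface CounterTotals { rules: Record<string, RuleTotals> }

function conversionRate(totals: CounterTotals, fromRule: string, toRule: string): number {
  const from = totals.rules[fromRule]?.created ?? 0;
  const to = totals.rules[toRule]?.created ?? 0;
  return from === 0 ? 0 : to / from;   // guard against division by zero
}

const metrics: CounterTotals = {
  rules: {
    product_views: { created: 800, unique: { productId: 150 } },
    premium_purchases: { created: 200, unique: { productId: 75 } },
  },
};
console.log(conversionRate(metrics, 'product_views', 'premium_purchases')); // 0.25
```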

Advanced Analytics Use Cases

E-commerce Analytics

// Track unique customers per day
{
  name: 'daily_unique_customers',
  on: ['CREATE'],
  scope: 'meta',
  when: { event: 'purchase' },
  countUnique: ['customerId']
}

// Track unique products per category
{
  name: 'category_product_diversity',
  on: ['CREATE'],
  scope: 'meta',
  when: { action: 'view' },
  countUnique: ['productId', 'category']
}

User Engagement Analytics

// Track unique sessions per user
{
  name: 'user_session_activity',
  on: ['CREATE'],
  scope: 'meta',
  when: { action: 'login' },
  countUnique: ['sessionId', 'userId']
}

// Track unique features used per user
{
  name: 'feature_adoption',
  on: ['CREATE'],
  scope: 'meta',
  when: { event: 'feature_used' },
  countUnique: ['featureId', 'userId']
}

Financial Analytics

// Track unique accounts per transaction type
{
  name: 'transaction_diversity',
  on: ['CREATE'],
  scope: 'meta',
  when: { 
    event: 'transaction',
    amount: { $gte: 1000 }
  },
  countUnique: ['accountId', 'transactionType']
}

Analytics Database Structure

Each tenant gets its own analytics database with the following collections:

cnt_total Collection

{
  "_id": "tenant:tenant-a|db:chronos_runtime_tenant_a|coll:events",
  "tenant": "tenant-a",
  "dbName": "chronos_runtime_tenant_a",
  "collection": "events",
  "created": 1000,
  "updated": 500,
  "deleted": 50,
  "rules": {
    "user_logins": {
      "created": 150,
      "unique": {
        "sessionId": ["sess1", "sess2", "sess3", ...]  // Stored as arrays
      }
    }
  },
  "lastAt": "2024-01-15T10:30:00Z"
}

Key Features:

  • Automatic Deduplication: MongoDB's $addToSet ensures unique values
  • Efficient Storage: Arrays are converted to counts when retrieved
  • Real-time Updates: Counters are updated with every operation
  • Tenant Isolation: Each tenant has separate analytics

Analytics Best Practices

1. Choose Meaningful Properties

// Good: Track business-relevant unique values
countUnique: ['userId', 'productId', 'sessionId']

// Avoid: Tracking too many properties
countUnique: ['userId', 'productId', 'sessionId', 'ipAddress', 'userAgent', 'timestamp']

2. Use Appropriate Conditions

// Good: Specific conditions for meaningful analytics
when: { 
  userTier: 'premium',
  action: 'purchase',
  amount: { $gte: 100 }
}

// Avoid: Too broad conditions
when: { action: 'view' }  // Might be too noisy

3. Monitor Performance

// Use indexes on frequently queried fields
collectionMaps: {
  events: {
    indexedProps: ['userId', 'action', 'timestamp', 'userTier']
  }
}

4. Regular Cleanup

// Set appropriate retention policies
retention: {
  counters: {
    days: 30,    // Keep daily counts for 30 days
    weeks: 12,   // Keep weekly counts for 12 weeks
    months: 6    // Keep monthly counts for 6 months
  }
}

🔧 Worker Integration

Chronos-DB provides advanced analytics capabilities that are designed to work with external workers. Important: The worker itself is NOT included in Chronos-DB - you need to implement your own worker system.

Time-Based Analytics Rules

Time-based analytics rules are designed to be executed by external workers on a schedule (hourly, daily, monthly).

Configuration

analytics: {
  // Standard counter rules (real-time)
  counterRules: [
    {
      name: 'user_logins',
      on: ['CREATE'],
      scope: 'meta',
      when: { action: 'login' },
      countUnique: ['sessionId']
    }
  ],
  
  // Time-based analytics rules (worker-driven)
  timeBasedRules: [
    {
      name: 'daily_revenue',
      collection: 'transactions',
      query: { status: 'completed' },
      operation: 'sum',
      field: 'amount',
      saveMode: 'timeframe',
      timeframe: 'daily'
    },
    {
      name: 'hourly_active_users',
      collection: 'events',
      query: { action: 'page_view' },
      operation: 'count',
      saveMode: 'timeframe',
      timeframe: 'hourly',
      relativeTime: {
        newerThan: 'PT1H'  // Last hour
      }
    },
    {
      name: 'monthly_unique_customers',
      collection: 'orders',
      query: { status: 'completed' },
      operation: 'count',
      saveMode: 'timeframe',
      timeframe: 'monthly',
      arguments: ['customerId']  // Foreign key filtering
    }
  ],
  
  // Cross-tenant analytics rules
  crossTenantRules: [
    {
      name: 'global_active_tenants',
      collection: 'events',
      query: { action: 'user_activity' },
      mode: 'boolean',
      masterTenantId: 'master-tenant',
      slaveTenantIds: ['tenant-a', 'tenant-b', 'tenant-c'],
      relativeTime: {
        newerThan: 'P1D'  // Last 24 hours
      }
    }
  ],
  
  // List of all tenants for cross-tenant operations
  tenants: ['tenant-a', 'tenant-b', 'tenant-c', 'master-tenant']
}
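The `relativeTime` fields use ISO-8601 durations (`PT1H`, `P1D`). A worker has to turn these into cutoff dates; a minimal parser covering just the day/hour/minute forms used here (a hypothetical helper, not part of the library):

```typescript
// isoDurationToMs: minimal ISO-8601 duration parser for the
// P<days>D / PT<hours>H / PT<minutes>M forms used by relativeTime.
function isoDurationToMs(duration: string): number {
  const m = /^P(?:(\d+)D)?(?:T(?:(\d+)H)?(?:(\d+)M)?)?$/.exec(duration);
  if (!m) throw new Error(`Unsupported duration: ${duration}`);
  const [, days, hours, minutes] = m;
  return (Number(days ?? 0) * 24 * 60 * 60 +
          Number(hours ?? 0) * 60 * 60 +
          Number(minutes ?? 0) * 60) * 1000;
}

// Cutoff for "newerThan: 'PT1H'": documents newer than one hour ago.
const cutoff = new Date(Date.now() - isoDurationToMs('PT1H'));
console.log(isoDurationToMs('P1D')); // 86400000
```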

Worker Implementation Example

import { AdvancedAnalytics } from 'chronos-db';
import { MongoClient } from 'mongodb';

class AnalyticsWorker {
  private analytics: AdvancedAnalytics;
  private config: any;
  
  constructor(mongoUri: string, analyticsDbName: string, config: any) {
    const mongoClient = new MongoClient(mongoUri);
    this.analytics = new AdvancedAnalytics(mongoClient, analyticsDbName, config);
    this.config = config;  // Keep a reference for the rule lookups below
  }
  
  // Execute time-based rules
  async executeTimeBasedRules() {
    const rules = this.config.analytics.timeBasedRules || [];
    
    for (const rule of rules) {
      try {
        const result = await this.analytics.executeTimeBasedRule(rule);
        console.log(`Executed rule ${rule.name}:`, result.value);
      } catch (error) {
        console.error(`Failed to execute rule ${rule.name}:`, error);
      }
    }
  }
  
  // Execute cross-tenant rules
  async executeCrossTenantRules() {
    const rules = this.config.analytics.crossTenantRules || [];
    
    for (const rule of rules) {
      try {
        const result = await this.analytics.executeCrossTenantRule(rule);
        console.log(`Executed cross-tenant rule ${rule.name}:`, result.value);
      } catch (error) {
        console.error(`Failed to execute cross-tenant rule ${rule.name}:`, error);
      }
    }
  }
  
  // Cleanup TTL data
  async cleanupTTLData() {
    // Clean up old analytics data
    const cutoffDate = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000); // 30 days ago
    
    // Clean up time-based results older than 30 days
    await this.analytics.analyticsDb.collection('timeBasedResults')
      .deleteMany({ timestamp: { $lt: cutoffDate } });
    
    // Clean up cross-tenant results older than 30 days
    await this.analytics.analyticsDb.collection('crossTenantResults')
      .deleteMany({ timestamp: { $lt: cutoffDate } });
    
    console.log('TTL cleanup completed');
  }
}

// Worker scheduling (example with node-cron)
import cron from 'node-cron';

const worker = new AnalyticsWorker(
  'mongodb://localhost:27017',
  'analytics_db',
  config
);

// Run time-based analytics every hour
cron.schedule('0 * * * *', async () => {
  console.log('Running hourly analytics...');
  await worker.executeTimeBasedRules();
});

// Run cross-tenant analytics daily at midnight
cron.schedule('0 0 * * *', async () => {
  console.log('Running daily cross-tenant analytics...');
  await worker.executeCrossTenantRules();
});

// Cleanup TTL data weekly
cron.schedule('0 0 * * 0', async () => {
  console.log('Running weekly TTL cleanup...');
  await worker.cleanupTTLData();
});

Analytics Results Structure

Time-Based Results

{
  "_id": "daily_revenue_1704067200000_abc123",
  "ruleName": "daily_revenue",
  "collection": "transactions",
  "operation": "sum",
  "field": "amount",
  "value": 15420.50,
  "timeframe": "daily",
  "timestamp": "2024-01-01T00:00:00Z",
  "arguments": null
}

Cross-Tenant Results

{
  "_id": "global_active_tenants_1704067200000_def456",
  "ruleName": "global_active_tenants",
  "collection": "events",
  "mode": "boolean",
  "value": 3,
  "timestamp": "2024-01-01T00:00:00Z",
  "masterTenantId": "master-tenant",
  "slaveResults": [
    { "tenantId": "tenant-a", "value": 1 },
    { "tenantId": "tenant-b", "value": 1 },
    { "tenantId": "tenant-c", "value": 1 }
  ]
}

Worker Integration Points

1. Analytics Execution

// Get analytics results
const timeBasedResults = await analytics.getTimeBasedResults({
  ruleName: 'daily_revenue',
  timeframe: 'daily',
  limit: 30
});

const crossTenantResults = await analytics.getCrossTenantResults({
  ruleName: 'global_active_tenants',
  masterTenantId: 'master-tenant',
  limit: 7
});

2. TTL Cleanup

Chronos-DB requires external workers to handle TTL cleanup for:

  • Analytics Data: Old time-based and cross-tenant results
  • Version Data: Old document versions (if retention policies are set)
  • Counter Data: Old counter totals (if retention policies are set)
  • Fallback Queue: Dead letter queue cleanup

3. Error Handling

// Robust worker with error handling
class RobustAnalyticsWorker {
  async executeWithRetry(operation: () => Promise<any>, maxRetries = 3) {
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        return await operation();
      } catch (error) {
        console.error(`Attempt ${attempt} failed:`, error);
        if (attempt === maxRetries) {
          throw error;
        }
        await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
      }
    }
  }
  
  async executeTimeBasedRules() {
    const rules = this.config.analytics.timeBasedRules || [];
    
    for (const rule of rules) {
      await this.executeWithRetry(async () => {
        const result = await this.analytics.executeTimeBasedRule(rule);
        console.log(`Executed rule ${rule.name}:`, result.value);
      });
    }
  }
}

Production Considerations

1. Worker Scaling

// Multiple workers for high-volume analytics
const workers = [
  new AnalyticsWorker(mongoUri1, 'analytics_db_1', config),
  new AnalyticsWorker(mongoUri2, 'analytics_db_2', config),
  new AnalyticsWorker(mongoUri3, 'analytics_db_3', config)
];

// Distribute rules across workers
const rulesPerWorker = Math.ceil(timeBasedRules.length / workers.length);
workers.forEach((worker, index) => {
  const startIndex = index * rulesPerWorker;
  const endIndex = Math.min(startIndex + rulesPerWorker, timeBasedRules.length);
  const workerRules = timeBasedRules.slice(startIndex, endIndex);
  
  // Execute worker-specific rules
  worker.executeRules(workerRules);
});

2. Monitoring and Alerting

// Worker health monitoring
class MonitoredAnalyticsWorker {
  async executeTimeBasedRules() {
    const startTime = Date.now();
    let successCount = 0;
    let errorCount = 0;
    
    try {
      const rules = this.config.analytics.timeBasedRules || [];
      
      for (const rule of rules) {
        try {
          await this.analytics.executeTimeBasedRule(rule);
          successCount++;
        } catch (error) {
          errorCount++;
          console.error(`Rule ${rule.name} failed:`, error);
          
          // Send alert for critical rules
          if (rule.critical) {
            await this.sendAlert(`Critical analytics rule failed: ${rule.name}`);
          }
        }
      }
      
      const duration = Date.now() - startTime;
      console.log(`Analytics execution completed: ${successCount} success, ${errorCount} errors, ${duration}ms`);
      
      // Send metrics to monitoring system
      await this.sendMetrics({
        successCount,
        errorCount,
        duration,
        timestamp: new Date()
      });
      
    } catch (error) {
      console.error('Analytics worker failed:', error);
      await this.sendAlert('Analytics worker failed completely');
    }
  }
}

3. Data Consistency

// Ensure data consistency across workers
class ConsistentAnalyticsWorker {
  async executeCrossTenantRules() {
    // Use MongoDB transactions for consistency
    const session = this.analytics.mongoClient.startSession();
    
    try {
      await session.withTransaction(async () => {
        const rules = this.config.analytics.crossTenantRules || [];
        
        for (const rule of rules) {
          const result = await this.analytics.executeCrossTenantRule(rule);
          
          // Verify result consistency
          await this.verifyCrossTenantResult(result);
        }
      });
    } finally {
      await session.endSession();
    }
  }
  
  async verifyCrossTenantResult(result: CrossTenantResult) {
    // Verify that slave results sum correctly
    const expectedValue = result.slaveResults.reduce((sum, slave) => sum + slave.value, 0);
    
    if (result.value !== expectedValue) {
      throw new Error(`Cross-tenant result inconsistency: expected ${expectedValue}, got ${result.value}`);
    }
  }
}

🔐 Production Deployment

MongoDB Setup

Chronos-DB works with any MongoDB setup - standalone instances, replica sets, or sharded clusters.

Recommended for Production:

  • Replica Set (3+ nodes) for high availability and transaction support
  • Standalone MongoDB works perfectly for development and smaller deployments

# Option 1: Standalone MongoDB (works out of the box)
mongodb://localhost:27017/dbname

# Option 2: Replica Set (recommended for production)
mongodb://mongo1:27017,mongo2:27017,mongo3:27017/dbname?replicaSet=rs0

Transaction Support:

  • Replica Sets: Full transaction support with ACID guarantees
  • Standalone: Automatic fallback to non-transactional operations
  • Auto-detection: Chronos-DB automatically detects MongoDB capabilities

# Example docker-compose.yml for replica set (optional)
services:
  mongo1:
    image: mongo:6
    command: mongod --replSet rs0
    
  mongo2:
    image: mongo:6
    command: mongod --replSet rs0
    
  mongo3:
    image: mongo:6
    command: mongod --replSet rs0

Storage Providers

Tested with:

  • AWS S3
  • Azure Blob Storage (NEW in v2.0.1)
  • DigitalOcean Spaces
  • MinIO
  • Cloudflare R2

📚 Documentation

🤝 Contributing

Contributions welcome! Please ensure:

  • TypeScript compilation passes
  • Documentation is updated
  • Security considerations are addressed
  • Tests are included

📄 License

MIT © nx-intelligence

💬 Messaging Database (Chronow Integration)

Xronox v2.4 introduces a first-class messaging database type designed for integration with Chronow (hot Redis-backed messaging + warm MongoDB durable audit). This enables dual-tier retention, DLQ auditing, and cross-tenant observability for pub/sub systems.

Overview

The messaging database provides simple MongoDB-only storage (NO versioning, NO S3 offload) for:

  • Shared Memory Snapshots: KV snapshots with versioning strategies
  • Topic Metadata: Topic configuration and shard info
  • Canonical Messages: Published messages for audit and retrieval
  • Delivery Tracking (optional): Per-subscription delivery attempts
  • Dead Letter Queue (DLQ): Terminally failed messages with audit trail

Key Features

Simple & Fast: MongoDB-only, no versioning overhead, no storage offload
Dual-Tier Design: Redis hot path (Chronow) + MongoDB warm/audit (Chronos)
Multi-Tenant: Tenant-scoped with isolated databases
Idempotent: Safe retry/replay with duplicate detection
DLQ Support: Dead letter tracking with failure reasons
Optional Delivery Tracking: Control storage overhead with captureDeliveries flag

Configuration

Single Database for All Tenants (like logs):

{
  "dbConnections": {
    "mongo-primary": {
      "mongoUri": "mongodb://localhost:27017"
    }
  },
  "spacesConnections": {},
  "databases": {
    "messaging": {
      "dbConnRef": "mongo-primary",
      "dbName": "chronos_messaging",
      "captureDeliveries": false
    }
  },
  "routing": { "hashAlgo": "rendezvous" }
}

Configuration Fields:

  • dbConnRef: MongoDB connection reference (required)
  • dbName: MongoDB database name (required)
  • captureDeliveries: Enable delivery attempt tracking (optional, default: false)

Note: The messaging database is shared across all tenants. Tenant isolation is achieved through the tenantId field in every document/query.

API Usage

Basic Setup

import { initChronos } from 'xronox';

const chronos = initChronos(config);

// Get messaging API for a tenant
const messaging = chronos.messaging('tenant-a');

Shared Memory (KV Snapshots)

Store shared memory snapshots with append or latest-wins strategy:

// Latest strategy (one document per key, overwrites)
await messaging.shared.save({
  namespace: 'config',
  key: 'feature-flags',
  val: { beta: true, newUI: false },
  strategy: 'latest'
});

// Load latest value
const config = await messaging.shared.load({
  namespace: 'config',
  key: 'feature-flags',
  strategy: 'latest'
});

// Append strategy (versioned history)
await messaging.shared.save({
  namespace: 'events',
  key: 'user-session',
  val: { action: 'login', ts: new Date() },
  strategy: 'append'
});
// Returns: { id: '...', version: 0 }

await messaging.shared.save({
  namespace: 'events',
  key: 'user-session',
  val: { action: 'view-page', page: '/home' },
  strategy: 'append'
});
// Returns: { id: '...', version: 1 }

// Load specific version
const v0 = await messaging.shared.load({
  namespace: 'events',
  key: 'user-session',
  strategy: 'append',
  version: 0
});

// Load latest version
const latest = await messaging.shared.load({
  namespace: 'events',
  key: 'user-session',
  strategy: 'append'
});

// Tombstone (delete all versions)
await messaging.shared.tombstone({
  namespace: 'config',
  key: 'feature-flags',
  reason: 'tenant-deleted'
});

Topics

Ensure topics exist and retrieve metadata:

// Ensure topic exists
await messaging.topics.ensure({
  topic: 'payments',
  shards: 4
});

// Get topic metadata
const topicInfo = await messaging.topics.get({ topic: 'payments' });
// Returns: { tenantId: 'tenant-a', topic: 'payments', shards: 4, createdAt: Date }

Messages

Save and retrieve canonical messages:

// Save message (idempotent)
await messaging.messages.save({
  topic: 'payments',
  msgId: '171223123-0',  // Redis stream ID or ULID
  headers: { type: 'payment.created', traceId: 'abc123' },
  payload: { orderId: '123', amount: 100 },
  firstSeenAt: new Date(),
  size: 128
});

// Get specific message
const msg = await messaging.messages.get({
  topic: 'payments',
  msgId: '171223123-0'
});

// List messages (with time filter)
const recent = await messaging.messages.list({
  topic: 'payments',
  after: new Date(Date.now() - 86400000),  // Last 24h
  limit: 100
});

Deliveries (Optional)

Track delivery attempts per subscription (enabled via captureDeliveries: true):

// Append delivery attempt
if (messaging.deliveries) {
  await messaging.deliveries.append({
    topic: 'payments',
    subscription: 'payment-processor',
    msgId: '171223123-0',
    attempt: 1,
    status: 'pending',
    consumerId: 'worker-1',
    ts: new Date()
  });

  // Update to ack
  await messaging.deliveries.append({
    topic: 'payments',
    subscription: 'payment-processor',
    msgId: '171223123-0',
    attempt: 1,
    status: 'ack',
    consumerId: 'worker-1',
    ts: new Date()
  });

  // List deliveries for a message
  const deliveries = await messaging.deliveries.listByMessage({
    topic: 'payments',
    msgId: '171223123-0'
  });
}

Dead Letters (DLQ)

Track terminally failed messages:

// Save to DLQ
await messaging.deadLetters.save({
  topic: 'payments',
  subscription: 'payment-processor',
  msgId: '171223123-0',
  headers: { type: 'payment.created' },
  payload: { orderId: '123', amount: 100 },
  deliveries: 5,
  reason: 'max_retries_exceeded',
  failedAt: new Date()
});

// List dead letters
const dlq = await messaging.deadLetters.list({
  topic: 'payments',
  after: new Date(Date.now() - 86400000),  // Last 24h
  limit: 100
});

Collections & Indexes

The messaging database creates 5 collections with optimized indexes:

shared_memory

  • { tenantId, namespace, key, strategy } - Unique for latest strategy
  • { tenantId, namespace, key, version } - Versioned history for append
  • { updatedAt } - Freshness queries

topics

  • { tenantId, topic } - Unique per tenant

messages

  • { tenantId, topic, msgId } - Unique per tenant/topic
  • { firstSeenAt } - Time-based queries
  • { tenantId, topic, firstSeenAt } - Topic+time queries

deliveries (if captureDeliveries: true)

  • { tenantId, topic, subscription, msgId, attempt } - Unique per delivery
  • { tenantId, topic, msgId } - Message lookup
  • { ts } - Time-based cleanup

dead_letters

  • { tenantId, topic, msgId } - Lookup by message
  • { failedAt } - Time-based queries
  • { tenantId, topic, failedAt } - Topic analysis

Retention & Lifecycle

Messaging databases do NOT use Chronos versioning/S3 storage - they are simple MongoDB collections.

Retention is managed via external worker scripts:

// Example: Clean up old deliveries (7 days)
const db = mongoClient.db('chronos_messaging');  // shared messaging database
await db.collection('deliveries').deleteMany({
  ts: { $lt: new Date(Date.now() - 7 * 86400000) }
});

// Example: Clean up old dead letters (30 days)
await db.collection('dead_letters').deleteMany({
  failedAt: { $lt: new Date(Date.now() - 30 * 86400000) }
});

// Example: Enforce maxVersions for append-mode shared memory
const maxVersions = 100;
const keys = await db.collection('shared_memory').distinct('key', {
  tenantId: 'tenant-a',
  namespace: 'events',
  strategy: 'append'
});

for (const key of keys) {
  const docs = await db.collection('shared_memory')
    .find({ tenantId: 'tenant-a', namespace: 'events', key, strategy: 'append' })
    .sort({ version: -1 })
    .skip(maxVersions)
    .toArray();
  
  if (docs.length > 0) {
    await db.collection('shared_memory').deleteMany({
      _id: { $in: docs.map(d => d._id) }
    });
  }
}

Integration with Chronow

Chronow (Redis-backed hot messaging) uses this messaging database for:

  • Shared Memory Persistence: Warm storage for config/state with cache-miss fallback
  • Message Audit Trail: Canonical copy of published messages for compliance
  • DLQ Observability: Dead letter tracking for replay/analysis
  • Cross-Tenant Analytics: Aggregate messaging metrics across tenants

Typical Flow:

Chronow (Hot - Redis):
  ├─ Publish message → Redis Stream → Subscribers
  └─ On publish: chronos.messaging(tenant).messages.save(...)

Chronos (Warm - MongoDB):
  ├─ Stores canonical message for audit
  ├─ Tracks DLQ for failed deliveries
  └─ Provides long-tail retrieval (>24h)

Performance & Sizing

  • Messages: Keep < 256KB per message (small payloads only)
  • Indexes: Automatically created on first write
  • Idempotency: Duplicate saves are ignored (MongoDB unique index)
  • Connections: Reuses existing MongoDB connection pool
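The idempotency guarantee can be reproduced in application code: the first save for a `(tenantId, topic, msgId)` triple wins and duplicates are ignored. A sketch of that behavior with an in-memory map (illustrative only; the real guarantee comes from the MongoDB unique index):

```typescript
// Illustrates idempotent message saves: duplicate (tenantId, topic, msgId)
// triples are silently ignored, so retries and replays are safe.
interface MessageRecord { tenantId: string; topic: string; msgId: string; payload: unknown }

class IdempotentStore {
  private docs = new Map<string, MessageRecord>();

  save(msg: MessageRecord): boolean {
    const key = `${msg.tenantId}|${msg.topic}|${msg.msgId}`;
    if (this.docs.has(key)) return false; // duplicate: ignored
    this.docs.set(key, msg);
    return true;                          // first write wins
  }
}

const store = new IdempotentStore();
store.save({ tenantId: 't-a', topic: 'payments', msgId: '171223123-0', payload: { amount: 100 } }); // true
store.save({ tenantId: 't-a', topic: 'payments', msgId: '171223123-0', payload: { amount: 100 } }); // false
```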

Security & Multi-Tenancy

  • All operations are tenant-scoped (tenantId in every query)
  • Single shared database for all tenants (like logs database)
  • Tenant isolation enforced by tenantId in all queries and indexes
  • No cross-tenant data leakage (MongoDB queries enforce tenant boundaries)
  • PII compliance: Classify and mask payloads as needed

👤 Identities Database (Users, Accounts, Auth)

Xronox v2.4 includes a dedicated identities database for managing users, accounts, authentication, permissions, and roles.

Overview

The identities database provides simple MongoDB-only storage (NO versioning, NO S3 offload, like logs/messaging) for:

  • Users: User accounts, profiles, credentials
  • Accounts: Organization/company accounts, billing info
  • Authentication: Login sessions, tokens, OAuth connections
  • Permissions: Role-based access control (RBAC)
  • Roles: User roles and permission sets

Key Features

Simple & Fast: MongoDB-only, no versioning overhead
Generic/Shared: Single database for all tenants (tenant-scoped queries)
RBAC Ready: Built for role-based access control
Auth Flexible: Works with any auth strategy (JWT, OAuth, sessions)
SaaS Optimized: Perfect for multi-tenant SaaS applications

Configuration

Single Database for All Tenants (like logs and messaging):

{
  "dbConnections": {
    "mongo-primary": {
      "mongoUri": "mongodb://localhost:27017"
    }
  },
  "spacesConnections": {},
  "databases": {
    "identities": {
      "dbConnRef": "mongo-primary",
      "dbName": "chronos_identities"
    }
  },
  "routing": { "hashAlgo": "rendezvous" }
}

Configuration Fields:

  • dbConnRef: MongoDB connection reference (required)
  • dbName: MongoDB database name (required)

Note: The identities database is shared across all tenants. Tenant isolation is achieved through the tenantId field in every document/query.

Typical Collections

While Chronos-DB doesn't enforce specific collection schemas for identities (you define them), typical collections include:

users

{
  _id: ObjectId,
  tenantId: string,
  email: string,
  passwordHash: string,
  name: string,
  roles: string[],
  status: 'active' | 'suspended' | 'deleted',
  createdAt: Date,
  updatedAt: Date
}

accounts (for B2B SaaS)

{
  _id: ObjectId,
  tenantId: string,
  companyName: string,
  plan: 'free' | 'pro' | 'enterprise',
  ownerId: string,  // Reference to users collection
  seats: number,
  billing: {
    customerId: string,
    subscriptionId: string
  },
  createdAt: Date
}

sessions

{
  _id: ObjectId,
  tenantId: string,
  userId: string,
  token: string,
  expiresAt: Date,
  ipAddress: string,
  userAgent: string,
  createdAt: Date
}

roles

{
  _id: ObjectId,
  tenantId: string,
  name: string,
  permissions: string[],
  isSystemRole: boolean,
  createdAt: Date
}

API Usage

Use the standard Xronox with() API with databaseType: 'identities':

import { initChronos } from 'xronox';

const chronos = initChronos(config);

// Get operations for identities database
const ops = chronos.with({
  databaseType: 'identities',
  dbName: 'chronos_identities',
  collection: 'users'
});

// Create user
const user = await ops.create({
  tenantId: 'acme-corp',
  email: 'john@acme.com',
  passwordHash: '...',
  name: 'John Doe',
  roles: ['admin'],
  status: 'active'
}, 'system', 'user-registration');

// Query users
const users = await ops.query({
  tenantId: 'acme-corp',
  status: 'active'
}, {});

// Update user
await ops.update(user.id, {
  name: 'John Smith'
}, user.ov, 'admin', 'name-change');

Security & Multi-Tenancy

  • All operations are tenant-scoped (tenantId in every query)
  • Single shared database for all tenants
  • Tenant isolation enforced by tenantId in all queries and indexes
  • No cross-tenant data leakage (MongoDB queries enforce tenant boundaries)
  • Password Security: Always hash passwords (bcrypt, argon2, etc.) before storing
  • Token Security: Use short-lived tokens, store hashed versions
  • PII Compliance: Apply encryption at rest, GDPR right-to-delete
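As a concrete sketch of the password-hashing rule, here is salted hashing with Node's built-in `crypto.scrypt` (the list above recommends bcrypt or argon2; scrypt is used here only to keep the example dependency-free):

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from 'crypto';

// hashPassword / verifyPassword: salted scrypt hashing, stored as
// "<salt-hex>:<hash-hex>" so the salt travels with the hash.
function hashPassword(password: string): string {
  const salt = randomBytes(16);                  // unique salt per password
  const hash = scryptSync(password, salt, 64);
  return `${salt.toString('hex')}:${hash.toString('hex')}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(':');
  const hash = scryptSync(password, Buffer.from(saltHex, 'hex'), 64);
  return timingSafeEqual(hash, Buffer.from(hashHex, 'hex')); // constant-time compare
}

const stored = hashPassword('s3cret');
console.log(verifyPassword('s3cret', stored));  // true
console.log(verifyPassword('wrong', stored));   // false
```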

Integration with Authentication

The identities database is authentication-agnostic — use it with any auth strategy:

JWT Authentication:

// Verify login
const users = await ops.query({ 
  tenantId: 'acme-corp', 
  email: 'john@acme.com' 
}, {});
const user = users[0];

if (user && await bcrypt.compare(password, user.passwordHash)) {
  const token = jwt.sign({ userId: user._id, tenantId: user.tenantId }, secret);
  
  // Store session
  await sessionOps.create({
    tenantId: user.tenantId,
    userId: user._id,
    token: hashToken(token),
    expiresAt: new Date(Date.now() + 86400000)
  });
}

OAuth Integration:

// Store OAuth connection
await oauthOps.create({
  tenantId: 'acme-corp',
  userId: user._id,
  provider: 'google',
  providerId: googleProfile.id,
  accessToken: encryptToken(tokens.access_token),
  refreshToken: encryptToken(tokens.refresh_token),
  expiresAt: new Date(Date.now() + tokens.expires_in * 1000)
});

Performance & Sizing

  • Indexes: Create indexes on { tenantId, email }, { tenantId, status }, etc.
  • Sharding: Consider sharding by tenantId for very large deployments (millions of users)
  • Caching: Cache user/role lookups in Redis for auth performance
  • Connections: Reuses existing MongoDB connection pool
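The recommended indexes can be declared once and applied at startup with `db.collection(name).createIndex(keys, options)`. A sketch of the specs (collection and field names follow the example schemas above; the TTL index on `sessions.expiresAt` is an added suggestion for automatic session expiry):

```typescript
// identityIndexes: index specs matching the tenant-scoped query
// patterns above. Apply each with createIndex(keys, options).
const identityIndexes = [
  { collection: 'users',    keys: { tenantId: 1, email: 1 },  options: { unique: true } },
  { collection: 'users',    keys: { tenantId: 1, status: 1 }, options: {} },
  { collection: 'sessions', keys: { token: 1 },               options: { unique: true } },
  { collection: 'sessions', keys: { expiresAt: 1 },           options: { expireAfterSeconds: 0 } }, // TTL: expire at expiresAt
  { collection: 'roles',    keys: { tenantId: 1, name: 1 },   options: { unique: true } },
];
```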

Why Separate from Metadata?

The identities database is separate from metadata/knowledge because:

  • Different Access Patterns: Auth queries are frequent, hot, and simple
  • Security Isolation: Easier to apply strict security controls
  • Scale Independently: Users/auth can scale separately from business data
  • Clear Boundaries: Authentication is a distinct concern from domain data

📋 Frequently Asked Questions (FAQs)

Q: What's new in v2.4.0?

A: Xronox v2.4.0 is a rebranding of chronos-db; the API is unchanged, and the release also adds:

  • New Package Name: Now published as xronox (formerly chronos-db)
  • Messaging Database: First-class messaging database type for Chronow integration (pub/sub audit, DLQ, shared memory)
  • Identities Database: identities database type for users, accounts, auth, permissions, roles
  • Big Data + SaaS Positioning: Optimized for both Big Data platforms AND SaaS applications
  • Cost Optimization: R&D, server, and storage cost reductions documented
  • MongoDB-Only Simple DBs: Both messaging and identities are simple, fast, no versioning overhead

Q: What was new in v2.2.0 (chronos-db)?

A: v2.2.0 introduced major new features:

  • Entity Relationships: insertWithEntities and getWithEntities for automatic entity management
  • Tiered Fetching: getKnowledge and getMetadata with automatic fallback/merge across tiers
  • Deep Merge Utility: Smart merging of records from multiple tiers with array union
  • Azure Storage Support: Native support for Azure Blob Storage alongside S3-compatible providers
  • Enhanced Analytics: Unique counting, time-based rules, and cross-tenant analytics
  • Worker Integration: Comprehensive worker integration documentation
  • Connection Reuse: Define MongoDB/S3/Azure connections once, reference everywhere
  • Simplified Configuration: No more nested tenant wrappers

Q: How does tenant isolation work?

A: Xronox provides multiple levels of tenant isolation:

  • Database Level: Each tenant can have separate databases
  • Infrastructure Level: Each tenant can have separate MongoDB clusters and S3 buckets
  • Network Level: Complete network isolation between tenants
  • Access Control: Tenant context required for all operations

Q: What are the security best practices?

A:

  • Use separate infrastructure for high-security tenants
  • Enable logical delete for GDPR compliance
  • Enable versioning for audit trails
  • Use connection reuse to reduce attack surface
  • Implement proper access controls at the application level

Q: How do I handle big data scenarios?

A: Xronox is optimized for big data:

  • Horizontal scaling across multiple MongoDB clusters
  • S3 integration for large datasets
  • Batch operations with write optimization
  • Integrated analytics for business intelligence
  • Connection pooling for high throughput

Q: How do I migrate from chronos-db to xronox?

A: For v2.4.0, just update your package.json and imports:

npm uninstall chronos-db
npm install xronox@^2.4.0

Then update your imports from 'chronos-db' to 'xronox'. The API is 100% compatible.

Q: Can I use xronox without S3?

A: Yes! Use localStorage for development/testing:

const xronox = initXronox({
  dbConnections: { 'local': { mongoUri: 'mongodb://localhost:27017' } },
  spacesConnections: {},
  databases: { /* your config */ },
  localStorage: { enabled: true, basePath: './data' }
});

Made with ❤️ for enterprise-grade data management

Package last updated on 09 Oct 2025
