@kadi.build/local-remote-file-manager-ability

Local & Remote File Management System with S3-compatible container registry, HTTP server provider, file streaming, and comprehensive testing suite

Version: 0.0.1 (latest) · Published on npm · Maintainers: 2

Local & Remote File Manager

A comprehensive Node.js CLI tool and library for local file management, with advanced features including real-time file watching, compression/decompression, secure temporary file sharing via tunneling, and S3-compatible object storage. This unified file management system provides powerful local operations today, with remote server operations planned for future releases.

License: MIT

🌟 Features

  • 📁 Complete Local File Management: Full CRUD operations for files and folders with advanced path handling
  • 👁️ Real-time File Watching: Monitor file and directory changes with event filtering and callbacks
  • 🗜️ Advanced Compression: ZIP and TAR.GZ compression/decompression with progress tracking
  • 🌐 Secure File Sharing: Temporary URL generation with tunnel-based sharing (ngrok/localtunnel integration)
  • 🌐 HTTP Server Provider: Complete HTTP server management with static file serving and tunnel integration
  • ⚡ Enhanced File Streaming: Optimized file streaming with range requests and progress tracking
  • 🔐 S3-Compatible Object Storage: Full S3 endpoints with authentication, bucket mapping, and analytics
  • 📊 Real-Time Monitoring Dashboard: Live progress tracking with visual dashboard and download analytics
  • 🔄 Auto-Shutdown Management: Intelligent shutdown triggers based on download completion or timeout
  • 📢 Event Notification System: Comprehensive event system with console, file, and webhook notifications
  • 🖥️ Production-Ready CLI: Complete command-line interface for all features with interactive help
  • ⚡ High Performance: Efficient memory usage, streaming for large files, and batch operations
  • 🛠️ CLI & Library: Use as command-line tool or integrate as Node.js library
  • 🔧 Robust Error Handling: Comprehensive error handling and retry logic
  • 📊 Progress Tracking: Real-time progress for long-running operations
  • 🎯 Path Management: Automatic folder creation and path normalization
  • 🧪 Comprehensive Testing: Full test suite with 225/225 tests passing (100% success rate)
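Several bullets above name standard HTTP mechanics. The range-request support in the streaming feature, for instance, follows ordinary `Range: bytes=start-end` semantics; a minimal parser sketch (an illustration of the idea, not this package's internal code):

```javascript
// Parse a single-range "Range: bytes=start-end" header against a known file size.
// Returns { start, end } (inclusive byte offsets) or null when absent/unsatisfiable.
function parseRange(header, fileSize) {
  const match = /^bytes=(\d*)-(\d*)$/.exec(header || '');
  if (!match || (match[1] === '' && match[2] === '')) return null;

  let start, end;
  if (match[1] === '') {
    // Suffix form "bytes=-N": the final N bytes of the file.
    const suffix = Number(match[2]);
    start = Math.max(fileSize - suffix, 0);
    end = fileSize - 1;
  } else {
    start = Number(match[1]);
    end = match[2] === '' ? fileSize - 1 : Math.min(Number(match[2]), fileSize - 1);
  }
  return start > end || start >= fileSize ? null : { start, end };
}
```

A 206 response would then stream `fs.createReadStream(path, { start, end })` with a matching `Content-Range` header.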

🚀 Installation

As a CLI Tool

git clone <repository-url>
cd local-remote-file-manager-ability
npm install
npm run setup

Global Installation

npm install -g local-remote-file-manager

As a Node.js Library

npm install local-remote-file-manager
const { createManager, compressFile } = require('local-remote-file-manager');

// Quick start - factory functions
const manager = await createManager();
const files = await manager.getProvider('local').list('./');

// Quick compression
await compressFile('./my-folder', './archive.zip');

📖 See USAGE.md for complete examples and INTEGRATION-EXAMPLE.md for real-world integration patterns.

⚡ Quick Start

CLI Quick Start

  • Install dependencies

    npm install
    
  • Test your setup

    npm test
    # or test specific features
    npm run test:cli
    npm run test:local
    npm run test:s3
    
  • Basic file operations

    node index.js copy --source document.pdf --target ./uploads/document.pdf
    node index.js upload --file data.zip --target ./uploads/
    node index.js list --directory ./uploads
    
  • Start S3-compatible file server

    node index.js serve-s3 ./files --port 5000 --auth
    # Access at http://localhost:5000/default/<filename>
    
  • S3 server with bucket mapping

    node index.js serve-s3 ./storage \
      --bucket containers:./storage/containers \
      --bucket images:./storage/images \
      --port 9000 --auth --tunnel
    
  • File server with auto-shutdown

    node index.js serve-s3 ./content --port 8000 \
      --auto-shutdown --shutdown-delay 30000
    
  • Start watching a directory

    node index.js watch ./documents
    
  • Compress files

    node index.js compress --file ./large-file.txt --output ./compressed.zip
    
  • Share a file temporarily

    node index.js share ./document.pdf --expires 30m
    

📊 Development Status

✅ Completed Phases (Production Ready)

| Phase | Status | Features | Test Results |
| --- | --- | --- | --- |
| Phase 1: Foundation & Local CRUD | ✅ Complete | File/folder CRUD, path management, search operations | 33/33 tests passing (100%) |
| Phase 2: File/Directory Watching | ✅ Complete | Real-time monitoring, event filtering, recursive watching | 24/24 tests passing (100%) |
| Phase 3: Compression/Decompression | ✅ Complete | ZIP/TAR.GZ support, batch operations, progress tracking | 30/30 tests passing (100%) |
| Phase 4: Tunneling & Temp URLs | ✅ Complete | Secure file sharing, temporary URLs, multiple tunnel services | 35/35 tests passing (100%) |
| Phase 5: HTTP Server Provider | ✅ Complete | HTTP server management, static file serving, tunnel integration | 22/22 tests passing (100%) |
| Phase 6: File Streaming Enhancement | ✅ Complete | Enhanced streaming, range requests, MIME detection, progress tracking | 12/12 tests passing (100%) |
| Phase 7: S3 Object Storage | ✅ Complete | S3-compatible endpoints, authentication, bucket/key mapping, analytics | 31/31 tests passing (100%) |
| Phase 8: Auto-Shutdown & Monitoring | ✅ Complete | Real-time monitoring dashboard, auto-shutdown triggers, event notifications | 22/22 tests passing (100%) |
| Phase 9: CLI Integration | ✅ Complete | Complete CLI interface for all features, S3 server commands, validation | 16/16 tests passing (100%) |

🎯 Overall Project Health

  • Total Tests: 225/225 automated tests passing
  • Pass Rate: 100% across all implemented features
  • Code Coverage: Comprehensive test coverage for all providers
  • Performance: Optimized for large files and high-volume operations
  • Stability: Production-ready with full CLI integration

💻 CLI Usage

🔧 System Commands

System information and validation:

node index.js --help                   # Show all available commands
node index.js test                     # Test all providers
node index.js test --provider local    # Test specific provider
node index.js validate                 # Validate configuration
node index.js info                     # Show system information

๐Ÿ“ File Operations

Basic file management:

# Upload/copy files
node index.js upload --file document.pdf --target ./uploads/document.pdf
node index.js copy --source ./file.pdf --target ./backup/file.pdf

# Download files (local copy)
node index.js download --source ./uploads/document.pdf --target ./downloads/

# Move and rename files
node index.js move --source ./file.pdf --target ./archive/file.pdf
node index.js rename --file ./old-name.pdf --name new-name.pdf

# Delete files
node index.js delete --file ./old-file.pdf --yes

# List and search files
node index.js list --directory ./uploads
node index.js list --directory ./uploads --recursive
node index.js search --query "*.pdf" --directory ./uploads
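The `--query "*.pdf"` pattern above is a shell-style glob. Roughly, matching such a pattern comes down to translating it into an anchored regular expression (a hypothetical helper, not the tool's actual matcher):

```javascript
// Convert a simple glob (supporting * and ?) into an anchored, case-insensitive RegExp.
function globToRegExp(glob) {
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, '\\$&'); // escape regex metachars
  const pattern = escaped.replace(/\*/g, '.*').replace(/\?/g, '.');
  return new RegExp(`^${pattern}$`, 'i');
}

const matcher = globToRegExp('*.pdf');
// ['report.pdf', 'notes.txt'].filter((f) => matcher.test(f)) keeps only report.pdf
```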

Folder operations:

# Create and manage directories
node index.js mkdir --directory ./new-folder
node index.js ls-folders --directory ./uploads
node index.js rmdir --directory ./old-folder --recursive --yes

๐Ÿ‘๏ธ File Watching

Start and manage file watching:

# Start watching
node index.js watch ./documents                    # Watch directory
node index.js watch ./file.txt --no-recursive      # Watch single file
node index.js watch ./project --events add,change  # Filter events

# Manage watchers
node index.js watch-list                # List active watchers
node index.js watch-list --verbose      # Detailed watcher info
node index.js watch-status             # Show watching statistics
node index.js watch-stop ./documents   # Stop specific watcher
node index.js watch-stop --all         # Stop all watchers
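Options such as `--events add,change` amount to filtering raw watcher events before they reach any callback. A minimal sketch of that filtering step (illustrative only, assuming chokidar-style event names; not the package's implementation):

```javascript
// Decide whether a raw watcher event should be forwarded, given watch options.
function shouldForwardEvent(event, filePath, options = {}) {
  const { events = null, ignoreDotfiles = false } = options;
  if (events && !events.includes(event)) return false;      // event-type filter
  const name = filePath.split(/[\\/]/).pop();               // basename on either separator
  if (ignoreDotfiles && name.startsWith('.')) return false; // dotfile filter
  return true;
}
```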

๐Ÿ—œ๏ธ Compression Operations

Compress and decompress files:

# Basic compression
node index.js compress --file ./document.pdf --output ./compressed.zip
node index.js compress --file ./folder --output ./archive.tar.gz --format tar.gz
node index.js compress --file ./data --output ./backup.zip --level 9

# Decompression
node index.js decompress --file ./archive.zip --directory ./extracted/
node index.js decompress --file ./backup.tar.gz --directory ./restored/ --overwrite

# Batch operations
node index.js compress-batch --directory ./files --output ./archives/
node index.js decompress-batch --directory ./archives --output ./extracted/

# Compression status
node index.js compression-status

๐ŸŒ File Sharing & Tunneling

Share files temporarily:

# Basic file sharing
node index.js share ./document.pdf                 # Default 1h expiration
node index.js share ./folder --expires 30m         # 30 minutes
node index.js share ./file.zip --expires 2h        # 2 hours
node index.js share ./project --multi-download     # Allow multiple downloads

# Advanced sharing options
node index.js share ./data.zip --expires 24h --keep-alive --no-auto-shutdown

# Tunnel management
node index.js tunnel-status             # Show active tunnels and URLs
node index.js tunnel-cleanup           # Clean up expired URLs and tunnels
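Expiration values like `30m` and `24h` are compact duration strings. A hypothetical parser for that format (the exact unit set the CLI accepts beyond `m` and `h` is an assumption here):

```javascript
// Parse "30m" / "2h" style durations into milliseconds.
// Supports s(econds), m(inutes), h(ours), d(ays); returns null on bad input.
function parseDuration(text) {
  const match = /^(\d+)([smhd])$/.exec(text);
  if (!match) return null;
  const unitMs = { s: 1000, m: 60000, h: 3600000, d: 86400000 };
  return Number(match[1]) * unitMs[match[2]];
}
```

The resulting millisecond value would feed directly into the expiry timestamp attached to each temporary URL.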

๐Ÿ” S3-Compatible File Server

Start S3 server (Core Feature):

# Basic S3 server
node index.js serve-s3 ./storage --port 5000
node index.js serve-s3 ./storage --port 5000 --auth    # With authentication

# S3 server with bucket mapping
node index.js serve-s3 ./storage \
  --bucket containers:./storage/containers \
  --bucket images:./storage/images \
  --bucket docs:./storage/documents \
  --port 9000 --auth

# S3 server with tunnel (public access)
node index.js serve-s3 ./content \
  --port 8000 --tunnel --tunnel-service ngrok \
  --name my-public-server

# S3 server with monitoring
node index.js serve-s3 ./data \
  --port 7000 --monitor --interactive \
  --name monitoring-server

S3 server with auto-shutdown:

# Auto-shutdown after downloads
node index.js serve-s3 ./container-storage \
  --port 9000 --auto-shutdown \
  --shutdown-delay 30000 --max-idle 600000

# Background server mode
node index.js serve-s3 ./storage \
  --port 5000 --background --name bg-server

# Container registry example
node index.js serve-s3 ./containers \
  --bucket containers:./containers \
  --bucket registry:./registry \
  --port 9000 --auto-shutdown \
  --name container-registry
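The auto-shutdown flags above combine several triggers (completion, idle timeout, total runtime). The decision logic can be pictured as a pure function over server state; this is a sketch under assumed semantics, not the package's actual code:

```javascript
// Decide whether the server should schedule shutdown, mirroring
// --auto-shutdown / --shutdown-delay / --max-idle style triggers.
function shutdownDecision(state) {
  const { completed, expected, idleMs, maxIdleMs, uptimeMs, maxTotalMs } = state;
  if (expected > 0 && completed >= expected) {
    return { shutdown: true, reason: 'completion' };   // all expected downloads done
  }
  if (idleMs >= maxIdleMs) return { shutdown: true, reason: 'idle-timeout' };
  if (uptimeMs >= maxTotalMs) return { shutdown: true, reason: 'max-total-time' };
  return { shutdown: false, reason: null };
}
```

A real server would re-evaluate this on each download event and timer tick, then wait `--shutdown-delay` before actually stopping.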

S3 server management:

# Server status and control
node index.js server-status            # Show all active servers
node index.js server-status --json     # JSON output
node index.js server-stop --all        # Stop all servers
node index.js server-stop --name my-server  # Stop specific server

# Server cleanup
node index.js server-cleanup           # Clean up stopped servers

📊 Real-Time Monitoring

Monitor server activity:

# Real-time monitoring (when server started with --monitor)
# Automatically displays:
# - Active downloads with progress bars
# - Server status and uptime
# - Download completion status
# - Auto-shutdown countdown

# Interactive mode (when server started with --interactive)
# Available commands in interactive mode:
# - status: Show server status
# - downloads: Show active downloads
# - stop: Stop the server
# - help: Show available commands

🚀 NPM Scripts for Development

# Testing
npm test                    # Run all tests
npm run test:cli           # Test CLI integration
npm run test:local         # Test local operations
npm run test:watch         # Test file watching
npm run test:compression   # Test compression
npm run test:tunnel        # Test tunneling
npm run test:http          # Test HTTP server
npm run test:streaming     # Test file streaming
npm run test:s3           # Test S3 server
npm run test:monitor      # Test monitoring/auto-shutdown

# Demos
npm run demo:cli                    # CLI integration demo
npm run demo:basic                  # Basic operations demo
npm run demo:watch                  # File watching demo
npm run demo:compression            # Compression demo
npm run demo:tunnel                 # File sharing demo
npm run demo:container-registry     # 🐳 Container registry demo (simple)
npm run demo:container-registry-full # 🐳 Container registry demo (full)
npm run demo:container-registry-test # 🐳 Test container registry components

# Server shortcuts
npm run serve-s3          # Start S3 server on port 5000
npm run server-status     # Check server status
npm run server-stop       # Stop all servers

# Cleanup
npm run clean             # Clean test files
npm run clean:tests       # Clean test results

🎯 Common Use Cases

🐳 Container Registry Demo (Quick Start):

# Run the complete container registry demo
npm run demo:container-registry

# Or with real containers
npm run demo:container-registry-full

# Test the setup first
npm run demo:container-registry-test

This demo showcases:

  • ๐Ÿณ Container Export: Exports Podman/Docker containers to registry format
  • ๐ŸŒ Public Tunneling: Creates accessible HTTPS URLs via ngrok
  • ๐Ÿ”’ Secure Access: Generates temporary AWS-style credentials
  • ๐Ÿ“Š Real-time Monitoring: Shows download progress and statistics
  • โšก Auto-shutdown: Automatically cleans up when downloads complete

See Container Registry Demo for complete documentation.

Container Registry Setup:

# Set up S3-compatible container registry
node index.js serve-s3 ./container-storage \
  --bucket containers:./container-storage/containers \
  --bucket registry:./container-storage/registry \
  --port 9000 --auto-shutdown \
  --name container-registry

# Access containers at:
# http://localhost:9000/containers/manifest.json
# http://localhost:9000/containers/config.json
# http://localhost:9000/containers/layer1.tar

Public File Sharing:

# Share files with public tunnel
node index.js serve-s3 ./public-files \
  --port 8000 --tunnel --tunnel-service ngrok \
  --bucket files:./public-files \
  --name public-share

# Or temporary file sharing
node index.js share ./important-file.zip \
  --expires 24h --multi-download

Development File Server:

# Development server with monitoring
node index.js serve-s3 ./dev-content \
  --port 3000 --monitor --interactive \
  --bucket assets:./dev-content/assets \
  --bucket uploads:./dev-content/uploads

Automated Backup System:

# Watch and compress new files
node index.js watch ./documents &
# In another terminal, set up S3 server for backup access
node index.js serve-s3 ./backups \
  --bucket daily:./backups/daily \
  --bucket weekly:./backups/weekly \
  --port 9090 --auth

📚 Library Usage

Installation as Node.js Module

npm install local-remote-file-manager

Quick Start - Factory Functions

The library provides convenient factory functions for quick setup:

const { 
  createManager, 
  createS3Server, 
  compressFile, 
  watchDirectory 
} = require('local-remote-file-manager');

// Quick file operations
async function quickStart() {
  // Create a file manager with default config
  const manager = await createManager();
  
  // Get providers for different operations
  const local = manager.getProvider('local');
  const files = await local.list('./my-directory');
  
  // Quick compression
  await compressFile('./my-folder', './archive.zip');
  
  // Start file watching
  const watcher = await watchDirectory('./watched-folder');
  watcher.on('change', (data) => {
    console.log('File changed:', data.path);
  });
}

Basic Integration

const { LocalRemoteManager, ConfigManager } = require('local-remote-file-manager');

class FileManagementApp {
  constructor() {
    this.config = new ConfigManager();
    this.fileManager = null;
  }

  async initialize() {
    await this.config.load();
    this.fileManager = new LocalRemoteManager(this.config);
    
    // Set up event handling
    this.fileManager.on('fileEvent', (data) => {
      console.log('File event:', data.type, data.path);
    });
  }

  async processFile(inputPath, outputPath) {
    const local = this.fileManager.getProvider('local');
    const compression = this.fileManager.getCompressionProvider();
    
    // Copy file
    await local.copy(inputPath, outputPath);
    
    // Compress file
    const result = await compression.compress(
      outputPath, 
      outputPath.replace(/\.[^/.]+$/, '.zip')
    );
    
    return result;
  }
}

S3-Compatible Server

const { createS3Server } = require('local-remote-file-manager');
const { S3HttpServer } = require('local-remote-file-manager/src/s3Server'); // used by ContainerRegistryServer below

async function createFileServer() {
  const server = createS3Server({
    port: 5000,
    rootDirectory: './storage',
    bucketMapping: new Map([
      ['public', './public-files'],
      ['private', './private-files']
    ]),
    
    // Authentication
    authentication: {
      enabled: true,
      tempCredentials: true
    },
    
    // Monitoring and auto-shutdown
    monitoring: {
      enabled: true,
      dashboard: true
    },
    autoShutdown: {
      enabled: true,
      timeout: 3600000 // 1 hour
    }
  });
  
  // Event handling
  server.on('request', (data) => {
    console.log(`${data.method} ${data.path}`);
  });
  
  server.on('download', (data) => {
    console.log(`Downloaded: ${data.path} (${data.size} bytes)`);
  });
  
  await server.start();
  console.log('S3 server running on http://localhost:5000');
  
  return server;
}

class ContainerRegistryServer {
  constructor() {
    this.server = null;
  }

  async startWithAutoShutdown() {
    // Create S3 server with auto-shutdown and monitoring
    this.server = new S3HttpServer({
      port: 9000,
      serverName: 'container-registry',
      rootDirectory: './container-storage',
      
      // Auto-shutdown configuration
      enableAutoShutdown: true,
      shutdownOnCompletion: true,
      shutdownTriggers: ['completion', 'timeout', 'manual'],
      completionShutdownDelay: 30000, // 30 seconds after completion
      maxIdleTime: 600000, // 10 minutes idle
      maxTotalTime: 3600000, // 1 hour maximum
      
      // Real-time monitoring
      enableRealTimeMonitoring: true,
      enableDownloadTracking: true,
      monitoringUpdateInterval: 2000, // 2 seconds
      
      // Event notifications
      enableEventNotifications: true,
      notificationChannels: ['console', 'file'],
      
      // S3 configuration
      enableAuth: false, // Simplified for container usage
      bucketMapping: new Map([
        ['containers', 'container-files'],
        ['registry', 'registry-data']
      ])
    });

    // Setup event listeners
    this.setupEventListeners();

    // Start the server
    const result = await this.server.start();
    console.log(`🚀 Container registry started: ${result.localUrl}`);
    
    // Configure expected downloads for auto-shutdown
    await this.configureExpectedDownloads();
    
    return result;
  }

  setupEventListeners() {
    // Download progress tracking
    this.server.on('downloadStarted', (info) => {
      console.log(`📥 Download started: ${info.bucket}/${info.key} (${this.formatBytes(info.fileSize)})`);
    });

    this.server.on('downloadCompleted', (info) => {
      console.log(`✅ Download completed: ${info.bucket}/${info.key} in ${info.duration}ms`);
    });

    this.server.on('downloadFailed', (info) => {
      console.log(`โŒ Download failed: ${info.bucket}/${info.key} - ${info.error}`);
    });

    // Auto-shutdown events
    this.server.on('allDownloadsComplete', (info) => {
      console.log(`🎉 All downloads complete! Auto-shutdown will trigger in ${info.shutdownDelay / 1000}s`);
    });

    this.server.on('shutdownScheduled', (info) => {
      console.log(`โฐ Server shutdown scheduled: ${info.reason} (${Math.round(info.delay / 1000)}s)`);
    });

    this.server.on('shutdownWarning', (info) => {
      console.log(`⚠️  Server shutting down in ${Math.round(info.timeRemaining / 1000)} seconds`);
    });
  }

  async configureExpectedDownloads() {
    // Set expected container downloads
    const expectedDownloads = [
      { bucket: 'containers', key: 'manifest.json', size: 1024 },
      { bucket: 'containers', key: 'config.json', size: 512 },
      { bucket: 'containers', key: 'layer-1.tar', size: 1048576 }, // 1MB
      { bucket: 'containers', key: 'layer-2.tar', size: 2097152 }, // 2MB
    ];

    const result = this.server.setExpectedDownloads(expectedDownloads);
    if (result.success) {
      console.log(`📋 Configured ${result.expectedCount} expected downloads (${this.formatBytes(result.totalBytes)} total)`);
    }
  }

  formatBytes(bytes) {
    if (bytes === 0) return '0 B';
    const k = 1024;
    const sizes = ['B', 'KB', 'MB', 'GB'];
    const i = Math.floor(Math.log(bytes) / Math.log(k));
    return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
  }

  async getMonitoringData() {
    // Get real-time monitoring data
    return {
      serverStatus: this.server.getStatus(),
      downloadStats: this.server.getDownloadStats(),
      dashboardData: this.server.getMonitoringData(),
      completionStatus: this.server.getDownloadCompletionStatus()
    };
  }

  async gracefulShutdown() {
    console.log('🔄 Initiating graceful shutdown...');
    await this.server.stop({ graceful: true, timeout: 30000 });
    console.log('✅ Server stopped gracefully');
  }
}

// Usage example
async function runContainerRegistry() {
  const registry = new ContainerRegistryServer();
  
  try {
    // Start server with monitoring
    await registry.startWithAutoShutdown();
    
    // Server will automatically shut down when all expected downloads complete
    // or after timeout periods are reached
    
    // Manual shutdown if needed
    process.on('SIGINT', async () => {
      await registry.gracefulShutdown();
      process.exit(0);
    });
    
  } catch (error) {
    console.error('Failed to start container registry:', error);
  }
}

Advanced File Watching

const { LocalRemoteManager } = require('local-remote-file-manager');

class DocumentWatcher {
  constructor() {
    this.fileManager = new LocalRemoteManager();
    this.setupEventHandlers();
  }

  setupEventHandlers() {
    this.fileManager.on('fileAdded', (event) => {
      console.log(`New file detected: ${event.filePath}`);
      this.processNewFile(event.filePath);
    });

    this.fileManager.on('fileChanged', (event) => {
      console.log(`File modified: ${event.filePath}`);
      this.handleFileChange(event.filePath);
    });

    this.fileManager.on('fileRemoved', (event) => {
      console.log(`File deleted: ${event.filePath}`);
      this.handleFileRemoval(event.filePath);
    });
  }

  async startWatching(directory) {
    const watchResult = await this.fileManager.startWatching(directory, {
      recursive: true,
      events: ['add', 'change', 'unlink'],
      ignoreDotfiles: true
    });

    console.log(`Started watching: ${watchResult.watchId}`);
    return watchResult;
  }

  async processNewFile(filePath) {
    try {
      // Auto-compress large files
      const fileInfo = await this.fileManager.getFileInfo(filePath);
      if (fileInfo.size > 10 * 1024 * 1024) { // 10MB
        const compressedPath = filePath + '.zip';
        await this.fileManager.compressFile(filePath, compressedPath);
        console.log(`Auto-compressed large file: ${compressedPath}`);
      }
    } catch (error) {
      console.error(`Failed to process new file: ${error.message}`);
    }
  }
}

HTTP Server with Tunnel Integration

const { LocalRemoteManager } = require('local-remote-file-manager');

class FileServerApp {
  constructor() {
    this.fileManager = new LocalRemoteManager();
  }

  async startTunneledServer(contentDirectory) {
    // Create HTTP server with automatic tunnel
    const serverInfo = await this.fileManager.createTunneledServer({
      port: 3000,
      rootDirectory: contentDirectory,
      tunnelService: 'serveo' // or 'ngrok'
    });

    console.log(`🌐 Local server: http://localhost:${serverInfo.port}`);
    console.log(`🔗 Public URL: ${serverInfo.tunnelUrl}`);
    
    return serverInfo;
  }

  async addCustomRoutes(serverId) {
    // Add high-priority custom routes that override generic patterns
    await this.fileManager.addCustomRoute(
      serverId, 
      'GET', 
      '/api/status', 
      (req, res) => {
        res.json({ status: 'active', timestamp: new Date().toISOString() });
      },
      { priority: 100 } // High priority overrides generic /:bucket/* routes
    );

    // Add API endpoint with medium priority
    await this.fileManager.addCustomRoute(
      serverId,
      'GET',
      '/api/:version/health',
      (req, res) => {
        res.json({ health: 'ok', version: req.params.version });
      },
      { priority: 50 }
    );

    // Lower priority route (will be handled after higher priority routes)
    await this.fileManager.addCustomRoute(
      serverId,
      'GET',
      '/docs/:page',
      (req, res) => {
        res.send(`Documentation page: ${req.params.page}`);
      },
      { priority: 10 }
    );

    console.log('✅ Custom routes added with priority-based routing');
  }

  async serveFiles() {
    const server = await this.startTunneledServer('./public-files');
    
    // Add custom routes with priority system
    await this.addCustomRoutes(server.serverId);
    
    // Monitor server status
    setInterval(async () => {
      const status = await this.fileManager.getServerStatus(server.serverId);
      console.log(`📊 Server status: ${status.status}, Requests: ${status.requestCount}`);
    }, 30000);

    return server;
  }
}

S3-Compatible Object Storage

const { S3HttpServer } = require('local-remote-file-manager/src/s3Server');

class S3ObjectStorage {
  constructor() {
    this.s3Server = new S3HttpServer({
      port: 9000,
      serverName: 'my-s3-server',
      rootDirectory: './s3-storage',
      enableAuth: true,
      bucketMapping: new Map([
        ['documents', 'user-docs'],
        ['images', 'media/images'],
        ['backups', 'backup-storage']
      ]),
      bucketAccessControl: new Map([
        ['documents', { read: true, write: true }],
        ['images', { read: true, write: false }],
        ['backups', { read: true, write: true }]
      ])
    });
  }

  async start() {
    const serverInfo = await this.s3Server.start();
    console.log(`🗄️ S3 Server running on port ${serverInfo.port}`);

    // Generate temporary credentials
    const credentials = this.s3Server.generateTemporaryCredentials({
      permissions: ['read', 'write'],
      buckets: ['documents', 'backups'],
      expiryMinutes: 60
    });

    console.log(`🔑 Access Key: ${credentials.accessKey}`);
    console.log(`🔐 Secret Key: ${credentials.secretKey}`);
    
    return { serverInfo, credentials };
  }

  async enableMonitoring() {
    // Start real-time monitoring dashboard
    this.s3Server.startMonitoringDashboard({
      updateInterval: 2000,
      showServerStats: true,
      showDownloadProgress: true,
      showActiveDownloads: true,
      showShutdownStatus: true
    });

    // Setup download analytics
    this.s3Server.on('downloadCompleted', (info) => {
      console.log(`📊 Download analytics: ${info.key} (${info.bytes} bytes in ${info.duration}ms)`);

      // Get real-time dashboard data
      const dashboardData = this.s3Server.getMonitoringData();
      console.log(`📈 Total downloads: ${dashboardData.downloadStats.totalDownloads}`);
      console.log(`⚡ Average speed: ${this.formatSpeed(dashboardData.downloadStats.averageSpeed)}`);
    });

    return true;
  }

  formatSpeed(bytesPerSecond) {
    if (bytesPerSecond < 1024) return `${bytesPerSecond} B/s`;
    if (bytesPerSecond < 1024 * 1024) return `${(bytesPerSecond / 1024).toFixed(1)} KB/s`;
    return `${(bytesPerSecond / (1024 * 1024)).toFixed(1)} MB/s`;
  }
}

// CLI Equivalent Commands:
// Instead of complex library setup, use simple CLI commands:

// Start S3 server with authentication and bucket mapping
// node index.js serve-s3 ./s3-storage --port 9000 --auth \
//   --bucket documents:./s3-storage/user-docs \
//   --bucket images:./s3-storage/media/images \
//   --bucket backups:./s3-storage/backup-storage \
//   --monitor --name my-s3-server

// S3 server with auto-shutdown for container registry
// node index.js serve-s3 ./containers --port 9000 \
//   --bucket containers:./containers \
//   --auto-shutdown --monitor --name container-registry

// S3 server with tunnel for public access
// node index.js serve-s3 ./public-files --port 8000 \
//   --tunnel --tunnel-service ngrok --bucket files:./public-files

// Access via S3-compatible endpoints:
// GET http://localhost:9000/documents/myfile.pdf
// HEAD http://localhost:9000/images/photo.jpg

CLI Integration Examples

The CLI provides direct access to all library features with simple commands:

// Library approach (complex setup):
const server = new S3HttpServer({
  enableAutoShutdown: true,
  shutdownTriggers: ['completion'],
  completionShutdownDelay: 30000,
  enableRealTimeMonitoring: true
});
await server.start();

// CLI approach (simple command):
// node index.js serve-s3 ./storage --auto-shutdown --shutdown-delay 30000 --monitor

// Multiple operations with library require coordination:
// 1. Set up file watcher
// 2. Set up compression handler 
// 3. Set up S3 server
// 4. Coordinate between them

// CLI approach - each command handles coordination:
// Terminal 1: node index.js watch ./documents
// Terminal 2: node index.js serve-s3 ./storage --port 9000 --monitor
// Terminal 3: node index.js compress-batch --directory ./documents --output ./archives/

Container Registry Use Case

# Complete container registry setup with CLI:
mkdir -p ./container-storage/containers ./container-storage/registry

# Start S3-compatible container registry
node index.js serve-s3 ./container-storage \
  --bucket containers:./container-storage/containers \
  --bucket registry:./container-storage/registry \
  --port 9000 --auto-shutdown --monitor \
  --name container-registry

# Server automatically shuts down after container downloads complete
# Real-time monitoring shows download progress and completion status
# Access containers at: http://localhost:9000/containers/<filename>

Real-Time Monitoring Dashboard

const { MonitoringDashboard, DownloadMonitor } = require('local-remote-file-manager');

class LiveMonitoringSystem {
  constructor() {
    this.dashboard = new MonitoringDashboard({
      updateInterval: 1000,
      showServerStats: true,
      showDownloadProgress: true,
      showActiveDownloads: true,
      showShutdownStatus: true
    });

    this.downloadMonitor = new DownloadMonitor({
      trackPartialDownloads: true,
      progressUpdateInterval: 1000
    });
  }

  async startMonitoring(s3Server) {
    // Connect monitoring to S3 server
    this.dashboard.connectToServer(s3Server);
    this.downloadMonitor.connectToServer(s3Server);

    // Start real-time dashboard
    this.dashboard.start();

    // Setup download tracking
    this.downloadMonitor.on('downloadStarted', (info) => {
      this.dashboard.addActiveDownload(info);
    });

    this.downloadMonitor.on('downloadProgress', (info) => {
      this.dashboard.updateDownloadProgress(info.downloadId, info);
    });

    this.downloadMonitor.on('downloadCompleted', (info) => {
      this.dashboard.completeDownload(info.downloadId, info);
    });

    // Example dashboard output:
    /*
    +------------------------------------------------------------------------------+
    |                           S3 Object Storage Server                           |
    +------------------------------------------------------------------------------+
    | Status: RUNNING                                                  Uptime: 45s |
    | Port: 9000                        Public URL: http://localhost:9000          |
    +------------------------------------------------------------------------------+
    | Downloads Progress                                                           |
    | ██████████████████░░░░░░░░░░░░ 3/5 (60%)                                     |
    | Active Downloads: 2                             Completed: 3   Failed: 0     |
    | Speed: 1.2 MB/s                                           Total: 2.1 MB     |
    +------------------------------------------------------------------------------+
    | Active Downloads                                                             |
    | ▶ layer-2.tar (1.2 MB) ████████████████████████░░░░░░ 80% @ 450 KB/s         |
    | ▶ layer-3.tar (512 KB) ██████████████░░░░░░░░░░░░░░░░ 45% @ 230 KB/s         |
    +------------------------------------------------------------------------------+
    | Auto-Shutdown: ON                               Trigger: Completion + 30s    |
    | Next Check: 00:00:05                                   Status: Monitoring    |
    +------------------------------------------------------------------------------+
    */

    console.log('📊 Real-time monitoring dashboard started');
    return true;
  }

  async stopMonitoring() {
    this.dashboard.stop();
    this.downloadMonitor.stop();
    console.log('📊 Monitoring stopped');
  }

  async getAnalytics() {
    return {
      dashboard: this.dashboard.getCurrentData(),
      downloads: this.downloadMonitor.getStatistics(),
      performance: this.downloadMonitor.getPerformanceMetrics()
    };
  }
}

```

```javascript
// Monitoring helpers on the S3ObjectStorage class (continued)
async enableMonitoring() {
  // Start the real-time monitoring dashboard
  this.s3Server.startRealTimeMonitoring({ interval: 1000, enableConsole: true });

  // Track download events
  this.s3Server.on('download:started', (info) => {
    console.log(`📥 Download started: ${info.bucket}/${info.key}`);
  });

  this.s3Server.on('download:completed', (info) => {
    console.log(`✅ Download completed: ${info.bucket}/${info.key} (${info.size} bytes)`);
  });
}

async getAnalytics() {
  const analytics = this.s3Server.generateDownloadAnalytics({ includeDetails: true });

  console.log(`📊 Total Downloads: ${analytics.summary.totalDownloads}`);
  console.log(`🚀 Average Speed: ${analytics.performance.averageSpeed}`);
  console.log(`⏱️ Server Uptime: ${analytics.performance.uptime}s`);

  return analytics;
}
```

```javascript
// Usage
const storage = new S3ObjectStorage();
await storage.start();
await storage.enableMonitoring();

// Access via S3-compatible endpoints:
// GET  http://localhost:9000/documents/myfile.pdf
// HEAD http://localhost:9000/images/photo.jpg
```


### Enhanced File Streaming

```javascript
const { FileStreamingUtils, DownloadTracker } = require('local-remote-file-manager');

class StreamingFileServer {
  async serveFileWithProgress(filePath, response, rangeHeader = null) {
    try {
      // Get file information
      const fileInfo = await FileStreamingUtils.getFileInfo(filePath);
      console.log(`📄 Serving: ${fileInfo.name} (${fileInfo.size} bytes)`);

      // Create download tracker
      const tracker = new DownloadTracker(fileInfo.size);
      
      // Handle range request if specified
      let streamOptions = {};
      if (rangeHeader) {
        const range = FileStreamingUtils.parseRangeHeader(rangeHeader, fileInfo.size);
        if (range.isValid) {
          streamOptions = { start: range.start, end: range.end };
          response.statusCode = 206; // Partial Content
          response.setHeader('Content-Range', 
            FileStreamingUtils.formatContentRange(range.start, range.end, fileInfo.size)
          );
        }
      }

      // Set response headers
      response.setHeader('Content-Type', FileStreamingUtils.getMimeType(filePath));
      response.setHeader('Content-Length', streamOptions.end ? 
        (streamOptions.end - streamOptions.start + 1) : fileInfo.size);
      response.setHeader('ETag', FileStreamingUtils.generateETag(fileInfo));
      response.setHeader('Last-Modified', fileInfo.lastModified.toUTCString());
      response.setHeader('Accept-Ranges', 'bytes');

      // Create and pipe stream
      const stream = await FileStreamingUtils.createReadStream(filePath, streamOptions);
      
      stream.on('data', (chunk) => {
        tracker.updateProgress(chunk.length);
        const progress = tracker.getProgress();
        console.log(`📈 Progress: ${progress.percentage}% (${progress.speed}/s)`);
      });

      stream.on('end', () => {
        console.log(`✅ Transfer complete: ${filePath}`);
      });

      stream.pipe(response);
      
    } catch (error) {
      console.error(`โŒ Streaming error: ${error.message}`);
      response.status = 500;
      response.end('Internal Server Error');
    }
  }
}

```

### Batch File Operations

```javascript
const { LocalRemoteManager } = require('local-remote-file-manager');

class BatchFileProcessor {
  constructor() {
    this.fileManager = new LocalRemoteManager();
  }

  async backupDocuments(sourceDirectory, backupDirectory) {
    // List all files
    const files = await this.fileManager.listFiles(sourceDirectory, { recursive: true });
    
    // Filter for documents
    const documents = files.filter(file => 
      /\.(pdf|doc|docx|txt|md)$/i.test(file.name)
    );

    console.log(`Found ${documents.length} documents to backup`);

    // Batch compress all documents
    const compressionResults = await this.fileManager.compressMultipleFiles(
      documents.map(doc => doc.path),
      backupDirectory,
      {
        format: 'zip',
        compressionLevel: 6,
        preserveStructure: true
      }
    );

    // Generate temporary share URLs for all backups
    const shareResults = await Promise.all(
      compressionResults.successful.map(async (result) => {
        return await this.fileManager.createShareableUrl(result.outputPath, {
          expiresIn: '24h',
          downloadLimit: 5
        });
      })
    );

    return {
      processed: documents.length,
      compressed: compressionResults.successful.length,
      failed: compressionResults.failed.length,
      shared: shareResults.length,
      shareUrls: shareResults.map(r => r.shareableUrl)
    };
  }

  async syncDirectories(sourceDir, targetDir) {
    const sourceFiles = await this.fileManager.listFiles(sourceDir, { recursive: true });
    const targetFiles = await this.fileManager.listFiles(targetDir, { recursive: true });
    
    const results = {
      copied: [],
      updated: [],
      errors: []
    };

    for (const file of sourceFiles) {
      try {
        const targetPath = file.path.replace(sourceDir, targetDir);
        const targetExists = targetFiles.some(t => t.path === targetPath);

        if (!targetExists) {
          await this.fileManager.copyFile(file.path, targetPath);
          results.copied.push(file.path);
        } else {
          const sourceInfo = await this.fileManager.getFileInfo(file.path);
          const targetInfo = await this.fileManager.getFileInfo(targetPath);
          
          if (sourceInfo.lastModified > targetInfo.lastModified) {
            await this.fileManager.copyFile(file.path, targetPath);
            results.updated.push(file.path);
          }
        }
      } catch (error) {
        results.errors.push({ file: file.path, error: error.message });
      }
    }

    return results;
  }
}

```

## 🧪 Testing

### Automated Testing

Run comprehensive tests for all features:

```bash
npm test                   # Interactive test selection
npm run test:all           # Test all providers sequentially
npm run test:cli           # Test CLI integration (NEW)
npm run test:local         # Test local file operations
npm run test:watch         # Test file watching
npm run test:compression   # Test compression features
npm run test:tunnel        # Test tunneling and sharing
npm run test:http          # Test HTTP server provider
npm run test:streaming     # Test enhanced file streaming
npm run test:s3            # Test S3-compatible object storage
npm run test:monitor       # Test auto-shutdown & monitoring
```

### Test Coverage by Feature

#### Phase 1: Local File Operations (33/33 tests passing)

- ✅ **Basic File Operations**: Upload, download, copy, move, rename, delete
- ✅ **Folder Operations**: Create, list, delete, rename folders
- ✅ **Path Management**: Absolute/relative paths, normalization, validation
- ✅ **Search Operations**: File search by name, pattern matching, recursive search
- ✅ **Error Handling**: Non-existent files, invalid paths, permission errors

#### Phase 2: File Watching (24/24 tests passing)

- ✅ **Directory Watching**: Start/stop watching, recursive monitoring
- ✅ **Event Filtering**: Add, change, delete events with custom filtering
- ✅ **Performance Tests**: High-frequency events, batch event processing
- ✅ **Edge Cases**: Non-existent paths, permission issues, invalid events
- ✅ **Resource Management**: Watcher lifecycle, memory cleanup

#### Phase 3: Compression (30/30 tests passing)

- ✅ **ZIP Operations**: Compression, decompression, multiple compression levels
- ✅ **TAR.GZ Operations**: Archive creation, extraction, directory compression
- ✅ **Format Detection**: Automatic format detection, cross-format operations
- ✅ **Progress Tracking**: Real-time progress events, operation monitoring
- ✅ **Batch Operations**: Multiple file compression, batch decompression
- ✅ **Performance**: Large file handling, memory efficiency tests

#### Phase 4: Tunneling & File Sharing (35/35 tests passing)

- ✅ **Tunnel Management**: Create, destroy tunnels with multiple services
- ✅ **Temporary URLs**: URL generation, expiration, access control
- ✅ **File Sharing**: Secure sharing, download tracking, permission management
- ✅ **Service Integration**: ngrok, localtunnel, fallback mechanisms
- ✅ **Security**: Access tokens, expiration handling, cleanup

#### Phase 5: HTTP Server Provider (22/22 tests passing)

- ✅ **Server Lifecycle Management**: Create, start, stop HTTP servers
- ✅ **Static File Serving**: MIME detection, range requests, security headers
- ✅ **Route Registration**: Parameterized routes, middleware support
- ✅ **Tunnel Integration**: Automatic tunnel creation with multiple services
- ✅ **Server Monitoring**: Status tracking, metrics collection, health checks

#### Phase 6: Enhanced File Streaming (12/12 tests passing)

- ✅ **Advanced Streaming**: Range-aware streams, progress tracking
- ✅ **MIME Type Detection**: 40+ file types, automatic detection
- ✅ **Range Request Processing**: Comprehensive range header parsing
- ✅ **Progress Tracking**: Real-time progress, speed calculation
- ✅ **Performance Optimization**: Memory efficiency, large file handling

#### Phase 7: S3-Compatible Object Storage (31/31 tests passing)

- ✅ **S3 GET/HEAD Endpoints**: Object downloads and metadata queries
- ✅ **Authentication System**: AWS-style, Bearer token, Basic auth
- ✅ **Bucket/Key Mapping**: Path mapping with security validation
- ✅ **S3-Compatible Headers**: ETag, Last-Modified, Content-Range
- ✅ **Download Analytics**: Progress tracking, real-time monitoring
- ✅ **Rate Limiting**: Credential management, access control

#### Phase 8: Auto-Shutdown & Monitoring (22/22 tests passing)

- ✅ **Auto-Shutdown Triggers**: Completion, timeout, idle detection
- ✅ **Real-Time Monitoring**: Dashboard, progress bars, status display
- ✅ **Download Tracking**: Individual downloads, completion status
- ✅ **Event Notifications**: Console, file, webhook notifications
- ✅ **Expected Downloads**: Configuration, progress calculation

#### Phase 9: CLI Integration (16/16 tests passing)

- ✅ **Command Validation**: Help commands, option parsing, error handling
- ✅ **S3 Server Commands**: Server start with auth, bucket mapping, monitoring
- ✅ **Server Management**: Status commands, stop commands, cleanup
- ✅ **Configuration Validation**: Directory validation, port conflicts
- ✅ **Error Handling**: Graceful errors, permission handling

### Test Results Summary

```
📊 Overall Test Results
=======================
✅ Total Tests: 225
✅ Passed: 225 (100%)
❌ Failed: 0 (0%)
⏭ Skipped: 0 (0%)
🎯 Success Rate: 100%

⚡ Performance Metrics
=====================
⏱️ Average Test Duration: 15ms
🏃 Fastest Category: Local Operations (2ms avg)
🐌 Slowest Category: CLI Integration (1000ms avg)
🕒 Total Test Suite Time: ~5 minutes

🎉 All features are production-ready!
```

### Manual Testing & Demos

Validate functionality with built-in demos:

```bash
npm run demo:cli          # CLI integration demo (NEW)
npm run demo:basic        # Basic file operations demo
npm run demo:watch        # File watching demonstration
npm run demo:compression  # Compression feature demo
npm run demo:tunnel       # File sharing demo
npm run demo:s3           # S3 server demo (NEW)
npm run demo:monitor      # Auto-shutdown & monitoring demo (NEW)
```

## 📖 API Reference

### LocalRemoteManager

#### Core File Operations

- `uploadFile(sourcePath, targetPath)` - Copy file to target location (alias for local copy)
- `downloadFile(remotePath, localPath)` - Download/copy file from remote location (alias for local copy)
- `getFileInfo(filePath)` - Get file metadata, size, timestamps, and permissions
- `listFiles(directoryPath, options)` - List files with recursive and filtering options
- `deleteFile(filePath)` - Delete a file with error handling
- `copyFile(sourcePath, destinationPath)` - Copy a file to a new location
- `moveFile(sourcePath, destinationPath)` - Move a file to a new location
- `renameFile(filePath, newName)` - Rename a file within its directory
- `searchFiles(pattern, options)` - Search for files by name pattern with recursive support

#### Folder Operations

- `createFolder(folderPath)` - Create a new folder with recursive support
- `listFolders(directoryPath)` - List only directories in a path
- `deleteFolder(folderPath, recursive)` - Delete a folder with optional recursive deletion
- `renameFolder(folderPath, newName)` - Rename a folder within its parent directory
- `copyFolder(sourcePath, destinationPath)` - Copy an entire folder structure
- `moveFolder(sourcePath, destinationPath)` - Move an entire folder structure
- `getFolderInfo(folderPath)` - Get folder metadata including item count and total size

#### File Watching

- `startWatching(path, options)` - Start monitoring file/directory changes
  - Options: `recursive`, `events`, `ignoreDotfiles`, `debounceMs`
- `stopWatching(watchId | path)` - Stop a specific watcher by ID or path
- `stopAllWatching()` - Stop all active watchers with cleanup
- `listActiveWatchers()` - Get an array of active watcher objects
- `getWatcherInfo(watchId)` - Get detailed watcher information including event count
- `getWatchingStatus()` - Get overall watching system status and statistics

#### Compression Operations

- `compressFile(inputPath, outputPath, options)` - Compress a file or directory
  - Options: `format` (`zip`, `tar.gz`), `level` (1-9), `includeRoot`
- `decompressFile(archivePath, outputDirectory, options)` - Extract archive contents
  - Options: `format`, `overwrite`, `preservePermissions`
- `compressMultipleFiles(fileArray, outputDirectory, options)` - Batch compression with progress
- `decompressMultipleFiles(archiveArray, outputDirectory, options)` - Batch extraction
- `getCompressionStatus()` - Get compression system status and supported formats
- `getCompressionProvider()` - Access the compression provider directly

#### Tunneling & File Sharing

- `createTunnel(options)` - Create a new tunnel connection
  - Options: `proto` (`http`, `https`), `subdomain`, `authToken`, `useExternalServer`, `localPort`
  - `useExternalServer: true` - Forward the tunnel to an existing HTTP server instead of creating an internal server
  - `localPort: number` - Specify the external server port to forward tunnel traffic to
- `destroyTunnel(tunnelId)` - Destroy a specific tunnel connection
- `createTemporaryUrl(filePath, options)` - Generate a temporary shareable URL
  - Options: `permissions`, `expiresAt`, `downloadLimit`
- `revokeTemporaryUrl(urlId)` - Revoke access to a shared URL
- `listActiveUrls()` - Get the list of active temporary URLs
- `getTunnelStatus()` - Get tunneling system status including active tunnels
- `getTunnelProvider()` - Access the tunnel provider directly
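
A temporary URL stays usable until it expires, is revoked, or exhausts its `downloadLimit`. A minimal sketch of that access check (hypothetical `isUrlUsable`, operating on a record shaped like the documented temporary URL result):

```javascript
// Hypothetical sketch: the validity check a temporary URL might perform
// on each access, per the expiresAt / downloadLimit / revoke options.
function isUrlUsable(record, now = new Date()) {
  if (record.revoked) return false;
  if (record.expiresAt && now >= record.expiresAt) return false;
  if (record.downloadLimit != null && record.downloads >= record.downloadLimit) return false;
  return true;
}
```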

#### HTTP Server Provider

- `createHttpServer(options)` - Create an HTTP file server
  - Options: `port`, `rootDirectory`, `enableTunnel`, `tunnelOptions`
- `createTunneledServer(options)` - Create an HTTP server with automatic tunnel integration
  - Options: `port`, `rootDirectory`, `tunnelService` (default: `'serveo'`)
- `addCustomRoute(serverId, method, path, handler, options)` - Add a custom route with priority support
  - Options: `priority` (higher numbers = higher priority; overrides generic routes like `/:bucket/*`)
- `stopServer(serverId)` - Stop a specific HTTP server
- `stopAllServers()` - Stop all active HTTP servers
- `getServerStatus(serverId)` - Get HTTP server status and information
- `listActiveServers()` - Get the list of all active HTTP servers
- `getTunnelUrl(serverId)` - Get the tunnel URL for a tunneled server
- `stopTunnel(serverId)` - Stop the tunnel for a specific server
- `getHttpServerProvider()` - Access the HTTP server provider directly
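
`addCustomRoute` resolves conflicts by priority, with specific routes overriding generic patterns such as `/:bucket/*`. A sketch of that selection logic (hypothetical `pickRoute`, using plain regex patterns in place of the router's internals):

```javascript
// Hypothetical sketch: among routes whose pattern matches the request
// path, the one with the highest priority wins.
function pickRoute(routes, requestPath) {
  const matches = routes.filter((r) => r.pattern.test(requestPath));
  matches.sort((a, b) => (b.priority || 0) - (a.priority || 0));
  return matches[0] || null;
}
```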

#### Enhanced File Streaming

- `createReadStream(filePath, options)` - Create a range-aware file stream
  - Options: `start`, `end`, `encoding`, `chunkSize`
- `getFileInfo(filePath)` - Get detailed file metadata with MIME type
- `getMimeType(filePath)` - Get the MIME type, with 40+ file types supported
- `parseRangeHeader(rangeHeader, fileSize)` - Parse HTTP Range headers
- `generateETag(fileStats)` - Generate an ETag for cache validation
- `formatContentRange(start, end, total)` - Format Content-Range headers
- `DownloadTracker` - Track download progress with speed calculation
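
`parseRangeHeader` follows standard HTTP range semantics (RFC 7233). For the single-range case, a simplified sketch (hypothetical `parseSingleRange`; the real parser also has to handle multi-range requests and more edge cases):

```javascript
// Hypothetical sketch: parse `bytes=start-end`, `bytes=start-`, and
// `bytes=-suffix` Range headers against a known file size.
function parseSingleRange(header, fileSize) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(header || '');
  if (!m || (m[1] === '' && m[2] === '')) return { isValid: false };
  let start, end;
  if (m[1] === '') {
    // Suffix range: the last N bytes of the file.
    start = Math.max(0, fileSize - Number(m[2]));
    end = fileSize - 1;
  } else {
    start = Number(m[1]);
    end = m[2] === '' ? fileSize - 1 : Math.min(Number(m[2]), fileSize - 1);
  }
  if (start > end || start >= fileSize) return { isValid: false };
  return { isValid: true, start, end };
}
```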

#### S3-Compatible Object Storage

- `createS3Server(options)` - Create an S3-compatible object storage server
  - Options: `port`, `serverName`, `rootDirectory`, `bucketMapping`, `enableAuth`
- `generateTemporaryCredentials(options)` - Generate temporary AWS-style credentials
  - Options: `permissions`, `buckets`, `expiryMinutes`
- `mapBucketKeyToPath(bucket, key)` - Map an S3 bucket/key to a file path
- `validateBucketAccess(bucket)` - Check bucket access permissions
- `getDownloadStats()` - Get download statistics and metrics
- `generateDownloadAnalytics(options)` - Generate an analytics report
- `getDownloadDashboard()` - Get real-time dashboard data
- `startRealTimeMonitoring(options)` - Start the live monitoring console
- `stopRealTimeMonitoring()` - Stop real-time monitoring

#### Auto-Shutdown & Monitoring

- `enableAutoShutdown(options)` - Enable auto-shutdown with configurable triggers
  - Options: `shutdownTriggers`, `completionShutdownDelay`, `maxIdleTime`, `maxTotalTime`
- `setExpectedDownloads(downloads)` - Configure expected downloads for completion detection
  - Downloads: an array of `{ bucket, key, size }` objects
- `getDownloadCompletionStatus()` - Get the current download completion status
- `scheduleShutdown(trigger, delay)` - Manually schedule a server shutdown
- `cancelScheduledShutdown()` - Cancel a previously scheduled shutdown
- `startMonitoringDashboard(options)` - Start the real-time visual monitoring dashboard
  - Options: `updateInterval`, `showServerStats`, `showDownloadProgress`, `showActiveDownloads`
- `stopMonitoringDashboard()` - Stop the monitoring dashboard
- `getMonitoringData()` - Get a snapshot of the current monitoring data
- `addDownloadEventListener(event, callback)` - Listen to download events
  - Events: `downloadStarted`, `downloadCompleted`, `downloadFailed`, `allDownloadsComplete`
- `addShutdownEventListener(event, callback)` - Listen to shutdown events
  - Events: `shutdownScheduled`, `shutdownWarning`, `shutdownCancelled`
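
Completion detection reduces to comparing completed downloads against the expected `{ bucket, key, size }` list. A sketch of the status calculation (hypothetical `completionStatus`, mirroring what `getDownloadCompletionStatus()` is described to report):

```javascript
// Hypothetical sketch: compare completed downloads with the expected
// list and report progress plus an allComplete flag.
function completionStatus(expected, completed) {
  const done = new Set(completed.map((d) => `${d.bucket}/${d.key}`));
  const remaining = expected.filter((d) => !done.has(`${d.bucket}/${d.key}`));
  return {
    expected: expected.length,
    completed: expected.length - remaining.length,
    remaining: remaining.map((d) => `${d.bucket}/${d.key}`),
    allComplete: remaining.length === 0
  };
}
```

When `allComplete` flips to true, a completion-triggered auto-shutdown would start its `completionShutdownDelay` countdown.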

#### Event Notification System

- `enableEventNotifications(channels)` - Enable event notifications
  - Channels: `['console', 'file', 'webhook']`
- `configureWebhookNotifications(url, options)` - Configure webhook notifications
  - Options: `retryAttempts`, `timeout`, `headers`
- `getEventHistory(options)` - Get the event history with filtering
  - Options: `startDate`, `endDate`, `eventTypes`, `limit`
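
`getEventHistory` filtering can be pictured as a simple pass over an in-memory log. A sketch of the documented `startDate`/`endDate`/`eventTypes`/`limit` semantics (hypothetical `filterEvents`):

```javascript
// Hypothetical sketch: filter an event log the way getEventHistory()
// is documented to, then cap the result at `limit` entries.
function filterEvents(events, { startDate, endDate, eventTypes, limit } = {}) {
  let out = events.filter((e) =>
    (!startDate || e.timestamp >= startDate) &&
    (!endDate || e.timestamp <= endDate) &&
    (!eventTypes || eventTypes.includes(e.type))
  );
  if (limit != null) out = out.slice(0, limit);
  return out;
}
```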

#### Provider Management

- `testConnection(providerName)` - Test a specific provider's connection and capabilities
- `validateProvider(providerName)` - Validate a provider's configuration
- `getSystemInfo()` - Get comprehensive system information
- `shutdown()` - Gracefully shut down all providers and clean up resources

#### Event System

The `LocalRemoteManager` extends `EventEmitter` and provides these events:

- `fileEvent` - File system changes (`add`, `change`, `unlink`, `addDir`, `unlinkDir`)
- `compressionProgress` - Compression operation progress updates
- `decompressionProgress` - Decompression operation progress updates
- `tunnelProgress` - Tunnel creation/destruction progress
- `urlCreated` - Temporary URL creation events
- `urlRevoked` - URL revocation events
- `fileAccessed` - File access via temporary URLs
- `tunnelError` - Tunnel-related errors
- `downloadStarted` - Download operation started
- `downloadProgress` - Real-time download progress updates
- `downloadCompleted` - Download operation completed
- `downloadFailed` - Download operation failed
- `allDownloadsComplete` - All expected downloads completed
- `shutdownScheduled` - Auto-shutdown has been scheduled
- `shutdownWarning` - Shutdown warning (time remaining)
- `shutdownCancelled` - A scheduled shutdown was cancelled
- `monitoringEnabled` - Real-time monitoring started
- `monitoringDisabled` - Real-time monitoring stopped
- `dashboardUpdated` - Monitoring dashboard data updated

### ConfigManager

#### Configuration Management

- `load()` - Load configuration from environment variables and defaults
- `get(key)` - Get a configuration value by key
- `set(key, value)` - Set a configuration value
- `validate()` - Validate the current configuration and return the validation result
- `save()` - Save the configuration to persistent storage

#### Provider-Specific Configuration

- `getLocalConfig()` - Get the local provider configuration (paths, permissions)
- `getWatchConfig()` - Get the file watching configuration (debounce, patterns)
- `getCompressionConfig()` - Get the compression configuration (formats, levels)
- `getTunnelConfig()` - Get the tunneling configuration (services, fallback)
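
`load()` merges defaults with environment variables. A toy model of that behaviour (hypothetical `TinyConfig`, assuming dotted keys map to upper-snake-case environment variables — the package's real mapping may differ):

```javascript
// Hypothetical sketch: environment variables override defaults, and
// get()/set() read and write the merged configuration.
class TinyConfig {
  constructor(defaults = {}, env = process.env) {
    this.values = { ...defaults };
    for (const key of Object.keys(defaults)) {
      const envKey = key.toUpperCase().replace(/\./g, '_');
      if (env[envKey] !== undefined) this.values[key] = env[envKey];
    }
  }
  get(key) { return this.values[key]; }
  set(key, value) { this.values[key] = value; }
}
```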

### Provider Interfaces

Each provider implements a consistent interface:

#### Local Provider

- File CRUD operations with path validation
- Folder management with recursive support
- Search functionality with pattern matching
- System information and disk space monitoring

#### Watch Provider

- Directory and file monitoring with chokidar
- Event filtering and debouncing
- Recursive watching with ignore patterns
- Watcher lifecycle management

#### Compression Provider

- ZIP and TAR.GZ format support
- Multiple compression levels (1-9)
- Batch operations with progress tracking
- Format auto-detection and validation

#### Tunnel Provider

- Multiple tunnel service support (Serveo, ngrok, Pinggy)
- Automatic fallback between services
- External server forwarding via the `useExternalServer` option
- Target port specification with the `localPort` parameter
- Tunnels forward to existing HTTP servers for consistent content serving
- HTTP server for file serving (fallback mode only)
- Access token security and expiration

#### HTTP Server Provider

- Static file serving with a configurable root directory
- Automatic port assignment and management
- Integrated tunnel support for public access
- Multiple concurrent server support
- Request logging and analytics
- MIME type detection and headers
- Graceful shutdown and cleanup

### Error Handling

All methods throw structured errors with:

- `code` - Error code (`ENOENT`, `EACCES`, etc.)
- `message` - Human-readable error description
- `path` - The file/directory path related to the error (when applicable)
- `provider` - The name of the provider that generated the error
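
The error shape above can be modeled with a small `Error` subclass (hypothetical `FileManagerError`, not the package's own class name):

```javascript
// Hypothetical sketch of the documented structured error shape.
class FileManagerError extends Error {
  constructor(code, message, { path, provider } = {}) {
    super(message);
    this.code = code;
    this.path = path;
    this.provider = provider;
  }
}

// Example: how the local provider might surface a missing file.
function throwNotFound(filePath) {
  throw new FileManagerError('ENOENT', `File not found: ${filePath}`, {
    path: filePath,
    provider: 'local'
  });
}
```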

๐Ÿ—๏ธ Architecture & Design

HTTP Server Provider Implementation

Tunnel Integration: The tunnel system is designed to forward external HTTP servers for consistent content serving.

External Server Forwarding

The TunnelProvider integrates with HTTP servers by forwarding tunnel traffic to existing server ports rather than creating separate internal servers:

// Standard tunnel forwarding approach
const httpServer = await httpServerProvider.createTunneledServer({
  port: 4005,
  rootDirectory: './content',
  tunnelService: 'serveo'  // Tunnel forwards to port 4005
});

// Manual tunnel configuration with forwarding
const tunnel = await tunnelProvider.createTunnel({
  useExternalServer: true,  // Don't create internal server
  localPort: 4005          // Forward to existing server on port 4005
});

#### Benefits of This Architecture

- **Consistency**: The tunnel serves the same content as the local HTTP server
- **Flexibility**: Multiple servers can each have a dedicated tunnel
- **Performance**: No duplicate servers or port conflicts
- **Debugging**: Clear separation between HTTP serving and tunnel forwarding
- **Container Registry Ready**: A foundation for container serving capabilities

#### Usage Patterns

For file serving with tunnel access:

```javascript
// Recommended approach for public file serving
const server = await manager.createTunneledServer({
  port: 3000,
  rootDirectory: './public',
  tunnelService: 'serveo'
});
const tunnelUrl = server.tunnelUrl;
```

### TunnelProvider API Reference

#### Core Methods

`createTunnel(options)` - Create a tunnel with external server support:

```javascript
const tunnel = await tunnelProvider.createTunnel({
  // Basic tunnel options
  subdomain: 'myapp',
  service: 'serveo',  // 'serveo', 'pinggy', 'localtunnel'

  // External server forwarding
  useExternalServer: true,  // Don't create an internal HTTP server
  localPort: 4005,          // Forward the tunnel to the existing server on this port

  // Additional options
  authToken: 'optional',
  region: 'us'
});
```

**Return value:**

```javascript
{
  tunnelId: 'tunnel_abc123',
  url: 'https://subdomain.serveo.net',
  service: 'serveo',
  port: 4005,                    // Reflects the target port when using an external server
  createdAt: '2025-08-13T...',
  useExternalServer: true,       // Indicates forwarding mode
  targetPort: 4005               // The external port being forwarded to
}
```

#### Service-Specific Methods

All tunnel creation methods accept a `targetPort` parameter:

```javascript
// Method signatures with port forwarding support
await createServiceTunnel(serviceName, tunnelId, options, targetPort)
await createPinggyTunnel(tunnelId, options, targetPort)
await createServeoTunnel(tunnelId, options, targetPort)
await createLocalTunnel(tunnelId, options, targetPort)
```

#### Configuration

```javascript
// Default configuration
{
  service: 'serveo',                       // Primary tunnel service
  fallbackServices: 'serveo,localtunnel',  // Fallback order
  autoFallback: true,
  useExternalServer: false                 // Default to internal server creation
}
```

### Method Return Types

#### File Operations

```javascript
// File info result
{
  name: string,
  path: string,
  size: number,
  isDirectory: boolean,
  createdAt: Date,
  modifiedAt: Date,
  permissions: string
}

// Operation result
{
  name: string,
  path: string,
  size: number,
  completedAt: Date
}
```

#### Compression Operations

```javascript
// Compression result
{
  operationId: string,
  name: string,
  format: string,
  size: number,
  originalSize: number,
  compressionRatio: number,
  level: number,
  completedAt: Date
}

// Batch result
{
  successful: Array,
  failed: Array,
  summary: {
    total: number,
    successful: number,
    failed: number,
    successRate: string
  }
}
```

#### Tunneling Operations

```javascript
// Tunnel result
{
  tunnelId: string,
  url: string,
  service: string,
  port: number,
  createdAt: Date,
  useExternalServer?: boolean,  // Indicates forwarding to an external server
  targetPort?: number           // External server port being forwarded to
}

// HTTP server result
{
  serverId: string,
  port: number,
  rootDirectory: string,
  url: string,
  status: 'running' | 'stopped',
  tunnelEnabled: boolean,
  tunnelUrl?: string,
  tunnelService?: string,
  createdAt: Date,
  requestCount: number
}

// Temporary URL result
{
  urlId: string,
  shareableUrl: string,
  accessToken: string,
  expiresAt: Date,
  permissions: Array,
  filePath: string
}
```

## Keywords

`file-management`

Package last updated on 19 Aug 2025.