
# @kadi.build/local-remote-file-manager-ability

Local & Remote File Management System with S3-compatible container registry, HTTP server provider, file streaming, and a comprehensive testing suite.

A Node.js CLI tool and library for local file management, featuring real-time file watching, compression/decompression, secure temporary file sharing via tunneling, and S3-compatible object storage. This unified file management system provides powerful local operations, with remote server operations planned for future releases.
From source:
git clone <repository-url>
cd local-remote-file-manager-ability
npm install
npm run setup

As a global CLI:
npm install -g local-remote-file-manager

As a project dependency:
npm install local-remote-file-manager
const { createManager, compressFile } = require('local-remote-file-manager');
// Quick start - factory functions
const manager = await createManager();
const files = await manager.getProvider('local').list('./');
// Quick compression
await compressFile('./my-folder', './archive.zip');
📖 See USAGE.md for complete examples and INTEGRATION-EXAMPLE.md for real-world integration patterns.
Install dependencies
npm install
Test your setup
npm test
# or test specific features
npm run test:cli
npm run test:local
npm run test:s3
Basic file operations
node index.js copy --source document.pdf --target ./uploads/document.pdf
node index.js upload --file data.zip --target ./uploads/
node index.js list --directory ./uploads
Start S3-compatible file server
node index.js serve-s3 ./files --port 5000 --auth
# Access at http://localhost:5000/default/<filename>
S3 server with bucket mapping
node index.js serve-s3 ./storage \
--bucket containers:./storage/containers \
--bucket images:./storage/images \
--port 9000 --auth --tunnel
File server with auto-shutdown
node index.js serve-s3 ./content --port 8000 \
--auto-shutdown --shutdown-delay 30000
Start watching a directory
node index.js watch ./documents
Compress files
node index.js compress --file ./large-file.txt --output ./compressed.zip
Share a file temporarily
node index.js share ./document.pdf --expires 30m
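Expiry values like `30m` and `2h` are a common duration shorthand. A sketch of how such strings could map to milliseconds (illustrative only; the CLI's actual parsing may differ):

```javascript
// Hypothetical parser for duration strings like "30m", "2h", "24h", "45s".
function parseExpiry(value) {
  const match = /^(\d+)([smhd])$/.exec(value.trim());
  if (!match) throw new Error(`Invalid duration: ${value}`);
  const unitMs = { s: 1000, m: 60000, h: 3600000, d: 86400000 };
  return Number(match[1]) * unitMs[match[2]];
}

console.log(parseExpiry('30m')); // 1800000
console.log(parseExpiry('2h'));  // 7200000
```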
| Phase | Status | Features | Test Results |
|---|---|---|---|
| Phase 1: Foundation & Local CRUD | ✅ Complete | File/folder CRUD, path management, search operations | 33/33 tests passing (100%) |
| Phase 2: File/Directory Watching | ✅ Complete | Real-time monitoring, event filtering, recursive watching | 24/24 tests passing (100%) |
| Phase 3: Compression/Decompression | ✅ Complete | ZIP/TAR.GZ support, batch operations, progress tracking | 30/30 tests passing (100%) |
| Phase 4: Tunneling & Temp URLs | ✅ Complete | Secure file sharing, temporary URLs, multiple tunnel services | 35/35 tests passing (100%) |
| Phase 5: HTTP Server Provider | ✅ Complete | HTTP server management, static file serving, tunnel integration | 22/22 tests passing (100%) |
| Phase 6: File Streaming Enhancement | ✅ Complete | Enhanced streaming, range requests, MIME detection, progress tracking | 12/12 tests passing (100%) |
| Phase 7: S3 Object Storage | ✅ Complete | S3-compatible endpoints, authentication, bucket/key mapping, analytics | 31/31 tests passing (100%) |
| Phase 8: Auto-Shutdown & Monitoring | ✅ Complete | Real-time monitoring dashboard, auto-shutdown triggers, event notifications | 22/22 tests passing (100%) |
| Phase 9: CLI Integration | ✅ Complete | Complete CLI interface for all features, S3 server commands, validation | 16/16 tests passing (100%) |
System information and validation:
node index.js --help # Show all available commands
node index.js test # Test all providers
node index.js test --provider local # Test specific provider
node index.js validate # Validate configuration
node index.js info # Show system information
Basic file management:
# Upload/copy files
node index.js upload --file document.pdf --target ./uploads/document.pdf
node index.js copy --source ./file.pdf --target ./backup/file.pdf
# Download files (local copy)
node index.js download --source ./uploads/document.pdf --target ./downloads/
# Move and rename files
node index.js move --source ./file.pdf --target ./archive/file.pdf
node index.js rename --file ./old-name.pdf --name new-name.pdf
# Delete files
node index.js delete --file ./old-file.pdf --yes
# List and search files
node index.js list --directory ./uploads
node index.js list --directory ./uploads --recursive
node index.js search --query "*.pdf" --directory ./uploads
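The `--query "*.pdf"` pattern reads like a glob. A rough sketch of glob-style matching over file names (an assumption about the matcher, shown for illustration):

```javascript
// Hypothetical glob matcher: supports only '*' and '?' wildcards.
function globToRegExp(glob) {
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '.*').replace(/\?/g, '.') + '$', 'i');
}

const names = ['report.pdf', 'notes.txt', 'Scan.PDF'];
const matches = names.filter((n) => globToRegExp('*.pdf').test(n));
console.log(matches); // [ 'report.pdf', 'Scan.PDF' ]
```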
Folder operations:
# Create and manage directories
node index.js mkdir --directory ./new-folder
node index.js ls-folders --directory ./uploads
node index.js rmdir --directory ./old-folder --recursive --yes
Start and manage file watching:
# Start watching
node index.js watch ./documents # Watch directory
node index.js watch ./file.txt --no-recursive # Watch single file
node index.js watch ./project --events add,change # Filter events
# Manage watchers
node index.js watch-list # List active watchers
node index.js watch-list --verbose # Detailed watcher info
node index.js watch-status # Show watching statistics
node index.js watch-stop ./documents # Stop specific watcher
node index.js watch-stop --all # Stop all watchers
Compress and decompress files:
# Basic compression
node index.js compress --file ./document.pdf --output ./compressed.zip
node index.js compress --file ./folder --output ./archive.tar.gz --format tar.gz
node index.js compress --file ./data --output ./backup.zip --level 9
# Decompression
node index.js decompress --file ./archive.zip --directory ./extracted/
node index.js decompress --file ./backup.tar.gz --directory ./restored/ --overwrite
# Batch operations
node index.js compress-batch --directory ./files --output ./archives/
node index.js decompress-batch --directory ./archives --output ./extracted/
# Compression status
node index.js compression-status
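When `--format` is omitted, the output extension presumably drives the choice of archive format. A sketch of that inference (assumed behavior, not confirmed from the CLI's source):

```javascript
// Hypothetical sketch: infer the archive format from the --output filename,
// as a default when --format is not given (assumption about CLI behavior).
function inferFormat(outputPath) {
  if (/\.(tar\.gz|tgz)$/i.test(outputPath)) return 'tar.gz';
  if (/\.zip$/i.test(outputPath)) return 'zip';
  return 'zip'; // assumed fallback
}

console.log(inferFormat('./archive.tar.gz')); // tar.gz
console.log(inferFormat('./compressed.zip')); // zip
```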
Share files temporarily:
# Basic file sharing
node index.js share ./document.pdf # Default 1h expiration
node index.js share ./folder --expires 30m # 30 minutes
node index.js share ./file.zip --expires 2h # 2 hours
node index.js share ./project --multi-download # Allow multiple downloads
# Advanced sharing options
node index.js share ./data.zip --expires 24h --keep-alive --no-auto-shutdown
# Tunnel management
node index.js tunnel-status # Show active tunnels and URLs
node index.js tunnel-cleanup # Clean up expired URLs and tunnels
Start S3 server (Core Feature):
# Basic S3 server
node index.js serve-s3 ./storage --port 5000
node index.js serve-s3 ./storage --port 5000 --auth # With authentication
# S3 server with bucket mapping
node index.js serve-s3 ./storage \
--bucket containers:./storage/containers \
--bucket images:./storage/images \
--bucket docs:./storage/documents \
--port 9000 --auth
# S3 server with tunnel (public access)
node index.js serve-s3 ./content \
--port 8000 --tunnel --tunnel-service ngrok \
--name my-public-server
# S3 server with monitoring
node index.js serve-s3 ./data \
--port 7000 --monitor --interactive \
--name monitoring-server
S3 server with auto-shutdown:
# Auto-shutdown after downloads
node index.js serve-s3 ./container-storage \
--port 9000 --auto-shutdown \
--shutdown-delay 30000 --max-idle 600000
# Background server mode
node index.js serve-s3 ./storage \
--port 5000 --background --name bg-server
# Container registry example
node index.js serve-s3 ./containers \
--bucket containers:./containers \
--bucket registry:./registry \
--port 9000 --auto-shutdown \
--name container-registry
S3 server management:
# Server status and control
node index.js server-status # Show all active servers
node index.js server-status --json # JSON output
node index.js server-stop --all # Stop all servers
node index.js server-stop --name my-server # Stop specific server
# Server cleanup
node index.js server-cleanup # Clean up stopped servers
Monitor server activity:
# Real-time monitoring (when server started with --monitor)
# Automatically displays:
# - Active downloads with progress bars
# - Server status and uptime
# - Download completion status
# - Auto-shutdown countdown
# Interactive mode (when server started with --interactive)
# Available commands in interactive mode:
# - status: Show server status
# - downloads: Show active downloads
# - stop: Stop the server
# - help: Show available commands
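The progress bars the dashboard displays come down to simple string math. A sketch of a renderer (illustrative, not the dashboard's actual implementation):

```javascript
// Hypothetical progress-bar renderer: completed/total -> fixed-width bar.
function renderBar(done, total, width = 30) {
  const fraction = total > 0 ? Math.min(done / total, 1) : 0;
  const filled = Math.round(fraction * width);
  return '█'.repeat(filled) + '░'.repeat(width - filled) +
    ` ${done}/${total} (${Math.round(fraction * 100)}%)`;
}

console.log(renderBar(3, 5)); // 18 of 30 cells filled, ending in "3/5 (60%)"
```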
# Testing
npm test # Run all tests
npm run test:cli # Test CLI integration
npm run test:local # Test local operations
npm run test:watch # Test file watching
npm run test:compression # Test compression
npm run test:tunnel # Test tunneling
npm run test:http # Test HTTP server
npm run test:streaming # Test file streaming
npm run test:s3 # Test S3 server
npm run test:monitor # Test monitoring/auto-shutdown
# Demos
npm run demo:cli # CLI integration demo
npm run demo:basic # Basic operations demo
npm run demo:watch # File watching demo
npm run demo:compression # Compression demo
npm run demo:tunnel # File sharing demo
npm run demo:container-registry # 🐳 Container registry demo (simple)
npm run demo:container-registry-full # 🐳 Container registry demo (full)
npm run demo:container-registry-test # 🐳 Test container registry components
# Server shortcuts
npm run serve-s3 # Start S3 server on port 5000
npm run server-status # Check server status
npm run server-stop # Stop all servers
# Cleanup
npm run clean # Clean test files
npm run clean:tests # Clean test results
🐳 Container Registry Demo (Quick Start):
# Run the complete container registry demo
npm run demo:container-registry
# Or with real containers
npm run demo:container-registry-full
# Test the setup first
npm run demo:container-registry-test
See the Container Registry Demo for complete documentation of what this demo showcases.
Container Registry Setup:
# Set up S3-compatible container registry
node index.js serve-s3 ./container-storage \
--bucket containers:./container-storage/containers \
--bucket registry:./container-storage/registry \
--port 9000 --auto-shutdown \
--name container-registry
# Access containers at:
# http://localhost:9000/containers/manifest.json
# http://localhost:9000/containers/config.json
# http://localhost:9000/containers/layer1.tar
Public File Sharing:
# Share files with public tunnel
node index.js serve-s3 ./public-files \
--port 8000 --tunnel --tunnel-service ngrok \
--bucket files:./public-files \
--name public-share
# Or temporary file sharing
node index.js share ./important-file.zip \
--expires 24h --multi-download
Development File Server:
# Development server with monitoring
node index.js serve-s3 ./dev-content \
--port 3000 --monitor --interactive \
--bucket assets:./dev-content/assets \
--bucket uploads:./dev-content/uploads
Automated Backup System:
# Watch and compress new files
node index.js watch ./documents &
# In another terminal, set up S3 server for backup access
node index.js serve-s3 ./backups \
--bucket daily:./backups/daily \
--bucket weekly:./backups/weekly \
--port 9090 --auth
npm install local-remote-file-manager
The library provides convenient factory functions for quick setup:
const {
createManager,
createS3Server,
compressFile,
watchDirectory
} = require('local-remote-file-manager');
// Quick file operations
async function quickStart() {
// Create a file manager with default config
const manager = await createManager();
// Get providers for different operations
const local = manager.getProvider('local');
const files = await local.list('./my-directory');
// Quick compression
await compressFile('./my-folder', './archive.zip');
// Start file watching
const watcher = await watchDirectory('./watched-folder');
watcher.on('change', (data) => {
console.log('File changed:', data.path);
});
}
const { LocalRemoteManager, ConfigManager } = require('local-remote-file-manager');
class FileManagementApp {
constructor() {
this.config = new ConfigManager();
this.fileManager = null;
}
async initialize() {
await this.config.load();
this.fileManager = new LocalRemoteManager(this.config);
// Set up event handling
this.fileManager.on('fileEvent', (data) => {
console.log('File event:', data.type, data.path);
});
}
async processFile(inputPath, outputPath) {
const local = this.fileManager.getProvider('local');
const compression = this.fileManager.getCompressionProvider();
// Copy file
await local.copy(inputPath, outputPath);
// Compress file
const result = await compression.compress(
outputPath,
outputPath.replace(/\.[^/.]+$/, '.zip')
);
return result;
}
}
const { createS3Server } = require('local-remote-file-manager');
async function createFileServer() {
const server = createS3Server({
port: 5000,
rootDirectory: './storage',
bucketMapping: new Map([
['public', './public-files'],
['private', './private-files']
]),
// Authentication
authentication: {
enabled: true,
tempCredentials: true
},
// Monitoring and auto-shutdown
monitoring: {
enabled: true,
dashboard: true
},
autoShutdown: {
enabled: true,
timeout: 3600000 // 1 hour
}
});
// Event handling
server.on('request', (data) => {
console.log(`${data.method} ${data.path}`);
});
server.on('download', (data) => {
console.log(`Downloaded: ${data.path} (${data.size} bytes)`);
});
await server.start();
console.log('S3 server running on http://localhost:5000');
return server;
}
class ContainerRegistryServer {
constructor() {
this.server = null;
}
async startWithAutoShutdown() {
// Create S3 server with auto-shutdown and monitoring
this.server = new S3HttpServer({
port: 9000,
serverName: 'container-registry',
rootDirectory: './container-storage',
// Auto-shutdown configuration
enableAutoShutdown: true,
shutdownOnCompletion: true,
shutdownTriggers: ['completion', 'timeout', 'manual'],
completionShutdownDelay: 30000, // 30 seconds after completion
maxIdleTime: 600000, // 10 minutes idle
maxTotalTime: 3600000, // 1 hour maximum
// Real-time monitoring
enableRealTimeMonitoring: true,
enableDownloadTracking: true,
monitoringUpdateInterval: 2000, // 2 seconds
// Event notifications
enableEventNotifications: true,
notificationChannels: ['console', 'file'],
// S3 configuration
enableAuth: false, // Simplified for container usage
bucketMapping: new Map([
['containers', 'container-files'],
['registry', 'registry-data']
])
});
// Setup event listeners
this.setupEventListeners();
// Start the server
const result = await this.server.start();
console.log(`🚀 Container registry started: ${result.localUrl}`);
// Configure expected downloads for auto-shutdown
await this.configureExpectedDownloads();
return result;
}
setupEventListeners() {
// Download progress tracking
this.server.on('downloadStarted', (info) => {
console.log(`📥 Download started: ${info.bucket}/${info.key} (${this.formatBytes(info.fileSize)})`);
});
this.server.on('downloadCompleted', (info) => {
console.log(`✅ Download completed: ${info.bucket}/${info.key} in ${info.duration}ms`);
});
this.server.on('downloadFailed', (info) => {
console.log(`❌ Download failed: ${info.bucket}/${info.key} - ${info.error}`);
});
// Auto-shutdown events
this.server.on('allDownloadsComplete', (info) => {
console.log(`🎉 All downloads complete! Auto-shutdown will trigger in ${info.shutdownDelay / 1000}s`);
});
this.server.on('shutdownScheduled', (info) => {
console.log(`⏰ Server shutdown scheduled: ${info.reason} (${Math.round(info.delay / 1000)}s)`);
});
this.server.on('shutdownWarning', (info) => {
console.log(`⚠️ Server shutting down in ${Math.round(info.timeRemaining / 1000)} seconds`);
});
}
async configureExpectedDownloads() {
// Set expected container downloads
const expectedDownloads = [
{ bucket: 'containers', key: 'manifest.json', size: 1024 },
{ bucket: 'containers', key: 'config.json', size: 512 },
{ bucket: 'containers', key: 'layer-1.tar', size: 1048576 }, // 1MB
{ bucket: 'containers', key: 'layer-2.tar', size: 2097152 }, // 2MB
];
const result = this.server.setExpectedDownloads(expectedDownloads);
if (result.success) {
console.log(`📋 Configured ${result.expectedCount} expected downloads (${this.formatBytes(result.totalBytes)} total)`);
}
}
formatBytes(bytes) {
if (bytes === 0) return '0 B';
const k = 1024;
const sizes = ['B', 'KB', 'MB', 'GB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
}
async getMonitoringData() {
// Get real-time monitoring data
return {
serverStatus: this.server.getStatus(),
downloadStats: this.server.getDownloadStats(),
dashboardData: this.server.getMonitoringData(),
completionStatus: this.server.getDownloadCompletionStatus()
};
}
async gracefulShutdown() {
console.log('🛑 Initiating graceful shutdown...');
await this.server.stop({ graceful: true, timeout: 30000 });
console.log('✅ Server stopped gracefully');
}
}
// Usage example
async function runContainerRegistry() {
const registry = new ContainerRegistryServer();
try {
// Start server with monitoring
await registry.startWithAutoShutdown();
// Server will automatically shut down when all expected downloads complete
// or after timeout periods are reached
// Manual shutdown if needed
process.on('SIGINT', async () => {
await registry.gracefulShutdown();
process.exit(0);
});
} catch (error) {
console.error('Failed to start container registry:', error);
}
}
const { LocalRemoteManager } = require('local-remote-file-manager');
class DocumentWatcher {
constructor() {
this.fileManager = new LocalRemoteManager();
this.setupEventHandlers();
}
setupEventHandlers() {
this.fileManager.on('fileAdded', (event) => {
console.log(`New file detected: ${event.filePath}`);
this.processNewFile(event.filePath);
});
this.fileManager.on('fileChanged', (event) => {
console.log(`File modified: ${event.filePath}`);
this.handleFileChange(event.filePath);
});
this.fileManager.on('fileRemoved', (event) => {
console.log(`File deleted: ${event.filePath}`);
this.handleFileRemoval(event.filePath);
});
}
async startWatching(directory) {
const watchResult = await this.fileManager.startWatching(directory, {
recursive: true,
events: ['add', 'change', 'unlink'],
ignoreDotfiles: true
});
console.log(`Started watching: ${watchResult.watchId}`);
return watchResult;
}
async processNewFile(filePath) {
try {
// Auto-compress large files
const fileInfo = await this.fileManager.getFileInfo(filePath);
if (fileInfo.size > 10 * 1024 * 1024) { // 10MB
const compressedPath = filePath + '.zip';
await this.fileManager.compressFile(filePath, compressedPath);
console.log(`Auto-compressed large file: ${compressedPath}`);
}
} catch (error) {
console.error(`Failed to process new file: ${error.message}`);
}
}
}
const { LocalRemoteManager } = require('local-remote-file-manager');
class FileServerApp {
constructor() {
this.fileManager = new LocalRemoteManager();
}
async startTunneledServer(contentDirectory) {
// Create HTTP server with automatic tunnel
const serverInfo = await this.fileManager.createTunneledServer({
port: 3000,
rootDirectory: contentDirectory,
tunnelService: 'serveo' // or 'ngrok'
});
console.log(`🌐 Local server: http://localhost:${serverInfo.port}`);
console.log(`🌍 Public URL: ${serverInfo.tunnelUrl}`);
return serverInfo;
}
async addCustomRoutes(serverId) {
// Add high-priority custom routes that override generic patterns
await this.fileManager.addCustomRoute(
serverId,
'GET',
'/api/status',
(req, res) => {
res.json({ status: 'active', timestamp: new Date().toISOString() });
},
{ priority: 100 } // High priority overrides generic /:bucket/* routes
);
// Add API endpoint with medium priority
await this.fileManager.addCustomRoute(
serverId,
'GET',
'/api/:version/health',
(req, res) => {
res.json({ health: 'ok', version: req.params.version });
},
{ priority: 50 }
);
// Lower priority route (will be handled after higher priority routes)
await this.fileManager.addCustomRoute(
serverId,
'GET',
'/docs/:page',
(req, res) => {
res.send(`Documentation page: ${req.params.page}`);
},
{ priority: 10 }
);
console.log('✅ Custom routes added with priority-based routing');
}
async serveFiles() {
const server = await this.startTunneledServer('./public-files');
// Add custom routes with priority system
await this.addCustomRoutes(server.serverId);
// Monitor server status
setInterval(async () => {
const status = await this.fileManager.getServerStatus(server.serverId);
console.log(`📊 Server status: ${status.status}, Requests: ${status.requestCount}`);
}, 30000);
return server;
}
}
const { S3HttpServer } = require('local-remote-file-manager/src/s3Server');
class S3ObjectStorage {
constructor() {
this.s3Server = new S3HttpServer({
port: 9000,
serverName: 'my-s3-server',
rootDirectory: './s3-storage',
enableAuth: true,
bucketMapping: new Map([
['documents', 'user-docs'],
['images', 'media/images'],
['backups', 'backup-storage']
]),
bucketAccessControl: new Map([
['documents', { read: true, write: true }],
['images', { read: true, write: false }],
['backups', { read: true, write: true }]
])
});
}
async start() {
const serverInfo = await this.s3Server.start();
console.log(`🗄️ S3 Server running on port ${serverInfo.port}`);
// Generate temporary credentials
const credentials = this.s3Server.generateTemporaryCredentials({
permissions: ['read', 'write'],
buckets: ['documents', 'backups'],
expiryMinutes: 60
});
console.log(`🔑 Access Key: ${credentials.accessKey}`);
console.log(`🔑 Secret Key: ${credentials.secretKey}`);
return { serverInfo, credentials };
}
async enableMonitoring() {
// Start real-time monitoring dashboard
this.s3Server.startMonitoringDashboard({
updateInterval: 2000,
showServerStats: true,
showDownloadProgress: true,
showActiveDownloads: true,
showShutdownStatus: true
});
// Setup download analytics
this.s3Server.on('downloadCompleted', (info) => {
console.log(`📊 Download analytics: ${info.key} (${info.bytes} bytes in ${info.duration}ms)`);
// Get real-time dashboard data
const dashboardData = this.s3Server.getMonitoringData();
console.log(`📈 Total downloads: ${dashboardData.downloadStats.totalDownloads}`);
console.log(`⚡ Average speed: ${this.formatSpeed(dashboardData.downloadStats.averageSpeed)}`);
});
return true;
}
formatSpeed(bytesPerSecond) {
if (bytesPerSecond < 1024) return `${bytesPerSecond} B/s`;
if (bytesPerSecond < 1024 * 1024) return `${(bytesPerSecond / 1024).toFixed(1)} KB/s`;
return `${(bytesPerSecond / (1024 * 1024)).toFixed(1)} MB/s`;
}
}
// CLI Equivalent Commands:
// Instead of complex library setup, use simple CLI commands:
// Start S3 server with authentication and bucket mapping
// node index.js serve-s3 ./s3-storage --port 9000 --auth \
// --bucket documents:./s3-storage/user-docs \
// --bucket images:./s3-storage/media/images \
// --bucket backups:./s3-storage/backup-storage \
// --monitor --name my-s3-server
// S3 server with auto-shutdown for container registry
// node index.js serve-s3 ./containers --port 9000 \
// --bucket containers:./containers \
// --auto-shutdown --monitor --name container-registry
// S3 server with tunnel for public access
// node index.js serve-s3 ./public-files --port 8000 \
// --tunnel --tunnel-service ngrok --bucket files:./public-files
// Access via S3-compatible endpoints:
// GET http://localhost:9000/documents/myfile.pdf
// HEAD http://localhost:9000/images/photo.jpg
The CLI provides direct access to all library features with simple commands:
// Library approach (complex setup):
const server = new S3HttpServer({
enableAutoShutdown: true,
shutdownTriggers: ['completion'],
completionShutdownDelay: 30000,
enableRealTimeMonitoring: true
});
await server.start();
// CLI approach (simple command):
// node index.js serve-s3 ./storage --auto-shutdown --shutdown-delay 30000 --monitor
// Multiple operations with library require coordination:
// 1. Set up file watcher
// 2. Set up compression handler
// 3. Set up S3 server
// 4. Coordinate between them
// CLI approach - each command handles coordination:
// Terminal 1: node index.js watch ./documents
// Terminal 2: node index.js serve-s3 ./storage --port 9000 --monitor
// Terminal 3: node index.js compress-batch --directory ./documents --output ./archives/
# Complete container registry setup with CLI:
mkdir -p ./container-storage/containers ./container-storage/registry
# Start S3-compatible container registry
node index.js serve-s3 ./container-storage \
--bucket containers:./container-storage/containers \
--bucket registry:./container-storage/registry \
--port 9000 --auto-shutdown --monitor \
--name container-registry
# Server automatically shuts down after container downloads complete
# Real-time monitoring shows download progress and completion status
# Access containers at: http://localhost:9000/containers/<filename>
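From the client side, pulling from this registry is plain HTTP. A sketch of the URLs a consumer would request, using the file names from the demo above (any HTTP client works; the server must be running for real requests):

```javascript
// Object URLs a hypothetical client would fetch from the registry above.
const base = 'http://localhost:9000/containers';
const objects = ['manifest.json', 'config.json', 'layer1.tar'];

const urls = objects.map((name) => `${base}/${name}`);
urls.forEach((u) => console.log('GET', u));
// With the server running: const res = await fetch(urls[0]);
```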
### Real-Time Monitoring Dashboard
```javascript
const { MonitoringDashboard, DownloadMonitor } = require('local-remote-file-manager');
class LiveMonitoringSystem {
constructor() {
this.dashboard = new MonitoringDashboard({
updateInterval: 1000,
showServerStats: true,
showDownloadProgress: true,
showActiveDownloads: true,
showShutdownStatus: true
});
this.downloadMonitor = new DownloadMonitor({
trackPartialDownloads: true,
progressUpdateInterval: 1000
});
}
async startMonitoring(s3Server) {
// Connect monitoring to S3 server
this.dashboard.connectToServer(s3Server);
this.downloadMonitor.connectToServer(s3Server);
// Start real-time dashboard
this.dashboard.start();
// Setup download tracking
this.downloadMonitor.on('downloadStarted', (info) => {
this.dashboard.addActiveDownload(info);
});
this.downloadMonitor.on('downloadProgress', (info) => {
this.dashboard.updateDownloadProgress(info.downloadId, info);
});
this.downloadMonitor.on('downloadCompleted', (info) => {
this.dashboard.completeDownload(info.downloadId, info);
});
// Example dashboard output:
/*
+------------------------------------------------------------------------------------------------------------------------------+
| S3 Object Storage Server |
+------------------------------------------------------------------------------------------------------------------------------+
|Status: RUNNING Uptime: 45s|
|Port: 9000 Public URL: http://localhost:9000|
+------------------------------------------------------------------------------------------------------------------------------+
|Downloads Progress |
|██████████████████░░░░░░░░░░░░ 3/5 (60%) |
|Active Downloads: 2 Completed: 3 Failed: 0|
|Speed: 1.2 MB/s Total: 2.1 MB |
+------------------------------------------------------------------------------------------------------------------------------+
|Active Downloads |
|▶ layer-2.tar (1.2 MB) ████████████████████████░░░░░░ 80% @ 450 KB/s |
|▶ layer-3.tar (512 KB) ██████████████░░░░░░░░░░░░░░░░ 45% @ 230 KB/s |
+------------------------------------------------------------------------------------------------------------------------------+
|Auto-Shutdown: ON Trigger: Completion + 30s|
|Next Check: 00:00:05 Status: Monitoring|
+------------------------------------------------------------------------------------------------------------------------------+
*/
console.log('📊 Real-time monitoring dashboard started');
return true;
}
async stopMonitoring() {
this.dashboard.stop();
this.downloadMonitor.stop();
console.log('🛑 Monitoring stopped');
}
async getAnalytics() {
return {
dashboard: this.dashboard.getCurrentData(),
downloads: this.downloadMonitor.getStatistics(),
performance: this.downloadMonitor.getPerformanceMetrics()
};
}
}
```

The same monitoring hooks can also live directly on the `S3ObjectStorage` class shown earlier:

```javascript
async enableMonitoring() {
// Start real-time monitoring dashboard
this.s3Server.startRealTimeMonitoring({ interval: 1000, enableConsole: true });
// Track download events
this.s3Server.on('download:started', (info) => {
console.log(`📥 Download started: ${info.bucket}/${info.key}`);
});
this.s3Server.on('download:completed', (info) => {
console.log(`✅ Download completed: ${info.bucket}/${info.key} (${info.size} bytes)`);
});
}

async getAnalytics() {
const analytics = this.s3Server.generateDownloadAnalytics({ includeDetails: true });
console.log(`📊 Total Downloads: ${analytics.summary.totalDownloads}`);
console.log(`🚀 Average Speed: ${analytics.performance.averageSpeed}`);
console.log(`⏱️ Server Uptime: ${analytics.performance.uptime}s`);
return analytics;
}

// Usage
const storage = new S3ObjectStorage();
await storage.start();
await storage.enableMonitoring();

// Access via S3-compatible endpoints:
// GET http://localhost:9000/documents/myfile.pdf
// HEAD http://localhost:9000/images/photo.jpg
```
### Enhanced File Streaming
```javascript
const { FileStreamingUtils, DownloadTracker } = require('local-remote-file-manager');
class StreamingFileServer {
async serveFileWithProgress(filePath, response, rangeHeader = null) {
try {
// Get file information
const fileInfo = await FileStreamingUtils.getFileInfo(filePath);
console.log(`📄 Serving: ${fileInfo.name} (${fileInfo.size} bytes)`);
// Create download tracker
const tracker = new DownloadTracker(fileInfo.size);
// Handle range request if specified
let streamOptions = {};
if (rangeHeader) {
const range = FileStreamingUtils.parseRangeHeader(rangeHeader, fileInfo.size);
if (range.isValid) {
streamOptions = { start: range.start, end: range.end };
response.status = 206; // Partial Content
response.setHeader('Content-Range',
FileStreamingUtils.formatContentRange(range.start, range.end, fileInfo.size)
);
}
}
// Set response headers
response.setHeader('Content-Type', FileStreamingUtils.getMimeType(filePath));
response.setHeader('Content-Length', streamOptions.end ?
(streamOptions.end - streamOptions.start + 1) : fileInfo.size);
response.setHeader('ETag', FileStreamingUtils.generateETag(fileInfo));
response.setHeader('Last-Modified', fileInfo.lastModified.toUTCString());
response.setHeader('Accept-Ranges', 'bytes');
// Create and pipe stream
const stream = await FileStreamingUtils.createReadStream(filePath, streamOptions);
stream.on('data', (chunk) => {
tracker.updateProgress(chunk.length);
const progress = tracker.getProgress();
console.log(`📊 Progress: ${progress.percentage}% (${progress.speed}/s)`);
});
stream.on('end', () => {
console.log(`✅ Transfer complete: ${filePath}`);
});
stream.pipe(response);
} catch (error) {
console.error(`❌ Streaming error: ${error.message}`);
response.status = 500;
response.end('Internal Server Error');
}
}
}
```
const { LocalRemoteManager } = require('local-remote-file-manager');

class BatchFileProcessor {
  constructor() {
    this.fileManager = new LocalRemoteManager();
  }

  async backupDocuments(sourceDirectory, backupDirectory) {
    // List all files
    const files = await this.fileManager.listFiles(sourceDirectory, { recursive: true });

    // Filter for documents
    const documents = files.filter(file =>
      /\.(pdf|doc|docx|txt|md)$/i.test(file.name)
    );
    console.log(`Found ${documents.length} documents to backup`);

    // Batch compress all documents
    const compressionResults = await this.fileManager.compressMultipleFiles(
      documents.map(doc => doc.path),
      backupDirectory,
      {
        format: 'zip',
        compressionLevel: 6,
        preserveStructure: true
      }
    );

    // Generate temporary share URLs for all backups
    const shareResults = await Promise.all(
      compressionResults.successful.map(async (result) => {
        return await this.fileManager.createShareableUrl(result.outputPath, {
          expiresIn: '24h',
          downloadLimit: 5
        });
      })
    );

    return {
      processed: documents.length,
      compressed: compressionResults.successful.length,
      failed: compressionResults.failed.length,
      shared: shareResults.length,
      shareUrls: shareResults.map(r => r.shareableUrl)
    };
  }

  async syncDirectories(sourceDir, targetDir) {
    const sourceFiles = await this.fileManager.listFiles(sourceDir, { recursive: true });
    const targetFiles = await this.fileManager.listFiles(targetDir, { recursive: true });

    const results = {
      copied: [],
      updated: [],
      errors: []
    };

    for (const file of sourceFiles) {
      try {
        const targetPath = file.path.replace(sourceDir, targetDir);
        const targetExists = targetFiles.some(t => t.path === targetPath);

        if (!targetExists) {
          await this.fileManager.copyFile(file.path, targetPath);
          results.copied.push(file.path);
        } else {
          const sourceInfo = await this.fileManager.getFileInfo(file.path);
          const targetInfo = await this.fileManager.getFileInfo(targetPath);

          if (sourceInfo.lastModified > targetInfo.lastModified) {
            await this.fileManager.copyFile(file.path, targetPath);
            results.updated.push(file.path);
          }
        }
      } catch (error) {
        results.errors.push({ file: file.path, error: error.message });
      }
    }

    return results;
  }
}
Run comprehensive tests for all features:
npm test # Interactive test selection
npm run test:all # Test all providers sequentially
npm run test:cli # Test CLI integration (NEW)
npm run test:local # Test local file operations
npm run test:watch # Test file watching
npm run test:compression # Test compression features
npm run test:tunnel # Test tunneling and sharing
npm run test:http # Test HTTP server provider
npm run test:streaming # Test enhanced file streaming
npm run test:s3 # Test S3-compatible object storage
npm run test:monitor # Test auto-shutdown & monitoring
Overall Test Results
=======================
Total Tests: 225
Passed: 225 (100%)
Failed: 0 (0%)
Skipped: 0 (0%)
Success Rate: 100%

Performance Metrics
=====================
Average Test Duration: 15ms
Fastest Category: Local Operations (2ms avg)
Slowest Category: CLI Integration (1000ms avg)
Total Test Suite Time: ~5 minutes

All features are production-ready!
Validate functionality with built-in demos:
npm run demo:cli # CLI integration demo (NEW)
npm run demo:basic # Basic file operations demo
npm run demo:watch # File watching demonstration
npm run demo:compression # Compression feature demo
npm run demo:tunnel # File sharing demo
npm run demo:s3 # S3 server demo (NEW)
npm run demo:monitor # Auto-shutdown & monitoring demo (NEW)
uploadFile(sourcePath, targetPath) - Copy a file to the target location (alias for local copy)
downloadFile(remotePath, localPath) - Download/copy a file from a remote location (alias for local copy)
getFileInfo(filePath) - Get file metadata, size, timestamps, and permissions
listFiles(directoryPath, options) - List files with recursive and filtering options
deleteFile(filePath) - Delete a file with error handling
copyFile(sourcePath, destinationPath) - Copy a file to a new location
moveFile(sourcePath, destinationPath) - Move a file to a new location
renameFile(filePath, newName) - Rename a file in the same directory
searchFiles(pattern, options) - Search for files by name pattern with recursive support
createFolder(folderPath) - Create a new folder with recursive support
listFolders(directoryPath) - List only directories in a path
deleteFolder(folderPath, recursive) - Delete a folder with optional recursive deletion
renameFolder(folderPath, newName) - Rename a folder in the same parent directory
copyFolder(sourcePath, destinationPath) - Copy an entire folder structure
moveFolder(sourcePath, destinationPath) - Move an entire folder structure
getFolderInfo(folderPath) - Get folder metadata including item count and total size

startWatching(path, options) - Start monitoring file/directory changes. Options: recursive, events, ignoreDotfiles, debounceMs
stopWatching(watchId | path) - Stop a specific watcher by ID or path
stopAllWatching() - Stop all active watchers with cleanup
listActiveWatchers() - Get an array of active watcher objects
getWatcherInfo(watchId) - Get detailed watcher information including event count
getWatchingStatus() - Get overall watching system status and statistics

compressFile(inputPath, outputPath, options) - Compress a file or directory. Options: format (zip, tar.gz), level (1-9), includeRoot
decompressFile(archivePath, outputDirectory, options) - Extract archive contents. Options: format, overwrite, preservePermissions
compressMultipleFiles(fileArray, outputDirectory, options) - Batch compression with progress
decompressMultipleFiles(archiveArray, outputDirectory, options) - Batch extraction
getCompressionStatus() - Get compression system status and supported formats
getCompressionProvider() - Access the compression provider directly

createTunnel(options) - Create a new tunnel connection. Options: proto (http, https), subdomain, authToken, useExternalServer, localPort
  useExternalServer: true - Forward the tunnel to an existing HTTP server instead of creating an internal server
  localPort: number - Specify the external server port to forward tunnel traffic to
destroyTunnel(tunnelId) - Destroy a specific tunnel connection
createTemporaryUrl(filePath, options) - Generate a temporary shareable URL. Options: permissions, expiresAt, downloadLimit
revokeTemporaryUrl(urlId) - Revoke access to a shared URL
listActiveUrls() - Get a list of active temporary URLs
getTunnelStatus() - Get tunneling system status including active tunnels
getTunnelProvider() - Access the tunnel provider directly

createHttpServer(options) - Create an HTTP file server. Options: port, rootDirectory, enableTunnel, tunnelOptions
createTunneledServer(options) - Create an HTTP server with automatic tunnel integration. Options: port, rootDirectory, tunnelService (default: 'serveo')
addCustomRoute(serverId, method, path, handler, options) - Add a custom route with priority support. Options: priority (higher numbers = higher priority, overrides generic routes like /:bucket/*)
stopServer(serverId) - Stop a specific HTTP server
stopAllServers() - Stop all active HTTP servers
getServerStatus(serverId) - Get HTTP server status and information
listActiveServers() - Get a list of all active HTTP servers
getTunnelUrl(serverId) - Get the tunnel URL for a tunneled server
stopTunnel(serverId) - Stop the tunnel for a specific server
getHttpServerProvider() - Access the HTTP server provider directly

createReadStream(filePath, options) - Create a range-aware file stream. Options: start, end, encoding, chunkSize
getFileInfo(filePath) - Get detailed file metadata with MIME type
getMimeType(filePath) - Get MIME type with 40+ file type support
parseRangeHeader(rangeHeader, fileSize) - Parse HTTP range headers
generateETag(fileStats) - Generate an ETag for cache validation
formatContentRange(start, end, total) - Format Content-Range headers
DownloadTracker - Track download progress with speed calculation

createS3Server(options) - Create an S3-compatible object storage server. Options: port, serverName, rootDirectory, bucketMapping, enableAuth
generateTemporaryCredentials(options) - Generate temporary AWS-style credentials. Options: permissions, buckets, expiryMinutes
mapBucketKeyToPath(bucket, key) - Map an S3 bucket/key to a file path
validateBucketAccess(bucket) - Check bucket access permissions
getDownloadStats() - Get download statistics and metrics
generateDownloadAnalytics(options) - Generate an analytics report
getDownloadDashboard() - Get real-time dashboard data
startRealTimeMonitoring(options) - Start the live monitoring console
stopRealTimeMonitoring() - Stop real-time monitoring
enableAutoShutdown(options) - Enable auto-shutdown with configurable triggers. Options: shutdownTriggers, completionShutdownDelay, maxIdleTime, maxTotalTime
setExpectedDownloads(downloads) - Configure expected downloads for completion detection; takes an array of { bucket, key, size } objects
getDownloadCompletionStatus() - Get current download completion status
scheduleShutdown(trigger, delay) - Manually schedule a server shutdown
cancelScheduledShutdown() - Cancel a previously scheduled shutdown
startMonitoringDashboard(options) - Start the real-time visual monitoring dashboard. Options: updateInterval, showServerStats, showDownloadProgress, showActiveDownloads
stopMonitoringDashboard() - Stop the monitoring dashboard
getMonitoringData() - Get the current monitoring data snapshot
addDownloadEventListener(event, callback) - Listen to download events: downloadStarted, downloadCompleted, downloadFailed, allDownloadsComplete
addShutdownEventListener(event, callback) - Listen to shutdown events: shutdownScheduled, shutdownWarning, shutdownCancelled
enableEventNotifications(channels) - Enable event notifications on the given channels: ['console', 'file', 'webhook']
configureWebhookNotifications(url, options) - Configure webhook notifications. Options: retryAttempts, timeout, headers
getEventHistory(options) - Get event history with filtering. Options: startDate, endDate, eventTypes, limit

testConnection(providerName) - Test a specific provider connection and capabilities
validateProvider(providerName) - Validate provider configuration
getSystemInfo() - Get comprehensive system information
shutdown() - Gracefully shut down all providers and clean up resources

The LocalRemoteManager extends EventEmitter and provides these events:
fileEvent - File system changes (add, change, unlink, addDir, unlinkDir)
compressionProgress - Compression operation progress updates
decompressionProgress - Decompression operation progress updates
tunnelProgress - Tunnel creation/destruction progress
urlCreated - Temporary URL creation events
urlRevoked - URL revocation events
fileAccessed - File access via temporary URLs
tunnelError - Tunnel-related errors
downloadStarted - Download operation started
downloadProgress - Real-time download progress updates
downloadCompleted - Download operation completed
downloadFailed - Download operation failed
allDownloadsComplete - All expected downloads completed
shutdownScheduled - Auto-shutdown has been scheduled
shutdownWarning - Shutdown warning (time remaining)
shutdownCancelled - Scheduled shutdown was cancelled
monitoringEnabled - Real-time monitoring started
monitoringDisabled - Real-time monitoring stopped
dashboardUpdated - Monitoring dashboard data updated

load() - Load configuration from environment variables and defaults
get(key) - Get a configuration value by key
set(key, value) - Set a configuration value
validate() - Validate the current configuration and return validation results
save() - Save the configuration to persistent storage
getLocalConfig() - Get local provider configuration (paths, permissions)
getWatchConfig() - Get file watching configuration (debounce, patterns)
getCompressionConfig() - Get compression configuration (formats, levels)
getTunnelConfig() - Get tunneling configuration (services, fallback)

Each provider implements a consistent interface; the TunnelProvider additionally supports the useExternalServer option and the localPort parameter.

All methods throw structured errors with:
code - Error code (ENOENT, EACCES, etc.)
message - Human-readable error description
path - File/directory path related to the error (when applicable)
provider - Provider name that generated the error

Tunnel Integration: The tunnel system is designed to forward traffic to external HTTP servers for consistent content serving.
The TunnelProvider integrates with HTTP servers by forwarding tunnel traffic to existing server ports rather than creating separate internal servers:
// Standard tunnel forwarding approach
const httpServer = await httpServerProvider.createTunneledServer({
port: 4005,
rootDirectory: './content',
tunnelService: 'serveo' // Tunnel forwards to port 4005
});
// Manual tunnel configuration with forwarding
const tunnel = await tunnelProvider.createTunnel({
useExternalServer: true, // Don't create internal server
localPort: 4005 // Forward to existing server on port 4005
});
For file serving with tunnel access:
// Recommended approach for public file serving
const server = await manager.createTunneledServer({
port: 3000,
rootDirectory: './public',
tunnelService: 'serveo'
});
const tunnelUrl = server.tunnelUrl;
createTunnel(options) - Create a tunnel with external server support:
const tunnel = await tunnelProvider.createTunnel({
// Basic tunnel options
subdomain: 'myapp',
service: 'serveo', // 'serveo', 'pinggy', 'localtunnel'
// External server forwarding
useExternalServer: true, // Don't create internal HTTP server
localPort: 4005, // Forward tunnel to existing server on this port
// Additional options
authToken: 'optional',
region: 'us'
});
Return Value:
{
tunnelId: 'tunnel_abc123',
url: 'https://subdomain.serveo.net',
service: 'serveo',
port: 4005, // Reflects target port when using external server
createdAt: '2025-08-13T...',
useExternalServer: true, // Indicates forwarding mode
targetPort: 4005 // Shows which external port is being forwarded to
}
All tunnel creation methods accept a targetPort parameter:
// Method signatures with port forwarding support
await createServiceTunnel(serviceName, tunnelId, options, targetPort)
await createPinggyTunnel(tunnelId, options, targetPort)
await createServeoTunnel(tunnelId, options, targetPort)
await createLocalTunnel(tunnelId, options, targetPort)
// Default configuration
{
service: 'serveo', // Primary tunnel service
fallbackServices: 'serveo,localtunnel', // Fallback order
autoFallback: true,
useExternalServer: false // Default to internal server creation
}
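Given the default configuration above, the order in which tunnel services would be attempted can be sketched as a simple ordering computation. tunnelAttemptOrder is an illustrative helper name, not part of the package:

```javascript
// Illustrative sketch: derive the attempt order for tunnel services from
// the default configuration shown above. Assumes fallbackServices is the
// comma-separated string from the config, and that the primary service
// is tried first without being retried as a fallback.
function tunnelAttemptOrder(config) {
  if (!config.autoFallback) return [config.service];
  const fallbacks = config.fallbackServices
    .split(',')
    .map(s => s.trim())
    .filter(s => s && s !== config.service); // primary goes first, once
  return [config.service, ...fallbacks];
}

const order = tunnelAttemptOrder({
  service: 'serveo',
  fallbackServices: 'serveo,localtunnel',
  autoFallback: true
});
console.log(order); // [ 'serveo', 'localtunnel' ]
```

With autoFallback disabled, only the primary service would be attempted and a failure would surface immediately as a tunnelError event.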
// File info result
{
name: string,
path: string,
size: number,
isDirectory: boolean,
createdAt: Date,
modifiedAt: Date,
permissions: string
}
// Operation result
{
name: string,
path: string,
size: number,
completedAt: Date
}
// Compression result
{
operationId: string,
name: string,
format: string,
size: number,
originalSize: number,
compressionRatio: number,
level: number,
completedAt: Date
}
// Batch result
{
successful: Array,
failed: Array,
summary: {
total: number,
successful: number,
failed: number,
successRate: string
}
}
// Tunnel result
{
tunnelId: string,
url: string,
service: string,
port: number,
createdAt: Date,
useExternalServer?: boolean, // Indicates if forwarding to external server
targetPort?: number // External server port being forwarded to
}
// HTTP Server result
{
serverId: string,
port: number,
rootDirectory: string,
url: string,
status: 'running' | 'stopped',
tunnelEnabled: boolean,
tunnelUrl?: string,
tunnelService?: string,
createdAt: Date,
requestCount: number
}
// Temporary URL result
{
urlId: string,
shareableUrl: string,
accessToken: string,
expiresAt: Date,
permissions: Array,
filePath: string
}
FAQs
The npm package @kadi.build/local-remote-file-manager-ability receives a total of 0 weekly downloads. As such, @kadi.build/local-remote-file-manager-ability popularity was classified as not popular.
We found that @kadi.build/local-remote-file-manager-ability demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 2 open source maintainers collaborating on the project.