# Local & Remote File Manager
A comprehensive Node.js CLI tool and library for local file management, with real-time file watching, compression/decompression, secure temporary file sharing via tunneling, and S3-compatible object storage. It provides powerful local operations today and is designed to extend to remote server operations in future releases.

## Features
- **Complete Local File Management**: Full CRUD operations for files and folders with advanced path handling
- **Real-time File Watching**: Monitor file and directory changes with event filtering and callbacks
- **Advanced Compression**: ZIP and TAR.GZ compression/decompression with progress tracking
- **Secure File Sharing**: Temporary URL generation with tunnel-based sharing (ngrok/localtunnel integration)
- **HTTP Server Provider**: Complete HTTP server management with static file serving and tunnel integration
- **Enhanced File Streaming**: Optimized file streaming with range requests and progress tracking
- **S3-Compatible Object Storage**: Full S3 endpoints with authentication, bucket mapping, and analytics
- **Real-Time Monitoring Dashboard**: Live progress tracking with visual dashboard and download analytics
- **Auto-Shutdown Management**: Intelligent shutdown triggers based on download completion or timeout
- **Event Notification System**: Comprehensive event system with console, file, and webhook notifications
- **Production-Ready CLI**: Complete command-line interface for all features with interactive help
- **High Performance**: Efficient memory usage, streaming for large files, and batch operations
- **CLI & Library**: Use as a command-line tool or integrate as a Node.js library
- **Robust Error Handling**: Comprehensive error handling and retry logic
- **Progress Tracking**: Real-time progress for long-running operations
- **Path Management**: Automatic folder creation and path normalization
- **Comprehensive Testing**: Full test suite with 225/225 tests passing (100% success rate)
## Installation

### As a CLI Tool

```shell
git clone <repository-url>
cd local-remote-file-manager-ability
npm install
npm run setup
```

### Global Installation

```shell
npm install -g local-remote-file-manager
```

### As a Node.js Library

```shell
npm install local-remote-file-manager
```

```javascript
const { createManager, compressFile } = require('local-remote-file-manager');

const manager = await createManager();
const files = await manager.getProvider('local').list('./');
await compressFile('./my-folder', './archive.zip');
```
See USAGE.md for complete examples and INTEGRATION-EXAMPLE.md for real-world integration patterns.
## Quick Start

### CLI Quick Start

Install dependencies:

```shell
npm install
```

Test your setup:

```shell
npm test
npm run test:cli
npm run test:local
npm run test:s3
```

Basic file operations:

```shell
node index.js copy --source document.pdf --target ./uploads/document.pdf
node index.js upload --file data.zip --target ./uploads/
node index.js list --directory ./uploads
```

Start an S3-compatible file server:

```shell
node index.js serve-s3 ./files --port 5000 --auth
```

S3 server with bucket mapping:

```shell
node index.js serve-s3 ./storage \
  --bucket containers:./storage/containers \
  --bucket images:./storage/images \
  --port 9000 --auth --tunnel
```

File server with auto-shutdown:

```shell
node index.js serve-s3 ./content --port 8000 \
  --auto-shutdown --shutdown-delay 30000
```

Start watching a directory:

```shell
node index.js watch ./documents
```

Compress files:

```shell
node index.js compress --file ./large-file.txt --output ./compressed.zip
```

Share a file temporarily:

```shell
node index.js share ./document.pdf --expires 30m
```
## Development Status

### Completed Phases (Production Ready)

| Phase | Status | Description | Tests |
|-------|--------|-------------|-------|
| Phase 1: Foundation & Local CRUD | Complete | File/folder CRUD, path management, search operations | 33/33 passing (100%) |
| Phase 2: File/Directory Watching | Complete | Real-time monitoring, event filtering, recursive watching | 24/24 passing (100%) |
| Phase 3: Compression/Decompression | Complete | ZIP/TAR.GZ support, batch operations, progress tracking | 30/30 passing (100%) |
| Phase 4: Tunneling & Temp URLs | Complete | Secure file sharing, temporary URLs, multiple tunnel services | 35/35 passing (100%) |
| Phase 5: HTTP Server Provider | Complete | HTTP server management, static file serving, tunnel integration | 22/22 passing (100%) |
| Phase 6: File Streaming Enhancement | Complete | Enhanced streaming, range requests, MIME detection, progress tracking | 12/12 passing (100%) |
| Phase 7: S3 Object Storage | Complete | S3-compatible endpoints, authentication, bucket/key mapping, analytics | 31/31 passing (100%) |
| Phase 8: Auto-Shutdown & Monitoring | Complete | Real-time monitoring dashboard, auto-shutdown triggers, event notifications | 22/22 passing (100%) |
| Phase 9: CLI Integration | Complete | Complete CLI interface for all features, S3 server commands, validation | 16/16 passing (100%) |

### Overall Project Health

- Total Tests: 225/225 automated tests passing
- Pass Rate: 100% across all implemented features
- Code Coverage: Comprehensive test coverage for all providers
- Performance: Optimized for large files and high-volume operations
- Stability: Production-ready with full CLI integration
## CLI Usage

### System Commands

System information and validation:

```shell
node index.js --help
node index.js test
node index.js test --provider local
node index.js validate
node index.js info
```
### File Operations

Basic file management:

```shell
node index.js upload --file document.pdf --target ./uploads/document.pdf
node index.js copy --source ./file.pdf --target ./backup/file.pdf
node index.js download --source ./uploads/document.pdf --target ./downloads/
node index.js move --source ./file.pdf --target ./archive/file.pdf
node index.js rename --file ./old-name.pdf --name new-name.pdf
node index.js delete --file ./old-file.pdf --yes
node index.js list --directory ./uploads
node index.js list --directory ./uploads --recursive
node index.js search --query "*.pdf" --directory ./uploads
```

Folder operations:

```shell
node index.js mkdir --directory ./new-folder
node index.js ls-folders --directory ./uploads
node index.js rmdir --directory ./old-folder --recursive --yes
```
### File Watching

Start and manage file watching:

```shell
node index.js watch ./documents
node index.js watch ./file.txt --no-recursive
node index.js watch ./project --events add,change
node index.js watch-list
node index.js watch-list --verbose
node index.js watch-status
node index.js watch-stop ./documents
node index.js watch-stop --all
```
### Compression Operations

Compress and decompress files:

```shell
node index.js compress --file ./document.pdf --output ./compressed.zip
node index.js compress --file ./folder --output ./archive.tar.gz --format tar.gz
node index.js compress --file ./data --output ./backup.zip --level 9
node index.js decompress --file ./archive.zip --directory ./extracted/
node index.js decompress --file ./backup.tar.gz --directory ./restored/ --overwrite
node index.js compress-batch --directory ./files --output ./archives/
node index.js decompress-batch --directory ./archives --output ./extracted/
node index.js compression-status
```
### File Sharing & Tunneling

Share files temporarily:

```shell
node index.js share ./document.pdf
node index.js share ./folder --expires 30m
node index.js share ./file.zip --expires 2h
node index.js share ./project --multi-download
node index.js share ./data.zip --expires 24h --keep-alive --no-auto-shutdown
node index.js tunnel-status
node index.js tunnel-cleanup
```
### S3-Compatible File Server

Start an S3 server (core feature):

```shell
node index.js serve-s3 ./storage --port 5000
node index.js serve-s3 ./storage --port 5000 --auth

node index.js serve-s3 ./storage \
  --bucket containers:./storage/containers \
  --bucket images:./storage/images \
  --bucket docs:./storage/documents \
  --port 9000 --auth

node index.js serve-s3 ./content \
  --port 8000 --tunnel --tunnel-service ngrok \
  --name my-public-server

node index.js serve-s3 ./data \
  --port 7000 --monitor --interactive \
  --name monitoring-server
```

S3 server with auto-shutdown:

```shell
node index.js serve-s3 ./container-storage \
  --port 9000 --auto-shutdown \
  --shutdown-delay 30000 --max-idle 600000

node index.js serve-s3 ./storage \
  --port 5000 --background --name bg-server

node index.js serve-s3 ./containers \
  --bucket containers:./containers \
  --bucket registry:./registry \
  --port 9000 --auto-shutdown \
  --name container-registry
```

S3 server management:

```shell
node index.js server-status
node index.js server-status --json
node index.js server-stop --all
node index.js server-stop --name my-server
node index.js server-cleanup
```
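The repeated `--bucket name:path` options above map bucket names onto directories. As a rough sketch of what such parsing involves (`parseBucketOptions` is a hypothetical helper, not part of the tool's API):

```javascript
// Parse repeated --bucket name:path values into a Map of bucket names
// to directories. Illustrative only; the CLI's actual parser is not
// shown in this document.
function parseBucketOptions(values) {
  const mapping = new Map();
  for (const value of values) {
    const idx = value.indexOf(':');
    if (idx <= 0 || idx === value.length - 1) {
      throw new Error(`Invalid bucket mapping: ${value}`);
    }
    mapping.set(value.slice(0, idx), value.slice(idx + 1));
  }
  return mapping;
}

const buckets = parseBucketOptions([
  'containers:./storage/containers',
  'images:./storage/images'
]);
console.log(buckets.get('containers')); // → ./storage/containers
```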
### Real-Time Monitoring
### NPM Scripts for Development

```shell
npm test
npm run test:cli
npm run test:local
npm run test:watch
npm run test:compression
npm run test:tunnel
npm run test:http
npm run test:streaming
npm run test:s3
npm run test:monitor
npm run demo:cli
npm run demo:basic
npm run demo:watch
npm run demo:compression
npm run demo:tunnel
npm run demo:container-registry
npm run demo:container-registry-full
npm run demo:container-registry-test
npm run serve-s3
npm run server-status
npm run server-stop
npm run clean
npm run clean:tests
```
## Common Use Cases

### Container Registry Demo (Quick Start)

```shell
npm run demo:container-registry
npm run demo:container-registry-full
npm run demo:container-registry-test
```

This demo showcases:

- **Container Export**: Exports Podman/Docker containers to registry format
- **Public Tunneling**: Creates accessible HTTPS URLs via ngrok
- **Secure Access**: Generates temporary AWS-style credentials
- **Real-time Monitoring**: Shows download progress and statistics
- **Auto-shutdown**: Automatically cleans up when downloads complete

See the Container Registry Demo for complete documentation.
**Container Registry Setup:**

```shell
node index.js serve-s3 ./container-storage \
  --bucket containers:./container-storage/containers \
  --bucket registry:./container-storage/registry \
  --port 9000 --auto-shutdown \
  --name container-registry
```

**Public File Sharing:**

```shell
node index.js serve-s3 ./public-files \
  --port 8000 --tunnel --tunnel-service ngrok \
  --bucket files:./public-files \
  --name public-share

node index.js share ./important-file.zip \
  --expires 24h --multi-download
```

**Development File Server:**

```shell
node index.js serve-s3 ./dev-content \
  --port 3000 --monitor --interactive \
  --bucket assets:./dev-content/assets \
  --bucket uploads:./dev-content/uploads
```

**Automated Backup System:**

```shell
node index.js watch ./documents &
node index.js serve-s3 ./backups \
  --bucket daily:./backups/daily \
  --bucket weekly:./backups/weekly \
  --port 9090 --auth
```
## Library Usage

### Installation as a Node.js Module

```shell
npm install local-remote-file-manager
```
### Quick Start: Factory Functions

The library provides convenient factory functions for quick setup:

```javascript
const {
  createManager,
  createS3Server,
  compressFile,
  watchDirectory
} = require('local-remote-file-manager');

async function quickStart() {
  const manager = await createManager();
  const local = manager.getProvider('local');
  const files = await local.list('./my-directory');

  await compressFile('./my-folder', './archive.zip');

  const watcher = await watchDirectory('./watched-folder');
  watcher.on('change', (data) => {
    console.log('File changed:', data.path);
  });
}
```
### Basic Integration

```javascript
const { LocalRemoteManager, ConfigManager } = require('local-remote-file-manager');

class FileManagementApp {
  constructor() {
    this.config = new ConfigManager();
    this.fileManager = null;
  }

  async initialize() {
    await this.config.load();
    this.fileManager = new LocalRemoteManager(this.config);
    this.fileManager.on('fileEvent', (data) => {
      console.log('File event:', data.type, data.path);
    });
  }

  async processFile(inputPath, outputPath) {
    const local = this.fileManager.getProvider('local');
    const compression = this.fileManager.getCompressionProvider();
    await local.copy(inputPath, outputPath);
    const result = await compression.compress(
      outputPath,
      outputPath.replace(/\.[^/.]+$/, '.zip')
    );
    return result;
  }
}
```
### S3-Compatible Server

```javascript
const { createS3Server } = require('local-remote-file-manager');

async function createFileServer() {
  const server = createS3Server({
    port: 5000,
    rootDirectory: './storage',
    bucketMapping: new Map([
      ['public', './public-files'],
      ['private', './private-files']
    ]),
    authentication: {
      enabled: true,
      tempCredentials: true
    },
    monitoring: {
      enabled: true,
      dashboard: true
    },
    autoShutdown: {
      enabled: true,
      timeout: 3600000
    }
  });

  server.on('request', (data) => {
    console.log(`${data.method} ${data.path}`);
  });
  server.on('download', (data) => {
    console.log(`Downloaded: ${data.path} (${data.size} bytes)`);
  });

  await server.start();
  console.log('S3 server running on http://localhost:5000');
  return server;
}
```
The same API supports a container registry with auto-shutdown and real-time monitoring:

```javascript
const { S3HttpServer } = require('local-remote-file-manager/src/s3Server');

class ContainerRegistryServer {
  constructor() {
    this.server = null;
  }

  async startWithAutoShutdown() {
    this.server = new S3HttpServer({
      port: 9000,
      serverName: 'container-registry',
      rootDirectory: './container-storage',
      enableAutoShutdown: true,
      shutdownOnCompletion: true,
      shutdownTriggers: ['completion', 'timeout', 'manual'],
      completionShutdownDelay: 30000,
      maxIdleTime: 600000,
      maxTotalTime: 3600000,
      enableRealTimeMonitoring: true,
      enableDownloadTracking: true,
      monitoringUpdateInterval: 2000,
      enableEventNotifications: true,
      notificationChannels: ['console', 'file'],
      enableAuth: false,
      bucketMapping: new Map([
        ['containers', 'container-files'],
        ['registry', 'registry-data']
      ])
    });

    this.setupEventListeners();
    const result = await this.server.start();
    console.log(`Container registry started: ${result.localUrl}`);
    await this.configureExpectedDownloads();
    return result;
  }

  setupEventListeners() {
    this.server.on('downloadStarted', (info) => {
      console.log(`Download started: ${info.bucket}/${info.key} (${this.formatBytes(info.fileSize)})`);
    });
    this.server.on('downloadCompleted', (info) => {
      console.log(`Download completed: ${info.bucket}/${info.key} in ${info.duration}ms`);
    });
    this.server.on('downloadFailed', (info) => {
      console.log(`Download failed: ${info.bucket}/${info.key} - ${info.error}`);
    });
    this.server.on('allDownloadsComplete', (info) => {
      console.log(`All downloads complete! Auto-shutdown will trigger in ${info.shutdownDelay / 1000}s`);
    });
    this.server.on('shutdownScheduled', (info) => {
      console.log(`Server shutdown scheduled: ${info.reason} (${Math.round(info.delay / 1000)}s)`);
    });
    this.server.on('shutdownWarning', (info) => {
      console.log(`Server shutting down in ${Math.round(info.timeRemaining / 1000)} seconds`);
    });
  }

  async configureExpectedDownloads() {
    const expectedDownloads = [
      { bucket: 'containers', key: 'manifest.json', size: 1024 },
      { bucket: 'containers', key: 'config.json', size: 512 },
      { bucket: 'containers', key: 'layer-1.tar', size: 1048576 },
      { bucket: 'containers', key: 'layer-2.tar', size: 2097152 },
    ];
    const result = this.server.setExpectedDownloads(expectedDownloads);
    if (result.success) {
      console.log(`Configured ${result.expectedCount} expected downloads (${this.formatBytes(result.totalBytes)} total)`);
    }
  }

  formatBytes(bytes) {
    if (bytes === 0) return '0 B';
    const k = 1024;
    const sizes = ['B', 'KB', 'MB', 'GB'];
    const i = Math.floor(Math.log(bytes) / Math.log(k));
    return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
  }

  async getMonitoringData() {
    return {
      serverStatus: this.server.getStatus(),
      downloadStats: this.server.getDownloadStats(),
      dashboardData: this.server.getMonitoringData(),
      completionStatus: this.server.getDownloadCompletionStatus()
    };
  }

  async gracefulShutdown() {
    console.log('Initiating graceful shutdown...');
    await this.server.stop({ graceful: true, timeout: 30000 });
    console.log('Server stopped gracefully');
  }
}

async function runContainerRegistry() {
  const registry = new ContainerRegistryServer();
  try {
    await registry.startWithAutoShutdown();
    process.on('SIGINT', async () => {
      await registry.gracefulShutdown();
      process.exit(0);
    });
  } catch (error) {
    console.error('Failed to start container registry:', error);
  }
}
```
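Completion-based auto-shutdown rests on comparing finished downloads against the expected set configured via `setExpectedDownloads`. A minimal, self-contained tracker sketch (not the server's actual internals):

```javascript
// Track expected downloads and report when all of them have completed,
// mirroring the setExpectedDownloads / allDownloadsComplete flow above.
class CompletionTracker {
  constructor(expected) {
    this.expected = new Set(expected.map(d => `${d.bucket}/${d.key}`));
    this.completed = new Set();
  }
  markCompleted(bucket, key) {
    const id = `${bucket}/${key}`;
    if (this.expected.has(id)) this.completed.add(id);
    return this.allComplete();
  }
  allComplete() {
    return this.expected.size > 0 && this.completed.size === this.expected.size;
  }
}

const tracker = new CompletionTracker([
  { bucket: 'containers', key: 'manifest.json' },
  { bucket: 'containers', key: 'layer-1.tar' }
]);
tracker.markCompleted('containers', 'manifest.json');
console.log(tracker.markCompleted('containers', 'layer-1.tar')); // → true
```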
### Advanced File Watching

```javascript
const { LocalRemoteManager } = require('local-remote-file-manager');

class DocumentWatcher {
  constructor() {
    this.fileManager = new LocalRemoteManager();
    this.setupEventHandlers();
  }

  setupEventHandlers() {
    this.fileManager.on('fileAdded', (event) => {
      console.log(`New file detected: ${event.filePath}`);
      this.processNewFile(event.filePath);
    });
    this.fileManager.on('fileChanged', (event) => {
      console.log(`File modified: ${event.filePath}`);
      this.handleFileChange(event.filePath);
    });
    this.fileManager.on('fileRemoved', (event) => {
      console.log(`File deleted: ${event.filePath}`);
      this.handleFileRemoval(event.filePath);
    });
  }

  async startWatching(directory) {
    const watchResult = await this.fileManager.startWatching(directory, {
      recursive: true,
      events: ['add', 'change', 'unlink'],
      ignoreDotfiles: true
    });
    console.log(`Started watching: ${watchResult.watchId}`);
    return watchResult;
  }

  async processNewFile(filePath) {
    try {
      const fileInfo = await this.fileManager.getFileInfo(filePath);
      if (fileInfo.size > 10 * 1024 * 1024) {
        const compressedPath = filePath + '.zip';
        await this.fileManager.compressFile(filePath, compressedPath);
        console.log(`Auto-compressed large file: ${compressedPath}`);
      }
    } catch (error) {
      console.error(`Failed to process new file: ${error.message}`);
    }
  }
}
```
### HTTP Server with Tunnel Integration

```javascript
const { LocalRemoteManager } = require('local-remote-file-manager');

class FileServerApp {
  constructor() {
    this.fileManager = new LocalRemoteManager();
  }

  async startTunneledServer(contentDirectory) {
    const serverInfo = await this.fileManager.createTunneledServer({
      port: 3000,
      rootDirectory: contentDirectory,
      tunnelService: 'serveo'
    });
    console.log(`Local server: http://localhost:${serverInfo.port}`);
    console.log(`Public URL: ${serverInfo.tunnelUrl}`);
    return serverInfo;
  }

  async addCustomRoutes(serverId) {
    await this.fileManager.addCustomRoute(
      serverId,
      'GET',
      '/api/status',
      (req, res) => {
        res.json({ status: 'active', timestamp: new Date().toISOString() });
      },
      { priority: 100 }
    );
    await this.fileManager.addCustomRoute(
      serverId,
      'GET',
      '/api/:version/health',
      (req, res) => {
        res.json({ health: 'ok', version: req.params.version });
      },
      { priority: 50 }
    );
    await this.fileManager.addCustomRoute(
      serverId,
      'GET',
      '/docs/:page',
      (req, res) => {
        res.send(`Documentation page: ${req.params.page}`);
      },
      { priority: 10 }
    );
    console.log('Custom routes added with priority-based routing');
  }

  async serveFiles() {
    const server = await this.startTunneledServer('./public-files');
    await this.addCustomRoutes(server.serverId);
    setInterval(async () => {
      const status = await this.fileManager.getServerStatus(server.serverId);
      console.log(`Server status: ${status.status}, Requests: ${status.requestCount}`);
    }, 30000);
    return server;
  }
}
```
### S3-Compatible Object Storage

```javascript
const { S3HttpServer } = require('local-remote-file-manager/src/s3Server');

class S3ObjectStorage {
  constructor() {
    this.s3Server = new S3HttpServer({
      port: 9000,
      serverName: 'my-s3-server',
      rootDirectory: './s3-storage',
      enableAuth: true,
      bucketMapping: new Map([
        ['documents', 'user-docs'],
        ['images', 'media/images'],
        ['backups', 'backup-storage']
      ]),
      bucketAccessControl: new Map([
        ['documents', { read: true, write: true }],
        ['images', { read: true, write: false }],
        ['backups', { read: true, write: true }]
      ])
    });
  }

  async start() {
    const serverInfo = await this.s3Server.start();
    console.log(`S3 server running on port ${serverInfo.port}`);
    const credentials = this.s3Server.generateTemporaryCredentials({
      permissions: ['read', 'write'],
      buckets: ['documents', 'backups'],
      expiryMinutes: 60
    });
    console.log(`Access Key: ${credentials.accessKey}`);
    console.log(`Secret Key: ${credentials.secretKey}`);
    return { serverInfo, credentials };
  }

  async enableMonitoring() {
    this.s3Server.startMonitoringDashboard({
      updateInterval: 2000,
      showServerStats: true,
      showDownloadProgress: true,
      showActiveDownloads: true,
      showShutdownStatus: true
    });
    this.s3Server.on('downloadCompleted', (info) => {
      console.log(`Download analytics: ${info.key} (${info.bytes} bytes in ${info.duration}ms)`);
      const dashboardData = this.s3Server.getMonitoringData();
      console.log(`Total downloads: ${dashboardData.downloadStats.totalDownloads}`);
      console.log(`Average speed: ${this.formatSpeed(dashboardData.downloadStats.averageSpeed)}`);
    });
    return true;
  }

  formatSpeed(bytesPerSecond) {
    if (bytesPerSecond < 1024) return `${bytesPerSecond} B/s`;
    if (bytesPerSecond < 1024 * 1024) return `${(bytesPerSecond / 1024).toFixed(1)} KB/s`;
    return `${(bytesPerSecond / (1024 * 1024)).toFixed(1)} MB/s`;
  }
}
```
### CLI Integration Examples

The CLI provides direct access to the library's features. For example, the `serve-s3` auto-shutdown and monitoring flags map onto the same configuration used in code:

```javascript
const server = new S3HttpServer({
  enableAutoShutdown: true,
  shutdownTriggers: ['completion'],
  completionShutdownDelay: 30000,
  enableRealTimeMonitoring: true
});
await server.start();
```
### Container Registry Use Case

```shell
mkdir -p ./container-storage/containers ./container-storage/registry
node index.js serve-s3 ./container-storage \
  --bucket containers:./container-storage/containers \
  --bucket registry:./container-storage/registry \
  --port 9000 --auto-shutdown --monitor \
  --name container-registry
```
### Real-Time Monitoring Dashboard
```javascript
const { MonitoringDashboard, DownloadMonitor } = require('local-remote-file-manager');

class LiveMonitoringSystem {
  constructor() {
    this.dashboard = new MonitoringDashboard({
      updateInterval: 1000,
      showServerStats: true,
      showDownloadProgress: true,
      showActiveDownloads: true,
      showShutdownStatus: true
    });
    this.downloadMonitor = new DownloadMonitor({
      trackPartialDownloads: true,
      progressUpdateInterval: 1000
    });
  }

  async startMonitoring(s3Server) {
    // Connect monitoring to S3 server
    this.dashboard.connectToServer(s3Server);
    this.downloadMonitor.connectToServer(s3Server);

    // Start real-time dashboard
    this.dashboard.start();

    // Setup download tracking
    this.downloadMonitor.on('downloadStarted', (info) => {
      this.dashboard.addActiveDownload(info);
    });
    this.downloadMonitor.on('downloadProgress', (info) => {
      this.dashboard.updateDownloadProgress(info.downloadId, info);
    });
    this.downloadMonitor.on('downloadCompleted', (info) => {
      this.dashboard.completeDownload(info.downloadId, info);
    });

    // Example dashboard output:
    /*
    +--------------------------------------------------------------------+
    | S3 Object Storage Server                                           |
    +--------------------------------------------------------------------+
    | Status: RUNNING                Uptime: 45s                         |
    | Port: 9000                     Public URL: http://localhost:9000   |
    +--------------------------------------------------------------------+
    | Downloads Progress                                                 |
    | ##################------------ 3/5 (60%)                           |
    | Active Downloads: 2   Completed: 3   Failed: 0                     |
    | Speed: 1.2 MB/s       Total: 2.1 MB                                |
    +--------------------------------------------------------------------+
    | Active Downloads                                                   |
    | > layer-2.tar (1.2 MB)  ########################------ 80% @ 450 KB/s |
    | > layer-3.tar (512 KB)  ##############---------------- 45% @ 230 KB/s |
    +--------------------------------------------------------------------+
    | Auto-Shutdown: ON       Trigger: Completion + 30s                  |
    | Next Check: 00:00:05    Status: Monitoring                         |
    +--------------------------------------------------------------------+
    */

    console.log('Real-time monitoring dashboard started');
    return true;
  }

  async stopMonitoring() {
    this.dashboard.stop();
    this.downloadMonitor.stop();
    console.log('Monitoring stopped');
  }

  async getAnalytics() {
    return {
      dashboard: this.dashboard.getCurrentData(),
      downloads: this.downloadMonitor.getStatistics(),
      performance: this.downloadMonitor.getPerformanceMetrics()
    };
  }
}
```
An alternative take on the `S3ObjectStorage` monitoring and analytics methods, followed by a usage sketch:

```javascript
// Additional methods for the S3ObjectStorage class defined above.
async enableMonitoring() {
  // Start real-time monitoring dashboard
  this.s3Server.startRealTimeMonitoring({
    interval: 1000,
    enableConsole: true
  });
  // Track download events
  this.s3Server.on('download:started', (info) => {
    console.log(`Download started: ${info.bucket}/${info.key}`);
  });
  this.s3Server.on('download:completed', (info) => {
    console.log(`Download completed: ${info.bucket}/${info.key} (${info.size} bytes)`);
  });
}

async getAnalytics() {
  const analytics = this.s3Server.generateDownloadAnalytics({
    includeDetails: true
  });
  console.log(`Total Downloads: ${analytics.summary.totalDownloads}`);
  console.log(`Average Speed: ${analytics.performance.averageSpeed}`);
  console.log(`Server Uptime: ${analytics.performance.uptime}s`);
  return analytics;
}

// Usage
const storage = new S3ObjectStorage();
await storage.start();
await storage.enableMonitoring();

// Access via S3-compatible endpoints:
// GET  http://localhost:9000/documents/myfile.pdf
// HEAD http://localhost:9000/images/photo.jpg
```
### Enhanced File Streaming
```javascript
const { FileStreamingUtils, DownloadTracker } = require('local-remote-file-manager');

class StreamingFileServer {
  async serveFileWithProgress(filePath, response, rangeHeader = null) {
    try {
      // Get file information
      const fileInfo = await FileStreamingUtils.getFileInfo(filePath);
      console.log(`Serving: ${fileInfo.name} (${fileInfo.size} bytes)`);

      // Create download tracker
      const tracker = new DownloadTracker(fileInfo.size);

      // Handle range request if specified
      let streamOptions = {};
      if (rangeHeader) {
        const range = FileStreamingUtils.parseRangeHeader(rangeHeader, fileInfo.size);
        if (range.isValid) {
          streamOptions = { start: range.start, end: range.end };
          response.status = 206; // Partial Content
          response.setHeader('Content-Range',
            FileStreamingUtils.formatContentRange(range.start, range.end, fileInfo.size)
          );
        }
      }

      // Set response headers
      response.setHeader('Content-Type', FileStreamingUtils.getMimeType(filePath));
      response.setHeader('Content-Length', streamOptions.end !== undefined ?
        (streamOptions.end - streamOptions.start + 1) : fileInfo.size);
      response.setHeader('ETag', FileStreamingUtils.generateETag(fileInfo));
      response.setHeader('Last-Modified', fileInfo.lastModified.toUTCString());
      response.setHeader('Accept-Ranges', 'bytes');

      // Create and pipe stream
      const stream = await FileStreamingUtils.createReadStream(filePath, streamOptions);
      stream.on('data', (chunk) => {
        tracker.updateProgress(chunk.length);
        const progress = tracker.getProgress();
        console.log(`Progress: ${progress.percentage}% (${progress.speed}/s)`);
      });
      stream.on('end', () => {
        console.log(`Transfer complete: ${filePath}`);
      });
      stream.pipe(response);
    } catch (error) {
      console.error(`Streaming error: ${error.message}`);
      response.status = 500;
      response.end('Internal Server Error');
    }
  }
}
```
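The `parseRangeHeader` call above does the heavy lifting for partial content. A minimal single-range parser in the same spirit (a sketch; the utility's actual behavior may differ, e.g. for multi-range requests):

```javascript
// Parse a single "bytes=start-end" Range header against a known file
// size, returning { isValid, start, end } as the example above expects.
function parseRange(header, fileSize) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(header || '');
  if (!m || (m[1] === '' && m[2] === '')) return { isValid: false };
  let start, end;
  if (m[1] === '') {
    // Suffix range: the last N bytes of the file
    start = Math.max(0, fileSize - Number(m[2]));
    end = fileSize - 1;
  } else {
    start = Number(m[1]);
    end = m[2] === '' ? fileSize - 1 : Math.min(Number(m[2]), fileSize - 1);
  }
  if (start > end || start >= fileSize) return { isValid: false };
  return { isValid: true, start, end };
}

console.log(parseRange('bytes=0-499', 1000)); // → { isValid: true, start: 0, end: 499 }
```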
### Batch File Operations

```javascript
const { LocalRemoteManager } = require('local-remote-file-manager');

class BatchFileProcessor {
  constructor() {
    this.fileManager = new LocalRemoteManager();
  }

  async backupDocuments(sourceDirectory, backupDirectory) {
    const files = await this.fileManager.listFiles(sourceDirectory, { recursive: true });
    const documents = files.filter(file =>
      /\.(pdf|doc|docx|txt|md)$/i.test(file.name)
    );
    console.log(`Found ${documents.length} documents to backup`);

    const compressionResults = await this.fileManager.compressMultipleFiles(
      documents.map(doc => doc.path),
      backupDirectory,
      {
        format: 'zip',
        compressionLevel: 6,
        preserveStructure: true
      }
    );

    const shareResults = await Promise.all(
      compressionResults.successful.map(async (result) => {
        return await this.fileManager.createShareableUrl(result.outputPath, {
          expiresIn: '24h',
          downloadLimit: 5
        });
      })
    );

    return {
      processed: documents.length,
      compressed: compressionResults.successful.length,
      failed: compressionResults.failed.length,
      shared: shareResults.length,
      shareUrls: shareResults.map(r => r.shareableUrl)
    };
  }

  async syncDirectories(sourceDir, targetDir) {
    const sourceFiles = await this.fileManager.listFiles(sourceDir, { recursive: true });
    const targetFiles = await this.fileManager.listFiles(targetDir, { recursive: true });
    const results = {
      copied: [],
      updated: [],
      errors: []
    };

    for (const file of sourceFiles) {
      try {
        const targetPath = file.path.replace(sourceDir, targetDir);
        const targetExists = targetFiles.some(t => t.path === targetPath);
        if (!targetExists) {
          await this.fileManager.copyFile(file.path, targetPath);
          results.copied.push(file.path);
        } else {
          const sourceInfo = await this.fileManager.getFileInfo(file.path);
          const targetInfo = await this.fileManager.getFileInfo(targetPath);
          if (sourceInfo.lastModified > targetInfo.lastModified) {
            await this.fileManager.copyFile(file.path, targetPath);
            results.updated.push(file.path);
          }
        }
      } catch (error) {
        results.errors.push({ file: file.path, error: error.message });
      }
    }
    return results;
  }
}
```
## Testing

### Automated Testing

Run comprehensive tests for all features:

```shell
npm test
npm run test:all
npm run test:cli
npm run test:local
npm run test:watch
npm run test:compression
npm run test:tunnel
npm run test:http
npm run test:streaming
npm run test:s3
npm run test:monitor
```
### Test Coverage by Feature

**Phase 1: Local File Operations (33/33 tests passing)**

- Basic File Operations: Upload, download, copy, move, rename, delete
- Folder Operations: Create, list, delete, rename folders
- Path Management: Absolute/relative paths, normalization, validation
- Search Operations: File search by name, pattern matching, recursive search
- Error Handling: Non-existent files, invalid paths, permission errors

**Phase 2: File Watching (24/24 tests passing)**

- Directory Watching: Start/stop watching, recursive monitoring
- Event Filtering: Add, change, delete events with custom filtering
- Performance Tests: High-frequency events, batch event processing
- Edge Cases: Non-existent paths, permission issues, invalid events
- Resource Management: Watcher lifecycle, memory cleanup

**Phase 3: Compression (30/30 tests passing)**

- ZIP Operations: Compression, decompression, multiple compression levels
- TAR.GZ Operations: Archive creation, extraction, directory compression
- Format Detection: Automatic format detection, cross-format operations
- Progress Tracking: Real-time progress events, operation monitoring
- Batch Operations: Multiple file compression, batch decompression
- Performance: Large file handling, memory efficiency tests

**Phase 4: Tunneling & File Sharing (35/35 tests passing)**

- Tunnel Management: Create, destroy tunnels with multiple services
- Temporary URLs: URL generation, expiration, access control
- File Sharing: Secure sharing, download tracking, permission management
- Service Integration: ngrok, localtunnel, fallback mechanisms
- Security: Access tokens, expiration handling, cleanup

**Phase 5: HTTP Server Provider (22/22 tests passing)**

- Server Lifecycle Management: Create, start, stop HTTP servers
- Static File Serving: MIME detection, range requests, security headers
- Route Registration: Parameterized routes, middleware support
- Tunnel Integration: Automatic tunnel creation with multiple services
- Server Monitoring: Status tracking, metrics collection, health checks

**Phase 6: Enhanced File Streaming (12/12 tests passing)**

- Advanced Streaming: Range-aware streams, progress tracking
- MIME Type Detection: 40+ file types, automatic detection
- Range Request Processing: Comprehensive range header parsing
- Progress Tracking: Real-time progress, speed calculation
- Performance Optimization: Memory efficiency, large file handling

**Phase 7: S3-Compatible Object Storage (31/31 tests passing)**

- S3 GET/HEAD Endpoints: Object downloads and metadata queries
- Authentication System: AWS-style, Bearer token, Basic auth
- Bucket/Key Mapping: Path mapping with security validation
- S3-Compatible Headers: ETag, Last-Modified, Content-Range
- Download Analytics: Progress tracking, real-time monitoring
- Rate Limiting: Credential management, access control

**Phase 8: Auto-Shutdown & Monitoring (22/22 tests passing)**

- Auto-Shutdown Triggers: Completion, timeout, idle detection
- Real-Time Monitoring: Dashboard, progress bars, status display
- Download Tracking: Individual downloads, completion status
- Event Notifications: Console, file, webhook notifications
- Expected Downloads: Configuration, progress calculation

**Phase 9: CLI Integration (16/16 tests passing)**

- Command Validation: Help commands, option parsing, error handling
- S3 Server Commands: Server start with auth, bucket mapping, monitoring
- Server Management: Status commands, stop commands, cleanup
- Configuration Validation: Directory validation, port conflicts
- Error Handling: Graceful errors, permission handling
Test Results Summary
📊 Overall Test Results
=======================
✅ Total Tests: 225
✅ Passed: 225 (100%)
❌ Failed: 0 (0%)
⏭️ Skipped: 0 (0%)
🎯 Success Rate: 100%

⚡ Performance Metrics
=====================
⏱️ Average Test Duration: 15ms
🚀 Fastest Category: Local Operations (2ms avg)
🐢 Slowest Category: CLI Integration (1000ms avg)
📊 Total Test Suite Time: ~5 minutes

🎉 All features are production-ready!
Manual Testing & Demos
Validate functionality with built-in demos:
```shell
npm run demo:cli
npm run demo:basic
npm run demo:watch
npm run demo:compression
npm run demo:tunnel
npm run demo:s3
npm run demo:monitor
```
๐ API Reference
LocalRemoteManager
Core File Operations
uploadFile(sourcePath, targetPath)
- Copy file to target location (alias for local copy)
downloadFile(remotePath, localPath)
- Download/copy file from remote location (alias for local copy)
getFileInfo(filePath)
- Get file metadata, size, timestamps, and permissions
listFiles(directoryPath, options)
- List files with recursive and filtering options
deleteFile(filePath)
- Delete a file with error handling
copyFile(sourcePath, destinationPath)
- Copy a file to new location
moveFile(sourcePath, destinationPath)
- Move a file to new location
renameFile(filePath, newName)
- Rename a file in same directory
searchFiles(pattern, options)
- Search for files by name pattern with recursive support
Folder Operations
createFolder(folderPath)
- Create a new folder with recursive support
listFolders(directoryPath)
- List only directories in a path
deleteFolder(folderPath, recursive)
- Delete a folder with optional recursive deletion
renameFolder(folderPath, newName)
- Rename a folder in same parent directory
copyFolder(sourcePath, destinationPath)
- Copy entire folder structure
moveFolder(sourcePath, destinationPath)
- Move entire folder structure
getFolderInfo(folderPath)
- Get folder metadata including item count and total size
File Watching
startWatching(path, options)
- Start monitoring file/directory changes
- Options: recursive, events, ignoreDotfiles, debounceMs
stopWatching(watchId | path)
- Stop a specific watcher by ID or path
stopAllWatching()
- Stop all active watchers with cleanup
listActiveWatchers()
- Get array of active watcher objects
getWatcherInfo(watchId)
- Get detailed watcher information including event count
getWatchingStatus()
- Get overall watching system status and statistics
Compression Operations
compressFile(inputPath, outputPath, options)
- Compress file or directory
- Options: format (zip, tar.gz), level (1-9), includeRoot
decompressFile(archivePath, outputDirectory, options)
- Extract archive contents
- Options: format, overwrite, preservePermissions
compressMultipleFiles(fileArray, outputDirectory, options)
- Batch compression with progress
decompressMultipleFiles(archiveArray, outputDirectory, options)
- Batch extraction
getCompressionStatus()
- Get compression system status and supported formats
getCompressionProvider()
- Access compression provider directly
Tunneling & File Sharing
createTunnel(options)
- Create new tunnel connection
- Options: proto (http, https), subdomain, authToken, useExternalServer, localPort
- useExternalServer: true - forward the tunnel to an existing HTTP server instead of creating an internal server
- localPort: number - the external server port to forward tunnel traffic to
destroyTunnel(tunnelId)
- Destroy specific tunnel connection
createTemporaryUrl(filePath, options)
- Generate temporary shareable URL
- Options: permissions, expiresAt, downloadLimit
revokeTemporaryUrl(urlId)
- Revoke access to shared URL
listActiveUrls()
- Get list of active temporary URLs
getTunnelStatus()
- Get tunneling system status including active tunnels
getTunnelProvider()
- Access tunnel provider directly
HTTP Server Provider
createHttpServer(options)
- Create HTTP file server
- Options: port, rootDirectory, enableTunnel, tunnelOptions
createTunneledServer(options)
- Create HTTP server with automatic tunnel integration
- Options: port, rootDirectory, tunnelService (default: 'serveo')
addCustomRoute(serverId, method, path, handler, options)
- Add custom route with priority support
- Options: priority (higher numbers take precedence and override generic routes like /:bucket/*)
stopServer(serverId)
- Stop specific HTTP server
stopAllServers()
- Stop all active HTTP servers
getServerStatus(serverId)
- Get HTTP server status and information
listActiveServers()
- Get list of all active HTTP servers
getTunnelUrl(serverId)
- Get tunnel URL for tunneled server
stopTunnel(serverId)
- Stop tunnel for specific server
getHttpServerProvider()
- Access HTTP server provider directly
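To show how the `priority` option of addCustomRoute lets a specific route override a generic one like `/:bucket/*`, here is a minimal, runnable resolver. The pattern syntax (`:param` segments and a trailing `*`) and the function names are assumptions for illustration.

```javascript
// Return true when a route pattern matches a URL path.
function matchPattern(pattern, urlPath) {
  const pathSegs = urlPath.split('/').filter(Boolean);
  if (pattern.endsWith('/*')) {
    const prefixSegs = pattern.slice(0, -2).split('/').filter(Boolean);
    if (pathSegs.length <= prefixSegs.length) return false;
    return prefixSegs.every((seg, i) => seg.startsWith(':') || seg === pathSegs[i]);
  }
  const patSegs = pattern.split('/').filter(Boolean);
  if (patSegs.length !== pathSegs.length) return false;
  return patSegs.every((seg, i) => seg.startsWith(':') || seg === pathSegs[i]);
}

// Among matching routes, the highest priority wins.
function resolveRoute(routes, urlPath) {
  return routes
    .filter((r) => matchPattern(r.path, urlPath))
    .sort((a, b) => (b.priority || 0) - (a.priority || 0))[0] || null;
}
```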
Enhanced File Streaming
createReadStream(filePath, options)
- Create range-aware file stream
- Options: start, end, encoding, chunkSize
getFileInfo(filePath)
- Get detailed file metadata with MIME type
getMimeType(filePath)
- Get MIME type with 40+ file type support
parseRangeHeader(rangeHeader, fileSize)
- Parse HTTP range headers
generateETag(fileStats)
- Generate ETag for cache validation
formatContentRange(start, end, total)
- Format Content-Range headers
DownloadTracker
- Track download progress with speed calculation
S3-Compatible Object Storage
createS3Server(options)
- Create S3-compatible object storage server
- Options: port, serverName, rootDirectory, bucketMapping, enableAuth
generateTemporaryCredentials(options)
- Generate temp AWS-style credentials
- Options: permissions, buckets, expiryMinutes
mapBucketKeyToPath(bucket, key)
- Map S3 bucket/key to file path
validateBucketAccess(bucket)
- Check bucket access permissions
getDownloadStats()
- Get download statistics and metrics
generateDownloadAnalytics(options)
- Generate analytics report
getDownloadDashboard()
- Get real-time dashboard data
startRealTimeMonitoring(options)
- Start live monitoring console
stopRealTimeMonitoring()
- Stop real-time monitoring
Auto-Shutdown & Monitoring
enableAutoShutdown(options)
- Enable auto-shutdown with configurable triggers
- Options: shutdownTriggers, completionShutdownDelay, maxIdleTime, maxTotalTime
setExpectedDownloads(downloads)
- Configure expected downloads for completion detection
- Downloads: array of { bucket, key, size } objects
getDownloadCompletionStatus()
- Get current download completion status
scheduleShutdown(trigger, delay)
- Manually schedule server shutdown
cancelScheduledShutdown()
- Cancel previously scheduled shutdown
startMonitoringDashboard(options)
- Start real-time visual monitoring dashboard
- Options: updateInterval, showServerStats, showDownloadProgress, showActiveDownloads
stopMonitoringDashboard()
- Stop monitoring dashboard
getMonitoringData()
- Get current monitoring data snapshot
addDownloadEventListener(event, callback)
- Listen to download events
- Events: downloadStarted, downloadCompleted, downloadFailed, allDownloadsComplete
addShutdownEventListener(event, callback)
- Listen to shutdown events
- Events: shutdownScheduled, shutdownWarning, shutdownCancelled
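The completion trigger behind auto-shutdown reduces to comparing finished downloads against the list set via setExpectedDownloads. The sketch below uses the documented { bucket, key, size } shape; the matching logic and result fields are illustrative assumptions.

```javascript
// Compute completion status for the expected downloads. allComplete becomes
// true only once every expected { bucket, key } pair has been downloaded.
function getCompletionStatus(expected, completed) {
  const done = expected.filter((e) =>
    completed.some((c) => c.bucket === e.bucket && c.key === e.key)
  );
  return {
    expected: expected.length,
    completed: done.length,
    percent: expected.length ? Math.round((done.length / expected.length) * 100) : 0,
    allComplete: done.length === expected.length && expected.length > 0,
  };
}
```

When `allComplete` flips to true, an auto-shutdown implementation would wait `completionShutdownDelay` and then stop the server.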
Event Notification System
enableEventNotifications(channels)
- Enable event notifications
- Channels: ['console', 'file', 'webhook']
configureWebhookNotifications(url, options)
- Configure webhook notifications
- Options: retryAttempts, timeout, headers
getEventHistory(options)
- Get event history with filtering
- Options: startDate, endDate, eventTypes, limit
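The getEventHistory options map directly onto a filter pipeline. The event record shape ({ type, timestamp }) below is an assumption; the filtering order (date window, then type, then limit) is one reasonable interpretation.

```javascript
// Filter an event history by date window, event type, and entry limit.
// With a limit, the most recent entries are kept.
function filterEventHistory(events, { startDate, endDate, eventTypes, limit } = {}) {
  let out = events;
  if (startDate) out = out.filter((e) => e.timestamp >= startDate);
  if (endDate) out = out.filter((e) => e.timestamp <= endDate);
  if (eventTypes) out = out.filter((e) => eventTypes.includes(e.type));
  if (limit) out = out.slice(-limit);
  return out;
}
```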
Provider Management
testConnection(providerName)
- Test specific provider connection and capabilities
validateProvider(providerName)
- Validate provider configuration
getSystemInfo()
- Get comprehensive system information
shutdown()
- Gracefully shutdown all providers and cleanup resources
Event System
The LocalRemoteManager extends EventEmitter and provides these events:
- fileEvent - File system changes (add, change, unlink, addDir, unlinkDir)
- compressionProgress - Compression operation progress updates
- decompressionProgress - Decompression operation progress updates
- tunnelProgress - Tunnel creation/destruction progress
- urlCreated - Temporary URL creation events
- urlRevoked - URL revocation events
- fileAccessed - File access via temporary URLs
- tunnelError - Tunnel-related errors
- downloadStarted - Download operation started
- downloadProgress - Real-time download progress updates
- downloadCompleted - Download operation completed
- downloadFailed - Download operation failed
- allDownloadsComplete - All expected downloads completed
- shutdownScheduled - Auto-shutdown has been scheduled
- shutdownWarning - Shutdown warning (time remaining)
- shutdownCancelled - Scheduled shutdown was cancelled
- monitoringEnabled - Real-time monitoring started
- monitoringDisabled - Real-time monitoring stopped
- dashboardUpdated - Monitoring dashboard data updated
ConfigManager
Configuration Management
load()
- Load configuration from environment variables and defaults
get(key)
- Get configuration value by key
set(key, value)
- Set configuration value
validate()
- Validate current configuration and return validation result
save()
- Save configuration to persistent storage
Provider-Specific Configuration
getLocalConfig()
- Get local provider configuration (paths, permissions)
getWatchConfig()
- Get file watching configuration (debounce, patterns)
getCompressionConfig()
- Get compression configuration (formats, levels)
getTunnelConfig()
- Get tunneling configuration (services, fallback)
Provider Interfaces
Each provider implements a consistent interface:
Local Provider
- File CRUD operations with path validation
- Folder management with recursive support
- Search functionality with pattern matching
- System information and disk space monitoring
Watch Provider
- Directory and file monitoring with chokidar
- Event filtering and debouncing
- Recursive watching with ignore patterns
- Watcher lifecycle management
Compression Provider
- ZIP and TAR.GZ format support
- Multiple compression levels (1-9)
- Batch operations with progress tracking
- Format auto-detection and validation
Tunnel Provider
- Multiple tunnel service support (Serveo, ngrok, Pinggy)
- Automatic fallback between services
- External server forwarding support via the useExternalServer option
- Target port specification via the localPort parameter
- Tunnels forward to existing HTTP servers for consistent content serving
- HTTP server for file serving (fallback mode only)
- Access token security and expiration
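The automatic fallback between tunnel services can be sketched as trying each service in order and returning the first that connects. The connectors map is injected so the sketch stays runnable; real service clients (Serveo, ngrok, Pinggy) would replace the stub functions, and the function name is hypothetical.

```javascript
// Try each tunnel service in order; return the first successful connection,
// or throw with all collected errors if every service fails.
async function createTunnelWithFallback(services, connectors, options) {
  const errors = [];
  for (const service of services) {
    try {
      const tunnel = await connectors[service](options);
      return { ...tunnel, service };
    } catch (err) {
      errors.push(`${service}: ${err.message}`);
    }
  }
  throw new Error(`All tunnel services failed: ${errors.join('; ')}`);
}
```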
HTTP Server Provider
- Static file serving with configurable root directory
- Automatic port assignment and management
- Integrated tunnel support for public access
- Multiple concurrent server support
- Request logging and analytics
- MIME type detection and headers
- Graceful shutdown and cleanup
Error Handling
All methods throw structured errors with:
- code - Error code (ENOENT, EACCES, etc.)
- message - Human-readable error description
- path - File/directory path related to the error (when applicable)
- provider - Provider name that generated the error
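A structured error carrying these four fields might look like the sketch below. The class name is hypothetical; the point is that callers can branch on err.code the same way they would for raw fs errors.

```javascript
// Hypothetical structured error with the documented fields.
class FileManagerError extends Error {
  constructor(code, message, { path = null, provider = null } = {}) {
    super(message);
    this.code = code;
    this.path = path;
    this.provider = provider;
  }
}

// Example caller-side handling keyed on the error code.
function describeError(err) {
  switch (err.code) {
    case 'ENOENT': return `Not found: ${err.path}`;
    case 'EACCES': return `Permission denied: ${err.path}`;
    default: return err.message;
  }
}
```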
๐๏ธ Architecture & Design
HTTP Server Provider Implementation
Tunnel Integration: The tunnel system is designed to forward traffic to external HTTP servers so that tunneled and local access serve identical content.
External Server Forwarding
The TunnelProvider integrates with HTTP servers by forwarding tunnel traffic to existing server ports rather than creating separate internal servers:
```javascript
const httpServer = await httpServerProvider.createTunneledServer({
  port: 4005,
  rootDirectory: './content',
  tunnelService: 'serveo'
});

const tunnel = await tunnelProvider.createTunnel({
  useExternalServer: true,
  localPort: 4005
});
```
Benefits of This Architecture
- Consistency: Tunnel serves same content as local HTTP server
- Flexibility: Multiple servers can have dedicated tunnels
- Performance: No duplicate servers or port conflicts
- Debugging: Clear separation between HTTP serving and tunnel forwarding
- Container Registry Ready: Foundation for container serving capabilities
Usage Patterns
For file serving with tunnel access:
```javascript
const server = await manager.createTunneledServer({
  port: 3000,
  rootDirectory: './public',
  tunnelService: 'serveo'
});
const tunnelUrl = server.tunnelUrl;
```
TunnelProvider API Reference
Core Methods
createTunnel(options)
- Create tunnel with external server support
```javascript
const tunnel = await tunnelProvider.createTunnel({
  subdomain: 'myapp',
  service: 'serveo',
  useExternalServer: true,
  localPort: 4005,
  authToken: 'optional',
  region: 'us'
});
```
Return value:
```javascript
{
  tunnelId: 'tunnel_abc123',
  url: 'https://subdomain.serveo.net',
  service: 'serveo',
  port: 4005,
  createdAt: '2025-08-13T...',
  useExternalServer: true,
  targetPort: 4005
}
```
Service-Specific Methods
All tunnel creation methods accept a targetPort parameter:
```javascript
await createServiceTunnel(serviceName, tunnelId, options, targetPort)
await createPinggyTunnel(tunnelId, options, targetPort)
await createServeoTunnel(tunnelId, options, targetPort)
await createLocalTunnel(tunnelId, options, targetPort)
```
Configuration
```javascript
{
  service: 'serveo',
  fallbackServices: 'serveo,localtunnel',
  autoFallback: true,
  useExternalServer: false
}
```
Method Return Types
File Operations
```typescript
{
  name: string,
  path: string,
  size: number,
  isDirectory: boolean,
  createdAt: Date,
  modifiedAt: Date,
  permissions: string
}
```

```typescript
{
  name: string,
  path: string,
  size: number,
  completedAt: Date
}
```
Compression Operations
```typescript
{
  operationId: string,
  name: string,
  format: string,
  size: number,
  originalSize: number,
  compressionRatio: number,
  level: number,
  completedAt: Date
}
```

```typescript
{
  successful: Array,
  failed: Array,
  summary: {
    total: number,
    successful: number,
    failed: number,
    successRate: string
  }
}
```
Tunneling Operations
```typescript
{
  tunnelId: string,
  url: string,
  service: string,
  port: number,
  createdAt: Date,
  useExternalServer?: boolean,
  targetPort?: number
}
```

```typescript
{
  serverId: string,
  port: number,
  rootDirectory: string,
  url: string,
  status: 'running' | 'stopped',
  tunnelEnabled: boolean,
  tunnelUrl?: string,
  tunnelService?: string,
  createdAt: Date,
  requestCount: number
}
```

```typescript
{
  urlId: string,
  shareableUrl: string,
  accessToken: string,
  expiresAt: Date,
  permissions: Array,
  filePath: string
}
```