
Upfly: Upload. Convert. Optimize. All in one middleware for Express (Multer (peer) + Sharp)
The Complete File Upload Solution You've Been Looking For
One middleware. Stream-based processing. Zero data loss. Production-ready.
Upfly is an Express middleware that handles the entire file upload pipeline—from receiving the request to storing in the cloud—with automatic image optimization, intelligent fallback protection, and stream-based processing that scales from kilobytes to gigabytes.
Built on a proven architecture:
HTTP Request → Multer Interception → Stream Pipeline → Processing
↓
Parallel Paths:
• Main: Convert + Upload
• Backup: Original Safety Net
↓
Memory/Disk/Cloud → Response
The result: Replace 500+ lines of boilerplate with 15 lines. Zero data loss. Production-ready error handling. Multi-cloud support.
File uploads shouldn't be this hard. Yet every project starts the same way:
// Week 1: Setup hell
const multer = require('multer');
const sharp = require('sharp');
const cloudinary = require('cloudinary').v2;
const fs = require('fs');
// Week 2: Configuration nightmare
const storage = multer.diskStorage({
destination: (req, file, cb) => { /* ... */ },
filename: (req, file, cb) => { /* ... */ }
});
const upload = multer({ storage });
// Week 3: Error handling chaos
app.post('/upload', upload.single('image'), async (req, res) => {
try {
await sharp(req.file.path).webp({ quality: 80 }).toFile(...);
const result = await cloudinary.uploader.upload(...);
fs.unlinkSync(req.file.path);
res.json({ url: result.url });
} catch (err) {
// 🔴 Data loss risk - no backup
// 🔴 Manual cleanup required
// 🔴 Memory leaks with large files
res.status(500).json({ error: err.message });
}
});
You lose 3-4 weeks writing storage configuration, conversion logic, cloud upload code, temp-file cleanup, and error handling.
One middleware. Everything handled.
const { upflyUpload } = require('upfly');
app.post('/upload',
upflyUpload({
fields: {
avatar: {
cloudStorage: true,
cloudProvider: 'cloudinary',
cloudConfig: {
cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
api_key: process.env.CLOUDINARY_API_KEY,
api_secret: process.env.CLOUDINARY_API_SECRET
},
format: 'webp',
quality: 80
}
},
safeFile: true // ← Zero data loss guarantee
}),
(req, res) => res.json({ url: req.files.avatar[0].cloudUrl })
);
That's it. ✅ Image optimization ✅ Cloud storage ✅ Error handling ✅ Backup fallback ✅ Memory management
| Metric | Traditional Approach | With Upfly | Improvement |
|---|---|---|---|
| Setup Time | 3-4 weeks | 30 minutes | 99% Faster |
| Code Lines | 500+ lines | 15 lines | 97% Less |
| Data Loss Risk | High (no fallback) | Zero (automatic backup) | 100% Reliable |
| Cloud Providers | 1 (locked-in) | 3 (switchable) | 3x Flexibility |
| Memory Issues | Common with large files | None (stream-based) | Production-Safe |
Upfly uses a sophisticated pipeline that processes files without blocking your server:
┌─────────────────┐
│ HTTP Request │
└────────┬────────┘
│
▼
┌─────────────────┐
│ Multer │ ◄── File interception
└────────┬────────┘
│
▼
┌─────────────────┐
│ Custom Storage │ ◄── Stream-based processing
└────────┬────────┘
│
├──── safeFile enabled?
│
▼
┌────┴────┐
│ Tee │ ◄── Backup protection
│ Stream │
└─┬────┬──┘
│ │
│ └──────► Backup Stream ──► Memory/Disk
│ (safety net)
▼
Main Stream
│
├──── Image?
│
▼
┌────────┐
│ Sharp │ ◄── Format conversion
│Convert │ (WebP, AVIF, etc.)
└───┬────┘
│
▼
┌──────────┐
│ Output │
│ Routing │
└────┬─────┘
│
├─────► Memory Buffer
├─────► Disk Write
└─────► Cloud Upload
│ (Cloudinary/S3/GCS)
▼
┌─────────┐
│ Success │
└─────────┘
│
▼ (error?)
┌─────────┐
│ Fallback│ ◄── Use backup automatically
│ System │ (zero data loss)
└─────────┘
Key innovation: with safeFile: true, the stream splits automatically into a main processing path and a backup safety net, so the original bytes survive any downstream failure.
npm install upfly multer
const express = require('express');
const { upflyUpload } = require('upfly');
const app = express();
app.post('/upload',
upflyUpload({
fields: {
images: {
format: 'webp',
quality: 80
}
}
}),
(req, res) => {
res.json({
success: true,
files: req.files.images
});
}
);
app.listen(3000);
curl -X POST -F "images=@photo.jpg" http://localhost:3000/upload
Result: Your image is automatically optimized to WebP (80% quality), saving 30-70% in file size.
{
safeFile: true // ← Automatic backup system
}
// Memory (fast, for small files)
output: 'memory'
// Disk (scalable, for large files)
output: 'disk'
// Cloud (production-ready)
cloudStorage: true
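Choosing between these destinations usually comes down to expected file size. A hypothetical helper (not part of Upfly's API; the 1 MB threshold is an illustrative assumption) might look like:

```javascript
// Pick an output destination from an expected file size.
// Files at or under the threshold stay in memory; larger ones go to disk.
function chooseOutput(expectedBytes, thresholdBytes = 1024 * 1024) {
  return expectedBytes <= thresholdBytes ? 'memory' : 'disk';
}
```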
Each field in your HTML form can have its own processing rules:
{
fields: {
fieldname: {
// Output destination
output: 'memory', // 'memory' | 'disk'
// Image processing
format: 'webp', // 'webp' | 'jpeg' | 'png' | 'avif' | etc.
quality: 80, // 1-100 (higher = better quality)
keepOriginal: false, // Skip conversion
// Cloud storage
cloudStorage: false, // Enable cloud upload
cloudProvider: 'cloudinary', // 'cloudinary' | 's3' | 'gcs'
cloudConfig: { /* ... */ } // Provider-specific config
}
}
}
{
fields: { /* ... */ },
outputDir: './uploads', // Disk storage directory
limit: 10 * 1024 * 1024, // Max file size (10MB)
safeFile: true // Enable backup fallback
}
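Because these options are plain objects, a small runtime check can catch typos before the first request. A hypothetical validator based on the documented ranges (quality 1-100, output 'memory' | 'disk'):

```javascript
// Validate a fields map against the documented option ranges.
// Throws with a descriptive message on the first invalid entry.
function validateFields(fields) {
  const outputs = ['memory', 'disk'];
  for (const [name, cfg] of Object.entries(fields)) {
    if (cfg.output !== undefined && !outputs.includes(cfg.output)) {
      throw new Error(`${name}: output must be 'memory' or 'disk'`);
    }
    if (cfg.quality !== undefined && (cfg.quality < 1 || cfg.quality > 100)) {
      throw new Error(`${name}: quality must be between 1 and 100`);
    }
    if (cfg.cloudStorage && !cfg.cloudProvider) {
      throw new Error(`${name}: cloudStorage requires a cloudProvider`);
    }
  }
  return true;
}
```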
cloudConfig: {
cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
api_key: process.env.CLOUDINARY_API_KEY,
api_secret: process.env.CLOUDINARY_API_SECRET,
folder: 'user-uploads' // Optional: organize in folders
}
Install: npm install cloudinary
cloudConfig: {
region: 'us-east-1',
bucket: 'my-bucket',
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
acl: 'public-read' // or 'private'
}
Install: npm install @aws-sdk/client-s3 @aws-sdk/lib-storage
cloudConfig: {
bucket: 'my-gcs-bucket',
keyFilename: './service-account.json',
projectId: 'my-project-id',
public: true
}
Install: npm install @google-cloud/storage
app.post('/profile',
upflyUpload({
fields: {
avatar: {
cloudStorage: true,
cloudProvider: 'cloudinary',
cloudConfig: {
cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
api_key: process.env.CLOUDINARY_API_KEY,
api_secret: process.env.CLOUDINARY_API_SECRET,
folder: 'avatars'
},
format: 'webp',
quality: 85
}
},
limit: 5 * 1024 * 1024, // 5MB limit
safeFile: true
}),
(req, res) => {
const { cloudUrl } = req.files.avatar[0];
res.json({ avatarUrl: cloudUrl });
}
);
app.post('/documents',
upflyUpload({
fields: {
files: {
output: 'disk',
keepOriginal: true // Don't convert documents
}
},
outputDir: './user-documents',
limit: 50 * 1024 * 1024 // 50MB
}),
(req, res) => {
const files = req.files.files.map(f => ({
path: f.path,
name: f.originalname,
size: f.size
}));
res.json({ files });
}
);
app.post('/post',
upflyUpload({
fields: {
// Thumbnail: Small, aggressive compression
thumbnail: {
format: 'webp',
quality: 60,
output: 'memory'
},
// Main image: High quality to cloud
image: {
cloudStorage: true,
cloudProvider: 's3',
cloudConfig: {
region: process.env.AWS_REGION,
bucket: process.env.AWS_BUCKET,
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
},
format: 'webp',
quality: 85
},
// Attachments: Keep original to disk
attachments: {
output: 'disk',
keepOriginal: true
}
},
outputDir: './uploads',
safeFile: true
}),
(req, res) => {
res.json({
thumbnail: req.files.thumbnail[0].buffer.toString('base64'),
imageUrl: req.files.image[0].cloudUrl,
attachments: req.files.attachments.map(f => f.path)
});
}
);
Upfly never crashes your app. All errors are captured in _metadata:
app.post('/upload',
upflyUpload({
fields: { images: { format: 'webp' } },
safeFile: true
}),
(req, res) => {
const file = req.files.images[0];
if (file._metadata?.isSkipped) {
// Total failure - couldn't process at all
return res.status(500).json({
error: file._metadata.errors.message
});
}
if (file._metadata?.isBackupFallback) {
// Partial failure - used backup (file still available)
console.warn('Conversion failed, used original:',
file._metadata.errors.conversion
);
// File uploaded successfully, just not converted
}
// Success - file processed normally
res.json({ url: file.cloudUrl || file.path });
}
);
_metadata: {
isBackupFallback: boolean, // true if backup was used
isSkipped: boolean, // true if totally failed
isProcessed: boolean, // true if successful
errors: {
conversion?: string, // Sharp error
cloudUpload?: string, // Cloud provider error
diskWrite?: string, // Filesystem error
message?: string // General error
}
}
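Given that shape, a small helper (hypothetical, for your own handler code) can collapse the flags into a single status string for logging or responses:

```javascript
// Classify an upload outcome from a file's _metadata block.
// Returns 'skipped', 'fallback', or 'ok'.
function uploadStatus(file) {
  const meta = file._metadata;
  if (!meta) return 'ok';                      // no metadata: processed normally
  if (meta.isSkipped) return 'skipped';        // total failure
  if (meta.isBackupFallback) return 'fallback'; // original delivered, not converted
  return 'ok';
}
```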
// User uploads .bmp file
file._metadata = {
isSkipped: true,
errors: {
message: 'Unsupported image format: image/bmp'
}
}
Solution: Use keepOriginal: true or validate MIME types before upload
// Incomplete/damaged file
file._metadata = {
isBackupFallback: true, // ← Original file was saved
errors: {
conversion: 'Input buffer has corrupt header'
}
}
Solution: With safeFile: true, user still gets their file
// Network issue during upload
file._metadata = {
isSkipped: true,
errors: {
cloudUpload: 'Request timeout after 30s'
}
}
Solution: Upfly retries once with backup stream automatically
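The retry-once pattern can be sketched generically. This is an illustrative helper showing the shape of the behavior, not Upfly's internal code:

```javascript
// Try an async upload once; on failure, retry exactly once with a
// fallback input — mirroring the "retry with backup stream" behavior.
async function uploadWithFallback(upload, primary, backup) {
  try {
    return await upload(primary);
  } catch (err) {
    // First attempt failed; give the backup one chance.
    return await upload(backup);
  }
}
```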
Optimization Savings (1920×1080 images):
| Original Format | Size | WebP 80% | Savings |
|---|---|---|---|
| PNG (screenshot) | 884 KB | 72 KB | 91.9% |
| JPEG (photo) | 204 KB | 67 KB | 67.1% |
| PNG (graphic) | 168 KB | 126 KB | 25.1% |
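The savings column is just a size ratio. A one-line check (note the table's percentages were presumably computed from exact byte counts, so the rounded-KB figures reproduce them only approximately):

```javascript
// Percentage saved when converting from `before` bytes to `after` bytes,
// rounded to one decimal place.
function savingsPercent(before, after) {
  return Math.round((1 - after / before) * 1000) / 10;
}
```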
Processing Speed (average):
Memory Usage:
Throughput (1000 concurrent uploads, 2MB average):
{
limit: 10 * 1024 * 1024 // 10MB - adjust per use case
}
app.post('/upload',
upflyUpload({ /* ... */ }),
(req, res) => {
const file = req.files.image[0];
const allowed = ['image/jpeg', 'image/png', 'image/webp'];
if (!allowed.includes(file.mimetype)) {
return res.status(400).json({ error: 'Invalid file type' });
}
res.json({ file });
}
);
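The check above can be factored into a reusable predicate so multiple routes share one allow list (hypothetical helper name):

```javascript
// Build a predicate that accepts only files with the listed MIME types.
function mimeAllowList(allowed) {
  const set = new Set(allowed);
  return (file) => set.has(file.mimetype);
}
```

In a route, `const isImage = mimeAllowList(['image/jpeg', 'image/png', 'image/webp'])` then `if (!isImage(file)) return res.status(400).json({ error: 'Invalid file type' });`.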
const rateLimit = require('express-rate-limit');
const uploadLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 10 // 10 uploads per window
});
app.post('/upload', uploadLimiter, upflyUpload({ /* ... */ }));
const requireAuth = (req, res, next) => {
if (!req.user) return res.status(401).json({ error: 'Unauthorized' });
next();
};
app.post('/upload', requireAuth, upflyUpload({ /* ... */ }));
// ✅ Good: Use environment variables
cloudConfig: {
cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
api_key: process.env.CLOUDINARY_API_KEY,
api_secret: process.env.CLOUDINARY_API_SECRET,
secure: true // Always HTTPS
}
// ❌ Bad: Hardcoded credentials
cloudConfig: {
cloud_name: 'my-cloud', // Don't do this!
api_key: '123456',
api_secret: 'secret'
}
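To fail fast when a variable is missing rather than discovering it on the first upload, a small hypothetical loader can assert presence at startup:

```javascript
// Read required keys from an env-like object, throwing if any is
// missing or empty. Pass process.env in real code.
function requireEnv(env, keys) {
  const missing = keys.filter((k) => !env[k]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return Object.fromEntries(keys.map((k) => [k, env[k]]));
}
```

For example: `requireEnv(process.env, ['CLOUDINARY_CLOUD_NAME', 'CLOUDINARY_API_KEY', 'CLOUDINARY_API_SECRET'])`.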
Before (50+ lines):
const multer = require('multer');
const sharp = require('sharp');
const fs = require('fs');
const upload = multer({ dest: 'temp/' });
app.post('/upload', upload.single('image'), async (req, res) => {
const output = `uploads/${Date.now()}.webp`;
await sharp(req.file.path).webp({ quality: 80 }).toFile(output);
fs.unlinkSync(req.file.path);
res.json({ path: output });
});
After (Upfly):
const { upflyUpload } = require('upfly');
app.post('/upload',
upflyUpload({
fields: { image: { output: 'disk', format: 'webp', quality: 80 } },
outputDir: './uploads'
}),
(req, res) => res.json({ path: req.files.image[0].path })
);
Before (manual upload):
const multer = require('multer');
const cloudinary = require('cloudinary').v2;
const fs = require('fs');
const upload = multer({ dest: 'temp/' });
app.post('/upload', upload.single('image'), async (req, res) => {
const result = await cloudinary.uploader.upload(req.file.path);
fs.unlinkSync(req.file.path);
res.json({ url: result.secure_url });
});
After (Upfly):
const { upflyUpload } = require('upfly');
app.post('/upload',
upflyUpload({
fields: {
image: {
cloudStorage: true,
cloudProvider: 'cloudinary',
cloudConfig: {
cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
api_key: process.env.CLOUDINARY_API_KEY,
api_secret: process.env.CLOUDINARY_API_SECRET
}
}
}
}),
(req, res) => res.json({ url: req.files.image[0].cloudUrl })
);
Check credentials:
// Verify env variables are loaded
console.log('Cloud name:', process.env.CLOUDINARY_CLOUD_NAME);
Test connection at startup:
const { validateAllCloudConfigs } = require('upfly/cloud-setup/cloud');
validateAllCloudConfigs(config.fields)
.then(() => console.log('✓ Cloud configs validated'))
.catch(err => console.error('✗ Cloud config error:', err));
Enable safeFile with absolute paths:
const path = require('path');
upflyUpload({
safeFile: true,
outputDir: path.join(__dirname, 'uploads') // Use absolute path
})
Sharp supports: JPEG, PNG, WebP, GIF, AVIF, TIFF, SVG, HEIF
For other formats:
{
keepOriginal: true // Skip conversion
}
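Using the format list above, a hypothetical helper can decide when to fall back to keepOriginal (note that SVG/HEIF input support can vary with the installed libvips build):

```javascript
// MIME types Sharp can take as conversion input, per the list above.
const SHARP_INPUTS = new Set([
  'image/jpeg', 'image/png', 'image/webp', 'image/gif',
  'image/avif', 'image/tiff', 'image/svg+xml', 'image/heif',
]);

// True when a file should skip conversion and be stored as-is.
function shouldKeepOriginal(mimetype) {
  return !SHARP_INPUTS.has(mimetype);
}
```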
Specify cloudProvider before cloudConfig:
// ✓ Correct - enables conditional types
cloudProvider: 'cloudinary',
cloudConfig: { /* autocomplete works */ }
// ✗ Wrong - breaks type inference
cloudConfig: { /* no autocomplete */ },
cloudProvider: 'cloudinary'
Main middleware for file uploads with processing.
interface UpflyOptions {
fields: Record<string, FieldConfig>;
outputDir?: string; // Default: './uploads'
limit?: number; // Default: 10485760 (10MB)
safeFile?: boolean; // Default: false
}
Returns: Express middleware function
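The documented defaults can be applied with a plain merge. This is a sketch of how resolved options would look, assuming the defaults above, not Upfly's source:

```javascript
// Merge user options with the documented defaults for UpflyOptions.
function resolveOptions(options) {
  return {
    outputDir: './uploads',
    limit: 10 * 1024 * 1024, // 10485760 bytes (10MB)
    safeFile: false,
    ...options, // user-supplied values win
  };
}
```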
Conversion-only middleware for existing Multer uploads.
interface ConvertOptions {
fields: Record<string, FieldConfig>;
outputDir?: string;
safeFile?: boolean;
}
Requirements:
• Multer must use memoryStorage() (files need .buffer)
Example:
const multer = require('multer');
const upload = multer({ storage: multer.memoryStorage() });
app.post('/upload',
upload.single('image'),
upflyConvert({
fields: { image: { format: 'webp', quality: 80 } }
}),
(req, res) => res.json({ file: req.file })
);
Q: Do I need to install the cloud provider SDKs?
A: Only if using cloud storage. Basic image processing works without them.
Q: Can I use multiple cloud providers at once?
A: Yes! Different fields can use different providers:
fields: {
avatar: { cloudProvider: 'cloudinary', /* ... */ },
documents: { cloudProvider: 's3', /* ... */ }
}
Q: What happens if image conversion fails?
A: With safeFile: true, the original file uploads automatically. You always get the file.
Q: How do I handle very large files?
A: Use output: 'disk' with keepOriginal: true. Streaming handles any size.
Q: Can I upload non-image files?
A: Yes! Use keepOriginal: true for documents, PDFs, etc.
We welcome contributions! Please:
1. Create a feature branch: git checkout -b feature/amazing
2. Commit your changes: git commit -m 'Add amazing feature'
3. Push the branch: git push origin feature/amazing

License: MIT © Ramin
See LICENSE for details.
Stop fighting file uploads. Start building features.
Made with ⚡ by developers, for developers.