
# cloudku-uploader
Blazing-fast, zero-dependency uploader for CloudKu. Supports auto-conversion, chunked uploads, and TypeScript. Easily upload images, videos, audio, and documents via Node.js.
Powerful file uploader client for CloudKu image hosting service with automatic chunked uploads, stream support, and load balancing across multiple endpoints.
## Features

- Load balancing across `cloudkuimages.guru` and `cloudkuimages-guru.us.itpanel.app`
- `Promise.allSettled` for resilient batch operations

## Installation

```bash
npm install cloudku-uploader
yarn add cloudku-uploader
pnpm add cloudku-uploader
```
## Usage

### ESM

```js
import cloudku from 'cloudku-uploader'

const buffer = await fetch('image.jpg').then(r => r.arrayBuffer())
const result = await cloudku.uploadFile(buffer, 'image.jpg')
console.log(result.url)
```
### CommonJS

```js
const cloudku = require('cloudku-uploader')
const fs = require('fs').promises

async function upload() {
  const buffer = await fs.readFile('image.jpg')
  const result = await cloudku.uploadFile(buffer, 'image.jpg')
  console.log(result.url)
}

upload()
```
### Browser (file input)

```js
import cloudku from 'cloudku-uploader'

document.querySelector('#fileInput').addEventListener('change', async (e) => {
  const file = e.target.files[0]
  const buffer = await file.arrayBuffer()
  const result = await cloudku.uploadFile(buffer, file.name)
  console.log('Uploaded to:', result.url)
})
```
### Upload a local file (Node.js)

```js
import cloudku from 'cloudku-uploader'
import { readFile } from 'fs/promises'

const buffer = await readFile('./photo.jpg')
// Slice out the exact bytes: small Node Buffers may share a larger pooled
// ArrayBuffer, so passing buffer.buffer directly can include unrelated bytes
const arrayBuffer = buffer.buffer.slice(buffer.byteOffset, buffer.byteOffset + buffer.byteLength)
const result = await cloudku.uploadFile(arrayBuffer, 'photo.jpg')
console.log('File URL:', result.url)
```
### Stream upload

```js
import cloudku from 'cloudku-uploader'
import { createReadStream } from 'fs'

// Collect the stream into a single buffer before uploading
const stream = createReadStream('./large-video.mp4')
const chunks = []
for await (const chunk of stream) {
  chunks.push(chunk)
}
const buffer = Buffer.concat(chunks)
const result = await cloudku.uploadFile(buffer.buffer, 'video.mp4')
console.log('Stream uploaded:', result.url)
```
### Upload with progress tracking

```js
import cloudku from 'cloudku-uploader'
import { createReadStream } from 'fs'
import { stat } from 'fs/promises'

const filePath = './movie.mp4'
const fileStats = await stat(filePath)
const totalSize = fileStats.size

const stream = createReadStream(filePath)
let uploadedSize = 0
const chunks = []
for await (const chunk of stream) {
  chunks.push(chunk)
  uploadedSize += chunk.length
  // Note: this tracks read progress from disk; the upload itself starts afterwards
  const progress = (uploadedSize / totalSize * 100).toFixed(2)
  console.log(`Progress: ${progress}%`)
}

const buffer = Buffer.concat(chunks)
const result = await cloudku.uploadFile(buffer.buffer, 'movie.mp4')
console.log('Complete:', result.url)
```
### Large file upload (explicit chunking)

```js
import cloudku from 'cloudku-uploader'

const buffer = await fetch('4k-video.mp4').then(r => r.arrayBuffer())
const result = await cloudku.uploadLarge(
  buffer,
  '4k-video.mp4',
  16 * 1024 * 1024 // 16 MB chunks
)
console.log('Large file URL:', result.url)
```
### Batch upload

```js
import cloudku from 'cloudku-uploader'

const files = [
  { buffer: buffer1, name: 'photo1.jpg' },
  { buffer: buffer2, name: 'photo2.png' },
  { buffer: buffer3, name: 'document.pdf' }
]

const results = await cloudku.uploadBatch(files)

const successful = results.filter(r => r.status === 'fulfilled')
const failed = results.filter(r => r.status === 'rejected')
console.log(`✓ ${successful.length} uploaded successfully`)
console.log(`✗ ${failed.length} failed`)

results.forEach((result, index) => {
  if (result.status === 'fulfilled') {
    console.log(`[${index + 1}] ${files[index].name}: ${result.value.url}`)
  } else {
    console.error(`[${index + 1}] ${files[index].name}: ${result.reason.message}`)
  }
})
```
## API Reference

### uploadFile(buffer, name?)

Main upload method with automatic strategy selection based on file size.

**Parameters:**

- `buffer` {ArrayBuffer} - File content as ArrayBuffer (required)
- `name` {string} - Filename with extension (optional, default: `'file.bin'`)

**Returns:** `Promise<UploadResult>`

**Behavior:** Files up to 100 MB are sent in a single request; larger files are automatically uploaded in chunks.

**Example:**

```js
const buffer = await file.arrayBuffer()
const result = await cloudku.uploadFile(buffer, 'photo.jpg')
```

**Response Schema:**

```js
{
  status: 'success',
  url: 'https://cloudkuimages.guru/files/abc123.jpg',
  filename: 'photo.jpg',
  size: 2048576
}
```
### uploadLarge(buffer, name?, chunkSize?)

Explicit chunked upload for large files with progress control.

**Parameters:**

- `buffer` {ArrayBuffer} - File content as ArrayBuffer (required)
- `name` {string} - Filename with extension (optional, default: `'file.bin'`)
- `chunkSize` {number} - Chunk size in bytes (optional, default: `8388608` = 8MB)

**Returns:** `Promise<UploadResult>`

**Implementation Details:**

- Generates a `fileId` (UUID) for chunk tracking
- Each chunk request carries:
  - `chunk`: Current chunk index (0-based)
  - `chunks`: Total number of chunks
  - `filename`: Original filename
  - `fileId`: UUID for tracking
  - `size`: Total file size in bytes
- Finalizes with `chunked=1&finalize=1` query params

**Example:**

```js
const result = await cloudku.uploadLarge(
  buffer,
  'movie.mkv',
  10 * 1024 * 1024 // 10 MB chunks
)
```

**Chunk Upload Request:**

```js
FormData {
  file: Blob(chunk),
  chunk: 0,
  chunks: 12,
  filename: 'movie.mkv',
  fileId: '550e8400-e29b-41d4-a716-446655440000',
  size: 104857600
}
```

**Finalization Request:**

```
POST /upload.php?chunked=1&finalize=1
Content-Type: application/json

{
  "fileId": "550e8400-e29b-41d4-a716-446655440000",
  "filename": "movie.mkv",
  "chunks": 12
}
```
### uploadBatch(files)

Upload multiple files concurrently with individual error handling.

**Parameters:**

- `files` {Array} - Array of file objects (required)

**FileObject Schema:**

```js
{
  buffer: ArrayBuffer,
  name: string
}
```

**Returns:** `Promise<Array<PromiseSettledResult<UploadResult>>>`

**Example:**

```js
const results = await cloudku.uploadBatch([
  { buffer: buffer1, name: 'image1.jpg' },
  { buffer: buffer2, name: 'image2.png' },
  { buffer: buffer3, name: 'video.mp4' }
])

results.forEach((result, index) => {
  if (result.status === 'fulfilled') {
    console.log(`Success: ${result.value.url}`)
  } else {
    console.error(`Failed: ${result.reason}`)
  }
})
```
## Error Handling

```js
try {
  const result = await cloudku.uploadFile(buffer, 'image.jpg')
  if (result.status === 'error') {
    throw new Error(result.message)
  }
  console.log('Uploaded:', result.url)
} catch (error) {
  console.error('Upload failed:', error.message)
}
```
### Retry with backoff

```js
async function uploadWithRetry(buffer, name, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await cloudku.uploadFile(buffer, name)
    } catch (error) {
      if (i === maxRetries - 1) throw error
      // Wait longer after each failed attempt (linear backoff)
      await new Promise(resolve => setTimeout(resolve, 1000 * (i + 1)))
    }
  }
}
```
### Adaptive chunk size

```js
function getOptimalChunkSize(connectionType) {
  const sizes = {
    '4g': 16 * 1024 * 1024,
    'wifi': 10 * 1024 * 1024,
    '3g': 4 * 1024 * 1024,
    'slow-2g': 1 * 1024 * 1024
  }
  return sizes[connectionType] || 8 * 1024 * 1024
}

// navigator.connection is browser-only and not universally supported;
// effectiveType reports values like '4g' or '3g', so 'wifi' serves as a fallback label here
const connection = navigator.connection?.effectiveType || 'wifi'
const chunkSize = getOptimalChunkSize(connection)
const result = await cloudku.uploadLarge(buffer, 'file.zip', chunkSize)
```
### Batch upload with a concurrency limit

```js
async function uploadBatchWithLimit(files, limit = 3) {
  const results = []
  for (let i = 0; i < files.length; i += limit) {
    const batch = files.slice(i, i + limit)
    const batchResults = await cloudku.uploadBatch(batch)
    results.push(...batchResults)
  }
  return results
}
```
## Architecture

### Load Balancing

The uploader randomly selects between two endpoints:

- https://cloudkuimages.guru
- https://cloudkuimages-guru.us.itpanel.app

This provides basic load balancing and failover capability.
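A minimal sketch of what such a selection could look like (hypothetical; the package's internal implementation may differ):

```javascript
// Hypothetical sketch of random endpoint selection between the two
// documented base URLs.
const BASES = [
  'https://cloudkuimages.guru',
  'https://cloudkuimages-guru.us.itpanel.app'
]

function pickBase() {
  return BASES[Math.floor(Math.random() * BASES.length)]
}
```

Because each upload picks an endpoint at random, traffic spreads roughly evenly across both hosts, and a retry after a failure has a chance of landing on the other one.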
### Upload Flow

**Small File (<= 100MB):**

```
Client → pickBase() → POST /upload.php → Response
```

**Large File (> 100MB):**

```
Client → Generate UUID
       → Split into chunks
       → For each chunk:
           → POST /upload.php (with metadata)
       → POST /upload.php?chunked=1&finalize=1
       → Response
```
### Request Headers

All requests include:

```js
{
  'User-Agent': 'cloudku-uploader/5.0',
  'Accept': 'application/json'
}
```
## Module Support

```js
// ESM
import cloudku from 'cloudku-uploader'

// CommonJS
const cloudku = require('cloudku-uploader')
```

## TypeScript

```ts
import cloudku from 'cloudku-uploader'
import type { UploadResult, FileObject } from 'cloudku-uploader'

const result: UploadResult = await cloudku.uploadFile(buffer, 'file.jpg')
```
### Type Definitions

```ts
interface UploadResult {
  status: 'success' | 'error'
  url?: string
  filename?: string
  size?: number
  message?: string
}

interface FileObject {
  buffer: ArrayBuffer
  name: string
}

interface CloudkuUploader {
  uploadFile(buffer: ArrayBuffer, name?: string): Promise<UploadResult>
  uploadLarge(buffer: ArrayBuffer, name?: string, chunkSize?: number): Promise<UploadResult>
  uploadBatch(files: FileObject[]): Promise<PromiseSettledResult<UploadResult>[]>
}
```
## Contributing

Contributions are welcome! Please follow these guidelines:

1. Create a feature branch: `git checkout -b feature/amazing-feature`
2. Commit your changes: `git commit -m 'Add amazing feature'`
3. Push the branch: `git push origin feature/amazing-feature`

Please ensure:
## License

MIT License - see LICENSE file for details

## Acknowledgments

Special thanks to the CloudKu team for providing the hosting infrastructure.

Made with ❤️ for the JavaScript community