
# @dropgate/core
A headless, environment-agnostic TypeScript library for Dropgate file sharing operations.
@dropgate/core is the universal client library for Dropgate. It provides the core functionality for file uploads, downloads, metadata fetching, and peer-to-peer (P2P) transfers.
This package is headless and environment-agnostic — it contains no DOM manipulation, no browser-specific APIs, and no Node.js-specific code. All environment-specific concerns (loading PeerJS, handling file streams, etc.) are handled by the consumer.
```sh
npm install @dropgate/core
```
The package ships with multiple build targets:
| Format | File | Use Case |
|---|---|---|
| ESM | dist/index.js | Modern bundlers, Node.js 18+ |
| CJS | dist/index.cjs | Legacy Node.js, CommonJS |
| Browser IIFE | dist/index.browser.js | `<script>` tag, exposes DropgateCore global |
All operations go through a single DropgateClient instance. Server connection details are specified once in the constructor:
```js
import { DropgateClient } from '@dropgate/core';

const client = new DropgateClient({
  clientVersion: '3.0.10',
  server: 'https://dropgate.link', // URL string or { host, port?, secure? }
  fallbackToHttp: true, // auto-retry over HTTP if HTTPS fails (optional)
});
```
connect() fetches server info, checks version compatibility, and caches the result. All methods call connect() internally, so explicit calls are optional — useful for "Test Connection" buttons or eager validation.
```js
const { serverInfo, compatible, message } = await client.connect({ timeoutMs: 5000 });

console.log('Server version:', serverInfo.version);
console.log('Compatible:', compatible);
console.log('Upload enabled:', serverInfo.capabilities?.upload?.enabled);
console.log('P2P enabled:', serverInfo.capabilities?.p2p?.enabled);
```
```js
const session = await client.uploadFiles({
  file: myFile, // File or Blob (implements FileSource)
  lifetimeMs: 3600000, // 1 hour
  maxDownloads: 5,
  encrypt: true,
  onProgress: ({ phase, text, percent }) => {
    console.log(`${phase}: ${text} (${percent ?? 0}%)`);
  },
});

const result = await session.result;
console.log('Download URL:', result.downloadUrl);

// Cancel an in-progress upload:
// session.cancel('User cancelled');
```
```js
// Fetch file metadata (size, encryption status, filename)
const fileMeta = await client.getFileMetadata('file-id-123');
console.log('File size:', fileMeta.sizeBytes);
console.log('Encrypted:', fileMeta.isEncrypted);
console.log('Filename:', fileMeta.filename || fileMeta.encryptedFilename);

// Fetch bundle metadata with automatic derivation
const bundleMeta = await client.getBundleMetadata(
  'bundle-id-456',
  'base64-key-from-url-hash' // required for encrypted bundles
);
console.log('Files:', bundleMeta.fileCount);
console.log('Total size:', bundleMeta.totalSizeBytes);
console.log('Sealed:', bundleMeta.sealed);

// For sealed bundles, the manifest is automatically decrypted
// and the files array is populated from the decrypted manifest
bundleMeta.files.forEach((file) => {
  console.log(`- ${file.filename}: ${file.sizeBytes} bytes`);
});
```
```js
// Download with streaming (for large files)
// `writer` is your own output sink, e.g. a StreamSaver or file-system writer
const result = await client.downloadFiles({
  fileId: 'abc123',
  keyB64: 'base64-key-from-url-hash', // required for encrypted files
  onProgress: ({ phase, percent, processedBytes, totalBytes }) => {
    console.log(`${phase}: ${percent}% (${processedBytes}/${totalBytes})`);
  },
  onData: async (chunk) => {
    await writer.write(chunk);
  },
});
console.log('Downloaded:', result.filename);

// Or download to memory (for small files, omit onData)
const memoryResult = await client.downloadFiles({ fileId: 'abc123' });
console.log('File size:', memoryResult.data?.length);
```
```js
const Peer = await loadPeerJS(); // your loader function

const session = await client.p2pSend({
  file: myFile,
  Peer,
  onCode: (code) => console.log('Share this code:', code),
  onProgress: ({ processedBytes, totalBytes, percent }) => {
    console.log(`Sending: ${percent.toFixed(1)}%`);
  },
  onComplete: () => console.log('Transfer complete!'),
  onError: (err) => console.error('Error:', err),
  onCancel: ({ cancelledBy }) => console.log(`Cancelled by ${cancelledBy}`),
  onDisconnect: () => console.log('Receiver disconnected'),
});

// Session control
console.log('Status:', session.getStatus());
console.log('Bytes sent:', session.getBytesSent());
session.stop(); // cancel
```
```js
const Peer = await loadPeerJS();

// `writer` is your own output sink for received chunks
const session = await client.p2pReceive({
  code: 'ABCD-1234',
  Peer,
  onMeta: ({ name, total, fileCount, files }) => {
    console.log(`Receiving: ${name} (${total} bytes)`);
    if (fileCount) console.log(`Multi-file transfer: ${fileCount} files`);
  },
  onData: async (chunk) => {
    await writer.write(chunk);
  },
  onProgress: ({ processedBytes, totalBytes, percent }) => {
    console.log(`Receiving: ${percent.toFixed(1)}%`);
  },
  // Multi-file transfers: called when each individual file starts/ends
  onFileStart: ({ fileIndex, name, size }) => {
    console.log(`File ${fileIndex}: ${name} (${size} bytes)`);
  },
  onFileEnd: ({ fileIndex, receivedBytes }) => {
    console.log(`File ${fileIndex} complete (${receivedBytes} bytes)`);
  },
  onComplete: ({ received, total }) => console.log(`Complete! ${received}/${total}`),
  onCancel: ({ cancelledBy }) => console.log(`Cancelled by ${cancelledBy}`),
  onError: (err) => console.error('Error:', err),
  onDisconnect: () => console.log('Sender disconnected'),
});

session.stop(); // cancel
```
Use `autoReady: false` to show a file preview before starting the transfer:

```js
let writer;

const session = await client.p2pReceive({
  code: 'ABCD-1234',
  Peer,
  autoReady: false,
  onMeta: ({ name, total, sendReady }) => {
    console.log(`File: ${name} (${total} bytes)`);
    showPreviewUI(name, total);
    confirmButton.onclick = () => {
      writer = createWriteStream(name);
      sendReady(); // signal the sender to begin the transfer
    };
  },
  onData: async (chunk) => {
    await writer.write(chunk);
  },
  onComplete: () => {
    writer.close();
    console.log('Transfer complete!');
  },
});
```
For one-off checks before constructing a client:
```js
import { getServerInfo } from '@dropgate/core';

const { serverInfo } = await getServerInfo({
  server: 'https://dropgate.link',
  timeoutMs: 5000,
});

console.log('Server version:', serverInfo.version);
```
The core library provides intelligent metadata fetching methods that handle all the complexity of deriving computed fields from server responses.
The server stores and sends only minimal, essential data; the core library derives all computed fields:

- `totalSizeBytes`: sum of all file sizes
- `fileCount`: length of the `files` array

```js
// The core library automatically:
// 1. Fetches raw metadata from the server
// 2. Decrypts sealed bundle manifests (if keyB64 provided)
// 3. Derives totalSizeBytes and fileCount from the files array
// 4. Returns a complete BundleMetadata object
const meta = await client.getBundleMetadata('bundle-id', 'optional-key');
// meta.totalSizeBytes and meta.fileCount are computed client-side
```
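For intuition, the derivation described above amounts to a simple fold over the `files` array. The following standalone sketch uses an assumed file-entry shape, not the library's actual internal types:

```typescript
// Hypothetical shape of a bundle's file entries (assumption, not the real type)
interface BundleFileEntry {
  filename: string;
  sizeBytes: number;
}

// Sketch of the client-side derivation: the server sends only the files
// array; fileCount and totalSizeBytes are computed locally.
function deriveBundleFields(files: BundleFileEntry[]) {
  return {
    fileCount: files.length,
    totalSizeBytes: files.reduce((sum, f) => sum + f.sizeBytes, 0),
  };
}
```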
The main client class for interacting with Dropgate servers.
| Option | Type | Required | Description |
|---|---|---|---|
| clientVersion | string | Yes | Client version for compatibility checking |
| server | string \| ServerTarget | Yes | Server URL or { host, port?, secure? } |
| fallbackToHttp | boolean | No | Auto-retry with HTTP if HTTPS fails in connect() |
| chunkSize | number | No | Upload chunk size fallback (default: 5 MB). The server's configured chunk size (from /api/info) takes precedence when available. |
| fetchFn | FetchFn | No | Custom fetch implementation |
| cryptoObj | CryptoAdapter | No | Custom crypto implementation |
| base64 | Base64Adapter | No | Custom base64 encoder/decoder |
| Property | Type | Description |
|---|---|---|
| baseUrl | string | Resolved server base URL (may change if HTTP fallback occurs) |
| serverTarget | ServerTarget | Derived { host, port, secure } from baseUrl |
| Method | Description |
|---|---|
| connect(opts?) | Fetch server info, check compatibility, cache the result |
| getFileMetadata(fileId, opts?) | Fetch metadata for a single file |
| getBundleMetadata(bundleId, keyB64?, opts?) | Fetch bundle metadata with automatic manifest decryption and field derivation |
| uploadFiles(opts) | Upload a file with optional encryption |
| downloadFiles(opts) | Download a file with optional decryption |
| p2pSend(opts) | Start a P2P send session |
| p2pReceive(opts) | Start a P2P receive session |
| validateUploadInputs(opts) | Validate file and settings before upload |
| resolveShareTarget(value, opts?) | Resolve a sharing code via the server |
| Function | Description |
|---|---|
| generateP2PCode(cryptoObj?) | Generate a secure sharing code |
| isP2PCodeLike(code) | Check whether a string looks like a P2P code |
| isSecureContextForP2P(hostname, isSecureContext) | Check whether P2P is allowed |
| isLocalhostHostname(hostname) | Check whether a hostname is localhost |
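For intuition, a shape check like isP2PCodeLike might resemble the following sketch. The ABCD-1234 format is an assumption taken from the p2pReceive example; the library's actual validation rules may differ:

```typescript
// Hypothetical re-implementation of an isP2PCodeLike-style check.
// Assumes codes look like "ABCD-1234" (four alphanumerics, a dash,
// four alphanumerics); the real library may accept other shapes.
function looksLikeP2PCode(code: string): boolean {
  return /^[A-Z0-9]{4}-[A-Z0-9]{4}$/.test(code.trim().toUpperCase());
}
```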
| Function | Description |
|---|---|
| getServerInfo(opts) | Fetch server info and capabilities (standalone) |
| lifetimeToMs(value, unit) | Convert a lifetime to milliseconds |
| estimateTotalUploadSizeBytes(...) | Estimate upload size with encryption overhead |
| bytesToBase64(bytes) | Convert bytes to base64 |
| arrayBufferToBase64(buffer) | Convert an ArrayBuffer to base64 |
| base64ToBytes(b64) | Convert base64 to bytes |
| parseSemverMajorMinor(version) | Parse a semver string into { major, minor } |
| validatePlainFilename(name) | Validate that a filename has no path traversal or illegal characters |
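A lifetimeToMs-style conversion is essentially a unit-multiplier lookup. In this sketch the unit names are assumptions, not necessarily the values the library accepts:

```typescript
// Hypothetical sketch of a lifetimeToMs-style conversion; the unit names
// here are assumptions, not the library's actual accepted values.
type LifetimeUnit = 'minutes' | 'hours' | 'days';

function lifetimeToMsSketch(value: number, unit: LifetimeUnit): number {
  const msPerUnit: Record<LifetimeUnit, number> = {
    minutes: 60_000,
    hours: 3_600_000,
    days: 86_400_000,
  };
  return value * msPerUnit[unit];
}
```

For example, the `lifetimeMs: 3600000` used in the upload example corresponds to `lifetimeToMsSketch(1, 'hours')`.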
| Function | Description |
|---|---|
| sha256Hex(data) | Compute a SHA-256 hex digest |
| generateAesGcmKey() | Generate a random AES-256-GCM CryptoKey |
| exportKeyBase64(key) | Export a CryptoKey as a base64 string |
| importKeyFromBase64(b64) | Import a CryptoKey from a base64 string |
| encryptToBlob(blob, key) | Encrypt a Blob with AES-256-GCM |
| encryptFilenameToBase64(name, key) | Encrypt a filename string to base64 |
| decryptChunk(chunk, key) | Decrypt an AES-256-GCM encrypted chunk |
| decryptFilenameFromBase64(b64, key) | Decrypt a filename from base64 |
| Function | Description |
|---|---|
| getDefaultFetch() | Get the default fetch implementation for the current environment |
| getDefaultCrypto() | Get the default CryptoAdapter (Web Crypto API) |
| getDefaultBase64() | Get the default Base64Adapter for the current environment |
| Function | Description |
|---|---|
| buildBaseUrl(server) | Build a base URL string from a server URL or ServerTarget |
| parseServerUrl(url) | Parse a URL string into a ServerTarget |
| fetchJson(url, opts?) | Fetch JSON with timeout and error handling |
| sleep(ms) | Promise-based delay |
| makeAbortSignal(timeoutMs?) | Create an AbortSignal with optional timeout |
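A parseServerUrl-style helper can be built on the WHATWG URL API. In this sketch the ServerTarget shape mirrors the `{ host, port?, secure? }` form from the constructor docs; it is an assumption, not the library's real code:

```typescript
// Hypothetical sketch of a parseServerUrl-style helper; the ServerTarget
// shape mirrors the { host, port?, secure? } form used by the constructor.
interface ServerTarget {
  host: string;
  port?: number;
  secure?: boolean;
}

function parseServerUrlSketch(url: string): ServerTarget {
  const u = new URL(url);
  return {
    host: u.hostname,
    port: u.port ? Number(u.port) : undefined,
    secure: u.protocol === 'https:',
  };
}
```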
| Constant | Description |
|---|---|
| DEFAULT_CHUNK_SIZE | Default upload chunk size in bytes (5 MB) |
| AES_GCM_IV_BYTES | AES-GCM initialisation vector length in bytes |
| AES_GCM_TAG_BYTES | AES-GCM authentication tag length in bytes |
| ENCRYPTION_OVERHEAD_PER_CHUNK | Total encryption overhead added to each chunk |
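To make the per-chunk overhead concrete, here is a rough size-estimate sketch. The 12-byte IV and 16-byte tag are common AES-GCM defaults and are assumptions here; check the exported constants for the library's real values:

```typescript
// Assumed AES-GCM framing: 12-byte IV + 16-byte auth tag per chunk
// (common Web Crypto defaults; the library's constants may differ).
const CHUNK_SIZE = 5 * 1024 * 1024; // mirrors DEFAULT_CHUNK_SIZE (5 MB)
const OVERHEAD_PER_CHUNK = 12 + 16;

function estimateEncryptedSizeSketch(
  plainBytes: number,
  chunkSize: number = CHUNK_SIZE
): number {
  // Even an empty file produces one (overhead-only) chunk
  const chunks = Math.max(1, Math.ceil(plainBytes / chunkSize));
  return plainBytes + chunks * OVERHEAD_PER_CHUNK;
}
```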
A streaming ZIP assembler for multi-file P2P transfers. Wraps fflate and produces a valid ZIP archive without buffering entire files in memory.
```js
import { StreamingZipWriter } from '@dropgate/core';

const zipWriter = new StreamingZipWriter((zipChunk) => {
  // Write each ZIP chunk to your output (e.g., a StreamSaver writer)
  writer.write(zipChunk);
});

zipWriter.startFile('photo.jpg');
zipWriter.writeChunk(chunk1);
zipWriter.writeChunk(chunk2);
zipWriter.endFile();

zipWriter.startFile('notes.txt');
zipWriter.writeChunk(chunk3);
zipWriter.endFile();

zipWriter.finalize(); // flush remaining data and write the ZIP footer
```
| Class | Description |
|---|---|
| DropgateError | Base error class |
| DropgateValidationError | Input validation errors |
| DropgateNetworkError | Network/connection errors |
| DropgateProtocolError | Server protocol errors |
| DropgateAbortError | Operation aborted |
| DropgateTimeoutError | Operation timed out |
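Since DropgateError is the base class, consumers can presumably branch on failures with instanceof, from most to least specific. A self-contained sketch with stand-in classes (in real code, import the actual classes from @dropgate/core):

```typescript
// Minimal stand-ins for the error hierarchy (assumed shapes; the real
// classes come from @dropgate/core).
class DropgateError extends Error {}
class DropgateNetworkError extends DropgateError {}
class DropgateTimeoutError extends DropgateError {}

// Check the most specific classes first so they are not swallowed by the
// base-class branch.
function describeFailure(err: unknown): string {
  if (err instanceof DropgateTimeoutError) return 'timed out';
  if (err instanceof DropgateNetworkError) return 'network failure';
  if (err instanceof DropgateError) return 'dropgate error';
  return 'unknown error';
}
```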
For browser environments, you can use the IIFE bundle:
```html
<script src="/path/to/dropgate-core.browser.js"></script>
<script>
  const { DropgateClient } = DropgateCore;
  const client = new DropgateClient({ clientVersion: '3.0.10', server: location.origin });
  // ...
</script>
```
Or as an ES module:
```html
<script type="module">
  import { DropgateClient } from '/path/to/dropgate-core.js';
  const client = new DropgateClient({ clientVersion: '3.0.10', server: location.origin });
  // ...
</script>
```
The P2P methods are headless. The consumer is responsible for:

- loading PeerJS and passing the Peer constructor to p2pSend/p2pReceive
- writing received data from the onData callback (e.g., using streamSaver)
- rendering UI from the progress callbacks (onProgress, onStatus, etc.)

This design allows the library to work in any environment (browser, Electron, Node.js with WebRTC).
The P2P implementation is designed for unlimited file sizes with constant memory usage: chunks are streamed directly to onData, with no buffering.

Note: for large files, always use the onData callback approach rather than buffering in memory.
Licensed under the Apache-2.0 License. See the LICENSE file for details.