
capacity-limiter
A powerful, flexible task scheduler and rate limiter based on resource capacity. Efficiently manage task execution in JavaScript and TypeScript applications.
Supports reserve (temporary) and claim (lasting) capacity usage.

npm install capacity-limiter
# or
yarn add capacity-limiter
# or
pnpm add capacity-limiter
Example: API Rate Limiter
import { CapacityLimiter } from 'capacity-limiter';
// Using claim strategy and reset rule to handle APIs with a quota that resets every minute
const burstLimiter = new CapacityLimiter({
maxCapacity: 1000, // API allows 1000 requests per minute
capacityStrategy: 'claim', // Each request permanently claims capacity
releaseRules: [
{ type: 'reset', interval: 60 * 1000 } // Reset capacity to 0 every minute
]
});
// Make API requests through the limiter
async function makeApiRequest(endpoint, data) {
return burstLimiter.schedule(1, async () => {
const response = await fetch(endpoint, {
method: 'POST',
body: JSON.stringify(data),
headers: { 'Content-Type': 'application/json' }
});
return response.json();
});
}
More examples can be found in the Real-World Examples section.
Capacity represents abstract resource units (CPU, memory, network, etc.) that tasks consume.
You define what a unit means in your context - it could be memory in MB, API requests, database connections, etc.
You can also use maxConcurrent and minDelayBetweenTasks instead of, or in addition to, capacity-based management.
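Because a capacity unit is whatever you define, mapping real resources onto units is just arithmetic. A small illustrative sketch where one unit stands for 1 MB of memory (the names and numbers here are made up for illustration, not part of the library's API):

```javascript
// Illustrative only: if one capacity unit means 1 MB of memory,
// a 500 MB budget becomes maxCapacity: 500 and each file costs
// ceil(sizeInBytes / MB) units.
const MB = 1024 * 1024;
const maxCapacity = 500; // 500 units = 500 MB budget
const capacityFor = bytes => Math.ceil(bytes / MB);

console.log(capacityFor(48 * MB));                 // 48 units
console.log(capacityFor(10.5 * MB));               // 11 units (rounded up)
console.log(capacityFor(48 * MB) <= maxCapacity);  // true
```

Rounding up ensures a task never under-reports its resource cost.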
Tasks are scheduled with a specified capacity requirement. If there's enough available capacity, tasks execute immediately. Otherwise, they're queued until capacity becomes available.
When maxQueueSize is set and exceeded, the limiter uses one of these strategies: 'throw-error', which rejects the new task with a CapacityLimiterError when the queue is full (default); 'replace'; or 'replace-by-priority'.
When a task requires more capacity than maxCapacity, the limiter uses one of these strategies: 'throw-error', which rejects with a CapacityLimiterError (default), or 'wait-for-full-capacity'.
const result = await limiter.schedule({
task: async () => {
// Your task implementation
return 'task result';
},
capacity: 5, // Amount of capacity required
priority: 1, // Priority (0-9, lower is higher priority)
queueWaitingLimit: 1000, // Prioritize after waiting 1000ms
queueWaitingTimeout: 2000, // Fail if waiting over 2000ms in queue
executionTimeout: 3000, // Fail if execution takes over 3000ms
});
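Since lower numbers mean higher priority, queued tasks effectively sort ascending by their priority value. A small illustration of that ordering (plain JavaScript, not the library's queue implementation):

```javascript
// "Lower is higher priority": sorting ascending by priority puts the
// most urgent task first. Illustrative data, not the library's internals.
const queued = [
  { id: 'a', priority: 5 },
  { id: 'b', priority: 1 },
  { id: 'c', priority: 9 },
];
queued.sort((x, y) => x.priority - y.priority);
console.log(queued.map(t => t.id)); // ['b', 'a', 'c']
```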
const limiter = new CapacityLimiter({
maxCapacity: 100,
capacityStrategy: 'claim',
releaseRules: [
// Reset capacity to zero every hour
{ type: 'reset', interval: 60 * 60 * 1000 },
// Reduce capacity by 10 units every minute
{ type: 'reduce', value: 10, interval: 60 * 1000 },
],
});
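For intuition, here is a toy simulation of how used capacity would drain under those two rules together, assuming the reduce rule fires once per elapsed minute and the reset zeroes usage each hour. This is plain arithmetic for reasoning about the configuration, not the library's internals:

```javascript
// Toy model of the two release rules above (reset hourly, reduce by 10
// per minute), with no new tasks arriving. Assumptions: reduce fires at
// each whole elapsed minute; usage never goes below zero.
function usedCapacityAfter(initialUsed, elapsedMs) {
  const HOUR = 60 * 60 * 1000;
  const MINUTE = 60 * 1000;
  const sinceReset = elapsedMs % HOUR;                 // reset rule zeroes usage hourly
  const reductions = Math.floor(sinceReset / MINUTE);  // reduce rule fires per minute
  const base = elapsedMs >= HOUR ? 0 : initialUsed;    // usage survives only within the hour
  return Math.max(0, base - reductions * 10);
}

console.log(usedCapacityAfter(100, 0));             // 100
console.log(usedCapacityAfter(100, 5 * 60 * 1000)); // 50
```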
const simpleRetryLimiter = new CapacityLimiter({
maxCapacity: 10,
failRecoveryStrategy: 'retry',
});
const configuredRetryLimiter = new CapacityLimiter({
maxCapacity: 10,
failRecoveryStrategy: {
type: 'retry',
retries: 3, // Number of retries
minTimeout: 1000, // Minimum timeout between retries
maxTimeout: 5000, // Maximum timeout between retries
randomize: true, // Randomize the timeout between min and max
factor: 2, // Exponential backoff factor
},
});
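The package does not spell out the exact delay formula here, but options named minTimeout, maxTimeout, and factor conventionally imply exponential backoff. A hedged sketch of that conventional arithmetic (an assumption about the behavior, not verified library internals; ignores the randomize jitter):

```javascript
// Conventional exponential backoff implied by options like minTimeout,
// maxTimeout, and factor. The actual library formula may differ.
function backoffDelay(attempt, { minTimeout = 1000, maxTimeout = 5000, factor = 2 } = {}) {
  return Math.min(maxTimeout, minTimeout * Math.pow(factor, attempt));
}

console.log(backoffDelay(0)); // 1000
console.log(backoffDelay(1)); // 2000
console.log(backoffDelay(3)); // 5000 (capped at maxTimeout)
```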
const customRetryLimiter = new CapacityLimiter({
maxCapacity: 10,
failRecoveryStrategy: {
type: 'custom',
retry: async (error, attempt) => {
// Custom retry logic
if (String(error).includes('Rate limit exceeded') && attempt < 3) {
console.log(`Retrying... Attempt ${attempt}`);
return {
type: 'retry',
timeout: 1000 * Math.pow(2, attempt), // Exponential backoff
};
}
// Do not retry
return {
type: 'throw-error',
error
};
},
},
});
// Set queue size limit and behavior
const limiter = new CapacityLimiter({
maxCapacity: 10,
maxQueueSize: 100, // Limit queue to 100 tasks
queueSizeExceededStrategy: 'replace-by-priority', // Strategy when queue is full
// Optional timeouts for queued tasks
queueWaitingLimit: 500, // Tasks waiting over 500ms get highest priority
queueWaitingTimeout: 5000, // Tasks waiting over 5000ms are rejected
});
// Handle task capacity exceeding max capacity
const bigTaskLimiter = new CapacityLimiter({
maxCapacity: 10,
taskExceedsMaxCapacityStrategy: 'wait-for-full-capacity', // Wait for all capacity to be free
});
// Wait for all tasks to complete
await limiter.stop();
// Reject waiting tasks but allow executing tasks to complete
await limiter.stop({ stopWaitingTasks: true });
// Reject all tasks, including those currently executing
await limiter.stop({ stopWaitingTasks: true, rejectExecutingTasks: true });
CapacityLimiter
constructor(options: CapacityLimiterOptions)
| Option | Type | Default | Description |
|---|---|---|---|
| maxCapacity | number | (optional) | Maximum capacity that can be used |
| initiallyUsedCapacity | number | 0 | Initial capacity already in use |
| capacityStrategy | 'reserve' \| 'claim' | 'reserve' | How capacity is managed |
| releaseRules | ReleaseRule[] | [] | Rules for releasing capacity |
| taskExceedsMaxCapacityStrategy | 'throw-error' \| 'wait-for-full-capacity' | 'throw-error' | Strategy when task exceeds max capacity |
| maxConcurrent | number | (unlimited) | Maximum concurrent tasks |
| maxQueueSize | number | (unlimited) | Maximum queue size |
| queueSizeExceededStrategy | 'throw-error' \| 'replace' \| 'replace-by-priority' | 'throw-error' | Strategy when queue size is exceeded |
| queueWaitingLimit | number | (unlimited) | Max queue waiting time before prioritization |
| queueWaitingTimeout | number | (unlimited) | Max queue waiting time before failure |
| executionTimeout | number | (unlimited) | Max execution time |
| minDelayBetweenTasks | number | (no delay) | Minimum time between task executions in milliseconds |
| failRecoveryStrategy | FailRecoveryStrategy | 'none' | Strategy for recovering from failures |
schedule
// Basic scheduling with default capacity
schedule<TResult>(callback: () => Promise<TResult>): Promise<TResult>;
// Scheduling with specified capacity
schedule<TResult>(capacity: number, callback: () => Promise<TResult>): Promise<TResult>;
// Scheduling with full configuration
schedule<TResult>(params: TaskParams<TResult>): Promise<TResult>;
wrap
The wrap() method creates a wrapper function that automatically applies the capacity limiter to any function call. This is particularly useful for creating rate-limited versions of existing functions.
// Wrap a function with default capacity (1)
wrap<T extends (...args: unknown[]) => Promise<unknown>>(callback: T): T;
// Wrap a function with custom configuration
wrap<T extends (...args: unknown[]) => Promise<unknown>>(params: WrappedTaskParams<T>): T;
Examples:
// Basic function wrapping
const originalApiCall = async (endpoint: string, data: any) => {
const response = await fetch(endpoint, {
method: 'POST',
body: JSON.stringify(data),
headers: { 'Content-Type': 'application/json' }
});
return response.json();
};
// Create a rate-limited version
const rateLimitedApiCall = limiter.wrap(originalApiCall);
// Use exactly like the original function
const result = await rateLimitedApiCall('/api/users', { name: 'John' });
// Wrapping with custom configuration
const heavyTask = async (data: any) => {
// CPU-intensive work
return processData(data);
};
const wrappedHeavyTask = limiter.wrap({
task: heavyTask,
capacity: 5, // Requires 5 capacity units
priority: 2, // High priority
executionTimeout: 30000, // 30 second timeout
failRecoveryStrategy: 'retry' // Retry on failure
});
// The wrapped function maintains the same signature
const result = await wrappedHeavyTask(myData);
The wrapped function keeps the original function's signature and return type; each call is scheduled through the limiter with the configured parameters.
// Get current options
getOptions(): CapacityLimiterOptions;
// Set/update options
setOptions(options: CapacityLimiterOptions): void;
// Get current used capacity
getUsedCapacity(): Promise<number>;
// Set used capacity
setUsedCapacity(usedCapacity: number): Promise<void>;
// Adjust used capacity relatively.
// Positive values increase used capacity.
// Negative values release capacity.
adjustUsedCapacity(diff: number): Promise<void>;
// Stop the limiter
stop(params?: StopParams): Promise<void>;
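The adjustUsedCapacity diff semantics reduce to simple signed arithmetic. A plain-arithmetic sketch of what those semantics imply (the clamp at zero is an assumption for illustration, not confirmed library behavior):

```javascript
// Sketch of adjustUsedCapacity bookkeeping: positive diffs increase
// used capacity, negative diffs release it. Clamping at zero is an
// assumption here, not documented library behavior.
function adjustUsed(current, diff) {
  return Math.max(0, current + diff);
}

console.log(adjustUsed(5, 3));  // 8
console.log(adjustUsed(5, -7)); // 0 (assumed clamped, never negative)
```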
The way you handle API rate limiting depends on the API's behavior and your requirements:
- If the quota resets on a fixed schedule, use the claim strategy with a reset rule to handle it for maximum efficiency.
- Use the claim strategy with a reduce rule to spread requests evenly.
- Use minDelayBetweenTasks to enforce a minimum delay between requests.
- To also prevent overlapping requests, combine minDelayBetweenTasks with maxConcurrent set to 1.
import { CapacityLimiter } from 'capacity-limiter';
// Using claim strategy and reset rule to handle APIs with a quota that resets every minute
const burstLimiter = new CapacityLimiter({
maxCapacity: 1000, // API allows 1000 requests per minute
capacityStrategy: 'claim', // Each request permanently claims capacity
releaseRules: [
{ type: 'reset', interval: 60 * 1000 } // Reset capacity to 0 every minute
]
});
// Make API requests through the limiter
async function makeApiRequest(endpoint, data) {
return burstLimiter.schedule(1, async () => {
const response = await fetch(endpoint, {
method: 'POST',
body: JSON.stringify(data),
headers: { 'Content-Type': 'application/json' }
});
return response.json();
});
}
import { CapacityLimiter } from 'capacity-limiter';
// Rate limit: 1000 requests per minute
const REQUESTS_PER_MINUTE = 1000;
const RELEASE_INTERVAL_MS = 15 * 1000; // Release capacity every 15 seconds
// Using claim strategy and reduce rule to limit bursts and spread requests
const smoothLimiter = new CapacityLimiter({
maxCapacity: REQUESTS_PER_MINUTE,
capacityStrategy: 'claim',
releaseRules: [
// Release (REQUESTS_PER_MINUTE / (60 * 1000 / RELEASE_INTERVAL_MS)) capacity every interval
{
type: 'reduce',
value: REQUESTS_PER_MINUTE / (60 * 1000 / RELEASE_INTERVAL_MS),
interval: RELEASE_INTERVAL_MS
}
]
});
// This approach prevents bursts from using all capacity at once
// and spreads requests more evenly throughout the minute
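As a sanity check on the release-rule arithmetic above, releasing 250 units every 15 seconds restores the full 1000-per-minute quota over four intervals:

```javascript
// Verify the reduce-rule value: capacity released per interval,
// multiplied by intervals per minute, equals the per-minute quota.
const REQUESTS_PER_MINUTE = 1000;
const RELEASE_INTERVAL_MS = 15 * 1000;
const intervalsPerMinute = (60 * 1000) / RELEASE_INTERVAL_MS;        // 4 windows per minute
const releasePerInterval = REQUESTS_PER_MINUTE / intervalsPerMinute; // 250 units per window

console.log(releasePerInterval);                       // 250
console.log(releasePerInterval * intervalsPerMinute);  // 1000
```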
import { CapacityLimiter } from 'capacity-limiter';
// Rate limit: 1000 requests per minute
const REQUESTS_PER_MINUTE = 1000;
const MIN_DELAY_MS = (60 * 1000) / REQUESTS_PER_MINUTE; // 60ms between requests
// Using minDelayBetweenTasks to enforce a minimum time between requests
const delayLimiter = new CapacityLimiter({
minDelayBetweenTasks: MIN_DELAY_MS
});
// This spreads out requests evenly, but allows multiple requests to
// execute concurrently if they take longer than minDelayBetweenTasks
import { CapacityLimiter } from 'capacity-limiter';
// Rate limit: 1000 requests per minute
const REQUESTS_PER_MINUTE = 1000;
const MIN_DELAY_MS = (60 * 1000) / REQUESTS_PER_MINUTE; // 60ms between requests
// Combining minDelayBetweenTasks with maxConcurrent to prevent overlapping requests
const sequentialLimiter = new CapacityLimiter({
minDelayBetweenTasks: MIN_DELAY_MS,
maxConcurrent: 1 // Only one request can execute at a time
});
// This guarantees that requests never overlap and are spaced at least
// 60ms apart, regardless of how long each request takes to complete
import os from 'os';
import { CapacityLimiter } from 'capacity-limiter';
const cpuLimiter = new CapacityLimiter({
maxConcurrent: os.cpus().length, // Use available CPU cores
});
// Process items utilizing available CPU cores
async function processItems(items) {
return Promise.all(
items.map(item =>
cpuLimiter.schedule(async () => {
// CPU-intensive work here
return heavyComputation(item);
})
)
);
}
// Limit memory usage for processing large files
const memoryLimiter = new CapacityLimiter({
maxCapacity: 500 * 1024 * 1024, // Limit to 500MB of memory usage
});
// Process files with file size determining capacity requirements
async function processFile(filePath) {
const stats = await fs.promises.stat(filePath);
return memoryLimiter.schedule(stats.size, async () => {
// Read and process file
const data = await fs.promises.readFile(filePath);
return processData(data);
});
}
import { CapacityLimiter } from 'capacity-limiter';
// Create a limiter for API calls
const apiLimiter = new CapacityLimiter({
maxCapacity: 100, // 100 requests capacity
capacityStrategy: 'claim', // Each request claims capacity
releaseRules: [
{ type: 'reset', interval: 60 * 1000 } // Reset every minute
]
});
// Original functions that need rate limiting
const fetchUserData = async (userId: string) => {
const response = await fetch(`/api/users/${userId}`);
return response.json();
};
const updateUserProfile = async (userId: string, data: any) => {
const response = await fetch(`/api/users/${userId}`, {
method: 'PUT',
body: JSON.stringify(data),
headers: { 'Content-Type': 'application/json' }
});
return response.json();
};
// Create rate-limited versions using wrap()
const rateLimitedFetchUser = apiLimiter.wrap({
task: fetchUserData,
capacity: 1,
priority: 3
});
const rateLimitedUpdateUser = apiLimiter.wrap({
task: updateUserProfile,
capacity: 2, // Updates use more capacity
priority: 1, // Higher priority than fetches
failRecoveryStrategy: 'retry'
});
// Use the wrapped functions exactly like the originals
const userData = await rateLimitedFetchUser('user123');
const updateResult = await rateLimitedUpdateUser('user123', { name: 'John Doe' });
Contributions are welcome! Please feel free to submit a Pull Request.
1. Create your feature branch (git checkout -b feature/amazing-feature)
2. Commit your changes (git commit -m 'Add some amazing feature')
3. Push to the branch (git push origin feature/amazing-feature)
4. Open a Pull Request

This project is licensed under the MIT License - see the LICENSE file for details.
If you find this package helpful, consider sponsoring the author or becoming a patron.
FAQs
Job scheduler and rate limiter based on capacity. Execution pool.
The npm package capacity-limiter receives a total of 13 weekly downloads. As such, capacity-limiter popularity was classified as not popular.
We found that capacity-limiter demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.