What is rate-limiter-flexible?
The rate-limiter-flexible npm package is a powerful and flexible rate limiting library for Node.js. It supports various backends like Redis, MongoDB, and in-memory storage, making it suitable for distributed systems. It helps in controlling the rate of requests to APIs, preventing abuse, and ensuring fair usage.
What are rate-limiter-flexible's main functionalities?
Basic Rate Limiting
This feature allows you to set up basic rate limiting using in-memory storage. The example limits a user to 5 requests per second.
const { RateLimiterMemory } = require('rate-limiter-flexible');
const rateLimiter = new RateLimiterMemory({
points: 5, // 5 points
duration: 1, // Per second
});
rateLimiter.consume('user-key')
.then(() => {
// Allowed
})
.catch(() => {
// Blocked
});
Rate Limiting with Redis
This feature demonstrates how to use Redis as a backend for rate limiting. The example limits a user to 10 requests per minute.
const { RateLimiterRedis } = require('rate-limiter-flexible');
const Redis = require('ioredis');
const redisClient = new Redis();
const rateLimiter = new RateLimiterRedis({
storeClient: redisClient,
points: 10, // 10 points
duration: 60, // Per minute
});
rateLimiter.consume('user-key')
.then(() => {
// Allowed
})
.catch(() => {
// Blocked
});
Rate Limiting with MongoDB
This feature shows how to use MongoDB as a backend for rate limiting. The example limits a user to 5 requests per minute.
const { RateLimiterMongo } = require('rate-limiter-flexible');
const mongoose = require('mongoose');
mongoose.connect('mongodb://localhost:27017/rate-limiter', { useNewUrlParser: true, useUnifiedTopology: true });
const rateLimiter = new RateLimiterMongo({
storeClient: mongoose.connection,
points: 5, // 5 points
duration: 60, // Per minute
});
rateLimiter.consume('user-key')
.then(() => {
// Allowed
})
.catch(() => {
// Blocked
});
Rate Limiting with Bursts
This feature allows for burst handling by blocking the user for a specified duration if they exceed the rate limit. The example blocks a user for 10 seconds if they exceed 10 requests per second.
const { RateLimiterMemory } = require('rate-limiter-flexible');
const rateLimiter = new RateLimiterMemory({
points: 10, // 10 points
duration: 1, // Per second
blockDuration: 10, // Block for 10 seconds if consumed more than points
});
rateLimiter.consume('user-key')
.then(() => {
// Allowed
})
.catch(() => {
// Blocked
});
Other packages similar to rate-limiter-flexible
express-rate-limit
express-rate-limit is a basic rate-limiting middleware for Express applications. It is simpler and less flexible compared to rate-limiter-flexible, but it is easier to set up for basic use cases.
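For comparison, here is a minimal sketch of express-rate-limit usage, assuming an Express app; the option names (windowMs, max) match common express-rate-limit versions but may differ in yours:
const rateLimit = require('express-rate-limit');
const express = require('express');

const app = express();
// Allow at most 100 requests per IP per 15-minute window (illustrative numbers).
app.use(rateLimit({
  windowMs: 15 * 60 * 1000, // window size in milliseconds
  max: 100, // requests allowed per window, per key (IP by default)
}));
app.listen(3000);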
rate-limiter
rate-limiter is another rate limiting library for Node.js. It is less feature-rich compared to rate-limiter-flexible and does not support as many backends, but it is straightforward to use for simple rate limiting needs.
bottleneck
bottleneck is a powerful rate limiting and job scheduling library for Node.js. It offers more advanced features like priority queues and job scheduling, making it more suitable for complex use cases compared to rate-limiter-flexible.
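For comparison, a minimal sketch of bottleneck usage; maxConcurrent and minTime are standard Bottleneck options, while doWork is a hypothetical job function used only for illustration:
const Bottleneck = require('bottleneck');

// Run at most 1 job at a time, with at least 200 ms between job starts.
const limiter = new Bottleneck({ maxConcurrent: 1, minTime: 200 });

// schedule() queues the job and returns a Promise for its result.
limiter.schedule(() => doWork('user-key'))
  .then((result) => {
    // Job finished within the configured rate
  });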
node-rate-limiter-flexible
Flexible rate limiter and anti-DDoS protector that works with process Memory, Cluster, MongoDB or Redis and allows controlling the request rate in a single process or a distributed environment.
It uses a fixed window, as it is much faster than a rolling window.
See comparative benchmarks with other libraries here
Advantages:
- block strategy against really powerful DDoS attacks (like 100k requests per sec) Read about it and benchmarking here
- backed on native Promises
- works in Cluster without additional software See RateLimiterCluster benchmark and detailed description here
- actions can be done evenly over the duration window to cut off load peaks
- no race conditions
- covered by tests
- no prod dependencies
- Redis and Mongo errors don't break the app if insuranceLimiter is set up
- useful penalty and reward methods to change limits based on the results of an action
Benchmark
The endpoint is a simple Express 4.x route launched in node:latest and redis:alpine Docker containers by PM2 with 4 workers.
Load was generated by bombardier -c 1000 -l -d 30s -r 2000 -t 5s http://127.0.0.1:3000/pricing
i.e. 1000 concurrent requests with a maximum of 2000 requests per second for 30 seconds.
Statistics        Avg       Stdev        Max
  Reqs/sec      1994.83    439.72     5377.15
  Latency         6.09ms     5.06ms     88.44ms
Latency Distribution
     50%     4.98ms
     75%     6.65ms
     90%     9.33ms
     95%    13.65ms
     99%    34.27ms
HTTP codes:
1xx - 0, 2xx - 59997, 3xx - 0, 4xx - 0, 5xx - 0
Note: Performance will be much better on real servers, as for this benchmark everything was launched on one machine
Installation
npm i rate-limiter-flexible
Usage
RateLimiterRedis
It supports both redis and ioredis clients.
The Redis client must be created with the offline queue switched off.
const redis = require('redis');
const redisClient = redis.createClient({ enable_offline_queue: false });

// Alternatively, with ioredis (options are passed directly to the constructor):
// const Redis = require('ioredis');
// const redisClient = new Redis({ enableOfflineQueue: false });
const { RateLimiterRedis, RateLimiterMemory } = require('rate-limiter-flexible');
// It is recommended to handle Redis connection errors here,
// otherwise an unhandled 'error' event can crash the process.
redisClient.on('error', (err) => {
  // log the error and/or apply a reconnection strategy
});
const opts = {
  redis: redisClient,
  keyPrefix: 'rlflx', // prefix for all keys of this limiter
  points: 5, // maximum number of points over duration
  duration: 5, // per 5 seconds
  execEvenly: false, // do not delay allowed actions evenly over duration
  blockOnPointsConsumed: 10, // block the key in process memory when 10 or more points are consumed
  blockDuration: 30, // block for 30 seconds
  insuranceLimiter: new RateLimiterMemory( // in-memory fallback used on Redis errors
    {
      points: 1, // 1 point
      duration: 5, // per 5 seconds
      execEvenly: false,
    })
};
const rateLimiterRedis = new RateLimiterRedis(opts);
// remoteAddress is usually taken from the incoming request, e.g. req.connection.remoteAddress
rateLimiterRedis.consume(remoteAddress)
  .then((rateLimiterRes) => {
    // Allowed: limits can also be adjusted depending on the result of the action
    rateLimiterRedis.penalty(remoteAddress, 3)
      .then((rateLimiterRes) => {});
    rateLimiterRedis.reward(remoteAddress, 2)
      .then((rateLimiterRes) => {});
  })
  .catch((rejRes) => {
    if (rejRes instanceof Error) {
      // Some Redis error (only possible when insuranceLimiter isn't set up)
    } else {
      // Out of points: tell the client when to retry (res is an Express-style response)
      const secs = Math.round(rejRes.msBeforeNext / 1000) || 1;
      res.set('Retry-After', String(secs));
      res.status(429).send('Too Many Requests');
    }
  });
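A minimal sketch of wiring the limiter above into an Express app as middleware; the app object, route handling and the choice of req.ip as the key are illustrative assumptions, not part of the library:
const express = require('express');
const app = express();

const rateLimiterMiddleware = (req, res, next) => {
  // Consume 1 point per request, keyed by client IP (illustrative choice)
  rateLimiterRedis.consume(req.ip)
    .then(() => next())
    .catch((rejRes) => {
      if (rejRes instanceof Error) {
        // Redis error without insuranceLimiter: decide whether to fail open or closed
        res.status(500).send('Internal Server Error');
      } else {
        res.set('Retry-After', String(Math.round(rejRes.msBeforeNext / 1000) || 1));
        res.status(429).send('Too Many Requests');
      }
    });
};

app.use(rateLimiterMiddleware);
app.listen(3000);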
RateLimiterMongo
It supports both the native mongodb driver and the mongoose package.
See RateLimiterMongo benchmark here
// Usage with mongoose:
const { RateLimiterMongo } = require('rate-limiter-flexible');
const mongoose = require('mongoose');
const mongoOpts = {
  reconnectTries: Number.MAX_VALUE, // keep trying to reconnect
  reconnectInterval: 100, // wait 100 ms between reconnection attempts
};
mongoose.createConnection('mongodb://localhost:27017/' + RateLimiterMongo.getDbName(), mongoOpts)
  .then((mongo) => {
    const opts = {
      mongo: mongo,
      points: 10, // maximum number of points over duration
      duration: 1, // per second
    };
    const rateLimiterMongo = new RateLimiterMongo(opts);
  });
// Alternatively, with the native mongodb driver:
const { MongoClient } = require('mongodb');
const mongoOpts = {
  useNewUrlParser: true,
  reconnectTries: Number.MAX_VALUE, // keep trying to reconnect
  reconnectInterval: 100, // wait 100 ms between reconnection attempts
};
MongoClient.connect(
  'mongodb://localhost:27017',
  mongoOpts
).then((mongo) => {
  const opts = {
    mongo: mongo,
    points: 10, // maximum number of points over duration
    duration: 1, // per second
  };
  const rateLimiterMongo = new RateLimiterMongo(opts);
  rateLimiterMongo.consume(remoteAddress)
    .then(() => {
      // Allowed
    })
    .catch(() => {
      // Out of points or MongoDB error
    });
});
RateLimiterCluster
Note: it doesn't work with PM2 yet
RateLimiterCluster performs limiting using IPC.
Each request is sent to the master process, which handles all the limits, then the master sends the result back to the worker (see the sketch after the example below).
See RateLimiterCluster benchmark and detailed description here
const cluster = require('cluster');
const numCPUs = require('os').cpus().length;
const { RateLimiterClusterMaster, RateLimiterCluster } = require('rate-limiter-flexible');

if (cluster.isMaster) {
  // The master process keeps all limits and answers workers over IPC
  new RateLimiterClusterMaster();
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // Workers create a RateLimiterCluster with the same keyPrefix to share limits
  const rateLimiter = new RateLimiterCluster({
    keyPrefix: 'myclusterlimiter', // limiters with different prefixes are independent
    points: 100,
    duration: 1,
    timeoutMs: 3000 // reject the promise if the master doesn't answer within 3000 ms
  });
}
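As a rough sketch of what a worker might do with this limiter, the snippet below (which belongs inside the worker branch above) consumes a point per incoming HTTP request; the http server and the use of the remote address as the key are illustrative assumptions:
const http = require('http');

http.createServer((req, res) => {
  // Consume 1 point per request, keyed by the client address
  rateLimiter.consume(req.socket.remoteAddress)
    .then(() => {
      res.end('OK');
    })
    .catch(() => {
      res.statusCode = 429;
      res.end('Too Many Requests');
    });
}).listen(3000);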
RateLimiterMemory
It manages limits in the current process memory, so keep that in mind when using it in a cluster.
const { RateLimiterMemory } = require('rate-limiter-flexible');

const rateLimiter = new RateLimiterMemory(
  {
    keyPrefix: 'rlflx',
    points: 1, // 1 point
    duration: 5, // per 5 seconds
    execEvenly: false,
  });
Options
- keyPrefix
  Default: 'rlflx'
  Useful if you need to create several limiters for different purposes.
- points
  Default: 4
  Maximum number of points that can be consumed over duration.
- duration
  Default: 1
  Number of seconds before consumed points are reset.
- execEvenly
  Default: false
  Delay actions so that they are executed evenly over the duration.
  The first action in a duration is executed without delay.
  All subsequent allowed actions in the current duration are delayed by the formula msBeforeDurationEnd / (remainingPoints + 2), which allows cutting off load peaks (see the sketch after this list).
  Note: it isn't recommended for long durations, as it may delay an action for too long.
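A small sketch to make the execEvenly delay formula concrete; the numbers are made up purely for illustration:
// Suppose 800 ms are left in the current duration window and the key
// still has 3 remaining points.
const msBeforeDurationEnd = 800;
const remainingPoints = 3;

// Delay applied to the next allowed action, per the formula above.
const delayMs = msBeforeDurationEnd / (remainingPoints + 2);
console.log(delayMs); // 160 (ms)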
Options specific to Redis and Mongo
- blockOnPointsConsumed
  Default: 0
  Protection against DDoS attacks: a blocked key isn't checked by requesting Redis, blocking works in the current process memory.
  Redis is quite fast, however, it may be significantly slowed down by dozens of thousands of requests.
- blockDuration
  Default: 0
  Block a key for blockDuration seconds, if blockOnPointsConsumed or more points are consumed.
- insuranceLimiter
  Default: undefined
  Instance of an object extending RateLimiterAbstract, used to store limits when Redis comes up with any error.
  An additional RateLimiterRedis or RateLimiterMemory can be used as insurance.
  Be careful when using RateLimiterMemory in a cluster or a distributed app: it may result in a floating number of allowed actions, since actions with the same key launched on one worker several times in sequence will exhaust that worker's points sooner.
  Omit it if you want to strictly use Redis and deal with its errors yourself.
Options specific to Cluster
- timeoutMs
  Default: 5000
  Timeout for communication between worker and master over IPC.
  If the master doesn't respond in time, the promise is rejected with an Error.
API
RateLimiterRes object
Both Promise resolve and reject return an instance of the RateLimiterRes class, unless an error occurs.
Object attributes:
RateLimiterRes = {
  msBeforeNext: 250, // Number of milliseconds before the next action can be done
  remainingPoints: 0, // Number of remaining points in the current duration
  consumedPoints: 5, // Number of consumed points in the current duration
  isFirstInDuration: false, // Whether the action is the first in the current duration
}
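A hedged sketch of using these attributes to set common rate-limit response headers; the Express-style res object, the header names and the points variable are conventions assumed for illustration:
// rateLimiterRes comes from a resolved consume() call;
// points is the limiter's configured maximum.
const headers = {
  'Retry-After': String(Math.round(rateLimiterRes.msBeforeNext / 1000) || 1),
  'X-RateLimit-Limit': String(points),
  'X-RateLimit-Remaining': String(rateLimiterRes.remainingPoints),
  'X-RateLimit-Reset': new Date(Date.now() + rateLimiterRes.msBeforeNext).toISOString(),
};
res.set(headers);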
rateLimiter.consume(key, points = 1)
Returns Promise, which:
- is resolved with a RateLimiterRes object when point(s) are consumed, so the action can be done
- only for RateLimiterRedis, if insuranceLimiter isn't set up: is rejected when some Redis error happens; the reject reason rejRes is an Error object
- only for RateLimiterCluster: is rejected when timeoutMs is exceeded; the reject reason rejRes is an Error object
- is rejected when there are no points left to be consumed; the reject reason rejRes is a RateLimiterRes object
- is rejected when the key is blocked (if the block strategy is set up); the reject reason rejRes is a RateLimiterRes object
Arguments:
- key is usually an IP address or some unique client id
- points is the number of points to consume. Default: 1
rateLimiter.penalty(key, points = 1)
Fine key by points for one duration.
Note: depending on timing, the penalty may carry over into subsequent durations.
Returns Promise, which:
- is resolved with a RateLimiterRes object
- only for RateLimiterRedis, if insuranceLimiter isn't set up: is rejected when some Redis error happens; the reject reason rejRes is an Error object
- only for RateLimiterCluster: is rejected when timeoutMs is exceeded; the reject reason rejRes is an Error object
rateLimiter.reward(key, points = 1)
Reward key by points for one duration.
Note: depending on timing, the reward may carry over into subsequent durations. A combined penalty/reward example follows below.
Returns Promise, which:
- is resolved with a RateLimiterRes object
- only for RateLimiterRedis, if insuranceLimiter isn't set up: is rejected when some Redis error happens; the reject reason rejRes is an Error object
- only for RateLimiterCluster: is rejected when timeoutMs is exceeded; the reject reason rejRes is an Error object
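A hedged sketch of combining consume, penalty and reward around the outcome of an action, for example a login attempt; limiter is any rate-limiter-flexible instance, while authenticate and the chosen point values are illustrative assumptions:
// authenticate() is a hypothetical function returning Promise<boolean>.
function handleLogin(limiter, userKey, credentials) {
  // The consume() rejection (out of points) propagates to the caller.
  return limiter.consume(userKey) // spend 1 point per attempt
    .then(() => authenticate(credentials))
    .then((ok) => {
      if (ok) {
        // Successful login: give 1 point back for this duration
        return limiter.reward(userKey, 1).then(() => true);
      }
      // Failed login: fine the key with 2 extra points for this duration
      return limiter.penalty(userKey, 2).then(() => false);
    });
}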
Contribution
Make sure you've run npm run eslint before creating a PR; all errors have to be fixed.
You can try npm run eslint-fix to fix some issues automatically.
Appreciated, feel free!