rate-limiter-flexible
Flexible API rate limiter backed by Redis for distributed node.js applications
The rate-limiter-flexible npm package is a powerful and flexible rate limiting library for Node.js. It supports various backends like Redis, MongoDB, and in-memory storage, making it suitable for distributed systems. It helps in controlling the rate of requests to APIs, preventing abuse, and ensuring fair usage.
Basic Rate Limiting
This feature allows you to set up basic rate limiting using in-memory storage. The example limits a user to 5 requests per second.
const { RateLimiterMemory } = require('rate-limiter-flexible');

const rateLimiter = new RateLimiterMemory({
  points: 5, // 5 points
  duration: 1, // Per second
});

rateLimiter.consume('user-key')
  .then(() => {
    // Allowed
  })
  .catch(() => {
    // Blocked
  });
Rate Limiting with Redis
This feature demonstrates how to use Redis as a backend for rate limiting. The example limits a user to 10 requests per minute.
const { RateLimiterRedis } = require('rate-limiter-flexible');
const Redis = require('ioredis');

const redisClient = new Redis();

const rateLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  points: 10, // 10 points
  duration: 60, // Per minute
});

rateLimiter.consume('user-key')
  .then(() => {
    // Allowed
  })
  .catch(() => {
    // Blocked
  });
Rate Limiting with MongoDB
This feature shows how to use MongoDB as a backend for rate limiting. The example limits a user to 5 requests per minute.
const { RateLimiterMongo } = require('rate-limiter-flexible');
const mongoose = require('mongoose');

mongoose.connect('mongodb://localhost:27017/rate-limiter', { useNewUrlParser: true, useUnifiedTopology: true });

const rateLimiter = new RateLimiterMongo({
  storeClient: mongoose.connection,
  points: 5, // 5 points
  duration: 60, // Per minute
});

rateLimiter.consume('user-key')
  .then(() => {
    // Allowed
  })
  .catch(() => {
    // Blocked
  });
Rate Limiting with Bursts
This feature allows for burst handling by blocking the user for a specified duration if they exceed the rate limit. The example blocks a user for 10 seconds if they exceed 10 requests per second.
const { RateLimiterMemory } = require('rate-limiter-flexible');

const rateLimiter = new RateLimiterMemory({
  points: 10, // 10 points
  duration: 1, // Per second
  blockDuration: 10, // Block for 10 seconds if consumed more than points
});

rateLimiter.consume('user-key')
  .then(() => {
    // Allowed
  })
  .catch(() => {
    // Blocked
  });
express-rate-limit is a basic rate-limiting middleware for Express applications. It is simpler and less flexible compared to rate-limiter-flexible, but it is easier to set up for basic use cases.
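For a rough comparison, a minimal express-rate-limit setup might look like the sketch below; the window and limit values are illustrative assumptions, not taken from either library's documentation.

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Illustrative limit: 100 requests per 15-minute window per client
app.use(rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
}));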
rate-limiter is another rate limiting library for Node.js. It is less feature-rich compared to rate-limiter-flexible and does not support as many backends, but it is straightforward to use for simple rate limiting needs.
bottleneck is a powerful rate limiting and job scheduling library for Node.js. It offers more advanced features like priority queues and job scheduling, making it more suitable for complex use cases compared to rate-limiter-flexible.
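As a sketch of the job-scheduling style bottleneck uses (callExternalApi and the limits here are illustrative assumptions):

const Bottleneck = require('bottleneck');

// Run at most one job at a time, starting jobs at least 200ms apart
const limiter = new Bottleneck({
  maxConcurrent: 1,
  minTime: 200,
});

// Jobs are queued and executed according to the limiter settings
limiter.schedule(() => callExternalApi())
  .then((result) => {
    // Job completed
  });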
rate-limiter-flexible limits the number of actions by key and protects from DDoS and brute force attacks at any scale.
Fast. Average request takes 0.7ms in Cluster and 2.5ms in a Distributed application.
Flexible. Combine limiters, block a key for some duration, delay actions, manage failover with insurance options, configure smart key blocking in memory, and many others.
Ready for growth. It provides a unified API for all limiters. Whenever your application grows, it is ready. Prepare your limiters in minutes.
Friendly. No matter which node package you prefer: redis or ioredis, sequelize or knex, native driver or mongoose. It works with all of them.
It works with process Memory, Cluster, MongoDB, MySQL, PostgreSQL or Redis, and allows you to control the request rate in a single process or a distributed environment.
It uses a fixed window, as it is much faster than a rolling window. See comparative benchmarks with other libraries here.
:star: It is STARving, don't forget to feed the beast! :star:
Advantages:
get, block, penalty and reward methods

const { RateLimiterMemory } = require('rate-limiter-flexible');

const opts = {
  points: 6, // 6 points
  duration: 1, // Per second
};

const rateLimiter = new RateLimiterMemory(opts);

rateLimiter.consume(remoteAddress, 2) // consume 2 points
  .then((rateLimiterRes) => {
    // 2 points consumed
  })
  .catch((rateLimiterRes) => {
    // Not enough points to consume
  });
Express middleware
const rateLimiterMiddleware = (req, res, next) => {
  rateLimiter.consume(req.connection.remoteAddress)
    .then(() => {
      next();
    })
    .catch((rejRes) => {
      res.status(429).send('Too Many Requests');
    });
};
Koa middleware
app.use(async (ctx, next) => {
  try {
    await rateLimiter.consume(ctx.ip);
    return next();
  } catch (rejRes) {
    ctx.status = 429;
    ctx.body = 'Too Many Requests';
  }
});
Average latency during a test of a pure NodeJS endpoint in a cluster of 4 workers, with everything set up on one server.
1000 concurrent clients with a maximum of 2000 requests per sec during 30 seconds.
1. Memory 0.34 ms
2. Cluster 0.69 ms
3. Redis 2.45 ms
4. Mongo 4.75 ms
500 concurrent clients with maximum 1000 req per sec during 30 seconds
5. PostgreSQL 7.48 ms (with connection pool max 100)
6. MySQL 14.59 ms (with connection pool 100)
npm i rate-limiter-flexible
yarn add rate-limiter-flexible
keyPrefix
Default: 'rlflx'
Set it if you need to create several limiters for different purposes.
Note: for some limiters it should correspond to the storage requirements for table or collection names, as keyPrefix may be used as their name.
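For instance, two limiters with different purposes can share one store as long as their keyPrefix values differ; a sketch with illustrative prefixes and limits, assuming a Redis client like the one created in the Redis section below:

const { RateLimiterRedis } = require('rate-limiter-flexible');

// Separate counters for login attempts and general API calls
const loginLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login',
  points: 5,
  duration: 60,
});

const apiLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'api',
  points: 100,
  duration: 60,
});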
points
Default: 4
Maximum number of points that can be consumed over duration
duration
Default: 1
Number of seconds before consumed points are reset
execEvenly
Default: false
Delay actions so they are executed evenly over the duration.
The first action in a duration is executed without delay.
All next allowed actions in the current duration are delayed by the formula msBeforeDurationEnd / (remainingPoints + 2), with a minimum delay of duration * 1000 / points milliseconds.
It allows cutting off load peaks in a way similar to the Leaky Bucket algorithm. Read the detailed Leaky Bucket description.
Note: it isn't recommended to use it with a long duration and few points, as it may delay an action for too long with the default execEvenlyMinDelayMs.
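As a rough illustration of the formula above: with points: 5 and duration: 1, the minimum delay is 1 * 1000 / 5 = 200 ms. A sketch with these illustrative values:

const { RateLimiterMemory } = require('rate-limiter-flexible');

const smoothLimiter = new RateLimiterMemory({
  points: 5,        // 5 points
  duration: 1,      // per second
  execEvenly: true, // spread allowed actions evenly over the duration
  // execEvenlyMinDelayMs defaults to duration * 1000 / points = 200 ms here
});

smoothLimiter.consume('user-key')
  .then(() => {
    // Allowed, possibly after an added delay
  })
  .catch(() => {
    // Blocked
  });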
execEvenlyMinDelayMs
Default: duration * 1000 / points
Sets minimum delay in milliseconds, when action is delayed with execEvenly
blockDuration
Default: 0
If set to a positive number and more than points are consumed in the current duration, block the key for blockDuration seconds.
It sets consumed points higher than the allowed points for blockDuration seconds, so actions are rejected.
storeClient
Required
Has to be a redis, ioredis, mongodb, pg, mysql2 or mysql client, or any other related pool or connection.
inmemoryBlockOnConsumed
Default: 0
Used against DDoS attacks. A blocked key isn't checked by requesting Redis, MySQL or Mongo; in-memory blocking works in the current process memory.
inmemoryBlockDuration
Default: 0
Block a key for inmemoryBlockDuration seconds if inmemoryBlockOnConsumed or more points are consumed.
insuranceLimiter
Default: undefined
Instance of a RateLimiterAbstract extended object used to store limits when the database comes up with any error.
All data from insuranceLimiter is NOT copied to the parent limiter when the error is gone.
Note: insuranceLimiter automatically sets blockDuration and execEvenly to the same values as in the parent to avoid unexpected behaviour.
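A minimal failover sketch, assuming Redis as the main store, an in-memory limiter as insurance, and a redisClient like the one created in the Redis section below (limits and keyPrefix are illustrative):

const { RateLimiterRedis, RateLimiterMemory } = require('rate-limiter-flexible');

// Used only while Redis is erroring; its data is not copied back afterwards
const insuranceLimiter = new RateLimiterMemory({
  points: 10,
  duration: 1,
});

const rateLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'middleware',
  points: 10,
  duration: 1,
  insuranceLimiter,
});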
tableName
Default: equals to 'keyPrefix' option
By default, the limiter creates a table for each unique keyPrefix. All limits for all limiters are stored in one table if a custom name is set.
storeType
Default: storeClient.constructor.name
It is required only for Knex and has to be set to 'knex'
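A hedged sketch of tableName with the PostgreSQL limiter; the pg pool settings and table name are illustrative assumptions:

const { RateLimiterPostgres } = require('rate-limiter-flexible');
const { Pool } = require('pg');

const pgPool = new Pool({ connectionString: 'postgres://localhost:5432/mydb' });

const pgLimiter = new RateLimiterPostgres({
  storeClient: pgPool,
  tableName: 'rate_limits', // all limiters share this table when a custom name is set
  points: 5,
  duration: 1,
});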
dbName
Default: 'rtlmtrflx' (MySQL and PostgreSQL limiters)
Database where limits are stored. It is created when the limiter is created.
dbName
Default: 'node-rate-limiter-flexible' (Mongo limiter)
Database where limits are stored. It is created when the limiter is created.
Doesn't work with Mongoose, as the mongoose connection is established to an exact database.
timeoutMs
Default: 5000
Timeout for communication between worker and master over IPC.
If the master doesn't respond in time, the promise is rejected with an Error.
Both Promise resolve and reject return an object of the RateLimiterRes class if there is no error.
Object attributes:
RateLimiterRes = {
  msBeforeNext: 250, // Number of milliseconds before next action can be done
  remainingPoints: 0, // Number of remaining points in current duration
  consumedPoints: 5, // Number of consumed points in current duration
  isFirstInDuration: false, // action is first in current duration
}
consume(key, points = 1)
Returns Promise, which:
resolved with RateLimiterRes when point(s) are consumed, so the action can be done
rejected if insuranceLimiter isn't setup and some error happened, where reject reason rejRes is an Error object
rejected if insuranceLimiter isn't setup and timeoutMs is exceeded, where reject reason rejRes is an Error object
rejected when there are not enough points to consume, where reject reason rejRes is a RateLimiterRes object
rejected when the key is blocked, where reject reason rejRes is a RateLimiterRes object
Arguments:
key is usually an IP address or some unique client id
points is the number of points consumed. Default: 1
get(key)
Get RateLimiterRes for the key in the current duration.
Returns Promise, which:
resolved with RateLimiterRes if the key is set
resolved with null if the key is NOT set or expired
rejected if insuranceLimiter isn't setup and some error happened, where reject reason rejRes is an Error object
rejected if insuranceLimiter isn't setup and timeoutMs is exceeded, where reject reason rejRes is an Error object
Arguments:
key is usually an IP address or some unique client id
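A small sketch of reading the current state of a key without consuming points, assuming a limiter instance like the ones above (the key name is illustrative):

rateLimiter.get('user-key')
  .then((rateLimiterRes) => {
    if (rateLimiterRes === null) {
      // Key is not set or has expired
    } else {
      // Inspect rateLimiterRes.consumedPoints / rateLimiterRes.remainingPoints
    }
  });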
penalty(key, points)
Fine key by points number of points for one duration.
Note: Depending on time, a penalty may carry over into next durations.
Returns Promise, which:
resolved with RateLimiterRes
rejected if insuranceLimiter isn't setup and some error happened, where reject reason rejRes is an Error object
rejected if insuranceLimiter isn't setup and timeoutMs is exceeded, where reject reason rejRes is an Error object

reward(key, points)
Reward key by points number of points for one duration.
Note: Depending on time, a reward may carry over into next durations.
Returns Promise, which:
resolved with RateLimiterRes
rejected if insuranceLimiter isn't setup and some error happened, where reject reason rejRes is an Error object
rejected if insuranceLimiter isn't setup and timeoutMs is exceeded, where reject reason rejRes is an Error object

block(key, secDuration)
Block key for secDuration seconds.
Returns Promise, which:
resolved with RateLimiterRes
rejected if insuranceLimiter isn't setup and some error happened, where reject reason rejRes is an Error object
rejected if insuranceLimiter isn't setup and timeoutMs is exceeded, where reject reason rejRes is an Error object
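A small sketch of blocking a key manually, assuming a limiter instance like the ones above (key and duration are illustrative):

// Block 'user-key' for 30 seconds regardless of how many points it has consumed
rateLimiter.block('user-key', 30)
  .then((rateLimiterRes) => {
    // Key is now blocked
  });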
Redis >=2.6.12
It supports both redis and ioredis clients.
Redis client must be created with offline queue switched off.
// Using the redis package:
const redis = require('redis');
const redisClient = redis.createClient({ enable_offline_queue: false });

// Or using ioredis:
const Redis = require('ioredis');
const redisClient = new Redis({
  enableOfflineQueue: false,
});
const { RateLimiterRedis, RateLimiterMemory } = require('rate-limiter-flexible');

// It is recommended to process Redis errors and setup some reconnection strategy
redisClient.on('error', (err) => {
});

const opts = {
  // Basic options
  storeClient: redisClient,
  points: 5, // Number of points
  duration: 5, // Per second(s)

  // Custom
  execEvenly: false, // Do not delay actions evenly
  blockDuration: 0, // Do not block if consumed more than points
  keyPrefix: 'rlflx', // must be unique for limiters with different purpose

  // Database limiters specific
  inmemoryBlockOnConsumed: 10, // If 10 points consumed in current duration
  inmemoryBlockDuration: 30, // block for 30 seconds in current process memory
};

const rateLimiterRedis = new RateLimiterRedis(opts);

rateLimiterRedis.consume(remoteAddress)
  .then((rateLimiterRes) => {
    // ... Some app logic here ...

    // Depending on results it allows to fine
    rateLimiterRedis.penalty(remoteAddress, 3)
      .then((rateLimiterRes) => {});
    // or raise the number of points for current duration
    rateLimiterRedis.reward(remoteAddress, 2)
      .then((rateLimiterRes) => {});
  })
  .catch((rejRes) => {
    if (rejRes instanceof Error) {
      // Some Redis error
      // Never happens if `insuranceLimiter` is set up
      // Decide what to do with it in other case
    } else {
      // Can't consume
      // If there is no error, the promise is rejected with the number of ms before the next request is allowed
      const secs = Math.round(rejRes.msBeforeNext / 1000) || 1;
      res.set('Retry-After', String(secs));
      res.status(429).send('Too Many Requests');
    }
  });
The endpoint is a pure NodeJS endpoint launched in node:10.5.0-jessie and redis:4.0.10-alpine Docker containers by PM2 with 4 workers.
Tested by bombardier -c 1000 -l -d 30s -r 2000 -t 5s http://127.0.0.1:8000, i.e. 1000 concurrent requests with a maximum of 2000 requests per sec during 30 seconds.
Statistics Avg Stdev Max
Reqs/sec 2015.20 511.21 14570.19
Latency 2.45ms 7.51ms 138.41ms
Latency Distribution
50% 1.95ms
75% 2.16ms
90% 2.43ms
95% 2.77ms
99% 5.73ms
HTTP codes:
1xx - 0, 2xx - 53556, 3xx - 0, 4xx - 6417, 5xx - 0
Contributions are appreciated, feel free!
Make sure you've run npm run eslint before creating a PR; all errors have to be fixed.
You can try to run npm run eslint-fix to fix some issues.
Any new limiter with a storage backend has to extend RateLimiterStoreAbstract.
It has to implement at least 3 methods:
_getRateLimiterRes parses raw data from the store into a RateLimiterRes object
_upsert inserts or updates limits data by key and returns raw data
_get returns raw data by key
All other methods depend on the store. See RateLimiterRedis or RateLimiterPostgres for example.
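A very rough skeleton of such a limiter; the import paths, constructor wiring and method parameters below are assumptions for illustration only, so check the library sources (for example RateLimiterRedis) for the real signatures:

// NOTE: import paths are assumptions; the abstract class lives in the library sources
const RateLimiterStoreAbstract = require('rate-limiter-flexible/lib/RateLimiterStoreAbstract');
const RateLimiterRes = require('rate-limiter-flexible/lib/RateLimiterRes');

class RateLimiterMyStore extends RateLimiterStoreAbstract {
  // Parse raw data from the store into a RateLimiterRes object
  _getRateLimiterRes(rlKey, changedPoints, storeResult) {
    const res = new RateLimiterRes();
    // ... map storeResult fields to res.consumedPoints, res.msBeforeNext, res.isFirstInDuration
    return res;
  }

  // Insert or update limits data by key and return raw data
  _upsert(rlKey, points, msDuration, forceExpire = false) {
    // ... write to the custom store and resolve with its raw response
    return Promise.resolve({});
  }

  // Return raw data by key (or null when the key is unknown)
  _get(rlKey) {
    return Promise.resolve(null);
  }
}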
FAQs
Node.js rate limiter by key and protection from DDoS and Brute-Force attacks in process Memory, Redis, MongoDb, Memcached, MySQL, PostgreSQL, Cluster or PM
We found that rate-limiter-flexible demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 0 open source maintainers collaborating on the project.