
slide-limiter
A TypeScript implementation of a sliding window rate limiting algorithm that provides rate limiting functionality using different storage mechanisms, including in-memory storage and Redis. It allows you to limit the number of requests or actions that can be performed within a specific time window for a given key or identifier.
Rate limiting is a common technique used in software systems to control the rate at which requests are made. This sliding window rate limiter allows you to restrict the number of requests within a specified time window for a particular key or client.
You can install slide limiter using npm:
npm install slide-limiter
# OR
yarn add slide-limiter
The Sliding Window Algorithm is a time-based method used to track and control the rate at which requests or operations can be made within a specific time window. It's a dynamic system that adapts to changing traffic patterns, making it an effective tool for rate limiting in various applications.
The central concept behind the Sliding Window Algorithm is the utilization of a dynamic time window that moves with the flow of time. Unlike static methods that reset rate limits at fixed intervals, the sliding window continuously adjusts to the current time, allowing for a more flexible and adaptable approach.
When a request arrives, the limiter looks at the requests recorded for that key within the most recent window, rejects the request if the count has already reached the limit, and otherwise records the request and lets it through. This approach offers several advantages over fixed-interval counters: limits are enforced evenly over time, bursts at window boundaries are avoided, and the limiter adapts naturally to changing traffic patterns.
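As a rough illustration of this request-handling sequence, here is a minimal, self-contained sketch of the idea. It is not how slide-limiter is implemented internally; the class and method names are purely illustrative.

// Conceptual sketch only; not part of the slide-limiter API.
class ConceptualSlidingWindow {
  private hits = new Map<string, number[]>();

  constructor(private windowMs: number, private maxLimit: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Keep only the timestamps that still fall inside the current window.
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.maxLimit) {
      this.hits.set(key, recent);
      return false; // window is full, reject the request
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true; // request accepted and recorded
  }
}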
Here's how you can use slide limiter in your project:
import { MemoryStore, SlideLimiter } from "slide-limiter";
// Create a MemoryStore instance
const store = new MemoryStore();
// Create a SlideLimiter instance with options
const options = {
windowMs: 60000, // 1 minute
maxLimit: 10,
};
const limiter = new SlideLimiter(store, options);
// Perform a rate-limited action
const bucket = "users";
const key = "user123";
const remainingRequests = await limiter.hit(bucket, key);
if (remainingRequests > 0) {
// Allow the action
console.log(`Action performed. Remaining requests: ${remainingRequests}`);
} else {
// Rate limit exceeded
console.log("Rate limit exceeded. Please try again later.");
}
Default values for windowMs and maxLimit are defined in the SlideLimiter options, but you have the flexibility to change these parameters on a per-request basis. This means that for a specific request, you can override the default values of windowMs and maxLimit by passing custom values for these parameters when you call the hit method.
Dynamic rate limiting is beneficial when you have different rate limiting requirements for various operations or clients. For example, you might want to allow a higher rate limit for certain premium users or reduce the rate limit for specific resource-intensive operations. Instead of creating a new rate limiter instance for each of these scenarios, you can simply adjust the parameters on the fly when making a rate-limited request.
async hit(bucket: string, key: string, options: WindowOptions): Promise<number>
The options parameter is optional; if it is omitted, the rate limiter's default options are used.
bucket: The bucket parameter is a higher-level categorization that allows you to group rate limits for different resources or endpoints. It can be used to separate the count for specific endpoints or functionalities within your application. In the context of a web API, for example, you might want to rate limit different endpoints differently. By using a bucket, you can create distinct rate limits for different parts of your application. For instance, you could have separate buckets for /users/auth, /api/orders, and /api/profile, each with its own rate limit configuration.
key: The key parameter represents the identifier for the resource or entity you want to rate limit. It is typically associated with a specific user, IP address, or any other unique identifier for the client making the request. For example, if you want to rate limit requests from different users or clients separately, you can use the user's ID or IP address as the key. The key is an essential part of the rate limiting process because it allows you to track and limit requests on a per-client basis.
windowMs: The windowMs parameter represents the time window (in milliseconds) during which rate limiting is applied. It defines the period over which requests are counted and limited. Any requests made within this time window are considered when enforcing rate limits. For example, if windowMs is set to 60000 (60 seconds), the rate limit applies to requests made within the last minute.
maxLimit: The maxLimit parameter specifies the maximum number of requests that a client is allowed to make within the defined time window (windowMs). It is the threshold beyond which a client's requests will be rate-limited, meaning that they will be denied access until the rate limit resets.
By using both the bucket and key parameters, you can achieve a flexible and effective rate limiting strategy that provides precise control over how different clients interact with various parts of your application.
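Putting the per-request overrides and the bucket/key parameters together, here is a hedged sketch that reuses the hit signature shown above. The premium-user check, the helper name, and the bucket name are illustrative assumptions, not part of the library.

import { MemoryStore, SlideLimiter } from "slide-limiter";

const limiter = new SlideLimiter(new MemoryStore(), {
  windowMs: 60000, // default window: 1 minute
  maxLimit: 10,    // default: 10 requests per window
});

// Hypothetical helper: how you classify premium users is up to your application.
const isPremiumUser = (userId: string): boolean => userId.startsWith("premium-");

async function handleOrderRequest(userId: string): Promise<boolean> {
  // Override the limiter's defaults for this call only:
  // premium users get a higher allowance on the /api/orders bucket.
  const remaining = await limiter.hit("/api/orders", userId, {
    windowMs: 60000,
    maxLimit: isPremiumUser(userId) ? 100 : 10,
  });
  return remaining > 0; // true -> allow the request, false -> rate limited
}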
Advantages of Using bucket:
By using buckets, you can apply fine-grained rate limiting rules to different parts of your application. This allows you to prevent excessive requests to critical endpoints while being more permissive with less critical ones.
Rate limits for one bucket do not affect other buckets. This means that exceeding the limit for one endpoint (e.g., /users/auth) won't impact the rate limits for other endpoints (e.g., /api/orders).
You can set specific limits for each bucket without needing to define separate SlideLimiter instances for each endpoint.
Using separate buckets for different endpoints or features makes it easier to scale your rate limiting strategy. You can adapt the rate limits for specific buckets based on their importance and usage patterns.
The library supports two built-in storage mechanisms:
MemoryStore stores rate limiting data in memory, making it suitable for single-server applications where rate limiting is local to that server. It is a very basic implementation.
RedisStore stores rate limiting data in a Redis database, which is suitable for distributed systems where rate limiting needs to be shared across multiple servers.
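A hedged sketch of wiring up the Redis-backed store follows. The RedisStore constructor arguments (here, an ioredis client) are an assumption, since this README does not document them; check the package's typings before relying on this.

import Redis from "ioredis";
import { RedisStore, SlideLimiter } from "slide-limiter";

// Assumption: RedisStore wraps an existing Redis client.
// Verify the actual constructor signature against the package's typings.
const redis = new Redis("redis://localhost:6379");
const store = new RedisStore(redis);

const limiter = new SlideLimiter(store, {
  windowMs: 60000,
  maxLimit: 10,
});

const remaining = await limiter.hit("/api/orders", "user123");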
RateLimiterStore is an abstract class that defines the basic structure of a rate limiter store. MemoryStore and RedisStore are concrete classes that inherit from RateLimiterStore and provide specific implementations. SlideLimiter is the main library class that uses a RateLimiterStore for rate limiting.
The following Lua script is designed to be executed atomically in Redis, ensuring that rate limiting operations are consistent and thread-safe when multiple requests are made concurrently. It checks whether the request count is within the defined limit and records each request's timestamp, allowing for accurate rate limiting enforcement.
-- KEYS[1] = bucket, KEYS[2] = id; ARGV[1] = windowMs, ARGV[2] = maxLimit
local current_time = redis.call('TIME')
local bucket = KEYS[1]
local id = KEYS[2]
local key = bucket .. ":" .. id
-- Convert the window from milliseconds to seconds (Redis TIME is in seconds).
local window = tonumber(ARGV[1]) / 1000
local limit = tonumber(ARGV[2])
-- Drop entries that fall outside the current window.
local trim_time = tonumber(current_time[1]) - window
redis.call('ZREMRANGEBYSCORE', key, 0, trim_time)
local request_count = redis.call('ZCARD', key)
if request_count < limit then
  -- Record this request's timestamp and refresh the key's expiry.
  redis.call('ZADD', key, current_time[1], current_time[1] .. current_time[2])
  redis.call('EXPIRE', key, window)
  return limit - request_count - 1;
end
return 0
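For illustration only, here is a hedged sketch of invoking such a script with two keys and two arguments via ioredis. This is not necessarily how RedisStore calls the script internally; the variable names and connection details are assumptions.

import Redis from "ioredis";

// Hypothetical wiring: load the Lua script shown above into a string first.
const slidingWindowScript = `--[[ paste the script from above here ]]`;

const redis = new Redis();

const remaining = await redis.eval(
  slidingWindowScript,
  2,              // two KEYS follow
  "/api/orders",  // KEYS[1] -> bucket
  "user123",      // KEYS[2] -> id
  "60000",        // ARGV[1] -> windowMs (milliseconds)
  "10"            // ARGV[2] -> maxLimit
);
console.log("Remaining requests in window:", remaining);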
The script performs the following steps:
Calls redis.call('TIME') to retrieve the current time in Redis.
Combines bucket and id to create a unique key in Redis. This key represents the rate limiting window for a specific resource.
Converts windowMs (in milliseconds) to seconds by dividing it by 1000, then removes entries older than the window from the sorted set.
Counts the requests recorded under the key with redis.call('ZCARD', key). This count represents the requests made within the defined time window.
If the request count is below limit, it means there's still capacity for more requests, so the current request's timestamp is added to the sorted set.
Sets an expiry on the key with redis.call('EXPIRE', key, window). This ensures that rate limiting data is automatically cleared from Redis after the defined time window.
Returns the remaining capacity by subtracting the request count from limit and then subtracting 1. This accounts for the current request.
By using RedisStore, you can implement rate limiting in a distributed system, making it easier to handle high traffic and load balancing across multiple instances.
MIT