
@shipengine/connect-rate-limiter
A Redis-based rate limiting library for ShipEngine Connect modules, providing distributed rate limiting and quota management capabilities.
The Connect Rate Limiter provides a simple, Redis-backed solution for implementing rate limiting in Connect carrier modules. It supports multiple rate limiting strategies including rolling and fixed windows, with optional quota management for longer-term limits.
For local testing and development:
# In the connect-rate-limiter directory
cd shared-libs/connect-rate-limiter
npm install
npm run build
npm link
# In your Connect module directory
cd modules/your-module
npm link @shipengine/connect-rate-limiter
Add to your Connect module's package.json:
{
  "dependencies": {
    "@shipengine/connect-rate-limiter": "file:../../shared-libs/connect-rate-limiter"
  }
}
Then run:
npm install
The package includes a Docker Compose configuration for easy local Redis setup:
# docker-compose.yml (included in the package)
version: '3.8'
services:
  redis:
    image: redis:7-alpine
    container_name: connect-rate-limiter-redis
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3

volumes:
  redis_data:
Start Redis for local development:
# In the connect-rate-limiter directory
docker compose up -d
# Check Redis is running
docker compose ps
# View Redis logs
docker compose logs redis
# Stop Redis when done
docker compose down
The rate limiter automatically connects to Redis at localhost:6379 by default for local development. For custom Redis configurations, you can provide a Redis configuration object with the following structure:
{
  "host": "your-redis-host",
  "port": 6379,
  "connectTimeout": 3000,
  "commandTimeout": 3000,
  "retryDelayOnFailover": 100,
  "maxRetriesPerRequest": 3,
  "tls": {}
}
Configuration Options:
host: Redis server hostname
port: Redis server port (default: 6379)
connectTimeout: Connection timeout in milliseconds
commandTimeout: Command execution timeout in milliseconds
retryDelayOnFailover: Delay between retry attempts in milliseconds
maxRetriesPerRequest: Maximum number of retry attempts per request
tls: TLS configuration object (empty object enables TLS with default settings)
Production Deployment: For production environments, Redis configuration needs to be set up in the infra/helm directory, similar to the current setup for the dummy module. Add the Redis configuration to your module's values.yaml:
# infra/helm/your-module/values.yaml
secret:
  REDIS_CONFIGURATION: '#{YOUR_MODULE_REDIS_CONFIGURATION}'
This ensures proper Redis connectivity in deployed environments by using the appropriate Redis instance for your module.
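In a deployed module, the Helm secret above is exposed to the process as the REDIS_CONFIGURATION environment variable. The sketch below shows one way to read and parse it into the configuration object described earlier, falling back to the localhost defaults used for local development. The file name and helper are illustrative assumptions, and how the resulting object is passed to the rate limiter depends on the package's actual initialization API, so that step is left out.
// src/redis-config.ts (illustrative sketch; the file name and helper are assumptions)
// Parses the REDIS_CONFIGURATION secret injected via the Helm values above,
// falling back to the localhost defaults used for local development.
interface RedisConfig {
  host: string;
  port: number;
  connectTimeout?: number;
  commandTimeout?: number;
  retryDelayOnFailover?: number;
  maxRetriesPerRequest?: number;
  tls?: Record<string, unknown>;
}

export const getRedisConfig = (): RedisConfig => {
  const raw = process.env.REDIS_CONFIGURATION;
  if (!raw) {
    // Local development default: the Redis instance from the bundled docker-compose.yml
    return { host: 'localhost', port: 6379 };
  }
  return JSON.parse(raw) as RedisConfig;
};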
Create a rate limiter configuration file in your Connect module:
// src/rate-limit-config.ts
import {
  RateLimitMode,
  RateLimitConfiguration,
  RateLimiter,
  QuotaRateLimitConfig
} from '@shipengine/connect-rate-limiter';

// Create a singleton instance
export const rateLimiter = new RateLimiter();

// Default configuration with quota (10 requests per minute, 15 per day)
export const quotaRateLimitConfig: QuotaRateLimitConfig = {
  rateLimit: {
    maxRequests: 10,
    windowSize: 60
  },
  quota: {
    maxRequests: 15,
    windowSize: 86400 // 24 hours
  }
};

export const baseLimitConfig: RateLimitConfiguration = {
  strategy: {
    mode: RateLimitMode.ROLLING_QUOTA_ROLLING_RATE_LIMIT
  },
  config: quotaRateLimitConfig
};
The checkRateLimitAndIncrementCounter method checks the rate limit and increments the counter using the base configuration. Use this approach for carriers whose rate limits apply to our entire organization (one shared limit for the whole module).
// src/methods/track/track.ts
import { TrackingRequest, TrackingResponse } from '@shipengine/connect-carrier-api';
import { rateLimiter, baseLimitConfig } from '../../rate-limit-config';

export const Track = async (request: TrackingRequest): Promise<TrackingResponse> => {
  // Check rate limit and increment counter before executing API call
  // Rate limit errors are automatically handled by the package
  // One shared limit ID for the whole module, since the carrier limit applies to the entire organization
  const limitId = 'your_module';
  await rateLimiter.checkRateLimitAndIncrementCounter(limitId, baseLimitConfig);

  // Execute api call after rate limit check
  const apiResponse = await callCarrierAPI(request);
  return mapResponse(apiResponse);
};
Use this approach when the rate limit configuration comes from seller metadata (set during registration), i.e. for carriers whose rate limits are configured per seller.
// src/methods/track/track.ts
import { TrackingRequest, TrackingResponse } from '@shipengine/connect-carrier-api';
import { rateLimiter, baseLimitConfig } from '../../rate-limit-config';

export const Track = async (request: TrackingRequest): Promise<TrackingResponse> => {
  const sellerId = request.metadata?.seller_id;
  const limitId = `your_module_${sellerId}`;

  // Use configuration from metadata if available, otherwise use base config
  const rateLimitConfig = request.metadata?.rate_limit_config || baseLimitConfig;

  // Rate limit check and increment counter with configuration from metadata or base config
  // Errors are automatically handled by the package
  await rateLimiter.checkRateLimitAndIncrementCounter(limitId, rateLimitConfig);

  // Execute api call
  const apiResponse = await callCarrierAPI(request);
  return mapResponse(apiResponse);
};
For carriers that have per-seller API limits but use the module's base configuration (rather than a configuration from seller metadata):
// src/methods/track/track.ts
import { TrackingRequest, TrackingResponse } from '@shipengine/connect-carrier-api';
import { rateLimiter, baseLimitConfig } from '../../rate-limit-config';

export const Track = async (request: TrackingRequest): Promise<TrackingResponse> => {
  const sellerId = request.metadata?.seller_id;
  // Per-seller limit ID, so each seller gets its own counter
  const limitId = `your_module_se-${sellerId}`;
  await rateLimiter.checkRateLimitAndIncrementCounter(limitId, baseLimitConfig);
  return performTracking(request);
};
Set up rate limiting configuration during carrier registration:
// src/methods/register/register.ts
import { RegisterRequest, RegisterResponse } from '@shipengine/connect-carrier-api';
import { RateLimitMode } from '@shipengine/connect-rate-limiter';
import { rateLimiter } from '../../rate-limit-config';

export const Register = async (request: RegisterRequest): Promise<RegisterResponse> => {
  const registrationInfo = request.registration_info;

  // Set up rate limiting based on registration information
  if (registrationInfo.api_rate_limit) {
    await rateLimiter.setConfiguration(`your_module_se-${request.metadata.seller_id}`, {
      strategy: { mode: RateLimitMode.ROLLING_RATE_LIMIT },
      config: {
        rateLimit: {
          maxRequests: registrationInfo.api_rate_limit.max_requests,
          windowSize: registrationInfo.api_rate_limit.window_size
        }
      }
    });
  }

  // Continue with registration logic
  return mapRegistrationResponse(registrationInfo);
};
Note: Configuration updates should also be handled in the UpdateSettings method. Use the same setConfiguration pattern when processing settings changes to ensure rate limiting configurations stay synchronized with carrier settings.
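A minimal sketch of that UpdateSettings pattern is shown below. The request and response types, the settings field names, and the mapSettingsResponse helper are illustrative assumptions; only the setConfiguration call mirrors the registration example above.
// src/methods/update-settings/update-settings.ts (illustrative sketch)
import { RateLimitMode } from '@shipengine/connect-rate-limiter';
import { rateLimiter } from '../../rate-limit-config';

// Replace `any` with the real UpdateSettings request/response types from @shipengine/connect-carrier-api
export const UpdateSettings = async (request: any): Promise<any> => {
  const updatedSettings = request.settings; // assumed field name, mirroring registration_info above

  // Keep the stored rate limit configuration in sync with the new carrier settings
  if (updatedSettings?.api_rate_limit) {
    await rateLimiter.setConfiguration(`your_module_se-${request.metadata.seller_id}`, {
      strategy: { mode: RateLimitMode.ROLLING_RATE_LIMIT },
      config: {
        rateLimit: {
          maxRequests: updatedSettings.api_rate_limit.max_requests,
          windowSize: updatedSettings.api_rate_limit.window_size
        }
      }
    });
  }

  // Continue with the module's normal settings-update logic
  return mapSettingsResponse(updatedSettings); // mapSettingsResponse is a hypothetical helper
};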
The library supports six different rate limiting modes:
ROLLING_RATE_LIMIT: Sliding window rate limiting (most common)
FIXED_RATE_LIMIT: Fixed window rate limiting
ROLLING_QUOTA_ROLLING_RATE_LIMIT: Rolling quota with rolling rate limit
ROLLING_QUOTA_FIXED_RATE_LIMIT: Rolling quota with fixed rate limit
FIXED_QUOTA_ROLLING_RATE_LIMIT: Fixed quota with rolling rate limit
FIXED_QUOTA_FIXED_RATE_LIMIT: Fixed quota with fixed rate limit
// Default configuration: 10 requests per minute, 1000 per day
const defaultConfig: RateLimitConfiguration = {
  strategy: { mode: RateLimitMode.ROLLING_QUOTA_ROLLING_RATE_LIMIT },
  config: {
    rateLimit: {
      maxRequests: 10,
      windowSize: 60
    },
    quota: {
      maxRequests: 1000,
      windowSize: 86400
    }
  }
};
When using checkRateLimitAndIncrementCounter(), error handling is automatically managed by the package. The library will throw appropriate Connect runtime errors that are handled by the Connect framework.
The library provides specific error types for different rate limiting scenarios:
RateLimitExceededError: Thrown when short-term rate limits are exceeded
QuotaExceededError: Thrown when long-term quota limits are exceeded
RateLimiterError: Thrown for other rate limiter errors
These errors are automatically converted to appropriate Connect runtime errors and handled by the framework. You typically don't need to catch these errors manually when using checkRateLimitAndIncrementCounter().
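If you do need to intercept one of these errors yourself (for example to add module-specific logging before Connect handles it), a minimal sketch is shown below, assuming the error classes are exported from the package root:
import { rateLimiter, baseLimitConfig } from '../../rate-limit-config';
// Assumption: the error classes are exported from the package root
import { RateLimitExceededError, QuotaExceededError } from '@shipengine/connect-rate-limiter';

// sellerId comes from the incoming request, as in the Track examples above
const limitId = `your_module_${sellerId}`;
try {
  await rateLimiter.checkRateLimitAndIncrementCounter(limitId, baseLimitConfig);
} catch (error) {
  if (error instanceof RateLimitExceededError || error instanceof QuotaExceededError) {
    // Optional: record module-specific logging or metrics here
    console.warn(`Rate limit reached for ${limitId}`);
  }
  throw error; // rethrow so the Connect framework still converts it to a runtime error
}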
For advanced use cases where you need custom error handling, you can use the isAllowed() method instead:
import { rateLimiter, baseLimitConfig } from '../../rate-limit-config';

const limitId = `your_module_${sellerId}`;
const allowed = await rateLimiter.isAllowed(limitId, baseLimitConfig);
if (!allowed) {
  // Handle rate limiting without exceptions
  return customRateLimitResponse();
}
# Run all tests
npm test
# Run with coverage
npm run test -- --coverage
# Run specific test file
npm test -- rate-limiter.test.ts
The package is automatically published to npm after merging changes to the master branch.
To use the latest version in your Connect module:
npm install @shipengine/connect-rate-limiter@latest
Or update your package.json to use the latest version and run npm install.
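For example, once the package is published you can replace the local file: reference with a normal semver range (the version below is a placeholder; use the latest published release):
{
  "dependencies": {
    "@shipengine/connect-rate-limiter": "^1.0.0"
  }
}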