
django-smart-ratelimit
A flexible and efficient rate limiting library for Django applications
A comprehensive rate limiting library for Django applications with multiple algorithms, backends, and flexible configuration options.
Rate limiting helps protect your Django applications from abusive traffic such as brute-force login attempts, API scraping, and resource exhaustion.
Install the package, optionally with the Redis extra for the Redis backend:
pip install django-smart-ratelimit[redis]
from django.http import JsonResponse

from django_smart_ratelimit import rate_limit

# IP-based rate limiting
@rate_limit(key='ip', rate='100/h')
def api_endpoint(request):
    return JsonResponse({'data': 'protected'})

# User-based rate limiting
@rate_limit(key='user', rate='50/h')
def user_dashboard(request):
    return JsonResponse({'user_data': '...'})

# Strict limiting for sensitive endpoints: block requests over the limit
@rate_limit(key='ip', rate='5/m', block=True)
def login_view(request):
    return authenticate_user(request)  # authenticate_user is your own login logic
from rest_framework import viewsets
from rest_framework.response import Response

from django_smart_ratelimit import rate_limit

class APIViewSet(viewsets.ViewSet):
    @rate_limit(key='ip', rate='100/h')
    def list(self, request):
        return Response({'data': 'list'})
# settings.py
MIDDLEWARE = [
    # ... your existing middleware ...
    'django_smart_ratelimit.middleware.RateLimitMiddleware',
]

RATELIMIT_MIDDLEWARE = {
    'DEFAULT_RATE': '1000/h',
    'RATE_LIMITS': {
        '/api/auth/': '10/m',
        '/api/': '500/h',
    },
}
| Algorithm | Characteristics | Best For |
|---|---|---|
| token_bucket | Allows traffic bursts | APIs with variable load |
| sliding_window | Smooth request distribution | Consistent traffic control |
| fixed_window | Simple, predictable behavior | Basic rate limiting needs |
The token bucket algorithm allows for burst traffic handling:
@rate_limit(
    key='user',
    rate='100/h',              # Base rate
    algorithm='token_bucket',
    algorithm_config={
        'bucket_size': 200,    # Allow bursts up to 200 requests
        'refill_rate': 2.0,    # Refill tokens at 2 per second
    },
)
def api_with_bursts(request):
    return JsonResponse({'data': 'handled'})
Common use cases for the token bucket algorithm include APIs that must absorb legitimate traffic bursts, such as batch uploads or clients syncing after a period offline.
To use the Redis backend, point the library at your Redis server:
RATELIMIT_BACKEND = 'redis'
RATELIMIT_REDIS = {
    'host': 'localhost',
    'port': 6379,
    'db': 0,
}
RATELIMIT_BACKEND = 'database'
# Uses your default Django database
RATELIMIT_BACKENDS = [
    {
        'name': 'primary_redis',
        'backend': 'redis',
        'config': {'host': 'redis-primary.example.com'},
    },
    {
        'name': 'fallback_db',
        'backend': 'database',
        'config': {},
    },
]
The library adds standard rate limit headers to responses:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 99
X-RateLimit-Reset: 1642678800
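As a rough illustration (not part of the library), a client consuming your API could use these headers to back off before hitting the limit. The endpoint URL below is hypothetical, and X-RateLimit-Reset is treated as a Unix timestamp, as the example value above suggests:

import time

import requests

response = requests.get('https://api.example.com/api/items/')  # hypothetical endpoint
remaining = int(response.headers.get('X-RateLimit-Remaining', '1'))
reset_at = int(response.headers.get('X-RateLimit-Reset', '0'))

if remaining == 0:
    # Sleep until the window resets (X-RateLimit-Reset read as a Unix timestamp)
    time.sleep(max(0, reset_at - time.time()))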
# Check backend health
python manage.py ratelimit_health
# Detailed status
python manage.py ratelimit_health --verbose
# Clean expired entries
python manage.py cleanup_ratelimit
# Preview cleanup
python manage.py cleanup_ratelimit --dry-run
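A minimal sketch of invoking the cleanup command from Python, for example from a cron job or a Celery beat task; the scheduling mechanism is yours to choose, and only the command name comes from the library:

from django.core.management import call_command

def cleanup_expired_rate_limits():
    # Equivalent to running: python manage.py cleanup_ratelimit
    call_command('cleanup_ratelimit')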
The library includes comprehensive examples for various use cases. The rate_limit decorator accepts the following options:
@rate_limit(
    key='user',                              # Key function or string
    rate='100/h',                            # Rate limit (requests/period)
    algorithm='token_bucket',                # Algorithm choice
    algorithm_config={},                     # Algorithm-specific config
    backend='redis',                         # Backend override
    block=True,                              # Block vs. continue on limit
    skip_if=lambda req: req.user.is_staff,   # Skip condition
)
def my_view(request):
    ...
# Run tests
python -m pytest
# Run with coverage
python -m pytest --cov=django_smart_ratelimit
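A minimal test sketch under stated assumptions: it uses Django's RequestFactory and assumes the in-memory backend can be selected with RATELIMIT_BACKEND = 'memory' (suggested by the MemoryBackend class shown further below, but confirm the exact setting value in the docs). It only asserts that requests within the limit succeed, which avoids depending on the library's blocked-response status code:

from django.http import JsonResponse
from django.test import RequestFactory, override_settings

from django_smart_ratelimit import rate_limit

@rate_limit(key='ip', rate='2/m', block=True)
def tiny_api(request):
    return JsonResponse({'ok': True})

@override_settings(RATELIMIT_BACKEND='memory')  # assumed value; see note above
def test_requests_under_the_limit_succeed():
    factory = RequestFactory()
    for _ in range(2):  # stays within the 2/m limit
        response = tiny_api(factory.get('/tiny/'))
        assert response.status_code == 200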
from django.http import JsonResponse

from django_smart_ratelimit import rate_limit

# Basic IP-based limiting
@rate_limit(key='ip', rate='10/m')
def public_api(request):
    return JsonResponse({'message': 'Hello World'})

# User-based limiting (automatically falls back to IP for anonymous users)
@rate_limit(key='user', rate='100/h')
def user_dashboard(request):
    return JsonResponse({'user_data': '...'})

# Custom key function for more control
@rate_limit(
    key=lambda req: f"user:{req.user.id}" if req.user.is_authenticated else f"ip:{req.META.get('REMOTE_ADDR')}",
    rate='50/h',
)
def flexible_api(request):
    return JsonResponse({'data': '...'})

# Block when the limit is exceeded (the default is to continue)
@rate_limit(key='ip', rate='5/m', block=True)
def strict_api(request):
    return JsonResponse({'sensitive': 'data'})

# Skip rate limiting for staff users
@rate_limit(key='ip', rate='10/m', skip_if=lambda req: req.user.is_staff)
def staff_friendly_api(request):
    return JsonResponse({'data': 'staff can access unlimited'})

# Use the sliding window algorithm
@rate_limit(key='user', rate='100/h', algorithm='sliding_window')
def smooth_api(request):
    return JsonResponse({'algorithm': 'sliding_window'})

# Use the fixed window algorithm
@rate_limit(key='ip', rate='20/m', algorithm='fixed_window')
def burst_api(request):
    return JsonResponse({'algorithm': 'fixed_window'})

# Use the token bucket algorithm (NEW!)
@rate_limit(
    key='api_key',
    rate='100/m',              # Base rate: 100 requests per minute
    algorithm='token_bucket',
    algorithm_config={
        'bucket_size': 200,    # Allow bursts up to 200 requests
        'refill_rate': 2.0,    # Refill at 2 tokens per second
    },
)
def api_with_bursts(request):
    return JsonResponse({'algorithm': 'token_bucket', 'burst_allowed': True})
The circuit breaker provides automatic failure detection and recovery for backend operations, keeping rate limiting reliable even when a backend fails:
# settings.py
RATELIMIT_CIRCUIT_BREAKER = {
    'failure_threshold': 5,     # Open circuit after 5 failures
    'recovery_timeout': 60,     # Wait 60 seconds before testing recovery
    'reset_timeout': 300,       # Reset after 5 minutes of success
    'half_open_max_calls': 1,   # Test with 1 call in half-open state
}
from django_smart_ratelimit.backends import MemoryBackend

# Enable circuit breaker (default: enabled)
backend = MemoryBackend(enable_circuit_breaker=True)

# Custom configuration
custom_config = {'failure_threshold': 3, 'recovery_timeout': 30}
backend = MemoryBackend(
    enable_circuit_breaker=True,
    circuit_breaker_config=custom_config,
)

# Check circuit breaker status
status = backend.get_backend_health_status()
print(f"Circuit breaker enabled: {status['circuit_breaker_enabled']}")
print(f"Current state: {status['circuit_breaker']['state']}")
If you find this project helpful, you can support its development with a donation to one of the addresses below:
0xBD90e5df7389295AE6fbaB5FEf6817f22A8123eF
WzQHS7hzBcznkYoR7TkMH1DRo3WLYQdWCNBuy6ZfY3h
rE8CM2sv4gBEDhek2Ajm2vMmqMXdPV34jC
Your support helps maintain and improve this project for the Django community! 🙏
This project is licensed under the MIT License - see the LICENSE file for details.
📚 Documentation • 💡 Examples • 🤝 Contributing • 💬 Discussions • 🤖 AI Usage