
██╗ ██╗███████╗██╗ ██████╗ ██████╗██████╗ ██╗██╗ ██╗███╗ ███╗
██║ ██║██╔════╝██║ ██╔═══██╗██╔════╝██╔══██╗██║██║ ██║████╗ ████║
██║ ██║█████╗ ██║ ██║ ██║██║ ██████╔╝██║██║ ██║██╔████╔██║
╚██╗ ██╔╝██╔══╝ ██║ ██║ ██║██║ ██╔══██╗██║██║ ██║██║╚██╔╝██║
╚████╔╝ ███████╗███████╗╚██████╔╝╚██████╗██║ ██║██║╚██████╔╝██║ ╚═╝ ██║
╚═══╝ ╚══════╝╚══════╝ ╚═════╝ ╚═════╝╚═╝ ╚═╝╚═╝ ╚═════╝ ╚═╝ ╚═╝
Lightning Fast • Intelligent • Production Ready
Quick Start • Features • Benchmarks • Documentation • Examples

VELOCRIUM is not just another HTTP client. It's a complete solution that combines the elegance of requests with the power of httpx, and adds intelligent features that save you hours of development time.
```python
# This simple code does SO MUCH under the hood:
import velocrium

client = velocrium.Client(
    retry=velocrium.Retry(max_attempts=3),     # ✅ Auto-retry with backoff
    cache=velocrium.Cache(ttl=300),            # ✅ Smart HTTP caching
    rate_limit=velocrium.RateLimit("100/min")  # ✅ Built-in rate limiting
)

response = client.get("https://api.example.com/data")  # 🚀 One line, all features!
```
| Library | Speed | Async/Sync | Retry | Cache | Rate Limit | Type Hints | Learning Curve |
|---|---|---|---|---|---|---|---|
| 🏆 VELOCRIUM | ⚡⚡⚡⚡⚡ | ✅ Both | ✅ Built-in | ✅ Built-in | ✅ Built-in | ✅ Complete | 🟢 Easy |
| requests | ⚡⚡⚡ | ❌ Sync only | ❌ Manual | ❌ No | ❌ No | ⚠️ Partial | 🟢 Easy |
| httpx | ⚡⚡⚡⚡ | ✅ Both | ⚠️ Manual | ❌ No | ❌ No | ✅ Yes | 🟡 Medium |
| aiohttp | ⚡⚡⚡⚡⚡ | ❌ Async only | ❌ Manual | ❌ No | ❌ No | ⚠️ Partial | 🔴 Hard |
- 🚀 Performance Features
- 🛡️ Reliability Features
- 🔐 Security Features
- 🎯 Developer Experience
```bash
# Basic installation
pip install velocrium

# With Redis caching support
pip install velocrium[redis]

# With all optional features
pip install velocrium[all]
```
Requirements:
```python
import velocrium

# Create a client
client = velocrium.Client()

# Make a request (works in sync context)
response = client.get("https://api.github.com/users/octocat")
print(response.json())

# Or use async (same API!)
async def fetch():
    response = await client.get("https://api.github.com/users/octocat")
    return response.json()
```
```python
import velocrium

# Configure once, benefit everywhere
client = velocrium.Client(
    base_url="https://api.example.com",

    # Auto-retry failed requests
    retry=velocrium.Retry(
        max_attempts=3,
        backoff="exponential",  # 1s, 2s, 4s, 8s...
        jitter=True             # Add randomness to prevent thundering herd
    ),

    # Cache responses automatically
    cache=velocrium.Cache(
        ttl=300,          # 5 minutes
        backend="memory"  # or "redis", "disk"
    ),

    # Rate limiting (never get throttled!)
    rate_limit=velocrium.RateLimit("100/minute"),

    # Sensible timeouts
    timeout=velocrium.Timeout(
        connect=5,
        read=30,
        write=10
    ),

    # Authentication
    auth=velocrium.BearerAuth("your-token-here")
)

# Now every request uses all these features automatically! 🎉
response = client.get("/users")
```
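A limit like `RateLimit("100/minute")` means the client throttles itself before the server does. The README doesn't show velocrium's internals, but client-side rate limiters of this kind are commonly token buckets; the `TokenBucket` class below is an illustrative sketch of that mechanism, not velocrium code:

```python
import time

class TokenBucket:
    """Illustrative token bucket: at most `capacity` requests per `period` seconds."""
    def __init__(self, capacity: int, period: float):
        self.capacity = capacity
        self.refill_rate = capacity / period  # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=100, period=60)  # "100/minute"
print(bucket.allow())  # first request is allowed
```

A real client would sleep (or await) until a token is available instead of returning `False`, but the bookkeeping is the same.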
```mermaid
graph TB
    A[Your Application] --> B[VELOCRIUM Client]
    B --> C{Request Pipeline}
    C --> D[Auth Handler]
    C --> E[Cache Layer]
    C --> F[Rate Limiter]
    C --> G[Retry Logic]
    D --> H[HTTP Transport]
    E --> H
    F --> H
    G --> H
    H --> I{Backend}
    I --> J[httpx]
    I --> K[aiohttp]
    J --> L[Target API]
    K --> L
    L --> M[Response]
    M --> N[Cache Store]
    M --> O[Your App]
    style B fill:#4CAF50
    style H fill:#2196F3
    style L fill:#FF9800
```
```python
def log_request(request):
    print(f"→ {request.method} {request.url}")
    return request

def log_response(response):
    print(f"← {response.status_code} ({response.elapsed}s)")
    return response

client = velocrium.Client(
    hooks={
        "request": [log_request],
        "response": [log_response]
    }
)
```
```python
# Execute multiple requests in parallel
with client.batch() as batch:
    batch.get("/users/1")
    batch.get("/users/2")
    batch.post("/users", json={"name": "John"})

# All executed concurrently!
responses = await batch.execute()
```
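A batch API like this typically fans requests out with asyncio. Here is a minimal sketch of the underlying concurrency model using plain asyncio; the `fetch` coroutine is a stand-in for real HTTP calls and is not velocrium code:

```python
import asyncio

async def fetch(url: str) -> str:
    # Stand-in for a real HTTP call; sleeps to simulate network latency.
    await asyncio.sleep(0.01)
    return f"response from {url}"

async def run_batch(urls):
    # gather() schedules all coroutines at once and awaits them together,
    # so total time is roughly one round-trip instead of one per request.
    return await asyncio.gather(*(fetch(u) for u in urls))

results = asyncio.run(run_batch(["/users/1", "/users/2"]))
print(results)
```

The key point is that the requests are queued first and then awaited as a group, which is what lets a batch of N calls complete in roughly the time of the slowest one.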
```python
from velocrium import Retry

# Exponential backoff with jitter
retry = Retry(
    max_attempts=5,
    backoff="exponential",
    base_delay=1.0,
    max_delay=60.0,
    jitter=True
)

# Linear backoff
retry = Retry(
    max_attempts=3,
    backoff="linear",
    base_delay=2.0
)

# Constant delay
retry = Retry(
    max_attempts=10,
    backoff="constant",
    base_delay=0.5
)
```
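To make the three strategies concrete, this small sketch computes the wait before each retry attempt. It assumes the doubling pattern implied by the "1s, 2s, 4s..." comment earlier and a common jitter scheme; it is illustrative, not velocrium's actual implementation:

```python
import random

def delays(strategy: str, attempts: int, base: float = 1.0,
           max_delay: float = 60.0, jitter: bool = False):
    """Compute the wait before each retry under the three strategies."""
    out = []
    for n in range(attempts):
        if strategy == "exponential":
            d = min(base * (2 ** n), max_delay)  # 1s, 2s, 4s, 8s, ... capped
        elif strategy == "linear":
            d = min(base * (n + 1), max_delay)   # base, 2*base, 3*base, ...
        else:  # constant
            d = base
        if jitter:
            d *= random.uniform(0.5, 1.5)  # spread retries to avoid thundering herd
        out.append(d)
    return out

print(delays("exponential", 5))       # [1.0, 2.0, 4.0, 8.0, 16.0]
print(delays("linear", 3, base=2.0))  # [2.0, 4.0, 6.0]
```

The `max_delay` cap matters for exponential backoff: without it, attempt 7 at `base_delay=1.0` would already wait over a minute.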
| Library | Simple | Retries | Caching |
|---|---|---|---|
| VELOCRIUM | 45ms | 47ms | 2ms |
| httpx | 46ms | N/A | N/A |
| requests | 52ms | N/A | N/A |
| aiohttp | 43ms | N/A | N/A |
```
VELOCRIUM: 12.3 MB (with caching)
httpx:      8.5 MB
requests:   6.2 MB
aiohttp:   11.8 MB
```
💡 Note: VELOCRIUM uses slightly more memory because it includes cache, retry state, and rate limiting - features that would add similar overhead to other libraries if implemented manually.
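Most of that extra memory is responses held until their TTL expires. A minimal sketch of that kind of TTL cache, illustrative only and not velocrium's backend:

```python
import time

class TTLCache:
    """Minimal in-memory TTL cache: entries expire `ttl` seconds after insertion."""
    def __init__(self, ttl: float):
        self.ttl = ttl
        self.store = {}  # key -> (expiry_time, value)

    def set(self, key, value):
        self.store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() > expiry:
            del self.store[key]  # lazily evict stale entries
            return None
        return value

cache = TTLCache(ttl=300)  # mirrors Cache(ttl=300) above
cache.set("GET /data", b"payload")
print(cache.get("GET /data"))
```

Every cached body stays resident until it expires or is evicted, which is exactly the memory-for-latency trade the 2ms cached-response benchmark reflects.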
```python
import velocrium
from velocrium.exceptions import HTTPError, TimeoutError, RetryError

client = velocrium.Client(
    base_url="https://api.example.com",
    retry=velocrium.Retry(max_attempts=3),
    timeout=velocrium.Timeout(connect=5, read=30)
)

try:
    response = client.get("/users")
    users = response.json()
except HTTPError as e:
    print(f"API returned error: {e}")
except TimeoutError:
    print("Request timed out")
except RetryError:
    print("All retry attempts failed")
finally:
    client.close()
```
```python
import velocrium
import asyncio

# Never get blocked again!
scraper = velocrium.Client(
    rate_limit=velocrium.RateLimit("10/second"),
    retry=velocrium.Retry(max_attempts=5),
    timeout=velocrium.Timeout(read=60)
)

async def scrape_pages(urls):
    results = []
    for url in urls:
        # Automatically rate-limited and retried
        response = await scraper.get(url)
        results.append(response.text)
    return results

urls = ["https://example.com/page1", "https://example.com/page2"]
data = asyncio.run(scrape_pages(urls))
```
```python
# service_a.py
import velocrium

# Configure for internal service mesh
api_client = velocrium.Client(
    base_url="http://service-b:8080",
    retry=velocrium.Retry(max_attempts=2),        # Quick fail for services
    cache=velocrium.Cache(ttl=60),                # Cache for 1 minute
    timeout=velocrium.Timeout(connect=1, read=5)  # Fast timeouts
)

def get_user_data(user_id):
    response = api_client.get(f"/users/{user_id}")
    return response.json()
```
```bash
# Install development dependencies
pip install -e .[dev]

# Run tests
pytest

# Run tests with coverage
pytest --cov=velocrium --cov-report=html

# Type checking
mypy src/velocrium

# Code formatting
black src/velocrium tests/
isort src/velocrium tests/
```
Test Results:

```
tests/test_client.py ...................... [11 tests] ✅ PASSED
tests/test_retry.py ........................ [8 tests] ✅ PASSED
tests/test_cache.py ........................ [6 tests] ✅ PASSED

Total: 25 tests | Coverage: 95% | Duration: 2.3s
```
We love contributions! Here's how you can help:
1. Create a feature branch: `git checkout -b feature/amazing`
2. Run the tests: `pytest`
3. Commit your changes: `git commit -m 'Add amazing feature'`
4. Push the branch: `git push origin feature/amazing`

Development Setup:

```bash
git clone https://github.com/jdevsky/velocrium.git
cd velocrium
pip install -e .[dev]
pytest  # Run tests
```
This project is licensed under the MIT License - see the LICENSE file for details.
TL;DR: You can use VELOCRIUM for anything, including commercial projects. Just keep the copyright notice.
VELOCRIUM is built on the shoulders of giants:
Special thanks to the Python community for continuous inspiration! ❤️
If VELOCRIUM makes your life easier, consider: