
Company News
Socket Named Top Sales Organization by RepVue
Socket won two 2026 Reppy Awards from RepVue, ranking in the top 5% of all sales orgs. AE Alexandra Lister shares what it's like to grow a sales career here.
iploop
Residential proxy SDK with full scraping capabilities: render JS, bypass protection, and extract structured data, all through one-liner web fetching over millions of real residential IPs.
```shell
pip install iploop
```
```python
from iploop import IPLoop

ip = IPLoop("your-api-key")

# Fetch any URL through a residential proxy
response = ip.get("https://httpbin.org/ip")
print(response.text)

# Target a specific country
response = ip.get("https://example.com", country="DE")

# POST request
response = ip.post("https://api.example.com/data", json={"key": "value"})
```
Headers are automatically matched to the target country — correct language, timezone, and User-Agent:
```python
ip = IPLoop("key", country="JP")  # Japanese Chrome headers automatically
```
Keep the same IP across multiple requests:
```python
s = ip.session(country="US", city="newyork")
page1 = s.fetch("https://site.com/page1")  # same IP
page2 = s.fetch("https://site.com/page2")  # same IP
```
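Under the hood, sticky sessions typically pin each session id to one exit IP. A minimal sketch of that idea in plain Python — the pool and hashing scheme here are illustrative stand-ins, not iploop's actual implementation:

```python
import hashlib

# Stand-in exit-IP pool; a real provider rotates over millions of residential IPs
POOL = ["203.0.113.5", "198.51.100.7", "192.0.2.9"]

def sticky_ip(session_id: str, pool=POOL) -> str:
    """Map a session id to a stable pool entry, so every request
    made within the same session leaves through the same IP."""
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return pool[int(digest, 16) % len(pool)]
```

The same session id always hashes to the same IP; creating a new session gives you a fresh draw from the pool.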
Failed requests (403, 502, 503, timeouts) automatically retry with a fresh IP:
```python
# Retries up to 5 times with different IPs
response = ip.get("https://tough-site.com", retries=5)
```
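The retry-with-fresh-IP logic can be sketched in plain Python. This is a simplified model, not iploop's internals: `fetch` is a hypothetical callable returning a status code, and a new random session id stands in for "rotate to a fresh IP":

```python
import random

RETRYABLE = {403, 502, 503}  # statuses that trigger a retry on a new IP

def get_with_retries(fetch, url, retries=5):
    """Call fetch(url, session_id) until a non-retryable status comes back,
    using a brand-new session id (hence a different exit IP) per attempt."""
    status = None
    for _attempt in range(retries + 1):
        session_id = f"s{random.randrange(1 << 30)}"  # new session => new IP
        status = fetch(url, session_id)
        if status not in RETRYABLE:
            return status
    return status  # exhausted retries; surface the last failure
```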
```python
import asyncio
from iploop import AsyncIPLoop

async def main():
    async with AsyncIPLoop("key") as ip:
        results = await asyncio.gather(
            ip.get("https://site1.com"),
            ip.get("https://site2.com"),
            ip.get("https://site3.com"),
        )
        for r in results:
            print(r.status_code)

asyncio.run(main())
```
```python
ip.usage()      # Check bandwidth quota
ip.status()     # Service status
ip.ask("how do I handle captchas?")  # Ask support
ip.countries()  # List available countries
```
Auto-extract structured data from popular sites:
```python
# eBay — extract product listings
products = ip.ebay.search("laptop", extract=True)["products"]
# [{"title": "MacBook Pro 16", "price": "$1,299.00"}, ...]

# Nasdaq — extract stock quotes
quote = ip.nasdaq.quote("AAPL", extract=True)
# {"price": "$185.50", "change": "+2.30", "pct_change": "+1.25%"}

# Google — extract search results
results = ip.google.search("best proxy service", extract=True)["results"]
# [{"title": "...", "url": "..."}, ...]

# Twitter — extract profile info
profile = ip.twitter.profile("elonmusk", extract=True)
# {"name": "Elon Musk", "handle": "elonmusk", ...}

# YouTube — extract video metadata
video = ip.youtube.video("dQw4w9WgXcQ", extract=True)
# {"title": "...", "channel": "...", "views": 1234567}
```
Built-in per-site rate limiting prevents blocks automatically:
```python
# These calls auto-delay to respect site limits
for q in ["laptop", "phone", "tablet"]:
    ip.ebay.search(q)  # 15s delay between requests

ip.linkedin.profile("satyanadella")
ip.linkedin.company("microsoft")
```
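A per-site rate limiter like this boils down to remembering the last request time per site and sleeping off any remainder of the minimum interval. A small sketch of that mechanism (not iploop's actual code; `now` and `sleep` are injectable so the logic is testable without real delays):

```python
import time

class PerSiteLimiter:
    """Enforce a minimum interval between requests to the same site."""

    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last = {}  # site -> timestamp of most recent request

    def wait(self, site: str, now=None, sleep=time.sleep) -> float:
        """Sleep just long enough to honor the interval; return the delay used."""
        if now is None:
            now = time.monotonic()
        delay = 0.0
        prev = self._last.get(site)
        if prev is not None:
            delay = max(0.0, self.min_interval - (now - prev))
            if delay:
                sleep(delay)
        self._last[site] = now + delay
        return delay
```

The first call to a site goes through immediately; subsequent calls pay only the portion of the interval that has not already elapsed.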
Batch fetch up to 25 URLs in parallel:
```python
# Concurrent fetching (safe up to 25)
batch = ip.batch(max_workers=10)
results = batch.fetch_all([
    "https://ebay.com/sch/i.html?_nkw=laptop",
    "https://ebay.com/sch/i.html?_nkw=phone",
    "https://ebay.com/sch/i.html?_nkw=tablet",
], country="US")

# Multi-country comparison
prices = batch.fetch_multi_country(
    "https://ebay.com/sch/i.html?_nkw=iphone", ["US", "GB", "DE"]
)
```
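A batch helper like this is typically a thin wrapper over a worker pool. A minimal sketch with Python's standard library, assuming a `fetch` callable (the ordering guarantee comes from `ThreadPoolExecutor.map`, which returns results in input order regardless of completion order):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(fetch, urls, max_workers=10):
    """Order-preserving parallel fetch: run fetch(url) across a worker
    pool and return results aligned with the input URL list."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))
```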
Every request automatically applies a 14-header Chrome desktop fingerprint:
```python
# Auto fingerprinting — no setup needed
html = ip.fetch("https://ebay.com", country="US")  # fingerprinted automatically

# Get fingerprint headers directly
headers = ip.fingerprint("DE")  # 14 headers for German Chrome
```
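The core idea behind country-matched fingerprints is that headers must agree with the exit country. A toy sketch of that principle (the locale table and header set are illustrative, far smaller than a real 14-header fingerprint, and not iploop's actual values):

```python
# Illustrative locale table: exit country -> matching Accept-Language
LOCALES = {
    "US": "en-US,en;q=0.9",
    "DE": "de-DE,de;q=0.9",
    "JP": "ja-JP,ja;q=0.9",
}

def country_headers(country: str,
                    ua: str = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)") -> dict:
    """Build a minimal country-consistent header set: Accept-Language
    that disagrees with the exit IP's country is an easy detection signal."""
    return {
        "User-Agent": ua,
        "Accept-Language": LOCALES.get(country, LOCALES["US"]),
        "Accept": "text/html,application/xhtml+xml",
    }
```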
```python
# After making requests...
print(ip.stats)
# {"requests": 10, "success": 9, "errors": 1, "total_time": 23.5, "avg_time": 2.35, "success_rate": 90.0}
```
```python
ip = IPLoop("key", debug=True)
# Logs: GET https://example.com → 200 (0.45s) country=US session=abc123
```
```python
from iploop import AuthError, QuotaExceeded, ProxyError, TimeoutError

try:
    response = ip.get("https://example.com")
except QuotaExceeded:
    print("Upgrade at https://iploop.io/pricing")
except ProxyError:
    print("Proxy connection failed")
except TimeoutError:
    print("Request timed out")
```
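A common pattern on top of these exceptions is to treat quota errors as fatal for the run while degrading gracefully on transient proxy failures. A sketch with stand-in exception classes mirroring the SDK's names (the `safe_get` helper is hypothetical, not part of iploop):

```python
class QuotaExceeded(Exception):
    """Stand-in for the SDK's quota error."""

class ProxyError(Exception):
    """Stand-in for the SDK's proxy error."""

def safe_get(get, url, fallback=None):
    """Re-raise quota errors (nothing to do but upgrade or stop),
    but return a fallback value on transient proxy failures so one
    bad exit node does not crash the whole crawl."""
    try:
        return get(url)
    except QuotaExceeded:
        raise
    except ProxyError:
        return fallback
```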
FAQs
We found that iploop demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.