
FireRequests is a high-performance, asynchronous HTTP client library for Python, engineered to accelerate your file transfers. By harnessing semaphores, exponential backoff with jitter, concurrency, and fault tolerance, FireRequests can achieve up to a 10x real-world speedup in file downloads and uploads compared to traditional synchronous methods. It also enables scalable, parallelized LLM interactions with providers like OpenAI and Google.
- Built on asyncio, aiohttp, and aiofiles, boosting throughput for I/O-bound tasks.
- Uses asyncio.Semaphore to limit simultaneous tasks, optimizing performance by managing system resources effectively.
- Uses nest_asyncio, enabling reusable asyncio loops for both batch and interactive Jupyter use.

Install FireRequests using pip:
!pip install firerequests
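The "exponential backoff with jitter" mentioned in the overview is a standard retry strategy: each failed attempt doubles the maximum wait, and a random jitter spreads retries out so concurrent clients don't hammer the server in lockstep. A minimal, self-contained sketch of the idea (not FireRequests' actual implementation):

```python
import asyncio
import random

async def fetch_with_backoff(do_fetch, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry an async operation with exponential backoff plus full jitter."""
    for attempt in range(max_retries):
        try:
            return await do_fetch()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # Cap doubles each attempt; jitter picks a random point below it.
            delay = min(max_delay, base_delay * (2 ** attempt))
            await asyncio.sleep(random.uniform(0, delay))
```

Here `do_fetch` stands in for any coroutine that performs a request; the full-jitter variant (uniform between 0 and the cap) tends to reduce retry collisions better than fixed delays.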
Accelerate your downloads with just a few lines of code:
from firerequests import FireRequests
url = "https://mirror.clarkson.edu/zorinos/isos/17/Zorin-OS-17.2-Core-64-bit.iso"
fr = FireRequests()
fr.download(url)
!fr download https://mirror.clarkson.edu/zorinos/isos/17/Zorin-OS-17.2-Core-64-bit.iso
- urls (required): The URL to download the file from.
- --filenames (optional): The name to save the downloaded file. Defaults to the filename from the URL.
- --max_files (optional): The number of concurrent file chunks. Defaults to 10.
- --chunk_size (optional): The size of each chunk in bytes. Defaults to 2 * 1024 * 1024 (2 MB).
- --headers (optional): A dictionary of headers to include in the download request.
- --show_progress (optional): Whether to show a progress bar. Defaults to True for single file downloads, and False for multiple files.

FireRequests delivers significant performance improvements over traditional download methods. Below is the result of a real-world speed test:
Normal Download 🐌: 100%|██████████| 3.42G/3.42G [18:24<00:00, 3.10MB/s]
Downloading on 🔥: 100%|██████████| 3.42G/3.42G [02:38<00:00, 21.6MB/s]

🐌 Download Time: 1104.84 seconds
🔥 Download Time: 158.22 seconds
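The speedup comes from fetching many byte ranges of the file at once, with a semaphore capping how many are in flight. A minimal sketch of that pattern (the network call is stubbed out here, since the real FireRequests internals use aiohttp Range requests and may differ in detail):

```python
import asyncio

async def download_chunks(num_chunks, max_files=10):
    """Fetch file chunks concurrently, capping in-flight tasks with a semaphore."""
    sem = asyncio.Semaphore(max_files)
    results = {}

    async def fetch(idx):
        async with sem:  # at most max_files chunks in flight at once
            await asyncio.sleep(0)  # stand-in for an HTTP Range request
            results[idx] = b"x"     # stand-in for the chunk's bytes

    await asyncio.gather(*(fetch(i) for i in range(num_chunks)))
    # Reassemble chunks in order before writing them out
    return b"".join(results[i] for i in range(num_chunks))
```

Because each chunk is an independent range request, slow chunks don't block fast ones, which is where the throughput gain over a single sequential stream comes from.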
[!TIP] For Hugging Face Hub downloads, it is recommended to use hf_transfer for maximum speed gains! For more details, please take a look at this section.
from firerequests import FireRequests
urls = ["https://example.com/file1.iso", "https://example.com/file2.iso"]
filenames = ["file1.iso", "file2.iso"]
fr = FireRequests()
fr.download(urls, filenames, max_files=10, chunk_size=2 * 1024 * 1024, headers={"Authorization": "Bearer token"}, show_progress=True)
- urls: The URL or list of URLs of the file(s) to download.
- filenames: The filename(s) to save the downloaded file(s). If not provided, filenames are extracted from the URLs.
- max_files: The maximum number of concurrent chunk downloads. Defaults to 10.
- chunk_size: The size of each chunk in bytes. Defaults to 2 * 1024 * 1024 (2 MB).
- headers: A dictionary of headers to include in the download request (optional).
- show_progress: Whether to show a progress bar during download. Defaults to True for a single file, and False for multiple files (optional).

from firerequests import FireRequests
file_path = "largefile.iso"
parts_urls = ["https://example.com/upload_part1", "https://example.com/upload_part2", ...]
fr = FireRequests()
fr.upload(file_path, parts_urls, chunk_size=2 * 1024 * 1024, max_files=10, show_progress=True)
- file_path: The local path to the file to upload.
- parts_urls: A list of URLs where each part of the file will be uploaded.
- chunk_size: The size of each chunk in bytes. Defaults to 2 * 1024 * 1024 (2 MB).
- max_files: The maximum number of concurrent chunk uploads. Defaults to 10.
- show_progress: Whether to show a progress bar during upload. Defaults to True.

from firerequests import FireRequests
url = "https://example.com/largefile.iso"
fr = FireRequests()
fr.compare(url)
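Both download and upload work on fixed-size chunks (chunk_size, 2 MB by default). To make the parameter concrete, here is a hypothetical helper (not part of the FireRequests API) showing how a file size maps to the (offset, length) ranges that get transferred in parallel:

```python
def chunk_ranges(file_size, chunk_size=2 * 1024 * 1024):
    """Split a file of file_size bytes into (offset, length) pairs,
    one per chunk. The final chunk may be shorter than chunk_size."""
    return [
        (offset, min(chunk_size, file_size - offset))
        for offset in range(0, file_size, chunk_size)
    ]
```

For example, a 5 MB file with the default chunk size yields three ranges: two full 2 MB chunks and a trailing 1 MB chunk.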
FireRequests allows you to run LLM API calls (like OpenAI or Google) in parallel batches using a decorator. This keeps the library lightweight and lets users supply their own logic for calling APIs. This approach currently doesn't work in Colab.
from firerequests import FireRequests
# Initialize FireRequests
fr = FireRequests()
# Use the decorator to define your own prompt function
@fr.op(max_reqs=2, prompts=[
    "What is AI?",
    "Explain quantum computing.",
    "What is Bitcoin?",
    "Explain neural networks."
])
def generate(system: str = "Provide concise answers.", prompt: str = ""):
    # You can use OpenAI, Google, or any other LLM API here
    from openai import OpenAI
    import os

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt}
        ]
    )
    return response.choices[0].message.content
# Call your decorated function
responses = generate()
print(responses)
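Conceptually, a decorator like this maps your synchronous prompt function over the prompt list while a semaphore keeps at most max_reqs API calls in flight. A rough, hypothetical sketch of that batching pattern (the function and structure here are illustrative, not FireRequests' actual fr.op implementation):

```python
import asyncio

def run_batch(fn, prompts, max_reqs=2):
    """Call a synchronous prompt function once per prompt,
    with at most max_reqs calls running concurrently."""
    async def main():
        sem = asyncio.Semaphore(max_reqs)

        async def one(prompt):
            async with sem:
                # Offload the blocking API call to a worker thread
                return await asyncio.to_thread(fn, prompt=prompt)

        # gather preserves the order of the input prompts
        return await asyncio.gather(*(one(p) for p in prompts))

    return asyncio.run(main())
```

Running the blocking client in threads keeps the event loop free, so slow responses for one prompt don't delay the others beyond the max_reqs cap.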
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Become a sponsor and get a logo here. The funds are used to defray the cost of development.