Easily turn large sets of image urls to an image dataset. Can download, resize and package 100M urls in 20h on one machine.
Also supports saving captions for url+caption datasets.
If you believe in making reusable tools to make data easy to use for ML and you would like to contribute, please join the DataToML chat.
pip install img2dataset
For better performance, it's highly recommended to set up a fast DNS resolver; see this section
Websites can pass the HTTP headers X-Robots-Tag: noai, X-Robots-Tag: noindex, X-Robots-Tag: noimageai and X-Robots-Tag: noimageindex.
By default img2dataset will ignore images with such headers.
To disable this behavior and download all images, you may pass --disallowed_header_directives '[]'
See AI use impact to understand better why you may decide to enable or disable this feature.
Examples of datasets to download with example commands are available in the dataset_examples folder. In particular:
For all these examples, you may want to tweak the resizing to your preferences. The default is 256x256 with white borders. See options below.
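For instance, a minimal sketch of tweaking the resizing (the --resize_mode and --resize_only_if_bigger flags are taken from the tool's option list, and your_url_list.txt is a placeholder; check img2dataset --help on your version):
img2dataset --url_list=your_url_list.txt --output_folder=output_folder --image_size=384 --resize_mode=keep_ratio --resize_only_if_bigger=True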
First get some image url list. For example:
echo 'https://placekitten.com/200/305' >> myimglist.txt
echo 'https://placekitten.com/200/304' >> myimglist.txt
echo 'https://placekitten.com/200/303' >> myimglist.txt
Then, run the tool:
img2dataset --url_list=myimglist.txt --output_folder=output_folder --thread_count=64 --image_size=256
The tool will then automatically download the urls, resize them, and store them in this format:
or as this format if choosing webdataset:
with each number being the position in the list. The subfolders avoid having too many files in a single folder.
If captions are provided, they will be saved as 0.txt, 1.txt, ...
This can then easily be fed into machine learning training or any other use case.
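For illustration, a minimal sketch of reading the output for training, assuming --output_format webdataset was used and the webdataset package is installed (the shard name 00000.tar and the jpg/txt keys are the assumed defaults):

import webdataset as wds

# stream (image, caption) pairs out of one output shard
dataset = (
    wds.WebDataset("output_folder/00000.tar")
    .decode("pil")              # decode image bytes to PIL images
    .to_tuple("jpg", "txt")     # pair each image with its caption
)

for image, caption in dataset:
    print(image.size, caption[:50])
    break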
Also .json files named 0.json, 1.json,... are saved with these keys:
Also, a .parquet file will be saved with the same name as the subfolder/tar files, containing the same metadata. It can be used to analyze the results efficiently.
.json files will also be saved with the same name suffixed by _stats; they contain stats collected during downloading (download time, number of successes, ...)
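As a sketch, that parquet metadata can be inspected with pandas (the shard file name and the status column are assumptions based on the default output layout):

import pandas as pd

df = pd.read_parquet("output_folder/00000.parquet")
print(df.columns.tolist())            # list the metadata keys that were saved
print(df["status"].value_counts())    # e.g. how many downloads succeeded vs failed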
Check out these examples to call this as a library:
This module exposes a single function, download, which takes the same arguments as the command line tool:
[x_min, y_min, x_max, y_max], with all elements being floats in [0,1] (relative to the size of the image). If None, then no bounding box blurring is performed (default None).
If a first download got interrupted for any reason, you can run again with --incremental "incremental" (this is the default), using the same output folder, the same number_sample_per_shard and the same input urls, and img2dataset will complete the download.
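For example, a minimal call of the download function from Python, mirroring the command line example above (argument names follow the CLI flags):

from img2dataset import download

download(
    url_list="myimglist.txt",       # same input file as in the CLI example
    output_folder="output_folder",
    thread_count=64,
    image_size=256,
)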
Img2dataset supports several output formats. There are trade-offs when choosing between them:
Images can be encoded in jpeg, png or webp, with different quality settings.
Here are a few comparisons of space used for 1M images at 256 x 256:
format | quality | compression | size (GB) |
---|---|---|---|
jpg | 100 | N/A | 54.2 |
jpg | 95 | N/A | 29.9 |
png | N/A | 0 | 187.9 |
png | N/A | 9 | 97.7 |
webp | 100 | N/A | 31.0 |
webp | 95 | N/A | 23.8 |
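For instance, a hedged example of selecting the encoding from the table above (the --encode_format and --encode_quality flags are taken from the options list; confirm with img2dataset --help):
img2dataset --url_list=myimglist.txt --output_folder=output_folder --image_size=256 --encode_format=webp --encode_quality=95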
Notes:
Whenever feasible, you should pre-filter your dataset prior to downloading.
If needed, you can use:
When filtering data, it is recommended to pre-shuffle your dataset to limit the impact on shard size distribution.
Some datasets (for example laion5B) expose hashes of the original images.
If you want to be extra safe, you may automatically drop the images that do not match these hashes.
In that case you can use --compute_hash "md5" --verify_hash '["md5","md5"]'
Some of those images are actually still good but have been slightly changed by the websites.
The default values should be good enough for small-sized datasets. For larger ones, these tips may help you get the best performance:
To benchmark your system, and img2dataset's interactions with it, it may be interesting to enable these options (only for testing, not for real downloads)
Thanks to fsspec, img2dataset supports reading and writing files in many file systems.
To use it, simply use the prefix of your filesystem before the path, for example hdfs://, s3://, http://, gcs://, ssh:// or hf:// (which includes a Dataset Viewer).
Some of these file systems require installing an additional package (for example s3fs for s3, gcsfs for gcs, fsspec/sshfs for ssh, huggingface_hub for hf).
See fsspec doc for all the details.
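For example, writing directly to an S3-compatible store could look like this (the bucket path is a placeholder, and s3fs must be installed):
pip install s3fs
img2dataset --url_list=myimglist.txt --output_folder=s3://my-bucket/my-dataset --image_size=256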
If you need a specific configuration for your filesystem, you can use the fsspec configuration system, which makes it possible to create a file such as .config/fsspec/s3.json containing information such as:
{
"s3": {
"client_kwargs": {
"endpoint_url": "https://some_endpoint",
"aws_access_key_id": "your_user",
"aws_secret_access_key": "your_password"
}
}
}
This may be necessary when using s3-compatible file systems such as minio. That kind of configuration also works for all other fsspec-supported file systems.
Img2dataset supports several distributors.
multiprocessing is a good option for downloading on one machine, and as such it is the default. Pyspark lets img2dataset use many nodes, so throughput scales with the number of machines. It can be particularly useful when downloading datasets with more than a billion images.
In order to use img2dataset with pyspark, you will need to do this:
pip install pyspark
use the --distributor pyspark option
tweak the --subjob_size 1000 option: this is the number of images to download in each subjob. Increasing it means more time spent preparing the feather files in the temporary dir; decreasing it means sending fewer shards at a time to the pyspark job.
By default a local spark session will be created. You may want to create a custom spark session depending on your specific spark cluster. To do that, check pyspark_example.py; there you can plug in your custom code to create a spark session, then run img2dataset, which will use it for downloading.
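As a sketch of the pyspark path, in the spirit of pyspark_example.py (the master URL, memory setting and subjob size are placeholder values):

from pyspark.sql import SparkSession
from img2dataset import download

# create the spark session first; the pyspark distributor will reuse it
spark = (
    SparkSession.builder
    .master("local[16]")                      # or your cluster master URL
    .appName("img2dataset-download")
    .config("spark.driver.memory", "16G")
    .getOrCreate()
)

download(
    url_list="myimglist.txt",
    output_folder="output_folder",
    image_size=256,
    distributor="pyspark",
    subjob_size=1000,
)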
To create a spark cluster check the distributed img2dataset tutorial
To enable wandb, use the --enable_wandb=True option.
Performance metrics are monitored through Weights & Biases.
In addition, most frequent errors are logged for easier debugging.
Other features are available:
When running the script for the first time, you can decide to either associate your metrics to your account or log them anonymously.
You can also log in (or create an account) beforehand by running wandb login.
This tool works very well in the current state for up to 100M elements. Future goals include:
This tool is designed to download pictures as fast as possible. This puts stress on various kinds of resources. Some numbers assuming 1350 image/s:
With this information in mind, the design choices were made as follows:
This design makes it possible to use the CPU resource efficiently by doing only 1 resize per core, reduces disk overhead by opening 1 file per core, and uses the bandwidth resource as much as possible by using M threads per process.
Also see architecture.md for the precise split in python modules.
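As an illustration of that pattern (a simplified sketch only, not the actual img2dataset code; see architecture.md for the real modules):

from multiprocessing import Pool
from concurrent.futures import ThreadPoolExecutor

import requests

def process_shard(args):
    shard_id, urls = args
    # M threads per process: downloading is I/O bound
    with ThreadPoolExecutor(max_workers=64) as executor:
        payloads = list(executor.map(lambda u: requests.get(u, timeout=10).content, urls))
    # one output file per process keeps disk overhead low;
    # a real implementation would also resize/re-encode each image here (1 resize per core)
    with open(f"shard_{shard_id}.bin", "wb") as f:
        for payload in payloads:
            f.write(payload)
    return shard_id

if __name__ == "__main__":
    shards = [(0, ["https://placekitten.com/200/305"]), (1, ["https://placekitten.com/200/304"])]
    with Pool(processes=2) as pool:
        print(pool.map(process_shard, shards))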
To get the best performance with img2dataset, an efficient DNS resolver is needed.
Follow the official quick start or run this on ubuntu:
install knot with
wget https://secure.nic.cz/files/knot-resolver/knot-resolver-release.deb
sudo dpkg -i knot-resolver-release.deb
sudo apt update
sudo apt install -y knot-resolver
sudo sh -c 'echo `hostname -I` `hostname` >> /etc/hosts'
sudo sh -c 'echo nameserver 127.0.0.1 > /etc/resolv.conf'
sudo systemctl stop systemd-resolved
then start 4 instances with
sudo systemctl start kresd@1.service
sudo systemctl start kresd@2.service
sudo systemctl start kresd@3.service
sudo systemctl start kresd@4.service
Check it works with
dig @localhost google.com
In order to keep the success rate high, it is necessary to use an efficient DNS resolver. I tried several options: systemd-resolved, dnsmasq and bind9, and concluded that bind9 offers the best performance for this use case. Here is how to set this up on Ubuntu. Run:
sudo apt install bind9
sudo vim /etc/bind/named.conf.options
And add this in options:
recursive-clients 10000;
resolver-query-timeout 30000;
max-clients-per-query 10000;
max-cache-size 2000m;
Then, run:
sudo systemctl restart bind9
echo nameserver 127.0.0.1 | sudo tee -a /etc/resolv.conf
This will make it possible to keep a high success rate while doing thousands of DNS queries. You may also want to set up bind9 logging in order to check that few DNS errors happen.
img2dataset is used to retrieve images from the web and make them easily available for ML use cases. Use cases include:
Models that can be trained using image/text datasets include:
There is a lot of discussion regarding the consequences of text-to-image models. Some opinions include:
The opt-out directives aim to let creators who do not want to share their art prevent it from being used for indexing and for training.
Either locally, or in gitpod (do export PIP_USER=false there).
Set up a virtualenv:
python3 -m venv .env
source .env/bin/activate
pip install -e .
To run tests:
pip install -r requirements-test.txt
then
make lint
make test
You can use make black to reformat the code.
To run a specific test:
python -m pytest -x -s -v tests -k "dummy"
cd tests/test_files
bash benchmark.sh
Download crawling at home first part, then:
cd tests
bash large_bench.sh
It takes 3.7h to download 18M pictures
1350 images/s is the currently observed performance. 4.8M images per hour, 116M images per 24h.
downloading 2 parquet files of 18M items (result 936GB) took 7h24, average of 1345 image/s
downloading 190M images from the crawling at home dataset took 41h (result 5TB) average of 1280 image/s
downloading 5.8B images from the laion5B dataset took 7 days (result 240TB), average of 9500 sample/s on 10 machines, technical details
@misc{beaumont-2021-img2dataset,
author = {Romain Beaumont},
title = {img2dataset: Easily turn large sets of image urls to an image dataset},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/rom1504/img2dataset}}
}