
A toolkit for converting PDFs and other image-based document formats into clean, readable plain text.
Try the online demo: https://olmocr.allenai.org/
Features:
olmOCR-Bench: We also ship a comprehensive benchmark suite covering over 7,000 test cases across 1,400 documents to help measure the performance of OCR systems.
| Model | ArXiv | Old Scans Math | Tables | Old Scans | Headers and Footers | Multi column | Long tiny text | Base | Overall |
|---|---|---|---|---|---|---|---|---|---|
| Marker v1.7.5 (base, force_ocr) | 76.0 | 57.9 | 57.6 | 27.8 | 84.9 | 72.9 | 84.6 | 99.1 | 70.1 ± 1.1 |
| MinerU v1.3.10 | 75.4 | 47.4 | 60.9 | 17.3 | 96.6 | 59.0 | 39.1 | 96.6 | 61.5 ± 1.1 |
| Mistral OCR API | 77.2 | 67.5 | 60.6 | 29.3 | 93.6 | 71.3 | 77.1 | 99.4 | 72.0 ± 1.1 |
| olmOCR v0.1.75 (Anchored) | 74.9 | 71.2 | 71.0 | 42.2 | 94.5 | 78.3 | 73.3 | 98.3 | 75.5 ± 1.0 |
| olmOCR v0.2.0 | 78.8 | 77.5 | 71.9 | 45.4 | 94.2 | 78.6 | 81.4 | 99.8 | 78.5 ± 1.1 |
Requirements:
You will need to install poppler-utils and additional fonts for rendering PDF images.
Install dependencies (Ubuntu/Debian)
sudo apt-get update
sudo apt-get install poppler-utils ttf-mscorefonts-installer msttcorefonts fonts-crosextra-caladea fonts-crosextra-carlito gsfonts lcdf-typetools
Set up a conda environment and install olmocr. The requirements for running olmOCR are difficult to install into an existing Python environment, so please create a clean Python environment to install into.
conda create -n olmocr python=3.11
conda activate olmocr
# For CPU-only operations, e.g., running the benchmark
pip install olmocr[bench]
# For actually converting the files with your own GPU
pip install olmocr[gpu] --extra-index-url https://download.pytorch.org/whl/cu128
# Recommended: Install FlashInfer for faster inference on GPU
pip install https://download.pytorch.org/whl/cu128/flashinfer/flashinfer_python-0.2.5%2Bcu128torch2.7-cp38-abi3-linux_x86_64.whl
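After installing the GPU extras, one quick sanity check is to confirm that PyTorch can see your CUDA device. This assumes the [gpu] extra pulled in a CUDA-enabled torch build:
# Optional sanity check (assumes a CUDA-enabled torch build is installed)
python -c "import torch; print(torch.cuda.is_available())"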
For quick testing, try the web demo. To run locally, a GPU is required, as inference is powered by sglang under the hood.
Convert a Single PDF:
# Download a sample PDF
curl -o olmocr-sample.pdf https://olmocr.allenai.org/papers/olmocr_3pg_sample.pdf
# Convert it to markdown
python -m olmocr.pipeline ./localworkspace --markdown --pdfs olmocr-sample.pdf
Convert an Image file:
python -m olmocr.pipeline ./localworkspace --markdown --pdfs random_page.png
Convert Multiple PDFs:
python -m olmocr.pipeline ./localworkspace --markdown --pdfs tests/gnarly_pdfs/*.pdf
With the addition of the --markdown flag, results will be stored as markdown files inside of ./localworkspace/markdown/. The ./localworkspace/ workspace folder will then have both Dolma and markdown files (if using --markdown).
cat localworkspace/markdown/olmocr-sample.md
olmOCR: Unlocking Trillions of Tokens in PDFs with Vision Language Models
...
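The Dolma output is JSON Lines, so it can also be scanned programmatically. Below is a minimal sketch, assuming the pipeline wrote results under localworkspace/results/ and that each record carries a "text" field; check your workspace layout and adjust if it differs:
# Minimal sketch: iterate over Dolma JSONL results (paths and field names are assumptions)
import glob
import json

for path in glob.glob("localworkspace/results/*.jsonl"):
    with open(path) as f:
        for line in f:
            doc = json.loads(line)
            # Preview the first 200 characters of the extracted text
            print(doc.get("text", "")[:200])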
If you want to convert millions of PDFs using multiple nodes running in parallel, olmOCR supports reading your PDFs from AWS S3 and coordinating work using an AWS S3 output bucket.
For example, you can start this command on your first worker node, and it will set up a simple work queue in your AWS bucket and start converting PDFs.
python -m olmocr.pipeline s3://my_s3_bucket/pdfworkspaces/exampleworkspace --pdfs s3://my_s3_bucket/jakep/gnarly_pdfs/*.pdf
Now on any subsequent nodes, just run this and they will start grabbing items from the same workspace queue.
python -m olmocr.pipeline s3://my_s3_bucket/pdfworkspaces/exampleworkspace
If you are at Ai2 and want to linearize millions of PDFs efficiently using beaker, just add the --beaker flag. This will prepare the workspace on your local machine, and then launch N GPU workers in the cluster to start converting PDFs.
For example:
python -m olmocr.pipeline s3://my_s3_bucket/pdfworkspaces/exampleworkspace --pdfs s3://my_s3_bucket/jakep/gnarly_pdfs/*.pdf --beaker --beaker_gpus 4
Pull the Docker image.
docker pull alleninstituteforai/olmocr:latest
To run the container interactively:
docker run -it --gpus all --name olmocr_container alleninstituteforai/olmocr:latest /bin/bash
If you want to access your local files inside the container, use volume mounting:
docker run -it --gpus all \
-v /path/to/your/local/files:/local_files \
--name olmocr_container \
alleninstituteforai/olmocr:latest /bin/bash
All dependencies are already installed. Once you’re inside the container, you can run olmOCR commands. For example:
curl -o olmocr-sample.pdf https://olmocr.allenai.org/papers/olmocr_3pg_sample.pdf
python -m olmocr.pipeline ./localworkspace --markdown --pdfs olmocr-sample.pdf
You can also visit our Docker repository on Docker Hub.
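If you would rather not open an interactive shell, a one-shot run along these lines should also work; the host path and the sample PDF name here are illustrative, not part of the image:
# One-shot conversion without an interactive shell (host path and PDF name are illustrative)
docker run --rm --gpus all \
  -v /path/to/your/local/files:/local_files \
  alleninstituteforai/olmocr:latest \
  python -m olmocr.pipeline /local_files/localworkspace --markdown --pdfs /local_files/olmocr-sample.pdf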
python -m olmocr.pipeline --help
usage: pipeline.py [-h] [--pdfs [PDFS ...]] [--workspace_profile WORKSPACE_PROFILE] [--pdf_profile PDF_PROFILE] [--pages_per_group PAGES_PER_GROUP] [--max_page_retries MAX_PAGE_RETRIES]
[--max_page_error_rate MAX_PAGE_ERROR_RATE] [--workers WORKERS] [--apply_filter] [--stats] [--markdown] [--model MODEL] [--gpu-memory-utilization GPU_MEMORY_UTILIZATION]
[--max_model_len MAX_MODEL_LEN] [--model_max_context MODEL_MAX_CONTEXT] [--model_chat_template MODEL_CHAT_TEMPLATE] [--target_longest_image_dim TARGET_LONGEST_IMAGE_DIM]
[--target_anchor_text_len TARGET_ANCHOR_TEXT_LEN] [--beaker] [--beaker_workspace BEAKER_WORKSPACE] [--beaker_cluster BEAKER_CLUSTER] [--beaker_gpus BEAKER_GPUS]
[--beaker_priority BEAKER_PRIORITY] [--port PORT] [--tensor-parallel-size TENSOR_PARALLEL_SIZE] [--data-parallel-size DATA_PARALLEL_SIZE]
workspace
Manager for running millions of PDFs through a batch inference pipeline
positional arguments:
workspace The filesystem path where work will be stored, can be a local folder, or an s3 path if coordinating work with many workers, s3://bucket/prefix/
options:
-h, --help show this help message and exit
--pdfs PDFS Path to add pdfs stored in s3 to the workspace, can be a glob path s3://bucket/prefix/*.pdf or path to file containing list of pdf paths
--workspace_profile WORKSPACE_PROFILE
S3 configuration profile for accessing the workspace
--pdf_profile PDF_PROFILE
S3 configuration profile for accessing the raw pdf documents
--pages_per_group PAGES_PER_GROUP
Aiming for this many pdf pages per work item group
--max_page_retries MAX_PAGE_RETRIES
Max number of times we will retry rendering a page
--max_page_error_rate MAX_PAGE_ERROR_RATE
Rate of allowable failed pages in a document, 1/250 by default
--workers WORKERS Number of workers to run at a time
--apply_filter Apply basic filtering to English pdfs which are not forms, and not likely seo spam
--stats Instead of running any job, reports some statistics about the current workspace
--markdown Also write natural text to markdown files preserving the folder structure of the input pdfs
--model MODEL List of paths where you can find the model to convert this pdf. You can specify several different paths here, and the script will try to use the
one which is fastest to access
--gpu-memory-utilization GPU_MEMORY_UTILIZATION
Fraction of VRAM vLLM may pre-allocate for KV-cache (passed through to vllm serve).
--max_model_len MAX_MODEL_LEN
Upper bound (tokens) vLLM will allocate KV-cache for; passed through to vllm serve as --max-model-len.
--model_max_context MODEL_MAX_CONTEXT
Maximum context length that the model was fine tuned under
--model_chat_template MODEL_CHAT_TEMPLATE
Chat template to pass to sglang server
--target_longest_image_dim TARGET_LONGEST_IMAGE_DIM
Dimension on longest side to use for rendering the pdf pages
--target_anchor_text_len TARGET_ANCHOR_TEXT_LEN
Maximum amount of anchor text to use (characters)
--beaker Submit this job to beaker instead of running locally
--beaker_workspace BEAKER_WORKSPACE
Beaker workspace to submit to
--beaker_cluster BEAKER_CLUSTER
Beaker clusters you want to run on
--beaker_gpus BEAKER_GPUS
Number of gpu replicas to run
--beaker_priority BEAKER_PRIORITY
Beaker priority level for the job
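Putting a few of these options together, a typical local GPU run might look like the sketch below. The flag names come from the help output above, but the specific values are illustrative and should be tuned for your hardware:
# Illustrative local run combining documented flags; values are examples only
python -m olmocr.pipeline ./localworkspace \
  --markdown \
  --workers 8 \
  --gpu-memory-utilization 0.85 \
  --pdfs tests/gnarly_pdfs/*.pdf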
There are some nice reusable pieces of the code that may be useful for your own projects.
olmOCR is developed and maintained by the AllenNLP team, backed by the Allen Institute for Artificial Intelligence (AI2). AI2 is a non-profit institute with the mission to contribute to humanity through high-impact AI research and engineering. To learn more about who specifically contributed to this codebase, see our contributors page.
olmOCR is licensed under Apache 2.0. A full copy of the license can be found on GitHub.
@misc{olmocr,
title={{olmOCR: Unlocking Trillions of Tokens in PDFs with Vision Language Models}},
author={Jake Poznanski and Jon Borchardt and Jason Dunkelberger and Regan Huff and Daniel Lin and Aman Rangapur and Christopher Wilhelm and Kyle Lo and Luca Soldaini},
year={2025},
eprint={2502.18443},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.18443},
}