
chandra-ocr
Chandra is a highly accurate OCR model that converts images and PDFs into structured HTML/Markdown/JSON while preserving layout information.
The easiest way to start is with the CLI tools:

```shell
pip install chandra-ocr

# With vLLM
chandra_vllm
chandra input.pdf ./output

# With Hugging Face
chandra input.pdf ./output --method hf

# Interactive Streamlit app
chandra_app
```
Chandra achieves the highest overall score on the olmOCR benchmark; see the full per-category scores in the table below.
Example documents:

| Type | Name |
|---|---|
| Tables | Water Damage Form |
| Tables | 10K Filing |
| Forms | Handwritten Form |
| Forms | Lease Agreement |
| Handwriting | Doctor Note |
| Handwriting | Math Homework |
| Books | Geography Textbook |
| Books | Exercise Problems |
| Math | Attention Diagram |
| Math | Worksheet |
| Math | EGA Page |
| Newspapers | New York Times |
| Newspapers | LA Times |
| Other | Transcript |
| Other | Flowchart |
Discord is where we discuss future development.
```shell
pip install chandra-ocr
```

If you're going to use the Hugging Face method, we also recommend installing flash attention.

To install from source:

```shell
git clone https://github.com/datalab-to/chandra.git
cd chandra
uv sync
source .venv/bin/activate
```
Process single files or entire directories:

```shell
# Single file, with vLLM server (see below for how to launch vLLM)
chandra input.pdf ./output --method vllm

# Process all files in a directory with the local model
chandra ./documents ./output --method hf
```
CLI options:

- `--method [hf|vllm]`: Inference method (default: `vllm`)
- `--page-range TEXT`: Page range for PDFs (e.g., `"1-5,7,9-12"`)
- `--max-output-tokens INTEGER`: Max tokens per page
- `--max-workers INTEGER`: Parallel workers for vLLM
- `--include-images/--no-images`: Extract and save images (default: include)
- `--include-headers-footers/--no-headers-footers`: Include page headers/footers (default: exclude)
- `--batch-size INTEGER`: Pages per batch (default: 1)
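The `--page-range` syntax accepts comma-separated single pages and inclusive ranges. A minimal sketch of a parser for that format (the `parse_page_range` helper is hypothetical, not part of the chandra CLI):

```python
def parse_page_range(spec: str) -> list[int]:
    """Expand a page-range spec like "1-5,7,9-12" into a sorted list of pages."""
    pages: set[int] = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            # Inclusive range, e.g. "9-12" -> 9, 10, 11, 12
            start, end = part.split("-", 1)
            pages.update(range(int(start), int(end) + 1))
        else:
            # Single page, e.g. "7"
            pages.add(int(part))
    return sorted(pages)
```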
Output structure: each processed file creates a subdirectory with:

- `<filename>.md`: Markdown output
- `<filename>.html`: HTML output
- `<filename>_metadata.json`: Metadata (page info, token count, etc.)
- `images/`: Extracted images from the document

Launch the interactive demo for single-page processing:
```shell
chandra_app
```
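The per-file output layout described above is straightforward to consume programmatically. A minimal sketch, assuming the directory layout listed earlier (the `collect_results` helper is hypothetical, not part of the chandra package):

```python
import json
from pathlib import Path


def collect_results(output_dir: str) -> dict[str, dict]:
    """Gather per-document outputs from a chandra output directory.

    Assumes each subdirectory holds <name>.md and <name>_metadata.json,
    as described in the output-structure section above.
    """
    results: dict[str, dict] = {}
    for sub in Path(output_dir).iterdir():
        if not sub.is_dir():
            continue
        meta_files = list(sub.glob("*_metadata.json"))
        md_files = [p for p in sub.glob("*.md")]
        results[sub.name] = {
            "markdown": md_files[0].read_text() if md_files else "",
            "metadata": json.loads(meta_files[0].read_text()) if meta_files else {},
        }
    return results
```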
For production deployments or batch processing, use the vLLM server:

```shell
chandra_vllm
```

This launches a Docker container with optimized inference settings. Configure via environment variables:

- `VLLM_API_BASE`: Server URL (default: `http://localhost:8000/v1`)
- `VLLM_MODEL_NAME`: Model name for the server (default: `chandra`)
- `VLLM_GPUS`: GPU device IDs (default: `0`)

You can also start your own vLLM server with the `datalab-to/chandra` model.
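Downstream code can resolve these connection settings the same way the CLI does, falling back to the documented defaults when a variable is unset. A sketch (the `vllm_settings` helper is hypothetical):

```python
import os


def vllm_settings() -> dict[str, str]:
    """Resolve vLLM connection settings from the environment,
    using the defaults documented above when a variable is unset."""
    return {
        "api_base": os.environ.get("VLLM_API_BASE", "http://localhost:8000/v1"),
        "model": os.environ.get("VLLM_MODEL_NAME", "chandra"),
        "gpus": os.environ.get("VLLM_GPUS", "0"),
    }
```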
Settings can be configured via environment variables or a local `.env` file:

```shell
# Model settings
MODEL_CHECKPOINT=datalab-to/chandra
MAX_OUTPUT_TOKENS=8192

# vLLM settings
VLLM_API_BASE=http://localhost:8000/v1
VLLM_MODEL_NAME=chandra
VLLM_GPUS=0
```
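If you want to read a `.env` file like the one above without extra dependencies, a minimal loader looks like this (a sketch; the `load_env_file` helper is hypothetical, and packages such as python-dotenv handle quoting and edge cases more robustly):

```python
import os


def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines become environment variables.

    Blank lines and # comments are skipped; existing environment
    variables are not overwritten.
    """
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        # No .env file is fine; fall back to the process environment.
        pass
```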
The code is Apache 2.0 licensed, and the model weights use a modified OpenRAIL-M license (free for research, personal use, and startups under $2M in funding or revenue; it cannot be used to compete with our API). To remove the OpenRAIL license requirements, or for broader commercial licensing, visit our pricing page.
| Model | ArXiv | Old Scans Math | Tables | Old Scans | Headers and Footers | Multi column | Long tiny text | Base | Overall | Source |
|---|---|---|---|---|---|---|---|---|---|---|
| Datalab Chandra v0.1.0 | 82.2 | 80.3 | 88.0 | 50.4 | 90.8 | 81.2 | 92.3 | 99.9 | 83.1 ± 0.9 | Own benchmarks |
| Datalab Marker v1.10.0 | 83.8 | 69.7 | 74.8 | 32.3 | 86.6 | 79.4 | 85.7 | 99.6 | 76.5 ± 1.0 | Own benchmarks |
| Mistral OCR API | 77.2 | 67.5 | 60.6 | 29.3 | 93.6 | 71.3 | 77.1 | 99.4 | 72.0 ± 1.1 | olmocr repo |
| Deepseek OCR | 75.2 | 72.3 | 79.7 | 33.3 | 96.1 | 66.7 | 80.1 | 99.7 | 75.4 ± 1.0 | Own benchmarks |
| GPT-4o (Anchored) | 53.5 | 74.5 | 70.0 | 40.7 | 93.8 | 69.3 | 60.6 | 96.8 | 69.9 ± 1.1 | olmocr repo |
| Gemini Flash 2 (Anchored) | 54.5 | 56.1 | 72.1 | 34.2 | 64.7 | 61.5 | 71.5 | 95.6 | 63.8 ± 1.2 | olmocr repo |
| Qwen 3 VL 8B | 70.2 | 75.1 | 45.6 | 37.5 | 89.1 | 62.1 | 43.0 | 94.3 | 64.6 ± 1.1 | Own benchmarks |
| olmOCR v0.3.0 | 78.6 | 79.9 | 72.9 | 43.9 | 95.1 | 77.3 | 81.2 | 98.9 | 78.5 ± 1.1 | olmocr repo |
| dots.ocr | 82.1 | 64.2 | 88.3 | 40.9 | 94.1 | 82.4 | 81.2 | 99.5 | 79.1 ± 1.0 | dots.ocr repo |
Thank you to the open source projects this work builds on.