Internet Archive PDF tools
##########################

:authors: - Merlijn Wajer <merlijn@archive.org>
:date: 2021-11-14 18:00

This repository contains a library to perform MRC (Mixed Raster Content)
compression on images [*]_, which offers high (lossy) compression, in
particular for images containing text.

Additionally, the library can generate MRC-compressed PDF files with hOCR [*]_
text layers mixed into the PDF, which makes the PDF searchable and its text
copy-pasteable. PDFs generated by ``bin/recode_pdf`` should be PDF/A 3b and
PDF/UA compatible.

Some of the tooling also supports specific Internet Archive file formats (such
as the "scandata.xml" files), but the tooling should work fine without those
files, too.

While the code is already being used internally to create PDFs at the Internet
Archive, the code still needs more documentation and cleaning up, so don't
expect this to be super well documented just yet.

Features
========

- Reliable: has produced over 6 million PDFs in 2021 alone (each with many
  hundreds of pages)
- Fast and robust compression: competes directly with proprietary software
  offerings when it comes to speed and compressibility (often outperforming
  them in both)
- MRC compression of images, leading to anywhere from 3-15x compression ratios,
  depending on the quality setting provided
- Creates a PDF from a directory of images
- Improved compression based on OCR results (hOCR files)
- Hidden text layer insertion based on hOCR files, which makes a PDF searchable
  and its text copy-pasteable
- PDF/A 3b compatible
- Basic PDF/UA support (accessibility features)
- Creation of 1-bit (black and white) PDFs

Dependencies
============

- Python 3.x
- Python packages (also see ``requirements.txt``):

  - PyMuPDF
  - lxml
  - scikit-image
  - Pillow
  - roman
  - `archive-hocr-tools <https://github.com/internetarchive/archive-hocr-tools>`_

- One of:

  - `Kakadu JPEG2000 binaries <https://kakadusoftware.com/>`_
  - Open source OpenJPEG2000 tools (``opj_compress`` and ``opj_decompress``)
  - `Grok <https://github.com/GrokImageCompression/grok/>`_ (``grk_compress``
    and ``grk_decompress``)
  - `jpegoptim <https://github.com/tjko/jpegoptim>`_ (when using JPEG instead
    of JPEG2000)

- For JBIG2 compression:

  - `jbig2enc <https://github.com/agl/jbig2enc>`_ for JBIG2 compression (and
    PyMuPDF 1.19.0 or higher)

Installation
============

First install the dependencies. For example, on Ubuntu::

    sudo apt install libleptonica-dev libopenjp2-tools libxml2-dev libxslt-dev python3-dev python3-pip
    sudo apt install automake libtool
    git clone https://github.com/agl/jbig2enc
    cd jbig2enc
    ./autogen.sh
    ./configure && make
    sudo make install

Because ``archive-pdf-tools`` is on the `Python Package Index
<https://pypi.org/project/archive-pdf-tools/>`_ (PyPI), you can use ``pip``
(the Python 3 version is often called ``pip3``) to install the latest
version::

    # Latest version
    pip3 install archive-pdf-tools

    # Specific version
    pip3 install archive-pdf-tools==1.4.14

Alternatively, if you want a specific commit or unreleased version, check out
the master branch or a `tagged release
<https://github.com/internetarchive/archive-pdf-tools/tags>`_ and use ``pip``
to install::

    git clone https://github.com/internetarchive/archive-pdf-tools.git
    cd archive-pdf-tools
    pip3 install .

Finally, if you've downloaded a wheel to test a specific commit, you can also
install it using ``pip``::

    pip3 install --force-reinstall -U --no-deps ./archive_pdf_tools-${version}.whl

To see if ``archive-pdf-tools`` is installed correctly for your user, run::

    recode_pdf --version

Not well tested features
========================

- "Recoding" an existing PDF, extracting the images and creating a new PDF with
the images from the existing PDF is not well tested. This works OK if every
PDF page just has a single image.
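
For the single-image-per-page case, one possible workaround is to dump each
page's image to an image stack first and then run ``recode_pdf
--from-imagestack`` on that directory. A minimal sketch using PyMuPDF (the
file names and the one-image-per-page assumption are illustrative, this is not
part of the tooling)::

    import os
    import fitz  # PyMuPDF

    os.makedirs("stack", exist_ok=True)
    doc = fitz.open("existing.pdf")
    for pageno, page in enumerate(doc):
        images = page.get_images(full=True)
        assert len(images) == 1, f"page {pageno} has {len(images)} images, expected 1"
        pix = fitz.Pixmap(doc, images[0][0])     # images[0][0] is the image xref
        if pix.colorspace and pix.colorspace.n > 3:
            pix = fitz.Pixmap(fitz.csRGB, pix)   # e.g. convert CMYK to RGB before saving
        pix.save(f"stack/{pageno:06d}.png")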

Known issues
============

- Using ``--image-mode 0`` and ``--image-mode 1`` is currently broken, so only
  MRC or no images is supported.
- It is not possible to recode/compress a PDF without hOCR files. This will be
  addressed in the future, since it should not be a problem to generate a PDF
  lacking hOCR data.

Planned features
================

- Addition of a second set of fonts in the PDFs, so that hidden selected text
  also renders the original glyphs.
- Better background generation (text shade removal from the background).
- Better compression parameter selection; I have not toyed around that much
  with Kakadu and Grok/OpenJPEG2000 parameters.

MRC
===

The goal of Mixed Raster Content compression is to decompose the image into a
background, foreground and mask. The background should contain components that
are not of particular interest, whereas the foreground would contain all
glyphs/text on a page, as well as the lines and edges of various drawings or
images. The mask is a 1-bit image which has the value '1' when a pixel is part
of the foreground.

This decomposition can then be used to compress the different components
individually, applying much higher compression to specific components, usually
the background, which can be downscaled as well. The foreground can also be
compressed quite heavily, since it mostly just needs to contain the approximate
colours of the text and other lines - any artifacts introduced during the
foreground compression (e.g. ugly artifacts around text borders) are removed by
overlaying the mask component of the image, which is losslessly compressed
(typically using either JBIG2 or CCITT).

In a PDF, this usually means the background image is inserted into a page,
followed by the foreground image, which uses the mask as its alpha layer.
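
As a rough illustration of the idea only (this is a toy sketch, not the
algorithm used by this library, which among other things uses the hOCR data to
improve its results), a decomposition built from the scikit-image, Pillow and
numpy dependencies could look like this::

    import numpy as np
    from PIL import Image
    from skimage.filters import threshold_otsu

    img = np.array(Image.open("page.png").convert("RGB"))
    gray = np.array(Image.open("page.png").convert("L"))

    # Mask: '1' wherever a pixel is considered foreground (dark text/lines).
    mask = gray < threshold_otsu(gray)

    # Foreground: keep the colours of the masked pixels; everything else can be
    # destroyed by compression, since the mask hides it when layers recombine.
    foreground = np.where(mask[..., None], img, 0).astype(np.uint8)

    # Background: paint over the masked pixels so no text survives the heavy
    # compression and downscaling applied to this layer (a real implementation
    # would fill with nearby background colours instead of plain white).
    background = np.where(mask[..., None], 255, img).astype(np.uint8)

    Image.fromarray(mask.astype(np.uint8) * 255).save("mask.png")  # lossless (JBIG2/CCITT)
    Image.fromarray(foreground).save("foreground.png")             # heavy lossy compression
    Image.fromarray(background).save("background.png")             # lossy compression + downscale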

Usage
=====

Creating a PDF from a set of images is pretty straightforward::

    recode_pdf --from-imagestack 'sim_english-illustrated-magazine_1884-12_2_15_jp2/*' \
        --hocr-file sim_english-illustrated-magazine_1884-12_2_15_hocr.html \
        --dpi 400 --bg-downsample 3 \
        -m 2 -t 10 --mask-compression jbig2 \
        -o /tmp/example.pdf
    [...]
    Processed 9 pages at 1.16 seconds/page
    Compression ratio: 7.144962

Or, to scan a document, OCR it with Tesseract and save the result as a
compressed PDF (JPEG2000 compression with OpenJPEG, background downsampled
three times), with a text layer::

    scanimage --resolution 300 --mode Color --format tiff | tee /tmp/scan.tiff | tesseract - - hocr > /tmp/scan.hocr ; recode_pdf -v -J openjpeg --bg-downsample 3 --from-imagestack /tmp/scan.tiff --hocr-file /tmp/scan.hocr -o /tmp/scan.pdf
    [...]
    Processed 1 pages at 11.40 seconds/page
    Compression ratio: 249.876613
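
To quickly verify that the hidden hOCR text layer actually ended up in the
output, the text can be extracted again with PyMuPDF (already a dependency);
this is just a sanity check, not part of the tooling::

    import fitz  # PyMuPDF

    doc = fitz.open("/tmp/scan.pdf")
    for pageno, page in enumerate(doc):
        # The visible page content is just images; any text reported here
        # comes from the invisible hOCR-derived text layer.
        text = page.get_text().strip()
        print(f"page {pageno}: {len(text)} characters of hidden text")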

Examining the results
=====================

``mrcview`` (tools/mrcview) is shipped with the package and can be used to turn
an MRC-compressed PDF into a PDF with each layer on a separate page; this is
the easiest way to inspect the resulting compression. Run it like so::

    mrcview /tmp/compressed.pdf /tmp/mrc.pdf

There is also ``maskview``, which just renders the masks of a PDF to another
PDF.

Alternatively, one could use ``pdfimages`` to extract the image layers of a
specific page and then view them with your favourite image viewer::

    pageno=0; pdfimages -f $pageno -l $pageno -png path_to_pdf extracted_image_base
    feh extracted_image_base*.png

``tools/pdfimagesmrc`` can be used to check how the size of the PDF is broken
down into the foreground, background, masks and text layer.
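
As a rough, hand-rolled alternative, the stored stream sizes can also be summed
per filter type with PyMuPDF. This sketch only approximates what
``tools/pdfimagesmrc`` reports: it groups masks by their compression filter and
lumps the text layer in with the other streams::

    import fitz  # PyMuPDF

    doc = fitz.open("/tmp/compressed.pdf")
    totals = {"masks (JBIG2/CCITT)": 0, "images (JPEG/JPEG2000)": 0, "other streams": 0}
    for xref in range(1, doc.xref_length()):
        if not doc.xref_is_stream(xref):
            continue
        size = len(doc.xref_stream_raw(xref))   # size as stored, still compressed
        obj = doc.xref_object(xref)             # textual object definition
        if "/Image" not in obj:
            totals["other streams"] += size     # fonts, content/text streams, etc.
        elif "/JBIG2Decode" in obj or "/CCITTFaxDecode" in obj:
            totals["masks (JBIG2/CCITT)"] += size
        else:
            totals["images (JPEG/JPEG2000)"] += size

    for kind, size in totals.items():
        print(f"{kind}: {size / 1024:.1f} KiB")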

License
=======

License for all code (minus ``internetarchive/pdfrenderer.py``) is AGPL 3.0.
``internetarchive/pdfrenderer.py`` is Apache 2.0, which matches the Tesseract
license for that file.

.. [*] https://en.wikipedia.org/wiki/Mixed_raster_content
.. [*] http://kba.cloud/hocr-spec/1.2/