Comparison of two PDF files

I need to compare the contents of two nearly identical files and highlight the differing portions in the corresponding PDF file. I am using PDFBox. Please help me at least with the logic.

If you prefer a tool with a GUI, you could try diffpdf. It's by Mark Summerfield, and since it's written with Qt, it should be available (or buildable) on all platforms where Qt runs.

You can do the same thing with a shell script on Linux. The script wraps 3 components:
ImageMagick's compare command
the pdftk utility
Ghostscript
It's rather easy to translate this into a .bat Batch file for DOS/Windows...
Here are the building blocks:
pdftk
Use this command to split multi-page PDF files into multiple single-page PDFs:
pdftk first.pdf burst output somewhere/firstpdf_page_%03d.pdf
pdftk 2nd.pdf burst output somewhere/2ndpdf_page_%03d.pdf
compare
Use this command to create a "diff" PDF page for each pair of pages:
compare \
-verbose \
-debug coder -log "%u %m:%l %e" \
somewhere/firstpdf_page_001.pdf \
somewhere/2ndpdf_page_001.pdf \
-compose src \
somewhereelse/diff_page_001.pdf
Note that compare is part of ImageMagick. For PDF processing, however, it needs Ghostscript as a 'delegate', because it cannot handle PDF natively.
Once more, pdftk
Now you can again concatenate your "diff" PDF pages with pdftk:
pdftk \
somewhereelse/diff_page_*.pdf \
cat \
output somewhereelse/diff_allpages.pdf
Ghostscript
Ghostscript automatically inserts metadata (such as the current date and time) into its PDF output. Therefore PDF output does not work well for MD5-hash-based file comparisons.
If you want to automatically discover all cases that consist of purely white pages (meaning there are no visible differences between your input pages), you can also convert to a metadata-free bitmap format using the bmp256 output device. You can do that for the original PDFs (first.pdf and 2nd.pdf) or for the diff PDF pages:
gs \
-o diff_page_001.bmp \
-r72 \
-g595x842 \
-sDEVICE=bmp256 \
diff_page_001.pdf
md5sum diff_page_001.bmp
Create an all-white BMP page and record its MD5 sum (for reference) like this:
gs \
-o reference-white-page.bmp \
-r72 \
-g595x842 \
-sDEVICE=bmp256 \
-c "showpage quit"
md5sum reference-white-page.bmp
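Put together, a rough sketch of the whole wrapper script could look like this (a sketch under the assumptions above, i.e. bash with pdftk, ImageMagick and Ghostscript installed; the directory names and the %03d padding are my own choices):
#!/bin/bash
# Split both inputs into single pages (input names are assumed)
mkdir -p pages diffs
pdftk first.pdf burst output pages/first_%03d.pdf
pdftk 2nd.pdf burst output pages/2nd_%03d.pdf
# Reference MD5 sum of an all-white page, as described above
gs -o reference-white-page.bmp -r72 -g595x842 -sDEVICE=bmp256 -c "showpage quit"
refsum=$(md5sum reference-white-page.bmp | cut -d' ' -f1)
# Compare page by page; an all-white diff bitmap means "no visible difference"
for f in pages/first_*.pdf; do
  n=${f##*_}; n=${n%.pdf}
  compare "$f" "pages/2nd_${n}.pdf" -compose src "diffs/diff_page_${n}.pdf"
  gs -o "diffs/diff_page_${n}.bmp" -r72 -g595x842 -sDEVICE=bmp256 "diffs/diff_page_${n}.pdf"
  pagesum=$(md5sum "diffs/diff_page_${n}.bmp" | cut -d' ' -f1)
  if [ "$pagesum" = "$refsum" ]; then echo "page $n: no visible difference"; else echo "page $n: differs"; fi
done
# Concatenate the per-page diff PDFs into one
pdftk diffs/diff_page_*.pdf cat output diffs/diff_allpages.pdf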

I had this very problem myself, and the quickest way I've found is to use PHP and its bindings for ImageMagick (Imagick).
<?php
$im1 = new \Imagick("file1.pdf");
$im2 = new \Imagick("file2.pdf");
$result = $im1->compareImages($im2, \Imagick::METRIC_MEANSQUAREERROR);
if ($result[1] > 0.0) {
    // Files are DIFFERENT
} else {
    // Files are IDENTICAL
}
$im1->destroy();
$im2->destroy();
Of course, you need to install the ImageMagick bindings first:
sudo apt-get install php5-imagick # Ubuntu/Debian
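By the way, roughly the same check can be done straight from the shell, without PHP (assuming ImageMagick with its Ghostscript delegate; the [0] index selects the first page, and null: discards the difference image):
compare -metric MSE "file1.pdf[0]" "file2.pdf[0]" null: 2>&1
# prints the mean squared error; 0 means the pages render identically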

I have come up with a JAR using Apache PDFBox to compare PDF files - it can compare pixel by pixel and highlight the differences.
Check my blog: http://www.testautomationguru.com/introducing-pdfutil-to-compare-pdf-files-extract-resources/ for an example & download.
To get page count
import com.taguru.utility.PDFUtil;
PDFUtil pdfUtil = new PDFUtil();
pdfUtil.getPageCount("c:/sample.pdf"); //returns the page count
To get page content as plain text
//returns the pdf content - all pages
pdfUtil.getText("c:/sample.pdf");
// returns the pdf content from page number 2
pdfUtil.getText("c:/sample.pdf",2);
// returns the pdf content from page number 5 to 8
pdfUtil.getText("c:/sample.pdf", 5, 8);
To extract attached images from PDF
//set the path where we need to store the images
pdfUtil.setImageDestinationPath("c:/imgpath");
pdfUtil.extractImages("c:/sample.pdf");
// extracts & saves the images from page number 3
pdfUtil.extractImages("c:/sample.pdf", 3);
// extracts & saves the images from page 2 only (start page 2, end page 2)
pdfUtil.extractImages("c:/sample.pdf", 2, 2);
To store PDF pages as images
//set the path where we need to store the images
pdfUtil.setImageDestinationPath("c:/imgpath");
pdfUtil.savePdfAsImage("c:/sample.pdf");
To compare PDF files in text mode (faster, but it does not compare the formatting, images, etc. in the PDF)
String file1="c:/files/doc1.pdf";
String file2="c:/files/doc2.pdf";
// compares the pdf documents & returns a boolean
// true if both files have same content. false otherwise.
pdfUtil.comparePdfFilesTextMode(file1, file2);
// compare the 3rd page alone
pdfUtil.comparePdfFilesTextMode(file1, file2, 3, 3);
// compare the pages from 1 to 5
pdfUtil.comparePdfFilesTextMode(file1, file2, 1, 5);
To compare PDF files in binary mode (slower, compares the PDF documents pixel by pixel, highlights the differences & stores the result as an image)
String file1="c:/files/doc1.pdf";
String file2="c:/files/doc2.pdf";
// compares the pdf documents & returns a boolean
// true if both files have same content. false otherwise.
pdfUtil.comparePdfFilesBinaryMode(file1, file2);
// compare the 3rd page alone
pdfUtil.comparePdfFilesBinaryMode(file1, file2, 3, 3);
// compare the pages from 1 to 5
pdfUtil.comparePdfFilesBinaryMode(file1, file2, 1, 5);
//if you need to store the result
pdfUtil.highlightPdfDifference(true);
pdfUtil.setImageDestinationPath("c:/imgpath");
pdfUtil.comparePdfFilesBinaryMode(file1, file2);

To compare PDFs on macOS Monterey (i.e. version 12), I was able to install diff-pdf using Homebrew, and run it.
The --view option didn't work for me, but the --output-diff did.
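For reference, the steps were along these lines (the file names are placeholders):
brew install diff-pdf
diff-pdf --output-diff=diff.pdf a.pdf b.pdf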

Related

How to make a vector PDF searchable?

My workflow includes making figures in Inkscape, which are then converted to PDF and included in LaTeX documents. In these figures, I often have to include mathematical formulas. For that, I use TexText. For font consistency and simplicity, when I want to add some plain text to my figure, I also use TexText. When the resulting SVG is converted to PDF, the TexText-generated text is not searchable.
How can I make a PDF from the SVG such that it is searchable while remaining a vector PDF?
I know I could rasterize the figure and then use e.g. Tesseract to create a searchable PDF. But the resulting PDF will of course contain a rasterized version of my figure. I would like the figure itself to remain vector graphics.
I am guessing there has to be a way that would go something like this: indeed rasterize the PDF and use Tesseract to extract the text. But then take the output of Tesseract and somehow add it to the original vector PDF. Unfortunately, I don't know how to do this.
It turns out a question that is directly relevant to mine was answered on another StackExchange, here. The script that answers my actual question is svgToSearchablePDF.sh, and I post it below. It uses, as the key element, the script pdf-merge-text.sh from the accepted answer to that other question. For completeness, I will repost pdf-merge-text.sh in this answer.
The solution
Note that you may need to magnify the SVG file before converting it to a searchable PDF: larger image sizes help the OCR process. To magnify in Inkscape, select the entire image, then go to Object -> Transform…. In the Transform tab, select Scale, then select 'Scale proportionally', and in either 'Width' or 'Height' enter something like '300' (make sure % is selected in the menu immediately to the right of the 'Width' field). Next, open File -> Document Properties…. In the 'Document Properties' tab, under Custom size, expand Resize page to content. Make sure that either nothing is selected or the entire image is selected, then click the button 'Resize page to drawing or selection'. Save the SVG.
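Alternatively - an assumption on my part, not something I have verified with TexText output - you may get the same effect without touching the drawing by raising the export resolution of the intermediate PNG, since Inkscape's command line accepts an export DPI:
# -d / --export-dpi: render the raster copy at a higher resolution for OCR
inkscape -d 600 mygraphics.svg -o mygraphics_auxfile.png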
The script svgToSearchablePDF.sh uses an SVG file as input and produces a searchable vector PDF file as output. It is assumed that all of the following are installed: Tesseract, Inkscape, and Ghostscript.
For example, assume that we used Inkscape to create the file mygraphics.svg. Then the following command will produce a searchable PDF file mygraphics.pdf:
svgToSearchablePDF.sh mygraphics.svg
The scripts
First, svgToSearchablePDF.sh:
#!/bin/bash
filename="$1"
# export a raster copy (for OCR) and a vector copy (for the final PDF)
inkscape "${filename%.*}.svg" -o "${filename%.*}_auxfile.png"
inkscape "${filename%.*}.svg" -o "${filename%.*}_auxfileVCT.pdf"
# OCR the raster copy into a searchable (but rasterized) PDF
tesseract "${filename%.*}_auxfile.png" "${filename%.*}_auxfileTXT" -l eng pdf
# overlay the vector copy on the OCR text layer, then clean up
pdf-merge-text.sh "${filename%.*}_auxfileTXT.pdf" "${filename%.*}_auxfileVCT.pdf" "${filename%.*}.pdf"
rm -f "${filename%.*}_auxfile.png" "${filename%.*}_auxfileVCT.pdf" "${filename%.*}_auxfileTXT.pdf"
As I said, that script uses the script pdf-merge-text.sh from here. For completeness, here it is:
#!/usr/bin/env bash
set -eu

pdf_merge_text() {
  local txtpdf; txtpdf="$1"
  local imgpdf; imgpdf="$2"
  local outpdf; outpdf="${3--}"
  if [ "-" != "${txtpdf}" ] && [ ! -f "${txtpdf}" ]; then echo "error: text PDF does not exist: ${txtpdf}" 1>&2; return 1; fi
  if [ "-" != "${imgpdf}" ] && [ ! -f "${imgpdf}" ]; then echo "error: image PDF does not exist: ${imgpdf}" 1>&2; return 1; fi
  if [ "-" != "${outpdf}" ] && [ -e "${outpdf}" ]; then echo "error: not overwriting existing output file: ${outpdf}" 1>&2; return 1; fi
  (
    local txtonlypdf; txtonlypdf="$(TMPDIR=. mktemp --suffix=.pdf)"
    trap "rm -f -- '${txtonlypdf//'/'\\''}'" EXIT
    # strip the images from the OCR PDF, keeping only the (invisible) text layer
    gs -o "${txtonlypdf}" -sDEVICE=pdfwrite -dFILTERIMAGE "${txtpdf}"
    # stamp the vector pages on top of the text-only pages
    pdftk "${txtonlypdf}" multistamp "${imgpdf}" output "${outpdf}"
  )
}
pdf_merge_text "$@"
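Going by the argument order inside the function (text PDF first, image/vector PDF second, output third; the output file must not already exist), a direct call looks like this, with placeholder file names:
pdf-merge-text.sh ocr-text-layer.pdf vector-graphics.pdf combined.pdf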

Batch convert svg to pdf page size

I have a number of SVG files created with Inkscape that contain text in non-standard fonts. As far as I understand, in order to have them printed I need to convert the text to paths. It seems that if I just use
convert input.svg output.pdf
the text is automatically converted to paths. Is this correct?
However, my problem is with the page size. The input SVGs have a page size of A5, landscape. However, the converted PDFs seem to be cut off at the right and bottom of the image by about 5% of the image width/height.
Why is that? How do I fix it?
As long as you have Inkscape on your system, ImageMagick's convert actually delegates the PDF export to Inkscape. You can use it directly on the command line as
inkscape -zA output.pdf input.svg
Quote from man:
Used fonts are subset and embedded.
There are some options to manipulate the export area: -C explicitly sets the page area, -D the drawing's bounding box.
You could even preserve the SVG format by using
inkscape -Tl output.svg input.svg
which would convert text to paths.
Lastly, since you have to batch-process multiple files, you should open a shell with
inkscape --shell
and process all files in one go. Otherwise, Inkscape's startup time of 1-3 seconds would be paid for every file. Something like:
ls -1 *.svg | awk -F. \
'{ print "-AC " $1 ".pdf" $0 }
END { print "quit" }' | \
inkscape --shell
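If the per-file startup time is acceptable in your case, a plain shell loop is a simpler alternative (an untested sketch using the same old-style options as above, just spelled out separately):
for f in *.svg; do
  inkscape -z -C -A "${f%.svg}.pdf" "$f"  # -z no GUI, -C page area, -A export PDF
done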

Merge certain pdf pages in one pdf with pdftk

I have some pdf files
Lettera_Contributi_201701-1.pdf
Lettera_Contributi_201701-2.pdf
Lettera_Contributi_201701-3.pdf
so on...
and I'd like to merge only their 2nd pages into one PDF file. I've tried the following pdftk command with a list of example files:
pdftk *.pdf cat 2 output test.pdf
but the result I get in test.pdf is just the 2nd page of the first file.
Any ideas?
$ pdftk *.pdf cat 2 output test.pdf verbose
Command Line Data is valid.
Input PDF Filenames & Passwords in Order
( <filename>[, <password>] )
Lettera_Contributi_201701-1.pdf
Lettera_Contributi_201701-2.pdf
Lettera_Contributi_201701-3.pdf
Lettera_Contributi_201701-4.pdf
Lettera_Contributi_201701-5.pdf
Lettera_Contributi_201701-6.pdf
The operation to be performed:
cat - Catenate given page ranges into a new PDF.
The output file will be named:
test.pdf
Output PDF encryption settings:
Output PDF will not be encrypted.
No compression or uncompression being performed on output.
Creating Output ...
Adding page 2 X0X from Lettera_Contributi_201701-1.pdf
You may do it in two steps using find:
1) Find all source PDFs in the current folder and execute pdftk on every one of them:
find . -name \*pdf -exec pdftk A={} cat A2 output {}_2 \;
(The above command finds all files whose names end in "pdf" and runs the command given after -exec. The braces {} are substituted with the name of each file found.)
You'll get a set of new PDFs, each containing only the second page of its original. They will be named like original_filename.pdf_2
e.g.
file1.pdf_2
file2.pdf_2
file3.pdf_2
2) Now you can merge all the new PDFs:
pdftk *pdf_2 cat output out.pdf
You will get out.pdf containing all the second pages of original PDFs.
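As an aside: with only a few files, you can also do it in a single pdftk call by giving each input a handle and catting page 2 of each (file names are placeholders):
pdftk A=file1.pdf B=file2.pdf C=file3.pdf cat A2 B2 C2 output out.pdf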

How to get the hidden text layout that tesseract creates for pdf files?

I don't have much experience with ocr. Here's what I try:
tesseract -l eng -psm 1 image_str007_0001.jpg image_str007_tess pdf
The result is a perfectly structured hidden text layout - the words are on their exact places when searching the pdf.
My question is: can I get this layout as a file (hocr or html)?
(Config parameters preferred, not API.)
What I've tried:
tesseract -l eng -psm 1 image_str007_0001.jpg output hocr
and
hocr2pdf -i image_str007_001 -o output.pdf < output.hocr
In the file output.pdf the words are badly misplaced when searching through the text. Is the second command not correct for creating the Tesseract hOCR layout file, or does the hocr2pdf app not create the PDF correctly?
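A side note, in case it helps: newer Tesseract versions accept several output renderers in one run, which at least guarantees that the hOCR file and the searchable PDF come from the same recognition pass (recent versions also spell the page-segmentation option --psm):
tesseract -l eng --psm 1 image_str007_0001.jpg output pdf hocr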

PDF text extraction from given coordinates

I would like to extract text from a portion (using coordinates) of PDF using Ghostscript.
Can anyone help me out?
Yes, with Ghostscript, you can extract text from PDFs. But no, it is not the best tool for the job. And no, you cannot do it in "portions" (parts of single pages). What you can do: extract the text of a certain range of pages only.
First: Ghostscript's txtwrite output device (not so good)
gs \
-dBATCH \
-dNOPAUSE \
-sDEVICE=txtwrite \
-dFirstPage=3 \
-dLastPage=5 \
-sOutputFile=- \
/path/to/your/pdf
This will output all text contained on pages 3-5 to stdout. If you want output to a text file, use
-sOutputFile=textfilename.txt
gs Update:
Recent versions of Ghostscript have seen major improvements in the txtwrite device and bug fixes. See recent Ghostscript changelogs (search for txtwrite on that page) for details.
Second: Ghostscript's ps2ascii.ps PostScript utility (better)
This one requires you to download the latest version of the file ps2ascii.ps from the Ghostscript Git source code repository. You'd have to convert your PDF to PostScript, then run this command on the PS file:
gs \
-q \
-dNODISPLAY \
-P- \
-dSAFER \
-dDELAYBIND \
-dWRITESYSTEMDICT \
-dSIMPLE \
/path/to/ps2ascii.ps \
input.ps \
-c quit
If the -dSIMPLE parameter is not defined, each output line contains, besides the pure text content, some additional info about the fonts and font sizes used.
If you replace that parameter with -dCOMPLEX, you'll get additional info about the colors and images used.
Read the comments inside ps2ascii.ps to learn more about this utility. It's not comfortable to use, but for me it worked in most cases where I needed it...
Third: XPDF's pdftotext CLI utility (more comfortable than Ghostscript)
A more comfortable way to do text extraction: use pdftotext (available for Windows as well as Linux/Unix or Mac OS X). This utility is based either on Poppler or on XPDF. This is a command you could try:
pdftotext \
-f 13 \
-l 17 \
-layout \
-opw supersecret \
-upw secret \
-eol unix \
-nopgbrk \
/path/to/your/pdf \
- | less
This will display the page range from 13 (first page) to 17 (last page) of the named PDF, which is protected by a user password (secret) and an owner password (supersecret), preserving the layout, using the Unix EOL convention, without inserting page breaks between PDF pages, piped through less...
pdftotext -h displays all available commandline options.
Of course, both tools only work on the text parts of PDFs (if they have any). Oh, and mathematical formulas also won't work too well... ;-)
pdftotext Update:
Recent versions of Poppler's pdftotext now have options to extract "a portion (using coordinates) of PDF" pages, just as the OP asked for. The parameters are:
-x <int> : top left corner's x-coordinate of crop area
-y <int> : top left corner's y-coordinate of crop area
-W <int> : crop area's width in pixels (defaults to 0)
-H <int> : crop area's height in pixels (defaults to 0)
Best used together with the -layout parameter.
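For example, to pull only the text from a 200x50 area whose top-left corner sits at (50, 100) on page 1 (all the numbers here are made-up placeholders), you could run:
pdftotext -layout -f 1 -l 1 -x 50 -y 100 -W 200 -H 50 input.pdf -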
Fourth: MuPDF's mutool draw command can also extract text
The cross-platform, open-source MuPDF application (made by the same company that also develops Ghostscript) comes with a bundled command-line tool, mutool. To extract text from a PDF with this tool, use:
mutool draw -F txt the.pdf
This will emit the extracted text to <stdout>. Use -o filename.txt to write it into a file.
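Page selection works with a trailing page range, so extracting only pages 1-3 into a file would look like this:
mutool draw -F txt -o out.txt the.pdf 1-3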
Fifth: PDFLib's Text Extraction Toolkit (TET) (best of all... but it is PayWare)
TET, the Text Extraction Toolkit from the PDFlib family of products, can find the x-y coordinates of text content in a PDF file (and much more). TET has a command-line interface, and it's the most powerful of all the text extraction tools I'm aware of. (It can even handle ligatures...) Quote from their website:
Geometry
TET provides precise metrics for the text, such as the position on the page, glyph widths, and text direction. Specific areas on the page can be excluded or included in the text extraction, e.g. to ignore headers and footers or margins.
In my experience, it does not sport the most straightforward CLI you can imagine; but after you get used to it, it will do what it promises to do, for most PDFs you throw at it...
And there are even more options:
podofotxtextract (CLI tool) from the PoDoFo project (Open Source)
calibre (normally a GUI program to handle eBooks, Open Source) has a commandline option that can extract text from PDFs
AbiWord (a GUI word processor, Open Source) can import PDFs and save the result as .txt: abiword --to=txt --to-name=output.txt input.pdf
I'm not sure Ghostscript can accept coordinates, but you can convert the PDF to an image and send it to an OCR engine, either as a sub-image cropped at the given coordinates or as the whole image along with the coordinates. Some OCR APIs accept a rectangle parameter to narrow the region for OCR.
Look at VietOCR for a working example; it uses Tesseract as its OCR engine and Ghostscript as the PDF-to-image converter.
Debenu Quick PDF Library can extract text from a defined area on a page. The SetTextExtractionArea function lets you specify the x and y coordinates and then you can also specify the width and height of the area.
Left = The horizontal coordinate of the left edge of the area
Top = The vertical coordinate of the top edge of the area
Width = The width of the area
Height = The height of the area
Then the GetPageText function can be called immediately after this to extract the text from that defined area.
Here's an example using C# (though the library is multi-platform and can be used with many different programming languages):
DPL.LoadFromFile(@"Sample.pdf", "");
DPL.SetOrigin(1); // Sets 0,0 coordinate position to top left of page, default is bottom left
DPL.SetTextExtractionArea(35, 35, 229, 30); // Left, Top, Width, Height
string ExtractedContent = DPL.GetPageText(8);
Console.WriteLine(ExtractedContent);
Using GetPageText it is also possible to return just the text located in that area, or that text together with information about its font, such as name, color, and size.