How to identify PDF files that need OCR?

I have over 30,000 PDF files. Some have already been OCRed and some have not. Is there a way to find out which files have already been OCRed and which PDFs are image-only?
It would take forever to run every single file through an OCR processor.

I would write a small script to extract the text from the PDF files and see if it is "empty". If there is text, the PDF was already OCRed. You could use either Ghostscript or Xpdf to extract the text.
EDIT:
This should get you started:
foreach ($pdffile in Get-ChildItem -Filter *.pdf) {
    # "-" makes pdftotext write the extracted text to stdout
    $pdftext = & "\path\to\xpdf\pdftotext.exe" $pdffile.FullName -
    Write-Host $pdffile.FullName
    Write-Host $pdftext.Length
    Write-Host $pdftext
    Write-Host "-------------------------------"
}
Unfortunately, even when your PDF contains only images, pdftotext will extract some text, so you will have to do some more work to decide whether the PDF needs OCR.
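If you would rather script this in Python, here is a minimal sketch of the same idea; the 100-character threshold is an arbitrary assumption you will want to tune, and it assumes pdftotext is on your PATH:
import subprocess
from pathlib import Path

THRESHOLD = 100  # assumed cut-off; tune it for your documents

for pdf in Path(".").glob("*.pdf"):
    # "-" makes pdftotext write the extracted text to stdout
    text = subprocess.run(["pdftotext", str(pdf), "-"],
                          capture_output=True, text=True).stdout.strip()
    status = "has text" if len(text) > THRESHOLD else "probably needs OCR"
    print(f"{pdf.name}: {len(text)} chars, {status}")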

Xpdf worked for me in a different way, though I am not sure it is the right way.
My image-only PDFs also yielded some text content, so I used pdffonts.exe to verify whether fonts are embedded in the document or not. In my case, all image-only files showed 'no' in the embedded ('emb') column:
Config Error: No display font for 'Symbol'
Config Error: No display font for 'ZapfDingbats'
name                                 type              emb sub uni object ID
------------------------------------ ----------------- --- --- --- ---------
Helvetica                            Type 1            no  no  no       7  0
Whereas all searchable PDFs gave 'yes':
Config Error: No display font for 'Symbol'
Config Error: No display font for 'ZapfDingbats'
name                                 type              emb sub uni object ID
------------------------------------ ----------------- --- --- --- ---------
ABCDEE+Calibri                       TrueType          yes yes no       7  0
ABCDEE+Calibri,Bold                  TrueType          yes yes no       9  0
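To automate this check, here is a rough Python sketch of the heuristic (my own sketch, not the answerer's script). It assumes pdffonts is on the PATH, skips the two header lines, and simply looks for a 'yes' anywhere in a font line, which is crude but matches the observation above:
import subprocess
from pathlib import Path

for pdf in Path(".").glob("*.pdf"):
    out = subprocess.run(["pdffonts", str(pdf)],
                         capture_output=True, text=True).stdout
    fontlines = out.splitlines()[2:]  # skip the two header lines
    # crude check: any 'yes' means at least one embedded font is listed
    embedded = any(" yes " in line for line in fontlines)
    print(f"{pdf.name}: {'searchable' if embedded else 'likely image-only'}")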

I found that TotalCmd has a plugin that handles this:
https://totalcmd.net/plugring/pdfOCR.html
pdfOCR is a wdx plugin that reports how many pages of a PDF file in the current directory need character recognition (OCR), i.e. how many pages in the PDF file have no searchable text in their layout. This is mostly needed when one is preparing PDF files for a documentation or archiving system. Generally, PDF files need to be transformed from their scanned versions into text-searchable form before they are included in any documentation, to allow for manual or automatic text search. The pdfOCR plugin for Total Commander fulfils a librarian's need by presenting the number of pages that are images only, with no text contained. The number of scanned pages is presented in the column "needOCR". By comparing the needOCR number of pages with the total number of pages, one can decide whether a PDF file needs additional OCR processing.

You can scan a folder or an entire drive using the desktop search tool dtSearch. At the end of the scan, it will show a list of all "image-only" PDFs. In addition, it will also show a list of "encrypted" PDFs, if any.


Why can't I convert certain TIF files that I received in a split archive?

I received a large number of document files, where each document has its own split archive for each page (i.e. file1.001, file1.002, file2.001, file3.001). These are meant to be TIF files that can easily be combined and converted into PDF documents.
However, some of these files will not convert through ImageMagick. Some can be converted using a different program, which works fine, but for some files even this doesn't work. I tried converting those to .jpg and then to .tif, but they won't convert to .jpg. Things got weird when I converted them to .png, as some of these files produced multiple output files.
This is hard to explain, but I'll try to give an example: file1.001 and file1.002 both show the same image when converted to .tif and opened. However, when either of the .tif documents is converted to .png, two .png files are created. One has the original page, but the other has a second page of the document that I could not view previously.
What could be causing this weird behavior, and how can I convert these to pdf more reliably?
I also used BlueBeam Staple to convert the files, if that helps at all.
Edit:
I've verified I'm on the latest ImageMagick release, and I've been using it through PHP to process files. I'm running Windows 10.
Also, here are some example files to play around with. The first TIF actually shows the second page, instead of the page I normally see when I open the file.
Edit 2: Sorry, I thought uploading the image would preserve the file type. Here's a link to some test samples
When I convert your TIFF to PNG, I get two files, using IM 7.1.0-10 Q16-HDRI or IM 6.9.12-25 Q16, both on Mac OS X Sierra.
magick -quiet 294944.tif x.png
It produces these two images (omitted here). Is this not what you get or expect?
P.S.
What are the other two files, 327924.001 and 327924.002?
If those are some kind of split TIFF, then it does not look like libtiff (which ImageMagick uses to read TIFFs) can handle them. I get errors when attempting to use identify on them.
You definitely have some issue with whatever attempted to write those TIFFs.
instrument 294944 page 1 of 2 = G4 199 dpi sheet 2 of 2 294944.tif (25.17 x 17.53 inches)
instrument 294944 page 2 of 2 = G4 199 dpi sheet 1 of 2 294944.tif (24.12 x 17.63 inches)
instrument 327501 page 1 of 1 = UN 72 dpi sheet 1 of 1 327924.001 (124.78 x 93.86 inches)
instrument 327924 page 1 of 2 = G4 400 dpi sheet 1 of 2 327924.002 (23.80 x 17.53 inches)
instrument 327924 page 2 of 2 = G4 400 dpi sheet 2 of 2 327924.002 (23.84 x 17.41 inches)
Two are identified as CCITT Group 4 fax encoding, which is common for TIFFs of this type.
TIFF is a multi-image format: a multi-page fax can be viewed as one file, or four different CMYK printing colour plates can be sent as one image file, either overlaid as one check print or printed one at a time for quality inking.
The extension .tif (or .tiff) is usually applied to files with one or more pages (even 400+ for a long novel).
Names like part001.tif, part002.tif are usually applied to groups of multiple pages, OR, for single sequential pages, part1.001.tif, part1.002.tif.
Unfortunately for you, you have a mix following a convention that seems to indicate the number of pages (002 = 2 pages), but in inconsistent order, so you need to check which convention was used for each file, as there is uncertainty.
Also, the internal instrument number does not always match the filename; perhaps a transfer of interest?
In addition, you have a mix of compression methods and resolutions, so you cannot be sure of the correct scale to apply.
The best way to resolve this issue is to decide how you wish the pages to be regrouped/sequenced, apply the correct scale to each page or group of pages, then recombine them as desired into a PDF.
For a large number of files, it helps to tabulate the pages by number, scale, size, compression, etc., and then process identical groups together before reordering and merging.
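As a sketch of that recombination step, Pillow (my choice for illustration; any library with multi-page TIFF support would do) can enumerate every page of each input, so you can reorder the pages explicitly before writing the PDF:
from PIL import Image, ImageSequence

# list the source files in the page order you settled on after inspection
sources = ["294944.tif"]
pages = []
for name in sources:
    with Image.open(name) as img:
        for frame in ImageSequence.Iterator(img):
            # convert 1-bit G4 fax frames to a mode the PDF writer accepts
            pages.append(frame.convert("RGB"))

pages[0].save("output.pdf", save_all=True, append_images=pages[1:])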

Ghostscript should not embed fonts but only list a substitute

I have a PDF-generating pipeline where, at the end, Ghostscript (on Linux) gets called to produce the PDF (the input is PostScript). The PDF must be as small as possible, so the general command line used is
ps2pdf13 -dSAFER -dPDFSETTINGS=/default -dEmbedAllFonts=false -dNoOutputFonts -dFastWebView infile outfile
That generates nice PDF files without fonts included, as wanted; the assumption is that the target system should then use whatever fonts it has as replacements. Yes, this can mean that different systems use slightly different fonts and thus render slightly differently.
This mostly works: there are 7 different fonts listed in the PDF's properties, and it renders nicely on Linux.
Windows (Acrobat Reader) complains that one of the fonts is missing, and then doesn't render any of that font's characters.
I know I can let gs embed the fonts, except that increases the PDF size by 50%. I would like to avoid that (while it's only around 6000 bytes, this is multiplied roughly 30,000 times for every run, and as such it does count).
I would love to have a way to "embed" in the PDF a piece of information saying "for font Helvetica-Narrow just use Arial Narrow" (or similar).
Does that exist?
[Edit]
Sorry for the late reply, busy. :(
Well, OK. I was thinking of a list of possible options for font selection; coming from that angle, the question may have gone the wrong way.
The options, by the way, do produce different sizes, though it seems to be -dEmbedAllFonts that is responsible for the sizes; -dNoOutputFonts doesn't seem to have any effect, actually.
I have to compare against a (very old) Distiller, which we are trying to replace. Using pdffonts, I get the following tables:
ps2pdf:
name type encoding emb sub uni object ID
------------------------------------ ----------------- ---------------- --- --- --- ---------
Helvetica-Narrow Type 1 Custom no no no 11 0
Helvetica-Bold Type 1 Custom no no no 9 0
Helvetica-Narrow-Bold Type 1 WinAnsi no no no 13 0
Courier Type 1 Custom no no no 15 0
Courier-Bold Type 1 Standard no no no 10 0
Helvetica Type 1 Custom no no no 8 0
Times-Italic Type 1 Standard no no no 21 0
distiller:
name type encoding emb sub uni object ID
------------------------------------ ----------------- ---------------- --- --- --- ---------
Helvetica Type 1 Custom no no no 4 0
Helvetica-Bold Type 1 Custom no no no 5 0
Courier Type 1 Custom no no no 6 0
Courier-Bold Type 1 Custom no no no 7 0
Helvetica-Narrow Type 1 Custom no no no 8 0
Helvetica-Narrow-Bold Type 1 Custom no no no 9 0
Times-Italic Type 1 Custom no no no 15 0
With the ps2pdf-created PDF file, Acrobat Reader complains "Font Helvetica-Narrow cannot be found". The Distiller one works.
I don't get it. It's the same list, at least for that font.
And obviously it then looks terrible.
One solution is to embed fonts. Then the font list turns into
name type encoding emb sub uni object ID
------------------------------------ ----------------- ---------------- --- --- --- ---------
XVQNWP+Helvetica-Narrow Type 1C Custom yes yes no 11 0
Helvetica-Bold Type 1 Custom no no no 9 0
LBTZEH+Helvetica-Narrow-Bold Type 1C WinAnsi yes yes no 13 0
Courier Type 1 Custom no no no 15 0
Courier-Bold Type 1 Standard no no no 10 0
Helvetica Type 1 Custom no no no 8 0
Times-Italic Type 1 Standard no no no 21 0
and the file size goes up considerably, which we want to avoid. Distiller shows it's possible, but not how.
No, you cannot define a substitute font for a missing one; that is entirely at the discretion of the viewer. How would it help anyway? If the substitute you define isn't available to the viewer, then it would have to fall back to its own substitution anyway, or fail altogether.
A few comments on your command line:
If you are using -dNoOutputFonts then your PDF file should not contain any fonts, or font references, at all. It will also be (considerably) larger than merely disabling font embedding, and possibly larger than the same PDF with subset fonts embedded, because all the text is included as path data; for even moderate amounts of text, the repetition of the path data will exceed the size of the font.
It's hard to see how you are managing to produce a file which ends up referencing fonts but doesn't include them.
You don't need to specify -dPDFSETTINGS=/default, because that is the default...
If you want a smaller file, do not specify -dFastWebView: that produces a linearised PDF file, which is larger (because of the format) than a non-linearised file. Very few viewers honour it; even those that do can only accelerate the first page view, and if the file is very small it's pointless, since the entire file will arrive as fast as the early portion of the linearised file.
Forcing the version to 1.3 will likely make the file size larger too, at least in the future.
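Taken together, those points suggest a trimmed command line along these lines (a sketch based only on the advice above; verify the resulting size on your own files):
ps2pdf -dSAFER -dEmbedAllFonts=false infile outfile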

Is there an "Export to Pdf" plugin for Tiddlywiki?

Has anyone put together a plugin or tool for exporting a Tiddlywiki to pdf?
No, there isn't.
As a workaround, I write or find a decent printable stylesheet, then print to PDF.
Why not select the target tiddler to "Open in new window", and print it to PDF with any installed PDF printer?
To accomplish this I used a tool to convert HTML to PDF. These steps are a bit long, but well worth it; once you've got it working, it is easily repeated.
In each tiddler that I want in my PDF, I add a specific tag; I used TableOfContents.
To each tiddler that carries this tag, I added an order field, used to define the order in which the tiddlers appear in the PDF.
Ensure your HTML headings are properly defined for the document. I think tiddler titles use <h2>, so properly defining subheadings using <h3>, <h4>, etc. will ensure a nice auto-generated table of contents in your PDF, if you want one.
If you want each tiddler to start on a new page (in the PDF), we need to add this HTML to the end of each tiddler:
<div style = "display:block; clear:both; page-break-after:always;"></div>
With a completed TiddlyWiki document, export the tiddlers to a single HTML file; this will be used to generate the PDF document. To export, go to the AdvancedSearch and select the Filter tab. In the search textbox enter your filter criteria; for me that was:
[tag[TableOfContents]sort[order]]
You'll immediately see on-screen a list of the tiddlers the system found based on that criteria. Then click on the Export icon and select Static HTML.
Optionally (but I think it's a great idea), manually create a cover page in your favorite editor; this will be a single HTML file that acts as the cover page of the PDF document. Call it cover.html. More on this later.
Download and install wkhtmltopdf (command-line tool to generate PDF from an HTML file).
https://wkhtmltopdf.org/downloads.html
Learn and get familiar with the wkhtmltopdf command-line syntax. There are numerous features here, so the command you end up with may be lengthy. Use wkhtmltopdf /? to view general help, then wkhtmltopdf --extended-help to view details (well worth the read).
Generate a PDF document. At the command prompt, navigate to the folder where your TiddlyWiki document is located. Here is a list of my favorite command-line switches (a fully assembled example follows the list). My app is installed in C:\Program Files..., so my command line starts with that...
"c:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe"
Add this switch for a header on the left:
--header-left "My document title"
For a header on the right:
--header-right "v1.0.0.1"
Font size of header:
--header-font-size 8
Display a line below the header:
--header-line
Spacing between header and content in mm (default 0):
--header-spacing 5
A left footer ([section] is replaced with the name of the current section):
--footer-left "[section]"
A centered footer:
--footer-center "Page [page] of [topage]"
Footer font size:
--footer-font-size 8
Footer spacing:
--footer-spacing 5
If you want titles to hyperlink (in the PDF) to go back to the TOC:
--enable-toc-back-links
Make sure no background images get printed:
--no-background
I added special styles in the TiddlyWiki document for print media--to hide tags and clean up the spacing. Then I used this switch to ensure print media is used:
--print-media-type
Being in North America I want letter-size pages; I think the default is A4:
-s Letter
IMPORTANT--give the tool access to local files, otherwise your images will be missing in the PDF:
--enable-local-file-access
Use this if you want to have a cover page (see step 6 above):
cover "cover.htm"
And use this if you want a TOC automatically generated. Without a cover page, the TOC will be your first page, so create a cover page:
toc
After the toc identify your exported tiddler HTML file as input to the tool:
tiddlers.html
And, the final argument on the command line is the output PDF file name:
MyDocument.pdf
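Putting it all together, the assembled command looks something like this (the titles, version string and file names are the examples from above; substitute your own):
"c:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe" ^
  --header-left "My document title" --header-right "v1.0.0.1" ^
  --header-font-size 8 --header-line --header-spacing 5 ^
  --footer-left "[section]" --footer-center "Page [page] of [topage]" ^
  --footer-font-size 8 --footer-spacing 5 ^
  --enable-toc-back-links --no-background --print-media-type ^
  -s Letter --enable-local-file-access ^
  cover "cover.html" toc tiddlers.html MyDocument.pdf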
Export the tiddler to HTML.
Then in the terminal, issue:
html2pdf $myTid.html $myTid.pdf
($myTid is just a variable and can be any name.)
:)

How can I extract embedded fonts from a PDF as valid font files?

I'm aware of the pdftk.exe utility that can indicate which fonts are used by a PDF, and whether they are embedded or not.
Now the problem: given that I have PDF files with embedded fonts, how can I extract those fonts in a way that makes them re-usable as regular font files? Are there (preferably free) tools which can do that? Also: can this be done programmatically with, say, iText?
You have several options. All these methods work on Linux as well as on Windows or Mac OS X. However, be aware that most PDFs do not include the full, complete typeface when they have a font embedded. Mostly they include just the subset of glyphs used in the document.
Using pdftops
One of the most frequently used methods to do this on *nix systems consists of the following steps:
Convert the PDF to PostScript, for example by using Xpdf's pdftops helper program (on Windows: pdftops.exe).
The fonts will now be embedded in .pfa (PostScript) format, and you can extract them using a text editor.
You may need to convert the .pfa (ASCII) to a .pfb (binary) file using the t1utils and pfa2pfb.
PDFs never have .pfm or .afm files (font metric files) embedded (because PDF viewers have internal knowledge about these). Without them, the font files are hardly usable in a visually pleasing way.
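As a concrete starting point, the conversion itself is a one-liner (the output file name is my choice); embedded Type 1 fonts can then be located in the .ps file by searching for the %!PS-AdobeFont header, which is how standard Type 1 font programs begin:
pdftops input.pdf output.ps
grep -n "%!PS-AdobeFont" output.ps   # find where each embedded Type 1 font starts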
Using fontforge
Another method is to use the Free font editor FontForge:
Use the "Open Font" dialogbox used when opening files.
Then select "Extract from PDF" in the filter section of dialog.
Select the PDF file with the font to be extracted.
A "Pick a font" dialogbox opens -- select here which font to open.
Check the FontForge manual. You may need to follow a few specific steps which are not necessarily straightforward in order to save the extracted font data as a file which is re-usable.
Using mupdf
Next, MuPDF. This application comes with a utility called pdfextract (on Windows: pdfextract.exe) which can extract fonts and images from PDFs. (In case you don't know about MuPDF, which is still relatively unknown and new: "MuPDF is a Free lightweight PDF viewer and toolkit written in portable C", written by Artifex Software developers, the same company that gave us Ghostscript.)
(Update: Newer versions of MuPDF have moved the former functionality of 'pdfextract' to the command 'mutool extract'. Download it here: mupdf.com/downloads)
Note: pdfextract.exe is a command-line program. To use it, do the following:
c:\> pdfextract.exe c:\path\to\filename.pdf # (on Windows)
$> pdfextract /path/to/filename.pdf # (on Linux, Unix, Mac OS X)
This command will dump all of the extractable files from the referenced PDF file into the current directory. Generally you will see a variety of files: images as well as fonts, including PNG, TTF, CFF, CID, etc. The image names will be like img-0412.png if the PDF object number of the image was 412. The font names will be like FGETYK+LinLibertineI-0966.ttf if the font's PDF object number was 966.
CFF (Compact Font Format) files are a recognized format that can be converted to other formats via a variety of converters for use on different operating systems.
Again: be aware that most of these font files may have only a subset of characters and may not represent the complete typeface.
Update (Jul 2013): Recent versions of MuPDF have seen an internal reshuffling and renaming of their binaries, not just once, but several times. The main utility used to be a 'Swiss Army knife'-like binary called mubusy (name inspired by busybox?), which more recently was renamed to mutool. It supports the sub-commands info, clean, extract, poster and show. Unfortunately, the official documentation for these tools isn't up to date (yet). If you're on a Mac using MacPorts: there the utility was renamed in order to avoid name clashes with other utilities using identical names, and you may need to use mupdfextract.
To achieve (roughly) the equivalent results with mutool that its predecessor pdfextract did, just run mutool extract ...
So to extract fonts and images, you may need to run one of the following commandlines:
c:\> mutool.exe extract filename.pdf # (on Windows)
$> mutool extract filename.pdf # (on Linux, Unix, Mac OS X)
Downloads are here: mupdf.com/downloads
Using gs (Ghostscript)
Then, Ghostscript can also extract fonts directly from PDFs. However, it needs the help of a special utility program named extractFonts.ps, written in PostScript language, which is available from the Ghostscript source code repository.
To use it, you need to give Ghostscript both this file, extractFonts.ps, and your PDF file. Ghostscript will then use the instructions from the PostScript program to extract the fonts from the PDF. It looks like this on Windows (yes, Ghostscript understands the forward slash, /, as a path separator on Windows too!):
gswin32c.exe ^
-q -dNODISPLAY ^
c:/path/to/extractFonts.ps ^
-c "(c:/path/to/your/PDFFile.pdf) extractFonts quit"
or on Linux, Unix or Mac OS X:
gs \
-q -dNODISPLAY \
/path/to/extractFonts.ps \
-c "(/path/to/your/PDFFile.pdf) extractFonts quit"
I tested the Ghostscript method a few years ago. At the time it extracted *.ttf (TrueType) files just fine. I don't know whether other font types will also be extracted, and if so, in a re-usable way. I don't know whether the utility blocks extraction of fonts which are marked as protected.
Using pdf-parser.py
Finally, Didier Stevens' pdf-parser.py: this one is probably not as easy to use, because you need to have some know-how about internal PDF structures. pdf-parser.py is a Python script which can do a lot of other things too. It can also decompress and extract arbitrary streams from objects, and therefore it can extract embedded font files too.
But you need to know what to look for. Let's see it with an example. I have a file named big1.pdf. As a first step I use the -s parameter to search the PDF for any occurrence of the keyword FontFile (pdf-parser.py searches case-insensitively):
pdf-parser.py -s fontfile big1.pdf
In my case, for my big1.pdf, I get this result:
obj 9 0
Type: /FontDescriptor
Referencing: 15 0 R
<<
/Ascent 728
/CapHeight 716
/Descent -210
/Flags 32
/FontBBox [ -665 -325 2000 1006 ]
/FontFile2 15 0 R
/FontName /ArialMT
/ItalicAngle 0
/StemV 87
/Type /FontDescriptor
/XHeight 519
>>
obj 11 0
Type: /FontDescriptor
Referencing: 16 0 R
<<
/Ascent 728
/CapHeight 716
/Descent -210
/Flags 262176
/FontBBox [ -628 -376 2000 1018 ]
/FontFile2 16 0 R
/FontName /Arial-BoldMT
/ItalicAngle 0
/StemV 165
/Type /FontDescriptor
/XHeight 519
>>
It tells me that there are two instances of FontFile2 inside the PDF, and these are in PDF objects no. 15 and no. 16, respectively. Object no. 15 holds the /FontFile2 for font /ArialMT, object no. 16 holds the /FontFile2 for font /Arial-BoldMT.
To show this more clearly:
pdf-parser.py -s fontfile big1.pdf | grep -i fontfile
/FontFile2 15 0 R
/FontFile2 16 0 R
A quick peek into the PDF specification reveals that the keyword /FontFile2 relates to a 'stream containing a TrueType font program' (/FontFile would relate to a 'stream containing a Type 1 font program' and /FontFile3 would relate to a 'stream containing a font program whose format is specified by the Subtype entry in the stream dictionary', hence being either a Type1C or a CIDFontType0C subtype).
To look specifically at PDF object no. 15 (which holds the font /ArialMT), one can use the -o 15 parameter:
pdf-parser.py -o 15 big1.pdf
obj 15 0
Type:
Referencing:
Contains stream
<<
/Length1 778552
/Length 1581435
/Filter /ASCIIHexDecode
>>
This pdf-parser.py output tells us that this object contains a stream (which it will not directly display) that has a length of 1.581.435 Bytes and is encoded ( == "compressed") with ASCIIHexEncode and needs to be decoded ( == "de-compressed" or "filtered") with the help of the standard /ASCIIHexDecode filter.
To dump any stream from an object, pdf-parser.py can be called with the -d dumpname parameter. Let's do it:
pdf-parser.py -o 15 -d dumped-data.ext big1.pdf
Our extracted data dump will be in the file named dumped-data.ext. Let's see how big it is:
ls -l dumped-data.ext
-rw-r--r-- 1 kurtpfeifle staff 1581435 Apr 11 00:29 dumped-data.ext
Oh look, it is 1.581.435 Bytes. We saw this figure in the previous command's output. Opening this file with a text editor confirms that its content is ASCII hex encoded data.
Opening the file with a font reading tool like otfinfo (this is a part of the lcdf-typetools package) will lead to some disappointment at first:
otfinfo -i dumped-data.ext
otfinfo: dumped-data.ext: not an OpenType font (bad magic number)
OK, this is because we did not (yet) let pdf-parser.py make use of its full magic: to dump a filtered, decoded stream. For this we have to add the -f parameter:
pdf-parser.py -o 15 -f -d dumped-data-decoded.ext big1.pdf
What's the size of this new file?
ls -l dumped-data-decoded.ext
-rw-r--r-- 1 kurtpfeifle staff 778552 Apr 11 00:39 dumped-data-decoded.ext
Oh, look: that exact number was also already stored in the PDF object no. 15 dictionary as the value for key /Length1...
What does file think it is?
file dumped-data-decoded.ext
dumped-data-decoded.ext: TrueType font data
What does otfinfo tell us about it?
otfinfo -i dumped-data-decoded.ext
Family: Arial
Subfamily: Regular
Full name: Arial
PostScript name: ArialMT
Version: Version 5.10
Unique ID: Monotype:Arial Regular:Version 5.10 (Microsoft)
Designer: Monotype Type Drawing Office - Robin Nicholas, Patricia Saunders 1982
Manufacturer: The Monotype Corporation
Trademark: Arial is a trademark of The Monotype Corporation.
Copyright: © 2011 The Monotype Corporation. All Rights Reserved.
License Description: You may use this font to display and print content as permitted by
the license terms for the product in which this font is included.
You may only (i) embed this font in content as permitted by the
embedding restrictions included in this font; and (ii) temporarily
download this font to a printer or other output device to help
print content.
Vendor ID: TMC
So Bingo!, we have a winner: pdf-parser.py did indeed extract a valid font file for us. Given the size of this file (778.552 Bytes), it looks like the font was even embedded completely in the PDF...
We could rename it to arial-regular.ttf and install it as such and happily make use of it.
Caveats:
In any case you need to follow the license that applies to the font. Some font licences do not allow free use and/or distribution. Pirating fonts is like pirating any software or other copyrighted material.
Most PDFs which are in the wild out there do not embed the full font anyway, but only subsets. Extracting a subset of a font is only useful in a very limited scope, if at all.
Please do also read the following about Pros and (more) Cons regarding font extraction efforts:
http://typophile.com/node/34377 — not available anymore, but can be seen on the Wayback Machine at https://web.archive.org/web/20110717120241/typophile.com/node/34377
Use online service http://www.extractpdf.com. No need to install anything.
Even though this question is 10 years old, it is still valid, and as technology changes so does a valid answer.
In reviewing the current answers, I noticed that none of them mention WOFF (Web Open Font Format) (W3C) (Wikipedia) files, which can be used to recreate the individual characters (glyphs) and display them accurately in a web page.
Using the free online web page by IDR Solutions, PDF to HTML5 (link), convert a PDF to a zip file. The resulting zip will contain a font directory of .woff files. Current Internet browsers support .woff files, if you were not aware (reference). These can be examined at the online site FontDrop! (link).
WOFF files can be converted to/from OTF or TTF at WOFFer – WOFF font converter.
The zip file from PDF to HTML5 will also contain an HTML file for each page of the PDF; these can be opened in an Internet browser, and it is one of the best and most accurate PDF translations I have found or seen.
I eventually found the FontForge Windows installer package and opened the PDF through the installed program. Worked a treat; so happy.
http://www.verypdf.com/app/pdf-font-extractor/pdf-font-extracting-tool.html
IMO this is the easiest way to extract fonts (Windows).
PDF2SVG version 6.0 from PDFTron does a reasonable job. It produces OpenType (.otf) fonts by default. Use --preserve_fontnames to preserve "the font/font-family naming scheme as obtained from the source file."
PDF2SVG is a commercial product, but you can download a free demo executable (which includes watermarks on the SVG output but doesn't otherwise restrict usage). There may be other PDFTron products that also extract fonts, but I only recently discovered PDF2SVG myself.
One of the best online tools currently available to extract pdf fonts is http://www.pdfconvertonline.com/extract-pdf-fonts-online.html
This is a follow-up to the FontForge section of Kurt Pfeifle's answer, specific to Red Hat (and possibly other Linux distros).
After opening the PDF and selecting the font you want, you will want to select "File -> Generate Fonts..." option.
If there are errors in the file, you can choose to ignore them or save the file and edit them. Most of the errors can be fixed automatically if you click "Fix" enough times.
Click "Element -> Font Info...", and "Fontname", "Family Name" and "Name for Humans" are all set to values you like. If not, modify them and save the file somewhere. These names will determine how your font appears on the system.
Select your file name and click "Save..."
Once you have your TTF file, you can install it on your system by
Copying it to folder /usr/share/fonts (as root)
Running fc-cache -f /usr/share/fonts/ (as root)

Programmatically add comments to PDF header

Has anyone had any success with adding additional information to a PDF file?
We have an electronic medical record system which produces medical documents for our users. In the past, those documents have been Print-To-File (.prn) files which we have fed to a system that displayed them as part of an enterprise medical record.
Now the hospital's enterprise medical record vendor wants to receive the documents as PDF, but still wants all of the same information stored in the header.
Honestly, we can't figure out how to put information into a PDF file that doesn't break the PDF file.
Here is the start of one of our PDFs...
%PDF-1.4
%âãÏÓ
6 0 obj
<<
/Type /XObject
/Subtype /Image
/BitsPerComponent 8
/Width 854
/Height 130
/ColorSpace /DeviceRGB
/Filter /DCTDecode
/Length 17734>>
stream
In our PRN files, we would insert information like this:
%MRN% TEST000001
%ACCT% TEST0000000000001
%DATE% 01/01/2009^16:44
%DOC_TYPE% Clinical
%DOC_NUM% 192837475
%DOC_VER% 1
My question is: can I insert this information into a PDF in a manner which allows the document server to perform post-processing, yet is NOT visible to the doctor who views the PDF?
Thank you,
David Walker
Yes, you can. Any line in a PDF file that starts with a percent sign is a comment and is ignored as such (the first two lines of the PDF are actually comments as well). So you can pretty much insert your information into the PDF just as you did into the PRN.
However:
The PDF format works with byte-position references, so if you insert data into a finished PDF file, this will push the rest of the data away from its original position and thus break the file. You also cannot simply append it to the file, because a PDF file has to end with
startxref
123456
%%EOF
(the 123456 is an example). You could insert your data right before these three lines. The byte position of the "startxref" part is never referenced anywhere, so you won't break anything if you push this final part towards the end.
Edit: This of course assumes there is no checksumming, signing or encryption going on. That would make things more complicated.
Edit 2: As Javier correctly pointed out, you can also just add your data to the end, followed by a copy of those three lines. It boils down to the same thing, but it's a little easier.
PDFs are allowed to have multiple versions, each simply appended at the end; but the very end must hold the offset to the main reference table. So just read the last three lines, append your data, and reattach the original ending.
You can either remove the original ending or leave it there. PDF readers will just go to the end and use the second-to-last line to find the reference table.
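A minimal Python sketch of the splice described above, assuming an unencrypted, unsigned PDF with a single trailer (per the caveats already mentioned):
# insert comment lines just before the final startxref/offset/%%EOF trailer
data = open("input.pdf", "rb").read()
idx = data.rfind(b"startxref")  # start of the final three lines
comments = b"%MRN% TEST000001\n%ACCT% TEST0000000000001\n"
with open("output.pdf", "wb") as f:
    f.write(data[:idx] + comments + data[idx:])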
Have you ever thought to embed your additional info inside the PDF as a separate file?
The generic PDF specification allows you to "attach files" to PDFs. Attached files can be anything: .txt, .doc, .xsl, .html or even .pdf. Attached files are contained in the PDF "container" file without corrupting the container's own content. (Special-purpose PDF specifications such as PDF/A-* and PDF/X-* may impose some restrictions on embedded/attached files.)
That allows you to tie additional info and/or data to PDF files and allows for common storage and processing. Attached files are supposed not to disturb any PDF viewer's rendering.
I've used that feature frequently, for various purposes:
store the parent document (say, the .doc) from which the .pdf was created in the first place inside that .pdf;
attach job ticketing information to a print file that is sent to the print shop;
etc.
Of course, recently discovered and published flaws in PDF processing software (and in the PDF spec itself) suggest staying away from embedding/attaching binary files to PDF files, because more and more readers will by default stop you from easily extracting/detaching the embedded/attached files.
However, there is no reason why you shouldn't be able to put your additional info into a medical-record-info.txt file of arbitrary length and internal format and attach it to the PDF:
MRN TEST000001
ACCT TEST0000000000001
DATE 2009-01-01
TIME 16:44:33.76
DOC_TYPE Clinical
DOC_NUM 192837475
DOC_VER 1
MORE_INFO blah blah
Hi, guys,
can you please process this file faster than usual? If you don't,
someone will be dying.
Seriously, David.
FWIW, the command-line tools pdftk.exe (Windows) and pdftk (Linux) are able to attach and detach embedded files from their container PDF. Acrobat Reader can also handle attachments.
You could setup/program/script your document server handling the PDF to automatically detach the embedded .txt file and trigger actions according to its content.
Of course, the doctor who views the PDF would be able to see that there is a file attachment in the PDF. But it wouldn't appear in his "normal" viewing; he'd have to take specific additional actions in order to extract and view it. (And then there is the option of setting a password on the PDF to protect it from unauthorized file detachment, and/or encoding, obscuring, or rot13-ing the .txt. Not exactly rock-solid methods, but 99% of doctors wouldn't be able to get at it even if you taught them how...)
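For reference, attaching and detaching with pdftk look roughly like this (the file names are placeholders):
pdftk report.pdf attach_files medical-record-info.txt output report-with-info.pdf
pdftk report-with-info.pdf unpack_files output extracted/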
You can still insert comments into a PDF file using the % character, but anyone would be able to access them with a text editor.
Your vendor could remove these comments after post-processing, so they don't actually get to the doctors.
You can store the data as real PDF metadata. For example, with CAM::PDF you can write metadata like this:
use CAM::PDF;
my $pdf = CAM::PDF->new('temp.pdf') || die;
my $info = $pdf->getValue($pdf->{trailer}->{Info}) || die;
$info->{PRN} = CAM::PDF::Node->new('dictionary', {
    DOC_TYPE => CAM::PDF::Node->new('string', 'Clinical'),
    DOC_NUM  => CAM::PDF::Node->new('number', 192837475),
    DOC_VER  => CAM::PDF::Node->new('number', 1),
});
$pdf->cleanoutput('out.pdf');
The Info node of the PDF then looks like this:
8 0 obj
<< /CreationDate (D:20080916083455-04'00')
/ModDate (D:20080916083729-04'00')
/PRN << /DOC_NUM 192837475 /DOC_TYPE (Clinical) /DOC_VER 1 >> >>
endobj
You can read the PRN data back out like so (simplistic code...)
my $pdf = CAM::PDF->new('out.pdf') || die;
my $info = $pdf->getValue($pdf->{trailer}->{Info}) || die;
my $prn = $info->{PRN};
if ($prn) {
    my $prndict = $pdf->getValue($prn);
    for my $key (sort keys %{$prndict}) {
        print "$key = ", $pdf->getValue($prndict->{$key}), "\n";
    }
}
Which makes output like this:
DOC_NUM = 192837475
DOC_TYPE = Clinical
DOC_VER = 1
PDF supports arbitrarily nested arrays, dictionaries and references so just about any data can be represented. For example, I built an entire filesystem embedded in a PDF just for fun!
At one point we were changing some Acrobat JS code by doing a text replace in a plain (unencrypted) PDF. The trick was that the lengths of the PDF blocks were hard-coded in the document, so we could not change the number of characters; we would just add extra spaces.
It worked great; the JS code executed and all.
Have you thought about using XMP?