How to check if a PDF has any kind of digital signature

I need to understand whether a PDF has any kind of digital signature. I have to manage huge PDFs, e.g. 500 MB each, so I just need a way to separate non-signed from signed files (so I can send just the signed PDFs to a method that manages them). Every procedure I have found so far involves attempting to extract a certificate, e.g. via the Bouncy Castle libraries (in my case, for Java): if it is present the PDF is signed, and if it is not present or an exception is raised it is not (sic!). But this is obviously time- and memory-consuming, besides being an example of a resource-wasting implementation.
Is there any quick, language-independent way, e.g. opening the PDF file, reading the first bytes and finding some information telling that the file is signed?
Alternatively, is there any reference manual describing in detail how a PDF is structured internally?
Thank you in advance

You are going to want to use a PDF library rather than trying to implement this all yourself; otherwise you will get bogged down handling the variations of linearized documents, filters, incremental updates, object streams, cross-reference streams, and more.
With regard to reference material: per my cursory search, it looks like Adobe is no longer providing its version of the ISO 32000-1:2008 specification to any and all, though that specification is mainly a translation of the PDF v1.7 Reference manual into ISO-conforming language.
So assuming the PDF v1.7 Reference, the most relevant sections are going to be 8.7 (Digital Signatures), 3.6.1 (Document Catalog), and 8.6 (Interactive Forms).
The basic process is going to be:
1. Read the Document Catalog for 'Perms' and 'AcroForm' entries.
2. Read the 'Perms' dictionary for 'DocMDP', 'UR', or 'UR3' entries. If any of these entries exist, in all likelihood you have either a certified document or a Reader-enabled document.
3. Read the 'AcroForm' entry (and make sure that you do not have an 'XFA' entry, because, in the words of Frazier from Porgy and Bess: dat's a complication!). You basically want to first check whether there is an (optional) 'SigFlags' entry, in which case a non-zero value indicates that there is a signature in the Fields array. Otherwise, you need to walk each entry of the 'Fields' array looking for a field dictionary with an 'FT' (field type) entry set to 'Sig' (signature) and a 'V' (value) entry that is not null.
Using a PDF library that can use the document's cross-reference table to navigate you to the right indirect objects should be faster and less resource-intensive than a brute-force search of the document for a certificate; a rough sketch of the walk follows.
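For illustration, here is a minimal sketch of that catalog walk in Python using the pypdf library (an assumption on my part; any library that exposes the raw PDF dictionaries will do). It only looks at the Document Catalog, /Perms and /AcroForm, and never touches page content:

from pypdf import PdfReader

def resolve(obj):
    # pypdf may hand back indirect references; get_object() resolves them
    # and is a no-op on direct objects.
    return obj.get_object() if obj is not None else None

def has_signature(path):
    reader = PdfReader(path)
    root = resolve(reader.trailer.get("/Root"))      # the Document Catalog

    # Certified or Reader-enabled documents advertise themselves in /Perms.
    perms = resolve(root.get("/Perms"))
    if perms and any(k in perms for k in ("/DocMDP", "/UR", "/UR3")):
        return True

    acro_form = resolve(root.get("/AcroForm"))
    if acro_form is None:
        return False

    # A non-zero /SigFlags is a strong hint that a signature field exists.
    if acro_form.get("/SigFlags", 0):
        return True

    # Otherwise walk /Fields looking for a signed signature field.
    for field in resolve(acro_form.get("/Fields")) or []:
        field = field.get_object()
        if field.get("/FT") == "/Sig" and field.get("/V") is not None:
            return True
    return False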

This is not the optimal solution, but it is another option... you can check for "/SigFlags" and stop at the first match:
grep -m1 "/SigFlags" "${PDF_FILE}"
or get such files inside a directory:
grep -r --include=*.pdf -m1 -l "/SigFlags" . > signed_pdfs.txt
grep -r --include=*.pdf -m1 -L "/SigFlags" . > non_signed_pdfs.txt
Grep can be very fast for big files. You can run it in a batch for a certain time and process the resulting lists (.txt files) after that.
Note that the file could be modified incrementally after a signature, so the last revision might not be signed; that depends on what you take "signed" to mean.
Anyway, if the file doesn't contain a /SigFlags string at all, it is almost certain that it was never signed.
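If you would rather do the same check from code without shelling out to grep, a chunked byte scan keeps memory flat even on 500 MB files. A small Python sketch (the chunk size and overlap handling are arbitrary choices of mine):

def contains_sigflags(path, chunk_size=1 << 20):
    needle = b"/SigFlags"
    overlap = len(needle) - 1
    with open(path, "rb") as f:
        prev = b""
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return False
            if needle in prev + chunk:
                return True
            # keep the tail of this chunk in case the token spans two chunks
            prev = chunk[-overlap:]

The same caveats as for grep apply: this only says whether a /SigFlags key appears somewhere in the file, not whether the latest revision is actually covered by a signature.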
Note that conforming readers start reading from the end of the file, because that is where the offset of the cross-reference table (which says where every object is) lives.
I advise you to use peepdf to inspect the inner structure of the file. It supports executing commands against the file. For example:
$ peepdf -C "search /SigFlags" signed.pdf
[6]
$ peepdf -C "search /SigFlags" non-signed.pdf
Not found!!
But I have not tested its performance. You can use it to browse the internal structure of the PDF and learn from the PDF v1.7 Reference. Check the annexes with PDF examples there.

From the command line you can check whether a file has a digital signature with the pdfsig tool from the poppler-utils package (works on Ubuntu 20.04).
pdfsig pdffile.pdf
will produce output with detailed data on the signatures included and their validation status. If you need to scan a PDF file tree and get a list of signed PDFs, you can use a bash command like:
find ./path/to/files -iname '*.pdf' \
-exec bash -c 'pdfsig "$0"; \
if [[ $? -eq 0 ]]; then \
echo "$0" >> signed-files.txt; fi' {} \;
You will get a list of signed files in signed-files.txt file in the local directory.
I have found this to be much more reliable than trying to grep some text out of a pdf file (for example, the pdfs produced by signing services in Lithuania do not contain the string "SigFlags" which was mentioned in the previous answers).

After six years, this is the solution I implemented in Java via iText; it can detect the presence of any PAdES signature in an unprotected PDF file.
This easy method returns a 3-state Boolean (don't wallop me for that, lol): Boolean.TRUE means "signed"; Boolean.FALSE means "not signed"; null means that something nasty happened reading the PDF (and in this case, I send the file to the old slow analysis procedure). After about half a million PAdES-signed PDFs were scanned, I didn't have any false negatives, and after about 7 million unsigned PDFs I didn't have any false positives.
Maybe I was just lucky (my PDF files were signed only once, and always in the same way), but it seems that this method works - at least for me. Thanks @Patrick Gallot
private Boolean isSigned(URL url)
{
    try {
        PdfReader reader = new PdfReader(url);
        PRAcroForm acroForm = reader.getAcroForm();
        if (acroForm == null) {
            return false;
        }
        // The following can lead to false negatives
        // boolean hasSigflags = acroForm.getKeys().contains(PdfName.SIGFLAGS);
        // if (!hasSigflags) {
        //     return false;
        // }
        List<?> fields = acroForm.getFields();
        for (Object k : fields) {
            FieldInformation fi = (FieldInformation) k;
            PdfObject ft = fi.getInfo().get(PdfName.FT);
            if (PdfName.SIG.equals(ft)) {
                logger.info("Found signature named {}", fi.getName());
                return true;
            }
        }
    } catch (Exception e) {
        logger.error("Whazzup?", e);
        return null;
    }
    return false;
}
Another function that should work correctly (I found it recently while checking a paper written by Bruno Lowagie, Digital Signatures for PDF documents, page 124) is the following one:
private Boolean isSignedShorter(URL url)
{
    try {
        PdfReader reader = new PdfReader(url);
        AcroFields fields = reader.getAcroFields();
        return !fields.getSignatureNames().isEmpty();
    } catch (Exception e) {
        logger.warn("Whazzup?", e);
        return null;
    }
}
I personally tested it on about a thousand signed/unsigned PDFs and it seems to work too, probably better than mine in case of complex signatures.
I hope to have given a good starting point to solve my original issue :)

Related

Best practice using NSLocalizedString

I'm (like everyone else) using NSLocalizedString to localize my app.
Unfortunately, there are several "drawbacks" (not necessarily the fault of NSLocalizedString itself), including:
No autocompletion for strings in Xcode. This makes working with it not only error-prone but also tiresome.
You might end up redefining a string simply because you didn't know an equivalent string already existed (e.g. "Please enter password" vs. "Enter password first").
Similarly to the autocompletion issue, you need to "remember"/copy-paste the comment strings, or else genstrings will end up with multiple comments for one string.
If you want to use genstrings after you've already localized some strings, you have to be careful not to lose your old localizations.
The same strings are scattered throughout your whole project. For example, you used NSLocalizedString(@"Abort", @"Cancel action") everywhere, and then Code Review asks you to rename the string to NSLocalizedString(@"Cancel", @"Cancel action") to make the code more consistent.
What I do (and after some searches on SO I figured many people do this) is to have a separate strings.h file where I #define all the localization macros. For example
// In strings.h
#define NSLS_COMMON_CANCEL NSLocalizedString(@"Cancel", nil)
// Somewhere else
NSLog(@"%@", NSLS_COMMON_CANCEL);
This essentially provides code completion, a single place to change variable names (so no need for genstrings anymore), and a unique keyword to auto-refactor. However, this comes at the cost of ending up with a whole bunch of #define statements that are not inherently structured (i.e. like LocString.Common.Cancel or something like that).
So, while this works somewhat fine, I was wondering how you guys do it in your projects. Are there other approaches to simplify the use of NSLocalizedString? Is there maybe even a framework that encapsulates it?
NSLocalizedString has a few limitations, but it is so central to Cocoa that it's unreasonable to write custom code to handle localization, meaning you will have to use it. That said, a little tooling can help; here is how I proceed:
Updating the strings file
genstrings overwrites your string files, discarding all your previous translations.
I wrote update_strings.py to parse the old strings file, run genstrings and fill in the blanks so that you don't have to manually restore your existing translations.
The script tries to match the existing string files as closely as possible to avoid having too big a diff when updating them.
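To make the idea concrete, here is a very reduced sketch of that "fill in the blanks" step: keep the translations that already exist and take the genstrings output only for keys that are new. The real script handles comments, escaping and edge cases; this illustration assumes plain "key" = "value"; lines and the UTF-16 encoding genstrings writes:

import re

STRINGS_LINE = re.compile(r'^"(.+)"\s*=\s*"(.*)";$')

def parse_strings(path, encoding="utf-16"):
    entries = {}
    with open(path, encoding=encoding) as f:
        for line in f:
            m = STRINGS_LINE.match(line.strip())
            if m:
                entries[m.group(1)] = m.group(2)
    return entries

def merge(old_path, generated_path, out_path):
    old = parse_strings(old_path)           # existing translations
    new = parse_strings(generated_path)     # fresh genstrings output
    with open(out_path, "w", encoding="utf-16") as out:
        for key, default in sorted(new.items()):
            out.write(f'"{key}" = "{old.get(key, default)}";\n')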
Naming your strings
If you use NSLocalizedString as advertised:
NSLocalizedString(#"Cancel or continue?", #"Cancel notice message when a download takes too long to proceed");
You may end up defining the same string in another part of your code, which may conflict as the same english term may have different meaning in different contexts (OK and Cancel come to mind).
That is why I always use a meaningless all-caps string with a module-specific prefix, and a very precise description:
NSLocalizedString(#"DOWNLOAD_CANCEL_OR_CONTINUE", #"Cancel notice window title when a download takes too long to proceed");
Using the same string in different places
If you use the same string multiple times, you can either use a macro as you did, or cache it as an instance variable in your view controller or your data source.
This way you won't have to repeat the description which may get stale and get inconsistent among instances of the same localization, which is always confusing.
As instance variables are symbols, you will be able to use auto-completion on these most common translations, and use "manual" strings for the specific ones, which would only occur once anyway.
I hope you'll be more productive with Cocoa localization with these tips!
As for autocompletition for strings in Xcode, you could try https://github.com/questbeat/Lin.
Agree with ndfred, but I would like to add this:
The second parameter can be used as a ... default value!!
(NSLocalizedStringWithDefaultValue does not work properly with genstrings; that's why I proposed this solution.)
Here is my custom implementation that uses NSLocalizedString with the comment as the default value:
1. In your precompiled header (.pch file), redefine the 'NSLocalizedString' macro:
// custom NSLocalizedString that uses the macro comment as default value
#import "LocalizationHandlerUtil.h"
#undef NSLocalizedString
#define NSLocalizedString(key,_comment) [[LocalizationHandlerUtil singleton] localizedString:key comment:_comment]
2. Create a class to implement the localization handler:
#import "LocalizationHandlerUtil.h"
@implementation LocalizationHandlerUtil

static LocalizationHandlerUtil * singleton = nil;

+ (LocalizationHandlerUtil *)singleton
{
    return singleton;
}

__attribute__((constructor))
static void staticInit_singleton()
{
    singleton = [[LocalizationHandlerUtil alloc] init];
}

- (NSString *)localizedString:(NSString *)key comment:(NSString *)comment
{
    // default localized string loading
    NSString * localizedString = [[NSBundle mainBundle] localizedStringForKey:key value:key table:nil];

    // if (value == key) and comment is not nil -> return the comment
    if ([localizedString isEqualToString:key] && comment != nil)
        return comment;

    return localizedString;
}

@end
3. Use it!
Make sure you add a Run Script phase in your app's Build Phases so your Localizable.strings file is updated at each build, i.e., new localized strings are added to your Localizable.strings file:
My build phase Script is a shell script:
Shell: /bin/sh
Shell script content: find . -name \*.m | xargs genstrings -o MyClassesFolder
So when you add this new line in your code:
self.title = NSLocalizedString(@"view_settings_title", @"Settings");
Then perform a build, and your Localizable.strings file will contain this new line:
/* Settings */
"view_settings_title" = "view_settings_title";
And since key == value for 'view_settings_title', the custom LocalizationHandlerUtil will return the comment, i.e. 'Settings'.
Voilà :-)
In Swift I'm using the following, e.g. for button "Yes" in this case:
NSLocalizedString("btn_yes", value: "Yes", comment: "Yes button")
Note usage of the value: for the default text value. The first parameter serves as the translation ID. The advantage of using the value: parameter is that the default text can be changed later but the translation ID remains the same. The Localizable.strings file will contain "btn_yes" = "Yes";
If the value: parameter were not used, the first parameter would serve as both the translation ID and the default text value. The Localizable.strings file would contain "Yes" = "Yes";. This way of managing localization files seems strange: especially if the translated text is long, the ID is long as well, and whenever any character of the default text value changes, the translation ID changes too. This leads to issues when external translation systems are used, because a changed translation ID is understood as newly added translation text, which may not always be desired.
I wrote a script to help maintaining Localizable.strings in multiple languages. While it doesn't help in autocompletion it helps to merge .strings files using command:
merge_strings.rb ja.lproj/Localizable.strings en.lproj/Localizable.strings
For more info see:
https://github.com/hiroshi/merge_strings
I hope some of you find it useful.
If anyone is looking for a Swift solution, you may want to check out my solution I put together here: SwiftyLocalization
With a few setup steps, you will have very flexible localization in Google Spreadsheet (comments, custom colors, highlighting, fonts, multiple sheets, and more).
In short, steps are: Google Spreadsheet --> CSV files --> Localizable.strings
Moreover, it also generates Localizables.swift, a struct that acts like interfaces to a key retrieval & decoding for you (You have to manually specify a way to decode String from key though).
Why is this great?
You no longer need to have keys as plain strings all over the place.
Wrong keys are detected at compile time.
Xcode can do autocomplete.
While there are tools that can autocomplete your localization keys, a reference to a real variable ensures that it's always a valid key; otherwise it won't compile.
// It's defined as computed static var, so it's up-to-date every time you call.
// You can also have your custom retrieval method there.
button.setTitle(Localizables.login.button_title_login, forState: .Normal)
The project uses Google Apps Script to convert Sheets --> CSV, and a Python script to convert the CSV files --> Localizable.strings. You can have a quick look at this example sheet to see what's possible.
With iOS 7 & Xcode 5, you should avoid using the 'Localizable.strings' method and use the new 'base localization' method. There are some tutorials around if you Google for 'base localization'.
Apple doc : Base localization
#define PBLocalizedString(key, val) \
[[NSBundle mainBundle] localizedStringForKey:(key) value:(val) table:nil]
Myself, I often get carried away with coding and forget to put the entries into the .strings files. Thus I have helper scripts to find what I still owe to the .strings files and need to translate.
As I use my own macro over NSLocalizedString, please review and update the script before using it, as I assumed for simplicity that nil is used as the second param to NSLocalizedString. The part you'd want to change is
NSLocalizedString\(@(".*?")\s*,\s*nil\)
Just replace it with something that matches your macro and NSLocalizedString usage.
Here is the script; you really only need Part 3. The rest just makes it easier to see where it all comes from:
# Part 1. Get keys from one of the Localizable.strings
perl -ne 'print "$1\n" if /^\s*(".+")\s*=/' myapp/fr.lproj/Localizable.strings
# Part 2. Get keys from the source code
grep -n -h -Eo -r 'NSLocalizedString\(@(".*?")\s*,\s*nil\)' ./ | perl -ne 'print "$1\n" if /NSLocalizedString\(@(".+")\s*,\s*nil\)/'
# Part 3. Get Part 1 and 2 together.
comm -2 -3 <(grep -n -h -Eo -r 'NSLocalizedString\(@(".*?")\s*,\s*nil\)' ./ | perl -ne 'print "$1\n" if /NSLocalizedString\(@(".+")\s*,\s*nil\)/' | sort | uniq) <(perl -ne 'print "$1\n" if /^\s*(".+")\s*=/' myapp/fr.lproj/Localizable.strings | sort) | uniq >> fr-localization-delta.txt
The output file contains keys that were found in the code, but not in the Localizable.strings file. Here is a sample:
"MPH"
"Map Direction"
"Max duration of a detailed recording, hours"
"Moving ..."
"My Track"
"New Trip"
Certainly can be polished more, but thought I'd share.

Batch OCR Program for PDFs [closed]

This has been asked before, but I don't really know if the answers help me. Here is my problem: I got a bunch of (10,000 or so) pdf files. Some were text files that were saved using adobe's print feature (so their text is perfect and I don't want to risk screwing them up). And some were scanned images (so they don't have any text and I will have to settle for OCR). The files are in the same directory and I can't tell which is which. Ultimately I want to turn them into .txt files and then do string processing on them. So I want the most accurate OCR possible.
It seems like people have recommended:
Adobe Acrobat (I don't have a licensed copy of this, so ... plus if ABBYY FineReader or something is better, why pay for it if I won't use it)
OCRopus (I can't figure out how to use this thing),
Tesseract (which seems like it was great in 1995, but I'm not sure if there's something more accurate; plus it doesn't do PDFs natively and I'd have to convert to TIFF. That raises its own problem, as I don't have a licensed copy of Acrobat, so I don't know how I'd convert 10,000 files to TIFF. Plus I don't want 10,000 30-page documents turned into 300,000 individual TIFF images).
wowocr
pdftextstream (that was from 2009)
ABBYY FineReader (apparently it's $$$, but I will spend $600 to get this done if this thing is significantly better, i.e. has more accurate OCR).
Also I am a n00b to programming so if it's going to take like weeks to learn how to do something, I would rather pay the $$$. Thx for input/experiences.
BTW, I'm running Linux Mint 11 64 bit and/or windows 7 64 bit.
Here are the other threads:
Batch OCRing PDFs that haven't already been OCR'd
Open source OCR
PDF Text Extraction Approach Using OCR
https://superuser.com/questions/107678/batch-ocr-for-many-pdf-files-not-already-ocred
Just to put some of your misconceptions straight...
" I don't have a licensed copy of acrobat so I don't know how I'd convert 10,000 files to tiff."
You can convert PDFs to TIFF with the help of Free (as in liberty) and free (as in beer) Ghostscript. Your choice if you want to do it on Linux Mint or on Windows 7. The commandline for Linux is:
gs \
-o input.tif \
-sDEVICE=tiffg4 \
input.pdf
"i don't want 10,000 30 page documents turned into 30,000 individual tiff images"
You can have "multipage" TIFFs easily. Above command does create such TIFFs of the G4 (fax tiff) flavor. Should you even want single-page TIFFs instead, you can modify the command:
gs \
-o input_page_%03d.tif \
-sDEVICE=tiffg4 \
input.pdf
The %03d part of the output filename will automatically translate into a series of 001, 002, 003 etc.
Caveats:
The default resolution for the tiffg4 output device is 204x196 dpi. You probably want a better value. To get 720 dpi you should add -r720x720 to the commandline.
Also, if your Ghostscript installation uses letter as its default media size, you may want to change it. You can use -gXxY to set widthxheight in device points. So to get ISO A4 output page dimensions in landscape you can add a -g8420x5950 parameter.
So the full command which controls these two parameters, to produce 720 dpi output on A4 in portrait orientation, would read:
gs \
-o input.tif \
-sDEVICE=tiffg4 \
-r720x720 \
-g5950x8420 \
input.pdf
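Since the original concern was converting 10,000 files, the same command is easy to drive from a small script. A sketch in Python (assuming gs is on the PATH; the directory names and the 720 dpi resolution are just placeholders):

import subprocess
from pathlib import Path

src = Path("pdfs")            # hypothetical input directory
dst = Path("tiffs")
dst.mkdir(exist_ok=True)

for pdf in sorted(src.glob("*.pdf")):
    out = dst / (pdf.stem + ".tif")
    # One multipage G4 TIFF per PDF, as in the commands above.
    subprocess.run(
        ["gs", "-o", str(out), "-sDEVICE=tiffg4", "-r720x720", str(pdf)],
        check=True,
    )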
Figured I would try to contribute by answering my own question (have written some nice code for myself and could not have done it without help from this board). If you cat the PDF files in Unix (well, OS X for me), then the PDF files that have text will have the word "Font" in them (as a string, but mixed in with other text), because that's how the file tells Adobe what fonts to display.
The cat command in bash seems to have the same output as reading the file in binary mode in Python (using 'rb' mode when opening the file instead of 'w' or 'r' or 'a'). So I'm assuming that all PDF files that contain text will have the word "Font" in the binary output and that no image-only files ever will. If that's always true, then this code will make a list of all PDF files in a single directory that have text and a separate list of those that have only images. It saves each list to a separate .txt file, then you can use a command in bash to move the PDF files to the appropriate folder.
Once you have them in their own folders, you can run your batch OCR solution on just the PDF files in the images_only folder. I haven't gotten that far yet (obviously).
import os, re

# path is the directory with the files; the other two are the names of the files you will store your lists in
path = 'C:/folder_with_pdfs'
files_with_text = open('files_with_text.txt', 'a')
image_only_files = open('image_only_files.txt', 'a')

# have os make a list of all files in that dir for a loop
filelist = os.listdir(path)

# compile a regular expression that matches "Font" anywhere in the binary data
mysearch = re.compile(rb'.*Font.*', re.DOTALL)

# loop over all files in the directory, open them in binary ('rb'), search that binary for "Font"
# if they have "Font" they have text, if not they don't
# (pdf uses the Font keyword to describe the fonts to display, so it appears whenever the pdf contains text)
for pdf in filelist:
    openable_file = os.path.join(path, pdf)
    cat_file = open(openable_file, 'rb')
    usable_cat_file = cat_file.read()
    cat_file.close()
    # print(usable_cat_file)
    if mysearch.match(usable_cat_file):
        files_with_text.write(pdf + '\n')
    else:
        image_only_files.write(pdf + '\n')

files_with_text.close()
image_only_files.close()
To move the files, I entered this command in the bash shell:
cat files_with_text.txt | while read i; do mv "$i" /Volumes/hard_drive_name/new_destination_directory_name; done
Also, I didn't re-run the Python code above after hand-editing it, so it might be buggy, I don't know.
This is an interesting problem. If you are willing to work on Windows in .NET, you can do this with dotImage (disclaimer, I work for Atalasoft and wrote most of the OCR engine code). Let's break the problem down into pieces - the first is iterating over all your PDFs:
string[] candidatePDFs = Directory.GetFiles(sourceDirectory, "*.pdf");
PdfDecoder decoder = new PdfDecoder();

foreach (string path in candidatePDFs) {
    using (FileStream stm = new FileStream(path, FileMode.Open)) {
        if (decoder.IsValidFormat(stm)) {
            ProcessPdf(path, stm);
        }
    }
}
This gets a list of all files that end in .pdf and if the file is a valid pdf, calls a routine to process it:
public void ProcessPdf(string path, Stream stm)
{
    using (Document doc = new Document(stm)) {
        int i = 0;
        foreach (Page p in doc.Pages) {
            if (p.SingleImageOnly) {
                ProcessWithOcr(path, stm, i);
            }
            else {
                ProcessWithTextExtract(path, stm, i);
            }
            i++;
        }
    }
}
This opens the file as a Document object and asks if each page is image only. If so it will OCR the page, else it will text extract:
public void ProcessWithOcr(string path, Stream pdfStm, int page)
{
    using (Stream textStream = GetTextStream(path, page)) {
        PdfDecoder decoder = new PdfDecoder();
        using (AtalaImage image = decoder.Read(pdfStm, page)) {
            ImageCollection coll = new ImageCollection();
            coll.Add(image);
            ImageCollectionImageSource source = new ImageCollectionImageSource(coll);
            OcrEngine engine = GetOcrEngine();
            engine.Initialize();
            engine.Translate(source, "text/plain", textStream);
            engine.Shutdown();
        }
    }
}
What this does is rasterize the PDF page into an image and put it into a form that is palatable for engine.Translate. This doesn't strictly need to be done this way; one could get an OcrPage object from the engine from an AtalaImage by calling Recognize, but then it would be up to client code to loop over the structure and write out the text.
You'll note that I've left out GetOcrEngine() - we make available 4 OCR engines for client use: Tesseract, GlyphReader, RecoStar, and Iris. You would select the one that would be best for your needs.
Finally, you would need the code to extract text from the pages that already have perfectly good text on them:
public void ProcessWithTextExtract(string path, Stream pdfStream, int page)
{
    using (Stream textStream = GetTextStream(path, page)) {
        StreamWriter writer = new StreamWriter(textStream);
        using (PdfTextDocument doc = new PdfTextDocument(pdfStream)) {
            PdfTextPage textPage = doc.GetPage(page);
            writer.Write(textPage.GetText(0, textPage.CharCount));
        }
        writer.Flush();
    }
}
This extracts the text from the given page and writes it to the output stream.
Finally, you need GetTextStream():
public Stream GetTextStream(string sourcePath, int pageNo)
{
    string dir = Path.GetDirectoryName(sourcePath);
    string fname = Path.GetFileNameWithoutExtension(sourcePath);
    string finalPath = Path.Combine(dir, String.Format("{0}p{1}.txt", fname, pageNo));
    return new FileStream(finalPath, FileMode.Create);
}
Will this be a 100% solution? No. Certainly not. You could imagine PDF pages that contain a single image with a box drawn around it; this would clearly fail the image-only test but return no useful text. Probably a better approach is to just use the extracted text, and if that doesn't return anything, then try an OCR engine. Changing from one approach to the other is a matter of writing a different predicate.
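If you want that text-first predicate without a particular SDK, it can be approximated from the command line: extract the text and only send files with no usable text to OCR. A rough Python sketch using pdftotext from poppler-utils (the character threshold is an arbitrary guess of mine):

import subprocess

def needs_ocr(pdf_path, min_chars=20):
    # Ask pdftotext to write to stdout ("-"); if the PDF has a usable text
    # layer this returns it, otherwise the output is (nearly) empty.
    result = subprocess.run(
        ["pdftotext", pdf_path, "-"],
        capture_output=True, text=True,
    )
    return len(result.stdout.strip()) < min_chars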
The simplest approach would be to use a single tool such as ABBYY FineReader, OmniPage etc. to process the images in one batch without having to sort them into scanned vs not-scanned images. I believe FineReader converts the PDFs to images before performing OCR anyway.
Using an OCR engine will give you features such as automatic deskew, page orientation detection, image thresholding, despeckling etc. These are features you would otherwise have to buy an image processing library for and program yourself, and it could prove difficult to find an optimal set of parameters for your 10,000 PDFs.
Using the automatic OCR approach will have other side effects depending on the input images, and you would find you get better results if you sorted the images and set optimal parameters for each type of image. For accuracy it would be much better to use a proper PDF text extraction routine on the PDFs that have perfect text.
At the end of the day it will come down to time and money versus the quality of the results that you need. A commercial OCR program will be the quickest and easiest solution. If you have clean text-only documents then a cheap OCR program will work as well as an expensive solution. The more complex your documents, the more money you will need to spend to process them.
I would try finding some demo/trial versions of commercial OCR engines and just see how they perform on your different document types before spending too much time and money.
I have written a small wrapper for Abbyy OCR4LINUX CLI engine (IMHO, doesn't cost that much) and Tesseract 3.
The wrapper can batch convert files like:
$ pmocr.sh --batch --target=pdf --skip-txt-pdf /some/directory
The script uses pdffonts to determine whether a PDF file has already been OCRed, and skips those. Also, the script can work as a system service to monitor a directory and launch an OCR action as soon as a file enters the directory.
Script can be found here:
https://github.com/deajan/pmOCR
Hopefully, this helps someone.
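Incidentally, the pdffonts check mentioned above is easy to reproduce on its own if all you need is an "already has a text layer" test. A sketch (assuming poppler-utils is installed, and relying on pdffonts printing a two-line header followed by one line per font):

import subprocess

def has_text_layer(pdf_path):
    # pdffonts prints a two-line header; any further lines are fonts,
    # and embedded fonts are a good proxy for "this PDF already has text".
    out = subprocess.run(["pdffonts", pdf_path],
                         capture_output=True, text=True).stdout
    return len(out.strip().splitlines()) > 2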

convert pdf to svg

I want to convert PDF to SVG. Please suggest some libraries/executables that will be able to do this efficiently. I have written my own Java program using the Apache PDFBox and Batik libraries:
PDDocument document = PDDocument.load( pdfFile );
DOMImplementation domImpl =
    GenericDOMImplementation.getDOMImplementation();

// Create an instance of org.w3c.dom.Document.
String svgNS = "http://www.w3.org/2000/svg";
Document svgDocument = domImpl.createDocument(svgNS, "svg", null);
SVGGeneratorContext ctx = SVGGeneratorContext.createDefault(svgDocument);
ctx.setEmbeddedFontsOn(true);

// Render each page into its own SVG Graphics2D implementation.
for (int i = 0; i < document.getNumberOfPages(); i++) {
    String svgFName = svgDir + "page" + i + ".svg";
    (new File(svgFName)).createNewFile();

    // Create an instance of the SVG Generator.
    SVGGraphics2D svgGenerator = new SVGGraphics2D(ctx, false);
    Printable page = document.getPrintable(i);
    page.print(svgGenerator, document.getPageFormat(i), i);
    svgGenerator.stream(svgFName);
}
This solution works, but the size of the resulting SVG files is huge (many times greater than the PDF). I figured out where the problem is by looking at the SVG in a text editor: it encloses every character in the original document in its own text block, even if the font properties of the characters are the same. For example, the word hello will appear as 6 different text blocks. Is there a way to fix the above code? Or please suggest another solution that will work more efficiently.
Inkscape can also be used to convert PDF to SVG. It's actually remarkably good at this, and although the code that it generates is a bit bloated, at the very least, it doesn't seem to have the particular issue that you are encountering in your program. I think it would be challenging to integrate it directly into Java, but inkscape provides a convenient command-line interface to this functionality, so probably the easiest way to access it would be via a system call.
To use Inkscape's command-line interface to convert a PDF to an SVG, use:
inkscape -l out.svg in.pdf
Which you can then probably call using:
Runtime.getRuntime().exec("inkscape -l out.svg in.pdf")
http://download.oracle.com/javase/1.4.2/docs/api/java/lang/Runtime.html#exec%28java.lang.String%29
Note that exec() returns immediately while the process runs asynchronously, so you should call waitFor() on the returned Process before reading "out.svg". In any case, Googling "java system call" will yield more info on how to do that part correctly.
Take a look at pdf2svg (also on GitHub):
To use
pdf2svg <input.pdf> <output.svg> [<pdf page no. or "all" >]
When using all give a filename with %d in it (which will be replaced by the page number).
pdf2svg input.pdf output_page%d.svg all
And for some troubleshooting see:
http://www.calcmaster.net/personal_projects/pdf2svg/
pdftocairo can be used to convert PDF to SVG. pdftocairo is part of poppler-utils.
For example, to convert only the first page of a PDF, the following command can be run:
pdftocairo -svg -f 1 -l 1 input.pdf
pdftk 82page.pdf burst
sh to-svg.sh
contents of to-svg.sh
#!/bin/bash
FILES=burst/*
for f in $FILES
do
inkscape -l "$f.svg" "$f"
done
I have encountered issues with the suggested inkscape, pdf2svg, pdftocairo, as well as the not suggested convert and mutool when trying to convert large and complex PDFs such as some of the topographical maps from the USGS. Sometimes they would crash, other times they would produce massively inflated files. The only PDF to SVG conversion tool that was able to handle all of them correctly for my use case was dvisvgm. Using it is very simple:
dvisvgm --pdf --output=file.svg file.pdf
It has various extra options for handling how elements are converted, as well as for optimization. Its resulting files can further be compacted by svgcleaner if necessary without perceptual quality loss.
Inkscape (@jbeard4) for me produced SVGs with no text in them at all, but I was able to make it work by going through PostScript as an intermediary using Ghostscript.
for page in $(seq 1 `pdfinfo $1.pdf | awk '/^Pages:/ {print $2}'`)
do
pdf2ps -dFirstPage=$page -dLastPage=$page -dNoOutputFonts $1.pdf $1_$page.ps
inkscape -z -l $1_$page.svg $1_$page.ps
rm $1_$page.ps
done
However this is a bit cumbersome, and the winner for ease of use has to go to pdf2svg (@Koen.) since it has that all argument, so you don't need to loop.
However, pdf2svg isn't available on CentOS 8, and to install it you need to do the following:
git clone https://github.com/dawbarton/pdf2svg.git && cd pdf2svg
#if you don't have development stuff specific to this project
sudo dnf config-manager --set-enabled powertools
sudo dnf install cairo-devel poppler-glib-devel
#git repo isn't quite ready to ./configure
touch README
autoreconf -f -i
./configure && make && sudo make install
It produces SVGs that actually look nicer than the Ghostscript-Inkscape ones above; the fonts seem to rasterize better.
pdf2svg $1.pdf $1_%d.svg all
But that installation is a bit much, too much even, if you don't have sudo. On top of that, pdf2svg doesn't support stdin/stdout, so the readily available pdftocairo (@SuperNova) worked a treat in these regards, and here's an example of "advanced" use below:
for page in $(seq 1 `pdfinfo $1.pdf | awk '/^Pages:/ {print $2}'`)
do
pdftocairo -svg -f $page -l $page $1.pdf - | gzip -9 >$1_$page.svg.gz
done
This produces files of the same quality and size (before compression) as pdf2svg, although not binary-identical (and even visually, jumping between the output of the two, some pixels of letters shift, but neither looks wrong or bad the way Inkscape's output did).
Inkscape does not work with the -l option any more. It said "Can't open file: /out.svg (doesn't exist)". The long form of that option is in the man page as --export-plain-svg and works, but shows a deprecation warning. I was able to fix and update the command by using the -o option on Inkscape 1.1.2-3ubuntu4:
inkscape in.pdf -o out.svg

Convert pdf from version 1.1 to 1.4 (or higher)

How can I convert PDF files from version 1.1 to 1.4 (or higher)?
Actually I need some sort of command-line tool for batch converting, or some API to be able to convert several documents dynamically.
Use Ghostscript tool.
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -o output.pdf input.pdf
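Since the question mentions batch converting, the same command is easy to drive from a script. A minimal sketch in Python (assuming gs is on the PATH; the out/ directory name is just a placeholder):

import subprocess
from pathlib import Path

out_dir = Path("out")
out_dir.mkdir(exist_ok=True)

for pdf in Path(".").glob("*.pdf"):
    # Rewrites each PDF with a 1.4 compatibility level, same flags as above.
    subprocess.run(
        ["gs", "-sDEVICE=pdfwrite", "-dCompatibilityLevel=1.4",
         "-o", str(out_dir / pdf.name), str(pdf)],
        check=True,
    )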
PDF 1.1 is forward compatible with PDF 1.4. Everything in PDF 1.1 will work with PDF 1.4; that's guaranteed by the spec. Let's assume that you've got some justifiable reason why this is not good enough for you (for example, that you have a non-spec-compliant tool that consumes PDF and explodes on any file version less than 1.4).
We can focus on the main syntactic differences between versions.
All PDF files have a header somewhere in the first 1024 bytes. In most cases, it's the very first line, but that's not guaranteed (I'm looking at you GhostScript!). The header looks like this in PDF 1.1:
%PDF-1.1
in PDF 1.4, it looks like this:
%PDF-1.4
So in theory, all you need is a tool that will look in the first 1024 bytes for a file for "%PDF-1.1" and change it to "%PDF-1.4". You could use sed, perl, etc to do something like that for you. You could write it in C and you would be tempted to do something like this:
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define PDFHEADERSIZE 1024

bool ChangeFileToNewPdfVersion(char *file)
{
    char *replacePoint = NULL;
    FILE *fp = fopen(file, "r+b"); /* read/update mode; "rw" is not a valid mode */
    char buf[PDFHEADERSIZE + 1];

    if (fp == NULL) return false;
    buf[PDFHEADERSIZE] = '\0';
    if (fread(buf, 1, PDFHEADERSIZE, fp) != PDFHEADERSIZE) { fclose(fp); return false; }
    fseek(fp, 0, SEEK_SET);
    if ((replacePoint = strstr(buf, "%PDF-1.1")) == NULL) { fclose(fp); return false; }
    replacePoint[7] = '4';
    if (fwrite(buf, 1, PDFHEADERSIZE, fp) != PDFHEADERSIZE) { fclose(fp); return false; }
    fflush(fp);
    fclose(fp);
    return true;
}
which will work in most sane cases. It will not work if the header area happens to contain zero bytes, which would act as string terminators inside the buffer.
A better choice (really) would be to cobble up a simple state machine to find %PDF-1. by reading 1 byte at a time until it either finds it or passes byte 1017 (1024 less the marker length), then read the next byte; if it's a '1', seek back a byte and write a '4'.
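The same idea is quick to script; here is a rough Python sketch that works on bytes (so stray zero bytes are not a problem) and simply patches the minor version digit in place, assuming the header really does sit in the first 1024 bytes as the spec requires:

def bump_pdf_header(path, new_minor=b"4"):
    with open(path, "r+b") as f:
        head = f.read(1024)
        i = head.find(b"%PDF-1.")
        if i == -1:
            return False                  # no header found; leave the file alone
        f.seek(i + len(b"%PDF-1."))
        f.write(new_minor)                # e.g. turns %PDF-1.1 into %PDF-1.4
        return True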
The only other thing you would need to worry about is that PDF 1.4 suggests that the document catalog should contain a Version key with the file version. Since this is defined as optional in the spec, you are safe to ignore it.
So this will solve your problem. I do not, however, believe that you should need to do this. Really.
You should take some time to read part of the PDF spec, specifically section I.2 about version numbers and compatibility.
I just had this problem. Trying to submit some PDFs to a financial institution. "We only support PDF 1.4 or newer". Apparently our HP scanner creates version 1.3 PDFs.
I opened the PDF file with Notepad++ and changed the 3 to a 4 and saved it. It was that simple.
It's the very first part of the file and it's in plain text.
Another option for a small number of PDF files is to open them in Chrome or another browser, then save as PDF or print to PDF. In my case, using Chrome, it saved to a newer PDF version and the bank accepted it.

Programmatically add comments to PDF header

Has anyone had any success with adding additional information to a PDF file?
We have an electronic medical record system which produces medical documents for our users. In the past, those documents have been Print-To-File (.prn) files which we have fed to a system that displayed them as part of an enterprise medical record.
Now the hospital's enterprise medical record vendor wants to receive the documents as PDF, but still wants all of the same information stored in the header.
Honestly, we can't figure out how to put information into a PDF file that doesn't break the PDF file.
Here is the start of one of our PDFs...
%PDF-1.4
%âãÏÓ
6 0 obj
<<
/Type /XObject
/Subtype /Image
/BitsPerComponent 8
/Width 854
/Height 130
/ColorSpace /DeviceRGB
/Filter /DCTDecode
/Length 17734>>
stream
In our PRN files, we would insert information like this:
%MRN% TEST000001
%ACCT% TEST0000000000001
%DATE% 01/01/2009^16:44
%DOC_TYPE% Clinical
%DOC_NUM% 192837475
%DOC_VER% 1
My question is, can I insert this information into a PDF in a manner which allows the document server to perform post-processing, yet is NOT visible to the doctor who views the PDF?
Thank you,
David Walker
Yes, you can. Any line in a PDF file that starts with a percent sign is a comment and as such ignored (the first two lines of the PDF actually are comments as well). So you can pretty much insert your information into the PDF as you did into the PRN.
However:
The PDF format works with byte position references, so if you insert data into a finished PDF file, this will push the rest of the data away from its original position and thus break the file. You also cannot simply append it to the file, because a PDF file has to end with
startxref
123456
%%EOF
(the 123456 is an example). You could insert your data right before these three lines. The byte position of the "startxref" part is never referenced anywhere, so you won't break anything if you push this final part towards the end.
Edit: This of course assumes there is no checksumming, signing or encryption going on. That would make things more complicated.
Edit 2: As Javier pointed out correctly, you can also just add your data to the end and just add a copy of the three lines to the end of that. Boils down to the same thing, but it's a little easier.
PDFs are designed to allow multiple revisions just appended at the end, but the very end must hold the offset to the main cross-reference table. Just read the last three lines, append your data, and reattach the original ending.
You can either remove the original ending or leave it there. PDF readers will just go to the end and use the second-to-last line to find the reference table.
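A sketch of that append-and-reattach idea in Python (assuming the PDF is not encrypted, signed or checksummed, per the caveat above; the comment contents are just examples):

def append_pdf_comments(path, lines):
    with open(path, "rb") as f:
        data = f.read()

    # Grab the original ending: the last "startxref / <offset> / %%EOF" block.
    idx = data.rfind(b"startxref")
    if idx == -1:
        raise ValueError("no startxref found; not a well-formed PDF?")
    tail = data[idx:]

    # Append the comment lines, then re-attach a copy of the original ending
    # so readers still find the cross-reference offset at the very end.
    block = b"".join(b"%" + line.encode("latin-1") + b"\n" for line in lines)
    with open(path, "ab") as f:
        f.write(b"\n" + block + tail)

append_pdf_comments("record.pdf", ["MRN TEST000001", "DOC_TYPE Clinical"])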
Have you ever thought to embed your additional info inside the PDF as a separate file?
The generic PDF specification allows you to "attach files" to PDFs. Attached files can be anything: *.txt, *.doc, *.xsl, *.html or even .pdf. Attached files are contained in the PDF "container" file without corrupting the container's own content. (Special-purpose PDF specifications such as PDF/A-* and PDF/X-* may impose some restrictions on embedded/attached files.)
That allows you to tie additional info and/or data to PDF files and allow for common storage and processing. Attached files are supposed to not disturb any PDF viewer's rendering.
I've used that feature frequently, for various purposes:
store the parent document (like .doc) inside the .pdf from which the .pdf was created in the first place;
tag job ticketing information onto a print file that is sent to the print shop;
and so on.
Of course, recently discovered and published flaws in PDF processing software (and in the PDF spec itself) suggest staying away from embedding/attaching binary files to PDF files, because more and more readers will by default stop you from easily extracting/detaching the embedded/attached files.
However, there is no reason why you shouldn't be able to put your additional info into a medical-record-info.txt file of arbitrary length and internal format and attach it to the PDF:
MRN TEST000001
ACCT TEST0000000000001
DATE 2009-01-01
TIME 16:44:33.76
DOC_TYPE Clinical
DOC_NUM 192837475
DOC_VER 1
MORE_INFO blah blah
Hi, guys,
can you please process this file faster than usual? If you don't,
someone will be dying.
Seriously, David.
FWIW, the commandline tools pdftk.exe (Windows) and pdftk (Linux) are able to attach and detach embedded files from their container PDF. Acrobat Reader can also handle attachments.
You could setup/program/script your document server handling the PDF to automatically detach the embedded .txt file and trigger actions according to its content.
Of course, the doctor who views the PDF would be able to see there is a file attachment in the PDF. But it wouldn't appear in his "normal" viewing. He'd have to take specific additional actions in order to extract and view it. (And then there is the option to set a password on the PDF to protect it from un-authorized file detachments. And/or encode, obscure, rot13 the .txt. Not exactly rock-solid methods, but 99% of doctors wouldn't be able to accomplish it even if you teach them how to...)
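For completeness, here is roughly how that attach/detach round trip looks when driven from a script with pdftk (a sketch; I'm assuming pdftk is installed and that its attach_files/unpack_files operations behave as documented, and the file names are placeholders):

import subprocess

# Attach the side-car text file to the PDF.
subprocess.run(
    ["pdftk", "report.pdf", "attach_files", "medical-record-info.txt",
     "output", "report-with-info.pdf"],
    check=True,
)

# Later, on the document server: detach every embedded file into ./attachments.
subprocess.run(
    ["pdftk", "report-with-info.pdf", "unpack_files", "output", "attachments"],
    check=True,
)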
You can still insert comments into a PDF file using the % character, but anyone would be able to read them with a text editor.
Your vendor could remove these comments after post-processing, so it doesn't actually get to the doctors.
You can store the data as real PDF metadata. For example, with CAM::PDF you can write metadata like this:
use CAM::PDF;
my $pdf = CAM::PDF->new('temp.pdf') || die;
my $info = $pdf->getValue($pdf->{trailer}->{Info}) || die;
$info->{PRN} = CAM::PDF::Node->new('dictionary', {
DOC_TYPE => CAM::PDF::Node->new('string', 'Clinical'),
DOC_NUM => CAM::PDF::Node->new('number', 192837475),
DOC_VER => CAM::PDF::Node->new('number', 1),
});
$pdf->cleanoutput('out.pdf');
The Info node of the PDF then looks like this:
8 0 obj
<< /CreationDate (D:20080916083455-04'00')
/ModDate (D:20080916083729-04'00')
/PRN << /DOC_NUM 192837475 /DOC_TYPE (Clinical) /DOC_VER 1 >> >>
endobj
You can read the PRN data back out like so (simplistic code...)
my $pdf = CAM::PDF->new('out.pdf') || die;
my $info = $pdf->getValue($pdf->{trailer}->{Info}) || die;
my $prn = $info->{PRN};
if ($prn) {
my $prndict = $pdf->getValue($prn);
for my $key (sort keys %{$prndict}) {
print "$key = ", $pdf->getValue($prndict->{$key}), "\n";
}
}
Which makes output like this:
DOC_NUM = 192837475
DOC_TYPE = Clinical
DOC_VER = 1
PDF supports arbitrarily nested arrays, dictionaries and references so just about any data can be represented. For example, I built an entire filesystem embedded in a PDF just for fun!
At one point we were changing some Acrobat JS code by doing a text replace in a plain (unencrypted) PDF. The trick was that the lengths of each PDF block were hard coded in the document. So, we could not change the number of characters. We would just add extra spaces.
It worked great; the JS code executed and all.
Have you thought about using XMP? It is the XML-based metadata format that can be embedded directly inside a PDF, which would give you a standard place for this kind of information.