I do a lot of quick-and-dirty PDF creation of long documents (100+ pages), for distribution to clients; my clients are often individuals, but sometimes corporate managers at banks and insurance companies.
Acrobat Pro allows you to save in many versions of PDF, from Acrobat 4 - Acrobat 10. Which should I use, as a general rule?
I don't often use advanced features in my documents: usually pictures and text. Since I send via email, I want the best compression possible... my documents often have lots of images. However, since my clients are banks and such, not cutting-edge technologists, I don't think they have the most recent Acrobat/PDF reader installed.
What is the best PDF version, as a compromise between document compression and widespread adoption?
I recommend PDF 1.4 (Acrobat 5). The PDF/A-1 (PDF for archiving) standard is also based on PDF 1.4, which speaks for its longevity and broad support.
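If you want to check which version a given file declares, the first bytes of a PDF carry a `%PDF-x.y` header (PDF 1.4 corresponds to Acrobat 5). A minimal stdlib sketch; note that Acrobat can also raise the effective version via a `/Version` key in the document catalog, so the header is only a quick check, not authoritative:

```python
def pdf_header_version(path):
    """Return the version declared in the %PDF-x.y file header, or None."""
    with open(path, "rb") as f:
        header = f.read(16)
    if header.startswith(b"%PDF-"):
        return header[5:8].decode("ascii")
    return None
```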
Is there any program that will allow me to superimpose the text (OCR) layer of a PDF on top of the PDF rendering?
I want to quickly see if the text layer has errors or not.
It would be most convenient if this could be done with a program; if not, a CLI command or script would also work.
Superimpose? That implies you'd like to add text, while I believe you want access to the text for detection and possibly further analysis of the quality of the OCRed text. Perhaps this needs further clarification.
Our developers worked for some time on algorithms to detect the presence of text in PDFs and then evaluate its quality. Many cases can trick a basic algorithm: a Bates number or imprinter added to an image-only PDF makes it seem as if the PDF has high-quality text when it has no actual text. Some copiers produce "searchable PDFs" using very low-quality OCR that contains many errors, but not necessarily on the first page, which is typically some kind of title page with large fonts, so the first line of text encountered by an algorithm seems high quality. Or the first page may have text while other pages do not, yet the algorithm may believe the whole PDF has text.
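To make those failure modes concrete, here is a toy text-quality heuristic (purely illustrative, not WiseTREND's actual algorithm): it judges extracted text by its ratio of alphabetic characters, and it is exactly the kind of check that a Bates stamp or a large-font title page can fool, especially if it is only run on the first page.

```python
def looks_like_real_text(page_text, min_alpha_ratio=0.6):
    """Toy heuristic: does extracted text look like language rather than
    OCR noise?  Checks the ratio of alphabetic characters.

    This kind of check is easy to trick: a Bates stamp like 'DOC-000123'
    on an otherwise image-only page, or a clean title page followed by
    garbage pages, both defeat it.
    """
    stripped = "".join(page_text.split())
    if not stripped:
        return False
    alpha = sum(c.isalpha() for c in stripped)
    return alpha / len(stripped) >= min_alpha_ratio
```

A real detector has to sample text from every page, not just the first, and weigh stamps and imprinter text separately from body text.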
In our commercial high-volume server-based OCR software (used by service bureaus, SaaS platforms, libraries, backlog conversions, etc.) we now have advanced detection of PDFs with an existing text layer, plus "smart decisions" that can filter out many of these false positives, so our OCR can skip re-OCRing PDFs that already contain high-quality text. If you are looking for a high-quality, inexpensive OCR platform, such detection is a feature of it, but it can't be used separately from our OCR: the OCR workflow is part of that filter. Our developers wrote and integrated these algorithms without external tools.
I am with www.wisetrend.com where we provide software solutions and consulting for various OCR projects.
I have a PDF file of 9 MB but I want to reduce it to less than 1 MB. How can I do that? I have tried all the online tools available.
It depends on what you produced it with, what level of PDF compliance you set, and the target purpose and amount of images.
What you made it with: some design software (Illustrator, for example) gives the option to leave editing information in the PDF. You would want to switch anything like that off.
Level of PDF compliance: compression got better with higher PDF versions, so go for the highest version you can.
Target purpose / content: target purpose means DPI. Re-sample the images to the appropriate size for the output purpose. If, for example, you intend the PDF to be displayed on a phone or tablet, then you do not need those massive images from your 24-megapixel SLR.
Acrobat Pro tends to be good at compressing and removing junk.
9 MB is quite large for a PDF, so I assume it is full of rich images. Reducing the size and colour depth of those images is likely to be the biggest benefit.
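The arithmetic behind "target purpose means DPI" is simple: the pixel dimensions an image needs are just its physical size on the page times the output DPI, and anything beyond that is wasted bytes. A quick sketch:

```python
def target_pixels(width_in, height_in, dpi):
    """Pixel dimensions needed to fill a given physical size at a given DPI."""
    return round(width_in * dpi), round(height_in * dpi)
```

A full US Letter page (8.5 x 11 in) at 150 DPI for tablet viewing needs only 1275 x 1650 pixels, about 2 megapixels; a 24-megapixel photo carries roughly ten times more data than that page can show.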
We are trying to figure out the best way to create a web service that delivers high quality text books to remote tablets and desktop clients. The books are copyrighted and sold to users so the delivery must be protected as much as possible against copy. The books' layout is very complicated, with lots of images, pictures, textures, tables, diagrams and the like. They are produced by InDesign in PDF format.
So far, our best guess is to store the PDF in single pages (a PDF per page) and scramble them with asymmetric keys, so all the decryption can be processed in memory with no temporary file generated.
Our concern is that PDF is a proprietary format and sometimes the file is too big (quality is an important concern for the client).
Is there any Open Source alternative to PDF, capable of delivering high quality, complicated layouts in smaller files?
If the document is to be viewable offline, your only way around this is to encrypt it and issue licence keys for it to be viewable.
There are commercial packages that will allow you to do this enabling you to limit the licence to machine, user or time period.
Ultimately you can't stop people coming up with ingenious ways of copying it, just make it more difficult.
You can use high-quality raster images as a PDF alternative.
Are there any PDF tools that generate information about the loading time and memory usage when displaying a PDF in a browser, and also the total number of elements inside the PDF?
Unfortunately not really. I've done some of this research, not for PDF in a browser but (and perhaps this is what you are looking at as well) PDF on mobile devices.
There are a number of factors that contribute and that to some extent can be tested for:
Whether or not big images exist in the PDF and what resolution they are. This is linked directly to memory usage.
What compression method is used for the images. Decompressing JPEG 2000 images specifically can increase load time significantly. Even worse, as JPEG 2000 can be progressively decompressed, it can give the appearance of a really bad PDF until the images have been fully decompressed and loaded (this is ugly specifically on somewhat older tablets, for example).
How complex the transparency effects are that are used in the document.
How many fonts are used in the document.
How many line-art objects (vector elements) with a large number of nodes (points) are used on a page.
You can test what is in the document using Acrobat Pro to some extent (there is a well-hidden tool, reachable when you save an optimised PDF file, that can audit how much space each kind of object uses in a PDF document). You can also use a preflight solution such as pdfToolbox from callas (I'm affiliated with this company) or PitStop from Enfocus; these tools let you get a report with the results of custom checks such as image resolution, compression, vector objects, colour spaces, etc.
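If you just want a rough idea of which image compression methods a PDF uses (for instance, to spot JPEG 2000 before it bites you on older tablets), you can scan the raw file for filter names. This is a crude sketch: it misses filters hidden inside compressed object streams, for which you would need a real PDF parser rather than a byte scan:

```python
import re

# Common PDF stream filters and what they imply for decode cost.
FILTERS = (
    b"/DCTDecode",       # JPEG
    b"/JPXDecode",       # JPEG 2000 (expensive, progressive decode)
    b"/FlateDecode",     # zlib/deflate
    b"/CCITTFaxDecode",  # CCITT fax, typical for bilevel scans
)

def count_filters(pdf_bytes):
    """Crudely count filter-name occurrences in raw PDF bytes."""
    return {name.decode(): len(re.findall(re.escape(name), pdf_bytes))
            for name in FILTERS}
```

Many `/JPXDecode` hits relative to page count is a warning sign for load time; lots of large `/DCTDecode` streams points at memory usage instead.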
I'm currently writing a typesetting application and I'm using PSG as the backend for producing postscript files. I'm now wondering whether that choice makes sense. It seems the ReportLab Toolkit offers all the features PSG offers, and more. ReportLab outputs PDF however.
Advantages PDF offers:
transparency
better support for character encodings (Unicode, for example)
ability to embed TrueType and even OpenType fonts
hyperlinks and bookmarks
Is there any reason to use PostScript instead of directly outputting PDF? While PostScript is a full programming language as opposed to PDF, that doesn't seem to offer any advantage for a basic document output format. I assume a PDF can be readily converted to PostScript for printing?
Some useful links:
Wikipedia: PDF
Adobe: PostScript vs. PDF
If you're planning on only outputting to a PostScript printer, then use PostScript. Otherwise, use PDF.
PDF is more widely supported by non-printer devices. And for your purposes, there aren't any technical advantages of PS over PDF (other than not being able to dump the file directly to a printer).
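On the conversion question: yes, Ghostscript converts PDF to PostScript via its ps2write output device. A small helper that builds the command line (this assumes the `gs` binary is installed; on Windows it may be named `gswin64c`):

```python
import shutil
import subprocess

def pdf_to_ps_cmd(src, dst):
    """Build the Ghostscript argv that converts a PDF to PostScript.

    Uses Ghostscript's ps2write output device; -dBATCH/-dNOPAUSE make
    the run non-interactive.
    """
    return ["gs", "-dBATCH", "-dNOPAUSE", "-sDEVICE=ps2write",
            "-sOutputFile=" + dst, src]

# Only run it if gs is actually on PATH:
# if shutil.which("gs"):
#     subprocess.run(pdf_to_ps_cmd("book.pdf", "book.ps"), check=True)
```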
Here are some things to consider:
gzipped postscript is often much smaller than an equivalent PDF
PDF is basically a generalized container format; if you didn't know that you can embed videos in a PDF, that should give you pause
PDF contains scripts that have been used for exploits (though this may be more the fault of bad PDF reader software)
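The first point is easy to demonstrate: PostScript is plain text, so it gzips extremely well. A toy measurement with a deliberately repetitive page description (a real document compresses less dramatically, but the effect is the same):

```python
import gzip

# A deliberately repetitive PostScript page description.
ps_source = ("%!PS-Adobe-3.0\n"
             + "100 100 moveto 200 200 lineto stroke\n" * 500
             + "showpage\n").encode("ascii")

compressed = gzip.compress(ps_source)
ratio = len(compressed) / len(ps_source)  # well under 1.0 for text like this
```

PDF narrows the gap by compressing its own streams with Flate internally, which is part of why a `.ps.gz` can still come out smaller than the equivalent PDF: the PDF also carries uncompressed structural overhead (cross-reference table, object dictionaries).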
PDF is a much more self-contained format with a high level of functionality. It also has more tools. Unless you specifically need PostScript, stick to PDF.
Avoid PDF like the plague. Adobe invented PDF and pushed it to consumers to make more money from suckers who believed all the hype about PDF that Adobe told its users. PDF is a bloated format that requires a slow, non-free reader to read and process correctly. Most free readers do not support 100% of Adobe's features, and likely support a subset of features that is also found in PostScript. For instance, ReportLab does not support 100% of PDF features.
Historical fake technical arguments for using PDF have been:
"No loops in PDF, which helps processing." False, as other formats without loops, such as XML, still have memory and processing issues.
"More fully featured." A false argument, as PostScript is more powerful and can do what PDF can do with fewer features.
"PostScript has to load all of the pages because it is a language." This is of course not true, as C, C++, Java and many other languages can load code at runtime.
"PostScript is missing feature X." True, but mostly because Adobe invented a new format to make money, not because feature X cannot be added to PostScript.
The real reason to use PDF instead of Postscript is that PDF readers are more common than Postscript readers.