I am experimenting with a system to scan letters and convert the scanned bitmaps to PDF, with the goal of high resolution and a small PDF file size.
I am prototyping with a scanner, GIMP for bitmap manipulation, and ImageMagick for bitmap-to-PDF conversion.
My process looks as follows:
1. Scan in 3x8-bit color at 600 DPI; the LZW-compressed true-color TIFF file is around 8 MB.
2. Use GIMP to convert the bitmap to an indexed image with a typical color table of 4 to 8 colors. That makes the image more compressible.
3. Use ImageMagick to convert the LZW-compressed indexed TIFF file to PDF (command shown below), at around 500 KB per page.
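For reference, the ImageMagick step is essentially this (filenames are placeholders):

convert letter-indexed.tif letter.pdf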
Now, to make the image compress even better, I could make the bitmap more compression-friendly. Before experimenting here, I would like to know how PS/PDF stores bitmaps.
Are bitmaps in PS/PDF run-length-encoded? If so, I would gain compression by removing single pixels from bitmap rows.
Do you have ideas for further optimizing here?
Do you know references to bitmap storage format in PS/PDF?
PDF supports many types of image compression, see: http://en.wikipedia.org/wiki/Pdf#Raster_images
I think you can specify which one to use with the ImageMagick -compress option: http://www.imagemagick.org/script/command-line-options.php#compress
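For example, something along these lines should select a particular filter (treat this as a sketch; I haven't checked which filters your ImageMagick build supports):

convert letter-indexed.tif -compress Zip letter.pdf

Zip (deflate) suits indexed images; -compress Group4 (CCITT fax) is another option, but it only makes sense for pure black-and-white input.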
A few companies (Luratech and CamiNova are the only ones I know) offer a "Mixed Raster Content" mode in PDF. The files are viewable in the standard Adobe Reader but are very, very small, comparable to DjVu.
"Mixed Raster Content" means they segment the image into a high resolution B&W mask (hard edges, lines, letters) and lower resolution smooth tone image (background pictures). The mask gets stored using a bitonal compression algorithm (probably JBIG2) and the smooth tone image gets compressed using JP2K (probably).
For bitmaps, IIRC, PDF uses deflate. But PDF can also store images with more specific image compression algorithms, such as JPEG (lossy), CCITT (lossless), JBIG2 (lossy and lossless) and JPX (i.e. JPEG 2000, lossy and lossless).
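For a concrete picture: an image in a PDF is an XObject stream whose dictionary names the filter. A deflate-compressed RGB image looks roughly like this (object number, dimensions and length are made up):

10 0 obj
<< /Type /XObject /Subtype /Image
   /Width 4960 /Height 7016
   /ColorSpace /DeviceRGB /BitsPerComponent 8
   /Filter /FlateDecode /Length 123456 >>
stream
...compressed image data...
endstream
endobj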
Adobe's PDF reference might be a good place to start. From a very cursory glance, it looks like images are stored uncompressed, but that doesn't feel right at all. A PDF can also link to external images, JPEG for instance.
The compression method is generally selected by the tool creating the PDF and you may have limited control over that.
If you have Acrobat 9.0 there is a really nice 'hidden' feature which allows you to see the object tree inside a PDF (you are interested in the XObjects under Resources). There is a short blog on using it at http://pdf.jpedal.org/java-pdf-blog/bid/10479/Viewing-PDF-objects
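If you don't have Acrobat, the free qpdf tool can rewrite a PDF into a human-readable form, so you can inspect the image XObjects in a text editor (as far as I know this preserves the stream dictionaries verbatim):

qpdf --qdf --object-streams=disable input.pdf expanded.pdf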
I have an image of a website which I'm trying to convert to PDF. I have the image in several formats: PSD, PNG, JPG, TIFF, all saved losslessly.
I'm using the following command to convert the image to PDF:
convert -density 93 foo.jpg bar.pdf
Here is part of the original image:
And here is the same part, after converting to PDF:
As you can see, the second one is ever so slightly hazy. What's causing this, and how can I eliminate it? I've seen PDFs with crisp graphics, so I know it's possible.
If you are seeing the same results with multiple input types, the fuzziness is likely caused by the anti-aliasing feature of your PDF viewer. If you are using Acrobat, you can turn off image anti-aliasing by doing the following:
Go to Edit-->Preferences-->Page Display
Untick the option "Smooth Images" and hit "OK".
The crisp graphics you are seeing in other PDFs are likely vector graphics. ImageMagick creates a PDF and embeds your image inside it, where it may be subject to compression.
Also:
When using JPEG as input, add "-quality 100" to your ImageMagick call to retain the highest quality possible.
Use a higher value for the "-density" parameter (I would recommend at least 150) to generate a higher-resolution PDF.
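Putting both suggestions together, the call might look like this (the exact values are up to you):

convert -density 150 -quality 100 foo.png bar.pdf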
I'm organizing a large number of PDFs, some of which need to be inverted or have their contrast adjusted. But when I use convert to modify a PDF, the new file is much bigger than the original, even though I use the density and quality options to try to match the original quality. A typical command looks like this:
convert -density 300 OriginalPDF.pdf -quality 100 -negate NewPDF.pdf
This results in a PDF that looks very nearly as sharp as the original, but when switching between the two (with the original inverted within the PDF viewer's settings in qpdfview), one notices that the new one seems very slightly shrunken and that all the lines become slightly thicker/bolder. Obviously this isn't too bad, but shouldn't I be able to invert the colors with almost no noticeable changes?
This slight change becomes even more ridiculous when one notices the size disparity: the original image was 276 KB and the modified file is 28 MB. That's more than 100 times larger! Given that I have hundreds of PDFs, out of which more than 20 or 30 need to be (custom) modified, how can I keep the total size near the original total size, while retaining quality?
ImageMagick's documentation says:
However the reading of these formats is very complicated, as they are full computer languages designed specifically to generate a printed page on high quality laser printers. This is well beyond the scope of ImageMagick, and so it relies on a specialized delegate program known as "ghostscript" to read, and convert Postscript and PDF pages to a raster image.
So ImageMagick converts the PDF to a raster image first and then makes a simple PDF from that raster image. The output PDF is unsearchable and contains no vectors, no hidden text, etc., just a page-wide raster image per page. But PDF (and PostScript) is not just a set of images; it is a set of commands, text, vectors, fonts, and even embedded sub-scripts (to calculate output color, for example). A PDF is more like an application than a static image.
Anyway, I suppose you may have two types of input PDF files:
1. PDFs with page-wide images inside (for example, scanned documents). You should process only this type with ImageMagick; these files convert to nearly the same file size.
2. PDFs with pure text and vectors inside (for example, PDF invoices). This type should not be processed with ImageMagick, as the conversion damages the input file (and ultimately increases the output file size). If you still need to adjust the contrast or compression of images inside files of this type, consider using Ghostscript directly (see the sketch below); check this tutorial.
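As a sketch of the Ghostscript route (this recompresses the embedded images while keeping text and vectors intact, but it does not invert colors; the /ebook preset is just one of several):

gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dBATCH -sOutputFile=smaller.pdf input.pdf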
I have a PDF which uses 'Calibri' as a font. Our printers insist that it must be embedded into the document; however, when we do, the PDF is approximately 3x larger.
We initially thought there wasn't much we could do about this, but the printers sent over a document which has both 'Calibri' embedded and the smaller file size.
The difference between the two can be seen in these screenshots: ours vs. the printers'.
It's clear that embedded fonts are the culprit here.
How can we produce PDFs with this smaller file size?
The library we are using is Microsoft.Reporting.WebForms, but I suspect we may need to do some post-processing to reduce the size. Do you have any suggestions?
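For instance, I am wondering whether a Ghostscript pass with font subsetting enabled would do it (subsetting keeps only the glyphs actually used); this is only a guess on my part:

gs -sDEVICE=pdfwrite -dSubsetFonts=true -dCompatibilityLevel=1.4 -o subset.pdf input.pdf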
I am wondering if there is any way to compress specific sections of an image while preserving other sections. For example, I want the background of a large image compressed, but the title and description text laid over the background to stay crisp.
This would be pretty cool. Short answer: no.
Long answer:
JPEG and PNG.
Do the background as a JPEG and save it off as a separate file.
Then do the title and description as a PNG with transparency.
In whatever you are making (website, app), you can then overlay these images, and since the PNG has transparency it will appear as part of the original image.
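A minimal HTML sketch of that overlay (filenames are hypothetical):

<div style="position: relative;">
  <img src="background.jpg" alt="">
  <!-- transparent PNG layered on top of the JPEG background -->
  <img src="text-overlay.png" style="position: absolute; top: 0; left: 0;" alt="">
</div>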
At the end of the day we only have a few formats we can work with for image decoding on the end user's side: JPG, GIF, PNG, TIFF, BMP (and SVG, though some things don't support it).
None of these does exactly what you want. PNG is awesome, but its file size will be pretty huge compared to JPG, and JPG won't give you crisp text when you have an image in the background.
I wouldn't be surprised if someone has written an encoder for what you want to do, but if you send such a file to someone or something, they won't be able to decode it easily without your encoder, and that is why we stick to the standard formats.
The direct answers are:
If you are asking "can I . . . . in Photoshop", the answer to your question is NO.
If you are asking "can I . . . . programmatically," the answer to your question is YES, with some compression methods.
However, I sense there is a question behind your question.
You mention blurring. That suggests you are trying to save as JPEG because JPEG is the only major image compression technique that causes blurring.
The solution to this problem depends upon the nature of the image. Is it a photograph? Drawing? How many colors does it have?
Could you use another compression method (e.g., PNG as suggested previously)?
You might be able to get away with JPEG using a so-called "high" quality setting at the cost of increased file size.
You can choose to save in a format that has lossless compression, but the compression ratio is significantly lower. Having said that, save your file as JPG at the highest quality level (12), then open the new file and compare it to the original. At this level the loss of detail is relatively low, and if the text isn't on a single-color background you might find this acceptable for your needs.
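If you would rather do this from the command line than in Photoshop, the ImageMagick equivalent would be something like this (the quality value is illustrative; ImageMagick uses a 1-100 scale rather than Photoshop's 1-12):

convert input.png -quality 95 output.jpg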
This is a bit more of a fun question than a serious one, but how does the Adobe PDF format make documents so... portable?
I just created a small Word document, 235 KB in size, containing multiple color photos and a few textual phrases. A PDF created using CutePDF (which I understand isn't the most efficient method of PDF creation) is only 176 KB. That's a 25% reduction. When those files are placed into a compressed folder, the PDF compresses by a further 3% where the .docx only manages 2%. I'm sure that larger files would show even greater differences in size.
My question is, how does Adobe manage to make these files so much smaller? I understand that they are drawn from raster graphics, but my 3 bitmap files really can't be compressed that much further, can they?
If you have Acrobat 9 there is a nice tool built-in so you can see how the PDF was put together (and compressions used). There is a blog post explaining how to use it at http://pdf.jpedal.org/java-pdf-blog/bid/10479/Viewing-PDF-objects
There are a few ways it can be compressing this:
PDF files use LZW and ZIP (deflate) compression.
If the image is scaled in the document, or has a higher DPI on disk than you allow for in CutePDF (for example, if CutePDF is set to 300 DPI and the image is 600 DPI), it can be downscaled in the PDF.
Microsoft stores TONS of info in the .docx format, in XML; way more than is really needed to just render the content (for an example, try copying and pasting your text into a textbox cell and look at the HTML that comes out; I had a size limit on a textbox for a CMS, and a 7-word sentence ballooned to 950 characters). This is so the document can be edited later, with a lot of esoteric info to make sure everything displays right in every possible permutation. The PDF doesn't need that info, so it can just record the font and size and strip out all the unnecessary info, saving a ton of space.
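You can see that overhead for yourself, since a .docx file is just a ZIP archive of XML parts; listing its contents shows all the extra bookkeeping files:

unzip -l document.docx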
When you use such small files any overhead in the document format will have a disproportionate effect which is why you are seeing such large % differences.
I took a 2683 KB JPEG and inserted it into a new Word 2003 document. The resulting .doc file was 2725 KB (or 2697 KB as .docx). Turning this into a PDF gives me a 2701 KB PDF. So I am seeing a difference of 25 KB, but only about a 1% difference relative to the size of the image data. That is about half of what you got, but maybe your version of Word is more verbose when making .docx files?
For the PDF, Acrobat shows space usage as 2691 KB of image, 8.27 KB of overhead and 1 KB of fonts. PDF is quite a sparse format in its syntax, which limits overhead, and much of it consists of repeating strings, so it is easily compressible.
If you want to see what the PDF contains in a tree-like view you can download the demo version of CosEdit.