Ghostscript to compress a batch of PDFs

I have no experience of programming.
My PDFs won't display images on the iPad in PDFExpert or GoodNotes because, from what I could find on the internet, the images are in JPEG2000.
These are large PDFs, up to 1500-2000 pages with images. One of these was an 80MB or so file. I tried printing it with Foxit to convert the images from JPEG2000 to JPG, but the file size jumped to 800MB... plus it's taking too long.
I stumbled upon Ghostscript, but I have NO clue how to use the command line interface.
I am very short on time. I pretty much need a step-by-step guide for a small script that converts all my PDFs in one go.
Very sorry about my inexperience and helplessness. Can someone spoon-feed me the steps for this?
EDIT: I want to switch the JPEG2000 to any other format that produces less of an increase in file size and causes a minimal loss in quality (within reason). I have no clue how to use Ghostscript. I basically want to change the compression on the images to something that will display correctly on the iPad while maintaining the quality of the rest of the text, as well as the embedded bookmarks.
I'll repeat that I have NO experience with command line...I don't even know how to point GS to the folder my PDFs are in...

You haven't really said what it is you want. 'Convert' PDFs how, exactly?
Note that switching from JPX (JPEG2000) to JPEG will result in a quality loss, because the image data will be quantised (with a different quantisation scheme to JPX) by the JPEG encoder. You can use a lossless compression scheme instead, but then you won't get the same kind of compression. You won't get the same compression ratio as JPX anyway no matter what you use, the result will be larger.
A simple Ghostscript command would be:
gs -sDEVICE=pdfwrite -o out.pdf in.pdf
Because JPEG2000 encoding is (or at least, was) patent encumbered, the pdfwrite device doesn't write images as JPX; by default it will write them several times with different compression schemes, and then use the one that gives the best compression (practically always JPEG).
Getting better results will require a more complex command line, but you'll also have to be more explicit about what exactly you want to achieve, and what the perceived problem with the simplistic command line is.
[EDIT]
Well, giving help on executing a command line is a bit off-topic for Stack Overflow; this is supposed to be a site for software developers :-)
Without knowing what operating system you are using it's hard to give you detailed instructions. I also have no idea what an iPad uses; I don't generally use Apple devices and my only experience is with Macs.
Presumably you know where (in which directory) you installed Ghostscript. Either open a command shell there and type the command ./gs, or execute the command by giving the full path, such as:
/usr/bin/gs
I thought the arguments on the command line were self-explanatory, but....
The -sDEVICE=pdfwrite switch tells Ghostscript to use the pdfwrite device, as you might guess from the name, that device writes PDF files as its output.
The -o switch gives the name (and full path if required) of the output file.
The final argument is the name (and again, full path if it's not in the current directory) of the input file.
So a command might look like:
/usr/bin/gs -sDEVICE=pdfwrite -o /home/me/output.pdf /home/me/input.pdf
Or if Ghostscript and the input file are in the same directory:
./gs -sDEVICE=pdfwrite -o out.pdf input.pdf
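Since you said you want to convert a whole folder of PDFs in one go, here is a minimal sketch of a wrapper around that same command, assuming a Unix-like shell and that gs is on your PATH (on Windows you would call gswin64c.exe from a batch file instead; the 'converted' folder name is just an example):
mkdir -p converted
for f in *.pdf; do
  # Rewrite each PDF with pdfwrite; the originals are left untouched.
  gs -sDEVICE=pdfwrite -o "converted/$f" "$f"
done
pdfwrite generally carries the document outline (bookmarks) across when rewriting a PDF, but it is worth spot-checking one of the output files.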

Related

Ghostscript Text Extraction Time?

I am extracting text from a PDF, and for that I am using Ghostscript v9.52.
The time taken by Ghostscript with the default txtwrite command is ~400ms, and the command is:
-dSafer -dBATCH -dNOPAUSE -sPDFPassword=thispdf -device="txtwrite" stdout pdf.pdf
Then I tried lowering the rendering resolution, which saved some time; I was able to get it down to ~300ms:
-dSafer -dBATCH -dNOPAUSE -r2 -dDEVICEWIDTHPOINTS=50 -dDEVICEHEIGHTPOINTS=50 -dFIXEDMEDIA -sPDFPassword=thispdf -device="txtwrite" stdout pdf.pdf
I have no idea how setting a low resolution is working here.
How can I speed up text extraction to somewhere near 100ms, if that's possible?
If that's how long it's taking, then that's the length of time it takes. The interpreter has to be started up and establish a full working PostScript environment, then fully interpret the input, including all the fonts, and pass that to the output device. The output device records the font, point size, orientation, colour, position and attempts to calculate the Unicode code points for all the text. Then, depending on the options (which you haven't given), it may reorder the text before output. Then it outputs the text, closes the input and output files, releases all the memory used and cleanly shuts down the interpreter.
You haven't given an example of the file you are using, but half a second doesn't seem like a terribly long time to do all that.
In part you can blame all the broken PDF files out there; every time another broken file turns up ('but Acrobat reads it') another test has to be made and a work-around established, all of which generally slows the interpreter down.
The resolution will have no effect, and I find it very hard to believe the media size makes any difference at all, since it's not used. Don't use NOGC, that's a debugging tool and will cause the memory usage to increase.
The most likely way to improve performance would be not to shut the interpreter down between jobs, since the startup and shutdown are probably the largest part of the time spent when it's that quick. Of course that means you can't simply fork Ghostscript, which likely means doing some development with the API and that would potentially mean you were infringing the AGPL, depending what your eventual plans are for this application.
If you would like to supply a complete command line and an example file I could look at it (you could also profile the program yourself) but I very much doubt that your goal is attainable, and definitely not reliably attainable for any random input file.
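If you have many files, a rough shell-level approximation of the 'don't restart the interpreter' idea (this is not the API approach described above, just a sketch with made-up filenames, and it assumes all the files share the same password and you don't mind the extracted text arriving on one concatenated stdout stream) is to pass several inputs to a single invocation:
# Hypothetical filenames; the text for all files is written to stdout in sequence.
gs -q -dSAFER -dBATCH -dNOPAUSE -sPDFPassword=thispdf -sDEVICE=txtwrite -sOutputFile=- first.pdf second.pdf third.pdf
That way the startup and shutdown cost is paid once rather than per file.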

shrinking a PDF

I'm not sure if this is the right place to post this question.
I'm trying to reduce the size of multiple 7MB PDF files, so I tried these Ghostscript commands I found online:
A simple Ghostscript command with the printer quality setting:
gswin32c.exe -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/printer -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf
Then I tried this:
gswin32c.exe -o output.pdf -sDEVICE=pdfwrite -dColorConversionStrategy=/LeaveColorUnchanged -dDownsampleMonoImages=false -dDownsampleGrayImages=false -dDownsampleColorImages=false -dAutoFilterColorImages=false -dAutoFilterGrayImages=false -dColorImageFilter=/FlateEncode -dGrayImageFilter=/FlateEncode input.pdf
and this
gswin32c.exe -o output.pdf -sDEVICE=pdfwrite -dColorConversionStrategy=/LeaveColorUnchanged -dEncodeColorImages=false -dEncodeGrayImages=false -dEncodeMonoImages=false input.pdf
but in all cases the PDF files obtained were 'bigger' than the original.
All these PDF files are basically collections of scanned images, so maybe I need a specific option to 'tell' Ghostscript to compress them?
The strange thing I found is that using the trial version of phantom pdf I was able to reduce the size to 2-5MB without visible loss of quality.
How do I do the same with Ghostscript?
Firstly, Ghostscript (or more accurately, Ghostscript's pdfwrite device) doesn't 'shrink' PDF files, it makes new ones which may, or may not, be smaller.
Secondly, it's practically impossible to say what might be happening with a PDF file without an example to look at.
If your files really are scanned images, then (assuming sensible initial compression) there's probably no way to reduce the file size without reducing quality. You might not notice the reduction in quality, especially if you're just viewing on screen, but it will be there.
Random poking with command lines which you run across online is probably not going to result in useful output either; you really need to understand where the size is being used in your original files, and then select options which are likely to reduce that.
For example, you say the pages are scanned images; there are only two realistic ways to reduce the size of an image, downsample it to a lower resolution, or select a different (more efficient, possibly lossy, compression). Ghostscript already compresses image data (unless you tell it not to).
The latter two of your command lines explicitly disable image downsampling, so they are not likely to reduce the size of scanned images. (by default the pdfwrite device doesn't downsample images, we try to preserve quality)
The middle option disables auto compression, and selects Flate compression. If your images were previously JPEG compressed, or are not contone images, then this is probably reasonable.
You also say that the PDF files got larger; most likely this is because the originals use compressed object streams and a compressed xref, which is a PDF 1.5 feature that the pdfwrite device doesn't support. However, it's not likely to save you much space.
I'd say the most likely difference is that 'phantom PDF' is using more aggressive downsampling, which you could reproduce with pdfwrite.
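As a sketch of what 'more aggressive downsampling' could look like (the 150 and 300 dpi figures below are illustrative assumptions, not recommendations; pick values that suit how the files will be viewed):
gswin32c.exe -o output.pdf -sDEVICE=pdfwrite ^
 -dDownsampleColorImages=true -dColorImageResolution=150 ^
 -dDownsampleGrayImages=true -dGrayImageResolution=150 ^
 -dDownsampleMonoImages=true -dMonoImageResolution=300 ^
 input.pdf
Whether that matches what 'phantom PDF' does is guesswork, but downsampling the scanned images is where the large savings will come from.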
I'm assuming, of course, that you are using a recent version of Ghostscript. Older versions unsurprisingly perform less well than recent ones.

Preflight program for PDFs using PoDoFo or anything else open source? [closed]

I have to automate a preflight check on PDF documents. The preflight consists of:
Detect the resolution of images in an existing document and change them to 300dpi if they are not already at that resolution.
Detect the colorspace of images and if not in CMYK, then convert them to CMYK using color profiles.
Detect whether or not fonts are embedded in an existing PDF document, and correct this problem by substituting fonts. (or drawing font outlines — I'm not sure about this part).
Just wondering if this can be done using PoDoFo or any other open source projects out there, or if I really need to go order some proprietary software costing between $2K and $6K. My hosting environment is on Linux and supports PHP, Perl, Python, Ruby, Java.
Any ideas?
I'm not aware of any ready-made Open Source software which meets your requirements.
Only a part of it could be solved by writing your own shell script (or other program).
Detect resolution of images.
Run pdfimages -list some.pdf to output a list of images contained in the PDF as well as their dimensions... seemingly. But what is not obvious about it: these dimensions are the ones of the raw image (as embedded in the PDF). This could be 720x720 pixels. However, if rendered onto a 10x10 inch square of the page this image will be 72 DPI on the page. If rendered on a 1x1 inch square, it will be 720 DPI. Both types of 'rendering' inside a PDF can be made from the same embedded raw image, and it is the context of the current 'graphic state' which determines which is applied. So to determine the actual DPI of an image as it appears on the page requires some additional PDF parsing...
In any case, you can tell Ghostscript to re-sample images to 300 dpi, and to use a 'threshold' for this. (Ghostscript will never "upsample" an image, only downsample those which overshoot the threshold. Upsampling almost never makes sense -- it only blows up the file size with no return in terms of higher quality.)
Convert colors to colorspace CMYK using ICC profiles.
The most recent versions of Ghostscript can do that. See also the most recent Ghostscript documentation describing its support for ICC.
Embed un-embedded fonts.
Running (and evaluating the results of) pdffonts some.pdf will show you which fonts are not embedded.
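For example, a quick check that flags fonts whose 'emb' column is 'no' could look like this (only a sketch, assuming pdffonts from poppler-utils; the fields are counted from the right because the 'type' column can contain spaces):
# Print the names of fonts that pdffonts reports as not embedded.
pdffonts some.pdf | awk 'NR > 2 && $(NF-4) == "no" { print $1 }'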
Ghostscript can embed un-embedded fonts.
So one Ghostscript command that would cover most of your requirements is this:
gs \
-o cmyk.pdf \
-sDEVICE=pdfwrite \
-sColorConversionStrategy=CMYK \
-sProcessColorModel=DeviceCMYK \
-sOutputICCProfile=/path/to/your.icc \
-dColorImageDownsampleThreshold=2 \
-dColorImageDownsampleType=/Bicubic \
-dColorImageResolution=300 \
-dGrayImageDownsampleThreshold=2 \
-dGrayImageDownsampleType=/Bicubic \
-dGrayImageResolution=300 \
-dMonoImageDownsampleThreshold=2 \
-dMonoImageDownsampleType=/Subsample \
-dMonoImageResolution=1200 \
-dSubsetFonts=true \
-dEmbedAllFonts=true \
-sCannotEmbedFontPolicy=Error \
-c ".setpdfwrite<</NeverEmbed[ ]>> setdistillerparams" \
-f some.pdf
This command would downsample all images with a resolution higher than double the wanted resolution (*ImageDownsampleThreshold=2). It would also apply all these settings to any input file, unlike special PDF preflighting software, which would apply selective 'fixups' based on the results of 'checks' for particular properties.
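To automate this over a directory of PDFs, a minimal wrapper could look like the sketch below (Unix shell assumed; the option list is abbreviated for readability). Because -sCannotEmbedFontPolicy=Error makes Ghostscript abort the job when a font cannot be embedded, the exit status doubles as a crude pass/fail for that particular check:
mkdir -p checked
for f in *.pdf; do
  # Abbreviated options; substitute the full option set shown above.
  if gs -o "checked/$f" -sDEVICE=pdfwrite \
        -dEmbedAllFonts=true -sCannotEmbedFontPolicy=Error "$f"; then
    echo "OK:     $f"
  else
    echo "FAILED: $f"
  fi
done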
Lastly, I cannot see what made you think you'd have to spend $2K to $6K in case you'd have to resort to closed-source, commercial preflighting software. (My favorite in this field is the very powerful callas pdfToolbox6, which even has a version that runs as a CLI on Linux -- its basic version costs 500 €.)
My background is in printing, so please keep this in mind when reading my answer. The items you propose to do seem somewhat straightforward, but when you get into the nitty gritty of it, there's a lot of print-industry knowledge that goes into these operations.
Here's some quick feedback to your bullet points:
You won't want to upsample a low-res image to 300 dpi, as it will decrease image quality (via re-interpolation) and increase file size.
You need to be careful with color conversions. There may be certain builds of RGB which you'd want to convert to black only. And what happens if someone supplies a file which is already CMYK but tagged with the incorrect profile?
Font detection: it is very complicated to substitute fonts. If you don't have the exact same font as the originator, you could end up with text reflow problems, and to own that font you'll have to pay for a license. You also can't convert fonts to outlines without them being embedded.
My recommendation is to look at a commercial package for preflighting. These developers have invested years into developing their programs and are experts within the field of printing. The challenging part will be finding one that is Unix-based and in your price range. Most are designed for Windows or Mac. Callas has a Linux CLI version, but not at the price listed; you'd need the server version.
What type of volume are you planning to run through it?
Did you try Enfocus PitStop Pro? Contact their support department with your specific request. They have tons of PDF preflight examples and will be happy to help you out.

Converting multi-page PDFs to several JPGs using ImageMagick and/or GhostScript

I am trying to convert a multi-page PDF file into a bunch of JPEGs, one for each page in the PDF. I have spent hours and hours looking up how to do this, and eventually I discovered that I need Ghostscript installed. So I did that (from this website: http://downloads.ghostscript.com/public/ ; I used the most recent link, "ghostscript-9.05.tar.gz", from Feb 8, 2012).
However, even with this installed/downloaded, I am still unable to do what I want. Should I have this saved somewhere special, like in the same folder as ImageMagick?
What I have figured out so far is this:
In Command Prompt I change the working directory to the ImageMagick folder, where that is saved.
I then type
convert "<full file path to pdf>" "<full file path to jpg>"
This is followed by a giant blob of error. It begins with:
Unrecoverable error: rangecheck in .setuserparams
Operand stack:
Followed by a blurb of unreadable numbers and caps. It ends with:
While reading gs_lev2.ps:
%%[ Error: invalidaccess; OffendingCommand: put ]%%
Needless to say, after hours and hours of deliberation, I don't think I am any closer to doing the seemingly simple task of converting this PDF into a JPG.
What I would like are some step by step instructions on how to make this work. Don't leave out anything, no matter how "obvious" it might seem (especially anything involving ghostscript). This has been troubling me and my supervisor for months now.
For further clarification, we are on a Windows XP operating system. The eventual intention is to call these command lines in R, the statistical language, and run it in a script. In addition, I have been able to successfully convert JPGs to PNG format and vice versa, but PDF just is not working.
Help!!!
You don't need ImageMagick for this, Ghostscript can do it all alone. (If you used ImageMagick, it couldn't do that conversion itself, it HAS to use Ghostscript as its 'delegate'.)
Try this for directly using Ghostscript:
c:\path\to\gswin32c.exe ^
-o page_%03d.jpg ^
-sDEVICE=jpeg ^
d:/path/to/input.pdf
This will create a new JPEG for each page, and the filenames will increment as page_001.jpg, page_002.jpg,...
Note, this will also create JPEGs which use all the default settings of the jpeg device (one of the most important ones will be that the resolution will be 72dpi).
If you need a higher (or lower) resolution for your images, you can add other options:
gswin32c.exe ^
-o page_%03d.jpg ^
-sDEVICE=jpeg ^
-r300 ^
-dJPEGQ=100 ^
d:/path/to/input.pdf
-r300 sets the resolution to 300dpi and -dJPEGQ=100 sets the highest JPEG quality level (Ghostscript's default is 75).
Also note, please: JPEG is not well suited to represent shapes with sharp edges and high contrast in good quality (such as you typically see in black-on-white text pages with small characters).
The (lossy) JPEG compression method is optimized for continuous-tone pictures + photos, and not for line graphics. Therefore it is sub-optimal for such PostScript or PDF input pages which mainly contain text. Here, the lossy compression of the JPEG format will result in poorer quality output even if the input is excellent. See also the JPEG FAQ for more details on this topic.
You may get better image output by choosing PNG as the output format (PNG uses a lossless compression):
gswin32c.exe ^
-o page_%03d.png ^
-sDEVICE=png16m ^
-r150 ^
d:/path/to/input.pdf
The png16m device produces 24-bit RGB color. You could swap this for pnggray (for pure grayscale output), png256 (for 8-bit color), png16 (4-bit color), pngmono (black and white only) or pngmonod (an alternative black-and-white device).
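If your pages are mostly black-on-white scanned text, a reasonable starting point (just a variation of the command above, with an assumed 300 dpi) would be grayscale PNG:
gswin32c.exe ^
 -o page_%03d.png ^
 -sDEVICE=pnggray ^
 -r300 ^
 d:/path/to/input.pdf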
There are numerous SaaS services that will do this for you too. HyPDF and Blitline come to mind.

Tools for JPEG optimization? [closed]

Do you know of any tools (preferably command-line) to automatically and losslessly optimize JPEGs that I could integrate into our build environment? For PNGs I'm currently using PNGOUT, and it generally saves around 40% bandwidth/image size.
At the very least, I would like a tool that can strip metadata from the JPGs - I noticed a strange case where I tried to make a thumbnail from a photograph and couldn't get it smaller than 34 kB. After investigating more, I found that the EXIF data was still part of the image, and the thumbnail was 3 kB after removing the metadata.
And beyond that - is it possible to further optimize JPGs losslessly? The PNG optimizer tries different compression strategies, random initialization of the Huffman encoding, etc.
I am aware that most savings come from the JPEG quality parameter, and that it's a rather subjective measure. I'm only looking for a tool that can be run as a build step and that losslessly squeezes a few bytes from the images.
I wrote a GUI for all image optimization tools I could find, including MozJPEG and jpegoptim that optimize Huffman tables, progressive scans, and (optionally) remove invisible metadata.
If you don't have a Mac, I also have a basic web interface that works on any platform.
I use libjpeg for lossless operations. It contains a command-line tool, jpegtran, that can do all you want. With the command-line option -copy none all the metadata is stripped, and -optimize does a lossless optimization of the Huffman compression. You can also convert the images to progressive mode with -progressive, but that might cause compatibility problems (does anyone know more about that?)
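For example, a typical invocation combining those switches (jpegtran writes to stdout unless you give -outfile; the filenames here are placeholders):
# Strip all metadata, rebuild optimal Huffman tables, and write a progressive JPEG.
jpegtran -copy none -optimize -progressive -outfile out.jpg in.jpg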
[WINDOWS ONLY]
RIOT(Radical Image Optimization Tool)
This is the greatest image optimization tool I have found!
http://luci.criosweb.ro/riot/
You can easily get a 10MB image down to 800KB through sub-sampling.
It supports PNG, GIF, and JPEG.
It even integrates into context menus so you can send pictures straight there.
Allows you to rotate, re-size, compress to specified KB's, and more. Also has plugins for GIMP and IrfanView and other things.
There is also a DLL available if you want to incorporate it into your own programs, or a JavaScript / C++ program.
Another alternative is http://pnggauntlet.com/. PNGGauntlet takes forever but it does a pretty good job.
A new service called JPEGmini produces incredible results. A shame that it's online only. Edit: It's available for Windows and Mac now
Tried a number of the suggestions above - I personally was after lossless compression.
My sample image had an original size of 67,737 bytes.
Using kraken.io, it went down to 64,718
Using jpegtran, it went down to 64,718
Using yahoo smush-it, it went down to 61,746
Using imagemagick (-strip), it went down to 65,312
The smush.py option looks promising, but the installation was too complex for me to do quickly
jpegrescan looks promising too, but seems to be unix and I'm using windows
jpegmini is NOT lossless, but I can't tell the difference (down to 22,172)
plinth's Altrasoft jpegstripper app does not work on my windows 7
jpegoptim is not windows - no good for me
Riot (keeping quality at 100%) got it down to 63,416 and with chroma subsampling set to high, it got it down to 61,912 - I don't know if that is lossless or not though, and I think it looks lighter than the original.
So my verdict is Yahoo Smush.it if it must be lossless.
I would try ImageMagick. It has tons of command-line options, it's free and it has a nice license.
http://www.imagemagick.org
There seems to be an option called -strip that may help you:
http://www.imagemagick.org/script/command-line-options.php#strip
ImageOptim is really slick. The command line option posted by the author will populate the GUI and show progress. I used jpegtran for optimizing and converting to progressive, then ImageOptim for further progressive optimizations and for other file types.
Reusing script code also found in this forum (all files are replaced in place):
jpegtran
for file in $(find "$DIR" -type f \( -name "*.jpg" -or -name "*.jpeg" -or -name "*.JPG" \)); do
  # Note: word-splitting the $(find ...) output breaks on filenames containing spaces.
  echo "found $file for optimizing..."
  jpegtran -copy comments -optimize -progressive -outfile "$file" "$file"
done
ImageOptim
for file in $(find "$DIR" -type f \( -name "*.jpg" -or -name "*.png" -or -name "*.gif" \)); do
  echo "found $file for optimizing..."
  open -a ImageOptim.app "$file"
done
In case anyone's looking, I've written an offline version of Yahoo's Smush.it. It will losslessly optimise pngs, jpgs and gifs (animated and static):
http://github.com/thebeansgroup/smush.py
You can use jpegoptim, which will losslessly optimize JPEG files by default. The --strip-all option strips all extra embedded info. You can also specify a lossy mode with the --max switch, which is useful when you have images saved with a very high quality setting, which is not necessary for e.g. web content.
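A sketch of both modes (the 85 quality cap and the photos/ path are illustrative assumptions):
# Lossless: optimize Huffman tables and strip all metadata, rewriting files in place.
jpegoptim --strip-all photos/*.jpg
# Lossy: additionally cap quality at 85 for images saved at an unnecessarily high setting.
jpegoptim --strip-all --max=85 photos/*.jpg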
You get similar optimization as with jpegtran (see answer by OutOfMemory) but jpegoptim can't save to progressive jpegs.
I've written a command-line tool called 'picopt' (similar to ImageOptim) that uses external programs to optimize JPEGs, PNGs, GIFs, animated GIFs and even comic book archive contents (CBR/CBZ).
This is suitable for use with homebrew on OS X or Linux systems where you have installed tools like jpegrescan, jpegtran, optipng, gifsicle, etc.
https://github.com/ajslater/picopt
I too would recommend ImageMagick. It has a command line option to remove EXIF metadata
mogrify -strip image.jpg
There are plenty of other tools out there that do the same thing.
As far as recompressing JPEGs goes, don't. JPEGs are lossy to start with, so any form of recompression is only going to hurt image quality. However, if you have losslessly encoded images, some encoders do a better job than others. I have noticed that JPEGs done with Photoshop consistently look better than when encoded with ImageMagick (despite the same file size), due to complicated reasons. Furthermore (and this is relevant to you), I know that at least Photoshop can save JPEGs as optimized, which means they drop compatibility with some stuff that you probably don't care about to save a couple of KB. Also, make sure you don't have any colour profiles embedded, and you may be able to save another couple of KB.
I would recommend using http://kraken.io. It's an ultra-fast webapp which will optimize your PNG and JPEG files far better than smush.it does.
I recommend using JpegOptim; it's free and really nice. You can specify the quality, the size you want... and it's easy to use on the command line.
JpegOptim
May I recommend this for near-transparency:
convert 'yourfile.png' ppm:- | jpeg-recompress -t 97 -q veryhigh -a -m smallfry -s -r -S disable - yourfile.jpg
It uses ImageMagick's convert and jpeg-recompress from jpeg-archive.
Both are open-source and work on Windows, Mac and Linux. You may want to tweak the options above for different quality expectations.