Ghostscript error when converting PostScript to PDF file

I convert a PDF with Ghostscript (9.20) to a PostScript file:
pdf2ps original.pdf optimized.ps
and then try to convert the PostScript back to PDF with the -dPDFSETTINGS=/screen or /ebook option, hoping to end up with a smaller PDF file:
ps2pdf -dPDFSETTINGS=/screen optimized.ps optimized.pdf
But then I get the following error during conversion:
Subsample filter does not support non-integer downsample factor (2.400000)
Failed to initialise downsample filter, downsampling aborted
What's missing, or what am I doing wrong? I couldn't find any solutions yet… :-(

Firstly, you don't need to do a multiple-step conversion PDF->PS->PDF; a simple PDF->PDF will work.
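For example, a single-step conversion might look something like this (using the file names from the question; /screen is the same canned setting, with the caveats below):
gs -sDEVICE=pdfwrite -dPDFSETTINGS=/screen -o optimized.pdf original.pdf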
The warning appears because Ghostscript is trying to downsample images to a lower resolution, but the scale factor works out to a non-integer (2.4), which the Subsample filter cannot handle; so in this case it simply won't downsample. If you insist on using the canned settings instead of setting the controls yourself, then I'm afraid you are pretty much always going to be in the dark. It would be much better to read the documentation and work out which controls to set, based on the type of input you have and the compromises you are prepared to accept on quality.
In this case, you will almost certainly have to avoid downsampling the monochrome images. See the documentation on how to achieve that.
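As a rough sketch only, the controls involved are the standard pdfwrite distiller parameters; for example, keeping the /screen preset but switching off downsampling of the monochrome images would look something like this (whether an explicit switch overrides a preset value can depend on the Ghostscript version, so do check the documentation):
gs -sDEVICE=pdfwrite -dPDFSETTINGS=/screen -dDownsampleMonoImages=false -o optimized.pdf original.pdf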
You have not stated the version of Ghostscript you are using, which makes it even harder to comment here; however, there is an open enhancement request regarding the downsampling filter here, which originated with a Stack Overflow question here.


Ghostscript to compress a batch of PDFs

I have no experience of programming.
My PDFs won't display images on the iPad in PDFExpert or GoodNotes; from what I could find on the internet, that's because the images are in JPEG2000.
These are large PDFs, up to 1500-2000 pages with images. One of them was an 80 MB or so file. I tried printing it with Foxit to convert the images from JPEG2000 to JPG, but the file size jumped to 800 MB... plus it's taking too long.
I stumbled upon Ghostscript, but I have NO clue how to use the command line interface.
I am very short on time. Pretty much need a step by step guide for a small script that converts all my PDFs in one go.
Very sorry about my inexperience and helplessness. Can someone spoon-feed me the steps for this?
EDIT: I want to switch the JPEG2000 to any other format that produces less of an increase in file size and causes a minimal loss in quality (within reason). I have no clue how to use Ghostscript. I basically want to change the compression on the images to something that will display correctly on the iPad while maintaining the quality of the rest of the text, as well as the embedded bookmarks.
I'll repeat that I have NO experience with command line...I don't even know how to point GS to the folder my PDFs are in...
You haven't really said what it is you want. 'Convert' PDFs how, exactly?
Note that switching from JPX (JPEG2000) to JPEG will result in a quality loss, because the image data will be quantised (with a different quantisation scheme from JPX) by the JPEG encoder. You can use a lossless compression scheme instead, but then you won't get the same kind of compression. You won't get the same compression ratio as JPX anyway, no matter what you use; the result will be larger.
A simple Ghostscript command would be:
gs -sDEVICE=pdfwrite -o out.pdf in.pdf
Because JPEG2000 encoding is (or at least, was) patent-encumbered, the pdfwrite device doesn't write images as JPX; by default it will write them several times with different compression schemes, and then use the one that gives the best compression (practically always JPEG).
Getting better results will require a more complex command line, but you'll also have to be more explicit about what exactly you want to achieve, and what the perceived problem with the simplistic command line is.
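As one hedged example only: if the quality loss from JPEG is unacceptable and you want the lossless route mentioned above, the more explicit command might grow into something like this (file names are placeholders, and expect the output to be noticeably larger):
gs -sDEVICE=pdfwrite \
   -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode \
   -dAutoFilterGrayImages=false -dGrayImageFilter=/FlateEncode \
   -o out.pdf in.pdf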
[EDIT]
Well, giving help on executing a command line is a bit off-topic for Stack Overflow; this is supposed to be a site for software developers :-)
Without knowing what operating system you are using, it's hard to give you detailed instructions. I also have no idea what an iPad uses; I don't generally use Apple devices, and my only experience is with Macs.
Presumably you know where (in which directory) you installed Ghostscript. Either open a command shell there and type the command ./gs, or execute the command by giving the full path, such as:
/usr/bin/gs
I thought the arguments on the command line were self-explanatory, but....
The -sDEVICE=pdfwrite switch tells Ghostscript to use the pdfwrite device; as you might guess from the name, that device writes PDF files as its output.
The -o switch gives the name (and full path if required) of the output file.
The final argument is the name (and again, full path if it's not in the current directory) of the input file.
So a command might look like:
/usr/bin/gs -sDEVICE=pdfwrite -o /home/me/output.pdf /home/me/input.pdf
Or if Ghostscript and the input file are in the same directory:
./gs -sDEVICE=pdfwrite -o out.pdf input.pdf
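And if you really do want to convert every PDF in a folder in one go, a minimal sketch (assuming a Unix-like shell such as bash, that gs is on your PATH, and using an arbitrary _small suffix for the output names) would be:
cd /path/to/my/pdfs    # replace with the directory your PDFs are in
for f in *.pdf; do
  gs -sDEVICE=pdfwrite -o "${f%.pdf}_small.pdf" "$f"
done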

Change Ghostscript dithering method when converting pdf to 256 color BMP

I am trying to produce some high-quality 8bpp BMPs from a PDF file with Ghostscript. For that purpose, I use the bmp256 device.
So far, everything works well and is really fast, but Ghostscript uses halftoning to dither the image, leading to some ugly patterns when zooming in on the picture.
I've managed to reduce their size by playing with the -dDITHERPPI flag, but the result is still not satisfactory. The patterns are too regular and too easily spotted, even at low zoom.
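For reference, the command I am running is roughly this (paths and the resolution are just placeholders):
gs -sDEVICE=bmp256 -r300 -dDITHERPPI=150 -o page_%03d.bmp input.pdf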
Instead of using halftoning, I would like to use an error diffusion algorithm, like the Floyd–Steinberg one. I found that this algorithm is implemented in other devices, but they are all printer-related devices, so I can't really use them.
Plus, I need to be as fast as possible when converting the PDF to 8bpp BMP, and the output pictures are very large, so converting to 24 or 32bpp BMP first and dithering it later with another tool is ruled out.
I already downloaded the source to try to implement it myself, but the project is really big and complex and I don't know how and where to start.
Is there any way to use an error diffusion algorithm with Ghostscript without having to implement it myself?
If not, is there a preferred way to extend Ghostscript? Any guidelines?

Converting multi-page PDFs to several JPGs using ImageMagick and/or GhostScript

I am trying to convert a multi-page PDF file into a bunch of JPEGs, one for each page in the PDF. I have spent hours and hours looking up how to do this, and eventually I discovered that I need Ghostscript installed. So I did that (from this website: http://downloads.ghostscript.com/public/ , using the most recent link, "ghostscript-9.05.tar.gz", from Feb 8, 2012).
However, even with this installed/downloaded, I am still unable to do what I want. Should I have this saved somewhere special, like in the same folder as ImageMagick?
What I have figured out so far is this:
In Command Prompt I change the working directory to the ImageMagick folder, where that is saved.
I then type
convert "<full file path to pdf>" "<full file path to jpg>"
This is followed by a giant blob of error. It begins with:
Unrecoverable error: rangecheck in.setuserparams
Operand stack:
Followed by a blurb of unreadable numbers and caps. It ends with:
While reading gs_lev2.ps:
%%[ Error: invalidaccess; OffendingCommand: put ]%%
Needless to say, after hours and hours of deliberation, I don't think I am any closer to doing the seemingly simple task of converting this PDF into a JPG.
What I would like are some step by step instructions on how to make this work. Don't leave out anything, no matter how "obvious" it might seem (especially anything involving ghostscript). This has been troubling me and my supervisor for months now.
For further clarification, we are on a Windows XP operating system. The eventual intention is to call these command lines in R, the statistical language, and run it in a script. In addition, I have been able to successfully convert JPGs to PNG format and vice versa, but PDF just is not working.
Help!!!
You don't need ImageMagick for this; Ghostscript can do it all alone. (If you used ImageMagick, it couldn't do that conversion itself; it HAS to use Ghostscript as its 'delegate'.)
Try this for directly using Ghostscript:
c:\path\to\gswin32c.exe ^
-o page_%03d.jpg ^
-sDEVICE=jpeg ^
d:/path/to/input.pdf
This will create a new JPEG for each page, and the filenames will increment as page_001.jpg, page_002.jpg,...
Note, this will also create JPEGs which use all the default settings of the jpeg device (one of the most important being that the resolution will be 72 dpi).
If you need a higher (or lower) resolution for your images, you can add other options:
gswin32c.exe ^
-o page_%03d.jpg ^
-sDEVICE=jpeg ^
-r300 ^
-dJPEGQ=100 ^
d:/path/to/input.pdf
-r300 sets the resolution to 300dpi and -dJPEGQ=100 sets the highest JPEG quality level (Ghostscript's default is 75).
Also note, please: JPEG is not well suited to represent shapes with sharp edges and high contrast in good quality (such as you typically see in black-on-white text pages with small characters).
The (lossy) JPEG compression method is optimized for continuous-tone pictures + photos, and not for line graphics. Therefore it is sub-optimal for such PostScript or PDF input pages which mainly contain text. Here, the lossy compression of the JPEG format will result in poorer quality output even if the input is excellent. See also the JPEG FAQ for more details on this topic.
You may get better image output by choosing PNG as the output format (PNG uses a lossless compression):
gswin32c.exe ^
-o page_%03d.png ^
-sDEVICE=png16m ^
-r150 ^
d:/path/to/input.pdf
The png16m device produces 24-bit RGB color. You could swap this for pnggray (for pure grayscale output), png256 (for 8-bit color), png16 (4-bit color), pngmono (black and white only) or pngmonod (an alternative black-and-white device).
There are numerous SaaS services that will do this for you too. HyPDF and Blitline come to mind.

How to determine when PNG24 converted to PNG8 is lossless?

Hey, I'm using a program called pngquant to convert 24-bit PNGs to 8-bit PNGs. Everything seems to work fine, and I don't notice any loss of quality for icons and other images that don't contain too many colors. Now when I feed it a PNG photo with zillions of colours, it produces a PNG8 where I can see some quality loss.
I'd like to determine that quality loss programmatically. I'd like to know when converting a PNG24 to PNG8 is safe and when it is not. Sort of what webpagetest.org does: they tell you that a specific image will be smaller in size if converted to PNG8 and will not lose quality.
Any ideas?
Thanks.
This sounds like a full-reference image quality assessment problem.
The simplest way to approach this is to try computing the PSNR between the PNG24 and PNG8 images. This is a measure of the difference between the two images. The higher the PSNR, the less different the images are. After using your color quantization software, check if the PSNR is above some threshold (you'll have to determine that empirically), and if it is, then the quantization was "safe".
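If you happen to have ImageMagick installed, its compare tool can report the PSNR between the two files directly, which is a quick way to experiment with such a threshold (file names here are placeholders; the value is written to stderr):
compare -metric PSNR photo_png24.png photo_png8.png null: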
PSNR has its downsides, namely that it doesn't always correspond to the way the human visual system works (for example, it neglects the phenomena of spatial and contrast masking). Another metric, SSIM, attempts to take care of that problem, but is slightly more difficult to compute (here is an OpenCV implementation, though). You can use SSIM instead of PSNR in the thresholding approach I described above.
Here's another thread which you might find useful.
Quite simple. If the image you are converting from PNG24 to PNG8 has more than 256 colors, you are going to lose quality. Did I miss something?
For the development of pngquant I use my own SSIM tool, since the OpenCV-based one didn't seem to support gamma correction or the alpha channel properly.
When you run pngquant -v it will output the amount of error introduced as MSE=n (n is the mean square error; 0 is perfect quality).
The latest version has a --quality setting which lets you set the minimum required quality. If it can't achieve it, it won't save the file.
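For example (the quality range here is purely illustrative):
pngquant --quality=65-90 -v photo.png
With a range like that, no output file is written if the lower bound cannot be met, as described above.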

Using ps2pdf on EPS files with PNG used for bitmaps?

We're currently using ps2pdf to convert EPS files to PDF. These EPS files contain both vector information (lines and text) and bitmap data.
However, by default ps2pdf converts the bitmap components of these images to JPG as they're embedded within the PDF, whereas for the type of graphics we have (data visualisation) it would be much more appropriate to use lossless compression. PDF supports PNG, so it should be possible to achieve what we're trying to do, but I'm having trouble finding a relevant option in the somewhat intimidating manual.
So the short question is: what is the correct way to write this?
    ps2pdf -dPDFSETTINGS=UsePNGinsteadOfJPGcompression input.eps output.pdf
The answer is not -dUseFlateCompression, since that option refers to using Flate instead of LZW compression; both are lossless but LZW was covered by patents for a while. Since that's not a problem any more, the option is ignored.
Instead, the options needed to achieve lossless encoding of the bitmap data are (all four of them):
-dAutoFilterColorImages=false
-dAutoFilterGrayImages=false
-dColorImageFilter=/FlateEncode
-dGrayImageFilter=/FlateEncode
You might also want to do the same thing with MonoImageFilter, but I assume /CCITTFaxEncode does a reasonable job there, so it's not too important.
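Putting it all together, an invocation might look like this (file names are placeholders):
ps2pdf -dAutoFilterColorImages=false \
       -dAutoFilterGrayImages=false \
       -dColorImageFilter=/FlateEncode \
       -dGrayImageFilter=/FlateEncode \
       input.eps output.pdf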