Forcing Ghostscript to use antialiasing when converting a PDF to PNG?

I'm using GPL Ghostscript 9.07 (2013-02-14) on OS X (10.8.4) to convert many PDFs to PNGs.
It works fine except for one of the PDFs which turns into a PNG with jagged edges. In other words, Ghostscript turns off antialiasing for that particular PDF for some reason.
The PDF in question.
The output:
In other cases it works fine (sample: pdf -> png).
I use this command:
gs -dNOPAUSE -dBATCH -dPDFFitPage -sDEVICE=pngalpha -g200x150 -sOutputFile=01.png 01.pdf
Is it possible to force Ghostscript to use antialiasing for that PDF?
Any tips are appreciated.

This worked for me:
gs -q -dQUIET -dSAFER -dBATCH -dNOPAUSE -dNOPROMPT -dMaxBitmap=500000000 -dAlignToPixels=0 -dGridFitTT=2 -sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r150 -sOutputFile=foo-%d.jpg foo.pdf
Source: ImageMagick convert pdf to jpeg has poor text quality after upgrading ImageMagick version to 6.7.8
The above would work for a JPG; for PNG, replace the -sDEVICE option with your choice, example: -sDEVICE=png16m
Source: http://ghostscript.com/doc/current/Devices.htm
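For PNG output that would look roughly like this (an untested sketch; only the device and output file name are swapped, everything else is kept from the command above):
gs -q -dQUIET -dSAFER -dBATCH -dNOPAUSE -dNOPROMPT -dMaxBitmap=500000000 -dAlignToPixels=0 -dGridFitTT=2 -sDEVICE=png16m -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r150 -sOutputFile=foo-%d.png foo.pdf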

You can try -dGraphicsAlphaBits= with values 1, 2 or 4, which may or may not make a difference. It made some improvement for me, but it's a small graphic at low resolution with an awkward curve, so not as much as you might expect.
Or you can use one of the anti-aliasing devices (e.g. tiffscaled), which are more flexible. There is no anti-aliased device for PNG output, but it would be trivial to convert the TIFF to PNG.
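For example, something like this (an untested sketch based on the question's command; as noted above, the alpha-bits switches may or may not change the result for this particular file):
gs -dNOPAUSE -dBATCH -dPDFFitPage -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -sDEVICE=pngalpha -g200x150 -sOutputFile=01.png 01.pdf
The tiffscaled route would need an extra step afterwards to convert the TIFF to PNG (for instance with ImageMagick's convert).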
By the way, your PDF file specifically turns off anti-aliasing on the components:
8 0 obj
<</AntiAlias false/ColorSpace/DeviceCMYK/Coords[0.0 0.0 1.0 0.0]/Domain[0.0 1.0]/Extend[true true]/Function 10 0 R/ShadingType 2>>
You might like to try changing AntiAlias to true and see what happens, though I doubt it will have an effect, as I'm fairly sure the anti-aliasing is applied to the internal rendering of the shading, not to its edges.

You can try -dDOINTERPOLATE, which uses a Mitchell filter to scale the contributions for each output pixel.
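For example (an untested sketch, simply added to the command from the question):
gs -dNOPAUSE -dBATCH -dPDFFitPage -dDOINTERPOLATE -sDEVICE=pngalpha -g200x150 -sOutputFile=01.png 01.pdf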

Related

Converting pdf to eps without rasterizing or changing fonts

I have been trying to convert a pdf vector graphic to eps. I tried two commands from the following answer: https://stackoverflow.com/a/44737018/5661667
The inkscape command inkscape input.pdf --export-eps=output.eps or rather, since --export-eps is deprecated now,
inkscape input.pdf --export-filename=output.eps
nicely converts to a vectorized eps. However, it strangely converts my Times New Roman fonts (the graphic was originally created using matplotlib) to some sans serif font (looks like Arial or something).
The ghostscript version of the conversion from the linked answer
gs -q -dNOCACHE -dNOPAUSE -dBATCH -dSAFER -sDEVICE=eps2write -sOutputFile=output.eps input.pdf
keeps my fonts nicely. However, the eps seems to be rasterized despite the -dNOCACHE option.
Is there any way to get one of these to just convert my pdf to eps without modifying it?
Further info: I am using Mac OS. For the first part, my suspicion is that I only have an Arial Unicode.ttf installed in /Library/Fonts/. I tried installing some other fonts, but no success for my conversion.
I had the same problem when trying to convert a powerpoint generated pdf to eps format using inkscape.
After trying with gs and disabling the transparency I noticed some areas turned black after eps conversion.
gs -q -dNOCACHE -dNOPAUSE -dBATCH -dSAFER -dNOTRANSPARENCY -sDEVICE=eps2write -sOutputFile=output.eps input.pdf
Coming back to inkscape I noticed that Powerpoint added some transparent objects in these areas that turned black. So I manually removed them using inkscape and when converting to eps again the result was perfect!
In short: if there are transparent elements in your pdf, the fonts will probably be rasterized during eps conversion. So, you need to remove these elements.
Maybe there is an easier way to identify them in inkscape.
In my case I was able to use Find/Replace (Ctrl+F) to search for objects containing the string "clipPath", with 'Search option = Properties'. Then I opened the Objects tab (menu Object -> Objects...) and used it to delete each transparent object generated by Powerpoint.

How can I use Ghostscript to pre-process pdfs for older Kindles?

I have an old Kindle Dx. Owing to disabilities, I can't use tablets or other touch devices, and I transfer pdfs to the Kindle to read them. It requires pre-processing.
What is a good option to pre-process pdfs without rasterizing them?
[When rasterizing is acceptable:
k2pdfopt -mode copy for maps or for small text. This rasterizes, enhances contrast, and makes everything 1.4-compatible.
k2pdfopt -mode copy -dev dx for other works. This rasterizes to 800x1080, downsamples as needed, enhances contrast while making everything grayscale, and makes everything 1.4-compatible.
When rasterizing text is not acceptable:
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -sstdout=%stderr -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf if you want to preserve graphics. This makes minimal changes to make everything 1.4 compatible.
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 \
-g800x1080 -r150 -dPDFFitPage \
-dFastWebView -sColorConversionStrategy=RGB \
-dDownsampleColorImages=true -dDownsampleGrayImages=true -dDownsampleMonoImages=true -dColorImageResolution=150 -dGrayImageResolution=150 -dMonoImageResolution=300 -dColorImageDownsampleThreshold=1.0 -dGrayImageDownsampleThreshold=1.0 -dMonoImageDownsampleThreshold=1.0 \
-sstdout=%stderr -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf if you want moderate downsampling. This re-rasterizes existing raster images to fit 800x1080 and makes everything 1.4 compatible.
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 \
-g800x1080 -r150 -dPDFFitPage \
-dFastWebView -sColorConversionStrategy=Gray \
-dDownsampleColorImages=true -dDownsampleGrayImages=true -dDownsampleMonoImages=true -dColorImageResolution=75 -dGrayImageResolution=75 -dMonoImageResolution=150 -dColorImageDownsampleThreshold=1.0 -dGrayImageDownsampleThreshold=1.0 -dMonoImageDownsampleThreshold=1.0 \
-sstdout=%stderr -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf if you want more aggressive downsampling. This re-rasterizes raster images to fit 400x540, makes them grayscale, and makes everything 1.4 compatible. Low image quality, but usually still recognizable.
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dFILTERIMAGE -dFILTERVECTOR -sstdout=%stderr -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf if you want to cut all graphics.
If using any of these options to pre-process for another device, check its screen size in pixels. Don't worry too much about pixels per inch.]
[I.S. My goals are to fix pdfs so they 1. don't crash my Kindle, 2. don't freeze my Kindle or take too long to load each page, and 3. don't take up too much of the limited disk space on my Kindle. Preferably also 4. not rasterizing text, 5. not cutting out all images, which can sometimes lose tables, etc., and 6. not reflowing text, which will generally lose tables. But I'm happy to downsample most images.]
[I.S. Note that I'm keeping copies of the originals. This is not a way to save disk space!]
For scanned pdfs, Willus's k2pdfopt is a great option. I've set up Mac Automator for
k2pdfopt -mode copy -dev dx
or occasionally just -mode copy.
For pdf-born-pdfs, I'd rather not rasterize everything.
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -sstdout=%stderr -dNOPAUSE -dQUIET -dBATCH
can usually convert files so the Kindle Dx can open them, but the Kindle will still slow down, freeze, or crash on some pages.
One option is to combine Ghostscript and Mutool as follows:
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -sstdout=%stderr -dNOPAUSE -dQUIET -dBATCH to pre-process pdfs to remove passwords,
mutool clean -g -g -d -s -l to sort out the junk, and then
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -sstdout=%stderr -dNOPAUSE -dQUIET -dBATCH again to get a smaller and faster pdf.
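Spelled out with placeholder file names (the intermediate names are just for illustration), that pipeline is roughly:
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -sstdout=%stderr -dNOPAUSE -dQUIET -dBATCH -sOutputFile=step1.pdf input.pdf
mutool clean -g -g -d -s -l step1.pdf step2.pdf
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -sstdout=%stderr -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf step2.pdf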
Note: I think Mutool's 3rd -g is the equivalent of Ghostscript's -dDetectDuplicateImages. Since it slows rendering down it may be better to do the opposite. I'm not sure how to set it to false. -dDetectDuplicateImages false? -uDetectDuplicateImages?
Note: I'm using gtime to time pdf rendering.
A single-step tool in a single application would help. And an image-reduction tool would also help. Ghostscript's documentation is hard to follow.
For cleanup, as an alternative to running mutool:
-dFastWebView might help.
-dNOGC indicates that Ghostscript does garbage collection by default.
For image reduction:
-dPDFSETTINGS=/screen seems to work better in 9.50 than 9.23. /ebook might be better since it embeds all fonts.
-dFILTERIMAGE -dFILTERVECTOR also work better in 9.50 than 9.23, but are more drastic than I'd like.
A lot of settings seem to rely on input resolution and/or input page size.
-r seems to rely on input page size, rather than output page size. The Kindle Dx is 800 pixels by 1180 pixels.
-dDownScaleFactor reduces relative to input resolution.
-g800x1080 seems to crop pages, not shrink them.
I think -sDEVICE=pdfimage8 rasterizes everything, like k2pdfopt.
In some cases
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dFastWebView -uDetectDuplicateImages -dPDFSETTINGS=/ebook -sstdout=%stderr -dNOPAUSE -dQUIET -dBATCH yields larger and slower files than just -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -sstdout=%stderr -dNOPAUSE -dQUIET -dBATCH
... I'm not sure what to make of these results.
You've asked an awful lot in here, which makes it rather difficult to read and answer cogently. You haven't really made it clear exactly what it is you want to achieve (you also haven't said what version of GS and MuPDF you are using).
Here are some points;
You don't need to 'clean out the junk' from PDF files produced by Ghostscript; these rarely have anything which can be removed, which is one reason people run PDF files through GS+pdfwrite (despite my constantly saying it's a bad idea).
Using the -g switch with Mutool twice doesn't (AFAIK) do anything extra, but adding -d decompresses the files. You can have Ghostscript produce uncompressed PDF files too, use -dCompressPages=false -dCompressFonts=false -dCompressStreams=false.
When you pass your PDF through pdfwrite, then MuPDF, then pdfwrite again, you are risking quality degradation at every step, and the intermediate MUPDF step is unlikely to achieve anything. Most likely what you are doing is reducing the compression (and quality) of any JPEG compressed images, I doubt much else of use is happening.
I can't think why you'd want to not detect duplicate images; it really just makes the file bigger. But if you want to, you use the switch the same way as all the other GS switches: -dDetectDuplicateImages=false. Note this won't change the processing speed (and generally pdfwrite doesn't do rendering, but perhaps you mean on the target device...); the detection is done by applying an MD5 filter to every image as it is read and then comparing the MD5 hashes. Switching it off doesn't stop the MD5, it just stops the comparison.
If you find Ghostscript's documentation hard to follow, then use the Adobe documentation for distillerparams; that's where the majority of the pdfwrite settings come from (i.e. blame Adobe for this ;-)
-dFastWebView is (IMO) totally pointless; it's there purely for compatibility with Adobe, and because a lot of people won't accept that it's useless and insist on it. All it does is speed up loading of the first page of a PDF file, by PDF consumers which support it (which is practically none). And to do this it makes the file slightly bigger and more complicated.
Do NOT use -dNOGC. I keep telling people not to do this; it's a debugging tool, and it has no practical value in production other than to potentially make Ghostscript use more memory. Everything else you hear about it is cargo cult.
-r has nothing to do with the media size at all, and does (more or less) nothing with pdfwrite. It sets the resolution of a page when rendering. Since you don't want to render to an image, setting the resolution is not a useful thing to do.
No pdfwrite settings rely on the "input resolution" because PDF (and PostScript) files don't have a resolution, they are vector page descriptions.
-dDownScaleFactor is a switch which only applies to the downscaling devices (tiffscaled and friends), which are rendering devices; it has no effect at all on pdfwrite.
Setting a fixed media size (using -g) does indeed rely on the resolution (because it's specified in device pixels) and does indeed only alter the media size, not the content. If you want to rescale the content to fit the new media, then you need to use -dFitPage. I can't really see why you would do that. Note that it doesn't affect the content of a PDF file (unless it's a rendered image), it just makes all the numeric values smaller.
The pdfimage devices do indeed produce a PDF file where the entire content is an image; hence the name....
Now, if you could define what you actually want to achieve, I could make some suggestions.....
[EDIT]
image downsampling
Firstly, there are three controls which turn this feature on/off altogether:
-dDownsampleMonoImages, -dDownsampleGrayImages and -dDownsampleColorImages. Assuming you don't select a PDFSETTINGS (I would recommend you do not) these are all initially false. If you want to downsample any images you need to set the relevant mono/gray/color switch to true.
Once downsampling is enabled, you need to set the relevant ImageResolution and DownsampleThreshold; again there are switches for each colour depth.
Now, although PDF files don't have a resolution, the images in them have an effective resolution, but it's not easy to calculate (actually, without a lot of effort it's impossible). It's the number of image samples in the bitmap in each direction, divided by the extent of the media covered by the image in that direction.
As an example if I have an image 100x100 samples, and that is placed on the page in a 1 inch square, then the resolution of the image is 100 dpi. If I then scale the image up so that it covers 2 inches square (but don't change the image data) then its 50 dpi.
So you need to decide what resolution looks OK on your device. You then set -dColorImageResolution=, -dMonoImageResolution, -dGrayImageResolution.
That's the 'target' resolution. But if the image is already close to that it can be wasteful to process it, so the Downsampling threshold is consulted. The actual resolution of the image in the input has to be the target resolution times the threshold, or more, to be reduced for output.
If we consider, for example, a target resolution of 300 and a threshold of 1.5 then the actual resolution of an image in the input file would have to exceed 450 dpi to be considered for downsampling.
Obviously you can set the threshold to 1.0 eg -dColorImageDownsampleThreshold=1.0
Finally there is the downsampling type; this is the filter used to create the lower resolution image from the higher one. The simplest is /Subsample: basically throw away enough rows and columns until we reach the required resolution (this is the only filter available for monochrome images, as all the others would change the colour depth). Then there's /Average, which averages the values in each direction, effectively a bilinear filter. Finally there's /Bicubic, which probably does the 'best' job but will be the slowest to process.
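Putting the colour-image pieces together, a sketch might look like this (the values are only examples, and there are matching switches for the Gray and Mono cases):
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dDownsampleColorImages=true -dColorImageResolution=150 -dColorImageDownsampleThreshold=1.0 -dColorImageDownsampleType=/Bicubic -o out.pdf in.pdf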
On top of all that you can choose the image filter (the compression filter) used to write the image data. We don't support JPXEncode in the AGPL version of Ghostscript and pdfwrite. That leaves you /CCITTFaxEncode (for monochrome), DCTEncode (JPEG) and FlateEncode (basically Zip compression). Those are MonoImageFilter, GrayImageFilter and ColorImageFilter.
If you want to use these you must first set AutoFilterGrayImages to false and/or AutoFilterColorImages to false, because if these are true the pdfwrite device will choose a compression method by looking to see which one compresses most. For Gray and Color images this will almost certainly be JPEG.
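For example, to force Flate (lossless) compression of colour images instead of letting pdfwrite pick JPEG, a sketch might be:
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode -o out.pdf in.pdf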
A final point is that linework (vector data) cannot be selectively rendered; either everything is rendered or everything is maintained 'as it was'. The only time (in general) that pdfwrite renders content is when transparency is present and the output CompatibilityLevel doesn't support transparency (1.3 or below). There are exceptions, but they are quite uncommon.
You might want to consider setting the ColorConversionStrategy to either /DeviceRGB or /DeviceGray. I've no idea if you are using colour or grayscale devices, but if they are grayscale creating a gray PDF file would reduce the size and processing significantly. Creating an RGB file for colour devices probably makes sense too, in case the input is CMYK.

Linux PDF to TIFF Quality Issue

I am trying to use a linux application to convert .pdf files to .tiff for faxing, however, our clients have not been happy with the quality of GhostScript's tiffg4 device.
In the image below, the left side shows a conversion using GhostScript tiffg4 and the right is from an online conversion service. We are unable to see which application is being used to attain that quality.
Note: The output TIFF must be black & white
Ghostscript Code:
gs -sDEVICE=tiffg4 -dNOPAUSE -dBATCH -dPDFFitPage -sPAPERSIZE=letter -g1728x2156 -sOutputFile=testg4.tiff test.pdf
We have tried these GhostScript devices:
tiffcrle
tiffg3
tiffg32d
tiffg4
tifflzw
tiffpack
My question is this--does anyone know which application and/or setting is used to achieve the quality on the right?
Extending on BitBank's comment, you could write an RGB tiff and then use ImageMagick to convert it to Group 4. ImageMagick allows you to control the dithering algorithm:
gs -sDEVICE=tiff24nc -dNOPAUSE -dBATCH -dPDFFitPage -sPAPERSIZE=letter -g1728x2156 -sOutputFile=intermediate.tiff your.pdf
convert intermediate.tiff -dither FloydSteinberg -compress group4 out.tiff
ImageMagick's manual has some background on the algorithm(s) and available options.

Ghostscript: Quality and Size issue

I have a ghostscript command that converts a pdf into several PNG images (one for every page). The command arguments are as follows:
-dNOPAUSE -q -r300 -sPAPERSIZE=a4 -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -dUseTrimBox -sDEVICE=png16m -dBATCH -sOutputFile="C:\outputfile%d.png" -c \"30000000 setvmthreshold\" -f "C:\inputfile.pdf"
The pdf displays as regular A4 pages in Adobe Reader, but in the PNG images it becomes huge (2480 by 3507 pixels for instance).
If I change the resolution in the ghostscript command to -r110, the page size is correct but the image quality is very pixelated.
Is there another way to improve the quality of the image without affecting the image size?
Thanks
Got it! Added the following parameter to my GS command:
-dDownScaleFactor=3
From the GS documentation:
This causes the internal rendering to be scaled down by the given (small integer) factor before being output. For example, the following will produce a 200dpi output png from a 600dpi internal rendering:
gs -sDEVICE=png16m -r600 -dDownScaleFactor=3 -o tiger.png examples/tiger.eps
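Applied to the command from the question, the idea is something like this (an untested sketch: rendering internally at 330dpi and scaling down by 3 gives roughly the same page size as -r110, but with smoother output; the resolution values are only illustrative):
gs -dNOPAUSE -q -r330 -dDownScaleFactor=3 -sPAPERSIZE=a4 -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -dUseTrimBox -sDEVICE=png16m -dBATCH -sOutputFile="C:\outputfile%d.png" "C:\inputfile.pdf"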
I had a similar problem, where PDF conversion to PNG using ghostscript resulted in an image with much greater dimensions (including extra white space). I solved the issue by using
-dUseCropBox
... which sets the page size to the CropBox rather than the MediaBox
The quality-size tradeoff is inevitable. You may choose a different compression to keep size down while maintaining reasonable quality, e.g. DCT (JPEG) or JPEG2000 if your content mainly consists of photographic images, or CCITT or JBIG2 if it is mainly black and white.
Find the width and the height in points (from the %%BoundingBox) and use them:
gs -sDEVICE=png16m -dDEVICEWIDTHPOINTS=$w -dDEVICEHEIGHTPOINTS=$h -r600 -dDownScaleFactor=3 -o tiger.png examples/tiger.eps
where $w is the width and $h the height

How to convert PDF to low-resolution (but good quality) JPEG?

When I use the following ghostscript command to generate jpg thumbnails from PDFs, the image quality is often very poor:
gs -q -dNOPAUSE -dBATCH -sDEVICE=jpeggray -g465x600 -dUseCropBox -dPDFFitPage -sOutputFile=pdf_to_lowres.jpg test.pdf
By contrast, if I use ghostscript to generate a high-resolution png, and then use mogrify to convert the high-res png to a low-res jpg, I get pretty good results.
gs -q -dNOPAUSE -dBATCH -sDEVICE=pnggray -g2550x3300 -dUseCropBox -dPDFFitPage -sOutputFile=pdf_to_highres.png test.pdf
mogrify -thumbnail 465x600 -format jpg -write pdf_to_highres_to_lowres.jpg pdf_to_highres.png
Is there any way to achieve good results while bypassing the intermediate pdf -> high-res png step? I need to do this for a large number of pdfs, so I'm trying to minimize the compute time.
Here are links to the images referenced above:
test.pdf
pdf_to_lowres.jpg
pdf_to_highres.png
pdf_to_highres_to_lowres.jpg
One option that seems to improve the output a lot: -dDOINTERPOLATE. Here's what I got by running the same command as you but with the -dDOINTERPOLATE option:
I'm not sure what interpolation method this uses but it seems pretty good, especially in comparison to the results without it.
P.S. Consider outputting PNG images (-sDEVICE=pnggray) instead of JPEG. For most PDF documents (which tend to have just a few solid colors) it's a more appropriate choice.
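Combining that with the P.S. suggestion, the command would be roughly (an untested sketch based on the question's switches):
gs -q -dNOPAUSE -dBATCH -dDOINTERPOLATE -sDEVICE=pnggray -g465x600 -dUseCropBox -dPDFFitPage -sOutputFile=pdf_to_lowres.png test.pdf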
Your PDF looks like it is just a wrapper around a jpeg already.
Try using the pdfimages program from xpdf to extract the actual image rather than rendering to a file.
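For example (assuming xpdf or poppler is installed; the output name prefix is arbitrary):
pdfimages -j test.pdf pageimg
The -j flag writes images that are already DCT (JPEG) encoded out as .jpg files instead of converting them.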