Ghostscript on Unix generating huge PDF files

I use Ghostscript 9.14, the last version compiled for HP-UX.
I need to create PDF/A-1b files from existing PDF files from different sources.
It is preferred that this happens on an HP-UX server, because that is the server that puts them in a DMS.
The command:
gs -q -dPDFA -dBATCH -dNOPAUSE -dNOOUTERSAVE \
-dCCFONTDEBUG -dCFFDEBUG -dCMAPDEBUG -dDOCIEDEBUG -dEPSDEBUG \
-dFAPIDEBUG -dINITDEBUG -dPDFDEBUG -dPDFOPTDEBUG -dPDFWRDEBUG \
-dSETPDDEBUG -dSTRESDEBUG -dTTFDEBUG -dVGIFDEBUG -dVJPGDEBUG \
-dColorConversionStrategy=/sRGB -dProcessColorModel=/DeviceRGB \
-sDEVICE=pdfwrite -sPDFACompatibilityPolicy=2 \
-sOutputFile=debug_0901ece380001a00.pdf /usr/../PDFA_def.ps \
/0901ece380001a00.pdf
The source PDF contains nothing but non-OCRed images.
I have this working with a newer version (Ghostscript 9.19) on a Windows server, using the same command, without problems, but I can't seem to get it working on HP-UX.
On the Windows server, MS Office is installed.
On HP-UX, the command generates a 9 MB file from a 300 KB source file, and it takes ages to generate.
Ghostscript seems single-threaded, but 9 minutes for 35 pages is a bit much.
When I check it with Preflight in Acrobat Pro 9 Extended, the 9 MB file really is PDF/A-1b.
Do I need to install some kind of Office software on Unix to get this working?
Or an image editing tool?
Also, how do I read the debug output? It isn't in a readable format and I can't find any documentation on it.
Maybe it is something that only can be checked by the Ghostscript developers?

Almost certainly the input file contains transparency. PDF/A-1 does not support transparency, and so when creating PDF/A-1 files any page which does contain transparency is rendered to an image, and then that image is embedded in the output.
Clearly this will take time (rendering a page at 720 dpi, in full colour, with transparency processing, is slow) and will result in a large file. However, it's the only way to preserve the appearance of the input file and still create a PDF/A-1 file.
Of course, in the absence of an example input file to examine, it's not possible to be certain of this.
The DEBUG switches are useless to anyone except the Ghostscript developers; don't bother setting them. You would never set so many at once anyway, as you would be swamped with extraneous detail. I'm doubtful that all the ones you have listed are even valid.
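Stripped of the debug switches, the command from the question reduces to the sketch below. Since the transparency fallback renders at 720 dpi by default, adding a lower resolution (the -r300 here is my suggestion, not something from the question) may also cut both the time and the file size, if some loss of image quality is acceptable:
gs -q -dPDFA -dBATCH -dNOPAUSE -dNOOUTERSAVE -r300 \
-dColorConversionStrategy=/sRGB -dProcessColorModel=/DeviceRGB \
-sDEVICE=pdfwrite -sPDFACompatibilityPolicy=2 \
-sOutputFile=out.pdf /usr/../PDFA_def.ps /0901ece380001a00.pdf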
You say you have this 'working' with Ghostscript 9.19 on Windows; what do you mean by 'working'? It seems to me that the 9.14 output 'works' as well.
As far as I know we have never compiled a Ghostscript release for HP-UX, but the current version (9.22) is known to compile and run on HP-UX.
Finally Ghostscript does not rely on (and indeed cannot make use of) Microsoft Office. Nor does it rely on the operating system for anything except memory and file access.

Related

How to include PDF 1.6 files in Sphinx

We use LibreOffice to generate PDF figures from .odg files (automatically via a makefile) and include these in documentation generated by Sphinx, which via LaTeX ultimately produces a PDF file. This works nicely. However, starting with LibreOffice 7, LibreOffice generates PDFs with version 1.6, and pdflatex as used by Sphinx (4.1.2) only accepts PDFs up to version 1.5, producing warning messages such as
PDF inclusion: found PDF version <1.6>, but at most version <1.5> allowed
That would easily be fixable by including \pdfminorversion=6 early in the generated LaTeX file. However, putting it (first) in the preamble via conf.py is too late:
! pdfTeX error (setup): PDF version cannot be changed after data is written to the PDF file.
Are there any other legal means to just insert raw LaTeX early (without resorting to scripted file manipulation)? Or do you have any other hints on how to specify the PDF version that gets produced by LaTeX/Sphinx and thus get rid of the warnings? I know, they are just warnings, but these things tend to become errors sooner than one might think...
First of all, some of the answers to this question might be useful if you definitely want to upgrade to PDF version 1.6.
Conversely, if the actual PDF version of your figures is not an issue, you can also force the PDFs you insert to be version 1.5, using Ghostscript with the following command taken from here:
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.5 -o output.pdf input.pdf
That way you avoid introducing any instability from messing with the LaTeX configuration (even though setting \pdfminorversion=6 should be fine by now).
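Since the figures are already generated via a makefile, the downgrade can be wired in as a pattern rule; a minimal sketch, where the -v15 suffix is purely a naming assumption (adjust to your setup and include the -v15 variants from Sphinx; the recipe line must be indented with a tab):
%-v15.pdf: %.pdf
	gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.5 -o $@ $<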

Ghostscript pdfwrite produces zero length output when reading from PDF files, no error reported, fine with other inputs

I have a requirement to append one PDF file to another.
I felt that Ghostscript was the way forward, and installed the 64 bit Windows version (9.53.0), but if I attempt to do anything with pdfwrite where the input is a PDF, e.g.
gswin64c -DNOSAFER -sDEVICE=pdfwrite -o output.pdf input.pdf
I get zero length output (with no error messages at all). This happens whether the PDF is one of Ghostscript's shipped examples, one generated using tcpdf, or one saved from a Windows application. It happens whether I try to read from a single PDF or from multiple ones (the latter being my use case).
If I convert the input PDFs to Postscript and then use pdfwrite on those, it works like a dream, e.g.
call pdf2ps input.pdf temp.ps
gswin64c -DNOSAFER -sDEVICE=pdfwrite -o output.pdf temp.ps
EPS inputs work fine also; the only problem seems to be with PDF ones. But Ghostscript can read and display any PDF (and indeed convert any PDF to Postscript); it just can't cope with PDFs as input to pdfwrite, as far as I can see.
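For the actual appending use case, the same round-trip extends naturally to several inputs; a sketch with hypothetical file names:
call pdf2ps first.pdf first.ps
call pdf2ps second.pdf second.ps
gswin64c -dNOSAFER -sDEVICE=pdfwrite -o merged.pdf first.ps second.ps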
I can find no reference anywhere to this particular issue.
This turned out not to be limited to PDF input; it's just easier to trigger it that way. The problem was that an internal data type was changed from a build-dependent size to always be 64 bits, but a #define'd value wasn't correctly updated, so the 64-bit Windows build was still using a value intended for 32-bit builds.
There's a commit to fix the problem here. However, this seems serious enough that a new build, 9.53.1 (so that's patch level 1 already...), will be forthcoming shortly (if it's not already there).
It would help a lot if people could report bugs when they find this kind of problem, and even better, if there are any volunteers to try out the release candidates; we would really prefer not to make releases with serious problems.

How can I convert PDF to PNG now that ImageMagick no longer works on my shared hosting server

---EDIT---
As discussed below and in the comments, my ISP rolled out ImageMagick's default policy.xml on June 25, which turns off convert for PDF files, which is exactly what I need it for. I am getting crickets in response to my requests to modify it, and I don't consider that a good solution anyway, as it could revert on the next upgrade. I have found that Ghostscript will convert PDF files. I would appreciate any input on whether I am on the right track, but so far this looks very promising.
My existing call is this
convert -density 300 -quality 100 "Aqualarm.pdf" -resize 800 -sharpen 0x1.0 -flatten "Aqualarm.png"
The proposed Ghostscript version is this
gs -dNOPAUSE -dBATCH -r300 -dDownScaleFactor=3 -sDEVICE=png16m -sOutputFile="Aqualarm_%03d.png" Aqualarm.pdf
In this case Aqualarm is a test file.
---EDIT 2---
Using Ghostscript as described above worked, with one modification: convert numbers files starting at 0 and gs numbers files starting at 1. I had to put in a test: if file 0 was missing, I changed the index to 1 (see the sketch below). Other than that I am happy with the result. This is apparently a common problem, even on non-shared hosts. The issue is that updates to ImageMagick will overwrite edits to policy.xml, so whatever you do there to make things work might stop working on the next update. Since ImageMagick uses Ghostscript to do this anyway, I don't see any reason not to bypass the middleman.
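A minimal sketch of that index test in PHP, assuming the _%d output naming used in the final command further down:
// hypothetical: convert starts numbering at 0, gs at 1
$index = file_exists($file . "_0.png") ? 0 : 1;
$firstpage = $file . "_" . $index . ".png";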
From other reading I found that the reason ImageMagick disabled PDF by default is an error in Ghostscript that was fixed a few versions back.
---END EDIT---
My website is on a shared hosting server. For years I have used ImageMagick "convert" to turn pdf documents into png. Now I get an error message as described here
ImageMagick security policy 'PDF' blocking conversion
The message is: convert-im6.q16: attempt to perform an operation not allowed by the security policy `PDF' # error/constitute.c/IsCoderAuthorized/408.
The suggested solution is to modify policy.xml but of course on a shared host I do not have access to that file.
I have also seen the suggestion to install pdftoppm, but even after hours of searching I cannot find out how to do that locally, without root access.
Is there a way that will work on a shared host server?
Thank you for reading.
I decided that getting policy.xml changed to allow PDF was not a good solution, because it might just be overwritten at the next release and put me right back where I am. Research uncovered that ImageMagick uses Ghostscript to do the PDF conversions anyway, so why put up with an unreliable middleman?
More research found some command-line instructions to do the conversion, but the resolution was terrible. Only when I got up close to a resolution of 300 did I get good results, and then the file was huge. Ghostscript has an option that allows a high internal resolution plus a downscale factor to bring the output to a smaller size. Why this is better than rendering directly at the size I want is a mystery to me, but it is the recommended approach, and experimentation showed it to produce high quality. The final solution is as follows:
$gscommand = "gs -dNOPAUSE -dBATCH -r300 -dDownScaleFactor=3 -sDEVICE=png16m -sOutputFile=\"" .$file . "_%d.png\" " . $file . ".pdf";
$returnedvalue = exec($gscommand);
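If the file name can contain spaces or shell metacharacters, a hedged hardening of the same call would quote the paths:
// escapeshellarg keeps the %d placeholder intact inside single quotes
$gscommand = "gs -dNOPAUSE -dBATCH -r300 -dDownScaleFactor=3 -sDEVICE=png16m"
    . " -sOutputFile=" . escapeshellarg($file . "_%d.png")
    . " " . escapeshellarg($file . ".pdf");
$returnedvalue = exec($gscommand);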
In closing, this seems to be a pretty common problem without a solution other than using a different program. One recommendation is pdftoppm, which I did not find out how to install on a shared host, but with Ghostscript doing the job there is no need to figure that out.
I hope this post helps others faced with this problem.

Ghostscript skips characters when merging PDFs

I have a problem when using Ghostscript (version 8.71) on Ubuntu to merge PDF files created with wkhtmltopdf.
The problem, which I experience on random occasions, is that some characters get lost in the merge process and are replaced by nothing (or a space) in the merged PDF. The original PDF looks fine, but after the merge some characters are missing.
Note that a single missing character, such as the number 9 or the letter a, can be lost in one place in the document yet show up fine somewhere else, so it is not a display problem or a font issue as such.
The command I am using is:
gs \
-q \
-dNOPAUSE \
-sDEVICE=pdfwrite \
-sOutputFile=/tmp/outputfilename \
-dBATCH \
/var/www/documents/docs/input1.pdf \
/var/www/documents/docs/input2.pdf \
/var/www/documents/docs/input3.pdf
Has anyone else experienced this, or even better, does anyone know a solution for it?
I've seen this happen when the names of embedded font subsets are identical, but the real content of these subsets is different (containing different glyph sets).
Check all your input files for the fonts used. Use Poppler's pdffonts utility for this:
for i in input*.pdf; do
pdffonts ${i} | tee ${i}.pdffonts.txt
done
Look for the font names used in each PDF.
My theory/bet: you are seeing identical font names (names similar to BAAAAA+ArialMT) used by different input files.
The BAAAAA+ prefix used for subset font names is supposed to be random (though the official specification is not very clear about this). Some applications use predictable prefixes, however, starting with BAAAAA+, then CAAAAA+, DAAAAA+, etc. (OpenOffice.org and LibreOffice are notorious for this). This means that the prefix BAAAAA+ gets used in every single file where at least one subset font is used...
It can easily happen that your input files do not use the exact same subset of characters. However, the identical names could make Ghostscript think that the fonts really are the same. It then (falsely) 'optimizes' the merged PDF and embeds only one of the two font instances (both having the same name, for example BAAAAA+Arial). However, this instance may not include some glyphs which were part of the other instance(s).
This leads to some characters missing in merged output.
I know that more recent versions of Ghostscript have seen a heavy overhaul of their font handling code. Maybe you'll have more luck with Ghostscript v9.06 (the most recent release to date).
I'm very much interested in investigating this in even bigger detail. If you can provide a sample of your input files (as well as the merged output given by GS v8.70), I can test if it works better with v9.06.
What you could do to avoid this problem
Try to always embed fonts as full sets, not subsets:
I don't know if and how you can control to have full font embedding when using wkhtmltopdf.
If you generate your input PDFs from Libre/OpenOffice, you're out of luck and you'll have no control over it.
If you use Acrobat to generate your input PDFs, you can tweak font embedding details in the Distiller settings.
If Ghostscript generates your input PDFs the commandline parameters to enforce full font embeddings are:
gs -o output.pdf -sDEVICE=pdfwrite -dSubsetFonts=false input.file
Some types of fonts cannot be embedded fully, only subsetted (TrueType, Type3, CIDFontType0, CIDFontType1, CIDFontType2). See this answer to the question "Why doesn't Acrobat Distiller embed all fonts fully?" for more details.
Do the following only if you are sure that no one else gets to see, print, or use your individual input files: do not embed the fonts at all; only embed fonts in the final result PDF when merging your inputs with Ghostscript.
I don't know if and how you can control to have no font embedding when using wkhtmltopdf.
If you generate your input PDFs from Libre/OpenOffice, you're out of luck and you'll have no control over it.
If you use Acrobat to generate your input PDFs, you can tweak font embedding details in the Distiller settings.
If Ghostscript generates your input PDFs the commandline parameters to prevent font embedding are:
gs -o output.pdf -sDEVICE=pdfwrite -dEmbedAllFonts=false -c "<</AlwaysEmbed [ ]>>setpagedevice" input.file
Some types of fonts cannot be embedded fully, only subsetted (Type3, CIDFontType1). See this answer to the question "Why doesn't Acrobat Distiller embed all fonts fully?" for more details.
Do not use Ghostscript; use pdftk for merging the PDFs instead. pdftk is a 'dumber' utility than Ghostscript (at least older versions of pdftk are) when it comes to merging PDFs, and this dumbness can be an advantage (see the sketch below).
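For reference, a pdftk merge equivalent of the gs command from the question would be something like:
pdftk /var/www/documents/docs/input1.pdf \
  /var/www/documents/docs/input2.pdf \
  /var/www/documents/docs/input3.pdf \
  cat output /tmp/outputfilename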
Update
To answer once more, but this time more explicitly (following the extra question from @sacohe in the comments below). In many (not all) cases the following procedure will work:
Re-'distill' the input PDF files with the help of Ghostscript (preferably the most recent version from the 9.0x series).
The command to use is this (or similar):
gs -o redistilled-out.pdf -sDEVICE=pdfwrite input.pdf
The resulting output PDF should then use different (unique) prefixes for the font names, even where the input PDFs used the same name prefix for different font subsets.
This procedure worked for me when I processed a sample of original input files provided to me by 'Mr R', the author of the original question. After that fix, the "skipped character problem" was gone in the final result (a merged PDF created from the fixed input files).
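Scripted over many inputs (file names hypothetical, following the pdffonts loop above), the fix-then-merge could look like this:
for i in input*.pdf; do
gs -o fixed-${i} -sDEVICE=pdfwrite ${i}
done
gs -o merged.pdf -sDEVICE=pdfwrite fixed-input*.pdf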
I wanted to give some feedback: unfortunately, the re-processing trick doesn't seem to work with Ghostscript 8.70 (as shipped in Red Hat/CentOS releases) and files exported as PDF from Word 2010 (which seems to use the ABCDEE+ prefix for everything). And I haven't been able to find any pre-built versions of Ghostscript 9 for my platform.
You mention that older versions of pdftk might work. We moved away from pdftk (newer versions) to gs because some PDF files would cause pdftk to core dump. @Kurt, do you think that trying to find an older version of pdftk might help? If so, what version do you recommend?
Another ugly method that halfway works is to use:
-sDEVICE=pdfwrite -dCompatibilityLevel=1.2 -dHaveTrueTypes=false
This converts the fonts to bitmaps, but it then causes the characters on the page to be a bit light (not a big deal), selecting text is off by about one line height (mildly annoying), and worst of all, even though the characters display OK, copy/paste gives random garbage for the text.
(I was hoping to post this as a comment, but I guess I can't do that; is the answer closed?)
From what I can tell, this issue is fixed in Ghostscript version 9.21. We were having a similar issue where merged PDFs were missing characters, and while @Kurt Pfeifle's suggestion of re-distilling those PDFs did work, it seemed a little infeasible/silly to us. Some of our merged PDFs consisted of 600 or more individual PDFs, and re-distilling every single one of those just to merge them seemed nuts.
Our production version of Ghostscript was 9.10, which was exhibiting this problem, but when I ran some tests on 9.21 the problem vanished. I have been unable to produce a document with missing or mangled characters using GS 9.21, so I think that's the real solution here.

Correctly converting PDF to PS and vice versa

I'm using "pdftops" to convert .pdf files to .ps files and then "ps2pdf" for the reverse process (poppler-utils). The problem is that when creating the .pdf files from the .ps files, the text looks ok, but when i try to copy it, the characters are very strange (it's like they are corrupted). I used these tools on other files for a long time and it worked fine.
I also tried "pdftohtml -xml" to create an .xml file, and the text is ok (the characters are extracted correctly).
What could be going wrong in the conversion? If I use "pdftops" and "ps2pdf", are there some options that need to be changed?
If I create the .xml output, is there a way to create a .pdf file from the .xml file?
EDIT: The pdffonts output for original.pdf and for roundtripped.pdf was attached (not reproduced here).
I'm just covering the PS->PDF conversion. (I'm assuming your phrase 'vice versa' isn't meant to point to a 'round-trip' conversion of the very same file [PDF->PS->PDF], but to the general direction of conversion for any PS file. Is that correct?)
First of all, your ps2pdf is most likely only a shell script, which internally uses a Ghostscript command with some default parameters to do the real work. ps2pdf is much easier to use; Ghostscript has many more options, but it is more difficult to learn. ps2pdf takes away a lot of the potential control you could have if you used Ghostscript directly. (You can tweak a few parameters with ps2pdf -- but then you are already much closer to running the real Ghostscript command anyway...)
Second, without knowing exactly how your PS input file is conditioned, it is difficult to give you good advice: does your PS file embed the fonts it uses? Which types of fonts are they? Etc.
Third, when it comes to outputting PDF, Ghostscript has gained a lot of additional power and control, and has had a few bugs and weak spots removed, over the last few years. So, which version of Ghostscript is installed on your system? (Remember, ps2pdf calls Ghostscript; it will not work without a locally installed gs executable.)
One likely cause for your inability to copy text from the PDF is the font type (and encoding) that ended up being used and embedded in your PDF file. What font details can you tell us about your resulting PDFs? (Try pdffonts your.pdf to find out -- pdffonts is also part of the Poppler utils you mentioned.)
You may try this (full) Ghostscript command for PS->PDF conversion and check where it takes you:
gs \
-o output.pdf \
-sDEVICE=pdfwrite \
-dPDFSETTINGS=/prepress \
-dHaveTrueTypes=true \
-dEmbedAllFonts=true \
-dSubsetFonts=false \
-c ".setpdfwrite <</NeverEmbed [ ]>> setdistillerparams" \
-f input.ps
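Afterwards, check the fonts in the result with the same Poppler utility mentioned above; with these settings, the fonts should now show as fully embedded (emb yes, sub no):
pdffonts output.pdf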