I am using ghostpcl-9.18-win64. This is the script that I used to generate the pdf file:
gpcl6win64-9.18.exe -sDEVICE=pdfwrite -sOutputFile=%1.pdf -dNOPAUSE %1.txt
The file to test can be found here and the result of running ghostpcl can be found here.
If you take a look at the PDF file, it contains only one page (there should be two) and some of the text is missing. Why is that? I had always assumed that GhostPCL would produce a PDF identical to a printout. Am I missing something, perhaps some parameters?
As a matter of fact, when I used the lpr command to print the file on RHEL, it printed exactly what I expected. I wonder how reliable the GhostPCL tool is at converting PCL files to PDF. And if it's not that reliable, a broader question: is there another tool that can do it? I am interested mainly in a Linux version.
The txt file is based on a file generated using SQR.
Thanks
In fact the OP did raise a bug report (but didn't mention it here):
http://bugs.ghostscript.com/show_bug.cgi?id=696509
The opinion of our PCL maintainer is that the output is correct, inasmuch as it matches at least one HP printer. See the URL above for slightly more details.
Based on the discussion on the bug thread, the input file is invalid because it should have had CRLFs instead of LFs only.
If I convert the LFs to CRLFs, my input file is converted to PDF as expected. However, converting LFs to CRLFs is not a general solution: according to support, LFs can also occur inside image data, and in that case converting such an LF to a CRLF could break the image.
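For a purely textual PCL stream, that conversion is a one-liner (a sketch using GNU sed syntax; the file names are placeholders, and as noted above it is not safe for files that embed binary image data, where a bare LF byte may be part of an image):

```shell
# Append a carriage return to every line, turning LF line endings
# into CRLF (GNU sed understands \r in the replacement text).
# WARNING: only safe for text-only PCL -- binary image data that
# happens to contain bare LF bytes would be corrupted.
sed 's/$/\r/' input.txt > input-crlf.txt
```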
It seems there is one thing I was wrong about on the bug thread: on our system, lpr does include carriage returns in the final file that gets sent to the printer. I followed the instructions in the 'Getting the data which would go to the printer' section of https://wiki.ubuntu.com/DebuggingPrintingProblems to print to a file, and that file includes carriage returns.
I have a requirement to append one PDF file to another.
I felt that Ghostscript was the way forward, and installed the 64-bit Windows version (9.53.0), but if I attempt to do anything with pdfwrite where the input is a PDF, e.g.
gswin64c -DNOSAFER -sDEVICE=pdfwrite -o output.pdf input.pdf
I get zero length output (with no error messages at all). This happens whether the PDF is one of Ghostscript's shipped examples, one generated using tcpdf, or one saved from a Windows application. It happens whether I try to read from a single PDF or from multiple ones (the latter being my use case).
If I convert the input PDFs to Postscript and then use pdfwrite on those, it works like a dream, e.g.
call pdf2ps input.pdf temp.ps
gswin64c -DNOSAFER -sDEVICE=pdfwrite -o output.pdf temp.ps
EPS inputs work fine also - the only problem seems to be with PDF ones. But Ghostscript can read and display any PDF (and indeed convert any PDF to Postscript); it just can't cope with PDFs as input to pdfwrite, as far as I can see.
I can find no reference anywhere to this particular issue.
This turned out to not be limited to PDF input, it's just easier to trigger it that way. The problem was that an internal data type was changed from a build-dependent size to always be 64-bits, but a #define'd value wasn't correctly updated so the 64-bit Windows build was still using a value intended for 32-bit builds.
There's a commit to fix the problem here. However, this seems serious enough that a new build, 9.53.1 (so that's patch level 1 already...), will be forthcoming shortly (if it's not out already).
It would help a lot if people could report bugs when they find this kind of problem, and it would be even better if there were volunteers to try out the release candidates; we would really prefer not to make releases with serious problems.
I have a process that needs to write multiple PostScript and PDF files to a single PostScript file that is generated by, and will continue to be modified by, Word Interop VB code. Each call to Ghostscript results in an extra blank page. I am using Ghostscript 9.27.
Since there are several technologies and factors involved, I've narrowed it down: the problem can be demonstrated by converting a PostScript file to PostScript and then to PDF via the command line. The problem does not occur going directly from PostScript to PDF. Here's an example, along with the errors it produces:
C:\>"C:\Program Files (x86)\gs\gs9.27\bin\gswin32c.exe" -dNOPAUSE -dBATCH -sDEVICE=ps2write -sOutputFile=C:\testfont.ps C:\smallexample.ps
C:\>"C:\Program Files (x86)\gs\gs9.27\bin\gswin32c.exe" -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=C:\testfont.pdf C:\testfont.ps
Can't find (or can't open) font file %rom%Resource/Font/TimesNewRomanPSMT.
Can't find (or can't open) font file TimesNewRomanPSMT.
Can't find (or can't open) font file %rom%Resource/Font/TimesNewRomanPSMT.
Can't find (or can't open) font file TimesNewRomanPSMT.
Querying operating system for font files...
Didn't find this font on the system!
Substituting font Times-Roman for TimesNewRomanPSMT.
I'm starting with the assumption that the font errors are the cause of the extra page (if only to rule that out, I know it is not certain). Since my ps->pdf test does not exhibit this problem and my ps->ps->pdf does, I'm thinking ghostscript is not writing font data that was in the original postscript file to the one it is creating. I'm looking for a way to preserve/recreate that in the resulting postscript file. Or if that is not possible, I'll need a way to tell ghostscript how to use those fonts. I did not have success attempting to include them as described in the GS documentation here: https://www.ghostscript.com/doc/current/Use.htm#CIDFontSubstitution.
Any help is appreciated.
I've made this an answer, even though I'm aware it doesn't answer the question, because it won't fit as a comment.
I think your assumption that the missing fonts are causing your problem is flawed. Many PDF files do not embed all the fonts they require, I've seen many such examples and they do not emit extra pages.
You haven't been entirely clear in your description of what you are doing. You describe two processes: one going from PostScript to PDF, and one going from PostScript to PostScript (WHY?) and then to PDF.
You haven't described why you are processing PostScript into a PostScript file.
In particular you haven't supplied an example file to look at. Without that there's no way to tell whether your experience is in fact correct.
For example, it's entirely possible that you have set /Duplex true and have an odd number of pages in your file. This will cause an extra blank page (quite properly) to be emitted, because duplexing requires an even number of pages.
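The arithmetic behind that is simply rounding the page count up to the next even number, which can be sketched as:

```shell
# A duplexing device pads an odd page count with one blank page;
# an even count is left alone.
pages_in=7
pages_out=$(( pages_in + pages_in % 2 ))
echo "$pages_out"   # prints 8: a 7-page file gains one blank page
```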
The documentation you linked to is for CIDFont substitution; it has nothing to do with Font substitution. CIDFonts and Fonts are different things in PDF and (more particularly) PostScript. But I honestly doubt that is your problem.
I'd suggest that you put (at the least) 'smallexample.ps' somewhere public and post the URL here; that way we can at least follow the same steps you are doing, and we can probably tell you what's going on. An explanation of why you're doing this would be useful too. I would normally strongly suggest that you don't do extra steps like this; each step carries the risk of degrading the output in some way.
Thank you for the response. I am posting as an answer as well due to the comment length restrictions:
I think you are correct that my assumption about fonts is wrong. I have found the extra page in the second ps file and do not encounter the font errors until the second conversion.
I have a process that uses the VB MSWord Interop libraries to print multiple documents to a single ps file using a virtual printer set up with Ghostscript and RedMon. I am adding functionality to mix in PDF files too. It works, but results in an extra page.

To narrow down where the problem actually was, I tried much simpler test cases via the command line. I only get the extra page when Ghostscript converts ps to ps (whether or not a PDF is involved). Converting ps to pdf, I do not get the extra page. Interestingly, I can work around the problem by converting the ps to pdf and then both pdfs back to ps. That is slower and should not be necessary, however, so I would like to identify and resolve the extra page issue.

I cannot share that particular file. I'll see if I can create an example I can share that also exhibits the problem. In the meantime, I can confirm that the source ps file is six pages and the duplexing settings are as follows; this duplex definition also appears in the resulting ps file with the extra page. Might there be some other common culprits I could check for in the source ps? Thank you.
featurebegin{
%%BeginFeature: *DuplexUnit NotInstalled
%%EndFeature
}featurecleanup
featurebegin{
%%BeginFeature: *Duplex None
<</Duplex false /Tumble false>> setpagedevice
%%EndFeature
}featurecleanup
I wish to validate the PDFs in a Windows 7 folder containing 30,000 PDFs. I have discovered that some PDFs do not render properly and give an "Insufficient data for image" error.
How can I modify following Ghostscript command to input all pdfs in a folder, rather than as a single pdf or a list of pdfs?
gswin32c.exe -o nul -sDEVICE=nullpage -r36x36 "D:/Pdf/04701.pdf"
OK, first: Ghostscript doesn't 'validate' PDFs. All your command line does is determine whether the default behaviour of Ghostscript, while interpreting a file, sends any messages to stdout or stderr.
There are several problems with that: Ghostscript doesn't flag even a warning for everything, and its default behaviour is not to throw an error but to emit a warning and continue. Since you aren't looking at the output (it's going to the bit bucket), it's entirely possible that you would miss the fact that swathes of output were simply blank. So you may well miss a faulty file.
'Insufficient data for an image' is simply one possible error; there are an awful lot of badly written PDF files out there.
If you want to handle every file in a folder, you can't do it with Ghostscript alone, as it requires each input file to be specified. However, you can write a command shell script easily enough. Since you are clearly on Windows, just use a for loop and have the 'do' clause call Ghostscript just as above.
Something like:
for %s in (c:\files\*.*) do gswin32c ... "%s"
just type
help for
at the command shell for information on 'for'.
Would appreciate your help with the following:
I have 2 partially accessible PDFs (containing tags), and I want to concatenate them using some command-line tool (such as PDFtk or Ghostscript, or any Perl module).
I've tried doing this with PDFtk and Ghostscript, and both output a non-accessible PDF without the original tags (each of the PDFs being concatenated had tags).
Do you know of any way to implement this with one of the mentioned tools or some other command line tool for Linux?
(Not necessarily freeware)
Perl modules are also an option.
Thanks!
pdfunite in-1.pdf in-2.pdf in-n.pdf out.pdf
You can read more in a similar question
Solved: a newer version of iText works (the version that was current when I wrote this didn't; it only works since 5.4.4).
It's important to mention (this was missing from the documentation in the past) that
when concatenating documents in tagged mode, you must keep all readers
open until the resulting document is closed, i.e.:
first:
document.close();
and only after this:
reader.close();
I have a problem when using Ghostscript (version 8.71) on Ubuntu to merge PDF files created with wkhtmltopdf.
The problem I experience on random occasions is that some characters get lost in the merge process and replaced by nothing (or space) in the merged PDF. If I look at the original PDF it looks fine but after merge some characters are missing.
Note that a single missing character, such as the number 9 or the letter a, can be lost in one place in the document but show up fine somewhere else, so it is not a rendering problem or a font issue as such.
The command I am using is:
gs \
-q \
-dNOPAUSE \
-sDEVICE=pdfwrite \
-sOutputFile=/tmp/outputfilename \
-dBATCH \
/var/www/documents/docs/input1.pdf \
/var/www/documents/docs/input2.pdf \
/var/www/documents/docs/input3.pdf
Anyone else that have experienced this, or even better know a solution for it?
I've seen this happen when the names of embedded font subsets are identical but the real contents of these subsets are different (containing different glyph sets).
Check all your input files for the fonts used. Use Poppler's pdffonts utility for this:
for i in input*.pdf; do
  pdffonts "${i}" | tee "${i}.pdffonts.txt"
done
Look for the font names used in each PDF.
My theory/bet is that you'll see identical font names (names similar to BAAAAA+ArialMT) used by different input files.
The BAAAAA+ font name prefix used for subset fonts is supposed to be random (though the official specification is not very clear about this). Some applications use predictable prefixes, however, starting with BAAAAA+, then CAAAAA+, DAAAAA+ etc. (OpenOffice.org and LibreOffice are notorious for this). This means that the prefix BAAAAA+ gets used in every single file where at least one subset font is present...
It can easily happen that your input files do not use the exact same subset of characters. However, the identical names could make Ghostscript think that the fonts really are the same. It then (falsely) 'optimizes' the merged PDF and embeds only one of the two font instances (both having the same name, for example BAAAAA+Arial). However, this instance may not include some glyphs that were part of the other instance(s).
This leads to some characters missing in merged output.
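One quick way to spot such clashes (a sketch; it assumes pdffonts reports named as in the loop above, and that subset names follow the usual six-capital-letter prefix pattern) is to extract every subset font name and print any name that occurs in more than one report:

```shell
# Extract subset font names (the "ABCDEF+Name" pattern) from each
# pdffonts report, de-duplicate them per file, then print any name
# shared by two or more files -- these are the clash candidates.
for f in input*.pdf.pdffonts.txt; do
  grep -o '[A-Z]\{6\}+[A-Za-z0-9-]*' "$f" | sort -u
done | sort | uniq -d
```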
I know that more recent versions of Ghostscript have seen a heavy overhaul of their font handling code. Maybe you'll have more luck with Ghostscript v9.06 (the most recent release to date).
I'm very much interested in investigating this in greater detail. If you can provide a sample of your input files (as well as the merged output given by GS v8.70), I can test whether it works better with v9.06.
What you could do to avoid this problem
Try to always embed fonts as full sets, not subsets:
I don't know if and how you can control to have full font embedding when using wkhtmltopdf.
If you generate your input PDFs from Libre/OpenOffice, you're out of luck and you'll have no control over it.
If you use Acrobat to generate your input PDFs, you can tweak font embedding details in the Distiller settings.
If Ghostscript generates your input PDFs, the command-line parameters to enforce full font embedding are:
gs -o output.pdf -sDEVICE=pdfwrite -dSubsetFonts=false input.file
Some types of font cannot be embedded fully, only subsetted (TrueType, Type3, CIDFontType0, CIDFontType1, CIDFontType2). See this answer to the question "Why doesn't Acrobat Distiller embed all fonts fully?" for more details.
Do the following only if you are sure that no-one else gets to see, print or use your individual input files: do not embed the fonts at all -- embed them only when merging your inputs into the final result PDF with Ghostscript.
I don't know if and how you can control to have no font embedding when using wkhtmltopdf.
If you generate your input PDFs from Libre/OpenOffice, you're out of luck and you'll have no control over it.
If you use Acrobat to generate your input PDFs, you can tweak font embedding details in the Distiller settings.
If Ghostscript generates your input PDFs, the command-line parameters to prevent font embedding are:
gs -o output.pdf -sDEVICE=pdfwrite -dEmbedAllFonts=false -c "<</AlwaysEmbed []>> setdistillerparams" -f input.file
Some types of font cannot be embedded fully, only subsetted (Type3, CIDFontType1). See this answer to the question "Why doesn't Acrobat Distiller embed all fonts fully?" for more details.
Do not use Ghostscript, but rather use pdftk for merging PDFs. pdftk is a more 'dumb' utility than Ghostscript (at least older versions of pdftk are) when it comes to merging PDFs, and this dumbness can be an advantage...
Update
To answer once more, but this time more explicitly (following the extra question from @sacohe in the comments below). In many (not all) cases the following procedure will work:
Re-'distill' the input PDF files with the help of Ghostscript (preferably the most recent version from the 9.0x series).
The command to use is this (or similar):
gs -o redistilled-out.pdf -sDEVICE=pdfwrite input.pdf
The resulting output PDF should then be using different (unique) prefixes to the font names, even when the input PDF used the same name prefix for different font (subsets).
This procedure worked for me when I processed a sample of original input files provided to me by 'Mr R', the author of the original question. After that fix, the "skipped character problem" was gone in the final result (a merged PDF created from the fixed input files).
I wanted to give some feedback that, unfortunately, the re-processing trick doesn't seem to work with Ghostscript 8.70 (the version in Red Hat/CentOS releases) and files exported as PDF from Word 2010 (which seems to use the ABCDEE+ prefix for everything), and I haven't been able to find any pre-built versions of Ghostscript 9 for my platform.
You mention that older versions of pdftk might work. We moved away from pdftk (newer versions) to gs because some PDF files would cause pdftk to core dump. @Kurt, do you think that trying to find an older version of pdftk might help? If so, what version do you recommend?
Another ugly method that halfway works is to use:
-sDEVICE=pdfwrite -dCompatibilityLevel=1.2 -dHaveTrueType=false
which converts the fonts to bitmaps. However, it then makes the characters on the page a bit light (not a big deal), text selection is off by about one line height (mildly annoying), and, worst of all, even though the characters display OK, copy/paste gives random garbage for the text.
(I was hoping this would be a comment, but I guess I can't do that; is the answer closed?)
From what I can tell, this issue is fixed in Ghostscript version 9.21. We were having a similar issue where merged PDFs were missing characters, and while @Kurt Pfeifle's suggestion of re-distilling those PDFs did work, it seemed a little infeasible/silly to us. Some of our merged PDFs consisted of 600 or more individual PDFs, and re-distilling every single one of those just to merge them seemed nuts.
Our production version of Ghostscript was 9.10, which was causing this problem. But when I did some tests on 9.21, the problem seemed to vanish. I have been unable to produce a document with missing or mangled characters using GS 9.21, so I think that's the real solution here.