UPDATE: I wrote to Wolfram support about this and will update the post if they can resolve the problem. Sorry for spamming SO with a technical support question, but here it remains in case anyone else is having the same issue.
Is anyone else having this problem with Mathematica 8? I recently upgraded and noticed that when I export Graphics to a PDF file, although the file appears fine on my computer, it prints as a blank page. For example, try
Rectangle[{1,1}]//
Graphics//
Export["~/test.pdf",#]&
which creates a PDF file containing a black square. This file opens fine, but if I send it to my department printer I just get a blank page. If I don't export the graphics but print the notebook from Mathematica, there's no problem: the graphics print as expected. If I use Mathematica 7 to do exactly the same thing, the PDF file prints as expected. Exporting to PNG in Mathematica 8 seems to work fine. And using the context menu Save Graphics As ... or File > Save Selection As ... to create a PDF containing just the graphic also works. However, these graphics eventually get included in a TeX document, and it would be far better if I could keep using the script I've got, which doesn't require any button clicking to generate them.
I'm running Mathematica 8.0.0.0 on Mac OS 10.6.7. I have not been able to test this on another printer yet, but this printer has never given me problems before and prints other PDF documents fine. Any ideas why this is happening?
Wolfram Research responds:
...
This issue has been reported by other users as
well and our developers are currently looking into it. I have added your
details to the report so you can be notified when this is resolved.
In the meantime, the alternatives that you could try are:
Try a different printer.
Rasterize the image with the function 'Rasterize' before exporting. If
the rasterized image loses some resolution, you could use the option
'ImageResolution' to edit this.
Rasterize[image, ImageResolution -> xxx]
Surely this is a bug (please report it to support@wolfram.com), but you can work around the problem by selecting the graphic and choosing File > Save Selection As... from the menu (or Save Graphic As... from the contextual menu). This produces a slightly different file that doesn't appear to exhibit the undesirable behavior we observe from Export[].
These problematic files, and LaTeX PDFs that include them, can be properly printed by Adobe Reader 10.1.2. That's if you're okay with installing and using a 450MB PDF reader.
I reproduced the problem (leading me to this question) with Mathematica 8.0.4.0 on Mac OS X 10.7.2. Wolfram suggested lame workarounds like Rasterize and told me
This issue has been addressed by our developers, and a fix will be included in a future version of Mathematica.
I am creating a separate question stemming from this one; the code used is almost the same. The original problem was about subsetting a font with PDFBox, which I more or less dealt with. But I then ran into another problem: the annotations, and how the fonts used in them are interpreted, particularly by Acrobat Reader DC.
I tried different combinations of fonts and embedding options and got rather desperate. I have a feeling that the way these things are handled by the programs that interpret PDF files is non-standard. I think I read somewhere that annotations and the way they are displayed are deliberately left non-standardized by the PDF format, to give viewers the freedom to handle them in their own way, since the main purpose of annotations is interaction with the user. TL;DR: I cannot understand why Acrobat Reader DC doesn't like the annotations I have created and saved with PDFBox. I even opened a question on Adobe's friendly and helpful User Community forum, but, as I expected, someone suggested that I'd better investigate this with the PDFBox team.
That may well be, but rather than writing to the PDFBox mailing list (I have never really gotten used to mailing lists, by the way), I want to open a question here, because I hope it could help others understand the PDF format better.
I am basically rephrasing my question from the Adobe forums here. Here is an example (Google Drive link) with FreeText annotations (it seems to make no difference if I use Stamp annotations instead); it causes problems when opened by Adobe Acrobat Reader DC version 21.001.20149.37945 (I think this corresponds to the April 16th 2021 update). Specifically, the problem happens when the Comments pane is opened by the user, either manually or automatically:
Manually:
link
Automatically:
link
While experimenting, I also tried unchecking the "Use local fonts" option in Preferences -> Page Display. I had the impression that Acrobat Reader might be more eager to show the error message once it is not allowed to substitute locally installed fonts for the erroneously embedded ones, but I am not sure that is true.
The error I get is the infamous "Cannot extract the embedded font XXXXXX+SomeFontName", as seen in the picture below:
link
The same problem also happens if I embed the full font (subsetting set to false when calling PDType0Font.load). I also tried embedding the OpenSans font instead of LiberationSans, manually converting LiberationSans to a TTF with fewer glyphs using FontForge, and even using the Windows ARIALN.TTF, thinking that maybe the font itself is the problem. All of them cause the same behavior in Acrobat Reader DC. I have also run the Acrobat Pro 2019 Preflight profile that scans the document for font inconsistencies, and it reports no errors.
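For context, the annotations are created roughly along the following lines (a simplified sketch rather than my exact code; the input file, rectangle coordinates and output name are just placeholders):

import java.io.File;

import org.apache.pdfbox.cos.COSName;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.PDPageContentStream;
import org.apache.pdfbox.pdmodel.PDResources;
import org.apache.pdfbox.pdmodel.common.PDRectangle;
import org.apache.pdfbox.pdmodel.font.PDType0Font;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotationMarkup;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAppearanceDictionary;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAppearanceStream;

public class FreeTextAnnotationSketch {
    public static void main(String[] args) throws Exception {
        try (PDDocument doc = PDDocument.load(new File("blank.pdf"))) {
            PDPage page = doc.getPage(0);

            // Load the TrueType font as a (subset-embedded) composite font.
            PDType0Font font = PDType0Font.load(doc, new File("LiberationSans-Regular.ttf"));

            // FreeText annotation (PDFBox 2.0.x: a markup annotation with the FreeText subtype).
            PDAnnotationMarkup freeText = new PDAnnotationMarkup();
            freeText.getCOSObject().setName(COSName.SUBTYPE, PDAnnotationMarkup.SUB_TYPE_FREETEXT);
            PDRectangle rect = new PDRectangle(50, 700, 200, 30);
            freeText.setRectangle(rect);
            freeText.setContents("αβγδ");

            // Normal appearance stream that draws the Greek text with the embedded font.
            PDAppearanceStream appearanceStream = new PDAppearanceStream(doc);
            appearanceStream.setBBox(new PDRectangle(rect.getWidth(), rect.getHeight()));
            appearanceStream.setResources(new PDResources());
            try (PDPageContentStream cs = new PDPageContentStream(doc, appearanceStream)) {
                cs.beginText();
                cs.setFont(font, 12);
                cs.newLineAtOffset(2, 10);
                cs.showText("αβγδ");
                cs.endText();
            }
            PDAppearanceDictionary appearance = new PDAppearanceDictionary();
            appearance.setNormalAppearance(appearanceStream);
            freeText.setAppearance(appearance);

            page.getAnnotations().add(freeText);
            doc.save("annotated.pdf");
        }
    }
}

The essential structure is: load the TTF as a PDType0Font, draw the text into the annotation's normal appearance stream with that font, and attach the annotation to the page.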
Of course, when I use e.g. PDType1Font.HELVETICA instead of a custom TTF font, I do not get the above errors. But I cannot use it, because it does not contain the glyphs for the Unicode characters that I use. Does anybody have a better idea?
Thank you very much!
EDIT: To make myself clear - the error does not ALWAYS appear. It appears constantly on some machines (e.g. I use a Windows 7 64-bit machine with the latest Acrobat Reader DC installed, where I can reproduce it fairly reliably), while on my Windows 10 64-bit machine with the same version of Acrobat Reader DC it sometimes appears and sometimes doesn't - I haven't figured out why, or in which cases. That made me suspect the font itself - but no, I checked that too: the font I am using opens up fine on the machine where the problem is fairly constant.
UPDATE: At my wits' end again, I created a blank page with Apache OpenOffice, exported it to PDF, opened it with Acrobat Reader DC (latest version), added a FreeTextTypewriter annotation (View -> Tools -> Comment -> Open) with 4 Greek letters in the ArialNarrow font, saved it, reopened it with Acrobat Reader DC, and it gives me the same error (cannot extract the embedded font...). So could this be a Reader problem? They have certainly made it difficult to diagnose. Here is the file, but I do not expect it to show errors on other machines. It's one of those moments when you start to believe in magic and the power of prayer (and a good night's sleep).
UPDATE 30/04/2021
So, to sum things up, I haven't come up with a solution yet, but I do have three files, created with PDFBox, OpenPDF (an iText 5 fork) and Acrobat Reader DC itself (it can append annotations and save - I just added a simple text box with Greek text through the Comment pane) - and they all trigger the above error message when opened by Acrobat Reader DC. I have posted the details in the Acrobat Reader forum here (same link as in the comment).
I have added the code that I used to create the OpenPDF example file here, and the three example files are in the same repository here.
I've been tinkering with Ghostscript and a port monitor (on an HP PCL 6 Universal driver) to convert print jobs into PDF. I've tested it with a few applications such as Word, Excel, Adobe Reader, Microsoft Edge, etc., and they all work properly.
However, upon testing Microsoft PowerPoint 2016, it seems that some graphics are not rendered properly through Ghostscript.
Actual Slide Below
Output From Ghostscript in PDF Below
I've even tested this with some other PDF generators such as BioPDF and CutePDF, as well as Adobe PDF, and they all produce the same output as above.
Has anyone faced similar issues before? If so, could someone point me in the right direction?
What you are doing isn't a single-step PowerPoint-to-PDF conversion, and Ghostscript is not rendering the PowerPoint. In fact, if you are creating a PDF file, Ghostscript isn't (ideally) rendering anything.
What's actually happening is that you are asking PowerPoint to print to a canvas, which is then passed to the PostScript printer driver. That produces PostScript, which is sent to the port. Your (and others') port monitor then sends the PostScript to the 'Distiller' (in your case Ghostscript and the pdfwrite device). The Distiller reformats the vector drawing commands into a PDF format and builds a PDF file from them. It doesn't render (turn into a bitmap image) anything unless forced to.
Obviously there are several places along that road where the problem could creep in. Given that you say the Adobe product (the others you mention all use Ghostscript) has the same problem, I think it's safe to assume that the problem isn't Ghostscript.
This also means that you aren't using the driver you think you are. Adobe can't handle PCL as an input medium as far as I'm aware, nor can Ghostscript. GhostPCL will handle PCL as input, but that's not what you say you are using.
Of course you haven't linked to an example file to demonstrate the problem, nor supplied an example command line, so this is all supposition.
Now if, somehow, you are using a PCL6 device, then the problem is most likely due to the presence of rasterOps in the output. RasterOps are part of the PCL imaging model; they do not exist in PDF and are a form of transparency. There are three ways to handle such content for a PDF output device: firstly, render the whole page content to an image; secondly, ignore the rasterOp objects; thirdly, treat the rasterOps as opaque.
GhostPCL and the pdfwrite device take the third option. So it's just conceivable that your original content has some transparent objects which are being handled as rasterOps by the PCL printer driver, and then rendered as opaque by GhostPCL and the pdfwrite device.
If that's somehow the case then the solution is simple: don't use a PCL printer driver, use the PostScript one.
If you post a link to a (simple, eg single page) example of what you are sending to Ghostscript, and a command line, then I can look at it. Please don't send me the PowerPoint, I can't use it and even if I could, my print setup would not match yours. I need the data being sent to Ghostscript.
[EDIT after looking at files]
I don't mean to sound like I'm lecturing; the problem is that people find these results in Google searches and then try to apply them based on a poor understanding of what's happening. So I find it best to be really clear in my answers about what's going on. It saves questions later :-)
The first thing I see is that the PCL is indeed PCL, and if you try running that through Ghostscript it throws horrible errors and exits. So presumably you aren't doing that.
The PostScript file contains nothing except huge images, rendered (presumably) at 600 dpi. It contains 2 pages, and the two pages look like your images above. That is why the PostScript is more than 20 times larger than the PCL file.
But.... if I open the .ppt file with OpenOffice (4.0.0 is what I have to hand) I see exactly the same thing. I don't, I'm afraid, have a copy of Microsoft PowerPoint, but from what I see here there are two conclusions:
firstly, the PDF I get looks pretty much like the PowerPoint, when viewed with OpenOffice at least. So there's something 'interesting' about your PowerPoint.
secondly, even if that's not what you expect, it's what's in the PostScript program. That means that either PowerPoint rendered the slide to a bitmap or the Windows printing system/HP driver did.
Now, if I run the PCL through GhostPCL instead of Ghostscript (rendering, not producing a PDF), then the result is more like what I think you are expecting. However, when sent to a PDF file the result is horrible, which strongly suggests to me that there's some form of transparency involved; PostScript doesn't support transparency at all, and PCL does it through rasterOps.
I'm afraid this means that the problem lies either in PowerPoint, the Windows print system or the PostScript printer driver you are using. Since the PCL is at least close to what you expect, I suspect this means PowerPoint is doing the right thing, and it's the printer driver messing up. It appears you are using the Windows PostScript printer driver.
So there's no way you can 'fix' this for files like this, at least not with Ghostscript. You would need to 'fix' the Windows PostScript printer driver, or possibly the Windows print system. You could try reporting a bug to Microsoft, presumably these files print incorrectly when sent to physical PostScript printers too.
I'm having difficulty filling in a form using pdftk when the text fields use TrueType fonts.
Font files (.ttf) are added to /Library/Fonts (OSX Mavericks)
The form is created with Adobe Acrobat Pro
The form includes normal (non form) text using these fonts
The form text fields also use these fonts
The form can successfully be filled and printed using Adobe Acrobat Pro and even Preview
However, pdftk throws an error when trying to fill it using the command:
pdftk ./my_form.pdf fill_form my_data.fdf output ./the_output.pdf
The output is:
Unhandled Java Exception in create_output():
java.lang.ArrayIndexOutOfBoundsException: 0
at pdftk.com.lowagie.text.pdf.DocumentFont.fillEncoding(pdftk)
at pdftk.com.lowagie.text.pdf.DocumentFont.doType1TT(pdftk)
at pdftk.com.lowagie.text.pdf.DocumentFont.<init>(pdftk)
at pdftk.com.lowagie.text.pdf.AcroFields.getAppearance(pdftk)
at pdftk.com.lowagie.text.pdf.AcroFields.setField(pdftk)
at pdftk.com.lowagie.text.pdf.AcroFields.setFields(pdftk)
If I change the font of the text inputs to Helvetica, Times Roman or Courier, pdftk successfully creates a PDF. Oddly, though, Arial and Georgia also throw the same error.
I have tried, to no avail, to embed the fonts in the PDF using Ghostscript, as suggested in this question: How to repair a PDF file and embed missing fonts. gs may have embedded the fonts, but it removes the form fields, so the resulting PDF can't be fed back into pdftk.
A working resolution would be greatly appreciated.
I was getting the same java.lang.ArrayIndexOutOfBoundsException: 0 error using pdftk to fill forms on an Adobe Acrobat generated PDF. This question is super old, but I couldn't find a consistent answer on stackoverflow or elsewhere so I figured I'd post my fix.
What ended up working for me:
Opening the PDF in the OS X app Preview
Clicking into a form field, adding text then deleting that text (so nothing is actually changed)
Saving it
Running the PDF through pdftk again
I'm not that familiar with encoding or PDFs in general, but saving the PDF with Preview seems to fix the encoding or at least get it to a place where pdftk can work with it. Good luck.
This was causing a huge headache for me for 2 days. It turns out I was focusing on the wrong end of the problem.
A nice alternative that isn't as manual and only has to be done once is to enter some text in a field of the source PDF form, in your case ./my_form.pdf. I don't know EXACTLY why this works, but it does. That way, if you want to create a new file at any time, you don't have to go through this trouble :)
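If you'd rather keep the whole workflow scripted, another avenue worth trying (just a suggestion, not something verified against this particular form) is to do the filling with Apache PDFBox instead of pdftk, since it has a different font-handling code path. A minimal sketch, where the field name "name" and the file paths are placeholders:

import java.io.File;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.interactive.form.PDAcroForm;
import org.apache.pdfbox.pdmodel.interactive.form.PDField;

public class FillWithPdfBox {
    public static void main(String[] args) throws Exception {
        try (PDDocument doc = PDDocument.load(new File("my_form.pdf"))) {
            PDAcroForm acroForm = doc.getDocumentCatalog().getAcroForm();
            if (acroForm == null) {
                throw new IllegalStateException("The document has no AcroForm");
            }
            // "name" is a placeholder; print acroForm.getFields() to see the real field names.
            PDField field = acroForm.getField("name");
            field.setValue("Some value"); // regenerates the field appearance using the field's font
            doc.save(new File("the_output.pdf"));
        }
    }
}

If PDFBox throws a similar exception while generating the field appearance, that at least narrows the problem down to the font data in the form itself rather than to pdftk.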
I have a small PDF file, which is supposed to display just the string "Hello World!".
Unfortunately, it displays black boxes instead of the characters. I suppose there is some problem with the fonts, but I am not sure.
Is there a way to diagnose and troubleshoot this issue? All I see on the Internet is advice to do this or do that, which helps some people and not others (nothing helped me). It looks like shooting in the dark to me.
Here is a concrete example. Why does this PDF display black squares instead of the string Hello World?
EDIT
A bit of context: I am trying to convert a trivial HTML file to PDF using the wkhtmltopdf tool. It is absolutely frustrating, because according to my Internet searches the tool is supposed to work, and to do it quite well. But it does not work for me, and nothing I do changes this! Unfortunately, this tool seems to be the only free tool to convert HTML to PDF. This is a huge bummer.
If you want to find out whether a PDF is valid or what is wrong with it, there are a few general steps you can take:
1) Open it in Adobe Acrobat or Adobe Reader (on a desktop platform, not a tablet device). For a very long time the PDF format was owned by Adobe, and the way their software handles PDF is still close to the gold standard. However, there is a caveat with this: Acrobat is very, very smart in the way it handles PDF files and it will overlook or actively correct a number of mistakes other PDF engines might have a problem with...
2) Get yourself a preflight tool. These tools were invented for use in graphic arts, but have applications outside of it too. Popular examples are callas pdfToolbox (warning, I'm affiliated with this vendor!) or the "Preflight" plug-in you'll find in Adobe Acrobat Pro (which is actually also callas technology under the hood). Then preflight specifically against the PDF/A-1b or PDF/A-2b standard.
That last point deserves some more explanation. You should pick a PDF/A compliant preflight profile because the PDF/A (or PDF for Archival) standard is extremely picky. Its goal is to make sure that PDF files will still be readable in exactly the same way 50 years from now, and to ensure that, it tests a whole range of properties of the file itself and of the different components in it. You might be able to ignore some of the errors you get (because some of them will be connected to the fact that the PDF/A identification isn't correct, for example), but I wouldn't ignore any other errors unless you understand exactly what they mean and why they aren't relevant.
PS: Can you make your test file available some other way? The file you shared in your question is useless, I think. When I click "Download" I get a PDF file that doesn't contain text and doesn't have fonts in it. Those rectangles you see are exactly that - rectangles. So this PDF renders fine - it's the PDF generation process (or the fact that you stored the file on Google Docs - I really have no clue what that might do) that apparently went berserk.
In addition to David's hints (first using a known good viewer and then some preflight tool), there is a third level in the inspection process:
3) Inspect the PDF with your own eyes, with the PDF specification (made available by Adobe here) at hand: first in a text viewer (for a first impression) and then, if the cause of the issue is not immediately visible, in a PDF browsing tool (for in-depth analysis).
This step is quite cumbersome at first, but after some time you learn your way around in PDFs.
An example of such a PDF browsing tool is RUPS, but there are others around, too.
'Small PDF file supposed to display "Hello World!"'
Not correct. The file you linked to does not contain any code that could render pixels on screen or on paper that a human brain would read as "Hello World!". The file indeed contains only vector drawing operations, which result in 12 black boxes.
The command line tool pdffonts does not indicate any font being used in the file:
pdffonts so-file-#15858199.pdf
What could still cause the "rendering" of the words you are looking for is some vector or pixel drawing code contained in the PDF. To find out, you'll have to look into the low-level source code of the PDF.
The original file is 1,570 bytes, so this task does not look overly daunting.
'Is there a way to diagnose and troubleshoot this issue?'
Using qpdf, a "command-line program that does structural, content-preserving transformations on PDF files", you can expand all contained streams (which are normally compressed):
qpdf --qdf --object-streams=disable so-file-#15858199.pdf qdf-#15858199.pdf
The resulting file, qdf-#15858199.pdf, is 3,875 bytes. Now open it in a text editor. PDF object no. 6 (lines 66-219) contains the contents of the page. Lines 123-194 contain only the operators m (moveto), l (lineto) and h (closepath). They form 12 different groups of drawing commands, where each group represents the path for one of the 12 black boxes you see rendered on screen or printed on paper:
102.400001 12.8000001 m
268.800004 12.8000001 l
268.800004 179.200002 l
102.400001 179.200002 l
102.400001 12.8000001 l
h
Line 196 contains
f
which is the fill operator that actually fills the path constructed (and closed) so far with black. Nothing in the other lines (which I didn't analyze in detail) does any drawing that could resemble the shapes of glyphs.
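If you prefer a programmatic cross-check over reading the decompressed source, a short Apache PDFBox program (my suggestion; it is not part of the toolchain discussed here) can confirm the same finding: no font resources on the page and no extractable text.

import java.io.File;

import org.apache.pdfbox.cos.COSName;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.PDResources;
import org.apache.pdfbox.text.PDFTextStripper;

public class InspectPdf {
    public static void main(String[] args) throws Exception {
        try (PDDocument doc = PDDocument.load(new File("so-file-#15858199.pdf"))) {
            int pageNo = 1;
            for (PDPage page : doc.getPages()) {
                System.out.println("Page " + pageNo++ + " font resources:");
                PDResources resources = page.getResources();
                if (resources != null) {
                    for (COSName fontName : resources.getFontNames()) {
                        System.out.println("  " + fontName.getName());
                    }
                }
            }
            // With no text-showing operators in the content stream, this prints an empty string.
            String text = new PDFTextStripper().getText(doc);
            System.out.println("Extracted text: '" + text.trim() + "'");
        }
    }
}

For this particular file, both the font listing and the extracted text should come back empty, matching what pdffonts and the qdf dump show.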
'Unfortunately, this tool seems the only free tool to convert HTML to PDF'
Not correct either.
1.
Assuming your "free" is meant as free as in liberty, then an alternative option is HTMLDOC.
HTMLDOC does not support specific fonts which may be assigned to your HTML input via CSS, but it does a good job in converting one or multiple HTML documents into a single PDF book containing chapters, page-numbering, page headers and footers and more. For all options available, see its full documentation.
2.
Assuming your "free" is meant as free as in beer, then an alternative option (for private usage only) could be PrinceXML.
PrinceXML does an extraordinarily good job when it comes to support almost all CSS features your HTML document may be using. See its documentation and also some of the sample PDF files produced by PrinceXML.
We have a WPF application which can perform either a report preview or a report print.
Both requests use the same code.
Call the report service which gets the report from Microsoft Report Services.
Convert the report into the desired format (in this case PDF).
Then return the report as a byte array.
The result is then written to a temporary file as a binary stream, and either popped into a window for preview or used to start a Process to print.
In both cases the temporary file is passed.
Print Preview works flawlessly! But Print Report prints with every occurrence of 'ti' missing. I see there is a printer escape sequence ESC t NUL/SOH, and I assume that if, for some reason, an escape character gets into that stream, the following 'ti' becomes part of an ignored print sequence. Hence the missing characters.
My first question is if anyone has ever experienced this with generated PDF reports?
My second question (obviously) is whether anyone knows of a utility I can use to view the binary data in the file being printed, to see what is in the file just before every 'ti' sequence.
After a great deal of searching I came across a post on the Adobe forum stating that version 8 had a bug where it would not print certain character combinations. Once I dug deeper, it seems the bug has returned, and the suggested workaround fixed our issue.
Workaround: print as image.
Adobe seems to be unable to do the most basic thing its software must do: print the exact content!
Answer for your second question:
First, do one of the following two things:
Set the Windows print spooler properties to not delete printed jobs.
Pause the target print queue.
Then grab the spool file from the Windows print spool directory (you can find out which location that is by looking at the (right-click) 'Properties...' dialog of the 'Printers and Faxes' folder).
I realize this is an old post, but I wanted to add some updated information to the comment above stating that it's a problem with Acrobat 8. We are using Acrobat 10.1.6 and still have the same problem. From what I've read, it's a problem with the Adobe product itself. The only real fix I've seen (actually a workaround) is to print as an image. LAME
Surprisingly, this bug is still there in 2021. Adobe cannot be relied upon to print documents properly. That takes away much of the appeal of its features if it cannot do the most basic thing it is needed for.
Printing as an image reduces the quality and blurs the document.
Simply open the document with Safari or Chrome and print from there.
I had a similar problem while printing directly from Firefox (using the Acrobat Reader plug-in). I downloaded the file and then printed it; the problem was solved.