I found a very weird PDF document here:
This is the PDF document
When opening it in Adobe Reader, only half of the contents are visible, while if I switch to SumatraPDF, all of the contents are visible.
What is happening to this document, and how can I fix it so that it displays normally in Adobe Reader?
Acrobat X says 'an error exists on this page...' which is why only half of it is visible. It draws up to the point where the error occurs.
SumatraPDF is based on MuPDF and clearly MuPDF is simply more tolerant of this particular class of broken PDF file. Acrobat is normally quite tolerant and doesn't even bother to issue warnings most of the time, sadly.
Ghostscript gives me two warnings: first, that it expected a number and didn't get one, so it replaced it with 0; and second, that an invalid shading was ignored.
The actual problem is the shading dictionary in object 90:
90 0 obj
<<
/BBox [ 0.0260000005 0.467999995 0.973999977 ]
Bounding Boxes are required to have 4 values and this one only has 3, so it's not valid.
It's not easy to fix a PDF file; the best solution is to recreate it with a fixed tool. The file is compressed, so you'll need to decompress it before you can modify it, and then you'll have to guess what the missing value ought to be.
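If you do want to attempt a manual repair, one possible route is via qpdf. This is only a sketch, assuming qpdf (and its fix-qdf helper) is installed; the file names and the guessed fourth BBox value are placeholders:
qpdf --qdf --object-streams=disable broken.pdf editable.pdf
# hand-edit object 90 in editable.pdf so the /BBox has four numbers, e.g.
#   /BBox [ 0.0260000005 0.467999995 0.973999977 1.0 ]   (the added 1.0 is a guess)
fix-qdf editable.pdf > fixed.pdf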
I have a PDF file. The text can be extracted in the Edge browser or in Adobe Reader after installing some fonts. Please let me know how to extract the text with iTextSharp (latest version 5.x). I use the code below. Empty text is returned, but the file has 8 pages of text.
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

var text = string.Empty;
var reader = new PdfReader(bytes);
var pages = reader.NumberOfPages;
for (int i = 1; i <= pages; i++)
{
    var t = PdfTextExtractor.GetTextFromPage(reader, i, new SimpleTextExtractionStrategy());
    text += t;
}
reader.Close();
The PDF
The PDF at first glance appears to have been processed by an OCR program that did not realize that the pages are rotated by 180°.
For example, on the second page the OCR program started in what a PDF viewer displays as the bottom left corner:
and here recognized
epnq eoⅢ9時u ez `9P...
押印S ’句OP JuP9A...
eA I臥O9叩Od n^Z小no...
This is not that bad; e.g. epnq eoⅢ... is not really unlike ...mce bude rotated by 180°.
The OCR software appears to have a certain affinity for CJK glyphs; this impression is reinforced by the fact that it uses fonts with an Adobe-Japan1-2 ROS and a 90ms-RKSJ-H encoding.
Text extraction
All the information above considered, though, I have some doubt that
The text can be extracted in the Edge browser or in Adobe Reader after installing some fonts.
At least I doubt that anything resembling the actual text can be extracted, no matter how many fonts are installed. On the other hand, both Adobe Reader and Edge here extract, out of the box, the weird text recognized from the rotated letters.
iText
My observation with iText differs: while the OP reports that
Empty text is returned
I get a lot of CJK glyphs (I have added the Asian jar, which might make a difference), but unfortunately not those found by inspection of the PDF.
As far as I remember, though, text extraction by Encoding + ROS has never been a focus during iText development up to version 5.5.x (inclusive); in particular, the mixed single-byte/double-byte encoding of 90ms-RKSJ-H might not be supported.
I have several PDF files that have been OCR-processed (not by me). They contain both the scanned image and the OCR text. They seem to work fine in some viewers (iPhone/iPad), but not in others (Preview.app on macOS) which makes them somewhat awkward to read.
From googling around, it seems that the text & image may be layered incorrectly or there is a problem with the fonts used? I'm not even sure I'm using the correct vocabulary, as most hits I get are worthless.
Is it possible to use ghostscript or something to batch-fix these files?
Example of "bad" rendering:
It's impossible to say what's wrong with the PDF file (or viewer) without seeing the PDF file, which also makes it hard to propose solutions!
You could certainly run the file through Ghostscript to the pdfwrite device, and use the -dFILTERTEXT switch to not process the text. The resulting document would therefore not contain the offending text, but would still contain the image.
Of course, you would then not be able to search or highlight the text.
You could instead use -dFILTERIMAGE which would remove the original image leaving the text behind. But then anything in the original document which was not text would now be missing.
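For example, something along these lines (a sketch; the file names are placeholders, and the FILTER switches need a reasonably recent Ghostscript):
# drop all text, keeping images and vector content
gs -o no-text.pdf -sDEVICE=pdfwrite -dFILTERTEXT input.pdf
# drop all images, keeping the OCR'ed text
gs -o no-images.pdf -sDEVICE=pdfwrite -dFILTERIMAGE input.pdf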
The usual 'best practice' is to have the text drawn in rendering mode 3, which makes no marks. This allows you to see the original image without the OCR'ed text interfering. It's possible that the viewer you are using is not honouring the text rendering mode, which would be a (fairly serious) bug in the viewer. The most recent versions of macOS seem to have some nasty bugs in the Quartz PDF rendering engine.
The other way to do this is to draw the text first, then put the original image on top of it, but that's hard to get wrong, so I suspect it's more likely the text rendering mode.
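For reference, OCR'ed hidden text in a content stream typically looks something like this (a sketch; the font name, position and string are placeholders):
BT
/F1 10 Tf
3 Tr            % rendering mode 3: neither fill nor stroke, i.e. invisible
100 700 Td
(recognized text) Tj
ET
A viewer that ignores the 3 Tr would paint the OCR text on top of the scanned image, which matches the symptom described.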
EDIT
The PDF file first draws the text, then draws the image on top of the text. The underlying text should not appear. mkl is quite correct in his comment.
The correct way to fix this is to fix the consumer which is rendering it incorrectly. As I mentioned above, the latest version of Quartz seems to have some fairly serious bugs; you might choose to raise this as a bug with Apple.
The only other solution would be to run this through something which will remove the text. Ghostscript can do this but there are implications; firstly it will no longer be possible to search/copy/paste text from the document. Secondly you would need to run quite a complex command line in order to prevent the decompressed JPX images being recompressed as JPEG, which would probably result in compromised quality. Finally the resulting file size would be larger.
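A starting point might look something like this (a sketch, untested against your file; it disables automatic image filtering so the images are losslessly Flate-compressed rather than re-encoded as JPEG, which is also why the output grows):
gs -o no-text.pdf -sDEVICE=pdfwrite -dFILTERTEXT \
   -dAutoFilterColorImages=false -dAutoFilterGrayImages=false \
   -dColorImageFilter=/FlateEncode -dGrayImageFilter=/FlateEncode \
   input.pdf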
I have a PDF which renders fine in Acrobat but fails to print during the PDF to PS conversion process on our printer's RIP. After uncompressing it with pdftk and editing, I've found that if I replace the usage of a certain font it will print.
The font is a strange one, a TrueType subset with a single character (space).
If I pass the PDF through Ghostscript it reports no errors, however an Acrobat pre-flight check will report a missing glyph for space. This error is not reported for the original file. I'm just using a basic command: gswin32c -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -o gs.pdf original_sample.pdf
I've pulled out the font data from the original PDF and saved it. Running TTFDUMP.exe produces an interesting result where it seems that the 'glyf' table is missing:
4. 'glyf' - chksm = 0x00000000, off = 0x00000979, len = 0
5. 'head' - chksm = 0xE463EA67, off = 0x00000979, len = 54
Just wondering, am I interpreting this result correctly? Is it valid to run TTFDUMP like this on extracted data from a PDF? I think a 'glyf' table is required based on the spec, at least for the first 4 necessary characters.
TTFDUMP run on the ghostscript PDF produces a similar result but with a 1-byte 'glyf' table.
If so, it seems that Acrobat doesn't particularly care about the missing space while other programs (including the printer) do. It's odd, though, that it isn't reported as missing until it runs through Ghostscript.
The PDF is created by Adobe InDesign and the font is copyrighted like most so I can't share it.
Edit - I've accepted Ken's answer as he helped me on the Ghostscript bug tracker. In summary, it seems the font is broken as suspected due to the missing glyf table. Until I hear otherwise I'll have to suppose this is a bug in InDesign, and will continue investigating.
Yes, you can run ttfdump on an embedded subset font; it's still a perfectly valid font.
A missing glyph is not specifically a problem, because the .notdef glyph is used instead; a missing .notdef, however, means a font isn't legal.
I think you are mistaken about the legality of sharing the PDF file (from the point of view of font embedding). Practically every PDF file you see will contain copyrighted fonts, but these are permitted to be embedded and distributed as part of a PDF (or indeed PostScript) file. TrueType fonts contain flags which control the DRM of the font, and which can deny embedding in PDF (or other formats). Ghostscript honours these embedding flags in the font, as do Acrobat Distiller and other Adobe products.
There were some fonts which inadvertently shipped with DRM which prevented embedding, and there's a list of these somewhere, along with an explicit statement from the font foundry that it's permissible to embed them. I think this was somewhere on the Adobe web site a few years back.
So if you have a PDF file with the font embedded in it (especially if it was produced by an Adobe application), then I would be comfortable that it's legal to share.
I'm having some trouble figuring out what the problem actually is, and how you are using Ghostscript. If you are converting the PDF to PostScript and then back to PDF, then frankly all bets are off. Round-tripping files will often provoke problems.
In any event I'm happy to look at the file but you will have to make it available.
Example PDF page: https://db.tt/qRcF000k
This is a sample page from a document where copied text shows up as question marks in my favorite reader, SumatraPDF (MuPDF), just the same as in Adobe Acrobat. But my main problem is that because of this I cannot search this document, nor can I index it.
OTOH, xpdf's pdftotext extracts the correct text.
In Adobe Acrobat, if I use "Copy as formatted text", the correct text is written to the clipboard, although I still can't search from Acrobat.
Also if I open the linked page in Firefox's built-in PDF reader I can correctly copy the text.
Can Ghostscript perhaps be instructed to correct this issue, which I cannot describe differently than as 'unreadable characters'?
The PDF file uses subset fonts with non-standard Encodings and no ToUnicode CMaps. So no, you can't have Ghostscript 'correct' this file.
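For reference, a ToUnicode CMap is the piece that would map the font's character codes back to Unicode for copy/paste and search; a minimal excerpt looks like this (a sketch with made-up codes):
begincmap
1 begincodespacerange
<00> <FF>
endcodespacerange
2 beginbfchar
<01> <0041>   % code 0x01 -> U+0041 'A'
<02> <0062>   % code 0x02 -> U+0062 'b'
endbfchar
endcmap
Without such a table, and with non-standard Encodings, a consumer has nothing to translate the character codes with.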
In fact I can't see how anything can possibly be extracting sensible text from this, and indeed my versions of Acrobat (Pro X and Reader XI) can't copy meaningful text and don't appear to have a 'Copy as formatted text' menu item; can you tell me where to find this?
However, I notice that the PDF file has actually been created by Ghostscript (version 9.14), so possibly you mean 'starting with a different input file, which I haven't given you, could I have generated a PDF file where the text could be copied', to which I can only say 'I don't know'; it depends on what was in the original input file.
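If searching and indexing is the main goal, one workaround, building on your observation that xpdf's pdftotext extracts the correct text, would be to keep a sidecar text file per document for your indexer (a sketch; file names are placeholders):
pdftotext document.pdf document.txt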
I'm having difficulties filling in a form using pdftk when the text fields use TrueType fonts.
Font files (.ttf) are added to /Library/Fonts (OSX Mavericks)
The form is created with Adobe Acrobat Pro
The form includes normal (non form) text using these fonts
The form text fields also use these fonts
The form can successfully be filled and printed using Adobe Acrobat Pro and even Preview
However, pdftk throws an error when trying to fill it using the command:
pdftk ./my_form.pdf fill_form my_data.fdf output ./the_output.pdf
The output is:
Unhandled Java Exception in create_output():
java.lang.ArrayIndexOutOfBoundsException: 0
at pdftk.com.lowagie.text.pdf.DocumentFont.fillEncoding(pdftk)
at pdftk.com.lowagie.text.pdf.DocumentFont.doType1TT(pdftk)
at pdftk.com.lowagie.text.pdf.DocumentFont.<init>(pdftk)
at pdftk.com.lowagie.text.pdf.AcroFields.getAppearance(pdftk)
at pdftk.com.lowagie.text.pdf.AcroFields.setField(pdftk)
at pdftk.com.lowagie.text.pdf.AcroFields.setFields(pdftk)
If I change the font of the text inputs to Helvetica, Times Roman or Courier, pdftk will successfully create a PDF. Oddly, though, Arial and Georgia also throw the same error.
I have tried, to no avail, to embed the fonts in the PDF using Ghostscript as suggested in this question: How to repair a PDF file and embed missing fonts. gs may have embedded the fonts, but it removes the form fields, so the resulting PDF can't be fed back into pdftk.
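For reference, that approach boils down to an invocation roughly like the following (a sketch with placeholder file names; the exact flags vary between the answers there):
gs -o embedded.pdf -sDEVICE=pdfwrite -dEmbedAllFonts=true my_form.pdf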
A working resolution would be greatly appreciated.
I was getting the same java.lang.ArrayIndexOutOfBoundsException: 0 error using pdftk to fill forms on an Adobe Acrobat generated PDF. This question is super old, but I couldn't find a consistent answer on Stack Overflow or elsewhere, so I figured I'd post my fix.
What ended up working for me:
Opening the PDF in the OS X app Preview
Clicking into a form field, adding text then deleting that text (so nothing is actually changed)
Saving it
Running the PDF through pdftk again
I'm not that familiar with encoding or PDFs in general, but saving the PDF with Preview seems to fix the encoding or at least get it to a place where pdftk can work with it. Good luck.
This was causing a huge headache for me for 2 days. It turns out I was focusing on the wrong end of the problem.
A nice alternative that isn't as manual and only has to be done once is to enter some text in a field of the source PDF form, in your case ./my_form.pdf. I don't know exactly why this works, but it does. That way, if you want to create a new file at any time, you don't have to go through this trouble :)