Same image operations applied to two different PDFs, different results

I'm working with two PDFs that are not identical but should have the same operation applied to them.
The first one is generated by Microsoft Office 365 by downloading a Word document as a PDF.
The second one is generated by Google Drive by downloading a Google Docs document as a PDF.
I have some preliminary code using Aspose that applies the same image to both PDF files with exactly the same code. I'm not inclined to blame the library right away, as it is capable of generating the correct output when operating on the Office 365 document:
// note: Anyone familiar with the PDF format itself should have no
// issues inferring the low-level operations being performed here...
fun Page.writeImage(image: InputStream) {
    // Register the image as a named XObject in the page's resources.
    val imageName = resources.images.add(image.inMemory())
    // 400x200 rectangle anchored at the top-left corner of the page.
    val rectangle = rectangleFromTopLeft(0.0, 0.0, 400.0, 200.0)
    val matrix = rectangle.defaultMatrix()
    contents.add(listOf(
        GSave(),                    // q  - save graphics state
        ConcatenateMatrix(matrix),  // cm - position/scale the image
        Do(imageName),              // Do - paint the image XObject
        GRestore()                  // Q  - restore graphics state
    ))
}
Regardless of which file I provide, the coordinates for the rectangle and matrix remain the same.
For the Office 365 derived PDF, the image is applied to the page exactly as I specify. Where things get weird is with the Google Docs derived PDF: the image comes out flipped vertically and at the bottom of the page!
View the four PDF files in their before and after states.
I would love for any PDF experts to perhaps be able to explain to me what's going on here. My initial suspicion is that some prior state or operation in the Google Docs PDF is in effect prior to my image operation.
That said, I'm not familiar enough (yet!) with the PDF spec to pick it out...

I don't know who you should blame, but there is a straightforward reason for the difference.
The Google Docs document has a page stream that begins with:
1 0 0 -1 0 792 cm
This does the vertical flipping of the page: it maps a point (x, y) to (x, 792 - y). The 792 compensates for the flip and moves things back onto the page - it should be the height of the page in points.
It does not wrap this in a q ... Q pair to do a "save ... restore", which means this matrix remains in effect for everything that follows on the page. As you might already know, the PDF specification does not provide a way to reset the current transformation matrix; you can only concatenate further transformations onto it.
When you add content to the page at the end, your content now inherits this matrix, which is why you see it flipped and at the bottom.
The Microsoft file does not do this and as a result it's handled properly. In this case the matrix remains the identity matrix and you end up with your content where you expected it.
How to fix this? Well, if your library doesn't provide a way to know what the current page matrix is, that's going to be very difficult. It can of course be solved "just for this document" by applying the inverse matrix to cancel out the stupid thing Google did here, but I can imagine this is not the general solution you're looking for.
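To make the "inverse matrix" idea concrete, here is a minimal sketch written in the same hypothetical wrapper style as the question's writeImage (Matrix, ConcatenateMatrix, etc. are the asker's helpers, not a documented Aspose API), assuming a page height of 792 points. The matrix 1 0 0 -1 0 792 is its own inverse, so concatenating it once more restores the identity CTM before positioning the image:

fun Page.writeImageUndoingFlip(image: InputStream) {
    val imageName = resources.images.add(image.inMemory())
    val rectangle = rectangleFromTopLeft(0.0, 0.0, 400.0, 200.0)
    val matrix = rectangle.defaultMatrix()
    // Same numbers as the stray "1 0 0 -1 0 792 cm" left active by the
    // Google-generated stream; concatenating it again cancels the flip.
    // (792.0 is assumed here; in general it should be the page height.)
    val undoFlip = Matrix(1.0, 0.0, 0.0, -1.0, 0.0, 792.0)
    contents.add(listOf(
        GSave(),
        ConcatenateMatrix(undoFlip),
        ConcatenateMatrix(matrix),
        Do(imageName),
        GRestore()
    ))
}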

Related

"Re-paginate" PDF using iText

Disclaimer:
I am using iText 5. I know this is generally frowned upon (vs. using iText 7), but I am working with considerable legacy code that uses iText 5, and upgrading is not within my control.
Requirements:
A "simple" PDF/A is received as input (text only, these are generated from RTF), as well as a float value corresponding to a desired first page length in inches.
A PDF/A must be output that is identical to the input PDF, except it is paginated as follows: first page length = input value; each subsequent (not first or last) page will fill a standard page length; the last page will be truncated a constant number of points below the content nearest the bottom of the page. Note that input and output width will be identical and constant.
Progress / Approach:
I have extended the SimpleTextExtractionStrategy to generate XML containing font information (size and family, bold or italics, etc.) as well as location information (relative to an absolute coordinate system where the origin is at the top left corner of the first page of the input PDF) for each "span" of text extracted from the input PDF.
I then generate a new PDF page by page (where each page is the desired length according to the requirements outlined above), filtering the extracted XML info with LINQ based on the bounds of each new page, and adding appropriately formatted text at the appropriate location using ColumnText.ShowTextAligned(...).
Problem:
The approach outlined above mostly works: it generates PDFs with the desired page structure, but some information is lost in translation, namely colored text and underlined text. While colored text shouldn't be seen in these PDFs, underlined text absolutely must be detected.
This set of requirements should also include PDFs with tables. I originally planned on implementing a different module that adheres to the same interface for table PDFs, as these are generated and used separately from the PDFs generated from RTF, and iText has relatively strong table functionality built in.
The two concerns outlined above, coupled with the fact that my described approach was born out of an attempt to reuse existing code, lead me to believe that an entirely different approach may be necessary, or at least much better. It seems to me that there should be a way to capture content byte info and clip it as necessary to "re-paginate" the input PDF, only worrying about moving content that falls along a page boundary.
Essentially, I am looking for (iText based) recommendations for a better approach. Pseudo-code type answers or simply recommendations for classes / interfaces that may help are acceptable. While it would be nice to handle text and tables together, any advice pertinent to one or the other would also be appreciated. I have perused much of the available documentation on the iText website and other SO questions, but have not found quite what I'm looking for.
Note that no code is included in this question as I am looking for a high-level approach that is entirely different from what I have tried.
Edit:
I didn't notice it before, but the way in which I was reusing fonts (similar to this) resulted in some unexpected (but documented as such) behavior. It seems that I will need to avoid extracting information for re-pagination at the text level, as it will be difficult to ensure continuity of fonts between input and output.
I solved this problem a while ago, but figured I would post my solution. I'm sure it's not the most efficient solution, but it works well for my purposes. Note that this will re-paginate a text-only PDF as described in the question. Table PDFs are handled separately.
The basic process is this:
Use a custom TextExtractionStrategy to extract XML containing information regarding ascent and descent lines for all text in the input PDF, as well as what page it originally appears on.
Given the page length requirements as described in the question (first page = input value, subsequent = standard length, last page = fit content) and the XML info regarding text positions, determine what content will fit on each page of the output PDF. Create a map of where each input page will need to be cropped (top and bottom, note that each input page may be cropped more than once), as well as a map of which cropped pages will need to be "concatenated" together in the final output.
Copy the input PDF page by page to an intermediate temporary PDF (using PdfCopy). If an input page must be cropped more than once (ex: first 2 inches of input page 1 = page 1 output, next 6 inches of input page 1 = page 2 output, final 0.5 inch of input page 1 = top of page 3 output), ensure that it is copied the appropriate number of times (1 time per crop).
Crop each page of the intermediate copied PDF appropriately. This is done by modifying the MediaBox and / or CropBox.
Concatenate the appropriate cropped pages together into the final output PDF's pages. I used a PdfWriter to first create a new page of the appropriate height, then added each appropriate cropped page at the appropriate position in the output PDF page's byte content using contentByte.AddTemplate(inputCroppedPage, 0, bottomOfLastAddedCroppedPage). (A rough sketch of steps 4 and 5 follows below.)
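For reference, a minimal, untested sketch of steps 4 and 5 against the iText 5 Java API from Kotlin (the question uses iTextSharp, whose API mirrors it). The file names, the Crop structure, and the grouping of cropped pages per output page are placeholders for illustration, not the actual code described above:

import com.itextpdf.text.Document
import com.itextpdf.text.Rectangle
import com.itextpdf.text.pdf.PdfArray
import com.itextpdf.text.pdf.PdfName
import com.itextpdf.text.pdf.PdfReader
import com.itextpdf.text.pdf.PdfStamper
import com.itextpdf.text.pdf.PdfWriter
import java.io.FileOutputStream

// One crop of one page of the intermediate (already copied) PDF: keep the
// horizontal strip between lowerY and upperY, in points, y axis pointing up.
data class Crop(val pageNumber: Int, val lowerY: Float, val upperY: Float)

// Step 4: shrink the MediaBox/CropBox of each intermediate page to its strip.
fun cropPages(src: String, dst: String, crops: List<Crop>) {
    val reader = PdfReader(src)
    for ((pageNumber, lowerY, upperY) in crops) {
        val media = reader.getPageSize(pageNumber)
        val box = PdfArray(floatArrayOf(media.left, lowerY, media.right, upperY))
        val pageDict = reader.getPageN(pageNumber)
        pageDict.put(PdfName.MEDIABOX, box)
        pageDict.put(PdfName.CROPBOX, box)
    }
    PdfStamper(reader, FileOutputStream(dst)).close()
    reader.close()
}

// Step 5: stack groups of cropped pages onto new output pages, top to bottom.
fun concatenate(croppedSrc: String, dst: String, groups: List<List<Int>>, pageWidth: Float) {
    val reader = PdfReader(croppedSrc)
    val document = Document()
    val writer = PdfWriter.getInstance(document, FileOutputStream(dst))
    document.open()
    val canvas = writer.directContent
    for (group in groups) {
        val totalHeight = group.map { reader.getPageSize(it).height }.sum()
        document.setPageSize(Rectangle(pageWidth, totalHeight))
        document.newPage()
        var y = totalHeight
        for (pageNumber in group) {
            val box = reader.getPageSize(pageNumber)
            y -= box.height
            // Offset by -box.bottom because the cropped MediaBox no longer
            // starts at y = 0 in the imported page's coordinate space.
            canvas.addTemplate(writer.getImportedPage(reader, pageNumber), 0f, y - box.bottom)
        }
    }
    document.close()
    reader.close()
}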
To anyone who managed to read and understand all of that, congratulations. To anyone else, please let me know if you are confused. The solution described above is a little twisted and tough to put into words. While there is too much code to post here (and I am not at liberty to share the code on GitHub or similar), I would be happy to answer any questions that will help someone else implement something similar.
The TextExtractionStrategy mentioned in step 1 was inspired by this answer. Essentially, I used System.Xml.Linq to create an XML document rather than concatenating strings to form HTML, and I ignored any font information, storing only information regarding where text is located on the page (you'll see that this information is available in the linked answer, it just isn't written into the final HTML).
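For the extraction side (step 1), a minimal Kotlin sketch of such a listener against the iText 5 Java parser API. It only records page-local ascent/descent y coordinates and the page number; converting to the absolute top-left-origin coordinate system and serializing to XML, as described above, is left out, and names like LinePositionListener are illustrative:

import com.itextpdf.text.pdf.PdfReader
import com.itextpdf.text.pdf.parser.ImageRenderInfo
import com.itextpdf.text.pdf.parser.PdfReaderContentParser
import com.itextpdf.text.pdf.parser.RenderListener
import com.itextpdf.text.pdf.parser.TextRenderInfo
import com.itextpdf.text.pdf.parser.Vector

// Collects, per text chunk, the page it appears on and the y positions of
// its ascent and descent lines (page-local PDF coordinates, y axis up).
class LinePositionListener(private val pageNumber: Int) : RenderListener {
    data class Chunk(val page: Int, val text: String, val ascentY: Float, val descentY: Float)
    val chunks = mutableListOf<Chunk>()

    override fun renderText(renderInfo: TextRenderInfo) {
        val ascentY = renderInfo.ascentLine.startPoint.get(Vector.I2)
        val descentY = renderInfo.descentLine.startPoint.get(Vector.I2)
        chunks.add(Chunk(pageNumber, renderInfo.text, ascentY, descentY))
    }

    override fun beginTextBlock() {}
    override fun endTextBlock() {}
    override fun renderImage(renderInfo: ImageRenderInfo) {}
}

fun extractLinePositions(path: String): List<LinePositionListener.Chunk> {
    val reader = PdfReader(path)
    val parser = PdfReaderContentParser(reader)
    val result = mutableListOf<LinePositionListener.Chunk>()
    for (page in 1..reader.numberOfPages) {
        val listener = LinePositionListener(page)
        parser.processContent(page, listener)
        result += listener.chunks
    }
    reader.close()
    return result
}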

Extracting text from PDF with correct/sensible coordinates

My company licenses both iTextSharp and PdfTools. Trying to figure out the root cause, I also built Apache's PDFBox: all show the same behavior, so rather than creating two support requests and a post on the PDFBox list, I'm trying SO first for the general problem.
For a real world PDF (according to the document's properties it was created by "SAP NetWeaver 740") all extracted text coordinates are way off, while the content is fine. Across all the tools I listed above:
The page size (as in, MediaBox and CropBox) is 842.0 x 595.0 - a portrait invoice. My default test word (all are off, but that's the one that prompted my investigation) starts roughly 80% of the way in. All tools report the coordinates of that text with x=778 - outside of the page bounds. The y coordinate seems to be fine, though. Probably related, the width is off (too wide by a large margin) while the height is again fine.
Now, maybe the PDF is broken in some way. But then again: The text is rendered fine of course. If I select the text in - say - Acrobat Reader, that works fine (i.e. the selection rectangle matches the text on the screen). And I assume that SAP generates rather bland/unsophisticated documents, tbh.
I guess my question boils down to: Under which circumstances would text appear to be outside of the page's boundaries? What might cause the horizontal position to be totally out of whack (and always too large)?

How to measure different coordinates from a PDF file on Windows?

I am looking for a way to measure the coordinates of different rectangles in a PDF file.
Mainly, I have to perform some overprinting on an existing PDF, and I need to know the x, y, w, h of where I am supposed to write the text.
It seems that Preview.app on Mac has this ability but so far I wasn't able to find anything on Windows that does the same.
Please do not confuse this feature with the measuring tools from Adobe Reader, which are used to measure distances in printed construction drawings, not the PDF page itself.
It seems that the default unit of measure is the point, so I need something that lets me select a rectangle and tells me its coordinates.
Please do not suggest exporting as an image and using something else to measure the pixels on the image.
Update: http://legacy.activepdf.com/support/knowledgebase/view.cfm?tk=rl&kb=11866 -- PDF Units, that's what I am looking for, something to measure the PDF coordinates in PDF units.
Disclaimer: I work for Atalasoft.
I know you said not to suggest this, but honestly, it's the easiest approach:
If you mean "sweep out a rectangle in the UI and report the coordinates", that's pretty straightforward, but it's going to be a build-your-own type of thing. What you will need are:
A PDF rasterizer (GhostScript, Acrobat, FoxIt, Atalasoft) to get you an image at a specific resolution.
A tool to display that image in a window and let you sweep out a rectangle (this is straightforward WinForms-type code for .NET, but we have a control that does this out of the box - combining 1 & 2 into one step).
A tool that can look at the structure of a PDF page and report back the crop box (if any) and the media box for each page (iText, DotPdf).
A tool/understanding of matrix transformations to build the matrix that goes from display space into PDF space (and/or vice versa, probably in iText, definitely in DotPdf)
The code flow becomes something like:
For each page:
Open document, pull out crop and media box, rasterize page, build transformation matrix.
Display image, build/hook into event for selection changing.
Push the image viewer rectangle coordinates through the transformation matrix (see the sketch after this list).
Profit.
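To illustrate step 3, here is a small self-contained sketch (not tied to any of the libraries above) that maps a selection rectangle, given in pixels on the rendered image, back into PDF user space. It assumes the page was rasterized at a known DPI, has no /Rotate entry, and that the crop box is the rendered area; the transform is written out as plain arithmetic rather than an explicit matrix:

data class PdfRect(val llx: Double, val lly: Double, val urx: Double, val ury: Double)

// Map a selection rectangle given in image pixels (origin top-left, y down)
// to PDF user-space points (origin at the crop box lower-left, y up).
// Assumes the page was rendered at `dpi` with no rotation.
fun pixelsToPdf(
    pxLeft: Double, pxTop: Double, pxWidth: Double, pxHeight: Double,
    dpi: Double, cropLlx: Double, cropUry: Double
): PdfRect {
    val scale = 72.0 / dpi                       // points per pixel
    val llx = cropLlx + pxLeft * scale
    val urx = cropLlx + (pxLeft + pxWidth) * scale
    val ury = cropUry - pxTop * scale            // flip the y axis
    val lly = cropUry - (pxTop + pxHeight) * scale
    return PdfRect(llx, lly, urx, ury)
}

// Example: a 150x75 px selection at (100, 200) on a 300 dpi render of a
// US Letter page (crop box 0 0 612 792) maps to a 36x18 pt rectangle.
fun main() {
    println(pixelsToPdf(100.0, 200.0, 150.0, 75.0, 300.0, 0.0, 792.0))
}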
From a coding point of view (assuming zero prior knowledge of this, but a decent understanding of linear algebra), this would take from 3 days to 2 weeks. If I were to write it, it would probably take on the order of a few hours, but I wrote most of our PDF tools and this is pretty easy.
If your goal is to intuit where rectangles are on the page and report back those coordinates, that's also doable, but it is decidedly non-trivial in comparison. You need to write code that can rip through a PDF display list and interpret the contents correctly. That means being able to handle all the cumulative matrix transformations, the graphics state changes, the gstate object use, Form XObject placement, and so on. You need to answer the question "what is a rectangle?" because in PDF placement, it could be an re operator, a set of degenerate beziers, a set of lines, an image of a rectangle or (surprise!) a combination of all of the above. Honestly, intuiting anything about the content on a PDF page is a Herculean task.

Photoshop jsx image grid

What I am ultimately trying to do is to create a grid of images for print that are minor variations of the same thing (different text is all). Looking through online resources I was able to create a script that changes the text and exports all of the images necessary (several hundred). What I am trying to do now is to import all of these images into a new photoshop document and lay them all out in a grid and I can't seem to find any examples of this.
Can anyone point me in the right direction to place a file at a specific coordinate (I'm using CS5 and have the design suite so if there is a way in illustrator to do this quickly...)?
Also, I'm open to other ideas on how to do this (even other programs) easily. It's for labels so the positioning on the sheet has to be pretty precise...
The art layer object has a translate() method that takes delta x and y params. You'll need to open each image, copy it to the target document, get its current location (using artLayer.bounds) and do the math to find the deltas to position it where you want it. Your deltas can be in pixels so you'll get plenty of precision.
Check out your 'JavaScript Scripting Reference' pdf in your Adobe install directory for more details.
OK, I'm marking Anna's response as the answer because, though I didn't fully test it, it seems like it should work and it answers the original question with jsx. However, I'm also leaving my final solution in case anyone else runs across this with the same issue and prefers this method.
What I ended up doing instead is using InDesign. I figured out that it has a grid option that lets you import a number of files and place them all in an equal grid in a single command. This is almost exactly what I was looking for, except that it leaves a small border/margin in between the columns and rows, and mine were designed to meet exactly.
I couldn't figure out how to make it not have the border (I have very little experience with InDesign, it may be possible). However I was able to select all my images and scale them uniformly to be the correct size, then I just selected each column and dragged it over to snap to the adjacent column and the same with rows...

How to troubleshoot badly rendered PDF file

I have a small PDF file, which is supposed to display just the string "Hello World!".
Unfortunately, it displays black boxes instead of the characters. I suppose there is some problem with the fonts, but I am not sure.
Is there a way to diagnose and troubleshoot this issue? All I see on the Internet is advice to do this and do that, which helps some and not others (nothing helped me). It looks like shooting in the dark to me.
Here is a concrete example. Why does this PDF display black squares instead of the string Hello World?
EDIT
A bit of context: I am trying to convert a trivial HTML file to PDF using the wkhtmltopdf tool. It is absolutely frustrating, because according to my Internet searches the tool is supposed to work, and do it quite well. But the thing does not work for me and nothing I do changes this! Unfortunately, this tool seems to be the only free tool to convert HTML to PDF. This is a huge bummer.
If you want to find out whether a PDF is valid or what is wrong with it, there are a few general steps you can take:
1) Open it in Adobe Acrobat or Adobe Reader (on a desktop platform, not a tablet device). For a very long time the PDF format was owned by Adobe, and the way their software handles PDF is still close to the gold standard. However, there is a caveat with this: Acrobat is very, very smart in the way it handles PDF files and it will overlook or actively correct a number of mistakes other PDF engines might have a problem with...
2) Get yourself a preflight tool. These tools were invented for use in graphic arts, but have applications outside of it too. Popular examples are callas pdfToolbox (warning, I'm affiliated with this vendor!) or the "Preflight" plug-in you'll find in Adobe Acrobat Pro (which is actually also callas technology under the hood). Then preflight specifically against the PDF/A-1b or PDF/A-2b standard.
That last point deserves some more explanation. You should pick a PDF/A compliant preflight profile because the PDF/A (or PDF for Archival) standard is extremely picky. Its goal is to make sure that PDF files will still be readable in exactly the same way 50 years from now, and to that end it tests a whole range of properties of the file itself and the different components in it. You might be able to ignore some of the errors you get (because some of them will be connected to the fact that the PDF/A identification isn't correct, for example) but I wouldn't ignore any other errors unless you understand exactly what they mean and why they aren't relevant.
PS: Can you make your test file available some other way? The file you shared in your question is useless I think. When I do "Download" I get a PDF file that doesn't contain text and doesn't have fonts in it. Those rectangles you see are exactly that - rectangles. So this PDF renders fine - it's the PDF generation process (or the fact that you stored the file on Google docs - I really have no clue what that might do) that went berserk apparently.
In addition to David's hints (first using a known good viewer and then some preflight tool), there is a third level in the inspection process:
3) Inspect the PDF with your own eyes and with the PDF specification (made available by Adobe here) at hand in a text viewer (for a first impression) and (if the cause of the issue at hand is not immediately visible) then in a PDF browsing tool (for in-depth analysis).
This step is quite cumbersome at first but after some time you learn your way around in the PDFs.
A sample for such a PDF browser tool is RUPS but there are others around, too.
'Small PDF file supposed to display "Hello World!"'
Not correct. The file you linked to does not contain any code that could render pixels on screen or on paper that a human brain would read as "Hello World!". The file contains only vector drawing operations, which result in 12 black boxes.
The command line tool pdffonts does not indicate any font being used in the file:
pdffonts so-file-#15858199.pdf
What could still cause the "rendering" of the words you are looking for: some vector or pixel drawing code contained in the PDF. To find out about this, you'll have to look into the low level source code of the PDF.
The original file is 1,570 bytes, so this task does not look overly huge.
'Is there a way to diagnose and troubleshoot this issue?'
Using qpdf, a "command-line program that does structural, content-preserving transformations on PDF files", you can expand all contained streams (which are normally compressed):
qpdf --qdf --object-streams=disable so-file-#15858199.pdf qdf-#15858199.pdf
The resulting file, qdf-#15858199.pdf, is 3,875 bytes. Now open it in a text editor. PDF object no. 6 (lines 66-219) contains the contents of the page. Lines 123-194 contain only the operators m (moveto), l (lineto) and h (closepath). These lines contain 12 different groups of drawing commands, where each one represents the path for one of the 12 black boxes you see rendered on screen or printed on paper:
102.400001 12.8000001 m
268.800004 12.8000001 l
268.800004 179.200002 l
102.400001 179.200002 l
102.400001 12.8000001 l
h
Line 196 contains
f
which is the fill operator that actually fills black color into the (closed) path constructed so far. Nothing in the other lines (which I didn't analyze in detail) does any drawing that may resemble the shapes of any glyphs.
'Unfortunately, this tool seems the only free tool to convert HTML to PDF'
Not correct either.
1.
Assuming your "free" is meant as free as in liberty, then an alternative option is HTMLDOC.
HTMLDOC does not support specific fonts which may be assigned to your HTML input via CSS, but it does a good job in converting one or multiple HTML documents into a single PDF book containing chapters, page-numbering, page headers and footers and more. For all options available, see its full documentation.
2.
Assuming your "free" is meant as free as in beer, then an alternative option (for private usage only) could be PrinceXML.
PrinceXML does an extraordinarily good job when it comes to support almost all CSS features your HTML document may be using. See its documentation and also some of the sample PDF files produced by PrinceXML.