I am trying to extract the images stored in a PDF as streams. While I can do this easily, I am not able to get accurate image rotation information. I am looking for specific information such as the MediaBox, the Rotate entry, and whether the page is in landscape or portrait mode.
When I extract the image, its orientation does not match what the end user sees in a PDF reader.
I binary-compared two PDFs (the same image rotated 90° in the first and 270° in the second) and found a difference in a particular stream object. However, I cannot make out what that stream information means.
Here are the two documents I am talking about:
http://bit.ly/eQZGKJ
http://bit.ly/g43Whb
The position, size and orientation of the image when displayed on the page is determined by the current transformation matrix (CTM). You have to execute the entire page content stream to determine the CTM that is in place when the image is displayed. It's like a virtual rendering of the PDF page.
Almost every image has a so-called CTM (current transformation matrix) associated with it. It gives the reader information about the position, rotation and skewing of the image.
Check the cm operator, which the PDF Reference describes as follows: "Modify the current transformation matrix (CTM) by concatenating the specified matrix (see Section 4.2.1, “Coordinate Spaces”). Although the operands specify a matrix, they are written as six separate numbers, not as an array." In your PDF documents:
rotated1.pdf contains "0 550.08 -743.04 0 743.04 0 cm"
rotated2.pdf contains "0 -550.08 743.04 0 0 550.08 cm"
So we can say that your image is rotated by 90° in one document and by 90° in the opposite direction in the other (and translated as well).
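To make that concrete, here is a minimal Python sketch of how to read the rotation out of those six cm operands. It assumes the matrix is a pure rotate/scale/translate (no skew), which holds for both of your documents:

import math

def decode_cm(a, b, c, d, e, f):
    # Split a PDF cm matrix [a b c d e f] into rotation, scale and translation.
    # Assumes a pure rotate/scale/translate (no skew), as in both documents above.
    rotation_deg = math.degrees(math.atan2(b, a))  # positive = counterclockwise in PDF user space
    scale_x = math.hypot(a, b)                     # displayed image width in points
    scale_y = math.hypot(c, d)                     # displayed image height in points
    return rotation_deg, scale_x, scale_y, (e, f)  # (e, f) is the translation

# rotated1.pdf: "0 550.08 -743.04 0 743.04 0 cm"
print(decode_cm(0, 550.08, -743.04, 0, 743.04, 0))  # -> (90.0, 550.08, 743.04, (743.04, 0))
# rotated2.pdf: "0 -550.08 743.04 0 0 550.08 cm"
print(decode_cm(0, -550.08, 743.04, 0, 0, 550.08))  # -> (-90.0, 550.08, 743.04, (0, 550.08))

Because a PDF image is always painted into a 1x1 unit square, scale_x and scale_y are also the displayed size of the image in points.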
There can also be a clipping path in effect, so you may only see part of the image. MediaBox and Rotate relate to the whole page, not to the image.
I created an images SFrame and merged it with an annotations SFrame.
I have verified that the coordinates of the annotation boxes match the locations of the features measured in Photoshop.
However, the models I create are non-functional, so I explored the merged data set with:
data['image_with_ground_truth'] = tc.object_detector.util.draw_bounding_boxes(data['image'], data['annotations'])
and I find that all the annotations are squashed in the top-left corner in Turi Create despite them actually being widely distributed on the source image as in the second image. The annotations list column shows the coordinates get read correctly into TC, but are mapped badly into what the model sees as bounding boxes.
Where should I look to find the scaling problem in Turi Create??
The version of ml-annotate I was using output coordinates with a different scale factor for each image in the set: some close, some off by as much as 3.3x.
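If you run into the same problem, one workaround is to rescale each image's annotations back into that image's true pixel space before training. Here is a minimal sketch, assuming Turi Create's usual annotation format (a list of dicts, each with a 'coordinates' dict in pixels); the helper name and the example scale factors are made up for illustration:

def rescale_annotations(annotations, scale_x, scale_y):
    # annotations: Turi Create-style list of dicts, each with a 'coordinates'
    # dict holding x, y, width, height in pixels (plus 'label', etc.).
    # scale_x / scale_y are hypothetical correction factors you would derive by
    # comparing the exported coordinates with positions measured in Photoshop.
    fixed = []
    for ann in annotations:
        c = ann['coordinates']
        ann = dict(ann)                       # don't mutate the original dict
        ann['coordinates'] = {
            'x': c['x'] * scale_x,
            'y': c['y'] * scale_y,
            'width': c['width'] * scale_x,
            'height': c['height'] * scale_y,
        }
        fixed.append(ann)
    return fixed

# Hypothetical usage (a single 3.3x factor here; in practice each image may need its own),
# then redraw the boxes to verify:
# data['annotations'] = data['annotations'].apply(lambda a: rescale_annotations(a, 3.3, 3.3))
# data['image_with_ground_truth'] = tc.object_detector.util.draw_bounding_boxes(
#     data['image'], data['annotations'])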
I'm searching for methods of text recognition based on document borders.
Or for methods that can solve the problem of finding a new viewpoint.
For example: the camera is at point (x1, y1, z1) and the resulting picture has perspective distortion, but we can find a position (x2, y2, z2) for the camera that corrects the picture.
Thanks.
The usual approach, which assumes that the document's page is approximately flat in 3D space, is to warp the quadrangle encompassing the page into a rectangle. To do so you must estimate a homography, i.e. a (linear) projective transformation between the original image and its warped counterpart.
The estimation requires matching points (or lines) between the two images, and a common choice for documents is to map the page corners in the original images to the image corners of the warped image. This will in general produce a rectangle with an incorrect aspect ratio (i.e. the warped page will look "wider" or "taller" than the real one), but this can be easily corrected if you happen to know in advance what the real aspect ratio is (for example, because you know the type of paper used, whether letter, A4, etc.).
A simple algorithm to perform the estimation is the so-called Direct Linear Transformation.
The OpenCV library contains routines to help accomplish all these tasks; look into it.
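For illustration, here is a minimal sketch of that corner-to-corner warp using OpenCV's Python bindings. The corner coordinates, the output size, and the file names are assumptions, not values from your setup:

import cv2
import numpy as np

# Page corners detected in the photo (top-left, top-right, bottom-right, bottom-left).
# These particular values are made up for illustration.
src = np.float32([[110, 85], [1460, 130], [1500, 1950], [60, 1900]])

# Target rectangle: A4 aspect ratio (210 x 297 mm), rendered at roughly 4 px/mm.
width, height = 840, 1188
dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

# With exactly four point pairs, getPerspectiveTransform solves the homography
# directly; with more (possibly noisy) correspondences, use cv2.findHomography
# with RANSAC instead.
H = cv2.getPerspectiveTransform(src, dst)

img = cv2.imread('document_photo.jpg')          # hypothetical input file
warped = cv2.warpPerspective(img, H, (width, height))
cv2.imwrite('document_warped.jpg', warped)

Fixing the aspect ratio afterwards, as noted above, amounts to choosing width and height in the known paper ratio.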
I have a PDF with a page containing an image. I'm using a command-line tool to extract this image. The page in the PDF shows only part of the image, because the extracted image has a lot more "content" and it is slightly rotated. This happens, I assume, because some sort of cropping and/or rotation was applied to the image when the PDF was built.
Is there any way, using iText, to figure out the offset and rotation applied to the image? That would allow me to crop the extracted image in the same way and end up with something similar to what's visible on the PDF page.
I have an iPad app that displays PDF pages. I need to add annotations on the image (if one exists on the PDF page), for which I need the coordinates at which the image is situated on the PDF page. I am able to get the image data from the XObject, as well as the image width and height, but I also need the x and y coordinates of the image. Any idea how to obtain the coordinates of the image by parsing the PDF page?
I'm assuming you have seen this Apple developer page describing how to parse XObjects: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_pdf_scan/dq_pdf_scan.html
XObjects do not contain any position data, as they just describe image data that can be reused throughout the PDF.
From http://itext-general.2136553.n4.nabble.com/finding-the-position-of-xobject-in-an-existing-pdf-td2157152.html
"An XObject is a stream that can be reused in many different
other streams. For instance: you could have an image XObject
of a logo that appears on every page in the document.
Suppose that you have some pages in landscape and some in portrait.
Then the logo will have different coordinates on these different
pages. Therefore the position of the XObject IS NEVER STORED with
the XObject, the position can be found in the stream that refers
to the XObject.
Maybe your reaction is: "Oh right, then it's simple: I have to
look in the content stream of the pages using the XObject."
Yes and no. That's indeed where you should look, but it's not
simple. Because the actual position depends on the current
transformation matrix of the state at the moment the image is
added. It's quite some programming work to parse the content
stream and calculate the position of an XObject. "
I think you should find another option and avoid this altogether.
If you're still determined, you will have to use CGPDFScanner and track the transforms through the page content.
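To give an idea of the bookkeeping involved (the same logic applies whether you drive it from CGPDFScanner or any other content-stream parser), here is a minimal Python sketch. It assumes the page's content stream has already been tokenized into (operands, operator) pairs, for example with pikepdf's parse_content_stream, and it ignores nested Form XObjects and inline images:

def multiply(m, n):
    # Multiply two PDF matrices given as [a, b, c, d, e, f]
    # (the 3x3 form [a b 0; c d 0; e f 1], row-vector convention used by PDF).
    a, b, c, d, e, f = m
    A, B, C, D, E, F = n
    return [a * A + b * C,     a * B + b * D,
            c * A + d * C,     c * B + d * D,
            e * A + f * C + E, e * B + f * D + F]

def find_image_ctms(instructions, image_name):
    # Walk (operands, operator) pairs and collect the CTM in effect each time
    # the XObject named image_name (e.g. '/Im0') is painted with the Do operator.
    identity = [1, 0, 0, 1, 0, 0]
    ctm = identity
    saved = []                                # graphics-state stack for q / Q
    hits = []
    for operands, operator in instructions:
        op = str(operator)
        if op == 'q':                         # save graphics state
            saved.append(ctm)
        elif op == 'Q':                       # restore graphics state
            ctm = saved.pop() if saved else identity
        elif op == 'cm':                      # concatenate onto the CTM
            ctm = multiply([float(x) for x in operands], ctm)
        elif op == 'Do' and str(operands[0]) == image_name:
            hits.append(ctm)                  # matrix in effect when the image is drawn
    return hits

# Hypothetical usage with pikepdf (any tokenizer that yields the same pairs will do):
# import pikepdf
# pdf = pikepdf.open('document.pdf')
# print(find_image_ctms(pikepdf.parse_content_stream(pdf.pages[0]), '/Im0'))

The translation part (e, f) of each returned matrix is where the image's lower-left corner lands on the page; the other four numbers encode its size and rotation.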
I have a PDF document that is created by building NSImages whose sizes are in 72-dpi points; each has a single representation, which is measured in pixels. I then put these images into PDFPages with initWithImage, and then save the document.
When I open the document, I need the resolution of the original image. However, all of the rectangles that PDFPage gives me are measured in points, not pixels.
I know that the information is in there, and I suppose I can try to parse the PDF data myself, by going through the voyeur.app example... but that's a WHOLE lot of effort to do something that should be pretty normal...
Is there an easier way to do this?
Added:
I've tried two techniques:
1. Get the PDFRepresentation data from the page, and use it to make a new NSImage via initWithData. This works; however, the image has both its size and its pixel size in 72 dpi.
2. Draw the PDFPage into a new off-screen context, and then get a CGImage from that. The problem is that when I'm making the context, it appears that I need to know the size in pixels already, which defeats part of the purpose...
There are a few things you need to understand about PDF:
- The PDF coordinate system is in points (1/72 inch) by default.
- The PDF coordinate system is devoid of resolution. (This is a white lie: the resolution is effectively the limits of 32-bit floating point numbers.)
- Images in PDF do not inherently have any resolution attached to them (this is a white lie: images compressed with JPEG2000 still have resolution in their embedded metadata).
- An image in PDF is represented by an object that contains a series of samples that are stored using some compression filter.
- Image objects can be rendered on a page multiple times, at any size.
Since resolution is defined as the number of pixels (or samples) per unit distance, resolution only means something for a particular rendering of an image on a page. So if you are rendering a particular image to fill the page, then the resolution in dpi is
xdpi = image_width / (pageWidthInPoints / 72.0);
ydpi = image_height / (pageHeightInPoints / 72.0);
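For example (the numbers are assumed for illustration): a 2550 x 3300 sample image filling a US Letter page (612 x 792 points) works out to 300 dpi in each direction:

image_width, image_height = 2550, 3300         # samples; values assumed for illustration
page_width_pts, page_height_pts = 612, 792     # US Letter page in points
xdpi = image_width / (page_width_pts / 72.0)   # 2550 / 8.5  -> 300.0
ydpi = image_height / (page_height_pts / 72.0) # 3300 / 11.0 -> 300.0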
If the image is not being rendered to the full size of the page, a complete solution is very tricky. Adobe prescribes that images should be treated as being 1x1 and that you change the page transformation matrix to determine how to render them. This means that you would need the matrix at the point of rendering the image and you would need to push the points (0,0), (0,1), (1,0) through the matrix. The Euclidean distance between (0,0)' and (1,0)' will give you the width in points, and the Euclidean distance between (0,0)' and (0,1)' will give you the height in points.
So how do you get that matrix? Well, you need the content stream for the page and you need to write a PDF interpreter that can rip the content stream and keep track of changes to the CTM. When you reach your image, you extract the CTM for it.
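To make that concrete, here is a minimal Python sketch. It assumes you already have the image's sample dimensions and the [a, b, c, d, e, f] CTM that was in effect when the image was drawn (obtaining that matrix is exactly the content-stream ripping described above):

import math

def effective_dpi(image_width_px, image_height_px, ctm):
    # ctm is the matrix [a, b, c, d, e, f] in effect when the 1x1 image square is painted.
    a, b, c, d, e, f = ctm
    # (1,0) maps to (a, b) and (0,1) maps to (c, d); the translation (e, f) cancels out.
    width_pts = math.hypot(a, b)    # Euclidean distance between (0,0)' and (1,0)'
    height_pts = math.hypot(c, d)   # Euclidean distance between (0,0)' and (0,1)'
    return (image_width_px / (width_pts / 72.0),
            image_height_px / (height_pts / 72.0))

# The same 2550 x 3300 sample image drawn 612 x 792 points in size -> (300.0, 300.0)
print(effective_dpi(2550, 3300, [612, 0, 0, 792, 0, 0]))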
Doing that last step should take about an hour with a decent PDF toolkit, provided you are familiar with the toolkit. Writing that toolkit yourself is several person-years of work.