I am getting an image from a link on the internet. The image's actual size is 630x420, but when I print the image size (image.size.width and image.size.height) I get 129x86.
When I log the NSImage, I can see a Pixels field with the right size, which is different from the 'size' field. Why is that, and how do I get the pixel size?
I know that in UIImage, the size is the actual size of the image.
<NSImage 0x600000260fc0 Size={129.59999999999999, 86.399999999999991} Reps=(
"NSBitmapImageRep 0x6000000b0b00 Size={129.59999999999999, 86.399999999999991} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=630x420 Alpha=NO Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600000171c40"
The size of an image in Cocoa, just like the size of most everything else, is measured in points. These are idealized coordinates that are assumed to be 1/72nd of an inch.
You may think that images are measured in pixels, and some are, but they can also have "real world", "physical" dimensions. An image has physical dimensions if its metadata describes the relationship between its pixels and some real world unit of length. A common way to express that relationship is DPI, dots per inch.
The image data you obtain from the net has a DPI stored in its metadata. It seems that it has a DPI of 350. 630 pixels at 350 DPI is 1.8 inches. 420 pixels at 350 DPI is 1.2 inches. So, the image is 1.8 inches x 1.2 inches in size.
Cocoa then expresses this in points: 1.8 inches * 72 points per inch = 129.6 points, and 1.2 inches * 72 points per inch = 86.4 points. Those are the values you are getting from NSImage, within the ability of floating point numbers to represent them.
Cocoa is correct to consider this image as 129.6 points wide and 86.4 points high. If it didn't, then the image would draw at the wrong size on screen or when printing, given its DPI.
For the general case, you must also recognize that NSImage is a wrapper around image representations. An NSImage is just "a thing that can draw itself", not anything more specific than that. There may be multiple representations for any given image. Also, not all representations are bitmaps. Some may be vector graphics (e.g. PDF). Others may be procedural, using code to draw themselves. NSBitmapImageRep is the class which represents bitmap images.
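To make the whole answer concrete, here is a minimal sketch (assuming image is the NSImage from the question and that its first representation is the NSBitmapImageRep shown in the log):
// Point size = pixel size / DPI * 72. With the values from the question:
// 630 px / 350 dpi * 72 = 129.6 pt and 420 px / 350 dpi * 72 = 86.4 pt.
NSImageRep *rep = [[image representations] firstObject];
CGFloat dpiX = rep.pixelsWide * 72.0 / rep.size.width;    // ~350
CGFloat dpiY = rep.pixelsHigh * 72.0 / rep.size.height;   // ~350
NSLog(@"pixels: %ldx%ld  points: %@  dpi: %.0f x %.0f",
      (long)rep.pixelsWide, (long)rep.pixelsHigh,
      NSStringFromSize(rep.size), dpiX, dpiY);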
OK, so it seems that with NSImage, size is not the pixel size but the size in points.
To get the pixel size:
// Take the first representation (here assumed to be the bitmap rep)
// and read its dimensions in pixels rather than points.
NSImageRep *rep = [[image representations] objectAtIndex:0];
NSSize imageSize = NSMakeSize(rep.pixelsWide, rep.pixelsHigh);
Related
This is possibly a dumb question, but I'm not seeing how the math works here. I have an image, 414w x 584h. To get an image scaled down to half this size (i.e., half the initial width and height), using [UIImage imageWithCGImage:scale:orientation:], I have to set the scale factor to 6.0.
Why is it 6.0? How does this value relate to, say, a width scale of 414/207 = 2.0, or a height scale, same value, 584/292 = 2.0?
As I write this, I'm wondering... my app is running on an iPhone 6+. So could it have something to do with the 3x Retina display? I.e., normal scale factor of 2.0, which is dimensionless, but when applied to images on the 6+, to get to pixels, I have to do 3x this? Is this the logic?
And, I guess, while I'm here, is there a better way to resize an image, using available iOS facilities? E.g., some special affine transform, etc.? No particular concerns around memory or performance; the images are all no more than 1000 wide by 1500 or so high.
Thanks a lot!
You can find a great tutorial at http://nshipster.com/image-resizing/
The Documentation says:
The scale factor to use when interpreting the image data. Specifying a scale factor of 1.0 results in an image whose size matches the pixel-based dimensions of the image. Applying a different scale factor changes the size of the image as reported by the size property.
For Scale:
If you load an image from a file whose name includes the @2x modifier, the scale is set to 2.0. You can also specify an explicit scale factor when initializing an image from a Core Graphics image. All other images are assumed to have a scale factor of 1.0.
If you multiply the logical size of the image (stored in the size property) by the value in this property, you get the dimensions of the image in pixels.
For size:
In iOS 4.0 and later, this value reflects the logical size of the image and is measured in points. In iOS 3.x and earlier, this value always reflects the dimensions of the image measured in pixels.
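Putting the two quotes together: a minimal sketch, assuming original is a UIImage backed by a 3x asset (so original.scale is 3.0 and its CGImage holds 3 pixels per point), which would explain the 6.0:
// 1. Reinterpret the same pixels at a larger scale factor: the reported point
//    size is pixel size / scale, so doubling the scale halves the size property.
//    For a 3x image that means a scale of 3.0 * 2 = 6.0.
UIImage *half = [UIImage imageWithCGImage:original.CGImage
                                    scale:original.scale * 2.0
                              orientation:original.imageOrientation];

// 2. Actually resample the bitmap to half the point size.
CGSize target = CGSizeMake(original.size.width / 2.0, original.size.height / 2.0);
UIGraphicsBeginImageContextWithOptions(target, NO, 0.0);   // 0.0 = current screen scale
[original drawInRect:CGRectMake(0, 0, target.width, target.height)];
UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();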
I have an app that creates a large bitmap and later the user can add some labels. Everything works great as long as the base bitmap is the default 96x96 resolution. If I bump it up to 300 for instance, then the text applied with Graphics.DrawString is much too large - a petite size 8 or 10 font displays like it is 20.
On the one hand, it makes sense given the resolution increase, but on the other, you'd think the Fonts would scale. MeasureString returns a larger size when measured on a 300 vs 96 dpi bitmap, which wasn't really what I expected.
I've tried tricking it by creating a small bitmap of the appropriate size, printing to it, then pasting that onto the master image. But when it is pasted onto the high-res image, the pasted image gets enlarged.
The only other thing I can think of is to create a high res temp bitmap, print to it, then shrink it before pasting to the main image. That seems like a long way to go. Is there a compositing or overlay type setting that allows this? Are font sizes only true for a 96 dpi canvas?
Thanks for any hints/advice!
The size of a font is expressed in points, and a point is a physical unit: 1/72 inch. So if you draw into a bitmap that has 300 dots per inch, your font is going to use a lot more dots for the requested number of inches. An 8-point font, for example, is 8/72 inch tall: about 11 pixels at 96 dpi, but about 33 pixels at 300 dpi. When you display the result on a 300 dpi device, you get back the size in inches that you asked for.
Problem is, you are not displaying it on a 300 dpi device, you are displaying it on a 96 dpi device. So it looks much bigger.
Clearly you don't really want a 300 dpi bitmap. Or you want to draw it three times smaller. Take your pick.
If you want a consistent size in pixels, specify UnitPixel when creating your Font object.
Mac OS X 10.7.4 comes with new icons having image reps at 144 DPI. The bad thing is that when I load one of these icons into an NSImage, I only get reps with a size of 512px. I mean: I load a 1024px/144dpi icns file into an NSImage and then ask every image rep for its size... no rep has a size of 1024px; I only get sizes with a maximum of 512px (no matter whether a rep has a resolution of 72dpi rather than 144dpi: in fact the new icons in 10.7.4, like TextEdit or Automator, have reps at both resolutions for each size except 1024px, which exists in a single rep at 144dpi).
Why does NSImageRep seem not to understand its real resolution? Why do I get this issue only for 1024px/144dpi and not, for example, for 512px/144dpi?
If I read the TIFFRepresentation of the NSImage and I write it back to a file I get a correct 1024px/144dpi TIFF file, while if I write the same NSImage going through CGImageSource/CGImageDestination as kUTTypeTIFF I get a 1024px/72dpi file.
All of this is making me very confused.
Thanks a lot
The docs for -[NSImageRep size] say:
The size of the image representation, measured in points in the user coordinate space.
(Emphasis added.)
This is not a measurement in pixels. It's a measurement in points, so an image that is 1024 pixels across at 144 dpi measures 512 points, since a point is 1/72 inch.
You want to query the -pixelsWide and -pixelsHigh methods (if, indeed, you care about the pixel dimensions; often you should not).
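A minimal sketch of querying those (the file path is a placeholder):
NSImage *icon = [[NSImage alloc] initWithContentsOfFile:@"/path/to/icon.icns"];
for (NSImageRep *rep in icon.representations) {
    // size is in points; pixelsWide/pixelsHigh are the real bitmap dimensions,
    // so the 1024-pixel, 144 dpi rep reports a size of 512 points.
    NSLog(@"points: %@  pixels: %ldx%ld",
          NSStringFromSize(rep.size), (long)rep.pixelsWide, (long)rep.pixelsHigh);
}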
To retrieve pixel values from a CGImage I use CGContextDrawImage (as described here: How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?). The only difference is that I create a 128 bpp float-component context, not the usual 32 bpp context. The source CGImage is obtained from a CGImageSource created with the option kCGImageSourceShouldAllowFloat. That way I hoped to get access to float pixel values, color matched with my bitmap context's color space, and use them in further image processing. The problem is that the resulting image data seems to be losing dynamic range. It can be seen in shadows and plain blue sky areas: they become contoured and lack detail. Some investigation showed that the problem occurs in CGContextDrawImage (the source CGImage contains the full dynamic range; saving it through CGImageDestination proves it), and after CGContextDrawImage the context contents become posterized.
After some more investigation I found this:
http://lists.apple.com/archives/quartz-dev/2007/mar/msg00026.html
That led me to the conclusion that the problem is not in my code but in Core Graphics, or that this is intended behaviour.
My question is: what is the correct way to obtain floating-point data from an image using Core Graphics?
After some more investigation, the problem is narrowed down to the following: posterization occurs when an 8-bit image is drawn into a 128 bpp float context created with a linear color space (kCGColorSpaceGenericRGBLinear). If I draw the same image into a context created with kCGColorSpaceGenericRGB, then retrieve a CGImage from that context and draw that second image into the linear color space context, everything is OK.
Another solution (workaround?) is to use Core Image: create a CIImage from the source CGImage and draw it into a CIContext created with the corresponding kCGColorSpaceGenericRGBLinear CGContext. But that option is only available on OS X (not on iOS).
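Here is a rough sketch of the two-pass workaround described above (macOS; sourceImage is assumed to be the 8-bit CGImage obtained from the CGImageSource, the pixel format follows the documented 128 bpp float-component combinations, and releases are omitted for brevity):
size_t width  = CGImageGetWidth(sourceImage);
size_t height = CGImageGetHeight(sourceImage);
CGColorSpaceRef gammaSpace  = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGColorSpaceRef linearSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGBLinear);
CGBitmapInfo floatFormat = (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapFloatComponents;

// Pass 1: draw the 8-bit image into a float context that uses the
// gamma-encoded (non-linear) color space, which does not posterize.
CGContextRef gammaCtx = CGBitmapContextCreate(NULL, width, height, 32,
                                              width * 16, gammaSpace, floatFormat);
CGContextDrawImage(gammaCtx, CGRectMake(0, 0, width, height), sourceImage);
CGImageRef floatImage = CGBitmapContextCreateImage(gammaCtx);

// Pass 2: draw that float image into the linear color space context and
// read the float components back out.
CGContextRef linearCtx = CGBitmapContextCreate(NULL, width, height, 32,
                                               width * 16, linearSpace, floatFormat);
CGContextDrawImage(linearCtx, CGRectMake(0, 0, width, height), floatImage);
float *pixels = CGBitmapContextGetData(linearCtx);   // 4 floats (RGBA) per pixel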
I have a PDF document that is created by making NSImages whose size is in 72-dpi points; each has a single representation, which is measured in pixels. I then put these images into PDFPages with initWithImage, and then save the document.
When I open the document, I need the resolution of the original image. However, all of the rectangles that PDFPage gives me are measured in points, not pixels.
I know that the information is in there, and I suppose I can try to parse the PDF data myself, by going through the voyeur.app example... but that's a WHOLE lot of effort to do something that should be pretty normal...
Is there an easier way to do this?
Added:
I've tried two techniques:
1. Get the PDFRepresentation data from the page, and use it to make a new NSImage via initWithData. This works; however, the image has both size and pixel size in 72dpi.
2. Draw the PDFPage into a new off-screen context, and then get a CGImage from that. The problem is that when I'm making the context, it appears that I need to know the size in pixels already, which defeats part of the purpose...
There are a few things you need to understand about PDF:
- The PDF coordinate system is in points (1/72 inch) by default.
- The PDF coordinate system is devoid of resolution. (This is a white lie: the resolution is effectively the limits of 32-bit floating point numbers.)
- Images in PDF do not inherently have any resolution attached to them (this is a white lie: images compressed with JPEG2000 still have resolution in their embedded metadata).
- An image in PDF is represented by an object that contains a series of samples that are stored using some compression filter.
- Image objects can be rendered on a page multiple times at any size.
Since resolution is defined as the number of pixels (or samples) per unit distance, resolution only means something for a particular rendering of an image on a page. So if you are rendering a particular image to fill the page, then the resolution in dpi is
xdpi = image_width / (pageWidthInPoints / 72.0);
ydpi = image_height / (pageHeightInPoints / 72.0);
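As a rough sketch of that calculation with PDFKit (names such as document, imagePixelWidth, and imagePixelHeight are placeholders, and it assumes the image fills the page's media box):
PDFPage *page = [document pageAtIndex:0];                  // your PDFDocument
NSRect box = [page boundsForBox:kPDFDisplayBoxMediaBox];   // page size in points
double xdpi = imagePixelWidth  / (NSWidth(box)  / 72.0);   // pixel sizes taken from
double ydpi = imagePixelHeight / (NSHeight(box) / 72.0);   // the original NSImageRep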
If the image is not being rendered to the full size of the page, a complete solution is very tricky. Adobe prescribes that images should be treated as being 1x1 and that you change the page transformation matrix to determine how to render them. This means that you would need the matrix at the point of rendering the image, and you would need to push the points (0,0), (0,1), (1,0) through the matrix. The Euclidean distance between (0,0)' and (1,0)' will give you the width in points, and the Euclidean distance between (0,0)' and (0,1)' will give you the height in points.
So how do you get that matrix? Well, you need the content stream for the page and you need to write a PDF interpreter that can rip the content stream and keep track of changes to the CTM. When you reach your image, you extract the CTM for it.
Doing that last step should take about an hour with a decent PDF toolkit, provided you are familiar with the toolkit. Writing that toolkit is several person-years of work.
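For the geometry step itself, here is a minimal sketch of pushing the unit vectors through a CTM your interpreter has already extracted (the function and its arguments are illustrative, not part of any toolkit):
#include <math.h>

// Given the CTM in effect when the image is painted, measure the rendered
// width/height in points as Euclidean distances of the mapped unit vectors,
// then derive dpi from the image's pixel dimensions.
static void LogImageDPI(CGAffineTransform ctm, size_t pixelsWide, size_t pixelsHigh) {
    CGPoint o  = CGPointApplyAffineTransform(CGPointMake(0, 0), ctm);
    CGPoint ux = CGPointApplyAffineTransform(CGPointMake(1, 0), ctm);
    CGPoint uy = CGPointApplyAffineTransform(CGPointMake(0, 1), ctm);
    double widthInPoints  = hypot(ux.x - o.x, ux.y - o.y);
    double heightInPoints = hypot(uy.x - o.x, uy.y - o.y);
    double xdpi = pixelsWide / (widthInPoints  / 72.0);
    double ydpi = pixelsHigh / (heightInPoints / 72.0);
    NSLog(@"%.1f x %.1f dpi", xdpi, ydpi);
}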