Scaling an image from the iPhone - iOS App - Objective-C

I let the user select a photo from the iPhone library, and I grab the UIImage.
I output the size of the image and it says 320x480, but it doesn't seem to be that size, because when I draw the image on the screen using CGRectMake(0,0,320,480), it only shows the upper-left portion of the image. Aren't the images much bigger than 320x480 because of the high resolution?
I'd like to scale the image down to fit within 320x480. If it is already smaller than 320x480, it should not be rescaled at all. If the width is greater than 320 or the height is greater than 480, it should scale so that it comes as close to 320x480 as possible while keeping the proper proportion of width to height. So, for instance, scaling to 320x420 is fine, as is 280x480.
How can I do this in Objective-C?

Setting the image view's content mode like this:
myView.contentMode = UIViewContentModeScaleAspectFit;
will preserve the aspect ratio.
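If you need an actual resized UIImage rather than just an aspect-fit image view, here is a minimal sketch of the downscaling the question asks for; the function name ScaledImageToFit and the 320x480 target are my own, not part of the answer above:
// Scale an image down to fit within maxSize, preserving aspect ratio.
UIImage *ScaledImageToFit(UIImage *image, CGSize maxSize)
{
    CGSize size = image.size;
    // Never upscale: if the image already fits, return it unchanged.
    if (size.width <= maxSize.width && size.height <= maxSize.height) {
        return image;
    }
    // Use the smaller ratio so both dimensions fit while keeping proportions.
    CGFloat ratio = MIN(maxSize.width / size.width, maxSize.height / size.height);
    CGSize newSize = CGSizeMake(floor(size.width * ratio), floor(size.height * ratio));

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}
Calling ScaledImageToFit(pickedImage, CGSizeMake(320, 480)) would give results like the 320x420 or 280x480 examples mentioned in the question.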

Related

Is there an Android equivalent of iOS' UIViewContentModeScaleAspectFit?

I've got an image loaded from Parse.com, and on xhdpi and xxhdpi devices the displayed image is very tiny. I've tried playing with the XML layout. The most I can do is stretch a background image, which acts as a border for the downloaded image, to the full width of the screen. The height of the image never exceeds the physical size of the image stored on Parse.
I'm trying to get the image to scale to fit the width of any device it's on, while maintaining the aspect ratio.
I'm using a ParseImageView, which is a subclass of ImageView. I accomplished this on iOS by resizing the image to fill the width and then setting imageView.contentMode = UIViewContentModeScaleAspectFit; Is there any equivalent to this for Android?
You can achieve this with the Picasso library: load the image from the web as a Bitmap, read its width and height, and scale the ImageView to match that aspect ratio. A related post is below:
getting image width and height with picasso library

Size of Tabbar Image

Currently I am designing the UITabBar of my app. I created a Photoshop layout for the tab bar; it is 84px high and 640px wide. Is it the right way to create one image with the size 640x84 and one with the size 320x42, and then name the larger image @2x.png?
I am struggling at this point, because when I log the width of the UITabBar it says 320.00, even though I am using the iPhone 3.5-inch Retina simulator.
Any tips for realizing the tab bar?
Yes. You should have two images. One for normal displays and one for retina.
iOS works in points, not pixels, so the logged width will always be 320.
On a Retina display one point is 2x2 pixels; on a non-Retina display it is 1x1.
By the way, I think the tab bar images should be 320x49 for normal and 640x98 for Retina.
The Retina image should have the same name as the normal one, with @2x appended before the extension.
Example:
normal: image.png
retina: image@2x.png
You've confused points with pixels. Points are resolution independent. You can normally check your scale factor by calling contentScaleFactor on your UIView.
It should say 2.0 for retina, and 1.0 for non retina.
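As a quick illustration (my own snippet, not from the answer above), you can log the scale factor to see which variant of the image will be picked up:
CGFloat scale = [UIScreen mainScreen].scale;   // or self.view.contentScaleFactor
NSLog(@"scale factor: %.1f", scale);           // 2.0 on Retina, 1.0 on non-Retina
// A 320-point-wide tab bar is therefore 640 pixels wide on a Retina device,
// which is why the larger @2x asset is needed.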

drawInRect loses resolution when drawn to smaller image?

When I draw a large image (say 1000x1000 pixels) using the drawInRect: method at a smaller size, say 200x200 pixels, and then use drawInRect: again to draw the result back at its original size (1000x1000 pixels), is the resolution affected? Does the resolution decrease by drawing a large image into a small one and then that same image back to a large one?
Hopefully I've gotten your question correct in my head.
If you take an image bigger than 200x200 pixels and draw it into a 200x200 pixel rectangle, it'll get scaled down and lose most of its detail. If you then take the resultant image and try to draw it in a bigger rectangle it'll just get scaled up. So, to answer your question, yes. It'll look blurry as hell. It's no different than resizing an image down in a graphics editor and then blowing it back up to its original size. The loss of detail is permanent; there's no way to know what was lost in the transition down.
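To make the round trip concrete, here is a rough sketch (assuming a 1000x1000 UIImage named original, which is my own placeholder):
// Downscale: most of the detail is discarded here.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), NO, 1.0);
[original drawInRect:CGRectMake(0, 0, 200, 200)];
UIImage *small = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Upscale again: the lost detail cannot be recovered, so this looks blurry.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(1000, 1000), NO, 1.0);
[small drawInRect:CGRectMake(0, 0, 1000, 1000)];
UIImage *blownUp = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();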

How do I override the Points to Pixels iOS specificity and have my image drawn at the right size?

I have a 64px by 64px redSquare.png file at a 326ppi resolution. I'm drawing it at the top left corner of my View Controller's window as follows:
myImage = [UIImage imageNamed:@"redSquare.png"];
myImageView = [[UIImageView alloc] initWithImage:myImage];
[self.view addSubview:myImageView];
Given that the iPhone 4S has a screen resolution of 960x640 (326ppi), there should be enough room for 9 more squares to fit next to the first one. However, there's only room for 4 more, i.e. the square is drawn larger than it should be given my measurements.
// even tried resizing UIImageView in case it was
// resizing my image to a different size, by adding
// this next line, but no success there either :
myImageView.frame = CGRectMake(0, 0, 64, 64);
I believe it has to do with the way the device is "translating" my pixels. I read about the distinction between Points Versus Pixels in Apple's documentation but it doesn't mention how one can work around this problem. I know I'm measuring in pixels. Should I be measuring in points? And how could I do that? How exactly am I to resize my image so that it can hold 9 more same-sized squares next to it (i.e. on the same horizontal..) ?
Thank you
To display an image at full resolution on a Retina display, it needs to have @2x appended to the end of its name. In practice, this means you should save the image you're currently using as redSquare@2x.png and a version of that image in 32x32 pixels as redSquare.png.
Once you have done this, there is no need to change your code. The appropriate image will be displayed depending on the device's capabilities. This will allow your app to render correctly on both Retina and non-Retina devices.
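If renaming assets is not an option, one alternative (my own suggestion, not part of the answer above) is to wrap the bitmap with an explicit scale so that 64 pixels are treated as 32 points:
UIImage *raw = [UIImage imageNamed:@"redSquare.png"];
UIImage *retina = [UIImage imageWithCGImage:raw.CGImage
                                      scale:2.0
                                orientation:raw.imageOrientation];
// Displayed in an image view, retina now occupies 32x32 points,
// so ten 64-pixel squares fit across a 640-pixel Retina screen.
myImageView = [[UIImageView alloc] initWithImage:retina];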

iOS objective C converting coordinates from Absolute image location to my views coordinate system

I am using CIDetector to find faces in a picture.
The coordinates of the faces it returns are absolute coordinates in the image file (the image dimensions are obviously much larger than the screen size).
I tried to use the convertRect:toView: method, but the image itself is not a UIView, so it doesn't work; I also have a few views embedded inside each other, with the image finally being shown in the innermost one.
I want to convert the bounds of the found faces in the image to the exact location of the face being shown on the screen in the embedded image.
How can this be accomplished?
Thanks!
(The image shown on the phone is scaled to fit the screen with aspect fit.)
The coordinates from CIDetector (Core Image) are flipped relative to UIKit coordinates. There are a bunch of tutorials out there on iOS face detection, but most of them are either incomplete or mess up the coordinates. Here's one that is correct: http://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
One thing to note: the tutorial uses a small image so the resulting coordinates do not have to be scaled to the on-screen (UIImageView) representation of the image. Assuming you use a photo taken with the iPad camera, you will have to scale the coordinates by the amount the source image is scaled (unless you reduce its size before running the face detection routine -maybe not a bad idea). You may also need to rotate the image for the correct orientation.
There is a routine in one of the answers here for rotating/scaling: UIImagePickerController camera preview is portrait in landscape app
And this answer has a good routine for finding the scale of an image when presented by a UIImageView using 'aspect fit': How to get the size of a scaled UIImage in UIImageView?
You will need to use the scale in order to map the CIDetector coordinates from the full size image to the scaled down image shown in a UIImageView.
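Putting those pieces together, a minimal sketch of the mapping might look like the following. It assumes the image view uses UIViewContentModeScaleAspectFit and that image.size matches the pixel dimensions the detector ran on (i.e. an image scale of 1.0); the function name FaceRectInImageView is my own:
// Convert a CIDetector face rect (bottom-left origin, image pixels)
// into the coordinate system of an aspect-fit UIImageView.
CGRect FaceRectInImageView(CGRect faceBounds, UIImage *image, UIImageView *imageView)
{
    CGSize imageSize = image.size;

    // 1. Flip from Core Image (bottom-left origin) to UIKit (top-left origin).
    CGRect flipped = faceBounds;
    flipped.origin.y = imageSize.height - faceBounds.origin.y - faceBounds.size.height;

    // 2. Aspect-fit scale and the letterbox offsets inside the image view.
    CGSize viewSize = imageView.bounds.size;
    CGFloat scale = MIN(viewSize.width / imageSize.width,
                        viewSize.height / imageSize.height);
    CGFloat offsetX = (viewSize.width  - imageSize.width  * scale) / 2.0;
    CGFloat offsetY = (viewSize.height - imageSize.height * scale) / 2.0;

    // 3. Map into the image view's coordinate system.
    return CGRectMake(flipped.origin.x * scale + offsetX,
                      flipped.origin.y * scale + offsetY,
                      flipped.size.width  * scale,
                      flipped.size.height * scale);
}
Because the image view is nested inside other views, you can then pass the result through [imageView convertRect:faceRect toView:self.view] to get the rect in the outer view's coordinates.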