UIImageView, CGImage, and Retina Art - objective-c

I've got a few CALayers in my interface, and I'm drawing images directly to the layers as opposed to imageViews.
Here's a snippet:
UIImage *anImage = [UIImage imageNamed:@"anyImage"];
CGImageRef anImageRef = [anImage CGImage];
CALayer *aLayer = [CALayer layer];
CGFloat anImageWidth = CGImageGetWidth(anImageRef);
CGFloat anImageHeight = CGImageGetHeight(anImageRef);
CGRect layerFrame = CGRectMake(0, 0, anImageWidth, anImageHeight);
[aLayer setFrame:layerFrame];
[aLayer setContents:(__bridge id)anImageRef];
[parentLayer addSublayer:aLayer];
So my problem is that I'm getting inconsistent results with the size of the image. On a retina device, the image that appears is double the anticipated size (i.e., it matches the pixel size of the @2x image). On the simulator in retina mode, the image drawn to the layer is the anticipated size (where points match the pixels of the non-retina image).
Rather than statically set the size, or halve the size (which corrects the issue on the device but breaks compatibility with non-retina displays), what is a good solution or workaround to this scenario? Why is it happening?

UIImage has a scale property, which will be 2.0 for retina display images. See the docs for more info.

CGImageGetWidth() and CGImageGetHeight() return the number of pixels whereas you need the image size in points. Use -[UIImage size] instead.
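Putting both answers together, a minimal sketch (reusing anImage and aLayer from the question; the key points are the point-based frame and the contentsScale assignment) might look like this:
UIImage *anImage = [UIImage imageNamed:@"anyImage"];
CALayer *aLayer = [CALayer layer];
// size is in points, so the frame is correct on both retina and non-retina displays
aLayer.frame = CGRectMake(0, 0, anImage.size.width, anImage.size.height);
aLayer.contents = (__bridge id)anImage.CGImage;
// match the layer's rasterization scale to the image's scale (2.0 on retina)
aLayer.contentsScale = anImage.scale;
[parentLayer addSublayer:aLayer];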

Related

Image rotating left when taking picture with iPad

In the application I'm working on, you can take a picture with the iPad camera. After that, you can draw shapes on that image using Core Graphics.
At first the image was upside down and mirrored. I resolved that with this:
CGContextTranslateCTM(myContext, 0, backgroundImage.size.height);
CGContextScaleCTM(myContext, 1.0, -1.0);
But now when you take the image in portrait mode, the imported image is rotated to the left (so it's presented horizontally). I rotated the image back with this code:
UIImage *tempImage = [[UIImage alloc] initWithCGImage:imagetest.CGImage];
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, 0, tempImage.size.height);
transform = CGAffineTransformRotate(transform, -M_PI_2);
CGContextRef ctx = CGBitmapContextCreate(NULL, tempImage.size.width, tempImage.size.height,
                                         CGImageGetBitsPerComponent(tempImage.CGImage), 0,
                                         CGImageGetColorSpace(tempImage.CGImage),
                                         CGImageGetBitmapInfo(tempImage.CGImage));
CGContextConcatCTM(ctx, transform);
CGContextDrawImage(ctx, CGRectMake(0, 0, tempImage.size.height, tempImage.size.width), tempImage.CGImage);
CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
UIImage *img = [UIImage imageWithCGImage:cgimg];
CGContextRelease(ctx);
CGImageRelease(cgimg);
Now the image is shown in the right way (portrait), but I can't draw properly on it, maybe because the width and height are reversed.
From what I've read, the image carries orientation metadata (an EXIF tag) that Core Graphics does not read.
Do you know a better way to rotate the image? Or any solution that would keep the image from rotating when taking a photo in portrait mode?
Yes, this is a known issue, because the default orientation of the device camera is landscape. If you take a picture in portrait mode and see the preview in the Photo Gallery it will be fine, but as soon as you use it in your app it will be rotated 90 degrees. To fix that issue, I have written an answer in my recent post here.
If you tell the image to draw itself, it will respect its own orientation. No need to flip it (it does that itself) and no need to rotate it.
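As a rough sketch of that approach (reusing imagetest from the question, which still carries its original orientation; this assumes you want an upright copy at the image's own scale):
// begin a context at the image's point size (size already accounts for orientation)
UIGraphicsBeginImageContextWithOptions(imagetest.size, YES, imagetest.scale);
// drawInRect: honors the image's imageOrientation, so no manual flip or rotation is needed
[imagetest drawInRect:CGRectMake(0, 0, imagetest.size.width, imagetest.size.height)];
UIImage *uprightImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();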

renderInContext on retina and non-retina devices

I am creating a PDF by taking a screenshot of a UIView. This currently works great on the iPad 3 with its retina display, but when testing on other devices with lower-resolution screens I am having problems with text resolution.
Here is my code:
//start a new page with default size and info
//this can be changed later to include extra info.
UIGraphicsBeginPDFPage();
//render the view's layer into an image context
//the last option specifies the scale. If 0, it uses the device's scale.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 2.0);
CGContextRef context = UIGraphicsGetCurrentContext();
[view.layer renderInContext:context];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//render the screenshot into the pdf page CGContext
[screenShot drawInRect:view.bounds];
//close the pdf context (saves the pdf to the NSData object)
UIGraphicsEndPDFContext();
I have also tried setting the UIGraphicsBeginImageContextWithOptions scale parameter to 2.0, but this makes no difference. How can I force a view on an iPad 2 to render at 2x resolution?
(Screenshots of the expected and actual output omitted.)
I ended up fixing this by recursively setting the contentScaleFactor property of the parent view and its subviews to 2.0.
The UIImage was rendering at the correct resolution, but the layer wasn't when renderInContext: was called.
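A minimal sketch of that recursive fix (the helper name is my own, not a UIKit API; it assumes you call it with the parent view and 2.0 before rendering):
// recursively raise the rasterization scale of a view and all of its subviews
- (void)applyContentScaleFactor:(CGFloat)scale toViewHierarchy:(UIView *)view
{
    view.contentScaleFactor = scale;
    for (UIView *subview in view.subviews) {
        [self applyContentScaleFactor:scale toViewHierarchy:subview];
    }
}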

Problems scaling a UIImage to fit a UIButton

I have a set of buttons and some differently sized images. I want to scale each image so that it fits its button at the correct aspect ratio. Once I've scaled the image, I set the button's image property to the scaled version.
UIImage *scaledImage = [image scaledForButton:pickerButton];
[pickerButton setImage:scaledImage forState:UIControlStateNormal];
My scaledForButton: method is defined in a category on UIImage. It looks like this:
- (UIImage *)scaledForButton:(UIButton *)button
{
    // Check which dimension (width or height) to pay respect to and
    // calculate the scale factor
    CGFloat imageRatio = self.size.width / self.size.height;
    CGFloat buttonRatio = button.frame.size.width / button.frame.size.height;
    CGFloat scaleFactor = (imageRatio > buttonRatio
                           ? self.size.width / button.frame.size.width
                           : self.size.height / button.frame.size.height);

    // Create image using scale factor
    UIImage *scaledImage = [UIImage imageWithCGImage:[self CGImage]
                                               scale:scaleFactor
                                         orientation:UIImageOrientationUp];
    return scaledImage;
}
When I run this on an iPad 2 it works fine and the images are scaled correctly. However, if I run it on a retina display (both in the simulator and on a device) the image does not scale correctly and is squished into the button.
Any ideas why this would happen only on retina? I've been scratching my head for a couple of days but can't figure it out. Both run the same iOS version, and I've checked the scale and ratio outputs, which are always the same regardless of device. Many thanks.
Found the answer here: UIButton doesn't listen to content mode setting?
If you're setting the contentMode, it seems you have to set it on the imageView property of the UIButton, not on the UIButton itself; then it works properly.
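In code, that fix (using pickerButton from the question) is essentially one line; you may also need the fill alignments so the button's image view actually occupies the whole button:
pickerButton.imageView.contentMode = UIViewContentModeScaleAspectFit;
// optionally let the image view fill the button before the content mode applies
pickerButton.contentHorizontalAlignment = UIControlContentHorizontalAlignmentFill;
pickerButton.contentVerticalAlignment = UIControlContentVerticalAlignmentFill;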
The problem on the iPad 3 was, as Herman suggested, that the CGImage was still much larger than the UIButton, so even though it was scaled down it still had to be resized to fit the button.

Downloading images from a web server for the retina display iOS

I am downloading images from a webserver for display in a table view in my iOS application using the following code:
NSURL *url = [NSURL URLWithString:[imageArray objectAtIndex:indexPath.row]];
UIImage *myImage = [UIImage imageWithData:[NSData dataWithContentsOfURL:url]];
cell.imageView.image = myImage;
The image view is a 60x60-point placeholder, which is 120x120 pixels on the retina display. I am going to assume the user has an iPhone 4. However, if I size the image to 120x120 it does not correct the issue; it just becomes too big for the image view. If I size the image to 60x60 on the web server, the image fits fine but it's a little fuzzy. Does anyone know how to fix this issue?
Thanks!
Let's first agree that your UIImageView is 60x60 points, meaning 60x60 pixels for a standard display and 120x120 pixels for a retina display.
For a UIImageView at 60x60 points, the image should be 60x60 pixels at scale 1.0 for a standard display and 120x120 pixels at scale 2.0 for a retina display. This means that your UIImage should always have a size of 60x60 points, but should have a different scale depending on the display resolution.
When getting the image data from your server, you should first check the scale of the device's screen and then request the appropriate image size (in pixels), like so:
if ([UIScreen mainScreen].scale == 1.0) {
    // Build URL for 60x60 pixels image
}
else {
    // Build URL for 120x120 pixels image
}
Then you should put the image data in a UIImage of size 60x60 points, at the appropriate scale:
NSData *imageData = [NSData dataWithContentsOfURL:url];
CFDataRef cfdata = CFDataCreate(NULL, [imageData bytes], [imageData length]);
CGDataProviderRef imageDataProvider = CGDataProviderCreateWithCFData(cfdata);
CGImageRef imageRef = CGImageCreateWithJPEGDataProvider(imageDataProvider, NULL, true, kCGRenderingIntentDefault);
UIImage *image = [[UIImage alloc] initWithCGImage:imageRef
                                            scale:[UIScreen mainScreen].scale
                                      orientation:UIImageOrientationUp];
CFRelease(imageRef);
CFRelease(imageDataProvider);
CFRelease(cfdata);
Hope this helps.
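As a side note, if you can require iOS 6 or later, +[UIImage imageWithData:scale:] collapses the Core Graphics steps above into one call (same url as before):
NSData *imageData = [NSData dataWithContentsOfURL:url];
// builds a 60x60-point image from 120x120-pixel data on a retina device
UIImage *image = [UIImage imageWithData:imageData scale:[UIScreen mainScreen].scale];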
If your downloaded image is 2x the size of your UIImageView, it will look fine on the iPhone 4's retina display.
See also: http://mobile.tutsplus.com/tutorials/iphone/preparing-your-iphone-app-for-higher-resolutions/
Images downloaded from the web are not formatted for retina display the way images bundled with your app (using the "@2x" suffix) are.
You can get and set the scale of any UIView through its contentScaleFactor property, which helps you better display content from the web when you know it's intended for retina display devices.
Well, the answer is as simple as it can be:
I would recommend always downloading the double-density images from the server (users with non-retina displays are now very few) and setting them on the image view.
What you didn't do is configure the image view to automatically scale its content.
You can do this either in IB, by selecting the image view and setting its content mode (Mode) to Scale To Fill, or in code:
imageView.contentMode = UIViewContentModeScaleToFill;

Crop image from certain portion of screen in iPhone programmatically

CGSize contextSize = CGSizeMake(320, 400);
UIGraphicsBeginImageContext(contextSize);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setSaveImage:savedImg];
I'm using this to extract part of the image from the main screen.
With UIGraphicsBeginImageContext I can only pass a size. Is there any way to use a CGRect, or some other approach, to extract the image from a specific portion of the screen, i.e. (x, y, 320, 400) or something like that?
Hope this helps:
// Create new image context (retina safe)
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
// Create rect for the full image, shifted so the region at (x, y) lands at the origin
CGRect rect = CGRectMake(-x, -y, existingImage.size.width, existingImage.size.height);
// Draw the image; only the desired region falls inside the context
[existingImage drawInRect:rect];
// Saving the image, ending image context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
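For the question's concrete case, usage might look like this (fullScreenshot is assumed to be a capture of the entire screen, and the y offset of 80 is purely illustrative):
CGSize size = CGSizeMake(320, 400);
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
// a negative origin slides the wanted region of the screenshot into the context
[fullScreenshot drawInRect:CGRectMake(0, -80, fullScreenshot.size.width, fullScreenshot.size.height)];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();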
This question is really a duplicate of several other questions, including this one: How to crop the UIImage?, but since it took me a while to find a solution, I will cross-post it again.
In my quest for a solution that I could more easily understand (and that was written in Swift), I arrived at this:
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import AVFoundation
import ImageIO

class Image {
    class func crop(image: UIImage, crop source: CGRect, aspect: CGSize, outputExtent: CGSize) -> UIImage {
        let sourceRect = AVMakeRectWithAspectRatioInsideRect(aspect, source)
        let targetRect = AVMakeRectWithAspectRatioInsideRect(aspect, CGRect(origin: CGPointZero, size: outputExtent))

        let opaque = true, deviceScale: CGFloat = 0.0 // 0.0 means use the scale of the device's main screen
        UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)

        // scale factor is output / input
        let scale = max(
            targetRect.size.width / sourceRect.size.width,
            targetRect.size.height / sourceRect.size.height)

        // offset by the scaled crop origin and draw the whole image at the scaled size
        let drawRect = CGRect(
            x: -sourceRect.origin.x * scale,
            y: -sourceRect.origin.y * scale,
            width: image.size.width * scale,
            height: image.size.height * scale)
        image.drawInRect(drawRect)

        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage
    }
}
There are a couple of separate concerns here that I found confusing: cropping and resizing. Cropping is handled by the origin of the rect you pass to drawInRect, and scaling is handled by its size portion. In my case, I needed to relate the size of the crop rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and this needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think this code can also handle cropping / zooming if you explicitly pass the aforementioned scale factor as the scale parameter of the image context. By default, UIKit applies a multiplier based on the screen resolution.
Finally, it should be noted that this UIKit approach is higher level than the Core Graphics / Quartz and Core Image approaches, and it seems to handle image orientation issues. It is also worth mentioning that it is pretty fast, second only to ImageIO, according to this post: http://nshipster.com/image-resizing/