Effective way to generate a thumbnail from a UIImage - Objective-C

I have a UICollectionView and each item shows a square image.
Each image is a big file (> 3 MB), so every time the app jumps into this view there is a 2-3 second delay.
I tried creating a thumbnail from each big file and applying it to the collection item, but it doesn't seem to save any time.
Is there a more effective way?
Below is the method I use to create the thumbnail:
-(UIImage*)resizedImageToSize:(CGSize)dstSize{
CGImageRef imgRef = self.CGImage;
// the below values are regardless of orientation : for UIImages from Camera, width>height (landscape)
CGSize srcSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef)); // not equivalent to self.size (which is dependant on the imageOrientation)!
/* Don't resize if we already meet the required destination size. */
if (CGSizeEqualToSize(srcSize, dstSize)) {
return self;
}
CGFloat scaleRatio = dstSize.width / srcSize.width;
UIImageOrientation orient = self.imageOrientation;
CGAffineTransform transform = CGAffineTransformIdentity;
switch(orient) {
case UIImageOrientationUp: //EXIF = 1
transform = CGAffineTransformIdentity;
break;
case UIImageOrientationUpMirrored: //EXIF = 2
transform = CGAffineTransformMakeTranslation(srcSize.width, 0.0);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
break;
case UIImageOrientationDown: //EXIF = 3
transform = CGAffineTransformMakeTranslation(srcSize.width, srcSize.height);
transform = CGAffineTransformRotate(transform, M_PI);
break;
case UIImageOrientationDownMirrored: //EXIF = 4
transform = CGAffineTransformMakeTranslation(0.0, srcSize.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
break;
case UIImageOrientationLeftMirrored: //EXIF = 5
dstSize = CGSizeMake(dstSize.height, dstSize.width);
transform = CGAffineTransformMakeTranslation(srcSize.height, srcSize.width);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI_2);
break;
case UIImageOrientationLeft: //EXIF = 6
dstSize = CGSizeMake(dstSize.height, dstSize.width);
transform = CGAffineTransformMakeTranslation(0.0, srcSize.width);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI_2);
break;
case UIImageOrientationRightMirrored: //EXIF = 7
dstSize = CGSizeMake(dstSize.height, dstSize.width);
transform = CGAffineTransformMakeScale(-1.0, 1.0);
transform = CGAffineTransformRotate(transform, M_PI_2);
break;
case UIImageOrientationRight: //EXIF = 8
dstSize = CGSizeMake(dstSize.height, dstSize.width);
transform = CGAffineTransformMakeTranslation(srcSize.height, 0.0);
transform = CGAffineTransformRotate(transform, M_PI_2);
break;
default:
[NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];
}
/////////////////////////////////////////////////////////////////////////////
// The actual resize: draw the image on a new context, applying a transform matrix
UIGraphicsBeginImageContextWithOptions(dstSize, NO, self.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
if (!context) {
return nil;
}
if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
CGContextScaleCTM(context, -scaleRatio, scaleRatio);
CGContextTranslateCTM(context, -srcSize.height, 0);
} else {
CGContextScaleCTM(context, scaleRatio, -scaleRatio);
CGContextTranslateCTM(context, 0, -srcSize.height);
}
CGContextConcatCTM(context, transform);
// we use srcSize (and not dstSize) as the size to specify is in user space (and we use the CTM to apply a scaleRatio)
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, srcSize.width, srcSize.height), imgRef);
UIImage* resizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resizedImage;
}

There's a built-in way to create thumbnails from compressed image files: use ImageIO directly. It is lightning fast compared to rendering the image into a smaller bitmap context.
Here's a method that does the job:
+ (UIImage *)thumbnailWithContentsOfURL:(NSURL *)URL maxPixelSize:(CGFloat)maxPixelSize
{
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)URL, NULL);
NSAssert(imageSource != NULL, @"cannot create image source");
NSDictionary *imageOptions = @{
(NSString const *)kCGImageSourceCreateThumbnailFromImageIfAbsent : (NSNumber const *)kCFBooleanTrue,
(NSString const *)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize),
(NSString const *)kCGImageSourceCreateThumbnailWithTransform : (NSNumber const *)kCFBooleanTrue
};
CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, (__bridge CFDictionaryRef)imageOptions);
CFRelease(imageSource);
UIImage *result = [[UIImage alloc] initWithCGImage:thumbnail];
CGImageRelease(thumbnail);
return result;
}
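
For the UICollectionView case in the question, here is a minimal sketch (not from the original answer) of generating those thumbnails off the main thread and caching them. It assumes the class method above is added to a UIImage category; MyCell, imageURLs and thumbnailCache are hypothetical names:
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
    MyCell *cell = (MyCell *)[collectionView dequeueReusableCellWithReuseIdentifier:@"Cell" forIndexPath:indexPath];
    NSURL *URL = self.imageURLs[indexPath.item];
    UIImage *cachedThumbnail = [self.thumbnailCache objectForKey:URL];
    if (cachedThumbnail) {
        cell.imageView.image = cachedThumbnail;
        return cell;
    }
    cell.imageView.image = nil; // placeholder while the thumbnail is generated
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // ImageIO decodes the scaled-down thumbnail off the main thread, so scrolling stays smooth.
        UIImage *thumbnail = [UIImage thumbnailWithContentsOfURL:URL maxPixelSize:200.0];
        if (!thumbnail) {
            return;
        }
        [self.thumbnailCache setObject:thumbnail forKey:URL];
        dispatch_async(dispatch_get_main_queue(), ^{
            // The cell may have been reused; only touch it if it is still showing this item.
            MyCell *visibleCell = (MyCell *)[collectionView cellForItemAtIndexPath:indexPath];
            visibleCell.imageView.image = thumbnail;
        });
    });
    return cell;
}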

Related

UIImageView: set image orientation after setting from camera roll

The image orientation changes after setting it from the camera roll into a UIImage.
UIImagePickerController *controller = [[UIImagePickerController alloc] init];
controller.sourceType = UIImagePickerControllerSourceTypeCamera;
controller.allowsEditing = NO;
controller.mediaTypes = [UIImagePickerController availableMediaTypesForSourceType: UIImagePickerControllerSourceTypeCamera];
controller.delegate = self;
[self.navigationController presentViewController: controller animated: YES completion: nil];
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info{
[self.navigationController dismissViewControllerAnimated: YES completion: nil];
UIImage *image = [info valueForKey: UIImagePickerControllerOriginalImage];
NSData *imageData = UIImageJPEGRepresentation(image, 0.1);
headerCell.userProfileImageView.image = [UIImage imageWithData:imageData];
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsPath = [paths objectAtIndex:0]; //Get the docs directory
NSString *filePath = [documentsPath stringByAppendingPathComponent:@"profileImage.png"]; //Add the file name
NSData *pngData = UIImagePNGRepresentation(image);
[pngData writeToFile:filePath atomically:YES];
}
From the camera roll the orientation is set wrong, but from the photo library it is set perfectly.
Has anyone had the same issue? Please share your suggestions for fixing it.
From the thread here and the answer below it (by Dilip Rajkumar):
https://stackoverflow.com/a/10602363/5575752
You can use the method below to manage the image orientation after picking the image
(modify the method as per your requirements):
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info{
[self.navigationController dismissViewControllerAnimated: YES completion: nil];
UIImage *image = [self scaleAndRotateImage: [info objectForKey:UIImagePickerControllerOriginalImage]];
NSData *imageData = UIImageJPEGRepresentation(image, 0.1);
headerCell.userProfileImageView.image = [UIImage imageWithData:imageData];
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsPath = [paths objectAtIndex:0]; //Get the docs directory
NSString *filePath = [documentsPath stringByAppendingPathComponent:@"profileImage.png"]; //Add the file name
NSData *pngData = UIImagePNGRepresentation(image);
[pngData writeToFile:filePath atomically:YES];
}
- (UIImage *)scaleAndRotateImage:(UIImage *) image {
int kMaxResolution = 320;
CGImageRef imgRef = image.CGImage;
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);
CGAffineTransform transform = CGAffineTransformIdentity;
CGRect bounds = CGRectMake(0, 0, width, height);
if (width > kMaxResolution || height > kMaxResolution) {
CGFloat ratio = width/height;
if (ratio > 1) {
bounds.size.width = kMaxResolution;
bounds.size.height = bounds.size.width / ratio;
}
else {
bounds.size.height = kMaxResolution;
bounds.size.width = bounds.size.height * ratio;
}
}
CGFloat scaleRatio = bounds.size.width / width;
CGSize imageSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
CGFloat boundHeight;
UIImageOrientation orient = image.imageOrientation;
switch(orient) {
case UIImageOrientationUp: //EXIF = 1
transform = CGAffineTransformIdentity;
break;
case UIImageOrientationUpMirrored: //EXIF = 2
transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
break;
case UIImageOrientationDown: //EXIF = 3
transform = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
transform = CGAffineTransformRotate(transform, M_PI);
break;
case UIImageOrientationDownMirrored: //EXIF = 4
transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
break;
case UIImageOrientationLeftMirrored: //EXIF = 5
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(imageSize.height, imageSize.width);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationLeft: //EXIF = 6
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(0.0, imageSize.width);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationRightMirrored: //EXIF = 7
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeScale(-1.0, 1.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
case UIImageOrientationRight: //EXIF = 8
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
default:
[NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];
}
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
CGContextScaleCTM(context, -scaleRatio, scaleRatio);
CGContextTranslateCTM(context, -height, 0);
}
else {
CGContextScaleCTM(context, scaleRatio, -scaleRatio);
CGContextTranslateCTM(context, 0, -height);
}
CGContextConcatCTM(context, transform);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imgRef);
UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return imageCopy;
}

Huge memory peak - CGContextDrawImage

I use this piece of code to scale and rotate images taken with the camera. When I run it I see a huge memory peak, something like 20 MB. In Instruments I can see that this line:
CGContextDrawImage(ctxt, orig, self.CGImage);
holds 20 MB. Is this normal for full-resolution photos? The iPhone 4S can handle it, but older devices crash because of this code.
After I rescale the image I need it in NSData, so I use the UIImageJPEGRepresentation() method. This together makes the memory peak even higher. It goes to 70 MB in memory usage for several seconds.
And yes, I did read almost all the iOS camera-related questions about memory usage, but found no answer.
// WBImage.mm -- extra UIImage methods
// by allen brunson march 29 2009
#include "WBImage.h"
static inline CGFloat degreesToRadians(CGFloat degrees)
{
return M_PI * (degrees / 180.0);
}
static inline CGSize swapWidthAndHeight(CGSize size)
{
CGFloat swap = size.width;
size.width = size.height;
size.height = swap;
return size;
}
@implementation UIImage (WBImage)
// rotate an image to any 90-degree orientation, with or without mirroring.
// original code by kevin lohman, heavily modified by yours truly.
// http://blog.logichigh.com/2008/06/05/uiimage-fix/
-(UIImage*)rotate:(UIImageOrientation)orient
{
CGRect bnds = CGRectZero;
UIImage* copy = nil;
CGContextRef ctxt = nil;
CGRect rect = CGRectZero;
CGAffineTransform tran = CGAffineTransformIdentity;
bnds.size = self.size;
rect.size = self.size;
switch (orient)
{
case UIImageOrientationUp:
return self;
case UIImageOrientationUpMirrored:
tran = CGAffineTransformMakeTranslation(rect.size.width, 0.0);
tran = CGAffineTransformScale(tran, -1.0, 1.0);
break;
case UIImageOrientationDown:
tran = CGAffineTransformMakeTranslation(rect.size.width,
rect.size.height);
tran = CGAffineTransformRotate(tran, degreesToRadians(180.0));
break;
case UIImageOrientationDownMirrored:
tran = CGAffineTransformMakeTranslation(0.0, rect.size.height);
tran = CGAffineTransformScale(tran, 1.0, -1.0);
break;
case UIImageOrientationLeft:
bnds.size = swapWidthAndHeight(bnds.size);
tran = CGAffineTransformMakeTranslation(0.0, rect.size.width);
tran = CGAffineTransformRotate(tran, degreesToRadians(-90.0));
break;
case UIImageOrientationLeftMirrored:
bnds.size = swapWidthAndHeight(bnds.size);
tran = CGAffineTransformMakeTranslation(rect.size.height,
rect.size.width);
tran = CGAffineTransformScale(tran, -1.0, 1.0);
tran = CGAffineTransformRotate(tran, degreesToRadians(-90.0));
break;
case UIImageOrientationRight:
bnds.size = swapWidthAndHeight(bnds.size);
tran = CGAffineTransformMakeTranslation(rect.size.height, 0.0);
tran = CGAffineTransformRotate(tran, degreesToRadians(90.0));
break;
case UIImageOrientationRightMirrored:
bnds.size = swapWidthAndHeight(bnds.size);
tran = CGAffineTransformMakeScale(-1.0, 1.0);
tran = CGAffineTransformRotate(tran, degreesToRadians(90.0));
break;
default:
// orientation value supplied is invalid
assert(false);
return nil;
}
UIGraphicsBeginImageContext(rect.size);
ctxt = UIGraphicsGetCurrentContext();
switch (orient)
{
case UIImageOrientationLeft:
case UIImageOrientationLeftMirrored:
case UIImageOrientationRight:
case UIImageOrientationRightMirrored:
CGContextScaleCTM(ctxt, -1.0, 1.0);
CGContextTranslateCTM(ctxt, -rect.size.height, 0.0);
break;
default:
CGContextScaleCTM(ctxt, 1.0, -1.0);
CGContextTranslateCTM(ctxt, 0.0, -rect.size.height);
break;
}
CGContextConcatCTM(ctxt, tran);
CGContextDrawImage(ctxt, bnds, self.CGImage);
copy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return copy;
}
-(UIImage*)rotateAndScaleFromCameraWithMaxSize:(CGFloat)maxSize
{
UIImage* imag = self;
imag = [imag rotate:imag.imageOrientation];
imag = [imag scaleWithMaxSize:maxSize];
return imag;
}
-(UIImage*)scaleWithMaxSize:(CGFloat)maxSize
{
return [self scaleWithMaxSize:maxSize quality:kCGInterpolationHigh];
}
-(UIImage*)scaleWithMaxSize:(CGFloat)maxSize
quality:(CGInterpolationQuality)quality
{
CGRect bnds = CGRectZero;
UIImage* copy = nil;
CGContextRef ctxt = nil;
CGRect orig = CGRectZero;
CGFloat rtio = 0.0;
CGFloat scal = 1.0;
bnds.size = self.size;
orig.size = self.size;
rtio = orig.size.width / orig.size.height;
if ((orig.size.width <= maxSize) && (orig.size.height <= maxSize))
{
return self;
}
if (rtio > 1.0)
{
bnds.size.width = maxSize;
bnds.size.height = maxSize / rtio;
}
else
{
bnds.size.width = maxSize * rtio;
bnds.size.height = maxSize;
}
UIGraphicsBeginImageContext(bnds.size);
ctxt = UIGraphicsGetCurrentContext();
scal = bnds.size.width / orig.size.width;
CGContextSetInterpolationQuality(ctxt, quality);
CGContextScaleCTM(ctxt, scal, -scal);
CGContextTranslateCTM(ctxt, 0.0, -orig.size.height);
CGContextDrawImage(ctxt, orig, self.CGImage);
copy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return copy;
}
@end
I ended up using ImageIO; it uses far less memory!
-(UIImage *)resizeImageToMaxDimension: (float) dimension withPath: (NSString *)path
{
NSURL *imageUrl = [NSURL fileURLWithPath:path];
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageUrl, NULL);
NSDictionary *thumbnailOptions = [NSDictionary dictionaryWithObjectsAndKeys:
(__bridge id)kCFBooleanTrue, (__bridge id)kCGImageSourceCreateThumbnailWithTransform,
(__bridge id)kCFBooleanTrue, (__bridge id)kCGImageSourceCreateThumbnailFromImageAlways,
[NSNumber numberWithFloat:dimension], (__bridge id)kCGImageSourceThumbnailMaxPixelSize,
nil];
CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, (__bridge CFDictionaryRef)thumbnailOptions);
UIImage *resizedImage = [UIImage imageWithCGImage:thumbnail];
CFRelease(thumbnail);
CFRelease(imageSource);
return resizedImage;
}
That's correct; it comes from the picture you shot with your camera. Older devices use cameras with lower resolution, which means images taken with an iPhone 3G are smaller in resolution (and thus in size) than the ones you get on your iPhone 4S. Images are usually compressed, but when they are opened in memory for some sort of operation they must be decompressed, and the size they need is much bigger than the file on disk, because it is roughly number_of_pixels_per_row * number_of_pixels_per_column * bytes_per_pixel.
Bye,
Andrea
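As a rough worked example of that formula (assuming an 8-megapixel photo of 3264 x 2448 pixels at 4 bytes per pixel): 3264 * 2448 * 4 ≈ 32 MB of uncompressed bitmap data, no matter how small the JPEG file on disk is, and the original plus the rescaled copy can easily be alive at the same time while you draw.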
Insert this at the end of your methods, just before the return copy; line:
CGContextRelease(ctxt);

UIImage: fill background

I've made the fitImage function below, which takes a UIImage and a CGSize. If the input image is larger than the box, the image is scaled down to fit into the box. If the box and the image do not have the same aspect ratio, the image does not fill the box entirely and some background stays visible. Currently this background is always white. How can I change this to, e.g., a black background?
+ (UIImage*) fitImage:(UIImage*)image inBox:(CGSize)size {
if (image.size.width==size.width && image.size.height==size.height)
return image;
if (image.size.width<size.width && image.size.height<size.height)
return [Util scaleImage:image toSize:size];
float widthFactor = size.width / image.size.width;
float heightFactor = size.height / image.size.height;
CGSize scaledSize = size;
if (widthFactor<heightFactor) {
scaledSize.width = size.width;
scaledSize.height = image.size.height * widthFactor;
} else {
scaledSize.width = image.size.width * heightFactor;
scaledSize.height = size.height;
}
UIGraphicsBeginImageContextWithOptions( size, NO, 0.0 );
float marginX = (size.width-scaledSize.width)/2;
float marginY = (size.height-scaledSize.height)/2;
CGRect scaledImageRect = CGRectMake(marginX, marginY, scaledSize.width, scaledSize.height );
[image drawInRect:scaledImageRect];
UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledImage;
}
Please tell me if you need further information.
Solution:
+ (UIImage*) fitImage:(UIImage*)image inBox:(CGSize)size withBackground:(UIColor*)color {
if (image.size.width==size.width && image.size.height==size.height)
return image;
if (image.size.width<size.width && image.size.height<size.height)
return [Util scaleImage:image toSize:size];
float widthFactor = size.width / image.size.width;
float heightFactor = size.height / image.size.height;
CGSize scaledSize = size;
if (widthFactor<heightFactor) {
scaledSize.width = size.width;
scaledSize.height = image.size.height * widthFactor;
} else {
scaledSize.width = image.size.width * heightFactor;
scaledSize.height = size.height;
}
UIGraphicsBeginImageContextWithOptions( size, NO, 0.0 );
float marginX = (size.width-scaledSize.width)/2;
float marginY = (size.height-scaledSize.height)/2;
CGRect scaledImageRect = CGRectMake(marginX, marginY, scaledSize.width, scaledSize.height );
UIImage* temp = UIGraphicsGetImageFromCurrentImageContext();
[color set];
UIRectFill(CGRectMake(0.0, 0.0, temp.size.width, temp.size.height));
[image drawInRect:scaledImageRect];
UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledImage;
}
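For example, a minimal usage sketch (Util is the class from the question; photo stands in for whatever UIImage you are laying out):
UIImage *boxed = [Util fitImage:photo inBox:CGSizeMake(200.0, 200.0) withBackground:[UIColor blackColor]];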
You can use UIRectFill(rect) to fill the background of a bitmap context with the current fill color. To set the fill color you can use [UIColor set].
e.g.
//Get a temp image to determine the size of the bitmap context
UIImage* temp = UIGraphicsGetImageFromCurrentImageContext();
[[UIColor redColor] set]; //set the desired background color
UIRectFill(CGRectMake(0.0, 0.0, temp.size.width, temp.size.height)); //fill the bitmap context
[image drawInRect:scaledImageRect]; //draw your image over the filled background

Existing implementation of cropImage:to:andScaleTo: and straightenAndScaleImage()?

I'm writing code to use UIImagePickerController. Corey previously posted some nice sample code on SO related to cropping and scaling. However, it doesn't include implementations of cropImage:to:andScaleTo: or straightenAndScaleImage().
Here's how they're used:
newImage = [self cropImage:originalImage to:croppingRect andScaleTo:scaledImageSize];
...
UIImage *rotatedImage = straightenAndScaleImage([editInfo objectForKey:UIImagePickerControllerOriginalImage], scaleSize);
Since I'm sure someone must be using something very similar to Corey's sample code, there's probably an existing implementation of these two functions. Would someone like to share?
If you check the post you linked to, you'll see a link to the Apple dev forums where I got some of this code; here are the methods you are asking about. Note: I may have made some changes relating to data types, but I can't quite remember. It should be trivial for you to adjust if needed.
- (UIImage *)cropImage:(UIImage *)image to:(CGRect)cropRect andScaleTo:(CGSize)size {
UIGraphicsBeginImageContext(size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGImageRef subImage = CGImageCreateWithImageInRect([image CGImage], cropRect);
CGRect myRect = CGRectMake(0.0f, 0.0f, size.width, size.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
CGContextTranslateCTM(context, 0.0f, -size.height);
CGContextDrawImage(context, myRect, subImage);
UIImage* croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRelease(subImage);
return croppedImage;
}
UIImage *straightenAndScaleImage(UIImage *image, int maxDimension) {
CGImageRef img = [image CGImage];
CGFloat width = CGImageGetWidth(img);
CGFloat height = CGImageGetHeight(img);
CGRect bounds = CGRectMake(0, 0, width, height);
CGSize size = bounds.size;
if (width > maxDimension || height > maxDimension) {
CGFloat ratio = width/height;
if (ratio > 1.0f) {
size.width = maxDimension;
size.height = size.width / ratio;
}
else {
size.height = maxDimension;
size.width = size.height * ratio;
}
}
CGFloat scale = size.width/width;
CGAffineTransform transform = orientationTransformForImage(image, &size);
UIGraphicsBeginImageContext(size);
CGContextRef context = UIGraphicsGetCurrentContext();
// Flip
UIImageOrientation orientation = [image imageOrientation];
if (orientation == UIImageOrientationRight || orientation == UIImageOrientationLeft) {
CGContextScaleCTM(context, -scale, scale);
CGContextTranslateCTM(context, -height, 0);
}else {
CGContextScaleCTM(context, scale, -scale);
CGContextTranslateCTM(context, 0, -height);
}
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, bounds, img);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
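Note that straightenAndScaleImage() calls an orientationTransformForImage() helper that isn't shown in this answer. Below is a hypothetical sketch of what it needs to do, mirroring the orientation switch statements used elsewhere on this page: build the transform for the image's imageOrientation and swap the output size for the rotated orientations.
static CGAffineTransform orientationTransformForImage(UIImage *image, CGSize *size)
{
    CGImageRef img = [image CGImage];
    CGFloat width = CGImageGetWidth(img);
    CGFloat height = CGImageGetHeight(img);
    CGAffineTransform transform = CGAffineTransformIdentity;
    switch (image.imageOrientation) {
        case UIImageOrientationUp:           //EXIF = 1
            break;
        case UIImageOrientationUpMirrored:   //EXIF = 2
            transform = CGAffineTransformMakeTranslation(width, 0.0);
            transform = CGAffineTransformScale(transform, -1.0, 1.0);
            break;
        case UIImageOrientationDown:         //EXIF = 3
            transform = CGAffineTransformMakeTranslation(width, height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;
        case UIImageOrientationDownMirrored: //EXIF = 4
            transform = CGAffineTransformMakeTranslation(0.0, height);
            transform = CGAffineTransformScale(transform, 1.0, -1.0);
            break;
        case UIImageOrientationLeftMirrored: //EXIF = 5
            *size = CGSizeMake(size->height, size->width); // output is rotated 90 degrees
            transform = CGAffineTransformMakeTranslation(height, width);
            transform = CGAffineTransformScale(transform, -1.0, 1.0);
            transform = CGAffineTransformRotate(transform, 3.0 * M_PI_2);
            break;
        case UIImageOrientationLeft:         //EXIF = 6
            *size = CGSizeMake(size->height, size->width);
            transform = CGAffineTransformMakeTranslation(0.0, width);
            transform = CGAffineTransformRotate(transform, 3.0 * M_PI_2);
            break;
        case UIImageOrientationRightMirrored: //EXIF = 7
            *size = CGSizeMake(size->height, size->width);
            transform = CGAffineTransformMakeScale(-1.0, 1.0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;
        case UIImageOrientationRight:        //EXIF = 8
            *size = CGSizeMake(size->height, size->width);
            transform = CGAffineTransformMakeTranslation(height, 0.0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;
    }
    return transform;
}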

How to zoom in/out a UIImage object when the user pinches the screen?

I would like to zoom in/out a UIImage object when the user performs the standard pinch action in my application. I'm currently using a UIImageView to display my image, if that detail helps in any way.
I'm trying to figure out how to do this, but no such luck so far.
Any clues?
As others described, the easiest solution is to put your UIImageView into a UIScrollView. I did this in the Interface Builder .xib file.
In viewDidLoad, set the following variables. Set your controller to be a UIScrollViewDelegate.
- (void)viewDidLoad {
[super viewDidLoad];
self.scrollView.minimumZoomScale = 0.5;
self.scrollView.maximumZoomScale = 6.0;
self.scrollView.contentSize = self.imageView.frame.size;
self.scrollView.delegate = self;
}
You are required to implement the following method to return the imageView you want to zoom.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
return self.imageView;
}
In versions prior to iOS9, you may also need to add this empty delegate method:
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(CGFloat)scale
{
}
The Apple documentation does a good job of describing how to do this.
Another easy way to do this is to place your UIImageView within a UIScrollView. As I describe here, you need to set the scroll view's contentSize to be the same as your UIImageView's size. Set your controller instance to be the delegate of the scroll view and implement the viewForZoomingInScrollView: and scrollViewDidEndZooming:withView:atScale: methods to allow for pinch-zooming and image panning. This is effectively what Ben's solution does, only in a slightly more lightweight manner, as you don't have the overhead of a full web view.
One issue you may run into is that the scaling within the scroll view comes in the form of transforms applied to the image. This may lead to blurriness at high zoom factors. For something that can be redrawn, you can follow my suggestions here to provide a crisper display after the pinch gesture is finished. hniels' solution could be used at that point to rescale your image.
Shefali's solution for UIImageView works great, but it needs a little modification:
- (void)pinch:(UIPinchGestureRecognizer *)gesture {
if (gesture.state == UIGestureRecognizerStateEnded
|| gesture.state == UIGestureRecognizerStateChanged) {
NSLog(@"gesture.scale = %f", gesture.scale);
CGFloat currentScale = self.frame.size.width / self.bounds.size.width;
CGFloat newScale = currentScale * gesture.scale;
if (newScale < MINIMUM_SCALE) {
newScale = MINIMUM_SCALE;
}
if (newScale > MAXIMUM_SCALE) {
newScale = MAXIMUM_SCALE;
}
CGAffineTransform transform = CGAffineTransformMakeScale(newScale, newScale);
self.transform = transform;
gesture.scale = 1;
}
}
(Shefali's solution had the downside that it did not scale continuously while pinching. Furthermore, when starting a new pinch, the current image scale was reset.)
The code below helps zoom a UIImageView without using a UIScrollView:
-(void)HandlePinch:(UIPinchGestureRecognizer*)recognizer{
if ([recognizer state] == UIGestureRecognizerStateEnded) {
NSLog(@"======== Scale Applied ===========");
if ([recognizer scale]<1.0f) {
[recognizer setScale:1.0f];
}
CGAffineTransform transform = CGAffineTransformMakeScale([recognizer scale], [recognizer scale]);
imgView.transform = transform;
}
}
Keep in mind that you're NEVER zooming in on a UIImage. EVER.
Instead, you're zooming in and out on the view that displays the UIImage.
In this particular case, you could choose to create a custom UIView with custom drawing to display the image, a UIImageView which displays the image for you, or a UIWebView which will need some additional HTML to back it up.
In all cases, you'll need to implement touchesBegan, touchesMoved, and the like to determine what the user is trying to do (zoom, pan, etc.).
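(These days it is usually simpler to attach a UIPinchGestureRecognizer, as the answers below do, than to track raw touches yourself. A minimal sketch of wiring one up, assuming imageView is the view you want to scale and pinch: is a handler like the ones shown on this page:)
UIPinchGestureRecognizer *pinchRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinch:)];
[self.imageView addGestureRecognizer:pinchRecognizer];
self.imageView.userInteractionEnabled = YES; // UIImageView ignores touches by default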
Here is a solution I've used before that does not require you to use the UIWebView.
- (UIImage *)scaleAndRotateImage:(UIImage *)image
{
int kMaxResolution = 320; // Or whatever
CGImageRef imgRef = image.CGImage;
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);
CGAffineTransform transform = CGAffineTransformIdentity;
CGRect bounds = CGRectMake(0, 0, width, height);
if (width > kMaxResolution || height > kMaxResolution) {
CGFloat ratio = width/height;
if (ratio > 1) {
bounds.size.width = kMaxResolution;
bounds.size.height = bounds.size.width / ratio;
}
else {
bounds.size.height = kMaxResolution;
bounds.size.width = bounds.size.height * ratio;
}
}
CGFloat scaleRatio = bounds.size.width / width;
CGSize imageSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
CGFloat boundHeight;
UIImageOrientation orient = image.imageOrientation;
switch(orient) {
case UIImageOrientationUp: //EXIF = 1
transform = CGAffineTransformIdentity;
break;
case UIImageOrientationUpMirrored: //EXIF = 2
transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
break;
case UIImageOrientationDown: //EXIF = 3
transform = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
transform = CGAffineTransformRotate(transform, M_PI);
break;
case UIImageOrientationDownMirrored: //EXIF = 4
transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
break;
case UIImageOrientationLeftMirrored: //EXIF = 5
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(imageSize.height, imageSize.width);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationLeft: //EXIF = 6
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(0.0, imageSize.width);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationRightMirrored: //EXIF = 7
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeScale(-1.0, 1.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
case UIImageOrientationRight: //EXIF = 8
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
default:
[NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];
}
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
CGContextScaleCTM(context, -scaleRatio, scaleRatio);
CGContextTranslateCTM(context, -height, 0);
}
else {
CGContextScaleCTM(context, scaleRatio, -scaleRatio);
CGContextTranslateCTM(context, 0, -height);
}
CGContextConcatCTM(context, transform);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imgRef);
UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return imageCopy;
}
The article can be found on Apple Support at:
http://discussions.apple.com/message.jspa?messageID=7276709#7276709
Shefali's and JRV's answers extended to include panning and pinch to zoom:
#define MINIMUM_SCALE 0.5
#define MAXIMUM_SCALE 6.0
@property CGPoint translation;
- (void)pan:(UIPanGestureRecognizer *)gesture {
static CGPoint currentTranslation;
static CGFloat currentScale = 0;
if (gesture.state == UIGestureRecognizerStateBegan) {
currentTranslation = _translation;
currentScale = self.view.frame.size.width / self.view.bounds.size.width;
}
if (gesture.state == UIGestureRecognizerStateEnded || gesture.state == UIGestureRecognizerStateChanged) {
CGPoint translation = [gesture translationInView:self.view];
_translation.x = translation.x + currentTranslation.x;
_translation.y = translation.y + currentTranslation.y;
CGAffineTransform transform1 = CGAffineTransformMakeTranslation(_translation.x , _translation.y);
CGAffineTransform transform2 = CGAffineTransformMakeScale(currentScale, currentScale);
CGAffineTransform transform = CGAffineTransformConcat(transform1, transform2);
self.view.transform = transform;
}
}
- (void)pinch:(UIPinchGestureRecognizer *)gesture {
if (gesture.state == UIGestureRecognizerStateEnded || gesture.state == UIGestureRecognizerStateChanged) {
// NSLog(@"gesture.scale = %f", gesture.scale);
CGFloat currentScale = self.view.frame.size.width / self.view.bounds.size.width;
CGFloat newScale = currentScale * gesture.scale;
if (newScale < MINIMUM_SCALE) {
newScale = MINIMUM_SCALE;
}
if (newScale > MAXIMUM_SCALE) {
newScale = MAXIMUM_SCALE;
}
CGAffineTransform transform1 = CGAffineTransformMakeTranslation(_translation.x, _translation.y);
CGAffineTransform transform2 = CGAffineTransformMakeScale(newScale, newScale);
CGAffineTransform transform = CGAffineTransformConcat(transform1, transform2);
self.view.transform = transform;
gesture.scale = 1;
}
}
The simplest way to do this, if all you want is pinch zooming, is to place your image inside a UIWebView (write a small amount of HTML wrapper code, reference your image, and you're basically done). The more complicated way is to use touchesBegan, touchesMoved, and touchesEnded to keep track of the user's fingers, and adjust your view's transform property appropriately.
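A minimal sketch of that HTML wrapper (assuming the image ships in the app bundle as photo.jpg and self.webView is a UIWebView outlet; both names are placeholders):
self.webView.scalesPageToFit = YES; // enables the built-in pinch zooming
NSString *html = @"<html><body style='margin:0'>"
                 @"<img src='photo.jpg' style='width:100%'/>"
                 @"</body></html>";
[self.webView loadHTMLString:html baseURL:[[NSBundle mainBundle] bundleURL]];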
Keep in mind that you don't want to zoom in/out on the UIImage itself. Instead, try to zoom in/out on the view that contains the UIImageView.
I have made a solution for this problem. Take a look at my code:
@IBAction func scaleImage(sender: UIPinchGestureRecognizer) {
self.view.transform = CGAffineTransformScale(self.view.transform, sender.scale, sender.scale)
sender.scale = 1
}
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
view.backgroundColor = UIColor.blackColor()
}
N.B.: Don't forget to hook up the PinchGestureRecognizer.