I have an image in a CALayer (using the CALayer's contents property). A CALayer has a transform property that can be used to scale and rotate. I can use that transform without difficulty when the CALayer is still.
But when I try to drag the CALayer (using mouseDragged:(NSEvent *)theEvent), it all falls apart (the image gets flattened out as I drag).
-(void)mouseDragged:(NSEvent *)theEvent {
CGPoint loc = [self convertPoint:theEvent.locationInWindow fromView:nil];
CGPoint deltaLoc = ccpSub(loc, downLoc); // cocos2d helper: subtracts two points
CGPoint newPos = ccpAdd(startPos, deltaLoc); // cocos2d helper: adds two points
[layer changePosition:newPos];
...
}
In the layer:
-(void)changePosition:(CGPoint)newPos {
//prevent unintended animation actions
self.actions = @{@"position": [NSNull null]};
CGSize size = self.frame.size;
CGRect newRect = CGRectMake(newPos.x, newPos.y, size.width, size.height);
self.frame = newRect;
}
The problem is that I use CALayer's frame to dynamically change the location of the CALayer as I drag, but when the rotation angle is not zero, the frame is no longer parallel to the x and y axes. What is the best way to fix this problem?
One way to do it is to regenerate transforms for each incremental movement.
-(void)mouseDragged:(CGPoint)deltaLoc start:(CGPoint)startPos {
CGPoint newPos = ccpAdd(startPos, deltaLoc);
[self changePosition:newPos];
}
-(void)changePosition:(CGPoint)newPos {
//prevent unintended animation actions
self.actions = @{@"position": [NSNull null]};
double angle = [[self valueForKeyPath:@"transform.rotation"] doubleValue];
double scaleX = [[self valueForKeyPath:@"transform.scale.x"] doubleValue];
double scaleY = [[self valueForKeyPath:@"transform.scale.y"] doubleValue];
self.transform = CATransform3DIdentity;
CGSize size = self.frame.size;
CGRect newRect = CGRectMake(newPos.x, newPos.y, size.width, size.height);
self.frame = newRect;
[self applyTransformRotation:angle scale:ccp(scaleX, scaleY)];
}
-(void)applyTransformRotation:(double)angle scale:(CGPoint)scale {
self.transform = CATransform3DMakeRotation(angle, 0, 0, 1);
self.transform = CATransform3DScale(self.transform, scale.x, scale.y, 1);
}
I do not know if this is the most efficient way to do it, but it definitely works.
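A simpler alternative worth noting: move the layer by setting its position property instead of its frame. position is defined in the superlayer's coordinate space and is not affected by the layer's own transform, so no transform bookkeeping is needed. A minimal sketch (note that position locates the anchorPoint, which is the layer's center by default, so the drag math would track the center rather than the frame origin):
-(void)changePosition:(CGPoint)newPos {
    //prevent unintended animation actions
    self.actions = @{@"position": [NSNull null]};
    // position is unaffected by this layer's rotation/scale,
    // unlike the derived frame property
    self.position = newPos;
}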
I have an app which is composed of a main UIScrollView and a variable number of "page" subviews.
Each of these pages contains an image view, some buttons, and a UIView subclass, called drawView, that is responsible for drawing over the UIImageView (using touchesBegan, touchesMoved and touchesEnded), allowing users to take notes over the page.
Unfortunately, because all pages are subviews of the UIScrollView, no touch reaches my drawView.
I have a button on the main interface to enable highlight mode; once it is tapped, I'd like to disable all UIScrollView events so that all events go to my drawView.
I've tried setting userInteractionEnabled to NO on the main scroll view but, as my view is a subview of the UIScrollView, my drawView also receives no touches; the same happens if I disable scrolling.
Does anyone have a suggestion on how to solve this?
Don't worry, keep it simple.
Create a custom UIView class that allocates one CALayer to hold your UIImage and a second CALayer on top of it to show the future drawing. Both CALayers can be stacked on top of each other, assuming you know how to handle the transparency of the drawing CALayer.
As both visible parts are in one UIView you can keep on going with
-touchesBegan: -touchesMoved: -touchesEnded:
and handle your UIBezierPath or however you want to capture the fingers/pen.
Now take care to reduce the CGRect that is repainted in -drawRect: so it still looks nice but does not repaint the whole image buffer each time, which makes it a bit faster.
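A minimal sketch of that idea, assuming the newest touch point is in a local variable point and that the pad value roughly matches your stroke radius:
// invalidate only a small padded box around the newest point
// instead of repainting the whole view
CGFloat pad = 4.0; // assumed stroke radius
CGRect dirty = CGRectMake(point.x - pad, point.y - pad, pad * 2.0, pad * 2.0);
[self setNeedsDisplayInRect:dirty];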
Because both layers live in one UIView, you can keep using your UIScrollView as is, but with the new custom view class instead of plain UIViews. You won't need the UIImageView then.
Something like:
@import UIKit;
@interface ImageDrawingUIView : UIView
-(instancetype)initWithFrame:(CGRect)frame andImage:(UIImage*)img;
@property (nonatomic) UIImage *image;
@property (nonatomic) CALayer *drawLayer;
@end
and
#import "ImageDrawingUIView.h"
@implementation ImageDrawingUIView {
UIImage* _painting;
NSMutableArray<NSValue *> *points;
NSUInteger strokes;
}
-(instancetype)initWithFrame:(CGRect)frame andImage:(UIImage *)img {
if (!(self=[super initWithFrame:frame])) return nil;
_image = img;
CALayer *imgLayer = [CALayer layer];
imgLayer.drawsAsynchronously = YES;
imgLayer.frame = self.bounds; // sublayer coordinates, not the superview's frame
imgLayer.contents = (__bridge id)img.CGImage; // contents expects a CGImage
[self.layer addSublayer:imgLayer];
_drawLayer = [CALayer layer];
_drawLayer.drawsAsynchronously = YES;
_drawLayer.frame = self.bounds;
[self.layer addSublayer:_drawLayer];
_drawLayer.contents = (__bridge id _Nullable)[self paintingImage].CGImage;
points = [NSMutableArray new];
return self;
}
-(UIImage*)imageFromPoints {
// do the CGContext image creation from array of points here
CGSize size = CGSizeMake(66, 66);
const CGFloat mid = size.width * 0.5;
CGFloat f = 1.4;
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextSetLineWidth(context, 1.5);
CGContextSetLineCap(context, kCGLineCapSquare);
int strokeNum = 0;
for (int p = 0; p < [points count]; p++) {
// NSValues can store CGPoints, the points Array contains NSValues.
NSValue *value = points[p];
CGPoint dot = [value CGPointValue];
CGPoint point = CGPointMake(mid + (dot.x*f) * mid, mid + (dot.y*f) * mid);
if (strokes > strokeNum) {
CGContextMoveToPoint(context, point.x, point.y);
CGContextAddRect(context, CGRectMake(point.x - 1.5, point.y - 1.5, 3, 3));
strokeNum++;
} else {
CGContextAddLineToPoint(context, point.x, point.y);
}
}
CGContextStrokePath(context);
CGContextRestoreGState(context);
UIImage *i = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return i;
}
-(UIImage*)paintingImage {
if (!_painting) {
_painting = [self imageFromPoints];
}
return _painting;
}
// following code is not complete, just showing the methods
-(void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
// capture the first finger and store its first point in the points array;
// note -imageFromPoints expects points normalized around the view center,
// so adjust if you store raw locations
CGPoint loc = [[touches anyObject] locationInView:self];
[points addObject:[NSValue valueWithCGPoint:loc]];
}
-(void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
// capture the first finger and append its current point
CGPoint loc = [[touches anyObject] locationInView:self];
[points addObject:[NSValue valueWithCGPoint:loc]];
_painting = [self imageFromPoints]; // update painted image
}
-(void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
// capture the final point; the next finger/stroke would go in another array
CGPoint loc = [[touches anyObject] locationInView:self];
[points addObject:[NSValue valueWithCGPoint:loc]];
// apply new painting to CALayer again
_drawLayer.contents = (__bridge id _Nullable)(_painting.CGImage);
}
@end
You can allocate it in your UIScrollView like:
NSUInteger pages = 0;
ImageDrawingUIView *page = [[ImageDrawingUIView alloc] initWithFrame:CGRectMake(0, pages*pageHeight, width, height) andImage:[UIImage imageWithContentsOfFile:@"filename"]];
pages++;
[self addSubview:page];
Oops, didn't want to take all the joy of coding it yourself away from you. :)
PS: In -initWithFrame:andImage: there is still room to allocate the buttons and other things you need on top, and to interact with the content of the custom view.
I'm trying to draw a line with a specific width. I searched for examples online, but I only found examples using straight lines. I need curved lines. Also, I need to detect whether the user touched within the line. Is it possible to achieve this using Objective-C and Sprite Kit? If so, can someone provide an example?
You can use UIBezierPath to create bezier curves (nice smooth curves). You can specify this path for a CAShapeLayer and add that as a sublayer to your view:
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(10, 150)];
[path addCurveToPoint:CGPointMake(110, 150) controlPoint1:CGPointMake(40, 100) controlPoint2:CGPointMake(80, 100)];
[path addCurveToPoint:CGPointMake(210, 150) controlPoint1:CGPointMake(140, 200) controlPoint2:CGPointMake(170, 200)];
[path addCurveToPoint:CGPointMake(310, 150) controlPoint1:CGPointMake(250, 100) controlPoint2:CGPointMake(280, 100)];
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 10;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
[self.view.layer addSublayer:layer];
If you want to randomize it a little, you can just randomize some of the curves. If you want some fuzziness, add some shadow. If you want the ends to be round, specify a rounded line cap:
UIBezierPath *path = [UIBezierPath bezierPath];
CGPoint point = CGPointMake(10, 100);
[path moveToPoint:point];
CGPoint controlPoint1;
CGPoint controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
for (NSInteger i = 0; i < 5; i++) {
controlPoint1 = CGPointMake(point.x + (point.x - controlPoint2.x), 50.0);
point.x += 40.0 + arc4random_uniform(20);
controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
[path addCurveToPoint:point controlPoint1:controlPoint1 controlPoint2:controlPoint2];
}
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 5;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
layer.shadowColor = [UIColor redColor].CGColor;
layer.shadowRadius = 2.0;
layer.shadowOpacity = 1.0;
layer.shadowOffset = CGSizeZero;
layer.lineCap = kCALineCapRound;
[self.view.layer addSublayer:layer];
If you want it to be even more irregular, break those beziers into smaller segments, but the idea would be the same. The only trick with conjoined bezier curves is that you want to make sure that the second control point of one curve is in line with the first control point of the next one, or else you end up with sharp discontinuities in the curves.
If you want to detect if and when a user taps on it, that's more complicated. But what you have to do is:
Make a snapshot of the image:
- (UIImage *)captureView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 1.0); // usually I'd use 0.0, but we'll use 1.0 here so that the tap point of the gesture matches the pixel of the snapshot
if ([view respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
BOOL success = [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
NSAssert(success, @"drawViewHierarchyInRect failed");
} else {
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Get the color of the pixel at the coordinate that the user tapped:
- (void)handleTap:(UITapGestureRecognizer *)gesture
{
CGPoint point = [gesture locationInView:gesture.view];
CGFloat red, green, blue, alpha;
UIColor *color = [self image:self.image colorAtPoint:point];
[color getRed:&red green:&green blue:&blue alpha:&alpha];
if (green < 0.9 && blue < 0.9 && red > 0.9)
NSLog(#"tapped on curve");
else
NSLog(#"didn't tap on curve");
}
Here is Apple's code, adapted, for getting the pixel buffer in order to determine the color of the pixel the user tapped on:
// adapted from https://developer.apple.com/library/mac/qa/qa1509/_index.html
- (UIColor *)image:(UIImage *)image colorAtPoint:(CGPoint)point
{
UIColor *color;
CGImageRef imageRef = image.CGImage;
// Create the bitmap context
CGContextRef context = [self createARGBBitmapContextForImage:imageRef];
NSAssert(context, #"error creating context");
// Get image width, height. We'll use the entire image.
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = {{0,0},{width,height}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(context, rect, imageRef);
// Now we can get a pointer to the image data associated with the bitmap
// context.
uint8_t *data = CGBitmapContextGetData (context);
if (data != NULL) {
size_t offset = (NSInteger) point.y * 4 * width + (NSInteger) point.x * 4;
uint8_t alpha = data[offset];
uint8_t red = data[offset+1];
uint8_t green = data[offset+2];
uint8_t blue = data[offset+3];
color = [UIColor colorWithRed:red / 255.0 green:green / 255.0 blue:blue / 255.0 alpha:alpha / 255.0];
}
// When finished, release the context
CGContextRelease(context);
// Free image data memory for the context
if (data) {
free(data); // we used malloc in createARGBBitmapContextForImage, so free it
}
return color;
}
- (CGContextRef) createARGBBitmapContextForImage:(CGImageRef) inImage
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
size_t bitmapByteCount;
size_t bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
// Use the generic RGB color space.
colorSpace = CGColorSpaceCreateDeviceRGB(); // CGColorSpaceCreateDeviceWithName(kCGColorSpaceGenericRGB);
NSAssert(colorSpace, @"Error allocating color space");
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc(bitmapByteCount);
NSAssert(bitmapData, @"Unable to allocate bitmap buffer");
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
(CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
NSAssert(context, #"Context not created!");
// Make sure and release colorspace before returning
CGColorSpaceRelease( colorSpace );
return context;
}
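As a simpler alternative to pixel sampling, you may be able to hit-test the stroked outline of the path directly. This sketch assumes you kept references to the UIBezierPath and the line width you drew it with:
// build a new path covering the stroked area (iOS 5+),
// then test whether the tapped point falls inside it
- (BOOL)isPoint:(CGPoint)point onPath:(UIBezierPath *)path lineWidth:(CGFloat)lineWidth
{
    CGPathRef stroked = CGPathCreateCopyByStrokingPath(path.CGPath, NULL, lineWidth,
                                                       kCGLineCapRound, kCGLineJoinRound, 10.0);
    BOOL hit = CGPathContainsPoint(stroked, NULL, point, NO);
    CGPathRelease(stroked);
    return hit;
}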
I create a double layer with UIImageViews, one on the bottom and one overlay, with this method:
UIImage *bottomImage = self.imageView.image;
UIImage *image = self.urlFoto.image;
CGSize newSize = CGSizeMake(640, 640);
UIGraphicsBeginImageContext( newSize );
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
After this I add a method to resize image with pinch gesture:
- (IBAction)scaleImage:(UIPinchGestureRecognizer *)recognizer {
recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
recognizer.scale = 1;
}
But when I save the photo to the photo library, I can't get the current size of my overlay; I always see the same 640x640 size for the overlay (image). I think the missing code is here:
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
In the CGRectMake(0,0,newSize.width,newSize.height), does anyone know the correct method to get the current size of the UIImageView after it has been pinched?
When drawing, you must keep the scale transform in mind, so you have to draw your overlay image with respect to the final scale of the UIImageView as adjusted by the gesture.
You can get final scale of view like this:
CGFloat scale = view.transform.a;
The letter a is important here. It is the value of the width scale, so you can use it to get the overall scale, assuming you are scaling the image proportionally (the same scale for width and height).
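For instance, a hedged sketch of compositing the overlay at its current on-screen scale; the centered placement and the self.urlFoto view are assumptions based on the snippets above:
CGFloat scale = self.urlFoto.transform.a; // assumes proportional pinch scaling
CGSize scaledSize = CGSizeMake(newSize.width * scale, newSize.height * scale);
// center the scaled overlay in the canvas; adjust this origin if your
// overlay is also translated, not just scaled
CGRect overlayRect = CGRectMake((newSize.width - scaledSize.width) / 2.0,
                                (newSize.height - scaledSize.height) / 2.0,
                                scaledSize.width, scaledSize.height);
[image drawInRect:overlayRect blendMode:kCGBlendModeNormal alpha:1.0];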
A little more detail regarding the scale:
CGAffineTransform is a structure defined like this:
struct CGAffineTransform {
CGFloat a, b, c, d;
CGFloat tx, ty;
};
and
CGAffineTransformMakeScale(CGFloat sx, CGFloat sy)
does the following, according to the documentation:
Return a transform which scales by `(sx, sy)':
t' = [ sx 0 0 sy 0 0 ]
For better understanding, see this example code:
UIView *view = [[UIView alloc] initWithFrame:CGRectMake(50, 50, 100, 100)];
view.backgroundColor = [UIColor redColor];
[self.view addSubview:view];
view.transform = CGAffineTransformScale(view.transform, 0.5, 0.5);
NSLog(#"Transform 1: %f", view.transform.a);
view.transform = CGAffineTransformScale(view.transform, 0.5, 0.5);
NSLog(#"Transform 2: %f", view.transform.a);
which prints following to the console:
Transform 1: 0.500000
Transform 2: 0.250000
The 1st transform applies a 0.5 scale to the default scale of 1.0.
The 2nd transform applies a 0.5 scale to the already scaled 0.5, i.e. it multiplies the current 0.5 by the new 0.5 scale.
And so on.
Step 1: Declare this in your .h file:
CGFloat lastScale;
Step 2: In your .m file, replace your gesture method:
- (void)handlePinchGesture:(UIPinchGestureRecognizer *)gestureRecognizer {
if([gestureRecognizer state] == UIGestureRecognizerStateBegan) {
// Reset the last scale, necessary if there are multiple objects with different scales
lastScale = [gestureRecognizer scale];
}
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan ||
[gestureRecognizer state] == UIGestureRecognizerStateChanged) {
CGFloat currentScale = [[[gestureRecognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
// Calculate the New Scale of UIImageView
CGFloat newScale = 1 - (lastScale - [gestureRecognizer scale]);
// Store your image view's transform
CGAffineTransform transform = imageView.transform;
// Reset your image view to identity (its original size)
[imageView setTransform:CGAffineTransformIdentity];
// Save the rect of the UIImageView at its unscaled size
CGRect actualRect = imageView.frame;
// Re-apply the transform to the image view so it is scaled again
[imageView setTransform:transform];
// Now calculate new frame size of your ImageView
NSLog(#"width %f height %f",(imageView.frame.size.width*newScale), (imageView.frame.size.height*newScale));
}
}
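A short usage note, under the same names as above: the recognizer still has to be attached, and UIImageView has user interaction disabled by default.
UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc]
    initWithTarget:self action:@selector(handlePinchGesture:)];
imageView.userInteractionEnabled = YES; // off by default on UIImageView
[imageView addGestureRecognizer:pinch];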
I am using a UIScrollView to display a UIImageView that is larger than the screen size (just like photo viewing in the Photo Library). Next, I needed to add a point (I am using a UIImageView to show a dot) to the screen when the user taps. Got that working.
Now what I am trying to do is have that point stay in the same location relative to the image. UIScrollView has some notifications for when dragging starts and ends, but nothing that gives a constant update of its contentOffset. Say I place a dot at location X; I then scroll down 50 pixels, and the point needs to move up 50 pixels smoothly (so the point always stays at the same location relative to the image).
Does anybody have any ideas?
EDIT: adding code
- (void)viewDidLoad
{
[super viewDidLoad];
//other code...
scrollView.delegate = self;
scrollView.clipsToBounds = YES;
scrollView.decelerationRate = UIScrollViewDecelerationRateFast;
scrollView.minimumZoomScale = (scrollView.frame.size.width / imageView.frame.size.width);
scrollView.maximumZoomScale = 10.0;
//other code...
}
Adding a point:
-(void) foundTap:(UITapGestureRecognizer *) recognizer
{
CGPoint pixelPos = [recognizer locationInView:self.view];
UIImageView *testPoint = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"inner-circle.png"]];
testPoint.frame = CGRectMake(0, 0, 20, 20);
testPoint.center = CGPointMake(pixelPos.x, pixelPos.y);
[self.scrollView addSubview:testPoint];
NSMutableArray *tempTestPointArray = [[NSMutableArray alloc] initWithArray:testPointArray];
[tempTestPointArray addObject:testPoint];
testPointArray = tempTestPointArray;
NSLog(#"recorded point %f,%f",pixelPos.x,pixelPos.y);
}
Getting pixel points:
-(void)scrollViewDidZoom:(UIScrollView *)scrollView
{
NSLog(#"zoom factor: %f", scrollView.zoomScale);
UIPinchGestureRecognizer *test = scrollView.pinchGestureRecognizer;
UIView *piece = test.view;
CGPoint locationInView = [test locationInView:piece];
CGPoint locationInSuperview = [test locationInView:piece.superview];
NSLog(#"locationInView: %# locationInSuperView: %#", NSStringFromCGPoint(locationInView), NSStringFromCGPoint(locationInSuperview));
}
You can implement the UIScrollViewDelegate method scrollViewDidScroll: to get a continuous callback whenever the content offset changes.
Can you not simply add the image view with the dot as a subview of the scroll view?
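A hedged sketch of that idea, assuming an imageView outlet for the zoomable image: adding the dot as a subview of the image view makes it scroll and zoom along with the image automatically (the dot itself will scale with the zoom, too).
-(void)foundTap:(UITapGestureRecognizer *)recognizer
{
    // convert the tap into the image view's coordinate space
    CGPoint p = [recognizer locationInView:self.imageView];
    UIImageView *dot = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"inner-circle.png"]];
    dot.frame = CGRectMake(0, 0, 20, 20);
    dot.center = p;
    [self.imageView addSubview:dot];
}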
You can use something like this:
UIGraphicsBeginImageContext(mainimg.frame.size);
[mainimg.image drawInRect:
CGRectMake(0, 0, mainimg.frame.size.width, mainimg.frame.size.height)];
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineCap(ctx, kCGLineCapButt);
CGContextSetLineWidth(ctx,2.0);
CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);
CGContextBeginPath(ctx);
if (largecursor == FALSE) {
CGContextMoveToPoint(ctx, pt.x-3.0, pt.y);
CGContextAddLineToPoint(ctx, pt.x+3.0,pt.y);
CGContextMoveToPoint(ctx, pt.x, pt.y-3.0);
CGContextAddLineToPoint(ctx, pt.x, pt.y+3.0);
} else {
CGContextMoveToPoint(ctx, 0.0, pt.y);
CGContextAddLineToPoint(ctx, maxx,pt.y);
CGContextMoveToPoint(ctx, pt.x, 0.0);
CGContextAddLineToPoint(ctx, pt.x, maxy);
}
CGContextStrokePath(ctx);
mainimg.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I would like to zoom in/out on a UIImage object when the user performs the standard pinch action in my application. I'm currently using a UIImageView to display my image, if that detail helps in any way.
I'm trying to figure out how to do this, but no such luck so far.
Any clues?
As others described, the easiest solution is to put your UIImageView into a UIScrollView. I did this in the Interface Builder .xib file.
In viewDidLoad, set the following variables. Set your controller to be a UIScrollViewDelegate.
- (void)viewDidLoad {
[super viewDidLoad];
self.scrollView.minimumZoomScale = 0.5;
self.scrollView.maximumZoomScale = 6.0;
self.scrollView.contentSize = self.imageView.frame.size;
self.scrollView.delegate = self;
}
You are required to implement the following method to return the imageView you want to zoom.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
return self.imageView;
}
In versions prior to iOS9, you may also need to add this empty delegate method:
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(CGFloat)scale
{
}
The Apple documentation does a good job of describing how to do this.
Another easy way to do this is to place your UIImageView within a UIScrollView. As I describe here, you need to set the scroll view's contentSize to be the same as your UIImageView's size. Set your controller instance to be the delegate of the scroll view and implement the viewForZoomingInScrollView: and scrollViewDidEndZooming:withView:atScale: methods to allow for pinch-zooming and image panning. This is effectively what Ben's solution does, only in a slightly more lightweight manner, as you don't have the overhead of a full web view.
One issue you may run into is that the scaling within the scroll view comes in the form of transforms applied to the image. This may lead to blurriness at high zoom factors. For something that can be redrawn, you can follow my suggestions here to provide a crisper display after the pinch gesture is finished. hniels' solution could be used at that point to rescale your image.
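For example, one hedged way to re-render once the pinch ends; originalImage is an assumed property holding the full-resolution source, and details such as resetting zoomScale and contentSize are left out:
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(CGFloat)scale
{
    // redraw the source image at the new on-screen size so the
    // transform-induced blurriness disappears
    CGSize newSize = CGSizeMake(self.imageView.bounds.size.width * scale,
                                self.imageView.bounds.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [self.originalImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    self.imageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}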
Shefali's solution for UIImageView works great, but it needs a little modification:
- (void)pinch:(UIPinchGestureRecognizer *)gesture {
if (gesture.state == UIGestureRecognizerStateEnded
|| gesture.state == UIGestureRecognizerStateChanged) {
NSLog(#"gesture.scale = %f", gesture.scale);
CGFloat currentScale = self.frame.size.width / self.bounds.size.width;
CGFloat newScale = currentScale * gesture.scale;
if (newScale < MINIMUM_SCALE) {
newScale = MINIMUM_SCALE;
}
if (newScale > MAXIMUM_SCALE) {
newScale = MAXIMUM_SCALE;
}
CGAffineTransform transform = CGAffineTransformMakeScale(newScale, newScale);
self.transform = transform;
gesture.scale = 1;
}
}
(Shefali's solution had the downside that it did not scale continuously while pinching. Furthermore, when starting a new pinch, the current image scale was reset.)
The code below helps to zoom a UIImageView without using a UIScrollView:
-(void)HandlePinch:(UIPinchGestureRecognizer*)recognizer{
if ([recognizer state] == UIGestureRecognizerStateEnded) {
NSLog(#"======== Scale Applied ===========");
if ([recognizer scale]<1.0f) {
[recognizer setScale:1.0f];
}
CGAffineTransform transform = CGAffineTransformMakeScale([recognizer scale], [recognizer scale]);
imgView.transform = transform;
}
}
Keep in mind that you're NEVER zooming in on a UIImage. EVER.
Instead, you're zooming in and out on the view that displays the UIImage.
In this particular case, you could choose to create a custom UIView with custom drawing to display the image, a UIImageView which displays the image for you, or a UIWebView which will need some additional HTML to back it up.
In all cases, you'll need to implement touchesBegan, touchesMoved, and the like to determine what the user is trying to do (zoom, pan, etc.).
Here is a solution I've used before that does not require you to use the UIWebView.
- (UIImage *)scaleAndRotateImage:(UIImage *)image
{
int kMaxResolution = 320; // Or whatever
CGImageRef imgRef = image.CGImage;
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);
CGAffineTransform transform = CGAffineTransformIdentity;
CGRect bounds = CGRectMake(0, 0, width, height);
if (width > kMaxResolution || height > kMaxResolution) {
CGFloat ratio = width/height;
if (ratio > 1) {
bounds.size.width = kMaxResolution;
bounds.size.height = bounds.size.width / ratio;
}
else {
bounds.size.height = kMaxResolution;
bounds.size.width = bounds.size.height * ratio;
}
}
CGFloat scaleRatio = bounds.size.width / width;
CGSize imageSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
CGFloat boundHeight;
UIImageOrientation orient = image.imageOrientation;
switch(orient) {
case UIImageOrientationUp: //EXIF = 1
transform = CGAffineTransformIdentity;
break;
case UIImageOrientationUpMirrored: //EXIF = 2
transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
break;
case UIImageOrientationDown: //EXIF = 3
transform = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
transform = CGAffineTransformRotate(transform, M_PI);
break;
case UIImageOrientationDownMirrored: //EXIF = 4
transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
break;
case UIImageOrientationLeftMirrored: //EXIF = 5
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(imageSize.height, imageSize.width);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationLeft: //EXIF = 6
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(0.0, imageSize.width);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationRightMirrored: //EXIF = 7
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeScale(-1.0, 1.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
case UIImageOrientationRight: //EXIF = 8
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
default:
[NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];
}
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
CGContextScaleCTM(context, -scaleRatio, scaleRatio);
CGContextTranslateCTM(context, -height, 0);
}
else {
CGContextScaleCTM(context, scaleRatio, -scaleRatio);
CGContextTranslateCTM(context, 0, -height);
}
CGContextConcatCTM(context, transform);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imgRef);
UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return imageCopy;
}
The article can be found on Apple Support at:
http://discussions.apple.com/message.jspa?messageID=7276709#7276709
Shefali's and JRV's answers extended to include panning and pinch-to-zoom:
#define MINIMUM_SCALE 0.5
#define MAXIMUM_SCALE 6.0
@property CGPoint translation;
- (void)pan:(UIPanGestureRecognizer *)gesture {
static CGPoint currentTranslation;
static CGFloat currentScale = 0;
if (gesture.state == UIGestureRecognizerStateBegan) {
currentTranslation = _translation;
currentScale = self.view.frame.size.width / self.view.bounds.size.width;
}
if (gesture.state == UIGestureRecognizerStateEnded || gesture.state == UIGestureRecognizerStateChanged) {
CGPoint translation = [gesture translationInView:self.view];
_translation.x = translation.x + currentTranslation.x;
_translation.y = translation.y + currentTranslation.y;
CGAffineTransform transform1 = CGAffineTransformMakeTranslation(_translation.x , _translation.y);
CGAffineTransform transform2 = CGAffineTransformMakeScale(currentScale, currentScale);
CGAffineTransform transform = CGAffineTransformConcat(transform1, transform2);
self.view.transform = transform;
}
}
- (void)pinch:(UIPinchGestureRecognizer *)gesture {
if (gesture.state == UIGestureRecognizerStateEnded || gesture.state == UIGestureRecognizerStateChanged) {
// NSLog(#"gesture.scale = %f", gesture.scale);
CGFloat currentScale = self.view.frame.size.width / self.view.bounds.size.width;
CGFloat newScale = currentScale * gesture.scale;
if (newScale < MINIMUM_SCALE) {
newScale = MINIMUM_SCALE;
}
if (newScale > MAXIMUM_SCALE) {
newScale = MAXIMUM_SCALE;
}
CGAffineTransform transform1 = CGAffineTransformMakeTranslation(_translation.x, _translation.y);
CGAffineTransform transform2 = CGAffineTransformMakeScale(newScale, newScale);
CGAffineTransform transform = CGAffineTransformConcat(transform1, transform2);
self.view.transform = transform;
gesture.scale = 1;
}
}
The simplest way to do this, if all you want is pinch zooming, is to place your image inside a UIWebView (write a small amount of HTML wrapper code, reference your image, and you're basically done). The more complicated way is to use touchesBegan, touchesMoved, and touchesEnded to keep track of the user's fingers and adjust your view's transform property appropriately.
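A minimal sketch of that wrapper, assuming the image ships in the bundle as "image.png" (a placeholder name) and webView is a UIWebView outlet:
// the viewport meta tag enables the web view's built-in pinch zooming
NSString *html = @"<html><head><meta name='viewport' content='width=device-width, user-scalable=yes'/></head>"
                 @"<body style='margin:0'><img src='image.png' width='100%'/></body></html>";
[webView loadHTMLString:html baseURL:[[NSBundle mainBundle] bundleURL]];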
Keep in mind that you don't want to zoom in/out on the UIImage itself. Instead, try to zoom in/out on the view which contains the UIImageView.
I have made a solution for this problem. Take a look at my code:
@IBAction func scaleImage(sender: UIPinchGestureRecognizer) {
self.view.transform = CGAffineTransformScale(self.view.transform, sender.scale, sender.scale)
sender.scale = 1
}
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
view.backgroundColor = UIColor.blackColor()
}
N.B.: Don't forget to hook up the PinchGestureRecognizer.