How would I go about setting an image's default location to a CGRect, and only allowing the image to be dropped inside another CGRect - objective-c

I've been working at this for two days and have not had much luck. I have a little tile image that I'm able to drag and drop anywhere on the screen with a touch's coordinates. My first question is: how can I set the tile image's coordinates so that it starts off inside of the green CGRect instead of the top corner of the screen? In other words, I want to make the green CGRect the image's default location without affecting the user's touch coordinates. My next question is: how can I code it so that the only other location on the screen that I can drag and drop the image to is inside the red CGRect? And if more than half of the image is inside the red CGRect, how can I make it snap to the inside of the red CGRect when someone lets go of it? If more than half of the image is OUTSIDE of the CGRect, I want to animate it back to its default location. If you're going to answer, please explain your answer in detail because I'm new to Objective-C. Here's a screenshot of my GUI.
Here is my code so far:
DADViewController.h
#import <UIKit/UIKit.h>
#import "DADView.h"
@interface DADViewController : UIViewController
@property (nonatomic, weak) IBOutlet DADView *dadView;
@end
DADViewController.m
#import "DADViewController.h"
@implementation DADViewController
@synthesize dadView = _dadView;
-(void)setDadView:(DADView *)dadView
{
_dadView = dadView;
[self.dadView setNeedsDisplay];
}
-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [[event allTouches] anyObject];
CGPoint location = [touch locationInView:touch.view];
location.x -= 40;
location.y -= 40;
self.dadView.thePoint = location;
[self.dadView setNeedsDisplay];
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
[self touchesBegan:touches withEvent:event];
}
DADView.h
@interface DADView : UIView
@property (nonatomic) CGPoint thePoint;
@end
DADView.m
#import "DADView.h"
@implementation DADView
@synthesize thePoint = _thePoint;
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
}
return self;
}
- (void)drawRect:(CGRect)rect
{
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(ctx, 2.0);
CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
CGRect rectangle = CGRectMake(120, 70, 70, 70);
CGContextAddRect(ctx, rectangle);
CGContextStrokePath(ctx);
CGContextSetLineWidth(ctx, 2.0);
CGContextSetStrokeColorWithColor(ctx, [UIColor greenColor].CGColor);
CGRect rectangle2 = CGRectMake(120, 290, 70, 70);
CGContextAddRect(ctx, rectangle2);
CGContextStrokePath(ctx);
UIImage *pngWord = [UIImage imageNamed:@"wordbutton_01.png"];
[pngWord drawAtPoint:self.thePoint];
}
@end

To set the view at the beginning:
CGRect viewFrame = view.frame;
viewFrame.origin = startingRect.origin;
view.frame = viewFrame;
view is your image view, and startingRect is the rect where you want your image to start. Put this code in awakeFromNib, or wherever you set up the image.
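Putting that together, a minimal sketch of setting the default location in awakeFromNib. `imageView` and `startingRect` are illustrative names for your image view and the green rect; adapt them to your own outlets:

```objc
- (void)awakeFromNib
{
    [super awakeFromNib];
    // Move the image view so it starts inside the green rect.
    CGRect viewFrame = self.imageView.frame;
    viewFrame.origin = self.startingRect.origin;
    self.imageView.frame = viewFrame;
}
```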
To check if the image is inside of a frame when being dragged:
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
CGFloat middleX = CGRectGetMidX(imageView.frame);
CGFloat middleY = CGRectGetMidY(imageView.frame);
if(middleX > rectangle2.origin.x && middleX < CGRectGetMaxX(rectangle2) && middleY > rectangle2.origin.y && middleY < CGRectGetMaxY(rectangle2)) {
// the imageView is inside of the box
}
else imageView.frame = defaultFrame;
}
imageView is the image that is being moved, rectangle2 is the frame that you want the imageView in, and defaultFrame is where the image's frame should be set if it doesn't end up in the box.
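Combining both parts of the question, here is a hedged sketch of a touchesEnded: that snaps the tile into the red rect when its midpoint lands inside, and animates it back home otherwise. `imageView`, `redRect`, and `defaultFrame` are illustrative names you would define yourself:

```objc
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint middle = CGPointMake(CGRectGetMidX(self.imageView.frame),
                                 CGRectGetMidY(self.imageView.frame));
    if (CGRectContainsPoint(self.redRect, middle)) {
        // More than half the image is inside: snap it to the rect's center.
        [UIView animateWithDuration:0.2 animations:^{
            self.imageView.center = CGPointMake(CGRectGetMidX(self.redRect),
                                                CGRectGetMidY(self.redRect));
        }];
    } else {
        // Otherwise animate it back to its default (green) location.
        [UIView animateWithDuration:0.3 animations:^{
            self.imageView.frame = self.defaultFrame;
        }];
    }
}
```

Checking the midpoint with CGRectContainsPoint is equivalent to the four comparisons above, just less error-prone.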

Related

Restarting UIPanGestureRecognizer while UIView is animating

I am trying to allow a user to grab a UIView that is being animated at the completion of a UIPanGestureRecognizer method. I am currently able to stop the view and center it under the finger using touchesBegan and the view's presentation layer. But I want the user to be able to retrigger the UIPanGestureRecognizer method without having to lift the finger up and replace it on the UIView.
I've been working on this challenge for a week or so, and have run through various search strings on SO and elsewhere, read through Ray Wenderlich's tutorials, etc., but still haven't found a solution. It looks like I'm trying to do something similar to this question that seemed unresolved:
iOS - UIPanGestureRecognizer : drag during animation
I have just been working with UIView animation blocks and have yet to dive into CABasicAnimation work. Is it possible to do what I want by changing the code below somehow?
Thank you in advance.
In FlingViewController.h:
#import <UIKit/UIKit.h>
@interface FlingViewController : UIViewController <UIGestureRecognizerDelegate>
{
UIView *myView;
}
@property (nonatomic, weak) UIView *myView;
@end
In FlingViewController.m:
#import "FlingViewController.h"
#import <QuartzCore/QuartzCore.h>
@interface FlingViewController ()
@end
@implementation FlingViewController
@synthesize myView = _myView;
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
// Allocate and init view
myView = [[UIView alloc] initWithFrame:CGRectMake(self.view.center.x - 50,
self.view.center.y - 50,
100, 100)];
// Set view background color
[myView setBackgroundColor:[UIColor purpleColor]];
// Create pan gesture recognizer
UIPanGestureRecognizer *gesture = [[UIPanGestureRecognizer alloc]
initWithTarget:self action:@selector(handleGesture:)];
// Add gesture recognizer to view
gesture.delegate = self;
[myView addGestureRecognizer:gesture];
// Add myView as subview
[self.view addSubview:myView];
NSLog(@"myView added");
}
- (void)handleGesture:(UIPanGestureRecognizer *)recognizer
{
if ((recognizer.state == UIGestureRecognizerStateBegan) ||
(recognizer.state == UIGestureRecognizerStateChanged))
{
[recognizer.view.layer removeAllAnimations];
CGPoint translation = [recognizer translationInView:self.view];
recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x,
recognizer.view.center.y + translation.y);
[recognizer setTranslation:CGPointMake(0, 0) inView:self.view];
}
else if (recognizer.state == UIGestureRecognizerStateEnded)
{
CGPoint velocity = [recognizer velocityInView:self.view];
CGFloat magnitude = sqrtf((velocity.x * velocity.x) + (velocity.y * velocity.y));
CGFloat slideMult = magnitude / 200;
float slideFactor = 0.1 * slideMult;
CGPoint finalPoint = CGPointMake(recognizer.view.center.x + (velocity.x * slideFactor),
recognizer.view.center.y + (velocity.y * slideFactor));
finalPoint.x = MIN(MAX(finalPoint.x, 0), self.view.bounds.size.width);
finalPoint.y = MIN(MAX(finalPoint.y, 0), self.view.bounds.size.height);
[UIView animateWithDuration:slideFactor * 2
delay:0
options:(UIViewAnimationOptionCurveEaseOut|
UIViewAnimationOptionAllowUserInteraction |
UIViewAnimationOptionBeginFromCurrentState)
animations:^{
recognizer.view.center = finalPoint;
}
completion:^(BOOL finished){
NSLog(@"Animation complete");
}];
}
}
// Currently, this method will detect when user touches
// myView's presentationLayer and will center that view under the finger
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [touches anyObject];
CGPoint touchPoint = [touch locationInView:self.view];
if ([[myView.layer presentationLayer] hitTest:touchPoint])
{
NSLog(#"Touched!");
[myView.layer removeAllAnimations];
myView.center = touchPoint;
// *** NOTE ***
// At this point, I want the user to be able to fling
// the view around again without having to lift the finger
// and replace it on the view; i.e. when myView is moving
// and the user taps it, it should stop moving and follow
// user's pan gesture again
// Similar to unresolved SO question:
// https://stackoverflow.com/questions/13234234/ios-uipangesturerecognizer-drag-during-animation
}
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
@end
In case it helps anyone else, I've found a solution that seems to work pretty well, based on Thomas Rued/Jack's response to this SO question:
CAAnimation - User Input Disabled
I am using two UIPanGestureRecognizers to move the UIView -- one (gesture/handleGesture:) is attached to the UIView (myView), and another (bigGesture/superGesture:) is attached to that UIView's superview (bigView). The superview detects the position of the subview's presentation layer as it is animating.
The only issue (so far) is the user can still grab the animated view by touching the ending location of myView's animation before the animation completes. Ideally, only the presentation layer location would be the place where the user can interact with the view. If anyone has insight into preventing premature interaction, it would be appreciated.
#import "FlingViewController.h"
#import <QuartzCore/QuartzCore.h>
@interface FlingViewController ()
@end
@implementation FlingViewController
@synthesize myView = _myView;
@synthesize bigView = _bigView;
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
// *** USING A GESTURE RECOGNIZER ATTACHED TO THE SUPERVIEW APPROACH
// CAME FROM SO USER THOMAS RUED/JACK AT FOLLOWING LINK:
// https://stackoverflow.com/questions/7221688/caanimation-user-input-disabled
// Set up size for superView (bigView)
CGRect screenRect = [[UIScreen mainScreen] bounds];
// Allocate big view
bigView = [[UIView alloc] initWithFrame:(screenRect)];
[bigView setBackgroundColor:[UIColor greenColor]];
[self.view addSubview:bigView];
// Allocate and init myView
myView = [[UIView alloc] initWithFrame:CGRectMake(10, 10, 100, 100)];
// Set view background color
[myView setBackgroundColor:[UIColor purpleColor]];
// Create pan gesture recognizer
UIPanGestureRecognizer *gesture = [[UIPanGestureRecognizer alloc]
initWithTarget:self action:@selector(handleGesture:)];
UIPanGestureRecognizer *bigGesture = [[UIPanGestureRecognizer alloc]
initWithTarget:self action:@selector(superGesture:)];
// Add gesture recognizer to view
gesture.delegate = self;
bigGesture.delegate = self;
[myView addGestureRecognizer:gesture];
[bigView addGestureRecognizer:bigGesture];
// Add myView as subview of bigView
[bigView addSubview:myView];
NSLog(@"myView added");
}
// This gesture recognizer is attached to the superView,
// and will detect myView's presentation layer while animated
- (void)superGesture:(UIPanGestureRecognizer *)recognizer
{
CGPoint location = [recognizer locationInView:self.view];
for (UIView* childView in recognizer.view.subviews)
{
CGRect frame = [[childView.layer presentationLayer] frame];
if (CGRectContainsPoint(frame, location))
// ALTERNATE 'IF' STATEMENT; BOTH SEEM TO WORK:
//if ([[childView.layer presentationLayer] hitTest:location])
{
NSLog(@"location = %.2f, %.2f", location.x, location.y);
[childView.layer removeAllAnimations];
childView.center = location;
}
if ((recognizer.state == UIGestureRecognizerStateEnded) &&
(CGRectContainsPoint(frame, location)))
{
CGPoint velocity = [recognizer velocityInView:self.view];
CGFloat magnitude = sqrtf((velocity.x * velocity.x) + (velocity.y * velocity.y));
CGFloat slideMult = magnitude / 200;
float slideFactor = 0.1 * slideMult;
CGPoint finalPoint = CGPointMake(childView.center.x + (velocity.x * slideFactor),
childView.center.y + (velocity.y * slideFactor));
finalPoint.x = MIN(MAX(finalPoint.x, 0), self.view.bounds.size.width);
finalPoint.y = MIN(MAX(finalPoint.y, 0), self.view.bounds.size.height);
[UIView animateWithDuration:slideFactor * 2
delay:0
options:(UIViewAnimationOptionCurveEaseOut|
UIViewAnimationOptionAllowUserInteraction |
UIViewAnimationOptionBeginFromCurrentState)
animations:^{
childView.center = finalPoint;
}
completion:^(BOOL finished){
NSLog(@"Big animation complete");
}];
}
}
}
// This gesture recognizer is attached to myView,
// and will handle movement when myView is not animating
- (void)handleGesture:(UIPanGestureRecognizer *)recognizer
{
if ((recognizer.state == UIGestureRecognizerStateBegan) ||
(recognizer.state == UIGestureRecognizerStateChanged))
{
[recognizer.view.layer removeAllAnimations];
CGPoint translation = [recognizer translationInView:self.view];
recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x,
recognizer.view.center.y + translation.y);
[recognizer setTranslation:CGPointMake(0, 0) inView:self.view];
}
else if (recognizer.state == UIGestureRecognizerStateEnded)
{
CGPoint velocity = [recognizer velocityInView:self.view];
CGFloat magnitude = sqrtf((velocity.x * velocity.x) + (velocity.y * velocity.y));
CGFloat slideMult = magnitude / 200;
float slideFactor = 0.1 * slideMult;
CGPoint finalPoint = CGPointMake(recognizer.view.center.x + (velocity.x * slideFactor),
recognizer.view.center.y + (velocity.y * slideFactor));
finalPoint.x = MIN(MAX(finalPoint.x, 0), self.view.bounds.size.width);
finalPoint.y = MIN(MAX(finalPoint.y, 0), self.view.bounds.size.height);
[UIView animateWithDuration:slideFactor * 2
delay:0
options:(UIViewAnimationOptionCurveEaseOut|
UIViewAnimationOptionAllowUserInteraction |
UIViewAnimationOptionBeginFromCurrentState)
animations:^{
recognizer.view.center = finalPoint;
}
completion:^(BOOL finished){
NSLog(@"Animation complete");
}];
}
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
@end

get CGRect location in superview objective-c

I want to print out the x coordinate of a CGRect. The x,y coordinates of the rect are set to where the user touches, like this:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [[event allTouches]anyObject];
startPoint = [touch locationInView:self];
}
- (void)drawRect:(CGRect)rect
{
ctx = UIGraphicsGetCurrentContext();
jrect = CGRectMake(startPoint.x, startPoint.y, 100, 100);
CGContextAddRect(ctx, jrect);
CGContextFillPath(ctx);
}
I could just print out the startPoint, but I want to print out the CGRect's coordinates, so I tried this:
int jrectX = lroundf(CGRectGetMinX(jrect));
xlabel.text = [NSString stringWithFormat:@"x: %i", jrectX];
But the numbers it returns don't make any sense at all; sometimes they are bigger on the left than on the right. Is there anything wrong with the code?
A CGRect is a struct with two fields: an origin (a CGPoint with x and y) and a size (a CGSize with width and height).
To print the x value from a CGRect:
[NSString stringWithFormat:@"%f", rect.origin.x]
To print the entire rect, there's a convenience function:
NSStringFromCGRect(rect)
You're having problems above, because you're storing the x value into an int, and then using a float rounding function on it. So it should be:
CGFloat jrectX = CGRectGetMinX(jrect);
Unless you're applying a rotation transform, you can just use:
CGFloat jrectX = jrect.origin.x;
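For illustration, a short snippet showing both ways of printing a rect's coordinates (the values here are made up):

```objc
CGRect jrect = CGRectMake(25.0, 40.0, 100.0, 100.0);
// Print just the x coordinate.
NSLog(@"x: %f", jrect.origin.x);
// Print the whole rect with the convenience function.
NSLog(@"rect: %@", NSStringFromCGRect(jrect));
```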
DR.h
#import <UIKit/UIKit.h>
@interface DR : UIView
{
CGContextRef ctx;
CGRect jrect;
CGPoint startPoint;
UILabel *xlabel;
}
@end
DR.m file.
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
xlabel=[[UILabel alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
[self addSubview:xlabel];
// Initialization code
}
return self;
}
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect
{
ctx = UIGraphicsGetCurrentContext();
jrect = CGRectMake(startPoint.x, startPoint.y, 100, 100);
CGContextAddRect(ctx, jrect);
CGContextFillPath(ctx);
// Drawing code
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [[event allTouches]anyObject];
startPoint = [touch locationInView:self];
int jrectX = lroundf(CGRectGetMinX(jrect));
NSLog(@"jrectX --------------");
xlabel.text = [NSString stringWithFormat:@"x: %i", jrectX];
[self setNeedsDisplay];
}
@end
In another view controller, use it like this:
- (void)viewDidLoad
{
DR *drr=[[DR alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];
[drr setBackgroundColor:[UIColor greenColor]];
[self.view addSubview:drr];
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
}

Saving UIImageView and overlayed UIView as an image (xcode/iOS)

I have an image loaded into a UIImageView, which has a UIView overlayed (with a CGRect drawn inside it), I am looking for a way to save what is displayed on the screen as a new image. I am using storyboards and ARC.
I have a UIViewController, which contains UIImageView. The UIView is displayed on top of this, and a circle is drawn where the user touches the screen. This all works fine, but now I want to save what is displayed as an image (JPG).
Screenshot of my storyboard:
Below is the code from my PhotoEditViewController so far. passedPhoto is the photo that is loaded into the UIImageView.
#import "PhotoEditViewController.h"
@interface PhotoEditViewController ()
@end
@implementation PhotoEditViewController
@synthesize selectedPhoto = _selectedPhoto;
@synthesize backButton;
@synthesize savePhoto;
- (void)viewDidLoad {
[super viewDidLoad];
[passedPhoto setImage:_selectedPhoto];
}
- (void)viewDidUnload {
[self setBackButton:nil];
[self setSavePhoto:nil];
[super viewDidUnload];
}
- (IBAction)savePhoto:(id)sender{
NSLog(@"save photo");
// this is where it needs to happen!
}
@end
Below is the code from my PhotoEditView which handles the creation of the circle overlay:
#import "PhotoEditView.h"
@implementation PhotoEditView
@synthesize myPoint = _myPoint;
- (void)setMyPoint:(CGPoint)myPoint {
_myPoint = myPoint;
self.backgroundColor = [UIColor colorWithWhite:1.0 alpha:0.01];
[self setNeedsDisplay];
}
- (id)initWithFrame:(CGRect)frame {
self = [super initWithFrame:frame];
return self;
}
-(void) touchesBegan: (NSSet *) touches withEvent: (UIEvent *) event {
NSSet *allTouches = [event allTouches];
switch ([allTouches count]){
case 1: {
UITouch *touch = [[allTouches allObjects] objectAtIndex:0];
CGPoint point = [touch locationInView:self];
NSLog(@"x=%f", point.x);
NSLog(@"y=%f", point.y);
self.myPoint = point;
}
break;
}
}
- (void)drawRect:(CGRect)rect {
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(ctx, 4.0);
CGContextSetRGBFillColor(ctx, 0, 0, 0, 0.5);
CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1);
CGContextStrokePath(ctx);
CGRect circlePoint = CGRectMake(self.myPoint.x - 50, self.myPoint.y - 50, 100.0, 100.0);
CGContextStrokeEllipseInRect(ctx, circlePoint);
}
@end
Any help would be appreciated.
Have you tried this:
UIGraphicsBeginImageContext(rect.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
[customView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *imgView = [[UIImageView alloc] initWithImage:viewImage];
This is just the same approach as for getting an UIImage from a UIView, only you draw two views in the same context.
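Building on that, here is a sketch of what the savePhoto: action could look like. `passedPhoto` is the UIImageView from the question; `overlayView` is an assumed, illustrative outlet name for the PhotoEditView overlay:

```objc
- (IBAction)savePhoto:(id)sender
{
    // Render both layers into one image context, as shown above.
    UIGraphicsBeginImageContextWithOptions(passedPhoto.bounds.size, NO, 0.0);
    [passedPhoto.layer renderInContext:UIGraphicsGetCurrentContext()];
    [overlayView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Encode the result as JPEG and write it to the photo library.
    NSData *jpegData = UIImageJPEGRepresentation(combined, 0.9);
    UIImageWriteToSavedPhotosAlbum([UIImage imageWithData:jpegData], nil, nil, NULL);
}
```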

Problem with custom vertical UISlider

I'm trying to implement a vertical slider with no thumb that reacts to touches on its track. I have subclassed UISlider and everything works fine as a horizontal slider, but when I transform the slider to vertical, weird stuff happens with the coordinates. Maybe there is another way to implement this? Please point me in the right direction, thanks!
I'm using this code for slider now:
@implementation MMTouchSlider
-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [[event allTouches] anyObject];
CGPoint touchLocation = [touch locationInView:self];
self.value = self.minimumValue + (self.maximumValue - self.minimumValue) * (touchLocation.x / self.frame.size.width);
[super touchesBegan:touches withEvent:event];
}
- (CGRect)thumbRectForBounds:(CGRect)bounds trackRect:(CGRect)rect value:(float)value {
CGRect thumbRect = [super thumbRectForBounds:bounds trackRect:rect value:value];
return thumbRect;
}
@end
It's easy to make a vertical slider - just use 270 degree rotation!
UISlider *slider = [[UISlider alloc] initWithFrame:CGRectMake(100, 100, 200, 10)];
float angleInDegrees = 270;
float angleInRadians = angleInDegrees * (M_PI/180);
slider.transform = CGAffineTransformMakeRotation(angleInRadians);
[self.view addSubview:slider];

Saving CGContextRef

I have a drawing app in which I would like to create an undo method. The drawing takes place inside the touchesMoved: method.
I am trying to create a CGContextRef and push it to the stack OR save it in a context property that can be restored later but am not having any luck. Any advice would be great. Here is what I have ...
UIImageView *drawingSurface;
CGContextRef undoContext;
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[drawingSurface.image drawInRect:CGRectMake(0, 0, drawingSurface.image.size.width, drawingSurface.image.size.height)];
UIGraphicsPushContext(context);
// also tried but cant figure how to restore it
undoContext = context;
UIGraphicsEndImageContext();
}
Then I have a method triggered by my undo button ...
- (IBAction)restoreUndoImage {
UIGraphicsBeginImageContext(self.view.frame.size);
UIGraphicsPopContext();
drawingSurface.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
When I run this, I believe my drawingSurface is being assigned nil because it just erases everything in the image.
My guess is I can't use pop and push this way. But I can't seem to figure out how to just save the context and then push it back onto the drawingSurface. Hmmmm. Any help would be ... well ... helpful. Thanks in advance -
And, just for reference, here is what I am doing to draw to the screen, which is working great. This is inside my touchesMoved:
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[drawingSurface.image drawInRect:CGRectMake(0, 0, drawingSurface.image.size.width, drawingSurface.image.size.height)];
CGContextSetLineCap(context, kCGLineCapRound); //kCGLineCapSquare, kCGLineCapButt, kCGLineCapRound
CGContextSetLineWidth(context, self.brush.size); // for size
CGContextSetStrokeColorWithColor (context,[currentColor CGColor]);
CGContextBeginPath(context);
CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
CGContextStrokePath(context);
drawingSurface.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I think you're approaching the problem the wrong way and confusing contexts.
In an immediate mode API you save the 'state' of objects with push/pop, not the graphical representation. The state consists of things like line widths, colours and positions. The graphical representation is the result of a paint operation (a bitmap) and generally something you don't want to save.
Instead try to save 'information' you use to create the drawing.
My initial suggestion would be to decouple your shape creation and painting. On OS X you can use NSBezierPath; here we'll use a plain array of points.
For example given this protocol:
// ViewController.h
@protocol DrawSourceProtocol <NSObject>
- (NSArray*)pathsToDraw;
@end
@interface ViewController : UIViewController<DrawSourceProtocol>
@end
You can implement these functions:
// ViewController.m
@interface ViewController () {
NSMutableArray *currentPath;
NSMutableArray *allPaths;
MyView *view_;
}
@end
...
- (void)viewDidLoad {
[super viewDidLoad];
currentPath = [[NSMutableArray alloc] init];
allPaths = [[NSMutableArray alloc] init];
view_ = (MyView*)self.view;
view_.delegate = self;
}
- (NSArray*)pathsToDraw {
// Return the currently drawn path too
if (currentPath && currentPath.count) {
NSMutableArray *allPathsPlusCurrent = [[NSMutableArray alloc] initWithArray:allPaths];
[allPathsPlusCurrent addObject:currentPath];
return allPathsPlusCurrent;
}
return allPaths;
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
currentPath = [[NSMutableArray alloc] init];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
// When a touch ends, save the current path
[allPaths addObject:currentPath];
currentPath = [[NSMutableArray alloc] init];
[view_ setNeedsDisplay];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint currentPoint = [touch locationInView:self.view];
// We store the point with the help of NSValue
[currentPath addObject:[NSValue valueWithCGPoint:currentPoint]];
// Update the view
[view_ setNeedsDisplay];
}
Now subclass your view (I call mine MyView here) and implement something like this:
// MyView.h
#import "ViewController.h"
@protocol DrawSourceProtocol;
@interface MyView : UIView {
__weak id<DrawSourceProtocol> delegate_;
}
@property (weak) id<DrawSourceProtocol> delegate;
@end
// MyView.m
@synthesize delegate = delegate_;
...
- (void)drawRect:(CGRect)rect {
NSLog(@"Drawing!");
// Setup a context
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1.0);
CGContextSetLineWidth(context, 2.0);
// Get the paths
NSArray *paths = [delegate_ pathsToDraw];
for (NSArray *aPath in paths) {
BOOL firstPoint = TRUE;
for (NSValue *pointValue in aPath) {
CGPoint point = [pointValue CGPointValue];
// Always move to the first point
if (firstPoint) {
CGContextMoveToPoint(context, point.x, point.y);
firstPoint = FALSE;
continue;
}
// Add a line segment to this point
CGContextAddLineToPoint(context, point.x, point.y);
}
}
// Stroke!
CGContextStrokePath(context);
}
The only caveat here is that setNeedsDisplay isn't very performant. It's better to use setNeedsDisplayInRect:; see my last post regarding an efficient way of determining the 'drawn' rect.
As for undo? Your undo operation is merely popping the last object from the allPaths array. This exercise I'll leave you to :)
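For completeness, the undo operation described above could be sketched like this, using the `allPaths` and `view_` ivars from the view controller:

```objc
- (IBAction)undo:(id)sender
{
    // Discard the most recently completed path and redraw everything that's left.
    if (allPaths.count > 0) {
        [allPaths removeLastObject];
        [view_ setNeedsDisplay];
    }
}
```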
Hope this helps!