UIScrollView with drawable view as subview - objective-c

I have an app composed of a main UIScrollView and a variable number of "page" subviews.
Each page contains a UIImageView, some buttons and a UIView subclass called drawView, which is responsible for drawing over the UIImageView (using touchesBegan, touchesMoved and touchesEnded), allowing users to take notes on the page.
Unfortunately, since all pages are subviews of the UIScrollView, no touch reaches my drawView.
I have a button on the main interface to enable a highlight mode; once it is tapped I'd like to disable all UIScrollView events so that all touches go to my drawView.
I've tried setting userInteractionEnabled on the main scroll view, but since my view is a subview of the UIScrollView, my drawView also receives no touches; the same happens if I disable scrolling.
Does anyone have a suggestion on how to achieve this?
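For reference, a minimal sketch of what the attempts described above might look like (scrollView and highlightMode are assumed names):
- (IBAction)toggleHighlightMode:(UIButton *)sender {
    self.highlightMode = !self.highlightMode;
    // Attempt 1 (tried separately): blocks all touches to the scroll view's subviews, including drawView.
    // self.scrollView.userInteractionEnabled = !self.highlightMode;
    // Attempt 2: only disables scrolling, but drawView still received no touches.
    self.scrollView.scrollEnabled = !self.highlightMode;
}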

Don't worry, keep it simple.
Create a custom UIView subclass that allocates one CALayer to hold your UIImage and a second CALayer on top of it to show the drawing. Both CALayers sit on top of each other; this assumes you know how to handle the transparency of the drawing CALayer.
As both visible parts live in one UIView you can keep on going with
-touchesBegan: -touchesMoved: -touchesEnded:
and handle your Bezier path or however you want to capture the finger/pen input.
Take care to reduce the CGRect that is repainted via -drawRect: so it still looks nice but does not repaint the whole image buffer, which makes it a little bit faster.
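For the repaint-area point, a minimal sketch, assuming a drawRect:-based view with lastPoint/currentPoint ivars (assumed names) and a stroke width of a few points:
// invalidate only the area around the newest segment instead of the whole view
CGRect dirty = CGRectUnion(CGRectMake(lastPoint.x, lastPoint.y, 1.0, 1.0),
                           CGRectMake(currentPoint.x, currentPoint.y, 1.0, 1.0));
[self setNeedsDisplayInRect:CGRectInset(dirty, -4.0, -4.0)]; // pad by at least the stroke width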
Since you handle both layers in one view, you can keep your UIScrollView as it is, but use the new custom view class instead of the plain UIViews. Well - you don't need the UIImageView then.
Something like:
@import UIKit;
@interface ImageDrawingUIView : UIView
-(instancetype)initWithFrame:(CGRect)frame andImage:(UIImage*)img;
@property (nonatomic) UIImage *image;
@property (nonatomic) CALayer *drawLayer;
@end
and
#import "ImageDrawingUIView.h"
@implementation ImageDrawingUIView {
UIImage* _painting;
NSMutableArray<NSValue *> *points;
NSUInteger strokes;
}
-(instancetype)initWithFrame:(CGRect)frame andImage:(UIImage *)img {
if (!(self=[super initWithFrame:frame])) return nil;
_image = img;
CALayer *imgLayer = [CALayer layer];
imgLayer.drawsAsynchronously = YES;
imgLayer.frame = self.bounds; // sublayer geometry uses the view's own coordinate space, so bounds rather than frame
imgLayer.contents = (__bridge id)img.CGImage; // CALayer contents expects a CGImage
[self.layer addSublayer:imgLayer];
_drawLayer = [CALayer layer];
_drawLayer.drawsAsynchronously = YES;
_drawLayer.frame = self.bounds;
[self.layer addSublayer:_drawLayer];
_drawLayer.contents = (__bridge id _Nullable)[self paintingImage].CGImage;
points = [NSMutableArray new];
return self;
}
-(UIImage*)imageFromPoints {
// do the CGContext image creation from array of points here
CGSize size = CGSizeMake(66, 66);
const CGFloat mid = size.width * 0.5;
CGFloat f = 1.4;
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextSetLineWidth(context, 1.5);
CGContextSetLineCap(context, kCGLineCapSquare);
int strokeNum = 0;
for (int p = 0; p < [points count]; p++) {
// NSValues can store CGPoints, the points Array contains NSValues.
NSValue *value = points[p];
CGPoint dot = [value CGPointValue];
CGPoint point = CGPointMake(mid + (dot.x*f) * mid, mid + (dot.y*f) * mid);
if (strokes > strokeNum) {
CGContextMoveToPoint(context, point.x, point.y);
CGContextAddRect(context, CGRectMake(point.x - 1.5, point.y - 1.5, 3, 3));
strokeNum++;
} else {
CGContextAddLineToPoint(context, point.x, point.y);
}
}
CGContextStrokePath(context);
CGContextRestoreGState(context);
UIImage *i = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return i;
}
-(UIImage*)paintingImage {
if (!_painting) {
_painting = [self imageFromPoints];
}
return _painting;
}
// the touch handlers below are sketches; note that -imageFromPoints maps
// normalized coordinates, so scale the raw view points to match your geometry
-(void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
strokes++; // a new stroke begins; -imageFromPoints uses this counter to start a new subpath
CGPoint p = [[touches anyObject] locationInView:self];
[points addObject:[NSValue valueWithCGPoint:p]];
}
-(void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
CGPoint p = [[touches anyObject] locationInView:self];
[points addObject:[NSValue valueWithCGPoint:p]];
_painting = [self imageFromPoints]; // update painted image
_drawLayer.contents = (__bridge id _Nullable)(_painting.CGImage); // show progress while drawing
}
-(void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
CGPoint p = [[touches anyObject] locationInView:self];
[points addObject:[NSValue valueWithCGPoint:p]];
_painting = [self imageFromPoints];
// apply the new painting to the drawing CALayer
_drawLayer.contents = (__bridge id _Nullable)(_painting.CGImage);
}
@end
You can allocate it in your UIScrollView like this:
NSUInteger pages = 0;
ImageDrawingUIView *page = [[ImageDrawingUIView alloc] initWithFrame:CGRectMake(0,pages*pageHeight,width,height) andImage:[UIImage imageWithContentsOfFile:@"filename"]];
pages++;
[self addSubview:page];
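If there are several pages, a sketch of the full loop, including the contentSize the scroll view needs in order to scroll; this assumes, as in the snippet above, that self is the UIScrollView, and that imagePaths, pageWidth and pageHeight exist:
NSUInteger pages = 0;
for (NSString *path in imagePaths) {
    ImageDrawingUIView *page = [[ImageDrawingUIView alloc]
        initWithFrame:CGRectMake(0, pages * pageHeight, pageWidth, pageHeight)
             andImage:[UIImage imageWithContentsOfFile:path]];
    [self addSubview:page]; // pages stacked vertically
    pages++;
}
self.contentSize = CGSizeMake(pageWidth, pages * pageHeight);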
Oops, didn't want to take all the joy of coding it yourself away from you. :)
PS: in -initWithFrame:andImage: there is still room to allocate the buttons and the other controls you need on top, interacting with the content of the custom view.


Objective-C UISlider changing minimum track image only

I'm trying to setMinimumTrackImage of the slider using an image made with CAGradientLayer, let's say using blue and red colors.
What happens is that the full track gets the gradient color: it starts with red, and sliding the thumb to the right reveals the blue.
I want the gradient to run from red to blue up to the thumb, and "stretch" as the thumb moves.
Any ideas? I thought about setting slider.maximumValue = slider...width and changing the gradient image as I listen to the slider value change, but it didn't work.
I don't think you'll be successful trying to set the min track image.
Options are a completely custom slider...
or
Set the min track image to a clear image, add an imageView with the gradient image behind the slider, stretch the frame of the imageView to match the thumb movement.
Here's example code (just a starting point... it would be much better to wrap it into a subclass):
SliderTestViewController.h
//
// SliderTestViewController.h
//
// Created by Don Mag on 10/31/19.
//
#import <UIKit/UIKit.h>
NS_ASSUME_NONNULL_BEGIN
@interface SliderTestViewController : UIViewController
@end
NS_ASSUME_NONNULL_END
SliderTestViewController.m
//
// SliderTestViewController.m
//
// Created by Don Mag on 10/31/19.
//
#import "SliderTestViewController.h"
#import "UIImage+Utils.h"
@interface SliderTestViewController ()
@property (strong, nonatomic) UIImageView *theFakeSliderTrackImageView;
@property (strong, nonatomic) UISlider *theSlider;
@property (strong, nonatomic) NSLayoutConstraint *imgWidthConstraint;
@end
@implementation SliderTestViewController
- (void)viewDidLoad {
[super viewDidLoad];
self.view.backgroundColor = [UIColor whiteColor];
// instantiate a slider
_theSlider = [UISlider new];
// instantiate an image view to use as our custom / fake "left side" of the slider track
_theFakeSliderTrackImageView = [UIImageView new];
// we want the image to stretch
_theFakeSliderTrackImageView.contentMode = UIViewContentModeScaleToFill;
// create a horizontal gradient image to use for our "left side" of the slider track
// the image will be stretched... using a width of 128 seems reasonable
UIImage *gradImg = [UIImage gradientImageWithSize:CGSizeMake(128.0, 4.0) startColor:[UIColor blueColor] endColor:[UIColor redColor] startPoint:CGPointMake(0.0, 0.0) endPoint:CGPointMake(1.0, 0.0)];
// set the gradient image to our image view
_theFakeSliderTrackImageView.image = gradImg;
// create a clear image to use for the slider's min track image
UIImage *clearImg = [UIImage imageWithColor:[UIColor clearColor] size:CGSizeMake(1.0, 1.0)];
// set min track image to clear image
[_theSlider setMinimumTrackImage:clearImg forState:UIControlStateNormal];
// set max track image if desired
// [_theSlider setMaximumTrackImage:anImage forState:UIControlStateNormal];
_theFakeSliderTrackImageView.translatesAutoresizingMaskIntoConstraints = NO;
_theSlider.translatesAutoresizingMaskIntoConstraints = NO;
[self.view addSubview:_theFakeSliderTrackImageView];
[self.view addSubview:_theSlider];
[NSLayoutConstraint activateConstraints:@[
// constrain the slider centerY with 20-pts leading / trailing
[_theSlider.centerYAnchor constraintEqualToAnchor:self.view.centerYAnchor],
[_theSlider.leadingAnchor constraintEqualToAnchor:self.view.leadingAnchor constant:20.0],
[_theSlider.trailingAnchor constraintEqualToAnchor:self.view.trailingAnchor constant:-20.0],
// constrain image view centerY to slider centerY
[_theFakeSliderTrackImageView.centerYAnchor constraintEqualToAnchor:_theSlider.centerYAnchor constant:0.0],
// constrain image view leading to slider leading
[_theFakeSliderTrackImageView.leadingAnchor constraintEqualToAnchor:_theSlider.leadingAnchor constant:0.0],
// image view height to 5-pts (adjust as desired)
[_theFakeSliderTrackImageView.heightAnchor constraintEqualToConstant:5.0],
]];
// init imageView width constraint to 0.0
_imgWidthConstraint = [_theFakeSliderTrackImageView.widthAnchor constraintEqualToConstant:0.0];
_imgWidthConstraint.active = YES;
[_theSlider addTarget:self action:@selector(sliderChanged:) forControlEvents:UIControlEventValueChanged];
}
- (void)viewDidLayoutSubviews {
[super viewDidLayoutSubviews];
[self updateSliderGradientImage];
}
- (void)updateSliderGradientImage {
// set "fake track" imageView width to origin.x of thumb rect (plus 2 for good measure)
CGRect trackRect = [_theSlider trackRectForBounds:_theSlider.bounds];
CGRect thumbRect = [_theSlider thumbRectForBounds:_theSlider.bounds trackRect:trackRect value:_theSlider.value];
_imgWidthConstraint.constant = thumbRect.origin.x + 2;
}
- (void)sliderChanged:(id)sender {
[self updateSliderGradientImage];
}
@end
UIImage+Utils.h
//
// UIImage+Utils.h
//
// Created by Don Mag on 10/31/19.
//
#import <UIKit/UIKit.h>
NS_ASSUME_NONNULL_BEGIN
@interface UIImage (Utils)
+ (nullable UIImage *)imageWithColor:(UIColor *)color size:(CGSize)size;
+ (nullable UIImage *)gradientImageWithSize:(CGSize)size startColor:(UIColor *)startColor endColor:(UIColor *)endColor startPoint:(CGPoint)startPoint endPoint:(CGPoint)endPoint;
@end
NS_ASSUME_NONNULL_END
UIImage+Utils.m
//
// UIImage+Utils.m
//
// Created by Don Mag on 10/31/19.
//
#import "UIImage+Utils.h"
@implementation UIImage (Utils)
+ (UIImage *)imageWithColor:(UIColor *)color size:(CGSize)size {
if (!color || size.height < 1 || size.width < 1)
return nil;
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:size];
UIImage *image = [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull context) {
[color setFill];
[context fillRect:renderer.format.bounds];
}];
return image;
}
+ (UIImage *)gradientImageWithSize:(CGSize)size startColor:(UIColor *)startColor endColor:(UIColor *)endColor startPoint:(CGPoint)startPoint endPoint:(CGPoint)endPoint {
if (!startColor || !endColor)
return nil;
CFArrayRef colors = (__bridge CFArrayRef) [NSArray arrayWithObjects:
(id)startColor.CGColor,
(id)endColor.CGColor,
nil];
CGGradientRef g = CGGradientCreateWithColors(nil, colors, nil);
startPoint.x *= size.width;
startPoint.y *= size.height;
endPoint.x *= size.width;
endPoint.y *= size.height;
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:size];
UIImage *gradientImage = [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
CGContextDrawLinearGradient(rendererContext.CGContext, g, startPoint, endPoint, kCGGradientDrawsAfterEndLocation);
}];
return gradientImage;
}
@end

How to display animated GIF in Objective C on top of the layered View?

I am trying to draw an animated GIF on screen in a macOS app.
I used the code below to insert the GIF, but I can only see the GIF as one static picture; it doesn't animate. :( What should I add to make it animate?
#import <Cocoa/Cocoa.h>
#import <Quartz/Quartz.h>//for drawing circle
#import "sharedPrefferences.h"
@interface GenericFanSubView : NSView
{
NSColor * _backgroundColor;
NSImageView* imageView;
}
- (void)setBackgroundColor :(NSColor*)color;
- (void)insertGif1;
- (void)insertGif2;
- (void)insertGif3;
@end
#import "GenericFanSubView.h"
#define PI 3.14159265358979323846
@implementation GenericFanSubView
- (id)initWithFrame:(NSRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code here.
imageView = [[NSImageView alloc]initWithFrame:CGRectMake(0, 0,self.frame.size.width,self.frame.size.height)];
[imageView setAnimates: YES];
}
return self;
}
- (void)drawRect:(NSRect)dirtyRect
{
[super drawRect:dirtyRect];
// Drawing code here.
[self drawCircleInRect];
_backgroundColor = [NSColor whiteColor];
[self insertGif1];
}
-(void)drawCircleInRect
{
//draw colored circle here
CGContextRef context = [[NSGraphicsContext currentContext] graphicsPort];
// ********** Your drawing code here **********
CGContextSetFillColorWithColor(context,[self NSColorToCGColor:(_backgroundColor)]);
float radius1 = self.frame.size.height/2;
float startAngle = 0;
float endAngle = PI * 2;
CGPoint position = CGPointMake(self.frame.size.height/2,self.frame.size.height/2);//center of the view
CGContextBeginPath(context);
CGContextAddArc(context, position.x, position.y, radius1, startAngle, endAngle, 1);
CGContextDrawPath(context, kCGPathFill); // Or kCGPathFill
}
- (void)setBackgroundColor :(NSColor*)color
{
_backgroundColor = color;
[self setNeedsDisplay:YES];
}
- (CGColorRef)NSColorToCGColor:(NSColor *)color
{
NSInteger numberOfComponents = [color numberOfComponents];
CGFloat components[numberOfComponents];
CGColorSpaceRef colorSpace = [[color colorSpace] CGColorSpace];
[color getComponents:(CGFloat *)&components];
CGColorRef cgColor = CGColorCreate(colorSpace, components);
return cgColor;
}
// currently calling only this one
- (void)insertGif1
{
[imageView removeFromSuperview];
[imageView setImageScaling:NSImageScaleNone];
[imageView setAnimates: YES];
imageView.image = [NSImage imageNamed:@"FanBlades11.gif"];
[self addSubview:imageView];
}
@end
Edit: I discovered the source of the problem:
I was adding my class (the one that represents the GIF inside the circle) on top of RMBlurredView, and the animation doesn't work when I add it as a subview there; however, it works on all the other views I added it to.
Any ideas what inside RMBlurredView could be stopping my NSImageView from animating?
Edit:
I think [self setWantsLayer:YES]; is the reason I am not getting the animation.
How can I still get the animation with this feature enabled?
Edit:
Here is a simple sample with my problem
http://snk.to/f-cdk3wmfn
My GIF (note: it is invisible on a white background).
"You must disable the autoscaling feature of the NSImageView for the
animation playback to function. After you've done that, no extra
programming required. It works like a charm!"
--http://www.cocoabuilder.com/archive/cocoa/108530-nsimageview-and-animated-gifs.html
imageView.imageScaling = NSImageScaleNone;
imageView.animates = YES;
Needed if the image view is in a layer-backed view or is layer-backed itself:
imageView.canDrawSubviewsIntoLayer = YES;
working example using the question's own gif:
NSImageView *view = [[NSImageView alloc] initWithFrame:CGRectMake(10, 10, 50, 50)];
view.imageScaling = NSImageScaleNone;
view.animates = YES;
view.image = [NSImage imageNamed:@"FanBlades2_42x42.gif"];
view.canDrawSubviewsIntoLayer = YES;
NSView *layerview = [[NSView alloc] initWithFrame:CGRectMake(0, 0, 60, 60)];
layerview.wantsLayer = YES;
[layerview addSubview:view];
[self.window.contentView addSubview:layerview];

loading screen not centered on second launch

I have a UIView which is my loading view. All it does is display the circular loading spinner (lol, too much "circle" for one sentence).
It works fine the first time, but after that the circle is not centered; it moves to the left and down a bit. How can I get it to always be centered? Keep in mind I have limited the app to the landscape orientations (landscape left, landscape right) in all views, so the issue is not coming from the device being rotated.
call to load the view:
loadingViewController = [LoadingViewController loadSpinnerIntoView:self.view];
LoadingViewController.h:
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
#import "CrestronClient.h"
@interface LoadingViewController : UIView
{
CrestronClient *cClient;
}
+(LoadingViewController *)loadSpinnerIntoView:(UIView *)superView;
-(void)removeLoadingView;
- (UIImage *)addBackground;
@end
LoadingView.m:
#import "LoadingViewController.h"
#import "RootViewController.h"
@implementation LoadingViewController
CGRect priorFrameSettings;
UIView *parentView;
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
// Return YES for supported orientations
if (interfaceOrientation == UIInterfaceOrientationLandscapeLeft ||interfaceOrientation == UIInterfaceOrientationLandscapeRight ) {
return YES;
}else{
return NO;
}
}
-(void)removeLoadingView
{
// [parentView setFrame:priorFrameSettings];
CATransition *animation = [CATransition animation];
[animation setType:kCATransitionFade];
[[[self superview] layer] addAnimation:animation forKey:@"layerAnimation"];
[self removeFromSuperview];
}
+(LoadingViewController *)loadSpinnerIntoView:(UIView *)superView
{
priorFrameSettings = superView.frame;
parentView = superView;
// [superView setFrame:CGRectMake(0, 0, 1024, 1024)];
// Create a new view with the same frame size as the superView
LoadingViewController *loadingViewController = [[LoadingViewController alloc] initWithFrame:superView.frame];
loadingViewController.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
// If something's gone wrong, abort!
if(!loadingViewController){ return nil; }
[superView addSubview:loadingViewController];
if(!loadingViewController){ return nil; }
// This is the new stuff here ;)
UIActivityIndicatorView *indicator =
[[[UIActivityIndicatorView alloc]
initWithActivityIndicatorStyle: UIActivityIndicatorViewStyleWhiteLarge] autorelease];
// Set the resizing mask so it's not stretched
UIImageView *background = [[UIImageView alloc] initWithImage:[loadingViewController addBackground]];
// Make a little bit of the superView show through
background.alpha = 0.7;
[loadingViewController addSubview:background];
indicator.autoresizingMask =
UIViewAutoresizingFlexibleTopMargin |
UIViewAutoresizingFlexibleRightMargin |
UIViewAutoresizingFlexibleBottomMargin |
UIViewAutoresizingFlexibleLeftMargin;
// Place it in the middle of the view
indicator.center = superView.center;
// Add it into the spinnerView
[loadingViewController addSubview:indicator];
// Start it spinning! Don't miss this step
[indicator startAnimating];
// Create a new animation
CATransition *animation = [CATransition animation];
// Set the type to a nice wee fade
[animation setType:kCATransitionFade];
// Add it to the superView
[[superView layer] addAnimation:animation forKey:@"layerAnimation"];
return loadingViewController;
}
- (UIImage *)addBackground{
cClient = [CrestronClient sharedManager];
if (cClient.isConnected == FALSE) {
[cClient connect];
}
// Create an image context (think of this as a canvas for our masterpiece) the same size as the view
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, 1);
// Our gradient only has two locations - start and finish. More complex gradients might have more colours
size_t num_locations = 2;
// The location of the colors is at the start and end
CGFloat locations[2] = { 0.0, 1.0 };
// These are the colors! That's two RBGA values
CGFloat components[8] = {
0.4,0.4,0.4, 0.8,
0.1,0.1,0.1, 0.5 };
// Create a color space
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
// Create a gradient with the values we've set up
CGGradientRef myGradient = CGGradientCreateWithColorComponents (myColorspace, components, locations, num_locations);
// Set the radius to a nice size, 80% of the width. You can adjust this
float myRadius = (self.bounds.size.width*.8)/2;
// Now we draw the gradient into the context. Think painting onto the canvas
CGContextDrawRadialGradient (UIGraphicsGetCurrentContext(), myGradient, self.center, 0, self.center, myRadius, kCGGradientDrawsAfterEndLocation);
// Rip the 'canvas' into a UIImage object
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
// And release memory
CGColorSpaceRelease(myColorspace);
CGGradientRelease(myGradient);
UIGraphicsEndImageContext();
// … obvious.
return image;
}
- (void)dealloc {
[super dealloc];
}
@end
Make sure the loading view is set to its parent's frame and has the proper autoresizingMask set. This would likely be UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight.
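A minimal sketch of that suggestion applied to the loadSpinnerIntoView: code above (loadingViewController, superView and indicator are the names used in the question; this is only one way to do it):
// size the overlay to the parent and center the spinner in the overlay's own bounds
loadingViewController.frame = superView.bounds; // bounds, not frame, so the origin is (0, 0)
loadingViewController.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
indicator.center = CGPointMake(CGRectGetMidX(loadingViewController.bounds),
                               CGRectGetMidY(loadingViewController.bounds));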
fixed the background by adding
[background setFrame:CGRectMake(0, 0, 1024, 768 )];
and fixed the centering of the circle with:
indicator.center = background.center;

QuartzCore performance on clear UIView

I have a UIView where I set the background color to clear in the init method. I then draw a blue line between two points in drawRect. The first point is the very first touch on the view, and the second point changes when touchesMoved is called.
So one end of the line is fixed, and the other end moves with the user's finger. However, when the background color is clear, the line has quite a bit of delay and skips (it is not smooth).
If I change the background color to black (just by uncommenting the setBackgroundColor line), the movement is much smoother and it looks great, except that you can't see the views behind it.
How can I solve this issue (on the iPad; I only have the simulator and currently don't have access to a device) so that performance is smooth but the background stays clear and the views behind it remain visible?
-(id) initWithFrame:(CGRect)frame {
if (self = [super initWithFrame:frame]) {
//self.backgroundColor = [UIColor clearColor];
}
return self;
}
-(void) drawRect:(CGRect)rect {
if (!CGPointEqualToPoint(touchPoint, CGPointZero)) {
if (touchEnded) {
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClearRect(context, rect);
} else {
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClearRect(context, rect);
CGContextSetLineWidth(context, 3.0f);
CGFloat blue[4] = {0.0f, 0.659f, 1.0f, 1.0f};
CGContextSetAllowsAntialiasing(context, YES);
CGContextSetStrokeColor(context, blue);
CGContextMoveToPoint(context, startPoint.x, startPoint.y);
CGContextAddLineToPoint(context, touchPoint.x, touchPoint.y);
CGContextStrokePath(context);
CGContextSetFillColor(context, blue);
CGContextFillEllipseInRect(context, CGRectMakeForSizeAroundCenter(CGSizeMake(10, 10), touchPoint));
CGContextFillEllipseInRect(context, CGRectMakeForSizeAroundCenter(CGSizeMake(10, 10), startPoint));
}
}
}
-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
self.touchEnded = NO;
if ([[touches allObjects] count] > 0)
startPoint = [[[touches allObjects] objectAtIndex:0] locationInView:self];
touchPoint = [[[touches allObjects] objectAtIndex:0] locationInView:self];
[self setNeedsDisplay];
}
-(void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
self.touchEnded = NO;
if ([[touches allObjects] count] > 0)
touchPoint = [[[touches allObjects] objectAtIndex:0] locationInView:self];
[self setNeedsDisplay];
}
-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
self.touchEnded = YES;
[self setNeedsDisplay];
[self removeFromSuperview];
}
- (void)dealloc {
[super dealloc];
}
First of all, you should test performance on a device rather than in the Simulator. Most graphics-related work behaves very differently in the Simulator, some of it much faster and some, surprisingly, much slower.
That said, if a transparent background does prove to be a performance problem on a device, it's most likely because you fill too many pixels with CGContextClearRect(). I can think of two solutions off the top of my head:
In touchesBegan:withEvent: you can make a snapshot of the underlying view(s) using CALayer's renderInContext: method and draw the snapshot as the background instead of clearing it. That way you can leave the view opaque and thus save time on compositing the view with views underneath it.
You can create a one-point-tall CALayer representing a line and transform it in touchesMoved:withEvent: so that its ends are in correct positions. Same goes for dots representing the line ends (CAShapeLayer is great for that). There will be some ugly trigonometry involved, but the performance is going to be excellent.
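A minimal sketch of the second idea, using a CAShapeLayer for the line (which also avoids the trigonometry of transforming a plain layer); lineLayer is an assumed ivar, while startPoint and touchPoint are the ivars from the question:
// one-time setup, e.g. in initWithFrame:
lineLayer = [CAShapeLayer layer];
lineLayer.lineWidth = 3.0;
lineLayer.strokeColor = [UIColor colorWithRed:0.0 green:0.659 blue:1.0 alpha:1.0].CGColor;
lineLayer.fillColor = [UIColor clearColor].CGColor;
[self.layer addSublayer:lineLayer];

// called from touchesMoved:withEvent: instead of setNeedsDisplay
- (void)updateLine {
    UIBezierPath *path = [UIBezierPath bezierPath];
    [path moveToPoint:startPoint];
    [path addLineToPoint:touchPoint];
    [CATransaction begin];
    [CATransaction setDisableActions:YES]; // avoid implicit animation lag while dragging
    lineLayer.path = path.CGPath;
    [CATransaction commit];
}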

Saving CGContextRef

I have a drawing app in which I would like to create an undo method. The drawing takes place inside the touchesMoved: method.
I am trying to create a CGContextRef and push it to the stack OR save it in a context property that can be restored later, but am not having any luck. Any advice would be great. Here is what I have ...
UIImageView *drawingSurface;
CGContextRef undoContext;
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[drawingSurface.image drawInRect:CGRectMake(0, 0, drawingSurface.image.size.width, drawingSurface.image.size.height)];
UIGraphicsPushContext(context);
// also tried but cant figure how to restore it
undoContext = context;
UIGraphicsEndImageContext();
}
Then I have a method triggered by my undo button ...
- (IBAction)restoreUndoImage {
UIGraphicsBeginImageContext(self.view.frame.size);
UIGraphicsPopContext();
drawingSurface.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
When I run this, I believe my drawingSurface is being assigned nil, because it just erases everything in the image.
My guess is I can't use push and pop this way, but I can't seem to figure out how to just save the context and then push it back onto the drawingSurface. Hmmmm. Any help would be ... well ... helpful. Thanks in advance.
And, just for reference, here is what I am doing to draw to the screen, which is working great. This is inside my touchesMoved:
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[drawingSurface.image drawInRect:CGRectMake(0, 0, drawingSurface.image.size.width, drawingSurface.image.size.height)];
CGContextSetLineCap(context, kCGLineCapRound); //kCGLineCapSquare, kCGLineCapButt, kCGLineCapRound
CGContextSetLineWidth(context, self.brush.size); // for size
CGContextSetStrokeColorWithColor (context,[currentColor CGColor]);
CGContextBeginPath(context);
CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
CGContextStrokePath(context);
drawingSurface.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I think you're approaching the problem the wrong way and confusing contexts.
In an immediate mode API you save the 'state' of objects with push/pop, not the graphical representation. The state consists of things like line widths, colours and positions. The graphical representation is the result of a paint operation (a bitmap) and generally something you don't want to save.
Instead try to save 'information' you use to create the drawing.
My initial suggestion would be to decouple your shape creation and painting. On OSX you can use NSBezierPath, but for iOS we have to use an array of points.
For example given this protocol:
// ViewController.h
@protocol DrawSourceProtocol <NSObject>
- (NSArray*)pathsToDraw;
@end
@interface ViewController : UIViewController<DrawSourceProtocol>
@end
You can implement these functions:
// ViewController.m
@interface ViewController () {
NSMutableArray *currentPath;
NSMutableArray *allPaths;
MyView *view_;
}
@end
...
- (void)viewDidLoad {
[super viewDidLoad];
currentPath = [[NSMutableArray alloc] init];
allPaths = [[NSMutableArray alloc] init];
view_ = (MyView*)self.view;
view_.delegate = self;
}
- (NSArray*)pathsToDraw {
// Return the currently drawn path too
if (currentPath && currentPath.count) {
NSMutableArray *allPathsPlusCurrent = [[NSMutableArray alloc] initWithArray:allPaths];
[allPathsPlusCurrent addObject:currentPath];
return allPathsPlusCurrent;
}
return allPaths;
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
currentPath = [[NSMutableArray alloc] init];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
// When a touch ends, save the current path
[allPaths addObject:currentPath];
currentPath = [[NSMutableArray alloc] init];
[view_ setNeedsDisplay];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint currentPoint = [touch locationInView:self.view];
// We store the point with the help of NSValue
[currentPath addObject:[NSValue valueWithCGPoint:currentPoint]];
// Update the view
[view_ setNeedsDisplay];
}
Now subclass your view (I call mine MyView here) and implement something like this:
// MyView.h
#import "ViewController.h"
@protocol DrawSourceProtocol;
@interface MyView : UIView {
__weak id<DrawSourceProtocol> delegate_;
}
@property (weak) id<DrawSourceProtocol> delegate;
@end
// MyView.m
@synthesize delegate = delegate_;
...
- (void)drawRect:(CGRect)rect {
NSLog(@"Drawing!");
// Setup a context
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1.0);
CGContextSetLineWidth(context, 2.0);
// Get the paths
NSArray *paths = [delegate_ pathsToDraw];
for (NSArray *aPath in paths) {
BOOL firstPoint = TRUE;
for (NSValue *pointValue in aPath) {
CGPoint point = [pointValue CGPointValue];
// Always move to the first point
if (firstPoint) {
CGContextMoveToPoint(context, point.x, point.y);
firstPoint = FALSE;
continue;
}
// Draw a point
CGContextAddLineToPoint(context, point.x, point.y);
}
}
// Stroke!
CGContextStrokePath(context);
}
The only caveat here is that setNeedsDisplay isn't very performant. It's better to use setNeedsDisplayInRect:; see my last post regarding an efficient way of determining the 'drawn' rect.
As for undo? Your undo operation is merely popping the last object off the allPaths array; I'll leave that exercise to you :)
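A minimal sketch of that undo operation, using the allPaths ivar and the view_ reference from the code above (the action name is an assumption):
- (IBAction)undo:(id)sender {
    if (allPaths.count > 0) {
        [allPaths removeLastObject]; // drop the most recent completed stroke
        [view_ setNeedsDisplay];     // redraw from the remaining paths
    }
}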
Hope this helps!