Saving CGContextRef - objective-c

I have a drawing app in which I would like to create an undo method. The drawing takes place inside the touchesMoved: method.
I am trying to create a CGContextRef and push it to the stack OR save it in a context property that can be restored later but am not having any luck. Any advice would be great. Here is what I have ...
UIImageView *drawingSurface;
CGContextRef undoContext;
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[drawingSurface.image drawInRect:CGRectMake(0, 0, drawingSurface.image.size.width, drawingSurface.image.size.height)];
UIGraphicsPushContext(context);
// also tried this, but can't figure out how to restore it
undoContext = context;
UIGraphicsEndImageContext();
}
Then I have a method triggered by my undo button ...
- (IBAction)restoreUndoImage {
UIGraphicsBeginImageContext(self.view.frame.size);
UIGraphicsPopContext();
drawingSurface.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
When I run this, I believe my drawingSurface is being assigned nil, because it just erases everything in the image.
My guess is I can't use push and pop this way. But I can't seem to figure out how to just save the context and then push it back onto the drawingSurface. Hmmmm. Any help would be ... well ... helpful. Thanks in advance -
And, just for reference, here is what I am doing to draw to the screen, which is working great. This is inside my touchesMoved:
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[drawingSurface.image drawInRect:CGRectMake(0, 0, drawingSurface.image.size.width, drawingSurface.image.size.height)];
CGContextSetLineCap(context, kCGLineCapRound); //kCGLineCapSquare, kCGLineCapButt, kCGLineCapRound
CGContextSetLineWidth(context, self.brush.size); // for size
CGContextSetStrokeColorWithColor (context,[currentColor CGColor]);
CGContextBeginPath(context);
CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
CGContextStrokePath(context);
drawingSurface.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

I think you're approaching the problem the wrong way and confusing contexts.
In an immediate mode API you save the 'state' of objects with push/pop, not the graphical representation. The state consists of things like line widths, colours and positions. The graphical representation is the result of a paint operation (a bitmap) and generally something you don't want to save.
Instead try to save 'information' you use to create the drawing.
My initial suggestion would be to decouple your shape creation and painting. On OS X you could use NSBezierPath (iOS has UIBezierPath too), but a plain array of points keeps this example simple.
For example given this protocol:
// ViewController.h
@protocol DrawSourceProtocol <NSObject>
- (NSArray*)pathsToDraw;
@end
@interface ViewController : UIViewController<DrawSourceProtocol>
@end
You can implement these functions:
// ViewController.m
@interface ViewController () {
NSMutableArray *currentPath;
NSMutableArray *allPaths;
MyView *view_;
}
@end
...
- (void)viewDidLoad {
[super viewDidLoad];
currentPath = [[NSMutableArray alloc] init];
allPaths = [[NSMutableArray alloc] init];
view_ = (MyView*)self.view;
view_.delegate = self;
}
- (NSArray*)pathsToDraw {
// Return the currently drawn path too
if (currentPath && currentPath.count) {
NSMutableArray *allPathsPlusCurrent = [[NSMutableArray alloc] initWithArray:allPaths];
[allPathsPlusCurrent addObject:currentPath];
return allPathsPlusCurrent;
}
return allPaths;
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
currentPath = [[NSMutableArray alloc] init];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
// When a touch ends, save the current path
[allPaths addObject:currentPath];
currentPath = [[NSMutableArray alloc] init];
[view_ setNeedsDisplay];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint currentPoint = [touch locationInView:self.view];
// We store the point with the help of NSValue
[currentPath addObject:[NSValue valueWithCGPoint:currentPoint]];
// Update the view
[view_ setNeedsDisplay];
}
Now subclass your view (I call mine MyView here) and implement something like this:
// MyView.h
#import "ViewController.h"
@protocol DrawSourceProtocol;
@interface MyView : UIView {
__weak id<DrawSourceProtocol> delegate_;
}
@property (weak) id<DrawSourceProtocol> delegate;
@end
// MyView.m
@synthesize delegate = delegate_;
...
- (void)drawRect:(CGRect)rect {
NSLog(@"Drawing!");
// Setup a context
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1.0);
CGContextSetLineWidth(context, 2.0);
// Get the paths
NSArray *paths = [delegate_ pathsToDraw];
for (NSArray *aPath in paths) {
BOOL firstPoint = YES;
for (NSValue *pointValue in aPath) {
CGPoint point = [pointValue CGPointValue];
// Always move to the first point
if (firstPoint) {
CGContextMoveToPoint(context, point.x, point.y);
firstPoint = NO;
continue;
}
// Add a line segment to this point
CGContextAddLineToPoint(context, point.x, point.y);
}
}
// Stroke!
CGContextStrokePath(context);
}
The only caveat here is that setNeedsDisplay isn't very performant. It's better to use setNeedsDisplayInRect:; see my last post regarding an efficient way of determining the 'drawn' rect.
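For example, a minimal sketch of the dirty-rect idea, assuming you also track the previous touch point (previousPoint and the 4-point padding here are illustrative; pad by at least half the stroke width):

// Invalidate only the area around the newest segment instead of the whole view.
CGFloat pad = 4.0;
CGRect dirty = CGRectMake(MIN(previousPoint.x, currentPoint.x) - pad,
                          MIN(previousPoint.y, currentPoint.y) - pad,
                          fabs(currentPoint.x - previousPoint.x) + pad * 2.0,
                          fabs(currentPoint.y - previousPoint.y) + pad * 2.0);
[view_ setNeedsDisplayInRect:dirty];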
As for undo? Your undo operation is merely popping the last object from the allPaths array and redrawing :)
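In sketch form, assuming the allPaths and view_ ivars above and an undo button wired to this action:

- (IBAction)undo:(id)sender {
    if (allPaths.count > 0) {
        [allPaths removeLastObject]; // discard the most recent stroke
        [view_ setNeedsDisplay];     // repaint from the remaining paths
    }
}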
Hope this helps!

Related

UIScrollView with drawable view as subview

I have an app composed of a main UIScrollView and a variable number of "page" subviews.
Each of these pages contains an image view, some buttons, and a UIView subclass called drawView that is responsible for drawing over the UIImageView (using touchesBegan, touchesMoved, and touchesEnded), allowing users to take notes on the page.
Unfortunately, as all pages are subviews of the UIScrollView, no touch reaches my drawView.
I have a button on the main interface to enable highlight mode, and once it is tapped I'd like to disable all UIScrollView events so that all events go to my drawView.
I've tried setting userInteractionEnabled on the main scroll view but, as my view is a subview of the UIScrollView, my drawView receives no touches either; the same happens if I disable scrolling.
Does anyone have a suggestion on how to achieve this?
Don't worry, keep it simple.
Create a custom UIView class that allocates one CALayer to hold your UIImage and a second CALayer to show the drawing on top of it. Both CALayers can be placed on top of each other, assuming you know how to handle the transparency of the drawing CALayer.
As both visible parts are in one UIView you can keep on going with
-touchesBegan: -touchesMoved: -touchesEnded:
and handle your bezier path, or however else you want to capture the fingers/pen.
Now take care to reduce the CGRect that is repainted in -drawRect: so it still looks nice but does not repaint the whole image buffer again, which makes it a bit faster.
As you handle both layers in one view you can keep your UIScrollView as is, but use the new custom view class instead of plain UIViews. Well - you don't need the UIImageView then.
Something like:
@import UIKit;
@interface ImageDrawingUIView : UIView
- (instancetype)initWithFrame:(CGRect)frame andImage:(UIImage*)img;
@property (nonatomic) UIImage *image;
@property (nonatomic) CALayer *drawLayer;
@end
and
#import "ImageDrawingUIView.h"
@implementation ImageDrawingUIView {
UIImage* _painting;
NSMutableArray<NSValue *> *points;
NSUInteger strokes;
}
-(instancetype)initWithFrame:(CGRect)frame andImage:(UIImage *)img {
if (!(self=[super initWithFrame:frame])) return nil;
_image = img;
CALayer *imgLayer = [CALayer layer];
imgLayer.drawsAsynchronously = YES;
imgLayer.frame = frame;
imgLayer.contents = (__bridge id)img.CGImage; // CALayer contents expects a CGImage, not a UIImage
[self.layer addSublayer:imgLayer];
_drawLayer = [CALayer layer];
_drawLayer.drawsAsynchronously = YES;
_drawLayer.frame = frame;
[self.layer addSublayer:_drawLayer];
_drawLayer.contents = (__bridge id _Nullable)[self paintingImage].CGImage;
points = [NSMutableArray new];
return self;
}
-(UIImage*)imageFromPoints {
// do the CGContext image creation from array of points here
CGSize size = CGSizeMake(66, 66);
const CGFloat mid = size.width * 0.5;
CGFloat f = 1.4;
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextSetLineWidth(context, 1.5);
CGContextSetLineCap(context, kCGLineCapSquare);
int strokeNum = 0;
for (int p = 0; p < [points count]; p++) {
// NSValues can store CGPoints, the points Array contains NSValues.
NSValue *value = points[p];
CGPoint dot = [value CGPointValue];
CGPoint point = CGPointMake(mid + (dot.x*f) * mid, mid + (dot.y*f) * mid);
if (strokes > strokeNum) {
CGContextMoveToPoint(context, point.x, point.y);
CGContextAddRect(context, CGRectMake(point.x - 1.5, point.y - 1.5, 3, 3));
strokeNum++;
} else {
CGContextAddLineToPoint(context, point.x, point.y);
}
}
CGContextStrokePath(context);
CGContextRestoreGState(context);
UIImage *i = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return i;
}
-(UIImage*)paintingImage {
if (!_painting) {
_painting = [self imageFromPoints];
}
return _painting;
}
// following code is not complete, just showing the methods
-(void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
// capture first finger, store first point in a array of CGPoints
[points addObject:[NSValue valueWithCGPoint: /*touches*/ CGPointMake(0, 0)]];
}
-(void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
// capture first finger, store in array of CGPoints
[points addObject:[NSValue valueWithCGPoint: /*touches*/ CGPointMake(0, 0)]];
_painting = [self imageFromPoints]; // update painted image
}
-(void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
// capture first finger and store the point; the next stroke would start another array
[points addObject:[NSValue valueWithCGPoint: /*touches*/ CGPointMake(0, 0)]];
// apply new painting to CALayer again
_drawLayer.contents = (__bridge id _Nullable)(_painting.CGImage);
}
@end
You can allocate it in your UIScrollView like
NSUInteger pages = 0;
ImageDrawingUIView *page = [[ImageDrawingUIView alloc] initWithFrame:CGRectMake(0, pages*pageHeight, width, height) andImage:[UIImage imageWithContentsOfFile:@"filename"]];
pages++;
[self addSubview:page];
Oops, didn't want to take all the joy of coding it yourself away from you. :)
PS: in -initWithFrame:andImage: there is still space to allocate the buttons and whatever else you need on top, and to interact with the content of the custom view.

get CGRect location in superview objective-c

I want to print out the x coordinate of a CGRect. The rect's x,y coordinates are set to where the user touches, like this:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [[event allTouches]anyObject];
startPoint = [touch locationInView:self];
}
- (void)drawRect:(CGRect)rect
{
ctx = UIGraphicsGetCurrentContext();
jrect = CGRectMake(startPoint.x, startPoint.y, 100, 100);
CGContextAddRect(ctx, jrect);
CGContextFillPath(ctx);
}
I could just print out the startPoint, but to print out the CGRect's coordinate I tried this:
int jrectX = lroundf(CGRectGetMinX(jrect));
xlabel.text = [NSString stringWithFormat:@"x: %i", jrectX];
But the numbers it returns don't make any sense at all; sometimes they are bigger toward the left than toward the right. Is there anything wrong with the code?
A CGRect is a struct made up of a CGPoint origin and a CGSize size - four CGFloat values in total: x, y, width, and height.
To print the x value from a CGRect:
[NSString stringWithFormat:@"%f", rect.origin.x]
To print the entire rect, there's a convenience function:
NSStringFromCGRect(rect)
You're having problems above because you're storing the x value into an int after using a float rounding function on it. It should be:
CGFloat jrectX = CGRectGetMinX(jrect);
Unless you're applying a rotation transform, you can just use:
CGFloat jrectX = jrect.origin.x;
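A quick example of both:

CGRect jrect = CGRectMake(10, 20, 100, 100);
NSLog(@"x: %f", jrect.origin.x);         // x: 10.000000
NSLog(@"%@", NSStringFromCGRect(jrect)); // {{10, 20}, {100, 100}}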
DR.h
#import <UIKit/UIKit.h>
#interface DR : UIView
{
CGContextRef ctx;
CGRect jrect;
CGPoint startPoint;
UILabel *xlabel;
}
#end
DR.m
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
xlabel=[[UILabel alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
[self addSubview:xlabel];
// Initialization code
}
return self;
}
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect
{
ctx = UIGraphicsGetCurrentContext();
jrect = CGRectMake(startPoint.x, startPoint.y, 100, 100);
CGContextAddRect(ctx, jrect);
CGContextFillPath(ctx);
// Drawing code
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [[event allTouches]anyObject];
startPoint = [touch locationInView:self];
int jrectX = lroundf(CGRectGetMinX(jrect));
NSLog(@"jrectX: %i", jrectX);
xlabel.text = [NSString stringWithFormat:@"x: %i", jrectX];
[self setNeedsDisplay];
}
@end
In another view controller, use it like this:
- (void)viewDidLoad
{
DR *drr=[[DR alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];
[drr setBackgroundColor:[UIColor greenColor]];
[self.view addSubview:drr];
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
}

Saving UIImageView and overlaid UIView as an image (Xcode/iOS)

I have an image loaded into a UIImageView, which has a UIView overlaid (with a CGRect drawn inside it). I am looking for a way to save what is displayed on the screen as a new image. I am using storyboards and ARC.
I have a UIViewController, which contains the UIImageView. The UIView is displayed on top of this, and a circle is drawn where the user touches the screen. This all works fine, but now I want to save what is displayed as an image (JPG).
Screenshot of my storyboard:
Below is the code from my PhotoEditViewController so far. passedPhoto is the UIImageView the photo is loaded into.
#import "PhotoEditViewController.h"
@interface PhotoEditViewController ()
@end
@implementation PhotoEditViewController
@synthesize selectedPhoto = _selectedPhoto;
@synthesize backButton;
@synthesize savePhoto;
- (void)viewDidLoad {
[super viewDidLoad];
[passedPhoto setImage:_selectedPhoto];
}
- (void)viewDidUnload {
[self setBackButton:nil];
[self setSavePhoto:nil];
[super viewDidUnload];
}
- (IBAction)savePhoto:(id)sender{
NSLog(@"save photo");
// this is where it needs to happen!
}
@end
Below is the code from my PhotoEditView which handles the creation of the circle overlay:
#import "PhotoEditView.h"
@implementation PhotoEditView
@synthesize myPoint = _myPoint;
- (void)setMyPoint:(CGPoint)myPoint {
_myPoint = myPoint;
self.backgroundColor = [UIColor colorWithWhite:1.0 alpha:0.01];
[self setNeedsDisplay];
}
- (id)initWithFrame:(CGRect)frame {
self = [super initWithFrame:frame];
return self;
}
-(void) touchesBegan: (NSSet *) touches withEvent: (UIEvent *) event {
NSSet *allTouches = [event allTouches];
switch ([allTouches count]){
case 1: {
UITouch *touch = [[allTouches allObjects] objectAtIndex:0];
CGPoint point = [touch locationInView:self];
NSLog(@"x=%f", point.x);
NSLog(@"y=%f", point.y);
self.myPoint = point;
}
break;
}
}
- (void)drawRect:(CGRect)rect {
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(ctx, 4.0);
CGContextSetRGBFillColor(ctx, 0, 0, 0, 0.5);
CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1);
CGContextStrokePath(ctx);
CGRect circlePoint = CGRectMake(self.myPoint.x - 50, self.myPoint.y - 50, 100.0, 100.0);
CGContextStrokeEllipseInRect(ctx, circlePoint);
}
@end
Any help would be appreciated.
Have you tried this:
UIGraphicsBeginImageContext(imageView.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
[customView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *imgView = [[UIImageView alloc] initWithImage:viewImage];
This is just the same approach as for getting a UIImage from a UIView, only you draw two views into the same context.
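Applied to the savePhoto: action above, a sketch might look like this (overlayView is an assumed outlet to the PhotoEditView instance; passedPhoto is the image view from the question, and both are assumed to share the same bounds):

- (IBAction)savePhoto:(id)sender {
    UIGraphicsBeginImageContext(passedPhoto.bounds.size);
    [passedPhoto.layer renderInContext:UIGraphicsGetCurrentContext()];
    [overlayView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Save as JPEG data (or use UIImageWriteToSavedPhotosAlbum for the photo album)
    NSData *jpegData = UIImageJPEGRepresentation(combined, 0.9);
    // ... write jpegData wherever you need it
}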

How would I go about setting an image's default location to a CGRect, and only allowing the image to be dropped inside another CGRect

I've been working at this for two days and have not had much luck. I have a little tile image that I'm able to drag and drop anywhere on the screen using a touch's coordinates. My first question is: how can I set the tile image's coordinates so that it starts off inside the green CGRect instead of the top corner of the screen? In other words, I want to make the green CGRect the image's default location without affecting the user's touch coordinates. My next question is: how can I code it so that the only other location on the screen I can drag and drop the image to is inside the red CGRect? And if more than half of the image is inside the red CGRect, how can I make it snap to the inside of the red CGRect when someone lets go of it? If more than half of the image is OUTSIDE of the CGRect, I want to animate it back to its default location. If you're going to answer, please explain your answer in detail because I'm new to Objective-C. Here's a screenshot of my GUI.
Here is my code so far:
DADViewController.h
#import <UIKit/UIKit.h>
#import "DADView.h"
@interface DADViewController : UIViewController
@property (nonatomic, weak) IBOutlet DADView *dadView;
@end
DADViewController.m
#import "DADViewController.h"
@implementation DADViewController
@synthesize dadView = _dadView;
-(void)setDadView:(DADView *)dadView
{
_dadView = dadView;
[self.dadView setNeedsDisplay];
}
-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [[event allTouches] anyObject];
CGPoint location = [touch locationInView:touch.view];
location.x -= 40;
location.y -= 40;
self.dadView.thePoint = location;
[self.dadView setNeedsDisplay];
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
[self touchesBegan:touches withEvent:event];
}
@end
DADView.h
@interface DADView : UIView
@property (nonatomic) CGPoint thePoint;
@end
DADView.m
#import "DADView.h"
@implementation DADView
@synthesize thePoint = _thePoint;
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
}
return self;
}
- (void)drawRect:(CGRect)rect
{
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(ctx, 2.0);
CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
CGRect rectangle = CGRectMake(120, 70, 70, 70);
CGContextAddRect(ctx, rectangle);
CGContextStrokePath(ctx);
CGContextSetLineWidth(ctx, 2.0);
CGContextSetStrokeColorWithColor(ctx, [UIColor greenColor].CGColor);
CGRect rectangle2 = CGRectMake(120, 290, 70, 70);
CGContextAddRect(ctx, rectangle2);
CGContextStrokePath(ctx);
UIImage *pngWord = [UIImage imageNamed:@"wordbutton_01.png"];
[pngWord drawAtPoint:self.thePoint];
}
@end
To set the view at the beginning:
CGRect viewFrame = view.frame;
viewFrame.origin = startingRect.origin;
view.frame = viewFrame;
view is your image view, and startingRect is the rect where you want your image to start. Put this code in awakeFromNib, or wherever you set up the image.
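For instance, a minimal sketch in awakeFromNib (imageView and startingRect are illustrative names):

- (void)awakeFromNib {
    [super awakeFromNib];
    CGRect viewFrame = imageView.frame;
    viewFrame.origin = startingRect.origin; // start inside the green rect
    imageView.frame = viewFrame;
}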
To check whether the image is inside the target frame when the drag ends:
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
CGFloat middleX = CGRectGetMidX(imageView.frame);
CGFloat middleY = CGRectGetMidY(imageView.frame);
if (middleX > rectangle2.origin.x && middleX < CGRectGetMaxX(rectangle2) && middleY > rectangle2.origin.y && middleY < CGRectGetMaxY(rectangle2)) {
// the imageView is inside of the box
}
else imageView.frame = defaultFrame;
}
imageView is the image view being moved, rectangle2 is the frame you want the imageView in, and defaultFrame is where the imageView's frame should be set if it doesn't end up in the box.
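Using CGRectContainsPoint and a short animation, the snap/return behavior the question asks for might look roughly like this (rectangle2 and defaultFrame as above):

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint middle = CGPointMake(CGRectGetMidX(imageView.frame),
                                 CGRectGetMidY(imageView.frame));
    if (CGRectContainsPoint(rectangle2, middle)) {
        // more than half the image is inside: snap it into the box
        [UIView animateWithDuration:0.2 animations:^{
            imageView.center = CGPointMake(CGRectGetMidX(rectangle2),
                                           CGRectGetMidY(rectangle2));
        }];
    } else {
        // otherwise animate it back to its default location
        [UIView animateWithDuration:0.2 animations:^{
            imageView.frame = defaultFrame;
        }];
    }
}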

QuartzCore performance on clear UIView

I have a UIView where I set the background color to clear in the init method. I then draw a blue line between two points in drawRect. The first point is the very first touch on the view, and the second point changes whenever touchesMoved is called.
So one end of the line is fixed, and the other end moves with the user's finger. However, when the background color is clear, the line has quite a bit of delay and skips (is not smooth).
If I change the background color to black (just by uncommenting the setBackgroundColor line) then the movement is much smoother and it looks great, apart from the fact that you can't see the views behind it.
How can I solve this issue (on the iPad; I only have the simulator and don't currently have access to a device) so that the performance is smooth but the background is clear, letting you see the views behind?
-(id) initWithFrame:(CGRect)frame {
if (self = [super initWithFrame:frame]) {
//self.backgroundColor = [UIColor clearColor];
}
return self;
}
-(void) drawRect:(CGRect)rect {
if (!CGPointEqualToPoint(touchPoint, CGPointZero)) {
if (touchEnded) {
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClearRect(context, rect);
} else {
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClearRect(context, rect);
CGContextSetLineWidth(context, 3.0f);
CGFloat blue[4] = {0.0f, 0.659f, 1.0f, 1.0f};
CGContextSetAllowsAntialiasing(context, YES);
CGContextSetStrokeColor(context, blue);
CGContextMoveToPoint(context, startPoint.x, startPoint.y);
CGContextAddLineToPoint(context, touchPoint.x, touchPoint.y);
CGContextStrokePath(context);
CGContextSetFillColor(context, blue);
CGContextFillEllipseInRect(context, CGRectMakeForSizeAroundCenter(CGSizeMake(10, 10), touchPoint));
CGContextFillEllipseInRect(context, CGRectMakeForSizeAroundCenter(CGSizeMake(10, 10), startPoint));
}
}
}
-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
self.touchEnded = NO;
if ([[touches allObjects] count] > 0) {
startPoint = [[[touches allObjects] objectAtIndex:0] locationInView:self];
touchPoint = [[[touches allObjects] objectAtIndex:0] locationInView:self];
}
[self setNeedsDisplay];
}
-(void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
self.touchEnded = NO;
if ([[touches allObjects] count] > 0)
touchPoint = [[[touches allObjects] objectAtIndex:0] locationInView:self];
[self setNeedsDisplay];
}
-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
self.touchEnded = YES;
[self setNeedsDisplay];
[self removeFromSuperview];
}
- (void)dealloc {
[super dealloc];
}
First of all, you should test performance on a device rather than in the Simulator. Most graphics-related work behaves very differently in the Simulator, some of it much faster and some, surprisingly, much slower.
That said, if a transparent background does prove to be a performance problem on a device, it's most likely because you fill too many pixels with CGContextClearRect(). I can think of two solutions off the top of my head:
In touchesBegan:withEvent: you can take a snapshot of the underlying view(s) using CALayer's renderInContext: method and draw that snapshot as the background instead of clearing it. That way you can leave the view opaque and save the time spent compositing it with the views underneath.
You can create a one-point-tall CALayer representing the line and transform it in touchesMoved:withEvent: so that its ends are in the correct positions. The same goes for the dots representing the line ends (CAShapeLayer is great for that). There is some ugly trigonometry involved, but the performance will be excellent.
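A rough sketch of the shape-layer idea (names are illustrative; with a CAShapeLayer plus a UIBezierPath the trigonometry can be avoided entirely, since you just replace the path on each move and never touch drawRect: at all):

// created once, e.g. in touchesBegan:withEvent:
CAShapeLayer *lineLayer = [CAShapeLayer layer];
lineLayer.strokeColor = [UIColor colorWithRed:0.0 green:0.659 blue:1.0 alpha:1.0].CGColor;
lineLayer.fillColor = [UIColor clearColor].CGColor;
lineLayer.lineWidth = 3.0;
[self.layer addSublayer:lineLayer];

// updated in touchesMoved:withEvent:
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:startPoint];
[path addLineToPoint:touchPoint];
lineLayer.path = path.CGPath; // Core Animation redraws just this layer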