-(void)drawRect:(CGRect)rect is using up nearly all of the iPhone's CPU

- (void)drawRect:(CGRect)rect {
    float sliceSize = rect.size.width / imagesShownAtOnce;

    // Apply our clipping region and fill it with black
    [clippingRegion addClip];
    [clippingRegion fill];

    // Draw the 3 images (+1 for the one in between), offset by our scroll amount
    CGPoint loc;
    for (int i = 0; i < imagesShownAtOnce + 1; i++) {
        loc = CGPointMake(rect.origin.x + (i * sliceSize) - imageScroll, rect.origin.y);
        [[buttonImages objectAtIndex:i] drawAtPoint:loc];
    }

    // Draw the text region background
    [[UIColor blackColor] setFill];
    [textRegion fillWithBlendMode:kCGBlendModeNormal alpha:0.4f];

    // Draw the actual text
    CGRect textRectangle = CGRectMake(rect.origin.x + 16,
                                      rect.origin.y + rect.size.height * 4 / 5.6,
                                      rect.size.width / 1.5,
                                      rect.size.height / 3);
    [[UIColor whiteColor] setFill];
    [buttonText drawInRect:textRectangle
                  withFont:[UIFont fontWithName:@"Avenir-HeavyOblique" size:22]];
}
clippingRegion and textRegion are UIBezierPaths that give me the rounded rectangles I want (the first as a clipping region, the second as an overlay for my text).
The middle section draws the 3 images and lets them scroll along. I update this every 2 refreshes from a CADisplayLink, which invalidates the draw region by calling [self setNeedsDisplay] and also increments my imageScroll variable.
Now that the background information is out of the way, here is my issue:
It runs, and even runs smoothly. But it is using up an extremely high amount of CPU time (80%+)! How do I push this off to the GPU on the phone instead? Someone told me about CALayers, but I've never dealt with them before.

Draw each component of your drawing once into something (a view or layer) and let it hold the cached drawing. Then you just move or transform each component, and exactly as you say, it's all done by the GPU.
You could do this with individual views or with individual layers, but that doesn't really matter (a view is a layer, under the hood). The point is that there is no need to be constantly redrawing from scratch when all you really want is to move the same persistent pieces around.
Learning about CALayer would be a good idea, as it is in fact the basis of all drawing on iOS. What could be more important to know about than that?
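As a minimal sketch of that approach (ivar names such as buttonImages, imagesShownAtOnce and imageScroll are taken from your question; imageLayers, setUpImageLayers and displayLinkFired: are hypothetical names I made up for illustration, not a drop-in replacement):

    #import <QuartzCore/QuartzCore.h>

    // Assumed ivar: NSMutableArray *imageLayers;

    - (void)setUpImageLayers {
        CGFloat sliceSize = self.bounds.size.width / imagesShownAtOnce;
        imageLayers = [[NSMutableArray alloc] init];
        for (int i = 0; i < imagesShownAtOnce + 1; i++) {
            CALayer *layer = [CALayer layer];
            // Each image is decoded and uploaded once; afterwards the GPU just composites it.
            layer.contents = (id)[[buttonImages objectAtIndex:i] CGImage]; // use (__bridge id) under ARC
            layer.frame = CGRectMake(i * sliceSize, 0, sliceSize, self.bounds.size.height);
            [self.layer addSublayer:layer];
            [imageLayers addObject:layer];
        }
    }

    - (void)displayLinkFired:(CADisplayLink *)link {
        imageScroll += 1.0f;
        CGFloat sliceSize = self.bounds.size.width / imagesShownAtOnce;

        // Only the layer positions change; nothing is redrawn with Core Graphics.
        [CATransaction begin];
        [CATransaction setDisableActions:YES]; // no implicit animation, so the layers track the display link exactly
        for (NSUInteger i = 0; i < [imageLayers count]; i++) {
            CALayer *layer = [imageLayers objectAtIndex:i];
            CGRect frame = layer.frame;
            frame.origin.x = i * sliceSize - imageScroll;
            layer.frame = frame;
        }
        [CATransaction end];
    }

The rounded-rectangle clip and the text overlay can likewise become their own layers (a mask layer and a semi-transparent text layer) that are created once and never redrawn.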

Related

Blending two images and drawing resized image from two UIImageViews

I have two UIImageViews, one called imageView and the other called subView (which is a subview of imageView).
I want to blend the images on these views together, with the user being able to adjust the alpha of the blend with a pan gesture. My code works, but right now it is slow because the image is redrawn every time the pan gesture moves. Is there a faster/more efficient way of doing this?
BONUS Q: I want to allow my subView image to be drawn zoomed in. Currently I've set my subView to UIViewContentModeCenter, but I can't seem to draw a zoomed-in part of my image with this content mode. Is there any way around this?
My drawRect::
- (void)drawRect:(CGRect)rect
{
    float xCenter = self.center.x - self.currentImage1.size.width / 2.0;
    float yCenter = self.center.y - self.currentImage1.size.height / 2.0;

    subView.alpha = self.blendAmount; // Customize the opacity of the top image.

    UIGraphicsBeginImageContext(self.currentImage1.size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(c, kCGBlendModeColorBurn);
    [imageView.layer renderInContext:c];
    self.blendedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self.blendedImage drawAtPoint:CGPointMake(xCenter, yCenter)];
}
You need to use the GPU for image processing, which is far faster than the CPU rendering you're doing right now.
You can use the Core Image framework, which is very fast and easy to use but requires iOS 5, or you can use OpenGL directly, but that needs experience and some knowledge of OpenGL shaders.
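As a rough sketch of the Core Image route (method and variable names here are placeholders, not taken from your code, and the blend filter's availability should be checked against your deployment target), blending two UIImages with a colour-burn filter could look like this:

    #import <CoreImage/CoreImage.h>

    - (UIImage *)blendedImageWithBottom:(UIImage *)bottom top:(UIImage *)top
    {
        CIImage *background = [CIImage imageWithCGImage:bottom.CGImage];
        CIImage *foreground = [CIImage imageWithCGImage:top.CGImage];

        // CIColorBurnBlendMode mirrors kCGBlendModeColorBurn; verify it exists on your minimum iOS version.
        CIFilter *blend = [CIFilter filterWithName:@"CIColorBurnBlendMode"];
        [blend setValue:foreground forKey:kCIInputImageKey];
        [blend setValue:background forKey:kCIInputBackgroundImageKey];

        // In real code, create the CIContext once and reuse it; building it is expensive.
        CIContext *context = [CIContext contextWithOptions:nil];
        CGImageRef cgResult = [context createCGImage:[blend outputImage]
                                            fromRect:[background extent]];
        UIImage *result = [UIImage imageWithCGImage:cgResult];
        CGImageRelease(cgResult);
        return result;
    }

You would still regenerate the blend as the pan gesture changes, but the filtering itself runs on the GPU instead of re-rendering the layer with Core Graphics on every touch.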

speeding up drawing of an OverlayView

I'm drawing an image (2.5 MB, PNG image data, 1240 x 1240, 8-bit/color RGBA, non-interlaced) on an MKMapView by subclassing MKPolygonView and overriding the drawMapRect:zoomScale:inContext: method as follows:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context {
    CGRect theRect = [self rectForMapRect:overlayRect];
    CGRect clipRect = [self rectForMapRect:mapRect];

    CGContextClipToRect(context, clipRect);
    CGContextDrawImage(context, theRect, imageReference);
}
overlayRect is an MKMapRect defining the position and size of the image on the map (hard-coded and initialized in initWithOverlay:).
imageReference holds a reference to the image, loaded into a UIImage and obtained by calling CGImage on the UIImage instance (also in initWithOverlay:).
My MKMapView takes between 8 and 14 seconds to draw the image on the map the first time, and about the same again when zooming in to redraw the tiles at a better resolution.
That seems really long, and I'm wondering if I'm doing anything fundamentally wrong, as this is my first time with MapKit.
You can use tiled images to improve the performance. Apple used this technique at WWDC 2010; you can see the example here: https://github.com/klokantech/Apple-WWDC10-TileMap.
This blog post also explains the process: http://shawnsbits.com/blog/2010/12/23/mapkit-overlays-session-1-overlay-map/
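As a rough illustration of the tiling idea (not the full solution from the linked sample), you can crop the source bitmap down to just the piece that covers the requested mapRect instead of handing Core Graphics the whole 1240x1240 image for every tile. Depending on how your bitmap is oriented you may also need to flip the CTM before drawing; this sketch reuses the overlayRect and imageReference ivars from the question:

    - (void)drawMapRect:(MKMapRect)mapRect
              zoomScale:(MKZoomScale)zoomScale
              inContext:(CGContextRef)context
    {
        // Only the part of the overlay that intersects this tile is interesting.
        MKMapRect intersection = MKMapRectIntersection(mapRect, overlayRect);
        if (MKMapRectIsEmpty(intersection)) {
            return;
        }

        // Map the intersection from map points into pixel coordinates of the bitmap.
        CGFloat pixelsPerMapPointX = CGImageGetWidth(imageReference) / overlayRect.size.width;
        CGFloat pixelsPerMapPointY = CGImageGetHeight(imageReference) / overlayRect.size.height;
        CGRect cropRect = CGRectMake((intersection.origin.x - overlayRect.origin.x) * pixelsPerMapPointX,
                                     (intersection.origin.y - overlayRect.origin.y) * pixelsPerMapPointY,
                                     intersection.size.width * pixelsPerMapPointX,
                                     intersection.size.height * pixelsPerMapPointY);

        CGImageRef tile = CGImageCreateWithImageInRect(imageReference, cropRect);
        if (tile == NULL) {
            return;
        }

        // Draw only the cropped piece into the corresponding on-screen rect.
        CGContextClipToRect(context, [self rectForMapRect:mapRect]);
        CGContextDrawImage(context, [self rectForMapRect:intersection], tile);
        CGImageRelease(tile);
    }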

Odd problem with NSImage -lockFocusFlipped:

I'm using NSImage's -lockFocusFlipped: method to do some drawing into an image. My code looks like this:
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(256, 256)];
[image lockFocusFlipped:YES];
NSShadow *shadow = [[NSShadow alloc] init];
[shadow setShadowColor:[NSColor blackColor]];
[shadow setShadowBlurRadius:6.0];
[shadow setShadowOffset:NSMakeSize(0, 3)];
[shadow set];
NSRect shapeRect = NSMakeRect(0, 0, 256, 100);
[[NSColor redColor] set];
NSRectFill(shapeRect);
[image unlockFocus];
This code works to a certain point. I can confirm that the context is indeed flipped because [[NSGraphicsContext currentContext] isFlipped] returns YES, and also because shapeRect is drawn at the right position (using the top left corner as the origin). That said, the NSShadow does not seem to respect the flipped status of the context. Setting the shadow offset to (0, 3) should move the shadow down when the context is flipped, but it actually moves it up (which is what would happen in a standard non-flipped context).
This problem seems specific to -lockFocusFlipped, because when I'm drawing using this same code into a CALayer with a flipped coordinate system, the shadow is drawn just fine (respecting the flip). Documentation on -lockFocusFlipped also seems to be quite vague. This is all it says in the NSImage class documentation:
Prepares the image to receive drawing commands using the specified flipped state.
And I also found this note in the Snow Leopard AppKit Release Notes:
There are cases, for example drawing directly via NSLayoutManager, that require a flipped context. To cover this case, we add
- (void)lockFocusFlipped:(BOOL)flipped;
This doesn't alter the state of the image itself, only the context on which focus is locked. It means that (0,0) is at the top left and positive along the Y-axis is down in the locked context.
None of the docs seem to explain NSShadow's behaviour in this case. And through further testing, it seems NSGradient does not respect the flipped state of the drawing context used by NSImage either.
Any insight is greatly appreciated :-)
From the NSShadow class reference:
Shadows are always drawn in the default user coordinate space, regardless of any transformations applied to that space. This means that rotations, translations and other transformations of the current transformation matrix (the CTM) do not affect the resulting shadow.
And that's what flipping ultimately is: a transformation of the coordinate space (a translation up by the height combined with a negative scale along Y), which shadows simply ignore.
There's no such statement for NSGradient, so I'd suggest filing a bug about that one.
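A practical workaround, sketched here under the assumption that you only need the shadow to land in the right place (this is not a documented fix), is to negate the vertical shadow offset whenever the current context is flipped:

    NSShadow *shadow = [[NSShadow alloc] init];
    [shadow setShadowColor:[NSColor blackColor]];
    [shadow setShadowBlurRadius:6.0];

    // Shadows ignore the flip transform, so compensate by hand.
    CGFloat shadowOffsetY = 3.0;
    if ([[NSGraphicsContext currentContext] isFlipped]) {
        shadowOffsetY = -shadowOffsetY;
    }
    [shadow setShadowOffset:NSMakeSize(0, shadowOffsetY)];
    [shadow set];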

A way to implement a sideways NSTextFieldCell?

I have an NSTableView in my application with data drawn in for both the X and Y axes (i.e., every row is matched with every column). I've got the data populating the cells the way I'd like, but it looks terrible with the columns stretched out horizontally.
I would like to turn the NSTextFieldCell on its side, so that the text is written vertically instead of horizontally. I realize that I'm probably going to have to subclass the NSTextFieldCell, but I'm not sure which functions I'm going to need to override in order to accomplish what I want to do.
What functions in NSTextFieldCell draw the text itself? Is there any built-in way to draw text vertically instead of horizontally?
Well, it took a lot of digging to figure this one out, but I eventually came across NSAffineTransform, which can be used to transform the coordinate system you draw into. Once I had figured that out, I subclassed NSTextFieldCell and overrode the -drawInteriorWithFrame:inView: method to rotate the coordinate system before drawing the text.
- (void)drawInteriorWithFrame:(NSRect)cellFrame inView:(NSView *)controlView {
    // Save the current graphics state so we can return to it later
    NSGraphicsContext *context = [NSGraphicsContext currentContext];
    [context saveGraphicsState];

    // The point we will rotate around: the center of the cell
    NSSize originShift = NSMakeSize(cellFrame.origin.x + cellFrame.size.width / 2.0,
                                    cellFrame.origin.y + cellFrame.size.height / 2.0);

    // Rotate the coordinate system around the center of the cell
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform translateXBy:originShift.width yBy:originShift.height];   // Move origin to center of cell
    [transform rotateByDegrees:270];                                     // Rotate 270 degrees (reads as a 90-degree counter-clockwise turn in the flipped cell coordinates)
    [transform translateXBy:-originShift.width yBy:-originShift.height]; // Move origin back
    [transform concat];                                                  // Apply the changes to the current NSGraphicsContext

    // Create a new frame that matches the cell's position & size in the new coordinate system
    NSRect newFrame = NSMakeRect(cellFrame.origin.x - (cellFrame.size.height - cellFrame.size.width) / 2,
                                 cellFrame.origin.y + (cellFrame.size.height - cellFrame.size.width) / 2,
                                 cellFrame.size.height,
                                 cellFrame.size.width);

    // Draw the text just like we normally would, but in the new coordinate system
    [super drawInteriorWithFrame:newFrame inView:controlView];

    // Restore the original coordinate system so that other cells can draw properly
    [context restoreGraphicsState];
}
I now have an NSTextFieldCell that draws its contents sideways! By increasing the row height, I can give it enough room to look good.

NSImage rotation

I have a window with a subclass of NSView in it. Inside the view, I draw an NSImage.
I want to be able to rotate the image by 90 degrees, keeping the (new) upper left corner of the image in the upper left corner of the view. Of course, I will have to rotate the image and then translate it to put the origin back into place.
In Carbon, I found CGContextRotateCTM, which does what I want. However, I can't find the right call in ObjC. setFrameCenterRotation doesn't seem to do anything, and with setFrameRotation I can't figure out where the origin is, so I can't translate appropriately.
The image does seem to move: when I resize the window it shifts the image (or part of it; I seem to have a strange clipping issue as well), and when I scroll, it jumps to a different (and not always the same) location.
Does this make sense to anyone?
thanks
I rotate text on screen for an app I work on, and the Cocoa way of doing this (I assume you mean Cocoa, not ObjC, in your question) is to use NSAffineTransform.
Here's a snippet that should get you started:
double rotateDeg = 90;

NSAffineTransform *rotate = [[NSAffineTransform alloc] init];
NSGraphicsContext *context = [NSGraphicsContext currentContext];
[context saveGraphicsState];

[rotate rotateByDegrees:rotateDeg];
[rotate concat];

/* Your drawing code ([NSImage drawAtPoint:...] etc.) for the image goes here.
   Also, if you need to lock focus when drawing, do it here. */

[rotate release];
[context restoreGraphicsState];
The mathematics of the rotation can get a little tricky here, because what the above does is rotate the coordinate system you are drawing into. My rotation of 90 degrees is a counter-clockwise rotation.
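To keep the rotated image's top-left corner pinned to the view's top-left corner, as you asked, something along these lines should work. This is an illustrative sketch, assuming a standard non-flipped NSView with an NSImage ivar named image (the ivar name is mine, not from your code):

    - (void)drawRect:(NSRect)dirtyRect
    {
        NSGraphicsContext *context = [NSGraphicsContext currentContext];
        [context saveGraphicsState];

        NSAffineTransform *transform = [NSAffineTransform transform];
        // Calls are outermost-first: points are rotated first, then translated.
        // After a 90-degree counter-clockwise rotation the image occupies
        // [-h, 0] x [0, w], so shift it right by h and up to the top of the view.
        [transform translateXBy:image.size.height
                            yBy:self.bounds.size.height - image.size.width];
        [transform rotateByDegrees:90];
        [transform concat];

        [image drawAtPoint:NSZeroPoint
                  fromRect:NSZeroRect
                 operation:NSCompositeSourceOver
                  fraction:1.0];

        [context restoreGraphicsState];
    }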