iOS Newbie: Need help understanding Core Graphics Layers

A bit of a newbie question for iOS: I've just started learning about Core Graphics, but I'm still not sure how it's integrated into the actual view.
So I have a method that creates a CGLayerRef, draws some Bezier curves on it, and then returns the CGLayer; here's a very condensed version of my procedure:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGLayerRef layer = CGLayerCreateWithContext(ctx, CGSizeMake(300.0,300.0), NULL);
CGContextRef layerCtx = CGLayerGetContext(layer);
Then I add some bezier fills that are dynamically generated by parsing an SVG file.
I've already verified that the SVG is being parsed properly, and I am using this code http://ponderwell.net/2011/05/converting-svg-paths-to-objective-c-paths/ to parse the path instructions (from the path tags' d attributes) into UIBezierPath objects.
So I start by creating the fill:
CGColorRef color = [UIColor colorWithRed:red green:green blue:blue alpha:1.0].CGColor;
CGContextSetFillColorWithColor(layerCtx, color);
Then I add the UIBezierPath:
SvgToBezier *bezier = [[SvgToBezier alloc] initFromSVGPathNodeDAttr:pathAttr rect:CGRectMake(0, 0, 300, 300)];
UIBezierPath *bezierPath = bezier.bezier;
CGContextAddPath(layerCtx, bezierPath.CGPath);
CGContextDrawPath(layerCtx, kCGPathFill);
And finally I return the CGLayerRef, like so:
return layer;
My question: now that I have this layer ref, I want to draw it to a UIView (or whatever type of object would be best to draw to).
The ultimate goal is to have the layer show up in a UITableViewCell.
Also, the method I've described above exists in a class I've created from scratch (which inherits from NSObject). There is a separate method which uses this one to assign the layer ref to a property of the class instance.
I would be creating this instance from the view controller connected to my table view, then hopefully adding the layer to a UITableViewCell within the tableView:cellForRowAtIndexPath: method.
I hope I included all of the relevant information and I greatly appreciate your help.
Cheers

Short answer: a CGLayerRef is not a CALayer, so it can't be attached to a view directly (UIView's layer property is read-only in any case). Instead, draw the CGLayer into the view's graphics context from within drawRect:, e.g.:
CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), self.bounds, myLayer);
The long answer can be found in the excellent Quartz 2D Programming Guide.
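For illustration, here's a minimal sketch of a UIView subclass that does this (shapeSource and newLayerWithContext: are hypothetical names for your NSObject subclass and its layer-building method). One caveat with the code in the question: UIGraphicsGetCurrentContext() only returns a valid context while UIKit is asking you to draw, so create the CGLayer from within drawRect: rather than at init time:

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // 'shapeSource' is your NSObject drawing class; 'newLayerWithContext:' is
    // the method from the question that builds and returns the CGLayerRef
    // (both names hypothetical)
    CGLayerRef shapeLayer = [self.shapeSource newLayerWithContext:ctx];
    CGContextDrawLayerInRect(ctx, self.bounds, shapeLayer);
    CGLayerRelease(shapeLayer);
}

In tableView:cellForRowAtIndexPath: you would then add an instance of this view to the cell's contentView, rather than adding the layer to the cell directly.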

Related

How to customize the input of drawRect

Here's my drawRect code:
- (void)drawRect:(CGRect)rect {
    if (currentLayer) {
        [currentLayer removeFromSuperlayer];
    }
    if (currentPath) {
        currentLayer = [[CAShapeLayer alloc] init];
        currentLayer.frame = self.bounds;
        currentLayer.path = currentPath.CGPath;
        if ([SettingsManager shared].isColorInverted) {
            currentLayer.strokeColor = [UIColor blackColor].CGColor;
        } else {
            currentLayer.strokeColor = [UIColor whiteColor].CGColor;
        }
        currentLayer.strokeColor = _strokeColorX.CGColor;
        currentLayer.fillColor = [UIColor clearColor].CGColor;
        currentLayer.lineWidth = self.test;
        currentLayer.contentsGravity = kCAGravityCenter;
        [self.layer addSublayer:currentLayer];
        //currentLayer.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
    }
    //[currentPath stroke];
}
"The portion of the view’s bounds that needs to be updated"
That's from the Apple dev docs.
I am dealing with a UIView and a device called a slate; the slate can record pencil drawings into my iOS app. I managed to get it working on the entire view, but I don't want the entire view to be filled with the slate input. Instead, I'd like the drawable UIView to have a height of the phone's screen height minus 40px. Makes sense?
I found out that if I set the frame I have as currentLayer to new bounds, the drawable area of the screen is resized accordingly.
I think you're mixing up the meaning of the various rectangles in UIView. The drawRect: parameter designates a partial section of the view that needs to be redrawn. This can be because you called setNeedsDisplayInRect:, or because iOS thinks it needs to be redrawn (e.g. because your app's drawings were ignored while the screen was locked, and now the user has unlocked the screen and current drawings are needed).
This rectangle has nothing to do with the size at which your contents are drawn. The size of the area your view is drawn in is controlled by its frame and bounds, the former of which is usually controlled using Auto Layout (i.e. layout constraint objects).
In general, while in drawRect:, you look at the bounds of your view to get the full rectangle to draw in. The parameter to drawRect:, on the other hand, is there to allow you to optimize your drawing to only redraw the parts that actually changed.
Also, it is in general a bad idea to manipulate the view and layer hierarchies from inside drawRect:. The UIView drawing mechanism expects you to only draw your current view's state there. Since it recursively walks the list of views and subviews to call drawRect:, changing the list of views or layers from within it can cause weird side effects, like views being skipped until the next redraw.
Create and add new layers in your state-changing methods or event handling methods, and position them using auto layout or from inside layoutSubviews.
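A rough sketch of that reorganization, using the same currentLayer/currentPath ivars as the drawRect: code above (the 40-point inset matching the question's requirement is an assumption):

- (void)layoutSubviews
{
    [super layoutSubviews];
    if (currentLayer == nil) {
        currentLayer = [[CAShapeLayer alloc] init];
        currentLayer.fillColor = [UIColor clearColor].CGColor;
        [self.layer addSublayer:currentLayer];
    }
    // Restrict the drawable area to the view's width and its height minus
    // 40 points (the "phonesize - 40px" requirement from the question)
    currentLayer.frame = CGRectMake(0, 0, self.bounds.size.width,
                                    self.bounds.size.height - 40);
    currentLayer.path = currentPath.CGPath;
}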

Issue with "renderInContext" with OpenGL views

I have a problem with OpenGL views. I have two OpenGL views; the second view is added as a subview of the main view. The two OpenGL views are drawn in two different OpenGL contexts. I need to capture the screen with the two OpenGL views.
The issue is that if I try to render one CAEAGLLayer in a context as below:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 1*(self.frame.size.width*0.5), 1*(self.frame.size.height*0.5));
CGContextScaleCTM(context, 3, 3);
CGContextTranslateCTM(context, abcd, abcd);
CAEAGLLayer *eaglLayer = (CAEAGLLayer*) self.myOwnView.layer;
[eaglLayer renderInContext:context];
it does not work. If I inspect the context (by writing the output to an image), the contents of the OpenGL layer are missing, but I do find the toolbar and the 2D images attached to the view in the output image. I am not sure what the problem is. Please help.
I had a similar problem and found a much more elegant solution. Basically, you subclass CAEAGLLayer and add your own implementation of renderInContext: that simply asks the OpenGL view to render its contents using glReadPixels. The beauty is that now you can call renderInContext: on any layer in the hierarchy, and the result is a fully composed, perfect-looking screenshot that includes your OpenGL views in it!
Our renderInContext: in the subclassed CAEAGLLayer is:
- (void)renderInContext:(CGContextRef)ctx
{
    [super renderInContext:ctx];
    [self.delegate renderInContext:ctx];
}
Then, in the OpenGL view we override layerClass so that it returns our subclass instead of the plain vanilla CAEAGLLayer:
+ (Class)layerClass
{
    return [MyCAEAGLLayer class];
}
We add a method to the view to actually render its contents into the context. Note that this code MUST run after your GL view has been rendered, but before you call presentRenderbuffer:, so that the renderbuffer still contains your frame. Otherwise the resulting image will most likely be empty (you may see different behavior between the device and the simulator on this particular issue).
- (void)renderInContext:(CGContextRef)context
{
    GLint backingWidth, backingHeight;
    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer, which is
    // already bound at this point, this call is redundant, but it is needed
    // if you're dealing with multiple renderbuffers.
    // Note: replace "_colorRenderBuffer" with the actual name of the
    // renderbuffer object defined in your class.
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));
    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to
    // ignore the alpha channel; otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);
    CGFloat scale = self.contentScaleFactor;
    NSInteger widthInPoints, heightInPoints;
    widthInPoints = width / scale;
    heightInPoints = height / scale;
    // The UIKit coordinate system is upside down relative to the GL/Quartz
    // coordinate system; drawing into the flipped destination context turns
    // the image right side up. The destination area is measured in POINTS.
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
}
Finally, in order to grab a screenshot you use renderInContext: in the usual fashion. Of course the beauty is that you don't need to grab the OpenGL view directly. You can grab one of the superviews of the OpenGL view and get a composed screenshot that includes the OpenGL view along with anything else next to it or on top of it:
UIGraphicsBeginImageContextWithOptions(superviewToGrab.bounds.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// This recursively calls renderInContext: on all the sublayers, including
// your OpenGL layer(s)
[superviewToGrab.layer renderInContext:context];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This question has already been settled, but I wanted to note that Idoogy's answer is actually dangerous and a poor choice for most use cases.
Rather than subclass CAEAGLLayer and create a new delegate object, you can use the existing delegate methods which accomplish exactly the same thing. For example:
- (void) drawLayer:(CALayer *) layer inContext:(CGContextRef)ctx;
is a great method to implement in your GL-based views. You can implement it in much the same way he suggests, using glReadPixels; just make sure to set the retained-backing property on your view to YES, so that you can call the above method at any time without having to worry about the frame having been invalidated by presentation for display.
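For instance, a sketch of that override, reusing the glReadPixels-based -renderInContext: method from the answer above (so this assumes your GL view implements that method):

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    // UIView is already its layer's delegate, so overriding this standard
    // callback is enough; no CAEAGLLayer subclass is required. Retained
    // backing (kEAGLDrawablePropertyRetainedBacking = YES) ensures the
    // renderbuffer still holds the last presented frame.
    [self renderInContext:ctx];
}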
Subclassing CAEAGLLayer messes with the existing UIView / CALayer delegate relationship: in most cases, setting the delegate object on your custom layer will result in your UIView being excluded from the view hierarchy. Thus, code like:
customLayerView = [[CustomLayerView alloc] initWithFrame:someFrame];
[someSuperview addSubview:customLayerView];
will result in a weird, one-way superview-subview relationship, since the delegate methods that UIView relies on won't be implemented. (Your superview will still have the sublayer from your custom view, though.)
So, instead of subclassing CAEAGLLayer, just implement some of the delegate methods. Apple lays it out for you here: https://developer.apple.com/library/ios/documentation/QuartzCore/Reference/CALayerDelegate_protocol/Reference/Reference.html#//apple_ref/doc/uid/TP40012871
All the best,
Sam
I think http://developer.apple.com/library/ios/#qa/qa1704/_index.html provides what you want.

Drawing a layer shadow to a PDF context

I have a bunch of UIViews to which I add shadows via their layers, in their drawRect method:
self.layer.shadowPath = path;
self.layer.shadowColor = [[UIColor blackColor] CGColor];
self.layer.shadowOpacity = .6;
self.layer.shadowOffset = CGSizeMake(2,3);
self.layer.shadowRadius = 2;
This works well, but my problem is I also need to create a PDF with those views.
I'm doing this by creating a PDF context and passing it to the drawing method so that the drawing happens in the PDF context.
That also works well, except that the shadows are not rendered in the PDF. I've experimented with a couple approaches, but haven't managed to find a proper, easy way to get those shadows to appear where they belong in the PDF.
Would anyone know how to do this?
You will need to make the relevant Core Graphics calls in drawRect: to draw the shadows yourself, rather than relying on the CALayer properties.
Check out the Apple docs on shadows.
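A minimal sketch of what that could look like, mirroring the CALayer values above (path stands for whatever CGPath you currently assign to shadowPath):

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);
    // Mirror the layer settings: 60% black, offset (2,3), blur radius 2
    CGContextSetShadowWithColor(ctx, CGSizeMake(2, 3), 2,
                                [UIColor colorWithWhite:0 alpha:0.6].CGColor);
    // Filling the path while the shadow parameters are set draws the shadow
    // into whatever context is current, including a PDF context
    CGContextAddPath(ctx, path);
    CGContextFillPath(ctx);
    CGContextRestoreGState(ctx);
}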

Saving photo with added overlay to photo library

I'm making an app where the user takes a photo, an overlay is placed over the image, and the user can then save the image or upload it to Facebook or other sites.
I have managed to get the app to take the photo, and to make the overlay I am using a UIImageView, which is placed over the top of the photo.
I'm not sure how exactly I can export the image, with the overlay, as one file, to the photo library, or to Facebook.
This should give you the idea. This code may need a tweak or two, though. (I admit that I haven't run it.)
Your UIImageView's base layer can render itself into a Core Graphics bitmap context (this will include all the sublayers, of course), which can then provide you with a new UIImage.
UIImage *finalImage;
// Get the layer
CALayer *layer = [myImageView layer];
// Create a bitmap context and make it the current context
UIGraphicsBeginImageContext(layer.bounds.size);
// Get a reference to the context
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Ask the layer to render itself and its sublayers into the context
// (drawInContext: would draw only the base layer, without its sublayers)
[layer renderInContext:ctx];
// Then ask the context itself for the image
finalImage = UIGraphicsGetImageFromCurrentImageContext();
// Finally, pop the context off the stack
UIGraphicsEndImageContext();
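From there, writing the composed image to the photo library is a one-liner (passing nil/NULL skips the completion callback); posting to Facebook would go through whichever sharing API you're using:

UIImageWriteToSavedPhotosAlbum(finalImage, nil, NULL, NULL);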
These functions are all described in the UIKit Function Reference.
There's also a slightly more complicated way that gives you more control over things like color, which involves creating a CGBitmapContext and getting a CGImageRef, from which you can again produce a UIImage. That is described in the Quartz 2D Programming Guide, section "Creating a Bitmap Graphics Context".
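As a rough sketch of that route (overlayedView is a hypothetical name for the view containing the photo plus overlay; the color space argument is where you get the extra control mentioned above):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // or a color space of your choosing
size_t width = (size_t)overlayedView.bounds.size.width;
size_t height = (size_t)overlayedView.bounds.size.height;
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
// Quartz's origin is bottom-left; flip the context so UIKit content
// renders upright
CGContextTranslateCTM(ctx, 0, height);
CGContextScaleCTM(ctx, 1, -1);
[overlayedView.layer renderInContext:ctx];
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *composed = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);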

How do I render a view which contains Core Animation layers to a bitmap?

I am using an NSView to host several Core Animation CALayer objects. What I want to be able to do is grab a snapshot of the view's current state as a bitmap image.
This is relatively simple with a normal NSView using something like this:
void ClearBitmapImageRep(NSBitmapImageRep* bitmap) {
    unsigned char* bitmapData = [bitmap bitmapData];
    if (bitmapData != NULL)
        bzero(bitmapData, [bitmap bytesPerRow] * [bitmap pixelsHigh]);
}

@implementation NSView (Additions)

- (NSBitmapImageRep*)bitmapImageRepInRect:(NSRect)rect
{
    NSBitmapImageRep* imageRep = [self bitmapImageRepForCachingDisplayInRect:rect];
    ClearBitmapImageRep(imageRep);
    [self cacheDisplayInRect:rect toBitmapImageRep:imageRep];
    return imageRep;
}

@end
However, when I use this code, the Core Animation layers are not rendered.
I have investigated CARenderer, as it appears to do what I need; however, I cannot get it to render my existing layer tree. I tried the following:
NSOpenGLPixelFormatAttribute att[] =
{
    NSOpenGLPFAWindow,
    NSOpenGLPFADoubleBuffer,
    NSOpenGLPFAColorSize, 24,
    NSOpenGLPFAAlphaSize, 8,
    NSOpenGLPFADepthSize, 24,
    NSOpenGLPFANoRecovery,
    NSOpenGLPFAAccelerated,
    0
};
NSOpenGLPixelFormat *pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:att];
NSOpenGLView *openGLView = [[NSOpenGLView alloc] initWithFrame:[self frame] pixelFormat:pixelFormat];
NSOpenGLContext *oglctx = [openGLView openGLContext];
CARenderer *renderer = [CARenderer rendererWithCGLContext:[oglctx CGLContextObj] options:nil];
renderer.layer = myContentLayer;
[renderer render];
NSBitmapImageRep *bitmap = [openGLView bitmapImageRepInRect:[openGLView bounds]];
However, when I do this I get an exception:
CAContextInvalidLayer -- layer <CALayer: 0x1092ea0> is already attached to a context
I'm guessing that this must be because the layer tree is hosted in my NSView and therefore attached to its context. I don't understand how I can detach the layer tree from the NSView in order to render it to a bitmap, and it's non-trivial in this case to create a duplicate layer tree.
Is there some other way to get the CALayers to render to a bitmap? I can't find any sample code anywhere for doing this, in fact I can't find any sample code for CARenderer at all.
There is a great post on "Cocoa is my girlfriend" about recording Core Animations. The author captures the whole animation into a movie, but you could use the part where he grabs a single frame.
Jump to the "Obtaining the Current Frame" section in this article:
http://www.cimgf.com/2009/02/03/record-your-core-animation-animation/
The basic idea is:
1. Create a CGContext
2. Use CALayer's renderInContext:
3. Create an NSBitmapImageRep from the context (using CGBitmapContextCreateImage and NSBitmapImageRep's initWithCGImage:)
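A minimal sketch of those three steps (rootLayer is a stand-in for the root of your layer tree; Retina scaling is ignored for brevity):

// Step 1: create a bitmap CGContext the size of the layer
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,
                                         (size_t)rootLayer.bounds.size.width,
                                         (size_t)rootLayer.bounds.size.height,
                                         8, 0, colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
// Step 2: let the layer tree render itself into the context
[rootLayer renderInContext:ctx];
// Step 3: turn the context into a CGImage, then an NSBitmapImageRep
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);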
Update:
I just read that the renderInContext: method does not support all kinds of layers on Mac OS X 10.5. It does not work for the following layer classes:
QCCompositionLayer
CAOpenGLLayer
QTMovieLayer
If you want sample code for how to render a CALayer hierarchy to an NSImage (or UIImage for the iPhone), you can look at the Core Plot framework's CPLayer and its -imageOfLayer method. We actually created a rendering pathway that is independent of the normal -renderInContext: process used by CALayer, because the normal way does not preserve vector elements when generating PDF representations of layers. That's why you'll see the -recursivelyRenderInContext: method in this code.
However, this won't help you if you are trying to capture from any of the layer types mentioned by weichsel (QCCompositionLayer, CAOpenGLLayer, or QTMovieLayer).