Issue with "renderInContext:" with OpenGL views - Objective-C

I have a problem with OpenGL views. I have two OpenGL views; the second is added as a subview of the main view, and the two views draw into two different OpenGL contexts. I need to capture the screen with both OpenGL views in it.
The issue is that if I try to render one CAEAGLLayer in a context as below:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 1*(self.frame.size.width*0.5), 1*(self.frame.size.height*0.5));
CGContextScaleCTM(context, 3, 3);
CGContextTranslateCTM(context, abcd, abcd);
CAEAGLLayer *eaglLayer = (CAEAGLLayer*) self.myOwnView.layer;
[eaglLayer renderInContext:context];
it does not work. If I inspect the context (rendered out as an image), the contents of the OpenGL layer are missing, although the toolbar and the 2D images attached to the view do show up in the output image. I am not sure what the problem is. Please help.

I had a similar problem and found a much more elegant solution. Basically, you subclass CAEAGLLayer and add your own implementation of renderInContext: that simply asks the OpenGL view to render its contents using glReadPixels. The beauty is that you can then call renderInContext: on any layer in the hierarchy, and the result is a fully composed, perfect-looking screenshot that includes your OpenGL views!
Our renderInContext: in the subclassed CAEAGLLayer is:
- (void)renderInContext:(CGContextRef)ctx
{
    [super renderInContext:ctx];
    [self.delegate renderInContext:ctx];
}
Then, in the OpenGL view we replace layerClass so that it returns our subclass instead of the plain vanilla CAEAGLLayer:
+ (Class)layerClass
{
    return [MyCAEAGLLayer class];
}
We add a method in the view to actually render the contents of the view into the context. Note that this code MUST run after your GL view has been rendered but before you call presentRenderbuffer:, so that the render buffer still contains your frame. Otherwise the resulting image will most likely be empty (and you may see different behavior between the device and the simulator on this particular issue).
- (void)renderInContext:(CGContextRef)context
{
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is
    // already bound at this point, this call is redundant, but it is needed
    // if you're dealing with multiple renderbuffers.
    // Note: replace "_colorRenderbuffer" with the actual name of the
    // renderbuffer object defined in your class.
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to
    // ignore the alpha channel; otherwise use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    CGFloat scale = self.contentScaleFactor;
    NSInteger widthInPoints = width / scale;
    NSInteger heightInPoints = height / scale;

    // The UIKit coordinate system is upside down relative to the GL/Quartz
    // coordinate system; drawing the CGImage into the flipped UIKit context
    // reverses the flip. The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
}
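For context, here is a minimal sketch of where the capture fits in a typical draw loop (the names _framebuffer, _context, _screenshotRequested, and takeScreenshot are hypothetical stand-ins, not part of the answer above):
- (void)drawView
{
    glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
    // ... issue this frame's GL drawing commands ...

    if (_screenshotRequested) {
        // The renderbuffer holds the finished frame but has not been
        // presented yet, so glReadPixels in -renderInContext: will see it.
        [self takeScreenshot]; // runs the UIGraphics/renderInContext: code shown below
        _screenshotRequested = NO;
    }

    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);
    [_context presentRenderbuffer:GL_RENDERBUFFER];
}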
Finally, to grab a screenshot you use renderInContext: in the usual fashion. The beauty, of course, is that you don't need to grab the OpenGL view directly. You can grab one of its superviews and get a composed screenshot that includes the OpenGL view along with anything else next to it or on top of it:
UIGraphicsBeginImageContextWithOptions(superviewToGrab.bounds.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
[superviewToGrab.layer renderInContext:context]; // recursively calls renderInContext: on all the sublayers, including your OpenGL layer(s)
// Keep the UIImage itself; its CGImage is only valid while the (autoreleased) UIImage is alive
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

This question has already been settled, but I wanted to note that Idoogy's answer is actually dangerous and a poor choice for most use cases.
Rather than subclass CAEAGLLayer and create a new delegate object, you can use the existing delegate methods which accomplish exactly the same thing. For example:
- (void) drawLayer:(CALayer *) layer inContext:(CGContextRef)ctx;
is a great method to implement in your GL-based views. You can implement it in much the same way as he suggests, using glReadPixels; just make sure to set the retained-backing property on your view to YES, so that you can call the above method at any time without having to worry about the contents having been invalidated by presentation for display.
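A minimal sketch of that approach (assuming a -renderInContext: helper like the glReadPixels-based method in the answer above; setupLayer is a hypothetical setup method):
// In the OpenGL view. With retained backing, the renderbuffer contents
// survive presentRenderbuffer:, so the snapshot can be taken at any time.
- (void)setupLayer
{
    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
    eaglLayer.drawableProperties = @{
        kEAGLDrawablePropertyRetainedBacking : @YES,
        kEAGLDrawablePropertyColorFormat : kEAGLColorFormatRGBA8
    };
}

// CALayer delegate method; a UIView is automatically the delegate of its
// own layer, so no subclassing or delegate wiring is needed.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    [self renderInContext:ctx]; // the glReadPixels-based helper
}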
Subclassing CAEAGLLayer messes with the existing UIView / CALayer delegate relationship: in most cases, setting the delegate object on your custom layer will result in your UIView being excluded from the view hierarchy. Thus, code like:
customLayerView = [[CustomLayerView alloc] initWithFrame:someFrame];
[someSuperview addSubview:customLayerView];
will result in a weird, one-way superview-subview relationship, since the delegate methods that UIView relies on won't be implemented. (Your superview will still have the sublayer from your custom view, though.)
So, instead of subclassing CAEAGLLayer, just implement some of the delegate methods. Apple lays it out for you here: https://developer.apple.com/library/ios/documentation/QuartzCore/Reference/CALayerDelegate_protocol/Reference/Reference.html#//apple_ref/doc/uid/TP40012871
All the best,
Sam

I think http://developer.apple.com/library/ios/#qa/qa1704/_index.html provides what you want.

Related

Optimize CGContextDrawRadialGradient in drawRect:

In my iPad app, I have a UITableView that alloc/inits a UIView subclass every time a new cell is selected. I've overridden drawRect: in this UIView to draw a radial gradient and it works fine, but performance is suffering - when a cell is tapped, the UIView takes substantially longer to draw a gradient programmatically as opposed to using a .png for the background. Is there any way to "cache" my drawRect: method or the gradient it generates to improve performance? I'd rather use drawRect: instead of a .png. My method looks like this:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    size_t gradLocationsNum = 2;
    CGFloat gradLocations[2] = {0.0f, 1.0f};
    CGFloat gradColors[8] = {0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.5f};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, gradColors, gradLocations, gradLocationsNum);
    CGColorSpaceRelease(colorSpace);
    CGPoint gradCenter = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    float gradRadius = MIN(self.bounds.size.width, self.bounds.size.height);
    CGContextDrawRadialGradient(context, gradient, gradCenter, 0, gradCenter, gradRadius, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(gradient);
}
Thanks!
You can render graphics into a context and then store that as a UIImage. This answer should get you started:
drawRect: is a method on UIView used to draw the view itself, not to pre-create graphic objects.
Since it seems that you want to create shapes to store them and draw later, it appears reasonable to create the shapes as UIImage and draw them using UIImageView. UIImage can be stored directly in an NSArray.
To create the images, do the following (on the main queue; not in drawRect:):
1) create a bitmap context
UIGraphicsBeginImageContextWithOptions(size, opaque, scale);
2) get the context
CGContextRef context = UIGraphicsGetCurrentContext();
3) draw whatever you need
4) export the context into an image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
5) destroy the context
UIGraphicsEndImageContext();
6) store the reference to the image
[yourArray addObject:image];
Repeat for each shape you want to create.
For details, see the documentation for the functions mentioned above. To get a better understanding of the difference between drawing in drawRect: and in an arbitrary place in your program, and of working with contexts in general, I recommend reading the Quartz 2D Programming Guide, especially the section on Graphics Contexts.
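Putting those steps together for the gradient in the question, a sketch could look like this (cachedGradientImageWithSize: is a hypothetical helper; render once, keep the UIImage, and display it in a UIImageView instead of redrawing in drawRect:):
- (UIImage *)cachedGradientImageWithSize:(CGSize)size
{
    // 1) + 2) Create a bitmap context and get a reference to it
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0); // 0.0 = use screen scale
    CGContextRef context = UIGraphicsGetCurrentContext();

    // 3) Draw the radial gradient from the question
    size_t gradLocationsNum = 2;
    CGFloat gradLocations[2] = {0.0f, 1.0f};
    CGFloat gradColors[8] = {0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.5f};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, gradColors, gradLocations, gradLocationsNum);
    CGColorSpaceRelease(colorSpace);
    CGPoint gradCenter = CGPointMake(size.width / 2, size.height / 2);
    CGFloat gradRadius = MIN(size.width, size.height);
    CGContextDrawRadialGradient(context, gradient, gradCenter, 0, gradCenter, gradRadius, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(gradient);

    // 4) + 5) Export the image and destroy the context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}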

Draw rounded linear gradient (or extended radial gradient) with CoreGraphics

I want to do some custom drawing with CoreGraphics. I need a linear gradient on my view, but the view is a rounded rectangle, so I want my gradient to be rounded at the corners as well. You can see what I want to achieve in the image below:
So is this possible to implement in CoreGraphics or some other programmatic and easy way?
Thank you.
I don't think there is an API for that, but you can get the same effect if you first draw a radial gradient into, say, an (N+1)x(N+1) bitmap context, and then convert the image from that context to a resizable image with the left and right caps set to N.
Pseudocode:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(N+1,N+1), NO, 0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
// <draw the gradient into 'context'>
UIImage* gradientBase = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage* gradientImage = [gradientBase resizableImageWithCapInsets:UIEdgeInsetsMake(0,N,0,N)];
In case you want the image to scale vertically as well, you just have to set the caps to UIEdgeInsetsMake(N,N,N,N).
I just want to add more sample code for this technique, as some things weren't obvious to me at first. Maybe it will be useful for somebody:
So, let's say we have our custom view class, and in its drawRect: method we put this:
// Defining the rect in which to draw
CGRect drawRect = self.bounds;
Float32 gradientSize = drawRect.size.height; // The size of the original radial gradient
CGPoint center = CGPointMake(0.5f * gradientSize, 0.5f * gradientSize); // Center of the gradient

// Creating the gradient (from opaque black to mostly transparent white)
Float32 colors[4] = {0.f, 1.f, 1.f, 0.2f}; // {gray, alpha, gray, alpha}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, colors, nil, 2);
CGColorSpaceRelease(colorSpace);

// Starting an image context and drawing the gradient into it
UIGraphicsBeginImageContextWithOptions(CGSizeMake(gradientSize, gradientSize), NO, 1.f);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawRadialGradient(context, gradient, center, 0.f, center, center.x, 0); // Drawing the gradient
CGGradientRelease(gradient);
UIImage *gradientImage = UIGraphicsGetImageFromCurrentImageContext(); // Retrieving the image from the context
UIGraphicsEndImageContext(); // Ending the process

// Leaving a 2-pixel-wide area in the center which will be tiled to fill the whole area
gradientImage = [gradientImage resizableImageWithCapInsets:UIEdgeInsetsMake(0.f, center.x - 1.f, 0.f, center.x - 1.f)];

// Drawing the image into the view's frame
[gradientImage drawInRect:drawRect];
That's all. Also, if you're never going to change the gradient while the app is running, you may want to put everything except the last line in awakeFromNib and then, in drawRect:, just draw gradientImage into the view's frame. Don't forget to retain gradientImage in that case.

crop image from certain portion of screen in iphone programmatically

NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
CGSize contextSize = CGSizeMake(320, 400);
UIGraphicsBeginImageContext(contextSize);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setSaveImage:savedImg];
[pool drain];
to extract some part of the image from the main screen.
With UIGraphicsBeginImageContext I can only pass a size. Is there any way to use a CGRect, or some other way, to extract an image from a specific portion of the screen, i.e. (x, y, 320, 400), something like this?
Hope this helps:
// Create a new image context sized to the crop (retina safe)
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
// Create the rect to draw the existing image into; x and y should be the
// negated origin of the crop region so that it lands at the context's origin
CGRect rect = CGRectMake(-x, -y, existingImage.size.width, existingImage.size.height);
// Draw the image into the rect
[existingImage drawInRect:rect];
// Save the result, ending the image context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This question is really a duplicate of several other questions including this: How to crop the UIImage?, but since it took me a while to find a solution, I will cross post again.
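For reference, the plain Core Graphics crop that the linked duplicate describes looks roughly like this (a sketch; x and y are the crop origin in pixels, and this route ignores the image's scale and orientation):
CGRect cropRect = CGRectMake(x, y, 320, 400); // region to keep, in pixels
CGImageRef croppedRef = CGImageCreateWithImageInRect(existingImage.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);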
In my quest for a solution that I could more easily understand (and that was written in Swift), I arrived at this:
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import AVFoundation
import ImageIO

class Image {
    class func crop(image: UIImage, crop source: CGRect, aspect: CGSize, outputExtent: CGSize) -> UIImage {
        let sourceRect = AVMakeRectWithAspectRatioInsideRect(aspect, source)
        let targetRect = AVMakeRectWithAspectRatioInsideRect(aspect, CGRect(origin: CGPointZero, size: outputExtent))

        let opaque = true, deviceScale: CGFloat = 0.0 // use the scale of the device's main screen
        UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)

        let scale = max(
            targetRect.size.width / sourceRect.size.width,
            targetRect.size.height / sourceRect.size.height)

        let drawRect = CGRect(
            x: -sourceRect.origin.x * scale,
            y: -sourceRect.origin.y * scale,
            width: image.size.width * scale,
            height: image.size.height * scale)
        image.drawInRect(drawRect)

        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage
    }
}
There were a couple of things I found confusing: the separate concerns of cropping and resizing. Cropping is handled by the origin of the rect that you pass to drawInRect, and scaling is handled by the size portion. In my case, I needed to relate the size of the cropping rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and it needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think you could also use this code to handle cropping/zooming out by explicitly passing the aforementioned scale factor as the scale parameter of the image context. By default, UIKit applies a multiplier based on the screen resolution.
Finally, it should be noted that this UIKit approach is higher-level than the Core Graphics / Quartz and Core Image approaches, and it seems to handle image-orientation issues. It is also pretty fast, second only to ImageIO, according to this post: http://nshipster.com/image-resizing/

CGAffineTransformMake - how to save the state of UIImageView to further process it

I am using CGAffineTransformMake to flip a UIImageView vertically. It works fine, but it does not seem to save the new flipped state of the UIImageView, because when I try to flip it a second time (by executing the line of code below again) it just does not work.
shape.transform = CGAffineTransformMake(1, 0, 0, -1, 0, 0); // vertical flip
help please.
Thanks in advance.
Kedar
Transforms are not automatically additive/cumulative as you might expect. Assigning a transform just transforms the target once.
Each transform is highly specific. If you apply a rotation transform that rotates a view +45 degrees, you will see it rotate only once. Applying the same transform again does not rotate the view an additional +45 degrees; all subsequent applications of the same transform produce no visible effect, because the view is already rotated +45 degrees and that is all that transform will ever do.
To make transforms cumulative you have to apply the new transform to the existing transform instead of just replacing it. So, as mentioned previously, for each subsequent rotation you use:
shape.transform = CGAffineTransformRotate(shape.transform, M_PI);
This concatenates the new transform with the existing one. If you add a +45 degree rotation in this manner, the view will rotate an additional +45 degrees each time it is applied.
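Applied to the vertical flip from the original question, the cumulative version would be (a sketch):
// Concatenate a vertical flip onto whatever transform the view already has;
// calling this a second time flips the view back to its original state.
shape.transform = CGAffineTransformScale(shape.transform, 1.0, -1.0);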
I had the same problem as you, and I found the solution! I wanted to rotate a UIImageView, because I have an animation on it. To save the image I use this method:
void CGContextConcatCTM(CGContextRef c, CGAffineTransform transform)
The transform parameter is the transform of your UIImageView, so anything you have done to the image view will be applied to the image as well. I have written a category method on UIImage:
- (UIImage *)imageRotateByTransform:(CGAffineTransform)transform
{
    // Calculate the size of the rotated view's containing box for our drawing space
    UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.size.width, self.size.height)];
    rotatedViewBox.transform = transform;
    CGSize rotatedSize = rotatedViewBox.frame.size;
    [rotatedViewBox release];

    // Create the bitmap context
    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();

    // Move the origin to the middle of the image so we rotate and scale around the center
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);

    // Rotate the image context using the transform
    CGContextConcatCTM(bitmap, transform);

    // Now, draw the rotated/scaled image into the context
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Hope this will help you.
If you just want to reverse the effects of a previous transformation, you may like to look into setting the shape.transform property to the value CGAffineTransformIdentity.
When you set a view's transform property you are replacing any existing transform it has, not adding to it. So if you assign a transform which causes a rotation, it will forget about any flip you had previously configured.
If you want to add an additional rotation or scaling operation to a view which you have previously transformed you should investigate the functions which allow you to specify an existing transform.
I.e. instead of using
shape.transform = CGAffineTransformMakeRotation(M_PI);
which replaces the existing transform with the specified rotation, you could use
shape.transform = CGAffineTransformRotate(shape.transform, M_PI);
this applies the rotation to the existing transform (what ever that may be) and then assigns it to the view. Take a look at Apple's documentation for CGAffineTransformRotate, it may clarify things a little.
BTW, the documentation says: "If you don’t plan to reuse an affine transform, you may want to use CGContextScaleCTM, CGContextRotateCTM, CGContextTranslateCTM, or CGContextConcatCTM."
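For example, the same flip expressed as CTM manipulation inside a drawing context would be (a sketch, assuming this runs in drawRect: or a bitmap context):
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0); // vertical flip applied to the CTM
// ... everything drawn from here on is flipped ...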

How do I render a view which contains Core Animation layers to a bitmap?

I am using an NSView to host several Core Animation CALayer objects. What I want to be able to do is grab a snapshot of the view's current state as a bitmap image.
This is relatively simple with a normal NSView using something like this:
void ClearBitmapImageRep(NSBitmapImageRep *bitmap) {
    unsigned char *bitmapData = [bitmap bitmapData];
    if (bitmapData != NULL)
        bzero(bitmapData, [bitmap bytesPerRow] * [bitmap pixelsHigh]);
}

@implementation NSView (Additions)

- (NSBitmapImageRep *)bitmapImageRepInRect:(NSRect)rect
{
    NSBitmapImageRep *imageRep = [self bitmapImageRepForCachingDisplayInRect:rect];
    ClearBitmapImageRep(imageRep);
    [self cacheDisplayInRect:rect toBitmapImageRep:imageRep];
    return imageRep;
}

@end
However, when I use this code, the Core Animation layers are not rendered.
I have investigated CARenderer, as it appears to do what I need, however I cannot get it to render my existing layer tree. I tried the following:
NSOpenGLPixelFormatAttribute att[] =
{
    NSOpenGLPFAWindow,
    NSOpenGLPFADoubleBuffer,
    NSOpenGLPFAColorSize, 24,
    NSOpenGLPFAAlphaSize, 8,
    NSOpenGLPFADepthSize, 24,
    NSOpenGLPFANoRecovery,
    NSOpenGLPFAAccelerated,
    0
};
NSOpenGLPixelFormat *pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:att];
NSOpenGLView *openGLView = [[NSOpenGLView alloc] initWithFrame:[self frame] pixelFormat:pixelFormat];
NSOpenGLContext *oglctx = [openGLView openGLContext];

CARenderer *renderer = [CARenderer rendererWithCGLContext:[oglctx CGLContextObj] options:nil];
renderer.layer = myContentLayer;
[renderer render];

NSBitmapImageRep *bitmap = [openGLView bitmapImageRepInRect:[openGLView bounds]];
However, when I do this I get an exception:
CAContextInvalidLayer -- layer <CALayer: 0x1092ea0> is already attached to a context
I'm guessing that this must be because the layer tree is hosted in my NSView and therefore attached to its context. I don't understand how I can detach the layer tree from the NSView in order to render it to a bitmap, and it's non-trivial in this case to create a duplicate layer tree.
Is there some other way to get the CALayers to render to a bitmap? I can't find any sample code anywhere for doing this, in fact I can't find any sample code for CARenderer at all.
There is a great post on "Cocoa is my girlfriend" about recording Core Animation animations. The author captures a whole animation into a movie, but you could use just the part where he grabs a single frame.
Jump to the "Obtaining the Current Frame" section in this article:
http://www.cimgf.com/2009/02/03/record-your-core-animation-animation/
The basic idea is:
1) Create a CGContext
2) Use CALayer's renderInContext:
3) Create an NSBitmapImageRep from the context (using CGBitmapContextCreateImage and NSBitmapImageRep's initWithCGImage:)
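A minimal sketch of those three steps under manual reference counting (bitmapRepForLayer: is a hypothetical helper; as the update below notes, this won't capture GL, QC, or QuickTime layers):
- (NSBitmapImageRep *)bitmapRepForLayer:(CALayer *)layer
{
    size_t width = (size_t)ceil(layer.bounds.size.width);
    size_t height = (size_t)ceil(layer.bounds.size.height);

    // 1) Create a CGContext
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // 2) Render the layer tree into the context
    [layer renderInContext:context];

    // 3) Wrap the pixels in an NSBitmapImageRep
    CGImageRef image = CGBitmapContextCreateImage(context);
    NSBitmapImageRep *rep = [[[NSBitmapImageRep alloc] initWithCGImage:image] autorelease];
    CGImageRelease(image);
    CGContextRelease(context);
    return rep;
}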
Update:
I just read that the renderInContext: method does not support all kinds of layers in Mac OS X 10.5.
It does not work for the following layer classes:
QCCompositionLayer
CAOpenGLLayer
QTMovieLayer
If you want sample code for how to render a CALayer hierarchy to an NSImage (or UIImage for the iPhone), you can look at the Core Plot framework's CPLayer and its -imageOfLayer method. We actually created a rendering pathway that is independent of the normal -renderInContext: process used by CALayer, because the normal way does not preserve vector elements when generating PDF representations of layers. That's why you'll see the -recursivelyRenderInContext: method in this code.
However, this won't help you if you are trying to capture from any of the layer types mentioned by weichsel (QCCompositionLayer, CAOpenGLLayer, or QTMovieLayer).