CGImageRef MemoryLeak again - objective-c

I have a memory leak problem with CGImageRef (at least I think that's the cause) in my Cocoa desktop application. I've read lots of questions here and elsewhere, and read the Core Graphics memory-management FAQ on developer.apple.com. This question is probably the closest to mine, though its solution didn't help.
My task is to scale a 15×15 area of a saved CGImage and return an NSImage* as the result; this happens on every mouse movement.
-(NSImage*)getScaledAreaInX:(int)x andY:(int)y
{
    // Catching image from the screen
    CGImageRef fullscreen = CGImageRetain(_magniHack);

    // Cropping
    int screenHeight = CGImageGetHeight(fullscreen);
    CGRect fixedRect = CGRectMake(x-7, screenHeight-y-8, 15, 15);
    CGImageRef cropped = CGImageCreateWithImageInRect(fullscreen, fixedRect);

    // Scaling
    int width = CGImageGetWidth(cropped)*8;   // New width
    int height = CGImageGetHeight(cropped)*8; // New height
    CGColorSpaceRef colorspace = CGImageGetColorSpace(cropped);
    CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                                 CGImageGetBitsPerComponent(cropped),
                                                 CGImageGetBytesPerRow(cropped),
                                                 colorspace,
                                                 CGImageGetAlphaInfo(cropped));
    CGContextSetInterpolationQuality(context, kCGInterpolationNone);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cropped);
    CGImageRef scaled = CGBitmapContextCreateImage(context);

    // Casting to NSImage
    NSImage *image = [[NSImage alloc] initWithCGImage:scaled size:NSZeroSize];

    // Releasing memory
    CGImageRelease(fullscreen);
    CGColorSpaceRelease(colorspace);
    CGContextRelease(context);
    //CGImageRelease(cropped); // Can't do: will crash; In what situations can free?
    cropped = NULL;
    CGImageRelease(scaled);
    scaled = NULL;

    return image;
}
If I uncomment the CGImageRelease line, the app crashes around the 6th cursor movement with "EXC_BAD_ACCESS", either while retaining _magniHack or while cropping the image (it differs every time). If I leave it commented out, there is a memory leak on every call (during frequent movements the leak adds up to dozens of MB). I get the same result if I release cropped but not the scaled image (though then the leak is much larger).
_magniHack is a CGImageRef; it is a private instance variable and is set only once, in this code:
-(void)storeFullScreen
{
    if (_magniHack) {
        CGImageRelease(_magniHack);
    }
    _magniHack = CGDisplayCreateImage(displays[0]);
}
I use ARC in the project, if that helps, though it still doesn't get rid of the leaks.
I guess _magniHack is being released somewhere, but I can't find where, because I always retain it at the start and release it at the end.

This is pretty old, but I had the same problem.
The issue was releasing the colorspace without actually owning a copy of it. CGImageGetColorSpace(cropped) gives you a pointer to the image's existing color space, not a copy created for you, so you must not release it.
That was my case. Once I noticed that (I had also used scaling code from the internet) and removed the release, CGDisplayCreateImage(displays[0]) stopped crashing.
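In other words, the only colorspace you may release is one you created yourself. A minimal sketch of the scaling part with that fixed (same variable names as the question; using a device RGB colorspace and letting Core Graphics pick the row stride are assumptions, not something the original code did):
// Hedged sketch: scale `cropped` into a context whose colorspace we own.
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();            // Create rule: we own it
CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                             8,                        // bits per component
                                             0,                        // 0 = let CG compute bytes per row
                                             colorspace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorspace);                                       // safe, because we created it
CGContextSetInterpolationQuality(context, kCGInterpolationNone);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cropped);
CGImageRef scaled = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGImageRelease(cropped);   // created by CGImageCreateWithImageInRect, so this release is balanced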

Actually, I got rid of this leak by using Core Graphics only where necessary: after grabbing the screen I wrap the result in an NSImage* and work only with that afterwards. But it is still interesting what was wrong with the code above.
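For reference, a minimal sketch of that workaround (assuming the grab still comes from CGDisplayCreateImage; the NSImage keeps its own reference to the bitmap, so the CGImageRef obtained under the Create rule can be released right away):
// Grab once, wrap in NSImage, release the CGImageRef.
CGImageRef screenGrab = CGDisplayCreateImage(displays[0]);    // Create rule: we own this
NSImage *screenImage = [[NSImage alloc] initWithCGImage:screenGrab size:NSZeroSize];
CGImageRelease(screenGrab);                                   // NSImage retains what it needs
// From here on, work only with screenImage (drawInRect:fromRect:operation:fraction:, etc.)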

Related

NSImage drawInRect and NSView cacheDisplayInRect memory retained

The application I am working on processes images. The user drops at most 4 images, and the app lays them out based on the user's selected template; one image might appear 2-3 times in the final view.
Each image in the layout is drawn in an NSView (in the NSView's drawRect method, using drawInRect). The final image (the composition of all laid-out images) is then created by rendering the NSView to an image, and that all works well.
The problem I am facing is that memory stays retained by the app once all processing is done. Using the Allocations instrument I don't see leaks, but the "Persistent Bytes" grow continuously with each session of the app, and one user reported usage in the GBs (see the Instruments screenshot).
When I investigated further in Instruments, the allocations causing the retention are all related to ImageIO and Core Image (see the stack traces from Instruments below).
However, this seems to be a problem only on 10.10 and later. The same version of the app on 10.9.x stays within 60 MB; during a session it goes up to about 200 MB, but once the session is done it drops back to 50-60 MB, which is usual for this kind of app.
[_photoImage drawInRect: self.bounds fromRect: NSZeroRect operation: NSCompositeSourceOver fraction: 1.0 respectFlipped: YES hints: nil];
_photoImage = nil;
The code above is what I use to draw the image in the NSView's drawRect method; the code shown in the screenshot is what I use to render the NSView to an image.
Update: after further investigation I found that it is CGImageSourceCreateWithData that is caching the TIFF data of the NSImage. Moreover, I am using the code below to crop the image, and if I comment it out, memory consumption is fine.
NSData *imgData = [imageToCrop TIFFRepresentation];
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)imgData, NULL);
CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CGImageRef imageRef = CGImageCreateWithImageInRect(maskRef, rect);
NSImage *cropped = [[NSImage alloc] initWithCGImage: imageRef size:rect.size];
CGImageRelease(maskRef);
CGImageRelease(imageRef);
CFRelease(source);
//CFRelease( options );
imgData = nil;
I have also tried explicitly setting kCGImageSourceShouldCache to false (even though it is supposedly false by default), with the same results.
Please help to solve the memory retention issue.
Finally, after lots of debugging, it turned out that CGImageSourceCreateWithData is retaining the TIFF data of the NSImage somewhere. When I changed this line:
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)imgData, NULL);
with
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path], NULL);
everything just started working fine, and the app's memory usage dropped from 300 MB (for 6 images) to 50-60 MB; the behaviour is consistent now.
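For completeness, a hedged sketch of how the cropping snippet from the question looks after that change (here `path` is a hypothetical variable holding the location of the original image file; the other names are the question's):
// Crop via a URL-backed image source instead of an in-memory TIFF copy.
CGImageSourceRef source = CGImageSourceCreateWithURL(
    (__bridge CFURLRef)[NSURL fileURLWithPath:path], NULL);
CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CGImageRef imageRef = CGImageCreateWithImageInRect(maskRef, rect);
NSImage *cropped = [[NSImage alloc] initWithCGImage:imageRef size:rect.size];
CGImageRelease(maskRef);
CGImageRelease(imageRef);
CFRelease(source);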
Apart from the change above, something was still retaining memory, so after all processing is done I now set the image of each layer to nil, and that works like a charm. I was under the impression that setting the parent to nil would release the images as well, but that was not the case.
Anyway, if anyone sees this issue with drawInRect or cacheDisplayInRect, make sure to clear out the image if it is not needed later on.
Update 2nd July 2016
I found that kCGImageSourceShouldCache is false by default on 32-bit and true on 64-bit. I was able to release the memory with the code below, which sets it to false explicitly.
const void *keys[] = { kCGImageSourceShouldCache};
const void *values[] = { kCFBooleanFalse};
CFDictionaryRef optionsDictionary = CFDictionaryCreate(NULL, keys, values, 1, NULL, NULL);
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[image TIFFRepresentation], optionsDictionary);
Hope it helps someone.

Undo state of bitmap (CGContext)

I'm making a multiplayer game which involves drawing lines, and now I'm adding online multiplayer to it. The part I'm struggling with is that I need to revert the state of the drawn lines in case a packet from the server reaches the client late. I've searched here on Stack Overflow but haven't found any real answer on how to "undo" drawing in a bitmap context. The biggest constraint is that this needs to be very fast, since the game updates every 20 milliseconds. I have come up with and tried a few different approaches:
1. Save the state of the whole context and then redraw it. This is probably the slowest method.
2. Save only a part of the context (100x100) in another, hidden bitmap by looping over each pixel, then loop over each pixel of that bitmap to copy it back into the main bitmap that is shown on screen.
3. Save each point of the drawn path in a CGMutablePathRef; when reverting the context, draw this path with a transparent color (0,0,0,0).
4. Save the position of each pixel that gets drawn in a separate array, then set those pixels' alpha to 0 (in the drawn bitmap) when I need to undo.
The last approach should be the fastest of them all. However, I'm not sure how I can get the position of each drawn pixel unless I do it completely manually. Right now I use this code to draw lines:
CGContextSetStrokeColorWithColor(cacheContext, [color CGColor]);
CGContextSetLineCap(cacheContext, kCGLineCapRound);
CGContextSetLineWidth(cacheContext, 6+thickness);
CGContextBeginPath(cacheContext);
CGContextMoveToPoint(cacheContext, point1.x, point1.y);
CGContextAddLineToPoint(cacheContext, point2.x, point2.y);
CGContextStrokePath(cacheContext);
CGRect dirtyPoint1 = CGRectMake(point1.x-10, point1.y-10, 20, 20);
CGRect dirtyPoint2 = CGRectMake(point2.x-10, point2.y-10, 20, 20);
[self setNeedsDisplayInRect:CGRectUnion(dirtyPoint1, dirtyPoint2)];
Here is how the CGBitmapContext is set up:
- (BOOL) initContext:(CGSize)size {
    scaleFactor = [[UIScreen mainScreen] scale];
    // scaleFactor = 1;  non-retina
    // scaleFactor = 2;  retina

    int bitmapBytesPerRow;

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (size.width * 4 * scaleFactor);
    bitmapByteCount = (bitmapBytesPerRow * (size.height * scaleFactor));

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    cacheBitmap = malloc(bitmapByteCount);
    if (cacheBitmap == NULL) {
        return NO;
    }

    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    cacheContext = CGBitmapContextCreate(cacheBitmap, size.width * scaleFactor, size.height * scaleFactor, 8, bitmapBytesPerRow, colorSpace, bitmapInfo);
    CGContextScaleCTM(cacheContext, scaleFactor, scaleFactor);
    CGColorSpaceRelease(colorSpace);

    CGContextSetRGBFillColor(cacheContext, 0, 0, 0, 0.0);
    CGContextFillRect(cacheContext, (CGRect){CGPointZero, CGSizeMake(size.height * scaleFactor, size.width * scaleFactor)});
    return YES;
}
Is there any other, better way to undo the bitmap? If not, how can I get the position of each pixel that gets drawn with Core Graphics? Is that even possible?
Your 4th approach will either duplicate the whole canvas bitmap (if you use a flat NxM matrix representation) or turn into a performance mess if you use a map-based structure or something similar.
Actually, I believe the 2nd way does the trick. I have implemented undo that way a few times over the years, including in a DirectX-based drawing app with a 25-30 fps rendering pipeline.
However, your description of #2 has a strange mention of some "loop" you want to perform across the area. You do not need a loop; what you need is a proper API call for copying a portion of the bitmap/graphics context. CGContextDrawImage can be used both to preserve the relevant portion of your canvas and to put it back when you undo/redo, as sketched below.
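A hedged sketch of that idea against the cacheContext from the question (savedPatch and patchRect are hypothetical names; note that CGImageCreateWithImageInRect takes top-left-origin pixel coordinates while CGContextDrawImage uses the context's user space, so in a real implementation the two rects need converting between):
// 1. Before drawing the speculative line: snapshot the area it will touch.
CGImageRef wholeCanvas = CGBitmapContextCreateImage(cacheContext);      // cheap copy-on-write snapshot
CGImageRef savedPatch  = CGImageCreateWithImageInRect(wholeCanvas, patchRect);
CGImageRelease(wholeCanvas);

// 2. Draw the line as usual.

// 3. Undo: paint the saved patch back over the same area, overwriting (not blending) the pixels.
CGContextSaveGState(cacheContext);
CGContextSetBlendMode(cacheContext, kCGBlendModeCopy);
CGContextDrawImage(cacheContext, patchRect, savedPatch);
CGContextRestoreGState(cacheContext);
CGImageRelease(savedPatch);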

CGBitmapContextGetData - can't copy data to the returned block of memory

I am drawing RGBA data onto the screen using CGBitmapContextCreate and CGContextDrawImage. When I create the bitmap context with CGBitmapContextCreate(pixelBuffer, ...), where pixelBuffer is a buffer I have already malloc'ed and filled with my data, this works just fine.
However, I would like Core Graphics to manage its own memory, so I want to pass NULL to CGBitmapContextCreate, get the pointer to the memory block it uses by calling CGBitmapContextGetData, and copy my RGBA buffer into that block with memcpy. However, my memcpy fails. Please see my code below.
Any idea what I am doing wrong?
gtx = CGBitmapContextCreate(NULL, screenWidth, screenHeight, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipLast);
void *data = CGBitmapContextGetData(gtx);
memcpy(data, pixelBuffer, area*componentsPerPixel);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGImageRef image = CGBitmapContextCreateImage(gtx);
CGContextTranslateCTM(currentContext, 0, screenHeight);
CGContextScaleCTM(currentContext, 1.0, -1.0);
CGContextDrawImage(currentContext, currentSubrect, image);
Based on all my research, drawing frequently/repeatedly in drawRect is a bad idea, so I decided to move to UIImageView-based drawing, as suggested in a response to this other SO question.
Here's another SO answer on why UIImageView is more efficient than drawRect.
Based on the above, I am now using UIImageView instead of drawRect and am seeing better drawing performance.
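That said, if anyone wants to stick with the CGBitmapContextGetData approach, two things worth checking are that the returned pointer is not NULL and that the context's actual bytes-per-row (as reported by CGBitmapContextGetBytesPerRow) matches what the source buffer assumes. A hedged sketch that copies row by row using the context's own geometry, reusing the names from the question:
// Defensive copy into the context's own buffer, one row at a time.
size_t ctxBytesPerRow = CGBitmapContextGetBytesPerRow(gtx);   // the stride CG is actually using
size_t ctxHeight      = CGBitmapContextGetHeight(gtx);
size_t srcBytesPerRow = screenWidth * componentsPerPixel;     // tightly packed source rows
uint8_t *dst = CGBitmapContextGetData(gtx);                   // NULL if gtx is not a valid bitmap context

if (dst != NULL) {
    for (size_t row = 0; row < ctxHeight; row++) {
        memcpy(dst + row * ctxBytesPerRow,
               (const uint8_t *)pixelBuffer + row * srcBytesPerRow,
               srcBytesPerRow);
    }
}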

Issue with "renderincontext" with opengl views

I have a problem with OpenGL views. I have two OpenGL views; the second one is added as a subview of the main view, and the two views are drawn in two different OpenGL contexts. I need to capture the screen including both OpenGL views.
The issue is that if I try to render one CAEAGLLayer into a context as below:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 1*(self.frame.size.width*0.5), 1*(self.frame.size.height*0.5));
CGContextScaleCTM(context, 3, 3);
CGContextTranslateCTM(context, abcd, abcd);
CAEAGLLayer *eaglLayer = (CAEAGLLayer*) self.myOwnView.layer;
[eaglLayer renderInContext:context];
it does not work. If I look at the context (output as an image), the contents of the OpenGL layer are missing, but the toolbar and the 2D images attached to the view do appear in the output image. I am not sure what the problem is. Please help.
I had a similar problem and found a much more elegant solution. Basically, you subclass CAEAGLLayer, and add your own implementation of renderInContext that simply asks the OpenGL view to render the contents using glReadPixels. The beauty is that now you can call renderInContext on any layer in the hierarchy, and the result is a fully composed, perfect looking screenshot that includes your OpenGL views in it!
Our renderInContext in the subclassed CAEAGLLayer is:
- (void)renderInContext:(CGContextRef)ctx
{
    [super renderInContext: ctx];
    [self.delegate renderInContext: ctx];
}
Then, in the OpenGL view we replace layerClass so that it returns our subclass instead of the plain vanilla CAEAGLLayer:
+ (Class)layerClass
{
    return [MyCAEAGLLayer class];
}
We add a method in the view to actually render the contents of the view into the context. Note that this code MUST run after your GL view has been rendered, but before you call presentRenderbuffer so that the render buffer will contain your frame. Otherwise the resulting image will most likely be empty (you may see different behavior between the device and the simulator on this particular issue).
- (void) renderInContext: (CGContextRef) context
{
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    CGFloat scale = self.contentScaleFactor;
    NSInteger widthInPoints, heightInPoints;
    widthInPoints = width / scale;
    heightInPoints = height / scale;

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
}
Finally, in order to grab a screenshot you use renderInContext in the usual fashion. Of course the beauty is that you don't need to grab the OpenGL view directly: you can grab one of the superviews of the OpenGL view and get a composed screenshot that includes the OpenGL view along with anything else next to it or on top of it:
UIGraphicsBeginImageContextWithOptions(superviewToGrab.bounds.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
[superviewToGrab.layer renderInContext: context]; // This recursively calls renderInContext on all the sublayers, including your OpenGL layer(s)
CGImageRef screenShot = UIGraphicsGetImageFromCurrentImageContext().CGImage;
UIGraphicsEndImageContext();
This question has already been settled, but I wanted to note that Idoogy's answer is actually dangerous and a poor choice for most use cases.
Rather than subclass CAEAGLLayer and create a new delegate object, you can use the existing delegate methods which accomplish exactly the same thing. For example:
- (void) drawLayer:(CALayer *) layer inContext:(CGContextRef)ctx;
is a great method to implement in your GL-based views. You can implement it in much the same way he suggests, using glReadPixels; just make sure to set the retained-backing property on your view to YES, so that you can call the method above at any time without having to worry about the buffer having been invalidated by presentation for display.
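A hedged sketch of that wiring inside the GL view (assuming the view already uses a CAEAGLLayer as its layerClass; since the view is its own layer's delegate by default, no subclass is needed, and the glReadPixels body can be the same renderInContext: method shown in the earlier answer):
// In the OpenGL view: keep the backing store around so it can be read back at any time.
- (instancetype)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.drawableProperties = @{
            kEAGLDrawablePropertyRetainedBacking : @YES,
            kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8
        };
    }
    return self;
}

// CALayer delegate method: called when the layer is rendered with renderInContext:.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    [self renderInContext:ctx];   // the glReadPixels-based method from the answer above
}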
Subclassing CAEAGLLayer messes with the existing UIView / CALayer delegate relationship: in most cases, setting the delegate object on your custom layer will result in your UIView being excluded from the view hierarchy. Thus, code like:
customLayerView = [[CustomLayerView alloc] initWithFrame:someFrame];
[someSuperview addSubview:customLayerView];
will result in a weird, one-way superview-subview relationship, since the delegate methods that UIView relies on won't be implemented. (Your superview will still have the sublayer from your custom view, though).
So, instead of subclassing CAEAGLLayer, just implement some of the delegate methods. Apple lays it out for you here: https://developer.apple.com/library/ios/documentation/QuartzCore/Reference/CALayerDelegate_protocol/Reference/Reference.html#//apple_ref/doc/uid/TP40012871
All the best,
Sam
I think http://developer.apple.com/library/ios/#qa/qa1704/_index.html provides what you want.

crop image from certain portion of screen in iphone programmatically

NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
CGSize contextSize=CGSizeMake(320,400);
UIGraphicsBeginImageContext(self.view.bounds.size);
UIGraphicsBeginImageContext(contextSize);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setSaveImage:savedImg];
I use the code above to extract part of an image from the main screen.
With UIGraphicsBeginImageContext I can only specify a size. Is there any way to use a CGRect, or some other way, to extract the image from a specific portion of the screen, i.e. something like (x, y, 320, 400)?
Hope this helps:
// Create new image context (retina safe)
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
// Create rect for image
CGRect rect = CGRectMake(x, y, size.width, size.height);
// Draw the image into the rect
[existingImage drawInRect:rect];
// Saving the image, ending image context
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
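If the goal is to capture a specific region of the screen rather than re-draw an existing image, one hedged option is to open a context the size of the crop and shift the layer before rendering; a sketch for the (x, y, 320, 400) region mentioned in the question (x and y are the question's coordinates):
// Render only the (x, y, 320, 400) portion of the view's layer.
CGRect cropRect = CGRectMake(x, y, 320, 400);
UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, 0.0);   // retina safe
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Shift the layer so that the desired region lands at the context's origin.
CGContextTranslateCTM(ctx, -cropRect.origin.x, -cropRect.origin.y);
[self.view.layer renderInContext:ctx];
UIImage *croppedScreen = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();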
This question is really a duplicate of several other questions including this: How to crop the UIImage?, but since it took me a while to find a solution, I will cross post again.
In my quest for a solution that I could more easily understand (and one written in Swift), I arrived at the following.
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import AVFoundation
import ImageIO

class Image {

    class func crop(image:UIImage, crop source:CGRect, aspect:CGSize, outputExtent:CGSize) -> UIImage {

        let sourceRect = AVMakeRectWithAspectRatioInsideRect(aspect, source)
        let targetRect = AVMakeRectWithAspectRatioInsideRect(aspect, CGRect(origin: CGPointZero, size: outputExtent))

        let opaque = true, deviceScale: CGFloat = 0.0 // use scale of device's main screen
        UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)

        let scale = max(
            targetRect.size.width / sourceRect.size.width,
            targetRect.size.height / sourceRect.size.height)

        // The crop is expressed through the (negative) origin, the scale through the size.
        let drawRect = CGRect(
            x: -sourceRect.origin.x * scale,
            y: -sourceRect.origin.y * scale,
            width: image.size.width * scale,
            height: image.size.height * scale)
        image.drawInRect(drawRect)

        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage
    }
}
There are a couple of things that I found confusing: the separate concerns of cropping and resizing. Cropping is handled by the origin of the rect that you pass to drawInRect, and scaling is handled by its size. In my case, I needed to relate the size of the cropping rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and this needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think you could use this code to handle cropping/zooming as well by explicitly passing the scale factor above as the scale parameter of UIGraphicsBeginImageContextWithOptions; by default, UIKit applies a multiplier based on the screen resolution.
Finally, it should be noted that this UIKit approach is higher level than the Core Graphics / Quartz and Core Image approaches, and it seems to handle image orientation issues. It is also worth mentioning that it is pretty fast, second only to ImageIO, according to this post: http://nshipster.com/image-resizing/