CGImageRef memory leak - Objective-C

I'm having a memory leak when using this custom method, which returns a CGImageRef. I can't release "cgImage" properly because I have to return it. What should I do?
- (CGImageRef)rectRoundedImageRef:(CGRect)rect radius:(int)radius
{
    CGSize contextSize = CGSizeMake(rect.size.width, rect.size.height);
    CGFloat imageScale = (CGFloat)1.0;
    CGFloat width = contextSize.width;
    CGFloat height = contextSize.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width * imageScale, height * imageScale, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);

    // Draw ...

    // Get your image
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    //CGImageRelease(cgImage); // If I release cgImage here, the app crashes.
    return cgImage;
}

Your method owns cgImage, so you have to return it and give the caller responsibility for releasing it via CGImageRelease (or CFRelease).
You can also return the CGImage wrapped inside a UIImage instance, like this:
UIImage *image = [UIImage imageWithCGImage:cgImage];
CFRelease(cgImage); //cgImage is retained by the UIImage above
return image;

This is a general problem with Core Foundation objects because there is no autorelease pool in CF. As I see it, you have two options to solve the problem:
Rename the method to something like -newRectRoundedImageRef:radius: to tell the caller that they take ownership of the returned object and are responsible for releasing it (a sketch of this follows the list).
Wrap the CGImageRef in an autoreleased UIImage object and return that ([UIImage imageWithCGImage:]). That's probably what I would do.
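For reference, option 1 looks like this (a minimal sketch reusing the drawing code from the question; the "new" prefix follows the Cocoa ownership naming convention and also tells the Clang static analyzer that the caller owns the result):
- (CGImageRef)newRectRoundedImageRef:(CGRect)rect radius:(int)radius
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, rect.size.width, rect.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    // Draw ...
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    return cgImage; // returned at +1; the caller must call CGImageRelease()
}

// Caller side:
CGImageRef image = [self newRectRoundedImageRef:rect radius:8];
// ... use the image ...
CGImageRelease(image);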

You can autorelease a Core Foundation-compatible object. It just looks a bit wonky. :)
The GC-safe way is like so:
CGImageRef image = ...;
if (image) {
    // retain+autorelease hands a reference to the autorelease pool;
    // the CGImageRelease balances the +1 from the Create call.
    image = (CGImageRef)[[(id)image retain] autorelease];
    CGImageRelease(image);
}
The shortcut, which is safe on iOS but no longer safe on the Mac, is this:
CGImageRef image = ...;
if (image) {
    // transfers the Create call's +1 directly to the autorelease pool
    image = (CGImageRef)[(id)image autorelease];
}
Either one will place the image in an autorelease pool and prevent a leak.

As suggested, we used:
CGImageRelease(imageRef);
but we still got a memory leak.
Our solution was to wrap the code in an
@autoreleasepool { }
block, and that solved our problem.
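For example (a sketch with hypothetical names; newRectRoundedImageRef: is the renamed method suggested in the first answer): when creating many images in a loop, a local pool drains the autoreleased wrappers each pass instead of letting them pile up until the thread's outer pool drains.
for (NSUInteger i = 0; i < count; i++) {
    @autoreleasepool {
        CGImageRef imageRef = [self newRectRoundedImageRef:rect radius:8];
        UIImage *image = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef); // the UIImage retains the CGImage
        // ... use image; anything autoreleased here is freed at the end of each pass ...
    }
}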

Related

CGImageRef not init/alloc correctly

I am currently having problems with CGImageRef.
Whenever I create a CGImageRef and look at it in the debugger in Xcode, it is nil.
Here's the code:
- (void)mouseMoved:(NSEvent *)theEvent {
    if (self.shoulddrag) {
        NSPoint event_location = [theEvent locationInWindow]; // direct from the docs
        NSPoint local_point = [self convertPoint:event_location fromView:nil]; // direct from the docs
        CGImageRef theImage = (__bridge CGImageRef)(self.image);
        CGImageRef theClippedImage = CGImageCreateWithImageInRect(theImage, CGRectMake(local_point.x, local_point.y, 1, 1));
        NSImage *image = [[NSImage alloc] initWithCGImage:theClippedImage size:NSZeroSize];
        self.pixelView.image = image;
        CGImageRelease(theClippedImage);
    }
}
Everything else seems to be working, though. I can't figure out what's wrong. Any help would be appreciated.
Note: self.pixelView is an NSImageView instance that has not been overridden in any way.
Very likely local_point is not inside the image. You've converted the point from window coordinates to view coordinates, but those may not be the same as the image's coordinates. Test whether the lower-left corner of your image produces a local_point of (0,0).
It's not clear how your view is laid out, but I suspect what you want is to subtract the origin of whatever region (possibly a subview) the user is interacting with, relative to self; a rough sketch follows.
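For instance (a sketch; pixelSourceView is a hypothetical subview that actually displays the image), converting straight into that view's coordinate space does the subtraction for you:
NSPoint local_point = [self.pixelSourceView convertPoint:event_location fromView:nil];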
Alright, I figured it out.
What I was using to create the CGImageRef was:
CGImageRef theImage = (__bridge CGImageRef)(self.image);
Apparently what I should have used is (an image source by itself is not an image; you still have to ask it for one):
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[self.image TIFFRepresentation], NULL);
CGImageRef theImage = CGImageSourceCreateImageAtIndex(source, 0, NULL);
I guess my problem was that for some reason I thought NSImage and CGImageRef were toll-free bridged.
Apparently, I was wrong.

Merging/stacking two images with Cocoa/OSX

I have a CGImageRef (let's call it the original image) and a transparent PNG (watermark). I'm trying to write a method that places the watermark on top of the original and returns a CGImageRef.
On iOS I would have used UIKit to draw them both onto a context, but that doesn't seem possible on OS X (which doesn't support UIKit).
What's the simplest way to stack two images? Thanks
For a quick 'n dirty solution you can use the NSImage drawing APIs:
NSImage *background = [NSImage imageNamed:@"background"];
NSImage *overlay = [NSImage imageNamed:@"overlay"];
NSImage *newImage = [[NSImage alloc] initWithSize:[background size]];
[newImage lockFocus];
CGRect newImageRect = CGRectZero;
newImageRect.size = [newImage size];
[background drawInRect:newImageRect];
[overlay drawInRect:newImageRect];
[newImage unlockFocus];
CGImageRef newImageRef = [newImage CGImageForProposedRect:NULL context:nil hints:nil];
If you don't like that, most of the CGContext APIs you'd expect are available cross-platform, giving you a little more control over the drawing. Similarly, you could look into NSGraphicsContext.
This is pretty easy when you render to a CGContext.
If you want an image as a result, you can create and render to a CGBitmapContext, then request the image after rendering.
General flow, with common details and contextual info omitted:
CGImageRef CreateCompositeOfImages(CGImageRef pBackground,
                                   const CGRect pBackgroundRect,
                                   CGImageRef pForeground,
                                   const CGRect pForegroundRect)
{
    // configure context parameters
    CGContextRef gtx = CGBitmapContextCreate( %%% );
    // configure context
    // configure context to render background image
    // draw background image
    CGContextDrawImage(gtx, pBackgroundRect, pBackground);
    // configure context to render foreground image
    // draw foreground image
    CGContextDrawImage(gtx, pForegroundRect, pForeground);
    // create result
    CGImageRef result = CGBitmapContextCreateImage(gtx);
    // cleanup
    CGContextRelease(gtx); // release the bitmap context once the image is created
    return result;
}
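One plausible way to fill in the omitted context creation (my sketch, not the author's elided details): an 8-bit-per-component RGBA context sized to the background image.
size_t width = CGImageGetWidth(pBackground);
size_t height = CGImageGetHeight(pBackground);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(NULL, width, height, 8, 0, space, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);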
You would need to create a CGImage from your PNG.
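For example (a sketch; the file URL is hypothetical), Image I/O can load one, as other answers in this collection do:
NSURL *pngURL = [NSURL fileURLWithPath:@"/path/to/watermark.png"];
CGImageSourceRef src = CGImageSourceCreateWithURL((__bridge CFURLRef)pngURL, NULL);
CGImageRef watermark = CGImageSourceCreateImageAtIndex(src, 0, NULL);
CFRelease(src); // release the source; CGImageRelease(watermark) when you're done with it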
Additional APIs you may be interested in using (a short example follows the list):
CGContextSetBlendMode
CGContextSetAllowsAntialiasing
CGContextSetInterpolationQuality
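For instance (a sketch of how those might be set on the context before drawing the foreground image):
CGContextSetBlendMode(gtx, kCGBlendModeNormal);
CGContextSetAllowsAntialiasing(gtx, true);
CGContextSetInterpolationQuality(gtx, kCGInterpolationHigh);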
I know a lot of people will generally advise you to use higher-level abstractions (i.e., AppKit and UIKit), but Core Graphics is a great library for rendering in both of those contexts. If you are interested in graphics code that is easy to use on both OS X and iOS, Core Graphics is a good choice to base your work upon, provided you are comfortable working at that level of abstraction.
If anyone, like me, needs a Swift version.
This is a functional Swift 5 version:
let background = NSImage(named: "background")!
let overlay = NSImage(named: "overlay")!
let newImage = NSImage(size: background.size)
newImage.lockFocus()
var newImageRect: CGRect = .zero
newImageRect.size = newImage.size
background.draw(in: newImageRect)
overlay.draw(in: newImageRect)
newImage.unlockFocus()
I wish I had the time to do the same with the CGContext example.

Objective-C Memory Crash, UIImageView

My app assigns and displays an image inside of a UIImageView. This happens with 24 image views, all in viewDidLoad. The images are assigned randomly from a list of fifty images. The view controller is presented modally from the main screen. The first time, it takes a while to load. If I'm lucky, the view loads a second time. The third time, it almost always crashes. I've tried resizing the images to around 200 pixels. I've tried assigning the images with:
image1 = [UIImage imageNamed:@"image1.png"];
[self.imageView setImage:image1];
and also with:
NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"image1" ofType:@"png"];
image1 = [[UIImage alloc] initWithContentsOfFile:imagePath];
This second one seemed to only make things worse.
I also tried running the app with Instruments, which didn't report any memory leaks.
I really don't know where else to turn. This app represents an enormous investment of time and I would really like to see this problem resolved...
Thank you so much
The most efficient way to load a smaller version of an image from disk is this: instead of using imageNamed:, use the Image I/O framework to request a thumbnail that is the actual size you'll be displaying, by calling CGImageSourceCreateThumbnailAtIndex. Here's the example from my book:
NSURL *url =
    [[NSBundle mainBundle] URLForResource:@"colson"
                            withExtension:@"jpg"];
CGImageSourceRef src =
    CGImageSourceCreateWithURL((__bridge CFURLRef)url, nil);
CGFloat scale = [UIScreen mainScreen].scale;
CGFloat w = self.iv.bounds.size.width * scale;
NSDictionary *d =
    @{(id)kCGImageSourceShouldAllowFloat: (id)kCFBooleanTrue,
      (id)kCGImageSourceCreateThumbnailWithTransform: (id)kCFBooleanTrue,
      (id)kCGImageSourceCreateThumbnailFromImageAlways: (id)kCFBooleanTrue,
      (id)kCGImageSourceThumbnailMaxPixelSize: @((int)w)};
CGImageRef imref =
    CGImageSourceCreateThumbnailAtIndex(src, 0, (__bridge CFDictionaryRef)d);
UIImage *im =
    [UIImage imageWithCGImage:imref scale:scale
                  orientation:UIImageOrientationUp];
self.iv.image = im;
CFRelease(imref); CFRelease(src);
It is a huge waste of memory to ask a UIImageView to display an image larger than the UIImageView itself, as the bitmap for the full-size image must be maintained in memory. The Image I/O framework generates the smaller version without even ever unpacking the entire original image into memory as a bitmap.
I had this problem once before with images from a website that were far larger than the view I was displaying them in. If I remember correctly, the images were being uncompressed to their full resolution and then fit into the image view, hogging your memory. I had to scale them down to the image view's size before showing them. Add CoreGraphics.framework and use this class to make an image object to use with your image view. I found it online and tweaked it a little while looking for the same answer, but I don't remember where, so thanks to whoever posted the original.
ImageScale.h
#import <UIKit/UIKit.h> // UIKit (not just Foundation) is needed for UIImage

@interface ImageScale : NSObject
+ (UIImage *)imageWithImage:(UIImage *)sourceImage scaledToSize:(CGSize)newSize;
@end
ImageScale.m
#import "ImageScale.h"

@implementation ImageScale

+ (UIImage *)imageWithImage:(UIImage *)sourceImage scaledToSize:(CGSize)newSize
{
    CGFloat targetWidth = newSize.width;
    CGFloat targetHeight = newSize.height;
    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
    // Compare only the alpha-info bits; CGBitmapInfo also carries byte-order flags.
    if ((bitmapInfo & kCGBitmapAlphaInfoMask) == kCGImageAlphaNone) {
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    }
    // Pass 0 for bytesPerRow so Core Graphics computes it for the target size;
    // the source image's bytes-per-row is generally wrong for the scaled context.
    CGContextRef bitmap;
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    }
    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        CGContextRotateCTM(bitmap, M_PI_2); // + 90 degrees
        CGContextTranslateCTM(bitmap, 0, -targetHeight);
    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        CGContextRotateCTM(bitmap, -M_PI_2); // - 90 degrees
        CGContextTranslateCTM(bitmap, -targetWidth, 0);
    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // nothing to do
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
        CGContextRotateCTM(bitmap, -M_PI); // - 180 degrees
    }
    CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return newImage;
}

@end
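Usage might look like this (a sketch; the property and image names are hypothetical):
UIImage *original = [UIImage imageNamed:@"image1.png"];
self.imageView.image = [ImageScale imageWithImage:original scaledToSize:self.imageView.bounds.size];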

Xcode Screenshot EAGLContext [duplicate]

Possible Duplicate:
How to get UIImage from EAGLView?
So I was just wondering if anybody knows any way to save what is stored in an EAGLContext as a UIImage.
I am currently using:
UIGraphicsBeginImageContext(CGSizeMake(768, 1024));
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
in other apps I have and this works fine, but obviously, EAGLContext doesn't have a .layer property. I've tried casting to UIView, but that - unsurprisingly - doesn't work:
UIView *newView = [[UIView alloc] init];
newView = (UIView *)context;
I am drawing to an EAGLContext property on a UIView (technically an EAGLContext on a UIView on another UIView on a View Controller, but I figure that shouldn't make any difference) using OpenGLES 1.
If anybody knows anything about this, even if it's just that I'm completely barking up an impossible tree, please let me know!
Matt
After a few days I finally got a working solution to this. Apple provides code that produces a UIImage from an EAGLView; you then simply need to flip the image vertically, since UIKit's coordinate system is upside down relative to OpenGL's. The link to the documentation where I found this method no longer exists.
Method to capture EAGLView:
- (UIImage *)drawableToCGImage
{
    GLint backingWidth2, backingHeight2;
    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note: replace "viewRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);

    NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
    NSInteger dataLength = width2 * height2 * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS.
    // Create a graphics context with the target size measured in POINTS.
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration.
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0.
        CGFloat scale = self.contentScaleFactor;
        widthInPoints = width2 / scale;
        heightInPoints = height2 / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width2;
        heightInPoints = height2;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    // The UIKit coordinate system is upside down relative to the GL/Quartz coordinate system.
    // Flip the CGImage by rendering it to the flipped bitmap context.
    // The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
    return image;
}
Method to flip the image vertically:
- (UIImage *)flipImageVertically:(UIImage *)originalImage
{
    UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(tempImageView.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, tempImageView.frame.size.height);
    CGContextConcatCTM(context, flipVertical);
    [tempImageView.layer renderInContext:context];
    UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //[tempImageView release];
    return flippedImage;
}
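Putting the two together (a sketch; note that despite its name, drawableToCGImage returns a UIImage):
UIImage *snapshot = [self flipImageVertically:[self drawableToCGImage]];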

Turning an NSImage* into a CGImageRef?

Is there an easy way to do this that works in 10.5?
In 10.6 I can use [nsImage CGImageForProposedRect:NULL context:NULL hints:NULL].
If I'm not using 1-bit black-and-white images (like Group 4 TIFF), I can use bitmaps, but CG bitmaps don't seem to like that setup... Is there a general way of doing this?
I need to do this because I have an IKImageView that seems to only want to add CGImages, but all I've got are NSImages. Currently, I'm using a private setImage:(NSImage*) method that I'd REALLY REALLY rather not be using...
Found the following solution on this page:
NSImage* someImage;
// create the image somehow, load from file, draw into it...
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)[someImage TIFFRepresentation], NULL);
CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source); // the source is no longer needed once the image exists
All the methods seem to be 10.4+ so you should be fine.
[This is the long way around. You should only do this if you're still supporting an old version of OS X. If you can require 10.6 or later, just use the CGImage method instead.] A sketch of these steps follows the list.
Create a CGBitmapContext.
Create an NSGraphicsContext for the bitmap context.
Set the graphics context as the current context.
Draw the image.
Create the CGImage from the contents of the bitmap context.
Release the bitmap context.
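A minimal sketch of those steps (my own code, not the answerer's; it uses the pre-10.10 graphicsContextWithGraphicsPort:flipped: API to stay 10.5-compatible and omits error handling):
CGImageRef CreateCGImageFromNSImage(NSImage *image)
{
    size_t width = (size_t)image.size.width;
    size_t height = (size_t)image.size.height;

    // 1. Create a CGBitmapContext.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, space, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);

    // 2-3. Create an NSGraphicsContext for the bitmap context and make it current.
    NSGraphicsContext *gc = [NSGraphicsContext graphicsContextWithGraphicsPort:ctx flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:gc];

    // 4. Draw the image.
    [image drawInRect:NSMakeRect(0, 0, width, height)
             fromRect:NSZeroRect
            operation:NSCompositeSourceOver
             fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    // 5-6. Create the CGImage from the context's contents, then release the context.
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return cgImage; // caller is responsible for CGImageRelease()
}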
As of Snow Leopard, you can just ask the image to create a CGImage for you, by sending the NSImage a CGImageForProposedRect:context:hints: message.
Here's a more detailed answer for using - CGImageForProposedRect:context:hints::
NSImage *image = [NSImage imageNamed:@"image.jpg"];
NSRect imageRect = NSMakeRect(0, 0, image.size.width, image.size.height);
CGImageRef cgImage = [image CGImageForProposedRect:&imageRect context:NULL hints:nil];
Just did this using CGImageForProposedRect:context:hints:...
NSImage *image; // an image
NSGraphicsContext *context = [NSGraphicsContext currentContext];
CGRect imageCGRect = CGRectMake(0, 0, image.size.width, image.size.height);
NSRect imageRect = NSRectFromCGRect(imageCGRect);
CGImageRef imageRef = [image CGImageForProposedRect:&imageRect context:context hints:nil];
As of 2021, a CALayer's contents can be fed an NSImage directly, but you may still get an [Utility] unsupported surface format: LA08 warning.
After a couple of days of research and testing, I found out that this warning is triggered if you create an NSView with a backing layer (a CALayer) and just use an NSImage to feed its contents. That alone is not much of a problem. But if you try to render it via [self.layer renderInContext:ctx], each render will trigger the warning again.
To keep usage simple, I created a category on NSImage to fix this:
@interface NSImage (LA08FIXExtension)
- (id)CGImageRefID;
@end

@implementation NSImage (LA08FIXExtension)
- (id)CGImageRefID {
    NSSize size = [self size];
    NSRect rect = NSMakeRect(0, 0, size.width, size.height);
    CGImageRef ref = [self CGImageForProposedRect:&rect context:[NSGraphicsContext currentContext] hints:NULL];
    return (__bridge id)ref;
}
@end
Note how it does not return a CGImageRef directly but a bridged cast to id! That is what does the trick.
Now you can use it like this:
CALayer *someLayer = [CALayer layer];
someLayer.frame = ...
someLayer.contents = [NSImage imageNamed:@"SomeImage"].CGImageRefID;
PS: I posted this here because if you run into this problem, Google will lead you to this thread anyway, and all of the answers above inspired this fix.