How can I display an array of pixels on a NSWindow? - objective-c

Very simple question... I have an array of pixels, how do I display them on the screen?
#define WIDTH 10
#define HEIGHT 10
#define SIZE WIDTH*HEIGHT
unsigned short pixels[SIZE];
for (int i = 0; i < WIDTH; i++) {
for (int j = 0; j < HEIGHT; j++) {
pixels[j*HEIGHT + i] = 0xFFFF;
}
}
That's it... now how can I show them on the screen?

Create a new "Cocoa Application" project (if you don't know how to create a Cocoa application, go to Cocoa Dev Center).
Subclass NSView (if you don't know how to subclass a view, read the section "Create the NSView Subclass").
Set your NSWindow to size 400x400 in Interface Builder.
Use this code in your NSView subclass:
#import "MyView.h"
@implementation MyView
#define WIDTH 400
#define HEIGHT 400
#define SIZE (WIDTH*HEIGHT)
#define BYTES_PER_PIXEL 2
#define BITS_PER_COMPONENT 5
#define BITS_PER_PIXEL 16
- (id)initWithFrame:(NSRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code here.
}
return self;
}
- (void)drawRect:(NSRect)dirtyRect
{
// Get current context
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
// Colorspace RGB
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Pixel Matrix allocation
unsigned short *pixels = calloc(SIZE, sizeof(unsigned short));
// Random pixels will give you a non-organized RAINBOW
for (int i = 0; i < WIDTH; i++) {
for (int j = 0; j < HEIGHT; j++) {
pixels[j*WIDTH + i] = arc4random() % USHRT_MAX;
}
}
// Provider
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, pixels, SIZE * BYTES_PER_PIXEL, nil);
// CGImage
CGImageRef image = CGImageCreate(WIDTH,
HEIGHT,
BITS_PER_COMPONENT,
BITS_PER_PIXEL,
BYTES_PER_PIXEL*WIDTH,
colorSpace,
kCGImageAlphaNoneSkipFirst,
// xRRRRRGGGGGBBBBB - 16-bits, first bit is ignored!
provider,
nil, //No decode
NO, //No interpolation
kCGRenderingIntentDefault); // Default rendering
// Draw
CGContextDrawImage(context, self.bounds, image);
// Once everything is drawn on screen we can release everything
CGImageRelease(image);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
free(pixels);
}
@end
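As written, the loop fills the buffer with random colors. If you want a specific color instead, you have to pack it into the xRRRRRGGGGGBBBBB layout yourself; here is a minimal sketch (the helper name is mine, not part of the original answer):
// Sketch only: pack 5-bit red, green, blue components (0–31 each) into the
// xRRRRRGGGGGBBBBB layout used above.
static inline unsigned short PackXRGB1555(unsigned r, unsigned g, unsigned b)
{
    return (unsigned short)(((r & 0x1F) << 10) | ((g & 0x1F) << 5) | (b & 0x1F));
}
// Usage inside drawRect:, e.g. to fill the view with red:
// pixels[j*WIDTH + i] = PackXRGB1555(31, 0, 0);
// Depending on host byte order you may also need to OR a 16-bit byte-order flag
// (e.g. kCGBitmapByteOrder16Host) into the bitmapInfo passed to CGImageCreate.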

There are a bunch of ways to do this. One of the more straightforward is to use CGContextDrawImage. In drawRect:
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, bitmap, bitmap_bytes, nil);
CGImageRef img = CGImageCreate(..., provider, ...);
CGDataProviderRelease(provider);
CGContextDrawImage(ctx, dstRect, img);
CGImageRelease(img);
CGImageCreate has a bunch of arguments which I've left out here, as the correct values will depend on what your bitmap format is. See the CGImage reference for details.
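For example, assuming an 8-bit-per-channel, non-premultiplied RGBA bitmap (an assumption; width and height are placeholders for your own values, and you should substitute whatever matches your actual format), the elided arguments might look like this:
// Sketch for one possible layout: 8 bits per component, 32 bits per pixel, alpha last.
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGImageRef img = CGImageCreate(width, height,
                               8,                 // bits per component
                               32,                // bits per pixel
                               width * 4,         // bytes per row
                               space,
                               kCGBitmapByteOrder32Big | kCGImageAlphaLast,
                               provider,
                               NULL,              // no decode array
                               false,             // no interpolation
                               kCGRenderingIntentDefault);
CGColorSpaceRelease(space);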
Note that, if your bitmap is static, it may make sense to hold on to the CGImageRef instead of disposing of it immediately. You know best how your application works, so you decide whether that makes sense.
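A sketch of that caching, assuming an instance variable _cachedImage (my own name) in your view and the same RGBA layout as the example above:
// Build the CGImage once and reuse it across drawRect: calls.
if (_cachedImage == NULL) {
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bitmap, bitmap_bytes, NULL);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    _cachedImage = CGImageCreate(width, height, 8, 32, width * 4, space,
                                 kCGBitmapByteOrder32Big | kCGImageAlphaLast,
                                 provider, NULL, false, kCGRenderingIntentDefault);
    CGColorSpaceRelease(space);
    CGDataProviderRelease(provider);
}
CGContextDrawImage(ctx, dstRect, _cachedImage);
// Release it with CGImageRelease(_cachedImage) in dealloc, or whenever the bitmap changes.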

I solved this problem by using an NSImageView with NSBitmapImageRep to create the image from the pixel values. There are lots of options for how you create the pixel values. In my case, I used 32-bit pixels (RGBA). In this code, pixels is the giant array of pixel values, and display is the outlet for the NSImageView.
NSBitmapImageRep *myBitmap;
NSImage *myImage;
unsigned char *buff[4];
unsigned char *pixels;
int width, height, rectSize;
NSRect myBounds;
myBounds = [display bounds];
width = myBounds.size.width;
height = myBounds.size.height;
rectSize = width * height;
memset(buff, 0, sizeof(buff));
pixels = malloc(rectSize * 4);
// (fill in pixels array)
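// Illustrative fill only (not part of the original answer): paint every
// pixel opaque red in RGBA byte order.
for (int i = 0; i < rectSize; i++) {
    pixels[4*i + 0] = 255; // R
    pixels[4*i + 1] = 0;   // G
    pixels[4*i + 2] = 0;   // B
    pixels[4*i + 3] = 255; // A
}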
buff[0] = pixels;
myBitmap = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:buff
pixelsWide:width
pixelsHigh:height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bitmapFormat:0
bytesPerRow:(4 * width)
bitsPerPixel:32];
myImage = [[NSImage alloc] init];
[myImage addRepresentation:myBitmap];
[display setImage: myImage];
[myImage release];
[myBitmap release];
free(pixels);

Related

Render pixel buffer with Cocoa

I've searched through various Apple docs and Stack Overflow answers, but nothing really helped; I still have a blank app window. I'm trying to display the content of a pixel buffer in the NSWindow. To do that I've allocated a buffer:
UInt8* row = (UInt8 *) malloc(WINDOW_WIDTH * WINDOW_HEIGHT * bytes_per_pixel);
UInt32 pitch = (WINDOW_WIDTH * bytes_per_pixel);
// For each row
for (UInt32 y = 0; y < WINDOW_HEIGHT; ++y) {
Pixel* pixel = (Pixel *) row;
// For each pixel in a row
for (UInt32 x = 0; x < WINDOW_WIDTH; ++x) {
*pixel++ = 0xFF000000;
}
row += pitch;
}
This should prepare a buffer with red pixels. Then I'm creating a NSBitmapImageRep:
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:(u8 *) row
pixelsWide:WINDOW_WIDTH
pixelsHigh:WINDOW_HEIGHT
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:WINDOW_WIDTH * 4
bitsPerPixel:32];
This is then converted into an NSImage:
NSSize imageSize = NSMakeSize(CGImageGetWidth([imageRep CGImage]), CGImageGetHeight([imageRep CGImage]));
NSImage *image = [[NSImage alloc] initWithSize:imageSize];
[image addRepresentation:imageRep];
Then I'm configuring the view:
NSView *view = [window contentView];
[view setWantsLayer: YES];
[[view layer] setContents: image];
Sadly this doesn't give me the result I expect.
Here are some problems with your code:
1. You are incrementing row by pitch at the end of each y-loop, but you never saved the pointer to the beginning of the buffer. When you create your NSBitmapImageRep, you pass a pointer that is past the end of the buffer.
2. You are passing row as the first (planes) argument of initWithBitmapDataPlanes:..., but you need to pass &row. The documentation says:
“An array of character pointers, each of which points to a buffer containing raw image data. […]”
An “array of character pointers” means (in C) you pass a pointer to a pointer.
3. You say “This should prepare a buffer with red pixels.” But you filled the buffer with 0xFF000000, and you said hasAlpha:YES. Depending on the byte order used by the initializer, either you have set the alpha channel to 0, or you have set the alpha channel to 0xFF but set all of the color channels to 0.
As it happens, you have set each pixel to opaque black (alpha = 0xFF, colors all zero). Try setting each pixel to 0xFF00007F and you'll get a dimmed red (alpha = 0xFF, red = 0x7F).
Thus:
typedef struct {
uint8_t red;
uint8_t green;
uint8_t blue;
uint8_t alpha;
} Pixel;
@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
size_t width = self.window.contentView.bounds.size.width;
size_t height = self.window.contentView.bounds.size.height;
Pixel color = { .red=127, .green=0, .blue=0, .alpha=255 };
size_t pitch = width * sizeof(Pixel);
uint8_t *buffer = malloc(pitch * height);
for (size_t y = 0; y < height; ++y) {
Pixel *row = (Pixel *)(buffer + y * pitch);
for (size_t x = 0; x < width; ++x) {
row[x] = color;
}
}
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:&buffer
pixelsWide:width pixelsHigh:height
bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:pitch bitsPerPixel:sizeof(Pixel) * 8];
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[image addRepresentation:rep];
self.window.contentView.wantsLayer = YES;
self.window.contentView.layer.contents = image;
}
@end
Result: the window content is filled with dark red (screenshot omitted).
Note that I didn't free buffer. If you free buffer before rep is destroyed, things will go wrong. For example, if you just add free(buffer) to the end of applicationDidFinishLaunching:, the window appears gray.
This is a thorny problem to solve. If you use Core Graphics instead, the memory management is all handled properly. You can ask Core Graphics to allocate the buffer for you (by passing NULL instead of a valid pointer), and it will free the buffer when appropriate.
You have to release the Core Graphics objects you create to avoid memory leaks, but you can do that as soon as you're done with them. The Product > Analyze command can also help you find leaks of Core Graphics objects, but will not help you find leaks of un-freed malloc blocks.
Here's what a Core Graphics solution looks like:
typedef struct {
uint8_t red;
uint8_t green;
uint8_t blue;
uint8_t alpha;
} Pixel;
@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
size_t width = self.window.contentView.bounds.size.width;
size_t height = self.window.contentView.bounds.size.height;
CGColorSpaceRef rgb = CGColorSpaceCreateWithName(kCGColorSpaceLinearSRGB);
CGContextRef gc = CGBitmapContextCreate(NULL, width, height, 8, 0, rgb, kCGImageByteOrder32Big | kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(rgb);
size_t pitch = CGBitmapContextGetBytesPerRow(gc);
uint8_t *buffer = CGBitmapContextGetData(gc);
Pixel color = { .red=127, .green=0, .blue=0, .alpha=255 };
for (size_t y = 0; y < height; ++y) {
Pixel *row = (Pixel *)(buffer + y * pitch);
for (size_t x = 0; x < width; ++x) {
row[x] = color;
}
}
CGImageRef image = CGBitmapContextCreateImage(gc);
CGContextRelease(gc);
self.window.contentView.wantsLayer = YES;
self.window.contentView.layer.contents = (__bridge id)image;
CGImageRelease(image);
}
@end
Not sure what's going on, but here's code that has been working for years:
static NSImage* NewImageFromRGBA( const UInt8* rawRGBA, NSInteger width, NSInteger height )
{
size_t rawRGBASize = height*width*4/* sizeof(RGBA) = 4 */;
// Create a bitmap representation, allowing NSBitmapImageRep to allocate its own data buffer
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
pixelsWide:width
pixelsHigh:height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
NSCAssert(imageRep!=nil, @"failed to create NSBitmapImageRep");
NSCAssert((size_t)[imageRep bytesPerPlane]==rawRGBASize, @"alignment or size of CGContext buffer and NSImageRep do not agree");
// Copy the raw bitmap image into the new image representation
memcpy([imageRep bitmapData],rawRGBA,rawRGBASize);
// Create an empty NSImage then add the bitmap representation to it
NSImage* image = [[NSImage alloc] initWithSize:NSMakeSize(width,height)];
[image addRepresentation:imageRep];
return image;
}
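Typical use would look something like the following (a sketch; rgbaBuffer and imageView are my own placeholder names):
// rgbaBuffer points to width * height * 4 bytes of RGBA data you filled in.
NSImage *image = NewImageFromRGBA(rgbaBuffer, width, height);
[imageView setImage:image];          // e.g. an NSImageView outlet
// or: view.layer.contents = image;  // if the view is layer-backed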

Confusion about getting color of a point on screen with Objective C

I am trying to get the color value of a pixel on screen with a Cocoa app. The idea is that the app should be able to get a color value anywhere on the screen, even outside the scope of the app itself.
I did a bunch of research and this is essentially what I am doing
- (void) keepRunning:(NSTimer *)timer{
NSPoint mouseLoc = [NSEvent mouseLocation];
uint32_t count = 0;
CGDirectDisplayID displayForPoint;
if (CGGetDisplaysWithPoint(NSPointToCGPoint(mouseLoc), 1, &displayForPoint, &count) != kCGErrorSuccess)
{
NSLog(@"Break");
return;
}
CGImageRef image = CGDisplayCreateImageForRect(displayForPoint, CGRectMake(mouseLoc.x-10, mouseLoc.y-10, 1, 1));
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithCGImage:image];
CGImageRelease(image);
NSColor* color = [bitmap colorAtX:0 y:0];
NSLog(@"%@", color);
}
keepRunning fires every 100 ms or so. This seems to be the correct way of doing this. The problem is, I don't get the correct color values. All output values are gray and basically wrong. Any ideas about what I am doing wrong? Is it something to do with transparency?
I changed your code a bit and put it in my animation timer. This seems to work. Note that I'm only creating a 1 pixel image and I flipped the mouseLoc.y value.
CGFloat screenHeight = 900.0; // This is the display height of my machine
NSPoint mouseLoc = [NSEvent mouseLocation];
uint32_t count = 0;
CGDirectDisplayID displayForPoint;
if (CGGetDisplaysWithPoint(NSPointToCGPoint(mouseLoc), 1, &displayForPoint, &count) != kCGErrorSuccess)
{
NSLog(@"Break");
return;
}
CGImageRef image = CGDisplayCreateImageForRect(displayForPoint, CGRectMake(mouseLoc.x, screenHeight - mouseLoc.y, 1, 1)); // mouseLoc.y is flipped
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithCGImage:image];
CGImageRelease(image);
NSColor* color = [bitmap colorAtX:0 y:0];
NSLog(@"%@", color);
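If you'd rather not hard-code the display height, one option (a sketch of my own, not part of the answer above) is to derive it from the display the point is on:
// Use the bounds of the display under the cursor instead of a constant.
CGRect displayBounds = CGDisplayBounds(displayForPoint);
CGFloat screenHeight = displayBounds.size.height; // replaces the hard-coded 900.0
Note that for displays other than the main one you would also need to account for the display's origin when flipping the y coordinate.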

Detect tap on curved line?

I'm trying to draw a line with a specific width. I searched for examples online, but I only found examples using straight lines. I need curved lines. Also, I need to detect whether the user touched within the line. Is it possible to achieve this using Objective-C and SpriteKit? If so, can someone provide an example?
You can use UIBezierPath to create bezier curves (nice smooth curves). You can specify this path for a CAShapeLayer and add that as a sublayer to your view:
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(10, 150)];
[path addCurveToPoint:CGPointMake(110, 150) controlPoint1:CGPointMake(40, 100) controlPoint2:CGPointMake(80, 100)];
[path addCurveToPoint:CGPointMake(210, 150) controlPoint1:CGPointMake(140, 200) controlPoint2:CGPointMake(170, 200)];
[path addCurveToPoint:CGPointMake(310, 150) controlPoint1:CGPointMake(250, 100) controlPoint2:CGPointMake(280, 100)];
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 10;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
[self.view.layer addSublayer:layer];
If you want to randomize it a little, you can just randomize some of the curves. If you want some fuzziness, add some shadow. If you want the ends to be round, specify a rounded line cap:
UIBezierPath *path = [UIBezierPath bezierPath];
CGPoint point = CGPointMake(10, 100);
[path moveToPoint:point];
CGPoint controlPoint1;
CGPoint controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
for (NSInteger i = 0; i < 5; i++) {
controlPoint1 = CGPointMake(point.x + (point.x - controlPoint2.x), 50.0);
point.x += 40.0 + arc4random_uniform(20);
controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
[path addCurveToPoint:point controlPoint1:controlPoint1 controlPoint2:controlPoint2];
}
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 5;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
layer.shadowColor = [UIColor redColor].CGColor;
layer.shadowRadius = 2.0;
layer.shadowOpacity = 1.0;
layer.shadowOffset = CGSizeZero;
layer.lineCap = kCALineCapRound;
[self.view.layer addSublayer:layer];
If you want it to be even more irregular, break those beziers into smaller segments, but the idea would be the same. The only trick with conjoined bezier curves is that you want to make sure that the second control point of one curve is in line with the first control point of the next one, or else you end up with sharp discontinuities in the curves.
If you want to detect if and when a user taps on it, that's more complicated. But what you have to do is:
Make a snapshot of the view:
- (UIImage *)captureView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 1.0); // usually I'd use 0.0, but we'll use 1.0 here so that the tap point of the gesture matches the pixel of the snapshot
if ([view respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
BOOL success = [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
NSAssert(success, @"drawViewHierarchyInRect failed");
} else {
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Get the color of the pixel at the coordinate the user tapped:
- (void)handleTap:(UITapGestureRecognizer *)gesture
{
CGPoint point = [gesture locationInView:gesture.view];
CGFloat red, green, blue, alpha;
UIColor *color = [self image:self.image colorAtPoint:point];
[color getRed:&red green:&green blue:&blue alpha:&alpha];
if (green < 0.9 && blue < 0.9 && red > 0.9)
NSLog(@"tapped on curve");
else
NSLog(@"didn't tap on curve");
}
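This assumes a tap gesture recognizer has been attached and that self.image holds the snapshot; a minimal setup sketch (property and placement are my assumptions):
// e.g. in viewDidLoad:
self.image = [self captureView:self.view]; // snapshot taken with the method above
UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self
                                                                      action:@selector(handleTap:)];
[self.view addGestureRecognizer:tap];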
Here I adapted Apple's code for getting the pixel buffer in order to determine the color of the pixel the user tapped on:
// adapted from https://developer.apple.com/library/mac/qa/qa1509/_index.html
- (UIColor *)image:(UIImage *)image colorAtPoint:(CGPoint)point
{
UIColor *color;
CGImageRef imageRef = image.CGImage;
// Create the bitmap context
CGContextRef context = [self createARGBBitmapContextForImage:imageRef];
NSAssert(context, @"error creating context");
// Get image width, height. We'll use the entire image.
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = {{0,0},{width,height}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(context, rect, imageRef);
// Now we can get a pointer to the image data associated with the bitmap
// context.
uint8_t *data = CGBitmapContextGetData (context);
if (data != NULL) {
size_t offset = (NSInteger) point.y * 4 * width + (NSInteger) point.x * 4;
uint8_t alpha = data[offset];
uint8_t red = data[offset+1];
uint8_t green = data[offset+2];
uint8_t blue = data[offset+3];
color = [UIColor colorWithRed:red / 255.0 green:green / 255.0 blue:blue / 255.0 alpha:alpha / 255.0];
}
// When finished, release the context
CGContextRelease(context);
// Free image data memory for the context
if (data) {
free(data); // we used malloc in createARGBBitmapContextForImage, so free it
}
return color;
}
- (CGContextRef) createARGBBitmapContextForImage:(CGImageRef) inImage
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
size_t bitmapByteCount;
size_t bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
// Use the device RGB color space.
colorSpace = CGColorSpaceCreateDeviceRGB(); // or CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
NSAssert(colorSpace, @"Error allocating color space");
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc(bitmapByteCount);
NSAssert(bitmapData, @"Unable to allocate bitmap buffer");
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
(CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
NSAssert(context, @"Context not created!");
// Make sure and release colorspace before returning
CGColorSpaceRelease( colorSpace );
return context;
}
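As an aside that is not part of the original answer: if all you need is hit-testing against the stroked curve (rather than reading back pixel colors), a purely geometric check is also possible by thickening the path and asking whether it contains the tapped point, for example:
// Sketch: expand the bezier path by its stroke width, then hit-test the point.
CGPathRef stroked = CGPathCreateCopyByStrokingPath(path.CGPath, NULL,
                                                   10.0,            // the layer's lineWidth
                                                   kCGLineCapRound,
                                                   kCGLineJoinRound,
                                                   0.0);
BOOL onCurve = CGPathContainsPoint(stroked, NULL, point, false);
CGPathRelease(stroked);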

UIImage loop with image scale - Received memory warning

I have a memory leak problem and I haven't been able to find an answer for a week. :(
When I loop over more than 20 images in my photos array, I always receive a memory warning.
Maybe someone can help me with my frustrating problem?
Debug Message
"Received memory warning."
Loop in my UIView
images = [[NSMutableArray alloc] init];
for (Photo *photo in photos) {
[images addObject:[[UIImage imageWithData:photo.image_compressed] ToSize:videoSize]];
}
ToSize Method
#import "UIImage+toSize.h"
@implementation UIImage (toSize)
- (UIImage *)ToSize:(CGSize)newSize {
float originalWidth = self.size.width;
float originalHeight = self.size.height;
float newWidth = newSize.width;
float newHeight = newSize.height;
float xMargin = 0.0f;
float yMargin = 0.0f;
float ratioWidth = (originalWidth / originalHeight) * newSize.height;
float ratioHeight = (originalHeight / originalWidth) * newSize.width;
// LEFT & RIGHT Margin
if (ratioHeight > newSize.height)
{
// set new image size
newWidth = ratioWidth;
newHeight = newSize.height;
// calculate margin
xMargin = (newSize.width - ratioWidth) / 2;
} else if (ratioWidth > newSize.width)
{
// set new image size
newWidth = newSize.width;
newHeight = ratioHeight;
// calculate margin
yMargin = (newSize.height - ratioHeight) / 2;
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, newSize.width, newSize.height,
CGImageGetBitsPerComponent(self.CGImage), 0,
colorSpace,
CGImageGetBitmapInfo(self.CGImage));
CGContextDrawImage(ctx, CGRectMake(xMargin, yMargin, newWidth, newHeight), self.CGImage);
CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
UIImage *img = [UIImage imageWithCGImage:cgimg scale:self.scale orientation:UIImageOrientationUp];
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);
CGImageRelease(cgimg);
return img;
}
@end
"Received memory warning." is not a memory leak, that's the OS telling you it's short on memory, at which point it calls the delegate method associated with low memory:
- (void)didReceiveMemoryWarning
This gives running apps a chance to clear out some cruft so the OS doesn't have to terminate anything; if not enough memory is freed, apps start getting killed. Unless you can see a leak in Instruments, I really don't think this is a memory leak.
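That said, one common way to keep peak memory down in a loop like the one above (a sketch against the asker's loop; it does not change the resulting array) is to drain an autorelease pool on every iteration so the intermediate full-size UIImages are released promptly:
images = [[NSMutableArray alloc] init];
for (Photo *photo in photos) {
    @autoreleasepool {
        // The full-size image created by imageWithData: is autoreleased and
        // freed at the end of each iteration; only the scaled copy is kept.
        UIImage *scaled = [[UIImage imageWithData:photo.image_compressed] ToSize:videoSize];
        [images addObject:scaled];
    }
}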

Where is the memory leak in this OpenGL-ES screenshot function used on the iPad?

I am using this function to create screenshots of my iPad app. I am using the Sparrow Framework in my project. SPDisplayObject uses OpenGL ES-based rendering.
@implementation SPDisplayObject (ScreenshotFromSPDisplayObject)
- (UIImage *)getImageScreenshot{
int WIDTH = 1024;
int HEIGHT = 768;
CGSize size = CGSizeMake(WIDTH,HEIGHT);
// Create a buffer for pixels
GLuint bufferLength = size.width * size.height * 4;
GLubyte *buffer = (GLubyte *) malloc(bufferLength);
//Read Pixels from OpenGL
glReadPixels(0,0,size.width,size.height,GL_RGBA,GL_UNSIGNED_BYTE,buffer);
//Make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
//Configure image
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * size.width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate(size.width,size.height,bitsPerComponent,bitsPerPixel,bytesPerRow,colorSpaceRef,bitmapInfo,provider,NULL,NO,renderingIntent);
uint32_t *pixels = (uint32_t *)malloc(bufferLength);
CGContextRef context = CGBitmapContextCreate(pixels, WIDTH, HEIGHT, 8, WIDTH*4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context,0, size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, size.width, size.height), iref);
UIImage* screenshot = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
UIGraphicsEndImageContext();
//free memory
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
CGImageRelease(iref);
CGContextRelease(context);
free(buffer);
free(pixels);
return screenshot;
}
@end
I use it like this from an UIViewController:
@interface
UIImageView *screenShot;
UIImage *tempImage;
-(void) deactivePage
{
// attach screenshot
tempImage = [self.stage getImageScreenshot];
screenShot = [[UIImageView alloc] initWithFrame:CGRectMake(0,0,1024,768)];
screenShot.image = tempImage;
[self.view addSubview:screenShot];
}
- (void)dealloc
{
screenShot.image = nil;
[screenShot removeFromSuperview];
[screenShot release];
[super dealloc];
}
The UIViewController is released and deallocated approx. 5 seconds after the "deactivePage" function is called.
The screenshot is used for a view transition.
Taking screenshots works like a charm, but with every screenshot my app grows by around 10 MB, so I can do this around 15 times until the app crashes.
So where is the leak? I am stuck... :-(
In the getImageScreenshot function you do this:
UIImage* screenshot = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
which creates a CGImageRef and then creates an (autoreleased) UIImage from it.
What happens here is that this CGImageRef remains alive and is never released, so it's leaking.
What you should do, instead, is this:
CGImageRef myCGImage = CGBitmapContextCreateImage(context);
UIImage* screenshot = [UIImage imageWithCGImage:myCGImage];
CGImageRelease(myCGImage);
Have you tried looking at it with Instruments (Leaks or Heapshots)? You should see these CGImageRef objects still alive.
I don't see where you deallocate tempImage in the UIViewController when it's going down.