I have working code for reading the screen or an offscreen buffer and saving the result to the iPad photo album as a PNG with transparency. The images themselves appear fine in the iPad photo viewer and in any other image viewer. However, in the native Photos app the thumbnails show portions of other images from the album through the transparent sections of the thumbnail.
Has anyone else experienced this problem, and if so found a fix for it? Here's my offscreen (partial) code for generating the images:
// Create an OpenGL ES 1.1 context for offscreen rendering
EAGLContext *myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
[EAGLContext setCurrentContext:myContext];
[... set up render buffer code removed for display ...]
[EAGLContext setCurrentContext:myContext];
// Load the current image as an OpenGL texture
ImageTextureManager *imageManager = [[ImageTextureManager alloc] init];
[imageManager loadImageTexture:gAppModel.currentImageRef];
[imageManager release];
glBindRenderbufferOES(GL_RENDERBUFFER_OES, offscreenColorRenderbuffer);
[self renderTransformedImage]; // render the image to the buffer
[myContext presentRenderbuffer:GL_RENDERBUFFER_OES];
// grab image from frameBuffer and return it as UIImage
NSInteger x = 0, y = 0;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
UIGraphicsBeginImageContext(CGSizeMake(width, height));
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, width, height), iref);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // returns an autoreleased UIImage
NSData* imdata = UIImagePNGRepresentation(image); // get PNG representation
UIImage* myImagePNG = [UIImage imageWithData:imdata]; // wrap UIImage around PNG representation
UIImageWriteToSavedPhotosAlbum(myImagePNG, nil, nil, nil);
UIGraphicsEndImageContext();
Thanks to medvedNick for his offscreen rendering code: Drawing into OpenGL ES framebuffer and getting UIImage from it on iPhone
Very simple question... I have an array of pixels, how do I display them on the screen?
#define WIDTH 10
#define HEIGHT 10
#define SIZE (WIDTH*HEIGHT)
unsigned short pixels[SIZE];
for (int i = 0; i < WIDTH; i++) {
    for (int j = 0; j < HEIGHT; j++) {
        pixels[j*WIDTH + i] = 0xFFFF; // row-major: row j, column i
    }
}
That's it... now how can I show them on the screen?
Create a new "Cocoa Application" (if you don't know how to create a cocoa application go to Cocoa Dev Center)
Subclass NSView (if you don't know how to subclass a view read section "Create the NSView Subclass")
Set your NSWindow to size 400x400 on interface builder
Use this code in your NSView
#import "MyView.h"
@implementation MyView
#define WIDTH 400
#define HEIGHT 400
#define SIZE (WIDTH*HEIGHT)
#define BYTES_PER_PIXEL 2
#define BITS_PER_COMPONENT 5
#define BITS_PER_PIXEL 16
- (id)initWithFrame:(NSRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code here.
}
return self;
}
- (void)drawRect:(NSRect)dirtyRect
{
// Get current context
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
// Colorspace RGB
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Pixel buffer allocation (BYTES_PER_PIXEL bytes per pixel)
unsigned short *pixels = calloc(SIZE, sizeof(unsigned short));
// Random pixels will give you a non-organized RAINBOW
for (int i = 0; i < WIDTH; i++) {
    for (int j = 0; j < HEIGHT; j++) {
        pixels[i + j*WIDTH] = arc4random() % USHRT_MAX;
    }
}
// Provider (note: the length is in bytes, not pixels)
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, pixels, SIZE * BYTES_PER_PIXEL, nil);
// CGImage
CGImageRef image = CGImageCreate(WIDTH,
HEIGHT,
BITS_PER_COMPONENT,
BITS_PER_PIXEL,
BYTES_PER_PIXEL*WIDTH,
colorSpace,
kCGImageAlphaNoneSkipFirst,
// xRRRRRGGGGGBBBBB - 16-bits, first bit is ignored!
provider,
nil, //No decode
NO, //No interpolation
kCGRenderingIntentDefault); // Default rendering
// Draw
CGContextDrawImage(context, self.bounds, image);
// Once everything is drawn on screen we can release everything
CGImageRelease(image);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
free(pixels); // the provider was created with a NULL release callback, so free manually
}
@end
There's a bunch of ways to do this. One of the more straightforward is to use CGContextDrawImage. In drawRect:
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, bitmap, bitmap_bytes, nil);
CGImageRef img = CGImageCreate(..., provider, ...);
CGDataProviderRelease(provider);
CGContextDrawImage(ctx, dstRect, img);
CGImageRelease(img);
CGImageCreate has a bunch of arguments which I've left out here, as the correct values will depend on what your bitmap format is. See the CGImage reference for details.
Note that, if your bitmap is static, it may make sense to hold on to the CGImageRef instead of disposing of it immediately. You know best how your application works, so you decide whether that makes sense.
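For a concrete (hypothetical) example, a 32-bit RGBA bitmap with 8 bits per component could be created like this; the bitmapInfo and bytes-per-row values are assumptions that must match your actual pixel layout:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef img = CGImageCreate(width, height,
                               8,            // bits per component
                               32,           // bits per pixel
                               width * 4,    // bytes per row
                               colorSpace,
                               kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                               provider,
                               NULL,         // no decode array
                               false,        // no interpolation
                               kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);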
I solved this problem by using an NSImageView with an NSBitmapImageRep to create the image from the pixel values. There are lots of options for how you create the pixel values. In my case, I used 32-bit pixels (RGBA). In this code, pixels is the array of pixel values and display is the outlet for the NSImageView.
NSBitmapImageRep *myBitmap;
NSImage *myImage;
unsigned char *buff[4];
unsigned char *pixels;
int width, height, rectSize;
NSRect myBounds;
myBounds = [display bounds];
width = myBounds.size.width;
height = myBounds.size.height;
rectSize = width * height;
memset(buff, 0, sizeof(buff));
pixels = malloc(rectSize * 4);
(fill in pixels array)
buff[0] = pixels;
myBitmap = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:buff
pixelsWide:width
pixelsHigh:height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bitmapFormat:0
bytesPerRow:(4 * width)
bitsPerPixel:32];
myImage = [[NSImage alloc] init];
[myImage addRepresentation:myBitmap];
[display setImage: myImage];
[myImage release];
[myBitmap release];
free(pixels);
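As an illustration of one way to fill the pixels array (a hypothetical red gradient; it assumes the 8-bit RGBA layout declared above):
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        unsigned char *p = pixels + (row * width + col) * 4;
        p[0] = (unsigned char)(255 * col / width); // R ramps left to right
        p[1] = 0;                                  // G
        p[2] = 0;                                  // B
        p[3] = 255;                                // A: fully opaque
    }
}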
I'm using the OpenCV framework with Xcode and want to convert from cv::Mat or IplImage to UIImage. How do I do that? Thanks.
Note: most implementations don't correctly handle an alpha channel or convert from OpenCV's BGR pixel format to iOS's RGB.
This will correctly convert from cv::Mat to UIImage:
+(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat {
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.step.p[0]*cvMat.rows];
CGColorSpaceRef colorSpace;
CGBitmapInfo bitmapInfo;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapInfo = kCGBitmapByteOrder32Little | (
cvMat.elemSize() == 3? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
);
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(
cvMat.cols, //width
cvMat.rows, //height
8, //bits per component
8 * cvMat.elemSize(), //bits per pixel
cvMat.step[0], //bytesPerRow
colorSpace, //colorspace
bitmapInfo, // bitmap info
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
And to convert from UIImage to cv::Mat:
+ (cv::Mat)cvMatWithImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;
// check whether the UIImage is greyscale already
if (numberOfComponents == 1){
cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
}
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
bitmapInfo); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
return cvMat;
}
As of OpenCV 2.4.6, this functionality is already included.
Just include opencv2/highgui/ios.h
In OpenCV 3 this include has changed to:
opencv2/imgcodecs/ios.h
And you can use these functions:
UIImage* MatToUIImage(const cv::Mat& image);
void UIImageToMat(const UIImage* image, cv::Mat& m, bool alphaExist = false);
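A minimal usage sketch (assuming an Objective-C++ source file, since cv::Mat is a C++ type, and a hypothetical UIImage named photo):
#import <opencv2/imgcodecs/ios.h> // OpenCV 3; use opencv2/highgui/ios.h on 2.4.x
cv::Mat mat;
UIImageToMat(photo, mat);               // UIImage -> cv::Mat
UIImage *roundTrip = MatToUIImage(mat); // cv::Mat -> UIImage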
Here is the correct method to convert a cv::Mat to a UIImage.
Every other implementation I've seen, including OpenCV's documentation, is incorrect: they do not correctly convert from OpenCV's BGR to iOS's RGB, and they do not consider the alpha channel (if one exists). See the comments above the bitmapInfo assignment.
+(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat {
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
CGColorSpaceRef colorSpace;
CGBitmapInfo bitmapInfo;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
// OpenCV defaults to either BGR or ABGR. In CoreGraphics land,
// this means using the "32Little" byte order, and potentially
// skipping the first pixel. These may need to be adjusted if the
// input matrix uses a different pixel format.
bitmapInfo = kCGBitmapByteOrder32Little | (
cvMat.elemSize() == 3? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
);
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(
cvMat.cols, //width
cvMat.rows, //height
8, //bits per component
8 * cvMat.elemSize(), //bits per pixel
cvMat.step[0], //bytesPerRow
colorSpace, //colorspace
bitmapInfo, // bitmap info
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
Here I am listing all the needed conversion methods together.
Converting a color UIImage to a grayscale UIImage, without using OpenCV, using only iOS library functions:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
/* changes start here */
// Create bitmap image info from pixel data in current context
CGImageRef grayImage = CGBitmapContextCreateImage(context);
// release the colorspace and graphics context
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
// make a new alpha-only graphics context
context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, nil, kCGImageAlphaOnly);
// draw image into context with no colorspace
CGContextDrawImage(context, imageRect, [image CGImage]);
// create alpha bitmap mask from current context
CGImageRef mask = CGBitmapContextCreateImage(context);
// release graphics context
CGContextRelease(context);
// make a UIImage from the grayscale image with the alpha mask
CGImageRef maskedImage = CGImageCreateWithMask(grayImage, mask);
UIImage *grayScaleImage = [UIImage imageWithCGImage:maskedImage scale:image.scale orientation:image.imageOrientation];
// release the CG images (the masked image was previously leaked when created inline)
CGImageRelease(maskedImage);
CGImageRelease(grayImage);
CGImageRelease(mask);
// return the new grayscale image
return grayScaleImage;
}
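Usage is then a one-liner (sourceImage being whatever UIImage you want converted):
UIImage *gray = [self convertImageToGrayScale:sourceImage];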
Converting a color UIImage to a color cv::Mat. Note that you will find this piece of code in several places, but there is a small modification here: the "swap channels" portion. Without it, the color channels end up swapped, since OpenCV expects BGR order.
Also notice the following lines, which keep the orientation of the image intact:
if (image.imageOrientation == UIImageOrientationLeft
|| image.imageOrientation == UIImageOrientationRight) {
cols = image.size.height;
rows = image.size.width;
}
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
if (image.imageOrientation == UIImageOrientationLeft
|| image.imageOrientation == UIImageOrientationRight) {
cols = image.size.height;
rows = image.size.width;
}
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
//--swap channels -- //
std::vector<cv::Mat> ch;
cv::split(cvMat,ch);
std::swap(ch[0],ch[2]);
cv::merge(ch,cvMat);
return cvMat;
}
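As a side note, the split/swap/merge block above can also be expressed as a single cv::cvtColor call, which performs the same R/B channel reorder; this is an equivalent alternative, not what the code above uses:
cv::Mat bgra;
cv::cvtColor(cvMat, bgra, cv::COLOR_RGBA2BGRA); // CV_RGBA2BGRA on older OpenCV versions
cvMat = bgra;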
Converting a UIImage to a grayscale cv::Mat. Notice the line
cv::Mat cvMat(rows, cols, CV_8UC4, cv::Scalar(1,2,3,4)); // 8 bits per component, 4 channels
instead of
cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
This line is needed, otherwise the code will throw an error.
- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
// cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
cv::Mat cvMat(rows, cols, CV_8UC4, cv::Scalar(1,2,3,4)); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
return cvMat;
}
Now finally, converting a cv::Mat (color, binary, or gray) to a UIImage (color, binary, or gray). Notice the line:
UIImage *finalImage = [UIImage imageWithCGImage:imageRef scale:1 orientation:self.originalImage.imageOrientation];
This line will help to keep the original orientation of the image.
Enjoy!
-(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat {
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
CGColorSpaceRef colorSpace;
CGBitmapInfo bitmapInfo;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapInfo = kCGBitmapByteOrder32Little | (
cvMat.elemSize() == 3? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
);
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(
cvMat.cols, //width
cvMat.rows, //height
8, //bits per component
8 * cvMat.elemSize(), //bits per pixel
cvMat.step[0], //bytesPerRow
colorSpace, //colorspace
bitmapInfo, // bitmap info
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef scale:1 orientation:self.originalImage.imageOrientation];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
You should consider using native OpenCV functions to convert back and forth:
#import <opencv2/imgcodecs/ios.h>
...
UIImage* MatToUIImage(const cv::Mat& image);
void UIImageToMat(const UIImage* image,
cv::Mat& m, bool alphaExist = false);
Note: if your UIImage comes from the camera, you should 'normalize' it (see iOS UIImagePickerController result image orientation after upload) before converting to cv::Mat, since OpenCV does not take Exif data into account. If you don't, the result will be misoriented.
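A common way to do that normalization is to redraw the image so the orientation gets baked into the pixels; a minimal sketch, assuming iOS 4 or later:
- (UIImage *)normalizedImage:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp) return image; // already upright
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}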
As a category:
#import <UIKit/UIKit.h>
#import <opencv2/core/core.hpp>
using namespace cv;
@interface UIImage (OCV)
-(id)initWithOImage:(cv::Mat)oImage;
-(cv::Mat)oImage;
@end
and
#import "UIImage+OCV.h"
@implementation UIImage (OCV)
-(id)initWithOImage:(cv::Mat)oImage
{
NSData *data = [NSData dataWithBytes:oImage.data length:oImage.elemSize() * oImage.total()];
CGColorSpaceRef colorSpace;
if (oImage.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGImageRef imageRef = CGImageCreate(oImage.cols, // Width
oImage.rows, // Height
8, // Bits per component
8 * oImage.elemSize(), // Bits per pixel
oImage.step[0], // Bytes per row
colorSpace, // Colorspace
(oImage.elemSize() == 4 ? kCGImageAlphaNoneSkipLast : kCGImageAlphaNone) | kCGBitmapByteOrderDefault, // Bitmap info flags: 4-channel data needs the skip-last variant
provider, // CGDataProviderRef
NULL, // Decode
false, // Should interpolate
kCGRenderingIntentDefault); // Intent
self = [self initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return self;
}
-(cv::Mat)oImage
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
CGFloat cols = self.size.width;
CGFloat rows = self.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
CGContextRelease(contextRef);
return cvMat;
}
@end
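Usage would then look something like this (mat being a hypothetical cv::Mat you already have; drop the autorelease under ARC):
UIImage *uiImage = [[[UIImage alloc] initWithOImage:mat] autorelease];
cv::Mat roundTrip = [uiImage oImage];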
Here is what I have experienced when converting between UIImage and cv::Mat:
When I used the method:
UIImage* MatToUIImage(const cv::Mat& image);
for converting cv::Mat to UIImage and the method:
void UIImageToMat(const UIImage* image, cv::Mat& m);
for converting UIImage to cv::Mat, these methods didn't work correctly in the Simulator.
After I deployed the app on a real device, however, there weren't any problems.
Best regards,
Nazar
Can anyone help me find the memory leak in the code below, which adjusts the brightness of an image?
+(NSImage *)brightness:(NSImage *)image andLevel:(int)level
{
CGImageSourceRef source= CGImageSourceCreateWithData((CFDataRef)[image TIFFRepresentation], NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
NSSize size = image.size;
CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
//getting bitmap data from receiver's CGImage
CFDataRef dataref=CGDataProviderCopyData(CGImageGetDataProvider(img));
//getting bytes from bitmap image
UInt8 *data=(UInt8 *)CFDataGetBytePtr(dataref);
//getting length
int length=CFDataGetLength(dataref);
// Perform the operation on the pixels
// (assuming 4 bytes per pixel, e.g. RGBA; step by 4 so each component is touched exactly once)
for (int index = 0; index < length; index += 4)
{
    // Go for BRIGHTNESS on the three color components, leaving alpha alone
    for (int i = 0; i < 3; i++)
    {
        //printf("This pixel is:%d",data[index + i]);
        if (data[index+i] + level < 0)
        {
            data[index+i] = 0;
        }
        else if (data[index+i] + level > 255)
        {
            data[index+i] = 255;
        }
        else
        {
            data[index+i] += level;
        }
    }
}
// .. Take image attributes
size_t width=CGImageGetWidth(img);
size_t height=CGImageGetHeight(img);
size_t bitsPerComponent=CGImageGetBitsPerComponent(img);
size_t bitsPerPixel=CGImageGetBitsPerPixel(img);
size_t bytesPerRow=CGImageGetBytesPerRow(img);
// .. Do the pixel manupulation
CGColorSpaceRef colorspace=CGImageGetColorSpace(img);
CGBitmapInfo bitmapInfo=CGImageGetBitmapInfo(img);
CFDataRef newData=CFDataCreate(NULL,data,length);
CGDataProviderRef provider=CGDataProviderCreateWithCFData(newData);
// .. Get the Image out of this raw data
CGImageRef newImg=CGImageCreate(width,height,bitsPerComponent,bitsPerPixel,bytesPerRow,colorspace,bitmapInfo,provider,NULL,true,kCGRenderingIntentDefault);
// .. Prepare the image from raw data
NSImage* newImage = [[NSImage alloc] initWithSize:size];
//To make the drawing appear on the image instead of on the screen
[newImage lockFocus];
//Draws an image into a graphics context.
CGContextDrawImage([[NSGraphicsContext currentContext] graphicsPort],*(CGRect*)&rect, newImg);
[newImage unlockFocus];
// .. done with all,so release the references
CFRelease(source);
CFRelease(img);
CFRelease(dataref);
CFRelease(colorspace);
CFRelease(newData);
CFRelease(provider);
return [newImage autorelease];
}
You've forgotten to release newImg, which you obtained via a Create function. Also, you shouldn't release colorspace, since you didn't obtain it via a Create or Copy function and you haven't retained it.
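Concretely, the cleanup section would change like this (a sketch showing only the affected lines):
CGImageRelease(newImg);    // was missing: newImg came from CGImageCreate
// CFRelease(colorspace);  // removed: it came from CGImageGetColorSpace, a Get, not Create/Copy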
Replace the following code lines:
NSImage* newImage = [[NSImage alloc] initWithSize:size];
with
NSImage* newImage = [[[NSImage alloc] initWithSize:size] autorelease];
and this one: return [newImage autorelease]; with return newImage;
I'm not 100% sure about this, but give it a try; hope it helps.
:)
I am using this function to create screenshots of my iPad app. I am using the Sparrow Framework in my project; SPDisplayObject uses OpenGL ES based rendering.
@implementation SPDisplayObject (ScreenshotFromSPDisplayObject)
- (UIImage *)getImageScreenshot{
int WIDTH = 1024;
int HEIGHT = 768;
CGSize size = CGSizeMake(WIDTH,HEIGHT);
//Create a buffer for the pixels
GLuint bufferLength = size.width * size.height * 4;
GLubyte *buffer = (GLubyte *) malloc(bufferLength);
//Read pixels from OpenGL
glReadPixels(0, 0, size.width, size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
//Make a data provider with the data
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
//Configure image
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * size.width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate(size.width,size.height,bitsPerComponent,bitsPerPixel,bytesPerRow,colorSpaceRef,bitmapInfo,provider,NULL,NO,renderingIntent);
uint32_t *pixels = (uint32_t *)malloc(bufferLength);
CGContextRef context = CGBitmapContextCreate(pixels, WIDTH, HEIGHT, 8, WIDTH*4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context,0, size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, size.width, size.height), iref);
UIImage* screenshot = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
UIGraphicsEndImageContext();
//free memory
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
CGImageRelease(iref);
CGContextRelease(context);
free(buffer);
free(pixels);
return screenshot;
}
@end
I use it like this from a UIViewController:
@interface
UIImageView *screenShot;
UIImage *tempImage;
-(void) deactivePage
{
// attach screenshot
tempImage = [self.stage getImageScreenshot];
screenShot = [[UIImageView alloc] initWithFrame:CGRectMake(0,0,1024,768)];
screenShot.image = tempImage;
[self.view addSubview:screenShot];
}
- (void)dealloc
{
screenShot.image = nil;
[screenShot removeFromSuperview];
[screenShot release];
[super dealloc];
}
The UIViewController is released and deallocated approximately 5 seconds after the deactivePage method is called.
The screenshot is used for a view transition.
Taking screenshots works like a charm, but with every screenshot my app grows by around 10 MB, so I can do this around 15 times before the app crashes.
So where is the leak? I am stuck.. :-(
In the getImageScreenshot function you do this:
UIImage* screenshot = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
which creates a CGImageRef and then creates an autoreleased UIImage from it.
What happens here is that the CGImageRef remains alive and is never released, so it leaks.
What you should do, instead, is this:
CGImageRef myCGImage = CGBitmapContextCreateImage(context);
UIImage* screenshot = [UIImage imageWithCGImage:myCGImage];
CGImageRelease(myCGImage);
Have you tried looking at it with Instruments (Leaks or Heapshots)? You should see these CGImageRef objects still alive.
I don't see where you release tempImage in the UIViewController when it's going down.
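Under manual reference counting, one way to make the tempImage ownership explicit (a sketch, assuming tempImage stays an instance variable) would be:
// in deactivePage: take ownership of the autoreleased screenshot
tempImage = [[self.stage getImageScreenshot] retain];
// in dealloc: balance that retain
[tempImage release];
tempImage = nil;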