OpenCV memory stacking up (not released properly) - Objective-C

I am using a 3rd-party library for image processing. The method below seems to be the cause of a large memory spike (+30MB) every time it executes, and that memory is never released properly. Repeated use eventually crashes the app (memory overload). The image I pass in comes straight from the camera of my iPhone 6.
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // Width
                                        cvMat.rows,                 // Height
                                        8,                          // Bits per component
                                        8 * cvMat.elemSize(),       // Bits per pixel
                                        cvMat.step[0],              // Bytes per row
                                        colorSpace,                 // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
                                        provider,                   // CGDataProviderRef
                                        NULL,                       // Decode
                                        false,                      // Should interpolate
                                        kCGRenderingIntentDefault); // Intent
    // UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return image;
}
I suspect the problem is here: (__bridge CFDataRef)data. I can't call CFRelease on it because that makes the app crash. The project runs with ARC.
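(For context on why that CFRelease crashes: a plain __bridge cast transfers no ownership, so ARC still manages the NSData, and an extra CFRelease is an over-release. A minimal illustration of the difference, using a made-up buffer buf of length len:)

    NSData *data = [NSData dataWithBytes:buf length:len];

    // __bridge: no ownership transfer. ARC keeps managing `data`,
    // so calling CFRelease on this value over-releases and crashes.
    CFDataRef viewOnly = (__bridge CFDataRef)data;

    // CFBridgingRetain (i.e. __bridge_retained): ownership is transferred,
    // and exactly one CFRelease is then required to balance it.
    CFDataRef owned = CFBridgingRetain(data);
    CFRelease(owned);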
EDIT:
It seems the same code is also on the official OpenCV website:
http://docs.opencv.org/2.4/doc/tutorials/ios/image_manipulation/image_manipulation.html
Gah!
EDIT 2
Here is how I use it (the code below is actually also part of the 3rd-party lib, but I added some lines).
cv::Mat undistorted = cv::Mat(cvSize(maxWidth, maxHeight), CV_8UC4); // here nothing
cv::Mat original = [MMOpenCVHelper cvMatFromUIImage:_adjustedImage]; // here +30MB
//NSLog(@"%f %f %f %f", ptBottomLeft.x, ptBottomRight.x, ptTopRight.x, ptTopLeft.x);
cv::warpPerspective(original, undistorted,
                    cv::getPerspectiveTransform(src, dst), cvSize(maxWidth, maxHeight)); // here +16MB
_cropRect.hidden = YES;
@autoreleasepool {
    _sourceImageView.image = [MMOpenCVHelper UIImageFromCVMat:undistorted]; // here +15MB (PROBLEM)
}
original.release(); // here -30MB (THIS IS OK)
undistorted.release(); // here -16MB (ok)

I guess this is a hard subject, since not many people know OpenCV that well. What I found is that most answers to similar problems involve wrapping the call in @autoreleasepool, but that doesn't seem to release the memory either.
As a temporary workaround I halve the size of the image fed to this method. At least the app lasts longer before it finally crashes. It just works.
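One direction worth profiling (a sketch under assumptions, not a verified fix; UIImageFromCVMatOwned and releasePixelBuffer are names I made up): drop the intermediate autoreleased NSData and hand Core Graphics a malloc'd copy of the pixels with an explicit release callback, so the buffer's lifetime is tied directly to the CGImage instead of to an autorelease pool. This assumes the Mat is continuous, which the original code assumes as well:

    // Free the malloc'd pixel copy as soon as Core Graphics is done with it.
    static void releasePixelBuffer(void *info, const void *data, size_t size)
    {
        free((void *)data);
    }

    + (UIImage *)UIImageFromCVMatOwned:(cv::Mat)cvMat
    {
        size_t byteCount = cvMat.elemSize() * cvMat.total();
        void *pixels = malloc(byteCount);
        memcpy(pixels, cvMat.data, byteCount);

        CGColorSpaceRef colorSpace = (cvMat.elemSize() == 1)
            ? CGColorSpaceCreateDeviceGray()
            : CGColorSpaceCreateDeviceRGB();
        CGDataProviderRef provider =
            CGDataProviderCreateWithData(NULL, pixels, byteCount, releasePixelBuffer);
        CGImageRef imageRef = CGImageCreate(cvMat.cols, cvMat.rows,
                                            8, 8 * cvMat.elemSize(), cvMat.step[0],
                                            colorSpace,
                                            kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                            provider, NULL, false,
                                            kCGRenderingIntentDefault);
        // alloc/init instead of imageWithCGImage: keeps the result out of the
        // autorelease pool under ARC.
        UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace);
        return image;
    }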

Related

Memory problems while converting cvMat to UIImage

Before posting this question here I read all the material and similar posts on it, but I can't get the main "idea" of what is happening and how to fix it. In ten similar questions everyone fixed this problem with @autoreleasepool; in my case I was unable to achieve my goal that way. While converting a cv::Mat to a UIImage, memory keeps growing depending on the image size.
Below are the steps I perform before converting the Mat to a UIImage:
cv::Mat undistorted = cv::Mat(cvSize(maxWidth,maxHeight), CV_8UC1);
cv::Mat original = [MatStructure convertUIImageToMat:adjustedImage];
cv::warpPerspective(original, undistorted, cv::getPerspectiveTransform(src, dst), cvSize(maxWidth, maxHeight));
original.release();
adjustedImage = [MatStructure convertMatToUIImage:undistorted];
undistorted.release();
The problem shows up while converting the Mat to a UIImage: memory climbs to 400 MB and rises further on every cycle.
+ (UIImage *)convertMatToUIImage:(cv::Mat)cvMat {
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGBitmapInfo bmInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
    CGImageRef imageRef = CGImageCreate(cvMat.cols,            // width
                                        cvMat.rows,            // height
                                        8,                     // bits per component
                                        8 * cvMat.elemSize(),  // bits per pixel
                                        cvMat.step.p[0],       // bytesPerRow
                                        colorSpace,            // colorspace
                                        bmInfo,                // bitmap info
                                        provider,              // CGDataProviderRef
                                        NULL,                  // decode
                                        false,                 // should interpolate
                                        kCGRenderingIntentDefault // intent
                                        );
    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    cvMat.release(); // this line is optional.
    return image;
}
I have seen a lot of similar code, and every single example works the same way as this one.
I believe the problem lies in (__bridge CFDataRef), which ARC can't clean up. If I try CFRelease((__bridge CFDataRef)data), the app crashes because it looks for allocated memory that has already been freed.
I am using OpenCV 3 and have tried its MatToUIImage method, but the problem still exists. The Leaks profiler shows no leaks at all, and the most memory-expensive task is convertMatToUIImage.
I have been reading about this all day but still can't find a useful solution.
Currently I work in Swift 3.0 with a class that inherits from XXX and uses the Obj-C class to crop something and then return a UIImage. In deinit I set this inherited class's property to nil, but the problem still exists. I also think that dataWithBytes duplicates the memory: if I have 16 MB at the start, after creating the NSData it becomes 32 MB.
Please suggest any useful threads about this problem; I will gladly read them all. Thanks for the help.
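(Editorial note on the dataWithBytes point above: that duplication can be avoided with dataWithBytesNoCopy:length:freeWhenDone:, which wraps the Mat's existing buffer instead of copying it. This is a sketch, not a drop-in fix; it is only safe while the Mat keeps its buffer alive:)

    // Reference cvMat's pixels without copying them.
    // Safe only while `cvMat` stays alive (and its data is continuous).
    NSData *data = [NSData dataWithBytesNoCopy:cvMat.data
                                        length:cvMat.elemSize() * cvMat.total()
                                  freeWhenDone:NO];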
After working on this problem for more than three days, I had to rewrite the function, and it worked 100%; I have tested it on five different devices.
CFRelease, free() and @autoreleasepool did not help me at all, so I implemented this:
data = UIImageJPEGRepresentation([[UIImage alloc] initWithCGImage:imageRef], 0.2f); // because images are 30MB and up
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *appFile = [documentsDirectory stringByAppendingPathComponent:@"MyFile.jpeg"];
[data writeToFile:appFile atomically:NO];
data = nil;
After this solution everything worked fine. So: grab the UIImage, convert it to NSData, save that to the local documents directory, and the only thing left is to read the data back from the directory. I hope this thread helps someone one day.
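(For completeness, not part of the original answer: reading the image back from that same path is one line, assuming the appFile variable from the snippet above:)

    UIImage *reloaded = [UIImage imageWithContentsOfFile:appFile];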

Memory leak with Core Graphics

I've been tasked with solving a memory leak in a custom Objective-C class for a legacy app that uses garbage collection.
The class takes in NSData from a JPEG file and can thumb (resize) the image. There is a method that returns a new NSData object containing the newly resized image.
ImageThumber *imgt = [ImageThumber withNSData:dataObjectFromJpeg];
[imgt thumbImage:1024];
NSData *smallImage = [imgt imageData];
[imgt thumbImage:256];
NSData *extraSmallImage = [imgt imageData];
It does what it's supposed to do, but it's been discovered that every ImageThumber created allocates an ImageIO_jpeg_Data object that is never deallocated. This was found in Instruments.
When created via ImageThumber withNSData:(NSData *), it creates a CGImage and stores it in a private CGImageRef variable.
I found that if thumbImage: isn't called, the ImageIO_jpeg_Data is deallocated when the ImageThumber is deallocated, which leads me to believe the problem lies somewhere within the thumbImage method. Calling the thumbImage method multiple times doesn't create extra ImageIO_jpeg_Data objects.
I have little experience with Core Graphics and garbage collection.
+ (id)SAMImageDataWithNSData:(NSData *)data
{
    SAMImageData *new = [[[self alloc] init] autorelease];
    new.imageData = [NSMutableData dataWithData:data];
    CFDataRef imgData = (CFDataRef)data;
    CGDataProviderRef imgDataProvider;
    imgDataProvider = CGDataProviderCreateWithCFData(imgData);
    new->_cgImageRef = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(imgDataProvider);
    int width = (int)CGImageGetWidth(new->_cgImageRef);
    int height = (int)CGImageGetHeight(new->_cgImageRef);
    new.originalSize = NSMakeSize(width, height);
    return new;
}
- (void)thumbImage:(int)length
{
    /* simple logic to calculate new width and height */

    // If the next line is commented out the problem doesn't exist.
    // You just don't get the image resize.
    [self resizeCGImageToWidth:newSize.width andHeight:newSize.height];

    CFMutableDataRef workingData = (CFMutableDataRef)[[NSMutableData alloc] initWithCapacity:0];
    CGImageDestinationRef dest;
    dest = CGImageDestinationCreateWithData(workingData, kUTTypeJPEG, 1, NULL);
    CGImageDestinationAddImage(dest, _cgImageRef, NULL);
    CGImageDestinationFinalize(dest);
    CFRelease(dest);
    self.imageData = (NSMutableData *)workingData;
}
This is where I believe the problem exists:
- (void)resizeCGImageToWidth:(int)width andHeight:(int)height {
    CGColorSpaceRef colorspace = CGImageGetColorSpace(_cgImageRef);
    CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                                 CGImageGetBitsPerComponent(_cgImageRef),
                                                 CGImageGetBytesPerRow(_cgImageRef),
                                                 colorspace,
                                                 kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorspace);
    if (context == NULL)
        return;
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), _cgImageRef);
    _cgImageRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
}
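(Editorial note: a likely culprit, not confirmed in this thread, is that the last lines overwrite _cgImageRef without releasing the previous image, so the original JPEG-backed image, i.e. the ImageIO_jpeg_Data allocation, stays alive forever once a resize happens. The method also releases a colorspace obtained from a Get function, the same over-release discussed in the next question below. A sketch of the method with both points addressed:)

    - (void)resizeCGImageToWidth:(int)width andHeight:(int)height {
        // Get rule: CGImageGetColorSpace transfers no ownership, so no release.
        CGColorSpaceRef colorspace = CGImageGetColorSpace(_cgImageRef);
        CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                                     CGImageGetBitsPerComponent(_cgImageRef),
                                                     0, // let CG pick bytes-per-row for the new width
                                                     colorspace,
                                                     kCGImageAlphaPremultipliedFirst);
        if (context == NULL)
            return;
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), _cgImageRef);
        CGImageRelease(_cgImageRef); // drop the old image before replacing it
        _cgImageRef = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
    }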

CoreGraphics random crash

This is a little test program that duplicates an intermittent issue in a larger class. The real class creates four thumbs of various sizes.
This main.m program crashes about 1 run in 5 with EXC_BAD_ACCESS and highlights CGImageRelease(imgRef). If I comment out CGImageRelease(imgRef), the app leaks memory heavily but doesn't crash...
#import <Foundation/Foundation.h>
#import <Cocoa/Cocoa.h>

int main(int argc, const char * argv[])
{
    @autoreleasepool {
        NSString *image = @"/Users/xxx/Pictures/wallpaper/7gjMT.jpg";
        NSData *imageData = [NSData dataWithContentsOfFile:image];
        CFDataRef imgData = (__bridge CFDataRef)imageData;
        CGImageRef imgRef;
        CGDataProviderRef imgDataProvider = NULL;
        imgDataProvider = CGDataProviderCreateWithCFData(imgData);
        imgRef = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
        CGDataProviderRelease(imgDataProvider);
        for (int i = 0; i < 1000; i++) {
            // create context, keeping original image properties
            CGColorSpaceRef colorspace = CGImageGetColorSpace(imgRef);
            CGContextRef context = CGBitmapContextCreate(NULL, 2560, 1440,
                                                         CGImageGetBitsPerComponent(imgRef),
                                                         CGImageGetBytesPerRow(imgRef),
                                                         colorspace,
                                                         kCGImageAlphaPremultipliedFirst);
            CGColorSpaceRelease(colorspace);
            // draw image to context (resizing it)
            CGContextDrawImage(context, CGRectMake(0, 0, 2560, 1440), imgRef);
            // extract resulting image from context
            CGImageRef newImgRef;
            newImgRef = CGBitmapContextCreateImage(context);
            CGImageRelease(imgRef);
            CGContextRelease(context);
            imgRef = newImgRef;
        }
    }
    return 0;
}
I found that if I release the context first, then in 1 out of 10 failures it highlights CGImageGetBytesPerRow(imgRef) with the same error.
I added a breakpoint for malloc_error_break and got this on CGImageRelease:
Are CGImageRelease and CGContextRelease releasing a shared resource?
The main problem is almost certainly that you're releasing a colorspace that you don't own. CGImageGetColorSpace(imgRef) does not give you ownership of the returned colorspace object, so you shouldn't be calling CGColorSpaceRelease(colorspace) later. (By the way, although I happened to get a different failure that clued me into the problem, the static analyzer catches this, too.)
As a secondary issue, I was getting failures to create the context because you're using an inappropriate bytes-per-row value. CGImageGetBytesPerRow(imgRef) is the bytes-per-row of that image, but that's only appropriate for the width of that image. Given that you're hard-coding a width rather than using the width of the image (since you're scaling), you should not be using the bytes-per-row of the image.
I guess it will work if you're scaling down, but it will waste space. If you're scaling up, it fails.
In any case, pass 0. That lets CGBitmapContextCreate() compute an optimal value.
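Applied to the loop body above, both fixes would look roughly like this (a sketch):

    for (int i = 0; i < 1000; i++) {
        // Get rule: we do not own this colorspace, so it must not be released.
        CGColorSpaceRef colorspace = CGImageGetColorSpace(imgRef);
        CGContextRef context = CGBitmapContextCreate(NULL, 2560, 1440,
                                                     CGImageGetBitsPerComponent(imgRef),
                                                     0, // let CGBitmapContextCreate pick bytes-per-row
                                                     colorspace,
                                                     kCGImageAlphaPremultipliedFirst);
        CGContextDrawImage(context, CGRectMake(0, 0, 2560, 1440), imgRef);
        CGImageRef newImgRef = CGBitmapContextCreateImage(context);
        CGImageRelease(imgRef);
        CGContextRelease(context);
        imgRef = newImgRef;
    }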

How to create CGImageRef from NSData string data (NOT UIImage)

How does one create a new CGImageRef without a UIImage? I can't use image.CGImage.
I am receiving a base64 encoded image as a std::string from a server process. The first part of the code below simulates receiving the encoded string.
- (UIImage *)testChangeImageToBase64String
{
    UIImage *processedImage = [UIImage imageNamed:@"myFile.jpg"];

    // UIImage to unsigned char *
    CGImageRef imageRef = processedImage.CGImage;
    NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));

    // encode data to Base64 NSString
    NSString *base64EncodedDataString = [data base64EncodedStringWithOptions:0];

    // create encoded std::string
    std::string encoded([base64EncodedDataString UTF8String]);

    // ***************************************************************************
    // This is where we call the server method and receive the bytes in a std::string
    std::string received = encoded;
    // ***************************************************************************

    // get Base64 encoded std::string into NSString
    NSString *base64EncodedCstring = [NSString stringWithCString:encoded.c_str() encoding:[NSString defaultCStringEncoding]];

    // NSData from the Base64 encoded std::string
    NSData *nsdataFromBase64String = [[NSData alloc] initWithBase64EncodedString:base64EncodedCstring options:0];
Everything is good... until I try to populate the new image.
When I get the encoded string back, I need a CGImageRef to get the data into the correct format to populate a UIImage. If the data is not in the correct format, the UIImage will be nil.
I need to create a new CGImageRef with the nsdataFromBase64String.
Something like:
CGImageRef base64ImageRef = [newCGImageRefFromString:nsdataFromBase64String];
Then I can use imageWithCGImage to put the data into a new UIImage.
Something like:
UIImage *imageFromImageRef = [UIImage imageWithCGImage: base64ImageRef];
Then I can return the UIImage.
return newImage;
}
Please note that the following line will NOT work:
UIImage *newImage = [[UIImage alloc] initWithData:nsdataFromBase64String];
The data needs to be in the correct format or the UIImage will be nil. Hence, my question, "How do I create a CGImageRef with NSData?"
Short-ish answer, since this is mostly just going over what I mentioned in NSChat:
Figure out what the format of the image you're receiving is as well as its size (width and height, in pixels). You mentioned in chat that it's just straight ARGB8 data, so keep that in mind. I'm not sure how you're receiving the other info, if at all.
Using CGImageCreate, create a new image using what you know about the image already (i.e., presumably you know its width, height, and so on — if you don't, you should be packing this in with the image you're sending). E.g., this bundle of boilerplate that nobody likes to write:
// NOTE: have not tested if this even compiles -- consider it pseudocode.
CGImageRef image;
CFDataRef bridgedData;
CGDataProviderRef dataProvider;
CGColorSpaceRef colorSpace;
CGBitmapInfo infoFlags = kCGImageAlphaFirst; // ARGB

// Get a color space
colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);

// Assuming the decoded data is only pixel data
bridgedData = (__bridge CFDataRef)decodedData;
dataProvider = CGDataProviderCreateWithCFData(bridgedData);

// Given size_t width, height which you should already have somehow
image = CGImageCreate(
    width, height, /* bpc */ 8, /* bpp */ 32, /* pitch */ width * 4,
    colorSpace, infoFlags,
    dataProvider, /* decode array */ NULL, /* interpolate? */ TRUE,
    kCGRenderingIntentDefault /* adjust intent according to use */
);

// Release things the image took ownership of.
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colorSpace);
That code's written with the idea that it's guaranteed to be ARGB_8888, the data is correct, nothing could possibly return NULL, etc. Copy/pasting the above code could potentially cause everything in a three mile radius to explode. Error handling's up to you (e.g., CGColorSpaceCreateWithName can potentially return null).
Allocate a UIImage using the CGImage. Since the UIImage will take ownership of/copy the CGImage, release your CGImageRef (actually, the docs say nothing about what UIImage does with the CGImage, but you're not going to use it anymore, so you must release yours).
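A minimal sketch of that last step, continuing from the image variable in the snippet above:

    // UIImage keeps whatever reference it needs; ours is no longer required.
    UIImage *uiImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);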

I need help optimizing BGR888 blitting to NSView

This is the best I've come up with for blitting a 24-bit BGR image out to an NSView.
I did trim a significant amount of CPU time by ensuring that the NSWindow host also had the same colorSpace.
I think there are 4 or 5 pixel copies going on here:
in the vImage conversion (required)
calling CGDataProviderCreateWithData
calling CGImageCreate
creating the NSBitmapImageRep bitmap
in the final blit with drawInRect (required)
Anyone want to chime in on improving it?
Any help would be much appreciated.
{
    // one-time setup code
    CGColorSpaceRef useColorSpace = nil;
    int w = 1920;
    int h = 1080;
    [theWindow setColorSpace:[NSColorSpace genericRGBColorSpace]];
    // setup vImage buffers (not listed here)
    // srcBuffer is my 24-bit BGR image (malloc-ed to be w*h*3)
    // dstBuffer is for the resulting 32-bit RGBA image (malloc-ed to be w*h*4)
    ...

    // this is called @ 30-60fps
    if (!useColorSpace)
        useColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    vImage_Error err = vImageConvert_BGR888toRGBA8888(srcBuffer, NULL, 0xff, dstBuffer, NO, 0);
    CGDataProviderRef newProvider = CGDataProviderCreateWithData(NULL, dstBuffer->data, w*h*4, myReleaseProvider);
    CGImageRef myImageRGBA = CGImageCreate(w, h, 8, 32, w*4, useColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, newProvider, NULL, false, kCGRenderingIntentDefault);
    CGDataProviderRelease(newProvider);
    // store myImageRGBA in an array of frames (using NSObject wrappers) for later access (setNeedsDisplay:)
    ...
}
- (void)drawRect:(NSRect)dirtyRect
{
    // this is called @ 30-60fps
    CGImageRef storedImage = ...; // retrieve from array
    NSBitmapImageRep *repImg = [[NSBitmapImageRep alloc] initWithCGImage:storedImage];
    CGRect myFrame = CGRectMake(0, 0, CGImageGetWidth(storedImage), CGImageGetHeight(storedImage));
    [repImg drawInRect:myFrame fromRect:myFrame operation:NSCompositeCopy fraction:1.0 respectFlipped:TRUE hints:nil];
    // free image from array (not listed here)
}
// this is called when the CGDataProvider is ready to release its data
void myReleaseProvider(void *info, const void *data, size_t size)
{
    if (data) {
        free((void *)data);
        data = nil;
    }
}
Use CGColorSpaceCreateDeviceRGB instead of genericRGB to avoid a colorspace conversion inside CG. Use kCGImageAlphaNoneSkipLast instead of kCGImageAlphaLast since we know the alpha is opaque, which allows a copy instead of a blend.
After you make those changes, it would be useful to run an Instruments time profile on it to show where the time is going.
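Applied to the snippet above, those two suggestions amount to (a sketch):

    // Device RGB avoids a colorspace conversion inside CG; NoneSkipLast tells CG
    // the alpha byte is meaningless, so it can copy instead of blend.
    useColorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef myImageRGBA = CGImageCreate(w, h, 8, 32, w*4, useColorSpace,
                                           kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast,
                                           newProvider, NULL, false, kCGRenderingIntentDefault);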