Memory problems while converting cv::Mat to UIImage - objective-c

Before posting this question here I read all the materials and similar posts on it, but I can't grasp the main idea of what is happening and how to fix it. In about ten similar questions everyone was fixing this problem with @autoreleasepool, but in my case I was unable to achieve my goal that way. While converting cv::Mat to UIImage, memory keeps growing, depending on the image size.
Below are the steps I perform before converting the Mat to a UIImage:
cv::Mat undistorted = cv::Mat(cvSize(maxWidth,maxHeight), CV_8UC1);
cv::Mat original = [MatStructure convertUIImageToMat:adjustedImage];
cv::warpPerspective(original, undistorted, cv::getPerspectiveTransform(src, dst), cvSize(maxWidth, maxHeight));
original.release();
adjustedImage = [MatStructure convertMatToUIImage:undistorted];
undistorted.release();
The problem shows up while converting the Mat to a UIImage: memory goes up to 400 MB and rises further on every cycle.
+ (UIImage *)convertMatToUIImage:(cv::Mat)cvMat {
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGBitmapInfo bmInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                // width
                                        cvMat.rows,                // height
                                        8,                         // bits per component
                                        8 * cvMat.elemSize(),      // bits per pixel
                                        cvMat.step.p[0],           // bytesPerRow
                                        colorSpace,                // colorspace
                                        bmInfo,                    // bitmap info
                                        provider,                  // CGDataProviderRef
                                        NULL,                      // decode
                                        false,                     // should interpolate
                                        kCGRenderingIntentDefault  // intent
                                        );
    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    cvMat.release(); // this line is optional.
    return image;
}
I have seen a lot of similar code, and every example works just like this one.
I believe the problem lies in (__bridge CFDataRef): ARC can't clean that data up, and if I try CFRelease((__bridge CFDataRef)data) the app crashes, because it goes looking for allocated memory that has already been freed.
I am using OpenCV 3 and have tried its MatToUIImage method, but the problem still exists. The Leaks profiler shows no leaks at all, and the most memory-expensive task is convertMatToUIImage.
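For reference, that stock converter is declared in OpenCV 3's iOS imgcodecs header, so trying it looks like this (shown for comparison only, since as noted it behaved the same for me):

#import <opencv2/imgcodecs/ios.h>
// ...
UIImage *converted = MatToUIImage(undistorted); // OpenCV 3's own cv::Mat-to-UIImage helper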
I have been reading about this all day but can't find any useful solution yet.
Currently I work in Swift 3.0, in a class that inherits from class XXX; it uses the Objective-C class to crop something and then return a UIImage. In deinit I set this inherited class's property to nil, but the problem still exists. I also think that dataWithBytes: duplicates the memory: if I have 16 MB at the start, it becomes 32 MB after creating the NSData.
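For what it's worth, that duplication can be avoided by skipping the NSData wrapper entirely: malloc a buffer, copy the Mat's pixels into it once, and hand it to the provider with a release callback so it is freed when CoreGraphics is done with it. A rough sketch (releasePixels is an illustrative name, not part of the original code, and it assumes the Mat is continuous):

static void releasePixels(void *info, const void *data, size_t size) {
    free((void *)data); // called by CoreGraphics when the provider is released
}

// Inside the converter, replacing the NSData + CGDataProviderCreateWithCFData pair:
size_t byteCount = cvMat.elemSize() * cvMat.total();
void *buffer = malloc(byteCount);
memcpy(buffer, cvMat.data, byteCount); // assumes cvMat.isContinuous()
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, byteCount, releasePixels);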
And please, if you can suggest useful threads about this problem, I will be glad to read them all. Thanks for the help.

After working on this problem for more than three days, I had to rewrite the function, and it worked 100%; I have tested it on five different devices.
CFRelease, free(), and @autoreleasepool did not help me at all, so I implemented this instead:
NSData *data = UIImageJPEGRepresentation([[UIImage alloc] initWithCGImage:imageRef], 0.2f); // because images are 30 MB and up
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *appFile = [documentsDirectory stringByAppendingPathComponent:@"MyFile.jpeg"];
[data writeToFile:appFile atomically:NO];
data = nil;
After this solution everything worked fine. So I grab the UIImage, convert it to NSData, save that to the local directory, and then the only thing left is to read the data back from the directory. I hope this thread helps someone one day.
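Reading the image back later is just the reverse step, e.g.:

UIImage *restored = [UIImage imageWithContentsOfFile:appFile]; // unlike imageNamed:, this does not cache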

Related

Open CV memory stacked (not released properly)

I am using a 3rd-party library for image processing. This method seems to be the cause of large memory usage (+30 MB) every time it executes, and it doesn't release properly. Repeated use ends up crashing the app (memory overload). The image used comes directly from the camera of my iPhone 6.
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // Width
                                        cvMat.rows,                 // Height
                                        8,                          // Bits per component
                                        8 * cvMat.elemSize(),       // Bits per pixel
                                        cvMat.step[0],              // Bytes per row
                                        colorSpace,                 // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
                                        provider,                   // CGDataProviderRef
                                        NULL,                       // Decode
                                        false,                      // Should interpolate
                                        kCGRenderingIntentDefault); // Intent
    // UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return image;
}
I suspect the problem is here: (__bridge CFDataRef)data. I can't use CFRelease on it because that makes the app crash. The project uses ARC.
EDIT:
It seems the same code is also on the official OpenCV website:
http://docs.opencv.org/2.4/doc/tutorials/ios/image_manipulation/image_manipulation.html
Gah!
EDIT 2
Here is how I use it (the code below is actually also part of the 3rd-party lib, but I added some lines):
cv::Mat undistorted = cv::Mat( cvSize(maxWidth,maxHeight), CV_8UC4); // here nothing
cv::Mat original = [MMOpenCVHelper cvMatFromUIImage:_adjustedImage]; // here +30MB
//NSLog(@"%f %f %f %f",ptBottomLeft.x,ptBottomRight.x,ptTopRight.x,ptTopLeft.x);
cv::warpPerspective(original, undistorted,
                    cv::getPerspectiveTransform(src, dst), cvSize(maxWidth, maxHeight)); // here +16MB
_cropRect.hidden=YES;
@autoreleasepool {
    _sourceImageView.image=[MMOpenCVHelper UIImageFromCVMat:undistorted]; // here +15MB (PROBLEM)
}
original.release(); // here -30MB (THIS IS OK)
undistorted.release(); // here -16MB (ok)
I guess this is a hard subject, since not many people know OpenCV that well. What I found is that most answers to similar problems involve putting @autoreleasepool where this method is used, but that doesn't seem to release the memory either.
As a temporary solution I resize the image fed to this method by half. At least the app lasts longer before it finally crashes. It just works.
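For reference, the half-size workaround can also be applied to the Mat itself before conversion; a small sketch using cv::resize (not part of the library code):

cv::Mat halved;
cv::resize(undistorted, halved, cv::Size(), 0.5, 0.5, cv::INTER_AREA); // halve both dimensions before converting
_sourceImageView.image = [MMOpenCVHelper UIImageFromCVMat:halved];
halved.release();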

Saving NSBitmapImageRep as image

I've done an exhaustive search on this, and I know similar questions have been posted before about NSBitmapImageRep, but none of them seem specific to what I'm trying to do, which is simply:
Read in an image from the desktop (but NOT display it)
Create an NSBitmap representation of that image
Iterate through the pixels to change some colours
Save the modified bitmap representation as a separate file
Since I've never worked with bitmaps before, I thought I'd just try to create and save one first, and worry about modifying pixels later. That seemed really straightforward, but I just can't get it to work. Apart from the file-saving aspect, most of the code is borrowed from another answer found on Stack Overflow and shown below:
-(void)processBitmapImage:(NSString*)aFilepath
{
    NSImage *theImage = [[NSImage alloc] initWithContentsOfFile:aFilepath];
    if (theImage)
    {
        CGImageRef CGImage = [theImage CGImageForProposedRect:nil context:nil hints:nil];
        NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithCGImage:CGImage];
        NSInteger width = [imageRep pixelsWide];
        NSInteger height = [imageRep pixelsHigh];
        long rowBytes = [imageRep bytesPerRow];
        // above matches the original size indicating NSBitmapImageRep was created successfully
        printf("WIDE pix = %ld\n", width);
        printf("HIGH pix = %ld\n", height);
        printf("Row bytes = %ld\n", rowBytes);
        // We'll worry about this part later...
        /*
        unsigned char* pixels = [imageRep bitmapData];
        int row, col;
        for (row = 0; row < height; row++)
        {
            // etc ...
            for (col = 0; col < width; col++)
            {
                // etc...
            }
        }
        */
        // So, let's see if we can just SAVE the (unmodified) bitmap first ...
        NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:nil];
        NSString *destinationStr = [self pathForDataFile];
        BOOL returnVal = [pngData writeToFile:destinationStr atomically:NO];
        NSLog(@"did we succeed?: %@", (returnVal ? @"YES" : @"NO")); // the writeToFile call FAILS!
        [imageRep release];
    }
    [theImage release];
}
While I like this code for its simplicity, another potential issue down the road might be that the Apple docs advise us to treat bitmaps returned by initWithCGImage: as read-only objects.
Can anyone please tell me where I'm going wrong with this code, and how I could modify it to work? While the overall concept looks okay to my non-expert eye, I suspect I'm making a dumb mistake and overlooking something quite basic. Thanks in advance :-)
That's a fairly roundabout way to create the NSBitmapImageRep. Try creating it like this:
NSBitmapImageRep* imageRep = [NSBitmapImageRep imageRepWithContentsOfFile:aFilepath];
Of course, the above does not give you ownership of the image rep object, so don't release it at the end.
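Putting it together, the whole load-and-save round trip collapses to something like this (an untested sketch in the spirit of the above):

NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithContentsOfFile:aFilepath];
if (imageRep) {
    NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:nil];
    BOOL returnVal = [pngData writeToFile:[self pathForDataFile] atomically:NO];
    NSLog(@"did we succeed?: %@", (returnVal ? @"YES" : @"NO"));
}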

How to create CGImageRef from NSData string data (NOT UIImage)

How does one create a new CGImageRef without a UIImage? I can't use image.CGImage.
I am receiving a base64 encoded image as a std::string from a server process. The first part of the code below simulates receiving the encoded string.
- (UIImage *)testChangeImageToBase64String
{
    UIImage *processedImage = [UIImage imageNamed:@"myFile.jpg"];
    // UIImage to unsigned char *
    CGImageRef imageRef = processedImage.CGImage;
    NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
    // encode data to Base64 NSString
    NSString *base64EncodedDataString = [data base64EncodedStringWithOptions:0];
    // create encoded std::string
    std::string encoded([base64EncodedDataString UTF8String]);
    // ***************************************************************************
    // This is where we call the server method and receive the bytes in a std::string
    std::string received = encoded;
    // ***************************************************************************
    // get Base64 encoded std::string into NSString
    NSString *base64EncodedCstring = [NSString stringWithCString:encoded.c_str() encoding:[NSString defaultCStringEncoding]];
    // NSData from the Base64 encoded std::string
    NSData *nsdataFromBase64String = [[NSData alloc] initWithBase64EncodedString:base64EncodedCstring options:0];
Everything is good!!!!..... until I try to populate the newImage.
When I get the encoded string, I need to get a CGImageRef to get the data back into the correct format to populate a UIImage. If the data is not in the correct format the UIImage will be nil.
I need to create a new CGImageRef with the nsdataFromBase64String.
Something like:
CGImageRef base64ImageRef = [newCGImageRefFromString:nsdataFromBase64String];
Then I can use imageWithCGImage to put the data into a new UIImage.
Something like:
UIImage *imageFromImageRef = [UIImage imageWithCGImage: base64ImageRef];
Then I can return the UIImage.
return newImage;
}
Please note that the following line will NOT work:
UIImage *newImage = [[UIImage alloc] initWithData:nsdataFromBase64String];
The data needs to be in the correct format or the UIImage will be nil. Hence, my question, "How do I create a CGImageRef with NSData?"
Short-ish answer, since this is mostly just going over what I mentioned in NSChat:
Figure out what the format of the image you're receiving is as well as its size (width and height, in pixels). You mentioned in chat that it's just straight ARGB8 data, so keep that in mind. I'm not sure how you're receiving the other info, if at all.
Using CGImageCreate, create a new image using what you know about the image already (i.e., presumably you know its width, height, and so on — if you don't, you should be packing this in with the image you're sending). E.g., this bundle of boilerplate that nobody likes to write:
// NOTE: have not tested if this even compiles -- consider it pseudocode.
CGImageRef image;
CFDataRef bridgedData;
CGDataProviderRef dataProvider;
CGColorSpaceRef colorSpace;
CGBitmapInfo infoFlags = kCGImageAlphaFirst; // ARGB

// Get a color space
colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
// Assuming the decoded data is only pixel data
bridgedData = (__bridge CFDataRef)decodedData;
dataProvider = CGDataProviderCreateWithCFData(bridgedData);
// Given size_t width, height which you should already have somehow
image = CGImageCreate(
    width, height, /* bpc */ 8, /* bpp */ 32, /* pitch */ width * 4,
    colorSpace, infoFlags,
    dataProvider, /* decode array */ NULL, /* interpolate? */ TRUE,
    kCGRenderingIntentDefault /* adjust intent according to use */
);
// Release things the image took ownership of.
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colorSpace);
That code's written with the idea that it's guaranteed to be ARGB_8888, the data is correct, nothing could possibly return NULL, etc. Copy/pasting the above code could potentially cause everything in a three mile radius to explode. Error handling's up to you (e.g., CGColorSpaceCreateWithName can potentially return null).
Allocate a UIImage using the CGImage. Since the UIImage will take ownership of/copy the CGImage, release your CGImageRef (actually, the docs say nothing about what UIImage does with the CGImage, but you're not going to use it anymore, so you must release yours).
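In code, that final step is simply:

UIImage *uiImage = [UIImage imageWithCGImage:image]; // UIImage retains what it needs
CGImageRelease(image);                               // balance the CGImageCreate above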

How do I access and manipulate JPEG image pixels?

I have a jpg file. I need to convert it to pixel data and then change the color of some pixels. I do it like this:
NSString *string = [[NSBundle mainBundle] pathForResource:@"pic" ofType:@"jpg"];
NSData *data = [NSData dataWithContentsOfFile:string];
unsigned char *bytesArray = (unsigned char *)data.bytes;
NSUInteger bytesLength = data.length;
//--------pixel to array
NSMutableArray *array = [[NSMutableArray alloc] initWithCapacity:bytesLength];
for (int i = 0; i < bytesLength; i++) {
    [array addObject:[NSNumber numberWithUnsignedChar:bytesArray[i]]];
}
Here I try to change the color of pixels 95 through 154:
NSNumber *number = [NSNumber numberWithInt:200];
for (int i = 95; i < 155; i++) {
    [array replaceObjectAtIndex:i withObject:number];
}
But when I convert the array back to an image, I get a blurred picture. I don't understand why I don't have influence on the specific pixels, and why I influence the picture as a whole.
The process of accessing pixel-level data is a little more complicated than your question might suggest, because, as Martin pointed out, JPEG can be a compressed image format. Apple discusses the approved technique for getting pixel data in Technical Q&A QA1509.
Bottom line, to get the uncompressed pixel data for a UIImage, you would:
Get the CGImage for the UIImage.
Get the data provider for that CGImageRef via CGImageGetDataProvider.
Get the binary data associated with that data provider via CGDataProviderCopyData.
Extract some of the information about the image, so you know how to interpret that buffer.
Thus:
UIImage *image = ...
CGImageRef imageRef = image.CGImage; // get the CGImageRef
NSAssert(imageRef, @"Unable to get CGImageRef");
CGDataProviderRef provider = CGImageGetDataProvider(imageRef); // get the data provider
NSAssert(provider, @"Unable to get provider");
NSData *data = CFBridgingRelease(CGDataProviderCopyData(provider)); // get copy of the data
NSAssert(data, @"Unable to copy image data");
NSInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef); // some other interesting details about image
NSInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
NSInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
NSInteger width = CGImageGetWidth(imageRef);
NSInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace = CGImageGetColorSpace(imageRef);
Given that you want to manipulate this, you presumably want some mutable pixel buffer. The easiest approach would be to make a mutableCopy of that NSData object and manipulate it there, but in these cases, I tend to fall back to C, creating a void *outputBuffer, into which I copy the original pixels and manipulate using traditional C array techniques.
To create the buffer:
void *outputBuffer = malloc(width * height * bitsPerPixel / 8);
NSAssert(outputBuffer, @"Unable to allocate buffer");
For the precise details on how to manipulate it, you have to look at bitmapInfo (which will tell you whether it's RGBA or ARGB; whether it's floating point or integer) and bitsPerComponent (which will tell you whether it's 8 or 16 bits per component, etc.). For example, a very common JPEG format is 8 bits per component, four components, RGBA (i.e. red, green, blue, and alpha, in that order). But you really need to check those various properties we extracted from the CGImageRef to make sure. See the discussion in the Quartz 2D Programming Guide - Bitmap Images and Image Masks for more information. I personally find "Figure 11-2" to be especially illuminating.
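As a concrete illustration, assuming the buffer turns out to be that common 8-bit RGBA layout with no row padding (check bitmapInfo and bytesPerRow before relying on this), a manipulation pass over the copied pixels might look like:

// Assumes 8-bit RGBA, tightly packed rows; verify via bitmapInfo and bytesPerRow first.
memcpy(outputBuffer, data.bytes, width * height * bitsPerPixel / 8); // start from the original pixels
uint8_t *pixels = (uint8_t *)outputBuffer;
for (NSInteger row = 0; row < height; row++) {
    uint8_t *pixel = pixels + row * bytesPerRow; // step rows by bytesPerRow in case of padding
    for (NSInteger col = 0; col < width; col++) {
        pixel[0] = 255; // red (RGBA component order assumed)
        // pixel[1] = green, pixel[2] = blue, pixel[3] = alpha
        pixel += bitsPerPixel / 8; // advance one pixel (4 bytes here)
    }
}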
The next logical question is when you're done manipulating the pixel data, how to create a UIImage for that. In short, you'd reverse the above process, e.g. create a data provider, create a CGImageRef, and then create a UIImage:
CGDataProviderRef outputProvider = CGDataProviderCreateWithData(NULL, outputBuffer, bytesPerRow * height, releaseData); // pass the buffer's actual byte count; sizeof(outputBuffer) would only be the pointer size
CGImageRef outputImageRef = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
outputProvider,
NULL,
NO,
kCGRenderingIntentDefault);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
CGDataProviderRelease(outputProvider);
Where releaseData is a C function that simply calls free on the pixel buffer associated with the data provider:
void releaseData(void *info, const void *data, size_t size)
{
free((void *)data);
}

Large Image Processing (ARC) Major memory leaks

My iPad app has to be able to create the tiles for a 4096x2992 image that is generated earlier in the app.
The 4096x2992 image isn't very complex (it's what I'm testing with), and when written to file in PNG format it is approximately 600 KB.
On the simulator this code seems to work fine, but when I run the app under tighter memory conditions (on my iPad) the process quits because it ran out of memory.
I've been using the same code in the app before and it was working fine (it was only creating tiles for 3072x2244 images, however).
Either I must be doing something stupidly wrong or my @autoreleasepool blocks aren't working as they should (I think I mentioned that I'm using ARC). When running in Instruments I can just watch the memory climb up to ~500 MB, where it then crashes!
I've run Analyze over the code and it hasn't found a single memory leak related to this part of my app, so I'm really confused about why this is crashing on me.
Just a little history on how my function gets called, so you know what's happening: the app uses CoreGraphics to render a UIView (4096x2992) with some UIImageViews inside it, then it sends that UIImage reference into my function buildFromImage: (below), where it begins cutting up/resizing the image to create my file.
Here is the buildFromImage: code. The memory issues build up within the main loop, under NSLog(@"LOG ------------> Begin tile loop ");
-(void)buildFromImage:(UIImage *)__image {
    NSLog(@"LOG ------------> Begin Build ");
    // if the __image is over 4096 wide or 2992 high then we must resize it! (stop crashes etc.)
    if (__image.size.width > __image.size.height) {
        if (__image.size.width > 4096) {
            __image = [self resizeImage:__image toSize:CGSizeMake(4096, (__image.size.height * 4096 / __image.size.width))];
        }
    } else {
        if (__image.size.height > 2992) {
            __image = [self resizeImage:__image toSize:CGSizeMake((__image.size.width * 2992 / __image.size.height), 2992)];
        }
    }
    // create preview image (if landscape, no more than 748 high... if portrait, no more than 1004 high) must keep scale
    NSString *temp_archive_store = [[NSString alloc] initWithFormat:@"%@/%i-temp_imgdat.zip", NSTemporaryDirectory(), arc4random()];
    NSString *temp_tile_store = [[NSString alloc] initWithFormat:@"%@/%i-temp_tilestore/", NSTemporaryDirectory(), arc4random()];
    // create the temp dir for the tile store
    [[NSFileManager defaultManager] createDirectoryAtPath:temp_tile_store withIntermediateDirectories:YES attributes:nil error:nil];
    // create each tile and add it to the compressor once it's made
    // size of tile
    CGSize tile_size = CGSizeMake(256, 256);
    // the scales that we will be generating the tiles for
    NSMutableArray *scales = [[NSMutableArray alloc] initWithObjects:[NSNumber numberWithInt:1000], [NSNumber numberWithInt:500], [NSNumber numberWithInt:250], [NSNumber numberWithInt:125], nil]; // scales to loop over
    NSLog(@"LOG ------------> Begin tile loop ");
    @autoreleasepool {
        // loop through the scales
        for (NSNumber *scale in scales) {
            // scale the image
            UIImage *imageForScale = [self resizedImage:__image scale:[scale intValue]];
            // calculate number of rows...
            float rows = ceil(imageForScale.size.height / tile_size.height);
            // calculate number of columns
            float cols = ceil(imageForScale.size.width / tile_size.width);
            // loop through rows and cols
            for (int row = 0; row < rows; row++) {
                for (int col = 0; col < cols; col++) {
                    NSLog(@"LOG ------> Creating Tile (%i,%i,%i)", col, row, [scale intValue]);
                    // build name for tile...
                    NSString *tile_name = [NSString stringWithFormat:@"%@_%i_%i_%i.png", @"image", [scale intValue], col, row];
                    @autoreleasepool {
                        // build tile for this coordinate
                        UIImage *tile = [self tileForRow:row column:col size:tile_size image:imageForScale];
                        // convert image to png data
                        NSData *tile_data = UIImagePNGRepresentation(tile);
                        [tile_data writeToFile:[NSString stringWithFormat:@"%@%@", temp_tile_store, tile_name] atomically:YES];
                    }
                }
            }
        }
    }
}
Here are my resizing/cropping functions too, as these could also be causing the issue:
-(UIImage *)resizeImage:(UIImage *)inImage toSize:(CGSize)scale {
    @autoreleasepool {
        CGImageRef inImageRef = [inImage CGImage];
        CGColorSpaceRef clrRf = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, ceil(scale.width), ceil(scale.height), CGImageGetBitsPerComponent(inImageRef), CGImageGetBitsPerPixel(inImageRef) * ceil(scale.width), clrRf, kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(clrRf);
        CGContextDrawImage(ctx, CGRectMake(0, 0, scale.width, scale.height), inImageRef);
        CGImageRef img = CGBitmapContextCreateImage(ctx);
        UIImage *image = [[UIImage alloc] initWithCGImage:img scale:1 orientation:UIImageOrientationUp];
        CGImageRelease(img);
        CGContextRelease(ctx);
        return image;
    }
}
- (UIImage *)tileForRow:(int)row column:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    @autoreleasepool {
        // get the selected tile
        CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
        CGImageRef inImageRef = [inImage CGImage];
        CGImageRef tiledImage = CGImageCreateWithImageInRect(inImageRef, subRect);
        UIImage *tileImage = [[UIImage alloc] initWithCGImage:tiledImage scale:1 orientation:UIImageOrientationUp];
        CGImageRelease(tiledImage);
        return tileImage;
    }
}
Now, I never used to be that good with memory management, so I took the time to read up on it and also converted my project to ARC to see if that could address my issues (that was a while ago). But from the results I get after profiling in Instruments, I must be doing something STUPIDLY wrong for the memory to leak as badly as it does, and I just can't see what I'm doing wrong.
If anybody can point out anything I may be doing wrong it would be great!
Thanks
Liam
(let me know if you need more info)
I use "UIImage+Resize". Inside #autoreleasepool {} it works fine with ARC in a loop.
https://github.com/AliSoftware/UIImage-Resize
-(void)compress:(NSString *)fullPathToFile {
    @autoreleasepool {
        UIImage *fullImage = [[UIImage alloc] initWithContentsOfFile:fullPathToFile];
        UIImage *compressedImage = [fullImage resizedImageByHeight:1024];
        NSData *compressedData = UIImageJPEGRepresentation(compressedImage, 0.75); // JPEG quality is 0.0-1.0; 75.0 would just be clamped
        [compressedData writeToFile:fullPathToFile atomically:NO];
    }
}
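Used once per file, only one full image is alive at a time, and each call drains its own pool:

for (NSString *path in tileFilePaths) { // hypothetical array of the tile paths written earlier
    [self compress:path];
}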