Where is this leaking? / Why am I having memory issues? (Objective-C)

I'm having a bit of trouble with an iPad app I'm creating. There is a simple image sequence/animation of about 80 frames at one point.
The code looks like this (this is a subclass of a custom UIView subclass):
- (id)init {
    UIImage *theImage = [UIImage imageNamed:@"chart0075.jpg"];
    // get the frame
    CGRect lgeFrame = CGRectMake(20, 130, theImage.size.width, theImage.size.height);
    // set the new frame
    CGFloat newHeight = theImage.size.height/1.65;
    CGFloat newWidth = theImage.size.width/1.65;
    CGRect smlFrame = CGRectMake(480, 200, newWidth, newHeight);
    self = [super initWithLargeFrame:lgeFrame smallFrame:smlFrame];
    if(self){
        // let's add the image as an image view
        theImageView = [[UIImageView alloc] initWithImage:theImage];
        [theImageView setFrame:CGRectMake(0, 0, self.frame.size.width, self.frame.size.height)];
        [theImageView setAutoresizingMask:UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight];
        [self addSubview:theImageView];
        // now we need to make an array of images for the image sequence
        NSMutableArray *imageSeq = [NSMutableArray array];
        for(int i = 1; i < 76; i++){
            NSString *jpgnumber;
            if(i < 10){
                jpgnumber = [NSString stringWithFormat:@"000%i", i];
            }
            else {
                jpgnumber = [NSString stringWithFormat:@"00%i", i];
            }
            NSString *imageFile = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"chart%@", jpgnumber] ofType:@"jpg"];
            [imageSeq addObject:[UIImage imageWithContentsOfFile:imageFile]];
        }
        [theImageView setAnimationImages:imageSeq];
        [theImageView setAnimationDuration:1.5];
        [theImageView setAnimationRepeatCount:1];
    }
    return self;
}
Then, on a reverse pinch gesture, the image is supposed to animate. The first time I do this gesture it takes a few seconds for the animation to start, and sometimes I get a level 1 memory warning and the app crashes.
What's the problem here? Are 80 JPEGs too much to keep in memory at once? They're well under 2 MB in total, so surely they shouldn't be filling up the iPad's memory, right?
I've looked at it with the Allocations instrument, which suggests I have about 40 KB in memory at the time of the animation, but this goes back down to 0 during subsequent animations (although the Allocations instrument does confuse me quite a bit).
Does anyone have any idea what's causing this? I can post more code if necessary.
Thanks a lot :)

Your memory usage depends on how big the images are uncompressed. Width times height times 4 gives you the number of bytes each image takes; multiply by the number of images to get the total.
My guess is you are right on the edge, memory-wise.
Run in Instruments with the VM Tracker instrument to be sure. You should be looking at the Dirty Resident memory.
WWDC '10 and WWDC '09 both had great content on Instruments and memory-usage analysis.
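As a rough illustration (assuming the frames are full-screen 1024x768, since the post doesn't give their dimensions): 1024 x 768 x 4 bytes is about 3 MB per decoded frame, and 75 frames x 3 MB is roughly 236 MB. The ~2 MB figure is the compressed JPEG size on disk; setAnimationImages: keeps every frame decompressed in memory at once, which is far more than a first-generation iPad's 256 MB of RAM can spare.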

You are not releasing theImageView:
[theImageView release];
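A minimal sketch of one common place to balance that alloc, assuming manual reference counting (this question predates ARC):

- (void)dealloc {
    [theImageView release]; // balances the alloc in init
    [super dealloc];
}

Alternatively, release it right after [self addSubview:theImageView] and let the superview's retain keep it alive, as long as you don't also release it in dealloc.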

Related

AFNetworking + Many imageviews = Crash -> Received memory warning

I've never had problems using AFNetworking in my app before, because I only had about 20 image views to show. But now my app crashes because I want to call setImageWithURL: on 150 image views; if I comment out that line, everything is OK. This is my code:
for (int i=0; i< Array.count; i++) {
    UIImageView *imgProd = [[UIImageView alloc] initWithFrame:CGRectMake(margenX, margenY, 220, 330)];
    imgProd.contentMode = UIViewContentModeScaleAspectFill;
    imgProd.clipsToBounds = YES;
    [imgProd setTag:i];
    // dispatch_async(dispatch_get_main_queue(), ^{
    [imgProd setImageWithURL:[NSURL URLWithString: [Array objectAtIndex:i]]];
    // });
    imgProd.userInteractionEnabled = YES;
    UITapGestureRecognizer *tap ... etc etc.
}
I tried putting it in a dispatch_async, but the problem is the same. Please, any advice! Thanks :)
You are creating 150 UIImageViews and filling them all with image data, which blows up your memory and crashes your app.
You should use a UITableView or UICollectionView and its built-in reuse machinery, dequeueReusableCellWithReuseIdentifier:, to show the images on screen. Reuse one and the same UIImageView in the cell instead of creating 150 of them; only the visible handful is ever needed.
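As a rough sketch of that reuse pattern (ProductCell and its imageView property are placeholder names, not from the original post):

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView
                  cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
    // Only the visible cells exist at any moment; scrolled-off cells are recycled.
    ProductCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"ProductCell"
                                                                  forIndexPath:indexPath];
    // AFNetworking's setImageWithURL: loads asynchronously and caches,
    // so a recycled cell simply gets pointed at a new URL.
    [cell.imageView setImageWithURL:[NSURL URLWithString:[Array objectAtIndex:indexPath.item]]];
    return cell;
}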
Some fast googling got me to this tutorial:
https://www.youtube.com/watch?v=OBL8OJUWmsI
Gl !

NSImageView is not deallocated using ARC

I'm pretty new to Cocoa development, and I probably don't clearly understand how ARC works.
My problem is that when I use NSImageView it is not getting deallocated as I expect, so the program leaks memory.
__block CMTime lastTime = CMTimeMake(-1, 1);
__block int count = 0;
[_imageGenerator generateCGImagesAsynchronouslyForTimes:stops
                 completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
                                     AVAssetImageGeneratorResult result, NSError *error)
{
    if (result == AVAssetImageGeneratorSucceeded)
    {
        if (CMTimeCompare(actualTime, lastTime) != 0)
        {
            NSLog(@"new frame found");
            lastTime = actualTime;
        }
        else
        {
            NSLog(@"skipping");
            return;
        }
        // place the image onto the view
        NSRect rect = CGRectMake((count+0.5) * 110, 100, 100, 100);
        // the problem is here!!! ImageView object gets allocated, but never released by the program even though I'm using ARC
        NSImageView *imgV = [[NSImageView alloc] initWithFrame:rect];
        [imgV setImageScaling:NSScaleToFit];
        NSImage *myImage = [[NSImage alloc] initWithCGImage:image size:(NSSize){50.0,50.0}];
        [imgV setImage:myImage];
        [self.window.contentView addSubview: imgV];
    }
    if (result == AVAssetImageGeneratorFailed)
    {
        NSLog(@"Failed with error: %@", [error localizedDescription]);
    }
    if (result == AVAssetImageGeneratorCancelled)
    {
        NSLog(@"Canceled");
    }
    count++;
}];
So when I return to this block again to generate and display new images, everything works perfectly, except that my program's memory use grows by the number of views created.
If anyone can help me with this I would really appreciate it! Thank you!
Your problem is that you don't remove your subviews when you generate new ones - make sure you remove them first, with something along these lines:
NSArray *viewsToRemove = [self.contentView subviews];
for (NSView *v in viewsToRemove) {
    [v removeFromSuperview];
}
So your problem is not actually related to ARC. Each time you create an NSImageView and add it to contentView, it is your responsibility to remove it before adding a series of new ones. Note that adding those views to contentView increments their retain count by one, and removing them from contentView decrements it by one, which lets the system free their memory (because nothing else is retaining your views in between).
Offending piece of code:
[self.window.contentView addSubview: imgV];
You've allocated an NSImageView and you keep adding new ones to the view. You never remove them, so the window ends up holding references to many different instances, each occupying its own piece of memory.
Solution: You'll need to keep track of the view, to make sure you can remove it later. Typically, I use class extensions.
For example:
@interface ClassName() {
    NSImageView* m_imgV;
}
@end

....

// place the image onto the view
NSRect rect = CGRectMake((count+0.5) * 110, 100, 100, 100);
if (m_imgV) {
    [m_imgV removeFromSuperview];
}
m_imgV = [[NSImageView alloc] initWithFrame:rect];
[m_imgV setImageScaling:NSScaleToFit];
NSImage *myImage = [[NSImage alloc] initWithCGImage:image size:(NSSize){50.0,50.0}];
[m_imgV setImage:myImage];
[self.window.contentView addSubview:m_imgV];
I was fighting with this problem for the whole day and finally found the way. For some reason I needed to add a whole method, which looks like this:
// remove all the views from the superview
// and clean up a garbage array
- (void)killAllViews
{
    for (NSImageView *iv in _viewsToRemove)
    {
        [iv removeFromSuperview];
    }
    [_viewsToRemove removeAllObjects]; // clean out the array
}
where _viewsToRemove is an array of NSImageViews that I fill every time my block generates new images and adds them to the view.
I still don't understand why just putting the code from inside killAllViews somewhere in the program didn't solve the problem; right now I'm doing basically the same thing, just by calling this method.
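For reference, a minimal sketch of the bookkeeping half (my reconstruction; the post doesn't show it), assuming _viewsToRemove is an NSMutableArray ivar created once up front:

// inside the completion block, after creating imgV:
[self.window.contentView addSubview:imgV];
[_viewsToRemove addObject:imgV]; // remember it so killAllViews can remove it later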

NSImage losing quality upon writeToFile

Basically, I'm trying to create a program for batch image processing that will resize every image and add a border around the edge (the border will be made up of images as well). I have yet to get to that implementation, and it's beyond the scope of my question, but I mention it because even with a great answer here I may still be taking the wrong approach to get there, and any help in recognizing that would be greatly appreciated. Anyway, here's my question:
Question:
Can I take the existing code I have below and modify it to create higher-quality images saved to file than the code currently outputs? I literally spent 10+ hours trying to figure out what I was doing wrong; "secondaryImage" drew the high-quality resized image into the custom view, but everything I tried to do to save the file resulted in an image of substantially lower quality (not so much pixelated, just noticeably more blurry). Finally, I found some code in Apple's "Reducer" example (at the end of ImageReducer.m) that locks focus and gets an NSBitmapImageRep from the current view. This substantially increased the image quality, but the output from Photoshop doing the same thing is still a bit clearer. It looks like the image drawn to the view is of the same quality as the one saved to file, and both are below the quality of the same image resized to 50% in Photoshop. Is it even possible to get higher-quality resized images than this?
Aside from that, how can I modify the existing code to control the quality of the image saved to file? Can I change the compression and pixel density? I'd appreciate any help with either modifying my code or pointing me toward good examples or tutorials (preferably the latter). Thanks so much!
- (void)drawRect:(NSRect)rect {
    // Getting source image
    NSImage *image = [[NSImage alloc] initWithContentsOfFile: @"/Users/TheUser/Desktop/4.jpg"];
    // Setting NSRect, which is how resizing is done in this example. Is there a better way?
    NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);
    // Sort of used as an offscreen image or palette to do drawing onto; in the future I will use it to group several images into one.
    NSImage *secondaryImage = [[NSImage alloc] initWithSize: halfSizeRect.size];
    [secondaryImage lockFocus];
    [[NSGraphicsContext currentContext] setImageInterpolation: NSImageInterpolationHigh];
    [image drawInRect: halfSizeRect fromRect: NSZeroRect operation: NSCompositeSourceOver fraction: 1.0];
    [secondaryImage unlockFocus];
    [secondaryImage drawInRect: halfSizeRect fromRect: NSZeroRect operation: NSCompositeSourceOver fraction: 1.0];
    // Trying to add image quality options; does this usage even affect the final image?
    NSBitmapImageRep *bip = nil;
    bip = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide: secondaryImage.size.width pixelsHigh: secondaryImage.size.height bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace bytesPerRow:0 bitsPerPixel:0];
    [secondaryImage addRepresentation: bip];
    // Four lines below are from the aforementioned "ImageReducer.m"
    NSSize size = [secondaryImage size];
    [secondaryImage lockFocus];
    NSBitmapImageRep *bitmapImageRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, size.width, size.height)];
    [secondaryImage unlockFocus];
    NSDictionary *prop = [NSDictionary dictionaryWithObject: [NSNumber numberWithFloat: 1.0] forKey: NSImageCompressionFactor];
    NSData *outputData = [bitmapImageRep representationUsingType:NSJPEGFileType properties: prop];
    [outputData writeToFile:@"/Users/TheUser/Desktop/4_halfsize.jpg" atomically:NO];
    // release from memory
    [image release];
    [secondaryImage release];
    [bitmapImageRep release];
    [bip release];
}
I'm not sure why you are round tripping to and from the screen. That could affect the result, and it's not needed.
You can accomplish all this using CGImage and CGBitmapContext, using the resultant image to draw to the screen if needed. I've used those APIs and had good results (but I do not know how they compare to your current approach).
Another note: Render at a higher quality for the intermediate, then resize and reduce to 8bpc for the version you write. This will not make a significant difference now, but it will (in most cases) once you introduce filtering.
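A minimal sketch of the CGBitmapContext route (my illustration, not the answerer's code; assumes an 8-bit RGBA source):

// Downscale a CGImage by drawing it into an offscreen bitmap context.
CGImageRef CreateDownscaledImage(CGImageRef src, size_t width, size_t height) {
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, space,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);
    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), src);
    CGImageRef result = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return result; // caller is responsible for CGImageRelease
}

The result can be handed to -[NSBitmapImageRep initWithCGImage:] and written out with representationUsingType:properties: as before, with no trip through the window server.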
Finally, one of those "Aha!" moments! I tried using the same code on a high-quality .tif file, and the resultant image was 8 times smaller (in dimensions) rather than the 50% I'd asked for. When I displayed it without any rescaling, it was still 4 times smaller than the original, when it should have displayed at the same height and width. It turned out the way I was taking the NSSize from the imported image was wrong. Previously, it read:
NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);
Where it should be:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData: [image TIFFRepresentation]];
NSRect halfSizeRect = NSMakeRect(0, 0, [imageRep pixelsWide]/2, [imageRep pixelsHigh]/2);
Apparently it has something to do with DPI and that jazz, so I needed to get the correct size from the NSBitmapImageRep rather than from image.size. With this change, I was able to save at a quality nearly indistinguishable from Photoshop's.

Get pixel colour from a Webcam

I am trying to get a pixel's colour from the image displayed by the webcam, to see how that colour changes over time.
My current solution sucks a LOT of CPU; it works and gives me the correct answer, but I am not 100% sure I am doing this correctly or whether I could cut some steps out.
- (IBAction)addFrame:(id)sender
{
    // Get the most recent frame
    // This must be done in a @synchronized block because the delegate method that sets the most recent frame is not called on the main thread
    CVImageBufferRef imageBuffer;
    @synchronized (self) {
        imageBuffer = CVBufferRetain(mCurrentImageBuffer);
    }
    if (imageBuffer) {
        // Create an NSImage and add it to the movie
        // I think I can remove some steps here, but not sure where.
        NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:imageBuffer]];
        NSSize n = {320, 160};
        //NSImage *image = [[[NSImage alloc] initWithSize:[imageRep size]] autorelease];
        NSImage *image = [[[NSImage alloc] initWithSize:n] autorelease];
        [image addRepresentation:imageRep];
        CVBufferRelease(imageBuffer);
        NSBitmapImageRep *raw_img = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
        NSLog(@"image width is %f", [image size].width);
        NSColor *color = [raw_img colorAtX:1279 y:120];
        float colourValue = [color greenComponent] + [color redComponent] + [color blueComponent];
        [graphView setXY:10 andY:200*colourValue/3];
        NSLog(@"%0.3f", colourValue);
    }
}
Any help is appreciated and I am happy to try other ideas.
Thanks guys.
There are a couple of ways this could be made more efficient. Take a look at the imageFromSampleBuffer: method in this Tech Q&A, which presents a cleaner way of getting from a CVImageBufferRef to an image (the sample uses a UIImage, but it's practically identical for an NSImage).
You can also pull the pixel values straight out of the CVImageBufferRef without any conversion. Once you have the base address of the buffer, you can calculate the offset of any pixel and just read the values from there.
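A minimal sketch of that direct-read approach (my illustration; it assumes a BGRA pixel format, which is typical for camera output):

// Read one pixel at (x, y) straight from the buffer, no NSImage round trip.
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t x = 1279, y = 120;
uint8_t *pixel = base + y * bytesPerRow + x * 4; // 4 bytes per BGRA pixel
float colourValue = (pixel[0] + pixel[1] + pixel[2]) / 255.0f; // B + G + R
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

Check the buffer's actual format with CVPixelBufferGetPixelFormatType() before relying on that byte order.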

Slight pause in scrolling animation (iPad)

I am relatively new to programming on the iPad and I was trying to put together a simple program. Basically, it's a children's book and I need the functionality of a comic book style (or photo) viewer, where people swipe to change "pages" (or images).
Each image is 1024x768. Currently, they are stored as JPGs because of the very large file sizes PNGs seem to produce. For this story, there are 28 pages.
I took a look at the PageControl example, implementing a UIScrollView. On initialization, I create a big enough scrollview area. Then as the user scrolls, I load in the previous and next images. Again, just like the example only without implementing the page control at the bottom.
The problem I am running into is a very slight pause in the animation when flipping pages. Once the images are loaded or cached, this doesn't happen. The Photos app doesn't do this, and I'm not sure what is causing it.
Here is my code for the scrollViewDidScroll: method. I keep track of the page number and only call loadPageIntoScrollView: when the page has changed - I thought the insane number of calls it was making was causing the slight pause in animation, but that turned out not to be the case.
- (void)scrollViewDidScroll:(UIScrollView *)sender
{
    CGFloat pageWidth = scrollView.frame.size.width;
    int localPage = floor( (scrollView.contentOffset.x - pageWidth / 2 ) / pageWidth ) + 1;
    if( localPage != currentPage )
    {
        currentPage = localPage;
        [self loadPageIntoScrollView:localPage - 1];
        [self loadPageIntoScrollView:localPage];
        [self loadPageIntoScrollView:localPage + 1];
    }
} // scrollViewDidScroll
And here is my loadPageIntoScrollView: method. I'm only creating a UIImageView and loading an image into it - I don't see how that could be much "leaner", but somehow it's causing the pause. Again, it's not a HUGE pause, just one of those things you notice, and enough to make the scrolling look like it has a very, very slight hiccup.
Thank you in advance for any help you could provide.
- (void)loadPageIntoScrollView:(int)page
{
    if( page < 0 || page >= kNumberOfPages )
        return;
    UIImageView *controller = [pages objectAtIndex:page];
    NSLog( @"checking pages" );
    if( (NSNull *)controller == [NSNull null] )
    {
        UITapGestureRecognizer *singleTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleSingleTap:)];
        NSString *pageName = [[NSString alloc] initWithFormat:@"%d.jpg", page];
        controller = [[UIImageView alloc] initWithImage:[UIImage imageNamed:pageName]];
        [controller setUserInteractionEnabled:YES];
        [controller addGestureRecognizer:singleTap];
        [pages replaceObjectAtIndex:page withObject:controller];
        [singleTap release]; // the view retains its gesture recognizer
        [pageName release];
        [controller release];
    } // if controller == null
    // add the page to the scrollview
    if( controller.superview == nil )
    {
        NSLog(@"superview was nil, adding page %d", page );
        CGRect frame = scrollView.frame;
        frame.origin.x = frame.size.width * page;
        frame.origin.y = 0;
        controller.frame = frame;
        [scrollView addSubview:controller];
    } // if
} // loadPageIntoScrollView
Since you say an image no longer lags after it has been loaded, I'd suspect disk access is causing your lag, but you should run your app through Instruments to rule out CPU spikes as well as to evaluate file-system usage. You may try to pre-load images to the left and right of whatever image you are on so that the user doesn't perceive as much lag.
First off, you should be able to use PNGs just fine. I have built several apps that do exactly what you are doing here; you can fit three 1024x768 PNGs in memory without running out (but you can't do much more). You should also use PNGs because they are the preferred format for iOS and are optimized when the app is bundled during the build.
The slight lag is caused by loading the image, in this line:
controller = [[UIImageView alloc]initWithImage:[UIImage imageNamed:pageName]];
What I usually do is load the images in a separate thread, using something like this:
[self performSelectorInBackground:@selector(loadPageIntoScrollView:) withObject:[NSNumber numberWithInt:localPage]];
Note that you need to put your localPage integer into a NSNumber object to pass it along, so don't forget to change your loadPageIntoScrollView: method:
- (void)loadPageIntoScrollView:(NSNumber *)pageNumber
{
    int page = [pageNumber intValue];
    ....
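One caveat worth adding (my note, not from the original answer): UIKit isn't thread-safe, so only the slow image loading should happen on the background thread; creating the UIImageView and calling addSubview: must hop back to the main thread. A minimal sketch of that split:

- (void)loadPageIntoScrollView:(NSNumber *)pageNumber
{
    int page = [pageNumber intValue];
    NSString *path = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"%d", page] ofType:@"jpg"];
    UIImage *image = [UIImage imageWithContentsOfFile:path]; // the slow part: disk read + decode
    dispatch_async(dispatch_get_main_queue(), ^{
        // build the UIImageView and add it to the scroll view here, on the main thread
    });
}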