My goal is to make a video out of a short sequence of OpenGL frames (around 200 frames). In order to do this, I use the following code to create an array of images:
NSMutableArray* images = [NSMutableArray array];
KTEngine* engine = [KTEngine sharedInstance]; //Opengl - based engine
for (unsigned int i = engine.animationContext.unitStart; i < engine.animationContext.unitEnd ; ++i)
{
NSLog(@"Render Image %d", i);
[engine.animationContext update:i];
[self.view setNeedsDisplay];
[images addObject:[view snapshot]];
}
NSLog(@"Total images rendered: %lu", (unsigned long)[images count]);
[self createVideoFileFromArray:images];
This works perfectly fine in the Simulator, but not on the device (a Retina iPad). My guess is that the device cannot hold that many UIImages in memory (especially at 2048×1536). The crash always happens after 38 frames or so.
As for a solution, I thought of creating a video for every 10 frames and then stitching them all together, but how can I know when I have enough free memory (is the autorelease pool drained)?
Maybe I should use a thread, process 10 images, and fire it again for the next 10 frames once it's over?
Any idea?
It seems quite likely that you're running out of memory.
To reduce memory usage, you could try storing the images as NSData in PNG or JPEG format instead. Both PNGs and JPEGs are quite small when represented as data, but loading them into UIImage objects can consume a lot of memory.
I would advise you to do something like the following in your loop. The autorelease pool is needed to drain the returned snapshot on each iteration.
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
UIImage *image = [view snapshot];
NSData *imageData = UIImagePNGRepresentation(image);
[images addObject:imageData];
[pool release];
This of course requires your createVideoFileFromArray: method to handle pure image data instead of UIImage objects, but that should probably be feasible to implement.
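As a rough sketch of what that could look like: decode each stored NSData back into a UIImage one at a time, inside its own autorelease pool, so only a single full-size image is alive at any moment. The appendImageToVideo: helper here is hypothetical; it stands in for whatever AVAssetWriter-based code you already use per frame.

- (void)createVideoFileFromArray:(NSArray *)imageDataArray
{
    for (NSData *imageData in imageDataArray) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        // Only one decoded full-resolution image exists at a time.
        UIImage *image = [UIImage imageWithData:imageData];
        [self appendImageToVideo:image]; // hypothetical per-frame writer call
        [pool release];
    }
}

The pool per iteration matters more than the loop structure: without it, every decoded UIImage stays autoreleased until the whole loop finishes.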
Related
I am trying to mirror the screen of my Mac to an iPhone. I have this method in the Mac app delegate to capture the screen into a base64 string.
-(NSString*)baseString{
CGImageRef screen = CGDisplayCreateImage(displays[0]);
CGFloat w = CGImageGetWidth(screen);
CGFloat h = CGImageGetHeight(screen);
NSImage * image = [[NSImage alloc] initWithCGImage:screen size:(NSSize){w,h}];
[image lockFocus];
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, w, h)];
[bitmapRep setCompression:NSTIFFCompressionJPEG factor:.3];
[image unlockFocus];
NSData *imageData = [bitmapRep representationUsingType:NSJPEGFileType properties:_options];
NSString *base64String = [imageData base64EncodedStringWithOptions:0];
image = nil;
bitmapRep = nil;
imageData = nil;
return base64String;
}
After that I send it to the iPhone and present it in a UIImageView.
The delay between screenshots is 40 milliseconds. Everything works as expected until memory runs out: after a minute of streaming, the Mac app starts swapping and uses 6 GB of RAM. The iOS app's memory usage also grows linearly; by the time the iOS app reaches 90 MB, the Mac is at 6 GB.
Even if I stop streaming, the memory is not released.
I'm using ARC in both projects. Would it make any difference if I migrated to manual reference counting?
I also tried an @autoreleasepool {...} block, but it didn't help.
Any ideas ?
EDIT
My iOS code is here
NSString *message = [NSString stringWithFormat:@"data:image/png;base64,%@", base64];
NSURL *url = [NSURL URLWithString:message];
NSData *imageData = [NSData dataWithContentsOfURL:url];
UIImage *ret = [UIImage imageWithData:imageData];
self.image.image = ret;
You have a serious memory leak. The docs for CGDisplayCreateImage clearly state:
The caller is responsible for releasing the image created by calling CGImageRelease.
Update your code with a call to:
CGImageRelease(screen);
I'd add that just after creating the NSImage.
We can't help with your iOS memory leaks since you didn't post your iOS code, but I see a big memory leak in your Mac code.
You are calling a Core Foundation function, CGDisplayCreateImage. Core Foundation objects are not managed by ARC. If a Core Foundation function has "Create" (or "copy") in the name then it follows the "create rule" and you are responsible for releasing the returned CF object when you are done with it.
Some CF objects have special release calls. For those that don't, just call CFRelease. CGImageRef has a special release call, CGImageRelease().
You need a corresponding call to CGImageRelease(screen), probably after the call to initWithCGImage.
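Putting both answers together, the top of the baseString method could look like the sketch below (the rest of the method is unchanged; displays is the questioner's existing array):

CGImageRef screen = CGDisplayCreateImage(displays[0]);
CGFloat w = CGImageGetWidth(screen);
CGFloat h = CGImageGetHeight(screen);
NSImage *image = [[NSImage alloc] initWithCGImage:screen size:(NSSize){w, h}];
// Balance the CGDisplayCreateImage "Create" call; ARC does not manage
// Core Foundation objects, so this must be done manually.
CGImageRelease(screen);

With the NSImage now owning its own copy of the pixels, the CGImageRef can be released immediately after initWithCGImage:.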
I'm writing an application that will take several images from URLs, turn them into UIImages, add them to the photo library, and then to a custom album. I don't believe it's possible to add them to a custom album without also having them in the Camera Roll, so I'm accepting that as impossible (though it would be ideal if it were possible).
My problem is that I'm using the code from this site, and it does work, but once it's dealing with larger photos it returns a few with a 'Write Busy' error. I did get them all to save by nesting the call inside its own completion block, and again inside the next one, and so on up to six levels deep (the most retries I saw was 3-4, but I don't know the size of the images and could get some really big ones). This led to a further problem: not all of the images ended up in the custom album, because that stage errored too and there was no block in place to retry it.
I understand that the actual image saving is moved to a background thread (although I don't specifically set this) as my code returns as all done before errors start appearing, but ideally I need to queue up images to be saved on a single background thread so they happen synchronously but do not freeze the UI.
My code looks like this:
UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:singleImage]]];
[self.library saveImage:image toAlbum:@"Test Album" withCompletionBlock:^(NSError *error) {
    if (error != nil) {
        NSLog(@"Error: %@", error);
    }
}];
I've removed the repetition of the code otherwise the code sample would be very long! It was previously where the NSLog code existed.
For my test sample I am dealing with 25 images, but this could easily be 200 or so, and could be very high resolution, so I need something that's able to reliably do this over and over again without missing several images.
thanks
Rob
I've managed to make it work by stripping out the save-image code and moving it into its own function, which calls itself recursively over an array of objects. If a save fails, it feeds the same image back into the function until it succeeds, and it logs 'Done' when complete. Because I'm using the function's completion block to drive the loop, only one file save runs at a time.
This is the code I used recursively:
- (void)saveImage {
if(self.thisImage)
{
[self.library saveImage:self.thisImage toAlbum:@"Test Album" withCompletionBlock:^(NSError *error) {
if (error!=nil) {
[self saveImage];
}
else
{
[self.imageData removeObject:self.singleImageData];
NSLog(@"Success!");
self.singleImageData = [self.imageData lastObject];
self.thisImage = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:self.singleImageData]]];
[self saveImage];
}
}];
}
else
{
self.singleImageData = nil;
self.thisImage = nil;
self.imageData = nil;
self.images = nil;
NSLog(@"Done!");
}
}
To set this up, I originally used an array of UIImages, but this used a lot of memory and was very slow (I was testing with up to 400 photos). I found a much better way was to store an NSMutableArray of URLs as NSStrings and then fetch the NSData inside the function.
The following code is what sets up the NSMutableArray with data and then calls the function. It also sets the first UIImage into memory and stores it under self.thisImage:
NSEnumerator *e = [allDataArray objectEnumerator];
NSDictionary *object;
while (object = [e nextObject]) {
NSArray *imagesArray = [object objectForKey:@"images"];
NSString *singleImage = [[imagesArray objectAtIndex:0] objectForKey:@"source"];
[self.imageData addObject:singleImage];
}
self.singleImageData = [self.imageData lastObject];
self.thisImage = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:self.singleImageData]]];
[self saveImage];
This means the rest of the getters for UIImage can be contained in the function and the single instance of UIImage can be monitored. I also log the raw URL into self.singleImageData so that I can remove the correct elements from the array to stop duplication.
These are the variables I used:
self.images = [[NSMutableArray alloc] init];
self.thisImage = [[UIImage alloc] init];
self.imageData = [[NSMutableArray alloc] init];
self.singleImageData = [[NSString alloc] init];
This answer should work for anyone using http://www.touch-code-magazine.com/ios5-saving-photos-in-custom-photo-album-category-for-download/ for iOS 6 (tested on iOS 6.1) and should result in all pictures being saved correctly and without errors.
If saveImage:toAlbum:withCompletionBlock: uses dispatch_async, I fear that too many threads are being spawned for the I/O operations: each write task you trigger is blocked by the previous one (because it is still doing I/O on the same queue), so GCD creates a new thread (usually dispatch_async on the global queue is optimized by GCD to use a sensible number of threads).
You should either use a semaphore to limit the concurrent write operations to a fixed number, or use the dispatch_io_* functions, which are available from iOS 5 if I'm not mistaken.
There are plenty of examples of both approaches.
Some on-the-fly code to give the idea:
dispatch_semaphore_t aSemaphore = dispatch_semaphore_create(4);
dispatch_queue_t ioQueue = dispatch_queue_create("com.customqueue", NULL);
// dispatch the following block to the ioQueue
// ( for loop with all images )
dispatch_semaphore_wait(aSemaphore , DISPATCH_TIME_FOREVER);
[self.library saveImage:image
toAlbum:@"Test Album"
withCompletionBlock:^(NSError *error){
dispatch_semaphore_signal(aSemaphore);
}];
This way you will have at most four saveImage:toAlbum: operations in flight; as soon as one completes, another one starts.
You have to create a custom queue like the ioQueue above and dispatch the for loop over the images onto it, so that the main thread is not blocked while the semaphore is waiting.
I am using 150 images in a sequence for an animation.
Here is my code.
NSMutableArray *arrImages =[[NSMutableArray alloc]initWithCapacity:0];
for(int i = 0; i <=158; i++)
{
UIImage* image = [UIImage imageNamed:[NSString stringWithFormat:@"baby%05d.jpg", i]];
[arrImages addObject:image];
}
babyimage.animationImages = arrImages;
[arrImages release];
babyimage.animationDuration=6.15;
[babyimage startAnimating];
But it is taking too much memory. After playing it for one minute, it shows memory warnings in the console and then crashes. I have already reduced the image resolution, and I can't use fewer than 150 images without losing quality.
Is there a better way to do this animation without the memory issue?
Thanks a lot.
Please help...
Instead of
[UIImage imageNamed:[NSString stringWithFormat:@"baby%05d.jpg", i]]
use
[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"baby%05d.jpg", i] ofType:nil]]
The reason is that imageNamed: caches the image and does not release it until a memory warning arrives.
Edit
Also, don't store the full images in an array; store just the image names (or nothing at all) and load each frame when you need it. Holding every image in the array also takes a lot of memory.
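A minimal sketch of that idea: keep only file names in the array and load a single frame on demand, bypassing the imageNamed: cache. The idx variable here is just an example index for whichever frame you currently need.

// Build an array of file names instead of UIImage objects.
NSMutableArray *arrNames = [[NSMutableArray alloc] init];
for (int i = 0; i < 150; i++) {
    [arrNames addObject:[NSString stringWithFormat:@"baby%05d.jpg", i]];
}

// Later, load just the frame you need, without caching:
NSUInteger idx = 0; // example: index of the current frame
NSString *path = [[NSBundle mainBundle] pathForResource:[arrNames objectAtIndex:idx] ofType:nil];
UIImage *frame = [UIImage imageWithContentsOfFile:path];

The trade-off is more disk I/O per frame, but memory stays flat instead of growing with the number of images.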
I'm not sure what your animation is, since you haven't specified it. But if I were writing the code, I would follow the algorithm below.
It assumes only one image is shown at a time, though during a transition two images briefly overlap.
procedure animate:
    i = 0
    photo_1 = allocate(file_address_array[i++])
    while i < MAX_PHOTOS
        photo_2 = allocate(file_address_array[i++])
        perform_animation(photo_1, photo_2)
        release(photo_1)
        photo_1 = photo_2
    release(photo_1)
I'm not sure if this is the perfect way of doing such a thing, but it will be a lot more memory-efficient.
(There may be something wrong with the alloc/release logic; this is consistent with my working at 5:14 in the morning.) Let me know if this doesn't work.
I have come across an issue when loading different NSImages repeatedly in an application written using Automatic Reference Counting. It seems as though ARC is not releasing the image objects correctly, and instead the memory usage increases as the list is iterated until the iteration is complete, at which point the memory is freed.
I am seeing up to 2GB of memory being used through the process in some cases. There's a good discussion on SO on a very similar issue which ends up putting the process in an NSAutoReleasePool and releasing the image and draining the pool. This works for me as well if I don't use ARC, but it is not possible to make calls to these objects / methods in ARC.
Is there any way to make this work under ARC as well? It seems as though ARC should figure this out by itself, which makes me think that the fact that these objects are not released must be a bug in OS X. (I'm running Lion with Xcode 4.2.1.)
The kind of code which is causing the issue looks like this:
+(BOOL)checkImage:(NSURL *)imageURL
{
NSImage *img = [[NSImage alloc] initWithContentsOfURL:imageURL];
if (!img)
return NO;
// Do some processing
return YES;
}
This method is called repeatedly in a loop (for example, 300 times). Profiling the app, the memory usage continues to increase with 7.5MB alloced for each image. If ARC is not used, the following can be done (as suggested in this topic):
+(BOOL)checkImage:(NSURL *)imageURL
{
NSAutoreleasePool *apool = [[NSAutoreleasePool alloc] init];
NSImage *img = [[NSImage alloc] initWithContentsOfURL:imageURL];
if (!img)
return NO;
// Do some processing
[img release];
[apool drain];
return YES;
}
Does anyone know how to force ARC to do the memory cleaning? For the time being, I have put the function in a file which is compiled with -fno-objc-arc passed in as a compiler flag. This works OK, but it would be nice to have ARC do it for me.
Use @autoreleasepool, like so:
+(BOOL)checkImage:(NSURL *)imageURL
{
@autoreleasepool { // << push a new pool on the autorelease pool stack
NSImage *img = [[NSImage alloc] initWithContentsOfURL:imageURL];
if (!img) {
return NO;
}
// Do some processing
} // << pushed pool popped at scope exit
return YES;
}
I am implementing an iOS app that needs to fetch a huge number of images over HTTP. I've tried several approaches, but no matter what I do, Instruments shows constantly increasing memory allocations, and the app crashes sooner or later when I run it on a device. There are no leaks shown by Instruments.
So far I have tried the following approaches:
Fetch the images using a synchronous NSURLConnection within an NSOperation
Fetch the images using an asynchronous NSURLConnection within an NSOperation
Fetch the images using [NSData dataWithContentsOfURL:url] in the Main-Thread
Fetch the images using synchronous ASIHTTPRequest within an NSOperation
Fetch the images using asynchronous ASIHTTPRequest and adding it to a NSOperationQueue
Fetch the images using asynchronous ASIHTTPRequest and using a completionBlock
The call tree in Instruments shows that the memory is consumed while processing the HTTP response. In the case of the asynchronous NSURLConnection, this happens in
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
[receivedData appendData:data];
}
In the case of the synchronous NSURLConnection, Instruments shows a growing CFData (store) entry.
The problem with ASIHTTPRequest seems to be the same as with the asynchronous NSURLConnection, at an analogous code position. The [NSData dataWithContentsOfURL:url] approach shows an increasing amount of total memory allocated at exactly that statement.
I am using an NSAutoreleasePool when the request runs on a separate thread, and I have tried to free up memory with [[NSURLCache sharedURLCache] removeAllCachedResponses], without success.
Any ideas/hints to solve the problem? Thanks.
Edit:
The behaviour only shows up if I persist the images using CoreData. Here is the code I run as a NSInvocationOperation:
-(void) _fetchAndSave:(NSString*) imageId {
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
NSString *url = [NSString stringWithFormat:@"%@%@", kImageUrl, imageId];
HTTPResponse *response = [SimpleHTTPClient GET:url headerOrNil:nil];
NSData *data = [response payload];
if(data && [data length] > 0) {
UIImage *thumbnailImage = [UIImage imageWithData:data];
NSData *thumbnailData = UIImageJPEGRepresentation([thumbnailImage scaleToSize:CGSizeMake(55, 53)], 0.5); // UIImagePNGRepresentation(thumbnail);
[self performSelectorOnMainThread:@selector(_save:) withObject:[NSArray arrayWithObjects:imageId, data, thumbnailData, nil] waitUntilDone:NO];
}
[pool release];
}
All CoreData-related work is done on the main thread here, so there should not be any CoreData multithreading issue. However, if I persist the images, Instruments shows constantly increasing memory allocations at the positions described above.
Edit II:
CoreData related code:
-(void) _save:(NSArray*)args {
NSString *imageId = [args objectAtIndex:0];
NSData *data = [args objectAtIndex:1];
NSData *thumbnailData = [args objectAtIndex:2];
Image *image = (Image*)[[CoreDataHelper sharedSingleton] createObject:@"Image"];
image.timestamp = [NSNumber numberWithDouble:[[NSDate date] timeIntervalSince1970]];
image.data = data;
Thumbnail *thumbnail = (Thumbnail*)[[CoreDataHelper sharedSingleton] createObject:@"Thumbnail"];
thumbnail.data = thumbnailData;
thumbnail.timestamp = image.timestamp;
[[CoreDataHelper sharedSingleton] save];
}
From CoreDataHelper (self.managedObjectContext is picking the NSManagedObjectContext usable in the current thread):
-(NSManagedObject *) createObject:(NSString *) entityName {
return [NSEntityDescription insertNewObjectForEntityForName:entityName inManagedObjectContext:self.managedObjectContext];
}
We had a similar problem. While fetching lots of images over HTTP, there was huge growth and a sawtooth pattern in the memory allocation. We'd see the system clean up, more or less, as it went, but slowly and unpredictably. Meanwhile the downloads streamed in, piling up in whatever was holding onto the memory. Memory allocation would crest around 200 MB and then the app would die.
The problem was an NSURLCache issue. You stated that you tried [[NSURLCache sharedURLCache] removeAllCachedResponses]. We tried that, too, but then tried something a little different.
Our downloads are done in groups of N images/movies, where N was typically 50 to 500. It was important that we get all of N as an atomic operation.
Before we started our group of http downloads, we did this:
NSURLCache *sharedCache = [[NSURLCache alloc] initWithMemoryCapacity:0 diskCapacity:0 diskPath:nil];
[NSURLCache setSharedURLCache:sharedCache];
We then fetch each of the N images over HTTP with a synchronous call. We do this group download in an NSOperation, so we're not blocking the UI.
NSData *movieReferenceData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
Finally, after each individual image download, and after we're done with our NSData object for that image, we call:
[sharedCache removeAllCachedResponses];
Our memory allocation peak behavior dropped to a very comfortable handful of megabytes, and stopped growing.
In this case, you're seeing exactly what you're supposed to see. -[NSMutableData appendData:] increases the size of its internal buffer to hold the new data. Since an NSMutableData is always located in memory, this causes a corresponding increase in memory usage. What were you expecting?
If the ultimate destination for these images is on disk, try using an NSOutputStream instead of NSMutableData. If you then want to display the image, you can create a UIImage pointing to the file when you're done.
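A sketch of that NSOutputStream approach, assuming hypothetical fileStream and filePath properties that you set up before starting the connection:

// Before starting the NSURLConnection (self.filePath is an assumed
// destination path for this download):
// self.fileStream = [NSOutputStream outputStreamToFileAtPath:self.filePath append:NO];
// [self.fileStream open];

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    // Write each chunk straight to disk; nothing accumulates in memory.
    [self.fileStream write:[data bytes] maxLength:[data length]];
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    [self.fileStream close];
    // Create the UIImage from the file only when it is actually needed.
    UIImage *image = [UIImage imageWithContentsOfFile:self.filePath];
    // ... use image ...
}

Peak memory then stays around the size of a single network chunk rather than the size of the whole image.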