I have a problem with Core Data and NSMutableArray.
Reading this document: https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreData/Articles/cdPerformance.html#//apple_ref/doc/uid/TP40003468-SW2
in the chapter "Faulting Behavior" I read: Since isEqual and hash do not cause a fault to fire, managed objects can typically be placed in collections without firing a fault.
Ok, for this reason, I understand that I can:
- fetch the managedObjectContext
- put all the managed objects into an array (the objects contain image data)
without firing a fault and wasting memory (until an object is accessed for the first time), correct?
But, for some reason, Core Data is firing a fault when I try to put the result in an NSMutableArray
NSArray *fetchResults = [self.managedObjectContext executeFetchRequest:request error:&error]; // this line doesn't fire a fault
self.cache = [NSMutableArray arrayWithArray:fetchResults]; // this line fires a fault
self.cache is simply an NSMutableArray.
After the last line of code, I see the memory usage growing in Instruments (I have 50 MB of images in the DB, and the memory goes immediately from 2-3 MB to 52-53 MB).
Any suggestion?
Thanks
OK, it was my mistake to look only at the Instruments memory footprint to determine whether the fault was firing.
Core Data documentation says: If you need to determine whether an object is a fault, you can send it an isFault message without firing the fault. If isFault returns NO, then the data must be in memory. However, if isFault returns YES, it does not imply that the data is not in memory. The data may be in memory, or it may not, depending on many factors influencing caching.
I added this code after the "incriminated" lines:
for (ImageCache *cache in self.cache) {
    NSLog(@"Is fault? %i", [cache isFault]);
}
The result was 1 for all the objects.
Then I modified the for loop:
for (ImageCache *cache in self.cache) {
    NSLog(@"Is fault? %i", [cache isFault]);
    UIImageView *imageView = [[UIImageView alloc] initWithImage:cache.image];
    NSLog(@"Is fault? %i", [cache isFault]);
}
The result was 1 for the first NSLog and 0 for the second NSLog of each object (the fault fired after the image was accessed).
As the documentation says, Core Data is correctly faulting my objects; the extra memory is due to Core Data's caches.
Mea culpa :-)
(Although I'm still curious to see how it behaves in real low-memory situations. I expect this cache to be flushed, but simulating a memory warning has no effect on the memory footprint.)
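In case it helps anyone, here is a minimal sketch of the two knobs I ended up looking at (ImageCache and its image attribute are just the names in my model): keep attribute values out of the row cache at fetch time, and re-fault realized objects when memory gets tight.

NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:[NSEntityDescription entityForName:@"ImageCache"
                               inManagedObjectContext:self.managedObjectContext]];
// Objects come back as pure faults; the image blobs are not pulled into the row cache:
[request setReturnsObjectsAsFaults:YES];
[request setIncludesPropertyValues:NO];
NSError *error = nil;
self.cache = [NSMutableArray arrayWithArray:[self.managedObjectContext executeFetchRequest:request error:&error]];
[request release]; // drop this line under ARC

// Later, e.g. on a memory warning, turn realized objects back into faults:
for (NSManagedObject *object in self.cache) {
    [self.managedObjectContext refreshObject:object mergeChanges:NO];
}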
Thanks
Related
I am calling a selector which I want to use to fire off a background process with the following
[self performSelectorInBackground:@selector(startSync) withObject:nil];
For example, let's say startSync looks something like this:
- (void)startSync {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // expensive background process
    Sync *sync = [Sync new];
    [sync performSync];
    [sync release];
    [pool release];
}
The really intensive processing happens in "performSync" of the sync object. It retrieves some XML, parses it into an array, and inserts it into a MySQL database. The process itself seems to work fine and the Analyzer isn't showing any leaks, but using the profiler and taking a baseline heap mark before it runs and another after it runs shows about a 5 MB gain. To the best of our knowledge (no pun intended), we are properly allocating and releasing objects in the performSync process.
The question is: since I am running this process in the background and creating an autorelease pool for it that I release at the end, shouldn't everything related to the background process be deallocated when it is over? I don't have a good understanding of why all of the allocated objects aren't getting destroyed.
The release of autoreleased objects will only happen at the end of the event loop (when the pool is drained).
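If the growth happens during the work itself, one common pattern (a sketch only; fetchAndParseXML and insertRecord: are hypothetical helpers standing in for your code) is to drain an inner pool every iteration instead of waiting for the outer pool:

- (void)performSync {
    NSArray *records = [self fetchAndParseXML]; // hypothetical helper that returns parsed records
    for (NSDictionary *record in records) {
        NSAutoreleasePool *innerPool = [[NSAutoreleasePool alloc] init];
        [self insertRecord:record];             // hypothetical helper that does one insert
        [innerPool drain]; // autoreleased temporaries are released here, not at the end of the whole sync
    }
}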
We were missing a close-database call after each record insert, which kept creating more connections and was causing our problem. After adding it, our baseline heap mark was 1.22 MB and our ending heap mark was 1.22 MB, exactly as I was expecting.
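For reference, a sketch of the pattern (assuming the database is reached through the SQLite C API; the question says MySQL, but the principle of finalizing/closing whatever you open per insert is the same, and the names here are illustrative):

#import <sqlite3.h>

// `database` is an already-open sqlite3 * handle and `name` an NSString, both assumed to exist.
sqlite3_stmt *statement = NULL;
const char *sql = "INSERT INTO items (name) VALUES (?)";
if (sqlite3_prepare_v2(database, sql, -1, &statement, NULL) == SQLITE_OK) {
    sqlite3_bind_text(statement, 1, [name UTF8String], -1, SQLITE_TRANSIENT);
    sqlite3_step(statement);
}
sqlite3_finalize(statement); // without this, every insert leaks a statement handle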
I'm using ARC and an NSCache that is created and stored on my app delegate. I then use it from the controllers through the app delegate. The NSCache stores images as they are loaded from a URL, and memory usage goes up really quickly. When I check the profiler for real memory usage, my app reaches as much as 320 MB, but Allocations says only 20-30 MB have been allocated.
In my app delegate I set up the cache as follows (it is an ivar):
cache = [[NSCache alloc] init];
[cache setCountLimit:100];
[cache setTotalCostLimit:1500000];
[cache setEvictsObjectsWithDiscardedContent:YES];
I implemented a button to experiment with NSCache; when I tap it, it calls:
- (IBAction)eraseCache:(id)sender {
    [[appDelegate cache] removeAllObjects];
}
In the profiler, the memory used does not go down, but the app does start fetching the images again, so I know the objects were removed. How can I release this memory at will using ARC? How can I get the size of my cache so I know when to release it?
In ARC, once there are no pointers to an object, it's automatically released. If the only pointers you had to the object were in the cache, then they have been released.
Note that you don't actually have to remove the objects; if you assign the pointer to a new object (with the result that it no longer points at the old object) then the old object is deallocated.
Ex:
NSArray *array = [NSArray new];
array = [NSArray new]; //the original array gets deallocated because nothing points to it.
From the NSCache Class Reference:
The NSCache class incorporates various auto-removal policies, which ensure that it does not use too much of the system's memory. The system automatically carries out these policies if memory is needed by other applications. When invoked, these policies remove some items from the cache, minimizing its memory footprint.
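One thing worth checking (a sketch, with my own assumption that a reasonable cost is the image's compressed byte size): totalCostLimit is only meaningful if you actually pass a cost when adding objects, e.g. with setObject:forKey:cost:.

// `cache` is the NSCache ivar from the question; `url` is assumed to be the image's NSURL.
NSData *imageData = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:imageData];
// The cost here is the compressed byte size; the decoded bitmap is larger, so treat the limit as approximate.
[cache setObject:image forKey:[url absoluteString] cost:[imageData length]];

Also note that after removeAllObjects the cache's strong references are gone; the resident memory you see in the profiler may only drop once the decoded image buffers are actually freed by the system.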
I am pretty much a newbie to Objective-C, and when I started to program I did not really grasp how to properly release objects. So, my project being an introduction to the world of Objective-C, I omitted releasing completely. Now, however, the project has evolved to the point where it would be too much of a pity to just leave it at that. So, all the allocs, copys, and news aside, I have serious problems understanding why my project is still leaking so much memory.
I have used the Leaks tool in Instruments (see the screenshot), and it shows me a whole array of objects that are leaked. My question now: is this something to be worried about, or are these objects released at some point? If not, how do I find the cause of the leak? I know that if I press cmd + e it shows me the extended detail window, but which of these methods should I look in? I assume that it is my own methods I have to open up, but most of the time it says that, e.g., the allocation and initialization of a layer causes the problem.
That said, I would like to know how to effectively detect leaks. When I look at the Leaks track of Instruments, at the initialization of my game layer (HelloWorldLayer) a biiiig red line appears. However, this is only at its initialization... So, do I have to worry about this?
Here is the screenshot:
link to file (in order to enlarge) -> http://i.stack.imgur.com/QXgc3.jpg
EDIT:
I solved a couple of leaks, but now I have another leak that I don't quite understand:
for (int i = 1; i <= 18; i++) {
    NSMutableDictionary *statsCopy = (NSMutableDictionary *)CFPropertyListCreateDeepCopy(kCFAllocatorDefault, (CFDictionaryRef)stats, kCFPropertyListMutableContainers);
    NSNumber *setDone = [num copy];
    [levels setObject:statsCopy forKey:[NSString stringWithFormat:@"level%d", i]];
    [levels setObject:setDone forKey:@"setDone"];
    [statsCopy release];
    [setDone release];
}
Instruments reports a leak on the deep copy, even though I release it...
The screenshot shows that there's a dictionary allocated in -[Categories init] that never gets released. Actually, there are many (2765) such dictionaries.
That method seems to be invoking -[NSDictionary newWithContentsOf:immutable:]. The stack trace here may be somewhat misleading due to optimizations internal to Cocoa. That's not a public method. It's probably called by another NSDictionary method with a tail call which got optimized to a jump rather than a subroutine call.
Assuming there's debug information available, Instruments should show you the precise line within -[Categories init] if you double-click that line in the stack trace.
Knowing where it is allocated is not the whole story. The Categories class may manage ownership of the object correctly. Some other class may get access to it, though, and over-retain or under-release it. So, you may have to track the whole history of retains and releases for one of those objects to see which class took ownership and neglected to release it. Note, this has to be done for one of the leaked dictionaries, not one of the malloc blocks that was used internally to the dictionaries. Go down two lines in the table for some promising candidates. Toggle open that line to see the specific objects. Double-click one or click the circled-arrow button next to its address (I forget which) to see the history of retains and releases.
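To make the "over-retain or under-release" point concrete, here is a contrived illustration (not taken from the project in question) of a consumer class taking ownership of the dictionary and never giving it up:

#import <Foundation/Foundation.h>

@interface Consumer : NSObject {
    NSDictionary *_info;
}
- (void)takeInfo:(NSDictionary *)info;
@end

@implementation Consumer
- (void)takeInfo:(NSDictionary *)info {
    _info = [info retain];   // ownership claimed here...
}
- (void)dealloc {
    // [_info release];      // ...but never relinquished, so every dictionary passed in leaks
    [super dealloc];
}
@end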
I have the following code in a loop iterating over the different document objects:
NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
NSData* data = [document primitiveValueForKey:@"data"];
[document.managedObjectContext refreshObject:document mergeChanges:NO];
[pool release];
The "data" property is a large blob (a 1MB image).
As I monitor the memory with the Allocations instrument, memory usage keeps increasing. I cannot find where the leak is coming from or how to remove it.
Thanks!
Something is wrong with your sample code, did you mean:
NSData *data = [document primitiveValueForKey:@"data"];
As data is currently not assigned within the scope of your autorelease pool, it is also not released with your autorelease pool.
Why are you using primitiveValueForKey and not a dynamic accessor?
The dynamic accessors are much more efficient, and allow for compile-time checking.
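For illustration, a sketch of what that would look like (Document here is a hypothetical NSManagedObject subclass with a data attribute in the model):

@interface Document : NSManagedObject
@property (nonatomic, retain) NSData *data;
@end

@implementation Document
@dynamic data;   // Core Data generates the accessor at runtime
@end

// Then, in the loop:
NSData *data = document.data;   // typed, compile-time-checked access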
How about calling [pool drain] instead of [pool release]?
I managed to solve the problem by calling [document.managedObjectContext processPendingChanges] right before draining the pool. However, I don't understand what pending changes would be there. Could someone enlighten me on that?
Your observation that processPendingChanges seems to solve the problem suggests to me that the undo manager of your NSManagedObjectContext is keeping track of all the changes you make as you do your bulk import.
What processPendingChanges is doing (as I understand it) is pushing changes stored in the managedObjectContext to the persistent store.
Try [[document managedObjectContext] setUndoManager:nil] (or create a new managedObjectContext for the import and set its undoManager to nil, if your document.managedObjectContext is the 'main' managedObjectContext and you don't want to turn off undo registration there).
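A sketch of the second suggestion (names assumed): a separate import context that shares the document's persistent store coordinator but has no undo manager:

NSManagedObjectContext *importContext = [[NSManagedObjectContext alloc] init];
[importContext setPersistentStoreCoordinator:
    [[document managedObjectContext] persistentStoreCoordinator]];
[importContext setUndoManager:nil];   // no undo registrations accumulate during the import
// ... run the bulk import against importContext, save it, then release it if you are using MRC ...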
I'm loading a series of images from a server into NSData objects like so:
for (int i = 0; i < 36; i++)
{
    NSURL *url = [NSURL URLWithString:@"http://12.34.56.78/image.jpg"];
    NSData *data = [NSData dataWithContentsOfURL:url];
    // Further processing here
}
The problem is that half of each data object is being kept in memory. This does not show up as a leak in instruments. I know it's the NSData object because I have removed everything having to do with images and really only have the two lines before the comment now. The same behavior occurs. I've tried alloc initing and releasing explicitly with the same result.
The thing that makes this really hard to figure out is that I created a second project to try to recreate this behavior and I can't get it to do so. In the other project, this code acts as expected. So I'm asking, what might cause such behavior? I feel like I'm overlooking something extremely obvious.
From the two lines you have written, that data object should never leak, because you are not retaining it; when it goes out of scope, the data object should be autoreleased... So I can't really tell much from the two lines you have posted.
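One thing worth trying (a sketch, assuming manual reference counting): drain an autorelease pool on every iteration, so the autoreleased NSData objects don't pile up until the surrounding pool drains:

for (int i = 0; i < 36; i++)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSURL *url = [NSURL URLWithString:@"http://12.34.56.78/image.jpg"];
    NSData *data = [NSData dataWithContentsOfURL:url];
    // Further processing here
    [pool drain];   // the autoreleased data is released here, each time around the loop
}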
I have encountered something similar where I had an array in my AppDelegate and I was grabbing a reference to a single row then (mistakenly) releasing my handle on the object. The result was that after 3 subsequent calls, the object in the row in question had nil values in all properties but itself was not nil. Took me about a week to figure that one out. To this day I still have no idea why it took 3 calls to release before I noticed a problem. I'm sure you can imagine my frustration when a week later I realized that one line of code was the source of 20 or so wasted hours. ;)
If what you're seeing is steadily growing memory, use Instruments' Object Allocations probe, and look for what is actually holding the memory. There are many ways to waste memory in ways that are not a "leak." The fact that the size is half the NSData size suggests that you're looking in the wrong place. It is unlikely that you are freeing half an object.