I've been using an NSCache to store items that should be cached for performance reasons, since they are rather expensive to recreate. Understanding NSCache behaviour is giving me some headaches though.
I initialize my NSCache as follows:
_cellCache = [[NSCache alloc] init];
[_cellCache setDelegate:self];
[_cellCache setEvictsObjectsWithDiscardedContent:NO]; // never evict cells
The objects held in the cache implement the NSDiscardableContent protocol. Now my problem is that the NSCache class does not seem to behave correctly in a couple of cases.
1) First the smaller issue. NSCache's setCountLimit: states that:
Setting the count limit to a number less than or equal to 0 will have no effect on the maximum size of the cache.
Can someone shed some light on what this means? Does it mean the cache is effectively unbounded, and will issue discardContentIfPossible messages only when memory is required elsewhere? Or that the cache is minimal, and will issue discardContentIfPossible messages almost immediately?
The former makes more sense, but testing seems to indicate that the latter is what is happening. If I log calls to the discardContentIfPossible method in my cached object, I see that it is being called almost immediately -- after only two or three dozen items have been added to the cache (each less than 0.5 MB).
Okay. So I then try to set a large count limit -- way more than I will ever need -- by adding the following line:
[_cellCache setCountLimit:10000000];
Then the discardContentIfPossible messages are no longer sent almost immediately. I have to load a lot more content into the cache and use it for a while before these messages start occurring, which makes more sense.
So what is the intended behaviour here?
2) The larger issue. The documentation states:
By default, NSDiscardableContent objects in the cache are automatically removed from the cache if their content is discarded, although this automatic removal policy can be changed. If an NSDiscardableContent object is put into the cache, the cache calls discardContentIfPossible on it upon its removal.
So I set the eviction policy to NO (as above) so that objects are never evicted. Instead, when discardContentIfPossible is called, I clear and release the internal data of the cached object according to special criteria. That is, I may decide not to actually clear and discard the data under certain circumstances (for example, if the item has been very recently used). In such a scenario, I simply return from the discardContentIfPossible method without having discarded anything. The idea is that some other object that isn't recently used will get the message at some point, and it can discard its content instead.
Now, interestingly, all seems to work great through a while of heavy use. Loading lots of content, placing and accessing objects in the cache, etc. After some time though, when I try to access an arbitrary object in the NSCache, it's literally not there! Somehow it appears to have been removed -- even though I specifically set the eviction policy to NO.
Okay. So implementing the delegate method cache:willEvictObject: shows it never gets called, which means the object is not actually getting evicted. But it's mysteriously disappearing from the NSCache, since it can't be found on future lookups -- or somehow the key it was associated with is no longer mapped to the original object. But how can that happen?
BTW, the key objects that I'm associating with my value objects (CALayers) are NSURL containers/wrappers. I can't use NSURLs directly as keys, because NSCache doesn't copy keys -- it only retains them -- and a later NSURL used for lookup might not be the exact original object (only the same URL string) that was initially used to load the cache with that object. With a wrapper, I can ensure that the exact original key object used to add the value object to the NSCache is used again for lookup.
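For illustration, a wrapper along those lines might look like this minimal sketch, assuming manual reference counting; the class name is made up. The app keeps one wrapper per URL and reuses that exact instance for every cache access, so NSObject's default pointer-based equality and hash suffice:

#import <Foundation/Foundation.h>

@interface CacheKey : NSObject {
    NSURL *_URL;
}
- (id)initWithURL:(NSURL *)URL;
- (NSURL *)URL;
@end

@implementation CacheKey
- (id)initWithURL:(NSURL *)URL {
    if ((self = [super init])) _URL = [URL retain]; // hold the wrapped URL
    return self;
}
- (NSURL *)URL { return _URL; }
- (void)dealloc { [_URL release]; [super dealloc]; }
@end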
Can anyone shed some light on this?
Your comment "never evict cells" is not what -setEvictsObjectsWithDiscardedContent: says it does. It says it won't evict cells just because their content was discarded. That's not the same as saying it will never evict cells. You can still run past the maximum size or count and they can still be evicted. The advantage of discardable content is that you may be able to discard some of your content without removing yourself from the cache. You would then rebuild the discarded content on demand, but might not have to rebuild the entire object. In some cases this can be a win.
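As an illustration of that idea, a minimal NSDiscardableContent implementation along those lines might look like this sketch (manual reference counting assumed; the payload type and names are illustrative):

#import <Foundation/Foundation.h>

@interface CachedCell : NSObject <NSDiscardableContent> {
    NSData *_content;          // the expensive payload
    NSUInteger _accessCount;   // balanced begin/end content accesses
}
@end

@implementation CachedCell
- (BOOL)beginContentAccess {
    if (_content == nil) return NO; // tells the caller to rebuild the content
    _accessCount++;
    return YES;
}

- (void)endContentAccess {
    if (_accessCount > 0) _accessCount--;
}

- (void)discardContentIfPossible {
    if (_accessCount == 0 && _content != nil) {
        // Drop only the expensive payload; the object itself stays in the
        // cache and can rebuild its content on demand later.
        [_content release];
        _content = nil;
    }
}

- (BOOL)isContentDiscarded {
    return _content == nil;
}
@end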
That brings us to your first question. Yes, NSCache starts evicting when it hits the maximum size. That's why it's called the "maximum size." You don't indicate whether this is Mac or iPhone, but in both cases you don't want the cache to grow until memory is exhausted. That's bad on a number of fronts. On Mac, you're going to start swapping heavily long before memory is exhausted. In iOS, you don't want to start sending memory warnings to every other process just because one process got crazy with its cache. In either case, a cache that is too large provides poor performance. So objects put into an NSCache should always expect to be evicted at any time.
I was running a test to make sure objects are being deallocated properly by wrapping the relevant code section in a 10-second-long while loop. I ran the test in Debug and Release configurations, with different results.
Debug (Build & Run in simulator): [Instruments graph]
Release (Build & Run on device, and Profile using Instruments): [Instruments graph]
The CPU spikes signify where objects are created and destroyed (there are 3 in each run). Notice how in the Debug build the memory usage rises gradually during the busy loop and then settles a little afterwards at a higher base level; this happens with each loop iteration. On the Release build it stays constant the whole time. At the end, after 3 runs, the memory usage level of the Debug build is significantly higher than that of the Release build. (The CPU spikes are offset on the time axis relative to each other, but that's just because I pressed the button that triggers the loop at different times.)
The inner loop code in question is very simple and basically consists of a bunch of correctly paired malloc and free statements as well as a bunch of retain and release calls (courtesy of ARC, also verified as correctly paired).
Any idea what is causing this behaviour?
In Release builds ARC will do its best to keep objects out of the autorelease pool. It does this with the objc_autoreleaseReturnValue / objc_retainAutoreleasedReturnValue pair, which detect each other at runtime and skip the autorelease entirely when the caller immediately retains the returned object.
A lot of Cocoa Touch classes use caching to improve performance. The amount of memory used for caching data can vary depending on total memory, available memory, and probably some other things. Since you are comparing results from the simulator (backed by your Mac's memory) and the device, it is not strange that you get different results.
Some examples of classes/methods that use caching:
+(UIImage *)imageNamed:(NSString *)name
Discussion
This method looks in the system caches for an image object with the specified name and returns that object if it exists. If a matching image object is not already in the cache, this method loads the image data from the specified file, caches it, and then returns the resulting object.
NSURLCache
The NSURLCache class implements the caching of responses to URL load requests by mapping NSURLRequest objects to NSCachedURLResponse objects. It provides a composite in-memory and on-disk cache.
For one thing, Release builds optimize the code and strip debugging information from it. As a result, the application package is significantly smaller and less memory is needed to load it.
I suppose that most of the extra memory used in Debug builds is the actual debugging information, zombie tracking, etc.
Reading through the other questions that are similar to mine, I see that most people want to know why you would need to know the size of an instance, so I'll go ahead and tell you, although it's not really central to the problem. I'm working on a project that requires allocating thousands to hundreds of thousands of very small objects, and the default allocation pattern for objects simply doesn't cut it. I've already worked around this issue by creating an object pool class that allows a tremendous number of objects to be allocated and initialized all at once; deallocation works flawlessly as well (objects are returned to the pool).
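For readers unfamiliar with the pattern, a pool along these lines might look roughly like the sketch below (an illustration, not the asker's actual class; manual reference counting is assumed, since objc_constructInstance is unavailable under ARC, and all names are made up):

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

@interface ObjectPool : NSObject {
    void *_slab;          // one contiguous allocation holding every instance
    size_t _slotSize;     // per-instance stride derived from the runtime
    NSUInteger _capacity;
}
- (id)initWithClass:(Class)cls capacity:(NSUInteger)capacity;
- (id)instanceAtIndex:(NSUInteger)i;
@end

@implementation ObjectPool
- (id)initWithClass:(Class)cls capacity:(NSUInteger)capacity {
    if ((self = [super init])) {
        // Round the slot up so every instance stays 16-byte aligned.
        _slotSize = (class_getInstanceSize(cls) + 15) & ~(size_t)15;
        _capacity = capacity;
        _slab = calloc(capacity, _slotSize); // construction requires zeroed memory
        for (NSUInteger i = 0; i < capacity; i++) {
            // Stamps the isa pointer into each slot, making it an instance.
            objc_constructInstance(cls, (char *)_slab + i * _slotSize);
        }
    }
    return self;
}

- (id)instanceAtIndex:(NSUInteger)i {
    return (id)((char *)_slab + i * _slotSize);
}

- (void)dealloc {
    for (NSUInteger i = 0; i < _capacity; i++) {
        // Tears each instance down without free()ing its slot.
        objc_destructInstance([self instanceAtIndex:i]);
    }
    free(_slab);
    [super dealloc];
}
@end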
It actually works perfectly and isn't my issue, but I noticed class_getInstanceSize was returning unusually large sizes. For instance, a class that stores one size_t and two (including isa) Class instance variables is reported to be 40-52 bytes in size. I give a range because calling class_getInstanceSize multiple times, even in a row, is not guaranteed to return the same size. In fact, every class but NSObject seemingly reports random sizes that are far from what they should be.
As a test, I tried:
printf("Instance Size: %zu\n", class_getInstanceSize(objc_getClass("MyClassName")));
That line of code always produces a value that corresponds to the size I've calculated by hand to be correct. For instance, the earlier example comes out to 12 bytes (32-bit) and 24 bytes (64-bit).
Thinking that the runtime may be doing something behind the scenes that requires more memory, I watched the actual memory use of each object. For the example given, the only memory read from or written to is in that 12/24 byte block that I've calculated to be the expected size.
class_getInstanceSize acts like this on both the Apple and GNU 2.0 runtimes. So is there a known bug with class_getInstanceSize that causes this behavior, or am I doing something fundamentally wrong? Before you blame my object pool: I've tried this same test in a brand-new project using both the traditional alloc class method and by allocating the object with class_createInstance(self, 0); in a custom class method.
Two things I forgot to mention before: first, I'm almost entirely testing this on my own custom classes, so I know the trickery isn't down to the class actually being a class cluster or any of that nonsense; second, class_getInstanceSize([MyClassName class]) and class_getInstanceSize(self) // run inside a class method rarely produce the same result, despite both referring to the same class object. Again, this happens in both runtimes.
I think I've solved the problem and it was due to possibly the dumbest reason ever.
I use a profiling/debugging library that is old; in fact, I don't know its actual name (the library is libcsuomm; the header for it has no identifying info). All I know about it is that it was a library available on the computers in the compsci labs (I did a year of Comp-Sci before switching to a Geology major, graduating and never looking back).
Anyway, the point of the library is that it provides a number of profiling and debugging facilities; the one I use it most for is memory-leak detection, since it actually tracks per-object allocations, unlike my other favorite memory-leak library (MSS, now unsupported), which is based in C and not aware of objects beyond raw allocations.
Because I use it so much when debugging, I always set it up by default without even thinking about it. So even when creating my test projects to try and pinpoint the bug, I set it up without even putting any thought into it. Well, it turns out that the library works by pulling some runtime trickery, so it can properly track objects. Things seem to work correctly now that I've disabled it, so I believe that it was the source of my problems.
Now I feel bad about jumping to conclusions about it being a bug, but at the time I couldn't see anything in my own code that could possibly cause that problem.
I try to authenticate my app with Twitter with the following code: pastebin
However, if I remove this (useless?) loop at line 23ff.:
for (ACAccount *acc in arrayOfAccounts) {
    [acc accountType].identifier;
    // Otherwise the identifier gets lost - god knows why -__-
}
then acc.type becomes (null) when it is used further on in AccountHandler's checkAccountOf:acc. If I leave the loop in, the type is correctly set.
I am pretty sure it has to do with the fact that I am in a block and then move on to the main queue, but I am wondering if I am doing something wrong? This loop does not look like something I am supposed to have to do.
Something kinda similar happened here.
ACAccounts are not thread safe. You should use them only on the thread on which they originated. And for this purpose you can read 'thread' as 'queue'.
While I've not seen formal documentation of that, if you NSLog an account you'll see that it's a Core Data object and the lack of thread safety on Core Data objects is well documented.
The specific behaviour is that a Core Data object can be a fault. That means that what you're holding is a reference to the object but not the actual object. When you try to access a property the object will be loaded into memory.
What Core Data is doing underneath is caching things in memory and returning faults until it knows that an object is really needed. The efficient coordination of that cache is what restricts each instance of the coordinating Core Data object (the managed object context) and the objects it manages to a single thread.
If you do the action that should bring the object into memory on the wrong thread — which is what happens when you access identifier here — then the behaviour is undefined. You could just get a nil result or you could crash your application.
(aside: the reason Core Data works like this is that it stores an object graph, possibly thousands of interconnected objects, which you can traverse just like any other group of objects. However, you don't normally want to pay the cost of loading every single one of them into memory just to access the usually tiny subset of information you're actually going to use, so Core Data needs a way of providing a normal Objective-C interface while loading lazily.)
The code you've linked to skirts around that issue by ensuring that the objects are in the cache, and hence in memory, before queue hopping. So the 'fetch from store' step occurs on the correct queue. The code is nevertheless entirely unsafe, because objects may transition from being in memory back to being faults according to whatever logic Core Data cares to apply.
The author obviously thinks they've found some bug on Apple's part. They haven't, they've merely decided to assume something is thread safe when it isn't and have then found a way of relying on undefined behaviour that happened to work in their tests.
Moral of the story: keep the accounts themselves on a single thread. If you want to do some processing with the properties of an account then collect the relevant properties themselves as fundamental Foundation objects and post those off.
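As an illustration of that approach (not the linked code; accountStore, twitterType, and updateUIWithAccountSummaries: are assumed names):

// Collect plain Foundation values on the queue the accounts live on,
// then hop to the main queue with only those values.
[accountStore requestAccessToAccountsWithType:twitterType
                        withCompletionHandler:^(BOOL granted, NSError *error) {
    if (!granted) return;
    NSMutableArray *summaries = [NSMutableArray array];
    for (ACAccount *acc in [accountStore accountsWithAccountType:twitterType]) {
        // Read the properties here, before any queue hop. (If a property
        // can be nil, guard it; a nil value would truncate the list.)
        [summaries addObject:[NSDictionary dictionaryWithObjectsAndKeys:
            acc.username, @"username",
            acc.identifier, @"identifier", nil]];
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        // Only immutable Foundation objects cross the queue boundary.
        [self updateUIWithAccountSummaries:summaries];
    });
}];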
This is a question I've always wanted to ask.
When I run an iOS application in the profiler looking for allocation issues, I find that NSManagedObject instances stay in memory long after they have been used and displayed, even though the UIViewController that loaded them has been deallocated. Of course, when the UIViewController is allocated again the number does not increase, suggesting that there is no leak and that there is some kind of object reuse by Core Data.
If I have a MyManagedObject class which has been given the name 'mobjc', then in the profiler I can see an increasing number of:
MyManagedObject_mobjc_
The number may vary, and for a small amount of data, for example 100 objects in SQLite, it grows to that limit and stays there.
But it also seems that sometimes during the application lifecycle the objects are deallocated, so I suppose that Core Data itself is doing some kind of memory optimization.
It also seems that not the whole object is retained, but rather the 'fault' of it (please forgive my English :-) ), judging by the small live-byte size.
Even though a lot of fault objects would still occupy some memory.
But at this point I would like some confirmation:
is Core Data really managing and optimizing objects in memory?
is there anything I can do to help my application retain as few objects as possible?
related to the point above, do I really need to take care of this issue or not?
do you have some link, possibly from Apple, where this specific subject is explained?
Maybe it is relevant: the app I used for testing relies on ARC and iOS 5.1.
thanks
In this SO topic, Core Data Memory Management, you can find the info you are looking for.
This, instead, is the link to the Apple doc on Core Data Memory Management.
A few tips here.
First, when you deal with Core Data you deal with an object graph. To reduce memory consumption (to prune your graph) you can reset the context you are using, or turn objects back into faults by passing NO for the flag of refreshObject:(NSManagedObject *)object mergeChanges:(BOOL)flag. If you pass NO to that method, you can lose unsaved changes, so pay attention.
Furthermore, don't use undo management if you don't need it: it increases memory use (by default on iOS, no undo manager is created).
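To make those two pruning techniques concrete, here is a minimal sketch, assuming an existing NSManagedObjectContext *context and an NSManagedObject *object fetched from it:

// Turn a single object back into a fault, releasing its in-memory values.
// Passing NO means unsaved changes on that object are discarded.
[context refreshObject:object mergeChanges:NO];

// Or prune the whole graph at once: every managed object fetched through
// this context becomes invalid afterwards, so re-fetch anything you need.
[context reset];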
Hope that helps.
I want to cache the instances of a certain class. The class keeps a dictionary of all its instances and, when somebody requests a new instance, it tries to satisfy the request from the cache first. There is a small problem with memory management though: the dictionary cache retains the inserted objects, so that they never get deallocated. I do want them to get deallocated, so I had to override the release method; when the retain count drops to one, I remove the instance from the cache and let it get deallocated.
This works, but I am not comfortable mucking around with the release method and find the solution overly complicated. I thought I could use some hashing class that does not retain the objects it stores. Is there such a class? The idea is that when the last user of a certain instance releases it, the instance would automatically disappear from the cache.
NSHashTable seems to be what I am looking for, but the documentation talks about “supporting weak relationships in a garbage-collected environment.” Does it also work without garbage collection?
Clarification: I cannot afford to keep the instances in memory unless somebody really needs them, that is why I want to purge the instance from the cache when the last “real” user releases it.
Better solution: This was on the iPhone, I wanted to cache some textures and on the other hand I wanted to free them from memory as soon as the last real holder released them. The easier way to code this is through another class (let’s call it TextureManager). This class manages the texture instances and caches them, so that subsequent calls for texture with the same name are served from the cache. There is no need to purge the cache immediately as the last user releases the texture. We can simply keep the texture cached in memory and when the device gets short on memory, we receive the low memory warning and can purge the cache. This is a better solution, because the caching stuff does not pollute the Texture class, we do not have to mess with release and there is even a higher chance for cache hits. The TextureManager can be abstracted into a ResourceManager, so that it can cache other data, not only textures.
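For illustration, a minimal sketch of that TextureManager idea, assuming manual reference counting; the Texture type and its loading step are placeholders:

#import <UIKit/UIKit.h>

@interface TextureManager : NSObject {
    NSMutableDictionary *_cache; // name -> texture object
}
- (id)textureNamed:(NSString *)name;
- (id)loadTextureNamed:(NSString *)name; // hypothetical loading step
@end

@implementation TextureManager
- (id)init {
    if ((self = [super init])) {
        _cache = [[NSMutableDictionary alloc] init];
        // Purge everything when the system warns about low memory.
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(purgeCache)
                   name:UIApplicationDidReceiveMemoryWarningNotification
                 object:nil];
    }
    return self;
}

- (id)textureNamed:(NSString *)name {
    id texture = [_cache objectForKey:name];
    if (!texture) {
        texture = [self loadTextureNamed:name];
        if (texture) [_cache setObject:texture forKey:name];
    }
    return texture;
}

- (void)purgeCache {
    // Cached-but-unused textures die here; textures still retained by
    // their users live on until those users release them.
    [_cache removeAllObjects];
}

- (void)dealloc {
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [_cache release];
    [super dealloc];
}
@end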
Yes, you can use an NSHashTable to build what is essentially a non-retaining dictionary. Alternatively, you can create a CFDictionary with NULL retain and release callbacks. You can then simply typecast the result to an NSDictionary thanks to toll-free bridging, and use it just like a normal NSDictionary, except that it won't fiddle with retain counts.
If you do this, the dictionary will not automatically zero the reference; you will need to make sure to remove the entry when you dealloc an instance.
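For illustration, a minimal sketch of that CFDictionary variant, using CFDictionaryCreateMutable with value callbacks whose retain/release are NULL (the function name is made up):

#import <Foundation/Foundation.h>

static NSMutableDictionary *CreateNonRetainingDictionary(void) {
    // Start from the standard value callbacks and drop retain/release,
    // so stored values are neither retained on insertion nor released
    // on removal. Keys keep the usual NSDictionary-like behaviour.
    CFDictionaryValueCallBacks valueCallBacks = kCFTypeDictionaryValueCallBacks;
    valueCallBacks.retain = NULL;
    valueCallBacks.release = NULL;
    return (NSMutableDictionary *)CFDictionaryCreateMutable(
        kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks,
        &valueCallBacks); // toll-free bridged to NSMutableDictionary
}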
What you want is a zeroing weak reference (it's not a "Holy Grail of cache-management algorithms"; it's a well-known pattern). The problem is that Objective-C provides zeroing weak references only when running with garbage collection, not in manually memory-managed programs. And the iPhone does not provide garbage collection (yet).
All the answers so far seem to point you to half-solutions.
Using a non-retaining reference is not sufficient, because you will need to zero it out (or remove the entry from the dictionary) when the referenced object is deallocated. However, this must be done BEFORE the -dealloc method of that object is called, otherwise the very existence of the cache exposes you to the risk that the object is resurrected. The way to do this is to dynamically subclass the object when you create the weak reference and, in the dynamically created subclass, override -release to use a lock and -dealloc to zero out the weak reference(s).
This works in general, but it fails miserably for toll-free bridged Core Foundation objects. Unfortunately the only solution, if you need to extend the technique to toll-free bridged objects, requires some hacking and undocumented stuff (see here for code and explanations) and is therefore not usable for iOS or for programs that you want to sell on the Mac App Store.
If you need to sell on the Apple stores and must therefore avoid undocumented stuff, your best alternative is to implement locked access to a retaining cache and then scavenge it for references with a current -retainCount value of 1 when you want to release memory. As long as all accesses to the cache are done with the lock held, if you observe a count of 1 while holding the lock you know that there's no-one that can resurrect the object if you remove it from the cache (and therefore release it) before relinquishing the lock.
For iOS you can use UIApplicationDidReceiveMemoryWarningNotification to trigger the scavenging. On the Mac you need to implement your own logic: maybe just a periodic check, or even simply periodic scavenging (both would also work on iOS).
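A sketch of that scavenging pass, assuming manual reference counting and a lock-protected NSMutableDictionary cache (the ivar names are illustrative):

- (void)scavenge {
    [_cacheLock lock];
    for (id key in [_cache allKeys]) {
        id value = [_cache objectForKey:key];
        // A retainCount of 1 means the cache holds the only reference; with
        // the lock held, nobody can fetch (and resurrect) it while we drop it.
        if ([value retainCount] == 1) {
            [_cache removeObjectForKey:key];
        }
    }
    [_cacheLock unlock];
}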
I've just implemented this kind of thing by using an NSMutableDictionary and registering for UIApplicationDidReceiveMemoryWarningNotification. On a memory warning I remove anything from the dictionary with a retainCount of 1...
Use [NSValue valueWithNonretainedObject:] to wrap the instance in an NSValue and put that in the dictionary. In the instance dealloc method, remove the corresponding entry from the dictionary. No messing with retain.
My understanding is that you want to implement the Holy Grail of cache-management algorithms: drop exactly the items that will no longer be used.
You may want to consider other criteria, such as dropping the least recently requested items.
I think the way I would approach this is to maintain a separate count or a flag somewhere to indicate if the object in the cache is being used or not. You could then check this when you're done with an object, or just run a check every n seconds to see if it needs to be released or not.
I would avoid any solution involving releasing the object before removing it from the dictionary (using NSValue's valueWithNonretainedObject: would be another way to accomplish this). It would just cause you problems in the long run.