What can cause different ObjC/ARC memory behaviour between Release and Debug configuration? - objective-c

I was running a test to make sure objects are being deallocated properly by wrapping the relevant code section in a 10 second long while loop. I ran the test in Debug and Release configurations with different results.
Debug (Build & Run in simulator):
Release (Build & Run on device, and Profile using Instruments):
The CPU spikes mark where objects are created and destroyed (there are three in each run). Notice how in the Debug build the memory usage rises gradually during the busy loop and then settles a little afterwards at a higher base level; this happens on every loop run. In the Release build it stays constant the whole time. After the three runs, the memory usage of the Debug build is significantly higher than that of the Release build. (The CPU spikes are offset on the time axis relative to each other, but that's just because I pressed the button that triggers the loop at different times.)
The inner loop code in question is very simple and basically consists of a bunch of correctly paired malloc and free statements as well as a bunch of retain and release calls (courtesy of ARC, also verified as correctly paired).
Any idea what is causing this behaviour?

In Release builds ARC will do its best to keep objects out of the autorelease pool. It does this via the objc_autoreleaseReturnValue / objc_retainAutoreleasedReturnValue handshake, which is checked at runtime.
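As an illustration (a minimal sketch with a hypothetical Widget class, not code from the question), this is the kind of convenience constructor that optimization targets:

@interface Widget : NSObject
+ (instancetype)widget;
@end

@implementation Widget
+ (instancetype)widget {
    // ARC compiles this return as objc_autoreleaseReturnValue(); when the
    // caller's matching objc_retainAutoreleasedReturnValue() is detected at
    // runtime, the object is handed over directly and never enters the
    // autorelease pool. Without the optimization it is autoreleased as usual.
    return [[Widget alloc] init];
}
@end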

A lot of Cocoa Touch classes use caching to improve performance. The amount of memory used for caching data can vary depending on total memory, available memory and probably some other factors. Since you are comparing results from the simulator on your Mac with results from the device, it is not strange that you get different results.
Some examples of classes/methods that use caching:
+(UIImage *)imageNamed:(NSString *)name
Discussion
This method looks in the system caches for an image object with the specified name and returns that object if it exists. If a matching image object is not already in the cache, this method loads the image data from the specified file, caches it, and then returns the resulting object.
NSURLCache
The NSURLCache class implements the caching of responses to URL load requests by mapping NSURLRequest objects to NSCachedURLResponse objects. It provides a composite in-memory and on-disk cache.
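If you suspect the URL cache is what keeps the base level up, you can shrink or purge it while measuring; a hedged sketch using the standard NSURLCache API (the capacities here are arbitrary):

NSURLCache *smallCache = [[NSURLCache alloc] initWithMemoryCapacity:512 * 1024        // 512 KB in memory
                                                       diskCapacity:10 * 1024 * 1024  // 10 MB on disk
                                                           diskPath:nil];
[NSURLCache setSharedURLCache:smallCache];

// Or simply empty the shared cache before taking a measurement:
[[NSURLCache sharedURLCache] removeAllCachedResponses];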

For one thing, Release builds optimize the code and strip debugging information from it. As a result, the application binary is significantly smaller and less memory is needed to load it.
I suspect that most of the extra memory in Debug builds is the debugging information itself, zombie tracking, etc.

Related

Memory not fully freed

I just started creating an app using SceneKit and SpriteKit and ARC for the first time. I noticed that the memory usage is quickly increasing when I switch between different Views. My first thought was that I have memory leaks but I am not sure now. The behavior even occurs in this basic example:
for (int r = 0; r < 9999999; r += 1) {
    NSString *s = [NSString stringWithFormat:@"test%i", r];
    s = nil;
}
From my understanding, an NSString object is created and directly released in this loop. I've tried this example in the iPhone Simulator and on an iPhone, and it makes the app use several hundred MB of RAM after the loop has executed. (I am checking the memory usage with the Xcode debug navigator.)
I am obviously misunderstanding something. Why is this example still retaining memory afterwards?
edit:
You could also create a new project: iOS -> Game -> Game Technology: SceneKit
Then add this into viewDidLoad:
for (int r = 0; r < 999999; r += 1) {
    SCNNode *tn = [SCNNode node];
    tn = nil;
}
The memory will peak at 550 MB and then go down to 300 MB, which would be too much if these objects were fully released and removed from RAM.
Don't rely on NSString for memory diagnostics. It has fairly atypical behavior.
This is a not-uncommon scenario, one I've seen on Stack Overflow more than once: in an attempt to reduce some complicated memory problem to something simpler, the developer creates a simplified example using NSString, unaware that choosing that particular class introduces curious, unrelated behaviors. The new "Debug Memory Graph" tool or the old tried-and-true Instruments (discussed below) is the best way to diagnose the underlying issues in one's code.
As an aside, you talk about releasing objects immediately. If a method's name doesn't start with alloc, new, copy or mutableCopy, the object it returns will not be deallocated immediately after falling out of scope, because it is an autorelease object. Autorelease objects are not released until the autorelease pool is drained (e.g., when you yield back to the run loop).
So, if your app's "high water" mark is too high but memory eventually falls back to acceptable levels, then look at your use of autorelease objects (and/or introduce your own autorelease pools). But generally this autorelease vs. non-autorelease distinction is somewhat academic unless you have a very long-running loop in which you're allocating many objects before yielding back to the run loop.
In short, autorelease objects don't affect whether objects are deallocated, merely when. I only mention this in response to the large for loop and the contention that objects should be deallocated immediately: the precise timing of the deallocation is affected by the presence of autorelease objects.
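For example (a sketch of the loop from the question's edit, placed in viewDidLoad as described there), wrapping each iteration in its own pool keeps the high-water mark down, because the autoreleased nodes are destroyed every pass instead of accumulating until control returns to the run loop:

for (int r = 0; r < 999999; r += 1) {
    @autoreleasepool {
        SCNNode *tn = [SCNNode node];   // +node returns an autoreleased object
        tn = nil;
    }   // the pool drains here, so the node can be deallocated each iteration
}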
Regarding your rapid memory increase in your app, it's likely to be completely unrelated to your example here. The way to diagnose this is to use Instruments (as described in WWDC 2013 Fixing Memory Issues). In short, choose "Product" - "Profile" and choose the "Leaks" tool (which will grab the essential "Allocations" tool, as well), exercise the app, and then look at precisely what was allocated and not released.
Also, Xcode 8's "Debug Memory Graph" tool is incredibly useful, too, and is even easier to use. It is described in WWDC 2016's Visual Debugging with Xcode. With this tool you can see a list of objects in the left panel, and when you choose one, you can see the object graph associated with that object, so you can diagnose what unresolved references you might still have.
By the way, you might try simulating a memory warning. Cocoa objects do all sorts of caching, some of which is purged when there's memory pressure.
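If some of the lingering memory is caching you control yourself, you can also purge it when the warning arrives; a minimal sketch, assuming a hypothetical NSCache named thumbnailCache (the notification and NSCache APIs are standard, the cache name is made up):

static NSCache *thumbnailCache;   // hypothetical cache owned by the app

// Register once, e.g. at startup; NSCache already evicts under pressure,
// but purging explicitly on a memory warning makes the effect easy to observe.
[[NSNotificationCenter defaultCenter]
    addObserverForName:UIApplicationDidReceiveMemoryWarningNotification
                object:nil
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                [thumbnailCache removeAllObjects];
            }];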
If you turned on any memory-debugging options (e.g., zombies) on your scheme, be aware that those cause additional memory growth as they capture the associated debugging information. You might want to turn off any debugging options before analyzing leaked, abandoned or cached memory.
Bottom line, if you're seeing growth of a couple of KB per iteration, none of the objects you instantiate are showing up, and you don't have any debugging options turned on, then you might not need to worry about it. Many Cocoa objects do sundry caching that is outside of our control, and it's usually negligible. But if memory is growing by MB or GB every iteration (don't worry about the first iteration, only subsequent ones), then that's something you really need to look at carefully.

Unity3D: optimize garbage collection

The Unity3D Profiler shows me spikes that are mostly about garbage collection. In the screenshot below, the three red spikes represent three stalls that I had in my gameplay. Each of these stalls is 100+ ms, and most of the time was spent in TrackDependencies.
Following Unity's instructions, I tried adding this to my code:
if (Time.frameCount % 30 == 0)
{
    System.GC.Collect();
}
This didn't help. I still have spikes and they still take 100+ms. What exactly is going on and what can I do to optimize my game?
PS:
I am dynamically creating and destroying a lot of GameObjects in my game. Could that be a problem?
I don't have string concatenation in a loop or arrays as return values, as cautioned against in the post.
This didn't help. I still have spikes and they still take 100+ms. What exactly is going on and what can I do to optimize my game?
With System.GC.Collect you are simply forcing a garbage collection. If you have allocated a lot of memory to be deallocated since the last collection, then you can't avoid spikes. Forcing a collection is only useful for trying to spread garbage collection over time and avoid one massive deallocation.
I am dynamically creating and destroying a lot of GameObjects in my game. Could that be a problem?
This could well be the problem.
Some hints:
Try to allocate (Resources.Load and Instantiate) as many of your resources as possible at the start of your application. If the memory required isn't too much, you can simply instantiate all the resources you need and enable/disable them on demand. If the resource memory requirements are huge, this is not achievable.
Avoid in-game calls to Instantiate and Destroy. Create an object pool in which a set of resources is instantiated when the application starts. Enable the resources you need and disable all the rest. Instead of destroying an object, release it back to the pool so that it can be disabled and re-enabled on demand.
Avoid in-game calls to Resources.UnloadUnusedAssets. It can only increase the time required to instantiate a resource you have previously released. It is useful for optimizing memory usage, but calling it at constant intervals or every time you destroy an object makes no sense.

Does class_getInstanceSize have a known bug about returning incorrect sizes?

Reading through the other questions that are similar to mine, I see that most people want to know why you would need to know the size of an instance, so I'll go ahead and tell you although it's not really central to the problem. I'm working on a project that requires allocating thousands to hundreds of thousands of very small objects, and the default allocation pattern for objects simply doesn't cut it. I've already worked around this issue by creating an object pool class, that allows a tremendous amount of objects to be allocated and initialized all at once; deallocation works flawlessly as well (objects are returned to the pool).
It actually works perfectly and isn't my issue, but I noticed class_getInstanceSize was returning unusually large sizes. For instance, a class that stores one size_t and two Class instance variables (including isa) is reported to be 40-52 bytes in size. I give a range because calling class_getInstanceSize multiple times, even back to back, is not guaranteed to return the same size. In fact, every class but NSObject seemingly reports random sizes that are far from what they should be.
As a test, I tried:
printf("Instance Size: %zu\n", class_getInstanceSize(objc_getClass("MyClassName"));
That line of code always returns a value that corresponds to the size that I've calculated by hand to be correct. For instance, the earlier example comes out to 12 bytes (32-bit) and 24 bytes (64-bit).
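For reference, here is a hand check of that figure using a made-up class with the layout described above (the implicit isa plus one size_t and one extra Class ivar), compiled into a trivial program:

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

@interface TinyNode : NSObject {
    size_t length;   // one pointer-sized word on a 64-bit runtime
    Class  buddy;    // one more word
}
@end

@implementation TinyNode
@end

int main(void) {
    // isa + size_t + Class = 3 pointer-sized words: expect 24 bytes on
    // 64-bit (12 bytes on 32-bit), matching the hand calculation.
    NSLog(@"Instance size: %zu", class_getInstanceSize([TinyNode class]));
    return 0;
}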
Thinking that the runtime may be doing something behind the scenes that requires more memory, I watched the actual memory use of each object. For the example given, the only memory read from or written to is in that 12/24 byte block that I've calculated to be the expected size.
class_getInstanceSize acts like this on both the Apple and GNU 2.0 runtimes. So is there a known bug with class_getInstanceSize that causes this behavior, or am I doing something fundamentally wrong? Before you blame my object pool: I've tried this same test in a brand-new project, using both the traditional alloc class method and allocating the object with class_createInstance(self, 0); in a custom class method.
Two things I forgot to mention before: I'm almost entirely testing this on my own custom classes, so I know the trickery isn't down to the class actually being a class cluster or any of that nonsense; second, class_getInstanceSize([MyClassName class]) and class_getInstanceSize(self) (run inside a class method) rarely produce the same result, despite both simply passing the class. Again, this happens on both runtimes.
I think I've solved the problem and it was due to possibly the dumbest reason ever.
I use a profiling/debugging library that is old; in fact, I don't know its actual name (the library is libcsuomm; the header for it has no identifying info). All I know about it is that it was a library available on the computers in the compsci labs (I did a year of Comp-Sci before switching to a Geology major, graduating and never looking back).
Anyway, the point of the library is that it provides a number of profiling and debugging facilities; the one I use it most for is memory-leak detection, since it actually tracks per object, unlike my other favorite memory-leak library (MSS, now unsupported), which is C-based and not aware of objects beyond raw allocations.
Because I use it so much when debugging, I always set it up by default without even thinking about it. So even when creating my test projects to try to pinpoint the bug, I set it up out of habit. Well, it turns out that the library works by pulling some runtime trickery so that it can properly track objects. Things seem to work correctly now that I've disabled it, so I believe it was the source of my problems.
Now I feel bad about jumping to conclusions about it being a bug, but at the time I couldn't see anything in my own code that could possibly cause that problem.

Live object is garbage collected?

I am using the garbage collector in my Cocoa-based application on Mac OS X. It has hundreds of threads running, and synchronization is done using an operation queue.
After a long run, one of the objects gets garbage collected and the application crashes.
Checking whether the object is non-nil also fails, as the object is invalid and contains a garbage value. Calling a method on the object leads to a crash.
Can anyone please help me debug the issue?
Thank you.
I am using the garbage collector in my Cocoa-based application on Mac OS X. It has hundreds of threads running, and synchronization is done using an operation queue.
More likely than not, the bug lies in the rather overly concurrent nature of your code. Running hundreds of threads on a machine with "only" double-digit core counts (if that) is unlikely to be very efficient and, of course, keeping everything properly in sync is going to be rather difficult.
The best place to start is to turn on Malloc stack logging and use malloc_history to find out what events occurred at the address that went south.
Also, there were fixes in 10.6.5 that impacted GC correctness.
If you can change the code of the garbage collected object, then override the finalize method like this, to get some information:
- (void)finalize
{
    NSLog(@"Finalizing!\n%@", [[NSThread callStackSymbols] componentsJoinedByString:@"\n"]);
    // If you put a breakpoint here, you can inspect the supposed references to this object.
    [super finalize];
}

Better Understanding Memory Release

I am new to Objective-C, and as the first application I am writing, I am starting off with a simple WebKit-based browser. So far I have done well and am able to load websites, but after a while of use the memory consumption tends to get high. I have read the documentation on retain, release and autorelease management, and I just have one question: if I released my WebKit view and created a new instance every time I loaded a new website, would my usage drop back down to its original state, or am I misunderstanding how release works?
In theory, yes, the usage should drop back to its original state, as releasing frees the object from the heap.
The reason memory usage gets higher as you load more websites is that WebKit caches parts of each website so it can load them faster the next time.
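If you do go the route of tearing the view down and creating a fresh one each time, here is a rough pre-ARC sketch (assuming an iOS UIWebView held in a retained property called webView; the names are illustrative, and the same idea applies to a Mac WebView):

[self.webView removeFromSuperview];
self.webView = nil;                 // the retain-property setter releases the old view

UIWebView *fresh = [[UIWebView alloc] initWithFrame:self.view.bounds];
self.webView = fresh;               // the property retains it
[self.view addSubview:fresh];       // the superview retains it too
[fresh release];                    // balance the alloc

Whether memory actually returns all the way to the original level still depends on whatever WebKit keeps in its shared caches.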