We were testing our app on a first-generation iPad with Xcode's profiling tool (which itself uses a fair amount of memory on the device) and found that our app deallocates memory very slowly (it took more than 1 minute to free 20 MB), even though our code releases memory correctly. We then monitored some professional third-party apps (Chrome and Flipboard) and noticed they showed the same rate of deallocation.
Does iOS or the iPad 1 deallocate memory slowly, or does the profiler interfere with iOS's memory deallocation?
We are using the latest iOS version. And yes, in real-world usage our app's performance is affected by the slow deallocation of memory.
We measured the rate of deallocation by loading the app and waiting for its memory usage to stabilize, then loading media or a view and waiting for that to stabilize, then going back to the original view and watching how long it takes to deallocate the previous view/media. That happens at a rate of less than 20 MB per minute. The app never returns to its original memory usage, i.e. the usage right after launch once it has stabilized on the startup view.
We measured the rate of deallocation by loading the app and waiting for its memory usage to stabilize, then loading media or a view and waiting for that to stabilize, then going back to the original view and watching how long it takes to deallocate the previous view/media.
This is not a useful benchmark. Okay, so you allocated some memory, then you told the OS that you don't need it any more. Fine so far. The OS often won't bother doing much about it unless it needs that memory elsewhere; it makes more sense to keep it around as a cache so that if you need it again it's available more quickly. You aren't measuring anything useful: you're measuring how long it takes the system to need the memory elsewhere, not how long it takes to deallocate it. Common sense should tell you that a minute to deallocate 20 MB is not correct.
I suggest you come up with a benchmark that measures what you are actually interested in. How would your application be affected by slow deallocation? Are you sure you aren't inadvertently using that as a poor substitute for a factor you really are interested in?
If you do release memory when it's no longer needed (i.e. you make the effort to minimize the time during which memory stays allocated), then you don't have to worry about "slow deallocation" (whatever that term is supposed to mean).
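For what it's worth, here is a minimal sketch (Swift; the loop over page URLs and all the names are purely illustrative, not from the asker's code) of what "minimizing the time memory stays allocated" looks like in practice: each iteration's temporaries are reclaimed when the pool drains, instead of whenever the system gets around to it.
import Foundation

// Illustrative only: process a batch of files while keeping the peak
// footprint to roughly one item at a time.
func processPages(at urls: [URL]) {
    for url in urls {
        autoreleasepool {
            // Temporaries created here are released when the pool drains
            // at the end of each iteration, not at some later point.
            if let data = try? Data(contentsOf: url) {
                _ = data.count // stand-in for the real per-page work
            }
        }
    }
}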
Don't overestimate a profiler's abilities.
I'm working on a Mac app. Initially, monitoring Xcode's memory report while running my app showed memory ramping up like crazy. I used Instruments and profiled the app for allocations and leaks. It turned out there wasn't much leaked memory, as you would expect from strong reference cycles etc., but there was a lot of abandoned memory. By following the stack traces that led to my code I have fixed about 70% of it, using autorelease pools etc. The remaining 30% of abandoned memory still seems to point to system calls.
Based on that, I have two questions:
1) I want to fix the remaining 30%. How can I get rid of abandoned memory? I have already used Instruments and know exactly where those system calls are spawned, but I still don't know what to do to get that memory cleaned up. (I'm using ARC, so no manual retain/release, and adding autorelease pools doesn't seem to make a difference; a sketch of the pattern I used is below.)
2) Once whatever my application was doing has completed and there is no longer any need for that memory (just like when the application first started), I want to get rid of all the memory my app has used. I plan to use this as a brute-force approach to clean up all memory, just like the system would if the user closed the app or shut down the machine.
Basically, if I knew where my app's memory lives in the file system, I would just programmatically run the purge command on it or something similar. At this point I'm 100% sure nothing needs to stay in memory for the app except the first screen you would expect to see when you first launch it.
I read this, this, this and this but they weren't helpful.
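For reference, this is roughly the pattern behind the 70% I did manage to fix, in case it matters for the remaining 30% (Swift sketch; processItem is a stand-in for my real per-item work):
import Foundation

// Stand-in for the real work that creates lots of temporary objects.
func processItem(_ item: String) {
    _ = item.uppercased()
}

// Draining an explicit autorelease pool per iteration keeps temporaries
// from piling up until the run loop returns, which is what cleared most
// of the abandoned memory for me.
func runBatch(_ items: [String]) {
    for item in items {
        autoreleasepool {
            processItem(item)
        }
    }
}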
I have an app written in VB.NET with WinForms that shows some stats and pictures on a big-screen monitor. I also monitor the memory usage of said app by using this:
Process.WorkingSet64
I know Windows does not always report the exact usage, but I just wanted to check that I didn't have any small memory leaks (I had some, but they are solved now). In the first week the memory usage was around 100 MB, but in the second week it showed around 50 MB.
So why did it all of a sudden drop while still running the exact same code?
I can hardly imagine the garbage collector kicked in this late, since the app refreshes every 10 seconds and has ample time in between those refreshes to do its thing.
Or perhaps there is just a better, more reliable way to get the memory usage of a process.
Process.WorkingSet64 doesn't report the total memory usage; it omits the memory that has been paged out to disk:
The value returned by this property represents the current size of working set memory used by the process. The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault. (MSDN)
Even if your system was never low on free memory, you may have minimized the application window, which caused Windows to trim its working set.
If you want to look for memory leaks you should probably use Process.PrivateMemorySize64 instead. Your shared memory is going to contain just executable code and it's going to remain more or less constant throughout the life of the process, so you should focus on the private memory.
I'm seeing strange behavior in my app. I implemented my own PDF viewer that shows ONE page at a time. Using Instruments' Activity Monitor I can see that my real memory stays constant at around 50 MB.
After switching pages back and forth a couple of times I receive a memory warning, level 0.
I do my best to react to it and sacrifice the low-res background image I render first to show something until CATiledLayer catches up.
It does not help. A few pages later I get memory warning levels 1 and 2, and after a few more pages my app gets killed with reason "9". Memory NEVER goes above 50 MB!
Why do I get those warnings in the first place? There IS enough memory available.
This is happening on an iPad running iOS 4.3.
I don't think there's anything mysterious going on here -- which I'm sure is not what you wanted to hear. There are no absolute figures of "safe" amounts of memory to use. The rule is: when the OS tells you you're using too much, use less. It will jettison background processes first and in preference to your foreground app but there are still limits.
In the "olden days," you used to be lucky to get 20 MB. I'm sure you can safely get more than that on an iPad, but apparently it's less than 50 MB.
You don't say how much memory you free by releasing the background image, but it seems that you need to be caching less data. You might also want to check Leaks (also in Instruments) to make sure you're releasing the objects you think you are.
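In case it's useful, the kind of reaction meant here is simply dropping anything that can be rebuilt on demand as soon as the warning arrives. A minimal sketch in modern Swift/UIKit terms (the property names are illustrative, not taken from the asker's code):
import UIKit

final class PDFPageViewController: UIViewController {
    // Low-res placeholder shown until CATiledLayer catches up.
    var previewImage: UIImage?
    // Rendered pages that can always be re-rendered from the PDF.
    let pageCache = NSCache<NSNumber, UIImage>()

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        previewImage = nil             // rebuilt on demand later
        pageCache.removeAllObjects()   // NSCache also evicts under pressure on its own
    }
}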
Is it possible to recover memory lost from w3wp.exe? I thought a Session.Abandon() should clear up resources like that. The thing is, I have a web application, and certain pages make w3wp.exe grow significantly, like from 40 MB to 400 MB. I am definitely going to optimize my code to reduce this, but for whatever amount w3wp.exe grows, is there no way to recover that memory even after the user has logged out and closed the browser?
I know the worker process will recycle after 30 minutes of idle time (the default), but what if it is never idle for that long and the worker process holds on to that portion of memory and just keeps growing? Any thoughts, people?
The garbage collector will take care of whatever memory needs to be freed, provided that you dispose things correctly, etc. The GC doesn't immediately kick in every time you call Session.Abandon(), as that would be a major performance hit.
That said, every application has a "normal" memory usage, i.e. a stable memory usage (again, provided you don't have leaks), and this figure is different for every application. 400MB can be a lot or it can be nothing, depending on what your app does. I have apps that hover around 400MB and others around 1.5GB and that's OK as long as memory usage stabilizes somewhere. If you see unbounded memory usage then you most likely have a leak somewhere in your app.
Storing large amounts of data in the in-proc session can also quickly rack up the memory usage. Instead, use a file or a database to store this data.
Unless you are leaking memory, the memory manager will re-use it, so you should not see the process memory keep growing.
Even the standard blank-window Cocoa app that gets built when you make a new Cocoa project in Xcode uses almost 6 MB of memory. What's the reason for this? Is it possible to make an app use less, or does OS X simply manage memory differently for Cocoa apps?
Not that I'm complaining. I know that performance "hardly matters anymore" (edit: what I mean is, it matters less than readability/maintainability/the programmer's time). I'm just curious.
OS X does a lot of magic with shared memory and copy-on-write pages, so chances are that it doesn't take that much physical RAM for every application.
You can check exactly how memory blocks are mapped by running:
sudo vmmap <PID of the process>
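If you'd rather check from inside the app itself, something along these lines (a sketch, Darwin-only; it reads just the task's resident size via Mach task_info) gives the physical footprint rather than the much larger virtual size:
import Foundation

// Sketch: ask Mach for this task's resident (physical) size in bytes.
func residentSizeInBytes() -> UInt64? {
    var info = mach_task_basic_info()
    var count = mach_msg_type_number_t(MemoryLayout<mach_task_basic_info>.size
                                       / MemoryLayout<natural_t>.size)
    let kr = withUnsafeMutablePointer(to: &info) {
        $0.withMemoryRebound(to: integer_t.self, capacity: Int(count)) {
            task_info(mach_task_self_, task_flavor_t(MACH_TASK_BASIC_INFO), $0, &count)
        }
    }
    return kr == KERN_SUCCESS ? info.resident_size : nil
}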
It depends on all the frameworks (APIs) you use, combined with the VM allocations done by low-level operations.
It's only worth trying to reduce your total heap allocations and the resident size of your code: make sure your data structures are allocated efficiently and try compiling with the ever-so-famous "-Os" optimization flag (optimize for size). There isn't much you can do about the VM consumed by Cocoa, and I wouldn't really worry about it.
This is clearly a 'WTF' moment for developers in general. The question is usually - why does my trivial application use up so much memory.
The answer is down to the underlying framework. You could argue that 6MB is too much, but really, it is nothing.
It's not rare to see computers come with 2 GB of memory these days; the stock iMac has 4 GB. The whole point of the computer industry is to use up all the resources a machine has so that it can continue to evolve.
Yes, you should avoid inefficiencies where possible (don't load a 5-million-point array at startup, for instance). But unless your beta demonstrates that you fudged up, just keep it on the list of to-dos.
I'm a bit out on a limb here, but I guess it's because all the libraries that get loaded have to do quite a bit of setting up, and there is no need to garbage-collect, so they simply get to waste memory. Plus, even if all memory were autoreleased, it would wait until the first idle event, which comes after the creation of the window. Delete unneeded libraries/frameworks, or force a garbage collection somewhere after loading the window from the nib, and see how much it goes down if you're so concerned.
I am not concerned about it. Some of the memory might be returned later, and the rest is the price you pay for a powerful framework.
A factor that is not specific to Cocoa but applies to frameworks in general is that the overhead is not linear. There is usually a fixed and a variable "price", in terms of overhead, for using a framework.
When you create a simple blank window, the fixed overhead is overwhelming, but when you create an application with tens of windows, dialogs, and controls, that initial fixed overhead becomes negligible compared to the size of the application itself.