Meaning of generation-analysis memory growth clearing on the simulator's "Simulate Memory Warning" event - objective-c

I was trying to debug memory growth with generation analysis and was getting frustrated (lots of objects that were the result of calls to CGGlyphBitmapCreate were not being released). Then I ran the program on the simulator, captured many generation snapshots, and then did a Simulate Memory Warning. Almost every generation cleared to zero (a few had a few bytes here and there). Does that mean my code is fine and I should not worry about it? How can I prevent the growth so that it won't have to wait for a simulated memory warning event to be cleared? (By the way, all of this growth was caused by system libraries.)

If the memory is getting released upon memory warning, then you're probably OK. The OS will cache all sorts of stuff (that it will free/reuse as it sees fit) that you don't generally have to be concerned about.
Still, I would run the code through the static analyzer (press shift+command+B in Xcode or select "Analyze" on the Xcode "Product" menu) just to be safe.
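For the second part of the question (not having to wait for a memory warning), one common pattern is to keep anything you can cheaply rebuild in an NSCache, which the system is allowed to empty under memory pressure, and to drop it yourself when a memory warning arrives. A minimal Objective-C sketch, assuming ARC; the view controller and its glyphImageCache property are illustrative stand-ins, not code from the original project:

    #import <UIKit/UIKit.h>

    // Illustrative view controller; glyphImageCache is a hypothetical cache
    // of rendered glyph bitmaps, not something taken from the question.
    @interface GlyphViewController : UIViewController
    @property (nonatomic, strong) NSCache *glyphImageCache;
    @end

    @implementation GlyphViewController

    - (void)viewDidLoad {
        [super viewDidLoad];
        // NSCache evicts its contents on its own when the system is under
        // memory pressure, so cached bitmaps never become permanent growth.
        self.glyphImageCache = [[NSCache alloc] init];
    }

    - (void)didReceiveMemoryWarning {
        [super didReceiveMemoryWarning];
        // Drop anything we can rebuild instead of waiting for the system
        // to reclaim it on the next memory warning.
        [self.glyphImageCache removeAllObjects];
    }

    @end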

Related

Is it possible to set a baseline memory usage in valgrind for leak detection?

Is there a way to tell valgrind from inside my code when to start and when to stop checking for memory leaks?
I am using a legacy testing framework which must link with my testing program in order to run. The framework has memory leaks in it - valgrind shows about 50KB of memory that has not been released, but is reachable via heuristic. This is annoying, because I must keep this number in mind to see how much memory is leaked from my code. It would be a lot more convenient if I could tell valgrind to start collecting memory stats when my first test begins, and stop collecting when the last test is over. Is there an API for it?
Valgrind's memcheck allows you to do a "differential" leak search. A differential leak search reports the delta between the previous leak search and the current situation.
You can do such a differential leak search using monitor commands with vgdb, either from the shell or from gdb. See https://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands.
You can also use the client request VALGRIND_DO_CHANGED_LEAK_CHECK from your program, see https://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs.
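For completeness, here is a hedged sketch of how those client requests could be wired into the test program itself, so that memcheck only reports what changed between the start of the first test and the end of the last one. run_all_tests() is a stand-in for the legacy framework's entry point, and the example assumes the program is compiled with valgrind/memcheck.h on the include path; the requests are harmless no-ops when the binary is not running under Valgrind.

    #include <valgrind/memcheck.h>

    /* Hypothetical stand-in for the legacy framework's test runner. */
    extern void run_all_tests(void);

    int main(void) {
        /* Baseline leak search: records the framework's own ~50 KB of
           still-reachable memory before any test has run. */
        VALGRIND_DO_LEAK_CHECK;

        run_all_tests();

        /* Differential search: reports only the loss records that are new
           or changed since the baseline search above. */
        VALGRIND_DO_CHANGED_LEAK_CHECK;
        return 0;
    }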

How to programmatically purge/clean cocoa application memory?

I'm working on a Mac app. Initially, monitoring Xcode's memory report while I ran my app showed that memory usage was ramping up like crazy. I used Instruments and profiled my app for allocations and leaks. It turned out there wasn't much leaked memory, as you would expect from strong reference cycles etc. However, there was a lot of abandoned memory. By following the stack traces that led to my code, I have fixed about 70% of it using autorelease pools etc. Still, the remaining 30% of abandoned memory seems to point to system calls.
Based on that, I have two questions:
1) I want to fix the remaining 30%. How can I get rid of abandoned memory? I have already used Instruments and know exactly where those system calls are spawned, but I still don't know what to do to have that memory cleaned up. (I'm using ARC, no manual retain/release, and autorelease doesn't seem to make a difference.)
2) Once whatever my application was doing has completed and there is no need for any memory to be there (just like when the application first started), I want to get rid of all memory that my app has used up. I plan to use this as a brute-force approach to clean up all memory, just like the system would if the user closed the app or turned off the machine.
Basically, if I knew where my app's memory were in the file system, I would just programmatically call the purge command on it or something similar. Because at this point I'm 100% sure nothing needs to be in memory for the app except for the first screen that you would expect the first time you launch the app.
I read this, this, this and this but they weren't helpful.
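There is no supported way to purge a Cocoa app's memory wholesale from outside the process, but two things usually account for most of what shows up as abandoned memory: autoreleased temporaries that pile up inside long-running loops, and caches the process keeps until it sees memory pressure. A hedged Objective-C sketch of both ideas, assuming ARC; processAllItems and the cache passed to the monitor are placeholders, not API from the question:

    #import <Foundation/Foundation.h>

    // Drain autoreleased temporaries once per iteration instead of letting
    // them accumulate until the enclosing (often run-loop) pool drains.
    static void processAllItems(NSArray *items, void (^work)(id item)) {
        for (id item in items) {
            @autoreleasepool {
                // Anything autoreleased inside `work` is released at the
                // end of each pass through the loop.
                work(item);
            }
        }
    }

    // Optionally react to system memory pressure (OS X 10.9+ / iOS 8+) and
    // empty caches yourself instead of waiting for the app to quit.
    static dispatch_source_t startMemoryPressureMonitor(NSCache *cache) {
        dispatch_source_t source = dispatch_source_create(
            DISPATCH_SOURCE_TYPE_MEMORYPRESSURE, 0,
            DISPATCH_MEMORYPRESSURE_WARN | DISPATCH_MEMORYPRESSURE_CRITICAL,
            dispatch_get_main_queue());
        dispatch_source_set_event_handler(source, ^{
            [cache removeAllObjects];
        });
        dispatch_resume(source);
        return source; // caller must keep a strong reference to the source
    }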

Why does IntelliJ not release memory after closing a project?

I had three projects open. One of them - Spark - was very large. Upon closing Spark there was NO difference in memory usage, as reported by the OS X Activity Monitor. Note: all projects are opened within the same IntelliJ instance.
It is in fact using just over 4 GB, and I now have only two projects open. Those two projects take up only 1.5 GB if I shut IntelliJ down and start it up again.
So, what can I do to "encourage" IntelliJ to release the memory it is using? It is running very, very slowly (it cannot keep up with my typing, for example).
Update: I just closed the larger of the two remaining projects. STILL no reduction in memory usage. The remaining project is a single Python file, so IntelliJ should be using under 512 MB at this point!
Following up on @PeterGromov's answer, it seems that it is difficult to get the memory back. In addition, @KevinKrumwiede mentioned -XX:MaxHeapFreeRatio, which appears to be an avenue.
Here are a couple of those ideas taken a bit further, from "Does GC release back memory to OS?":
The HotSpot JVM does release memory back to the OS, but does so reluctantly.
You can make it more aggressive by setting -XX:GCTimeRatio=19 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30, which will allow it to spend more CPU time on collecting and constrain the amount of allocated-but-unused heap memory after a GC cycle.
Additionally, with Java 9 the -XX:-ShrinkHeapInSteps option can be used to apply the shrinking caused by the previous two options more aggressively. Relevant OpenJDK bug.
Do note that shrinking ability and behavior depend on the chosen garbage collector. For example, G1 only gained the ability to yield back unused chunks in the middle of the heap with jdk8u20. So if heap shrinking is needed, it should be tested for a particular JVM version and GC configuration.
and from How to free memory in Java?
To extend upon the answer and comment by Yiannis Xanthopoulos and Hot Licks (sorry, I cannot comment yet!), you can set VM options like this example: -XX:+UseG1GC -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=30
In my JDK 7 this will then release unused VM memory if more than 30% of the heap becomes free after GC when the VM is idle. You will probably need to tune these parameters.
While I didn't see it emphasized in the link below, note that some garbage collectors may not obey these parameters, and by default Java may pick one of them for you, should you happen to have more than one core (hence the UseG1GC argument above).
I am going to add -XX:MaxHeapFreeRatio to IntelliJ and report back whether it helps.
Our application presently only runs on Java 7, so the first approach above is not yet viable, but there is hope since our app is moving to JDK 8 soon.
https://www.jetbrains.com/help/idea/status-bar.html
I used this:
Shows the current heap level and memory usage. Visibility of this section in the Status bar is defined by the Show memory indicator check box in the Appearance page of the Settings/Preferences dialog. It is not shown by default.
Click the memory indicator to run the garbage collector.
The underlying Java virtual machine only supports growing its heap. So even if the IDE doesn't need all of that memory after closing all projects, it is still allocated and counted as used by the OS.

Memory graph / chart in Xcode 5 during Debug

What does "Memory" usage chart/graph exactly represents in XCode 5 Debug navigator window?
I have an iOS app project with ARC disabled and no-storyboard/xib (i.e. old style). All memory management done manually using retain/release/autorelease.
When I debug the project in XCode 5, the memory pie-chart / graph show gradually increasing memory usage as the app runs, exceeds 1 GB memory footprints within half hour.
Roughly, it keeps increasing by 0.1 to 0.3 MB per 2 to 3 second with very rare memory dips/decrease (of magnitude < 0.1 MB per 30 seconds).
Is this a concern (memory leak) with respect to memory management? I did memory analysis (using Allocations/Memory Leak through Instruments on XCode 4.6) but didn't find any leaks.
Found the answer myself. Unfortunately I had NSZombieEnabled (Zombie Objects) turned on for debug mode (menu Product > Scheme > Edit Scheme).
Typically NSZombieEnabled keeps even released objects in memory to help the developer find over-released objects. Refer to this link: What is NSZombie?
After I unchecked the "Enable Zombie Objects" option, memory usage stabilized at about 10 MB (no longer constantly increasing) even after half an hour of app usage.
BOTTOM LINE: make sure "Enable Zombie Objects" is cleared when you want to analyze memory usage.
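As a small safeguard against profiling with zombies accidentally left on: the "Enable Zombie Objects" checkbox works by exporting the NSZombieEnabled environment variable, so it can be detected at runtime. A hedged Objective-C sketch; the function name and log message are illustrative:

    #import <Foundation/Foundation.h>

    // "Enable Zombie Objects" sets NSZombieEnabled=YES in the app's
    // environment; warn ourselves if it is still on while measuring memory.
    static void warnIfZombiesEnabled(void) {
        NSString *zombies =
            [[[NSProcessInfo processInfo] environment] objectForKey:@"NSZombieEnabled"];
        if ([zombies isEqualToString:@"YES"]) {
            NSLog(@"NSZombieEnabled is on; released objects are kept alive, "
                  @"so the memory gauge will only ever grow.");
        }
    }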
It simply measures the memory your app uses. So if it is increasing it must be a memory leak.
When using the leak-analysis tools, I would treat the results as a guideline. They may help you find leaks, but as with all automated tools they may not find everything, since it can be hard for an automated tool to predict what certain pieces of code (especially the more dynamic ones) do memory-wise.
I am seeing an issue where memory (heap) grows indefinitely under heavy processing, but when running the exact same binary without Xcode, memory usage is fine. Remember to test outside of Xcode -- no idea what the cause is. NSZombies and all other debug options are off.

Memory usage drops after a week

I have this app written in VB.Net with WinForms that shows some stats and pictures on a big-screen monitor. I also monitor the memory usage of said app by using this:
Process.WorkingSet64
I know Windows does not always report the correct usage, but I just wanted to check that I didn't have any little memory leaks (which I did have, but they are solved now). The first week the memory usage was around 100 MB, and the second week it showed around 50 MB.
So why did it all of a sudden drop while still running the exact same code?
I can hardly imagine that the garbage collector kicked in this late, since the app refreshes every 10 seconds and has ample time in between those refreshes to do its thing.
Or perhaps there is just a better, more reliable way to get the memory usage of a process.
Process.WorkingSet64 doesn't report the total memory usage; it omits the memory that is swapped to disk:
The value returned by this property represents the current size of working set memory used by the process. The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault. (MSDN)
Even if your system was never low on free memory, you may have minimized the application window, which caused Windows to trim its working set.
If you want to look for memory leaks you should probably use Process.PrivateMemorySize64 instead. Your shared memory is going to contain just executable code and it's going to remain more or less constant throughout the life of the process, so you should focus on the private memory.