I wrote a small document-based application for OS X.
I see that every new document, when created, increases the memory usage (Xcode gauge) as expected, but when I close the document the memory usage decreases only insignificantly.
Thus many New/Close operations look like a memory leak.
How can the leak be explained, and how do I free the memory?
Details:
The application has a document which presents itself in an NSOutlineView with a million rows (a deliberately large number, to make the change in memory usage visible), each containing the same string.
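For reference, the setup can be reproduced with a data source along these lines (a minimal sketch in Swift; the class and property names are illustrative, not the actual project code):

import Cocoa

// Minimal sketch: a flat data source that hands NSOutlineView the same
// string for every one of a million rows.
final class MillionRowDataSource: NSObject, NSOutlineViewDataSource {
    let rowCount = 1_000_000
    let sharedString = "the same string on every line"

    func outlineView(_ outlineView: NSOutlineView, numberOfChildrenOfItem item: Any?) -> Int {
        return item == nil ? rowCount : 0   // only the root has children
    }

    func outlineView(_ outlineView: NSOutlineView, child index: Int, ofItem item: Any?) -> Any {
        return index                        // the boxed row index serves as a stable item
    }

    func outlineView(_ outlineView: NSOutlineView, isItemExpandable item: Any) -> Bool {
        return false                        // flat list, nothing expands
    }

    func outlineView(_ outlineView: NSOutlineView, objectValueFor tableColumn: NSTableColumn?, byItem item: Any?) -> Any? {
        return sharedString
    }
}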
TL;DR: Is an IOSurfaceRef a valid surface to write to after it has been purged and its state changed to kIOSurfacePurgeableEmpty?
I'm trying to get a better understanding of what it means for an IOSurface to be purged. The only documentation I have come across is in IOSurfaceRef.h and the only sample code I've come across is in WebKit.
I'm using the command line tool memory_pressure to simulate a critical memory pressure environment for 10 seconds like so:
> memory_pressure -S -s 10 -l critical
I've written a very simple application that allocates 100 IOSurfaces with identical properties. When I use Instruments to measure the memory allocations, I see VM: IOSurface at roughly 6 GB, which is about 60 MB for each surface (4096 × 4096 × 4 bytes ≈ 64 MB).
I then change the purgeable state of each IOSurface to kIOSurfacePurgeableVolatile and run the memory_pressure simulation.
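In code, the experiment looks roughly like this (a sketch using the C-level IOSurface API from Swift; the surface properties are assumptions based on the numbers above):

import IOSurface

// Allocate 100 surfaces of 4096 x 4096 pixels, 4 bytes per element.
var surfaces: [IOSurfaceRef] = []
let properties: [String: Any] = [
    kIOSurfaceWidth as String: 4096,
    kIOSurfaceHeight as String: 4096,
    kIOSurfaceBytesPerElement as String: 4,
]
for _ in 0..<100 {
    if let surface = IOSurfaceCreate(properties as CFDictionary) {
        surfaces.append(surface)
    }
}

// Mark each surface volatile, so the kernel may discard (rather than
// page out) its contents under memory pressure.
for surface in surfaces {
    var oldState: UInt32 = 0
    IOSurfaceSetPurgeable(surface, UInt32(kIOSurfacePurgeableVolatile), &oldState)
}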
Instruments still reports that I have 6GB of surfaces allocated. However, if I check the purgeable state of each surface, they are marked as kIOSurfacePurgeableEmpty.
So it looks like they were successfully purged, but the memory is still allocated to my application. Why is that and what condition are these surfaces in?
The header file states that I should assume they have "undefined content" in them. Fair enough.
But is the actual IOSurfaceRef or IOSurface * object still valid? I can successfully query all of its properties and I can successfully lock it for reading and writing.
Am I allowed to just reuse that object even though its contents were purged or do I have to discard that instance and create an entirely new IOSurface?
(macOS 10.14)
Yes, it's still usable. It's just that the pixel data has been lost.
Basically, when the system is under memory pressure, it would normally page data out to disk. Marking a purgeable object volatile allows the system to simply discard that data instead. The app has indicated that while the data is nice-to-have, it's not must-have, and can be recreated if necessary.
When it wants to work with the IOSurface again, the app should mark the object nonvolatile and check the old state. If it was empty, then the app should recreate the data.
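In code, that pattern might look like this (a minimal sketch; prepareForReuse and redraw are hypothetical names):

import Darwin
import IOSurface

// Make the surface nonvolatile again and check what happened to it
// while it was volatile; regenerate the pixels only if it was emptied.
func prepareForReuse(_ surface: IOSurfaceRef, redraw: (IOSurfaceRef) -> Void) {
    var oldState: UInt32 = 0
    let kr = IOSurfaceSetPurgeable(surface, UInt32(kIOSurfacePurgeableNonVolatile), &oldState)
    guard kr == KERN_SUCCESS else { return }

    if oldState == UInt32(kIOSurfacePurgeableEmpty) {
        // The object is still valid; only its contents were discarded.
        redraw(surface)
    }
}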
Instruments reports that your app still has 6GB allocated because 6GB of its address space is reserved for the IOSurfaces. But allocated does not necessarily mean backed by either physical RAM or a swap file; it's just bookkeeping until the memory is actually used. Your app's resident set size (RSS) should shrink.
I'm trying to work around an issue which has been bugging me for a while. In a nutshell: on what basis should one assign a max heap size for a resource-hogging application, and is there a downside to it being too large?
I have an application used to visualize huge medical datasets, which can eat up several gigabytes of memory if several imaging volumes are opened side by side. Caching the data to be viewed is essential for a fluent workflow. The software runs on Windows workstations and is started with a bootloader, which assigns the heap size and launches the main application. The actual memory needed by the main application is directly proportional to the data being viewed and cannot be determined by the bootloader, because that would require reading the data, which would, ultimately, consume too much time.
So, to ensure that the JVM has enough memory during launch, we set -Xmx as large as we dare, currently based on the maximum physical memory of the workstation. However, is there any downside to this? I've read (in a post from 2008) that it is possible for native processes to eat into excess heap space, which can lead to memory errors at runtime. Should I maybe also sniff for free virtual memory or paging-file size before assigning the heap? How would you deal with this situation?
Oh, and this is my first post to these forums. Nice to meet you all and be gentle! :)
Update:
Thanks for all the answers. I'm not sure if I put it well, but my problem arose from the fact that I have zero knowledge of the hardware this software will run on but would, nevertheless, like to assign as much heap space to the software as possible.
I settled on assigning a heap of 70% of physical memory IF there is a sufficient amount of virtual memory available, and less otherwise.
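The launcher idea, sketched (in Swift here rather than as an actual Windows bootloader; the java path and jar name are placeholders, and a real implementation would also check free virtual memory / paging-file size via the platform's APIs):

import Foundation

// Size -Xmx from physical RAM, then launch the main application.
let physicalBytes = ProcessInfo.processInfo.physicalMemory
let heapMB = UInt64(Double(physicalBytes) * 0.70) / (1024 * 1024)

let jvm = Process()
jvm.executableURL = URL(fileURLWithPath: "/usr/bin/java")          // placeholder path
jvm.arguments = ["-Xmx\(heapMB)m", "-jar", "MainApplication.jar"]  // placeholder jar
do {
    try jvm.run()
    jvm.waitUntilExit()
} catch {
    print("Failed to launch JVM: \(error)")
}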
You can have heap sizes of around 28 GB with little impact on performance, especially if you have large objects (lots of small objects can increase GC pause times).
Heap sizes of 100 GB are possible but have downsides, mostly because they can have high pause times. If you use Azul Zing, it can handle much larger heap sizes significantly more gracefully.
The main limitation is the size of your memory. If your heap exceeds that, your application and your computer will run very slowly or become unusable.
A standard way around these issues in mapping software (which has to be able to map the whole world, for example) is to break your images into tiles. This way you only display the tiles which are on the screen (or the portions which are on the screen), as sketched below. If you need to be able to zoom in and out, you might need to store the data at two to four levels of scale. Using this approach you can view a map of the whole world on your phone.
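A sketch of the visibility computation (hypothetical names; tiles are assumed square and coordinates non-negative):

// Given a viewport over a huge image, list only the tiles that
// intersect it; everything else stays on disk or in a cache.
let tileSize = 256.0

func visibleTiles(x: Double, y: Double, width: Double, height: Double) -> [(Int, Int)] {
    let c0 = Int(x / tileSize), c1 = Int((x + width) / tileSize)
    let r0 = Int(y / tileSize), r1 = Int((y + height) / tileSize)
    var tiles: [(Int, Int)] = []
    for col in c0...c1 {
        for row in r0...r1 {
            tiles.append((col, row))
        }
    }
    return tiles
}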
It's best not to set the JVM max memory to more than 60-70% of workstation memory (in some cases even lower), for two main reasons. First, what the JVM consumes on the physical machine can exceed the heap by 20% or more, due to GC mechanics. Second, the representation of a particular data entity in the JVM heap may not be the only physical copy of that entity in the machine's RAM, as the OS keeps caches and buffers around the various IO devices from which it grabs these objects.
What exactly does the "Memory" usage chart/graph represent in the Xcode 5 Debug navigator window?
I have an iOS app project with ARC disabled and no storyboard/xib (i.e., old style). All memory management is done manually using retain/release/autorelease.
When I debug the project in Xcode 5, the memory pie chart/graph shows gradually increasing memory usage as the app runs, exceeding a 1 GB memory footprint within half an hour.
Roughly, it keeps increasing by 0.1 to 0.3 MB every 2 to 3 seconds, with very rare memory dips/decreases (of magnitude < 0.1 MB per 30 seconds).
Is this a concern (memory leak) with respect to memory management? I did memory analysis (using Allocations/Leaks in Instruments on Xcode 4.6) but didn't find any leaks.
Found the answer myself. Unfortunately I had NSZombieEnabled (Zombie Objects) turned on for debug mode (menu Product > Scheme > Edit Scheme).
NSZombieEnabled keeps even released objects in memory, to help the developer find over-released objects. See this link: What is NSZombie?
After I unchecked the "Enable Zombie Objects" option, the memory usage stabilized at about 10 MB (no longer ever-increasing), even after half an hour of app usage.
BOTTOM LINE - Make sure "Enable Zombie Objects" is cleared when you want to analyze memory usage.
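If you want to guard against this in your own testing: the checkbox works by setting the NSZombieEnabled environment variable in the scheme, which can be checked at runtime (a small sketch, e.g. in a debug-only code path):

import Foundation

// "Enable Zombie Objects" sets NSZombieEnabled=YES in the environment;
// warn if it is on while you are measuring memory.
if ProcessInfo.processInfo.environment["NSZombieEnabled"] == "YES" {
    print("Warning: zombie objects are enabled; released objects are kept alive.")
}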
It simply measures the memory your app uses. So if it keeps increasing, it most likely indicates a memory leak.
I would use the leak-analysis tools as a guideline. They may help you find leaks, but as with all automated tools they may not find everything: for certain pieces of code (especially the more dynamic pieces), it is hard for an automated tool to predict what they do memory-wise.
I am seeing an issue where memory (heap) grows indefinitely under heavy processing, but when running the exact same binary without Xcode, memory usage is fine. Remember to test outside of Xcode -- I have no idea what the cause is. NSZombies and all other debug options are off.
I have an application which allocates a large number of objects (mostly of 3 classes) and occasionally releases them.
Activity Monitor's Real Memory usage only ever goes up, never down. (Indeed, I have noticed this with other applications.)
Profiling shows my application has no leaks, and garbage collection shows objects being reclaimed.
I have this app written in VB.Net with WinForms that shows some stats and pictures on a big-screen monitor. I also monitor the memory usage of said app by using this:
Process.WorkingSet64
I know Windows does not always report the correct usage, but I just wanted to check that I didn't have any little memory leaks (which I did, but they are solved now). The first week the memory usage was around 100 MB, but the second week it showed around 50 MB.
So why did it all of a sudden drop while still running the exact same code?
I can hardly imagine that the garbage collector kicked in this late, since the app refreshes every 10 seconds and has ample time in between those periods to do its thing.
Or perhaps there is just a better, more reliable way to get the memory usage of a process.
Process.WorkingSet64 doesn't report the full memory usage; it omits the memory that is swapped to disk:
The value returned by this property represents the current size of working set memory used by the process. The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault. (MSDN)
Even if your system was never low on free memory, you may have minimized the application window, which caused Windows to trim its working set.
If you want to look for memory leaks you should probably use Process.PrivateMemorySize64 instead. Your shared memory is going to contain just executable code and it's going to remain more or less constant throughout the life of the process, so you should focus on the private memory.