Unmanaged memory consumption very high after creating a snapshot - why? - dotMemory

I am trying to find ways to reduce the memory footprint of our application. But as soon as I manage to reduce the "managed memory" usage, the "unmanaged memory" usage goes up by more than what I saved on the managed side.
Also, I noticed that the composition of memory usage (managed vs. native) always seems to change when I create a snapshot (see screenshot).
Why is this happening? Is there any significance to it? Does that mean that the process of creating a snapshot itself is changing the composition of memory consumption?
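One hedged way to probe that last question outside the profiler: many memory profilers force a full garbage collection when they collect a snapshot (an assumption here, worth verifying for dotMemory), so logging the managed heap size against the process's private bytes before and after a forced collection shows how much the managed/unmanaged split moves on its own. A minimal C# sketch using only standard .NET APIs (the class and method names are illustrative):

using System;
using System.Diagnostics;

class SnapshotProbe
{
    // Prints the managed heap size next to process-wide private bytes so the
    // managed vs. unmanaged split can be compared before and after a forced GC.
    // The "unmanaged" figure is only a rough approximation (private bytes minus
    // live managed bytes).
    static void Report(string label)
    {
        using var proc = Process.GetCurrentProcess();
        proc.Refresh();
        long managed = GC.GetTotalMemory(forceFullCollection: false);
        long priv = proc.PrivateMemorySize64;
        Console.WriteLine($"{label}: managed={managed / 1048576} MB, " +
                          $"private={priv / 1048576} MB, unmanaged~={(priv - managed) / 1048576} MB");
    }

    static void Main()
    {
        Report("before forced GC");

        // Force a full, blocking collection, roughly what a snapshot may trigger.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        Report("after forced GC");
    }
}

If the split shifts here by a similar amount, the snapshot's own collection (rather than your managed-memory reductions) would be one plausible explanation for what you see.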

Related

.NET Core application running on Fargate with memory issues

We are running a .NET application in Fargate via Terraform, where we specify CPU and memory in the aws_ecs_task_definition resource.
The service has just 1 task, e.g.
resource "aws_ecs_task_definition" "test" {
....
cpu = 256
memory = 512
....
From the documentation this is required for Fargate.
You can also specify cpu and memory in the container_definitions, but the documentation states that the field is optional, and since we were already setting values at the task level we did not set them there.
We observed that memory was growing after the tasks started, sometimes quite quickly and sometimes over a longer period, depending on the application.
So we started thinking we had a memory leak and went to profile the application using the dotnet-monitor tool as a sidecar.
As part of introducing the sidecar, we set cpu and memory values for our .NET application at the container_definitions level.
After we did this, we observed that memory in our applications behaved much better.
From the dotnet-monitor traces we are seeing that when we set memory at the container_definitions level:
Working Set is much smaller
Gen 0/1/2 GC Count is above 1 (GC occurring early)
Gen 0/1/2 GC Size is smaller
GC Committed Bytes is smaller
So to summarize: when we do not set memory at the container_definitions level, memory continues to grow and no GC occurs until we are almost out of memory.
When we set memory at the container_definitions level, GC occurs regularly and memory does not spike.
So we have a solution, but we do not understand why this is the case.
We would like to know why.
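One way to dig into the why is to log what memory limit the .NET GC has actually detected inside the task: since .NET Core 3.0 the runtime reads the container's cgroup memory limit and uses it to size the GC heap, and GC.GetGCMemoryInfo() exposes the figures it is working with. A minimal C# sketch (the program name is illustrative; GC.GetGCMemoryInfo() requires .NET Core 3.0 or later):

using System;

class GcBudgetProbe
{
    static void Main()
    {
        GCMemoryInfo info = GC.GetGCMemoryInfo();

        // TotalAvailableMemoryBytes is based on the container/cgroup memory
        // limit when one is visible to the runtime, otherwise on machine RAM.
        Console.WriteLine($"GC budget:      {info.TotalAvailableMemoryBytes / 1048576} MB");
        Console.WriteLine($"High-load mark: {info.HighMemoryLoadThresholdBytes / 1048576} MB");
        Console.WriteLine($"Heap size:      {info.HeapSizeBytes / 1048576} MB");
        Console.WriteLine($"Memory load:    {info.MemoryLoadBytes / 1048576} MB");
    }
}

If this output differs between the two Terraform configurations, the runtime is seeing a different limit in each case, which would be consistent with the GC collecting earlier and keeping the working set down when memory is set at the container_definitions level.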

Akka Streams application using more memory than the JVM's heap

Summary:
I have a Java application that uses Akka Streams and is using more memory than I have specified for the JVM. The values below are what I have set through JAVA_OPTS.
maximum heap size (-Xmx) = 700MB
metaspace (-XX) = 250MB
stack size (-Xss) = 1025kb
Using those values and plugging them into the formula below, one would assume the application would be using around 950MB. However, that is not the case: it is using over 1.5GB.
Max memory = [-Xmx] + [-XX:MetaspaceSize] + number_of_threads * [-Xss]
Question: Thoughts on how this is possible?
Application overview:
This Java application uses Alpakka to connect to Pub/Sub and consume messages. It utilizes Akka Streams' parallelism to perform logic on the consumed messages, and then produces those messages to a Kafka instance. See the heap dump below. Note, the heap is only 912.9MB, so something is taking up 587.1MB and pushing the memory usage over 1.5GB.
Why is this a problem?
This application is deployed on a Kubernetes cluster, and the pod has a memory limit of 1.5GB. So when the container where the Java application is running consumes more than 1.5GB, the container is killed and restarted.
The short answer is that those do not account for all the memory consumed by the JVM.
Outside of the heap, for instance, memory is allocated for:
compressed class space (governed by the MaxMetaspaceSize)
direct byte buffers (especially if your application performs network I/O and cares about performance, it's virtually certain to make somewhat heavy use of those)
threads (each thread has a stack governed by -Xss ... note that if mixing different concurrency models, each model will tend to allocate its own threads and not necessarily provide a means to share threads)
if native code is involved (e.g. perhaps in the library Alpakka is using to interact with Pub/Sub?), that can allocate arbitrary amounts of memory outside of the heap
the code cache (typically 48MB)
the garbage collector's state (will vary based on the GC in use, including the presence of any tunable options)
various other things that generally aren't going to be that large
In my experience you're generally fairly safe with a heap that's at most (pod memory limit minus 1 GB), but if you're performing exceptionally large I/Os etc. you can pretty easily get OOM even then.
Your JVM may ship with support for native memory tracking (enabled with -XX:NativeMemoryTracking=summary and queried at runtime with jcmd <pid> VM.native_memory summary), which can shed light on at least some of that non-heap consumption. Most of these allocations tend to happen soon after the application is fully loaded, so running with a much higher resource limit and then stopping (e.g. via SIGTERM with enough time to allow it to save results) should give you an idea of what you're dealing with.

IntelliJ uses more memory than allocated

My IntelliJ goes unbearably slow, so I was fiddling with the memory settings. If you select Help -> Change Memory Settings, you can set the max heap size for IntelliJ. But even after restarting and then checking Mac's Activity Monitor, I see it using 5.5GB, even though I set the heap to 4092MB.
It's using 1.5GB more than allocated for heap. That's a lot of memory for permgen + stack, don't you think? Or, could it be that this memory setting actually doesn't have any effect on the program?
What you are seeing is virtual memory; it may also include memory-mapped files and many other things occupied by the JVM internals, plus the native libraries for a dozen Apple frameworks loaded into the process. There is nothing to worry about unless you get OOM or the IDE becomes slow.
If that happens, refer to the KB documents and report the issue to YouTrack with the CPU/memory snapshots.

Activity monitor - Memory usage when profiling / not profiling

Any idea why my app's memory usage does not increase whilst using the Instruments profiler (searching for leaks), but does when I don't use any profiler? It grows to the tune of 1MB per operation performed. Instruments does not show any leaks.
OS memory management is a complex thing. It is likely that when you free memory it is not returned immediately to the system, but instead it is still "attached" to your process to make any future allocations your application needs more efficient. Although it is recorded as part of your process's memory space, it would be marked as unused, and when the system is running out of memory (or when your application exits), it would then reclaim the unused memory from your application.
If Instruments isn't reporting any leaks, you should be fine.

Memory usage drops after a week

I have this app written in VB.Net with WinForms that shows some stats and pictures on a big-screen monitor. I also monitor the memory usage of said app by using this:
Process.WorkingSet64
I know Windows does not always report the correct usage, but I just wanted to check that I didn't have any little memory leaks (which I did have, but they are solved now). The first week the memory usage was around 100MB, and the second week it showed around 50MB.
So why did it all of a sudden drop while still running the exact same code?
I can hardly imagine that the garbage collector kicked in this late, since the app refreshes every 10 seconds and has ample time in between those periods to do its thing.
Or perhaps there is just a better, more reliable way to get memory usage for a process.
Process.WorkingSet64 doesn't report total memory usage; it omits the memory that is swapped to disk:
The value returned by this property represents the current size of working set memory used by the process. The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault. (MSDN)
Even if your system was never low on free memory, you may have minimized the application window, which caused Windows to trim its working set.
If you want to look for memory leaks you should probably use Process.PrivateMemorySize64 instead. Your shared memory is going to contain just executable code and it's going to remain more or less constant throughout the life of the process, so you should focus on the private memory.
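To make the suggestion concrete, here is a minimal C# sketch (the same Process properties are available from VB.Net) that logs both counters; in the real app it could be called from the existing 10-second refresh:

using System;
using System.Diagnostics;

class WorkingSetVsPrivate
{
    static void Main()
    {
        using var proc = Process.GetCurrentProcess();
        proc.Refresh(); // refresh the cached property values

        long workingSet = proc.WorkingSet64;        // pages currently resident in physical RAM
        long privateMem = proc.PrivateMemorySize64; // committed private bytes, including swapped-out pages

        Console.WriteLine($"working set: {workingSet / 1048576} MB, private: {privateMem / 1048576} MB");
    }
}

If PrivateMemorySize64 stays roughly flat while WorkingSet64 drops, the drop is just Windows trimming the working set (e.g. after the window was minimized) rather than the app actually releasing memory.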