I am facing an issue with my application, which runs in a Docker container. Out of the blue it crashes after 30 minutes. Since my application deals with a cache, the first suspect is memory utilization. I googled and found that the jstat -gcutil command can be used to monitor heap space utilization.
I ran the command right after the application started, and it showed Metaspace utilization as 98%, which is quite odd.
So my question is: does this utilization mean that my application is using 98% of the OS memory for the JVM process, or that 98% of the OS memory is available to the JVM?
It shows the current Metaspace occupancy relative to the current Metaspace capacity, i.e.
used / capacity
Metaspace used, capacity, committed and reserved values are illustrated in this answer.
This is a rather useless metric, since Metaspace can grow and shrink during an application's lifecycle.
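To see where that 98% comes from, here is roughly what such a run looks like (the PID and sample interval are placeholders, and the numbers are made up for illustration); the M column is Metaspace used as a percentage of its current capacity, and CCS is the same figure for the compressed class space:

jstat -gcutil <pid> 5000

  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0.00  97.52  28.10  52.34  98.04  95.77     12    0.086     2    0.145    0.231

A high M value right after startup typically just means the JVM has only committed a little Metaspace so far and most of it is in use; it is not a percentage of OS memory.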
Related
We are running a .NET application in Fargate via Terraform, where we specify CPU and memory in the aws_ecs_task_definition resource.
The service has just 1 task, e.g.
resource "aws_ecs_task_definition" "test" {
  ....
  cpu    = 256
  memory = 512
  ....
From the documentation this is required for Fargate.
You can also specify cpu and memory in the container_definitions, but the documentation states that these fields are optional, and as we were already setting values at the task level we did not set them there.
We observed that our memory grew after the tasks started; depending on the application, sometimes quite quickly and other times over a longer period.
So we started thinking we had a memory leak and went to profile the application using the dotnet-monitor tool as a sidecar.
As part of introducing the sidecar, we set cpu and memory values for our .NET application at the container_definitions level.
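For context, a rough sketch of what that change looked like (the family, container name, image and values are illustrative, not our real definition):

resource "aws_ecs_task_definition" "test" {
  family                   = "test"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256   # task-level CPU, required for Fargate
  memory                   = 512   # task-level memory, required for Fargate

  container_definitions = jsonencode([
    {
      name      = "app"                  # illustrative container name
      image     = "example/app:latest"   # illustrative image
      essential = true
      cpu       = 256    # container-level CPU units (optional)
      memory    = 512    # container-level hard memory limit in MiB (optional)
    }
  ])
}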
After we did this, we observed that the memory in our applications behaves much better.
From the .NET monitor traces we see that when we set memory at the container_definitions level:
Working Set is much smaller
Gen 0/1/2 GC count is above 1 (GC occurring early)
Gen 0/1/2 GC size is smaller
GC Committed Bytes is smaller
So to summarize: when we do not set memory at the container_definitions level, memory continues to grow and no GC occurs until we are almost out of memory.
When we set memory at the container_definitions level, GC occurs regularly and memory does not spike.
So we have a solution, but we do not understand why this is the case.
We would like to know why.
I noticed that 0.2 vCore is allotted to the API in Runtime Manager and memory utilization is 63%.
When I check heap utilization in Anypoint Monitoring, it fluctuates between 200 MB and 810 MB, and the maximum heap size shown is 870 MB. This raised some doubts.
Are the Runtime Manager vCore and the Anypoint Monitoring heap size the same? I allotted 1 GB (0.2 vCore), but in the heap graphs the maximum heap size shows between 850-870 MB depending on the time. My question is: why is it not showing the maximum available heap size as 1 GB?
The heap graph does not fall below 200 MB. Will it reach 0 MB at any point, or does some kind of compiled code occupy this 200 MB of space?
I'm a bit confused here, can anybody clarify please?
Thanks.
Are the Runtime Manager vCore and the Anypoint Monitoring heap size the same? If yes, why is only 870 MB of heap available instead of the 1 GB (0.2 vCore) allotted?
A 0.2 vCore worker has a maximum heap size of 1 GB. It may be that the JVM didn't need to grow the heap it actually uses up to the maximum available.
The heap graph does not fall below 200 MB. Will it reach 0 MB at any point, or does some kind of compiled code occupy this 200 MB of space?
Any Java application will have some objects created, if not by the application then by the JVM runtime itself, in order to execute. That means it will have a baseline minimum heap usage. I don't think it is possible for a running JVM to have 0 MB of heap usage.
Your application gets loaded into the JVM, objects are created, and they take up some memory; the 200 MB is your baseline in this case. Then as more events get created, the heap usage increases, and as events expire, garbage collection comes into play and frees up memory.
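As a rough way to see this baseline from inside the JVM itself, here is a minimal sketch (the class name and output format are my own, not part of Anypoint Monitoring) that logs used, committed and maximum heap via the standard Runtime API:

import java.util.concurrent.TimeUnit;

public class HeapStats {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long usedMb = (rt.totalMemory() - rt.freeMemory()) >> 20; // heap occupied by live and garbage objects
            long committedMb = rt.totalMemory() >> 20;                // heap currently committed by the JVM
            long maxMb = rt.maxMemory() >> 20;                        // ceiling the heap may grow to
            System.out.printf("used=%dMB committed=%dMB max=%dMB%n", usedMb, committedMb, maxMb);
            TimeUnit.SECONDS.sleep(5);
        }
    }
}

Even with no application code running, used never reports 0, because the runtime's own objects live on the heap. Note also that Runtime.maxMemory() usually reports a little less than the configured maximum (one survivor space is excluded), which may be part of why a 1 GB worker shows roughly 850-870 MB.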
More details can be found in the following link.
https://help.mulesoft.com/s/article/Java-JVM-memory-allocation-pattern-explained
We have a Tomcat with the following arguments:
-Xms1g
-Xmx4g
Parallel GC
It is installed on an Ubuntu machine with JVM 1.8.181.
Lately GC starts running at full throttle and doesn't let any other process go on. What I don't understand is that this takes place even though the total JVM memory is just 2.8 GB, while the maximum the heap can grow to is 4 GB. Why does a full GC run when memory has not reached the max?
When I dug deeper, I found that there is a sudden change in the used and committed memory, from 1+ GB to ~4 GB. Does that mean that because I set the min heap to 1 GB, it only grows to 1 GB, and as soon as it reaches that point it increases to the next step? And is it because of this that the garbage collection takes place?
If yes, does that mean that in order to avoid this situation I need to increase the min heap?
More info: this is happening when there is almost zero traffic and no background process is going on. I understand it can build up, but without using anything, how can it go up? I need to figure this out myself.
When you set the min heap to 1 GB, it starts with a 1 GB heap, though the process itself could be a few hundred MB to a few GB more than this depending on which libraries you use, i.e. the resident size can be larger.
As the pressure on the heap grows from activity, it may decide it needs to increase the heap size. There has to be a significant load to trigger this, otherwise it won't change the heap.
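If you want to rule out heap resizing and see exactly why each collection runs, one option (a sketch assuming a standard bin/setenv.sh and Java 8 flag names; the log path is a placeholder) is to pin the heap and enable GC logging:

# bin/setenv.sh (path assumed; adjust for your installation)
export CATALINA_OPTS="$CATALINA_OPTS \
  -Xms4g -Xmx4g \
  -XX:+UseParallelGC \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/tomcat/gc.log"

With the minimum equal to the maximum, the committed heap no longer jumps from 1 GB to 4 GB, and the GC log records a cause for each collection (e.g. Allocation Failure, Metadata GC Threshold, System.gc()), which should show why full GCs run at near-zero traffic.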
I am wondering what the JVM behaviour is in the following situation:
JVM minimum heap size = 500MB
JVM maximum heap size = 2GB
OS has 1GB memory
After the JVM starts and the program runs for a period of time, it uses more than 1 GB of memory. I wonder whether an OOM will happen immediately or whether it will try to GC first.
It depends on how much swap space you have.
If you don't have enough free swap, the JVM won't start as it can't allocate enough virtual memory.
If you have enough free swap your program could start and run. However, once a JVM starts swapping its heap, GC times rise dramatically. The GC assumes it can access the heap somewhat randomly.
If your heap can't fit in main memory, the program, and possibly the machine becomes unusable. In my experience, on Windows, a reboot is needed at this point. On Linux, I usually find I can kill the process.
In short, you might be able to start the JVM, but it's a bad idea.
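If you want to check how much swap is actually free before attempting this, on most Linux distributions:

free -h         # the Swap: row shows total, used and free swap
swapon --show   # lists the configured swap devices/files and their sizes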
We migrated a web application from JSF 1.0 to 1.2 and deployed it on WebSphere 8.5. Earlier the application was deployed on WebSphere 6.0. We are facing a performance issue during soak testing. We got some thread-hung messages in the sysout logs; I also observe a lot of blocked threads in the thread dump file, and they get released in time.
Application performance degrades over time. I can see that the performance issue remains the same even when the application has been idle for 1 day.
The main issue is the high CPU usage and high JVM memory even when the application has been idle for 1 day. The application is fast after a restart of the server. Will the GC not clear the JVM memory over 1 day, or why is this CPU so high?
High CPU with low/declining application throughput is typical of Java heap exhaustion, when the JVM spends most of its time running GC trying to clear space in the heap so it can do real work. You should enable verbose GC logging; the GC log will show the heap state and GC activity. If the heap is below 10% tenured/OldGen free (assuming the default gencon collector) after a global/full GC, you are in a heap exhaustion state.
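On WebSphere's IBM JVM this is typically done by adding the flags to the server's generic JVM arguments; a minimal sketch (the log path is a placeholder, and log rotation options are omitted):

-verbose:gc -Xverbosegclog:/opt/logs/verbosegc.log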
You could try increasing the heap size; maybe it just needs more space than is currently provided. If the heap use (used tenured space after a global GC) continues to climb over time while the offered workload is steady/constant, then the app probably has a memory leak. The objects accumulating in the heap can be seen by taking a core/system dump when the server is near heap exhaustion and examining the dump with e.g. Eclipse Memory Analyzer.