Java process size is huge. Any way to reduce it?

I have several processes (JVMs) that are using a huge amount of memory.
I'm starting the JVM with no -Xms value and with 2G or 3G in -Xmx.
The heap is stable and doesn't seem to have any leak or other issues. GC works perfectly...
But... the process size and the VIRT value are huge. Most are 10G-15G and one is even 20G!!!
The Java version is 1.7, running on a VM installed with RHEL 6.5.
I understand that the process size will be higher than the heap size, as there is more than just objects :) but I've never seen such a huge add-on...
Any idea what might cause that? Any idea how we can reduce it?
Thanks a lot!
Ori
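As a minimal sketch of how such a gap between the heap and VIRT can be narrowed down: the <pid> placeholder and every flag value below are only examples, not settings from this setup.

    # List the largest mappings of the Java process. Many ~64 MB anonymous blocks
    # often point at glibc malloc arenas; thread stacks and mapped libraries show
    # up as separate entries.
    pmap -x <pid> | sort -k2 -n | tail -20

    # Illustrative flags that bound the main non-heap areas a Java 7 process can
    # grow: per-thread stacks, PermGen, and the JIT code cache.
    java -Xmx2g -Xss512k -XX:MaxPermSize=256m -XX:ReservedCodeCacheSize=128m MyApp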

Related

Memory allocation fails when using Valgrind

I am trying to debug an embedded application with Valgrind.
Unfortunately, this application behaves differently than when I run it without Valgrind. At one point a driver allocates a data block of about 4 MB. This allocation fails even though there is still about 90 MB of memory available. Could it be that Valgrind fragments the memory so much that no contiguous block of that size is available anymore?
Does anyone have an idea how to remedy this?
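One way to at least see how the heap is actually being used while running under Valgrind is its massif tool; the binary name below is a placeholder, and this only shows allocation behaviour, not fragmentation directly.

    # Profile heap usage of the application under massif, then print the
    # snapshots to see peak usage and the biggest allocation sites.
    valgrind --tool=massif ./my_embedded_app
    ms_print massif.out.<pid>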

JVM - effect of Xms on committed memory and garbage collection

We have a Tomcat with the following arguments:
-Xms1g
-Xmx4g
Parallel GC
It is installed on an Ubuntu machine with JVM 1.8.181.
Lately GC has been running at full throttle and doesn't let any other process go on. What I don't understand is that this takes place even when the total JVM is just 2.8 GB, while the maximum the heap can grow to is 4 GB. Why does a full GC run when memory has not reached the max?
When I dug deeper, I found that there is a sudden change in the used and committed memory, from 1+ GB to ~4 GB. Does that mean that because I had set the min heap to 1 GB, it only grows to 1 GB, and as soon as it reaches that it jumps to the next step? Is that why the garbage collection takes place?
If yes, does that mean that in order to avoid this situation I need to increase the min heap?
More info: this is happening when there is almost zero traffic. No background process is going on. I understand it can build up, but without using anything, how can it go up? - I need to figure this out myself.
When you set the min heap to 1 GB, it starts with a 1 GB heap, though the process itself could be a few hundred MB to a few GB larger than this depending on which libraries you use, i.e. the resident size can be larger.
As the pressure on the heap grows from activity, it may decide it needs to increase the heap size. There has to be a significant load to trigger this, otherwise it won't change the heap size.
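If you want to watch that resizing happen, here is a minimal sketch for the JDK 8 mentioned in the question; the log path is just an example, and for Tomcat these flags would typically go into CATALINA_OPTS.

    # Print heap capacity before/after each collection so the jump in committed
    # memory from ~1 GB toward 4 GB becomes visible in the GC log.
    java -Xms1g -Xmx4g -XX:+UseParallelGC \
         -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
         -Xloggc:/tmp/gc.log MyApp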

JVM behaviour when OS runs out of memory

I am wondering what's the JVM behaviour for the following situation:
JVM minimum heap size = 500MB
JVM maximum heap size = 2GB
OS has 1GB memory
After the JVM has started and the program has run for a period of time, it uses more than 1 GB of memory. I wonder whether OOM will happen immediately or whether it will try to GC first.
It depends on how much swap space you have.
If you don't have enough free swap, the JVM won't start as it can't allocate enough virtual memory.
If you have enough free swap, your program could start and run. However, once a JVM starts swapping its heap, the GC times rise dramatically. The GC assumes it can access the heap somewhat randomly.
If your heap can't fit in main memory, the program, and possibly the machine, becomes unusable. In my experience, on Windows, a reboot is needed at this point. On Linux, I usually find I can kill the process.
In short, you might be able to start the JVM, but it's a bad idea.
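One way to make a shortage show up at startup instead of mid-run, as a rough sketch (the sizes are only illustrative):

    # Commit the whole heap up front: with -Xms equal to -Xmx plus
    # -XX:+AlwaysPreTouch, the JVM touches every heap page at startup, so a lack
    # of backing RAM/swap surfaces immediately rather than later during GC, when
    # it would mean heavy swapping.
    java -Xms2g -Xmx2g -XX:+AlwaysPreTouch MyApp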

Frequent CPU spike on Openfire on Windows 2008

We are running Openfire version 3.9.1 on a Windows 2008 R2 server in a 64-bit JVM.
Very recently, we have started seeing frequent CPU spikes on the server. The threads that are taking up most of the CPU time are blocked on this offset in the JVM:
jvm!JVM_FindSignal+2d7d
We are not seeing any out-of-memory exceptions. Also, the CPU spike is generally seen during non-peak hours. As a first resolution for this issue we recently increased the max heap from 1024 MB to 2048 MB, but that seems to have made the spikes more frequent. The server has a total of 8 GB of memory, of which more than 4 GB is free.
Please see attached screenshot for JVM version.
Any idea what this offset refers to? We are not sure what is stressing the CPU so much, or whether this is an indication of a problem that can get bigger.
Any help is much appreciated.
jvm!JVM_FindSignal is an internal function inside the JVM library that listens for signals from the native operating system and returns them to Java.
The signal can be one of SIGABRT, SIGFPE, SIGSEGV, SIGINT, SIGTERM, SIGBREAK, or SIGILL.
We need to inspect vmstat and iostat information to figure out the actual issue.
You can file an issue at http://bugreport.java.com/ with the vmstat and iostat information and we will get back to you.
You are using JDK 8 update 91. Please upgrade to the latest version, JDK 8 update 112.
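A rough sketch of the kind of data worth collecting while a spike is in progress; the <pid> placeholder is an assumption, and on Windows the vmstat/iostat numbers would instead come from Performance Monitor:

    # Take a few thread dumps several seconds apart during a spike, so the hot
    # Java threads can be matched against the jvm!JVM_FindSignal frames.
    jstack -l <pid> > threaddump_1.txt
    jstack -l <pid> > threaddump_2.txt

    # On a Linux host, the system-side counters mentioned above would be:
    vmstat 1 30 > vmstat.txt
    iostat -x 1 30 > iostat.txt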

Shrinking JVM memory and swap

Virtual Machine:
4 CPUs
10GB RAM
10GB swap
Java 1.7
-Xms=-Xmx=6144m
Tomcat 7
We observed very strange behaviour with the JVM. The JVM resident memory began to shrink and the swap usage shot up to over 50%.
Please see the stats below from our monitoring tools.
http://i44.tinypic.com/206n6sp.jpg
http://i44.tinypic.com/m99hl0.jpg
Any pointers to help understand this would be appreciated.
Thanks!
Or maybe your Java program was idle and didn't need that memory, and you have high swappiness? In that situation your OS would free RAM just in case and keep only the actively used part resident.
In my opinion, that is actually good behaviour: why should you waste RAM on a process that won't use it?
If you run only this one process on the VM, though, it would be a good idea to set swappiness to 0 or another small number - this memory was given to this single process, so we may as well stop it from being swapped out.
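A minimal sketch of how that could be checked and changed (the value 0 is only the example from above; whether to persist it depends on the host):

    # Show the current swappiness setting.
    cat /proc/sys/vm/swappiness

    # Lower it on the running system...
    sysctl -w vm.swappiness=0
    # ...and to persist it across reboots, add the same setting to /etc/sysctl.conf:
    # vm.swappiness = 0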
Thanks for the response. Yes, this is closer to system troubleshooting than to Java, but I thought this was the right forum to raise the topic in case anybody has seen such a phenomenon with the JVM.
Anyway, I had already checked top, and no, there was no process other than Java that was hungry for memory. The second-biggest process was using 72 MB (RSS).
No, swappiness is not set aggressively on this system; it is at the default of 60. One additional piece of information I missed sharing: we have 4 app servers in a cluster and all of them showed this behaviour at exactly the same time. AFAIK the JVM does not swap itself out, but the OS would. All of this is what's confusing me.
All these app servers are in production and busy serving requests, so they are not idle. The used heap size was on average 5 GB of the 6 GB.
The other interesting thing I found was some failed messages in the VMware logs at the same time, which is what I'm investigating.