Frequent CPU spikes in Openfire on Windows 2008 - JVM

We are running Openfire version 3.9.1 on a Windows 2008 R2 server in a 64-bit JVM.
Very recently, we have started seeing frequent CPU spikes on the server. The threads that are taking up most of the CPU time are blocked on this offset in the
JVM -
jvm!JVM_FindSignal+2d7d
We are not seeing any out-of-memory exceptions. Also, the CPU spike is generally seen during off-peak hours. As a first attempt at resolving this issue we recently increased the max heap from 1024 MB to 2048 MB, but that seems to have made the spikes more frequent. The server has a total of 8 GB of memory, of which more than 4 GB is free.
Please see the attached screenshot for the JVM version.
Any idea what this offset refers to? We are not sure what is stressing the CPU so much, or whether this is an indication of a problem that could get bigger.
Any help is much appreciated.

jvm!JVM_FindSignal is an internal function inside the JVM library that listens for signals from the native operating system and returns them to Java.
The signal can be one of SIGABRT, SIGFPE, SIGSEGV, SIGINT, SIGTERM, SIGBREAK, or SIGILL.
We would need vmstat and iostat information to figure out the actual issue.
You can file an issue at http://bugreport.java.com/ with the vmstat and iostat information and we will get back to you.
You are using JDK 8 Update 91. Please upgrade to the latest version, JDK 8 Update 112.
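In the meantime, one low-impact way to see which Java threads the hot native threads correspond to (a sketch; <PID> is a placeholder for the Openfire java.exe process ID) is to capture a thread dump with the JDK's jstack tool while a spike is happening:

jstack -l <PID> > threads.txt

In the dump, each thread's nid=0x... field is its OS thread ID in hex; convert the decimal thread ID shown by Process Explorer to hex and search for it in threads.txt to identify the busy thread.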

Related

WebSphere - frequent thread/heap dump generation

Our application in the prod environment is generating frequent heap/thread dumps while running very large reports, eventually resulting in JVM failure. WebSphere is the server, and the heap size is set to 1024/2048 MB (initial/max) across all nodes.
What are some ways to tackle this issue? I could think of the following options. Is there anything else I am missing?
Set the min/max heap size to 2048 MB or even higher?
Enable verbose garbage collection in WebSphere and analyze the optimal heap size?
Thread analysis:
Runnable: 123 (67%)
Blocked: 16 (9%)
Waiting on condition: 43 (23%)
A good place to start investigating the OOM is this IBM KnowledgeCenter topic.
Since it seems you are experiencing an OutOfMemory issue, there are three possibilities to consider:
Your apps consistently need more memory to handle the current load.
Solution: Load test your application with production-like traffic and tune your min/max heap sizes accordingly (a verbose GC example follows below).
You have a memory leak.
Solution: Analyze the heap dumps/core dumps produced using the IBM Support Assistant tools. A PMR to IBM would help.
WebSphere has a memory leak.
Solution: Open a PMR.
Here is a nice read about Java memory management in WAS environments.
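For the verbose GC option raised in the question: on the IBM JDK that WebSphere typically ships with, verbose GC can be enabled via generic JVM arguments along these lines (the log path and name tokens are illustrative):

-verbose:gc -Xverbosegclog:/tmp/gc_%Y%m%d.%H%M%S.%pid.log

The resulting log can then be loaded into a tool such as IBM's Garbage Collection and Memory Visualizer to judge whether the heap is sized correctly for the load.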
Try to capture memory and garbage collection information from the production environment. I am not sure whether GC logging has any performance impact; however, jstat is an extremely lightweight tool and can be used in production without any performance impact. Dump the output of jstat at regular intervals using the following command (here I am setting the interval to 1 hour):
jstat -gc <PID> 3600s
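If you also want timestamps to correlate the samples with the dump times, jstat's -t option adds a column of seconds since JVM start, and the output can be appended to a file (the file name is illustrative):

jstat -gc -t <PID> 3600s >> gc-stats.log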

Java process size is huge. Any way to reduce it?

I have several processes (JVMs) that are using a huge amount of memory.
I'm starting the JVM with no -Xms value and with 2G or 3G in -Xmx.
The heap is stable and doesn't seem to have any leaks or other issues. GC works perfectly...
But... the process size and the VIRT value are huge. Most are 10G-15G and one is even 20G!!!
The Java version is 1.7, running on VMs installed with RHEL 6.5.
I understand that the process size will be higher than the heap size, as there is more than just objects :) but I've never seen such a huge overhead...
Any idea what might cause this? Any idea how we can reduce it?
Thanks a lot!
Ori
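One way to see what the virtual size is made of (a sketch, assuming the RHEL 6.5 setup described; <PID> is a placeholder) is to dump the process memory map and sort by mapping size:

pmap -x <PID> | sort -n -k2 | tail -20

Beyond the heap itself, large anonymous mappings typically come from thread stacks, PermGen, the code cache, direct byte buffers, and glibc malloc arenas; on RHEL 6, capping the arenas with the MALLOC_ARENA_MAX environment variable (e.g. setting it to 4) is a commonly reported way to shrink VIRT.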

Apache server cannot allocate memory for new process

I have an Apache server with 32 GB of RAM. When I start the server and run top to see the resources, it shows the CPU at 95 percent. This is not normal behaviour, and after a few minutes it reports:
apache cannot allocate memory fork unable to fork new process
I don't know how to solve the problem. Any tips?
I had the same problem. To fix it there are two options:
1. Move from micro instances to small; this was the change that solved the problem (micro instances on Amazon tend to have large CPU steal time).
2. Tune the MySQL database server configuration and the Apache configuration to use a lot less memory (see the sketch below).
Here is a tuning guide for a low-memory situation such as this one: http://www.narga.net/optimizing-apachephpmysql-low-memory-server/ (but don't use its suggestion of MyISAM tables - horrible...).
These two options will make the problem occur much less often. I am still looking for a better solution to close the processes that are done and kill the ones that hang.
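For reference, the Apache side of option 2 usually means capping the prefork MPM so it cannot fork more children than memory allows. A sketch with illustrative values for a low-memory box (tune them against your own per-process RSS):

<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       4
    MaxClients           15
    MaxRequestsPerChild 500
</IfModule>

MaxClients is the hard cap on concurrent Apache processes, which is what stops the "unable to fork new process" errors once memory pressure builds.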

Total memory shown in Task Manager less than Hyper-V Manager assigned memory

I am running VMs on 2008 R2 and just tried to add memory to one. So I turned the machine off, increased the memory (static), and started it again. The "Assigned Memory" says "40970 MB", but Windows Task Manager in the VM says "32768" in the total row for physical memory.
Has anyone experienced this before? Can you help me explain why this is happening and how to address it?
Sounds like this could be a limitation of your guest OS. Please verify that your guest OS supports more than 32 GB; 32 GB is the maximum for Server 2008 R2 Standard Edition.
According to this article, Hyper-V assigns a memory buffer, which you can edit under the "Memory Management" page, as described in Step 3.
The reason there is more "Assigned Memory" is that Hyper-V has allocated more RAM to the VM than it is actively using, because the dynamic memory feature is enabled.
The dynamic memory feature lets VMs consume memory dynamically based on the current workload. If an application on a VM is designed to use a fixed amount of memory, it's better to give that VM exactly the amount of memory it needs instead of using dynamic memory, in order to make full use of the installed memory.

Shrinking JVM memory and swap

Virtual Machine:
4CPU
10GB RAM
10GB swap
Java 1.7
-Xms and -Xmx both set to 6144m
Tomcat 7
We observed some very strange behaviour from the JVM: its resident memory began to shrink, and swap usage shot up to over 50%.
Please see below stats from monitoring tools.
http://i44.tinypic.com/206n6sp.jpg
http://i44.tinypic.com/m99hl0.jpg
Any pointers to help understand this would be appreciated.
Thanks!
Or maybe your Java program was idle and didn't need that memory, and you have high swappiness? In that situation your OS frees RAM just in case and keeps only the actively used part resident.
In my opinion, that is actually good behaviour: why should you waste RAM on a process that won't use it?
If you run only this one process on the VM, then it would be quite a good idea to set swappiness to 0 or another small number (see the sketch below); this memory was given to this single process, so we may as well disable swapping it.
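A minimal sketch of checking and lowering swappiness on RHEL 6 (the value 0 follows the suggestion above; pick whatever suits your workload):

cat /proc/sys/vm/swappiness                      # show the current value (default is 60)
sysctl -w vm.swappiness=0                        # apply immediately
echo 'vm.swappiness = 0' >> /etc/sysctl.conf     # persist across reboots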
Thanks for the response. Yes, this is closer to system troubleshooting than to Java, but I thought this was the right forum to raise the topic in case anybody has seen such a phenomenon with the JVM.
Anyway, I had already checked top, and no, there was no process other than Java that was hungry for memory. The second-highest process was using 72 MB (RSS).
No, swappiness is not set aggressively on this system; it is at the default of 60. One additional piece of information I missed sharing: we have 4 app servers in a cluster, and all showed this behaviour at exactly the same time. AFAIK, the JVM does not swap itself out, but the OS would. All of this is what is confusing me.
All these app servers are in production and busy serving requests, so they are not idle. Used heap averaged 5 GB of the 6 GB.
The other interesting thing I found was some failure messages in the VMware logs at the same time, which is what I'm investigating.