ActiveMQ 5.13.3 server is running normally.
But when I executed the activemq list command, an error occurred.
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
Java HotSpot(TM) 64-Bit Server VM warning: INFO:os::commit_memory(0x0000000654cc0000, 3946053632, 0) failed; error='Cannot allocate memory' (errno=12)
My question is a little different from here.
I am wondering why executing the activemq list command reports this error.
The reason is that activemq list starts a new JVM and then connects to the broker. The message is telling you that your box doesn't have enough memory to start this second JVM. Either add swap or add memory to the system.
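The number in the log line makes the cause concrete. A quick sanity check (runnable anywhere with awk) converts the failed os::commit_memory request into GiB, which matches a multi-gigabyte heap setting rather than the footprint of a small status command:

```shell
# 3946053632 is the byte count from the os::commit_memory failure above.
# Converting it shows the second JVM tried to reserve ~3.7 GiB up front.
awk 'BEGIN { printf "%.1f GiB\n", 3946053632 / (1024 ^ 3) }'
```

That reservation comes from the heap options the activemq script passes to every JVM it launches, which is why even a status command can fail with errno=12 (ENOMEM).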
Also, apache-activemq-5.15.9/bin/env contains the Xms setting for the JVM. This is an unfortunate piece of bad "JVM tuning advice" from the past (JDK 1.5 days) that people still blindly apply. I'd remove the Xms setting completely and let the JVM resize its heap as necessary.
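A minimal sketch of that change, assuming the stock env file layout (the variable name and default values below are from a typical 5.15.x distribution; check your own bin/env before editing):

```shell
# bin/env (fragment) -- heap options passed to every JVM the activemq script starts.
# Stock line (forces an initial heap that must be committed even by 'activemq list'):
#   ACTIVEMQ_OPTS_MEMORY="-Xms64M -Xmx1G"
# Dropping -Xms lets each JVM start small and grow its heap only as needed:
ACTIVEMQ_OPTS_MEMORY="-Xmx1G"
```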
Related
I have three ActiveMQ 5.16.3 pods (xlarge) running inside Amazon MQ. I am encountering memory issues: the pods consume a lot of memory even though the traffic is only comparable to a large instance type, yet they still sometimes hit 60-70% heap usage.
To debug the issue I need to take the heap dump from Amazon MQ. Any idea on how to do that?
My application creates new caches on demand, but Apache Ignite always takes seconds to create a new cache when there are hundreds of caches already. I found two stages consuming most of the time when creating a new cache:
stage1: Waiting in exchange queue
stage2: Waiting for full message
Is there any way I can optimize this process?
Apache Ignite: 2.10.0, cluster mode, two nodes, JDBC thin client
JVM: Java HotSpot(TM) 64-Bit Server VM, 1.8.0_60
Cache creation is not a cheap operation, as you correctly highlighted: it is a cluster-wide operation that requires a partition map exchange (PME) and other internal routines. For that reason, consider reusing existing caches if you need the best performance.
You can speed up cache processing and reduce resource usage by grouping caches into a single cache group, but network communication will still be required.
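Since the cluster is accessed through the JDBC thin client, a cache group can be assigned at table-creation time via the WITH clause. A sketch, assuming hypothetical table names and a node on localhost (sqlline ships in the Ignite distribution; TEMPLATE and CACHE_GROUP are standard WITH parameters):

```shell
# Hypothetical tables; both land in one cache group, so they share partition
# maps and internal structures instead of each paying full per-cache overhead.
$IGNITE_HOME/bin/sqlline.sh -u jdbc:ignite:thin://127.0.0.1/ <<'SQL'
CREATE TABLE orders_2024 (id BIGINT PRIMARY KEY, payload VARCHAR)
  WITH "TEMPLATE=PARTITIONED,CACHE_GROUP=dynamic_caches";
CREATE TABLE orders_2025 (id BIGINT PRIMARY KEY, payload VARCHAR)
  WITH "TEMPLATE=PARTITIONED,CACHE_GROUP=dynamic_caches";
SQL
```

Note that grouping reduces per-cache overhead but does not eliminate the PME itself; each creation is still a cluster-wide event.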
The RabbitMQ beam.smp process utilizes most of the memory for no apparent reason.
RabbitMQ version: 3.7
erlang 22
I don't have any special configuration or anything.
I don't use Celery or anything else except RabbitMQ.
I searched for this issue, and everything I found relates to Celery!
What's the problem with this RabbitMQ? It can't run for a couple of days without issues!
CPU utilization can be erratic with a large number of mirrored queues.
Please mention the following RabbitMQ deployment details:
RabbitMQ v3.7
Erlang/OTP v22
Ubuntu 16.04.5 LTS
Linux 4.15.0-32-generic x86_64
Run dstat --cpu on the RabbitMQ nodes to check CPU utilization (user + system).
Additionally, you can make schedulers that currently do not have work to do go to sleep using the +sbwt flag:
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+sbwt none"
The value of none can reduce CPU usage on systems that have a large number of mostly idle connections.
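A sketch of where that flag usually goes, assuming a Debian-style install (the file path varies by platform; the variable name is the one RabbitMQ's startup scripts read):

```shell
# /etc/rabbitmq/rabbitmq-env.conf (fragment) -- restart the node after editing.
# "+sbwt none" stops idle Erlang schedulers from busy-waiting, trading a little
# wake-up latency for noticeably lower CPU on mostly-idle nodes.
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+sbwt none"
```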
Several things can increase CPU usage; you are not providing enough information.
What you should do is:
Check the RabbitMQ logs to see if there are any errors.
Check whether you have publishers that are flooding the server.
Check the number of queues/bindings; maybe you are creating too many.
You can also enable the rabbitmq-top plugin (https://github.com/rabbitmq/rabbitmq-top) to see which process is using all the CPU.
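The checks above can be sketched as commands, plus a small offline example of summarizing rabbitmqctl list_queues output (the broker commands are sketches only; the queue names and counts below are invented for illustration):

```shell
# On the broker host (sketch; adjust log path and node name to your install):
#   tail -n 200 /var/log/rabbitmq/rabbit@$(hostname).log   # look for errors
#   rabbitmqctl list_queues name messages consumers        # queue inventory
#   rabbitmq-plugins enable rabbitmq_top                   # per-process CPU view

# Offline: summarize saved list_queues output to spot runaway queues.
cat > /tmp/queues.txt <<'EOF'
orders 1200 1
payments 35000 0
audit 90000 0
EOF
awk '{ n++; total += $2 } END { printf "%d queues, %d messages\n", n, total }' /tmp/queues.txt
```

Here the summary reads "3 queues, 126200 messages"; a queue with a huge backlog and zero consumers is a common culprit for both memory and CPU pressure.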
We are currently using JAX-RS 2.0 (Jersey) on WebLogic to host RESTful web services. We are observing very high heap utilization in benchmarks, and it keeps increasing over time. Even after the benchmark is over, the allocated heap is not released, even after I hit "Perform GC" in JConsole. When I analyze the heap dump with MAT, I see that ~99% of the heap is consumed by oracle.j2ee.ws.server.jaxrs.dms.monitoring.internal.DmsApplication. I untargeted DMS from the managed server, but the behavior is still the same.
A little bit of analysis of the dominator tree in the heap dump shows that every request is being tracked by the listener: weblogic.jaxrs.monitoring.JaxRsRequestEventListener is mapped to oracle.j2ee.ws.server.jaxrs.dms.monitoring.DmsApplicationEventListener.
Am I understanding this correctly? Does JAX-RS Jersey map to the DMS request event listener internally? How can this be configured so we don't face this memory issue?
I think you need to look at your diagnostic module in WebLogic. Look at watches and notifications.
I am trying to generate a thread dump from the WebLogic console (Server -> -> Monitoring -> Threads -> Dump Thread Stacks).
I am getting this message: "Server must be running before thread stacks can be displayed."
But when I try to generate a thread dump using kill -3 <PID>, it gets generated.
OS: Centos
Weblogic: WebLogic Server Version: 10.3.6.0
Can anyone please help me understand why the thread dump does not get generated from the console and why I am getting the message saying the server must be running?
NOTE: Server is in running state.
Since you are executing the thread dump command from the console, there might be an issue with communication between the AdminServer and the managed server.
The console uses WLST to capture thread dumps, and before generating them it checks the managed server's status. Maybe the Admin Server is unable to get the current state of the managed server, hence the error you're seeing.
The recommended way to take thread dumps is an OS command (kill -3 <PID>) or the JDK tools: jstack for HotSpot and jrcmd for JRockit. Thread dumps taken from the console might not have lock-related information and might get truncated if the dump is too long.
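For reference, the OS-level route can be sketched as below. The pgrep pattern is an assumption about how the server process appears in ps; the final line is a self-contained demo showing that kill -3 is just SIGQUIT, which the JVM intercepts to print a thread dump instead of quitting:

```shell
# Against a running server (hypothetical process-lookup pattern):
#   PID=$(pgrep -f weblogic.Server)
#   kill -3 "$PID"                        # dump goes to the server's stdout log
#   jstack "$PID" > threaddump.txt        # HotSpot JDK
#   jrcmd "$PID" print_threads > td.txt   # JRockit

# kill -3 is SIGQUIT; any process may install a handler for it, as HotSpot does:
sh -c 'trap "echo caught SIGQUIT" QUIT; kill -3 $$'
```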
I guess you were using JDK 7. This is a known bug in WLS 10.3.6.0 when using JDK 7. You can either downgrade to JDK 6 or patch WebLogic.