I am creating a memory monitoring reporting task in Apache NiFi to monitor JVM usage, but I don't know which memory pool is appropriate for monitoring the JVM. Any suggestions would be appreciated.
Memory pools available:
Code Cache
Metaspace
Compressed Class Space
G1 Eden Space
G1 Survivor Space
G1 Old Gen
As far as I know, G1 Eden Space, G1 Survivor Space, and G1 Old Gen are young-generation memory pools, so these three are the ones to use for monitoring Java heap space. Correct me if I am wrong.
You can use the MonitorMemory reporting task to monitor the Java heap.
Details are here:
NIFI : Monitoring processor and nifi Service
Monitor Apache NiFi with Apache NiFi
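To clear up the heap vs. non-heap question: G1 Eden Space, G1 Survivor Space, and G1 Old Gen are all heap pools (Eden and Survivor form the young generation, Old Gen is the old generation), while Code Cache, Metaspace, and Compressed Class Space are non-heap. MonitorMemory watches one pool at a time, and sustained growth in G1 Old Gen is usually the most telling sign of real heap pressure. A minimal sketch, not NiFi-specific, using the standard java.lang.management API to show which pools count toward the heap:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class ListMemoryPools {
    public static void main(String[] args) {
        // Each pool reports whether it belongs to the heap (Eden, Survivor, Old Gen)
        // or to non-heap memory (Code Cache, Metaspace, Compressed Class Space).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-28s %-8s used=%,d max=%,d%n",
                    pool.getName(),
                    pool.getType() == MemoryType.HEAP ? "HEAP" : "NON_HEAP",
                    pool.getUsage().getUsed(),
                    pool.getUsage().getMax()); // max is -1 when undefined
        }
    }
}
```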
Related
I have three ActiveMQ 5.16.3 pods (xlarge) running inside Amazon MQ. I am encountering memory issues where the pods are consuming a lot of memory in general: the traffic is only comparable to a large instance type, yet heap usage still sometimes hits 60-70%.
To debug the issue I need to take a heap dump from Amazon MQ. Any idea how to do that?
We are moving an in-memory implementation of a DB results cache to Redis (AWS ElastiCache). In the performance test, the JVM metrics for the Redisson-based Redis implementation show higher CPU usage (about 30 to 50%).
The blue line is the Redisson-to-Redis distributed cache implementation and the yellow line is the in-memory Caffeine implementation. Is this increase expected and legitimate due to the additional I/O, or is some Redisson configuration tuning needed?
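If the extra CPU is indeed dominated by network round trips and serialization, one option worth evaluating (an assumption on my part, not something the question confirms is needed) is Redisson's near-cache, RLocalCachedMap, which keeps hot entries in-process much like Caffeine did. A minimal sketch against the Redisson 3.x API; the endpoint, map name, and sizing values are placeholders:

```java
import org.redisson.Redisson;
import org.redisson.api.LocalCachedMapOptions;
import org.redisson.api.LocalCachedMapOptions.EvictionPolicy;
import org.redisson.api.RLocalCachedMap;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class NearCacheSketch {
    public static void main(String[] args) {
        // Placeholder address; replace with the ElastiCache primary endpoint.
        Config config = new Config();
        config.useSingleServer().setAddress("redis://my-elasticache-host:6379");
        RedissonClient redisson = Redisson.create(config);

        // Local (near) cache in front of Redis: reads of hot keys are served
        // in-process, cutting network round trips and deserialization work.
        LocalCachedMapOptions<String, String> options =
                LocalCachedMapOptions.<String, String>defaults()
                        .cacheSize(10_000)
                        .evictionPolicy(EvictionPolicy.LRU)
                        .timeToLive(60_000); // milliseconds

        RLocalCachedMap<String, String> dbResultCache =
                redisson.getLocalCachedMap("dbResults", options);

        dbResultCache.put("query:42", "cached-result");
        System.out.println(dbResultCache.get("query:42"));

        redisson.shutdown();
    }
}
```

Whether this helps depends on the read/write ratio; a near-cache mainly pays off when the same keys are read repeatedly.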
We are migrating a web application from an ad hoc in-memory cache solution to an Apache Ignite cluster, where the JBoss instance that runs the webapp acts as a client node and two external VMs act as Ignite server nodes.
When testing performance with one client node and one server node, everything goes fine. But when testing with one client node and two server nodes in a cluster, the server nodes crash with an OutOfMemoryError.
The JVM of both nodes is started with -server -Xms1024M -Xmx1024M -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60 -XX:MaxGCPauseMillis=1000 -XX:InitiatingHeapOccupancyPercent=50 -XX:+UseCompressedOops -XX:ParallelGCThreads=8 -XX:ConcGCThreads=8 -XX:+DisableExplicitGC
Any idea why a two-node cluster fails when a single-node one works perfectly running the same test?
I don't know if it's relevant, but the test consists of 10 parallel HTTP requests launched against the JBoss server, each of which starts a process that writes several entries into the cache.
Communication between nodes can add some overhead, so apparently 1 GB is not enough for the data and Ignite itself. Generally 1 GB is not enough; I would recommend allocating at least 2 GB, preferably 4 GB, per node.
In the end the problem wasn't the amount of memory required by the two nodes, but the synchronization between them. My test cache was running with PRIMARY_SYNC, but write/read cycles were faster than replication across the cluster, which ended up in an inconsistent read that provoked an infinite loop writing endless values to the cluster.
Changing to FULL_SYNC fixed the problem.
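For reference, the write synchronization mode is set per cache. A minimal sketch of that configuration change (the cache name and the single-node startup are assumptions for illustration; the real deployment would configure client and server nodes separately):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class FullSyncCacheExample {
    public static void main(String[] args) {
        // Start a node (on the JBoss side this would be a client node).
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<String, String> cfg = new CacheConfiguration<>("webappCache");
            // FULL_SYNC: a write completes only after all participating nodes
            // (primary and backups) acknowledge it, so a subsequent read cannot
            // observe a stale value the way it can under PRIMARY_SYNC.
            cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

            IgniteCache<String, String> cache = ignite.getOrCreateCache(cfg);
            cache.put("key", "value");
            System.out.println(cache.get("key"));
        }
    }
}
```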
We are currently using JAX-RS 2.0 (Jersey) on WebLogic for hosting RESTful web services. We are observing very high heap memory utilization in benchmarks, and it keeps increasing over time. Even after the benchmark is over, the allocated heap memory does not get released, even after I trigger "Perform GC" in JConsole. When I analyze the heap dump with MAT, I see that ~99% of the heap is consumed by oracle.j2ee.ws.server.jaxrs.dms.monitoring.internal.DmsApplication. I un-targeted DMS from the managed server but still see the same behavior.
A little analysis of the dominator tree in the heap dump shows that every request is being tracked by the listener: weblogic.jaxrs.monitoring.JaxRsRequestEventListener is mapped to oracle.j2ee.ws.server.jaxrs.dms.monitoring.DmsApplicationEventListener.
Am I understanding this correctly? Does JAX-RS (Jersey) map to the DMS request event listener internally? How can this be configured correctly so we don't face this memory issue?
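For context, what you describe matches Jersey's monitoring SPI: an ApplicationEventListener hands back a RequestEventListener for each request, which is exactly how a listener ends up tracking every call (and retaining memory if it never releases what it records). A minimal sketch of that generic Jersey mechanism, not of WebLogic's DMS classes; the class name is hypothetical:

```java
import org.glassfish.jersey.server.monitoring.ApplicationEvent;
import org.glassfish.jersey.server.monitoring.ApplicationEventListener;
import org.glassfish.jersey.server.monitoring.RequestEvent;
import org.glassfish.jersey.server.monitoring.RequestEventListener;

// Registered as a provider; Jersey calls onRequest() for every incoming request
// and uses the returned RequestEventListener to track that request's lifecycle.
public class TracingListener implements ApplicationEventListener {

    @Override
    public void onEvent(ApplicationEvent event) {
        // Application-level events (INITIALIZATION_FINISHED, DESTROY_FINISHED, ...).
    }

    @Override
    public RequestEventListener onRequest(RequestEvent requestEvent) {
        // Returning a listener here means per-request state is created for every
        // call; if that state is accumulated and never freed, the heap grows the
        // way described above.
        return event -> {
            if (event.getType() == RequestEvent.Type.FINISHED) {
                // Per-request cleanup / metrics would go here.
            }
        };
    }
}
```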
I think you need to look at your diagnostic module in WebLogic. Look at watches and notifications.
I want information on the WSO2 ESB clustering system requirements for production deployment on Linux.
I went through the following link: ESB clustering.
I understand that more than one copy of WSO2 ESB would be extracted and set up on a single server for the worker nodes, and similarly on the other server for the manager (DepSync and admin) and worker nodes.
Can someone suggest what the system requirements of each server would be in this case?
The system prerequisites link suggests:
Memory - 2 GB, 1 GB heap size
Disk - 1 GB
presumably to handle one ESB instance (worker or manager node).
Thanks in advance,
Sai.
At a minimum, the system requirement would be 2 GB for the ESB worker JVM, plus appropriate memory for the OS (assume 2 GB for Linux in this case), which comes to 4 GB per server. Of course, depending on the type of work done and the load, this requirement might increase.
The worker/manager separation is for separation of concerns. Hence, in a typical production deployment, you might have a single manager node (same specs) and two worker nodes, where only the worker nodes handle traffic.