JBoss 7.1.1 Final occupying huge physical RAM on Linux - jboss7.x

I am using JBoss 7.1.1 Final with JDK 1.6.0_45, and when starting JBoss I configured 5 GB for the heap and 1 GB for non-heap memory. My Linux machine has about 60 GB of RAM in total. Some time after starting JBoss, the Linux top command shows about 50 GB of RAM in use. Yet in JConsole and JVisualVM I can see the JBoss heap utilization going up and down, peaking around 90% (i.e. using roughly 4-5 GB):
top - 11:52:35 up 1 day, 17:40, 4 users, load average: 0.89, 1.20, 1.27
Tasks: 174 total, 1 running, 173 sleeping, 0 stopped, 0 zombie
Cpu(s): 12.1%us, 1.8%sy, 0.0%ni, 85.8%id, 0.0%wa, 0.1%hi, 0.2%si, 0.0%st
Mem: 62.948G total, 49.872G used, 13.076G free, 347.309M buffers
Swap: 8197.219M total, 0.000k used, 8197.219M free, 2053.590M cached
And the JBoss JVM parameters are:
-D[Standalone] -server -Xms5120m -Xmx5120m -XX:MaxPermSize=1024m -XX:PermSize=1024m -Djava.net.preferIPv4Stack=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Dorg.jboss.resolver.warning=true -XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+UseParNewGC
Please help me understand why so much more Linux RAM is consumed than the configured heap.
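For reference, here is one way to compare the JVM's own view with what the OS reports (a sketch; <pid> stands for the JBoss java process id and is not from the original post):

# JVM-side view: heap generations and their usage, as the JVM sees them
jstat -gc <pid>
# OS-side view: resident (RSS) and virtual (VSZ) memory, as Linux sees them
ps -o pid,rss,vsz -p <pid>

If RSS is far above -Xmx plus -XX:MaxPermSize, the difference is native memory (thread stacks, direct buffers, JIT code cache), not heap.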
Regards
Veera

Related

Queries on Ignite lead to unresponsive Ignite node

When we run select count(*) from table, the whole Ignite server becomes unresponsive while the query executes. Query execution time is also very high, and it grows as the number of records grows.
Even if a query takes a long time, the whole server should not become unresponsive (we cannot even ssh in), and all other queries time out.
Apache Ignite version: 2.7.5
Ignite persistence is enabled (true)
2 node cluster in partitioned mode
RAM - 150 GB per node
JVM xms and xmx 20G
Number of records - 160 million
JVM options -
/usr/java/jdk1.8.0_144/bin/java -XX:+AggressiveOpts -server -Xms20g -Xmx20g -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/etappdata/ignite/logs/PROD/etail-prod-ignite76-163/logs -XX:+ExitOnOutOfMemoryError -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Xloggc:/etappdata/ignite/logs/PROD/etail-prod-ignite76-163/gc.log -XX:+PrintAdaptiveSizePolicy -XX:+UseTLAB -verbose:gc -XX:+ParallelRefProcEnabled -XX:+UseLargePages -XX:+AggressiveOpts -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses=true -Djava.net.preferIPv6Stack=false -Djava.net.preferIPv6Addresses=false -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8996 -Dcom.sun.management.jmxremote.rmi.port=8996 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=etail-prod-ignite76-163 -XX:MaxDirectMemorySize=4g -javaagent:/tmp/apminsight-javaagent-prod/apminsight-javaagent.jar -Dfile.encoding=UTF-8 -XX:+UseG1GC -DIGNITE_QUIET=false -DIGNITE_SUCCESS_FILE=/ignite/apache-ignite-2.7.5-bin/work/ignite_success_7d9ec20d-9728-475a-aa80-4355eb8eaf02 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=49112 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -DIGNITE_HOME=/ignite/apache-ignite-2.7.5-bin -DIGNITE_PROG_NAME=./bin/ignite.sh -cp /ignite/apache-ignite-2.7.5-bin/libs/:/ignite/apache-ignite-2.7.5-bin/libs/ignite-indexing/:/ignite/apache-ignite-2.7.5-bin/libs/ignite-spring/:/ignite/apache-ignite-2.7.5-bin/libs/licenses/ org.apache.ignite.startup.cmdline.CommandLineStartup config/config-cache.xml
For queries such as SELECT * FROM table (without WHERE) it is recommended to enable lazy mode so that Ignite will not try to store the entire result set on heap at once.
You can enable it via JDBC/ODBC driver properties/connection string or on SqlFieldsQuery.
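For example, with the JDBC thin driver the flag goes into the connection string (a sketch; the host and the bundled sqlline client are assumptions about your setup):

# lazy=true streams the result set instead of materializing it on the server heap
./sqlline.sh --verbose=true -u "jdbc:ignite:thin://127.0.0.1/?lazy=true"

In code, the equivalent is SqlFieldsQuery#setLazy(true).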

GraphDB OutOfMemoryError: Java heap space

I'm using GraphDB Free 8.6.1 in a research project, running it with the default configuration on a Linux server with 4 GB of memory in total.
Currently, we execute quite a lot of CRUD operations on the triplestore.
GraphDB threw an exception in the console:
java.lang.OutOfMemoryError: Java heap space
-XX:OnOutOfMemoryError="kill -9 %p"
Executing /bin/sh -c "kill -9 1411"...
Looking into the process, GraphDB runs with the parameter -XX:MaxDirectMemorySize=128G.
I was not able to change it: even with ./graphdb -Xmx3g, the process still runs with -XX:MaxDirectMemorySize=128G.
I've tried configuring the ./graphdb parameters, setting GDB_HEAP_SIZE=3072m; the process now runs with the additional parameters -Xms3072m -Xmx3072m, but -XX:MaxDirectMemorySize=128G remains.
After the update to GDB_HEAP_SIZE=3072m, the repository went down again: no .hprof file, no exception, nothing suspicious in the logs. Only the following line was flushed to the console:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f5b4b6d0000, 65536, 1) failed; error='Cannot allocate memory' (errno=12)
Please, can you help me properly configure the GraphDB triplestore to get rid of the heap space exceptions?
Thank you.
By default, the JVM's -XX:MaxDirectMemorySize (off-heap memory) parameter is equal to -Xmx (on-heap memory). For very large repositories the off-heap memory may become insufficient, so the GraphDB developers set this parameter to 128 GB, i.e. effectively unlimited.
I suspect your actual issue is allocating too much on-heap memory, which leaves no room in RAM for the off-heap allocations. When the database tries to allocate off-heap RAM, you hit this low-level OS error: 'Cannot allocate memory'.
You have two options for solving this problem:
Increase the server's RAM to 8 GB and keep the same configuration; the 8 GB of RAM would then be distributed as 2 GB (OS) + 3 GB (on heap) + 3 GB (off heap).
Decrease the -Xmx value to 2 GB so the 4 GB of RAM is distributed as 1 GB (OS) + 2 GB (on heap) + 1 GB (off heap); see the sketch below.
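A minimal sketch of the second option, assuming your startup script forwards extra JVM options through an environment variable (newer GraphDB distributions read GDB_JAVA_OPTS; check your graphdb script to confirm the variable name):

# Assumption: GDB_JAVA_OPTS is honored by the startup script
export GDB_JAVA_OPTS="-Xms2g -Xmx2g -XX:MaxDirectMemorySize=1g"
./graphdb

Capping -XX:MaxDirectMemorySize explicitly keeps off-heap allocations within the 1 GB budgeted above instead of the 128 GB default.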
To get a good approximation of how much RAM GraphDB needs, please check the hardware sizing page:
http://graphdb.ontotext.com/documentation/8.6/free/requirements.html

Apache RocketMQ broker doesn't start

I tried to start the RocketMQ broker, but I got this error message:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 8589934592 bytes for committing reserved memory.
An error report file with more information is saved as:
/usr/local/soft/rocketMQ/incubator-rocketmq/distribution/target/apache-rocketmq/hs_err_pid6034.log
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000005c0000000, 8589934592, 0) failed; error='Cannot allocate memory' (errno=12)
And the error log file contains the following about memory:
Memory: 4k page, physical 4089840k(551832k free), swap 2621432k(2621432k free)
vm_info: Java HotSpot(TM) 64-Bit Server VM (25.144-b01) for linux-amd64 JRE (1.8.0_144-b01), built on Jul 21 2017 21:57:33 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
How can I get the RocketMQ broker working?
You can reduce the JVM heap size.
Open the distribution/bin/runbroker.sh file of your project and change the following line
JAVA_OPT="${JAVA_OPT} -server -Xms8g -Xmx8g -Xmn4g"
to
JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g -Xmn2g"
Now the broker will only allocate a 4 GB heap. I hope this solves your problem. Now you can try to build and run.
Try modifying the start shell scripts to use a smaller JVM heap size in your dev/test environment.
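Since the crash log above reports roughly 4 GB of physical RAM with only ~0.5 GB free, it is worth checking what the machine can actually commit before choosing heap sizes; for example:

# Physical memory and swap in MB; the -Xms value must be committable at startup
free -m

If even -Xms4g fails to map, drop the sizes further (e.g. -Xms2g -Xmx2g -Xmn1g) until the mmap succeeds.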

GC overhead limit exceeded after adding Google Play Services library

If I add the Google Play Services library to my project, then a "GC overhead limit exceeded" error appears!
This is the eclipse.ini content:
-startup
plugins/org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20140603-1326
-product
org.eclipse.epp.package.java.product
--launcher.defaultAction
openFile
--launcher.XXMaxPermSize
1024M
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
1024m
--launcher.defaultAction
openFile
--launcher.appendVmargs
-vmargs
-Dosgi.requiredJavaVersion=1.6
-Xms512m
-Xmx1024m
I was also facing the same problem. Then I set the values to 2048m for Xms and Xmx. High CPU usage was observed for the initial few seconds, but it worked:
-Xms2048m
-Xmx2048m

java.lang.OutOfMemoryError: requested 16 bytes for CHeapObj-new. Out of swap space?

I got this error while trying to bring the Java search process up (i.e. start a Java process). I am limiting the address space using RLIMIT_AS.
Please help me get past this error.
I have doubts about the VM arguments (see below).
Is there any way to get past this issue without changing the configuration (the VM arguments)?
A fatal error has been detected by the Java Runtime Environment:
java.lang.OutOfMemoryError: requested 16 bytes for CHeapObj-new. Out of swap space?
Internal Error (allocation.inline.hpp:39), pid=16994, tid=1097390400
Error: CHeapObj-new
JRE version: 6.0_21-b06
Java VM: Java HotSpot(TM) 64-Bit Server VM (17.0-b16 mixed mode linux-amd64)
If you would like to submit a bug report, please visit:
http://java.sun.com/webapps/bugreport/crash.jsp
--------------- T H R E A D ---------------
Current thread (0x00000000489a7800): JavaThread "main" [_thread_in_vm, id=17043, stack(0x000000004158d000,0x000000004168e000)]
Stack: [0x000000004158d000,0x000000004168e000], sp=0x00000000416897f0, free space=3f10000000000000018k
VM state: not at safepoint (normal execution)
VM Mutex/Monitor currently owned by a thread: None
Heap
 PSYoungGen total 38208K, used 24989K [0x00002aaae8f80000, 0x00002aaaeba20000, 0x00002aab03a20000)
  eden space 32768K, 76% used [0x00002aaae8f80000,0x00002aaaea7e7518,0x00002aaaeaf80000)
  from space 5440K, 0% used [0x00002aaaeb4d0000,0x00002aaaeb4d0000,0x00002aaaeba20000)
  to space 5440K, 0% used [0x00002aaaeaf80000,0x00002aaaeaf80000,0x00002aaaeb4d0000)
 PSOldGen total 87424K, used 0K [0x00002aaab3a20000, 0x00002aaab8f80000, 0x00002aaae8f80000)
  object space 87424K, 0% used [0x00002aaab3a20000,0x00002aaab3a20000,0x00002aaab8f80000)
 PSPermGen total 21248K, used 10141K [0x00002aaaae620000, 0x00002aaaafae0000, 0x00002aaab3a20000)
  object space 21248K, 47% used [0x00002aaaae620000,0x00002aaaaf007410,0x00002aaaafae0000)
VM Arguments:
jvm_args: -Xms128M -Xmx1280M -D.config=path -D.home=path1 -D .logfile=path2
java_command: com. .base.Server
Launcher Type: SUN_STANDARD
OS: CentOS release 5.5 (Final)
uname: Linux 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64
libc: glibc 2.5 NPTL 2.5
rlimit: STACK 10240k, CORE 1000001k, NPROC 24576, NOFILE 4096, AS 1835008k
load average: 1.87 0.45 0.22
CPU: total 2 (1 cores per cpu, 1 threads per core) family 6 model 46 stepping 6, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt
Memory: 4k page, physical 2959608k(2057540k free), swap 4096532k(4096532k free)
vm_info: Java HotSpot(TM) 64-Bit Server VM (17.0-b16) for linux-amd64 JRE (1.6.0_21-b06), built on Jun 22 2010 01:10:00 by "java_re" with gcc 3.2.2 (SuSE Linux)
time: Tue Mar 22 03:08:27 2011
elapsed time: 5 seconds
What I did was google the Internal Error (allocation.inline.hpp:39) message and found this page: http://forums.oracle.com/forums/thread.jspa?messageID=5203404. It suggested the swap space limit was causing the problem (even though it shouldn't); after removing the limit, the problem went away.
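A hedged sketch of checking and lifting such a limit from the shell before launching the JVM (ulimit -v is the shell's view of RLIMIT_AS; adjust if your launcher sets the limit elsewhere):

# Show the current address-space cap (RLIMIT_AS), reported in KB
ulimit -v
# Lift the cap for this shell session, then start the Java process
ulimit -v unlimited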