Can I mmap to a file in the JVM?

I want to mmap to a file, i.e., without the MAP_ANON flag. So I modified the mmap calls in os_linux.cpp, and the JVM then failed with SIGSEGV.
How can I mmap to a particular file or block device?
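For reference, here is roughly what a file-backed (non-anonymous) mapping looks like in a standalone program. This is a minimal C++ sketch, not the actual os_linux.cpp change, and /tmp/heap.bin is a hypothetical backing file. One common cause of faults with file-backed mappings is a backing file shorter than the mapped length, so the sketch sizes the file with ftruncate before mapping:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const size_t len = 64UL * 1024 * 1024;                    // 64 MB mapping
    int fd = open("/tmp/heap.bin", O_RDWR | O_CREAT, 0600);   // hypothetical backing file
    if (fd < 0) { perror("open"); return 1; }
    // The file must cover the whole mapping: touching pages past EOF
    // raises SIGBUS, which is easy to mistake for a plain crash.
    if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); return 1; }
    void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);                        // no MAP_ANON: file-backed
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    static_cast<char*>(p)[0] = 1;                             // touch the mapping
    munmap(p, len);
    close(fd);
    return 0;
}

Inside HotSpot the picture is more involved (regions are often re-mapped with MAP_FIXED and MAP_NORESERVE), so each converted call site needs a valid file descriptor and a file large enough for that region.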

Related

karate-gatling: how to resolve java heap space OutOfMemoryError?

Currently I'm trying to run our functional tests (about 300 requests) with 10 users in parallel using the gatling-maven-plugin:
mvn clean test-compile gatling:test -Dkarate.env=test
with the following local Maven options in .mvn/jvm.config in the project folder:
-d64 -Xmx4g -Xms1g -XshowSettings:vm -Djava.awt.headless=true
At some point, while processing a big response in parallel, the Gatling process is aborted:
[ERROR] Failed to execute goal io.gatling:gatling-maven-plugin:3.0.2:test (default-cli) on project np.rest-testing: Gatling failed.: Process exited with an error: -1 (Exit value: -1) -> [Help 1]
with the following stack trace:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid25960.hprof ...
Heap dump file created [1611661680 bytes in 18.184 secs]
Uncaught error from thread [GatlingSystem-scheduler-1]: Java heap space, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[GatlingSystem]
java.lang.OutOfMemoryError: Java heap space
at akka.actor.LightArrayRevolverScheduler$$anon$3.nextTick(LightArrayRevolverScheduler.scala:269)
at akka.actor.LightArrayRevolverScheduler$$anon$3.run(LightArrayRevolverScheduler.scala:235)
at java.lang.Thread.run(Thread.java:748)
I have tried to increase the heap space to 10 GB (-Xmx10g) in different ways:
via the environment property MAVEN_OPTS=-Xmx10g
via the local project Maven options in .mvn/jvm.config
via the maven-surefire-plugin configuration, as suggested here
Although 10 GB is allocated for the Maven process, as you can see at the start of the Maven run:
VM settings:
Min. Heap Size: 1.00G
Max. Heap Size: 10.00G
Ergonomics Machine Class: client
Using VM: Java HotSpot(TM) 64-Bit Server VM
the OutOfMemoryError is still thrown during each gatling-plugin execution.
When analyzing the heap dumps, Eclipse Memory Analyzer always indicates the same result:
84 instances of "com.intuit.karate.core.StepResult", loaded by "sun.misc.Launcher$AppClassLoader @ 0xc0000000" occupy 954 286 864 (90,44 %) bytes.
Biggest instances:
com.intuit.karate.core.StepResult @ 0xfb93ced8 - 87 239 976 (8,27 %) bytes...
What can be done to reduce the heap space usage and prevent OutOfMemoryError?
Can someone share some thoughts and experience?
After some investigation I finally noticed that the heap dump always shows 1 GB. That means the increased heap space is not used by the gatling-plugin: the plugin forks its own JVM, so MAVEN_OPTS and .mvn/jvm.config affect only the Maven process itself.
By adding the following JVM argument to the plugin configuration, the problem is solved even with 4 GB:
<jvmArgs>
<jvmArg>-Xmx4g</jvmArg>
</jvmArgs>
So, with the following gatling-plugin configuration, the error doesn't appear anymore:
<plugin>
<groupId>io.gatling</groupId>
<artifactId>gatling-maven-plugin</artifactId>
<version>${gatling.plugin.version}</version>
<configuration>
<simulationsFolder>src/test/java</simulationsFolder>
<includes>
<include>performance.test.workflow.WorkflowSimulation</include>
</includes>
<compilerJvmArgs>
<compilerJvmArg>-Xmx512m</compilerJvmArg>
</compilerJvmArgs>
<jvmArgs>
<jvmArg>-Xmx4g</jvmArg>
</jvmArgs>
</configuration>
</plugin>
You can also try this:
<configuration>
<meminitial>1024m</meminitial>
<maxmem>4096m</maxmem>
</configuration>

GraphDB OutOfMemoryError: Java heap space

I'm using GraphDB Free 8.6.1 in a research project, running it with the default configuration on a Linux server with 4 GB of memory in total.
Currently, we execute quite a lot of CRUD operations on the triplestore.
GraphDB threw an exception in the console:
java.lang.OutOfMemoryError: Java heap space
-XX:OnOutOfMemoryError="kill -9 %p"
Executing /bin/sh -c "kill -9 1411"...
Looking into the process, GraphDB runs with the parameter -XX:MaxDirectMemorySize=128G, which I was not able to change: even with ./graphdb -Xmx3g the process still runs with -XX:MaxDirectMemorySize=128G.
I've tried configuring the ./graphdb parameters by setting GDB_HEAP_SIZE=3072m; the process now runs with the additional -Xms3072m -Xmx3072m parameters, but -XX:MaxDirectMemorySize=128G remains.
After updating to GDB_HEAP_SIZE=3072m, the repository went down again, with no .hprof file, no exception, and nothing suspicious in the logs. Only the following line was flushed to the console:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f5b4b6d0000, 65536, 1) failed; error='Cannot allocate memory' (errno=12)
Can you please help me configure the GraphDB triplestore properly to get rid of the heap space exceptions?
Thank you.
By default, the value of the JVM's -XX:MaxDirectMemorySize (off-heap memory) parameter is equal to -Xmx (on-heap memory). For very large repositories the off-heap memory may become insufficient, so the GraphDB developers set this parameter to 128 GB, i.e. effectively unlimited.
I suspect your actual issue is allocating too much on-heap memory, which leaves no room in RAM for the off-heap allocations. When the database tries to allocate off-heap RAM, you hit this low-level OS error: 'Cannot allocate memory'.
You have two options for solving this problem:
Increase the server's RAM to 8 GB and keep the same configuration; the 8 GB would then be distributed as 2 GB (OS) + 3 GB (on-heap) + 3 GB (off-heap).
Decrease the -Xmx value to 2 GB so the 4 GB of RAM is distributed as 1 GB (OS) + 2 GB (on-heap) + 1 GB (off-heap); see the sketch below.
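A minimal sketch of the second option (GDB_HEAP_SIZE comes from the question; passing extra JVM flags through GDB_JAVA_OPTS is an assumption about the startup script, so verify it against your distribution):

# Cap the heap at 2 GB and set an explicit off-heap limit on the 4 GB server
export GDB_HEAP_SIZE=2048m
export GDB_JAVA_OPTS="-XX:MaxDirectMemorySize=1g"   # assumed env var, check your startup script
./graphdb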
To get a good approximation of how much RAM GraphDB needs, please check the hardware sizing page:
http://graphdb.ontotext.com/documentation/8.6/free/requirements.html

Apache RocketMQ broker doesn't start

I tried to start the RocketMQ broker, but I got the following error message:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 8589934592 bytes for committing reserved memory.
An error report file with more information is saved as:
/usr/local/soft/rocketMQ/incubator-rocketmq/distribution/target/apache-rocketmq/hs_err_pid6034.log
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000005c0000000, 8589934592, 0) failed; error='Cannot allocate memory' (errno=12)
and the error log file contains the following about memory:
Memory: 4k page, physical 4089840k(551832k free), swap 2621432k(2621432k free)
vm_info: Java HotSpot(TM) 64-Bit Server VM (25.144-b01) for linux-amd64 JRE (1.8.0_144-b01), built on Jul 21 2017 21:57:33 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
How can I get the RocketMQ broker working?
You can reduce the JVM heap size. The log shows the JVM failing to map 8589934592 bytes (8 GB) on a machine with only about 4 GB of physical RAM ("physical 4089840k"), so the default 8 GB heap cannot be committed.
Open the distribution/bin/runbroker.sh file of your project and change the following line:
JAVA_OPT="${JAVA_OPT} -server -Xms8g -Xmx8g -Xmn4g"
to:
JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g -Xmn2g"
Now the broker will only allocate a 4 GB heap (-Xmn keeps the young generation at half the heap, mirroring the original ratio). I hope this solves your problem; you can now try to build and run.
Try modifying the startup shell scripts to set a smaller JVM heap size in your dev/test environment.

JVM Crash Problematic Frame: Canonicalizer::do_If

I am consistently facing a JVM crash when enabling hot deploy, using the Java options below on startup:
JAVA_OPTS=-Xmx4096m -XX:MetaspaceSize=512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=crash -XX:ThreadStackSize=512 -XX:+UseConcMarkSweepGC -XX:ParallelGCThreads=5 -XX:NewRatio=2 -XX:+UnlockDiagnosticVMOptions -XX:-UseLoopPredicate -Xdebug -Xrunjdwp:transport=dt_socket,address=$DEBUG_PORT,server=y,suspend=n -Dspringloaded.synchronize=true
JAVA_OPTS=`echo $JAVA_OPTS -Dspringloaded.synchronize=true -javaagent:springloaded-1.2.1.jar -noverify`
Environment: JDK 1.8.0_66, RHEL 6.7
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007faee9a1e27c, pid=27208, tid=140379827795712
#
# JRE version: Java(TM) SE Runtime Environment (8.0_66-b17) (build 1.8.0_66-b17)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.66-b17 mixed mode linux-amd64 )
# Problematic frame:
# V [libjvm.so+0x35027c] Canonicalizer::do_If(If*)+0x1c
#
# Core dump written. Default location: core.27208
#
# An error report file with more information is saved as:
# hs_err_pid27208.log
# [ timer expired, abort... ]
I've noticed both -javaagent and -noverify in the Java options list.
It looks like the springloaded agent generates invalid bytecode while bytecode verification is explicitly turned off. No surprise that this may lead to unpredictable results, including a JVM crash.
This is not a JVM problem but most likely a bug in the springloaded agent. Try removing the -noverify option.
-XX:-TieredCompilation may also work around this particular problem, but don't expect the application to work correctly if the bytecode fails to pass verification. It's better to stay away from buggy agent libraries.
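For background, the problematic frame Canonicalizer::do_If belongs to HotSpot's C1 (client) compiler, and -XX:-TieredCompilation makes the 64-bit server VM compile with C2 only, which is why that flag can sidestep this particular crash. A minimal sketch of the cleaned-up options (keeping the agent from the question but dropping -noverify; the exact set of remaining flags is an assumption):

# Let the verifier reject the agent's invalid bytecode instead of crashing the JIT
JAVA_OPTS="-Xmx4096m -XX:MetaspaceSize=512m -javaagent:springloaded-1.2.1.jar"
# Optional workaround while the agent bug is investigated: skip the C1 compiler
JAVA_OPTS="$JAVA_OPTS -XX:-TieredCompilation"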
4.2.1 Crash in HotSpot Compiler Thread or Compiled Code
If the fatal error log indicates that the crash occurred in a compiler
thread, then it is possible (but not always the case) that you have
encountered a compiler bug. Similarly, if the crash is in compiled
code then it is possible that the compiler has generated incorrect
code.
In the case of the HotSpot Client VM (-client option), the compiler
thread appears in the error log as CompilerThread0. With the HotSpot
Server VM there are multiple compiler threads and these appear in the
error log file as CompilerThread0, CompilerThread1, and AdapterThread.
Below is a fragment of an error log for a compiler bug that was
encountered and fixed during the development of J2SE 5.0. The log file
shows that the HotSpot Server VM is used and the crash occurred in
CompilerThread1. In addition, the log file shows that the Current
CompileTask was the compilation of the java.lang.Thread.setPriority
method.
An unexpected error has been detected by HotSpot Virtual Machine:
:
Java VM: Java HotSpot(TM) Server VM (1.5-internal-debug mixed mode) :
--------------- T H R E A D ---------------
Current thread (0x001e9350): JavaThread "CompilerThread1" daemon
[_thread_in_vm, id=20]
Stack: [0xb2500000,0xb2580000), sp=0xb257e500, free space=505k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code,
C=native code) V [libjvm.so+0xc3b13c] :
Current CompileTask: opto: 11 java.lang.Thread.setPriority(I)V
(53 bytes)
--------------- P R O C E S S ---------------
Java Threads: ( => current thread ) 0x00229930 JavaThread "Low
Memory Detector" daemon [_thread_blocked, id=21]
=>0x001e9350 JavaThread "CompilerThread1" daemon [_thread_in_vm, id=20] :
In this case there are two potential workarounds:
The brute force approach: change the configuration so that the application is run with the -client option to specify the HotSpot
Client VM.
Assume that the bug only occurs during the compilation of the setPriority method and exclude this method from compilation.
The first approach (to use the -client option) might be trivial to
configure in some environments. In others, it might be more difficult
if the configuration is complex or if the command line to configure
the VM is not readily accessible. In general, switching from the
HotSpot Server VM to the HotSpot Client VM also reduces the peak
performance of an application. Depending on the environment, this
might be acceptable until the actual issue is diagnosed and fixed.
The second approach (exclude the method from compilation) requires
creating the file .hotspot_compiler in the working directory of the
application. Below is an example of this file:
exclude java/lang/Thread setPriority
In general the format of this file is exclude CLASS METHOD, where
CLASS is the class (fully qualified with the package name) and METHOD
is the name of the method. Constructor methods are specified as <init>
and static initializers are specified as <clinit>.
Note - The .hotspot_compiler file is an unsupported interface. It is
documented here solely for the purposes of troubleshooting and finding
a temporary workaround.
Once the application is restarted, the compiler will not attempt to
compile any of the methods listed as excluded in the .hotspot_compiler
file. In some cases this can provide temporary relief until the root
cause of the crash is diagnosed and the bug is fixed.
In order to verify that the HotSpot VM correctly located and processed
the .hotspot_compiler file that is shown in the example above, look
for the following log information at runtime. Note that the file name
separator is a dot, not a slash.
Excluding compile: java.lang.Thread::setPriority
Source
I agree with @apangin: in the program you are doing bytecode instrumentation (-javaagent) but specify -noverify. When verification is turned off, you may end up with such crashes.
You should not use -noverify or -Xverify:none during bytecode instrumentation.
For those of you unfamiliar with bytecode verification, it is simply part of the JVM's classloading process that checks the code for certain dangerous and disallowed behavior. You can (but shouldn't) disable this protection on many JVMs by adding -Xverify:none or -noverify to the Java command line. https://blogs.oracle.com/buck/entry/never_disable_bytecode_verification_in

Java 1.5 crash on AIX 5

I have a problem with Java 1.5.0 for AIX. The error happens only when I log in as a specific user on AIX (myuser); when I log in as another user, Java works fine.
The error comes up even when I execute just "java -version" or simply "java" (without the quotes, of course). I've tried executing it with the full path /usr/java5/jre/bin/java, but it still fails.
Java 1.4 was also installed on the system, so the user's $PATH contained /usr/java14/jre/bin. I removed that entry and even uninstalled Java 1.4 so that only Java 5 exists on the system, but the error continues.
If I execute "java -fullversion" it doesn't crash.
This is part of the error (the full output is very long):
JVMJ9VM011W Unable to load j9dmp23: No such file or directory
JVMJ9VM011W Unable to load j9jit23: No such file or directory
JVMJ9VM011W Unable to load j9gc23: No such file or directory
JVMJ9VM011W Unable to load j9vrb23: No such file or directory
Unhandled exception
Type=Illegal instruction vmState=0x00000000
J9Generic_Signal_Number=00000010 Signal_Number=00000004 Error_Value=00000000
Signal_Code=0000001e
Handler1=F0719CC8 Handler2=F0714F5C
.....
Target=2_30_20091103_45935_bHdSMr (AIX 5.3)
CPU=ppc (4 logical CPUs) (0x7d0000000 RAM)
JavaVMInitArgs.nOptions=14:
-Xjcl:jclscar_23
-Dcom.ibm.oti.vm.bootstrap.library.path=/usr/java5/jre/bin
-Dsun.boot.library.path=/usr/java5/jre/bin
-Djava.library.path=/usr/java5/jre/bin:/usr/java5/jre/bin:/usr/java5/jre/bin/classic:/usr/java5/jre/bin:/sqllib/lib:/home/myuser/comm:/home/myuser/sys:/home/myuser/bin:/db2util/db2adm/sqllib/lib64:/usr/java5/jre/bin/j9vm:/usr/lib
-Djava.home=/usr/java5/jre
-Djava.ext.dirs=/usr/java5/jre/lib/ext
-Duser.dir=/home/myuser
_j2se_j9=70912 (extra info: F070EA2C)
-Xdump
vfprintf (extra info: 300017A4)
-Dinvokedviajava
-Djava.class.path=/db2util/db2adm/sqllib/java/db2java.zip:/db2util/db2adm/sqllib/java/db2jcc.jar:/db2util/db2adm/sqllib/java/sqlj.zip:/db2util/db2adm/sqllib/function:/db2util/db2adm/sqllib/java/db2jcc_license_cu.jar:.
vfprintf
_port_library (extra info: F070EE30)
Note: "Enable full CORE dump" in smit is set to FALSE and as a result there will be limited threading information in core file.
Note: dump may be truncated if "ulimit -c" is set too low
Generated system dump: {default OS core name}
(no Thread object associated with thread)
(no Thread object associated with thread)
Unhandled exception in signal handler
ksh: 2179192 IOT/Abort trap(coredump)
I found the error. The problem was a line in the .profile which sets the LIBPATH environment variable:
export LIBPATH=/home/myuser/sys
I deleted that line from the .profile and Java worked. On AIX, LIBPATH overrides the shared-library search path, which explains the JVMJ9VM011W "Unable to load" messages above: the JVM could no longer find its own j9* libraries.
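To confirm a suspect LIBPATH without editing .profile first, you can clear it for the current session only (a minimal check; java -version is just an example command):

echo $LIBPATH    # shows the offending value, e.g. /home/myuser/sys
unset LIBPATH    # clear it for this shell session only
java -version    # should now run without the Illegal instruction trap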