karate-gatling: how to resolve java heap space OutOfMemoryError?

Currently I'm trying to run our functional tests (about 300 requests) with 10 users in parallel using the gatling-maven-plugin:
mvn clean test-compile gatling:test -Dkarate.env=test
with the following local Maven options in .mvn/jvm.config in the project folder:
-d64 -Xmx4g -Xms1g -XshowSettings:vm -Djava.awt.headless=true
At some point, while processing a big response in parallel, the Gatling process is aborted:
[ERROR] Failed to execute goal io.gatling:gatling-maven-plugin:3.0.2:test (default-cli) on project np.rest-testing: Gatling failed.: Process exited with an error: -1 (Exit value: -1) -> [Help 1]
with the following stack trace:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid25960.hprof ...
Heap dump file created [1611661680 bytes in 18.184 secs]
Uncaught error from thread [GatlingSystem-scheduler-1]: Java heap space, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[GatlingSystem]
java.lang.OutOfMemoryError: Java heap space
at akka.actor.LightArrayRevolverScheduler$$anon$3.nextTick(LightArrayRevolverScheduler.scala:269)
at akka.actor.LightArrayRevolverScheduler$$anon$3.run(LightArrayRevolverScheduler.scala:235)
at java.lang.Thread.run(Thread.java:748)
I have tried to increase the heap space to 10 GB (-Xmx10g) in different ways:
via the environment variable MAVEN_OPTS=-Xmx10g
via the local project Maven options in .mvn/jvm.config
via the maven-surefire-plugin configuration as suggested here
10 GB is indeed allocated for the Maven process, as you can see at the start of the Maven run:
VM settings:
Min. Heap Size: 1.00G
Max. Heap Size: 10.00G
Ergonomics Machine Class: client
Using VM: Java HotSpot(TM) 64-Bit Server VM
Nevertheless, the OutOfMemoryError is still thrown during each gatling-plugin execution.
When analyzing each heap dump, Eclipse Memory Analyzer always reports the same result:
84 instances of "com.intuit.karate.core.StepResult", loaded by "sun.misc.Launcher$AppClassLoader @ 0xc0000000" occupy 954 286 864 (90,44 %) bytes.
Biggest instances:
•com.intuit.karate.core.StepResult @ 0xfb93ced8 - 87 239 976 (8,27 %) bytes...
What can be done to reduce the heap space usage and prevent OutOfMemoryError?
Can someone share some thoughts and experience?

After some investigation I finally noticed that the heap dump always shows 1 GB. That means the increased heap space is not used by the gatling-maven-plugin, which apparently runs the simulation in a forked JVM of its own, so MAVEN_OPTS and .mvn/jvm.config only affect the Maven process itself.
By adding the following JVM argument to the plugin configuration, the problem is solved even with 4 GB:
<jvmArgs>
    <jvmArg>-Xmx4g</jvmArg>
</jvmArgs>
So, with the following gatling-plugin configuration the error doesn't appear any more:
<plugin>
    <groupId>io.gatling</groupId>
    <artifactId>gatling-maven-plugin</artifactId>
    <version>${gatling.plugin.version}</version>
    <configuration>
        <simulationsFolder>src/test/java</simulationsFolder>
        <includes>
            <include>performance.test.workflow.WorkflowSimulation</include>
        </includes>
        <compilerJvmArgs>
            <compilerJvmArg>-Xmx512m</compilerJvmArg>
        </compilerJvmArgs>
        <jvmArgs>
            <jvmArg>-Xmx4g</jvmArg>
        </jvmArgs>
    </configuration>
</plugin>

You can try this:
<configuration>
    <meminitial>1024m</meminitial>
    <maxmem>4096m</maxmem>
</configuration>
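For context, these two options match the maven-compiler-plugin's meminitial/maxmem parameters, which only take effect when the compiler is forked. A minimal sketch of where such a block could live, assuming that plugin is the intended target (the <fork> flag is an assumption needed for the memory settings to apply):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <!-- fork the compiler so meminitial/maxmem are honored -->
        <fork>true</fork>
        <meminitial>1024m</meminitial>
        <maxmem>4096m</maxmem>
    </configuration>
</plugin>
Note that this changes the compiler's memory only; the Gatling simulation JVM is still governed by the plugin's <jvmArgs> shown above.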

Related

I am getting an error when redeploying a Mule API in my local Anypoint Studio, i.e. on my local machine

When I deploy a Mule API and, after deployment, make some changes and save them, the API gets redeployed.
During redeployment I get the error below in the console, which stops the deployment.
java.lang.OutOfMemoryError: Metaspace
Dumping heap to java_pid19656.hprof ...
Heap dump file created [197920637 bytes in 0.811 secs]
#
# java.lang.OutOfMemoryError: Metaspace
# -XX:OnOutOfMemoryError="taskkill /F /PID %p"
# Executing "taskkill /F /PID 19656"...
JVM exited unexpectedly.
Automatic JVM Restarts disabled. Shutting down.
<-- Wrapper Stopped
Can anyone help me fix this issue?
Thanks
You are probably hitting a known issue where redeployments cause the metaspace area to become exhausted. It is recommended to use the latest version of Mule and the latest version of every connector used in your applications to mitigate that issue.
Also ensure that MetaspaceSize is half of MaxMetaspaceSize. You can increase MaxMetaspaceSize if you are deploying a lot of classes or applications, but keep the proportion mentioned.
wrapper.java.additional.7=-XX:MetaspaceSize=128m
wrapper.java.additional.8=-XX:MaxMetaspaceSize=256m
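For reference, a sketch of how those lines might sit in the wrapper configuration; the file path and the index numbers (7 and 8) are assumptions and must not collide with existing wrapper.java.additional.<n> entries in your file:
# hypothetical excerpt from $MULE_HOME/conf/wrapper.conf
# keep MetaspaceSize at half of MaxMetaspaceSize, as recommended above
wrapper.java.additional.7=-XX:MetaspaceSize=128m
wrapper.java.additional.8=-XX:MaxMetaspaceSize=256m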

GraphDB OutOfMemoryError: Java heap space

I'm using GraphDB Free 8.6.1 in a research project, running it with the default configuration on a Linux server with 4 GB of memory in total.
Currently, we execute quite a lot of CRUD operations in the triplestore.
GraphDB threw an exception in the console:
java.lang.OutOfMemoryError: Java heap space
-XX:OnOutOfMemoryError="kill -9 %p"
Executing /bin/sh -c "kill -9 1411"...
Looking into the process, GraphDB runs with the parameter -XX:MaxDirectMemorySize=128G.
I was not able to change it; even with ./graphdb -Xmx3g, the process still runs with -XX:MaxDirectMemorySize=128G.
I've tried to configure the ./graphdb parameters by setting GDB_HEAP_SIZE=3072m; the process now runs with the additional -Xms3072m -Xmx3072m parameters, but -XX:MaxDirectMemorySize=128G remains.
After the update to GDB_HEAP_SIZE=3072m, the repository went down again without an .hprof file, no exception, and nothing suspicious in the logs. The following line was flushed to the console:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f5b4b6d0000, 65536, 1) failed; error='Cannot allocate memory' (errno=12)
Please, can you help me properly configure the GraphDB triplestore to get rid of the heap space exceptions?
Thank you.
By default, the value of the JVM's -XX:MaxDirectMemorySize (off-heap memory) parameter is equal to -Xmx (on-heap memory). For very large repositories the off-heap memory may become insufficient, so the GraphDB developers set this parameter to 128 GB, i.e. effectively unlimited.
I suspect your actual issue is allocating too much on-heap memory, which leaves no room in RAM for the off-heap allocations. When the database tries to allocate off-heap RAM, you hit this low-level OS error: 'Cannot allocate memory'.
You have two options for solving this problem:
Increase the RAM of the server to 8 GB and keep the same configuration - this would allow the 8 GB RAM to be distributed as: 2 GB (OS) + 3 GB (on heap) + 3 GB (off heap)
Decrease the -Xmx value to 2 GB so the 4 GB RAM is distributed as: 1 GB (OS) + 2 GB (on heap) + 1 GB (off heap); see the sketch below
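For the second option, a minimal sketch using the GDB_HEAP_SIZE variable mentioned in the question (that the startup script honors this variable is taken from the question itself; adjust the script name for your distribution):
# assumes the GraphDB startup script reads GDB_HEAP_SIZE, as described above
export GDB_HEAP_SIZE=2048m
./graphdb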
To get a good approximation how much RAM GraphDB needs please check the hardware sizing page:
http://graphdb.ontotext.com/documentation/8.6/free/requirements.html

Apache RocketMQ broker doesn't start

I try to start the RocketMQ broker, but I get this error message:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 8589934592 bytes for committing reserved memory.
An error report file with more information is saved as:
/usr/local/soft/rocketMQ/incubator-rocketmq/distribution/target/apache-rocketmq/hs_err_pid6034.log
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000005c0000000, 8589934592, 0) failed; error='Cannot allocate memory' (errno=12)
and the error log file contains the following about memory:
Memory: 4k page, physical 4089840k(551832k free), swap 2621432k(2621432k free)
vm_info: Java HotSpot(TM) 64-Bit Server VM (25.144-b01) for linux-amd64 JRE (1.8.0_144-b01), built on Jul 21 2017 21:57:33 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
How can I get the RocketMQ broker working for me?
You can reduce the JVM heap size.
Open your project's distribution/bin/runbroker.sh file and change the following line
JAVA_OPT="${JAVA_OPT} -server -Xms8g -Xmx8g -Xmn4g"
to
JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g -Xmn2g"
Now the broker will only allocate a 4 GB heap. I hope this solves your problem. Now you can try to build and run.
Try modifying the start shell scripts to use a smaller JVM heap size in your dev/test environment.
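Worth noting: the error log above reports only about 4 GB of physical memory (with roughly 550 MB free), so even -Xms4g may fail to commit. A more conservative sketch for such a dev/test box (these exact values are an assumption, not from the original answers):
# hypothetical runbroker.sh setting for a machine with ~4 GB of RAM
JAVA_OPT="${JAVA_OPT} -server -Xms2g -Xmx2g -Xmn1g"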

How can I give the Intellij compiler more heap space?

When I run Make on an IntelliJ project, I keep getting the following out-of-memory error.
I already increased my heap size in idea.vmoptions:
-Xms128m
-Xmx2048m
-XX:MaxPermSize=1024m
-XX:ReservedCodeCacheSize=64m
-ea
But I still get this error:
Information:The system is out of resources.
Information:Consult the following stack trace for details.
Information:java.lang.OutOfMemoryError: Java heap space
Information: at com.sun.tools.javac.util.Position$LineMapImpl.build(Position.java:139)
Information: at com.sun.tools.javac.util.Position.makeLineMap(Position.java:63)
Information: at com.sun.tools.javac.parser.Scanner.getLineMap(Scanner.java:1105)
Information: at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:512)
Information: at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:550)
Information: at com.sun.tools.javac.main.JavaCompiler.parseFiles(JavaCompiler.java:804)
Information: at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:727)
Information: at com.sun.tools.javac.main.Main.compile(Main.java:353)
Information: at com.sun.tools.javac.main.Main.compile(Main.java:279)
Information: at com.sun.tools.javac.main.Main.compile(Main.java:270)
Information: at com.sun.tools.javac.Main.compile(Main.java:69)
Information: at com.sun.tools.javac.Main.main(Main.java:54)
Information: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Information: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
Information: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
Information: at java.lang.reflect.Method.invoke(Method.java:597)
Information: at com.intellij.rt.compiler.JavacRunner.main(JavacRunner.java:71)
Information:Compilation completed with 1 error and 0 warnings
Information:1 error
Information:0 warnings
Error:Compiler internal error. Process terminated with exit code 3
What am I missing?
Current version:
Settings (Preferences on Mac) | Build, Execution, Deployment | Compiler | Build process heap size
Older versions:
Settings (Preferences on Mac) | Compiler | Java Compiler | Maximum heap size
The compiler runs in a separate JVM by default, so the IDEA heap settings from idea.vmoptions have no effect on the compiler.
Since IntelliJ 2016, the location is File | Settings | Build, Execution, Deployment | Compiler | Build process heap size.
I had a similar problem with an Ant build (started by hand from the IDEA GUI). In my case the right solution was to right-click the Ant task, choose Properties, and set higher values in the "Maximum heap space (Mb):" and "Maximum stack space (Mb):" fields.
To resolve this issue, follow the steps given below:
1) In IntelliJ, go to File > Settings and search for "VM options". In the "VM options for importer" field, enter -Xmx512m.
2) Go to Control Panel, select View by: Large icons, and open Java; a window named Java Control Panel will appear. Go to the Java tab and select View. In the Java Runtime Environment Settings, pass -Xmx1024m as a runtime parameter.
3) If the options given above do not work, change the heap size set in pom.xml.
GWT in IntelliJ 12
FWIW, I was getting a similar error with my GWT application during 'Build | Rebuild Project'.
This was caused by IntelliJ doing a full GWT compile, which I didn't like because it is also a very lengthy process.
I disabled the GWT compile by turning off the module check boxes under 'Project Structure | Facets | GWT'.
Alternatively, there is a 'Compiler maximum heap size' setting in that location as well.
I was facing the "java.lang.OutOfMemoryError: Java heap space" error while building my project with the maven install command.
I was able to get rid of it by changing the Maven runner settings:
Settings | Build, Execution, Deployment | Build Tools | Maven | Runner | VM options, set to -Xmx512m
In my case the error was caused by insufficient memory allocated to the "test" lifecycle of Maven. It was fixed by adding <argLine>-Xms3512m -Xmx3512m</argLine> to:
<pluginManagement>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.16</version>
            <configuration>
                <argLine>-Xms3512m -Xmx3512m</argLine>
            </configuration>
        </plugin>
    </plugins>
</pluginManagement>
Thanks @crazycoder for pointing this out (and also that it is not related to IntelliJ, in this case).
If your tests are forked, they run in a new JVM that doesn't inherit the Maven JVM options. Custom memory options must be provided via the test runner in pom.xml; refer to the Maven documentation for details. It has very little to do with the IDE.
I'd like to share a revelation I had. When you build a project, IntelliJ IDEA runs a Java process that resides in its core (e.g. C:\Program Files\JetBrains\IntelliJ IDEA 2020.3\jbr\bin). The "Build process heap size" setting, as mentioned by many others, changes the heap size of this Java process. However, the main Java process is triggered later by IDEA's Java process and hence has different VM arguments. I noticed that the max heap size of this process is 1/3 of IDEA's Java process, while the min heap is half of the max (1/6). To sum up:
When you set a 9g heap in "Build process heap size", the actual heap size for the compiler is max 3g and min 1.5g. No restart is necessary.
PS: tested on version 2020.3
On Android Studio 4.2 or the newer Arctic Fox version, you will find the option under Appearance & Behavior:
Windows: File > Settings > Appearance & Behavior > System Settings > Memory Settings
Mac: File > Preferences > Appearance & Behavior > System Settings > Memory Settings
https://i.stack.imgur.com/PEqBK.png
There is an idea64.exe starter in IntelliJ IDEA 13.1.5\bin, so you can address more space.

ROOT CAUSE: java.lang.OutOfMemoryError: Java heap space [Using Apache for ColdFusion 7 on localhost]

I am getting the following error message all of a sudden. Five minutes before, everything was working fine.
500
ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
javax.servlet.ServletException: ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
at coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:70)
at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:46)
at jrun.servlet.FilterChain.doFilter(FilterChain.java:94)
at jrun.servlet.FilterChain.service(FilterChain.java:101)
at jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:106)
at jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:286)
at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:543)
at jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:203)
at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:428)
at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)
Please help me correct this.
Have you tried increasing the Java VM maximum memory allocation? e.g.
java -Xmx512m ...
will set the maximum memory allocation to 512m. You may be running with the default memory settings, and those may not be sufficient for your application. See here for an introduction to the available options and what they mean.
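For ColdFusion specifically, the heap is usually configured in the server's jvm.config file rather than on the java command line. A minimal sketch, assuming a default ColdFusion 7 server install (the exact path and the surrounding flags vary by installation):
# hypothetical excerpt from {cf_root}/runtime/bin/jvm.config
# raise -Xmx to give the server more heap, then restart the ColdFusion service
java.args=-server -Xms256m -Xmx512m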