ROOT CAUSE: java.lang.OutOfMemoryError: Java heap space [Using Apache with ColdFusion 7 on localhost]

I am getting the following error message all of a sudden. Five minutes ago everything was working fine.
500
ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
javax.servlet.ServletException: ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
at coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:70)
at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:46)
at jrun.servlet.FilterChain.doFilter(FilterChain.java:94)
at jrun.servlet.FilterChain.service(FilterChain.java:101)
at jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:106)
at jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:286)
at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:543)
at jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:203)
at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:428)
at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)
Please help me correct this.

Have you tried increasing the Java VM maximum memory allocation? For example,
java -Xmx512m ...
will set the maximum heap to 512 MB. You may be running with the default memory settings, and those may not be sufficient for your application. See the JVM documentation for an introduction to the available options and what they mean.
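Since this is ColdFusion 7, the JVM arguments normally come from its jvm.config file rather than a command line you invoke yourself; a minimal sketch, assuming a default server install (the path varies by install type):
# {cf_root}/runtime/bin/jvm.config -- path assumed for a standard server install
# Edit the existing java.args line so it includes a larger -Xmx, for example:
java.args=-server -Xmx512m
# Restart the ColdFusion service for the change to take effect.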

Related

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory

I'm getting a heap limit allocation error and need clarity on how to get past it using the Verdaccio config.
I've checked that the Node option max-old-space-size can fix this, but I want a way to fix it through Verdaccio.
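One common workaround (an assumption on my part, not from this thread) is to pass the Node flag through the NODE_OPTIONS environment variable when launching Verdaccio, since the heap limit is a Node process setting rather than something the Verdaccio config file exposes:
# Raise the V8 old-space limit to 4 GB for this Verdaccio process only
NODE_OPTIONS=--max-old-space-size=4096 verdaccio --config ./config.yaml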

I am getting an error when redeploying a Mule API in Anypoint Studio on my local machine.

When I deploy a Mule API and then make some changes and save them, the API gets redeployed.
During redeployment I get the error below in the console, which stops the deployment.
java.lang.OutOfMemoryError: Metaspace
Dumping heap to java_pid19656.hprof ...
Heap dump file created [197920637 bytes in 0.811 secs]
#
# java.lang.OutOfMemoryError: Metaspace
# -XX:OnOutOfMemoryError="taskkill /F /PID %p"
# Executing "taskkill /F /PID 19656"...
JVM exited unexpectedly.
Automatic JVM Restarts disabled. Shutting down.
<-- Wrapper Stopped
Can anyone help me fix this issue?
Thanks
You are probably hitting a known issue where repeated redeployments cause the metaspace area to get exhausted. It is recommended to use the latest version of Mule and the latest version of every connector used in your applications to mitigate the issue.
Also ensure that MetaspaceSize is half of MaxMetaspaceSize. You can increase MaxMetaspaceSize if you are deploying lots of classes or applications, but keep the proportion mentioned.
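# The lines below go in $MULE_HOME/conf/wrapper.conf for a standalone runtime
# (in Anypoint Studio, add the equivalent -XX flags to the run configuration's
# VM arguments); the .7/.8 indexes are assumed not to clash with existing entries: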
wrapper.java.additional.7=-XX:MetaspaceSize=128m
wrapper.java.additional.8=-XX:MaxMetaspaceSize=256m

karate-gatling: how to resolve java heap space OutOfMemoryError?

Currently I'm trying to run our functional tests (about 300 requests) with 10 users in parallel using the Gatling Maven plugin:
mvn clean test-compile gatling:test -Dkarate.env=test
with the following local Maven options in .mvn/jvm.config in the project folder:
-d64 -Xmx4g -Xms1g -XshowSettings:vm -Djava.awt.headless=true
At some point, while processing a big response in parallel, the Gatling process aborts:
[ERROR] Failed to execute goal io.gatling:gatling-maven-plugin:3.0.2:test (default-cli) on project np.rest-testing: Gatling failed.: Process exited with an error: -1 (Exit value: -1) -> [Help 1]
with the following stack trace:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid25960.hprof ...
Heap dump file created [1611661680 bytes in 18.184 secs]
Uncaught error from thread [GatlingSystem-scheduler-1]: Java heap space, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[GatlingSystem]
java.lang.OutOfMemoryError: Java heap space
at akka.actor.LightArrayRevolverScheduler$$anon$3.nextTick(LightArrayRevolverScheduler.scala:269)
at akka.actor.LightArrayRevolverScheduler$$anon$3.run(LightArrayRevolverScheduler.scala:235)
at java.lang.Thread.run(Thread.java:748)
I have tried to increase the heap space to 10 GB (-Xmx10g) in different ways:
via the environment variable MAVEN_OPTS=-Xmx10g
via the local project Maven options in .mvn/jvm.config
via the maven-surefire-plugin configuration, as suggested here
Although 10 GB is allocated for the Maven process, as you can see at the start of the Maven run:
VM settings:
Min. Heap Size: 1.00G
Max. Heap Size: 10.00G
Ergonomics Machine Class: client
Using VM: Java HotSpot(TM) 64-Bit Server VM
but the OutOfMemoryError is still thrown during each gatling-plugin execution.
When analyzing each heap dump, Eclipse Memory Analyzer always indicates the same result:
84 instances of "com.intuit.karate.core.StepResult", loaded by "sun.misc.Launcher$AppClassLoader @ 0xc0000000" occupy 954 286 864 (90,44 %) bytes.
Biggest instances:
• com.intuit.karate.core.StepResult @ 0xfb93ced8 - 87 239 976 (8,27 %) bytes...
What can be done to reduce the heap space usage and prevent OutOfMemoryError?
Can someone share some thoughts and experience?
After some investigation I finally noticed that the heap dump always shows 1 GB: the increased heap space was never used, because the Gatling Maven plugin forks its own JVM, which does not inherit the Maven process's memory settings.
By adding the following JVM argument to the plugin configuration, the problem is solved even with 4 GB:
<jvmArgs>
  <jvmArg>-Xmx4g</jvmArg>
</jvmArgs>
So, with the following Gatling plugin configuration the error no longer appears:
<plugin>
  <groupId>io.gatling</groupId>
  <artifactId>gatling-maven-plugin</artifactId>
  <version>${gatling.plugin.version}</version>
  <configuration>
    <simulationsFolder>src/test/java</simulationsFolder>
    <includes>
      <include>performance.test.workflow.WorkflowSimulation</include>
    </includes>
    <compilerJvmArgs>
      <compilerJvmArg>-Xmx512m</compilerJvmArg>
    </compilerJvmArgs>
    <jvmArgs>
      <jvmArg>-Xmx4g</jvmArg>
    </jvmArgs>
  </configuration>
</plugin>
You can also try setting the initial and maximum memory in the plugin configuration:
<configuration>
  <meminitial>1024m</meminitial>
  <maxmem>4096m</maxmem>
</configuration>
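These look like maven-compiler-plugin options (an assumption, since the answer does not name a plugin), and per that plugin's documentation they only take effect when the compiler runs in a forked JVM; a sketch:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <!-- meminitial/maxmem are ignored unless the compiler is forked -->
    <fork>true</fork>
    <meminitial>1024m</meminitial>
    <maxmem>4096m</maxmem>
  </configuration>
</plugin>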

GraphDB OutOfMemoryError: Java heap space

I'm using GraphDB Free 8.6.1 in a research project, running it with the default configuration on a Linux server with 4 GB of memory in total.
Currently we execute quite a lot of CRUD operations on the triplestore.
GraphDB threw this exception in the console:
java.lang.OutOfMemoryError: Java heap space
-XX:OnOutOfMemoryError="kill -9 %p"
Executing /bin/sh -c "kill -9 1411"...
Looking into the process, GraphDB runs with the parameter -XX:MaxDirectMemorySize=128G.
I was not able to change it; even with ./graphdb -Xmx3g, the process still runs with -XX:MaxDirectMemorySize=128G.
I've tried configuring the ./graphdb parameters by setting GDB_HEAP_SIZE=3072m; the process now runs with the additional -Xms3072m -Xmx3072m parameters, but -XX:MaxDirectMemorySize=128G remains.
After the update to GDB_HEAP_SIZE=3072m, the repository went down again with no .hprof file, no exception, and nothing suspicious in the logs; only the following warning was flushed to the console:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f5b4b6d0000, 65536, 1) failed; error='Cannot allocate memory' (errno=12)
Please, can you help me properly configure the GraphDB triplestore to get rid of the heap space exceptions?
Thank you.
By default, the value of the JVM's -XX:MaxDirectMemorySize (off-heap memory) parameter is equal to -Xmx (on-heap memory). For very large repositories the off-heap memory may become insufficient, so the GraphDB developers set this parameter to 128 GB, effectively unlimited.
I suspect your actual issue is allocating too much on-heap memory, which leaves no room in RAM for the off-heap allocations. When the database tries to allocate off-heap RAM, you hit the low-level OS error 'Cannot allocate memory'.
You have two options for solving this problem:
Increase the server's RAM to 8 GB and keep the same configuration; this lets the 8 GB be distributed as 2 GB (OS) + 3 GB (on heap) + 3 GB (off heap).
Decrease the -Xmx value to 2 GB so the 4 GB RAM is distributed as 1 GB (OS) + 2 GB (on heap) + 1 GB (off heap); a minimal sketch of this option follows below.
To get a good approximation of how much RAM GraphDB needs, please check the hardware sizing page:
http://graphdb.ontotext.com/documentation/8.6/free/requirements.html
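A minimal sketch of the second option, assuming the stock ./graphdb startup script honors GDB_HEAP_SIZE and a GDB_JAVA_OPTS environment variable for extra JVM flags (the latter is an assumption about the 8.x scripts):
# 4 GB server: 2 GB on-heap, off-heap capped at 1 GB instead of 128 GB
export GDB_HEAP_SIZE=2g
export GDB_JAVA_OPTS="-XX:MaxDirectMemorySize=1g"
./graphdb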

Java - Invalid maximum heap size

When running a project from IntelliJ IDEA, I set the VM options -XX:MaxPermSize=512m -Xmx=256m -Xms=256m and get the following error:
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Invalid maximum heap size: -Xmx=256m
Remove the equals signs after Xmx and Xms: the -Xmx and -Xms options take their value appended directly to the option name, while -XX: options use an equals sign.
Correct VM options: -XX:MaxPermSize=512m -Xmx256m -Xms256m
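For quick reference, the same distinction on a plain command line (Main is a placeholder class name):
# wrong: the '=' makes the heap size unparseable
java -Xmx=256m Main        # fails with: Invalid maximum heap size: -Xmx=256m
# right: the value is appended directly; -XX: options do take '='
java -XX:MaxPermSize=512m -Xmx256m -Xms256m Main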