I'm using the Maven plugin for embedded Glassfish - here's my plugin declaration:
<plugin>
    <artifactId>maven-glassfish-plugin</artifactId>
    <packaging>maven-plugin</packaging>
    <version>1.0-alpha-4</version>
    <configuration>
        <httpPort>8080</httpPort>
    </configuration>
</plugin>
After several clicks through my data-intensive web app, I run out of PermGen space.
java.lang.OutOfMemoryError: PermGen space
I've already configured MAVEN_OPTS to use more memory:
set MAVEN_OPTS=-Xmx1024m
But it looks like the Java process spawned by mvn glassfish:run is only getting about half a gigabyte of memory before it seizes up.
Does the Glassfish plugin have any configuration settings for upping its memory?
Thanks!
Just to clarify: the permanent generation space contains loaded class objects and interned strings, and it is allocated outside of the Java heap.
On recent Sun VMs, the default maximum size is 64m (i.e. -XX:MaxPermSize=64m), which is adequate for most applications (although the problem here is very likely related to frequent undeploy/redeploy). Either way, I would first try -XX:MaxPermSize=128m or -XX:MaxPermSize=256m; 1024m seems really oversized!
After further consultation with some colleagues, it seems I was increasing the wrong memory value in Maven.
To increase PermGen space, I added this to my MAVEN_OPTS:
set MAVEN_OPTS=-Xmx1024m -XX:MaxPermSize=1024m
Related
I have a Java program running on a CentOS box.
My -Xmx and -Xms are set to 4000 MB.
The program works fine.
But when I run free -m, the used memory shows as only 506 MB. As I understand it, the -Xms memory should be reserved for the JVM, so why doesn't the free command show it as used by Java?
I have also run jstat -gccapacity $(pidof java), and there NGCMN and NGCMX are updated and have the same value.
Any support would be helpful.
I'm running my program as java -Xms41000m -Xmx42000m -jar
Even when -Xmx and -Xms are set to the same value, the space reserved for the Java heap is not immediately allocated in RAM.
The operating system typically allocates physical memory lazily, only on the first access to a virtual page. So, as long as the unused part of the Java heap is not touched, it does not really consume memory.
You may use the -XX:+AlwaysPreTouch option to forcibly touch all heap pages on JVM start.
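For example (a minimal sketch; the heap size and jar name are placeholders):
java -Xms4g -Xmx4g -XX:+AlwaysPreTouch -jar myapp.jar
With this flag the whole heap is touched during JVM startup, so free -m should show it as resident immediately, at the cost of a slower start.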
I'm developing an IntelliJ plugin. The plugin needs 2 GB of heap memory (... yes, it really does ;) ). I found out how to increase the memory of the IntelliJ IDEA VM by editing the idea64.exe.vmoptions file like this:
-Xms128m
-Xmx2048m
-XX:MaxPermSize=350m
-XX:ReservedCodeCacheSize=240m
-XX:+UseCodeCacheFlushing
-XX:+UseCompressedOops
If I enable the memory indicator, I can see that it worked.
But if I run/debug my plugin from IntelliJ, the "sandbox" IntelliJ instance has only 1 GB of RAM.
It throws the following warning:
High memory usage (free 101 of 914 MB) while dumping threads
How can I increase the RAM of the sandbox instance?
Setting VM options in the run configuration doesn't work for me.
I found a more proper way to do it. Add the following to your build.gradle:
runIde {
    jvmArgs '-Xmx2G'
}
I am developing a simple distributed in-memory key-value storage service.
In my case, I embed Ignite via a Maven dependency.
The application has a simple controller with get and put APIs.
The get API reads an object from the Ignite cache, and the put API writes an object to the Ignite cache.
I ran a load test and monitored the JVM status using VisualVM, observing each heap area (e.g. Eden, survivor, old) with the Visual GC plugin and the direct buffers with the buffer monitor plugin.
When I started the load test, the Eden area filled gradually and objects moved to the old area, not to direct buffers. When the load test ended, the direct buffers used only 150 KB, but the old area used 512 MB.
[Image: Visual GC view]
[Image: buffer monitor plugin view]
※ The data size should be about 500 MB.
I guess the direct buffers are almost unused. Why? Here is my configuration:
-Djava.awt.headless=true -Dfile.encoding=UTF-8 -server -Xms1g -Xmx1g -XX:MaxDirectMemorySize=6g -XX:+AlwaysPreTouch -XX:NewSize=512m -XX:GCTimeRatio=4 -XX:InitiatingHeapOccupancyPercent=30 -XX:ConcGCThreads=4 -XX:+UseParNewGC -XX:+UseTLAB -XX:+ScavengeBeforeFullGC -XX:MaxNewSize=512m -XX:MaxMetaspaceSize=128m -XX:CompressedClassSpaceSize=32m -XX:SurvivorRatio=6 -XX:+UseConcMarkSweepGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:+CMSScavengeBeforeRemark -XX:ParallelGCThreads=6 -XX:MaxTenuringThreshold=5 -XX:MaxGCPauseMillis=1000 -XX:+DisableExplicitGC -XX:+ExplicitGCInvokesConcurrent -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/data/log/catalina/cc.rnd.subnode1/GC.log -XX:+CMSClassUnloadingEnabled -Dspring.profiles.active=production
Ignite uses sun.misc.Unsafe for storing data in off-heap space. This gives maximum performance and flexibility, but it is not reflected in monitoring tools. Direct buffers are used mostly for communication between nodes. You can observe the off-heap memory consumption by comparing the Java process size with the used heap: with a large amount of data, the process size will be much bigger than the heap.
If you use an Apache Ignite 1.x version, you need to configure off-heap storage explicitly; 2.0+ versions use off-heap by default.
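For illustration, a minimal sketch of an Ignite 1.x cache configured for off-heap storage (Spring XML; the cache name and the 512 MB limit are placeholder values):
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- hypothetical cache name -->
    <property name="name" value="myCache"/>
    <!-- keep entries off the Java heap -->
    <property name="memoryMode" value="OFFHEAP_TIERED"/>
    <!-- cap off-heap usage at 512 MB (0 = unlimited) -->
    <property name="offHeapMaxMemory" value="#{512L * 1024 * 1024}"/>
</bean>
With a configuration like this, the cached data no longer fills the old generation; you would see it only in the overall process size.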
I would like to know the relation between the mapreduce.map.memory.mb and mapred.map.child.java.opts parameters.
Should mapreduce.map.memory.mb be greater than mapred.map.child.java.opts?
mapreduce.map.memory.mb is the upper memory limit that Hadoop allows to be allocated to a mapper, in megabytes. The default is 512.
If this limit is exceeded, Hadoop will kill the mapper with an error like this:
Container[pid=container_1406552545451_0009_01_000002,containerID=container_234132_0001_01_000001]
is running beyond physical memory limits. Current usage: 569.1 MB of
512 MB physical memory used; 970.1 MB of 1.0 GB virtual memory used.
Killing container.
A Hadoop mapper is a Java process, and each Java process has its own maximum heap allocation, configured via mapred.map.child.java.opts (or mapreduce.map.java.opts in Hadoop 2+).
If the mapper process runs out of heap memory, it throws a Java out of memory exception:
Error: java.lang.RuntimeException: java.lang.OutOfMemoryError
Thus, the Hadoop and Java settings are related: the Hadoop setting is more of a resource enforcement/control mechanism, while the Java setting is the actual resource configuration.
The Java heap setting should be smaller than the Hadoop container memory limit, because we need to reserve memory for Java code; the usual recommendation is to reserve about 20% of the container memory for it. So if the settings are correct, Java-based Hadoop tasks should never be killed by Hadoop, and you should never see the "Killing container" error above.
If you experience Java out of memory errors, you have to increase both memory settings.
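For example (a minimal sketch with illustrative values, using the Hadoop 2.x property names), both settings can be raised together in mapred-site.xml or in the job configuration, keeping the heap below the container limit:
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1536m</value>
</property>
Here the mapper heap is capped at 1536 MB, roughly 75% of the 2048 MB container, leaving headroom for non-heap memory.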
The following properties let you specify options to be passed to the JVMs running your tasks. These can be used with -Xmx to control heap available.
Hadoop 0.x, 1.x (deprecated)       Hadoop 2.x
-------------------------------    --------------------------
mapred.child.java.opts
mapred.map.child.java.opts         mapreduce.map.java.opts
mapred.reduce.child.java.opts      mapreduce.reduce.java.opts
Note there is no direct Hadoop 2 equivalent for the first of these; the advice in the source code is to use the other two. mapred.child.java.opts is still supported (but is overridden by the other two more-specific settings if present).
Complementary to these, the following let you limit total memory (possibly virtual) available for your tasks - including heap, stack and class definitions:
Hadoop 0.x, 1.x (deprecated)       Hadoop 2.x
-------------------------------    --------------------------
mapred.job.map.memory.mb           mapreduce.map.memory.mb
mapred.job.reduce.memory.mb        mapreduce.reduce.memory.mb
I suggest setting -Xmx to 75% of the memory.mb values.
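Following that guideline (illustrative values only, Hadoop 2.x names), the reduce side could look like this:
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3072m</value>
</property>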
In a YARN cluster, jobs must not use more memory than the server-side config yarn.scheduler.maximum-allocation-mb or they will be killed.
To check the defaults and precedence of these, see JobConf and MRJobConfig in the Hadoop source code.
Troubleshooting
Remember that your mapred-site.xml may provide defaults for these settings. This can be confusing - e.g. if your job sets mapred.child.java.opts programmatically, this would have no effect if mapred-site.xml sets mapreduce.map.java.opts or mapreduce.reduce.java.opts. You would need to set those properties in your job instead, to override the mapred-site.xml. Check your job's configuration page (search for 'xmx') to see what values have been applied and where they have come from.
ApplicationMaster memory
In a YARN cluster, you can use the following two properties to control the amount of memory available to your ApplicationMaster (to hold details of input splits, status of tasks, etc):
Hadoop 0.x, 1.x                    Hadoop 2.x
-------------------------------    --------------------------
                                   yarn.app.mapreduce.am.command-opts
                                   yarn.app.mapreduce.am.resource.mb
Again, you could set -Xmx (in the former) to 75% of the resource.mb value.
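For example (a sketch with illustrative values, again keeping -Xmx at about 75% of the container size), in mapred-site.xml:
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx1536m</value>
</property>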
Other configurations
There are many other configurations relating to memory limits, some of them deprecated - see the JobConf class. One useful one:
Hadoop 0.x, 1.x (deprecated)       Hadoop 2.x
-------------------------------    --------------------------
mapred.job.reduce.total.mem.bytes  mapreduce.reduce.memory.totalbytes
Set this to a low value (10) to force shuffle to happen on disk in the event that you hit an OutOfMemoryError at MapOutputCopier.shuffleInMemory.
I am currently trying to build my project using Hudson to call Maven, and I keep getting an out of memory error. I have set -Xmx and -Xms in all the environment variables, the Hudson configuration, and the Hudson project configuration. I set -Xmx to 1500 MB, which should be more than enough, as the whole project is less than 1000 MB. The machine used to build the project is a server where the team's Maven repository is stored.
Has anyone come across the same problem? Any idea how it happens?
If you get an OOM during the tests, then you must tell the surefire plugin to fork a new VM for the tests:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.5</version>
    <configuration>
        <forkMode>once</forkMode>
        <argLine>-Xms512m -Xmx512m</argLine>
    </configuration>
</plugin>
Thank you everyone for answering my question. I solved the problem by making a heap dump and analysing it. I made the heap dump by passing the following VM args:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=E:/.
I then used Eclipse Memory Analyzer to open the java_pidxxxxx.hprof file.
I found out that the listener we used to catch the exception could not actually catch it, so the exception effectively stayed in the VM and caused a memory leak!
Thanks again for all the answers
Do make sure you have enough PermGen space (-XX:MaxPermSize). I've run into problems where the memory allocated to the JVM was sufficient, but the OutOfMemoryError was due to the PermGen space being exhausted. That is not too uncommon when dealing with compiling code, particularly if code is being compiled, thrown away, and compiled again. For more information about tuning the garbage collector (and memory), check out these references:
Tuning Garbage Collection with the 5.0 Java™ Virtual Machine
Memory Management in the Java HotSpot™ Virtual Machine
Pages 16-17 of the Memory Management whitepaper outline possible reasons for OutOfMemoryErrors. Another defense is to fork the Maven process and/or the compiler.
Assuming you are using Sun's JDK, then in Hudson / Manage Hudson / Maven Project Configuration / Global MAVEN_OPTS set the following: -Xmx512m -XX:MaxPermSize=256m
Hudson kicks off a separate task to run Maven jobs. You will need to configure the increased memory in the MAVEN_OPTS text field. The field is located in individual job configuration windows.
Edit: following up on your comments, are you by chance forking your compile, running it in a separate execution, or forking your JUnit testing?
Try this in your compiler plugin configuration (assuming you have one):
<maxmem>512m</maxmem>
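For context, a minimal sketch of where that setting lives in the POM (standard maven-compiler-plugin; note that maxmem only takes effect when the compiler is forked):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <!-- run javac in a separate process so maxmem applies -->
        <fork>true</fork>
        <!-- maximum heap for the forked compiler process -->
        <maxmem>512m</maxmem>
    </configuration>
</plugin>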
I got the same problem: when trying to instrument the code, Hudson (or the JVM) threw a Java out of memory error. The solution is to use <maxmem>512m</maxmem> in the configuration of the cobertura-maven-plugin.
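A sketch of the corresponding plugin section (the version number is illustrative; cobertura-maven-plugin exposes a maxmem parameter for the Cobertura processes it spawns):
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>cobertura-maven-plugin</artifactId>
    <!-- illustrative version -->
    <version>2.7</version>
    <configuration>
        <!-- heap available to the Cobertura instrumentation process -->
        <maxmem>512m</maxmem>
    </configuration>
</plugin>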