I am using a Solver in Java with OptaPlanner, but after some time I get an exception saying java.lang.OutOfMemoryError: Java heap space. What does this signify?
The JVM (Java Virtual Machine) limits your program to a fixed amount of heap memory; if you exceed that allocation, you get this "Java heap space" error.
You can increase the heap space like this:
java -Xms<initial heap size> -Xmx<maximum heap size> CLASS_FILE_TO_EXECUTE
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
For example:
java -Xmx2g assigns a maximum of 2 gigabytes of RAM to your app
But you should first check whether you have a memory leak.
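If you are unsure whether the flag actually reached the solver's JVM, a minimal sketch like this (the class name and placement are my own, not part of OptaPlanner) prints the heap limits using the standard Runtime API:

public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() roughly corresponds to the effective -Xmx value
        System.out.println("Max heap (bytes):   " + rt.maxMemory());
        // totalMemory() is the heap currently reserved; freeMemory() is its unused part
        System.out.println("Total heap (bytes): " + rt.totalMemory());
        System.out.println("Free heap (bytes):  " + rt.freeMemory());
    }
}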
By the way, what is meant above: adjust the -Xmx setting in the runExamples.bat or .sh file in optaplanner-distribution-6.0.1.Final\examples\ (by default it is 512m). What I do to work around this problem and get better results: I monitor the progress by checking memory usage in the Windows Task Manager. When it gets close to the maximum, I hit the "Terminate solving early" button. Then I click Save As and store a file in the "Solved" folder. Then I close all OptaPlanner windows and restart it afresh. In the "Quick Open" window I click on the file I just saved and hit Solve again. It lowers the number of soft constraints violated a bit further until I see that memory is close to the limit again. I repeat this a few times until 2-3 runs in a row give no better results. Tada, workaround.
Related
I'm running hypergraphql in a Docker container with this Dockerfile:
FROM adoptopenjdk/openjdk8
RUN curl https://www.hypergraphql.org/resources/hypergraphql-1.0.3-exe.jar --output hypergraphql-1.0.3-exe.jar
EXPOSE 8080
CMD ["java", "-jar", "hypergraphql-1.0.3-exe.jar", "--config", "/config/config.json"]
I think I should adjust the JVM heap size inside my container to prevent the JVM from using all available memory (see https://developers.redhat.com/blog/2017/03/14/java-inside-docker/).
But I have no idea what the default JVM heap size is. How can I find it, and what would be an optimal value for it?
The default for "max heap size" is usually 25% of available RAM.
It used to be based on the host's memory, but this was later fixed to respect container limits (the fix was also backported to Java 8u191: https://merikan.com/2019/04/jvm-in-a-container/#backported-to-java-8).
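If you want to see the exact value the JVM computes inside your container, one way (assuming a shell is available in the image) is to print the final JVM flags and look for MaxHeapSize:

java -XX:+PrintFlagsFinal -version | grep -i maxheapsize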
Usually the easiest way to adjust the default "max heap size" is -XX:MaxRAMPercentage, e.g. -XX:MaxRAMPercentage=60.0 changes the default 25% to 60%; see the example below.
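Applied to the Dockerfile from the question (and assuming the base image's JDK is at least 8u191, per the backport note above), the flag simply goes into the CMD; 60.0 is an illustrative value:

CMD ["java", "-XX:MaxRAMPercentage=60.0", "-jar", "hypergraphql-1.0.3-exe.jar", "--config", "/config/config.json"]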
As apangin said, there's no "optimal heap size" - you'll need to experiment and see what's suitable for your application. You can try aggressively downsizing the "max heap size" to the point where your application is barely usable and then multiply that by a factor of 3-5:
Gil Tene - Really Understanding Garbage Collection (QCon SF 2019) (start at 56:05)
Start with a big heap and shrink it down until it breaks; then triple that size and go home
How to estimate memory consumption?
For the impatient ones – the answer will be to start with the memory equal to approximately 5 x [amount of memory consumed by Live Data] and start the fine-tuning from there.
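One practical way to measure that live-data figure (my suggestion, not from the quoted article) is to force a full GC and read the remaining heap occupancy, e.g. with jmap against the running process; the "Total" line at the bottom is roughly the live-data size:

jmap -histo:live <pid>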
My IntelliJ is unbearably slow, so I was fiddling with the memory settings. If you select Help -> Change Memory Settings, you can set the maximum heap size for IntelliJ. But even after restarting and then checking Mac's Activity Monitor, I see it using 5.5GB even though I set the heap to 4092MB.
It's using 1.5GB more than what was allocated for the heap. That's a lot of memory for PermGen + stacks, don't you think? Or could it be that this memory setting actually has no effect on the program?
What you see is virtual memory; it also includes memory-mapped files and many other things used by JVM internals, plus the native libraries of a dozen Apple frameworks loaded into the process. There is nothing to worry about unless you get an OOM or the IDE becomes slow.
If that happens, refer to the knowledge base documents and report the issue to YouTrack with CPU/memory snapshots.
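If you do want to break down where the non-heap memory goes, the JVM's Native Memory Tracking can help (a generic JVM feature, not an IntelliJ-specific setting): add -XX:NativeMemoryTracking=summary to the IDE's VM options, restart, and then run:

jcmd <pid> VM.native_memory summary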
Usually I set -Xms512m and -Xmx1g so that when the JVM starts it allocates 512MB and gradually grows the heap to 1GB as necessary. But I see these values set to the same value, say 1g, on a dedicated server instance. Is there any advantage to having both set to the same value?
Well, there are a couple of things.
The program starts with the -Xms value, and if that value is small it will force GC to occur more frequently.
Once the program fills the -Xms heap, the JVM requests additional memory from the OS and eventually grows to -Xmx; that takes extra time and hurts performance, so you might as well set the heap to that size at the beginning and avoid the JVM having to request additional memory.
This is answered very nicely here: https://developer.jboss.org/thread/149559?_sscc=t
From Oracle Java SE 8 docs:
https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/sizing.html
By default, the virtual machine grows or shrinks the heap at each collection to try to keep the proportion of free space to live objects at each collection within a specific range. This target range is set as a percentage by the parameters -XX:MinHeapFreeRatio=<minimum> and -XX:MaxHeapFreeRatio=<maximum>, and the total size is bounded below by -Xms<min> and above by -Xmx<max>. Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. However, the virtual machine is then unable to compensate if you make a poor choice.
If the values of -Xms and -Xmx are the same, the JVM will not have to adjust the heap size, which means less work for the JVM and more time for your application. But if the chosen value is a poor choice for -Xms, some of the allocated memory will never be used because the heap will never shrink, and if it is a poor choice for -Xmx you will get an OutOfMemoryError.
AFAIK, one more reason is that expansion of the heap is a stop-the-world event; setting the two to the same value prevents that.
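As a concrete illustration (the service name and size are placeholders), a fixed-size heap is requested by passing the same value for both flags:

java -Xms1g -Xmx1g -jar my-service.jar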
There are some advantages.
If you know the heap is going to grow to the maximum, e.g. in a benchmark, you may as well start with the size you know you need.
You can get better performance by giving the program more memory than it might naturally give itself. YMMV.
In general, I would make -Xms a value I am confident it will use, and then double this for -Xmx as headroom for future use cases or situations we haven't tested for, i.e. a size we don't expect but that it might use.
In short, the maximum is the point at which you would rather the program fail than use any more memory.
The application will suffer frequent GCs with a lower -Xms value.
Asking the OS for more memory each time consumes time.
Above all, if your application is performance-critical, you certainly want to avoid memory pages being swapped out to/from disk, as this causes GC to take more time. To avoid this, memory can be locked. But if -Xms and -Xmx are not the same, memory allocated after the initial allocation will not be locked.
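A related flag worth knowing (my addition, not part of the answer above) is -XX:+AlwaysPreTouch: combined with -Xms equal to -Xmx, it makes the JVM touch every heap page at startup so the whole heap is backed by physical memory before the application starts. Note that it pre-commits pages but does not by itself lock them against swapping:

java -Xms8g -Xmx8g -XX:+AlwaysPreTouch -jar my-service.jar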
Netbeans out of memory exception
I increased the -Xmx value in the NetBeans configuration file, but the IDE is busy acquiring more memory to scan projects. The memory usage increases, and the process is slow and unresponsive.
Sounds like your system is thrashing. The heap size is now so large that there is not enough physical memory on your system to hold it ... and all of the other things you are running.
The end result is that your system has to copy memory pages between physical memory and the disc page file. Too much of that and the system performance will drop dramatically. You will see that the disc activity light is "on" continually. (The behaviour is worst during Java garbage collection, because that entails accessing lots of VM pages in essentially random order.)
If this is your problem then there is no easy solution:
You could reduce the -Xmx a bit (see the netbeans.conf example after this list) ...
You could stop other applications running; e.g. quit your web browser, email client, etc.
You could buy more memory. (This only works up to a point ... if you are using a 32bit system / 32bit OS / 32bit JVM.)
You could switch to a less memory-hungry operating system or distro.
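For the first option, the value NetBeans reads usually lives in its configuration file; the path and sizes below are only illustrative, so adjust them to your installation:

# <netbeans-install-dir>/etc/netbeans.conf; the -J prefix passes the option through to the IDE's JVM
netbeans_default_options="-J-Xms256m -J-Xmx1g"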
I have a Java service that currently runs with a 14GB heap. I am keen to try out the -XX:+UseLargePages option to see how this might affect the performance of the system. I have configured the OS as described by Oracle using appropriate shared memory and page values (these can also be calculated with an online tool).
Once the OS is configured, I can see that it allocates the expected amount of memory as huge-pages. However, starting the VM with the -XX:+UseLargePages option set always results in one of the following errors:
When -Xms / -Xmx is almost equal to the huge page allocation:
Failed to reserve shared memory (errno = 28). // 'No space left on device'
When -Xms / -Xmx is less than the huge page allocation:
Failed to reserve shared memory (errno = 12). // 'Out of memory'
I did try introducing some leeway - so on a 32GB system I allocated 24GB of shared memory and hugepages to use with a JVM configured with a 20GB heap, of which only 14GB is currently utilized. I also verified that the user executing the JVM did have group rights consistent with /proc/sys/vm/hugetlb_shm_group.
Can anyone give me some pointers on where I might be going wrong and what I could try next?
Allocations/utilization:
-Xms / -Xmx - 20GB
Utilized heap - 14GB
/proc/sys/kernel/shmmax - 25769803776 (24GB)
/proc/sys/vm/nr_hugepages - 12288
Environment:
System memory - 32GB
System page size - 2048KB
debian 2.6.26-2-amd64
Sun JVM 1.6.0_20-b02
Solution
Thanks to #jfgagne for providing an answer that led to a solution. In addition to the /proc/sys/kernel/shmall setting (specified in 4KB pages), I had to add entries to /etc/security/limits.conf as described on Thomas' blog. However, as my application is started using jsvc, I also had to duplicate the settings for the root user (note that the limits are specified in KB):
root soft memlock 25165824
root hard memlock 25165824
pellegrino soft memlock 25165824
pellegrino hard memlock 25165824
It's also worth mentioning that settings could be tested quickly by starting the JVM with the -version argument:
java -XX:+UseLargePages -Xmx20g -version
When you use huge pages with Java, it is not only the heap that uses huge pages; there is also the PermGen, so do not forget to allocate space for it. This seems to be why you get a different errno message when you set -Xmx close to the amount of huge pages.
There is also the shmall kernel parameter that needs to be set, which you did not mention; maybe that is what is blocking you. In your case, you should set it to 6291456.
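For reference, that number is simply the 24GB of shared memory expressed in 4KB pages: 25769803776 / 4096 = 6291456. Assuming root access, it can be applied (until the next reboot) with:

sysctl -w kernel.shmall=6291456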
One last thing: when using huge pages, the -Xms parameter is no longer used; Java reserves the entire -Xmx in shared memory backed by huge pages.