Java Virtual Machine and Swap Space

I would appreciate it if any expert here could advise on the JVM and swap space related queries below. Thanks in advance.
1) Am I right that the operating system will use swap space when an OutOfMemoryError occurs in the JVM Java heap, permanent generation, or native heap? Or is swap space only used when the native heap runs out of memory?
2) Am I right that the native heap size is not configurable at the JVM level, because the OS assigns available RAM to the JVM at runtime?
3) How can we enable swap space for the JVM, or is swap space enabled for all processes at the Unix and Windows level by default?
4) I understand that swap space can affect application performance; is it best practice to disable swap space for the JVM? If not, why not?
5) How can we disable swap space or change the swap space size for a particular JVM on both Unix and Windows, or is it only configurable at the OS level, which applies to all processes on the OS?

There are a lot of questions here... Operating systems indeed use swap space to create so-called virtual memory (which is obviously bigger than the RAM you might have). It is usually enabled by default, but you need to check.
You cannot instruct the JVM to use only the physical RAM, AFAIK; that would be a limitation enforced by the OS itself and not the JVM (this should answer 5).
You can disable swap (again for the OS, not the JVM), but that is a bad idea. There are multiple processes running inside the operating system and they each need space to run in (which at some point in time might exceed your actual RAM). Swapping indeed affects performance, but what is worse - some performance penalty (I assume the OS has many tricks to make this better for you) or the death of the application? (This should answer 4.)
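Swap is normally an OS-level setting rather than something you enable per JVM. As a quick illustration (Linux; these are standard commands, nothing JVM-specific), you can check whether swap is enabled and how big it is with:
$ swapon --show   # lists active swap devices/files, if any
$ free -h         # shows total and used RAM and swap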
Regarding (2), there are two parameters that control how much heap you will have: -Xmx, the maximum heap the JVM process will use, and -Xms, the initial heap. Actually, just recently there was a very good talk about this: here.

I think -Xmx and -Xms configure how much heap is available for the Java program that runs inside the virtual machine. The virtual machine itself is a native process that requires additional memory to run the virtual machine itself. The JVM process can therefore consume more memory than that indicated by the -Xmx option.
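As a small illustration of how those flags are passed when launching the JVM (the main class name here is just a placeholder):
$ java -Xms512m -Xmx1g com.example.Main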

Related

Logstash take over 1GB memory even though Xms and Xmx are set to 512MB [duplicate]

For my application, the memory used by the Java process is much more than the heap size.
The system where the containers are running starts to have memory problems because the container is taking much more memory than the heap size.
The heap size is set to 128 MB (-Xmx128m -Xms128m) while the container takes up to 1GB of memory. Under normal conditions, it needs 500MB. If the Docker container has a limit set below that (e.g. mem_limit=400MB), the process gets killed by the out-of-memory killer of the OS.
Could you explain why the Java process is using much more memory than the heap? How do I correctly size the Docker memory limit? Is there a way to reduce the off-heap memory footprint of the Java process?
I gathered some details about the issue using the Native Memory Tracking command in the JVM.
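(Native Memory Tracking has to be enabled when the JVM starts, with a flag roughly like the one below; the jar name is just a placeholder for the application.)
$ java -XX:NativeMemoryTracking=summary -Xms128m -Xmx128m -jar app.jar
$ jcmd <pid> VM.native_memory summary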
From the host system, I get the memory used by the container.
$ docker stats --no-stream 9afcb62a26c8
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
9afcb62a26c8 xx-xxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.0acbb46bb6fe3ae1b1c99aff3a6073bb7b7ecf85 0.93% 461MiB / 9.744GiB 4.62% 286MB / 7.92MB 157MB / 2.66GB 57
From inside the container, I get the memory used by the process.
$ ps -p 71 -o pcpu,rss,size,vsize
%CPU RSS SIZE VSZ
11.2 486040 580860 3814600
$ jcmd 71 VM.native_memory
71:
Native Memory Tracking:
Total: reserved=1631932KB, committed=367400KB
- Java Heap (reserved=131072KB, committed=131072KB)
(mmap: reserved=131072KB, committed=131072KB)
- Class (reserved=1120142KB, committed=79830KB)
(classes #15267)
( instance classes #14230, array classes #1037)
(malloc=1934KB #32977)
(mmap: reserved=1118208KB, committed=77896KB)
( Metadata: )
( reserved=69632KB, committed=68272KB)
( used=66725KB)
( free=1547KB)
( waste=0KB =0.00%)
( Class space:)
( reserved=1048576KB, committed=9624KB)
( used=8939KB)
( free=685KB)
( waste=0KB =0.00%)
- Thread (reserved=24786KB, committed=5294KB)
(thread #56)
(stack: reserved=24500KB, committed=5008KB)
(malloc=198KB #293)
(arena=88KB #110)
- Code (reserved=250635KB, committed=45907KB)
(malloc=2947KB #13459)
(mmap: reserved=247688KB, committed=42960KB)
- GC (reserved=48091KB, committed=48091KB)
(malloc=10439KB #18634)
(mmap: reserved=37652KB, committed=37652KB)
- Compiler (reserved=358KB, committed=358KB)
(malloc=249KB #1450)
(arena=109KB #5)
- Internal (reserved=1165KB, committed=1165KB)
(malloc=1125KB #3363)
(mmap: reserved=40KB, committed=40KB)
- Other (reserved=16696KB, committed=16696KB)
(malloc=16696KB #35)
- Symbol (reserved=15277KB, committed=15277KB)
(malloc=13543KB #180850)
(arena=1734KB #1)
- Native Memory Tracking (reserved=4436KB, committed=4436KB)
(malloc=378KB #5359)
(tracking overhead=4058KB)
- Shared class space (reserved=17144KB, committed=17144KB)
(mmap: reserved=17144KB, committed=17144KB)
- Arena Chunk (reserved=1850KB, committed=1850KB)
(malloc=1850KB)
- Logging (reserved=4KB, committed=4KB)
(malloc=4KB #179)
- Arguments (reserved=19KB, committed=19KB)
(malloc=19KB #512)
- Module (reserved=258KB, committed=258KB)
(malloc=258KB #2356)
$ cat /proc/71/smaps | grep Rss | cut -d: -f2 | tr -d " " | cut -f1 -dk | sort -n | awk '{ sum += $1 } END { print sum }'
491080
The application is a web server using Jetty/Jersey/CDI bundled inside a fat jar of 36 MB.
The following version of OS and Java are used (inside the container). The Docker image is based on openjdk:11-jre-slim.
$ java -version
openjdk version "11" 2018-09-25
OpenJDK Runtime Environment (build 11+28-Debian-1)
OpenJDK 64-Bit Server VM (build 11+28-Debian-1, mixed mode, sharing)
$ uname -a
Linux service1 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 GNU/Linux
https://gist.github.com/prasanthj/48e7063cac88eb396bc9961fb3149b58
Virtual memory used by a Java process extends far beyond just the Java heap. The JVM includes many subsystems: the garbage collector, class loading, JIT compilers, etc., and all these subsystems require a certain amount of RAM to function.
The JVM is not the only consumer of RAM. Native libraries (including the standard Java Class Library) may also allocate native memory, and this won't even be visible to Native Memory Tracking. The Java application itself can also use off-heap memory by means of direct ByteBuffers.
So what takes memory in a Java process?
JVM parts (mostly shown by Native Memory Tracking)
1. Java Heap
The most obvious part. This is where Java objects live. Heap takes up to -Xmx amount of memory.
2. Garbage Collector
GC structures and algorithms require additional memory for heap management. These structures are the Mark Bitmap, Mark Stack (for traversing the object graph), Remembered Sets (for recording inter-region references) and others. Some of them are directly tunable, e.g. -XX:MarkStackSizeMax; others depend on heap layout, e.g. the larger the G1 regions (-XX:G1HeapRegionSize), the smaller the remembered sets.
GC memory overhead varies between GC algorithms. -XX:+UseSerialGC and -XX:+UseShenandoahGC have the smallest overhead. G1 or CMS may easily use around 10% of total heap size.
3. Code Cache
Contains dynamically generated code: JIT-compiled methods, the interpreter and run-time stubs. Its size is limited by -XX:ReservedCodeCacheSize (240M by default). Turn off tiered compilation with -XX:-TieredCompilation to reduce the amount of compiled code and thus the Code Cache usage.
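An illustrative combination (the values are arbitrary, not a recommendation) that caps the code cache and turns off tiered compilation might look like this:
$ java -XX:ReservedCodeCacheSize=64m -XX:-TieredCompilation -jar app.jar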
4. Compiler
JIT compiler itself also requires memory to do its job. This can be reduced again by switching off Tiered Compilation or by reducing the number of compiler threads: -XX:CICompilerCount.
5. Class loading
Class metadata (method bytecodes, symbols, constant pools, annotations, etc.) is stored in an off-heap area called Metaspace. The more classes are loaded, the more Metaspace is used. Total usage can be limited by -XX:MaxMetaspaceSize (unlimited by default) and -XX:CompressedClassSpaceSize (1G by default).
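For example, a sketch with hard caps on both areas (the sizes are arbitrary and should be derived from your own NMT output):
$ java -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m -jar app.jar
Note that exhausting a capped Metaspace results in an OutOfMemoryError instead of further native memory growth.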
6. Symbol tables
Two main hashtables of the JVM: the Symbol table contains names, signatures, identifiers etc. and the String table contains references to interned strings. If Native Memory Tracking indicates significant memory usage by a String table, it probably means the application excessively calls String.intern.
7. Threads
Thread stacks are also responsible for taking RAM. The stack size is controlled by -Xss. The default is 1M per thread, but fortunately things are not so bad. The OS allocates memory pages lazily, i.e. on the first use, so the actual memory usage will be much lower (typically 80-200 KB per thread stack). I wrote a script to estimate how much of RSS belongs to Java thread stacks.
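Two ways to influence this, sketched below assuming Java 8+: a smaller default stack via -Xss, or an explicit per-thread stack size through the Thread constructor that takes a stackSize argument (the JVM treats it as a hint and may ignore it on some platforms):
$ java -Xss256k -jar app.jar

    // Request a smaller stack for one specific thread (stackSize is only a hint)
    Thread worker = new Thread(null, () -> System.out.println("working"), "worker", 256 * 1024);
    worker.start();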
There are other JVM parts that allocate native memory, but they do not usually play a big role in total memory consumption.
Direct buffers
An application may explicitly request off-heap memory by calling ByteBuffer.allocateDirect. The default off-heap limit is equal to -Xmx, but it can be overridden with -XX:MaxDirectMemorySize. Direct ByteBuffers are included in Other section of NMT output (or Internal before JDK 11).
The amount of direct memory in use is visible through JMX, e.g. in JConsole or Java Mission Control.
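As a minimal sketch, this is what such an allocation looks like in application code (the 64 MB size is arbitrary); the total can be capped with e.g. -XX:MaxDirectMemorySize=64m:

    import java.nio.ByteBuffer;

    public class DirectAlloc {
        public static void main(String[] args) {
            // Allocated outside the Java heap and counted against MaxDirectMemorySize
            ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);
            System.out.println("direct capacity: " + buf.capacity());
        }
    }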
Besides direct ByteBuffers there can be MappedByteBuffers - files mapped into the virtual memory of a process. NMT does not track them; however, MappedByteBuffers can also take physical memory. And there is no simple way to limit how much they can take. You can just see the actual usage by looking at the process memory map: pmap -x <pid>
Address Kbytes RSS Dirty Mode Mapping
...
00007f2b3e557000 39592 32956 0 r--s- some-file-17405-Index.db
00007f2b40c01000 39600 33092 0 r--s- some-file-17404-Index.db
^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^
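A mapping like the ones above typically comes from code along these lines (the file name is only an example); the region shows up in pmap but not in NMT:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class MapFile {
        public static void main(String[] args) throws IOException {
            try (FileChannel ch = FileChannel.open(Path.of("some-file-Index.db"), StandardOpenOption.READ)) {
                // Consumes virtual memory (and physical pages once touched), invisible to NMT
                MappedByteBuffer mbb = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                System.out.println("mapped " + mbb.capacity() + " bytes");
            }
        }
    }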
Native libraries
JNI code loaded by System.loadLibrary can allocate as much off-heap memory as it wants with no control from the JVM side. This also concerns the standard Java Class Library. In particular, unclosed Java resources may become a source of native memory leaks. Typical examples are ZipInputStream or DirectoryStream.
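As a small illustration (the archive name is a placeholder), closing such resources promptly, e.g. with try-with-resources, releases the native structures they hold instead of letting them accumulate:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipInputStream;

    public class ZipScan {
        public static void main(String[] args) throws IOException {
            // try-with-resources closes the stream and frees its native inflater state
            try (ZipInputStream zis = new ZipInputStream(Files.newInputStream(Path.of("archive.zip")))) {
                ZipEntry entry;
                while ((entry = zis.getNextEntry()) != null) {
                    System.out.println(entry.getName());
                }
            }
        }
    }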
JVMTI agents, in particular the jdwp debugging agent, can also cause excessive memory consumption.
This answer describes how to profile native memory allocations with async-profiler.
Allocator issues
A process typically requests native memory either directly from the OS (with the mmap system call) or by using malloc, the standard libc allocator. In turn, malloc requests big chunks of memory from the OS using mmap, and then manages these chunks according to its own allocation algorithm. The problem is that this algorithm can lead to fragmentation and excessive virtual memory usage.
jemalloc, an alternative allocator, often appears smarter than regular libc malloc, so switching to jemalloc may result in a smaller footprint for free.
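On Linux this is usually done by preloading the library when the JVM is started; the exact path depends on the distribution and how jemalloc was installed, so the one below is only an example:
$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 java -jar app.jar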
Conclusion
There is no guaranteed way to estimate full memory usage of a Java process, because there are too many factors to consider.
Total memory = Heap + Code Cache + Metaspace + Symbol tables +
Other JVM structures + Thread stacks +
Direct buffers + Mapped files +
Native Libraries + Malloc overhead + ...
It is possible to shrink or limit certain memory areas (like the Code Cache) with JVM flags, but many others are outside the JVM's control entirely.
One possible approach to setting Docker limits would be to watch the actual memory usage in a "normal" state of the process. There are tools and techniques for investigating issues with Java memory consumption: Native Memory Tracking, pmap, jemalloc, async-profiler.
Update
Here is a recording of my presentation Memory Footprint of a Java Process.
In this video, I discuss what may consume memory in a Java process, how to monitor and restrain the size of certain memory areas, and how to profile native memory leaks in a Java application.
https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/:
Why is it when I specify -Xmx=1g my JVM uses up more memory than 1gb
of memory?
Specifying -Xmx=1g is telling the JVM to allocate a 1gb heap. It’s not
telling the JVM to limit its entire memory usage to 1gb. There are
card tables, code caches, and all sorts of other off heap data
structures. The parameter you use to specify total memory usage is
-XX:MaxRAM. Be aware that with -XX:MaxRam=500m your heap will be approximately 250mb.
Java sees the host memory size and is not aware of any container memory limitations. It doesn't create memory pressure, so the GC also doesn't need to release used memory. I hope -XX:MaxRAM will help you reduce the memory footprint. Eventually, you can tweak the GC configuration (-XX:MinHeapFreeRatio, -XX:MaxHeapFreeRatio, ...).
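As an illustration of that approach (the values are arbitrary, and -XX:MaxRAMPercentage needs a reasonably recent JDK, roughly 8u191+ or 10+):
$ java -XX:MaxRAM=500m -jar app.jar
$ java -XX:MaxRAMPercentage=75.0 -jar app.jar   # heap sized relative to the detected (container) memory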
There are many types of memory metrics. Docker seems to be reporting RSS memory size, which can be different from the "committed" memory reported by jcmd (older versions of Docker report RSS+cache as memory usage).
Good discussion and links: Difference between Resident Set Size (RSS) and Java total committed memory (NMT) for a JVM running in Docker container
RSS memory can also be consumed by other utilities in the container - a shell, a process manager, ... We don't know what else is running in the container or how you start processes in the container.
TL;DR
The detailed memory usage is provided by the Native Memory Tracking (NMT) details (mainly code metadata and the garbage collector). In addition to that, the Java compiler and optimizer C1/C2 consume memory that is not reported in the summary.
The memory footprint can be reduced using JVM flags (but there are impacts).
The Docker container sizing must be done through testing with the expected load of the application.
Details for each component
The shared class space can be disabled inside a container since the classes won't be shared with another JVM process. The following flag can be used. It will remove the shared class space (17MB).
-Xshare:off
The serial garbage collector has a minimal memory footprint, at the cost of longer pause times during garbage collection (see Aleksey Shipilëv's comparison between GCs in one picture). It can be enabled with the following flag. It can save up to the GC space used (48MB).
-XX:+UseSerialGC
The C2 compiler can be disabled with the following flag to reduce the profiling data used to decide whether or not to optimize a method.
-XX:+TieredCompilation -XX:TieredStopAtLevel=1
The code space is reduced by 20MB. Moreover, the memory outside the JVM is reduced by 80MB (the difference between the NMT space and the RSS space). The optimizing compiler C2 needs 100MB.
The C1 and C2 compilers can be disabled with the following flag.
-Xint
The memory outside the JVM is now lower than the total committed space. The code space is reduced by 43MB. Beware, this has a major impact on the performance of the application. Disabling the C1 and C2 compilers reduces the memory used by 170MB.
Using the GraalVM compiler (a replacement for C2) leads to a somewhat smaller memory footprint. It increases the code memory space by 20MB and decreases the memory outside the JVM by 60MB.
The article Java Memory Management for JVM provides some relevant information about the different memory spaces.
Oracle provides some details in the Native Memory Tracking documentation. More details about compilation levels can be found in advanced compilation policy and in disable C2 to reduce the code cache size by a factor of 5. Some details about the case when both compilers are disabled are given in Why does a JVM report more committed memory than the Linux process resident set size?
Java needs a lot of memory. The JVM itself needs a lot of memory to run. The heap is the memory available inside the virtual machine to your application. Because the JVM is a big bundle packed with all the goodies possible, it takes a lot of memory just to load.
Starting with Java 9 you have something called Project Jigsaw, which might reduce the memory used when you start a Java app (along with startup time). Project Jigsaw and the new module system were not necessarily created to reduce the necessary memory, but if it's important you can give it a try.
You can take a look at this example: https://steveperkins.com/using-java-9-modularization-to-ship-zero-dependency-native-apps/. Using the module system resulted in a CLI application of 21MB (with the JRE embedded). The JRE alone takes more than 200MB. That should translate to less allocated memory when the application is up (a lot of unused JRE classes will no longer be loaded).
Here is another nice tutorial: https://www.baeldung.com/project-jigsaw-java-modularity
If you don't want to spend time on this, you can simply allocate more memory. Sometimes that is the best option.
How to size correctly the Docker memory limit?
Size it by monitoring the application for some time. To restrict the container's memory, try using the -m, --memory bytes option of the docker run command - or something equivalent if you are running it another way - like:
docker run -d --name my-container --memory 500m <iamge-name>
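A variant that pins down both sides at once, so the container limit and the JVM heap limit are set together (the image name and sizes are placeholders, and this assumes the image lets you override the command):
$ docker run -d --name my-container --memory 500m <image-name> java -Xms128m -Xmx128m -jar app.jar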
I can't answer the other questions.

Is there any advantage in setting Xms and Xmx to the same value?

Usually I set -Xms512m and -Xmx1g so that when the JVM starts it allocates 512MB and gradually increases the heap to 1GB as necessary. But I see these values set to the same value, say 1g, in a dedicated server instance. Is there any advantage to having both set to the same value?
Well there are couple of things.
The program will start with the -Xms amount of heap, and if that value is lower, GC will eventually be forced to occur more frequently.
Once the program fills the -Xms heap, the JVM requests additional memory from the OS and eventually grows to -Xmx; that takes additional time and leads to performance issues. You might as well set it to that value at the beginning, so the JVM does not have to request additional memory.
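For example, a dedicated server instance might be started with both values equal so the heap is fully sized up front (the size is illustrative):
$ java -Xms1g -Xmx1g -jar app.jar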
It is very nicely answered here - https://developer.jboss.org/thread/149559?_sscc=t
From Oracle Java SE 8 docs:
https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/sizing.html
By default, the virtual machine grows or shrinks the heap at each
collection to try to keep the proportion of free space to live objects
at each collection within a specific range. This target range is set
as a percentage by the parameters -XX:MinHeapFreeRatio=<minimum> and
-XX:MaxHeapFreeRatio=<maximum>, and the total size is bounded below by -Xms<min> and above by -Xmx<max>. Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing
decision from the virtual machine. However, the virtual machine is
then unable to compensate if you make a poor choice.
If the values of -Xms and -Xmx are the same, the JVM will not have to adjust the heap size, and that means less work for the JVM and more time for your application. But if the chosen value is a poor choice for -Xms, then some of the allocated memory will never be used because the heap will never shrink, and if it is a poor choice for -Xmx, you will get an OutOfMemoryError.
AFAIK one more reason is that expansion of the heap is a stop-the-world event; setting those to the same value will prevent that.
There are some advantages.
If you know the size is going to grow to the maximum, e.g. in a benchmark, you may as well start with the size you know you need.
You can get better performance by giving the program more memory than it might naturally give itself. YMMV.
In general, I would make -Xms a value I am confident it will use, and then double this as headroom for future use cases or situations we haven't tested for, i.e. a size we don't expect but that it might use.
In short, the maximum is the point you would rather the program fail than use any more.
The application will suffer frequent GC with a lower -Xms value.
Asking the OS for more memory each time consumes time.
Above all, if your application is performance critical, you will certainly want to avoid memory pages being swapped out to or in from disk, as that swapping causes GC to consume more time. To avoid this, memory can be locked. But if -Xms and -Xmx are not the same, memory allocated after the initial allocation will not be locked.
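A related sketch (not a full locking setup): with -Xms equal to -Xmx you can also pre-touch the whole heap at startup so all pages are committed before the application starts; actually pinning the heap in RAM usually involves large pages, which come up in a later question.
$ java -Xms4g -Xmx4g -XX:+AlwaysPreTouch -jar app.jar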

Netbeans out of memory exception

Netbeans out of memory exception
I increased the -Xmx value in the NetBeans config file,
but the IDE is busy acquiring more memory to scan projects.
The memory usage increases, and the process is slow and unresponsive.
Sounds like your system is thrashing. The heap size is now so large that there is not enough physical memory on your system to hold it ... and all of the other things you are running.
The end result is that your system has to copy memory pages between physical memory and the disc page file. Too much of that and the system performance will drop dramatically. You will see that the disc activity light is "on" continually. (The behaviour is worst during Java garbage collection, because that entails accessing lots of VM pages in essentially random order.)
If this is your problem then there is no easy solution:
You could reduce the -Xmx a bit ...
You could stop other applications running; e.g. quit your web browser, email client, etc.
You could buy more memory. (This only works up to a point ... if you are using a 32bit system / 32bit OS / 32bit JVM.)
You could switch to a less memory-hungry operating system or distro.
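If you do keep a larger -Xmx, it normally goes into NetBeans' own configuration rather than a plain command line; a typical (version-dependent) entry in etc/netbeans.conf looks roughly like this, with JVM options prefixed by -J:
netbeans_default_options="-J-Xms256m -J-Xmx1g"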

Shrinking JVM memory and Swap

Virtual Machine:
4CPU
10GB RAM
10GB swap
Java 1.7
-Xms=-Xmx=6144m
Tomcat 7
We observed a very strange behaviour with the JVM. The JVM resident memory began to shrink and the swap usage shot up to over 50%.
Please see below stats from monitoring tools.
http://i44.tinypic.com/206n6sp.jpg
http://i44.tinypic.com/m99hl0.jpg
Any pointers to help understand this would be greatly appreciated.
Thanks!
Or maybe your Java program was idle and didn't need that memory, and you have high swappiness? In such a situation your OS would free RAM just in case and leave only the used part.
In my opinion, that is actually good behaviour - why should you waste RAM on a process that won't use it?
If you run only this one process on the VM, then it would be quite a good idea to set swappiness to 0 or another small number - this memory was given to this single process, so we may as well disable swapping it.
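For reference (Linux; the change lasts until reboot unless it is also added to /etc/sysctl.conf), checking and lowering swappiness looks like:
$ cat /proc/sys/vm/swappiness     # default is usually 60
$ sudo sysctl -w vm.swappiness=1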
Thanks for the response. Yes, this is closer to system troubleshooting than to Java, but I thought this was the right forum to raise the topic in case anybody has seen such a phenomenon with the JVM.
Anyway, I had already checked top, and no, there was no process other than Java that was hungry for memory. Actually, the second-highest process was using 72MB (RSS).
No, swappiness is not set aggressively on this system; it is at the default of 60. One additional piece of information I missed sharing is that we have 4 app servers in a cluster and all showed this behaviour at exactly the same time. AFAIK, the JVM does not swap out, but the OS would. All of this is what is confusing me.
All these app servers are in production and busy serving requests, so they are not idle. The used heap size was on average 5GB of the 6GB.
The other interesting thing I found was some failure messages in the VMware logs at the same time, which is what I'm investigating.

Cannot create JVM with -XX:+UseLargePages enabled

I have a Java service that currently runs with a 14GB heap. I am keen to try out the -XX:+UseLargePages option to see how this might affect the performance of the system. I have configured the OS as described by Oracle using appropriate shared memory and page values (these can also be calculated with an online tool).
Once the OS is configured, I can see that it allocates the expected amount of memory as huge-pages. However, starting the VM with the -XX:+UseLargePages option set always results in one of the following errors:
When -Xms / -Xmx is almost equal to the huge page allocation:
Failed to reserve shared memory (errno = 28). // 'No space left on device'
When -Xms / -Xmx is less than the huge page allocation:
Failed to reserve shared memory (errno = 12). // 'Out of memory'
I did try introducing some leeway - so on a 32GB system I allocated 24GB of shared memory and hugepages to use with a JVM configured with a 20GB heap, of which only 14GB is currently utilized. I also verified that the user executing the JVM did have group rights consistent with /proc/sys/vm/hugetlb_shm_group.
Can anyone give me some pointers on where I might be going wrong and what I could try next?
Allocations/utilization:
-Xms / -Xmx - 20GB
Utilized heap - 14GB
/proc/sys/kernel/shmmax - 25769803776 (24GB)
/proc/sys/vm/nr_hugepages - 12288
Environment:
System memory - 32GB
System page size - 2048KB
debian 2.6.26-2-amd64
Sun JVM 1.6.0_20-b02
Solution
Thanks to #jfgagne for providing an answer that led to a solution. In addition to the /proc/sys/kernel/shmall setting (specified as 4KB pages), I had to add entries to /etc/security/limits.conf as described on Thomas' blog. However, as my application is started using jsvc, I also had to duplicate the settings for the root user (note that the limits are specified in KB):
root soft memlock 25165824
root hard memlock 25165824
pellegrino soft memlock 25165824
pellegrino hard memlock 25165824
It's also worth mentioning that settings could be tested quickly by starting the JVM with the -version argument:
java -XX:+UseLargePages -Xmx20g -version
When you use huge pages with Java, there is not only the heap using huge pages, but there is also the PermGen: do not forget to allocate space for it. It seems this is why you have a different errno message when you set Xmx near the amount of huge pages.
There is also the shmall kernel parameter that needs to be set, which you did not mention; maybe that is what is blocking you. In your case, you should set it to 6291456.
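For reference, kernel.shmall is expressed in pages (4KB here, so 6291456 pages is 24GB), and it can be applied and checked like this (not persistent across reboots unless also added to /etc/sysctl.conf):
$ sudo sysctl -w kernel.shmall=6291456
$ grep -i huge /proc/meminfo        # verify HugePages_Total / HugePages_Free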
The last thing to say: when using huge pages, the Xms parameter is not used anymore: Java reserves all Xmx in shared memory using huge pages.