Ignite start with "Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=" - ignite

My Ignite server has 128G of RAM, with Xmx set to 10G and 70G off-heap. When it starts, the log shows:
[11:30:27,376][INFO][main][IgniteKernal] Performance suggestions for grid (fix if possible)
[11:30:27,377][INFO][main][IgniteKernal] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[11:30:27,377][INFO][main][IgniteKernal] ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
I have searched the web and found this article saying it is not necessary to configure MaxDirectMemorySize: http://apache-ignite-users.70518.x6.nabble.com/Do-we-require-to-set-MaxDirectMemorySize-JVM-parameter-td21200.html
Other articles say the default MaxDirectMemorySize is the same as Xmx. So what should I configure for this option? I am confused: if it is not useful, why does Ignite print that suggestion in its log?

This is not an indication of failure; you can simply ignore this suggestion unless your node/cluster is failing due to OOM in direct buffer memory. The option gives you the ability to control how much direct memory can be allocated; otherwise it is governed by the default direct memory policy of the JVM you are using. Ignite only checks whether it is set in the JVM options.
Do you experience any issues with OOME in direct buffer memory in your app?
Regards.
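To see concretely what that OOME looks like, here is a minimal, self-contained Java sketch (the class name is mine) that allocates direct, i.e. off-heap, buffers until the JVM-wide cap is hit. Run it with, say, -XX:MaxDirectMemorySize=64m:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferDemo {
    public static void main(String[] args) {
        List<ByteBuffer> buffers = new ArrayList<>();
        try {
            while (true) {
                // Each call reserves 1 MB outside the Java heap, counted
                // against the -XX:MaxDirectMemorySize limit.
                buffers.add(ByteBuffer.allocateDirect(1024 * 1024));
            }
        } catch (OutOfMemoryError e) {
            // With -XX:MaxDirectMemorySize=64m this triggers after roughly 64
            // iterations with "java.lang.OutOfMemoryError: Direct buffer memory".
            System.out.println("Direct buffers allocated before OOME: " + buffers.size());
        }
    }
}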

Direct buffer memory is used by some file operations (like read and write) when the program calls functions from the NIO library.
Due to a bug, if its value is not specified and you set Xmx, it is copied from Xmx.
The direct buffer memory default is 64 MB (if you set neither it nor Xmx).
I suggest a MaxDirectMemorySize of 64 or 256 MB.
With bigger values you may not see errors, but I doubt you would get better performance.
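If you do decide to set it, the flag can be passed to Ignite through the JVM_OPTS environment variable read by the ignite.sh/ignite.bat launch scripts, for example:

export JVM_OPTS="-Xmx10g -XX:MaxDirectMemorySize=256m"
./ignite.sh examples/config/example-ignite.xml

You can check what the JVM sees with java -XX:+PrintFlagsFinal -version | grep MaxDirectMemorySize; a value of 0 means the flag is unset and the JVM falls back to its internal default.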

Related

Akka Stream application using more memory than the JVM's heap

Summary:
I have a Java application using Akka Streams that consumes more memory than I have configured the JVM to use. The values below are what I have set through JAVA_OPTS.
maximum heap size (-Xmx) = 700MB
metaspace (-XX:MaxMetaspaceSize) = 250MB
stack size (-Xss) = 1025kb
Plugging those values into the formula below, one would expect the application to use around 950MB. However, that is not the case: it is using over 1.5GB.
Max memory = [-Xmx] + [-XX:MaxMetaspaceSize] + number_of_threads * [-Xss]
Question: how is this possible?
Application overview:
This Java application uses Alpakka to connect to Pub/Sub and consume messages. It uses Akka Streams' parallelism to perform logic on the consumed messages and then produces them to a Kafka instance. A heap dump showed the heap is only 912.9MB, so something is taking up 587.1MB and pushing memory usage over 1.5GB.
Why is this a problem?
This application is deployed on a Kubernetes cluster, and the pod has a memory limit of 1.5GB. So when the container where the Java application runs consumes more than 1.5GB, the container is killed and restarted.
The short answer is that those values do not account for all the memory consumed by the JVM.
Outside of the heap, for instance, memory is allocated for:
the compressed class space (governed by -XX:MaxMetaspaceSize)
direct byte buffers (especially if your application performs network I/O and cares about performance, it is virtually certain to make somewhat heavy use of those)
threads (each thread has a stack governed by -Xss; note that if you mix different concurrency models, each model will tend to allocate its own threads and not necessarily provide a means to share them)
native code (e.g. perhaps in the library Alpakka uses to interact with Pub/Sub), which can allocate arbitrary amounts of memory outside of the heap
the code cache (typically 48MB)
the garbage collector's state (this varies with the GC in use and any tunable options)
various other things that generally aren't going to be that large
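To put the thread item in numbers: with the -Xss1025k from the question, every 100 live threads account for roughly 100MB of stack space outside the heap, and an Akka application typically runs several dispatcher pools in addition to the Kafka client's own threads.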
In my experience you're generally fairly safe with a heap that's at most (pod memory limit minus 1GB), but if you're performing exceptionally large I/Os etc. you can pretty easily hit OOM even then.
Your JVM may ship with support for native memory tracking, which can shed light on at least some of that non-heap consumption. Most of these allocations tend to happen soon after the application is fully loaded, so running with a much higher resource limit and then stopping the application (e.g. via SIGTERM, with enough time for it to save results) should give you an idea of what you're dealing with.
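If you want to try that, a minimal sketch assuming a HotSpot-based JVM (the main class MyApp is a placeholder, and note that NMT itself adds a small overhead):

java -XX:NativeMemoryTracking=summary -Xmx700m MyApp
jcmd <pid> VM.native_memory summary

The second command prints reserved and committed memory per category (Java Heap, Class, Thread, Code, GC, Internal, and so on), which maps directly onto the list above.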

Why does the Ignite server show heap usage without any activity?

Ignite version : 2.12
OS : Windows 10
I am trying to understand Ignite's heap usage.
I started the Ignite server with the command below and no special VM args, as suggested by https://ignite.apache.org/docs/latest/quick-start/java
ignite.bat -v ..\examples\config\example-ignite.xml
After that I started analyzing its heap usage with the VisualVM tool.
The next thing I tried was to increase the heap memory and restart the server.
Surprisingly, Ignite now consumes even more memory.
I know the GC is working its way through the heap, but why does Ignite's memory consumption increase when the heap space increases?
How will this impact a server with ~40-60G of memory? How much memory can I expect Ignite to consume?
I'm planning to use Ignite as an in-memory cache along with Cassandra as the DB.
Just like Cassandra, Hadoop, or Kafka, Ignite is Java middleware that uses the Java heap for various needs. But your data is always stored in off-heap memory, which allows utilizing all available memory space without worrying about garbage collection. This gives Ignite complete control over how the data is managed and ensures the long-term performance of the system.
Ignite uses a page memory model for storing everything, including user data, indices, and meta information. This lets Ignite manage its memory efficiently, improves performance, and, because pages keep the same layout on disk, lets it use the whole disk without any data modification.
In other words, data pages are accessed directly through memory pointers (outside of the JVM), but some internal tasks, like bootstrapping Ignite itself or performing local SQL processing, do require JVM heap because Ignite itself is written in Java.
Check this and that pages for details.
How will this impact a server with ~40-60G memory, how much memory can I expect to be consumed by Ignite?
You would need 40-60 GB of RAM plus something for the JVM itself (the Java heap); recommended values may differ, but 2GB of Java heap should be enough.
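To make that concrete, a minimal sketch using the Ignite 2.x Java API: a small fixed heap for Ignite internals (run the JVM with something like -Xms2g -Xmx2g) and a large off-heap data region for the data itself. The 40 GB figure is illustrative for a 40-60 GB server:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class OffHeapNode {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        // Cap the default data region (page memory, allocated off-heap) at 40 GB.
        storageCfg.getDefaultDataRegionConfiguration()
                .setMaxSize(40L * 1024 * 1024 * 1024);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);

        // Cache entries on this node live in the off-heap region configured above;
        // the Java heap is only used for Ignite's own bookkeeping and query processing.
        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Node started: " + ignite.name());
        }
    }
}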

ActiveMQ heap increases even when using persistent storage

In my project we are using an ActiveMQ queue with KahaDB as the persistence mechanism. Though we have enough disk space, we see that 6GB of RAM is also being consumed. Any idea why the heap should grow when persistence is handled by KahaDB? Is there any way to completely offload the heap to persistent storage?
Please help.
You cannot, and you most likely would not be happy with the performance if everything in the heap were offloaded to disk. The broker is most likely caching messages, which is a major performance boost.
Connect to the broker using jconsole, invoke a garbage collection at the Java level, and then invoke the "GC" operation on the broker object. The heap usage should go down.
Note: the "shark tooth" heap graph isn't always readily visible with a busy message broker, since it favors caching messages for performance.

OrientDB: MaxDirectMemorySize vs Dstorage.diskCache.bufferSize

It is not clear what the difference is between MaxDirectMemorySize and -Dstorage.diskCache.bufferSize. They both seem to be off-heap. Are they redundant when both are specified?
Docs: "The size of direct memory consumed by OrientDB is limited by the size of the disk cache (variable storage.diskCache.bufferSize)." https://orientdb.com/docs/2.2/Embedded-Server.html
The docs seem to imply that they refer to the same space, but that the direct memory size is limited by the buffer size. Is this correct?
MaxDirectMemorySize is a JVM setting that limits all direct byte buffer allocations within that JVM instance.
storage.diskCache.bufferSize is an application setting that limits the direct byte buffer allocations OrientDB makes for I/O caching.
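The two are therefore complementary rather than redundant: the JVM-wide cap must be at least as large as the disk cache plus whatever other direct buffers the process allocates. A sketch, assuming storage.diskCache.bufferSize is given in megabytes as in the 2.2 docs (myapp.jar is a placeholder):

java -XX:MaxDirectMemorySize=8g -Dstorage.diskCache.bufferSize=4096 -jar myapp.jar

Here the disk cache may grow to 4 GB, leaving the remaining direct-memory headroom for other byte buffer allocations.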

Cannot create JVM with -XX:+UseLargePages enabled

I have a Java service that currently runs with a 14GB heap. I am keen to try out the -XX:+UseLargePages option to see how this might affect the performance of the system. I have configured the OS as described by Oracle using appropriate shared memory and page values (these can also be calculated with an online tool).
Once the OS is configured, I can see that it allocates the expected amount of memory as huge-pages. However, starting the VM with the -XX:+UseLargePages option set always results in one of the following errors:
When -Xms / -Xmx is almost equal to the huge page allocation:
Failed to reserve shared memory (errno = 28). // 'No space left on device'
When -Xms / -Xmx is less than the huge page allocation:
Failed to reserve shared memory (errno = 12). // 'Out of memory'
I did try introducing some leeway - so on a 32GB system I allocated 24GB of shared memory and hugepages to use with a JVM configured with a 20GB heap, of which only 14GB is currently utilized. I also verified that the user executing the JVM did have group rights consistent with /proc/sys/vm/hugetlb_shm_group.
Can anyone give me some pointers on where I might be going wrong and what I could try next?
Allocations/utilization:
-Xms / -Xmx - 20GB
Utilized heap - 14GB
/proc/sys/kernel/shmmax - 25769803776 (24GB)
/proc/sys/vm/nr_hugepages - 12288
Environment:
System memory - 32GB
Huge page size - 2048KB
debian 2.6.26-2-amd64
Sun JVM 1.6.0_20-b02
Solution
Thanks to jfgagne for providing an answer that led to a solution. In addition to the /proc/sys/kernel/shmall setting (specified in 4KB pages), I had to add entries to /etc/security/limits.conf as described on Thomas' blog. Since my application is started using jsvc, I also had to duplicate the settings for the root user (note that the limits are specified in KB):
root soft memlock 25165824
root hard memlock 25165824
pellegrino soft memlock 25165824
pellegrino hard memlock 25165824
It's also worth mentioning that the settings can be tested quickly by starting the JVM with the -version argument:
java -XX:+UseLargePages -Xmx20g -version
When you use huge pages with Java, it is not only the heap that uses them: the PermGen does too, so do not forget to allocate space for it. This seems to be why you get a different errno message when you set Xmx close to the total amount of huge pages.
There is also the shmall kernel parameter that needs to be set, which you did not mention; maybe that is what is blocking you. In your case, you should set it to 6291456.
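As a worked check (kernel.shmall is expressed in 4KB pages): 24 GB = 25769803776 bytes, and 25769803776 / 4096 = 6291456 pages. A sketch of the corresponding kernel settings for the questioner's 24 GB pool of 2MB huge pages:

sysctl -w kernel.shmmax=25769803776
sysctl -w kernel.shmall=6291456
sysctl -w vm.nr_hugepages=12288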
One last thing: when using huge pages, the Xms parameter is no longer used; Java reserves the full Xmx in shared memory backed by huge pages.