Ignite version: 2.12
OS: Windows 10
I am trying to understand Ignite's heap usage.
I started the Ignite server with the command below and no special VM args, as suggested by https://ignite.apache.org/docs/latest/quick-start/java:
ignite.bat -v ..\examples\config\example-ignite.xml
After that, I started analyzing its heap usage with the VisualVM tool; the heap usage looks like this:
The next thing I tried was to increase the heap memory and restart the server.
Surprisingly, Ignite now consumes even more memory, as seen in this graph:
I know the GC is working its way through clearing the heap, but why does Ignite's memory consumption increase when the heap space is increased?
How will this impact a server with ~40-60 GB of memory? How much memory can I expect Ignite to consume?
I'm planning to use Ignite as an in-memory cache along with Cassandra as the DB.
Just like Cassandra, Hadoop, or Kafka, Ignite is Java middleware that uses the Java heap for various needs. But your data is always stored in off-heap memory, which allows utilizing all available memory space without worrying about garbage collection. This gives Ignite complete control over how the data is managed and ensures the long-term performance of the system.
Ignite uses a page memory model for storing everything, including user data, indexes, and meta information. This gives Ignite fine-grained memory management and better performance, and it also lets Ignite persist the same pages to disk without any data modification.
In other words, data pages are accessed through direct memory pointers (outside the JVM heap), but some internal tasks, such as bootstrapping Ignite itself or performing local SQL processing, do require the JVM heap, because Ignite itself is written in Java.
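For illustration, here is how that off-heap region can be sized programmatically; a minimal sketch (the region name and the sizes are example values, not recommendations):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class OffHeapSizingExample {
    public static void main(String[] args) {
        // Off-heap page memory region: this is where cache data lives,
        // outside the JVM heap and invisible to the garbage collector.
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("Default_Region")
            .setInitialSize(256L * 1024 * 1024)    // 256 MB to start
            .setMaxSize(4L * 1024 * 1024 * 1024);  // grow up to 4 GB

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage);

        // The Java heap (-Xms/-Xmx) is set separately via JVM flags and is
        // used only for Ignite's internal work, not for storing cache data.
        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Started with a 4 GB off-heap data region.");
        }
    }
}

This is why growing the Java heap does not shrink the data footprint: the heap and the page memory are sized independently.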
Check these documentation pages for details.
How will this impact a server with ~40-60G memory, how much memory I can expect to be consumed by Ignite?
You would need the 40-60 GB of RAM plus something for the JVM itself (the Java heap). Recommended values might differ, but 2 GB of Java heap should be enough.
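For example, assuming the default start scripts (which read the JVM_OPTS environment variable), the heap can be pinned explicitly when starting the node; the 2 GB value here is just the rule of thumb from above:

set "JVM_OPTS=-Xms2g -Xmx2g"
ignite.bat -v ..\examples\config\example-ignite.xml

The cache data itself then lives in the off-heap data regions, which you size to fit the 40-60 GB of data.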
Related
We are running a .NET application in Fargate via Terraform, where we specify CPU and memory in the aws_ecs_task_definition resource.
The service has just one task, e.g.
resource "aws_ecs_task_definition" "test" {
....
cpu = 256
memory = 512
....
From the documentation this is required for Fargate.
You can also specify cpu and memory in the container_definitions, but the documentation states that the field is optional, and since we were already setting values at the task level, we did not set them here.
We observed that our memory was growing after the tasks started; depending on the application, sometimes quite fast and sometimes over a longer period of time.
So we started thinking we had a memory leak and went to profile the application using the dotnet-monitor tool as a sidecar.
As part of introducing the sidecar, we set cpu and memory values for our .NET application at the container_definitions level.
After we did this, we observed that the memory in our applications behaved much better.
From the dotnet-monitor traces, we are seeing that when we set memory at the container_definitions level:
Working Set is much smaller
Gen 0/1/2 GC count is above 1 (GC occurring early)
Gen 0/1/2 GC size is smaller
GC Committed Bytes is smaller
So to summarize: when we do not set memory at the container_definitions level, memory continues to grow and no GC occurs until we are almost running out of memory.
When we set memory at the container_definitions level, GC occurs regularly and memory does not spike.
So we have a solution, but we do not understand why this is the case. We would like to know why.
Summary:
I have a Java application that uses Akka Streams and is using more memory than I have specified for the JVM. The values below are what I have set through JAVA_OPTS.
maximum heap size (-Xmx) = 700MB
metaspace (-XX:MetaspaceSize) = 250MB
stack size (-Xss) = 1025kb
Using those values and plugging them into the formula below, one would assume the application would be using around 950MB. However, that is not the case: it's using over 1.5GB.
Max memory = [-Xmx] + [-XX:MetaspaceSize] + number_of_threads * [-Xss]
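For instance, assuming a hypothetical 50 threads (the actual thread count isn't given):

Max memory = 700MB + 250MB + 50 * 1MB = 1000MB

That matches the ~950MB estimate plus a little for stacks, and is still well short of the 1.5GB+ actually observed.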
Question: Thoughts on how this is possible?
Application overview:
This Java application uses Alpakka to connect to pubsub and consume messages. It utilizes Akka Streams' parallelism to perform logic on the consumed messages and then produce them to a Kafka instance. See the heap dump below. Note: the heap is only 912.9MB, so something is taking up 587.1MB and pushing the memory usage over 1.5GB.
Why is this a problem?
This application is deployed on a Kubernetes cluster, and the pod has a memory limit of 1.5GB. So when the container where the Java application is running consumes more than 1.5GB, the container is killed and restarted.
The short answer is that those settings do not account for all the memory consumed by the JVM.
Outside of the heap, for instance, memory is allocated for:
compressed class space (governed by the MaxMetaspaceSize)
direct byte buffers (especially if your application performs network I/O and cares about performance, it's virtually certain to make somewhat heavy use of those)
threads (each thread has a stack governed by -Xss ... note that if mixing different concurrency models, each model will tend to allocate its own threads and not necessarily provide a means to share threads)
if native code is involved (e.g. perhaps in the library Alpakka is using to interact with pubsub?), that can allocate arbitrary amounts of memory outside of the heap
the code cache (typically 48MB)
the garbage collector's state (will vary based on the GC in use, including the presence of any tunable options)
various other things that generally aren't going to be that large
In my experience you're generally fairly safe with a heap that's at most (pod memory limit minus 1 GB), but if you're performing exceptionally large I/Os etc. you can pretty easily get OOM even then.
Your JVM may ship with support for native memory tracking, which can shed light on at least some of that non-heap consumption. Most of these allocations tend to happen soon after the application is fully loaded, so running with a much higher resource limit and then stopping (e.g. via SIGTERM, with enough time to allow it to save results) should give you an idea of what you're dealing with.
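For example, with the standard HotSpot flags (<pid> is a placeholder for the JVM process id):

java -XX:NativeMemoryTracking=summary -jar app.jar
jcmd <pid> VM.native_memory summary

The summary breaks non-heap usage down into categories such as Thread, Code, GC, and Internal, which map closely onto the list above. Note that NMT itself adds a small overhead, so it's better suited to diagnosis than to being left on in production.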
In my project we are using an ActiveMQ queue with KahaDB as the persistence mechanism. Though we have enough disk space, we see that our 6GB of RAM is also steadily being consumed. Any idea why the heap should grow when persistence is handled by KahaDB? Is there any way to completely offload the heap to persistent storage?
Please help.
You cannot, and you most likely would not be happy with the performance if the broker offloaded everything in the heap to disk. It is most likely caching messages, which is a major performance boost.
Connect to the broker using jconsole, invoke a garbage collection at the Java level, and invoke the "GC" operation on the broker object. Heap usage should go down.
Note: The "shark tooth" heap graph isn't always readily visible with a busy message broker, since it favors caching messages for performance.
Greetings,
We have recently been facing a memory leak issue in one of our apps.
Development environment: Lucene 2.4.0, Hibernate Search 3.2.0, Hibernate 3.5.0, Spring 2.5, and Ehcache 1.4.1.
The problem is that memory in the old gen gradually goes up over time. Eventually, the JVM runs out of memory, as we can see from the JVM stats that the old generation capacity reaches its maximum. As a result, I have to restart the web app to release all the memory.
I generated a heap dump from the app and used Memory Analyzer to check it. I see this:
123,726 instances of "org.apache.lucene.index.TermInfosReader$ThreadResources", loaded by "org.apache.catalina.loader.WebappClassLoader @ 0x7f5d71ffe3c8" occupy 3,139,449,272 (79.54%) bytes. These instances are referenced from one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]", loaded by "<system class loader>"
Can you give me some advice, please?
Thanks.
I'm rather new to Redis, and before using it I'd like to learn some details that are important to me. So...
Redis uses RAM and the HDD for storing data. RAM is used as fast read/write storage, and the HDD is used to make this data persistent. When Redis starts, does it load all data from the HDD into RAM, or does it load only frequently queried data? What if I have 500MB of Redis data on the HDD, but only 100MB of RAM for Redis? Where can I read about this?
Redis loads everything into RAM. All the data is written to disk, but it will only be read back for things like restarting the server or making a backup.
There are a couple of ways you can use it with less RAM than data, though. You can set it up in combination with MySQL or another disk-based store to work much like memcached - you manage cache misses and persistence manually.
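A minimal sketch of that cache-aside pattern, assuming the Jedis client and a plain JDBC connection (the key names, table, and TTL are all illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import redis.clients.jedis.Jedis;

public class UserCache {
    private final Jedis jedis;    // Redis connection
    private final Connection db;  // MySQL (or any JDBC) connection

    public UserCache(Jedis jedis, Connection db) {
        this.jedis = jedis;
        this.db = db;
    }

    // Cache-aside read: try Redis first, fall back to the database on a miss.
    public String getUserName(long id) throws Exception {
        String key = "user:" + id + ":name";
        String cached = jedis.get(key);
        if (cached != null) {
            return cached; // cache hit, no disk access
        }
        try (PreparedStatement ps =
                 db.prepareStatement("SELECT name FROM users WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) return null;
                String name = rs.getString(1);
                // Repopulate the cache with a TTL so cold entries expire
                // and RAM usage stays bounded.
                jedis.setex(key, 3600, name);
                return name;
            }
        }
    }
}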
Redis has a VM mode where all keys must fit in RAM but infrequently accessed data can be on disk. However, I'm not sure if this is in the stable builds yet.
Recent versions (>2.0) have improved significantly, and memory management is more efficient. See this blog post, which explains how to use hashes to optimize the RAM footprint: http://antirez.com/post/redis-weekly-update-7.html
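The trick from that post, roughly sketched with the Jedis client (the bucket size and key names are illustrative): instead of one top-level key per value, group values into hashes, which Redis encodes much more compactly while they stay small:

import redis.clients.jedis.Jedis;

public class HashBucketExample {
    private static final int BUCKET_SIZE = 1000;

    // Naive approach: one top-level key per user, each carrying
    // per-key overhead in Redis.
    static void setNaive(Jedis jedis, long userId, String name) {
        jedis.set("user:" + userId + ":name", name);
    }

    // Hash-bucket approach: users 0-999 share one hash, 1000-1999 the
    // next, and so on. Small hashes use a compact internal encoding,
    // which can cut the RAM footprint dramatically.
    static void setBucketed(Jedis jedis, long userId, String name) {
        String bucket = "user:name:" + (userId / BUCKET_SIZE);
        jedis.hset(bucket, String.valueOf(userId % BUCKET_SIZE), name);
    }

    static String getBucketed(Jedis jedis, long userId) {
        String bucket = "user:name:" + (userId / BUCKET_SIZE);
        return jedis.hget(bucket, String.valueOf(userId % BUCKET_SIZE));
    }
}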
The feature is called Virtual Memory, and it is officially deprecated:
Redis VM is now deprecated. Redis 2.4 will be the latest Redis version featuring Virtual Memory (but it also warns you that Virtual Memory usage is discouraged). We found that using VM has several disadvantages and problems. In the future of Redis we want to simply provide the best in-memory database (but persistent on disk as usual) ever, without considering at least for now the support for databases bigger than RAM. Our future efforts are focused into providing scripting, cluster, and better persistence.
More information about VM: https://redis.io/topics/virtual-memory