When I delete 50 million records, my RAM usage does not change, but my disk usage decreases.
When does Ignite release the RAM?
If I shut down the Ignite node, the RAM is released very quickly. I want the RAM to be released when I delete my data.
Ignite does not release RAM back to the OS once it has been allocated, even when it is no longer in use. We never considered this scenario a priority: we only expect the amount of data to go up :)
Decide beforehand how much RAM space is to be allocated to an Ignite node and set the value as maxSize in a data region configuration.
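For example, a minimal Java sketch of the same setting via the programmatic API (the region name and sizes are illustrative):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CappedRegionStart {
    public static void main(String[] args) {
        // Cap the default data region: Ignite allocates off-heap pages
        // up to maxSize and does not give them back while the node runs.
        DataRegionConfiguration region = new DataRegionConfiguration()
                .setName("Default_Region")
                .setInitialSize(256L * 1024 * 1024)    // 256 MB allocated up front
                .setMaxSize(4L * 1024 * 1024 * 1024);  // hard cap at 4 GB

        IgniteConfiguration cfg = new IgniteConfiguration()
                .setDataStorageConfiguration(new DataStorageConfiguration()
                        .setDefaultDataRegionConfiguration(region));

        Ignition.start(cfg);
    }
}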
Study this page if you'd like to learn the internals of memory management in Ignite.
Ignite version: 2.12
OS: Windows 10
I am trying to understand Ignite's heap usage.
I started the Ignite server with the command below and no special VM args, as suggested by https://ignite.apache.org/docs/latest/quick-start/java
ignite.bat -v ..\examples\config\example-ignite.xml
After that, I started analyzing its heap usage with the VisualVM tool, and the heap usage looks like this:
The next thing I tried was to increase the heap memory and restart the server.
Surprisingly, Ignite now consumes even more memory, as seen in this graph:
I know the GC is working its way through the heap, but why does Ignite's memory consumption increase as the heap space increases?
How will this impact a server with ~40-60 GB of memory; how much memory can I expect Ignite to consume?
I'm planning to use Ignite as an in-memory cache along with Cassandra as the DB.
Just like Cassandra, Hadoop, or Kafka, Ignite is Java middleware that uses the Java heap for various needs. But your data is always stored in off-heap memory, which allows utilizing all available memory space without worrying about garbage collection. This gives Ignite complete control over how the data is managed and ensures the long-term performance of the system.
Ignite uses a page memory model for storing everything, including user data, indexes, meta information, etc. This lets Ignite manage memory explicitly, improves performance, and also allows pages to be persisted to disk without any transformation of the data.
In other words, user data is accessed directly in page memory via memory pointers (outside of the JVM heap), but some internal tasks, like bootstrapping Ignite itself or local SQL processing, do require the JVM heap, because Ignite itself is written in Java.
Check this page and that page for details.
How will this impact a server with ~40-60 GB of memory; how much memory can I expect Ignite to consume?
You would need the 40-60 GB of RAM for data, plus something for the JVM itself (the Java heap); recommended values may differ, but 2 GB of Java heap should be enough.
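To see that split on a running node, you can compare JVM heap usage with the off-heap size of the data regions. A minimal sketch, assuming Ignite's DataRegionMetrics API (some counters may require metrics to be enabled on the region):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class HeapVsOffHeap {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Java heap: Ignite internals, SQL processing, bootstrap.
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("Heap used: %d MB%n", heap.getUsed() / (1024 * 1024));

            // Off-heap page memory: where cache entries and indexes actually live.
            for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
                System.out.printf("Region %s: %d MB allocated off-heap%n",
                        m.getName(), m.getTotalAllocatedSize() / (1024 * 1024));
            }
        }
    }
}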
I restarted my Redis server after 120 days.
Before the restart, memory usage was 29.5 GB.
After the restart, memory usage was 27.5 GB.
So where does the 2 GB reduction come from?
Is it memory freed within RAM, as described in this article: https://redis.io/topics/memory-optimization?
Redis will not always free up (return) memory to the OS when keys are
removed. This is not something special about Redis, but it is how most
malloc() implementations work. For example if you fill an instance
with 5GB worth of data, and then remove the equivalent of 2GB of data,
the Resident Set Size (also known as the RSS, which is the number of
memory pages consumed by the process) will probably still be around
5GB, even if Redis will claim that the user memory is around 3GB. This
happens because the underlying allocator can't easily release the
memory. For example often most of the removed keys were allocated in
the same pages as the other keys that still exist. The previous point
means that you need to provision memory based on your peak memory
usage. If your workload from time to time requires 10GB, even if most
of the times 5GB could do, you need to provision for 10GB.
However allocators are smart and are able to reuse free chunks of
memory, so after you freed 2GB of your 5GB data set, when you start
adding more keys again, you'll see the RSS (Resident Set Size) to stay
steady and don't grow more, as you add up to 2GB of additional keys.
The allocator is basically trying to reuse the 2GB of memory
previously (logically) freed.
Because of all this, the fragmentation ratio is not reliable when you
had a memory usage that at peak is much larger than the currently used
memory. The fragmentation is calculated as the amount of memory
currently in use (as the sum of all the allocations performed by
Redis) divided by the physical memory actually used (the RSS value).
Because the RSS reflects the peak memory, when the (virtually) used
memory is low since a lot of keys / values were freed, but the RSS is
high, the ratio mem_used / RSS will be very high.
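For reference, the two numbers that passage compares (used_memory and used_memory_rss) can be read from INFO memory. A minimal sketch using the Jedis client (any client that exposes INFO would do):

import redis.clients.jedis.Jedis;

public class FragmentationCheck {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // INFO memory reports the allocator's view (used_memory)
            // and the OS view (used_memory_rss) side by side.
            long used = 0, rss = 0;
            for (String line : jedis.info("memory").split("\r\n")) {
                if (line.startsWith("used_memory:"))     used = Long.parseLong(line.split(":")[1]);
                if (line.startsWith("used_memory_rss:")) rss  = Long.parseLong(line.split(":")[1]);
            }
            // A ratio well above 1 right after mass deletion usually means the
            // allocator is holding on to pages, not that Redis is leaking.
            System.out.printf("used=%d B, rss=%d B, ratio=%.2f%n", used, rss, (double) rss / used);
        }
    }
}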
Or is it memory freed from OS caches that my Redis server was using? Is Redis using such a cache? A cache of a cache?
Thanks!
I'm using Redis and noticed that it crashes with the following error:
MISCONF Redis is configured to save RDB snapshots
I tried the solution suggested in this post,
but everything seems to be OK in terms of permissions and space.
The htop command tells me that Redis is consuming 70% of the RAM. I tried to stop/restart Redis in order to flush it, but at startup the amount of RAM used by Redis grew dramatically and settled around 66%. I'm pretty sure that at that moment no process was using any Redis instance!
What is happening there?
The growing RAM usage is expected behaviour for Redis during the first data load, after restarts, and while writing the data to disk (the snapshot process). Redis tends to allocate as much memory as it can unless you set the "maxmemory" option in your conf file.
It allocates memory but does not release it immediately; sometimes this takes hours. I have seen such cases.
A well-known fact about Redis is that it can allocate memory up to twice the size of the dataset it keeps.
I suggest you wait a couple of hours without any restart (Redis can keep serving get/set operations in the meantime) and keep watching the memory.
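If you want a hard cap rather than "as much as it can", maxmemory can be set at runtime as well as in redis.conf. A minimal sketch using the Jedis client (the 2 GB limit and the LRU policy are illustrative):

import redis.clients.jedis.Jedis;

public class CapRedisMemory {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Equivalent to "maxmemory 2gb" / "maxmemory-policy allkeys-lru"
            // in redis.conf: evict least-recently-used keys at the limit
            // instead of growing without bound.
            jedis.configSet("maxmemory", "2gb");
            jedis.configSet("maxmemory-policy", "allkeys-lru");
            System.out.println(jedis.configGet("maxmemory"));
        }
    }
}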
Please also check the passage from https://redis.io/topics/memory-optimization quoted in full in the earlier question: Redis will not always free (return) memory to the OS when keys are removed, because that is how most malloc() implementations work.
Confused: is Redis an in-memory-only store that also writes to disk for backup/restore?
If so, how long does a 16 GB DB take to write out and read back into memory?
As seen in the Redis README, all data is in memory but is also stored on disk for persistence and backup purposes.
As for the 16 GB database question, it depends entirely on your server. For example:
Does the server have any other I/O-bound software running?
What type of server is it? Shared? VPS? Dedicated?
The hardware specifications of both the hard disk and RAM.
For those reasons, it is impossible to give you an accurate estimate of how long reading and writing 16 GB of data would take. As a rough lower bound, though, a disk sustaining 200 MB/s of sequential reads needs about 80 seconds of pure I/O for 16 GB, before any parsing overhead.
Honestly, if you're storing 16 GB of data, Redis is most likely not the right database to use, simply because it is so heavy on RAM and disk space by design.
I'm rather new to Redis, and before using it I'd like to learn some details that are important to me. So...
Redis uses RAM and HDD for storing data: RAM serves as fast read/write storage, and HDD makes the data persistent. When Redis starts, does it load all the data from HDD into RAM, or only the frequently queried data? What if I have 500 MB of Redis storage on HDD but only 100 MB of RAM for Redis? Where can I read about this?
Redis loads everything into RAM. All the data is also written to disk, but the disk copy is only read back for things like restarting the server or making a backup.
There are a couple of ways you can use it with less RAM than data, though. You can set it up in combination with MySQL or another disk-based store to work much like memcached: you manage cache misses and persistence manually.
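A minimal cache-aside sketch along those lines, using the Jedis client; loadFromDatabase is a hypothetical stand-in for a real MySQL (or Cassandra) query:

import redis.clients.jedis.Jedis;

public class CacheAside {
    // Hypothetical loader standing in for a real database query.
    static String loadFromDatabase(String id) {
        return "row-" + id;
    }

    static String get(Jedis jedis, String id) {
        String key = "user:" + id;
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                    // cache hit: served from RAM
        }
        String value = loadFromDatabase(id);  // cache miss: go to the DB
        jedis.setex(key, 3600, value);        // repopulate with a 1-hour TTL
        return value;
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            System.out.println(get(jedis, "42"));
        }
    }
}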
Redis has a VM mode where all keys must fit in RAM but infrequently accessed data can be on disk. However, I'm not sure if this is in the stable builds yet.
Recent versions (>2.0) have improved significantly, and memory management is more efficient. See this blog post, which explains how to use hashes to optimize the RAM footprint: http://antirez.com/post/redis-weekly-update-7.html
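The idea in that post, roughly: pack many small key/value pairs into hash buckets so Redis can keep each bucket in its compact hash encoding. A minimal sketch using the Jedis client (the bucket size of 1000 is illustrative):

import redis.clients.jedis.Jedis;

public class HashBuckets {
    // Store object <id> as field (id % 1000) inside hash "obj:<id / 1000>",
    // so every bucket holds at most 1000 small fields.
    static void put(Jedis jedis, long id, String value) {
        jedis.hset("obj:" + (id / 1000), String.valueOf(id % 1000), value);
    }

    static String get(Jedis jedis, long id) {
        return jedis.hget("obj:" + (id / 1000), String.valueOf(id % 1000));
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            put(jedis, 123456, "hello");
            System.out.println(get(jedis, 123456)); // -> hello
        }
    }
}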
That feature was called Virtual Memory, and it is officially deprecated:
Redis VM is now deprecated. Redis 2.4 will be the latest Redis version featuring Virtual Memory (but it also warns you that Virtual Memory usage is discouraged). We found that using VM has several disadvantages and problems. In the future of Redis we want to simply provide the best in-memory database (but persistent on disk as usual) ever, without considering at least for now the support for databases bigger than RAM. Our future efforts are focused into providing scripting, cluster, and better persistence.
More information about VM: https://redis.io/topics/virtual-memory