On-heap eviction in Apache Ignite

In Apache Ignite, when an entry is evicted from an on-heap cache, is it placed in the off-heap region?
From the docs, it doesn't seem so but looking for a clarification.
The on-heap eviction policies remove cache entries from the Java heap only; the entries stored in the off-heap region of memory are not affected.

Starting with Ignite 2.x, entries are always stored in off-heap memory, and the on-heap option enables a lookup cache on the Java heap for the off-heap entries. When an entry is evicted from the on-heap cache, there is always a backing off-heap counterpart.
Ignite versions before 2.x use different memory modes, and eviction behavior differs depending on the mode configured.
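In Ignite 2.x the on-heap lookup cache is enabled per cache via the onheapCacheEnabled property, usually together with an eviction policy that bounds the heap footprint (only the heap copy is evicted; the off-heap entry stays). A minimal Spring XML sketch, assuming a recent 2.x version that supports eviction policy factories; the cache name and maxSize are illustrative:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- Illustrative cache name -->
    <property name="name" value="myCache"/>

    <!-- Enable the on-heap lookup cache for off-heap entries -->
    <property name="onheapCacheEnabled" value="true"/>

    <!-- Bound the number of entries kept on the Java heap;
         evicted entries remain available off-heap -->
    <property name="evictionPolicyFactory">
        <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
            <property name="maxSize" value="100000"/>
        </bean>
    </property>
</bean>
```

Without an eviction policy, the on-heap cache is unbounded, so configuring one is the usual way to keep heap usage under control.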

Related

Moving from Caffeine to Redisson/Redis with AWS ElastiCache: CPU increase

We are moving an in-memory cache implementation of a DB results cache to Redis (AWS ElastiCache). In performance tests, the Redisson-based Redis implementation shows more CPU usage (about 30 to 50% higher).
The blue line is the Redisson/Redis implementation of the distributed cache and the yellow line is the in-memory Caffeine implementation. Is this an expected, legitimate increase due to more I/O, or is some Redisson configuration tuning needed?

Used Memory on GCP Memorystore instance despite no data in redis

We just created this GCP Memorystore instance for Redis. It shows 0.22 GB already used; however, we are 100% certain that there is no data in the Redis cache. We connected to the Memorystore instance via a Compute Engine VM and ran FLUSHALL to ensure that the cache is empty. What could the 0.22 GB of used memory be?
Based on this documentation, when you use the Standard Tier for your Redis instance, Memorystore provisions an extra ~10% of instance capacity as a reserved replication buffer. For example, on a hypothetical 2 GB Standard Tier instance, that reserve alone would account for roughly 2 GB × 10% ≈ 0.2 GB of reported used memory, which is in line with the 0.22 GB you are seeing.

Ignite data backup in hard disk

So I'm totally new to Ignite here. Is there any configuration or strategy to export all data present in the cache memory to the local hard disk in Ignite?
Basically, what I'm hoping for is some kind of logger/snapshot that shows the change in data when any SQL update operation is performed on the data present in the caches.
If someone could suggest a solution, I'd appreciate it a lot.
You can create and configure a persistence store for any cache [1]. If the cluster is restarted, all the data will still be in the store and can be reloaded into memory using the IgniteCache#loadCache(..) method. Out of the box, Ignite provides integration with RDBMSs [2] and Cassandra [3].
Additionally, in one of the future versions (most likely 2.1), Ignite will provide a local disk persistence store that allows running with a cold cache, i.e. without explicit reloading after a cluster restart. I would recommend monitoring the dev and user Apache Ignite mailing lists for more details.
[1] https://apacheignite.readme.io/docs/persistent-store
[2] https://apacheignite-tools.readme.io/docs/automatic-rdbms-integration
[3] https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra
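To illustrate the persistence store approach from [1], here is a minimal Spring XML sketch of a cache backed by a CacheStore implementation. com.example.MyCacheStore is a hypothetical class implementing Ignite's CacheStore interface (e.g. writing to a local database or files); the property names are from the standard CacheConfiguration:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="personCache"/>

    <!-- Read misses go to the store; writes are propagated to it -->
    <property name="readThrough" value="true"/>
    <property name="writeThrough" value="true"/>

    <!-- Factory for a hypothetical CacheStore implementation -->
    <property name="cacheStoreFactory">
        <bean class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
            <constructor-arg value="com.example.MyCacheStore"/>
        </bean>
    </property>
</bean>
```

After a cluster restart, calling cache.loadCache(null) preloads the data from the store back into memory, as described above.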

Does Redis delete all the keys when a master and its slave fail in a Redis cluster?

I have a question. Suppose I am using a Redis cluster with 3 shards (each with a master and a slave). I have read that if a master and its slave fail at the same time, the Redis cluster is not able to continue operating. What happens after that?
Would the Redis cluster delete all the keys from the other 2 nodes as well (when it comes back)?
Do we need to manually restart this cluster, and can we somehow retain the keys/values on the other nodes?
How will it behave if I use Azure Redis Cache?
Thanks in advance.
1. Would Redis cluster delete all the other keys from other 2 nodes as well? (When it comes back)
First of all, only the operations are blocked, not the cluster activity, and nothing is done with the data. As the documentation says:
Redis Cluster failure detection is used to recognize when a master or slave node is no longer reachable by the majority of nodes and then respond by promoting a slave to the role of master. When slave promotion is not possible the cluster is put in an error state to stop receiving queries from clients.
Next, regarding whether the data gets deleted or not (from the Replication document):
In setups where Redis replication is used, it is strongly advised to have persistence turned on in the master
This means that you will lose the data only if persistence was turned off and the master/slave pair went down. In that case, when the pair comes back up, you will not be able to recover the data. So keep Redis persistence turned on.
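For reference, persistence is controlled in redis.conf. A minimal sketch enabling both RDB snapshots and the append-only file (the thresholds shown are illustrative, not recommendations for your workload):

```conf
# RDB snapshots: dump to disk if at least 1 key changed in 900s,
# or at least 10 keys changed in 300s
save 900 1
save 300 10

# Append-only file: log every write for much stronger durability
appendonly yes
appendfsync everysec
```

With either mechanism on, a restarted master reloads its dataset from disk instead of coming back empty.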
2. Do we need to manually restart this cluster and can we somehow retain the other keys values (on other nodes)?
I think the answer above covers this as well.
3. How will it behave if I use Azure Redis Cache?
From Azure Redis Cache FAQ
High Availability/SLA: Azure Redis Cache guarantees that a Standard/Premium cache will be available at least 99.9% of the time. To learn more about our SLA, see Azure Redis Cache Pricing. The SLA only covers connectivity to the Cache endpoints. The SLA does not cover protection from data loss. We recommend using the Redis data persistence feature in the Premium tier to increase resiliency against data loss.
So it's kinda their headache
OR
Redis Cluster: If you want to create caches larger than 53 GB or want to shard data across multiple Redis nodes, you can use Redis clustering which is available in the Premium tier. Each node consists of a primary/replica cache pair for high availability. For more information, see How to configure clustering for a Premium Azure Redis Cache.

Redis out of memory, even with allkeys-lru policy

I have a Redis server with maxmemory 512MB and maxmemory-policy allkeys-lru but once the server has filled up after a day of usage, I can't add any more items:
redis 127.0.0.1:6379[3]> set foooo 123
(error) OOM command not allowed when used memory > 'maxmemory'.
IMHO that never should happen with the LRU policy.
I copied some server info to this Pastebin: http://pastebin.com/qkax4C7A
How can I solve this problem?
Note: I'm trying to use maxmemory because my Redis server continuously eats up memory, even though nearly all keys have an expire set, and because FLUSHDB does not release system memory. Perhaps this is related.
In the end I'm trying to use Redis as a cache.
Your info output suggests that a lot of your server's memory is taken by Lua scripts:
used_memory_lua:625938432
Note that Lua scripts remain in memory until the server is restarted or SCRIPT FLUSH is called. It would appear as if you're generating Lua scripts on the fly...
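To confirm this and reclaim the memory, you can check the Lua-related counter and flush the script cache. A redis-cli session sketch, in the same style as the snippet above (the exact byte count will of course differ on your server):

```
redis 127.0.0.1:6379> INFO memory
...
used_memory_lua:625938432
...
redis 127.0.0.1:6379> SCRIPT FLUSH
OK
```

The longer-term fix is to stop generating unique scripts on the fly: parameterize a single script through KEYS and ARGV and invoke it with EVALSHA, so one cached script is reused instead of the cache growing without bound.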