I have a multi-node Ignite cluster running Ignite 1.7.0. It was just hit with this error, which caused the whole cluster to fail to start up:
org.apache.ignite.IgniteCheckedException: Affinity key backups mismatch (fix affinity key backups in cache configuration or set -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true system property) [cacheName=ignite-atomics-sys-cache, localAffinityKeyBackups=1, remoteAffinityKeyBackups=0, rmtNodeId=d663345e-x7ba-5c85-6144-1234a7d3f721]
    at org.apache.ignite.internal.processors.cache.GridCacheUtils.checkAttributeMismatch(GridCacheUtils.java:1144) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.processors.cache.GridCacheProcessor.checkCache(GridCacheProcessor.java:2915) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.processors.cache.GridCacheProcessor.onKernalStart(GridCacheProcessor.java:756) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:930) [ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1739) [ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1589) [ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042) [ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:569) [ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:516) [ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.Ignition.start(Ignition.java:322) [ignite-core-1.7.0.jar:1.7.0]
What does "fix affinity key backups in cache configuration" mean?
This error means that different nodes have different numbers of backups configured for atomic data structures. The value is set via the AtomicConfiguration#backups property, and you need to make sure it is the same on all nodes.
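For example, a minimal sketch of setting it programmatically when starting each node (the value 1 is arbitrary; what matters is that every node uses the same value):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.AtomicConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class NodeStartup {
        public static void main(String[] args) {
            // Atomic structures (IgniteAtomicLong, IgniteAtomicSequence, ...)
            // live in the internal ignite-atomics-sys-cache; its backup count
            // comes from AtomicConfiguration and must match on every node.
            AtomicConfiguration atomicCfg = new AtomicConfiguration();
            atomicCfg.setBackups(1); // must be identical across the cluster

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setAtomicConfiguration(atomicCfg);

            Ignition.start(cfg);
        }
    }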
Related
Is there a way to promote a replica node to primary with cluster mode enabled? I have googled but only found solutions for cluster mode disabled.
You can send the CLUSTER FAILOVER command to the replica to promote it to be the new master. However, you must ensure that the replica is already known to a majority of the masters in the cluster.
Check the docs on manual failover for details.
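If you drive this from code rather than redis-cli, a minimal sketch with Jedis (host and port are placeholders for your replica node):

    import redis.clients.jedis.Jedis;

    public class PromoteReplica {
        public static void main(String[] args) {
            // Connect directly to the replica that should become master.
            try (Jedis replica = new Jedis("replica-host", 6379)) {
                // Same as issuing CLUSTER FAILOVER via redis-cli: the replica
                // negotiates a manual failover with its current master.
                String reply = replica.clusterFailover();
                System.out.println(reply); // "OK" if the failover was accepted
            }
        }
    }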
In a Redis cluster, what is the default behaviour for read operations? Does a client read from the master? I know that a client sends writes/updates/deletes to the master, but what about reads? If the default behaviour is to read from the master node, how can I configure it to read from the slave nodes instead?
It all depends on the Redis client library. For Jedis/Lettuce, all operations (CRUD) are sent to the corresponding master node, and slaves are only used for failover.
If you want reads to go to a READONLY slave, you may need some customization on the Redis client, depending on its version.
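That said, recent Lettuce versions can do this without custom code via the ReadFrom setting. A minimal sketch, assuming Lettuce 5.2+ (host and key names are placeholders):

    import io.lettuce.core.ReadFrom;
    import io.lettuce.core.cluster.RedisClusterClient;
    import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

    public class ReplicaReads {
        public static void main(String[] args) {
            RedisClusterClient client =
                RedisClusterClient.create("redis://cluster-node-host:6379");

            StatefulRedisClusterConnection<String, String> conn = client.connect();

            // Route reads to replicas; Lettuce sends READONLY on the replica
            // connections for you. Writes still go to the masters.
            conn.setReadFrom(ReadFrom.REPLICA);

            System.out.println(conn.sync().get("some-key")); // served by a replica

            conn.close();
            client.shutdown();
        }
    }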
The default behavior of replica nodes in cluster-mode enabled clusters is to redirect all client read/write requests to an authoritative master node of the shard that owns the key's hash slot. A replica serves a read request only if the key's hash slot belongs to its shard and the client issued the READONLY command on the connection first; otherwise the request is redirected to the primary node of the shard that owns the hash slot.
https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-redis-client-readonly/
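To see that handshake at the protocol level, here is a minimal sketch with Jedis (host, port, and key are placeholders): the client issues READONLY on a direct connection to a replica, after which reads for that shard's slots are served locally rather than redirected.

    import redis.clients.jedis.Jedis;

    public class ReadonlyHandshake {
        public static void main(String[] args) {
            // Connect directly to a replica node.
            try (Jedis replica = new Jedis("replica-host", 6379)) {
                // Without this, the GET below would come back as a MOVED
                // redirect to the shard's master.
                replica.readonly();

                // Served by the replica, as long as the key hashes to a
                // slot owned by this replica's shard.
                System.out.println(replica.get("some-key"));
            }
        }
    }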
I have configured two slaves and one master using Sentinel.
I have turned off persistence on all the servers, yet the sync between master and slave still happens via the BGSAVE command.
So should I assume that, even though I have persistence off, Redis is still persisting, since it created an RDB file to sync the data?
You probably have to use diskless replication to keep Redis from using BGSAVE. With diskless replication the master sends the data to its slaves over a socket instead of through a file.
https://deepsource.io/blog/redis-diskless-replication/
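For reference, a minimal sketch of flipping that setting at runtime with Jedis (host/port are placeholders; the same effect comes from putting repl-diskless-sync yes in redis.conf, available since Redis 2.8.18):

    import redis.clients.jedis.Jedis;

    public class EnableDisklessReplication {
        public static void main(String[] args) {
            try (Jedis master = new Jedis("master-host", 6379)) {
                // Master streams the RDB snapshot straight to replica
                // sockets instead of writing it to disk first.
                master.configSet("repl-diskless-sync", "yes");

                // Optional: wait a few seconds so several replicas can
                // attach to the same transfer.
                master.configSet("repl-diskless-sync-delay", "5");
            }
        }
    }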
Now that Redis Cluster comes with sharding, replication, and automatic failover, do I still need to use Sentinel for failover handling?
No. Sentinel is for managing availability and providing service discovery when using Redis in single-instance mode (one master with one or more slaves). When using Redis in cluster mode, Sentinel isn't needed.
I am planning on adding Redis to our application as a session and cache store. I have been looking at how to make Redis highly available on an on-premise hosted solution.
The standard approach appears to be to set up Redis as a 3-node replica set and use Sentinel for monitoring and automatic failover.
Redis 2.8 introduces Redis Cluster. Does that mean it brings automatic failover etc., and we no longer need to use Sentinel?
No, Cluster and Failover are different scenarios. Also Cluster is in 3.0, not 2.8.
The standard (and minimum) setup for HA is a master and one slave (aka "a pod"), with a separate set of three nodes which run Sentinel and monitor the pod.
This is to ensure failover of the server. However, either your client library has to support using Sentinel to discover the current master and reconnect on failure (see the sketch below), or you implement that in your code, or you set up a TCP load balancer plus a Sentinel-monitoring daemon that updates the load balancer configuration when a failover occurs, at which point the client code doesn't know or care about Sentinel.
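As a sketch of the client-library route, here is what Sentinel-based discovery might look like with Jedis (sentinel addresses and the master name "mymaster" are placeholders matching a typical sentinel.conf):

    import java.util.HashSet;
    import java.util.Set;

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisSentinelPool;

    public class SentinelDiscovery {
        public static void main(String[] args) {
            Set<String> sentinels = new HashSet<>();
            sentinels.add("sentinel1-host:26379");
            sentinels.add("sentinel2-host:26379");
            sentinels.add("sentinel3-host:26379");

            // The pool asks the Sentinels who the current master is and
            // reconnects to the new master after a failover.
            try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
                 Jedis jedis = pool.getResource()) {
                jedis.set("session:42", "data");
            }
        }
    }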
Cluster isn't there to provide HA; it is there for server-side sharding of data. For Cluster you're looking at a minimum of 6-7 nodes (3 masters, 3 slaves, 1 spare), as well as Cluster support in the client and restrictions on commands and Lua scripts that need to access multiple keys.