I have configured Redis in non-cluster mode, and for specific hours of the day I want to disable it or make it read-only (either would do). I found that in cluster mode the READWRITE command can be used to disable queries to a replica node if they were earlier enabled with READONLY. I tried the CLIENT PAUSE command, but that didn't work; it seems even that is meant for cluster mode.
Related
Can Redis run with cluster mode enabled but with a single instance? It seems we get the cluster status as fail when we try to deploy with a single node, because no slots are added to it.
https://serverfault.com/a/815813/510599
I understand we can manually add slots after deployment, but I wonder if we can modify the Redis source code to make this change.
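For reference, the manual slot assignment can also be scripted from a client instead of patching the source. A minimal sketch using the Jedis Java client, assuming a single freshly started node with cluster-enabled yes (the host and port are placeholders):

```java
import java.util.stream.IntStream;
import redis.clients.jedis.Jedis;

public class AssignAllSlots {
    public static void main(String[] args) {
        // Connect to the single cluster-enabled node (placeholder address).
        try (Jedis jedis = new Jedis("127.0.0.1", 7000)) {
            // Assign all 16384 hash slots to this node so cluster_state can become "ok".
            int[] allSlots = IntStream.range(0, 16384).toArray();
            jedis.clusterAddSlots(allSlots);

            // Check the result: cluster_state should report "ok" once slots are covered.
            System.out.println(jedis.clusterInfo());
        }
    }
}
```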
What is Ignite maintenance mode, and how do I switch a node into this mode? I was stuck joining a node to the cluster: it complains about cleaning up the persistent data, yet the data can be cleaned (using control.sh) only in maintenance mode.
This is a special mode, similar to running Windows in safe mode after a crash or data corruption: most of the cluster functionality is disabled and the user is asked to perform some maintenance task to resolve the issue. The most straightforward example I can think of is cleaning (removing) some corrupted files on disk, just like in your question. You can refer to the IEP-53: Maintenance Mode proposal for the details.
I don't think there is a way to enter this mode manually unless you trigger some preconfigured condition, such as stopping a node in the middle of a checkpoint with WAL disabled. Once the state is fixed, maintenance mode should be resolved automatically, allowing the node to join the cluster.
Also, from my understanding, this mode applies to a particular node rather than the whole cluster. I.e. you can have a 4-node cluster with only one node in maintenance mode; in that case you have to run the control.sh commands locally against the concrete failed node, not from another healthy node. If that's not the case, please provide more details or file a JIRA ticket, because the reported behavior looks quite broken to me.
We have a 3-node GridGain server cluster and 3 client nodes deployed in GCP Kubernetes Engine. The cluster has native persistence enabled, and <property name="shutdownPolicy" value="GRACEFUL"/> is set as the shutdown policy. There is one backup for each cache. After an automatic cluster restart we get partition loss and need to reset these partitions by executing control commands.
Can you suggest a proper solution for this? We have around 60 GB of persistent data.
<property name="shutdownPolicy" value="GRACEFUL"/> is supposed to protect from partition loss if certain conditions are met:
1. The caches must be either PARTITIONED with backups > 0 or REPLICATED. Check your configs. The default cache config in Ignite is PARTITIONED with backups = 0 (for historical reasons), so the defaults won't work. See the Java sketch after this list.
2. There must be more than one baseline node (only baseline nodes store data!). Here is the doc.
3. You must stop the nodes in a graceful way. This is a bit tricky, since you don't always control it:
   - If you stop by killing the process, make sure it uses SIGTERM and not SIGKILL, because the latter always kills the process immediately.
   - If you stop with Ignite.close(), this should just work.
   - If you stop with Java's System.exit() it will work, but if you use Runtime.halt() it won't (because halt() is not graceful).
   - If you use orchestrators such as Kubernetes, you need to make sure they stop the nodes gracefully. For example, in Kubernetes you normally have to set terminationGracePeriodSeconds to a high value so that Kubernetes waits for the nodes to finish the graceful shutdown instead of killing them.
   - If you use custom startup scripts, you need to make sure they forward signals to the Ignite process.
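As a quick illustration of points 1 and 3, here is a minimal Java sketch (the cache name and region settings are made up for the example, not taken from your config) that sets one backup, enables native persistence, applies the GRACEFUL shutdown policy, and stops the node via Ignite.close():

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.ShutdownPolicy;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class GracefulNode {
    public static void main(String[] args) {
        // Cache with at least one backup (the default is backups = 0, which is not enough).
        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
        cacheCfg.setCacheMode(CacheMode.PARTITIONED);
        cacheCfg.setBackups(1);

        // Native persistence, as in the question.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.setDefaultDataRegionConfiguration(
            new DataRegionConfiguration().setPersistenceEnabled(true));

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setCacheConfiguration(cacheCfg)
            .setDataStorageConfiguration(storageCfg)
            .setShutdownPolicy(ShutdownPolicy.GRACEFUL); // programmatic equivalent of the XML property

        Ignite ignite = Ignition.start(cfg);
        // (Cluster activation / baseline topology setup omitted for brevity.)

        // ... do work ...

        // Graceful stop: the node waits for data safety before leaving the topology.
        ignite.close();
    }
}
```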
To debug this, check the points above. I would normally start by looking at the server logs (with IGNITE_QUIET=false!) to see whether the "Invoking shutdown hook" message is there. If it isn't, then your shutdown hook isn't getting called, and the problem is one of the sub-points under 3. Otherwise, there should be other log messages explaining the situation.
I want to read directly from a replica inside a Redis cluster, but I am getting redirected even though the key actually exists on that Redis node.
I checked the READONLY command docs. However, it seems to be ignored in the case of Redis Cluster.
How should I configure the Redis cluster so that I can read directly from a replica node for reads issued against its hash slots? I am OK with reading stale data in this case.
The READONLY command (as well as the READWRITE command that counteracts it) is applied per connection in a clustered environment.
While READWRITE is the default behavior, to change that you'll need to call READONLY in the context of the connection that you're using.
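For example, with the Lettuce Java client you can have the driver issue READONLY on the replica connections for you by setting the read-from policy. A minimal sketch, assuming a placeholder seed address and that stale reads are acceptable (as you said):

```java
import io.lettuce.core.ReadFrom;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

public class ReplicaReads {
    public static void main(String[] args) {
        // Any cluster node works as a seed (placeholder address).
        RedisClusterClient client = RedisClusterClient.create("redis://127.0.0.1:7000");

        StatefulRedisClusterConnection<String, String> conn = client.connect();

        // Route reads to replicas; Lettuce sends READONLY on the replica connections.
        conn.setReadFrom(ReadFrom.REPLICA);

        String value = conn.sync().get("some-key"); // may be slightly stale
        System.out.println(value);

        conn.close();
        client.shutdown();
    }
}
```

If you connect directly to a replica with redis-cli or a raw client instead, issuing READONLY on that connection before your GETs achieves the same effect.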
I have configured a 2-node cluster in JBoss AS 7.1.1-Final. I am planning to use sticky sessions. Meanwhile I am also recording the number of active online users in an Infinispan cache, along with the IP of the node where each user session was created, for reporting purposes.
I have taken care of the login/logout scenarios, where I clear our cache entries. The problem is that if one of the server nodes goes down, I need a clean-up routine to clear that node's records from the cache too.
One option is to write a client that checks at a specific interval whether the server is alive, and otherwise triggers a clean-up routine. This approach would work, but I am looking for a cleaner approach: if a server node failure could be detected and the other live nodes notified, I could run the cleanup then.
From the console I know that it shows when a server goes down or comes up. But what would be the listener to listen to such events? Any thoughts?
If you just need to know when a node leaves from within some server module (inside the JBoss server), you can use the ViewChanged listener (see the sketch below).
You cannot get this information on clients connected via the REST or memcached protocols. With the HotRod protocol it is doable but pretty hackish: you'd have to override TransportFactory.updateServers (probably just extend TcpTransportFactory - see the configuration property infinispan.client.hotrod.transport_factory).
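For the server-module case, here is a minimal sketch of a cache-manager listener; the class name and the cleanup helper are made up for the example, and the exact packages may differ slightly between Infinispan versions:

```java
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged;
import org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent;
import org.infinispan.remoting.transport.Address;

@Listener
public class NodeLeftListener {

    @ViewChanged
    public void onViewChanged(ViewChangedEvent event) {
        // Members present in the old view but missing from the new one have left (or crashed).
        for (Address member : event.getOldMembers()) {
            if (!event.getNewMembers().contains(member)) {
                cleanupEntriesFor(member); // hypothetical cleanup routine for that node's records
            }
        }
    }

    private void cleanupEntriesFor(Address leftMember) {
        // Remove the "online user" entries that were created on the departed node.
    }
}
```

Register it on the cache manager, e.g. cacheManager.addListener(new NodeLeftListener()); the callback then fires on every remaining node whenever the cluster view changes.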