Is there a way to restrict writes to master only in Redis?

I have a Redis cluster created in master-slave mode. I want to create a Redisson client to access the cluster, but I want to specify separate endpoints for reads and writes: writes should go to the master and reads should happen from the slaves. There is a readMode config that I can set to SLAVE to read only from slave nodes, but how do I restrict writes to the master only?

In Redis, writes happen only on master nodes, so you don't need a separate config to handle that.

There is a readMode config that I can set to SLAVE to read only from slave nodes, but how do I restrict writes to the master only?
Writes are executed on master nodes.
All commands are executed on the master nodes by default: while Redis Enterprise allows for multi-master active-active clusters, Redis (Open Source) only allows one master node and zero or more replicas per slot range. In all cases, all nodes can receive both read and write commands but, by default, replicas reply with a -MOVED redirection error along with the endpoint of the master that is believed to handle the given target key. Clients may use that information to contact the master, which will actually execute the command.
With that being said, replicas can be configured to reply to read-only commands, provided they handle the slot range of the given target key. In that context, most cluster-aware Redis clients allow reading from replicas with the goal of distributing the load, at the risk of reading stale data: Redisson manages that through the readMode setting and automatically deals with the aforementioned connection configuration.
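For example, a minimal Redisson cluster setup might look like the sketch below (the node addresses are placeholders, not real endpoints). With readMode set to SLAVE, Redisson dispatches read commands to replicas, while write commands always go to the master owning the key's slot:

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import org.redisson.config.ReadMode;

public class ClusterReadFromReplicas {
    public static void main(String[] args) {
        Config config = new Config();
        config.useClusterServers()
              // placeholder addresses; replace with your own cluster nodes
              .addNodeAddress("redis://10.0.0.1:6379", "redis://10.0.0.2:6379")
              // read commands are sent to replicas; writes still go to masters
              .setReadMode(ReadMode.SLAVE);
        RedissonClient redisson = Redisson.create(config);

        redisson.getBucket("greeting").set("hello");          // write, routed to the slot's master
        Object value = redisson.getBucket("greeting").get();  // read, routed to a replica
        System.out.println(value);

        redisson.shutdown();
    }
}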

Related

Does setting "slave-read-only no" make the slave confirm every hash lookup with the master?

I want to configure a slave to enable writes (slave-read-only no). The use case is to enable an ephemeral cache.
However, this paragraph in the documentation made me concerned:
Normally slave nodes will redirect clients to the authoritative master for the hash slot involved in a given command, however clients can use slaves in order to scale reads using the READONLY command.
– http://redis.io/commands/readonly
Does setting slave-read-only no make the slave confirm every hash lookup with the master?
Please note that the slave-read-only config refers to replication, while READONLY is a Redis Cluster command.
If you are not using redis-cluster, you can safely ignore the READONLY command documentation. Refer to https://raw.githubusercontent.com/antirez/redis/2.8/redis.conf instead. Writes to the slave should not replicate nor require lookups to the master. My Wireshark dumps on Redis with slave-read-only no show no indication of any communication with the master as a consequence of writes to the slave itself.
If you are using redis-cluster on the other hand, and referring to the READWRITE behavior: cluster nodes' communication with each other for hash slot updates and other cluster-specific messages is optimized to use minimal bandwidth and the least processing time. Communicating hash slot updates most likely does not happen for every write on the slave.
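If you want to check this yourself outside of redis-cluster, a rough sketch with the Jedis client (the replica host and port below are placeholders) is to flip slave-read-only at runtime and write directly to the replica; the write is served locally and is not forwarded to, or confirmed with, the master:

import redis.clients.jedis.Jedis;

public class WriteToReplica {
    public static void main(String[] args) {
        // placeholder address of the replica, not the master
        try (Jedis replica = new Jedis("replica-host", 6379)) {
            replica.configSet("slave-read-only", "no"); // allow writes on this replica
            replica.set("ephemeral:key", "value");      // handled locally by the replica
            System.out.println(replica.get("ephemeral:key"));
        }
    }
}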

Redis connect single instance slave (slave of) to cluster or sentinel

When running a single Redis instance, I can use "slaveof" to create one (or as many as I like) read-only replicas of this one Redis node.
When using Redis Cluster, I split my data into partitions (masters) and can create a slave for each partition.
Is it possible to treat this cluster as a single instance and connect a "slaveof" slave to this cluster which will hold a replica of all data in the cluster, and not just the partition of the connected node?
If this is not possible with Redis Cluster, might it be a working solution when using Sentinel?
Our current problem:
We are using the "slaveof" feature together with keepalived to fail over our Redis instance on an outage of the master.
But we have lots of "slaveof" slaves connected to the virtual IP of the failover setup to deliver cached data.
Now every time the system fails over (e.g. for maintenance), all connected slaves have a timeout of up to 30 seconds while they resync their data with the new master.
We already played with all possible Redis config parameters but can't get this syncing time any shorter (e.g. by relying on the replication backlog, which isn't available on the new master after the failover).
Anyone any ideas?
There is a very good doc here: http://redis.io/presentation/Redis_Cluster.pdf, here: http://fr.slideshare.net/NoSQLmatters/no-sql-matters-bcn-2014 (slide #9), and better still: https://www.javacodegeeks.com/2015/09/redis-clustering.html
If you want a "slave" of the whole cluster in Redis Cluster mode, you need to replicate all of the nodes.
Regards,
Well, I just read this article:
https://seanmcgary.com/posts/how-to-build-a-fault-tolerant-redis-cluster-with-sentinel
The author used a single master (rather than Redis Cluster), with 2 slaves per master instead of one, and let Redis Sentinel take care of electing a slave to master when the master is down.
You could play with this setup to see if the election of the master occurs quickly. While it's happening, clients would be served by a slave and should experience no downtime.
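To see how quickly clients pick up the newly elected master, you can point them at the Sentinels rather than at a virtual IP. A minimal sketch with Jedis (the master name and sentinel addresses are placeholders matching whatever is in your sentinel.conf) asks the Sentinels for the current master, so after a failover new connections automatically land on the promoted node:

import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelAwareClient {
    public static void main(String[] args) {
        // placeholder sentinel addresses and master name from sentinel.conf
        Set<String> sentinels = new HashSet<>();
        sentinels.add("10.0.0.10:26379");
        sentinels.add("10.0.0.11:26379");
        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels)) {
            try (Jedis master = pool.getResource()) { // always the current master
                master.set("failover:test", "ok");
                System.out.println(master.get("failover:test"));
            }
        }
    }
}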

Does Redis delete all the keys when one master and its slave fail in a Redis cluster?

I have a question. Suppose I am using a Redis cluster with 3 shards (each with a master and a slave). I came to know that if a master and its slave fail at the same time, Redis Cluster is not able to continue to operate. What happens after that?
Would Redis Cluster delete all the other keys from the other 2 nodes as well (when it comes back)?
Do we need to manually restart this cluster, and can we somehow retain the other keys' values (on other nodes)?
How will it behave if I use Azure Redis Cache?
Thanks In Advance
1. Would Redis Cluster delete all the other keys from the other 2 nodes as well (when it comes back)?
First of all, only operations are blocked, not the cluster activity, and nothing is done with the data, as the documentation says:
Redis Cluster failure detection is used to recognize when a master or slave node is no longer reachable by the majority of nodes and then respond by promoting a slave to the role of master. When slave promotion is not possible the cluster is put in an error state to stop receiving queries from clients.
Next, regarding whether the data gets deleted or not (from the Replication document):
In setups where Redis replication is used, it is strongly advised to have persistence turned on in the master
Which means that you will lose the data only if persistence was turned off and the master/slave pair went down. When the pair comes back up, you will not be able to recover the data. So keep Redis persistence turned on (a short sketch for turning it on follows after answer 3).
2. Do we need to manually restart this cluster, and can we somehow retain the other keys' values (on other nodes)?
I think the above answer covers it.
3. How will it behave if I use Azure Redis Cache?
From Azure Redis Cache FAQ
High Availability/SLA: Azure Redis Cache guarantees that a Standard/Premium cache will be available at least 99.9% of the time. To learn more about our SLA, see Azure Redis Cache Pricing. The SLA only covers connectivity to the Cache endpoints. The SLA does not cover protection from data loss. We recommend using the Redis data persistence feature in the Premium tier to increase resiliency against data loss.
So it's kinda their headache
OR
Redis Cluster: If you want to create caches larger than 53 GB or want to shard data across multiple Redis nodes, you can use Redis clustering which is available in the Premium tier. Each node consists of a primary/replica cache pair for high availability. For more information, see How to configure clustering for a Premium Azure Redis Cache.
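Coming back to point 1 and keeping persistence turned on: a rough Jedis sketch (the address is a placeholder, and you would repeat this against every master) could look like the following. Note that CONFIG SET only changes the running instance, so the setting should also be added to redis.conf (or saved with CONFIG REWRITE) to survive restarts:

import redis.clients.jedis.Jedis;

public class EnableAofPersistence {
    public static void main(String[] args) {
        // placeholder address; run this against each master in the cluster
        try (Jedis node = new Jedis("10.0.0.1", 6379)) {
            node.configSet("appendonly", "yes");            // turn on AOF persistence
            System.out.println(node.configGet("appendonly")); // verify the running value
        }
    }
}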

Redis - Tomcat Session Manager : Read from Slave

I am using Redis (Redis 3.1) as a session store for Tomcat (Tomcat 7). To ensure high availability, there is a Sentinel setup and two instances (master and slave) of the Redis server. The slave is configured as read-only. After running a few tests and verifying the statistics, it's observed that there are no read requests sent to the slave. All the read requests are processed by the master alone.
Could you please let me know how I can make the slave serve the read requests?
You could use the Redis-based Tomcat Session Manager provided by Redisson. It allows you to manage which type of node is used for read operations (master, slave, or both master and slave). It works perfectly in Sentinel/Cluster modes.
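The Tomcat manager is driven by a Redisson configuration file, but the relevant settings map onto the programmatic API. A minimal sketch for your setup might look like this (the master name and sentinel addresses are placeholders); with readMode set to SLAVE, session reads are directed to the slave:

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import org.redisson.config.ReadMode;

public class SentinelReadFromSlave {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSentinelServers()
              .setMasterName("mymaster")                      // placeholder master name
              .addSentinelAddress("redis://10.0.0.10:26379",  // placeholder sentinels
                                  "redis://10.0.0.11:26379")
              .setReadMode(ReadMode.SLAVE);                   // route reads to the slave
        RedissonClient redisson = Redisson.create(config);
        // ... use the client, or mirror these settings in the session manager's config file
        redisson.shutdown();
    }
}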

Redis DB Master Slave set up

I have installed Redis for my Node.js application and configured it to be a slave of another Redis instance running on a different server. Can I have the same instance (different DB) of Redis (running as a slave) act as a master for a locally installed application?
Thanks in advance
Yes, you can, but with a big caveat.
Any slave instance can be the master of one or several other instances. So you can imagine daisy-chaining slaves and building a hierarchical replication system.
Now, my understanding is you don't need your slave to feed another Redis instance, but just to allow an application to perform read/write operations in another database of the slave instance.
To allow it, you need to set the value of the slave-read-only parameter to "no" in the slave configuration:
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extend you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only no
Now, all the write operations you run on this instance will be ephemeral. If the link is lost between the master and the slave, the slave will completely synchronize again with the master. All the data you have set on the slave will be lost. If you stop and restart the slave, you will also lose all the ephemeral data.
It may or may not suit your needs.
There is no way to parameterize the synchronization (or the persistence options) at the database level. You cannot tell Redis to synchronize a given database and not another one. Configuration always applies at the instance level.
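As a rough illustration of the ephemeral-cache pattern described above, a Jedis sketch could write the application's local data into a database the master does not use (the slave address and database index are placeholder assumptions, and slave-read-only must already be set to "no" as shown in the configuration excerpt):

import redis.clients.jedis.Jedis;

public class EphemeralWritesOnSlave {
    public static void main(String[] args) {
        // placeholder address of the slave instance (requires slave-read-only no)
        try (Jedis slave = new Jedis("slave-host", 6379)) {
            slave.select(1);                      // a database the master does not write to
            slave.set("local:cache:item", "42");  // ephemeral: lost on resync or restart
            System.out.println(slave.get("local:cache:item"));
        }
    }
}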