Redis Cluster or Replication without proxy

Is it possible to build a one-master (port 6378) + two-slave (read-only, ports 6379 and 6380) "cluster" on one machine to improve performance (especially reads), without using any proxy? Can the site or application code connect to the master instance and read data from the read-only nodes, or do I have to use a proxy anyway if I run 3 instances of Redis?
Edit: It seems like the slave nodes don't have any data and instead try to redirect to the master instance, but that is not the correct behaviour, am I right?

Definitely. You can code the paths in your app so that writes and reads go to different servers. Depending on the programming language and the Redis client you're using, this may be easier or harder to achieve.
Edit: that said, I'm unsure how you're running a cluster with a single master - the minimum should be three.
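For illustration, here is a minimal C# sketch of that approach using StackExchange.Redis, assuming the ports from the question; the key name is made up, and any client with equivalent features would do:

using System;
using StackExchange.Redis;

class SplitReadsDemo
{
    static void Main()
    {
        // One connection for writes (the master) ...
        var writer = ConnectionMultiplexer.Connect("localhost:6378");
        // ... and one for reads, spread across the two replicas.
        var reader = ConnectionMultiplexer.Connect("localhost:6379,localhost:6380");

        writer.GetDatabase().StringSet("counter", 42);

        // PreferReplica (PreferSlave in older versions) routes the read to a replica.
        var value = reader.GetDatabase().StringGet("counter", CommandFlags.PreferReplica);
        Console.WriteLine(value);
    }
}

Keep in mind that replication is asynchronous, so a read fired immediately after a write may not see the new value on a replica yet.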

You need to send a READONLY command after connecting to the slave before you can execute any read commands.
The READONLY command only applies to the current socket session, which means you need to send it on every new TCP connection.
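To make the per-connection behaviour concrete, here is a minimal C# sketch over a raw TCP socket (host, port and key are assumptions; real client libraries usually send READONLY for you):

using System;
using System.IO;
using System.Net.Sockets;

class ReadOnlyDemo
{
    // Sends one inline Redis command and returns the first line of the reply.
    static string Send(TextWriter w, TextReader r, string cmd)
    {
        w.Write(cmd + "\r\n");
        w.Flush();
        return r.ReadLine();
    }

    static void Main()
    {
        using var tcp = new TcpClient("localhost", 6379);   // assumed: a cluster replica
        using var stream = tcp.GetStream();
        using var w = new StreamWriter(stream);
        using var r = new StreamReader(stream);

        // Without this, a cluster replica answers reads with -MOVED redirects.
        Console.WriteLine(Send(w, r, "READONLY"));       // +OK
        Console.WriteLine(Send(w, r, "GET some-key"));   // now served locally

        // A new TcpClient would need its own READONLY: the flag dies with the socket.
    }
}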

How to configure Redis clients when connecting to a master-replica setup?

I have a Redis setup with 1 master and 2 replicas - so 3 nodes in total.
Given that writes can happen only via the master node, while reads can happen via all 3 nodes, how do I configure the clients?
Can I pass all node IPs in the list, and will the client take care of connecting to the right node by itself?
Or do the clients need to be configured separately for reads and writes?
It depends on the specific client you are using: some clients automatically split read-only / write commands based on the connection configuration, while others let you specify the preferred replication target at the command or invocation level.
For example, ioredis handles this automatically through the scaleReads configuration option, while StackExchange.Redis lets you control it through the CommandFlags enum at the command invocation level: you should really check the documentation of the specific Redis client you want to use.
Finally, redis-cli does not split read-only / write commands; it can connect (via the -c option) to a Redis Cluster and follow slot redirections automatically, but the connection will always be made against a master node, with no read/write splitting.
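To make the StackExchange.Redis case concrete, here is a hedged sketch; the addresses are placeholders, and the flag is named PreferSlave in older versions of the library:

using System;
using StackExchange.Redis;

class ClientRoutingDemo
{
    static void Main()
    {
        // Pass all three nodes; the client discovers which one is the master.
        var mux = ConnectionMultiplexer.Connect(
            "10.0.0.1:6379,10.0.0.2:6379,10.0.0.3:6379");
        var db = mux.GetDatabase();

        db.StringSet("greeting", "hello");   // writes always go to the master

        // Explicitly prefer a replica; falls back to the master if none is up.
        Console.WriteLine(db.StringGet("greeting", CommandFlags.PreferReplica));
    }
}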

Should I run haproxy for db and redis sentinel on web nodes?

I am setting up a cluster of servers using Vagrant and playing with Redis Sentinel and HAProxy for the PostgreSQL db connection (with pgpool). I was curious whether it makes sense to put HAProxy and Redis Sentinel on each of my web server nodes and have them connect directly to those. The thought is that this creates a distributed connection to the DB and Redis, and reduces the single point of failure of having a single HAProxy that they all connect to before being split across the different db nodes. I can also keep the database connection (via HAProxy) and Redis (via Sentinel) encapsulated on localhost. Does this make sense?
It only makes sense if you're trying to save on resources/costs.
Please note that Redis Sentinel must have a finite list of Sentinel instances, which doesn't fit the scenario of placing one per machine, as your machine count would probably scale/change.
Otherwise, it almost always makes the most sense to put different infrastructure components (especially those with a clustering/HA nature, such as Redis) on different machines.
By mixing them all together, you usually end up with applications getting in each other's way and stealing CPU from each other once the load increases. You also risk designing your applications/scripts/flows to be location aware (i.e. assuming external resources are always local), which is also not really a good practice.

Failing over with single Replication Group on ElastiCache Redis

I'm testing out ElastiCache backed by Redis with the following specs:
Using Redis 2.8, with Multi-AZ
Single replication group
1 master node in us-east-1b, 1 slave node in us-east-1c, 1 slave node in us-east-1d
The writing part of the application uses the master node's endpoint directly (primary-node.use1.cache.amazonaws.com)
The read-only part of the application points to a custom endpoint (readonly.redis.mydomain.com) configured in HAProxy, which then points to the two read-slave endpoints (readslave1.use1.cache.amazonaws.com and readslave2.use1.cache.amazonaws.com)
Now let's say the primary node (master) fails in us-east-1b.
From what I understand, if the master instance fails, I won't have to change the endpoint URL for writing to Redis (primary-node.use1.cache.amazonaws.com), although from there, I still have the following questions:
Do I have to change the endpoint names for the read-only slaves?
How long until the missing slave is added back into the pool?
If there's anything else I'm missing, I'd appreciate the advice/information.
Thanks!
If you are using ElastiCache, you should make use of the "Primary Endpoint" provided by AWS.
That endpoint is actually backed by Route53: if the primary (master) Redis node goes down, then since you have Multi-AZ enabled, it will automatically fail over to one of the read replicas (slaves).
In that case, you don't need to modify the endpoint of your Redis.
I'm not sure why you have such a design; it seems you only want to write to the master but always read from the slaves.
For the HAProxy part, you should include a TCP check for ALL 3 Redis nodes, using their "Read Endpoint".
In HAProxy, you can check whether a node is currently a SLAVE and, if so, have HAProxy direct the read traffic to it.
Notice that at the application layer, if your Redis driver doesn't support auto-reconnect, your script will fail to connect to the new master node.
In addition to "auto reconnect": since AWS uses Route53 DNS to do the failover, some libraries will NOT do the DNS lookup again, which means they keep pointing at the OLD IP, i.e. the old master.
Using HAProxy can solve this problem.
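As a rough illustration, an HAProxy backend along these lines implements that check (a sketch only: node hostnames, port and intervals are assumptions, and each server line should use that node's own read endpoint):

backend redis_readonly
    mode tcp
    balance roundrobin
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:slave
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server node1 readslave1.use1.cache.amazonaws.com:6379 check inter 2s
    server node2 readslave2.use1.cache.amazonaws.com:6379 check inter 2s
    server node3 node3.use1.cache.amazonaws.com:6379 check inter 2s

Only nodes currently reporting role:slave pass the check, so after a failover the promoted and demoted nodes drop in and out of the read pool automatically.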

How does ServiceStack PooledRedisClientManager failover work?

According to the git commit messages, ServiceStack has recently added failover support. I initially assumed this meant that I could pull one of my Redis instances down, and my pooled client manager would handle the failover elegantly and try to connect to one of my alternate Redis instances. Unfortunately, my code just bugs out and says that it can't connect to the initial Redis instance.
I am currently running instances of Redis 2.6.12 on Windows, with the master at port 6379 and a slave at 6380, and with Sentinels set up to automatically promote the slave to master if the master goes down. I am currently instantiating my client manager like this:
PooledRedisClientManager pooledClientManager =
    new PooledRedisClientManager(
        new[] { "localhost:6379" },
        new[] { "localhost:6380" });
where the first array contains the read-write hosts (for the master), and the second array the read-only hosts (for the slave).
When I terminate the master at port 6379, the Sentinels promote the slave to master. Now, when I try to run my C# code, instead of failing over to port 6380, it simply breaks and returns the error "could not connect to redis Instance at localhost:6379".
Is there a way around this, or will failover simply not work the way I want it to?
PooledRedisClientManager.FailoverTo allows you to reset which hosts are read/write vs. read-only and restarts the factory. This allows for a quick transition without needing to recreate the clients.
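For example, a minimal sketch using the question's variable and ports (when to trigger it is up to you, e.g. from code watching Sentinel's +switch-master notification):

// After Sentinel promotes localhost:6380, point the pool at the new topology.
pooledClientManager.FailoverTo(
    new[] { "localhost:6380" },    // read/write hosts: the newly promoted master
    new[] { "localhost:6380" });   // read-only hosts: reuse it until a new slave joins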

redirect to slave

Does Redis have a built-in mechanism that will use the slave when the master is down?
Can I use a virtual IP that directs to the master and, when the master is down, directs to the slave instead?
As per the documentation:
elect the slave to master using the SLAVEOF NO ONE command, and shut down your master.
But how will the application know about the changed IP?
MySQL has a third-party utility called MMM (master-master replication with monitor). Is there such a utility for Redis?
You can use a virtual IP in a load balancer, though this is not built into Redis. Any quality hardware or software load balancer should be able to do it. For example, you can use "balance" or HAProxy to front the VIP, with a script or rules that check the status of the Redis instances to see which is master and set that as the destination in the load balancer (LB).
Going this route requires one or more additional servers (or VMs, depending on your setup), but it gives you a configuration where clients talk to a single IP and are clueless about which server they need to talk to on the back end. How you update the LB with which server to talk to depends on which LB you use. Fortunately, none of them need to know or handle the Redis protocol; they are just balancing a port.
When I go this route I use a Slave-VIP and a Master-VIP. The Slave-VIP load balances across all Redis instances, whereas the Master-VIP only has the current master enabled. If your write load is very heavy you can leave the current master out of the Slave-VIP pool. Otherwise I would leave it in; that eliminates the need to update the Slave-VIP pool on failover.
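Sketched in HAProxy terms (addresses and names are placeholders; any LB that can inspect a TCP response works the same way):

frontend master_vip
    mode tcp
    bind 10.0.0.100:6379
    default_backend redis_master

frontend slave_vip
    mode tcp
    bind 10.0.0.101:6379
    default_backend redis_all

backend redis_master
    mode tcp
    option tcp-check
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    server redis1 10.0.0.1:6379 check inter 1s
    server redis2 10.0.0.2:6379 check inter 1s

backend redis_all
    mode tcp
    balance roundrobin
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    server redis1 10.0.0.1:6379 check inter 1s
    server redis2 10.0.0.2:6379 check inter 1s

Only the instance that reports role:master passes the redis_master check, so the Master-VIP follows promotions without any client-side changes.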