MaxScale cluster (master-master) setup - load-balancing

When deploying multiple MaxScale instances in a master-slave topology (failover from master to slave with Keepalived or similar) in front of a Galera Cluster in read-write-split mode, everything works fine. But what about a master-master-like topology in a round-robin fashion, is this possible?
Ex.: Having one MaxScale at 10.0.0.1, a second at 10.0.0.2 and HAProxy in front of them with a roundrobin or leastconn distribution algorithm (or even without HAProxy/load balancer, where applications just connect randomly to one or the other), is this possible/well supported by MaxScale?

Usually you can connect to any number of MaxScale instances as long as certain features are enabled to guarantee that all MaxScale instances pick the same server to send writes to.
Galera Clusters
If you are using a Galera cluster, this can be done in a safe and conflict-free manner by enabling the root_node_as_master parameter. It uses the Galera cluster itself to select the node that receives writes, which allows your applications to connect to either of the MaxScale instances.
Even without this parameter, nothing would break in the database itself, but due to the way Galera works, writing to multiple nodes increases the likelihood of running into a conflict when you COMMIT your transaction.
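As an illustration only, a galeramon monitor section with this parameter enabled might look roughly like the following sketch (server names and credentials are placeholders, not from the original question):

[Galera-Monitor]
type=monitor
module=galeramon
servers=node1,node2,node3
user=maxscale_monitor
password=maxscale_pw
# Treat the Galera "root node" (wsrep_local_index 0) as the single write target,
# so every MaxScale instance independently picks the same node for writes.
root_node_as_master=true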
Asynchronous Replication Clusters
If you use MaxScale with a cluster that uses asynchronous replication, you can still do this as long as you configure your mariadbmon monitor to use cooperative_monitoring_locks. This causes the MaxScale instances to communicate via the database about which servers they see and which of them they choose for writes.
An added benefit of cooperative_monitoring_locks is that you can enable the automated cluster management parameters auto_failover and auto_rejoin without having to worry about two MaxScales attempting to change the replication configuration at the same time: cooperative_monitoring_locks makes sure only one MaxScale does it.
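A rough sketch of such a monitor section is shown below; server names and credentials are placeholders, and majority_of_running is another accepted value for the lock setting:

[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=node1,node2,node3
user=maxscale_monitor
password=maxscale_pw
# The MaxScale instances coordinate through locks held in the database,
# so only the lock owner performs failover/rejoin operations.
cooperative_monitoring_locks=majority_of_all
auto_failover=true
auto_rejoin=true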

Related

What is the difference between: Redis Replicated setup, Redis Cluster setup, Redis Sentinel setup and Redis with Master with Slave only? [REDISSON]

I've read https://github.com/redisson/redisson
and found out that there are several setups:
Redis Replicated setup (including support of AWS ElastiCache and Azure Redis Cache)
Redis Cluster setup (including support of AWS ElastiCache Cluster and Azure Redis Cache)
Redis Sentinel setup
Redis with Master with Slave only
I am not a big expert in clusters and I don't understand the difference between these setups.
Could you briefly explain the differences?
Disclaimer: I am an AWS employee.
I do not know how Redis Replicated Setup is different from Redis in Master-Slave mode. Maybe they mean cross-region replication?
In any case, I can try and explain setups I know about:
Redis with Master with Slave only - a single-shard setup where you create a primary node together with one or more secondary (slave) replicas. This setup is used to improve the durability of your in-memory store. It is not advised to use your secondaries for reads, because such a setup has only eventual consistency guarantees and your replica reads may be stale (depending on the replication lag).
Redis Cluster setup - the setup supported by cloud providers such as AWS ElastiCache. In this setup your workload can be spread horizontally across multiple shards, and each shard may have its own secondary replicas. Your client library must support this setup, since it requires maintaining multiple connections to several nodes at the client level. Moreover, there are some locality rules you need to follow in order to use cluster mode efficiently:
Keys with foo{<shard>}bar notation will be routed to their shard according to what is stored inside curly brackets.
You cannot use MSET, MGET and other multi-key commands across shards. You can still use these commands if all their keys contain the same {shard} part (there is a short sketch after this list).
There are additional cluster-mode admin commands that are exposed by Redis, but they are usually hijacked and hidden from users by cloud providers, since the providers use them to manage the Redis cluster themselves.
Redis Cluster has the ability to migrate part of your workload between shards. However, it is still obliged to preserve correctness with respect to the {shard} notation. Since your client library is responsible for fetching data from a specific shard, it must handle the MOVED response, with which a shard redirects it to another node.
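To make the hash-tag and multi-key rules concrete, here is a minimal, hypothetical Jedis sketch (node address and key names are made up for illustration):

import java.util.Collections;
import java.util.List;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class HashTagExample {
    public static void main(String[] args) throws Exception {
        // Any reachable cluster node works as the discovery entry point (address is made up).
        JedisCluster cluster = new JedisCluster(
                Collections.singleton(new HostAndPort("127.0.0.1", 7000)));

        // Both keys contain the hash tag {user42}, so they map to the same slot/shard.
        cluster.set("{user42}:name", "Alice");
        cluster.set("{user42}:email", "alice@example.com");

        // The multi-key command succeeds because the keys share a slot.
        List<String> values = cluster.mget("{user42}:name", "{user42}:email");
        System.out.println(values);

        // Mixing keys without a common {tag} would be rejected, since they hash to different shards.
        cluster.close();
    }
}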
Redis Sentinel setup - uses an additional server that provides service discovery functionality for Redis deployments. It is not strictly required and, I believe, less popular among users. It serves as a single source of truth regarding each node's health and state, providing monitoring, management, and service discovery functions for your Redis setup. Many Redis client libraries provide the option of connecting to Redis Sentinel nodes in order to achieve automatic service discovery and a seamless failover flow. One of the reasons why this setup is less popular is that cloud services like AWS ElastiCache provide this functionality out of the box.
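As an example of that client-side discovery, a minimal Jedis sketch could look like this (sentinel addresses and the master name "mymaster" are placeholders):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelExample {
    public static void main(String[] args) {
        // Placeholder sentinel addresses and monitored master name.
        Set<String> sentinels = new HashSet<>(Arrays.asList(
                "10.0.0.10:26379", "10.0.0.11:26379", "10.0.0.12:26379"));

        // The pool asks the sentinels which node is the current master and
        // reconnects to the new master after a failover.
        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
             Jedis jedis = pool.getResource()) {
            jedis.set("greeting", "hello");
            System.out.println(jedis.get("greeting"));
        }
    }
}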

Redis Cluster configuration for CacheManager.NET

I have a basic question about Redis connection parameters from the CacheManager.NET perspective. In the case where we have a Redis cluster with a master and 2 slaves, and a quorum of sentinel processes, should we provide the IP:PORT combinations pointing to the sentinel processes, or to the actual Redis server processes?
As suggested in https://seanmcgary.com/posts/how-to-build-a-fault-tolerant-redis-cluster-with-sentinel, it is advisable to ask the sentinel process about the actual master before making the connection. That probably goes in line with Jedis, which provides JedisSentinelPool to do the initial lookup.
Essentially, what we want is load balancing on reads (via CacheManager.NET), while writes should go to the current master node of the cluster.
CacheManager relies on StackExchange.Redis for the Redis implementation. Therefore, whatever this client library supports, CacheManager does, too.
Unfortunately, sentinel support is not implemented; there have been issues on GitHub about that for years.
That being said, I did some testing with a multi master/slave + sentinel setup. I added all the non-sentinel nodes as endpoints to the multiplexer configuration, and it kind of works because the Redis client knows how to handle multiple master/slave instances.
In the process of switching to another master, the client might throw exceptions that it cannot write to a read-only slave and such. CacheManager might retry those calls, and after a short amount of time, when the leader election is done, the call should go through.
But this is not 100% stable and I would not put that in production, as "official" support is still missing...
As an alternative to running with sentinels, you could run Redis in Cluster mode, which should just work, or behind a proxy which deals with all that master/slave stuff.
Twemproxy is one alternative.
I still have to add support for Twemproxy to CacheManager, as many features are simply not available, like Lua scripting, getting a list of servers, or flush commands...
This will come in 1.0.2
Hope that helps.

Should I run haproxy for db and redis sentinel on web nodes?

I am setting up a cluster of servers using Vagrant and playing with Redis Sentinel and HAProxy for the PostgreSQL db connection (with pgpool). I was curious whether it makes sense to put HAProxy and Redis Sentinel on each of my web server nodes and have them connect directly to those. The thought is that this creates a distributed connection to the DB and Redis and removes the single point of failure of having one HAProxy that everything connects to before being split across the db nodes. I can also keep the database connection (via HAProxy) and Redis (via Sentinel) encapsulated to localhost. Does this make sense?
It only makes sense if you're trying to save on resources/costs.
Please note that Redis Sentinel must have a finite list of sentinel instances, which doesn't fit the scenario of placing one per machine, as your machine count would probably scale/change.
Otherwise, it always makes the most sense to put different infrastructure components (especially those with a clustering/HA nature, such as Redis) on different machines.
By mixing them all together, you usually end up with applications getting in the way of each other and stealing CPU from each other once the load increases. You also risk designing your applications/scripts/flows to be location-aware (i.e. assuming external resources are always local), which is not really good practice either.

Is automatic failover built into Redis 2.8?

I am planning on adding Redis to our application as a session and cache store. I have been looking at how to make Redis highly available on an on-premise hosted solution.
The standard approach appears to be to set up Redis as a three-node replica set and use Sentinel for monitoring and automatic failover.
Redis 2.8 introduces Redis cluster. Does that mean it brings in automatic failover etc and we no longer need to use Sentinel?
No, Cluster and Failover are different scenarios. Also Cluster is in 3.0, not 2.8.
The standard (and minimum) setup for HA is a master and one slave (aka "a pod"), with a separate set of three nodes which run Sentinel and monitor the pod.
This is to ensure failover of the server. However, either your client library has to support using Sentinel to discover the master and reconnect on failure, you implement it in your code, or you set up a TCP load balancer and a Sentinel monitoring daemon that updates your load balancer configuration when a failover occurs, at which point the client code doesn't know or care about Sentinel.
Cluster isn't there to provide HA; it is there for server-side sharding of data. For Cluster you're looking at 6-7 nodes minimum (3 master, 3 slave, 1 spare), as well as Cluster support in the client and restrictions on commands and Lua scripts which need to access multiple keys.
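For reference, in the Redis 3.0 era such a minimal cluster would typically be created with the redis-trib.rb script that ships with Redis; the node addresses below are placeholders, and every node must run with cluster-enabled yes:

./redis-trib.rb create --replicas 1 \
  10.0.0.1:7000 10.0.0.2:7000 10.0.0.3:7000 \
  10.0.0.4:7000 10.0.0.5:7000 10.0.0.6:7000

With --replicas 1 and six nodes, redis-trib assigns three masters and one slave to each of them.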

twemproxy (nutcracker) adding redis instance and keeping consistency

I set up twemproxy (nutcracker) with 2 redis servers as backends including slaves, sentinel and failover.
As soon as I add another Redis server, some of the keys can no longer be read, probably because twemproxy now maps them to a different Redis instance.
How do I add another redis instance without breaking the consistency?
I want to use the setup as a consistent and very fast database.
Here are my settings:
redis_cluster:
  auto_eject_hosts: false
  distribution: ketama
  hash: fnv1a_32
  listen: 127.0.0.1:6379
  preconnect: true
  redis: true
  servers:
   - 127.0.0.1:7004:1 redis_1
   - 127.0.0.1:7005:1 redis_2
I want sharding to remain a job of the server side and still be able to add instances. Do I need to use another setup?
Twemproxy can't do that. You can use Redis Cluster, or, if you want to stay with Twemproxy, you have to use a technique called presharding: start directly with, say, 32 or 64 instances, even if they all run on the same host at first, then start moving instances from one box to another in order to scale out to multiple actual servers. The name to the right of each instance configured inside Twemproxy ("redis_1") is what gets used for hashing, so you can change the IP address when you move an instance and the hashing will still be the same for that server (see the example config below).
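A presharded configuration could look roughly like the one below (ports and the shard count are arbitrary); when you later move, say, redis_3 to another box, you only change its host:port and keep the name, so the ketama ring stays the same:

redis_cluster:
  auto_eject_hosts: false
  distribution: ketama
  hash: fnv1a_32
  listen: 127.0.0.1:6379
  preconnect: true
  redis: true
  servers:
   # all on one box to start; later only the host:port changes, never the name
   - 127.0.0.1:7001:1 redis_1
   - 127.0.0.1:7002:1 redis_2
   - 127.0.0.1:7003:1 redis_3
   - 127.0.0.1:7004:1 redis_4
   - 127.0.0.1:7005:1 redis_5
   - 127.0.0.1:7006:1 redis_6
   - 127.0.0.1:7007:1 redis_7
   - 127.0.0.1:7008:1 redis_8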
Redis Cluster is at release candidate 2 at this point. While it needs more testing and deployments to be as battle-tested as Redis itself, it is already a viable product, so you may want to test it as well.