How to set up ServiceStack ServerEvents with a geographically distributed Redis backplane

My situation is this:
Site "A" (Romania): multiple apphost (1 per PC) exanging serverevents using Redis Backpane.
Site "B" (Turkey): multiple apphost (1 per PC) exanging serverevents using Redis Backpane.
Now, I need to create on Site "0" (italy) a "collector" of all sites serverevents. How can I do it? Is it possible?
I am using ServiceStack 5.4.0 with MSOpenTech Redis 3.0.

ServiceStack's Redis Server Events lets you connect to a Redis master instance either directly or via Redis Sentinel, but it doesn't support having AppHosts connect to the same geographically distributed Redis cluster.
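For reference, this is roughly what wiring up the Redis backplane inside one site looks like; a minimal sketch of an AppHost's Configure method based on the documented RedisServerEvents provider, where the Redis address "redis-master:6379" is an assumption:

    // Sketch only: assumes the ServiceStack.Server and ServiceStack.Redis packages
    // and the usual AppHost usings (ServiceStack, ServiceStack.Redis, Funq).
    public override void Configure(Container container)
    {
        // Enable Server Events over the /event-stream endpoint
        Plugins.Add(new ServerEventsFeature());

        // Pooled Redis connections used by the backplane (host is a placeholder)
        container.Register<IRedisClientsManager>(
            new RedisManagerPool("redis-master:6379"));

        // Swap the default in-memory IServerEvents for the Redis-backed one,
        // so all AppHosts sharing this Redis instance see each other's events
        container.Register<IServerEvents>(c =>
            new RedisServerEvents(c.Resolve<IRedisClientsManager>()));
        container.Resolve<IServerEvents>().Start();
    }

Every AppHost that should see the same event stream has to point at the same Redis master, which is why a cross-site collector spanning separate backplanes isn't available out of the box.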

Related

Connecting an off-the-shelf application (Liveswitch) to Redis in an automatic failover configuration

Part of the installation of Liveswitch (a WebRTC application) is to connect the gateway and media server to a Redis cache (see the docs). We want to implement a high-availability architecture, in which two (load-balanced) gateway servers and two media servers connect to Redis instances in an auto-failover configuration.
I know that Redis Sentinel is normally the tool of choice for such an undertaking, but from my understanding, for this to work with the aforementioned gateway/media servers, the servers (which are the Redis clients in this case) have to be able to communicate with Sentinel in addition to their connection to the (Redis) cache in order to "find" the new master in the case of a failover. I cannot find any hint in LiveSwitch's configuration that would allow for that.
The connection of Liveswitch to the Redis instance is configured in a JSON file using a connection string like the following (from the product documentation):
"ConnectionStrings": {
"Default": "postgresql://user:password#10.37.129.2:25061/postgres_server?sslmode=require&Server_Compatibility_Mode=Redshift",
"Cache": "redis://lxjis.redis.cache.windows.net:6380,password=2KfYkEQOCYH8sdfdsfdsfdsEdMV18nSvtE4FrMqnncAFZNsRc=,ssl=True,abortConnect=False",
}
Is there a way to install Redis instances with autofailover in a transparent way, i.e., so that the client application's connection to the cache is retained in the case of failover without relying on Sentinel to "report" the new master to the application?
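For illustration of what that Sentinel awareness involves on the client side (which is exactly what the question is trying to avoid having to add), here is a minimal sketch with StackExchange.Redis 2.1+; the Sentinel endpoints and the service name "mymaster" are assumptions:

    using StackExchange.Redis;

    // The multiplexer talks to the Sentinel nodes, asks which instance is the
    // current master for "mymaster", and reconnects automatically on failover.
    var mux = ConnectionMultiplexer.Connect(
        "10.37.129.10:26379,10.37.129.11:26379,serviceName=mymaster,abortConnect=false");

    var db = mux.GetDatabase();
    db.StringSet("healthcheck", "ok");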

Is StackExchange.Redis a "smart client" when using Redis Cluster?

A smart client for Redis cluster will "take persistent connections to many nodes, will cache hashslot -> node info, and will update the table when they receive a -MOVED error".
I checked numerous documents but can't find a definitive answer on whether StackExchange.Redis is a smart client. Can anyone advise? Thanks.
I am using the StackExchange.Redis client in my web application to connect to a Redis Cluster with 6 Redis server instances. The StackExchange.Redis client works perfectly with Redis Cluster, and we did not get any -MOVED errors.
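To illustrate that behavior, a minimal sketch of connecting StackExchange.Redis to a cluster (node addresses are placeholders); the multiplexer pulls the slot map on connect and routes each key to the owning node, which is the "smart client" behavior described above:

    using StackExchange.Redis;

    // List a few cluster nodes; the client discovers the rest of the topology
    // and the hash-slot -> node map when it connects.
    var mux = ConnectionMultiplexer.Connect(
        "10.0.0.1:6379,10.0.0.2:6379,10.0.0.3:6379,abortConnect=false");

    var db = mux.GetDatabase();

    // The key is hashed client-side and the command is sent straight to the
    // node that owns its slot, so -MOVED redirects are rare in practice.
    db.StringSet("user:42", "hello");
    var value = db.StringGet("user:42");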

Is it necessary to put a load balancer in front of a Redis cluster?

I am using Redis Cluster on 3 Linux servers (CentOS 7). I have the standard configuration, i.e. 6 nodes: 3 master instances and 3 slave instances (each master has one slave), distributed across these 3 Linux servers. I am using this setup for my web application for data caching and HTTP response caching. My aim is to read from the primary and write to the secondary, i.e. read operations should not fail or be delayed.
Now I would like to ask: is it necessary to configure a load balancer in front of my 3 Linux servers so that my web application's requests to the Redis Cluster instances can be distributed properly across these Redis servers? Or is Redis Cluster itself able to handle the load distribution?
If yes, then please mention a reference link for configuring this. I have checked the official Redis Cluster documentation, but it does not specify anything regarding load balancer setup.
If you're running Redis in "Cluster Mode" you don't need a load balancer. Your Redis client (assuming it's any good) should contact Redis for a list of which slots are on which nodes when your application starts up. It will hash keys locally (in your application) and send requests directly to the node which owns the slot for that key (which avoids the extra call to Redis which results in a MOVED response).
You should be able to configure your client to do reads on slaves and writes on masters - or to do both reads and writes only on masters. In addition to configuring your client, if you want to do reads on slaves, check out the READONLY command: https://redis.io/commands/readonly.
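With a .NET client such as StackExchange.Redis, the reads-on-replicas routing is expressed per command through CommandFlags; a small sketch (connection string and key are placeholders):

    using StackExchange.Redis;

    var mux = ConnectionMultiplexer.Connect("10.0.0.1:6379,10.0.0.2:6379,abortConnect=false");
    var db = mux.GetDatabase();

    // Writes always go to the master that owns the key's slot.
    db.StringSet("page:home", "<html>...</html>");

    // Ask the client to route this read to a replica when one is available,
    // falling back to the master otherwise.
    var cached = db.StringGet("page:home", CommandFlags.PreferReplica);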

How to get all Connected Clients of Redis Cluster?

How do I get all connected clients of a Redis cluster?
I am using AWS ElastiCache Redis in non-cluster mode and Redisson as my Redis client.
My Use Case:
I need to run specific code from only 1 connected redis client.
Thanks
Redis has commands for client information, such as CLIENT LIST; check out this page.
You could check this page for the commands Redisson has not supported yet.
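If your client doesn't cover a command, you can still issue it from another tool or client. For illustration, a .NET client such as StackExchange.Redis exposes CLIENT LIST per node; a minimal sketch, where the endpoint is a placeholder and allowAdmin=true enables server/maintenance commands:

    using System;
    using StackExchange.Redis;

    var mux = ConnectionMultiplexer.Connect("my-redis.example.com:6379,allowAdmin=true");

    // CLIENT LIST is a per-node command, so ask every endpoint of the deployment.
    foreach (var endpoint in mux.GetEndPoints())
    {
        var server = mux.GetServer(endpoint);
        foreach (var client in server.ClientList())
            Console.WriteLine($"{endpoint}: {client.Address} name={client.Name}");
    }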

Is automatic failover built into Redis 2.8?

I am planning on adding Redis to our application as a session and cache store. I have been looking at how to make Redis highly available on an on-premise hosted solution.
The standard approach appears to be to set up Redis as a 3 node replica and use Sentinel for the monitoring and automatic failover.
Redis 2.8 introduces Redis cluster. Does that mean it brings in automatic failover etc and we no longer need to use Sentinel?
No, Cluster and Failover are different scenarios. Also Cluster is in 3.0, not 2.8.
The standard (and minimum) setup for HA is a master and one slave (aka "a pod"), with a separate set of three nodes which run Sentinel and monitor the pod.
This is to ensure failover of the server. However, either your client library has to support using Sentinel to discover the master and reconnect on failure, you implement it in your code, or you set up a TCP load balancer plus a Sentinel-monitoring daemon that updates your load balancer configuration when a failover occurs, at which point the client code doesn't know or care about Sentinel.
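One client library that handles this discovery for you is ServiceStack.Redis (which the question at the top of this page already uses); a minimal Sentinel-aware sketch, where the Sentinel addresses and the master name "mymaster" are assumptions:

    using ServiceStack.Redis;

    // The client asks the Sentinel nodes which instance is currently master
    // and transparently reconnects to the new master after a failover.
    var sentinelHosts = new[] { "10.0.0.1:26379", "10.0.0.2:26379", "10.0.0.3:26379" };
    var sentinel = new RedisSentinel(sentinelHosts, masterName: "mymaster");
    IRedisClientsManager redisManager = sentinel.Start();

    using (var redis = redisManager.GetClient())
    {
        redis.SetValue("session:abc123", "payload");
    }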
Cluster isn't there to provide HA; it is there for server-side sharding of data. For Cluster you're looking at 6-7 nodes minimum (3 master, 3 slave, 1 spare), as well as Cluster support in the client and restrictions on commands and Lua scripts which need to access multiple keys.
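To make the multi-key restriction concrete: a multi-key command or Lua script only works in cluster mode when all of its keys hash to the same slot, which hash tags can force. A small sketch with StackExchange.Redis (addresses and keys are placeholders):

    using StackExchange.Redis;

    var db = ConnectionMultiplexer.Connect("10.0.0.1:6379,abortConnect=false").GetDatabase();

    // Only the text inside {...} is hashed, so both keys land in the same slot.
    db.StringSet("{user:42}:name", "Anna");
    db.StringSet("{user:42}:email", "anna@example.com");

    // This multi-key read succeeds; mixing keys from different slots would
    // fail with a CROSSSLOT error instead.
    var values = db.StringGet(new RedisKey[] { "{user:42}:name", "{user:42}:email" });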