I am trying out the RedisTimeSeries module with Redis Cluster and its Go and Node clients. Is that supported?
Related
I am trying to set up an Airflow cluster. I am planning to use Redis as the Celery backend.
I have seen people use Redis with Sentinel successfully. I wanted to know if it is possible to use Redis Cluster instead?
If not, why not?
Celery doesn't support using Redis Cluster as a broker. It can use a highly available Redis setup as a broker (with Sentinels), but it has no support for Redis Cluster.
Reference:
Airflow CROSSSLOT Keys in request don't hash to the same slot error using AWS ElastiCache
How to use more than 2 redis nodes in django celery
To make Redis Cluster work, we would need to change the Celery backend itself, which is not a feasible solution:
https://github.com/hbasria/celery-redis-cluster-backend
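For reference, here is a minimal sketch of the Sentinel-based broker configuration that Celery does support. The Sentinel host names, ports and the master name "mymaster" are placeholders, not values from the question:

    # Sketch: Celery broker pointed at a Sentinel-managed Redis deployment.
    # Sentinel hosts, ports and the master name "mymaster" are placeholders.
    from celery import Celery

    app = Celery(
        "tasks",
        broker="sentinel://sentinel-1:26379;sentinel://sentinel-2:26379;sentinel://sentinel-3:26379",
    )
    # Tell the transport which Sentinel-monitored master to use.
    app.conf.broker_transport_options = {"master_name": "mymaster"}

    @app.task
    def add(x, y):
        return x + y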
My situation is this:
Site "A" (Romania): multiple apphost (1 per PC) exanging serverevents using Redis Backpane.
Site "B" (Turkey): multiple apphost (1 per PC) exanging serverevents using Redis Backpane.
Now, I need to create on Site "0" (italy) a "collector" of all sites serverevents. How can I do it? Is it possible?
I am using ServiceStack 5.4.0 with MSOpenTech Redis 3.0.
ServiceStack's Redis Server Events lets you connect to a Redis master instance either directly or via Redis Sentinel, but it doesn't support having AppHosts connect to the same geographically distributed Redis cluster.
I am trying to move away from a single AWS ElastiCache (Redis) server as the Celery broker to a Redis cluster. The trouble is that nowhere in the Celery or redis-py documentation can I find a way to connect to an AWS Redis cluster.
redis-py, which Celery uses to communicate with the Redis server, can be configured to use Redis Sentinel, but AWS does not support it (at least I did not find Sentinel support in the AWS ElastiCache documentation).
So is there a way to communicate with the ElastiCache Redis cluster using redis-py, or a way to instruct Celery to use redis-py-cluster (a separate project)?
ElastiCache should give you a configuration endpoint address that you can use for connecting Celery. Just use that endpoint in either the broker_url or the result_backend setting.
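As a rough sketch (the endpoint URL below is a placeholder; copy the real one from the ElastiCache console, assuming a single reachable Redis endpoint):

    # Sketch: plugging an ElastiCache endpoint into Celery.
    # The endpoint URL is a placeholder, not a real cluster address.
    from celery import Celery

    ELASTICACHE_URL = "redis://my-redis.xxxxxx.use1.cache.amazonaws.com:6379/0"

    # The same endpoint can serve as both the broker and the result backend.
    app = Celery("tasks", broker=ELASTICACHE_URL, backend=ELASTICACHE_URL)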
Consider this famous guestbook example:
https://github.com/kubernetes/examples/tree/master/guestbook
It creates a Redis master/slave deployment and services. It also has a subfolder named redis-slave, which is used to build a Docker image and run the Redis replication command.
Dockerfile
run.sh
The question is: once the Redis master and slave are deployed to the k8s cluster, how do I run that command? Deploy a new container? That would not be related to the slave container that is already deployed.
Is there a better way to do Redis replication between a master and a slave running in a k8s cluster?
One option is to use Helm to deploy the redis-ha chart.
Info about helm: https://github.com/kubernetes/helm
The redis-ha helm app page: https://hub.kubeapps.com/charts/stable/redis-ha
Redis Sentinel is often suggested for simple master-slave replication and high availability.
Unfortunately, Sentinel does not fit the Kubernetes world well, and it also requires a Sentinel-aware client to talk to Redis.
You could try the Redis operator, which can be considered a Kubernetes-native replacement for Sentinel and lets you create a Redis deployment that survives most kinds of failures without human intervention.
Here is how you can set up a Redis HA master-slave cluster in Kubernetes/OpenShift OKD.
Basically, you have to use a ConfigMap and a StatefulSet in combination with VolumeClaims:
https://reachmnadeem.wordpress.com/2020/10/01/redis-ha-master-slave-cluster-up-and-running-in-openshift-okd-kubernetes/
How to get all connected clients of a Redis cluster?
I am using AWS ElastiCache Redis in non-cluster mode and Redisson as my Redis client.
My Use Case:
I need to run specific code from only one connected Redis client.
Thanks
Redis has commands that return client information, such as CLIENT LIST; check out this page.
You can check out this page for the commands that Redisson does not support yet.
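To illustrate the CLIENT LIST / CLIENT ID idea, here is a minimal redis-py sketch (not Redisson; the host, port and the "smallest connection ID wins" rule are assumptions) that lets exactly one connected client run the job:

    # Sketch: elect one "leader" among the connected clients via CLIENT ID / CLIENT LIST.
    # Host and port are placeholders; a single connection is forced so both calls
    # observe the same connection ID.
    import redis

    r = redis.Redis(host="my-redis.example.com", port=6379,
                    decode_responses=True, single_connection_client=True)

    my_id = r.client_id()                              # CLIENT ID of this connection
    all_ids = [int(c["id"]) for c in r.client_list()]  # IDs of every connected client

    if my_id == min(all_ids):
        # Only the client holding the smallest connection ID runs the task.
        print("I am the designated client; running the task")
    else:
        print("Another client will run the task")

Note that connection IDs change on reconnect, so for production a Redis-based lock (for example SET with NX, or Redisson's RLock) is usually more robust.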