How to migrate rabbitmq (everything) to bitnami rabbitmq cluster?
I have a single-node RabbitMQ server with plenty of queues, exchanges, channels, producers, and consumers.
We have been tasked with migrating everything on this RabbitMQ server to a 3-node RabbitMQ cluster running in Kubernetes, deployed with the Bitnami Helm chart (https://bitnami.com/stack/rabbitmq/helm).
The current RabbitMQ server's hostname is different from the hostnames of the three cluster nodes.
I'm a beginner with RabbitMQ.
So how do we approach this?
How do we migrate the data of a RabbitMQ server whose hostname differs from that of the target Bitnami Helm chart cluster?
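
A minimal sketch of one common approach, assuming the management plugin is enabled on both brokers (hostnames and credentials below are placeholders): export the definitions from the old node and import them into the new cluster. Definitions are keyed by vhost and name rather than hostname, so the differing hostnames are not a problem for this step.

# 1) Export the broker metadata (vhosts, users, permissions, queues,
#    exchanges, bindings, policies) from the old single-node server
#    through the management HTTP API.
curl -u admin:password \
  http://old-rabbitmq.example.com:15672/api/definitions -o definitions.json

# 2) Import the definitions into the new Bitnami cluster. Importing on
#    any one node is enough; the cluster replicates the metadata.
curl -u admin:password -H "Content-Type: application/json" \
  -X POST -d @definitions.json \
  http://new-rabbitmq.example.com:15672/api/definitions

Note that definitions do not include the messages themselves; to move those without stopping producers, the usual approach is a dynamic Shovel from each old queue to the new cluster, or simply letting consumers drain the old queues while producers publish to the new one.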

Related

How to send Filebeat, Metricbeat and Packetbeat data to Fluentd daemonset deployed on 3 node Kubernetes cluster?

I have a 3-node Kubernetes cluster, on which I have fluentd deployed as a daemonset. This fluentd is tailing all the container logs on the Kubernetes cluster and sending them to Elasticsearch deployed on the same cluster. I have an external Linux server running Filebeat, Metricbeat and Packetbeat, and I want to send its Beats data to the fluentd running on my Kubernetes cluster, so that the Beats data can be stored in Elasticsearch. How do I do that? Fluentd has an elasticbeats plugin, but I am not sure how to expose fluentd outside the Kubernetes cluster (probably it would need some Service), so that it can accept the Beats data from the Linux server.
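
A minimal sketch of such a Service, assuming the fluentd pods are labelled k8s-app: fluentd, live in kube-system, and that the Beats input listens on port 5044 (all of these are assumptions to adjust): a NodePort Service makes the daemonset reachable from outside the cluster.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: fluentd-beats
  namespace: kube-system          # assumption: where the daemonset runs
spec:
  type: NodePort
  selector:
    k8s-app: fluentd              # assumption: the daemonset's pod label
  ports:
  - name: beats
    port: 5044                    # assumption: fluent-plugin-beats port
    targetPort: 5044
    nodePort: 30044
EOF

The Beats on the external Linux server would then ship to <any-node-ip>:30044 through their Logstash-compatible output, which is the protocol the beats plugin understands.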

Is it possible to use redis cluster instead of sentinel as celery backend for airflow cluster?

I am trying to set up an Airflow cluster. I am planning to use Redis as the Celery backend.
I have seen people using sentinel redis successfully. I wanted to know if it is possible to use redis cluster instead?
If not then why not?
Celery does not support using Redis Cluster as its broker. It can use a highly available Redis setup as the broker (with Sentinels), but it has no support for Redis Cluster.
Reference:
Airflow CROSSSLOT Keys in request don't hash to the same slot error using AWS ElastiCache
How to use more than 2 redis nodes in django celery
To make Redis Cluster work, you would have to swap in a different Celery backend, such as the project below; not a feasible solution.
https://github.com/hbasria/celery-redis-cluster-backend
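For reference, a sketch of what the supported Sentinel setup looks like in airflow.cfg (hosts and the master name are placeholders, and section/key names may vary across Airflow versions):

cat >> "$AIRFLOW_HOME/airflow.cfg" <<'EOF'
[celery]
# Semicolon-separated list of Sentinels; Celery asks them for the master.
broker_url = sentinel://redis-sentinel-0:26379;sentinel://redis-sentinel-1:26379;sentinel://redis-sentinel-2:26379

[celery_broker_transport_options]
# The Sentinel master set this broker should follow.
master_name = mymaster
EOF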

Can redis-py reliably use AWS ElastiCache Redis cluster?

I am trying to move away from a single AWS ElastiCache (Redis) server as the Celery broker to a Redis cluster. The trouble is that nowhere in the Celery or redis-py documentation can I find a way to connect to an AWS Redis cluster.
redis-py, which Celery uses to communicate with the Redis server, can be configured to use Redis Sentinel, but AWS does not support it (at least I did not find Sentinel support in the AWS ElastiCache documentation).
So is there a way to communicate with the ElastiCache Redis cluster using redis-py, or is there a way to instruct Celery to use redis-py-cluster (a separate project)?
ElastiCache should give you a configuration endpoint address that Celery can connect to. Just use that endpoint in either the broker_url or result_backend setting.
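For example, with Celery 5's CLI (the endpoint below is a placeholder; copy the real one from the ElastiCache console):

# Broker and result backend both point at the ElastiCache endpoint,
# using different database numbers.
celery -A myapp \
  -b "redis://my-redis.abc123.ng.0001.use1.cache.amazonaws.com:6379/0" \
  --result-backend "redis://my-redis.abc123.ng.0001.use1.cache.amazonaws.com:6379/1" \
  worker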

How to do Redis slave replication in k8s cluster?

Take this famous guestbook example:
https://github.com/kubernetes/examples/tree/master/guestbook
It will create the Redis master/slave deployments and services. It also has a subfolder named redis-slave, which is used to build a Docker image that runs the Redis replication command.
Dockerfile
run.sh
The question is: once the Redis master and slave are deployed to the k8s cluster, how do I run that command? By deploying a new container? That would have no relation to the slave container already deployed.
Is there a better way to do Redis replication between master and slave running in a k8s cluster?
One option is to use Helm to deploy the redis-ha chart.
Info about helm: https://github.com/kubernetes/helm
The redis-ha helm app page: https://hub.kubeapps.com/charts/stable/redis-ha
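A sketch of the install with Helm 3 (the stable repo is archived these days, and the release name and replica count are arbitrary):

helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install my-redis stable/redis-ha --set replicas=3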
Redis Sentinel is often suggested for simple master-slave replication and high availability.
Unfortunately, Sentinel does not fit Kubernetes world well and it also requires a Sentinel-aware client to talk to Redis.
You could try the Redis operator, which can be considered a Kubernetes-native replacement for Sentinel and lets you create a Redis deployment that survives most kinds of failures without human intervention.
Here is how you can set up a Redis HA master-slave cluster in Kubernetes/OpenShift OKD.
Basically, you have to use a ConfigMap and a StatefulSet in combination with VolumeClaims, as sketched after the link below.
https://reachmnadeem.wordpress.com/2020/10/01/redis-ha-master-slave-cluster-up-and-running-in-openshift-okd-kubernetes/
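A compressed sketch of that approach: a headless Service plus a StatefulSet in which pod ordinal 0 acts as the master and the other pods start as its replicas. Names, image, and replica count are illustrative, and the volumeClaimTemplates for persistence are omitted for brevity.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None            # headless: gives each pod a stable DNS name
  selector:
    app: redis
  ports:
  - port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:6
        command: ["sh", "-c"]
        args:
        - |
          # Pod ordinal 0 is the master; every other pod replicates it.
          if [ "$(hostname)" = "redis-0" ]; then
            exec redis-server
          else
            exec redis-server --replicaof redis-0.redis 6379
          fi
EOF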

RabbitMQ Set the HA Policy

I know the HA Policy is set by the following command:
$ rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
My question, which seems basic:
Do I have to issue this command on each node or just one of them?
RabbitMQ distributes policies across the whole cluster, so it does not matter which node you run the command on; the policy will be propagated to the other nodes.
Please read here: https://www.rabbitmq.com/clustering.html
A RabbitMQ broker is a logical grouping of one or several Erlang nodes, each running the RabbitMQ application and sharing users, virtual hosts, queues, exchanges, bindings, and runtime parameters. Sometimes we refer to the collection of nodes as a cluster.
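
A quick sketch of checking that propagation (the second node's name is illustrative):

# Apply the policy once, on any node in the cluster.
rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'

# List policies against a different node; the same policy shows up there.
rabbitmqctl -n rabbit@node2 list_policies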