Clone RabbitMQ admin users, etc. on replacement server

We have a couple of crusty AWS hosts running a RabbitMQ implementation in a cluster. We need to upgrade the hardware, and therefore we developed a Chef cookbook to spawn replacement servers.
One thing that we would rather not recreate by hand is the admin users, the queues, etc.
What is the best method to get that stuff from the old hosts to the new ones? I believe it's everything that lives in the /var/lib/rabbitmq/mnesia directory.
Is it wise to copy the files from one host to another?
Is there a programmatic means to do this?
Can it be coded into our Chef cookbook?

You can definitely export and import the configuration via the command line: https://www.rabbitmq.com/management-cli.html
I'm not sure about admin users, though.
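For example, a minimal sketch with rabbitmqadmin (the CLI shipped with the management plugin), assuming the plugin is enabled on both hosts; the host names and credentials here are placeholders. For what it's worth, the exported definitions normally include users (with password hashes), so the admin users should come across too:

# dump users, vhosts, permissions, queues, exchanges, bindings and
# policies from the old node into a JSON file...
rabbitmqadmin -H old-host -u admin -p s3cret export definitions.json

# ...and load them into the new node
rabbitmqadmin -H new-host -u admin -p s3cret import definitions.json

Since these are plain commands, they could also be wrapped in an execute resource in the Chef cookbook.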

If you create new RabbitMQ nodes on your new hardware and join them to the existing cluster, you will get all the users on the new nodes. This is easy to try (a sketch follows below):
1. Run a Docker container with a RabbitMQ image (with the management plugin) and create a user.
2. Run another container and join that node to the cluster of the first one.
3. Kill RabbitMQ on the first node, or delete its Docker container, and you will see that you still have the newly created user on the second (now surviving) node.
I suggested Docker since it's faster to create a cluster this way, but if you already have a cluster you could use it for the test if you prefer.
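A minimal sketch of that experiment, assuming Docker is available; the container names, cookie, and user are arbitrary:

# a shared Erlang cookie is required for clustering; a user-defined
# network lets the containers resolve each other by hostname
docker network create rmqnet
docker run -d --name rmq1 --hostname rmq1 --network rmqnet \
  -e RABBITMQ_ERLANG_COOKIE='testcookie' rabbitmq:3-management
docker run -d --name rmq2 --hostname rmq2 --network rmqnet \
  -e RABBITMQ_ERLANG_COOKIE='testcookie' rabbitmq:3-management

# create a user on the first node
docker exec rmq1 rabbitmqctl add_user alice s3cret
docker exec rmq1 rabbitmqctl set_user_tags alice administrator

# join the second node to the first
docker exec rmq2 rabbitmqctl stop_app
docker exec rmq2 rabbitmqctl join_cluster rabbit@rmq1
docker exec rmq2 rabbitmqctl start_app

# kill the first node; the user definition survives on the second
docker rm -f rmq1
docker exec rmq2 rabbitmqctl list_users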
For the queues and exchanges, I don't want to quote almost everything found in the RabbitMQ doc page on high availability, but I will just say that you have to pay attention to the following:
exclusive queues, because they are gone once the client connection is gone
queue mirroring (if you have any set up; if not, it would be wise to consider it, if not outright necessary)
I would do the migration gradually, waiting for the queues to get emptied and then killing off the nodes on the old hardware. It may be doable in a big-bang fashion, but that seems riskier. If you have a running system, then set up queue mirroring and try to find an appropriate moment to do a manual sync (a sketch of the relevant commands follows) - but be careful, as a sync has a huge impact on broker performance.
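A hedged sketch of those commands (the policy name and queue name are placeholders):

# mirror every queue across all nodes in the cluster
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'

# check which queues are fully synchronised...
rabbitmqctl list_queues name policy synchronised_slave_pids

# ...and force a sync of a lagging one (expensive: it blocks the queue)
rabbitmqctl sync_queue my-queue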
Additionally, there is the shovel plugin (I have to point out that I did not use it or even explore it), but that may be another way to go, since (quoting from the link):
In essence, a shovel is a simple pump. Each shovel connects to the source broker and the destination broker, consumes messages from the queue, and re-publishes each message to the destination broker (using, by default, the original exchange name and routing_key).
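For reference, a dynamic shovel can be declared from the command line once the plugin is enabled; a sketch with placeholder URIs and queue names:

rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management

# pump messages from a queue on the old broker into the same-named
# queue on the new broker
rabbitmqctl set_parameter shovel migrate-my-queue \
  '{"src-uri": "amqp://user:pass@old-host", "src-queue": "my-queue",
    "dest-uri": "amqp://user:pass@new-host", "dest-queue": "my-queue"}'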

Related

Is it possible to reconfigure RabbitMQ to point at the new machine once you have changed machines, without uninstalling or decommissioning it

I've run into an issue post-install of RabbitMQ, where it was all set up and configured with the web apps on the machine and communicating with local applications; however, the machine had to be moved to a different tranche of machines and was renamed as a result. Now RabbitMQ can no longer serve or handle comms as intended, as its config points to rabbit@PREVIOUS_MACHINE instead of rabbit@CURRENT_MACHINE.
To complicate this, some of the RabbitMQ configuration was done by users on the system and fed into the local apps, which encrypt it into their local databases and use it for communicating with all the other local apps. The issue here is that if I drop and recreate RabbitMQ and make a new user, it won't align with what the other internal apps are using, and I believe they are not configurable post-install, so a reinstall of everything is the potential impact.
The question is: is it possible to reconfigure or update the current RabbitMQ installation files to point at the local machine name instead of the previous machine name, and, I guess by proxy, is this something that would even work? The docs over at rabbitmq.com don't quite deal with this specific scenario, unfortunately, from what I've read through.
So I want to confirm that RMQ is an absolute black-magic box of a tool.
Anyway, following these steps from here, minus the first two:
How to change RabbitMQ node name without changing my hostname
That is pretty much the inverse of my problem. But for those in the future who have this issue:
I had RabbitMQ installed and running on another machine, the machine's name was changed, and the solution was to uninstall the service, delete the db, and reinstall the service. SOMEHOW RMQ manages to keep the config knowledge of all the queues that were in the db, and when you reinstall the service it brings all the queues back as well. The only issue I had after that was remembering my username and password, which were not default user set-ups; once I did, that solved my issue. I still have no idea how RMQ manages to remember the previous configs despite the local db being deleted. Crazy cool, and I'm very grateful to whoever built that into the tool.
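For anyone hitting the same thing on Linux, a hedged sketch of the equivalent procedure (paths assume a default package install; the machine names are placeholders):

sudo systemctl stop rabbitmq-server

# pin the node name to the new machine name
echo 'NODENAME=rabbit@CURRENT_MACHINE' | sudo tee -a /etc/rabbitmq/rabbitmq-env.conf

# the old database directory is keyed to the old node name; remove it
sudo rm -rf /var/lib/rabbitmq/mnesia/rabbit@PREVIOUS_MACHINE

sudo systemctl start rabbitmq-server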

Manage an in-memory cache on multiple servers in AWS

Once or twice a day, some files are uploaded to an S3 bucket. I want the in-memory data of each server to be refreshed with the uploaded data on every S3 upload.
Note that there are multiple servers running, and I want to store the same data on all of them. Also, the servers scale with traffic (new server instances come up and older ones go down, so the set of instances is not always the same).
In short, I want to keep up-to-date data in the cache.
I want to build an architecture that supports auto-scaling of the servers. I came across the fan-out architecture on AWS, using SNS and multiple SQS queues which different servers can poll.
How can we handle the auto-scaling of the queues with respect to the servers?
Or is there any other way to handle this scenario?
PS: I'm totally new to the AWS environment.
Any reference will be a great help.
To me there are a few things that you need to have to make this work. These are opinions and, as with most architectural designs, there is certainly more than one way to handle this.
I start with the assumption that you've got an application running on an EC2 of some sort (Elastic Beanstalk, Fargate, Raw EC2s with auto scaling, etc.) and that you've solved for having the application installed and configured when a scale-up event occurs.
Conceptually the flow is: S3 bucket → SNS topic → one SQS queue per instance → application cache.
The setup involves having the S3 bucket publish events (likely s3:ObjectCreated) to the SNS topic. These events will be published when an object in the bucket is created or updated.
Next:
During startup your application will pull the current data from S3.
As part of application startup, create a queue named after the instance id of the EC2 (see here for some examples). The queue would need to subscribe to the SNS topic. If the queue already exists, that's not an error.
Your application would have a background thread or process that polls the SQS queue for messages.
If you get a message on the queue then that needs to tell the application to refresh the cache from S3.
When an instance is shut down, at least Elastic Beanstalk and the load balancers emit an event indicating that the instance will be shut down. Remove the SQS queue tied to the instance at that time.
The only issue might be that a hard crash of an environment would leave orphan queues. It may be advisable to either manually clean these up or have a periodic task clean them up.
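A rough sketch of steps 2 and 3 with the AWS CLI; the topic ARN, queue naming, and refresh_cache_from_s3 hook are assumptions, and the SQS queue policy that allows SNS to deliver to the queue is omitted for brevity:

# name a queue after this instance; create-queue is idempotent
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
QUEUE_URL=$(aws sqs create-queue --queue-name "cache-refresh-$INSTANCE_ID" \
  --query QueueUrl --output text)
QUEUE_ARN=$(aws sqs get-queue-attributes --queue-url "$QUEUE_URL" \
  --attribute-names QueueArn --query Attributes.QueueArn --output text)

# subscribe the queue to the topic that receives the S3 events
aws sns subscribe --topic-arn "arn:aws:sns:us-east-1:123456789012:s3-uploads" \
  --protocol sqs --notification-endpoint "$QUEUE_ARN"

# background poller: any message means "re-pull the data from S3"
while true; do
  MSG=$(aws sqs receive-message --queue-url "$QUEUE_URL" --wait-time-seconds 20)
  if [ -n "$MSG" ]; then
    refresh_cache_from_s3   # hypothetical hook into the application
    RECEIPT=$(echo "$MSG" | jq -r '.Messages[0].ReceiptHandle')
    aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$RECEIPT"
  fi
done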

Redis as a queue - Configuration review

I need to set up a Redis DB (2.8), which I intend to use as a queue, which means it must be fully persistent (no message can be missed).
I'm pretty new to Redis, and I would like to get a review of my configuration:
I want to use both the AOF and RDB persistence models, with always selected as the appendfsync policy. According to the documentation, always is not recommended, but I must select this option since I use Redis as a queue and I can't tolerate any messages being lost.
I would like to create a Master-Slave-Slave cluster using Sentinel with automatic failover.
The Redis service will be automatically started after server boot.
Any kind of comments and suggestions will be great. The administration point of view is more important to me (persistence, backup, restore, high availability, etc.).
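For what it's worth, here is a minimal redis.conf sketch of the persistence side of that setup (the values are starting points, not recommendations):

appendonly yes            # enable the AOF
appendfsync always        # fsync after every write: safest, slowest
save 900 1                # keep RDB snapshots as a secondary backup
save 300 10
dir /var/lib/redis        # where the AOF and dump.rdb are written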

RabbitMq Clustering

I am new to RabbitMQ and am not able to understand the concept here. Please consider the following scenario.
I have two machines (RMQ1, RMQ2), on both of which I have installed RabbitMQ, and both are running. I then clustered RMQ2 to join RMQ1:
c:\> rabbitmqctl join_cluster rabbit@RMQ1
The status of the machines is now as below.
In RMQ1
c:\> rabbitmqctl cluster_status
Cluster status of node rabbit@RMQ1 ...
[{nodes,[{disc,[rabbit@RMQ1,rabbit@RMQ2]}]},
 {running_nodes,[rabbit@RMQ1,rabbit@RMQ2]}]
In RMQ2
c:\> rabbitmqctl cluster_status
Cluster status of node rabbit@RMQ2 ...
[{nodes,[{disc,[rabbit@RMQ1,rabbit@RMQ2]}]},
 {running_nodes,[rabbit@RMQ1,rabbit@RMQ2]}]
Then, in order to publish and subscribe messages, I connect to RMQ1. Now I see that whenever I send a message to RMQ1, the message shows up on both RMQ1 and RMQ2. I understand this clearly: since both nodes are in the same cluster, messages get mirrored across nodes.
Let's say I bring down RMQ2; I still see messages getting published to RMQ1.
But when I bring down RMQ1, I cannot publish messages anymore. From this I understand that RMQ1 is the master and RMQ2 is a slave.
Now I have the below questions, without changing the code:
How do I make RMQ2 take up the job of accepting messages?
What is the meaning of highly available queues?
What should the strategy be for implementing this kind of scenario?
Please help
Question #2 is best answered first, since it will clear up a lot of things for you.
What is the meaning of highly available queues?
A good source of information for this is the Rabbit doc on high availability. It's very important to understand that mirroring (which is how you achieve high availability in Rabbit) and clustering are not the same thing. You need to create a cluster in order to mirror, but mirroring doesn't happen automatically just because you create a cluster.
When you cluster Rabbit, the nodes in the cluster share exchanges, bindings, permissions, and other resources. This allows you to manage the cluster as a single logical broker and utilize it for scenarios such as load-balancing. However, even though queues in a cluster are accessible from any machine in the cluster, each queue and its messages are still actually located only on the single node where the queue was declared.
This is why, in your case, bringing down RMQ1 will make the queues and messages unavailable. If that's the node you always connect to, then that's where those queues reside. They simply do not exist on RMQ2.
In addition, even if there are queues and messages on RMQ2, you will not be able to access them unless you specifically connect to RMQ2 after you detect that your connection to RMQ1 has been lost. Rabbit will not automatically connect you to some surviving node in a cluster.
By the way, if you look at a cluster in the RabbitMQ management console, what you see might make you think that the messages and queues are replicated. They are not: the management console shows a cluster-wide view, so regardless of which node you connect to in the console, you will see the same queues.
So with this background now you know the answer to your other two questions:
What should the strategy be for implementing high availability? How do I make RMQ2 accept messages?
From your description, you are looking for the failover that high availability is intended to provide. You need to enable this on your cluster. This is done through a policy, and there are various ways to do it, but the easiest way is in the management console, on the Admin tab, in the Policies section.
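From the command line, the equivalent is roughly the following (the policy name is arbitrary; the pattern matches every queue):

# mirror each matching queue to two nodes, syncing new mirrors automatically
rabbitmqctl set_policy ha-two "^" \
  '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'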
The previously cited doc has more detail on what it means to configure high availability in Rabbit.
What this will give you is mirroring of queues and messages across your cluster. That way, if RMQ1 fails then RMQ2 will still have your queues and messages since they are mirrored across both nodes.
An important note is that Rabbit will not automatically detect a loss of connection to RMQ1 and connect you to RMQ2. Your client needs to do this. I see you tagged your question with EasyNetQ. EasyNetQ provides this "failover connect" type of feature for you. You just need to supply both node hosts in the connection string. The EasyNetQ doc on clustering has details. Note that EasyNetQ even lets you inject a simple load balancing strategy in this case as well.

Does RabbitMQ contain functionality to deal with offline target nodes

Being new to RabbitMQ, I was wondering how to deal with an offline target node.
As an example, take this scenario:
1 log recording application that stores logs to some persistent storage
N log publishing applications that want their logs written to the persistent storage via the log recording server
There would be two options:
Each publishing application publishes its log messages to its local RabbitMQ instance, and the log recording server must subscribe to each of these.
The log recording application has its own local RabbitMQ instance, to which each log publishing application delivers its messages.
Option 1 would require me to reconfigure/recode/notify the recording application each time a new application appears or moves. Therefore I would think option 2 is the right one: each new publishing application simply writes to the RabbitMQ node of the recording application.
The only thing I am struggling with is how to deal with a situation in which the node of the recording application is down. Do I need to build my own system to store the messages until it's back online, or can I use some functionality of RabbitMQ to deal with that? I.e., could the local RabbitMQ of each publishing application just receive the messages and forward them to the recording application's RabbitMQ as soon as it's back online?
I found something about the federation plugin but couldn't work out whether that's the solution. Maybe I need something different, or maybe I have to write my own local queueing system (which I hope I don't have to) to queue messages when the target node is offline.
Any links to architectural examples or solutions are more than welcome.
BTW: https://groups.google.com/forum/#!topic/easynetq/nILIKSjxyMg states that you shouldn't be installing a RabbitMQ node for each application, so maybe I should resort to something like MSMQ or ZeroMQ (?)
From experience in what sounds like a similar situation, I would suggest using something other than a queue to store the messages locally when offline.
Years ago, I built a system that had to work offline - no network connection at all - and then had to push messages through a message queue to the central server, when the laptop was brought back to the office.
I solved this by using a local database (sqlite at the time) to store my messages when the message queue was not available.
You should do something similar. Use a local database or even a plain text file or CSV file to store your messages when RabbitMQ is offline. When it reconnects, read the messages from your local file system and send them through RabbitMQ.
This is a good strategy to use even if you do not expect RabbitMQ to go offline. Frankly, it will go offline at some point, and you will have to deal with it. You should be prepared for that situation, and having a local store for your messages will help with that.
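As a sketch of that idea using nothing but the filesystem and the management CLI (the spool directory, exchange, and routing key are placeholders):

SPOOL=/var/spool/myapp-messages

# on publish failure, append the payload to a spool file
spool_message() {
  mkdir -p "$SPOOL"
  echo "$1" > "$SPOOL/$(date +%s%N).msg"
}

# on reconnect, replay the spool in order; delete each file only
# after the broker has accepted it
replay_spool() {
  for f in "$SPOOL"/*.msg; do
    [ -e "$f" ] || continue
    rabbitmqadmin publish exchange=logs routing_key=app.log \
      payload="$(cat "$f")" && rm -f "$f"
  done
}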
...
Regarding an RMQ node per application: bad idea. This adds a ton of complexity to your system. You want as few RabbitMQ nodes as you can get away with. Meaning, one per system (a system being comprised of many applications) when possible... with the exception of RabbitMQ clusters for availability - but that's another line of questions and design, entirely.
...
I did an interview with Aria Stewart about designing for failure with RabbitMQ and messaging systems, and have a small excerpt where she talks about how networks fail.
The point is, the network or RabbitMQ or something will fail and you will need a solution like a local datastore so that you can recover when RabbitMQ comes back online.