I'm new to RabbitMQ and I need some help.
How do I back up and restore RabbitMQ, and what is the important data I need to save?
Thanks!
If you have the management plugin installed, you can back up and restore the broker from the Overview page. At the bottom you will see Import/Export Definitions, which you can use to download a JSON representation of your broker's definitions.
Importing this file will restore exchanges, queues, virtual hosts, policies and users.
Hope that helps.
For those looking for the HTTP API endpoint, it is:
http://rabbit:15672/api/definitions
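A sketch of using that endpoint with curl, assuming the default guest:guest credentials and the host name "rabbit" from above; adjust both for your broker:

```shell
# Base URL of the management API (host and credentials are assumptions)
HOST="http://rabbit:15672"

# Export all definitions (users, vhosts, queues, exchanges, bindings, policies)
curl -u guest:guest "$HOST/api/definitions" -o definitions.json

# Restore them on the same or another broker
curl -u guest:guest -X POST -H "Content-Type: application/json" \
  -d @definitions.json "$HOST/api/definitions"
```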
Another way to automate this is to use the command line tool rabbitmqadmin (http://rabbit:15672/cli/) and pass the export subcommand, e.g.
rabbitmqadmin export rabbit-backup.config
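For completeness, rabbitmqadmin also has an import subcommand to restore such a file. A sketch, where the remote host name is made up and the connection flags show the defaults:

```shell
# Export definitions to a file, then import them back (e.g. on a new broker)
FILE="rabbit-backup.config"
rabbitmqadmin export "$FILE"
rabbitmqadmin import "$FILE"

# Against a remote or secured broker, pass connection options explicitly
rabbitmqadmin -H rabbit.example.com -P 15672 -u guest -p guest export "$FILE"
```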
Per my understanding, CloudHub is a PaaS service and we can deploy applications directly to CloudHub. I have the questions below:
1. Can we create intermediate files on CloudHub? If yes, how can we define the path?
2. When we use SFTP to pull a file from a particular location, what should the path be on the CloudHub server for processing?
3. Can we SSH into the CloudHub server?
4. If we need to externalize the cron timings of a scheduler (via config etc., to avoid code changes), what is the best practice for setting the cron expression?
All of the above questions relate to the CloudHub deployment model.
Thanks in advance.
The scheduler already gets externalized in the platform when you deploy to CloudHub.
You can technically store the files in /temp, but don't expect them to persist. That would be an "ephemeral" file system.
You cannot SSH into the CloudHub server.
Rather than downloading the entire SFTP file and saving it, and then working on it, I would suggest streaming it if possible. You can process JSON/XML/CSV files as a stream, and even use deferred DataWeave with them enabling end-to-end streaming.
We are developing a system which uses RabbitMQ for sending and receiving data between its clients and servers.
The internet connection may sometimes be lost.
1. Can all the messages in the queue be exported to a file, and somehow be imported on the client using this file?
2. In a different scenario, a client wants to send some messages to the queue but has no internet connection. We want to export all the messages from the client into a file and somehow get it to the server (e.g. transfer it to another location which has internet). Is it possible to import this file into the queue?
I had the same questions as I wanted to replay messages for testing / load testing purposes.
I made RabbitDump, a dotnet tool, to do this. It supports all possible transfers between AMQP and ZIP (a bundle of messages): AMQP => ZIP, AMQP => AMQP, ZIP => AMQP and ZIP => ZIP (because why not).
The tool can be found here. It's installable as a dotnet tool, using dotnet tool install --global MBW.Tools.RabbitDump.
This tool will be useful to export messages from the remote queue and push them on a local RabbitMQ.
https://github.com/jecnua/rabbitmq-export-to-local
You can import/export messages using QueueExplorer.
Disclaimers: I'm the author, it's a commercial tool, and for now on Linux it runs under Wine.
https://www.cogin.com/QueueExplorer/rabbitmq/
I am using RabbitMQ on Windows and am trying to explore the rabbitmqctl options.
I can see options to purge queues and to create and delete shovels.
Can you please tell me the rabbitmqctl usage to:
1. Create and delete exchanges.
2. Create and delete queues.
3. Bind and unbind queues.
I am trying to write scripts that can automate all the configuration based on input.
Look at the rabbitmqadmin tool, which ships with the RabbitMQ management plugin. It can declare and delete exchanges, queues and bindings.
Also look at this question and this post.
Just google "rabbitmqadmin your action"
You can also use the management HTTP API.
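A sketch of all three operations with rabbitmqadmin; the exchange, queue and routing key names are made up, and rabbitmqadmin is assumed to be on the PATH and talking to a local broker:

```shell
# Hypothetical names for illustration
EXCHANGE="my.exchange"
QUEUE="my.queue"

# 1. Create an exchange and a queue
rabbitmqadmin declare exchange name="$EXCHANGE" type=direct
rabbitmqadmin declare queue name="$QUEUE" durable=true

# 2. Bind and unbind the queue
rabbitmqadmin declare binding source="$EXCHANGE" destination="$QUEUE" routing_key=my.key
rabbitmqadmin delete binding source="$EXCHANGE" destination="$QUEUE" properties_key=my.key

# 3. Delete the queue and the exchange again
rabbitmqadmin delete queue name="$QUEUE"
rabbitmqadmin delete exchange name="$EXCHANGE"
```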
How to:
1. create/delete exchanges,
2. create/delete queues,
3. bind/unbind queues
using rabbitmqctl?
Please advise.
I am using RabbitMQ on Windows.
With rabbitmqctl you can't.
You can do that using the Management Command Line Tool:
The management plugin ships with a command line tool rabbitmqadmin
which can perform the same actions as the web-based UI, and which may
be more convenient for use when scripting. Note that rabbitmqadmin is
just a specialised HTTP client; if you are contemplating invoking
rabbitmqadmin from your own program you may want to consider using the
HTTP API directly.
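For example, the same operations via the HTTP API with curl might look like this (a sketch: %2f is the URL-encoded default vhost "/", guest:guest are the default credentials, and the queue/exchange names are made up):

```shell
# Base URL of the management API (host and credentials are assumptions)
API="http://localhost:15672/api"

# Create a durable queue
curl -u guest:guest -X PUT -H "Content-Type: application/json" \
  -d '{"durable": true}' "$API/queues/%2f/my.queue"

# Create a direct exchange
curl -u guest:guest -X PUT -H "Content-Type: application/json" \
  -d '{"type": "direct", "durable": true}' "$API/exchanges/%2f/my.exchange"

# Bind the queue to the exchange
curl -u guest:guest -X POST -H "Content-Type: application/json" \
  -d '{"routing_key": "my.key"}' "$API/bindings/%2f/e/my.exchange/q/my.queue"

# Delete the queue again
curl -u guest:guest -X DELETE "$API/queues/%2f/my.queue"
```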
We have a couple of crusty AWS hosts running a RabbitMQ implementation in a cluster. We need to upgrade the hardware, and therefore we developed a Chef cookbook to spawn replacement servers.
One thing that we would rather not recreate by hand is the admin users, the queues, etc.
What is the best method to get that stuff from the old hosts to the new ones? I believe it's everything that lives in the /var/lib/rabbitmq/mnesia directory.
Is it wise to copy the files from one host to another?
Is there a programmatic means to do this?
Can it be coded into our Chef cookbook?
You can definitely export and import the configuration via the command line: https://www.rabbitmq.com/management-cli.html
I'm not sure about admin users, though.
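As far as I know the definitions export does include users (with their password hashes), so admin users should come across as well. A sketch of the migration, assuming a RabbitMQ version recent enough (3.8.2+) that rabbitmqctl itself can export and import definitions; on older versions, use rabbitmqadmin export/import instead:

```shell
# Path of the definitions dump (an assumption; any writable path works)
DEFS="/tmp/definitions.json"

# On the old host:
rabbitmqctl export_definitions "$DEFS"

# Copy the file across, then on the new host:
rabbitmqctl import_definitions "$DEFS"
```

Since both steps are plain CLI commands, they should be straightforward to wrap in a Chef cookbook resource.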
If you create new RabbitMQ nodes on your new hardware, you will get all the users on those new nodes. This is easy to try:
run a Docker container with a RabbitMQ image (with the management plugin) and create a user
run another container and add that node to the cluster of the first one
kill RabbitMQ on the first one, or delete the Docker container, and you will see that you still have the newly created user on the 2nd (now master) node
I used Docker since it's faster to create a cluster this way, but if you already have a cluster you could use that for testing if you prefer.
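The steps above might be sketched as follows; the container names, user and cookie value are all made up, and the nodes need a shared Erlang cookie to cluster (here passed via the RABBITMQ_ERLANG_COOKIE environment variable of the official image):

```shell
# Shared Erlang cookie (an assumption; any common secret works)
COOKIE="changeme-cookie"

docker network create rmq-net
docker run -d --name rmq1 --hostname rmq1 --network rmq-net \
  -e RABBITMQ_ERLANG_COOKIE="$COOKIE" rabbitmq:3-management
docker run -d --name rmq2 --hostname rmq2 --network rmq-net \
  -e RABBITMQ_ERLANG_COOKIE="$COOKIE" rabbitmq:3-management

# Create a user on the first node
docker exec rmq1 rabbitmqctl add_user testuser testpass

# Join the second node to the first node's cluster
docker exec rmq2 rabbitmqctl stop_app
docker exec rmq2 rabbitmqctl join_cluster rabbit@rmq1
docker exec rmq2 rabbitmqctl start_app

# Kill the first node; the user survives on the second
docker stop rmq1
docker exec rmq2 rabbitmqctl list_users
```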
For the queues and exchanges, I don't want to quote almost everything found in the RabbitMQ documentation page on high availability, but I will just say that you have to pay attention to the following:
exclusive queues, because they are gone once the client connection is gone
queue mirroring (if you have any set up; if not, it would be wise to consider it, if not even necessary)
I would do the migration gradually, waiting for the queues to be emptied and then killing off the nodes on the old hardware. It may be doable in a big-bang fashion, but that seems riskier. If you have a running system, then set up queue mirroring and try to find an appropriate moment to do a manual sync, but be careful: this has a huge impact on broker performance.
Additionally, there is the Shovel plugin (I have to point out that I did not use or even explore it), but that may be another way to go since (quoting from the link):
In essence, a shovel is a simple pump. Each shovel:
connects to the source broker and the destination broker, consumes
messages from the queue, re-publishes each message to the destination
broker (using, by default, the original exchange name and
routing_key).