Migration of ActiveMQ from 5.5.1 to 5.11.2 - activemq

Planning to migrate ActiveMQ from 5.5.1 to 5.11.2. How do I migrate the existing messages from the older version (5.5.1) to the newer version (5.11.2)?
Thanks in advance.

This assumes you have already taken care of any migration issues noted in the release notes for each version from 5.6.0 to 5.11.2.
There are essentially two ways to upgrade/migrate a broker.
Install the new broker and point it at the old (KahaDB) store. The store will be upgraded automatically on first start. This may cause some downtime during the store upgrade (at least if there are a lot of messages in the store).
Run two brokers in parallel and let the old one "fade out". You can set up a shiny new 5.11 broker side by side with the old one. This also makes it possible to migrate to other store types (JDBC or LevelDB). It's a little more work, but it keeps your uptime maximized. If you depend on message order, I would not recommend this method.
Set up the new broker.
Remove the transportConnector from the old broker and add a network connector from the old broker to the new one (see the sketch after these steps).
Stop the old broker, start the new one, then start the old one again.
Now clients (using failover, right?) will fail over to the new broker, and messages on the old broker will be forwarded to the new one as long as there are connected consumers on all queues.
When no messages are left on the old broker, shut it down and uninstall it.
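Here is a minimal Java sketch of the old-broker and client sides of this, assuming hypothetical hostnames old-broker and new-broker; in practice you would make the equivalent changes in each broker's activemq.xml rather than embedding a BrokerService, but the moving parts are the same.

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.broker.BrokerService;

    public class DrainOldBroker {
        public static void main(String[] args) throws Exception {
            // Old 5.5.1 broker: keep its existing store, drop the client-facing
            // transportConnector and add only a static network connector that
            // forwards messages on demand to the new broker.
            BrokerService oldBroker = new BrokerService();
            oldBroker.setBrokerName("old-broker");
            oldBroker.setDataDirectory("/var/activemq-old/data"); // existing store (hypothetical path)
            oldBroker.addNetworkConnector("static:(tcp://new-broker:61616)");
            oldBroker.start();

            // Clients use a failover URI listing both brokers; with the old
            // transportConnector gone they end up on the new 5.11 broker.
            ActiveMQConnectionFactory clientFactory = new ActiveMQConnectionFactory(
                    "failover:(tcp://old-broker:61616,tcp://new-broker:61616)");
            // ... create connections, sessions and consumers as usual ...
        }
    }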
As with all upgrades, skipping a lot of versions makes the upgrade less reliable. I would do a dry-run upgrade on a replica of production to ensure that everything goes as planned.

Related

Is it possible to reconfigure RabbitMQ to point at the new machine after the machine has changed, without uninstalling or decommissioning it?

I've run into an issue after installing RabbitMQ. Everything was set up and configured with the web apps on the machine and communicating with the local applications, but the machine had to be moved to a different tranche of machines and was renamed as a result. Now RabbitMQ can no longer serve or handle comms as intended, because its config points to rabbit@PREVIOUS_MACHINE instead of rabbit@CURRENT_MACHINE.
To complicate things, some of the RabbitMQ configuration came from users on the system; those credentials were fed into the local apps, encrypted into each app's database, and are used for communicating with all the local apps. The issue is that if I drop and recreate RabbitMQ and make a new user, it won't align with what the other internal apps are using, and I believe they are not configurable post-install, so a reinstall of everything is the potential impact.
The question is: is it possible to reconfigure or update the current RabbitMQ installation files to point at the local machine name instead of the previous machine name, and would that even work? From what I've read, the RabbitMQ docs unfortunately don't quite cover this specific scenario.
So I want to confirm that RabbitMQ is an absolute marvel of a black-magic box.
anyway,
I followed these steps from here, minus the first two:
How to change RabbitMQ node name without changing my hostname
This is pretty much the inverse of my problem. But for those in the future who have this issue:
I had RabbitMQ installed and running on another machine, the machine's name was changed, and the solution was to uninstall the service, delete the db and reinstall the service. Somehow RabbitMQ manages to keep knowledge of all the queues that were in the db, and when you reinstall the service it brings all the queues back as well. The only issue I had after that was remembering my username and password, which were not the default user setup; I did remember them, and that solved my issue. I still have no idea how RabbitMQ manages to remember the previous configs despite the local db being deleted. Crazy cool, and I'm very grateful to whoever built that into the tool.

Best Practice to Upgrade Redis with Sentinels?

I have three redis nodes being watched by 3 sentinels. I've searched around and the documentation seems to be unclear as to how best to upgrade a configuration of this type. I'm currently on version 3.0.6 and I want to upgrade to the latest 5.0.5. I have a few questions on the procedure around this.
Is it ok to upgrade two major versions? I did this in our staging environment and it seemed to be fine. We use pretty basic redis functionality and there are no breaking changes between the versions.
Does order matter? Should I upgrade say all the sentinels first and then the redis nodes, or should the sentinel plane be last after verifying the redis plane? Should I do one sentinel/redis node at a time?
Any advice or experience on this would be appreciated.
I am surprised by the lack of response to this, but I understand that the subject kind of straddles something like stackoverflow and something like stack exchange. I'm also surprised at the lack of documentation I was able to find on the subject.
I did some extensive testing in a staging environment and then proceeded to our production environment, and the procedure I followed seemed to work for the most part:
Upgrading from 3.0.6 to 5.0.5 in our case seems to be working without a hitch. As I said in the original post, we use the basics in Redis and not much has changed from the client perspective.
I upgraded in this order:
The first two sentinel peers and then the sentinel currently in the leader status.
Each of the redis nodes listed as slaves (now known as replicas).
After each node is upgraded, it will want to copy its dump.rdb from the master.
A 5.x node can sync from a 3.x master, but once a 5.x node is the master, a 3.x node cannot sync from it, so once you've failed over to an upgraded node, you can't go back to the earlier version.
Finally, use the Sentinels to fail over to an upgraded node as the new master, then upgrade the former master (a sketch of the failover step follows).
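For that final failover step, here is a small Jedis sketch, assuming a Sentinel reachable at sentinel-host:26379 and a master group named mymaster (both hypothetical); the same thing can be done from redis-cli with the SENTINEL FAILOVER command.

    import java.util.List;
    import redis.clients.jedis.Jedis;

    public class SentinelFailover {
        public static void main(String[] args) {
            // Connect to one of the (already upgraded) Sentinels, not to a Redis data node.
            try (Jedis sentinel = new Jedis("sentinel-host", 26379)) {
                // Ask Sentinel to promote one of the upgraded replicas to master.
                sentinel.sentinelFailover("mymaster");

                // Once the failover settles, check which node is master now,
                // then take the former master down and upgrade it.
                List<String> master = sentinel.sentinelGetMasterAddrByName("mymaster");
                System.out.println("Current master: " + master.get(0) + ":" + master.get(1));
            }
        }
    }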
Hopefully someone might find this useful going forward.

Upgrade ActiveMQ 5.12.1 to 5.14.1

I've currently got ActiveMQ 5.12.1 running and want to upgrade to ActiveMQ 5.14.1. I can't seem to find any documentation on upgrading from one version to another. Is it as simple as copying the files over? I don't want to lose any of my queues or subscribers.
The important data is in the persistence store, typically KahaDB.
You can simply install the new ActiveMQ version and copy the store over (or just point the new broker at it if it's on a shared disk or whatnot), and all messages should remain. See the sketch below.
Although the configuration is probably fine to copy as-is, I would make it a habit to look through the release notes for new features and improvements and see if there is something you can use. This is even more important if you use advanced ActiveMQ features such as networks of brokers, or newer features that see more updates, such as AMQP, MQTT and WebSockets.
If there is some major thing that needs attention from one version to another, there will be a note in the release notes.
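As a minimal sketch of "pointing the new broker at the store", assuming the copied store lives at /var/activemq/kahadb (a hypothetical path); normally you would just set the same directory on the kahaDB persistence adapter in the new broker's activemq.xml, but an embedded broker shows the idea:

    import java.io.File;
    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

    public class StartOnExistingStore {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("upgraded-broker");

            // Point the new 5.14.1 broker at the store copied from the 5.12.1 installation.
            KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
            kahaDB.setDirectory(new File("/var/activemq/kahadb"));
            broker.setPersistenceAdapter(kahaDB);

            broker.addConnector("tcp://0.0.0.0:61616");
            broker.start();
            // Queues, durable subscriptions and pending messages come back from the store.
        }
    }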

Clone RabbitMQ admin users, etc. on replacement server

We have a couple of crusty AWS hosts running a RabbitMQ implementation in a cluster. We need to upgrade the hardware, and therefore we developed a Chef cookbook to spawn replacement servers.
One thing that we would rather not recreate by hand is the admin users, the queues, etc.
What is the best method to get that stuff from the old hosts to the new ones? I believe it's everything that lives in the /var/lib/rabbitmq/mnesia directory.
Is it wise to copy the files from one host to another?
Is there a programmatic means to do this?
Can it be coded into our Chef cookbook?
You can definitely export and import configuration via command line: https://www.rabbitmq.com/management-cli.html
I'm not sure about admin users, though.
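As a complement to the management CLI, here is a hedged sketch that pulls the same definitions (users with their password hashes, vhosts, permissions, queues, exchanges, bindings, policies) over the management plugin's HTTP API, assuming a hypothetical host old-rabbit and the default guest credentials; the resulting JSON can later be imported into the new cluster via POST /api/definitions or the management CLI.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Base64;

    public class ExportDefinitions {
        public static void main(String[] args) throws Exception {
            // Hypothetical host and default credentials; adjust to your environment.
            String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://old-rabbit:15672/api/definitions"))
                    .header("Authorization", "Basic " + auth)
                    .GET()
                    .build();

            // The response is a JSON document with users (password hashes), vhosts,
            // permissions, queues, exchanges, bindings and policies - not message contents.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            Files.writeString(Path.of("definitions.json"), response.body());
        }
    }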
If you create new RabbitMQ nodes on your new hardware and join them to the cluster, you will get all the users on those new nodes. This is easy to try:
Run a Docker container with a RabbitMQ image (with the management plugin) and create a user.
Run another container and add that node to the cluster of the first one.
Kill RabbitMQ on the first one, or delete the Docker container, and you will see that you still have the newly created user on the second (now the master) node.
I suggested Docker since it's faster to create a cluster this way, but if you already have a cluster you could use it for testing if you prefer.
For the queues and exchanges, I don't want to quote almost everything found in the RabbitMQ documentation page on high availability, but I will just say that you have to pay attention to the following:
Exclusive queues, because they are gone once the client connection is gone.
Queue mirroring (if you have any set up; if not, it would be wise to consider it, if not outright necessary).
I would do the migration gradually, waiting for the queues to be emptied and then killing off the nodes on the old hardware. It may be doable in a big-bang fashion, but that seems riskier. If you have a running system, then set up queue mirroring and try to find an appropriate moment to do a manual sync - but be careful, this has a huge impact on broker performance.
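If you do set up mirroring, here is a hedged sketch of creating an "ha-all"-style policy through the management HTTP API, assuming a hypothetical host new-rabbit, the default vhost "/" and guest credentials; the equivalent rabbitmqctl set_policy command works just as well.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class MirrorAllQueues {
        public static void main(String[] args) throws Exception {
            String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());

            // Policy definition: mirror every queue across all nodes in the cluster.
            String policy = "{\"pattern\":\".*\",\"definition\":{\"ha-mode\":\"all\"},\"apply-to\":\"queues\"}";

            // PUT /api/policies/<vhost>/<policy-name>; the default vhost "/" is encoded as %2F.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://new-rabbit:15672/api/policies/%2F/ha-all"))
                    .header("Authorization", "Basic " + auth)
                    .header("Content-Type", "application/json")
                    .PUT(HttpRequest.BodyPublishers.ofString(policy))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("HTTP " + response.statusCode());
        }
    }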
Additionally, there is the Shovel plugin (I have to point out that I have not used or even explored it), but that may be another way to go, since (quoting from the link):
In essence, a shovel is a simple pump. Each shovel connects to the source broker and the destination broker, consumes messages from the queue, and re-publishes each message to the destination broker (using, by default, the original exchange name and routing_key).

Use ActiveMQ JAR with an older broker

In our production environment we use ActiveMQ 5.4.3.
We have encountered a problem since we added the option schedulerSupport="true" to the broker. The error is: javax.jms.JMSException: PageFile is not loaded
I recently discovered that this problem is fixed in version 5.8.
Would it be a problem to use activemq-all-5.8.0.jar with that broker, or do I have to upgrade the broker from 5.4.3 to 5.8 as well?
Thanks
It's recommended that the client and broker use the same version. In theory you can mix versions, as the underlying OpenWire protocol is backwards compatible, but it's not something we test heavily. The usual case is that people upgrade their broker and need to leave clients behind, and that is known to work better. The problem with mixing versions is that there may be bug fixes in one that are necessary for the other to function correctly, so you might still see bad behaviour even though you think it should improve things.
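For context, this is the kind of client code that exercises the broker-side scheduler; a minimal sketch with a hypothetical broker URL. The delayed delivery itself is handled by the broker's scheduler store, so if the fix you found is on the broker side, a newer client jar alone is unlikely to make the PageFile error go away.

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.ScheduledMessage;

    public class DelayedSend {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://broker-host:61616"); // hypothetical URL
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));

            // The delay is only honoured when the broker runs with schedulerSupport="true";
            // the message is held in the broker's scheduler store until it is due.
            TextMessage message = session.createTextMessage("delayed payload");
            message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 60000L);
            producer.send(message);

            connection.close();
        }
    }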