Upgrade ActiveMQ 5.12.1 to 5.14.1

I've currently got ActiveMQ 5.12.1 running and want to upgrade to ActiveMQ 5.14.1. I can't seem to find any documentation on upgrading from one version to another. Is it as simple as copying the files over? I don't want to lose any of my queues or subscribers.

The important data is in the persistence store, i.e. typically KahaDB.
You can simply install the new ActiveMQ version and copy the store over (or just point the new broker at it if it's on a shared disk), and all messages should remain.
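For illustration, here is a minimal sketch of pointing a freshly installed broker at the existing KahaDB directory, written against the embedded Java API for brevity (the /data/kahadb path is a placeholder); in a standard install you would simply set the kahaDB directory attribute in activemq.xml to the same location:

```java
import java.io.File;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class ReuseExistingStore {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("upgraded-broker");

        // Point the new broker at the KahaDB directory left behind by the old
        // installation; "/data/kahadb" is a placeholder for whatever the old
        // <kahaDB directory="..."/> setting referred to.
        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setDirectory(new File("/data/kahadb"));
        broker.setPersistenceAdapter(kahaDB);

        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```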
Although the configuration is probably just fine to copy, I would make it a habit to look through the release notes for new features and improvements and see if there is something you can use. This is even more important if you use advanced AMQ features such as networks of brokers, or newer features that see more updates, such as AMQP, MQTT and WebSockets.
If there is some major thing that needs attention from one version to another, there will be a note in the release notes.

Related

Embedded BrokerService vs installed ActiveMQ broker

I would like to know whether they are the same or different feature-wise. Could you also mention the pros and cons of both of these? Please also mention real-world use cases for both an embedded BrokerService and an installed ActiveMQ broker. Thanks in advance!
ActiveMQ is just a Java application, and the embedded version offers essentially the same features as the stand-alone version. In fact, you can configure an embedded broker to take its configuration from an XML file, in which case it will look very similar to the stand-alone broker.
Embedding a broker is a reasonable thing to do if you need the benefit of programmatic configuration; that is, you want to configure things according to rules which are hard to implement in an XML file. It also makes sense if you want close-coupled operation between the broker and the application components, with message data being passed in memory. This might be the situation if you're using JMS as an inter-module communication mechanism within the application.
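As a rough sketch (the broker name, queue name and settings below are illustrative, not taken from the question), a programmatically configured embedded broker reached over the in-memory vm:// transport might look like this:

```java
import javax.jms.Connection;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerExample {
    public static void main(String[] args) throws Exception {
        // Configure the broker in code instead of via activemq.xml.
        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded");
        broker.setPersistent(false); // decided at runtime, e.g. from application config
        broker.setUseJmx(false);
        broker.start();

        // Application components talk to the broker in memory over the vm:// transport.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://embedded?create=false");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        session.createProducer(session.createQueue("app.internal"))
               .send(session.createTextMessage("hello"));

        connection.close();
        broker.stop();
    }
}
```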
Embedding a broker has the disadvantage -- and it can be a profound one -- of making it difficult to disentangle problems in the broker from problems in your application. Figuring out the cause of, say, runaway memory consumption could be very difficult. You can get commercial support for ActiveMQ, should you need it, but it will be hard for any commercial organization to support a hybrid broker+application installation.

Clone RabbitMQ admin users, etc. on replacement server

We have a couple of crusty AWS hosts running a RabbitMQ implementation in a cluster. We need to upgrade the hardware, and therefore we developed a Chef cookbook to spawn replacement servers.
One thing that we would rather not recreate by hand is the admin users, the queues, etc.
What is the best method to get that stuff from the old hosts to the new ones? I believe it's everything that lives in the /var/lib/rabbitmq/mnesia directory.
Is it wise to copy the files from one host to another?
Is there a programmatic means to do this?
Can it be coded into our Chef cookbook?
You can definitely export and import configuration via command line: https://www.rabbitmq.com/management-cli.html
I'm not sure about admin users, though.
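One programmatic route, if the management plugin is enabled, is the HTTP API's /api/definitions endpoint, which is what the management CLI drives under the hood; the exported JSON includes users (with password hashes), vhosts, permissions, queues, exchanges, bindings and policies. A rough Java sketch (the old-host name and guest/guest credentials are placeholders) that dumps everything to a file which a Chef recipe could later POST back to the new cluster:

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Base64;

public class ExportDefinitions {
    public static void main(String[] args) throws Exception {
        // Placeholders: "old-host" and guest/guest are assumptions, not values from the question.
        URL url = new URL("http://old-host:15672/api/definitions");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String auth = Base64.getEncoder()
                .encodeToString("guest:guest".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);

        // Save the definitions JSON; importing it is a POST of the same document to
        // /api/definitions on the new cluster.
        try (InputStream in = conn.getInputStream()) {
            Files.copy(in, Paths.get("definitions.json"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```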
If you create new RabbitMQ nodes on your new hardware and join them to the existing cluster, the new nodes will get all the users. This is easy to try:
run a docker container with a rabbitmq image (with the management plugin) and create a user
run another container and add that node to the cluster of the first one
kill rabbitmq on the first one, or delete the docker container, and you will see that you still have the newly created user on the 2nd (now the only remaining) node
I mention docker since it's faster to create a cluster this way, but if you already have a spare cluster you could use it for testing if you prefer.
For the queues and exchanges, I don't want to quote almost everything found in the RabbitMQ documentation page on high availability, but I will just say that you have to pay attention to the following:
exclusive queues because they are gone once the client connection is gone
queue mirroring (if you have any set up; if not, it would be wise to consider it, if not outright necessary)
I would do the migration gradually, waiting for the queues to be emptied and then killing off the nodes on the old hardware. It may be doable in a big-bang fashion, but that seems riskier. If you have a running system, then set up queue mirroring and try to find an appropriate moment to do a manual sync - but be careful, this has a huge impact on broker performance.
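If you do go the mirroring route, a policy can also be pushed over the management HTTP API; here is an illustrative Java sketch (the host, credentials and the default %2F vhost are assumptions) that mirrors all queues to all nodes, equivalent to rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all"}':

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MirrorAllQueues {
    public static void main(String[] args) throws Exception {
        // Placeholders: "old-host", guest/guest and the default vhost ("%2F") are assumptions.
        URL url = new URL("http://old-host:15672/api/policies/%2F/ha-all");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        String auth = Base64.getEncoder()
                .encodeToString("guest:guest".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);

        // Mirror every queue to all nodes in the cluster.
        String body = "{\"pattern\":\".*\",\"definition\":{\"ha-mode\":\"all\"},\"apply-to\":\"queues\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```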
Additionally, there is the shovel plugin (I have to point out that I have not used or even explored it), but that may be another way to go since (quoting from the link):
In essence, a shovel is a simple pump. Each shovel connects to the source broker and the destination broker, consumes messages from the queue, and re-publishes each message to the destination broker (using, by default, the original exchange name and routing_key).

Migration of Active MQ version from 5.5.1 to 5.11.2

Planning to migrate ActiveMQ from version 5.5.1 to 5.11.2. How do I migrate the existing messages from the older version (5.5.1) to the newer version (5.11.2)?
Thanks in advance.
This assumes you have already taken care of any migration issues noted in the release notes for each version from 5.6.0 to 5.11.2.
There are essentially two ways to upgrade/migrate a broker.
Simply install the new broker and point it at the old (KahaDB) database. The store will be upgraded to the new version automatically. This may cause some downtime during the store upgrade (at least if there are a lot of messages in the store).
Have two parallel brokers running at once and let the old one "fade out". You can set up a shiny new 5.11 broker side by side. This also makes it possible to migrate to other store types (JDBC or LevelDB). It's a little more work but will keep your uptime maximized. If you depend on message order, I would not recommend this method.
Set up the new broker.
Remove the transportConnector from the old broker, and add a network connector from old to new (see the sketch after these steps).
Stop the old broker, start the new one, then start the old one again.
Now, clients (using failover, right?) will fail over to the new broker, and messages from the old broker will be copied over to the new one as long as there are connected consumers on all queues.
When no more messages are left on the old broker, shut it down and uninstall it.
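A rough sketch of the broker-side wiring for this second method, written against the Java broker API for brevity (host names are placeholders; in a standalone install the same thing is expressed with a networkConnector element in activemq.xml):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class DrainOldBroker {
    public static void main(String[] args) throws Exception {
        // Old broker, reconfigured: no transportConnector for clients, just a one-way
        // network connector that forwards messages to the new broker.
        BrokerService oldBroker = new BrokerService();
        oldBroker.setBrokerName("old-broker");
        NetworkConnector bridge =
                oldBroker.addNetworkConnector("static:(tcp://new-broker:61616)");
        bridge.setDuplex(false); // forward old -> new only
        oldBroker.start();

        // Clients use a failover URL listing both brokers, so they reconnect to the
        // new broker once the old one stops accepting client connections.
        ActiveMQConnectionFactory clientFactory = new ActiveMQConnectionFactory(
                "failover:(tcp://old-broker:61616,tcp://new-broker:61616)?randomize=false");
        System.out.println("Client broker URL: " + clientFactory.getBrokerURL());
    }
}
```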
As with all upgrades, bypassing a lot of versions makes the upgrade less reliable. I would try a dry-run upgrade on a production replica to ensure that everything goes as planned.

Use ActiveMQ JAR with an older broker

In our production environment we use ActiveMQ 5.4.3
We have encountered a problem since we added the option schedulerSupport="true" to the broker. The problem encountered is: javax.jms.JMSException: PageFile is not loaded
I recently discovered that this problem is fixed in version 5.8.
Would it be a problem to use the jar activemq-all-5.8.0.jar with that broker, or do I have to upgrade the broker from 5.4.3 to 5.8 too?
Thanks
It's recommended that the client and broker use the same versions. In theory you can mix versions, as the underlying OpenWire protocol is backwards compatible; however, it's not something that is heavily tested. The usual case is that people upgrade their broker and need to leave clients behind, and that is known to work better. The problem with mixing versions is that there may be bug fixes in one that are necessary for the other to function correctly, so you might still see bad behaviour even though you think it should improve things.
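For context, schedulerSupport="true" is the broker-side switch for delayed/scheduled delivery; the client drives it purely through message properties, so a sketch like the following (the broker URL, queue name and delay are made up) shows the feature that is tripping the PageFile error:

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

public class DelayedSend {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("example.delayed"));

        TextMessage message = session.createTextMessage("deliver me later");
        // Ask the broker's scheduler (enabled by schedulerSupport="true") to hold the
        // message for 60 seconds before delivery.
        message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 60000L);
        producer.send(message);

        connection.close();
    }
}
```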

Embedded or an External ActiveMQ Broker with Glassfish

This would be my first time using ActiveMQ (instead of the out-of-the-box OpenMQ in GF), and I am trying to determine which approach is better in terms of scaling and maintaining an ActiveMQ environment. We do have experience in setting up and maintaining Glassfish clusters and deploying applications to them. But we are contemplating which approach is better, as we don't want to go down a rabbit hole we can't get out of because we built environments around it, only to see towards the end that the infrastructure we had set up wouldn't scale.
Has anybody tried using both approaches? Even if anybody implemented one of the approaches with Glassfish, telling us their experience (gains and pains) would be very helpful and appreciated.
For 99% of cases, it's usually better to deploy a standalone broker - this way you're treating your messaging as just another layer of the infrastructure, much like a database. When a broker is standalone, you can set it up as highly available, upgrade it at will without modifying your applications (a broker can be upgraded without upgrading the client libraries), and can scale it out as appropriate later on if you need to (most projects don't).
I have seen people deploy brokers as embedded, with a convoluted network of brokers to get all the boxes in a cluster talking to each other. This usually ends in tears and a reversion to a separate master-slave pair of brokers, which is all they needed all along.