Keep ActiveMQ running when losing connection to database

I have an instance of ActiveMQ 5.16.4 running that uses MySQL as its persistent data store. Recently the MySQL server had some issues, and ActiveMQ lost its connection to MySQL. That caused multiple Spring microservices to throw errors because ActiveMQ wasn't working.
Is it possible to run master/slave ActiveMQ where the master and slave use separate persistence stores?
I have done some research and found "pure master slave", but the documentation says it is deprecated, not recommended for use, and removed in 5.8. It says to use shared storage instead, which I am trying to avoid (since my problem is precisely what happens if the storage itself goes down).
What are my options for keeping ActiveMQ running if it loses its connection to the database?

If you're using ActiveMQ "Classic" (i.e. 5.x) then your only option is to use shared storage between the master and the slave. This could be a shared file system or a relational database. This, of course, is a single point of failure.
However, there are both file system and database technologies that can mitigate this risk. For example you could use a replicated file system (e.g. Ceph or GlusterFS) or a replicated database (e.g. MySQL).
You might also consider using ActiveMQ Artemis (i.e. the next-generation broker from ActiveMQ), which supports replication natively, so the master and slave each keep their own copy of the data.
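If you do go the shared-storage route, the usual client-side companion is the failover transport, so clients reconnect to whichever broker currently holds the store lock. A minimal sketch, assuming the ActiveMQ 5.x client library and hypothetical broker host names:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class FailoverClient {
        public static void main(String[] args) throws Exception {
            // List both brokers; only the one holding the store lock accepts
            // connections, and the client retries the other on failure.
            ConnectionFactory factory = new ActiveMQConnectionFactory(
                    "failover:(tcp://master-host:61616,tcp://slave-host:61616)?randomize=false");
            Connection connection = factory.createConnection();
            connection.start(); // reconnects transparently during a broker failover
            // ... create sessions, producers and consumers as usual ...
            connection.close();
        }
    }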

Related

Roll back Gcloud Redis upgrade

I'd like to upgrade the Redis Memorystore instance in our Google Cloud project because 5.x (at least on GitHub) appears to have reached its end of life. It's being used for simple key-value pairs, so I don't expect anything unexpected during the upgrade to 6.x. However, management is nervous and wants a way to roll back the upgrade if there are issues. Is there a way to do this? The documentation appears to say that rollback is not possible. I plan to do the usual backup and then upgrade. The instance is just the Basic Tier.
To upgrade the Redis Memorystore instance, follow the best practices mentioned in the public documentation:
We recommend exporting your instance data before running a version upgrade operation. Note that upgrading an instance is irreversible. You cannot downgrade the Redis version of a Memorystore for Redis instance.
For Standard Tier instances, to increase the speed and reliability of your version upgrade operation, upgrade your instance during periods of low instance traffic. To learn how to monitor instance traffic, see Monitoring Redis instances.
The documentation also recommends enabling RDB snapshots:
Memorystore for Redis is primarily used as an in-memory cache. When using Memorystore as a cache, your application can either tolerate loss of cache data or can very easily repopulate the cache from a persistent store.
However, there are some use cases where downtime for a Memorystore instance, or a complete loss of instance data, can cause long application downtimes. We recommend using the Standard Tier as the primary mechanism for high availability. Additionally, enabling RDB snapshots on Standard Tier instances provides extra protection from failures that can cause cache flushes. The Standard Tier provides a highly available instance with multiple replicas, and enables fast recovery using automatic failover if the primary fails.
In some scenarios you may also want to ensure data can be recovered from snapshot backups in the case of catastrophic failure of Standard Tier instances. In these scenarios, automated backups and the ability to restore data from RDB snapshots can provide additional protection from data loss. With RDB snapshots enabled, if needed, a recovery is made from the latest RDB snapshot.
For more information, you can refer to the documentation related to version upgrade behavior.
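For the "usual backup and then upgrade" plan, a sketch of the two gcloud commands (instance name, region, and bucket are hypothetical):

    # Export a backup to Cloud Storage first -- the upgrade is irreversible.
    gcloud redis instances export gs://my-backup-bucket/my-instance.rdb my-instance \
        --region=us-central1

    # Then upgrade the instance from 5.x to 6.x.
    gcloud redis instances upgrade my-instance \
        --redis-version=redis_6_x \
        --region=us-central1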

WebLogic: node in cluster shuts down, no JMS message is sent

I have a WebLogic cluster with 4 nodes (managed servers). Today I found that two of them were down, and I was surprised to find that some JMS messages were not sent.
Is this the normal behaviour? Shouldn't the cluster continue to deliver JMS messages using the two available nodes?
To achieve high availability for JMS you should configure two things:
Migratable targets.
Persistence based either on shared storage or a database.
Why migratable targets? Because messages produced by, for example, JMSServer01 can only be processed by JMSServer01. When you configure migratable targets, JMSServer01 will automatically be migrated to another WebLogic server if its host goes down.
Why persistence based on shared storage or a database? Because once the JMS server is migrated to another server, it will try to process its pending messages, which must therefore live in a shared store or database that is visible to all your WebLogic servers.
You can find more information here https://docs.oracle.com/middleware/1213/core/ASHIA/jmsjta.htm#ASHIA4396
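As a rough WLST (Jython) sketch of those two pieces -- a JDBC-backed persistent store plus a JMS server targeted to a migratable target -- where all names and the data source are hypothetical and the exact MBean attributes should be verified against the Oracle documentation linked above:

    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit(); startEdit()

    # Persistent store backed by a database visible to every server in the cluster.
    store = cmo.createJDBCStore('JMSJDBCStore1')
    store.setDataSource(getMBean('/JDBCSystemResources/JMSDataSource'))
    store.addTarget(getMBean('/MigratableTargets/managed1 (migratable)'))

    # JMS server using that store, targeted to the migratable target so it can
    # move to another WebLogic server together with its pending messages.
    jmsServer = cmo.createJMSServer('JMSServer01')
    jmsServer.setPersistentStore(store)
    jmsServer.addTarget(getMBean('/MigratableTargets/managed1 (migratable)'))

    save(); activate()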

Is there any good way to integrate OpenLDAP or ApacheDS servers with JMS to propagate LDAP database modifications to another service?

Is there any good way to integrate OpenLDAP or ApacheDS servers (or maybe another open-source LDAP server) with JMS to propagate LDAP database modifications to another service?
Basically I need an LDAP server cluster (several instances with master-to-master replication) and a standalone Java application, connected via a JMS server (e.g. ActiveMQ), so that:
All changes to the LDAP data structure are sent to the Java app.
The Java app can send messages to the LDAP database via the JMS server to update LDAP data.
I found out that there is a way to set up JMS replication for ApacheDS (https://cwiki.apache.org/DIRxSRVx11/replication-requirements.html#ReplicationRequirements-GeneralRequirements), but I doubt whether it will work when we have a cluster of several ApacheDS masters plus one JMS replication node sending all modifications to the cluster.
UPDATE: The page describing JMS replication for ApacheDS turned out to be 5 years old, so currently the only way of replication in ApacheDS that I know about is LDAP-protocol-based replication.
There are IDM products that will do what you are asking about. I know NetIQ's IDM products work well with JMS.
OpenLDAP and ApacheDS have a changelog that you could use to determine the changes made. You could then write some code to send those changes to a JMS queue, along these lines:
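A minimal sketch of that idea, assuming the ActiveMQ 5.x client and JNDI; the changelog suffix (cn=changelog), the credentials, and the change-number bookkeeping are hypothetical and depend on how your server exposes its changelog:

    import java.util.Hashtable;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ChangelogForwarder {
        public static void main(String[] args) throws Exception {
            // Connect to the LDAP server over JNDI.
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://localhost:10389");
            env.put(Context.SECURITY_PRINCIPAL, "uid=admin,ou=system");
            env.put(Context.SECURITY_CREDENTIALS, "secret");
            DirContext ldap = new InitialDirContext(env);

            // Connect to ActiveMQ and forward each changelog entry as a text message.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            javax.jms.Connection jms = factory.createConnection();
            jms.start();
            Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("ldap.changes"));

            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.ONELEVEL_SCOPE);
            // A real service would remember the last changeNumber it processed.
            NamingEnumeration<SearchResult> changes =
                    ldap.search("cn=changelog", "(changeNumber>=1)", controls);
            while (changes.hasMore()) {
                SearchResult change = changes.next();
                producer.send(session.createTextMessage(change.getAttributes().toString()));
            }

            jms.close();
            ldap.close();
        }
    }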
I can't speak for ApacheDS, but OpenLDAP already contains a full-blown replication system, with about six different ways to configure it; in other words, you can do it perfectly well, and much more efficiently, without Java and JMS.

Does using ActiveMQ in Master/Slave mode with JDBC preclude use of journaling?

My group is looking to distribute our ActiveMQ queues across multiple brokers to achieve high availability. Of the three supported master-slave setups (pure, shared filesystem, JDBC) we are considering shared file system and JDBC.
I am seeing conflicting statements within the ActiveMQ documentation. Can, or can not, JDBC master-slave setup use ActiveMQ's high-performance journal?
On this page, ActiveMQ claims that
it cannot use the high performance journal.
On this page, ActiveMQ suggests that the two can, in fact, be used together:
For long term persistence we recommend using JDBC coupled with our high performance journal.
Can anyone shed light on this apparent conflict?
You should not use journaling with JDBC master/slave because the journal is not replicated. Any messages in the master's journal that have not yet been batch-submitted to the JDBC store will be stranded until that broker restarts; i.e., the journal is not visible to the slave.
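In other words, for JDBC master/slave configure the plain JDBC store with no journal in front of it. A minimal sketch using the embedded-broker API (hypothetical MySQL host and credentials; the same thing is normally expressed in activemq.xml):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;
    import com.mysql.cj.jdbc.MysqlDataSource;

    public class JdbcMasterSlaveBroker {
        public static void main(String[] args) throws Exception {
            MysqlDataSource dataSource = new MysqlDataSource();
            dataSource.setUrl("jdbc:mysql://dbhost:3306/activemq");
            dataSource.setUser("activemq");
            dataSource.setPassword("secret");

            BrokerService broker = new BrokerService();
            broker.setBrokerName("broker1");

            // Plain JDBC store, no journal: every message goes straight to the
            // database, so it is visible to the slave after a failover.
            JDBCPersistenceAdapter jdbc = new JDBCPersistenceAdapter();
            jdbc.setDataSource(dataSource);
            broker.setPersistenceAdapter(jdbc);

            broker.addConnector("tcp://0.0.0.0:61616");
            broker.start(); // a second broker pointed at the same database waits as the slave
        }
    }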

Cassandra failover vs other databases?

Cassandra offers controlled consistency like "write to 2 nodes and tell me it's done".
Two "master" nodes and some slaves makes system good failover.
MongoDB offers replication pairs - simmilar failover force like cassandra?
Is there any other database with this form-box functionality?
Cassandra is a fully distributed system, so there is no need for explicit failover. If the machine you are sending requests to dies, you just reconnect to another (RRDNS, haproxy, any method is fine). Even losing an entire datacenter is handled by Cassandra without your app having to care.
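To make that concrete, here is a minimal sketch with the DataStax Java driver (3.x assumed; host names and keyspace are hypothetical) showing multiple contact points plus the "write to 2 nodes and tell me it's done" consistency level from the question:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class CassandraFailoverDemo {
        public static void main(String[] args) {
            // Several contact points: if one node is down, the driver simply
            // connects through the others -- no explicit failover step.
            Cluster cluster = Cluster.builder()
                    .addContactPoints("node1.example.com", "node2.example.com", "node3.example.com")
                    .build();
            try (Session session = cluster.connect("demo_keyspace")) {
                Statement insert = new SimpleStatement(
                        "INSERT INTO kv (key, value) VALUES ('greeting', 'hello')");
                insert.setConsistencyLevel(ConsistencyLevel.TWO); // acknowledged by 2 replicas
                session.execute(insert);
            } finally {
                cluster.close();
            }
        }
    }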