We are using RabbitMQ as the messaging broker in our project. We have gone through some blogs and documentation about the persistence store in RabbitMQ.
There are default options for the message store, such as in memory, the queue index, and flat files on disk.
As we are more concerned about message durability, we are looking for the best approach to a data store with RabbitMQ.
Also, just to confirm: can we use a database as the persistence store, the way we can for another broker, ActiveMQ?
Any help/thoughts would be appreciated.
RabbitMQ uses a custom database for its message store, and unlike ActiveMQ it is not possible to replace it with an external database.
Starting from version 3.7.0 it is possible to change the index database using eleveldb; see: https://github.com/rabbitmq/rabbitmq-msg-store-index-eleveldb
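For illustration, here is a minimal sketch of what switching the index might look like, assuming the Erlang-term advanced.config format; msg_store_index_module is the documented switch for the index implementation, but the exact module name below is an assumption taken from the plugin's naming, so check the plugin README:

```
%% advanced.config -- hypothetical sketch; verify the module name
%% against the plugin README before relying on it.
[
  {rabbit, [
    {msg_store_index_module, rabbit_msg_store_eleveldb_index}
  ]}
].
```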
I have an instance of ActiveMQ 5.16.4 running that is using MySQL as its persistent data store. Recently the MySQL server had some issues, and ActiveMQ lost its connection to MySQL. That caused multiple Spring microservices to throw errors because ActiveMQ wasn't working.
Is it possible to have master/slave ActiveMQ running where the master and slave use separate persistence storage?
I have done some research and found "pure master/slave", but it says that it is deprecated, is not recommended for use, and will be removed in 5.8. It says to use shared storage, which I am trying to avoid (because my problem is what happens if the storage itself is down).
What are my options to keep ActiveMQ running if it loses its connection to the database?
If you're using ActiveMQ "Classic" (i.e. 5.x) then your only option is to use shared storage between the master and the slave. This could be a shared file system or a relational database. This, of course, is a single point of failure.
However, there are both file system and database technologies that can mitigate this risk. For example you could use a replicated file system (e.g. Ceph or GlusterFS) or a replicated database (e.g. MySQL).
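To make the shared-database option concrete, here is a minimal sketch of a JDBC master/slave pair using ActiveMQ "Classic"'s embedded-broker Java API; the MySQL host, database name, and credentials are placeholders, and both brokers of the pair run the same code against the same database:

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;

import com.mysql.cj.jdbc.MysqlDataSource;

public class SharedJdbcBroker {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; every broker in the pair
        // must point at the same database.
        MysqlDataSource ds = new MysqlDataSource();
        ds.setURL("jdbc:mysql://db-host:3306/activemq");
        ds.setUser("amq");
        ds.setPassword("secret");

        JDBCPersistenceAdapter store = new JDBCPersistenceAdapter();
        store.setDataSource(ds);

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(store);
        broker.addConnector("tcp://0.0.0.0:61616");
        // A slave blocks here until it acquires the database lock,
        // so only one broker of the pair serves clients at a time.
        broker.start();
        broker.waitUntilStopped();
    }
}
```

The database lock is what performs the master election, which is also why the database itself remains the single point of failure unless it is replicated.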
You might also consider using ActiveMQ Artemis (i.e. the next-generation broker from ActiveMQ), which supports replication natively.
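As a rough illustration of that native replication, here is a hedged sketch using the Artemis embedded API; it is deliberately incomplete, since a working primary/backup pair also needs acceptors, connectors, and a cluster connection so the two nodes can find each other:

```java
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.ha.ReplicatedPolicyConfiguration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

public class ReplicatedArtemis {
    public static void main(String[] args) throws Exception {
        Configuration config = new ConfigurationImpl()
                .setPersistenceEnabled(true)
                .setJournalDirectory("data/journal")  // arbitrary path
                .setSecurityEnabled(false)            // keeps the sketch simple
                // The backup node would use ReplicaPolicyConfiguration instead.
                .setHAPolicyConfiguration(new ReplicatedPolicyConfiguration());

        EmbeddedActiveMQ broker = new EmbeddedActiveMQ();
        broker.setConfiguration(config);
        broker.start();
    }
}
```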
I have a WebLogic cluster which has 4 nodes (managed servers). Today I found that two of them were down, and I was surprised to find that some JMS messages were not sent.
I wonder if this is the normal behaviour? Shouldn't the cluster continue to deliver JMS messages using the two available nodes?
In order to reach high availability for JMS you should configure two things:
Migratable targets.
Persistence based either on shared storage or a database.
Why migratable targets? Because messages produced by, e.g., JMSServer01 can only be processed by JMSServer01. Thus, when you configure migratable targets, JMSServer01 will be migrated automatically to another WebLogic server if its own server fails.
Why persistence based on shared storage or a database? Because once the JMS server is migrated to another server, it will try to process its pending messages, which must therefore live in a shared store or database that can be seen by all your WebLogic servers.
You can find more information here: https://docs.oracle.com/middleware/1213/core/ASHIA/jmsjta.htm#ASHIA4396
CloudHub workers are NOT clustered; however, we get message-loss protection and workload distribution across Mule instances using persistent queues. Also, we can use the default persistent object store (_defaultUserObjectStore) for distributed caching (with a tweak). Correct me if I am wrong here.
With the above features present, what are we missing in CloudHub compared to on-premise clusters? (Is it the prevention of concurrency issues or guarantees around once-only message delivery?)
And first of all, why did MuleSoft not enable the clustering feature on CloudHub?
I would say that with the above features present you do not miss out on anything. Also keep in mind that even in an on-prem HA cluster, the shared queues and states (object stores) are kept in shared memory by default, and there is no persistence if the complete cluster goes down. To get persistence you need to make similar tweaks for an on-prem cluster too. As such, for true message reliability I would suggest you look at an external message broker or service such as Anypoint MQ.
As for why MuleSoft did not enable clustering, I cannot answer, since I'm not a MuleSoft employee. However, best practice in integration and API design is to keep the application stateless. When this is followed and you use an external message broker, such as Anypoint MQ, to implement the reliable-messaging pattern, the need for the Mule runtime's HA cluster capabilities is small.
I'm investigating the use of a message broker that does not depend on any external services. I hit upon ActiveMQ, which was using replicated LevelDB, and that apparently required ZooKeeper. With ActiveMQ now switching back to KahaDB, is ZooKeeper still required to run ActiveMQ?
Any recommendations on what the best message broker would be? My deployment does not deal with high-scale pub/sub; I'm looking for something very lightweight that can support reliable message delivery, persistent messages, and HA.
I found the answer to my own question
http://activemq.apache.org/kahadb-master-slave.html
Yes, even KahaDB requires ZooKeeper at the moment.
ActiveMQ does not require ZooKeeper to run. The default store, KahaDB, does not have a replication feature like the one in LevelDB, and so does not need any ZooKeeper instances.
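As a sanity check, a standalone broker with the default KahaDB store needs nothing beyond the broker itself; here is a minimal embedded sketch (the data directory and connector URL are arbitrary choices):

```java
import java.io.File;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class KahaDbBroker {
    public static void main(String[] args) throws Exception {
        // KahaDB is the default store, so setting it explicitly is
        // only for illustration; no ZooKeeper ensemble is involved.
        KahaDBPersistenceAdapter kahaDb = new KahaDBPersistenceAdapter();
        kahaDb.setDirectory(new File("data/kahadb"));

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(kahaDb);
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```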
For HA you might want to look into ActiveMQ Artemis, which offers solutions beyond what exists in ActiveMQ proper.
I am using replicated LevelDB in ActiveMQ v5.11.1. I have a use case for delayed message handling. I have gone through the documentation, and it looks like I can't use it with LevelDB (only KahaDB supports the scheduler).
I have also seen a couple of posts about the in-memory scheduler (https://dzone.com/articles/coming-activemq-v511-memory), but I think I would need to run the broker with persistent=false.
Is there a way I can use the in-memory scheduler with replicated LevelDB?
Thanks,
Anuj
When using LevelDB, if you've enabled scheduler support and you have the activemq-kahadb-store JAR on the classpath, the broker will still create a job scheduler store: the default scheduler store is based on KahaDB but is not intrinsically tied to it, so it can be created standalone.
There are also setters on the BrokerService where you can set the scheduler store you want to use, as in the sketch below.
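For example, here is a sketch of wiring in the in-memory scheduler store through those setters, assuming ActiveMQ 5.11+ where org.apache.activemq.broker.scheduler.memory.InMemoryJobSchedulerStore was introduced; the persistence adapter is left at its default here, but a replicated LevelDB store could be set alongside it in the same way:

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.scheduler.memory.InMemoryJobSchedulerStore;

public class InMemorySchedulerBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setSchedulerSupport(true);
        // The message store itself stays persistent; only scheduled
        // jobs live in memory, so pending delayed messages are lost
        // on a restart or a failover.
        broker.setJobSchedulerStore(new InMemoryJobSchedulerStore());
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```

The trade-off is the one hinted at in the question: keeping the broker persistent while scheduling in memory means normal messages survive a failover but scheduled ones do not.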