I have many RabbitMQ brokers and a single Celery application.
I would like to route all messages from all brokers to this single application. What is the simplest way to do this?
I know that it can be done by creating a federation between the RabbitMQ brokers, and I would like to know whether there is another way to do it.
I think you can't do this in Celery. There is an option to provide a list of brokers, but it is used as a failover strategy. Read more here
broker_url = [
    'transport://userid:password@localhost:port//',
    'transport://userid:password@hostname:port//'
]
Related
I have a project where we are using RabbitMQ as the message broker. I have the concerns below; please help with the same.
If RabbitMQ goes down, how can we retrieve the queued messages? Is there any configuration in RabbitMQ for this?
Can a combination of Java threads and collections be used as an alternative to RabbitMQ? If yes, please help with an example.
'You should listen to the ShutdownListener callback on both the Connection and Channel classes.' That way, you know when the queue is down. After that, you need to re-transmit your queued messages. This is what the official documentation says: https://www.rabbitmq.com/reliability.html
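Here is a minimal sketch of that approach, assuming the standard RabbitMQ Java client (the broker host is a placeholder):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ShutdownAwareConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker host

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Fires when the connection is closed, either by the application
        // or because the broker went down.
        connection.addShutdownListener(cause ->
                System.err.println("Connection lost: " + cause.getMessage()));

        // Fires when this particular channel is closed.
        channel.addShutdownListener(cause ->
                System.err.println("Channel closed: " + cause.getMessage()));
    }
}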
Of course you can implement your own library, but you should consider whether that would really be better for you. I suggest you not do that. RabbitMQ is a well-known open source broker that many people have used and trusted for years. I see no downside to using it in any project.
Deploy RabbitMQ on Kubernetes with StatefulSets. This will replicate state across multiple instances, one of which will be the primary. Failover will be handled by Kubernetes.
See https://kublr.com/blog/reliable-fault-tolerant-messaging-rabbitmq-kublr/
When creating a RabbitMQ cluster, non-mirrored queues hosted on one node are "remotely accessible" from the other nodes.
To a naive developer it will seem as though they can publish to and consume from any node in a cluster, which gives them a false sense of high availability.
If the node hosting the queue dies, the consumer will no longer be able to reach the queue from the other nodes.
Is there a way to disable this behaviour, so that it is obvious that one has to either have a mirrored queue or create a distinct queue on each server, consume from both, and then handle duplicates?
Thanks
It is not possible to disable this behaviour; it is one of the main reasons why you create a cluster.
BTW, you can create a federated cluster by using the federation plug-in.
So you can:
have isolated nodes
share only the exchanges and/or queues you prefer.
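As a rough sketch, federation is enabled with the plug-in plus an upstream parameter and a policy; the upstream name, credentials and exchange pattern below are placeholders:

rabbitmq-plugins enable rabbitmq_federation
rabbitmqctl set_parameter federation-upstream upstream-a '{"uri":"amqp://user:pass@broker-a"}'
rabbitmqctl set_policy --apply-to exchanges federate-logs "^logs\." '{"federation-upstream-set":"all"}'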
Q: We want to publish the same message to different ActiveMQ servers. Is there an approach where we publish once and ActiveMQ forwards that message to another instance?
Or is there any way we can do it through ActiveMQ config changes?
There is not much context in the question, but a simple Topic together with a Network of Brokers should do that.
The idea is that you connect multiple brokers using a "network of brokers"; messages sent to a topic will then be available to all clients on all brokers throughout the network.
There are a lot of corner cases when it comes to networks of brokers and topics, but it should do the job.
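A network connector can also be added programmatically when embedding a broker; a minimal sketch, where the broker names, ports and the duplex choice are assumptions:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class NetworkedBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("brokerA");
        broker.addConnector("tcp://0.0.0.0:61616"); // clients connect here

        // Forward messages on demand to brokerB; duplex forwards in both directions.
        NetworkConnector nc = broker.addNetworkConnector("static:(tcp://brokerB:61616)");
        nc.setDuplex(true);

        broker.start();
    }
}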
There are 2 brokers, configured as a cluster through a network connector.
Messages are always sent by a producer to broker0 and consumed by a consumer of broker0. But we found that some duplicated messages are sent to broker1, even when broker0 is working well.
That is to say, these duplicated messages are contained in both broker0 and broker1. Could anyone tell me the reason?
Thank you
Such a situation can occur if you are trying to use two independent ActiveMQ instances in a cluster and the client has been given access to both broker URLs.
The solution is to use the master-slave feature, which is designed to provide high availability.
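On the client side, a master-slave pair is typically addressed through the failover transport, so the client talks to one broker at a time and reconnects to the other on failure; a minimal sketch, with the broker hosts as placeholders:

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        // The failover transport keeps a single active connection and
        // transparently reconnects to the other URL if it drops.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker0:61616,tcp://broker1:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers and consumers as usual
        connection.close();
    }
}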
Are ActiveMQ, Redis and Apache Camel the right combination?
I am planning a high-performance, enterprise-level integration solution across multiple applications.
My objective is to make the solution:
a. independent of the consumers' performance
b. easy to troubleshoot in case of any issue
c. highly available with failover support
d. able to handle 10k msgs per second
Here I'm planning to have:
a. a network of ActiveMQ brokers running on all app servers, storing the consumed messages in a Redis data store
b. from the Redis data store, the application can retrieve the messages through Camel endpoints
(a Camel endpoint is chosen so that messages can be processed before reaching the app).
Also, can ActiveMQ be dropped in favour of only Redis + Apache Camel? From the discussion forums I see that Redis does most of what ActiveMQ does.
Could anyone advise on this technology stack?
ActiveMQ and Camel work great together and scale very well; handling that load should be no problem given proper hardware.
Are you thinking about something like this?
Message producer App -> ActiveMQ -> Camel -> Redis
Message Consumer App <- Camel [some endpoint] <- Redis
Putting ActiveMQ in between is usually a very good way to achieve HA and load balancing and to make the solution elastic. Depending on your specific setup with machines etc., ActiveMQ can help in many ways to solve HA issues.
Removing ActiveMQ can be a good option if your apps use some protocol other than JMS/ActiveMQ messaging, i.e. HTTP, raw TCP or similar. Can you elaborate on how the apps will communicate with Camel? ActiveMQ supports transactions and guaranteed delivery by default, and you can live with a limited number of threads on the server, even for your heavy traffic. For other protocols, this might be a bit trickier to achieve. Without an HA layer (cluster) in ActiveMQ, you need to set up Redis to handle HA in all aspects, which might be just as easy, but Redis is a bit memory hungry, so be aware of that.
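As a minimal sketch of the first flow above (the queue name, Redis host and list key are assumptions, and it assumes the camel-activemq component and the Jedis client are on the classpath):

import org.apache.camel.builder.RouteBuilder;
import redis.clients.jedis.Jedis;

public class StoreInRedisRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Consume from ActiveMQ and push each message body onto a Redis list,
        // from which the consumer app can pick messages up at its own pace.
        from("activemq:queue:incoming")
            .process(exchange -> {
                String body = exchange.getIn().getBody(String.class);
                try (Jedis redis = new Jedis("localhost", 6379)) {
                    redis.rpush("incoming-messages", body);
                }
            });
    }
}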