RabbitMQ: multiple queues, same consumer, share load

![](https://image.ibb.co/jXgPRy/stackoverflow.jpg)
I'm trying to achieve this using RabbitMQ queues and exchanges. Direct exchanges
with routing keys simply fan out to every queue bound with the same routing key. What I'm trying to achieve is more like load balancing and queue scaling.
Please help me out.
Updated image:
![](https://image.ibb.co/cCgSUJ/stackoverflow.jpg)
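The usual way to share load in RabbitMQ is to bind a single queue to the exchange and run several consumers against that one queue; the broker then round-robins deliveries between them. A minimal sketch in Java (the exchange, queue, and routing-key names are placeholders, not taken from the question):

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class WorkerConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                       // placeholder broker host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // One shared queue bound to a direct exchange; names are illustrative.
        channel.exchangeDeclare("orders.direct", "direct", true);
        channel.queueDeclare("orders.work", true, false, false, null);
        channel.queueBind("orders.work", "orders.direct", "order.created");

        // Fair dispatch: each consumer holds at most one unacked message at a time,
        // so starting N copies of this process spreads the load across them.
        channel.basicQos(1);

        DeliverCallback handler = (consumerTag, delivery) -> {
            System.out.println("Received: " +
                    new String(delivery.getBody(), StandardCharsets.UTF_8));
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("orders.work", false, handler, consumerTag -> { });
    }
}
```

Running more copies of this process scales consumption; basicQos(1) keeps dispatch fair when some messages take longer to process than others.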

Related

ActiveMQ topics vs RabbitMQ Direct exchanges

We are currently looking at ActiveMQ.
Previously we've used RabbitMQ and in particular Direct exchanges whereby a producer can send a single message to a broker which then fans this out onto 1:N other queues.
We would like a similar setup in ActiveMQ where the broker holds the configuration for which messages go where, rather than the services sending messages directly to specific queues or consumers needing to subscribe to specific topics.
I've dug into the documentation and found Virtual Topic Composite Destinations which looks to provide this functionality.
What I am trying to understand now is whether this is the recommended ActiveMQ approach, and whether there are any pitfalls I should be wary of.
Any ActiveMQ war stories much appreciated!
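For reference, the RabbitMQ pattern described above (one publish fanned out to 1:N queues via a direct exchange) can be sketched like this; the exchange, queue, and routing-key names are made up for illustration:

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class DirectFanout {
    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel ch = conn.createChannel();

        // One direct exchange, several queues bound with the same routing key:
        // a single publish is copied to every bound queue (the 1:N fan-out
        // described above). All names here are illustrative.
        ch.exchangeDeclare("events", "direct", true);
        for (String q : new String[]{"billing", "audit", "notifications"}) {
            ch.queueDeclare(q, true, false, false, null);
            ch.queueBind(q, "events", "user.signup");
        }

        ch.basicPublish("events", "user.signup", null,
                "new user".getBytes(StandardCharsets.UTF_8));
        conn.close();
    }
}
```

On the ActiveMQ side, the equivalent broker-held routing is what composite destinations and virtual topics provide, as the question notes.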

Can I disable remote queue access in RabbitMQ cluster?

When creating a RabbitMQ cluster, non-mirrored queues hosted on one node are "remotely accessible" from the other nodes.
To a naive developer it will seem as though they can publish to and consume from any node in the cluster, which gives a false sense of high availability.
If the node hosting the queue dies, the consumer will no longer be able to reach the queue from the other nodes.
Is there a way to disable this behaviour, so that it's obvious that one has to either use a mirrored queue, or create distinct queues on each server, consume from both, and then handle duplicates?
Thanks
It is not possible to disable this behaviour; it is one of the main reasons why you create a cluster.
BTW, you can create a federated cluster by using the federation plug-in.
That way you can keep the nodes isolated and share only the exchanges and/or queues you prefer.

RabbitMQ federation: message deletion

I'm planning to use the RabbitMQ federation plugin to replicate messages from the master data center to a standby one, so I can't use clustered mirrored queues.
Is it possible to replicate message deletions so the downstream queue stays in sync automatically?
If you need to replicate messages from one queue to many consumers, use a shovel to map the desired queue to a fanout exchange, then consume directly from that exchange using an exclusive queue for each consumer.
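The consumer side of that suggestion might look like the sketch below; the shovel itself is configured on the broker and is not shown, and the exchange name is a placeholder:

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class ExclusiveFanoutConsumer {
    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel ch = conn.createChannel();

        // The shovel (broker-side configuration, not shown here) moves messages
        // from the source queue into this fanout exchange.
        ch.exchangeDeclare("replicated.fanout", "fanout", true);

        // Each consumer gets its own server-named, exclusive, auto-delete queue,
        // so every consumer receives its own copy of each message.
        String queue = ch.queueDeclare().getQueue();
        ch.queueBind(queue, "replicated.fanout", "");

        DeliverCallback handler = (tag, delivery) ->
                System.out.println(new String(delivery.getBody(), StandardCharsets.UTF_8));
        ch.basicConsume(queue, true, handler, tag -> { });
    }
}
```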

Why do duplicate messages appear on multiple brokers of an ActiveMQ cluster?

There are two brokers, configured as a cluster through a network connector.
Messages are always sent by a producer to broker0 and consumed by a consumer on broker0. But we found that some duplicated messages are sent to broker1, even though broker0 is working fine.
That is, these duplicated messages exist on both broker0 and broker1. Could anyone tell me the reason?
Thank you
This kind of situation can occur if you are trying to use two independent ActiveMQ instances as a cluster and the client has been given access to both broker URLs.
The solution is to use the master-slave feature, which is designed to provide high availability.
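With a master/slave pair, clients typically connect through the failover transport so they only ever talk to the broker that currently holds the store lock. A minimal sketch (host names and ports are placeholders):

```java
import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        // The failover transport tries the listed brokers and sticks to the one
        // that is currently active, reconnecting transparently on failover.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker0:61616,tcp://broker1:61616)");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... create producers and consumers on the session as usual.
    }
}
```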

Can topic messages be made persistent in ActiveMQ?

I am very new to JMS and ESB.
I am using ActiveMQ as the JMS broker and Mule as the ESB. When I forward messages from one queue to another with the JMS connector parameter "persistentDelivery" set to "true", the messages are retained in the target queue after an ActiveMQ restart. But when forwarding messages from one topic to another, the messages are not retained in the target topic after a restart.
Is there any limitation on message persistence for topics in ActiveMQ?
Thanks in advance.
Regards,
Arijit
Topics are different in that messages are only retained if there is a durable consumer.
See these for more info:
http://activemq.apache.org/how-do-durable-queues-and-topics-work.html
http://stefanlearninglog.blogspot.com/2009/07/persistent-jms-topics-using-activemq.html
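A minimal JMS sketch of a durable topic subscription, along the lines the links above describe (the client ID, topic, and subscription names are placeholders):

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableTopicConsumer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.setClientID("billing-service-1");   // must be unique and stable
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("orders");   // placeholder topic name

        // Durable subscription: the broker keeps persistent messages published
        // while this subscriber is offline and redelivers them when it reconnects
        // with the same client ID and subscription name.
        TopicSubscriber subscriber =
                session.createDurableSubscriber(topic, "orders-billing");
        Message msg = subscriber.receive();
        System.out.println(((TextMessage) msg).getText());
    }
}
```

Messages must also be published with persistent delivery mode for them to survive a broker restart.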
Topics in ActiveMQ are not durable and persistent by default, so if one of your consumers is down, you will lose its messages.
To make a topic durable and persistent you can create a durable consumer, using a unique client ID per consumer.
But again, that does not distribute well if you are following a microservices architecture: multiple pods or replicas of the same service will cause problems when consuming, because no load balancing is possible for durable consumers.
To mitigate this scenario, there is the option of virtual topics in ActiveMQ. More details are provided below.
You send your messages via your producer to a topic named VirtualTopic.MyTopic.
** Note: you must follow this naming convention for the default ActiveMQ configuration, although there is also a way to override it.
Now, to consume your messages with multiple consumers, you have to follow the naming convention on the consumer-side destinations as well, e.g. Consumer.A.VirtualTopic.MyTopic and Consumer.B.VirtualTopic.MyTopic.
These two consumers will receive messages published to the topic created above, with load balancing between multiple replicas of the same consumer; a sketch follows below.
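A minimal sketch of that convention in JMS (the broker URL and message content are placeholders; the VirtualTopic./Consumer. names follow the default convention described above):

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class VirtualTopicExample {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Consumer groups read from their materialized queues; they are created
        // first so the queues exist before anything is published to the topic.
        MessageConsumer consumerA =
                session.createConsumer(session.createQueue("Consumer.A.VirtualTopic.MyTopic"));
        MessageConsumer consumerB =
                session.createConsumer(session.createQueue("Consumer.B.VirtualTopic.MyTopic"));

        // The producer publishes once to the virtual topic.
        MessageProducer producer =
                session.createProducer(session.createTopic("VirtualTopic.MyTopic"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(session.createTextMessage("hello"));

        // Each consumer group receives its own copy of the message.
        System.out.println(consumerA.receive(1000));
        System.out.println(consumerB.receive(1000));
        connection.close();
    }
}
```

Each Consumer.X.VirtualTopic.MyTopic queue gets its own copy of every message, and multiple instances of the same consumer compete on their shared queue, which is what provides the load balancing.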
I hope this helps you fix your problem with ActiveMQ topics.