We are currently evaluating ActiveMQ.
Previously we used RabbitMQ, in particular direct exchanges, whereby a producer sends a single message to the broker, which then fans it out onto 1:N queues.
We would like a similar setup in ActiveMQ, where the broker holds the configuration for which messages go where, rather than the services sending messages directly to specific queues or consumers needing to subscribe to specific topics.
I've dug into the documentation and found Virtual Topics and Composite Destinations, which look to provide this functionality.
What I am trying to understand now is whether this is the recommended ActiveMQ approach, and whether there are any pitfalls I should be wary of.
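For concreteness, here is roughly the broker-side configuration I have in mind, based on the Virtual Destinations documentation (just a sketch; the queue names are invented):

    <broker xmlns="http://activemq.apache.org/schema/core">
      <destinationInterceptors>
        <virtualDestinationInterceptor>
          <virtualDestinations>
            <!-- every message sent to orders.in is fanned out to both queues -->
            <compositeQueue name="orders.in">
              <forwardTo>
                <queue physicalName="orders.billing"/>
                <queue physicalName="orders.shipping"/>
              </forwardTo>
            </compositeQueue>
          </virtualDestinations>
        </virtualDestinationInterceptor>
      </destinationInterceptors>
    </broker>

The intent is that producers only ever know about orders.in, while the broker owns the fan-out.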
Any ActiveMQ war stories much appreciated!
Related
I've noticed that some apps send incorrect messages to a certain queue in RabbitMQ. Since lots of apps can access RabbitMQ in my environment, it's too difficult to identify the producers by reviewing code.
Is there any way to find the producers? Do I need some monitoring tools?
Not without modifying the clients.
The AMQP protocol provides the app-id property exactly for that purpose. Message publishers should set the app-id when publishing messages to RabbitMQ so that consumers can infer the source of the message.
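For example, with the RabbitMQ Java client, setting it might look like this (a minimal sketch; the app name and queue are invented, and the "orders" queue is assumed to exist):

    import java.nio.charset.StandardCharsets;
    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class AppIdPublisher {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                // Stamp every outgoing message with the publishing app's identity,
                // so bad messages can be traced back to their producer.
                AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                        .appId("inventory-service")
                        .build();
                channel.basicPublish("", "orders", props,
                        "hello".getBytes(StandardCharsets.UTF_8));
            }
        }
    }

On the consuming side, the same property is available from the delivered message's BasicProperties via getAppId().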
Isn't Apache Kafka just another implementation of JMS?
I am using JMS+AMQ in my application and migrating to Apache Kafka. Do I have to change all my JMS code?
No, Kafka is different from JMS systems such as ActiveMQ.
see ActiveMQ vs Apollo vs Kafka
Kafka has fewer features than ActiveMQ, as the emphasis has been put on performance. So before migrating, check that the features you use in AMQ exist in Kafka.
However, there is an open proposal for a bridge between JMS and Kafka that would allow exactly what you need. Maybe the link below can help:
https://issues.apache.org/jira/browse/KAFKA-1995
Actually, the two are not the same. And with a little more time seeing the two co-exist - and listening to problems and happy points from those deploying each in the field - there is a little more to say about each one.
Firstly, JMS supports both point-to-point messaging (where messages are sent to a queue and each message is consumed by a single consumer) and the publish-and-subscribe (pub/sub) model (where messages are written to a single topic and consumers independently decide which messages to consume).
In a point-to-point messaging architecture, message producers and consumers know each other, whereas in a pub/sub model they do not. Apache Kafka focuses on a pub/sub model, maintaining a separate log/topic from which consumers read from offsets. Kafka is also built for the cloud, with high throughput as a core consideration.
Many in our community and at meetups throw their hands up in frustration at MOMs (message-oriented middleware) like JMS and switch to Kafka for what boils down to one reason: scalability. They argue that Kafka is better suited for scale than other MOMs because Kafka maintains a partitioned topic log. In doing so, Kafka can split up message flow to groups of consumers by partition and batch-transmit the messages.
This concept also allows Kafka to have more granular control over ACLs (access control) to Kafka Consumers, although there are some issues there, which Apache Pulsar is addressing.
Finally, on Kafka, since the client/consumer decides which messages to consume (by offset in the topic), this removes some of the producer-side complexity of routing rules built into MOMs like JMS.
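To make that last point concrete, here is a minimal sketch with the standard Kafka Java client, where the consumer itself picks the offset to read from (the topic, partition, and offset are invented):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class OffsetConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // The consumer, not the broker, decides where in the log to start.
                TopicPartition partition = new TopicPartition("orders", 0);
                consumer.assign(Collections.singletonList(partition));
                consumer.seek(partition, 42L);

                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }

A JMS queue consumer, by contrast, has no say in this: the broker decides which message it receives next.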
There are more differences than that, but this is a distillation of some of the ones that keep coming up! Hope this helps.
No, Kafka uses its own non-standard protocol and clients.
However, there's a 3rd-party JMS Client for Kafka from Confluent.
I would like to use MassTransit in a similar way to NServiceBus, where every publisher and subscriber has a local queue. However, I want to use RabbitMQ.
So do all my desktop clients have to have RabbitMQ installed? I think so. Then should I just connect the 50 desktop clients and 2 servers into a cluster?
I know the two servers must be in the same cluster. However, 50 client nodes seems a bit much to put in one cluster... Or should I shovel them or federate them to the server cluster exchange?
The desktop machines send messages like LockOrder and UnlockOrder.
The servers deal with backend HL7 messages.
Any help and advice here is much appreciated; this is all on Windows machines.
Basically I am leaving NServiceBus behind, as it is now too expensive; they are aiming it at large corporations with big budgets. Hence MassTransit.
However, I want reliable/durable messaging, hence local queues on ALL publishers and ALL subscribers.
The desktops also use CQS to update their views.
should I just connect the 50 desktop clients and 2 servers into a cluster?
Yes, you have to connect your clients to the cluster.
However 50 client nodes seems a bit much to put in one cluster.
No (although it depends on how big your servers are); 50 clients is a small number.
Or should I shovel them or Federate them to the server cluster exchange?
The desktop machines send messages like LockOrder and UnlockOrder.
I think the cluster is better, because federation and the shovel are asynchronous, which means your LockOrder might not be replicated in time.
However I want reliable/durable messaging, hence local queues on ALL publishers and ALL subscribers
With RMQ you can create durable queues and persistent messages, and it does not matter whether the client is connected: it will get the messages when it connects to the broker.
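For illustration, declaring a durable queue and publishing a persistent message looks like this with the RabbitMQ Java client (the clients in the question are .NET, but the broker-side behaviour is the same; queue name and payload are invented):

    import java.nio.charset.StandardCharsets;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.MessageProperties;

    public class DurablePublish {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                // durable = true: the queue definition survives a broker restart
                channel.queueDeclare("lock-orders", true, false, false, null);
                // PERSISTENT_TEXT_PLAIN marks the message itself as persistent,
                // so it is written to disk and survives a restart as well
                channel.basicPublish("", "lock-orders",
                        MessageProperties.PERSISTENT_TEXT_PLAIN,
                        "LockOrder:12345".getBytes(StandardCharsets.UTF_8));
            }
        }
    }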
I hope it helps.
I have a FOSS ESB project called Shuttle, if you would like to give it a spin: https://github.com/Shuttle/shuttle-esb
I haven't used NServiceBus for a while and actually started Shuttle when it went commercial. The implementation is somewhat different from NServiceBus. I don't know MassTransit at all, though. Currently process managers (sagas) have to be hand-rolled in Shuttle whereas MassTransit and NServiceBus have this incorporated. If I do get around to adding sagas I'll be adding them as a Module that can be plugged into the receiving pipeline. This way one could have various implementations and choose the flavour you like :)
Back to your issue. Shuttle has the concept of an optional outbox for queuing technologies like RabbitMQ. Shuttle does have a RabbitMQ implementation. I believe the outbox works somewhat like 'shovel' does. So the outbox would be local and sending messages would first go to the outbox. It would periodically try to send messages on to the recipients and, after a configurable number of attempts, send the message to an error queue. It can then be returned to the outbox for further attempts, or even moved directly to the recipient queue once it is up.
Documentation here: http://shuttle.github.io/shuttle-esb/
For the company I work for, we would like to use RabbitMQ as our main message bus. The idea is that every application uses its own vhost for internal communication, and that via the shovel or federation plugin we make it possible to share certain types of events across multiple vhosts (maybe even across multiple non-clustered machines).
We chose a vhost per application to separate internal communication from public events and to keep the security adjustable per application.
Based on the information published on the RabbitMQ website, it is not clear to me when I should choose shovels and when the federation plugin.
RabbitMQ has the following explanation when to use what:
Typically you would use the shovel to link brokers across the internet when you need more control than federation provides.
What is the fine-grained control in shovels that I would miss if I chose federation?
At this moment I think I would prefer the federation plugin, because I could automate the inter-vhost communication via the REST API it provides.
With shovels, I would need to change the shovel configuration and restart the RabbitMQ instance every time we want to share an event between vhosts. Are my thoughts correct about this?
We are currently running RMQ on Windows with clients connecting from .NET. In the near future Java/Perl/PHP clients will join.
To summarize my questions:
What is the fine-grained control in shovels that I would miss if I chose federation?
Is it correct that the only way to change the inter-vhost communication when I use shovels is by changing the config file and restarting the instance?
Does the setup (a vhost per application) make sense, or am I missing the point completely?
Shovels and federation provide different means of forwarding messages from one RabbitMQ node to another.
Federated Exchange
With a federated exchange, an exchange on the downstream (destination) node receives a copy of the messages that are published to the upstream (source) exchange; queues and consumers on the upstream node continue to work as normal.
Federated exchanges are similar to exchange-to-exchange bindings, in that they can (optionally) subscribe to a limited set of messages from an upstream exchange.
Federated Queue
(NOTE: These are new in RabbitMQ 3.2.x)
With a federated queue, consumers can be connected to the queue on both the upstream (source) and downstream (destination) nodes.
In essence, the downstream queue is a consumer on the upstream queue, with the expectation that there will be additional downstream consumers that process the messages in the same manner as a consumer attached to the upstream queue.
Any messages consumed by the downstream (federated) queue will not be available to consumers on the upstream queue.
Use case: if consumers are being migrated from one node to another, a federated queue allows this to happen without messages being missed or processed twice.
Use case, from the RabbitMQ docs:
The typical use would be to have the same "logical" queue distributed over many brokers. Each broker would declare a federated queue with all the other federated queues upstream. (The links would form a complete bi-directional graph on n queues.)
Shovel
Shovels, on the other hand, attach an "upstream" queue to a "downstream" exchange. (I place the terms in quotes because the shovel documentation does not describe the nodes with the same semantics as the federation documentation.)
The shovel consumes the messages from the queue and sends them to the exchange on the destination node. (NOTE: While not normally discussed as part of this pattern, there is nothing stopping a consumer from connecting to the queue on the origin node.)
To answer the specific questions:
What is the fine-grained control in shovels that I would miss if I chose federation?
A shovel does not have to reside on an "upstream" or "downstream" node. It can be configured and operate from an independent node.
A shovel can create all of the elements of the linkage by itself: the source queue, the bindings of the queue, and the destination exchange. Thus, it is non-invasive to either the source or destination node.
Is it correct that the only way to change the inter-vhost communication when I use shovels is by changing the config file and restarting the instance?
This has generally been the accepted downside of the shovel.
With the following command (caveat: only tested on RabbitMQ 3.1.x, and with a very specific rabbitmq.config file that contains only the rabbit and rabbitmq_shovel entries, in that order) you can reload the shovel configuration from the specified file (in this case /etc/rabbitmq/rabbitmq.config):

    rabbitmqctl eval 'application:stop(rabbitmq_shovel), {ok, [[{rabbit, _}|[{rabbitmq_shovel, [{shovels, Shovels}] }]]]} = file:consult("/etc/rabbitmq/rabbitmq.config"), application:set_env(rabbitmq_shovel, shovels, Shovels), application:start(rabbitmq_shovel).'
Does the setup (vhost per application) make sense or am I missing the point completely?
This decision is going to depend on your use case. vhosts primarily provide logical (and access) separation between queues/exchanges and authorized users.
Shovel acts like a well-designed built-in consumer. It can consume messages from a source broker and queue, and publish them into a destination broker and exchange. You could write an application to do that, but shovel already got it right - if all you need is to move messages from a queue to an exchange in the same or another broker, shovel can do it for you. Just as a well-behaving app, it can declare exchanges/queues/bindings, reconnect, change the routing key etc. You can set it up on the source or on the destination broker, or even use a third broker. It's basically an AMQP client.
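Since you could indeed write such an application yourself, here is a rough sketch of that idea with the RabbitMQ Java client (host, queue, and exchange names are invented, and the source queue and destination exchange are assumed to exist; a real shovel also handles reconnection, declarations, and so on):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    // A naive, hand-rolled "shovel": consume from a queue on the source broker
    // and republish each message to an exchange on the destination broker.
    public class NaiveShovel {
        public static void main(String[] args) throws Exception {
            ConnectionFactory source = new ConnectionFactory();
            source.setHost("source-broker");
            ConnectionFactory destination = new ConnectionFactory();
            destination.setHost("destination-broker");

            Connection sourceConn = source.newConnection();
            Connection destConn = destination.newConnection();
            Channel sourceChannel = sourceConn.createChannel();
            Channel destChannel = destConn.createChannel();

            DeliverCallback relay = (consumerTag, delivery) -> {
                // Republish on the destination, preserving routing key and
                // properties, then ack on the source to remove the message there.
                destChannel.basicPublish("events", delivery.getEnvelope().getRoutingKey(),
                        delivery.getProperties(), delivery.getBody());
                sourceChannel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            sourceChannel.basicConsume("events.outbound", false, relay, consumerTag -> { });
        }
    }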
Federation, on the other hand, is used to connect your broker to one or multiple upstream brokers, or you can even create chains of brokers, bending the topology any way you like. You can federate exchanges or queues and, for example, distribute messages to multiple brokers without needing to bind additional queues to a topic exchange or use a fanout exchange and shovel messages from each queue to a downstream broker.
To recap, federation operates at a higher level, while shovel is mostly "just" a well-written client.
To reconfigure shovel, you have to restart the broker, unfortunately.
I don't think you really need a vhost per app. You can add a per-app user to the broker without separate vhosts. I'm not sure what you mean by "share an event between vhosts", though.
I am very new to JMS and ESB.
I am using ActiveMQ as the JMS provider and Mule as the ESB. When I forward messages from one queue to another with the JMS connector parameter "persistentDelivery" set to "true", the messages are retained in the target queue after an ActiveMQ restart. But when forwarding messages from one topic to another, the messages are not retained in the target topic after a restart.
Is there any limitation on message persistence for topics in ActiveMQ?
Thanks in advance.
Regards,
Arijit
Topics are different in that messages are only retained if there is a durable consumer.
See these for more info:
http://activemq.apache.org/how-do-durable-queues-and-topics-work.html
http://stefanlearninglog.blogspot.com/2009/07/persistent-jms-topics-using-activemq.html
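For reference, a durable topic subscription looks roughly like this in plain JMS (a sketch assuming the standard ActiveMQ client; the client ID, topic, and subscription name are invented):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import javax.jms.Topic;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class DurableTopicConsumer {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            // A durable subscription needs a stable client ID...
            connection.setClientID("reporting-service");
            connection.start();

            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("orders.events");
            // ...and a subscription name; the broker then retains messages for
            // this subscription while the consumer is offline.
            MessageConsumer consumer = session.createDurableSubscriber(topic, "reporting-sub");
            Message message = consumer.receive(5000);
            System.out.println("Got: " + message);
            connection.close();
        }
    }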
Topics in ActiveMQ are not durable and persistent by default, so if one of your consumers is down, you will lose messages.
To make a topic durable and persistent, you can create a durable consumer with a unique client ID per consumer.
But again, that does not distribute well if you are following a microservices architecture: multiple pods or replicas of the same consumer become a problem, because no load balancing is possible for durable consumers.
To mitigate this, ActiveMQ offers Virtual Topics. More details below.
Your producer sends messages to a topic named VirtualTopic.MyTopic.
Note: you have to follow this naming convention with the default ActiveMQ configuration, though there is a way to override it.
Now, to consume messages with multiple consumers, you follow the naming convention on the consumer-side destinations as well, e.g. Consumer.A.VirtualTopic.MyTopic and Consumer.B.VirtualTopic.MyTopic.
These two consumers will receive the messages sent to the topic above, with load balancing between multiple replicas of the same consumer.
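Putting it together, a minimal sketch with the ActiveMQ JMS client (the destination names follow the convention above; everything else is invented):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class VirtualTopicDemo {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Create the consumers first so the broker materialises their queues;
            // messages published before a queue exists are not delivered to it.
            MessageConsumer consumerA =
                    session.createConsumer(session.createQueue("Consumer.A.VirtualTopic.MyTopic"));
            MessageConsumer consumerB =
                    session.createConsumer(session.createQueue("Consumer.B.VirtualTopic.MyTopic"));

            // The producer publishes once, to the virtual topic.
            MessageProducer producer =
                    session.createProducer(session.createTopic("VirtualTopic.MyTopic"));
            producer.send(session.createTextMessage("order created"));

            // Each consumer group gets its own copy of the message; replicas of
            // the same group would share its queue and be load-balanced.
            System.out.println("A got: " + consumerA.receive(5000));
            System.out.println("B got: " + consumerB.receive(5000));
            connection.close();
        }
    }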
I hope this helps you fix your problem with ActiveMQ topics.