I wonder if it's possible using NServiceBus to subscribe to all messages of a type without specifying the publisher's endpoint.
The background for this is a distributed algorithm that uses the distributor infrastructure of NServiceBus to delegate subproblems to distributed workers on the network.
After a task is finished, the worker should send a message notifying the sender.
I could use IBus.Reply() to notify it, but I also have some monitoring and logging services that are interested in those messages. Making the sender republish all received replies doesn't sound right.
Can I subscribe to a message from multiple publishers in NServiceBus?
You're exactly right to use Reply - simple and works.
For logging/monitoring, you can enable message auditing so that each endpoint forwards the messages it receives to an audit queue.
Requirement
A system undergoes a state change, and multiple other parts of the system (let's call them observers) have to know about it so they can perform actions based on the current state. The observers' actions are important: if some of the observers are not online (not listening currently due to some trouble, but will be back soon), the message should not be discarded until all the observers have received it.
Trying to accomplish this with a pub/sub model, here are my findings (please correct me if this understanding is wrong):
The publisher creates an event on a specific topic, and multiple subscribers can consume the same message. This model either provides no delivery guarantee (in Redis), or delivery is guaranteed once (with messaging queues), i.e. when one of the consumers acknowledges a message, the message is discarded (RabbitMQ).
Example
A new Person Profile entity gets created in DB
Now,
A background verification service has to know this to trigger the verification process.
Subscriptions service has to know this to add default subscriptions to the user.
Both tasks are important and unrelated, and they can run in parallel.
In the queue model, if the subscription service is down for some reason and the background verification process acknowledges the message, the message will be removed from the queue; and if it is fire-and-forget like most pub/sub, delivery is not guaranteed for either service anyway.
One more point: the two tasks are unrelated and need not be triggered one after the other.
In short, I need to make sure all the consumers get the same message and can acknowledge it individually; the message should be evicted only after all the consumers have acknowledged it. Neither of the above approaches does this.
Is there anything I am missing here? How should I approach this problem?
This scenario is explicitly supported by RabbitMQ's model, which separates "exchanges" from "queues":
A publisher always sends a message to an "exchange", which is just a stateless routing address; it doesn't need to know what queue(s) the message should end up in
A consumer always reads messages from a "queue", which contains its own copy of messages, regardless of where they originated
Multiple consumers can subscribe to the same queue, and each message will be delivered to exactly one consumer
Crucially, an exchange can route the same message to multiple queues, and each will receive a copy of the message
The key thing to understand here is that while we talk about consumers "subscribing" to a queue, the "subscription" part of a "pub-sub" setup is actually the routing from the exchange to the queue.
So a RabbitMQ pub-sub system might look like this:
A new Person Profile entity gets created in DB
This event is published as a message to an "events" topic exchange with a routing key of "entity.profile.created"
The exchange routes copies of the message to multiple queues:
A "verification_service" queue has been bound to this exchange to receive a copy of all messages matching "entity.profile.#"
A "subscription_setup_service" queue has been bound to this exchange to receive a copy of all messages matching "entity.profile.created"
The consuming scripts don't know anything about this routing; they just know that messages will appear in the queue for events that are relevant to them:
The verification service picks up the copy of the message on the "verification_service" queue, processes, and acknowledges it
The subscription setup service picks up the copy of the message on the "subscription_setup_service" queue, processes, and acknowledges it
If there are multiple consuming scripts looking at the same queue, they'll share the messages on that queue between them, but still completely independently of any other queue.
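Here's a minimal sketch of that topology using the RabbitMQ Java client. The exchange, queue, and binding-key names mirror the example above; the connection details (localhost, default credentials) and the JSON payload are assumptions for illustration only.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.nio.charset.StandardCharsets;

public class ProfileCreatedPubSub {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker, default credentials

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // Stateless routing address: a durable topic exchange named "events"
            channel.exchangeDeclare("events", "topic", true);

            // Each service owns its own queue and gets its own copy of matching messages
            channel.queueDeclare("verification_service", true, false, false, null);
            channel.queueDeclare("subscription_setup_service", true, false, false, null);

            // The "subscription" is really the binding from the exchange to each queue
            channel.queueBind("verification_service", "events", "entity.profile.#");
            channel.queueBind("subscription_setup_service", "events", "entity.profile.created");

            // The publisher only knows the exchange and routing key, not the queues
            channel.basicPublish("events", "entity.profile.created", null,
                    "{\"profileId\": 42}".getBytes(StandardCharsets.UTF_8));

            // Each consumer reads and acknowledges independently from its own queue
            DeliverCallback handler = (consumerTag, delivery) -> {
                System.out.println(consumerTag + " got: "
                        + new String(delivery.getBody(), StandardCharsets.UTF_8));
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("verification_service", false, "verification", handler, tag -> { });
            channel.basicConsume("subscription_setup_service", false, "subscriptions", handler, tag -> { });

            Thread.sleep(1000); // give the consumers a moment before the connection closes
        }
    }
}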
As you mentioned, this is not something you can control with the Redis Pub/Sub data structure.
But you can do it easily with Redis Streams.
Streams allow you to post messages using the XADD command and then control which consumers are dealing with the message and acknowledge that the message has been processed.
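As a rough sketch of that flow (assuming the Jedis 4.x client; the stream name, group names, and fields are made up for illustration): the producer XADDs an entry, each interested service gets its own consumer group, and each group reads and XACKs its own copy independently.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.StreamEntryID;
import redis.clients.jedis.params.XReadGroupParams;
import redis.clients.jedis.resps.StreamEntry;

import java.util.List;
import java.util.Map;

public class ProfileEventsStream {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) { // assumption: local Redis
            // Producer: append the event to the stream (XADD profile-events * ...)
            jedis.xadd("profile-events", StreamEntryID.NEW_ENTRY,
                    Map.of("type", "profile.created", "profileId", "42"));

            // One consumer group per interested service; each keeps its own cursor
            for (String group : new String[]{"verification", "subscriptions"}) {
                try {
                    jedis.xgroupCreate("profile-events", group, new StreamEntryID(0, 0), true);
                } catch (Exception alreadyExists) {
                    // group was created on a previous run
                }
            }

            // Consumer in the "verification" group: read new entries (">") and XACK them
            List<Map.Entry<String, List<StreamEntry>>> batches = jedis.xreadGroup(
                    "verification", "worker-1",
                    XReadGroupParams.xReadGroupParams().count(10).block(2000),
                    Map.of("profile-events", StreamEntryID.UNRECEIVED_ENTRY));

            if (batches != null) {
                for (Map.Entry<String, List<StreamEntry>> batch : batches) {
                    for (StreamEntry entry : batch.getValue()) {
                        System.out.println("verification got " + entry.getFields());
                        // Acknowledge for this group only; "subscriptions" still sees the entry
                        jedis.xack("profile-events", "verification", entry.getID());
                    }
                }
            }
        }
    }
}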
You can look at these sample applications, which provide examples (in Java) of:
posting and consuming messages
creating multiple consumer groups
managing exceptions
Links:
Getting Started with Redis Streams and Java
Redis Streams in Action (a project that shows how to use ADD/ACK/PENDING/CLAIM and build an error-proof streaming application with Redis Streams and Spring Data)
I am using RabbitMQ as a MQ broker. Is it possible to get a notification that a certain message has been acknowledged by all queues? That is, if it was sent to 5 queues, we get a notification after the acknowledgment of the last/5th consumer.
I know you can introduce reply-to queues, but that's not what I am looking for. I don't want to force the consumer to send an acknowledgment message to some queue after acknowledgment.
Is it also possible to continue this follow-up after a broker and/or publisher restart?
No, it is not possible as you state it.
You cannot, from the publisher side, know whether a message has been ACK'd at the consumer side, and in most patterns it's not really something you'd want anyway.
You can, however, use Publisher Confirms. These would inform the publisher that the message has been routed to all the bound queues.
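For illustration, a minimal sketch with the RabbitMQ Java client (the exchange name, routing key, and connection details are assumptions): put the channel into confirm mode, publish, and block until the broker confirms it has taken responsibility for the message. Note that a confirm says nothing about whether any consumer has processed it.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class ConfirmedPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker, default credentials

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            channel.confirmSelect(); // enable publisher confirms on this channel

            // Publish to a pre-declared exchange; the broker confirms once it has
            // routed the message to the bound queues (and persisted it, if durable).
            channel.basicPublish("events", "entity.profile.created", null,
                    "{\"profileId\": 42}".getBytes(StandardCharsets.UTF_8));

            // Blocks until the broker confirms, or throws if it nacks / times out.
            channel.waitForConfirmsOrDie(5000);
        }
    }
}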
There are several mechanisms for data safety on both the publisher and consumer side. You would normally trust that the broker does not miss messages in between, the same way you trust that a database will hold the records over time.
If nevertheless your workflow requires that your publisher side is informed about the completion of a complex distributed task, and you really can't get away with fire and forget, then you will need to implement that response yourself, normally by means of an additional message.
I was looking for an ActiveMQ broker admin command, to tell it to pause a queue - that is:
continue accepting messages from producing clients
cease delivering to consuming clients, allowing the queue backlog to grow until the queue is resumed, whereupon the backlog is sent to clients.
I was unable to find such a command. The commonest answer was that it should be managed at the client end -- that is, locate every consumer and stop it. Other answers were workarounds, like manipulating network routes or firewalls so that the clients and broker could no longer communicate.
A cursory survey of other message queues indicates that ActiveMQ is not unusual in this regard.
It seems to me there are two reasons this functionality might not be implemented:
It is difficult to implement -- but I can't think of any reason why.
It is counter to the design philosophy of message queues
Which is it, and why?
Being able to pause a queue is supported in the newly released ActiveMQ 5.12.0:
When the queue is "paused":
no messages are sent to the associated consumers
messages can still be enqueued on the queue
the queue can still be browsed
all the JMX counters for the queue remain available and correct.
...
implemented pause/resume/isPaused queue view MBean ops and attribute
when paused, there is no dispatch to regular queue consumers; send and browse work as normal. Any in-flight messages will continue in flight until acked as normal.
See https://issues.apache.org/jira/browse/AMQ-5229
If you have Jolokia enabled (I think it is enabled by default nowadays), you can use something like the following curl request to pause the queue:
curl --user admin:admin http://127.0.0.1:8161/api/jolokia/exec/org.apache.activemq:brokerName=localhost,destinationName=myQueue,destinationType=Queue,type=Broker/pause
(Using the default username, password and broker name and a queue called myQueue)
Replace "pause" with "resume" in order to resume the queue.
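If you prefer plain JMX over Jolokia, a sketch like the following should invoke the same pause operation (the JMX service URL, broker name, and queue name are assumptions matching a default local broker):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PauseQueue {
    public static void main(String[] args) throws Exception {
        // assumption: default ActiveMQ JMX connector on localhost:1099
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");

        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbeans = connector.getMBeanServerConnection();

            // Same MBean that the Jolokia URL above addresses
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=myQueue");

            mbeans.invoke(queue, "pause", null, null); // use "resume" to undo
            System.out.println("pause invoked on " + queue);
        }
    }
}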
Probably not too complicated to implement - as you say.
I don't know if it's an active design decision or if there simply has been no demand. Other similar products such as IBM WebSphere MQ implement "get/put inhibited" on queues, so it obviously is not totally against the philosophy of messaging - rather a tool for operating and troubleshooting live systems.
I'm a bit biased, but I actually like to decouple the sender from the receiver (if they are two different systems, they might eventually get switched/upgraded/changed...).
An easy way to decouple the systems, and be able to do what you want, is to make the sender send to one queue, "DATA.OUT", and the receiver listen to another, "DATA.IN". Then you can use Apache Camel (which is typically bundled with ActiveMQ to achieve Enterprise Integration Patterns) to route from DATA.OUT to DATA.IN.
A Camel route can be started and stopped via JMX, which will achieve something similar to what you described.
I guess the ActiveMQ design would rather have you do these kinds of things in a middleware layer, such as Apache Camel, than directly on the queues.
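As a sketch of that idea (assuming Camel 3 with the camel-activemq component; the broker URL and route id are made up for illustration), the relay route and its stop/start control might look like this; the same stopRoute/startRoute operations are also exposed through JMX:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.activemq.ActiveMQComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class DataRelay {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        // assumption: local broker on the default OpenWire port
        context.addComponent("activemq", ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // The sender publishes to DATA.OUT; the receiver consumes from DATA.IN.
                // This route is the only coupling between the two queues.
                from("activemq:queue:DATA.OUT")
                        .routeId("data-relay")
                        .to("activemq:queue:DATA.IN");
            }
        });
        context.start();

        // "Pausing" delivery to the receiver means stopping the route; DATA.OUT keeps
        // accumulating a backlog until the route is started again.
        context.getRouteController().stopRoute("data-relay");
        context.getRouteController().startRoute("data-relay");
    }
}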
Just started learning NServiceBus and trying to understand the concept.
When it talks about queues, are we talking about MSMQs on both publisher and subscriber?
So, if I have an application that generates a list of something (say, names of animals), it dumps the list into the publisher's queue. The publisher polls the queue every minute and, if there is something in the queue, it will publish to the subscriber's queue for further processing. Does this make sense?
Thanks.
The sequence of events for a publish is as follows:
The Publisher will start up (Windows Service)
A Subscriber will start up and place a message into the Publisher's input queue (MSMQ)
The Publisher will take that message, read the address of the Subscriber, and place it into storage (subscription storage: memory, MSMQ, or RDBMS)
When it is time to publish an event, the Publisher will inspect the type of message and then read subscription storage to find Subscribers interested in that message
The Publisher will then send a message to each of the Subscribers found in subscription storage
The Subscriber receives the message in its input queue (MSMQ) and processes it
You can leverage other messaging platforms instead of MSMQ, but MSMQ is the default. There really is no polling done; all the endpoints are signaled when a message hits their queues.
MSMQ is a transport layer. It passes the messages around.
The application will publish something using a NServiceBus queue. If you configured it to use MSMQ, that's what it will use for its transport layer and this is what the subscribers will be looking at.
NServiceBus follows the publisher/subscriber model, as you have correctly stated. However, your confusion is based on the use of two queues; this is incorrect. The server (publisher) will maintain the queue, which is interfaced via the MSMQ protocol, and so your application would communicate directly with it, possibly remotely or locally.
You would typically use a WCF service which would raise an event upon a new message being pushed onto the queue. Your application can then make use of this new message as desired. See the NServiceBus documentation for examples: http://www.nservicebus.com/ArchitecturalPrinciples.aspx
I am getting a little confused with NServiceBus. It seems like a lot of the examples I see always use publish() and subscribe(). What I am trying to do is have a publisher that polls its queue and distributes the messages to the subscriber's queue. The messages are generated by another application, and the body of each message will contain text, which will be parsed later.
Do I still need to call publish() and subscribe() to transfer the messages from the publisher's queue to the subscriber's queue? The way I understood it, I only need to configure the queue names in both config files and call LoadAllMessages() on the subscriber side, and that will take care of the above scenario. I don't even have to handle the message on the subscriber side.
Thanks.
Your Publisher will still need to call Publish. What this does is the Publisher then looks into Subscription Storage to find out who is interested in that message type. It then sends a message to each Subscriber. On the Subscriber side you need to implement message handlers to do something with those messages. This is done by implementing the IHandleMessages<T> interface in the Subscriber assembly. NSB will discover this and autowire everything up. Be aware that by default, the Subscriber will subscribe to all message types. If you want to subscribe only to certain messages, use the .DoNotAutoSubscribe setting in the manual configuration.