Anypoint MQ Subscriber vs. Anypoint MQ Consumer
What is the basic difference between the Anypoint MQ Subscriber and the Anypoint MQ Consumer? I know both can consume messages, but what are the key differences?
Also, what is the prefetch concept? Please don't just share a MuleSoft docs link; I'm already aware of those. I'm looking for practical, hands-on knowledge.
The Subscriber is an event source, which can be used to trigger a flow. It has a built-in scheduler you can configure for polling. (You are ready to consume a message but don't know when it will arrive.)
The Consumer is an event processor, which cannot trigger a flow, so it can only be used inside a flow after an event has already been generated. (You are sure a message is expected in the queue.)
Prefetch assumes that the flow the Subscriber triggers is capable of handling the messages. Prefetch copies messages into a local buffer, which signals that those messages are already in use, so any other consumer or subscriber listening to the queue will not get them. The flow then consumes the buffered messages as threads become available.
Both work similarly, but the trade-offs can only be compared for a specific use case. If you can give some perspective on yours, especially message volume and message size, we can compare them.
Related
I am using RabbitMQ as an MQ broker. Is it possible to get a notification that a certain message has been acknowledged by all queues? That is, if it was sent to 5 queues, we get a notification after the acknowledgment of the last (5th) consumer.
I know you can introduce reply-to queues, but that's not what I am looking for. I don't want to force the consumer to send an acknowledgment message to some queue after acknowledgment.
Is it also possible to continue this follow-up after a broker and/or publisher restart?
No, it is not possible as you state it.
You cannot, from the publisher side, know whether a message has been ACK'd at the consumer side, and in most patterns it's not really something you'd want anyway.
You can, however, use Publisher Confirms. These would inform the publisher that the message has been routed to all the bound queues.
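As an illustration, here is a minimal sketch with the RabbitMQ Java client (broker address, exchange, and routing key are placeholders, not from the original question):

```java
import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class ConfirmingPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker address

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Placeholder exchange for the example.
            channel.exchangeDeclare("orders", BuiltinExchangeType.DIRECT, true);

            // Put the channel into confirm mode: the broker will confirm each
            // message once it has been routed to all bound queues (and, for
            // persistent messages on durable queues, written to disk).
            channel.confirmSelect();

            channel.basicPublish("orders", "order.created", null,
                    "payload".getBytes(StandardCharsets.UTF_8));

            // Blocks until the broker confirms, or throws on nack/timeout.
            channel.waitForConfirmsOrDie(5_000);
        }
    }
}
```

Note that a confirm only means the broker has taken responsibility for the message; it says nothing about whether any consumer has processed it, which is exactly the limitation described above.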
There are several mechanisms for data safety on both the publisher and consumer side. You would normally trust that the broker does not miss messages in between, the same way you trust that a database will hold the records over time.
If, nevertheless, your workflow requires the publisher side to be informed about the completion of a complex distributed task, and you really can't get away with fire-and-forget, then you will need to implement that response yourself, normally by means of an additional message.
I'm working for a company where we're considering Mule ESB. We would need to set up Mule in a clustered configuration to get what Mule calls a Mule High Availability (HA) Cluster.
Now, we need to persist incoming messages to a queue in case of a power outage or disk failure. As far as I understand, we can either go with the default Mule Object Store, which "persists" messages to a shared memory grid. However, my first thought is that this can't be any good if a power outage takes the entire cluster out of action.
Our other option is to use a separate queue product such as RabbitMQ or ActiveMQ. However, do these integrate well with an HA cluster? Is there any mechanism in these products that ensures the same message won't be picked up by two machines at the same time?
Consider this scenario (based on the observer pattern):
1. Mule receives a message, puts it on a queue, and responds with an OK to the client that delivered the message.
2. Mule picks up the message from the queue and attempts to deliver it to a subscriber.
3. The subscriber accepts the message, and Mule removes it from the queue.
What happens if another Mule instance in the HA cluster attempts to pick up the message between steps 2 and 3 above? Is there a mechanism where Mule can mark a message as picked up from the queue for an attempted delivery, and then, if the delivery fails, return it to the queue as not delivered?
Both RabbitMQ and ActiveMQ will give you the once-and-only-once functionality I think you are looking for.
Both platforms ensure that each message in a queue is received by only one subscriber.
In ActiveMQ, to return a message to the queue in the event of a failure, you can use explicit message acknowledgement or JMS transactions.
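For instance, a minimal sketch against ActiveMQ's JMS client (broker URL and queue name are illustrative): with CLIENT_ACKNOWLEDGE, a message that is never acknowledged is redelivered after the session recovers or closes.

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AckingConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        Connection connection = factory.createConnection();
        connection.start();

        // CLIENT_ACKNOWLEDGE: the message stays on the broker until we
        // explicitly acknowledge it. Alternatively, use a transacted session
        // (createSession(true, Session.SESSION_TRANSACTED)) and commit/rollback.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("orders")); // placeholder queue

        Message message = consumer.receive(5000);
        if (message != null) {
            try {
                // ... deliver the message to the subscriber here ...
                message.acknowledge(); // success: broker removes it from the queue
            } catch (RuntimeException e) {
                session.recover();     // failure: broker will redeliver it
            }
        }
        connection.close();
    }
}
```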
In RabbitMQ, you do it using acknowledgements.
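A comparable sketch with the RabbitMQ Java client (queue name is illustrative): rejecting a delivery with requeue=true puts the message back on the queue so another consumer can pick it up.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RequeueingConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker address
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // autoAck=false: the broker keeps the message marked as unacked
        // until we explicitly ack or nack it.
        channel.basicConsume("orders", false, (consumerTag, delivery) -> {
            long tag = delivery.getEnvelope().getDeliveryTag();
            try {
                // ... attempt delivery to the subscriber here ...
                channel.basicAck(tag, false);        // success: remove from queue
            } catch (RuntimeException e) {
                channel.basicNack(tag, false, true); // failure: requeue for retry
            }
        }, consumerTag -> { /* consumer was cancelled */ });
    }
}
```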
Also, you might want to consider reliability for your message broker. Both ActiveMQ and RabbitMQ offer highly available broker configuration options.
I'm using NServiceBus 4.x with RabbitMQ 3.2.x as my transport.
I made the assumption that by using RabbitMQ as my transport I would get the competing-consumers model as an option. I understand that NServiceBus uses the "fanout" exchange type for all exchanges and does not support round robin at this time. However, is there a way to configure NServiceBus to take advantage of the levels of indirection via exchanges and channels that RabbitMQ offers?
I have several consumers I would like to compete for messages from a given queue. What I am observing is that a subscriber blocks further message retrieval from the queue until its current message is consumed. So having more than one consumer at this point does me no good other than redundancy.
After reading some RabbitMQ documentation, I gather it's normal to block until the ack is sent by the subscriber. But I had assumed that subscriber #2 would still have free access to the queue to fetch another message.
There is mention of increasing the prefetch count on the RabbitMQ channel.
Example:
channel.BasicQos(0, prefetchCount, false);
I don't see anywhere that I can change this setting via configuration in NServiceBus. Furthermore, as I read about what prefetch does, I'm really not sure this is what I'm looking for.
Is it possible to use RabbitMQ without the distributor-type pattern used with MSMQ? Or should I move to MassTransit or Rebus?
Put prefetchcount=2 in your connection string. Any value above 1 tells the broker to allow that many unacked messages to be outstanding at a time. You need to fiddle with this setting to find the optimum for your scenario.
I need to build a system that uses a Publish/Subscribe bus (e.g. Mule, ZeroMQ, RabbitMQ), but the literature all implies that subscriber applications are reliably available to receive messages from topics to which they subscribe as soon as the Pub/Sub bus is able to deliver the message.
I have a system where some of the applications will be reliably connected to the Publish/Subscribe bus, but other applications will not be active or connected to the bus all the time.
The obvious solution is to have some sort of "presence" protocol between the unreliable application and the Publish/Subscribe bus, so that "present" applications get their messages delivered immediately, while messages for "not present" applications are queued in a persistent buffer of some kind and delivered as soon as the application completes the "presence handshake".
Are there any Publish/Subscribe buses which have this kind of feature built in, or are there any open-source add-ons which do this? Can you point me to any URLs which describe this?
You can achieve this behaviour quite easily with any AMQP-compliant broker (such as RabbitMQ).
Choose the correct exchange type for your usage model. You'll want to use a direct exchange if you're always sending to exact, fully named destinations, something like chat.messages.
If you want to do pattern-based routing, you'll want to use a topic exchange. Then you can route based on patterns such as chat.messages.*.
Routing is described in more detail in the RabbitMQ Tutorials.
To create the kind of persistent subscription that you mention, have each subscriber create a queue that is private to that subscriber. The queue is then bound to the relevant routing keys on your chosen exchange.
Since each subscriber has its own queue, messages will be consumed by the subscriber while it is active and stored while it is inactive or disconnected.
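To make that concrete, here is a rough sketch with the RabbitMQ Java client (exchange, queue, and binding-key names are just examples): each subscriber declares its own durable queue and binds it to a topic exchange, so messages routed to it while the subscriber is offline wait in that queue.

```java
import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class DurableSubscriber {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker address
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Durable topic exchange shared by all publishers and subscribers.
        channel.exchangeDeclare("chat", BuiltinExchangeType.TOPIC, true);

        // A durable, non-exclusive, non-auto-delete queue used only by this
        // subscriber (by convention): it outlives the connection, so messages
        // routed to it while the subscriber is offline are stored here.
        String queue = "chat-subscriber-1"; // one queue per subscriber
        channel.queueDeclare(queue, true, false, false, null);
        channel.queueBind(queue, "chat", "chat.messages.*");

        channel.basicConsume(queue, false, (consumerTag, delivery) -> {
            System.out.println(new String(delivery.getBody(), StandardCharsets.UTF_8));
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, consumerTag -> { });
    }
}
```

The key point is that the queue, not the subscriber process, is the durable thing: as long as it stays declared and bound, the broker keeps routing messages into it.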
You haven't mentioned your language of choice, but in Java you can accomplish this with JMS using durable subscribers. Any implementation of JMS (there are many, including the aforementioned RabbitMQ) will support this feature.
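For completeness, a minimal JMS durable-subscriber sketch, using ActiveMQ here only as one concrete JMS provider (client ID, subscription name, and topic are placeholders):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableTopicSubscriber {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL

        Connection connection = factory.createConnection();
        // The broker tracks the durable subscription by client ID + subscription
        // name, so both must stay stable across restarts of the subscriber.
        connection.setClientID("reporting-service");
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("chat.messages");
        TopicSubscriber subscriber =
                session.createDurableSubscriber(topic, "reporting-sub");

        // Messages published while this subscriber was offline are delivered now.
        Message message = subscriber.receive(5000);
        if (message instanceof TextMessage) {
            System.out.println(((TextMessage) message).getText());
        }
        connection.close();
    }
}
```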
Just started learning NServiceBus and trying to understand the concept.
When it talks about queues, are we talking about MSMQ queues on both the publisher and the subscriber?
So, if I have an application that generates a list of something (say, names of animals), it dumps the list into the publisher's queue. The publisher polls the queue every minute, and if there is something in the queue, it publishes it to the subscriber's queue for further processing. Does this make sense?
Thanks.
The sequence of events for a publish is as follows:
The Publisher will start up (a Windows service).
A Subscriber will start up and place a message into the Publisher's input queue (MSMQ).
The Publisher will take that message, read the address of the Subscriber, and place it into storage (subscription storage: memory, MSMQ, or an RDBMS).
When it is time to publish an event, the Publisher will inspect the type of the message and then read subscription storage to find Subscribers interested in that message.
The Publisher will then send a message to each of the Subscribers found in subscription storage.
The Subscriber receives the message in its input queue (MSMQ) and processes it.
You can leverage other messaging platforms instead of MSMQ, but MSMQ is the default. There is really no polling done; all the endpoints are signaled when a message hits their queues.
MSMQ is a transport layer. It passes the messages around.
The application will publish something using an NServiceBus queue. If you configured it to use MSMQ, that's what it will use as its transport layer, and that is what the subscribers will be looking at.
NServiceBus follows the publisher/subscriber model, as you have correctly stated. However, your confusion comes from the idea of two queues; this is incorrect. The server (publisher) maintains the queue, which is accessed via the MSMQ protocol, so your application communicates with it directly, either locally or remotely.
You would typically use a WCF service that raises an event when a new message is pushed onto the queue. Your application can then make use of the new message as desired. See the NServiceBus documentation for examples: http://www.nservicebus.com/ArchitecturalPrinciples.aspx