Non-persistent JMS messages get lost for a non-durable subscriber - ActiveMQ

I started with ActiveMQ just one day ago, so my knowledge of it is limited.
My goal is to check ActiveMQ's stability and throughput for JMS messages in different scenarios.
Here is one scenario:
1. I publish 1 million non-persistent messages synchronously to a topic and consume them synchronously with a non-durable subscriber. One publisher and one subscriber.
2. The broker, publisher and subscriber are up throughout the test.
Unfortunately, in most runs (only 1 of 14 attempts delivered everything), I do not receive all 1 million messages on the subscriber end; roughly 5,500 messages are lost.
I ran the same test against TIBCO EMS and IBM MQ and did not see this issue.
So, with ActiveMQ, if I need all messages to be received, is it always necessary to use persistent messages and a durable subscriber?
Please don't look at this from the angle of guaranteed messaging or fail-over scenarios.
Any suggestion is welcome.
Thanks,
Smith

I'm not sure about your exact scenario, but when a producer is faster than a consumer, ActiveMQ limits the memory used for buffering messages by dropping older messages once a certain limit is reached.
This limit is configurable.
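For reference, here is a minimal sketch of how that policy can be set programmatically on an embedded broker; the wildcard topic pattern, the connector URL and the 100,000-message limit are illustrative assumptions, and the same pendingMessageLimitStrategy can be configured in activemq.xml instead:

    import java.util.Arrays;
    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.ConstantPendingMessageLimitStrategy;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;

    public class BrokerWithPendingLimit {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.addConnector("tcp://localhost:61616");    // assumed test connector

            // Let each slow topic subscriber buffer up to 100,000 pending messages
            // before the broker starts discarding the oldest ones.
            ConstantPendingMessageLimitStrategy limit = new ConstantPendingMessageLimitStrategy();
            limit.setLimit(100_000);

            PolicyEntry topicPolicy = new PolicyEntry();
            topicPolicy.setTopic(">");                       // ">" matches all topics
            topicPolicy.setPendingMessageLimitStrategy(limit);

            PolicyMap policyMap = new PolicyMap();
            policyMap.setPolicyEntries(Arrays.asList(topicPolicy));

            broker.setDestinationPolicy(policyMap);
            broker.start();
            broker.waitUntilStopped();
        }
    }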

Related

RabbitMQ .NET Delivery Count

JMS queues have JMSXDeliveryCount. If a message is read from the queue more than once, we can track that with this property. What is the equivalent in the RabbitMQ .NET client?
I want to check how many times a message has been read from the queue (in transactional reads).
Currently there is nothing out of the box. All you get is a redelivered flag saying the message has been redelivered, but not how many times.
You can track this issue.
Quorum queues track the number of redeliveries, in the x-delivery-count header. See here
There is a feature request to add the same to classic queues.
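As a rough illustration (Java client here; the .NET client exposes the same redelivered flag and headers on each delivery), reading that counter from a quorum queue, with a placeholder queue name, might look like this:

    import java.util.Map;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class DeliveryCountSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                      // assumption: local broker
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            DeliverCallback onDelivery = (consumerTag, delivery) -> {
                // The plain flag only says "redelivered", not how many times.
                boolean redelivered = delivery.getEnvelope().isRedeliver();

                // On a quorum queue the broker adds x-delivery-count on redelivery.
                Map<String, Object> headers = delivery.getProperties().getHeaders();
                Object deliveryCount = (headers == null) ? null : headers.get("x-delivery-count");

                System.out.println("redelivered=" + redelivered + ", x-delivery-count=" + deliveryCount);
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };

            channel.basicConsume("orders.quorum", false, onDelivery, consumerTag -> { });
        }
    }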

RabbitMQ delivery throttle

So I'm testing RabbitMQ in one node. Plain and simple,
One producer sends messages to the queue,
Multiple consumers take tasks from that queue.
Currently the consumers process thousands of messages per second; they are too fast, so I need them to slow down. Managing throttling on the consumer side is not possible due to the unreliable nature of the network.
Collectively, the consumers must not take more than 10 messages per second from that queue.
Is there a way to configure RabbitMQ so that the queue dispatches a maximum of 10 messages per second?
If I remember correctly, once RabbitMQ has delivered a message to the queue, it's up to the consumers to consume it. There are various consumer clients in different languages, and you haven't mentioned anything specific, so I'm giving a generic answer.
In my understanding, you shouldn't try to impose any restrictions on RabbitMQ itself. Instead, consider implementing a pool of message consumers that handles no more than X messages simultaneously on the client side, or put some kind of semaphore in the handler, rather than on the RabbitMQ server.
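As a sketch of that client-side approach (Java client, since no stack was specified; the host, queue name and 10-per-second budget are assumptions), a shared token bucket in front of the handler keeps one consumer process under the limit:

    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;
    import java.util.concurrent.TimeUnit;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class ThrottledConsumer {
        public static void main(String[] args) throws Exception {
            // Token bucket shared by all consumers in this process:
            // top the bucket back up to 10 permits once per second.
            Semaphore tokens = new Semaphore(0);
            Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(
                    () -> tokens.release(Math.max(0, 10 - tokens.availablePermits())),
                    0, 1, TimeUnit.SECONDS);

            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                  // assumption: local broker
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();
            channel.basicQos(1);                           // at most one unacked message per consumer

            DeliverCallback handler = (consumerTag, delivery) -> {
                try {
                    tokens.acquire();                      // wait for a rate-limit token
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                // ... process delivery.getBody() here ...
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("work", false, handler, consumerTag -> { });
        }
    }

Note that this throttles a single process; if the consumers run as separate processes, the 10-per-second budget has to be split between them or coordinated externally.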

Smart Broker vs. Dumb Broker (Kafka and RabbitMQ)

In discussions of the differences between Kafka and RabbitMQ, "dumb broker" and "smart broker" keep popping up to describe their interactions with consumers. Kafka is described as having a dumb broker, while RabbitMQ is said to have a smart broker / dumb consumer model.
What exactly does this mean? I'm familiar with the basics of Kafka and a little bit more about RabbitMQ. What features of RabbitMQ make its broker smarter than Kafka's?
This is a question that bothered me for some time too :) Here's what I have understood so far...
In the case of RabbitMQ, the broker makes sure the messages are delivered to the consumers and dequeues them only when it gets an acknowledgement from every consumer that needs that message. It also keeps track of consumer state.
Kafka does not keep track of which messages were read by consumers. The Kafka broker keeps all messages in its topic partitions for a configured retention period, and it is the consumer's responsibility to read them from there. Kafka also avoids the overhead of keeping track of consumer state.
You can read more about it in this awesome Pivotal blog post comparing RabbitMQ and Kafka.
The point about Kafka using a dumb broker while RabbitMQ uses a smart broker is one of the considerations when deciding which messaging system to use. Since RabbitMQ has a smart broker, implementing global strategies for retries is far easier and more listener-agnostic than in Kafka.
Given a set of microservices accessed through an API gateway, I believe that the above point, combined with RabbitMQ being much more maintainable and the knowledge that the data passed between microservices will never amount to the same load as streaming data, makes RabbitMQ a far better choice than Kafka for inter-service communication.
Dumb vs. smart broker means that the broker can be smart enough to route messages based on certain conditions.
In the case of RabbitMQ, the producer sends a message to an exchange and the exchange routes the message to a queue. The exchange does the routing, and that is what people call a smart broker. Then again, people have made brokers really smart and ended up with ESBs, and we all know how that went: the industry is moving away from bloated ESBs.
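A small sketch of that exchange-based routing with the Java client (the exchange, queue and routing key names are made up for illustration):

    import com.rabbitmq.client.BuiltinExchangeType;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class ExchangeRoutingSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                      // assumption: local broker
            try (Connection connection = factory.newConnection()) {
                Channel channel = connection.createChannel();

                // The producer only addresses the exchange; the broker decides
                // which queue(s) receive the message based on bindings and routing key.
                channel.exchangeDeclare("orders", BuiltinExchangeType.DIRECT);
                channel.queueDeclare("orders.eu", true, false, false, null);
                channel.queueBind("orders.eu", "orders", "eu");

                channel.basicPublish("orders", "eu", null, "order #42".getBytes("UTF-8"));
            }
        }
    }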
In the case of Kafka, the broker doesn't route messages. It is up to the user to create topics; producers decide how events are partitioned across a topic's partitions, and consumer groups decide which topics they listen to.
Smart vs. dumb broker has nothing to do with message acknowledgement. In the case of RabbitMQ, the broker tracks the status of each message to see whether it has been consumed or not. Kafka does this too, but differently: it uses offsets on partitions, and the offset is stored in Kafka itself (the consumer can also store it). Both provide the functionality.
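For contrast, a minimal sketch of the Kafka side, where a consumer group's position is just an offset that the consumer itself advances and commits (the broker address, group id and topic name are placeholders):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class OffsetSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "billing");              // offsets are tracked per group and partition
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("events"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // ... process record.value(); the broker does not mark it as read ...
                    }
                    consumer.commitSync();                 // the consumer decides when its position advances
                }
            }
        }
    }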

Multiple servers to interact with RabbitMQ

I'm working for a company where we're considering Mule ESB. We would need to set up Mule in a clustered configuration to get what Mule calls a Mule High Availability (HA) Cluster.
Now, we need to persist incoming messages to a queue in case of a power outage or disk failure. As far as I understand, we can either go with the default Mule Object Store, which "persists" messages to a shared memory grid. However, my first thought here is that this can't be any good if a power outage takes the entire cluster out of action.
Our other option is to use a separate queue product such as RabbitMQ or ActiveMQ. However, do these integrate well with an HA cluster? Is there any mechanism in these products which ensures that the same message won't be picked up by two machines at the same time?
Consider this scenario (based on the observer pattern):
1. Mule receives a message, puts it on a queue and responds with an OK to the client which delivered the message.
2. Mule picks up a message from the queue, and attempts to deliver it to a subscriber.
3. The subscriber accepts the message, and Mule removes it from the queue.
What happens if another Mule instance in the HA cluster attempts to pick up the message between steps 2 and 3 above? Is there a mechanism where Mule can indicate that a message has been picked up from the queue to be "attempted delivered", and then, if the delivery fails, mark it on the queue as "not delivered"?
Both RabbitMQ and ActiveMQ will give you the once-and-only-once functionality I think you are looking for.
Both platforms ensure that each message in a queue is received by only one subscriber.
In ActiveMQ, to return a message to a queue in the event of a failure, you can use explicit message acknowledgement or JMS transactions. Here's a quick overview.
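A minimal sketch of the JMS-transaction variant against ActiveMQ (the broker URL and queue name are assumptions):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class TransactedConsumerSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();

            // With a transacted session the message only leaves the queue when the
            // session commits; a rollback makes it available for redelivery.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer = session.createConsumer(session.createQueue("inbound"));

            Message message = consumer.receive(5000);
            try {
                // ... attempt delivery of "message" to the subscriber here ...
                session.commit();        // success: the message is removed from the queue
            } catch (Exception e) {
                session.rollback();      // failure: the message goes back onto the queue
            } finally {
                connection.close();
            }
        }
    }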
In RabbitMQ, you do it using acknowledgements.
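Roughly, that means acking only after a successful delivery and nacking with requeue on failure; a short sketch with the Java client (the queue name and host are assumed):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class AckOnDeliverySketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                    // assumption: local broker
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            DeliverCallback handler = (consumerTag, delivery) -> {
                long tag = delivery.getEnvelope().getDeliveryTag();
                try {
                    // ... attempt delivery to the subscriber here ...
                    channel.basicAck(tag, false);            // success: the message is removed
                } catch (Exception e) {
                    channel.basicNack(tag, false, true);     // failure: requeue the message
                }
            };
            // autoAck=false so the broker waits for an explicit ack or nack
            channel.basicConsume("inbound", false, handler, consumerTag -> { });
        }
    }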
Also, you might want to consider reliability for your message broker. Both ActiveMQ and RabbitMQ offer highly available broker configuration options.

Does NServiceBus 4.x with RabbitMQ support round-robin consumers or the competing consumer model?

I'm using NServiceBus 4.x with RabbitMQ 3.2.x as my transport.
I made the assumption that by using RabbitMQ as my transport I would get the competing consumers model as an option. I understand that NServiceBus employs the "Fanout" exchange type for all exchanges and does not support round robin at this time. However, is there a way to configure NServiceBus to take advantage of the levels of indirection via exchanges and channels that RabbitMQ offers?
I have several consumers that I would like to compete for messages from a given queue. What I am observing is subscribers blocking further message retrieval from the queue until the current message is consumed. So having more than one consumer at this point does me no good other than for redundancy.
After reading some documentation on RabbitMQ, I'm assuming it's normal to block until the ack is sent from the subscriber. But I had assumed that subscriber #2 would have free access to the queue to fetch another message.
There is mention of increasing the prefetch count on the RabbitMQ channel.
Example:
channel.BasicQos(0, prefetchCount, false)
I don't see anywhere that I can change this setting via configuration in NServiceBus. Furthermore, as I read about what prefetch does, I'm really not sure it is what I'm looking for.
Is it possible to use RabbitMQ without a distributor-type pattern like the one used with MSMQ? Or should I move to MassTransit or Rebus?
Put prefetchcount=2 in your connection string. Any value above 1 tells the broker to let that many unacknowledged messages be out at once. You will need to experiment with this setting to find the optimum for your scenario.
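For NServiceBus 4.x with the RabbitMQ transport, that setting goes into the transport connection string, e.g. in app.config; the host value below is a placeholder, and the exact key casing should be checked against your NServiceBus.RabbitMQ version:

    <connectionStrings>
      <add name="NServiceBus/Transport"
           connectionString="host=localhost;prefetchcount=2" />
    </connectionStrings>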