RabbitMQ replay limitations - rabbitmq

I need to use RabbitMQ for a client requirement; the client suggested RabbitMQ.
Based on some googling, it looks like RabbitMQ does not support replaying past messages from arbitrary offsets, unlike, say, Kafka.
I just need confirmation on whether this limitation is still valid. Any official URL would be helpful.
Thanks.
R

The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
"Based on some googling it looks like rabbitmq does not support replay of past messages"
That's correct. Once a message is delivered and acknowledged (if the queue requires an ack) it is never available again and no trace of it remains in RabbitMQ.
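To illustrate the point, here is a minimal consumer sketch using the Python pika client (the queue name and connection details are assumptions for the example). Once basic_ack is sent, the broker discards the message; there is no offset to rewind to, unlike a Kafka consumer seeking back in a log.

```python
import pika

# Assumed local broker and a queue named "orders" (both hypothetical).
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)

def handle(ch, method, properties, body):
    print("processing", body)
    # After this ack the broker deletes the message for good; it cannot
    # be re-read later the way a Kafka log segment can.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="orders", on_message_callback=handle)
channel.start_consuming()
```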

Related

How to put limit on the message size for RabbitMQ?

I am looking for something similar to https://www.rabbitmq.com/maxlength.html
for the message size of each message in the queue.
RabbitMQ does not support this at this time. Version 3.8.0 will add a global maximum message size that applies across all queues, which could be used instead (link).
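Until then, one workaround is to enforce the limit on the publishing side. A minimal sketch with the Python pika client follows; the 1 MiB limit, queue name, and connection details are assumptions for the example, not RabbitMQ settings.

```python
import pika

MAX_BODY_BYTES = 1_048_576  # assumed application-level limit (1 MiB)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="uploads", durable=True)

def publish_limited(body: bytes) -> None:
    # The broker will not reject the message for size (pre-3.8.0),
    # so the check has to live in the application.
    if len(body) > MAX_BODY_BYTES:
        raise ValueError(f"message of {len(body)} bytes exceeds limit")
    channel.basic_publish(exchange="", routing_key="uploads", body=body)

publish_limited(b"small payload")
```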
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Reading all headers of an exchange without transmitting payload

I would like to generate some statistics on message traffic on one exchange based on header information, mainly the routing key but ideally other headers as well. Due to the large bandwidth involved, I would like to not actually transmit the payload but only look at the headers. I am interested in continuous traffic rates, not snapshot queue states.
Is this something that could be done with a specific configuration and an external program or would one have to approach this as a RabbitMQ plugin?
This would be best approached as a plugin. Please feel free to use the rabbitmq-users mailing list for assistance. I and other RabbitMQ core engineers monitor the list and help out.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Testing RabbitMQ ack/nack response

I would like to test my RabbitMQ implementation. I have a queue and a consumer, and I would like a third element that listens to (sniffs) the queue response, so that the test fails if the queue responds with a nack and passes if it responds with an ack.
Do you know how I could do it?
Many thanks
You should try out the tracing plugin. Note that this plugin should never be used in production due to the performance overhead it incurs.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Is there a way to set a TOTAL message number limit in a RabbitMQ queue? x-max-length doesn't take into account the unacked messages

I need a queue with a limited number of messages inside, counting not only the messages in the queue but also the unacked ones. Is there a way to configure this server-side? If yes, is it possible using Kombu as the library?
Thank you
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
The documentation clearly states that the queue length limit counts only ready messages: https://www.rabbitmq.com/maxlength.html
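To illustrate what x-max-length does govern, here is a small sketch with the Python pika client (the queue name and the limit of 100 are assumptions). The limit applies to ready messages only; messages that have been delivered but not yet acked are not counted against it.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# x-max-length caps the number of *ready* messages in the queue; once the
# cap is hit, the oldest ready messages are dropped (the default overflow
# behaviour). Unacked messages in flight do not count toward the limit.
channel.queue_declare(
    queue="work",
    durable=True,
    arguments={"x-max-length": 100},
)
```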

Read all messages from the very beginning

Consider a group chat scenario where 4 clients connect to a topic on an exchange. These clients each send and receive messages on this topic.
Now imagine that a 5th client comes in and wants to read everything that was sent from the beginning of time (as in, since the topic was first created and connected to).
Is there a built-in functionality in RabbitMQ to support this?
Many thanks,
Edit:
For clarification, what I'm really asking is whether or not RabbitMQ supports SOW, since I was unable to find it anywhere in the documentation (http://devnull.crankuptheamps.com/documentation/html/develop/configuration/html/chapters/sow.html).
Specifically, the question is: is there a way for RabbitMQ to deliver all messages that have previously been sent to a topic when a new subscriber joins?
The short answer is no.
The long answer is maybe. If all potential "participants" are known up-front, the participant queues can be set up and configured in advance, subscribed to the topic, and will collect all messages published to the topic (matching the routing key) while the server is running. Additional server configurations can yield queues that persist across server reboots.
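As a sketch of that "known participants up-front" approach, using the Python pika client (the exchange, queue names, and routing key are assumptions): each participant gets a durable queue bound to the topic exchange in advance, so messages published while a participant is offline still accumulate in its queue.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="chat", exchange_type="topic", durable=True)

# Declare one durable queue per known participant ahead of time, so every
# message routed to room.general is copied into each queue whether or not
# that participant is currently connected.
for participant in ("alice", "bob", "carol", "dave"):
    queue = f"chat.{participant}"
    channel.queue_declare(queue=queue, durable=True)
    channel.queue_bind(queue=queue, exchange="chat", routing_key="room.general")
```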
Note that the original question/feature request as-described is inconsistent with RabbitMQ's architecture. RabbitMQ is supposed to be a transient storage node, where clients connect and disconnect at random. Messages dumped into queues are intended to be processed by only one message consumer, and once processed, the message broker's job is to forget about the message.
One other way of implementing such a functionality is to have an audit queue, where all published messages are distributed to the queue, and a writer service writes them all to an audit log somewhere (usually in a persistent data store or text file). This would be something you would have to build, as there is currently no plug-in to automatically send messages out to a persistent storage (e.g. Couchbase, Elasticsearch).
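A rough sketch of such an audit writer, again with the Python pika client (the exchange name, binding key, and log file path are assumptions): one extra queue is bound with a catch-all key, and a small consumer appends everything it sees to a log.

```python
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="chat", exchange_type="topic", durable=True)
channel.queue_declare(queue="audit", durable=True)
# "#" matches every routing key, so the audit queue sees all traffic.
channel.queue_bind(queue="audit", exchange="chat", routing_key="#")

def audit(ch, method, properties, body):
    # Append each message to a simple append-only log file.
    with open("audit.log", "a") as log:
        record = {"key": method.routing_key, "body": body.decode(errors="replace")}
        log.write(json.dumps(record) + "\n")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="audit", on_message_callback=audit)
channel.start_consuming()
```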
Alternatively, if used as a debug tool, there is the Firehose plug-in. This is satisfactory when you are able to manually enable/disable it, but is not a good long-term solution as it will turn itself off upon any interruption of the broker.
What you would like to do is not a correct usage of RabbitMQ. Message queues are not databases. They are not long-term persistence solutions like an RDBMS is. You mainly use RabbitMQ as a buffer for incoming messages, which are inserted into a database once the consumer has handled them. When a new client connects to your service, the database is read, not the message queue.
Also, unless you are building a really big, highly scalable system, I doubt you actually need RabbitMQ.
Apache Kafka is the right solution for this use case. "Log compaction enabled topics", a.k.a. compacted topics, are designed specifically for this use case. The catch is that your messages have to be idempotent full-state snapshots, with strictly no deltas, because Kafka will compact from time to time and may retain only the last message for a given key.
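For completeness, a hedged sketch of creating such a compacted topic with the kafka-python admin client (the topic name, broker address, partition count, and replication factor are assumptions for the example):

```python
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

# cleanup.policy=compact keeps (at least) the latest record per key, so
# each message must carry the full state for its key, not a delta.
admin.create_topics([
    NewTopic(
        name="chat-state",
        num_partitions=1,
        replication_factor=1,
        topic_configs={"cleanup.policy": "compact"},
    )
])
```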