RabbitMQ: overflow to different queue + requeue after timeout

I set up an exchange A whose queue has a dead-letter exchange B.
B re-queues messages to exchange A after 5 minutes.
Everything works, except that when A's queue is full and B (after 5 minutes) re-queues messages to A, those messages get lost (I expected them to be dead-lettered back to B).
Why aren't the messages requeued to B?
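To make the described cycle concrete, here is a toy in-memory model of the intended topology (not RabbitMQ itself): a bounded queue A that hands overflow to a dead-letter callback standing in for exchange B. The class and queue names, the capacity of 2, and the drop-on-full rule are illustrative assumptions; real RabbitMQ overflow behaviour depends on the queue's configuration.

```python
from collections import deque

class BoundedQueue:
    """Toy stand-in for A's queue: full-queue publishes go to a dead-letter hook."""

    def __init__(self, max_len, dead_letter=None):
        self.items = deque()
        self.max_len = max_len
        self.dead_letter = dead_letter  # callback standing in for exchange B

    def publish(self, msg):
        if len(self.items) >= self.max_len:
            # A full queue must dead-letter the overflow, or the message is lost.
            if self.dead_letter:
                self.dead_letter(msg)
            return False
        self.items.append(msg)
        return True

delayed = []  # messages parked on B, waiting out their 5-minute delay
queue_a = BoundedQueue(max_len=2, dead_letter=lambda m: delayed.append(m))

for m in ["m1", "m2", "m3"]:
    queue_a.publish(m)

# m3 overflowed and was handed to the dead-letter hook instead of being dropped.
print(list(queue_a.items), delayed)   # ['m1', 'm2'] ['m3']
```

The sketch only shows the loop the question intends; whether a real broker dead-letters, drops, or rejects the overflowing message is exactly the behaviour to verify against the broker's documentation.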


Does Google Pub/Sub deliver messages to dead letter queues with the same message ID?

If you have topic t1 with subscriber s1 and s1 has dead letter forwarding to topic t2 with subscription s2, then do messages delivered to s1 have the same ids as their versions delivered to s2?
My preliminary testing indicates the ids are not the same but I'll need to double check.
Your observation is correct: message IDs are unique per topic, even for dead letters.
According to the Google Cloud documentation:
The ID of the message is assigned by the server when the message is published and is guaranteed to be unique within the topic. Pub/Sub guarantees that messageId is always unique per topic.

How does a newly elected leader apply entries in Raft?

Let's say you have 3 servers: S1, S2, and S3. S1 (the leader) replicates a log entry to S2 and S3, applies it, responds to the client, and then crashes. So we have (one entry from term 1 on each server):
S1: [1]
S2: [1]
S3: [1]
Now, when S2 becomes the leader (with the vote from S3), how will it apply the log? According to the Raft paper:
If there exists an N such that N > commitIndex, a majority
of matchIndex[i] ≥ N, and log[N].term == currentTerm:
set commitIndex = N.
In the above case, S2's currentTerm would be 2 (with commitIndex = 0) while the term of the existing log entry would always be 1; hence, the last condition would never be satisfied. Am I missing something?
Every node has an event log with two core pointers: one to the committed events and one to the uncommitted events. The point of the Raft protocol is to replicate both of these pointers across the system.
| 0 1 2 3 | 4 5 6 7 8 9 |
     ^           ^
 committed   uncommitted
Every replication message a Follower receives from the Leader updates both of these pointers. The message has events to append to the log (updating the uncommitted pointer). It also has an index to update the committed pointer.
When a Follower receives this message and updates its committed pointer then it applies all the events that just moved from uncommitted to committed.
The committed pointer sent to the Followers is a copy of what the Leader has on its log. The Leader updates its committed pointer when it receives a quorum from the Followers, and applies all the events that moved from uncommitted to committed.
A newly-elected Leader first needs to ensure that its version of the log is replicated to the Followers; as the new Leader receives a quorum from the Followers, it updates its committed pointer, replicates that pointer back to the Followers, and applies the events as above. Note that, per the rule quoted in the question, a leader only advances commitIndex on an entry from its own term; the Raft paper's fix is for the new leader to append and commit a no-op entry in its current term, which implicitly commits all preceding entries, including the term-1 entry in your example.
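The commit-advance rule quoted in the question can be sketched directly. This is an illustrative function, not code from any Raft implementation; it assumes `match_index` holds the highest replicated index per follower, that the leader counts itself toward the majority, and that log indices start at 1.

```python
def advance_commit_index(commit_index, match_index, log_terms, current_term):
    """Return the highest N > commit_index that is replicated on a majority
    and whose entry was created in the leader's current term."""
    cluster_size = len(match_index) + 1  # followers plus the leader itself
    for n in range(len(log_terms), commit_index, -1):
        replicated = 1 + sum(1 for m in match_index if m >= n)  # leader has it
        if replicated * 2 > cluster_size and log_terms[n - 1] == current_term:
            return n
    return commit_index

# S2 as the new leader in term 2: the term-1 entry alone cannot be committed...
print(advance_commit_index(0, [1, 1], [1], 2))     # 0: log[1].term != 2
# ...until a term-2 entry (e.g. a no-op) on top of it is replicated.
print(advance_commit_index(0, [2, 2], [1, 2], 2))  # 2: commits both entries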

RabbitMQ: How to combine a task queue and a fanout/routing/topic models?

I have an environment with one producer and a number of consumers.
The producer creates 2 types of messages:
1. A message that needs to be processed by ONE consumer only (any consumer will do).
2. A message that needs to be processed by ALL consumers.
How can this be implemented?
For message type 1, a work queue is the suitable model.
For message type 2, a fanout/direct/topic (publish-subscribe) exchange is suitable.
But how do I combine them?
RabbitMQ is very flexible: many different exchange and queue designs could meet your requirement.
But first, we need to understand the basic relationship between queues and consumers:
If you want a message type to be consumed by only one of all the consumers, as you said, you need a work queue, and all the consumers should subscribe to it.
If you want a message type to be consumed by each of the consumers, you need one queue per consumer, and each consumer subscribes only to its own queue.
Once the number of queues is clear from the above, what's left is how to route your messages to these queues. There are many solutions; below are some examples.
One workable solution is to create two exchanges, one for each message type.
| message type | exchange name | exchange type | bound queues      |
|--------------|---------------|---------------|-------------------|
| type_1       | exchange1     | fanout        | shared_queue      |
| type_2       | exchange2     | fanout        | queue1,queue2,... |
Another workable solution, if you want a single exchange for publishing both message types, is to use the 'direct' exchange type:
| routing_key | binding_key | bound queues      |
|-------------|-------------|-------------------|
| type_1      | type_1      | shared_queue      |
| type_2      | type_2      | queue1,queue2,... |
One exchange can have multiple queues bound to it with the same binding key. So, when you publish message type 1 with routing key "type_1", only shared_queue receives the message; when you publish message type 2 with routing key "type_2", all of queue1, queue2, ... receive a copy.
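The binding semantics just described can be shown with a toy in-memory model of a direct exchange (not the RabbitMQ client API; the queue names come from the table above, and the class is purely illustrative).

```python
from collections import defaultdict

class DirectExchange:
    """Toy direct exchange: a routing key selects every queue bound with
    exactly that key; several queues may share one binding key."""

    def __init__(self):
        self.bindings = defaultdict(list)  # binding key -> bound queue names
        self.queues = defaultdict(list)    # queue name -> delivered messages

    def bind(self, queue, binding_key):
        self.bindings[binding_key].append(queue)

    def publish(self, routing_key, msg):
        for queue in self.bindings[routing_key]:  # exact-match routing
            self.queues[queue].append(msg)

ex = DirectExchange()
ex.bind("shared_queue", "type_1")
for q in ("queue1", "queue2"):     # one private queue per consumer
    ex.bind(q, "type_2")

ex.publish("type_1", "work item")  # lands only on the shared work queue
ex.publish("type_2", "broadcast")  # copied to every consumer's queue

print(ex.queues["shared_queue"])                  # ['work item']
print(ex.queues["queue1"], ex.queues["queue2"])   # ['broadcast'] ['broadcast']
```

In a real deployment the consumers subscribing to shared_queue would round-robin the type_1 messages among themselves, which is exactly the work-queue behaviour wanted for message type 1.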
Using a different binding key for each message type might not be ideal in real cases, where you have more message types and don't want a fixed routing key for each. If so, you might want to use the "topic" exchange type instead:
| routing_key | binding_key | bound queues      |
|-------------|-------------|-------------------|
| type_1.1    | type_1.*    | shared_queue      |
| type_2.2    | type_2.*    | queue1,queue2,... |
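A minimal sketch of the topic-key matching used in the table above: '*' matches exactly one dot-separated word. (The '#' wildcard, which matches zero or more words, is left out to keep the sketch short; this is an illustration of the matching rule, not broker code.)

```python
def topic_matches(binding_key, routing_key):
    """Return True if a topic binding key matches a routing key,
    where '*' stands for exactly one dot-separated word."""
    bparts, rparts = binding_key.split("."), routing_key.split(".")
    if len(bparts) != len(rparts):
        return False
    return all(b == "*" or b == r for b, r in zip(bparts, rparts))

print(topic_matches("type_1.*", "type_1.1"))  # True:  goes to shared_queue
print(topic_matches("type_1.*", "type_2.2"))  # False: type_2.* queues instead
```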
Direct and fanout routing are properties of RabbitMQ exchanges. All RabbitMQ messages are published to exchanges rather than directly to queues; when you create and use a queue without explicitly creating an exchange, you are actually using a pre-declared default exchange.
You can create multiple exchanges on a single RabbitMQ broker, and you can bind a single queue to more than one exchange. If you want workers to use the same queue for both message types, you can create a direct exchange (for message type 1) and a fanout exchange (for message type 2) and bind each queue to both exchanges. Otherwise, you can create separate queues for each exchange type.
RabbitMQ's AMQP concept guide has a good explanation of exchanges and queues, and Tutorial 3 on the RabbitMQ Getting Started page shows you how to create and bind exchanges.

Preserving order of execution in case of an exception on ActiveMQ level

Is there an option at the ActiveMQ level to preserve the order of execution of messages in case of an exception? In other words, assume message ID=1 carries info about a student with ID=Student_1000, and this message fails and enters the DLQ for some reason, but the main queue still holds messages ID=2 and ID=3 referring to the same student (ID=Student_1000). We should not allow those messages to be processed, because they contain info about the same object as message ID=1; ideally, they should be redirected straight to the DLQ to preserve the order of execution, because if we allow this processing we will lose the order of execution when performing an update.
Please note that I'm using message groups of Active MQ.
How to do that on Active MQ level?
Many thanks,
Rosy
Well, not really. And since the DLQ is shared by default, you would not have ordered messages there anyway unless you configure individual DLQs.
Trying to rely on strict, 100% message order on queues to keep business logic simple is a bad idea, in my experience. That is, unless you have a single broker, a single producer, a single consumer, and no DLQ handling (infinite redeliveries in the RedeliveryPolicy).
What you should do is read the entire group in a single transaction, then commit or roll it back as a group. This will require you to set the prefetch size accordingly. DLQ handling and reading is actually a client concern, not a broker-level thing.
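The read-the-group-in-one-transaction advice can be sketched as follows. This is a toy model of the commit/rollback decision only; `process_group` and the handlers are illustrative stand-ins, not the ActiveMQ client API.

```python
def process_group(messages, handler):
    """Consume one message group as a unit: commit all of it, or roll all
    of it back so the broker redelivers the group in order."""
    staged = []
    try:
        for msg in messages:
            handler(msg)          # may raise, e.g. on a student update conflict
            staged.append(msg)
    except Exception:
        return ("rollback", [])   # whole group redelivered, order preserved
    return ("commit", staged)

def ok(msg):
    pass                          # handler that always succeeds

def boom(msg):
    raise ValueError(msg)         # handler that fails on the first message

print(process_group(["m1", "m2"], ok))       # ('commit', ['m1', 'm2'])
print(process_group(["m1", "m2"], boom)[0])  # 'rollback'
```

Because the group is committed or rolled back atomically, the later messages about Student_1000 can never be processed ahead of a failed earlier one, which is the ordering property the question asks for.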

Apache QPID queue size and count

I have a qpid queue with this parameters:
bus-sync-queue --durable --file-size=48 --file-count=64
I want to put 1,000,000 messages on this queue. Each message is just a 12-character string (002000333222, 002000342678, and so on). What values must I set for --file-size=X and --file-count=Y to fit all the messages in the queue?
There is quite a big overhead on a single persistent message; in your case, one message will require at least 128 bytes of storage. You should rethink your design: either decrease the expected number of unacknowledged messages or use a different approach.
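A rough sizing check for the numbers above. The 128-byte-per-message floor comes from the answer; the 64 KiB journal page size is an assumption used only to illustrate the arithmetic, so verify it against your store's documentation.

```python
PAGE = 64 * 1024        # assumed journal page size in bytes (verify for your store)
need = 1_000_000 * 128  # 1M messages x the 128-byte-per-message lower bound
have = 48 * PAGE * 64   # --file-size=48 pages per file x --file-count=64 files

print(f"need >= {need / 2**20:.0f} MiB, journal holds {have / 2**20:.0f} MiB")
```

Even at the optimistic 128-byte floor the journal would be well over half full; real per-message overhead can be higher than that floor, which is why the answer suggests rethinking the design rather than just raising the file settings.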