RabbitMQ: expires-policy does not delete queues

I'm using RabbitMQ. The problem is that queues are not getting deleted despite my having an expires policy set up for them, and I cannot figure out why it is not working.
This is the policy definition: a policy named clearrestuser with the pattern restuser* and an expires value, applied to queues.
And this is a typical queue, restresult-user001n5gb2: it has been idle for a long time and has 0 consumers.
I know the rules for queue expiry, but I cannot see that any of the conditions that prevent it apply here. Any hints on what could be wrong?

The pattern you provide, restuser*, doesn't match the name of the queue, restresult-user001n5gb2. You can also confirm that from the Policy shown as applied to the queue, which here is ha rather than your policy.
Two additional points to pay attention to:
the pattern is a regular expression, and unless you "anchor" the beginning or end of the match, the pattern matches as long as it appears anywhere in the name; restuse as a pattern would yield the same result as yours. If you want to match any queue whose name starts with restuser, the pattern should be ^restuser
Policies are not cumulative: if you have configured high availability through policies, and you want to keep it for these queues, you'll need to add the ha parameters to your clearrestuser policy too (see the sketch below).
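A minimal sketch of such a combined policy, assuming the queues to expire are the restresult-… ones, that a 30-minute expiry is acceptable, and that you want classic mirroring to all nodes (all three values are assumptions to adapt):
rabbitmqctl set_policy clearrestuser "^restresult-" \
  '{"expires":1800000, "ha-mode":"all"}' --apply-to queues
With one policy carrying both parameters, matching queues get the expiry and the HA behaviour together, instead of one policy silently displacing the other.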

Related

Apache Pulsar topic replication with increase in cluster size

I want to understand how namespace/topic replication works in Apache Pulsar, and what effect a change in cluster size has on the replication factor of existing and new namespaces/topics.
Consider the following scenario:
I am starting with a single node with the following broker configuration:
# Number of bookies to use when creating a ledger
managedLedgerDefaultEnsembleSize=1
# Number of copies to store for each message
managedLedgerDefaultWriteQuorum=1
# Number of guaranteed copies (acks to wait before write is complete)
managedLedgerDefaultAckQuorum=1
After a few months I decide to increase the cluster size to two with the following configuration for the new broker:
# Number of bookies to use when creating a ledger
managedLedgerDefaultEnsembleSize=2
# Number of copies to store for each message
managedLedgerDefaultWriteQuorum=2
# Number of guaranteed copies (acks to wait before write is complete)
managedLedgerDefaultAckQuorum=2
In the above scenario, what will the behaviour of the cluster be?
Does this change the replication factor (RF) of existing topics?
Do newly created topics have the old RF or the new specified RF?
How does the namespace/topic(Managed Ledger) -> Broker ownership work?
Please note that the two broker nodes have different configurations at this point.
TIA
What you are changing is the default replication settings (ensemble, write, ack). You shouldn't be using different defaults on different brokers, because then you'll get inconsistent behavior depending on which broker the client connects to.
The replication settings are controlled at namespace level. If you don't explicitly set them, you get the default settings. However, you can change the settings on individual namespaces using the CLI or the REST interface. If you start with settings of (1 ensemble, 1 write, 1 ack) on the namespace and then change to (2 ensemble, 2 write, 2 ack), then the following happens:
All new topics in the namespace use the new settings, storing 2 copies of each message
All new messages published to existing topics in the namespace use the new settings, storing 2 copies. Messages that are already stored in existing topics are not changed. They still have only 1 copy.
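For example, assuming a namespace named my-tenant/my-namespace (the name is hypothetical), the change could be applied with the CLI like this:
pulsar-admin namespaces set-persistence my-tenant/my-namespace \
  --bookkeeper-ensemble 2 \
  --bookkeeper-write-quorum 2 \
  --bookkeeper-ack-quorum 2 \
  --ml-mark-delete-max-rate 0
The current values can be checked afterwards with pulsar-admin namespaces get-persistence my-tenant/my-namespace.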
An important point to note is that the number of brokers doesn't affect message replication. In Pulsar, the broker just handles the serving (producing/consuming) of messages. Brokers are stateless and can be scaled horizontally. The messages are stored on Bookkeeper nodes (bookies), and the replication settings (ensemble, write, ack) refer to Bookkeeper nodes, not brokers. The architecture diagram on the Pulsar website illustrates this separation.
So, to move from a setting of (1 ensemble, 1 write, 1 ack) to (2 ensemble, 2 write, 2 ack), you need to add a Bookkeeper node to your cluster (assuming you start with just 1), not another broker.
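If you run the BookKeeper bundled with Pulsar, adding a bookie can be as simple as starting one more bookie process on the new node (the exact command and config paths depend on your deployment and are assumptions here):
bin/pulsar-daemon start bookie
The new bookie registers itself in the metadata store and becomes eligible for new ledgers; existing ledgers are not rewritten.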

Message broker with dynamic queues

I have an application that accepts data for updating product prices, and I am wondering how I can optimize it.
Data is received via some kind of queue (RabbitMQ).
A few key notes:
I can't change the incoming data format (data is received from a third party)
Updates must be performed in order from the product's perspective (due to attributes)
Each product CAN have additional attributes by which the system behaves differently when updating prices internally
I was thinking about also using a messaging system to distribute the processing, something like this:
where:
Q1 is the queue handling only p1 product updates.
Q2 is the queue handling only p2 product updates.
and so on...
However, I have found that this is likely an anti-pattern: Dynamic queue creation with RabbitMQ
For example, it seems this would be quite hard to achieve with RabbitMQ, since we need to have predefined queues in order to listen to them.
The question is:
1) If this approach is not valid, which pattern should I use instead?
2) If this pattern is valid, is there a different messaging system that would allow distributing data this way?

Preserving order of execution in case of an exception on ActiveMQ level

Is there an option at the ActiveMQ level to preserve the order of execution of messages in case of an exception? In other words, assume message ID=1 carries info about a student object with ID=Student_1000, and this message fails and enters the DLQ for some reason, while the principal queue still holds messages ID=2 and ID=3 referring to the same student (ID=Student_1000). We should not allow those messages to be processed, because they contain info about the same object as message ID=1; ideally, they should be redirected directly to the DLQ, because if we allow this processing we will lose the order of execution when performing an update.
Please note that I'm using ActiveMQ message groups.
How can I do that at the ActiveMQ level?
Many thanks,
Rosy
Well, not really. And since the DLQ is shared by default, you would not have ordered messages there anyway unless you configure individual DLQs.
Trying to rely on strict, 100% message order on queues to keep business logic simple is, in my experience, a bad idea. That is, unless you have a single broker, a single producer, a single consumer, and no DLQ handling (infinite redeliveries in the RedeliveryPolicy).
What you should do is read the entire group in a single transaction, and roll it back or commit it as a group. That will require you to set the prefetch size accordingly. DLQ handling and reading is actually a client concern, not a broker-level thing.
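A minimal sketch of that approach in Java, assuming classic ActiveMQ 5.x with the JMS 1.1 API; the broker URL, queue name, prefetch value, and the "read until a receive timeout" grouping condition are all assumptions to adapt:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class GroupedReader {
    public static void main(String[] args) throws JMSException {
        // Prefetch must be at least the expected group size, so the whole
        // group can be held by the consumer before the commit.
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=100");
        Connection connection = factory.createConnection();
        connection.start();
        // Transacted session: messages are only acknowledged on commit().
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(
                session.createQueue("student.updates"));
        try {
            Message message;
            // Read the messages that belong together; here the group is
            // simply "whatever arrives before a 1-second gap".
            while ((message = consumer.receive(1000)) != null) {
                process(message);
            }
            session.commit();   // acknowledge the whole group at once
        } catch (Exception e) {
            session.rollback(); // the whole group is redelivered together
        } finally {
            connection.close();
        }
    }

    private static void process(Message message) throws JMSException {
        // placeholder for the actual update logic
    }
}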

How to prevent a NServiceBus saga from being started multiple times?

I want to create a saga which is started by message "Event1" but which will ignore receipt of "duplicate" start messages with the same applicative id (which may result from two or more users hitting a UI button within a short period of time). The documentation seems to suggest that this approach would work:
Saga declares IAmStartedByMessages<Event1>
Saga configures itself with ConfigureMapping<Event1>(s => s.SomeID, m => m.SomeID);
Handle(Event1 evt) sets a boolean flag when it processes the first message, and falls out of the handler if the flag has already been set.
Will this work? Will I have a race condition if the subscribers are multithreaded? If so, how can I achieve the desired behavior?
Thanks!
The race condition happens when two Event1 messages are processed concurrently. The way to prevent two saga instances from being created is by setting a unique constraint on the SomeID column.
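If the saga data is persisted in a SQL table, a minimal sketch of that constraint could look like this (the table and column names are hypothetical and depend on your saga persistence):
ALTER TABLE Event1SagaData ADD CONSTRAINT UQ_Event1SagaData_SomeID UNIQUE (SomeID);
With the constraint in place, when two Event1 messages race, the second insert fails, that message is rolled back and retried, and on retry it is dispatched to the already-existing saga instance instead of starting a new one.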

What does the last digit in the ActiveMQ message ID represent?

I have a system that seems to be working fine, but when a certain process writes a message, I see 10 messages appear in the queue. They are all almost duplicates, but the last section of the message ID is incremented.
Example:
c6743810-65e6-4bcd-b575-08174f9cae73:1:1:1
c6743810-65e6-4bcd-b575-08174f9cae73:1:1:2
c6743810-65e6-4bcd-b575-08174f9cae73:1:1:3
c6743810-65e6-4bcd-b575-08174f9cae73:1:1:4
...
What does this mean? From what I can tell, the process is only writing one message.
Nevermind, I found it... The process WAS writing multiple messages, but using the same producer and transaction. ActiveMQ seems to use this as a session ID or something of that sort. Feel free to expand on this topic if you deem it necessary.
The message ID is generated to be globally unique, and consists of a combination of your host, a unique MessageProducer ID, and an incrementing sequence for each message.
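A quick way to observe this, as a sketch (the broker URL and queue name are assumptions): send a few messages from a single producer and print their IDs; only the trailing sequence number changes.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class MessageIdDemo {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("demo"));
        for (int i = 0; i < 3; i++) {
            TextMessage message = session.createTextMessage("update " + i);
            producer.send(message);
            // Prints e.g. ID:<host>-...:1:1:1, then ...:1:1:2, ...:1:1:3
            System.out.println(message.getJMSMessageID());
        }
        connection.close();
    }
}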