RabbitMQ Queues HA and Dead Letter Exchanges Not Working

I have 3 nodes (A, B, C) in my cluster. I want to configure queue high availability using the ha-nodes option with nodes A and C as the params. I successfully configured the HA policy and it was working. But after I applied the DLX policy to all queues, the HA policy stopped working.
Is that normal or am I missing something here?
I want to use the HA policy and DLX policy together, but now it seems impossible. Thanks.

Only one policy is applied at a time for a given queue or exchange:
http://www.rabbitmq.com/parameters.html#policies
But you can still configure HA and dead-lettering together: you just need to do that in one policy. Here is an example:
{
  "ha-mode": "nodes",
  "ha-params": ["A", "C"],
  "dead-letter-exchange": "my-dlx"
}
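
If you manage policies over the management HTTP API, applying that combined definition might look like the following sketch in Python (the policy name, pattern, and credentials are placeholders; the management plugin listens on port 15672 by default):

import requests

# PUT /api/policies/{vhost}/{name}; %2F is the URL-encoded default "/" vhost.
# The policy name "ha-dlx" and the pattern are illustrative.
resp = requests.put(
    "http://localhost:15672/api/policies/%2F/ha-dlx",
    auth=("guest", "guest"),
    json={
        "pattern": ".*",
        "apply-to": "queues",
        "definition": {
            "ha-mode": "nodes",
            "ha-params": ["A", "C"],
            "dead-letter-exchange": "my-dlx",
        },
    },
)
resp.raise_for_status()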

Related

How to consume a dead letter queue in AWS SQS using JustSaying

I'm working with SQS in my application. I have the following configuration.
justSaying
    .WithSqsTopicSubscriber()
    .IntoQueue(_busNamingConvention.QueueName())
    .ConfigureSubscriptionWith(x =>
    {
        x.VisibilityTimeoutSeconds = 60;
        x.RetryCountBeforeSendingToErrorQueue = 3;
    })
    .WithMessageHandler<MyMessage>(_handlerResolver)
    .WithSqsMessagePublisher<MyMessage>(config => config.QueueName = _busNamingConvention.QueueName());
So, there will be 3 retry attempts before the message gets to the dead letter queue. I want to consume this dead letter queue and process the messages separately. In essence, I want to create a handler to deal with the messages in the DLQ.
I'm not sure if this is possible, or if SQS is not intended to be used this way. Please post whether this is possible, and if yes, whether it is okay to do this or it is an anti-pattern.
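
Consuming a DLQ is possible; to SQS it is just another queue, so you can poll it like any other. Here is a sketch in Python with boto3 rather than JustSaying (the "_error" suffix follows JustSaying's default error-queue naming convention; the region and handle_dead_letter are placeholders):

import boto3

sqs = boto3.client("sqs", region_name="eu-west-1")  # region is illustrative
queue_url = sqs.get_queue_url(QueueName="my-queue_error")["QueueUrl"]

while True:
    # Long-poll the error queue exactly like a normal SQS queue.
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        handle_dead_letter(msg["Body"])  # placeholder: your reprocessing logic
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])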

Do RabbitMq policies override queue parameters?

Problem
Our clients can create their own queues on the RabbitMQ cluster and we need to control the important parameters on the queues (TTL, expiration, etc.).
The issue is that we cannot be sure what value is actually applied: the one from x-arguments or the policy.
Question
In this RabbitMQ documentation, it is nicely explained how different policies are resolved, but it does not mention the priority of x-arguments.
So if a queue is created with x-message-ttl: 180000 and the applied policy defines message-ttl: 100000, like this:
... what will be the applied value?
The answer is likely yes
It looks like policies do override the queue's x-arguments.
Why?
Well, it did for max-length in this small test (with version 3.10.11):
A queue was created with x-max-length: 5
A policy of max-length: 3 was applied
The number of ready messages dropped from 5 to 3
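
That test is easy to reproduce with a short script. Here is a sketch in Python with pika (the queue name and connection details are placeholders; the policy itself is applied out of band, for example with rabbitmqctl set_policy):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Declare a queue capped at 5 messages via x-arguments, then fill it.
ch.queue_declare(queue="maxlen-test", arguments={"x-max-length": 5})
for i in range(5):
    ch.basic_publish(exchange="", routing_key="maxlen-test", body=f"msg {i}".encode())

# Now apply a policy with max-length: 3 to this queue, e.g.:
#   rabbitmqctl set_policy maxlen "^maxlen-test$" '{"max-length": 3}' --apply-to queues
# A passive declare returns the queue's current depth.
depth = ch.queue_declare(queue="maxlen-test", passive=True).method.message_count
print(depth)  # drops from 5 to 3 once the policy takes effect

conn.close()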

How can I check consumer lag for subject?

I have a JetStream stream with two publishers for different subjects:
subject.a
subject.b
And one consumer for subject.a
I'm having trouble checking the lag of the consumer for a specific subject.
/jsz?accounts=1&streams=1&consumers=1 shows the sequence for the whole stream:
"delivered": {
"consumer_seq": 1108,
"stream_seq": 2216,
"last_active": "2022-09-27T15:11:29.952186581Z"
}
So how can I see the sequence only for subject.a?
I use a synchronous Publish for the publishers and QueueSubscribe for the consumer.
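
One way to get a per-subject view (a sketch, not an authoritative answer): bind the consumer to a filter subject of subject.a. A consumer's num_pending only counts messages matching its filter subject, which is exactly the per-subject lag. Inspecting it with the Python nats-py client might look like this, with placeholder server, stream, and consumer names:

import asyncio
import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")  # placeholder URL
    js = nc.jetstream()

    # "STREAM" / "consumer-a" are placeholders for a durable consumer
    # created with filter_subject="subject.a".
    info = await js.consumer_info("STREAM", "consumer-a")
    print("pending on subject.a:", info.num_pending)
    print("delivered stream_seq:", info.delivered.stream_seq)

    await nc.close()

asyncio.run(main())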

RabbitMQ delete a corrupted queue after node crash

RabbitMQ Version 3.7.21
Erlang Version Erlang 21.3.8.10
My team had 2 nodes hit the memory watermark last night, so I rebuilt the bad nodes, but that left some queues in a bad state. I want to clear them out so that we can recreate them.
The stats show NaN for Ready, Unacked, and Total.
It looks like the queue's node is one that no longer exists, so unfortunately I can't access it. It's completely gone.
I have tried the following commands:
rabbitmqctl eval 'Q = rabbit_misc:r(<<"/">>, queue, <<"QUEUE">>), rabbit_amqqueue:internal_delete(Q).'
rabbitmqctl eval 'Q = {resource, <<"/">>, queue, <<"QUEUE">>}, rabbit_amqqueue:internal_delete(Q).'
but get this error:
{:undef, [{:rabbit_amqqueue, :internal_delete, [{:resource, "/", :queue, "QUEUE"}], []}, {:erl_eval, :do_apply, 6, [file: 'erl_eval.erl', line: 680]}, {:rpc, :"-handle_call_call/6-fun-0-", 5, [file: 'rpc.erl', line: 197]}]}
Which I assume means it's trying to make an RPC call to a node that no longer exists, and failing. This seems crazy to me because not only is the node gone, it has been forgotten from the cluster, yet a couple of queues still remain.
Looks like there are 3 options:
Comb through the Mnesia tables and delete the corrupted ones
Fully rebuild the cluster and migrate to a new cluster
Rename your queues and ignore corrupted ones
We're going to go with Option 3 for now. I'm sure there will eventually be a breaking change in RabbitMQ that makes Option 2 more appealing, but for now the quick fix is best for me.
According to https://groups.google.com/g/rabbitmq-users/c/VSjzvOUfS3s/m/q8OmFTqACAAJ, the internal_delete function in 3.7.x takes two arguments:
In 3.7.x rabbit_amqqueue:internal_delete takes two arguments (acting user name is the second one).
Therefore, the next time you need to delete a queue in a bad state, try
rabbitmqctl eval 'Q = {resource, <<"/">>, queue, <<"QUEUE">>}, rabbit_amqqueue:internal_delete(Q, <<"CLI">>).'

Celery with rabbitmq creates results multiple queues

I have installed Celery with RabbitMQ.
The problem is that for every result that is returned, Celery creates a queue in RabbitMQ named after the task's ID, under the celeryresults exchange.
I still want to have results, but in ONE queue.
my celeryconfig:
from datetime import timedelta

BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'amqp'
#CELERY_IGNORE_RESULT = True
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json', 'application/json']
CELERY_TIMEZONE = 'Europe/Oslo'
CELERY_ENABLE_UTC = True

from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    'every-minute': {
        'task': 'tasks.remote',
        'schedule': timedelta(seconds=30),
        'args': (),
    },
}
Is that possible? How?
Thanks!
The amqp backend creates a new queue for each task. Alternatively, there is a newer rpc backend which keeps results in a single queue.
http://docs.celeryproject.org/en/master/whatsnew-3.1.html#new-rpc-result-backend
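
Switching is a one-line change to the config above (a sketch; the rpc:// backend requires Celery 3.1+):

# In celeryconfig.py, replace the amqp result backend with rpc, which
# sends results back over a single queue per client instead of
# creating one queue per task.
CELERY_RESULT_BACKEND = 'rpc://'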
Nothing unusual.
That is how Celery works when we use amqp as the result backend: it creates a new temporary queue for every result, one for each task the worker consumes.
If you are not interested in the results, you can try the CELERY_IGNORE_RESULT = True setting.
If you do want to store the results, then I would recommend using a different result backend, like Redis.
You say you want Celery to keep the result on one queue. Now, to answer your question, let me ask you one:
How do you expect each producer to check for its relevant result without reading every single message off the queue to find the one it needs/wants?
In essence, what you want is a database of key-value pairs so that the lookup is O(1). The only way to do that with a queue broker is to create one queue for each "pair".
I understand that having many GUID queues is not neat or pretty, but it's conceptually the only way to do it on a messaging broker.
This solution won't keep all the results in ONE queue, but it will at least clean up the extra queues as soon as you're done with them.
If you use Redis as your backend, when you're done with a result that has created an errant queue, run result.forget(). This will cause both the result and the queue for the result to disappear. This can help you manage the number of queues you have and prevent OOM issues.
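
As a sketch, assuming the tasks.remote task from the config above and a configured result backend:

from tasks import remote  # the task referenced in CELERYBEAT_SCHEDULE above

result = remote.delay()          # returns an AsyncResult
value = result.get(timeout=10)   # wait for and fetch the result
result.forget()                  # drop the stored result (and its backing queue)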