How to update TTL for an existing dead-letter queue - RabbitMQ

I am not able to update the TTL for a particular queue. I found a command, but it applies to all queues:
rabbitmqctl set_policy TTL ".*" '{"message-ttl":60000}' --apply-to queues
I need to update only one queue. Please help.
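For what it's worth, the second argument to set_policy is a regular expression, so anchoring it to an exact queue name scopes the policy to that single queue. A minimal sketch, with my-queue as a placeholder for the real queue name:
# Apply the 60-second message TTL only to the queue named exactly "my-queue";
# all other queues are left untouched.
rabbitmqctl set_policy TTL "^my-queue$" '{"message-ttl":60000}' --apply-to queues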

Related

Unable to set RabbitMQ lazy queue

I followed the link https://www.rabbitmq.com/lazy-queues.html and set a RabbitMQ queue to lazy mode using the following command:
rabbitmqctl set_policy Lazy "^lazy-queue$" '{"queue-mode":"lazy"}' --apply-to queues
However, when I check it using the command curl -u guest:guest 'localhost:15672/api/queues', the queue still shows the default mode:
"mode":"default"
How do I set the queue to lazy mode in RabbitMQ? Could someone please help?
A Policy defines a rule that applies to all queues whose name matches a particular pattern.
Let's have a closer look at the command you've copied:
rabbitmqctl set_policy Lazy "^lazy-queue$" '{"queue-mode":"lazy"}' --apply-to queues
We are creating or updating a policy called "Lazy"; as far as I know, this can be any name you like
The pattern we want it to apply to is ^lazy-queue$; this is a regular expression which only matches the exact name "lazy-queue"
The configuration we want to apply is to set "queue-mode" to "lazy"
So, if you want it to apply to multiple queues, you need to adjust the pattern to match those queues. For instance, you could apply it to all queues whose names begin with "lazy-":
rabbitmqctl set_policy Lazy "^lazy-" '{"queue-mode":"lazy"}' --apply-to queues
Or any name that ends in a four-digit number:
rabbitmqctl set_policy Lazy "-[0-9]{4}$" '{"queue-mode":"lazy"}' --apply-to queues
Or just apply to every queue:
rabbitmqctl set_policy LazyEverything ".*" '{"queue-mode":"lazy"}' --apply-to queues
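To verify that the policy actually took effect, you can query the management API for the specific queue rather than the whole list. A sketch assuming the default guest account and the default vhost / (URL-encoded as %2F); once the pattern matches the queue name, the JSON response should include "policy":"Lazy":
# Inspect a single queue; look for the "policy" field in the response.
curl -u guest:guest 'localhost:15672/api/queues/%2F/lazy-queue'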

Get the latest message in the queue

I want to fetch the last/latest message added to the queue. Is there a specific option available in the rabbitmqadmin utility?
The following command returns the first message in the queue:
./rabbitmqadmin get queue='log' -H localhost -P 15672 -u <username> -p <password> --vhost=logging count=1
Are you looking to consume the messages or just view them? I use the tool plumber to view the latest incoming messages without removing them from the queue. If you want to consume just the latest message, you may have to write a script.
To read the latest inbound message and exit:
plumber read messages rabbitmq --address amqp://user:pass@127.0.0.1:5672 --exchange events --routing-key \#
To watch all messages as they come in:
plumber read messages rabbitmq --address amqp://user:pass@127.0.0.1:5672 --exchange events --routing-key \# --follow
If you use plumber, your queue must be bound to its own exchange; you cannot use the RabbitMQ default exchange.
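If you do need to consume only the newest message with stock tooling, one rough approach (assuming a rabbitmqadmin recent enough to support the ackmode parameter, and a queue small enough to drain in one call) is to fetch and ack every ready message, then keep only the last rows of the table output:
# Drain up to 10000 messages, acking them so they are removed from the queue;
# the final data row of the table is the newest message.
# ackmode=ack_requeue_false ships with rabbitmqadmin from RabbitMQ 3.7+;
# older versions use requeue=false instead.
./rabbitmqadmin get queue='log' count=10000 ackmode=ack_requeue_false \
  -H localhost -P 15672 -u <username> -p <password> --vhost=logging | tail -n 2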

RabbitMQ cannot purge all messages in the queue

I am using RabbitMQ, and I tried to purge a queue using commands like the ones below:
[root@test xxx]# rabbitmqctl purge_queue metering.sample
Purging queue 'metering.sample' in vhost '/' ...
[root@test xxx]# rabbitmqadmin purge queue name=metering.sample
queue purged
[root@test xxx]# rabbitmqctl list_queues | grep sample
metering.sample 17172
Initially, the queue held 296533 messages; after I ran both commands, it still holds 17172 messages. (I am sure no publisher is running anymore.)
Why did this happen? Is it a bug, or am I using the commands the wrong way?
Need some help; thanks in advance.
Keep in mind that the unacked messages are not purged by those commands.
https://stackoverflow.com/a/25116528/2047138
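To confirm that the leftover messages are unacknowledged deliveries rather than ready ones, you can list both counters per queue with a standard rabbitmqctl invocation; closing or restarting the consumers requeues the unacked messages, after which a purge will remove them:
# Show ready vs. unacknowledged message counts for each queue.
rabbitmqctl list_queues name messages_ready messages_unacknowledged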

RabbitMQ cluster join does not work

I have a RabbitMQ cluster with 2 nodes, node A and node B. Node A is up and running. Every time I run the following command on node A I get:
./rabbitmqctl cluster_status
Cluster status of node rabbit@A ...
[{nodes,[{disc,[rabbit@A,rabbit@B]}]},
{running_nodes,[rabbit@A]},
{partitions,[]}]
...done.
Interestingly, node B is also up and running. Every time I have it join node A to form the cluster, it states:
rabbitmqctl join_cluster rabbit@A
...done (already_member).
rabbitmqctl cluster_status
Cluster status of node rabbit@B ...
[{nodes,[{disc,[rabbit@B]}]}]
...done.
So somehow node A cannot see B, and on B the "already_member" result is not reflected in the cluster_status output...
I can check the queues on both nodes and they are different: node A has dozens of queues and node B has none, so the cluster is clearly not established. Both nodes can ping each other, and nothing is reported in RabbitMQ's logs.
Any idea why this is not working?
In the case of a cluster, I suggest you use a load balancer. Make sure you have already set an HA policy for your cluster.
To set an HA policy:
$ rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
More here: RabbitMQ Cluster
I was able to solve this same problem. Given that NodeA is the parent and NodeB is trying to join the cluster (the consolidated sequence is sketched after these steps):
Stop the app on NodeB: rabbitmqctl stop_app
On NodeA, forget the cluster node: rabbitmqctl forget_cluster_node rabbit@NodeB
Reset NodeB: rabbitmqctl reset
Join NodeB to the cluster: rabbitmqctl join_cluster rabbit@NodeA
Start NodeB: rabbitmqctl start_app
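Put together, the recovery sequence looks like this (node names taken from the steps above; run each command on the node indicated in the comment):
# On NodeB: stop the RabbitMQ application (the Erlang node itself keeps running).
rabbitmqctl stop_app
# On NodeA: drop the stale membership record for NodeB.
rabbitmqctl forget_cluster_node rabbit@NodeB
# On NodeB: clear local state so the node can join fresh, then rejoin and restart.
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@NodeA
rabbitmqctl start_app
# On either node: both nodes should now appear under running_nodes.
rabbitmqctl cluster_status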

Multiple federation policies in RabbitMQ

I have a number of RabbitMQ servers arranged effectively in a star topology. I need to federate a different exchange bi-directionally between the central hub server and each of the outer servers. Configuration of the outer servers isn't problematic, but although the exchanges are different the hub doesn't want to accept more than one federation policy.
Defining multiple upstreams and upstream sets works as expected:
$ rabbitmqctl list_parameters
Listing runtime parameters ...
federation-upstream-set leaf1 [{"upstream":"leaf1-1"}]
federation-upstream-set leaf2 [{"upstream":"leaf2-1"}]
federation-upstream leaf2-1 {"uri":"--snipped--","expires":3600000}
federation-upstream leaf1-1 {"uri":"--snipped--","expires":3600000}
...done.
The first federation policy applies as expected:
$ rabbitmqctl set_policy --apply-to exchanges federate-me "^leaf1$" '{"federation-upstream-set":"leaf1"}'
Setting policy "federate-me" for pattern "^leaf1$" to "{\"federation-upstream-set\":\"leaf1\"}" with priority "0" ...
...done.
$ rabbitmqctl list_policies
Listing policies ...
/ federate-me exchanges ^leaf1$ {"federation-upstream-set":"leaf1"} 0
...done.
But as soon as I try to specify a second federation policy, it simply replaces the first one:
$ rabbitmqctl set_policy --apply-to exchanges federate-me "^leaf2$" '{"federation-upstream-set":"leaf2"}'
Setting policy "federate-me" for pattern "^leaf2$" to "{\"federation-upstream-set\":\"leaf2\"}" with priority "0" ...
...done.
$ rabbitmqctl list_policies
Listing policies ...
/ federate-me exchanges ^leaf2$ {"federation-upstream-set":"leaf2"} 0
...done.
It doesn't matter if I specify different priorities for the two policies, either; whatever I do, only the single most recently entered federation policy is listed. I know that only a single policy can apply to each exchange, but the exchange specification for each policy here is different, and moreover the documentation suggests that the policy with the highest priority should win in the event that there are multiple matching policies.
Can anyone help?
You have to specify a unique name for each policy you want to add. Setting a policy with an existing name simply overwrites the existing policy of that name.
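For example, giving each policy its own name lets them coexist (the names federate-leaf1 and federate-leaf2 are arbitrary; the patterns and upstream sets are the ones from the question):
# Two independently named policies, one per leaf exchange.
rabbitmqctl set_policy --apply-to exchanges federate-leaf1 "^leaf1$" '{"federation-upstream-set":"leaf1"}'
rabbitmqctl set_policy --apply-to exchanges federate-leaf2 "^leaf2$" '{"federation-upstream-set":"leaf2"}'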