How to explicitly give priorities to queues with python-rq

I am trying out python-rq and I do not see how to explicitly give priorities to the queues. Does the priority come from the order in which they are listed when the worker is launched?
rqworker queueA queueB queueC
Is queueA prioritized compared to queueB and queueC?

You are right.
The order of the arguments queueA queueB queueC defines the priority: the worker always empties queueA before it looks at queueB, and queueB before queueC.
For more details, see http://python-rq.org/docs/workers/
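The behavior can be sketched as a toy model of the worker's default dequeue strategy (an illustration of the idea only, not rq's actual implementation):

```python
# Toy model of rq's default queue-priority behaviour: scan the queues in
# the order they were passed to the worker and take the first job found.
def next_job(queues):
    for name, jobs in queues:  # earlier entries always win
        if jobs:
            return name, jobs.pop(0)
    return None

queues = [("queueA", []), ("queueB", ["jobB1"]), ("queueC", ["jobC1"])]
# queueA is empty, so the worker falls through to queueB:
print(next_job(queues))  # ('queueB', 'jobB1')
```

As long as queueA has pending jobs, queueB and queueC are never touched; that is the whole priority mechanism.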

Related

RabbitMQ: expires-policy does not delete queues

I'm using RabbitMQ. The problem is that the queues are not being deleted, despite my having a policy set up for this, and I cannot figure out why it is not working.
This is the policy definition:
And this is a typical queue; it is idle for a long time and has 0 consumers.
I know the rules for expiry, but I cannot see how any of them would apply here. Any hints on what could be wrong?
The pattern you provide, restuser*, doesn't match the name of the queue restresult-user001n5gb2. You can also confirm that from the Policy applied to the queue, which here is ha.
Two additional points to pay attention to:
the pattern is a regular expression, not a glob: unless you "anchor" the beginning or the end of the match, the pattern only has to appear somewhere in the name. restuse as a pattern would yield the same result as restuser*. If you want to match any queue whose name starts with restuser, the pattern should be ^restuser
Policies are not cumulative: if you have configured high availability through policies and you want to keep it for your restuser queues, you'll need to add the ha parameters to your clearrestuser policy too.
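The anchoring point can be checked directly with Python's re module, since RabbitMQ evaluates policy patterns as regular expressions (the queue name is taken from the question):

```python
import re

queue = "restresult-user001n5gb2"

# Unanchored, "restuser*" means "restuse" followed by zero or more "r",
# anywhere in the string; it does not match the queue from the question.
assert re.search("restuser*", queue) is None

# It does match any name merely *containing* restuser...
assert re.search("restuser*", "foo-restuser-bar")
# ...whereas anchoring restricts it to names *starting with* restuser.
assert re.search("^restuser", "restuser-result-queue")
assert re.search("^restuser", "foo-restuser-bar") is None
```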

Preserving order of execution in case of an exception on ActiveMQ level

Is there an option at the ActiveMQ level to preserve the order of execution of messages in case of an exception?
In other words, assume message ID=1 carries info about a student object with ID=Student_1000, and this message fails and enters the DLQ for some reason, while the principal queue still holds messages ID=2 and ID=3 referring to the same student (ID=Student_1000). We should not allow those messages to be processed, because they contain info about the same object as message ID=1; ideally, they should be redirected directly to the DLQ. Otherwise, if we allow them to be processed, we will lose the order of execution in case we are performing an update.
Please note that I'm using message groups of Active MQ.
How to do that on Active MQ level?
Many thanks,
Rosy
Well, not really. And since the DLQ is shared by default, you would not have ordered messages there either, unless you configure individual DLQs.
In my experience, trying to rely on strict, 100% message order on queues to keep business logic simple is a bad idea. That is, unless you have a single broker, a single producer, a single consumer, and no DLQ handling (infinite redeliveries on the RedeliveryPolicy).
What you should do instead is read the entire group in a single transaction, then commit or roll it back as a group. This will require you to set the prefetch size accordingly. DLQ handling and reading is actually a client concern, not a broker-level thing.
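The group-as-transaction idea can be sketched with a toy in-memory queue (the broker API is elided; only the commit/rollback shape of the logic is shown):

```python
# Sketch: process every message of one group as a unit. If any message
# fails, "roll back" by leaving the queue untouched so order is preserved;
# only on success are the messages removed ("committed").
def process_group(queue, group_id, handler):
    batch = [m for m in queue if m["group"] == group_id]
    try:
        for m in batch:
            handler(m)          # any handler may raise
    except Exception:
        return False            # rollback: nothing consumed, order intact
    for m in batch:             # commit: consume the whole batch
        queue.remove(m)
    return True

queue = [{"group": "Student_1000", "id": 1},
         {"group": "Student_1000", "id": 2}]
ok = process_group(queue, "Student_1000", lambda m: m["id"])
print(ok, len(queue))  # True 0
```

With a real broker, the "rollback" branch would be a session rollback and the "commit" branch a session commit over the prefetched group.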

calling multiple sidekiq workers from a worker

I have to run multiple jobs upon a request from the user; however, only one of them is important.
So I have a MainWorker in whose perform method I call various other workers, such as Worker1 and Worker2.
Worker1 and Worker2 can be delayed, but I need to give priority to MainWorker.
So here is how my perform method looks now:
class MainWorker
  include Sidekiq::Worker  # required so Sidekiq treats the class as a worker

  def perform(user_id)
    User.find(user_id).main_task
    Worker1.perform_async(user_id)
    Worker2.perform_async(user_id)
  end
end
I might have more sub-workers coming up later. I want to know whether this is good practice, or whether there is a much better way to do it. I do, however, give custom queue names and priorities to the workers.
There are some third-party add-ons for Sidekiq; see https://github.com/mperham/sidekiq/wiki/Related-Projects
One that might be helpful for you is SidekiqSuperworker.

How to prevent a NServiceBus saga from being started multiple times?

I want to create a saga which is started by message "Event1" but which will ignore receipt of "duplicate" start messages carrying the same application-level ID (which may result from two or more users hitting a UI button within a short period of time). The documentation seems to suggest that this approach would work:
Saga declares IAmStartedByMessages<Event1>
Saga configures itself with ConfigureMapping<Event1>(s => s.SomeID, m => m.SomeID);
Handle(Event1 evt) sets a boolean flag when it processes the first message, and falls out of the handler if the flag has already been set.
Will this work? Will I have a race condition if the subscribers are multithreaded? If so, how can I achieve the desired behavior?
Thanks!
The race condition happens when two Event1 messages are processed concurrently. The way to prevent two saga instances from being created is to put a unique constraint on the SomeID column in the saga persistence store.

What does the last digit in the ActiveMQ message ID represent?

I have a system that seems to be working fine, but when a certain process writes a message, 10 messages appear in the queue. They are almost duplicates; only the last section of the message ID is incremented.
Example:
c6743810-65e6-4bcd-b575-08174f9cae73:1:1:1
c6743810-65e6-4bcd-b575-08174f9cae73:1:1:2
c6743810-65e6-4bcd-b575-08174f9cae73:1:1:3
c6743810-65e6-4bcd-b575-08174f9cae73:1:1:4
...
What does this mean? From what I can tell, the process is only writing one message.
Never mind, I found it... The process WAS writing multiple messages, but using the same producer and transaction. ActiveMQ seems to use this as a session ID or something of that sort. Feel free to expand on this topic if you deem it necessary.
The message ID is generated to be globally unique, and consists of a combination of your host, a unique MessageProducer ID, and an incrementing sequence for each message.
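Based only on the IDs shown in the question, the trailing component can be split off to confirm it is a per-producer sequence (a quick illustration, not ActiveMQ code):

```python
# Split the IDs from the question at the last colon: everything before it
# identifies the producer, and the final number is the message sequence.
ids = [f"c6743810-65e6-4bcd-b575-08174f9cae73:1:1:{n}" for n in (1, 2, 3, 4)]

def parse_message_id(message_id):
    producer_part, _, seq = message_id.rpartition(":")
    return producer_part, int(seq)

producers = {parse_message_id(i)[0] for i in ids}
sequences = [parse_message_id(i)[1] for i in ids]
print(len(producers), sequences)  # 1 [1, 2, 3, 4]
```

All ten "duplicates" in the question share one producer prefix and differ only in this counter, which matches the explanation above.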