Calling multiple Sidekiq workers from a worker - ruby-on-rails-3

I have to run multiple jobs upon a request from the user, but only one of them is important. So I have a MainWorker whose perform method calls other workers such as Worker1 and Worker2.
Worker1 and Worker2 can be delayed; I need to give priority to MainWorker.
Here is how my perform method looks now:
class MainWorker
  include Sidekiq::Worker

  def perform(user_id)
    User.find(user_id).main_task
    Worker1.perform_async(user_id)
    Worker2.perform_async(user_id)
  end
end
I might have more sub-workers coming up later. I want to know if this is good practice, or if there is a much better way to do this. I do give custom queue names and priorities to the workers.
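Since the question mentions custom queue names and priorities, one common way to make MainWorker's jobs take precedence is weighted queues in the Sidekiq configuration. A minimal sketch (the queue names and weights here are assumptions, not from the question):

```yaml
# config/sidekiq.yml
:queues:
  - [critical, 10]   # queue used by MainWorker: checked ~10x as often
  - [low, 1]         # queue used by Worker1 / Worker2
```

Each worker class then selects its queue with `sidekiq_options queue: "critical"` (or `"low"`) in its class body.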

There are some 3rd party add-ons for Sidekiq. See here: https://github.com/mperham/sidekiq/wiki/Related-Projects
One that might be helpful for you is: SidekiqSuperworker.

How can I find out the JOBCOUNT value of a periodic background job?

I have a report which creates a background job. The user can decide whether the job should be periodic or not. Now I want to show information about the current job. So how can I find out the JOBCOUNT of the follow-up (periodic) job after the old one was executed?
I guess SAP would store that information only if it's needed for internal operations.
I think there's no need, so you won't find that information stored anywhere.
You might approximate it yourself by searching for the currently scheduled job whose creation date/time (TBTCO/TBTCS) is close to the end of a previous one (TBTCO), with the same characteristics (including its step(s) in table TBTCP). You may get inspiration from the programs prefixed BTCAUX (04, 13).
If you write this piece of code, don't hesitate to post it as a separate answer; that could be very helpful for future visitors.
You can use the function module BP_JOB_SELECT for that; its selection parameters largely resemble SM37's.
Set the JOBSELECT_DIALOG parameter to 'N' to skip the GUI screen and fill the job name into JOBSEL_PARAM_IN-JOBNAME; these are the only two mandatory parameters.
The JOBCOUNT value resides in JOBSELECT_JOBLIST table:
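A minimal sketch of the call (the job name is a placeholder, and the exact signature should be checked in SE37):

```abap
* Select scheduled runs of the job by name; the JOBCOUNT of each match
* comes back in the JOBSELECT_JOBLIST table.
DATA: lt_joblist TYPE TABLE OF tbtcjob,
      ls_job     TYPE tbtcjob,
      ls_params  TYPE btcselect.

ls_params-jobname = 'MY_PERIODIC_JOB'.  " assumed job name

CALL FUNCTION 'BP_JOB_SELECT'
  EXPORTING
    jobselect_dialog  = 'N'         " no GUI screen
    jobsel_param_in   = ls_params
  TABLES
    jobselect_joblist = lt_joblist.

LOOP AT lt_joblist INTO ls_job.
  WRITE: / ls_job-jobname, ls_job-jobcount, ls_job-status.
ENDLOOP.
```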

Ansible forks set to 1 for only one specific role

I am trying to execute 3 to 4 roles in Ansible in parallel, all at the same time, but I want only one specific role to run with forks set to 1, whereas the other roles can use any number of forks.
I'm not sure what your exact end goal is, but I suspect the serial directive is what you're looking for to limit how the execution operates. Keep in mind you may have to split the playbook into multiple plays, depending on how you have things set up now.
http://docs.ansible.com/ansible/latest/playbooks_delegation.html#rolling-update-batch-size
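A sketch of that split (host patterns and role names are placeholders): the special role gets its own play with `serial: 1`, which runs it one host at a time, while the other plays keep the default forks behavior.

```yaml
# playbook.yml
- hosts: all
  roles:
    - role_a
    - role_b

- hosts: all
  serial: 1          # one host per batch for this play only
  roles:
    - special_role
```

Note that `forks` itself is a global setting (ansible.cfg or `-f`); `serial` is the per-play way to throttle execution.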

Message broker with dynamic queues

I have an application that accepts data for updating product prices, and I'm wondering how I can optimize it.
The data is received via some kind of queue (RabbitMQ).
Few key notes:
I can't change the incoming data format (the data is received from a third party)
Updates must be performed in order from the product's perspective (due to its attributes)
Each product CAN have additional attributes that make the system behave differently when updating prices internally
I was thinking about using a messaging system to distribute the processing, something like this:
where:
Q1 is queue for handling only p1 product updates.
Q2 is queue for handling only p2 product updates.
and so on..
However, I have found that this is likely an anti-pattern: Dynamic queue creation with RabbitMQ
For example, with RabbitMQ it seems it would be quite hard to achieve, since queues need to be predefined in order to listen to them.
The questions are:
1) If this pattern is not valid, which pattern should I use instead?
2) If this pattern is valid, is there a different messaging system that would allow distributing data this way?
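One way to keep per-product ordering without creating a queue per product is to hash each product id onto a fixed, predeclared set of queues: all updates for a given product always land on the same queue, and each consumer reads its queue(s) sequentially. A minimal sketch (the queue count and naming scheme are assumptions, not from the question):

```ruby
require "digest"

# Number of predeclared queues; an assumption for illustration.
QUEUE_COUNT = 8

# Deterministically map a product id to one of the fixed queues, so
# updates for the same product always preserve their relative order.
def queue_for(product_id)
  index = Digest::MD5.hexdigest(product_id.to_s).to_i(16) % QUEUE_COUNT
  "product-updates-#{index}"
end
```

RabbitMQ's consistent-hash exchange plugin implements essentially this idea on the broker side, with the queues still declared up front.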

How to make monit start processes in order?

In the monit config file, we have a list of processes we expect monit to check for. Each one looks like:
check process process_name_here
  with pidfile /path/to/file.pid
  start program = "/bin/bash ..."
  stop program = "/bin/bash ..."
  if totalmem is greater than X MB for Y cycles then alert
  if N restarts within X cycles then alert
  group group_name
Since we monitor about 30-40 processes in this list, I have two questions:
1) If we restart the services (kill them all), can we have monit start all processes at the same time instead of the way it's done now (sequentially, one by one)?
2) Can we specify the order in which we would like the processes to start? How is the order determined? Is it the order in which they appear in the conf file? Is it by process name? Something else? This is especially important if #1 above is not possible...
You can use the depends on syntax. I use this for custom Varnish builds.
For example, process a, process b, and process c. Process a needs to start first, then followed by b and c.
Your first process won't depend on anything. In your check for process b, you'll want:
depends on process a
Then in your process c check, you'll want:
depends on process b
This should make sure that the processes are started in the correct order. Let me know if this works for you.
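Putting the answer together, a sketch with three placeholder processes (names, pidfiles, and scripts are assumptions):

```
check process proc_a with pidfile /var/run/proc_a.pid
  start program = "/bin/bash /opt/proc_a/start.sh"
  stop program = "/bin/bash /opt/proc_a/stop.sh"

check process proc_b with pidfile /var/run/proc_b.pid
  start program = "/bin/bash /opt/proc_b/start.sh"
  stop program = "/bin/bash /opt/proc_b/stop.sh"
  depends on proc_a

check process proc_c with pidfile /var/run/proc_c.pid
  start program = "/bin/bash /opt/proc_c/start.sh"
  stop program = "/bin/bash /opt/proc_c/stop.sh"
  depends on proc_b
```

With this chain, monit makes sure proc_a is running before it starts proc_b, and proc_b before proc_c.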
Going only by the documentation, there is nothing related to point one, other than the fact that monit runs single-threaded.
As for point two, under "SERVICE POLL TIME":
Checks are performed in the same order as they are written in the .monitrc file, except if dependencies are setup between services, in which case the services hierarchy may alternate the order of the checks.
Note that if you have an include string that matches multiple files they are included in no specific order.
If you require a specific order you should use DEPENDS where possible

How to prevent a NServiceBus saga from being started multiple times?

I want to create a saga which is started by the message "Event1" but which will ignore receipt of "duplicate" start messages with the same application-level id (duplicates may result from two or more users hitting a UI button within a short period of time). The documentation seems to suggest that this approach would work:
Saga declares IAmStartedByMessages<Event1>
Saga configures itself with ConfigureMapping<Event1>(s => s.SomeID, m => m.SomeID);
Handle(Event1 evt) sets a boolean flag when it processes the first message, and falls out of the handler if the flag has already been set.
Will this work? Will I have a race condition if the subscribers are multithreaded? If so, how can I achieve the desired behavior?
Thanks!
The race condition happens when two Event1 messages are processed concurrently. The way to prevent two saga instances from being created is by setting a unique constraint on the SomeID column.
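With a SQL-based saga persister, that constraint makes the second concurrent insert fail, so only one saga instance survives and the losing message is retried against the existing instance. A sketch of the DDL (the table and constraint names are assumptions; the actual table depends on your persistence configuration):

```sql
ALTER TABLE MySagaData
  ADD CONSTRAINT UQ_MySagaData_SomeID UNIQUE (SomeID);
```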