How to obtain the return value from a function enqueued in rq? - redis

From the Redis Queue documentation (https://python-rq.org/docs) I understand that a job's return value only becomes available some time after the worker has finished, and until then it is None.
Is there any way to find out that the worker has finished executing the job (preferably without polling with time.sleep())?
In my case the worker keeps running and the UI displays None, because control moves on to my UI-rendering code as soon as the worker is assigned the task; it does not wait for the execution to complete.
Please help me.

I know it's been a year, but if someone else needs it:
That depends on your needs. I'd use supervisor, so you can easily see the updated output of each running worker/process, either in the output file or in the browser via the inet_http_server section.
If you want something done after the current job has finished, just chain jobs on the queue:
you only need to pass the earlier job in the depends_on parameter (see the docs).
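To make the polling and chaining concrete, here is a minimal sketch under assumed names (count_words and render_result are hypothetical functions living in a module the worker can import); it shows both checking the job's status from the enqueuing process and chaining a dependent job with depends_on:
from redis import Redis
from rq import Queue

from my_module import count_words, render_result  # hypothetical task functions

q = Queue(connection=Redis())

# enqueue() returns immediately with a Job handle; the worker runs it later
job = q.enqueue(count_words, "http://example.com")

# Option 1: ask the job for its status instead of sleeping blindly
if job.is_finished:
    print(job.result)      # the function's return value, once available
elif job.is_failed:
    print("job failed")

# Option 2: chain a follow-up job that only starts after the first one succeeds
q.enqueue(render_result, depends_on=job)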

Related

Spring AMQP RabbitMQ does not consume all messages, workers finish prematurely

I am struggling to find the proper setting to delay the timeout for workers in RabbitMQ.
Since version 2.0 prefetchCount defaults to 250, and exactly that number of messages is received and processed.
I would like to keep the workers busy until they have cleared the entire queue (let's say 10k messages).
I can manipulate this number manually, for example by changing the default limit or assigning more threads, which multiplies the default number.
The result is always the same: once that number is reached, the workers stop their work and the application finishes its execution with
o.s.a.r.l.SimpleMessageListenerContainer : Successfully waited for workers to finish.
I would like them to finish when the queue is empty. Any ideas?
The logger.info("Successfully waited for workers to finish.") happens in only one place - doShutdown(). And that one is called from shutdown(), which is called from destroy() or stop().
So I suspect that your application is exiting for some reason: you simply don't block main() so that it keeps running.
Please, share a simple project we can play with.

Is there a way to get confirmation from the scheduler that the job has been accepted?

I'm using dask.distributed.Client to connect to a remote Dask scheduler that manages a bunch of workers. I am submitting my job using client.submit and keeping track of the returned Future:
client = Client("some-example-host:8786")
future = client.submit(job, job_args)
I want to be able to know if/when the job has been sent to and accepted by the scheduler. This is so that I can add some retry logic in cases when the scheduler goes down.
Is there an easy way to get confirmation that the scheduler has received and accepted the job?
Some additional points:
I realise that distributed.client.Future has a status property, but I'm hesitant to use it as it was not documented in the API.
I have tried using dask.callbacks.Callback but with no success. Any assistance with using callbacks with distributed.Client would be appreciated.
EDIT: I could also have the job post back a notification when it starts, but I would like to leave this approach as a last resort if the Client does not support this.
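For what it's worth, here is a rough sketch of the polling idea under the caveat already raised above: Future.status is not part of the documented API, so both the attribute and the status strings used below are assumptions rather than a supported contract.
import time
from distributed import Client

client = Client("some-example-host:8786")
future = client.submit(job, job_args)   # job and job_args as in the question

# Poll the (undocumented) status attribute for a short while; 'finished' or
# 'error' mean the scheduler has run the task, anything else keeps us waiting.
for _ in range(10):
    print(future.status)
    if future.status in ("finished", "error"):
        break
    time.sleep(1)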

IBM Worklight 6.1 - How to stop the first adapter call while making a second adapter call

If we fire multiple adapter calls with a gap of 2-3 seconds, how can we stop the first call, which is still running in the background?
Let's say:
I call A-Adapter, which returns some data on success, but 2-3 seconds later I also call B-Adapter, which returns a small amount of data within milliseconds.
The first adapter call is still taking time and responds after 4 seconds, or perhaps times out, so we get the success or failure of A-Adapter after B-Adapter's success.
My questions are:
Can we stop or unsubscribe the first adapter call at some point in time, whenever required?
Is there anything in Worklight for doing this?
The issue we are facing right now is a major one, described below.
Let's say:
I call a login adapter which returns login success or failure, and it is taking some time, say 5 minutes. So I close the app and launch the application again.
I click on login again, get a successful login, and now I am inside the app doing some work. At this point I receive the failure response from the earlier login adapter call that had been taking so long.
The answer to your direct question is no: there is no API that will let you terminate an adapter procedure invocation that is in progress before it finishes on its own. Once an adapter procedure is invoked, it must either succeed, fail, or time out.
Where you discuss the possibility of A-Adapter finishing after B-Adapter, I could not tell whether you intended that simply as an observation about a situation that could occur, or whether you see it as a problem or a bug. If the latter, you should understand that since adapter procedure invocations are completely asynchronous, there is no guarantee that adapter procedures will finish in the order they are invoked, and no such guarantee is intended.
To handle the issue you have described, I would suggest using an invocationContext to make sure that, when your success or failure callback fires, it corresponds to an adapter procedure invocation that you are expecting a response for, and to ignore the result if it does not. For more information, see the section of the Worklight Information Center that describes the options Object.
If the usual, "normal" response time of the adapter procedure is small, you could also mitigate the issue by setting the procedure invocation timeout to a small value. For instance, if an adapter procedure normally completes within about 4 seconds, you might set the timeout to 15 seconds, on the assumption that if it hasn't finished by then something is wrong (perhaps the back-end system you're retrieving data from has hung or crashed) and it's going to fail eventually anyway, so let it return a timeout failure and give up. That way you don't need to worry about what happens when it eventually fails some minutes later. There was another Stack Overflow question asked in the past where it was explained how to change this timeout.

Celery - RabbitMQ - execution order

I'm running some long tasks where I need to ensure that the queued tasks execute in order of reception. What I've found in my first tests is that when I reach the maximum number of workers (CELERYD_CONCURRENCY), the tasks sent after that are queued, and the first of those to be executed is actually the latest one received.
Of course the opposite behaviour is what I'm after: the oldest messages should be the first to be executed when a worker becomes free.
What is the explanation for this behavior and how can it be changed?
This turned out to be a result of the RabbitMQ prefetchCount setting, which prefetches a batch of messages per channel.
Since I am using the queue for long-running tasks, I solved the problem by setting CELERYD_PREFETCH_MULTIPLIER to 1 (the default is 4), so that only one message is prefetched and the execution order is preserved.
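As a sketch, the relevant setting (using the old-style configuration names matching the Celery versions discussed here) looks like this; the CELERY_ACKS_LATE line is an extra suggestion often paired with long-running tasks, not part of the answer above:
# celeryconfig.py / Django settings
CELERYD_PREFETCH_MULTIPLIER = 1   # each worker process reserves only one message at a time
CELERY_ACKS_LATE = True           # optional: acknowledge a message only after the task completes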

Using django-celery chord, celery.chord_unlock keeps executing forever not calling the provided callback

I'm using Django Celery with Redis to run a few tasks like this:
header = [
    tasks.invalidate_user.subtask(args=(user,)),
    tasks.invalidate_details.subtask(args=(user,)),
]
callback = tasks.rebuild.subtask()
chord(header)(callback)
So it's basically the same as stated in the documentation.
My problem is that when this chord is called, the celery.chord_unlock task keeps retrying forever. The tasks in the header finish successfully, but because chord_unlock is never done, the callback is never called.
Guessing that my problem is with detecting that the tasks from the header are finished, I turned to the documentation to see how this can be customized. I found a section describing how the synchronization is implemented, with an example provided; what I'm missing is how to get that example function called (i.e. is there a signal for this?).
Furthermore, there is a note saying that this method is not used with the Redis backend:
This is used by all result backends except Redis and Memcached, which increment a counter after each task in the header, then applying the callback when the counter exceeds the number of tasks in the set.
But it also says that the Redis approach is better:
The Redis and Memcached approach is a much better solution
What approach is that? How is it implemented?
So, why is chord_unlock never done, and how can I make it detect the finished header tasks?
I'm using: Django 1.4, celery 2.5.3, django-celery 2.5.5, redis 2.4.12
You didn't include an example of your tasks, but I had the same problem and my solution might apply.
I had ignore_result=True on the tasks that I was adding to the chord, defined like so:
@task(ignore_result=True)
Apparently ignoring the result means the chord_unlock task doesn't know they're complete. After I removed ignore_result (even if the task only returns True), the chord called the callback properly.
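A minimal sketch of what that looks like, with placeholder task bodies and Celery 3.0-style imports (adjust the import paths for 2.5); the important point is that the header tasks do not set ignore_result:
from celery import chord, task   # Celery 3.0-style imports

@task   # no ignore_result=True, so the result backend records completion
def invalidate_user(user):
    return True

@task
def invalidate_details(user):
    return True

@task
def rebuild(results):
    # receives the list of return values from the header tasks
    print(results)

def run(user):
    header = [invalidate_user.subtask(args=(user,)),
              invalidate_details.subtask(args=(user,))]
    chord(header)(rebuild.subtask())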
I had the same error. I changed the broker to RabbitMQ and chord_unlock now works until my tasks finish (2-3 minute tasks).
When using Redis, the tasks finish but chord_unlock only retried about 8-10 times, once per second, so the callback was not executed correctly:
[2012-08-24 16:31:05,804: INFO/MainProcess] Task celery.chord_unlock[5a46e8ac-de40-484f-8dc1-7cf01693df7a] retry: Retry in 1s
[2012-08-24 16:31:06,817: INFO/MainProcess] Got task from broker: celery.chord_unlock[5a46e8ac-de40-484f-8dc1-7cf01693df7a] eta:[2012-08-24 16:31:07.815719-05:00]
... and so on, about 8-10 times ...
Changing the broker worked for me. I am now testing @Chris's solution, but my callback function never receives the results from the header subtasks, so it does not work for me.
celery==3.0.6
django==1.4
django-celery==3.0.6
redis==2.6
broker: redis-2.4.16 on Mac OS X
This could also cause a problem. From the documentation:
Note:
If you are using chords with the Redis result backend and also overriding the Task.after_return() method, you need to make sure to call the super method or else the chord callback will not be applied.
def after_return(self, *args, **kwargs):
    do_something()
    super(MyTask, self).after_return(*args, **kwargs)
As I understand it, if you have overridden the after_return method in your task, it must be removed, or at the very least it must call the super method.
Bottom of the topic: http://celery.readthedocs.org/en/latest/userguide/canvas.html#important-notes