I have a route in my API which kicks off a job by publishing it to a message queue (I'm using the AMQP package). I would like to leave the connection open and respond to the web request when the job finishes.
With AMQP, I have no way of knowing when a job I started will finish. Instead, I can subscribe to a channel that will report when a job finishes.
Here's some pseudo-code:
main = do
  doneChannel <- openAMQPChannel "done"
  -- assume some other program subscribes to "jobs", performs the work,
  -- and puts the results onto "done" when finished.
  jobsChannel <- openAMQPChannel "jobs"
  forkIO $ subscribeAMQP doneChannel onDone
  startServer jobsChannel

onDone jobId info = do
  -- How can I return this info in my request?
  -- assume it has unique identifying information
  ???

startServer jobs =
  Warp.run 8000 $ do
    post "/jobs" $ \payload -> do
      jobId <- generateUniqueId
      publishAMQP jobs jobId payload
      -- TODO wait until onDone happens for my particular request
      info <- waitForJobToComplete jobId
      return info
(See here for the real AMQP interface. I'm using Servant for routing)
Is there some way to do this with Control.Concurrent? MVar and Chan don't seem to be addressable by an ID like this. How could I implement it in such a way that I could handle thousands of requests at once?
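The closest I have come is sketching a shared Map of per-request MVars keyed by job id, roughly like this (the type aliases are placeholders, and I have no idea whether this holds up under thousands of concurrent requests):

import Control.Concurrent.MVar
import qualified Data.Map.Strict as Map

type JobId = String
type Info  = String   -- placeholder for the real result payload

type Pending = MVar (Map.Map JobId (MVar Info))

-- the request handler registers an empty MVar under its job id and blocks on it
waitForJobToComplete :: Pending -> JobId -> IO Info
waitForJobToComplete pending jobId = do
  box <- newEmptyMVar
  modifyMVar_ pending (pure . Map.insert jobId box)
  takeMVar box

-- the AMQP "done" callback looks up the MVar for that job id and fills it
onDone :: Pending -> JobId -> Info -> IO ()
onDone pending jobId info = do
  waiting <- readMVar pending
  case Map.lookup jobId waiting of
    Just box -> putMVar box info
    Nothing  -> pure ()   -- nobody is waiting for this job any more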
Any other ideas?
Related
I am using ActiveMQ with Java Spring. I have enabled the scheduler and managed to create scheduled jobs programmatically. I have also written a method to remove them based on the job id. I have been using JmsTemplate to browse a queue, but that only works when the queue has messages waiting; I can't find the job id when the queue is empty.
My question is: how am I supposed to get the scheduled job id?
From your question it sounds like you want to see which messages are scheduled. To do that you need to create a Producer that publishes to the Destination named "ActiveMQ.Scheduler.Management". Once that's done, you create a new Message, set some properties, and add a Reply To destination so the scheduler knows where to send its responses. Then all you need to do is process those messages with a Consumer subscribed to that Reply To destination.
Connection connection = createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

// Create the Browse Destination and the Reply To location
Destination requestBrowse = session.createTopic(ScheduledMessage.AMQ_SCHEDULER_MANAGEMENT_DESTINATION);
Destination browseDest = session.createTemporaryQueue();

// Create the "Browser"
MessageConsumer browser = session.createConsumer(browseDest);
connection.start();

// Send the browse request
MessageProducer producer = session.createProducer(requestBrowse);
Message request = session.createMessage();
request.setStringProperty(ScheduledMessage.AMQ_SCHEDULER_ACTION,
        ScheduledMessage.AMQ_SCHEDULER_ACTION_BROWSE);
request.setJMSReplyTo(browseDest);
producer.send(request);

// Each reply describes one scheduled message; keep receiving until there are no more
Message scheduled = browser.receive(5000);
while (scheduled != null) {
    // Do something clever...
    scheduled = browser.receive(5000);
}
Additional details are documented in this blog post.
I created a small application (Spring Boot and Camunda) that handles an order process. The Order-Service receives the new order via REST and calls the start event of the BPMN order workflow. The order process contains two asynchronous JMS calls (customer check and warehouse stock check). If both checks return, the order process should continue.
The Start event is called within a Spring Rest Controller:
ProcessInstance processInstance =
runtimeService.startProcessInstanceByKey("orderService", String.valueOf(order.getId()));
The Send Task (e.g. the customer check) sends the JMS message into an asynchronous queue.
The answer from this service is caught by another Spring component, which then tries to send an intermediate message:
runtimeService.createMessageCorrelation("msgReceiveCheckCustomerCredibility")
    .processInstanceBusinessKey(response.getOrder().getBpmnBusinessKey())
    .setVariable("resultOrderCheckCustomterCredibility", response)
    .correlate();
I deactivated the warehouse service to see if the order process waits for the arrival of the second call, but instead I get this exception:
1115 06:33:08.564 WARN [o.c.b.e.jobexecutor] ENGINE-14006 Exception while executing job 67d2cc24-0769-11ea-933a-d89ef3425300:
org.springframework.messaging.MessageHandlingException: nested exception is org.camunda.bpm.engine.MismatchingMessageCorrelationException: ENGINE-13031 Cannot correlate a message with name 'msgReceiveCheckCustomerCredibility' to a single execution. 4 executions match the correlation keys: CorrelationSet [businessKey=1, processInstanceId=null, processDefinitionId=null, correlationKeys=null, localCorrelationKeys=null, tenantId=null, isTenantIdSet=false]
This is my process. I cannot see a way to post my bpmn file :-(
Why can't it correlate using the message name and the business key? The JMS queues are empty, and there are other messages with the same businessKey waiting.
Thanks!
Just to narrow down the problem: do a runtimeService event subscription query before you try to correlate and check which subscriptions are actually waiting. Maybe you have a duplicate message name? Maybe you (accidentally) have another instance of the same process running? Once you have identified the subscriptions, you could just notify the execution directly without using the correlation builder.
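A rough sketch of both steps (the message name is taken from your stack trace; adjust it to your setup, and pick the right execution rather than blindly taking the first one):

// see which subscriptions are actually waiting for this message
List<EventSubscription> subscriptions = runtimeService.createEventSubscriptionQuery()
        .eventType("message")
        .eventName("msgReceiveCheckCustomerCredibility")
        .list();

for (EventSubscription subscription : subscriptions) {
    // inspect which process instances / activities are really waiting
    System.out.println(subscription.getProcessInstanceId()
            + " is waiting at activity " + subscription.getActivityId());
}

// once you know which execution should receive the message,
// you can notify it directly instead of going through the correlation builder
runtimeService.messageEventReceived("msgReceiveCheckCustomerCredibility",
        subscriptions.get(0).getExecutionId());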
The documentation says: "RPC-style result backend, using reply-to and one queue per client."
So how do I set the result queue in the rpc result backend?
I need it for these cases:
I do result = send_task('name', args) in one script (and save result.id as send_task_id), then try to fetch the result in another script with asyncresult = AsyncResult(id=send_task_id). I can't get the result, because each script has its own connection to the broker and the rpc backend declares its own result queue for each client.
In the second case I call send_task and AsyncResult (retrying while result.state == PENDING) in one script. When I run it as a worker with concurrency = 1 it works; with concurrency > 1 the result may never be returned. Each worker fork gets its own connection to the broker and its own result queue, so it only works when the same fork that called send_task also handles the retry.
I'm using celery 4.0.2 and 4.1.0.
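To make the first case concrete, the two scripts look roughly like this (broker URL and task name are placeholders):

# producer.py -- sends the task and hands the id to the other script
from celery import Celery

app = Celery('producer', broker='amqp://localhost', backend='rpc://')
result = app.send_task('name', args=[1, 2])
print(result.id)  # saved as send_task_id

# consumer.py -- a different process with its own broker connection
from celery import Celery
from celery.result import AsyncResult

app = Celery('consumer', broker='amqp://localhost', backend='rpc://')
send_task_id = '...'  # the id printed by producer.py
asyncresult = AsyncResult(id=send_task_id, app=app)
print(asyncresult.state)  # stays PENDING: the reply went to the producer's own rpc queue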
I want to be able to delete all the jobs in a queue, but I don't know which queue it is. I'm inside the perform method of my worker and I need to get the "current queue", i.e. the queue the current job came from.
For now I use:
require 'sidekiq/api'
queue = Sidekiq::Queue.new
queue.each do |job|
  job.delete
end
Because I only use the default queue, this works.
But now I'm going to use many queues, and I can't hard-code a single queue for this worker because I need several of them for load balancing.
So how can I get the current queue from inside the perform method?
Thanks.
You can't, by design; that context is orthogonal to the job. If your job needs to know a queue name, pass it explicitly as an argument.
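For example, something like this (CleanupWorker and the queue names are just placeholders):

require 'sidekiq'
require 'sidekiq/api'

class CleanupWorker
  include Sidekiq::Worker

  # the caller tells the job which queue it was pushed onto
  def perform(queue_name)
    Sidekiq::Queue.new(queue_name).clear
  end
end

# enqueue onto a specific queue and pass that queue's name as an argument
CleanupWorker.set(queue: 'critical').perform_async('critical')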
This is much faster:
Sidekiq::Queue.new.clear
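(That clears the default queue; for a named queue the same approach should work as Sidekiq::Queue.new('some_queue').clear, where 'some_queue' is whatever queue name you are using.)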
These docs show that you can access information about all running jobs, which includes the jid (job ID) and queue name for each job.
Inside the perform method you have access to the job's jid via the jid accessor. From that you can find the current job and read its queue name:
# scan the currently running jobs for the one whose jid matches ours
workers = Sidekiq::Workers.new
this_worker = workers.find { |_, _, work|
  work['payload']['jid'] == jid
}
queue = this_worker[2]['queue']
However, the contents of Sidekiq::Workers can be up to 5 seconds out of date, so you should only try this after your worker has been running for at least 5 seconds, which may not be ideal.
First of all, please don't consider this question a duplicate of this question.
I have set up an environment which uses Celery with Redis as the broker and result_backend. My question is: how can I make sure that when the Celery workers crash, all the scheduled tasks are retried once the Celery worker is back up?
I have seen advice on using CELERY_ACKS_LATE = True, so that the broker will re-deliver the tasks until it gets an ACK, but in my case it's not working. Whenever I schedule a task it immediately goes to the worker, which holds on to it until the scheduled time of execution. Let me give an example:
I am scheduling a task like this: res = test_task.apply_async(countdown=600), but immediately in the Celery worker logs I can see something like: Got task from broker: test_task[a137c44e-b08e-4569-8677-f84070873fc0] eta:[2013-01-...]. Now when I kill the Celery worker, these scheduled tasks are lost. My settings:
BROKER_URL = "redis://localhost:6379/0"
CELERY_ALWAYS_EAGER = False
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
CELERY_ACKS_LATE = True
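For completeness, the task itself is trivial; it looks roughly like this (module name and broker URL are placeholders):

# tasks.py -- roughly what the task and the scheduling call look like
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0')

@app.task(acks_late=True)
def test_task():
    return 'done'

# scheduled ten minutes into the future, exactly as described above
res = test_task.apply_async(countdown=600)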
Apparently this is how celery behaves.
When a worker is abruptly killed (but the dispatching process isn't), the message will be considered 'failed' even though you have acks_late=True.
The motivation (to my understanding) is that if the consumer was killed by the OS due to an out-of-memory condition, there is no point in redelivering the same task.
You may see the exact issue here: https://github.com/celery/celery/issues/1628
I actually disagree with this behaviour. IMO it would make more sense not to acknowledge.
I've had this issue where I was using some open-source C libraries that went totally amok and crashed my worker ungracefully without ever throwing an exception. Whatever the reason for the crash, one can simply wrap the content of the task in a child process and check its status in the parent.
import os

n = os.fork()
if n > 0:  # inside the parent process
    status = os.wait()  # wait until the child terminates; returns (pid, status)
    print("Signal number that killed the child process:", status[1])
    if status[1] > 0:  # the child exited with something other than a clean status
        # here one can do whatever they want, e.g. restart or throw an exception
        self.retry(exc=SomeException(), countdown=2 ** self.request.retries)
else:  # here comes the actual task content with its respective return
    return myResult  # make sure parent and child don't both return a result