flow-ref and processing strategy - mule

Can someone please explain how a Mule processing strategy works when one flow calls another with flow-ref?
Case 1.
Let's say we have 2 flows, flowA and flowB, with processing strategies procA and procB; both are asynchronous, but procA allows 10 threads while procB allows only 1.
<queued-asynchronous-processing-strategy name="procA" maxThreads="10" doc:name="procA"/>
<queued-asynchronous-processing-strategy name="procB" maxThreads="1" doc:name="procB"/>
flowA is reading from a queue and calling flowB with
<flow-ref name="flowB" doc:name="flowB"/>
Will another queue be created in this case between flowA and flowB, so that all the flowB calls are executed one by one in a single thread?
Or will flowB follow flowA's strategy, with possibly 10 messages processed at the same time?
Case 2.
flowA is a synchronous flow reading from a queue.
It's calling an asynchronous flowB that allows at most 1 thread, configured like this:
<queued-asynchronous-processing-strategy name="procB" maxThreads="1" doc:name="procB"/>
The async block has its own strategy, procC, with 10 threads allowed:
<queued-asynchronous-processing-strategy name="procC" maxThreads="10" doc:name="procC"/>
flowA is calling flowB like this:
<async doc:name="Async" processingStrategy="procC">
<flow-ref name="flowB" doc:name="flowB"/>
</async>
The question is similar:
Will another queue be created in this case between the async block and flowB, so that all the flowB calls are executed one by one in a single thread?
Or will flowB follow the procC strategy, with 10 messages processed at the same time?

Case 1.
Another queue with 1 thread will be created for flowB.
VM receiver pool thread -> SEDA thread from procA -> SEDA thread from procB
Case 2.
As above, another queue with 1 thread will be created for flowB.
VM receiver pool thread -> SEDA thread from procC -> SEDA thread from procB
Flow processing strategies are covered in the Mule documentation but I didn't find that overly useful. It is straightforward to set these flows up in Anypoint Studio and use Loggers to determine the thread that is running at a particular time.
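A minimal sketch of the Case 1 setup with such Loggers, using Mule 3 syntax (the VM endpoint path and logger placement are my own assumptions):

```xml
<queued-asynchronous-processing-strategy name="procA" maxThreads="10"/>
<queued-asynchronous-processing-strategy name="procB" maxThreads="1"/>

<flow name="flowA" processingStrategy="procA">
    <vm:inbound-endpoint path="in" exchange-pattern="one-way"/>
    <logger level="INFO" message="flowA thread: #[Thread.currentThread().getName()]"/>
    <flow-ref name="flowB"/>
</flow>

<flow name="flowB" processingStrategy="procB">
    <logger level="INFO" message="flowB thread: #[Thread.currentThread().getName()]"/>
</flow>
```

With maxThreads="1" on procB, the logger in flowB should always report the same SEDA thread, no matter how many procA threads are feeding it.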

Related

Camunda - Intermediate message event cannot correlate to a single execution

I created a small application (Spring Boot and Camunda) to process an order. The Order-Service receives the new order via REST and calls the start event of the BPMN order workflow. The order process contains two asynchronous JMS calls (customer check and warehouse stock check). If both checks return, the order process should continue.
The Start event is called within a Spring Rest Controller:
ProcessInstance processInstance =
runtimeService.startProcessInstanceByKey("orderService", String.valueOf(order.getId()));
The Send Task (e.g. the customer check) sends the JMS message into an asynchronous queue.
The answer of this service is caught by another Spring component, which then tries to send an intermediate message:
runtimeService.createMessageCorrelation("msgReceiveCheckCustomerCredibility")
.processInstanceBusinessKey(response.getOrder().getBpmnBusinessKey())
.setVariable("resultOrderCheckCustomterCredibility", response)
.correlate();
I deactivated the warehouse service to see if the order process waits for the arrival of the second call, but instead I get this exception:
1115 06:33:08.564 WARN [o.c.b.e.jobexecutor] ENGINE-14006 Exception while executing job 67d2cc24-0769-11ea-933a-d89ef3425300:
org.springframework.messaging.MessageHandlingException: nested exception is org.camunda.bpm.engine.MismatchingMessageCorrelationException: ENGINE-13031 Cannot correlate a message with name 'msgReceiveCheckCustomerCredibility' to a single execution. 4 executions match the correlation keys: CorrelationSet [businessKey=1, processInstanceId=null, processDefinitionId=null, correlationKeys=null, localCorrelationKeys=null, tenantId=null, isTenantIdSet=false]
This is my process. I cannot see a way to post my bpmn file :-(
Why can't it correlate with the message name and the business key? The JMS queues are empty, and there are other messages with the same businessKey waiting.
Thanks!
Just to narrow down the problem: do a runtimeService eventSubscription query before you try to correlate, and check which subscriptions are actually waiting. Maybe you have a duplicate message name? Maybe you (accidentally) have another instance of the same process running? Once you have identified the subscriptions, you could just notify the execution directly without using the correlation builder.

How to set result queue in celery result backend RPC?

The documentation says: "RPC-style result backend, using reply-to and one queue per client."
So, how do I set the result queue in the rpc result backend?
I need it for these cases:
I'm doing result = send_task('name', args) in one script (and saving result.id as send_task_id) and trying to get the result in another script with asyncresult = AsyncResult(id=send_task_id). I can't get this result because each script has its own connection to the broker, and rpc declares its own result queue for each client.
In the second case I try send_task and AsyncResult (with a retry while result.state == PENDING) in one script. When I run it as a worker with concurrency = 1, it is OK. With concurrency > 1, the result may never be returned. Each worker fork gets its own connection to the broker and its own result queue. It only works when the same worker fork that did the send_task also performs the retry.
I'm using celery 4.0.2 and 4.1.0.

RabbitMQ - subscribe to multiple queues

I have a system of producers and workers. The producers produce tasks, and I need to distribute the tasks to the workers. The tasks have types, and the workers are specialized: each worker can handle a subset of the task types. For instance, I have task types A, B, C, D
Worker1 can handle tasks types { A, B, C }
Worker2 can handle: { A, C }
Worker3 can handle: { A, B, D }
etc...
Not all the workers run all the time, so it is even possible that for some task type there is no running worker able to handle it; such a task would have to wait in a queue until a worker capable of handling it starts.
The system should be fair, so the tasks should be handled in the same order (if possible) as they come.
It seems I would need to create a queue for each task type and subscribe each worker to the types it is capable of handling.
However, it seems it is not possible to make a blocking subscription to multiple queues; the only way to implement this would be active waiting.
Any ideas? Is RabbitMQ appropriate for this problem?
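For what it's worth, AMQP client libraries generally do let a single consumer block on several queues at once: with pika, for instance, you can issue one basic_consume per queue on the same channel and then block in a single start_consuming() loop. A minimal sketch of the setup described above (the queue names and capability map are my own assumptions):

```python
import json

# Hypothetical capability map: which task types each worker handles.
CAPABILITIES = {
    "worker1": ["A", "B", "C"],
    "worker2": ["A", "C"],
    "worker3": ["A", "B", "D"],
}

def queues_for(worker):
    """One queue per task type; a worker consumes the queues it can handle."""
    return ["tasks.%s" % t for t in CAPABILITIES[worker]]

def handle(ch, method, properties, body):
    # Process one task, then ack it so the broker can dispatch the next one.
    print("got", json.loads(body))
    ch.basic_ack(delivery_tag=method.delivery_tag)

def run_worker(worker, host="localhost"):
    # Requires a running RabbitMQ broker and the pika package.
    import pika
    conn = pika.BlockingConnection(pika.ConnectionParameters(host))
    ch = conn.channel()
    ch.basic_qos(prefetch_count=1)  # only one unacked task at a time
    for q in queues_for(worker):
        ch.queue_declare(queue=q, durable=True)
        ch.basic_consume(queue=q, on_message_callback=handle)
    ch.start_consuming()  # blocks on all subscribed queues at once
```

Note that fairness is then per-queue: each queue is FIFO on its own, but there is no global ordering guarantee across task types.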

Mule loop of flow references run in very odd order

I have a loop that runs three flow references in order. At least that is the plan. Run in the debugger, processing takes place in the following unexpected order:
the first flow-ref (A)
the second flow-ref (B)
the first component of flow A
the third flow-ref (C)
the first component of flow B
the second component of the flow A
the first component of flow C
the second component of flow B
the third component of flow A
...now things blow up (in the first component of flow C), since the payload is not what it expects
I changed the processing strategy from implicit to 'synchronous' with no noticeable change.
What is going on?
<flow name="Loop_until_successfull" doc:name="Loop_until_successfull" processingStrategy="synchronous">
<flow-ref name="A" doc:name="Go to A"></flow-ref>
<flow-ref name="B" doc:name="Go to B"></flow-ref>
<flow-ref name="C" doc:name="Go to C"></flow-ref>
</flow>
Changing the "Loop_until_successfull" flow to synchronous only ensures that calls to "Loop_until_successfull" are processed synchronously, not necessarily any other flow called by it. You need to change each of the flows called by "Loop_until_successfull" to be processed synchronously as well, to ensure you get the response back from each call before you make the call to the next flow. If you do this, then "Loop_until_successfull" (I'll call it L.U.S. from now on) calls A, waits for a response, then calls B, waits for a response, then calls C. The way it is configured now, L.U.S. calls A and then moves immediately on to B using the payload it already has, rather than waiting for the response from A.
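A sketch of what that change looks like in the XML, with the component contents elided (Mule 3 syntax):

```xml
<flow name="A" processingStrategy="synchronous">
    <!-- components of A -->
</flow>
<flow name="B" processingStrategy="synchronous">
    <!-- components of B -->
</flow>
<flow name="C" processingStrategy="synchronous">
    <!-- components of C -->
</flow>
```

With all three referenced flows synchronous, each flow-ref in the loop runs to completion on the caller's thread before the next one starts.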

In celery, how to ensure tasks are retried when worker crashes

First of all please don't consider this question as a duplicate of this question
I have set up an environment which uses celery with redis as the broker and result_backend. My question is: how can I make sure that when the celery workers crash, all the scheduled tasks are retried when the celery worker is back up?
I have seen advice on using CELERY_ACKS_LATE = True, so that the broker will re-deliver the tasks until it gets an ACK, but in my case it's not working. Whenever I schedule a task, it immediately goes to the worker, which holds it until the scheduled time of execution. Let me give an example:
I am scheduling a task like this: res = test_task.apply_async(countdown=600), but immediately in the celery worker logs I can see something like: Got task from broker: test_task[a137c44e-b08e-4569-8677-f84070873fc0] eta:[2013-01-...]. Now when I kill the celery worker, these scheduled tasks are lost. My settings:
BROKER_URL = "redis://localhost:6379/0"
CELERY_ALWAYS_EAGER = False
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
CELERY_ACKS_LATE = True
Apparently this is how celery behaves:
when a worker is abruptly killed (but the dispatching process isn't), the message will be considered 'failed' even though you have acks_late=True.
The motivation (to my understanding) is that if the consumer was killed by the OS due to out-of-memory, there is no point in redelivering the same task.
You may see the exact issue here: https://github.com/celery/celery/issues/1628
I actually disagree with this behaviour; IMO it would make more sense not to acknowledge.
I've had this issue myself, where I was using some open-source C libraries that went totally amok and crashed my worker ungracefully without throwing an exception. Whatever the cause, one can simply wrap the content of the task in a child process and check its exit status in the parent:
import os

n = os.fork()
if n > 0:  # parent process
    status = os.wait()  # wait until the child terminates
    print("Signal number that killed the child process:", status[1])
    if status[1] > 0:  # the child exited abnormally, not gracefully
        # here one can do whatever they want, e.g. restart or raise an exception
        self.retry(exc=SomeException(), countdown=2 ** self.request.retries)
else:  # child process: the actual task content with its respective return
    return myResult  # make sure parent and child don't both return a result
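For what it's worth, later Celery releases added a setting aimed at exactly this acknowledge-on-kill behaviour. A configuration sketch, assuming a Celery version from the 4.x line (lowercase setting names) that supports task_reject_on_worker_lost:

```python
# celeryconfig.py sketch -- assumes Celery >= 4.x setting names
broker_url = "redis://localhost:6379/0"
result_backend = "redis://localhost:6379/0"

task_acks_late = True
# Re-queue the message when the worker process running the task is killed
# (e.g. SIGKILL or OOM), instead of marking the task as failed:
task_reject_on_worker_lost = True
```

Note that with a Redis broker, delivered-but-unacknowledged messages are only redelivered after the visibility timeout expires, so a killed worker's tasks may take a while to reappear.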