Is there a way to wait for a batch process to complete in Mule 4? - batch-processing

I have a Mule flow that runs a batch job. What I have found is that the batch job branches off and my flow continues on. I would like to wait until the On Complete phase is hit and return those results. I found that in older versions there was a run-and-wait scope, but I don't see an equivalent in Mule 4. Any suggestion would be much appreciated.

Batch runs asynchronously with respect to the flow. That was the same in Mule 3.
I guess you could send a message to a VM queue in the On Complete phase and wait for it to be received in the calling flow.
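For example, something along these lines in Mule 4 (a rough sketch only: the queue name "batchDone", the config and flow names, and the five-minute timeout are all assumptions, and with concurrent job runs you would want a correlation id per run rather than a single shared queue):

<vm:config name="VM_Config">
    <vm:queues>
        <vm:queue queueName="batchDone"/>
    </vm:queues>
</vm:config>

<flow name="callerFlow">
    <!-- ... trigger and preparation of the records ... -->
    <batch:job jobName="myBatchJob">
        <batch:process-records>
            <batch:step name="step1">
                <!-- per-record processing -->
            </batch:step>
        </batch:process-records>
        <batch:on-complete>
            <!-- the payload here is the batch job result; publish it so the caller can pick it up -->
            <vm:publish config-ref="VM_Config" queueName="batchDone"/>
        </batch:on-complete>
    </batch:job>
    <!-- the flow continues immediately after batch:job, so block here until On Complete reports back -->
    <vm:consume config-ref="VM_Config" queueName="batchDone" timeout="5" timeoutUnit="MINUTES"/>
    <!-- the payload is now whatever On Complete published above -->
</flow>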

Related

Does a Mule manual shutdown allow the current flow to complete?

I'm running a flow on mule-standalone-3.5.0 on Linux and trying to understand what happens in the case of a manual server shutdown.
Right now I see the currently running 'main' flow, which is a recursive loop, continuing during the shutdown long enough to finish processing the current message. I would like to know if that is part of 'graceful shutdown' or just good luck.
It does try to call itself again, which I stop by checking the run state of a flow I know stops quickly.
// Look up a flow whose run state reflects whether Mule is shutting down
def flow = muleContext.getRegistry().lookupFlowConstruct("initFlow")
if (flow.isStopped() || flow.isStopping()) {
    // shutdown in progress: flag it so the flow does not call itself again
    message.setInvocationProperty('runState', 'stopping')
} else {
    message.setInvocationProperty('runState', 'running')
}
Whenever Mule receives the shutdown signal, it makes sure to finish processing the current message to avoid data loss, and then starts closing down all endpoints and flows, together with thread pool cleanup. HTH.

How to figure out if Mule flow message processing is in progress

I have a requirement where I need to make sure only one message is being processed at a time by a Mule flow. The flow is triggered by a Quartz scheduler which reads one file from an FTP server each time.
My proposed solution is to keep a global variable "FLOW_STATUS" which will be set to "RUNNING" when a message is received and reset to "STOPPED" once the processing of the message is done.
Any messages fed to the flow will check for this variable and abort if "FLOW_STATUS" is "RUNNING".
This setup seems to be working, but I was wondering if there is a better way to do it.
Is there any best practice around this, or any built-in Mule helper functions to achieve the same thing instead of relying on global variables?
It seems like a simpler solution would be to set the maxActiveThreads for the flow to 1. In Mule, each message processed gets its own thread, so setting maxActiveThreads to 1 would effectively make your flow single-threaded. Other pending requests will wait in the receiver threads. You will need to make sure your receiver thread pool is large enough to accommodate all of the potentially waiting threads. That may mean throttling back your Quartz scheduler to allow time to process the files so the receiver thread pool doesn't fill up. For more information on the thread pools and how to tune performance, here is a good link: http://www.mulesoft.org/documentation/display/current/Tuning+Performance
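In Mule 3 terms that could look roughly like the following (the strategy and flow names are made up for illustration; tune maxThreads and the queue settings to your case):

<!-- cap the flow at one processing thread so only one message is handled at a time -->
<queued-asynchronous-processing-strategy name="singleThreadStrategy" maxThreads="1"/>

<flow name="ftpPollFlow" processingStrategy="singleThreadStrategy">
    <!-- Quartz/FTP trigger and the rest of the flow go here -->
</flow>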

Mule ESB: How to achieve a typical retry mechanism in Mule ESB

I need to implement retry logic. The inbound endpoint pushes messages to a REST (outbound) endpoint. If the REST service is unavailable, I need to retry once and then put the message in a queue. But subsequent messages should not retry at all; they should go straight into the queue until the REST service is available again.
Once the service is available, I need to push all the messages from the queue to the REST service (in order) via a batch job.
Questions:
How do I know the service is unavailable when my second message arrives? If I use Until Successful, every message retries before being put in the queue, and the problem is that the second message shouldn't retry at all.
For the batch, I thought of using Poll, but how do I tell Poll that the service has become available so it can begin the batch process? (Poll is more about configuring timings for running the batch.)
The other tricky part is that ordering has to be preserved: once the service is available, the queued messages (i.e. the batch) have to go to the REST service first, and only then the real-time ones. I doubt whether that is achievable.
A quick response on how to implement this logic would be very helpful.
Using Mule: 3.5.1
You could try something like the below, using flow controls (see the sketch after these steps):
Process a message; if there is an exception or a bad response code, set a variable/property like serviceAvailable=false.
Subsequent message processing will first check the serviceAvailable property before calling the service; if the property is false, en-queue the message to a DB table with status=new/unprocessed.
Create a flow/scheduler to process the messages from the DB sequentially; it will not check the serviceAvailable property and will call the REST service directly.
If the service throws an exception, it will not store the messages in the DB again, but if it processes them successfully, change serviceAvailable=true and de-queue the messages or change their status. Add another property and set it to true if there are more messages in the DB table, e.g. moreDBMsg=true.
New messages should not be processed/consumed until moreDBMsg=false.
Once moreDBMsg=false and serviceAvailable=true, start processing the messages from the queue.
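A rough sketch of the first two steps in Mule 3.5 XML (the table, column, endpoint, and config names are invented, and serviceAvailable is shown as a flow variable for brevity; in practice the flag has to live somewhere shared across messages, e.g. an object store):

<flow name="mainFlow">
    <!-- inbound endpoint ... -->
    <choice>
        <!-- an unset flag counts as "available", so only an explicit false parks the message -->
        <when expression="#[flowVars.serviceAvailable == false]">
            <db:insert config-ref="Database_Config">
                <db:parameterized-query><![CDATA[
                    INSERT INTO PENDING_MSG (payload, status) VALUES (#[message.payloadAs(java.lang.String)], 'new')
                ]]></db:parameterized-query>
            </db:insert>
        </when>
        <otherwise>
            <http:outbound-endpoint address="http://host/service" method="POST"/>
        </otherwise>
    </choice>
    <catch-exception-strategy>
        <!-- the REST call failed: mark the service unavailable and queue this message too -->
        <set-variable variableName="serviceAvailable" value="#[false]"/>
        <db:insert config-ref="Database_Config">
            <db:parameterized-query><![CDATA[
                INSERT INTO PENDING_MSG (payload, status) VALUES (#[message.payloadAs(java.lang.String)], 'new')
            ]]></db:parameterized-query>
        </db:insert>
    </catch-exception-strategy>
</flow>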
For the timeout I would still look at the response code and catch time-outs to determine whether the call was successful or requires a retry. In practice you normally do multi-threading anyway, so you have multiple calls in parallel, or one call simply starts before the other ends.
That is quite normal.
But you can simply retry the queued calls that time out, and after x time-outs you "skip" or defer the retry.
All of this can be done using actual Mule flow components, for example:
MEL: http://www.mulesoft.org/documentation/display/current/Mule+Expression+Language+Reference
Or flow controls: http://www.mulesoft.org/documentation/display/current/Choice+Flow+Control+Reference
Or, for example, you reference a Spring bean and do it in native Java code.
One possibility for the queue would be to persist it in a database. Mule has a database connector that has a "poll" feature, see: http://www.mulesoft.org/documentation/display/current/JDBC+Transport+Reference#JDBCTransportReference-PollingTransport
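If you go the database route, the polling side could look roughly like this in Mule 3.5, using <poll> as the message source for the DB connector (the table, column, endpoint, and config names are invented):

<flow name="drainPendingFlow">
    <poll frequency="60000">
        <db:select config-ref="Database_Config">
            <db:parameterized-query><![CDATA[
                SELECT id, payload FROM PENDING_MSG WHERE status = 'new' ORDER BY id
            ]]></db:parameterized-query>
        </db:select>
    </poll>
    <foreach>
        <!-- replay each queued record against the REST service, oldest first -->
        <set-variable variableName="recordId" value="#[payload.id]"/>
        <set-payload value="#[payload.payload]"/>
        <http:outbound-endpoint address="http://host/service" method="POST"/>
        <db:update config-ref="Database_Config">
            <db:parameterized-query><![CDATA[
                UPDATE PENDING_MSG SET status = 'done' WHERE id = #[flowVars.recordId]
            ]]></db:parameterized-query>
        </db:update>
    </foreach>
</flow>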

Mule ESB: How can I run flow(s) automatically

How can I run flow(s) automatically?
Can anyone tell me how flows run automatically?
Basically I am trying to read data from a CSV file and store it in a database. I have created a flow for it and run it; it starts the application as you can see below:
INFO 2013-11-26 11:31:47,401 [main] org.mule.module.launcher.MuleDeploymentService:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Started app 'read_csv_file' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
After this point I have no idea what I need to do to execute this flow, or from where I need to hit it. I am stuck here, please help me.
Mule flows are triggered by an event. It can be an event generated from another flow, or it can be an event on an inbound endpoint.
From the post it can be understood that the flow needs to be triggered from outside the application, so it is better to use an inbound endpoint at the start of the flow to trigger it.
For your case you can use a file:inbound-endpoint at the start of your flow.
<flow name="main_flow">
    <file:inbound-endpoint path="/path/to/input/folder" doc:name="File"/>
    <!-- ... message processors ... -->
</flow>
Flows are started automatically; you don't need to "run" them.
Messages will be processed depending on the message sources you have in your flows, which are the ones triggering the flow execution.
I would suggest you carefully read the documentation:
http://www.mulesoft.org/documentation/display/current/Mule+Application+Architecture
You can either use a File component that reads the file from the location you mention, depending on the polling frequency. Or, if you read your file using a Java component, an expression, or a Groovy script, you can trigger the flow automatically by using a Quartz component at the beginning of the flow, as sketched below.
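A Quartz-triggered variant could look roughly like this (the flow and job names, the interval, and the file path are placeholders):

<flow name="read_csv_on_schedule">
    <quartz:inbound-endpoint jobName="readCsvJob" repeatInterval="60000" doc:name="Quartz">
        <quartz:event-generator-job/>
    </quartz:inbound-endpoint>
    <scripting:component doc:name="Groovy">
        <scripting:script engine="Groovy"><![CDATA[
            // read the CSV here and return its contents as the payload
            return new File('/path/to/input/data.csv').text
        ]]></scripting:script>
    </scripting:component>
    <!-- ... transform the rows and insert them into the database ... -->
</flow>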

Does RabbitMQ have inbuilt code to call an exe file?

I wanted to know if RabbitMQ has any built-in capabilities to call an external exe once its message queue gets populated. I understand that we can implement task queues/worker queues in RabbitMQ, but it has to be done by writing an external application (say in Java, like they have mentioned in the tutorials: http://www.rabbitmq.com/tutorials/tutorial-two-java.html). Please help me out with this.
Adding to my previous question:
I have decided to write an application that will run an exe. But I don't want the application that I write to poll my queue. Instead I want RabbitMQ to trigger my application whenever there is a new message by sending it a job to process. Can I do this? How can I add jobs to the queues?
You are probably going to have to write your own consumer. The question is what is sending the messages in the first place, what the format of the message is, and whether you need that data.
Python is probably the best choice for this task.