Non-blocking flow with MuleSoft - mule

I have a mule flow that has to work the following way.
HTTP listener listens to incoming calls and immediately responds with a job id.
The incoming message is queued to a worker, which works on it for a while and then returns the result to the sender.
I tried using a non-blocking flow, but it didn't work. How is such a thing architected in Mule? Any leads on this would be great.

Sounds like you could use an async scope for that second part. This would allow the HTTP listener to trigger that part and still respond immediately.

Flow one: HTTP listener ----> Async VM queue ----> Set Job ID ----> Send response
Flow two: VM inbound ----> Process message ----> HTTP requester
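Not actual Mule configuration, but here is a plain-Java sketch of what those two flows do: flow one hands the message to a queue and answers with a job id straight away, and flow two drains the queue, does the work, and calls the sender back. The queue, the job-id scheme and the "work" are all made up for illustration.

import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Plain-Java sketch of the two-flow pattern above (not Mule config).
public class AsyncJobSketch {

    record Job(String id, String payload) {}

    private static final BlockingQueue<Job> vmQueue = new LinkedBlockingQueue<>();

    // "Flow one": what the HTTP listener side does - hand off and answer immediately.
    static String accept(String payload) throws InterruptedException {
        String jobId = UUID.randomUUID().toString();
        vmQueue.put(new Job(jobId, payload));   // async hand-off, like the VM queue
        return jobId;                           // returned to the caller right away
    }

    // "Flow two": the worker that drains the queue and notifies the sender.
    static void worker() {
        try {
            while (true) {
                Job job = vmQueue.take();
                String result = job.payload().toUpperCase();   // placeholder "work"
                // A Mule flow would use an HTTP requester here to call the sender back.
                System.out.println("job " + job.id() + " done: " + result);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();                // stop quietly on shutdown
        }
    }

    public static void main(String[] args) throws Exception {
        Thread flowTwo = new Thread(AsyncJobSketch::worker);
        flowTwo.setDaemon(true);
        flowTwo.start();

        System.out.println("accepted job " + accept("hello"));
        Thread.sleep(500);                      // give the worker time to finish
    }
}

In Mule the hand-off is the async scope / VM queue and the call-back is the HTTP requester, but the shape of the solution is the same.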

Related

ActiveMQ CMS: Can messages be lost between creating a consumer and setting a listener?

Setting up a CMS consumer with a listener involves two separate calls: first, acquiring a consumer:
cms::MessageConsumer* cms::Session::createConsumer( const cms::Destination* );
and then, setting a listener on the consumer:
void cms::MessageConsumer::setMessageListener( cms::MessageListener* );
Could messages be lost if the implementation subscribes to the destination (and receives messages from the broker/router) before the listener is activated? Or are such messages queued internally and delivered to the listener upon activation?
Why isn't there an API call to create the consumer with a listener as a construction argument? (Is it because the JMS spec doesn't have it?)
(Addendum: this is probably a flaw in the API itself. A more logical order would be to instantiate a consumer from a session, and have a cms::Consumer::subscribe( cms::Destination*, cms::MessageListener* ) method in the API.)
I don't think the API is flawed necessarily. Obviously it could have been designed a different way, but I believe the solution to your alleged problem comes from the start method on the Connection object (inherited via Startable). The documentation for Connection states:
A CMS client typically creates a connection, one or more sessions, and a number of message producers and consumers. When a connection is created, it is in stopped mode. That means that no messages are being delivered.
It is typical to leave the connection in stopped mode until setup is complete (that is, until all message consumers have been created). At that point, the client calls the connection's start method, and messages begin arriving at the connection's consumers. This setup convention minimizes any client confusion that may result from asynchronous message delivery while the client is still in the process of setting itself up.
A connection can be started immediately, and the setup can be done afterwards. Clients that do this must be prepared to handle asynchronous message delivery while they are still in the process of setting up.
This is the same pattern that JMS follows.
In any case I don't think there's any risk of message loss regardless of when you invoke start(). If the consumer is using an auto-acknowledge mode then messages should only be automatically acknowledged once they are delivered synchronously via one of the receive methods or asynchronously through the listener's onMessage. To do otherwise would be a bug in my estimation. I've worked with JMS for the last 10 years on various implementations and I've never seen any kind of condition where messages were lost related to this.
If you want to add consumers after you've already invoked start() you could certainly call stop() first, but I don't see any problem with simply adding them on the fly.
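To make that convention concrete, here is a minimal sketch using the plain JMS API, which CMS mirrors; the connection factory and queue name are placeholders, not anything from the question.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;

// Sketch of the "set everything up, then start()" convention.
public class StartAfterSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = lookupFactory();            // provider-specific, see below
        Connection connection = factory.createConnection();     // created in "stopped" mode

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("SOME.QUEUE"));

        // Attach the listener while the connection is still stopped: no message
        // can be delivered yet, so nothing slips past the listener.
        consumer.setMessageListener(message -> System.out.println("got: " + message));

        // Only now does asynchronous delivery begin.
        connection.start();
    }

    private static ConnectionFactory lookupFactory() {
        // Placeholder: obtain the factory for your broker (JNDI lookup,
        // new ActiveMQConnectionFactory(...), etc.).
        throw new UnsupportedOperationException("provide a ConnectionFactory");
    }
}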

Is there a way to wait for the reception of a RabbitMQ message?

I have a use case where I need my controller action to wait for the reception of a specific RabbitMQ message so I can return the result to the client; this message would come from a separate worker performing a certain task.
My api project and the worker project are separated and rabbitmq bus is the only intermediary between them.
EDIT: This is the current scenario:
The client sends a request to the web API to ask for, let's call it, 'DATA'
The web API publishes Message-A through RabbitMQ
A separate service project handles the published Message-A, does some work, and publishes a new Message-B that contains the result of that work, which we called 'DATA'
Here is the problem: my web API controller has to return the results contained in Message-B, so the controller action should wait for that message before returning to the client
You need to use a TaskCompletionSource<T>.
You need to subscribe to the reply messages and, if it's the reply you're waiting for, set the result of the task completion source.
Then await the task of the task completion source.
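The answer above describes the .NET approach (TaskCompletionSource<T>). The same idea in Java, using CompletableFuture and the RabbitMQ Java client, could look roughly like the sketch below; the queue names ("work", "replies") and the correlation-id convention are assumptions for the sketch, not part of the original question.

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;

import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Controller-side sketch: publish Message-A, then block the action on a future
// that is completed when the matching Message-B arrives.
public class WaitForReply {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Subscribe once (at startup) to the reply queue; complete the matching
    // future whenever a Message-B arrives.
    public void listenForReplies(Channel channel) throws Exception {
        channel.basicConsume("replies", true,
                (consumerTag, delivery) -> {
                    String id = delivery.getProperties().getCorrelationId();
                    CompletableFuture<String> waiter = pending.remove(id);
                    if (waiter != null) {
                        waiter.complete(new String(delivery.getBody(), StandardCharsets.UTF_8));
                    }
                },
                consumerTag -> { });
    }

    // Called from the controller action: publish Message-A, then await Message-B.
    public String requestData(Channel channel, String correlationId) throws Exception {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);

        channel.basicPublish("", "work",
                new AMQP.BasicProperties.Builder().correlationId(correlationId).build(),
                "please compute DATA".getBytes(StandardCharsets.UTF_8));

        return future.get(30, TimeUnit.SECONDS);   // the "await" on the completion source
    }
}

The worker that handles Message-A has to copy the correlation id onto Message-B, otherwise the future is never completed and the controller times out.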

Audit Mule inbound/outbound messages

We are trying to audit all incoming/outgoing messages and their header information in our Mule flow.
For this we tried using 'wire-tap', which we didn't find very useful; also, it works on Mule 3.6.1 but gives an error in 3.7.
Any idea/suggestion for auditing?
OK, let me add some more details:
What we are trying to do is this: whatever message comes in or flows through the flow components, we want to copy it into some sub-flow (say, into a queue) without interrupting the main flow, so that we can inspect the message.
You can make this work in several ways:
1) Wire-tap is one choice. You can route your messages asynchronously to a sub-flow, and the sub-flow will do the auditing work. But I don't know why you didn't find wire-tap useful; can you explain in more detail?
2) All the messages you receive in the main flow can be posted to a JMS queue, and another flow will read from there and do the auditing work. With this approach, multiple projects can reuse the same piece of code and post to the JMS queue for auditing (a small sketch of this idea follows).
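Outside of Mule, option 2 boils down to something like the plain-JMS sketch below: the main path drops a copy of each message (payload plus headers) onto an audit queue and carries on, and a separate consumer reads that queue and writes the audit trail. In a real Mule flow you would use a wire-tap or a JMS outbound endpoint instead; the queue name here is a placeholder.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

// Sketch of the queue-based auditing idea: copy the message to an audit queue
// without blocking or altering the main flow.
public class AuditTap {
    private final Session session;
    private final MessageProducer auditProducer;

    public AuditTap(Connection connection) throws Exception {
        this.session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        this.auditProducer = session.createProducer(session.createQueue("audit.queue"));
    }

    // Call this from the main path; it only enqueues and never waits for the auditor.
    public void audit(String payload, java.util.Map<String, String> headers) throws Exception {
        TextMessage copy = session.createTextMessage(payload);
        for (var header : headers.entrySet()) {
            copy.setStringProperty(header.getKey(), header.getValue());  // keep header info
        }
        auditProducer.send(copy);
    }
}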
This can be done in many different ways, and you kind of mentioned them in your question, such as the logger component and interceptors.
All the headers are available as message properties, so if you log the entire message they are shown. Simply put a logger component after the inbound endpoint and one before the outbound endpoint, and this is easily done.
If you need some transformation of the log entries, you could always put that in a wire-tap so you don't interfere with the functionality of your flow.

How do I handle push messages from IronMQ when my endpoint is an IronWorker?

The documentation for IronMQ push queues describes how endpoints should handle/respond to push messages. However, I get the impression this is for normal webhooks and I can't find any documentation or examples of what to do when the endpoint for a push queue is an IronWorker.
Does the IronWorker framework take care of responding to the IronMQ service when it starts a new IronWorker task for the message pushed onto the queue, or does my IronWorker code need to handle the response? If I need to handle it in my code, are there any variables automatically provided to me that represent the webhook request and/or response?
As I mentioned above, I've looked for example code, but all I've found are IronWorker webhook examples that receive POSTs from something like GitHub, not from IronMQ. If there are examples out there for what I'm trying to do, please point me to them!
There's actually a special subscriber format just for IronWorker, as specified in the Push Queue documentation here: http://dev.iron.io/mq/reference/push_queues/#subscribers. E.g.:
ironworker:///my_worker
That will kick off a worker task whenever something hits your queue. Or you can use the worker's webhook URL. And you don't need to deal with the response; as @thousandsofthem said, IronWorker will return a 200, which acknowledges the pushed message.
The IronWorker API will respond immediately to a POST request with an "HTTP 200 OK" status and queue a task after that; it's too late to respond with anything from the running task.
You can find the exact webhook value on the "Code" page (https://hud.iron.io).
Screenshot: http://i.imgur.com/aza7g0h.png
Just use it "as is"

Mule ESB: How to achieve a typical retry mechanism in Mule ESB

I need to implement retry logic. The inbound endpoint pushes messages to a REST (outbound) endpoint. If the REST service is unavailable, I need to retry once and then put the message in a queue. But subsequent messages should not retry at all; they have to go directly into the queue until the REST service is available again.
Once the service is available, I need to push all the messages from the queue to the REST service (in order) via a batch job.
Questions:
How do I know the service is unavailable for my second message? If I use Until Successful, every message retries before being put in the queue; the problem is that the second message shouldn't retry.
For the batch, I thought of using a poll, but how do I tell the poll that the service has become available so it can begin the batch process? (Because a poll is more about configuring timings for when to run the batch.)
The other tricky thing that confuses me is that ordering has to be preserved: once the service is available, the queued messages (i.e., the batch) have to go to the REST service first, and only then the real-time ones. I doubt whether this is feasible.
A quick response on how to implement this logic would be very helpful.
Using Mule: 3.5.1
I would try something like the below, using flow controls (a plain-Java sketch of this logic follows the list):
1) Process a message; if there is an exception or a bad response code, set a variable/property like serviceAvailable=false.
2) Subsequent message processing will first check the property serviceAvailable before processing the messages. If the property is false, en-queue the messages to a DB table with status=new/unprocessed.
3) Create a flow/scheduler to process the messages from the DB sequentially; it will not check the property serviceAvailable, it just calls the REST service.
4) If the service throws an exception, it will not store the messages in the DB again; but if it processes successfully, change the property serviceAvailable=true and de-queue the messages or change their status. Add another property, like moreDBMsg, and set it to true while there are more messages in the DB table.
5) New messages should not be processed/consumed until moreDBMsg=false.
6) Once moreDBMsg=false and serviceAvailable=true, start processing the messages from the queue again.
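Here is the gating logic from the list above, sketched in plain Java rather than Mule components (flow variables, a choice router and a DB table); the class, the in-memory queue and the callRest stub are stand-ins for illustration only.

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the serviceAvailable / moreDBMsg gating described above.
public class RetryGate {
    private final AtomicBoolean serviceAvailable = new AtomicBoolean(true);
    private final ConcurrentLinkedQueue<String> dbQueue = new ConcurrentLinkedQueue<>();

    // Real-time path (steps 1, 2 and 5).
    public void onNewMessage(String msg) {
        // If the service is known to be down, or older messages are still queued
        // (ordering!), don't call REST at all - just en-queue the message.
        if (!serviceAvailable.get() || !dbQueue.isEmpty()) {
            dbQueue.add(msg);
            return;
        }
        if (!callRest(msg)) {               // first failure flips the flag
            serviceAvailable.set(false);
            dbQueue.add(msg);
        }
    }

    // Scheduler path (steps 3, 4 and 6): drain the queue in order.
    public void drainQueue() {
        String msg;
        while ((msg = dbQueue.peek()) != null) {
            if (!callRest(msg)) {
                return;                     // still down: keep the message, try again later
            }
            dbQueue.poll();                 // de-queue / mark processed
        }
        serviceAvailable.set(true);         // moreDBMsg is now false and the service is back
    }

    private boolean callRest(String msg) {
        // Placeholder for the real HTTP call; return false on exception or bad status.
        return true;
    }
}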
For the timeout, I would still look at the response code and catch time-outs to determine whether the call was successful or requires a retry. In practice you normally do multithreading anyway, so you have multiple calls in parallel, or one call simply starts before the other ends.
That is quite normal.
But you can simply retry the queued calls that time out, and after x time-outs you "skip" or defer the retry.
All of this can be done using actual Mule flow components, for example:
MEL http://www.mulesoft.org/documentation/display/current/Mule+Expression+Language+Reference
Or flow controls: http://www.mulesoft.org/documentation/display/current/Choice+Flow+Control+Reference
Or, for example, you reference a Spring bean and do it in native Java code.
One possibility for the queue would be to persist it in a database. Mule has a database connector with a "poll" feature, see: http://www.mulesoft.org/documentation/display/current/JDBC+Transport+Reference#JDBCTransportReference-PollingTransport
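For the "retry on time-out, then skip/defer after x attempts" part, a plain-Java sketch (using java.net.http rather than a Mule HTTP requester) could look like this; the URL, the timeout and the attempt limit are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Retry a call that times out; after MAX_ATTEMPTS time-outs, give up so the
// caller can defer the message (e.g. persist it to the database queue).
public class TimeoutRetry {
    private static final int MAX_ATTEMPTS = 3;

    static boolean callWithRetry(HttpClient client, String body) {
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://example.com/rest"))
                .timeout(Duration.ofSeconds(5))
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() < 500) {
                    return true;                    // success, or a non-retryable client error
                }
            } catch (java.net.http.HttpTimeoutException timeout) {
                // fall through and retry
            } catch (Exception e) {
                return false;                       // non-timeout failure: defer immediately
            }
        }
        return false;                               // after x time-outs, "skip"/defer the message
    }
}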