Is there a way to configure a Quartz inbound endpoint in Mule to have multiple triggers? Say I want an event every day at 9:00, plus one at 1:00 a.m. on the first day of the month.
Here is something that might work for you:
<flow name="MultipleIBEndpoints" doc:name="MultipleIBEndpoints">
<composite-source doc:name="Composite Source">
<quartz:inbound-endpoint jobName="QuartzDaily" doc:name="Quartz Daily"
cronExpression="0 0 9 1/1 * ? *">
<quartz:event-generator-job>
<quartz:payload>dummy</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
<quartz:inbound-endpoint jobName="QuartzMonthly" doc:name="Quartz Monthly"
cronExpression="0 0 1 1 1/1 ? *">
<quartz:event-generator-job>
<quartz:payload>dummy</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
</composite-source>
<logger level="INFO" doc:name="Logger" />
</flow>
The above flow uses the Composite Source scope, which allows you to embed two or more inbound endpoints into a single message source.
In the case of Composite, the embedded building blocks are actually message sources (i.e. inbound endpoints) that listen in parallel on different channels for incoming messages. Whenever any of these receivers accepts a message, the Composite scope passes it to the first message processor in the flow, thus triggering that flow.
You can also meet your requirement with just one Quartz endpoint by using a suitable CRON expression:
CRON Expression: 0 0 1,21 1 * *
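For reference, here is a minimal sketch of that single-endpoint variant, reusing the event-generator job from the flow above (the flow and job names are made up, and you should verify that the expression really matches your intended schedule before relying on it):
<flow name="SingleQuartzEndpoint" doc:name="SingleQuartzEndpoint">
    <!-- One endpoint firing on the combined schedule -->
    <quartz:inbound-endpoint jobName="QuartzCombined" doc:name="Quartz Combined"
        cronExpression="0 0 1,21 1 * *">
        <quartz:event-generator-job>
            <quartz:payload>dummy</quartz:payload>
        </quartz:event-generator-job>
    </quartz:inbound-endpoint>
    <logger level="INFO" doc:name="Logger" />
</flow>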
Please refer to the links below for more tweaks.
Mulesoft quartz reference
Wikipedia reference
List of Cron Expression examples
In that case you need to configure two cron triggers and add them to the scheduler.
Please go through the link below, where I have described the whole thing.
Configure multiple cron trigger
Hope this will help.
Related
I am configuring transaction and response timeouts for an API in Mule 4. Is there any way to set a different timeout for each method (GET, POST, DELETE) of a single API in MuleSoft, since the API has different SLAs for different operations?
The HTTP timeout is set at the connector level, which does not allow you to set timeouts per method.
One way you could try to achieve this is by separating your interface flow from your logic. Have your interface flow call your logic flow via something like the VM connector, where you can set a timeout individually. You can then catch the timeout error and do what you want with it.
Here is an example with a flow for a POST method. All this flow does is offload the logic to another flow, invoking it with vm:publish-consume and awaiting the response. It sets a timeout of 2 seconds (configurable with properties etc.), catches VM:QUEUE_TIMEOUT errors, and sets an 'SLA exceeded' error message:
<flow name="myPOSTInterface">
<vm:publish-consume queueName="postQueue" config-ref="vm" timeout="2" timeoutUnit="SECONDS" />
<logger level="INFO" message="Result from logic flow: #[payload]" />
<error-handler>
<on-error-continue type="VM:QUEUE_TIMEOUT">
<set-payload value="#[{error: 'SLA exceeded'}]" />
</on-error-continue>
</error-handler>
</flow>
<flow name="myPOSTLogic">
<vm:listener config-ref="vm" queueName="postQueue" />
<set-payload value="#[{result: 'Result from my logic'}]" />
</flow>
I am trying to do a test on how to limit the number of concurrent incoming HTTP requests.
So I tried to simulate the following scenario.
I have created a simple flow with
1. An HTTP Listener as the message source
2. A Groovy script that sleeps for 15 seconds to introduce a delay
3. A Set Payload returning "Hello World"
So any single request will have a minimum response time of 15 seconds. Then, in order to limit the number of active threads (i.e. assuming that controls concurrent requests / processing threads), I have set maxThreadsActive to 1. So ideally only one concurrent thread should process the flow.
Now when I fire a simple GET with 1 request using Apache Benchmark, I get a response time of 15 seconds, which is fine. When I increase the number of concurrent requests to 2, I still get a response time of 15 seconds; I was expecting 30 seconds.
I see this behaviour up to 9 concurrent requests. Beyond that, from the 10th request onwards, requests are placed in the waiting queue.
So can an expert please explain how I can limit the number of active threads to 1, and why the number of concurrent requests tops out at 9? (Using JConsole I see 9 SelectorRunner threads, which I assume are linked to this.)
Below is the simple flow.
<http:listener-config name="HTTP_Listener_Configuration" host="localhost" port="8082" doc:name="HTTP Listener Configuration">
<http:worker-threading-profile maxThreadsActive="1" />
</http:listener-config>
<flow name="getting-sartedFlow">
<http:listener config-ref="HTTP_Listener_Configuration" path="/" doc:name="HTTP" />
<scripting:component doc:name="Groovy">
<scripting:script engine="Groovy">
<![CDATA[Thread.currentThread().sleep((long)(15000));]]>
</scripting:script>
</scripting:component>
<set-payload value="Hello World" doc:name="Set Payload" />
</flow>
As answered in the forum, you need to define a poolExhaustedAction for the worker-threading-profile. If you don't, the default one (RUN) will be used, which explains the behavior you're seeing. From what I understand you should use WAIT.
<http:worker-threading-profile maxThreadsActive="1" poolExhaustedAction="WAIT"/>
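Applied to the listener configuration from the question, the change would look like the sketch below (host and port kept as in the question):
<http:listener-config name="HTTP_Listener_Configuration" host="localhost" port="8082" doc:name="HTTP Listener Configuration">
    <!-- WAIT makes extra requests wait for a free pooled thread instead of running on the receiver thread -->
    <http:worker-threading-profile maxThreadsActive="1" poolExhaustedAction="WAIT" />
</http:listener-config>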
As of now, when I poll the database with a query, it fetches multiple records during a specified duration. The problem is that these multiple records get pushed to the queue as a single message. How do I push every record from the result set as an individual message?
As you have explained, the JDBC endpoint fetches a collection of records and sends them to the queue as one single message. There are two options to solve this:
Option 1: Use Mule's For Each message processor, which iterates through the collection object and processes each item as one message.
Option 2: Use Mule's Collection Splitter to iterate through the collection of records (a sketch of this option follows the For Each example below).
The solution for option 1 looks like the flow below. The code for this flow looks like this:
<flow name="JDBC-For-Each-JMS-Flow" >
<jdbc-ee:inbound-endpoint queryKey="SelectAll" mimeType="text/javascript" queryTimeout="500000" pollingFrequency="1000" doc:name="Database">
<jdbc-ee:query key="SelectAll" value="select * from users"/>
</jdbc-ee:inbound-endpoint>
<foreach doc:name="For Each" collection="#[payload]" >
<jms:outbound-endpoint doc:name="JMS"/>
</foreach>
<catch-exception-strategy doc:name="Catch Exception Strategy"/>
</flow>
Note: This is a sample flow.
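For option 2, a sketch of the same flow using a Collection Splitter instead of For Each could look like this (the JMS queue name here is just a placeholder):
<flow name="JDBC-Splitter-JMS-Flow">
    <jdbc-ee:inbound-endpoint queryKey="SelectAll" mimeType="text/javascript" queryTimeout="500000" pollingFrequency="1000" doc:name="Database">
        <jdbc-ee:query key="SelectAll" value="select * from users"/>
    </jdbc-ee:inbound-endpoint>
    <!-- Splits the collection so that each record continues through the flow as its own message -->
    <collection-splitter doc:name="Collection Splitter"/>
    <jms:outbound-endpoint queue="records" doc:name="JMS"/>
</flow>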
Hope this helps.
In Mule 3.3.1, during async processing, when any of my external services is down, I would like to place the message on a queue (retryQueue) with a particular "next retry" timestamp. The flow that processes messages from this retryQueue should select messages based on the "next retry" time: if the "next retry" time is past the current time, the message is selected for processing. This is similar to what is described in the following link.
Retry JMS queue implementation to deliver failed messages after certain interval of time
Could you please provide sample code to achieve this?
I tried:
<on-redelivery-attempts-exceeded>
<message-properties-transformer scope="outbound">
<add-message-property key="putOnQueueTime" value="#[function:datestamp:yyyy-MM-dd hh:mm:ssZ]" />
</message-properties-transformer>
<jms:outbound-endpoint ref="retryQueue"/>
</on-redelivery-attempts-exceeded>
and on the receiving flow
<jms:inbound-endpoint ref="retryQueue">
<!-- I have no idea how to do the selector....
I tried....<jms:selector expression="#[header:INBOUND:putOnQueueTime > ((function:now) - 30)]"/>, but obviously it doesn't work. Gives me an invalid message selector. -->
</jms:inbound-endpoint>
Another note: If I set the outbound property using
<add-message-property key="putOnQueueTime" value="#[function:now]"/>,
it doesn't get carried over as part of the header. That's why I changed it to:
<add-message-property key="putOnQueueTime" value="#[function:datestamp:yyyy-MM-dd hh:mm:ssZ]" />
The expression in:
<jms:selector expression="#[header:INBOUND:putOnQueueTime > ((function:now) - 30)]"/>
should evaluate to a valid JMS selector, which is not the case here. Try with:
<jms:selector expression="putOnQueueTime > #[XXX]"/>
replacing XXX with an expression that creates the time you want.
We were trying to achieve this in one of the projects I'm working on, and tried what is suggested in the other answer here, with various variations, and it did not work. The problem is that jms:selector doesn't support MEL, since it relies on ActiveMQ classes.
We registered a support ticket with MuleSoft, and their reply was that this is not supported.
What we ended up doing was this:
Create a simple Component, which does a Thread.sleep(numberOfMillis), where the number of millis is defined in a property.
In the flow that was supposed to delay processing, we added this component as the first step after reading the message from the inbound endpoint.
Not the best solution ever made, but it works.
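For illustration, here is a sketch of that workaround using a Groovy scripting component for the sleep (the flow name, the flow-ref target, and the fixed 30000 ms value are placeholders; in our case the delay was read from a property):
<flow name="retryQueueFlow">
    <jms:inbound-endpoint ref="retryQueue"/>
    <!-- Delay processing of the redelivered message before retrying the external service -->
    <scripting:component doc:name="Delay">
        <scripting:script engine="Groovy">
            <![CDATA[Thread.currentThread().sleep(30000L); return payload;]]>
        </scripting:script>
    </scripting:component>
    <flow-ref name="callExternalService"/>
</flow>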
I have a Mule application that generates individual XML files and places them in a folder on the basis of a query. Now I want to create aggregate reports that will consist of data from various individual reports. Since the services run at random times, I want to make sure that I delay the generation of the aggregate report so that all the individual files exist before the service for the aggregate report is called. Is it possible to set a timer on a service?
Seems like a candidate for aggregation:
http://blogs.mulesoft.org/asynchronous-message-processing-with-mule/
You can use Quartz for scheduling the execution of a flow in Mule. Use a cron expression to customize the schedule for your needs. Here is an example of a flow with a Quartz scheduler:
<flow name="resendFailedMessages">
<description>
"*/15 07-18 * * ?" run every 15 minutes from 7 am to 6 pm every day -->
</description>
<quartz:inbound-endpoint jobName="hostRedeliveryJob" cronExpression="*/15 07-18 * * ?">
<quartz:endpoint-polling-job>
<quartz:job-endpoint ref="redeliverToHost" />
</quartz:endpoint-polling-job>
</quartz:inbound-endpoint>
<set-variable variableName="hostXML" value="#[payload]" />
<logger message="QUARTZ found message for host" level="INFO" />
<flow-ref name="webServiceCall" />
<flow-ref name="inspectWSResponse" />
<exception-strategy ref="retryExceptionStrategy" />
</flow>
Also check this out - https://github.com/ddossot/mule-in-action-2e/blob/master/chapter07/src/main/app/quartz-config.xml