How to set a max number of parallel VM flows in Mule

I have a request-response flow that starts with a VM endpoint. Is there a way to restrict the number of requests that can be processed in parallel by the flow? I'm on Mule 3.7. Thanks.
P.S. I've tried using maxThreadsActive on the VM connector, but the flow still runs in the "source" thread. This is how the VM connector is defined:
<vm:connector name="myvm" validateConnections="true" doc:name="VM">
<receiver-threading-profile maxThreadsActive="1"/>
<vm:queue-profile>
<default-in-memory-queue-store/>
</vm:queue-profile>
</vm:connector>
and then in the flow:
<vm:inbound-endpoint exchange-pattern="request-response" path="myqueue" connector-ref="myvm" doc:name="getevent">
    <vm:transaction action="NONE"/>
</vm:inbound-endpoint>
This is how it's called from the "source" flow:
<vm:outbound-endpoint exchange-pattern="request-response" path="myqueue" connector-ref="myvm" doc:name="VM">
    <vm:transaction action="NONE"/>
</vm:outbound-endpoint>

You can set the number of receiver threads on the connector (or on your inbound endpoint) to one:
<vm:connector name="VM" validateConnections="true">
<receiver-threading-profile maxThreadsActive="1"/>
</vm:connector>
<flow name="testFlow1">
<vm:inbound-endpoint path="in" connector-ref="VM"/>
<echo-component/>
</flow>

You can control this with threading profiles. For example:
<configuration>
    <default-threading-profile maxBufferSize="100" maxThreadsActive="20" maxThreadsIdle="10" threadTTL="60000" poolExhaustedAction="RUN"/>
</configuration>
You can read more here: https://docs.mulesoft.com/mule-user-guide/v/3.7/tuning-performance
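Note that with exchange-pattern="request-response" a VM endpoint is processed synchronously in the caller's thread, which is why the receiver threading profile appears to be ignored. If you can switch the endpoint to one-way, a flow-level processing strategy (also covered in the tuning guide above) caps concurrency. A minimal sketch, with illustrative names:
<queued-asynchronous-processing-strategy name="twoThreadsOnly" maxThreads="2"/>
<flow name="limitedFlow" processingStrategy="twoThreadsOnly">
    <vm:inbound-endpoint exchange-pattern="one-way" path="myqueue" connector-ref="myvm"/>
    <!-- at most two events are processed by this flow concurrently -->
    <logger message="processing #[payload]" level="INFO"/>
</flow>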

Related

Mule file inbound connector with poll scope

I'm trying to use the Mule inbound file connector with a poll scope and got an error saying the endpoint couldn't be started. If I remove the poll scope and use the file connector with its default polling, it works fine without any file path changes.
I was wondering why the poll scope gives an error. If a file inbound connector is not allowed to be wrapped in a poll scope, why does Anypoint Studio show the poll scope in the "Wrap in" option?
I found a similar question, but it didn't have a detailed explanation:
Mule won't allow POLL message processor to read file using file Inbound?
Thanks in advance for your response.
Use mule-module-requester (https://github.com/mulesoft/mule-module-requester) together with the poll scheduler.
Relevant post: http://blogs.mulesoft.com/dev/mule-dev/introducing-the-mule-requester-module/
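A minimal sketch of that combination (hedged: the endpoint name, path, and frequency are illustrative, and the mulerequester namespace must be declared in your config):
<file:endpoint name="filesIn" path="/data/in" doc:name="File"/>
<flow name="onDemandFilePoll">
    <poll doc:name="Poll">
        <fixed-frequency-scheduler frequency="10" timeUnit="SECONDS"/>
        <!-- the requester reads one file from the endpoint on each tick -->
        <mulerequester:request resource="filesIn" doc:name="Mule Requester"/>
    </poll>
    <logger message="Fetched payload: #[payload]" level="INFO" doc:name="Logger"/>
</flow>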
Another way is to set the FTP flow to initialState="stopped" and let the poll scheduler start the flow. After the FTP processing, stop the flow again.
See the sample code:
<ftp:connector name="FTP" pollingFrequency="1000"
validateConnections="true" moveToDirectory="/work/ftp/processed"
doc:name="FTP" />
<flow name="scheduleStartFTPFlow">
<poll doc:name="Poll">
<fixed-frequency-scheduler frequency="1"
timeUnit="MINUTES" />
<expression-component doc:name="START FTP FLOW"><![CDATA[if(app.registry.processFTPFlow.isStopped()){
app.registry.processFTPFlow.start();
}]]></expression-component>
</poll>
<logger message="Poll Logging: #[payload]" level="INFO"
doc:name="Logger" />
</flow>
<flow name="processFTPFlow" initialState="stopped">
<ftp:inbound-endpoint host="localhost" port="21"
path="/data/ftp" user="Sanjeet" password="sanjeet123" responseTimeout="10000"
doc:name="FTP" connector-ref="FTP" />
<logger message="Logging FTP #[payload]" level="INFO" doc:name="Logger" />
<expression-component doc:name="STOP FTP FLOW"><![CDATA[app.registry.processFTPFlow.stop();]]></expression-component>
</flow>
Please provide an SSCCE (Short, Self-Contained, Correct Example).
Based on your question, you do not need poll at all. The file connector already has a built-in feature to check for files periodically. Here is an example which polls for files every 123 milliseconds (0.123 seconds):
<file:inbound-endpoint path="/tmp" responseTimeout="10000" doc:name="File" pollingFrequency="123"/>
My suggestion is to use the quartz connector alongside the file connector and set the interval on the quartz connector. Alternatively, use the file connector itself with a polling frequency, so there is no need to wrap the file endpoint in a poll scope.
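A hedged sketch of the quartz approach (the endpoint name, path, and interval are illustrative):
<file:endpoint name="inputFiles" path="/data/in" doc:name="File"/>
<flow name="quartzFilePickup">
    <!-- quartz polls the referenced file endpoint every 60 seconds -->
    <quartz:inbound-endpoint jobName="filePickupJob" repeatInterval="60000" doc:name="Quartz">
        <quartz:endpoint-polling-job>
            <quartz:job-endpoint ref="inputFiles"/>
        </quartz:endpoint-polling-job>
    </quartz:inbound-endpoint>
    <logger message="Picked up: #[payload]" level="INFO" doc:name="Logger"/>
</flow>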
You can also create a file endpoint in the global elements section and then use Mule Requester to invoke that endpoint inside a poll scope:
<file:connector name="File1" autoDelete="true" streaming="true" validateConnections="true" doc:name="File"/>
<file:endpoint connector-ref="File1" name="File" responseTimeout="10000" doc:name="File" path="/"/>
<flow name="pocforloggingFlow1">
<poll doc:name="Poll">
<mulerequester:request resource="File" doc:name="Mule Requester"/>
</poll>
</flow>

How to acknowledge an ActiveMQ message in Mule using client acknowledge?

Below is my Mule configuration. I want to acknowledge using client acknowledge; how can I do it?
<mule>
    <jms:activemq-connector name="Active_MQ" brokerURL="tcp://localhost:61616" validateConnections="true" doc:name="Active MQ" maxRedelivery="2" persistentDelivery="true"/>
    <flow name="activemqFlow">
        <file:inbound-endpoint path="D:\mule\input" responseTimeout="10000" doc:name="File"/>
        <object-to-string-transformer doc:name="Object to String"/>
        <set-property propertyName="fileName" value="#[message.inboundProperties.originalFilename]" doc:name="Property"/>
        <jms:outbound-endpoint queue="logfilequeue" connector-ref="Active_MQ" doc:name="JMS">
            <jms:transaction action="NONE"/>
        </jms:outbound-endpoint>
    </flow>
    <flow name="JmsInboundFlow">
        <jms:inbound-endpoint queue="logfilequeue" connector-ref="Active_MQ" doc:name="JMS">
            <jms:client-ack-transaction action="ALWAYS_BEGIN"/>
        </jms:inbound-endpoint>
        <logger message="#[payload.toString()]" level="INFO" doc:name="Logger"/>
        <file:outbound-endpoint path="D:\mule\output" responseTimeout="10000" doc:name="File" outputPattern="#[message.inboundProperties.fileName]"/>
    </flow>
</mule>
Note: be REALLY sure you want to use CLIENT_ACKNOWLEDGE; it doesn't work the way most people think. It acks the current message AND all previous messages within the session. If you have parallel/threaded consumers, this setting will inadvertently ack messages that aren't ready to be acked yet. ActiveMQ has an INDIVIDUAL_ACKNOWLEDGE mode which acks just the single message.
There are feature requests against the JMS 2.0 spec to make this additional acknowledge mode part of the standard.
Try adding acknowledgementMode="CLIENT_ACKNOWLEDGE" to your JMS connector.
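Applied to the connector from the question, that would look something like this (only the acknowledgementMode attribute is new):
<jms:activemq-connector name="Active_MQ" brokerURL="tcp://localhost:61616" acknowledgementMode="CLIENT_ACKNOWLEDGE" validateConnections="true" doc:name="Active MQ" maxRedelivery="2" persistentDelivery="true"/>
With client acknowledge, acks are driven by the consumer rather than happening automatically on receipt; see the caveat above about acking all previous messages in the session.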
You can refer to this question for more details:
Mule jms with CLIENT_ACKNOWLEDGE mode? Message automatically consumed even though I didn't acknowledge it

Mule ESB: How to achieve unlimited retry until the consumed service is up and running

I'm not sure how to apply logic to handle this issue.
I have a simple flow where I'm consuming a service in the middle of the flow. I have tried until-successful, but it requires the max retries field (and I don't want to limit my retries to any number). In my scenario I don't know when the consumed service will be up, so I need to retry until the service is up and running (retries should never be exhausted). Could anyone suggest how to handle this scenario?
<flow name="newFlow1" doc:name="newFlow1">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" path="ttt" doc:name="HTTP"/>
<byte-array-to-string-transformer doc:name="Byte Array to String"/>
<logger message="**********test****#[payload]" level="INFO" doc:name="Logger"/>
<until-successful doc:name="Until Successful">
<http:outbound-endpoint exchange-pattern="request-response" host="localhost" port="8089" path="new" method="POST" doc:name="HTTP"/>
</until-successful>
<set-payload value="'Return Response'" doc:name="Set Payload"/>
</flow>
I also tried setting max retries in until-successful to '-1' (to make retries unlimited), but it does not accept a negative value.
I also tried the HTTP connector's retry connection strategy (but that seems to work only for JMS or JDBC).
Could anyone please suggest how to handle this issue?
Thanks in advance.
Edited:
<http:connector name="HttpConnector" doc:name="HTTP-HTTPS">
<reconnect-forever />
</http:connector>
<flow name="new1Flow1" doc:name="new1Flow1">
<http:inbound-endpoint exchange-pattern="request-response" doc:name="HTTP" path="ttt" responseTimeout="30000" host="localhost" port="8081" />
<logger message="***entered***" level="INFO" doc:name="Logger"/>
<http:outbound-endpoint exchange-pattern="request-response" host="localhost" port="8089" path="new" connector-ref="HttpConnector" method="POST" doc:name="HTTP"/>
<logger message="**Http StatusCode***#[message.inboundProperties['http.status']]" level="INFO" doc:name="Logger"/>
</flow>
It is not retrying. Since the service is down, I see the following error message in the console, but only once (we should get the error message repeatedly until the service is up):
Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=http://localhost:8089/new, connector=HttpConnector
Please suggest.
You can first define an HTTP connector which has a reconnect-forever strategy:
<http:connector name="HttpConnector" >
<reconnect-forever frequency="2000" />
</http:connector>
Then you can have your inbound or outbound endpoint use that connector reference, like this:
<http:inbound-endpoint connector-ref="HttpConnector" .../>
or
<http:outbound-endpoint connector-ref="HttpConnector" .../>
Hope this helps!
Good luck!

Error when creating multiple file transfer flows in Mule

I need three different scheduled jobs for picking up and transferring files to an SFTP server. Using examples, I was able to create a single working flow. However, when I replicate that flow and adjust the configuration, I get an error complaining about two connectors matching the protocol "file".
It asks me to specify which connector to use; however, I have already specified which endpoint should be used in each flow.
Does anyone have any ideas about what I'm doing wrong, or what Mule is looking for?
Flow definitions:
<file:endpoint name="partsDataConnector" path="${partsDataOriginFilePath}" pollingFrequency="5000" doc:name="partsDataFile"/>
<flow name="partsDataTransfer">
<quartz:inbound-endpoint jobName="partsDataTransfer"
repeatInterval="10000" responseTimeout="10000" doc:name="Quartz">
<quartz:endpoint-polling-job>
<quartz:job-endpoint ref="partsDataConnector"/>
</quartz:endpoint-polling-job>
</quartz:inbound-endpoint>
<sftp:outbound-endpoint host="${destinationFileServerIp}" port="${destinationFileServerPort}"
path="${partsDataDestinationPath}" tempDir="${partsDataDestinationTempDir}"
user="${destinationFileServerUser}" password="${destinationFileServerPassword}"
outputPattern="#[header:originalFilename]" />
</flow>
<file:endpoint name="imageDataConnector" path="${imageDataOriginFilePath}" pollingFrequency="5000" doc:name="partsDataFile"/>
<flow name="imageDataTransfer">
<quartz:inbound-endpoint jobName="imageDataTransfer"
repeatInterval="10000" responseTimeout="10000" doc:name="Quartz">
<quartz:endpoint-polling-job>
<quartz:job-endpoint ref="imageDataConnector"/>
</quartz:endpoint-polling-job>
</quartz:inbound-endpoint>
<sftp:outbound-endpoint host="${destinationFileServerIp}" port="${destinationFileServerPort}"
path="${imageDataDestinationPath}" tempDir="${imageDataDestinationTempDir}"
user="${destinationFileServerUser}" password="${destinationFileServerPassword}"
outputPattern="#[header:originalFilename]" />
</flow>
<file:endpoint name="customerDataConnector" path="${customerDataOriginFilePath}" pollingFrequency="5000" doc:name="partsDataFile"/>
<flow name="customerDataTransfer">
<quartz:inbound-endpoint jobName="customerDataTransfer"
repeatInterval="10000" responseTimeout="10000" doc:name="Quartz">
<quartz:endpoint-polling-job>
<quartz:job-endpoint ref="customerDataConnector" />
</quartz:endpoint-polling-job>
</quartz:inbound-endpoint>
<sftp:outbound-endpoint host="${destinationFileServerIp}" port="${destinationFileServerPort}"
path="${customerDataDestinationPath}" tempDir="${customerDataDestinationTempDir}"
user="${destinationFileServerUser}" password="${destinationFileServerPassword}"
outputPattern="#[header:originalFilename]" />
</flow>
Stacktrace:
2014-04-09 06:46:44,924 INFO [org.quartz.core.JobRunShell] Job mule.quartz://customerDataTransfer threw a JobExecutionException:
org.quartz.JobExecutionException: org.mule.transport.service.TransportFactoryException: There are at least 2 connectors matching protocol "file", so the connector to use must be specified on the endpoint using the 'connector' property/attribute. Connectors in your configuration that support "file" are: connector.file.mule.default, connector.file.mule.default.1, (java.lang.IllegalStateException) [See nested exception: org.mule.transport.service.TransportFactoryException: There are at least 2 connectors matching protocol "file", so the connector to use must be specified on the endpoint using the 'connector' property/attribute. Connectors in your configuration that support "file" are: connector.file.mule.default, connector.file.mule.default.1, (java.lang.IllegalStateException)]
at org.mule.transport.quartz.jobs.EndpointPollingJob.doExecute(EndpointPollingJob.java:176)
at org.mule.transport.quartz.jobs.AbstractJob.execute(AbstractJob.java:36)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:534)
The error message is asking you to declare explicit file:connector components; right now you only have file:endpoint components.
If you haven't defined a file connector, try declaring one:
<file:connector name="myFileConnector" ></file:connector>
Then add a connector-ref attribute on each of the three file endpoints, as below:
<file:endpoint name="imageDataConnector" connector-ref="myFileConnector" path="${imageDataOriginFilePath}" pollingFrequency="5000" doc:name="partsDataFile"/>
Hope this helps.

How to rectify the issue with the below flow?

I have a Mule flow as below:
<flow name="flow1" doc:name="f1">
<file:inbound-endpoint path="C:\input" responseTimeout="10000"
doc:name="File" />
</flow>
<flow name="flow2" doc:name="f2">
<http:inbound-endpoint address="http://localhost:8080"
doc:name="HTTP" exchange-pattern="request-response" />
<flow-ref name="flow1" doc:name="Flow Reference" />
<file:outbound-endpoint path="C:\outputfile"
responseTimeout="10000" doc:name="File" />
</flow>
I am trying to move/upload multiple files from a source to a destination (which can be anything, e.g. FTP or file outbound) using this flow.
The reason for doing it this way is that I want to invoke the job from the CLI (command line interface) using curl.
But it is not working...
Edited
I need to pick up some files (multiple files) from a particular folder on my hard drive, and then move them to some outbound process, which can be an FTP site or another hard drive location.
But this flow needs to be invoked from the CLI.
Edited (Based on David's answer)
I now have the flow below:
<flow name="filePickupFlow" doc:name="flow1" initialState="stopped">
<file:inbound-endpoint path="C:\Input" responseTimeout="10000" doc:name="File"/>
<logger message="#[message.payloadAs(java.lang.String)]" level="ERROR" />
</flow>
<flow name="flow2" doc:name="flow2">
<http:inbound-endpoint address="http://localhost:8080/file-pickup/start" doc:name="HTTP" exchange-pattern="request-response"/>
<expression-component>
app.registry.filePickupFlow.start();
</expression-component>
<file:outbound-endpoint path="C:\outputfile" responseTimeout="10000" doc:name="File"/>
</flow>
I am getting couple of problems
a) I am getting an error that the attribute initialState is not defined as a valid property of flow. However, if I remove that attribute, the flow runs without waiting for "http://localhost:8080/file-pickup/start" to fire.
b) The files are not moved to the destination folder.
How can I fix this?
You can't reference a flow that has an inbound endpoint in it: such a flow is already active and consuming events from its inbound endpoint, so you can't invoke it on demand.
The following, tested on Mule 3.3.1, shows how to start a "file pickup" flow on demand from an HTTP request.
<flow name="filePickupFlow" initialState="stopped">
<file:inbound-endpoint path="///tmp/mule/input" />
<!-- Do something with the file: here we just log its content -->
<logger message="#[message.payloadAs(java.lang.String)]" level="ERROR" />
</flow>
<flow name="filePickupStarterFlow">
<http:inbound-endpoint address="http://localhost:8080/file-pickup/start"
exchange-pattern="request-response" />
<expression-component>
app.registry.filePickupFlow.start();
</expression-component>
<set-payload value="File Pickup successfully started" />
</flow>
HTTP GETting http://localhost:8080/file-pickup/start would then start the filePickupFlow, which in turn will process the files in /tmp/mule/input.
Note that it is up to you to configure the file:connector for the behavior it must have for files it processes; deleting them or moving them to another directory are the two main options.
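For example, a sketch of the two options on the connector (names and attribute values are illustrative):
<!-- Option 1: delete each file once it has been read -->
<file:connector name="pickupDelete" autoDelete="true"/>
<!-- Option 2: move processed files to an archive directory instead -->
<file:connector name="pickupArchive" autoDelete="false" moveToDirectory="/tmp/mule/processed" moveToPattern="#[message.inboundProperties.originalFilename]"/>
The file:inbound-endpoint in filePickupFlow would then select one of these via connector-ref.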
I guess in this case a file inbound endpoint that reads a file on demand will not be helpful.
Please try the following way:
<flow name="flow1" doc:name="f2">
<http:inbound-endpoint address="http://localhost:8080"
doc:name="HTTP" exchange-pattern="request-response" />
<component>
<spring-object bean="fileLoader"></spring-object>
</component>
<file:outbound-endpoint path="C:\outputfile"
responseTimeout="10000" doc:name="File" />
</flow>
The custom component will then be a class that reads the file from your specified location.
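A minimal sketch of wiring such a component (the class name com.example.FileLoader is hypothetical; in Mule 3 it could implement org.mule.api.lifecycle.Callable and return the file contents from onCall):
<spring:beans>
    <!-- registers the POJO under the bean name referenced by <spring-object bean="fileLoader"/> -->
    <spring:bean id="fileLoader" class="com.example.FileLoader"/>
</spring:beans>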
Hope this helps.
You can use Mule Requester for a clean solution. See the details in the blog entry Introducing the Mule Requester.