Mule inbound endpoint is not picking up the file

<flow name="receive-files-from-client">
<file:inbound-endpoint connector-ref="ibFileConnector"
path="/client-data/accounts/client/ToTest">
<file:filename-wildcard-filter pattern="ABC_123*.txt, XYZ_987*.txt" />
<object-to-byte-array-transformer /> <!-- need to convert from an input stream to a byte array to avoid having the wire-tap close it -->
<wire-tap>
<file:outbound-endpoint path="${workingDdir}/Dir1/archive/inbound/#[function:datestamp-yyyy-MM-dd_HH-mm-ss.SSS]" />
</wire-tap>
</file:inbound-endpoint>
....
...
</flow>
I have configured everything properly, but Mule is not picking up the file from that inbound path.

You are using a file outbound endpoint (inside a wire-tap) within your file inbound endpoint; the transformer and wire-tap must come after the inbound endpoint instead. Use the following modified flow.
<flow name="receive-files-from-client">
<file:inbound-endpoint connector-ref="inboundFileConnector"
path="/client-ftpdata/ftpaccounts/client/To_CC-Test">
<file:filename-wildcard-filter pattern="CYC53_810*.txt,CYC53_855*.txt,CYC53_856*.txt,CYC53_997*.txt" />
<object-to-byte-array-transformer /> <!-- need to convert from an input stream to a byte array to avoid having the wire-tap close it -->
<wire-tap>
<file:outbound-endpoint path="${global.workdir}/suppliers/S000590/archive/inbound/#[function:datestamp-yyyy-MM-dd_HH-mm-ss.SSS]" />
</wire-tap>
</file:inbound-endpoint>
<wire-tap>
<file:outbound-endpoint path="${global.workdir}/suppliers/S000590/archive/inbound/#[function:datestamp-yyyy-MM-dd_HH-mm-ss.SSS]" />
</wire-tap>
....
...
</flow>
Try executing your flow without the filter and see whether it picks up the files. If it does, modify your filter's wildcard pattern.
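One detail worth checking (an assumption on my part, not something confirmed in the question): the space after the comma in your pattern. Depending on how the Mule 3 wildcard filter splits the comma-separated list, that space can become part of the second pattern and stop it from matching, so the safer form is:
<file:filename-wildcard-filter pattern="ABC_123*.txt,XYZ_987*.txt" />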
Hope this helps.

Related

How to import CSV files in Anypoint Studio and convert them into JSON format?

I want to use an HTTP Listener in my flows, import CSV files in Anypoint Studio as input, and convert them into JSON. Please help me.
You can just use a Transform Message component and convert the payload to JSON.
In my example I am reading a file called address.csv.
In the Transform Message you simply write a DataWeave script that outputs application/json, and the logger then shows the contents of the file converted to JSON.
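The screenshots from the original answer are not available here, so below is a minimal sketch of what such a flow can look like in Mule 3 (the flow name and the src/main/resources/in directory are assumptions; dw: is the Transform Message / DataWeave module):
<flow name="csvToJsonFlow">
    <file:inbound-endpoint path="src/main/resources/in" doc:name="File">
        <file:filename-wildcard-filter pattern="address.csv" />
    </file:inbound-endpoint>
    <dw:transform-message doc:name="Transform Message">
        <dw:set-payload><![CDATA[%dw 1.0
%input payload application/csv
%output application/json
---
payload]]></dw:set-payload>
    </dw:transform-message>
    <!-- the logger prints the CSV rows rendered as a JSON array of objects -->
    <logger message="#[payload]" level="INFO" doc:name="Logger" />
</flow>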
Note: if you want to pick up a file in the middle of a flow that starts with an HTTP listener, you can always use the Mule Requester module. Here is how the code will look:
<file:connector name="file-connector-config" autoDelete="false" streaming="true" validateConnections="true" doc:name="File" />
<http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081" basePath="/requester" doc:name="HTTP Listener Configuration" />
<flow name="muleRequester">
<http:listener config-ref="HTTP_Listener_Configuration" path="/requester" doc:name="HTTP" />
<logger message="Invoking Mule Requester" level="INFO" doc:name="Logger" />
<mulerequester:request resource="file://src/main/resources/in/ReadME.txt?connector=file-connector-config" doc:name="Retrieve File" returnClass="java.lang.String" />
<logger message="Payload after file requester #[payload]" level="INFO" doc:name="Logger" />
</flow>
Reference: https://dzone.com/articles/mule-reading-file-in-the-middle-of-a-flow-using-mu
Maybe I'm misunderstanding the question, but if you'd like the HTTP listener to kick off the flow, then to load the file you'll need a Groovy script.
<scripting:component doc:name="Groovy">
    <!-- use a forward slash (or an escaped backslash) in the path so Groovy does not read \t as a tab -->
    <scripting:script engine="Groovy"><![CDATA[return new File("C:/test.csv").getText("UTF-8");]]></scripting:script>
</scripting:component>
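Putting the two together, a minimal sketch of such a flow (the flow name and /csv path are assumptions; HTTP_Listener_Configuration is the listener config shown earlier):
<flow name="csvOnDemandFlow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/csv" doc:name="HTTP" />
    <scripting:component doc:name="Groovy">
        <scripting:script engine="Groovy"><![CDATA[return new File("C:/test.csv").getText("UTF-8");]]></scripting:script>
    </scripting:component>
    <!-- the payload is now the raw CSV text; a Transform Message like the one above can turn it into JSON -->
    <logger message="#[payload]" level="INFO" doc:name="Logger" />
</flow>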

org.mule.module.launcher.DeploymentInitException: SAXParseException: Premature end of file

I am using Mule API Gateway and have deployed my package to it, but deployment fails with:
org.mule.module.launcher.DeploymentInitException: SAXParseException: Premature end of file.
I have tried versions 1.3.0 and 3.8.0 and got the same error in both.
Please help me.
File:
<http:connector name="httpConnector" />
<esper:config name="esperModule" configuration="esper-config.xml" />
<mxml:dom-to-xml-transformer name="domToXmlTransformer" />
<flow name="websocket-esper-bridge">
<http:inbound-endpoint address="niohttp://localhost:8080/websocket/events"
exchange-pattern="one-way">
<http:websocket path="events" />
</http:inbound-endpoint>
<custom-processor
class="com.mulesoft.demo.mule.websocket.EsperWebSocketUpdateListener">
<spring:property name="esperModule" ref="esperModule" />
<spring:property name="httpConnector" ref="httpConnector" />
<spring:property name="domToXmlTransformer" ref="domToXmlTransformer" />
</custom-processor>
</flow>
<flow name="signupEventsGenerator">
<poll frequency="3000">
<set-payload value="<signup id='fake' />"/>
</poll>
<mxml:xml-to-dom-transformer returnClass="org.w3c.dom.Document" />
<esper:send eventName="SignupEvent" eventPayload-ref="#[message.payload]" />
</flow>
This is a currently known issue with 3.8.0. The error is not very descriptive, but it means you are missing a schema declaration for a component. Could you post the entire XML file contents? It's probably one of the modules you're using, like http, esper, or mxml.
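For reference, every module used in the file needs a namespace declaration and a schemaLocation pair on the root element. A minimal sketch covering the http and mxml modules (the esper module's namespace is community-specific, so check that module's documentation for its entry):
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http"
      xmlns:mxml="http://www.mulesoft.org/schema/mule/xml"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
          http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
          http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
          http://www.mulesoft.org/schema/mule/xml http://www.mulesoft.org/schema/mule/xml/current/mule-xml.xsd">
    <!-- connectors and flows go here -->
</mule>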
HTH

Using mulerequester to get a file from an Amazon S3 bucket

I want to read a file from a configured Amazon S3 bucket on a certain event (for example, a message on a JMS queue). Mule Requester seems to help in this kind of situation for the file, FTP, etc. transports, but its scope appears to be limited to transport connectors, not cloud connectors.
Can I use S3 as a resource for mulerequester?
<flow name="process_s3_file" doc:name="process_s3_file"
processingStrategy="synchronous">
<mulerequester:request config-ref="" resource="need-to-use-s3-get-object"
doc:name="Mule Requester">
</mulerequester:request>
<logger level="INFO" doc:name="Logger" />
<!-- do something here -->
<s3:delete-object config-ref="Amazon_S3"
bucketName="${s3-read-bucket}" key="#[s3_file_name]" doc:name="Delete File"
accessKey="${s3-access-key}" secretKey="${s3-secret-key}" />
</flow>
Here is the S3 get-object operation I want to use as the requester's resource:
<s3:get-object-content config-ref="Amazon_S3" bucketName="${s3-read-bucket}"
key="#[s3_file_name]" accessKey="${s3-access-key}"
secretKey="${s3-secret-key}"
doc:name="Read File" />
It seems you don't need Mule Requester for the S3 connector: you can put the S3 operation anywhere in the flow. The following flow worked for me.
<flow name="process_s3_file" doc:name="process_s3_file"
processingStrategy="synchronous">
<s3:get-object-content config-ref="Amazon_S3" bucketName="${s3-read-bucket}"
key="#[s3_file_name]" accessKey="${s3-access-key}"
secretKey="${s3-secret-key}"
doc:name="Read File" />
<logger level="INFO" doc:name="Logger" />
<!-- do something here -->
<s3:delete-object config-ref="Amazon_S3"
bucketName="${s3-read-bucket}" key="#[s3_file_name]" doc:name="Delete File"
accessKey="${s3-access-key}" secretKey="${s3-secret-key}" /> </flow>

How to get a handle to the original file from file:inbound-endpoint in Mule

In a flow something like the one below:
<flow name="fileFlow">
<file:inbound-endpoint path="/inbound/ftp/sbc" pollingFrequency="30000" fileAge="30000" moveToDirectory="/inbound/ftp/sbc/archive">
<file:filename-wildcard-filter pattern="*.xml" caseSensitive="false"/>
</file:inbound-endpoint>
<logger message="Entering #[flow.name] flow" level="INFO"/>
<component class="com.abc.RequestFile"/>
<logger message="Payload after transformation is: #[payload] flow" level="INFO"/>
<vm:outbound-endpoint path="merge" />
<logger message="Exiting #[flow.name] flow" level="INFO"/>
</flow>
The file:inbound-endpoint gives me an InputStream, which is passed to the RequestFile component. This component is required to return a list of files, one of which is the original file that was read and passed in. I am looking for a solution other than manually copying the InputStream to a File in a Java component.
As explained in this answer https://stackoverflow.com/a/12397775/387927 you can get the java.io.File object instead of its content with this setting:
<file:connector name="fileConnector" streaming="false" autoDelete="false">
<service-overrides messageFactory="org.mule.transport.file.FileMuleMessageFactory" />
</file:connector>
Note that it is up to you to move or delete the file, either before you start or once you're done processing it; otherwise Mule will poll it again and again.
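With that connector in place, the payload of the question's flow becomes a java.io.File rather than an InputStream. A minimal sketch of the wiring (an illustration, not the original poster's code; moveToDirectory is dropped because, per the note above, moving the file is now your responsibility):
<flow name="fileFlow">
    <file:inbound-endpoint connector-ref="fileConnector" path="/inbound/ftp/sbc"
        pollingFrequency="30000" fileAge="30000">
        <file:filename-wildcard-filter pattern="*.xml" caseSensitive="false"/>
    </file:inbound-endpoint>
    <!-- the payload is now a java.io.File, so the component can inspect the
         original file and build its list of files from it -->
    <component class="com.abc.RequestFile"/>
</flow>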

Seeing ConcurrentModificationException in Mule 3.3.0

I am fairly new to Mule (using 3.3.0), but I am trying what I think should be a fairly stock example.
I have a Mule config which reads a CSV file and attempts to process its lines and columns in different flows asynchronously. However, we are seeing a ConcurrentModificationException when the message is being "handed off" to one of the async flows. Has anyone else seen this issue, and what did you do to work around it?
java.util.ConcurrentModificationException
at org.apache.commons.collections.map.AbstractHashedMap$HashIterator.nextEntry(AbstractHashedMap.java:1113)
at org.apache.commons.collections.map.AbstractHashedMap$KeySetIterator.next(AbstractHashedMap.java:938)
at org.mule.DefaultMuleEvent.setMessage(DefaultMuleEvent.java:933)
at org.mule.DefaultMuleEvent.<init>(DefaultMuleEvent.java:318)
at org.mule.DefaultMuleEvent.<init>(DefaultMuleEvent.java:290)
at org.mule.DefaultMuleEvent.copy(DefaultMuleEvent.java:948)
<queued-asynchronous-processing-strategy poolExhaustedAction="RUN" name="commonProcessingStrategy" maxQueueSize="1000" doc:name="Queued Asynchronous Processing Strategy"/>
<file:connector name="inboundFileConnector" fileAge="1000" autoDelete="true" pollingFrequency="1000" workDirectory="C:/mule/orca/dataprovider/work"/>
<file:endpoint name="dataProviderInbound" path="C:\mule\orca\dataprovider\inbound" moveToPattern="#[function:datestamp]-#[header:originalFilename]" moveToDirectory="C:\mule\orca\dataprovider\history" connector-ref="inboundFileConnector" doc:name="Data Feed File" doc:description="new files are processed in 'work' folder, then moved to 'archive' folder"/>
<flow name="dataProviderFeedFlow">
<inbound-endpoint ref="dataProviderInbound"/>
<file:file-to-string-transformer />
<flow-ref name="dataSub"/>
</flow>
<sub-flow name="dataSub" >
<splitter expression="#[rows=org.mule.util.StringUtils.split(message.payload, '\n\r')]" />
<expression-transformer expression="#[org.mule.util.StringUtils.split(message.payload, ',')]" />
<foreach>
<flow-ref name="storageFlow" />
<flow-ref name="id" />
</foreach>
</sub-flow>
<flow name="storageFlow" processingStrategy="commonProcessingStrategy">
<logger level="INFO" message="calling the 'storageFlow' sub flow."/>
</flow>
<flow name="id" processingStrategy="commonProcessingStrategy">
<logger level="INFO" message="calling the 'id' sub flow."/>
</flow>
Here is a fixed version of the dataSub sub-flow that works fine:
<sub-flow name="dataSub">
<splitter expression="#[org.mule.util.StringUtils.split(message.payload, '\n\r')]" />
<splitter expression="#[org.mule.util.StringUtils.split(message.payload, ',')]" />
<flow-ref name="storageFlow" />
<all>
<async>
<flow-ref name="storageFlow" />
</async>
<async>
<flow-ref name="id" />
</async>
</all>
</sub-flow>
Notice that:
I use two splitter expressions,
I use an all message processor to ensure the same payload is sent to both private flows,
I have to wrap the flow-refs with an async message processor, otherwise the invocation fails because the private flows are asynchronous while all forces synchronous processing.