Mule: Polling on multiple tables of a database connector

I need to poll multiple tables through a database connector. When I try to apply a separate poll to each table using a composite source:
<composite-source>
<poll>
<db:select config-ref="databaseConnector"/> <!--select on table 1-->
</poll>
<poll>
<db:select config-ref="databaseConnector"/> <!--select on table 2-->
</poll>
</composite-source>
I get the error "poller already registered on endpoint uri". How can I poll multiple tables for updated data using a database connector?

Use three flows:
<flow name="poll-table-1">
<poll frequency="...">...</poll>
<flow-ref name="table-data-processor" />
</flow>
<flow name="poll-table-2">
<poll frequency="...">...</poll>
<flow-ref name="table-data-processor" />
</flow>
<flow name="table-data-processor">
...
</flow>
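For example, one of the polling flows could be filled in along these lines (a minimal sketch reusing the question's databaseConnector config; the frequency and query are assumptions):
<flow name="poll-table-1">
    <poll frequency="10000" doc:name="Poll">
        <db:select config-ref="databaseConnector" doc:name="Database">
            <db:parameterized-query><![CDATA[SELECT * FROM table1]]></db:parameterized-query>
        </db:select>
    </poll>
    <flow-ref name="table-data-processor" doc:name="Flow Reference"/>
</flow>
Because each poll lives in its own flow, each poller registers on its own endpoint, so the "poller already registered" error does not occur.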

You can try the following approach:
<composite-source>
<poll frequency="10000" doc:name="Poll">
<processor-chain >
<db:select config-ref="Oracle_Configuration" doc:name="Database">
<db:parameterized-query><![CDATA[select * from Table1]]></db:parameterized-query>
</db:select>
<logger level="INFO" message="Your Payload from Table1:- ....." doc:name="Logger"/>
<db:select config-ref="Oracle_Configuration" doc:name="Database">
<db:parameterized-query><![CDATA[select * from Table2]]></db:parameterized-query>
</db:select>
<logger level="INFO" message="Your Payload from Table2:- ...." doc:name="Logger"/>
</processor-chain>
</poll>
</composite-source>
<logger level="INFO" message="The remaining flow " doc:name="Logger"/>
This is working fine for me :)

Related

"Read multiple file from different location simultaneously and merge them into one payload"

I am reading multiple files from different folders and merging them, but I am not able to merge them into one file.
I am using a composite source to which I added two file connectors, and then I log the payload with a logger. I am getting the payloads one by one. How can I get a single payload that combines the two (or more) file inputs?
<flow name="file2Flow">
<composite-source doc:name="Copy_of_Composite Source">
<file:inbound-endpoint path="src/main/resources/input1" responseTimeout="10000" doc:name="File"/>
<file:inbound-endpoint path="src/main/resources/input2" responseTimeout="10000" doc:name="File"/>
</composite-source>
<file:file-to-string-transformer doc:name="File to String"/>
<logger message="#[payload]" level="INFO" doc:name="Logger"/>
</flow>
I am also trying the following, but I am not getting any output:
<flow name="file2file2Flow">
<http:listener config-ref="HTTP_Listener_Configuration" path="/files" doc:name="HTTP"/>
<scatter-gather doc:name="Scatter-Gather">
<file:outbound-endpoint path="src/main/resources/input1" responseTimeout="10000" doc:name="File"/>
<file:outbound-endpoint path="src/main/resources/input1" responseTimeout="10000" doc:name="File"/>
</scatter-gather>
<dw:transform-message doc:name="Transform Message">
<dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
{
post1: payload[0],
post2: payload[1]
}]]>
</dw:set-payload>
</dw:transform-message>
<logger message="#[payload]" level="INFO" doc:name="Logger"/>
</flow>
file:inbound-endpoint polls a single directory, so if you need different directories that alone won't work.
composite-source allows multiple inbound endpoints, but the files won't be available in the same payload.
file:outbound-endpoint is for writing files only.
In Mule 3 you can achieve this, though, by combining a poll to trigger the flow, a scatter-gather to route to multiple processors, and the Mule Requester module to read files mid-flow.
Mule Requester Module: https://www.mulesoft.com/exchange/68ef9520-24e9-4cf2-b2f5-620025690913/requester-module/
Rough example:
<flow name="dw-testFlow">
<poll doc:name="Poll" frequency="10000">
<logger level="INFO" doc:name="Logger" />
</poll>
<scatter-gather doc:name="Scatter-Gather">
<mulerequester:request config-ref="muleRequesterConfig" resource="myFileEndpoint" doc:name="Mule Requester" />
<mulerequester:request config-ref="muleRequesterConfig" resource="myFileEndpoint" doc:name="Mule Requester" />
</scatter-gather>
</flow>
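To then combine the two requested files into a single payload, a transform along the lines of the one in the question could follow the scatter-gather (a sketch; whether the two results are addressable as payload[0] and payload[1] depends on how scatter-gather aggregates them, so treat the indexes as an assumption):
<dw:transform-message doc:name="Transform Message">
    <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
{
    file1: payload[0],
    file2: payload[1]
}]]></dw:set-payload>
</dw:transform-message>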

Mule batch job taking too long to enrich data

I am trying to enrich the data to create an XML file.
The first query does a Group By to obtain the transaction header.
The second query gets all records (details) that match the header from the same file, to enrich the message.
The problem is that it takes about a second to run the query that enriches the data. I will need to run this process for 184,764 headers. At one second per header this job will take too long. Is there a way to accomplish the same thing without having to query the database for details? Can all the records be loaded first and obtain the details from memory instead? Here's the code:
<db:generic-config name="Generic_Database_Configuration" url="${db.url}"
    driverClassName="${driver.class.name}" doc:name="Generic Database Configuration"/>
<data-mapper:config name="List_Map__To_List_Map_"
transformationGraphPath="list_map__to_list_map_.grf"
doc:name="List_Map__To_List_Map_"/>
<data-mapper:config name="List_Map__To_XML_1"
transformationGraphPath="list_map__to_xml_1.grf"
doc:name="List_Map__To_XML_1"/>
<batch:job name="OrceTransactionImportBatch">
<batch:input>
<db:select config-ref="Generic_Database_Configuration"
doc:name="Database">
<db:parameterized-query><![CDATA[SELECT TRANDATED, STORED, REG#D
AS REG_D, TRAN#D AS TRAN_D, VIP#D AS VIP_D, VIP#D AS VIPNO, SUM(RETAIL*QTY)
AS TOTAL,
CONCAT(SUBSTRING(TRANDATED,1,4),
CONCAT('-',CONCAT(SUBSTRING(TRANDATED,5,2),
CONCAT('-',CONCAT(SUBSTRING(TRANDATED,7,2),'T00:00:00'))))) AS
BusinessDayDate
FROM ORCTEXDTLP
WHERE DGROUPID IN (SELECT HGROUPID FROM ORCTEXHDRP WHERE HPRCFLAG = 'P')
GROUP BY STORED, TRANDATED, REG#D, TRAN#D, VIP#D
FETCH FIRST 60 ROWS ONLY]]></db:parameterized-query>
</db:select>
<logger message="before mapper..." level="INFO" doc:name="before
mapper..."/>
</batch:input>
<batch:process-records>
<batch:step name="Batch_Step">
<data-mapper:transform config-ref="List_Map__To_List_Map_"
doc:name="List<Map> To List<Map>"/>
<logger message="before enricher..." level="INFO"
doc:name="before enricher..."/>
</batch:step>
<batch:step name="Batch_Step1">
<logger message="BEFORE FOR EACH..." level="INFO"
doc:name="Logger"/>
<enricher target="#[variable:LineItem]" doc:name="Message Enricher">
<db:select config-ref="Generic_Database_Configuration"
doc:name="Database">
<db:parameterized-query><![CDATA[SELECT TRANCODED,
CONCAT(SUBSTRING(TRANDATED,1,4),
CONCAT('-',CONCAT(SUBSTRING(TRANDATED,5,2),
CONCAT('-',CONCAT(SUBSTRING(TRANDATED,7,2),'T00:00:00'))))) AS
BusinessDayDate, STORED AS RetailStoreID, TRAN#D AS TransactionNumber, REG#D
AS WorkstationID, RETAIL AS TransactionGrandAmount, VIP#D AS AlternateID,
DISCOUNT, VOUCHER#D AS VOUCHER_D, TRIM(SKU#) AS ItemID, A03K2 AS
UnitCostPrice, RETAIL AS RegularSalesUnitPrice, (RETAIL*QTY) AS
ExtendedAmount, QTY AS Quantity, ROW_NUMBER() OVER () rownumber,
(RETAIL*QTY) AS ActualRetail,
VOUCHERCD AS VoucherCode, VOUCHER#D AS VoucherNumber
FROM FBF02P
LEFT OUTER JOIN KSK2P ON SKUK2 = SKU#
WHERE TRANDATED = #[payload[0]['TRANDATED']] AND STORED = #[payload[0]['STORED']] AND REG#D = #[payload[0]['REG_D']] AND TRAN#D = #[payload[0]['TRAN_D']]]]></db:parameterized-query>
</db:select>
</enricher>
<expression-component doc:name="Expression"><![CDATA[#[payload[0].LineItem=flowVars.LineItem]]]></expression-component>
<logger message="#[payload[0]['TRAN_D']]" level="INFO"
doc:name="Logger"/>
</batch:step>
<batch:step name="Batch_Step2">
<batch:commit streaming="true" doc:name="Batch Commit">
<data-mapper:transform config-ref="List_Map__To_XML_1"
doc:name="List<Map> To XML"/>
<file:outbound-endpoint path="${output.path}"
outputPattern="TranImport#[server.dateTime.format('yyyyMMdd_HHmmss')].xml"
responseTimeout="10000" doc:name="File"/>
</batch:commit>
</batch:step>
</batch:process-records>
<batch:on-complete>
<logger message="DONE..." level="INFO" doc:name="Logger"/>
</batch:on-complete>
</batch:job>
<flow name="OrceTransactionImportFlow">
<poll doc:name="Poll">
<fixed-frequency-scheduler frequency="1" timeUnit="DAYS"/>
<db:update config-ref="Generic_Database_Configuration"
doc:name="Database">
<db:parameterized-query><![CDATA[UPDATE ORCTEXHDRP
SET HPRCFLAG = 'P'
WHERE HPRCFLAG = '' OR HPRCFLAG = 'P']]></db:parameterized-query>
</db:update>
</poll>
<choice doc:name="Choice">
<when expression="#[payload == 0]">
<logger message="Zero payload..." level="INFO"
doc:name="Logger"/>
</when>
<otherwise>
<batch:execute name="OrceTransactionImportBatch"
doc:name="OrceTransactionImportBatch"/>
</otherwise>
</choice>
</flow>
Inside your database connector configuration you should set up a connection pooling profile.
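A minimal sketch against the question's db:generic-config (the pool sizes are assumptions):
<db:generic-config name="Generic_Database_Configuration" url="${db.url}"
    driverClassName="${driver.class.name}" doc:name="Generic Database Configuration">
    <db:pooling-profile maxPoolSize="10" minPoolSize="5" acquireIncrement="1"/>
</db:generic-config>
With a pool in place each enrichment query reuses an open connection instead of paying the connection setup cost for every header.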

Mule DB data retrieval into chunks

We are trying to extract approximately 40 GB of data from a database and want to generate multiple CSV files. We used the Mule DB connector in streaming mode, which returns a 'ResultSetIterator'.
Q1) How can we convert this ResultSetIterator to an ArrayList, or to any readable format that we can use further to generate the files?
Q2) We tried using the For Each component to split the data into chunks. It works for a limited set of data, but for huge data it throws a SerializationException.
In the snippet below we are chunking the data with For Each and handing the chunks to a batch process to produce multiple files:
<batch:job name="testBatchWithDBOutside">
<batch:input>
<logger message="#[payload]" level="INFO" doc:name="Logger"/>
</batch:input>
<batch:process-records>
<batch:step name="Batch_Step">
<batch:commit size="10" doc:name="Batch Commit">
<object-to-string-transformer doc:name="Object to String"/>
<logger message="#[payload]" level="INFO" doc:name="Logger"/>
<file:outbound-endpoint path="C:\output" outputPattern="#[message.id].txt" responseTimeout="10000" doc:name="File"/>
</batch:commit>
</batch:step>
</batch:process-records>
</batch:job>
<flow name="testBatchWithDBOutsideFlow" processingStrategy="synchronous">
<file:inbound-endpoint path="C:\input" responseTimeout="10000" doc:name="File"/>
<db:select config-ref="MySQL_Configuration" streaming="true" fetchSize="10" doc:name="Database">
<db:parameterized-query><![CDATA[select * from classicmodels]]></db:parameterized-query>
</db:select>
<foreach batchSize="5" doc:name="For Each">
<batch:execute name="testBatchWithDBOutside" doc:name="testBatchWithDBOutside"/>
</foreach>
</flow>
Q1. You don't want to convert the Iterator to a List, as this would defeat the purpose of streaming from the DB connector and load all records into memory. Mule handles Iterators and Lists in the same way anyway.
Q2. The batch module implies a for-each operation: the output of batch:input needs to be a List or an Iterator. You should be able to simplify this to:
<batch:job name="testBatch">
<batch:input>
<db:select config-ref="MySQL_Configuration" streaming="true" fetchSize="10" doc:name="Database">
<db:parameterized-query><![CDATA[select * from classicmodels]]></db:parameterized-query>
</db:select>
</batch:input>
<batch:process-records>
<batch:step name="Batch_Step">
<object-to-string-transformer doc:name="Object to String"/>
<file:outbound-endpoint path="C:\output" outputPattern="#[message.id].txt" responseTimeout="10000" doc:name="File"/>
</batch:step>
</batch:process-records>
</batch:job>
You will also need to replace the object-to-string-transformer with a component that converts a database record (the payload at this point is a map where the keys are the column names and the values are the column values) into a CSV line.
You can find a decent example in the Mule blog here: https://blogs.mulesoft.com/dev/anypoint-platform-dev/batch-module-reloaded/
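For instance, a simple replacement could be a DataWeave transform that renders the single record map as one CSV row (a sketch; header handling and column order are assumptions):
<dw:transform-message doc:name="Record to CSV line">
    <dw:set-payload><![CDATA[%dw 1.0
%output application/csv header=false
---
[payload]]]></dw:set-payload>
</dw:transform-message>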
Another option would be to remove the batch processor and use DataWeave to generate CSV output and stream it to the file. This might be helpful: https://docs.mulesoft.com/mule-user-guide/v/3.7/dataweave-streaming
DataWeave will call next() on the ResultSetIterator as it processes each record, and that Iterator handles selecting chunks of records from the underlying database, so there is no queueing between steps and no loading of the full dataset into memory.
<flow name="batchtestFlow">
<http:listener config-ref="HTTP_Listener_Configuration" path="/batch" allowedMethods="GET" doc:name="HTTP"/>
<db:select config-ref="Generic_Database_Configuration" streaming="true" doc:name="Database">
<db:parameterized-query><![CDATA[select * from Employees]]></db:parameterized-query>
</db:select>
<dw:transform-message doc:name="Transform Message">
<dw:set-payload><![CDATA[%dw 1.0
%input payload application/java
%output application/csv streaming=true, header=true, quoteValues=true
---
payload map ((e, i) -> {
surname: e.SURNAME,
firstname: e.FIRST_NAME
})]]></dw:set-payload>
</dw:transform-message>
<file:outbound-endpoint path="C:/tmp" outputPattern="testbatchfile.csv" connector-ref="File" responseTimeout="10000" doc:name="File"/>
</flow>
You want to use an OutputHandler. Make sure you have streaming turned on, then use a script component (for instance Groovy) and handle each row one at a time, like so:
// script.groovy
return {evt, out ->
payload.each { row ->
out << row.SOMECOLUMN.... }
} as OutputHandler
And the component in your XML:
<scripting:transformer returnClass="TODO" doc:name="ScriptComponent">
<scripting:script engine="Groovy" file="script.groovy" />
</scripting:transformer>
That covers returning some output from the transformer. However, since in your case you want to write to files, you wouldn't use the variable out but would instead write to your files directly from the script.
I found a simple and quick way, shown below.
Here the DB connector is in streaming mode, and For Each splits the records into chunks of the given batch size.
<flow name="testFlow" processingStrategy="synchronous">
<composite-source doc:name="Composite Source">
<quartz:inbound-endpoint jobName="test" cronExpression="0 48 13 1/1 * ? *" repeatInterval="0" connector-ref="Quartz" responseTimeout="10000" doc:name="Quartz">
<quartz:event-generator-job/>
</quartz:inbound-endpoint>
<http:listener config-ref="HTTP_Listener_Configuration" path="/hit" doc:name="HTTP"/>
</composite-source>
<db:select config-ref="MySQL_Configuration" streaming="true" fetchSize="10000" doc:name="Database">
<db:parameterized-query><![CDATA[SELECT * FROM tblName]]></db:parameterized-query>
</db:select>
<foreach batchSize="10000" doc:name="For Each">
<dw:transform-message doc:name="Transform Message">
<dw:set-payload><![CDATA[%dw 1.0
%output application/csv
---
payload map {
field1:$.InterfaceId,
field2:$.Component
}]]></dw:set-payload>
</dw:transform-message>
<file:outbound-endpoint path="F:\output" outputPattern="#[message.id].csv" responseTimeout="10000" doc:name="File"/>
</foreach>
<set-payload value="*** Success ***" doc:name="Set Payload"/>
</flow>

Mule: move multiple files from one folder to another inside a for-each loop

Below is my Mule flow. I want to move the corresponding files from the JDBC query result set.
----------------------------------------------------
<foreach doc:name="Foreach" counterVariableName="#[message.payload.size()]">
<logger message="#[payload.filepath] - #[payload.name] - #[payload.filename]" level="INFO" doc:name="Logger" />
</foreach>
---------------------------------------------------------------------
<jdbc-ee:postgresql-data-source name="PostgreSQL_Data_Source"
user="postgres" password="postgres" url="jdbc:postgresql://localhost:5432/postgres"
transactionIsolation="UNSPECIFIED" doc:name="PostgreSQL Data Source">
</jdbc-ee:postgresql-data-source>
<jdbc-ee:connector name="JDBCConnector"
dataSource-ref="PostgreSQL_Data_Source" validateConnections="true"
doc:name="JDBCConnector">
<jdbc-ee:query key="emprec" value="select * from emp where salary>50000";">
</jdbc-ee:query>
</jdbc-ee:connector>
<flow name="empflow" >
<quartz:inbound-endpoint responseTimeout="10000"
doc:name="Quartz" jobName="CronJobSchedule" repeatInterval="0"
cronExpression="0 0/1 * ? * MON-FRI" repeatCount="1">
<quartz:event-generator-job>
<quartz:payload>quartzSchedular started</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
<jdbc-ee:outbound-endpoint queryKey="emprec"
queryTimeout="-1" connector-ref="JDBCConnector" exchange-pattern="request-response"
doc:name="Database" />
<logger message="Size of payload is ::: #[message.payload.size()]" level="INFO" doc:name="Logger"/>
<foreach doc:name="Foreach" counterVariableName="#[message.payload.size()]">
<logger message="#[payload.filepath] - #[payload.name] - #[payload.filename]" level="INFO" doc:name="Logger" />
</foreach>
</flow>
Please suggest a way to move each file named in the query results to another location.
Inside the for-each loop I tried file inbound and outbound endpoints, but that did not work.
1 - You need to load the file; for that I suggest using the Mule Requester Module. You can find more on it in this blogpost.
2 - Right after that you can move it using a file outbound endpoint.
Here's an example:
<mulerequester:request config-ref="Mule_Requester" resource="file:///Users/anafelisatti/test.txt" returnClass="java.lang.String" doc:name="Mule Requester"/>
<file:outbound-endpoint responseTimeout="10000" doc:name="File" outputPattern="test.txt" path="/Users/anafelisatti/Documents"/>
Hope that helps.
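Put inside the for-each from the question, with the path and name taken from each record, it might look roughly like this (a sketch; it assumes the requester's resource attribute accepts an expression and that the filepath and filename columns from your logger are present on each record):
<foreach doc:name="Foreach">
    <!-- keep the record's file name before the requester replaces the payload with the file content -->
    <set-variable variableName="targetFile" value="#[payload.filename]" doc:name="Variable"/>
    <mulerequester:request config-ref="Mule_Requester"
        resource="#['file://' + payload.filepath + '/' + payload.filename]"
        returnClass="java.lang.String" doc:name="Mule Requester"/>
    <file:outbound-endpoint path="/Users/anafelisatti/Documents"
        outputPattern="#[flowVars.targetFile]" responseTimeout="10000" doc:name="File"/>
</foreach>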

Flow variable not working correctly for DB select query

I am facing a strange issue. My Mule flow is as follows:
<jdbc-ee:connector name="Database_Global" dataSource-ref="DB_Source" validateConnections="true" queryTimeout="-1" pollingFrequency="0" doc:name="Database">
<jdbc-ee:query key="InsertQuery" value="INSERT INTO getData(ID,NAME,AGE,DESIGNATION)VALUES(#[flowVars['id']],#[flowVars['name']],#[flowVars['age']],#[flowVars['designation']])"/>
<jdbc-ee:query key="RetriveQuery" value="Select * from getData where ID=#[flowVars['id']] "/>
</jdbc-ee:connector>
<flow name="MuleDbInsertFlow1" doc:name="MuleDbInsertFlow1">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8082" path="mainData" doc:name="HTTP"/>
<cxf:jaxws-service service="MainData" serviceClass="com.vertu.services.schema.maindata.v1.MainData" doc:name="SOAPWithHeader" />
<component class="com.vertu.services.schema.maindata.v1.Impl.MainDataImpl" doc:name="JavaMain_ServiceImpl"/>
<mulexml:object-to-xml-transformer doc:name="Object to XML"/>
<choice doc:name="Choice">
<when expression="#[message.inboundProperties['SOAPAction'] contains 'retrieveDataOperation']">
<processor-chain doc:name="Processor Chain">
<set-variable variableName="id" value="#[xpath('//id').text]" doc:name="Variable"/>
<logger message="ID from req #[flowVars['id']]" level="INFO" doc:name="Logger"/>
<jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="RetriveQuery" queryTimeout="-1" connector-ref="Database_Global" doc:name="Database (JDBC)"/>
<choice doc:name="Choice">
<when expression="#[message.payload.isEmpty()]">
<processor-chain>
<!-- Data does not exist .. we cannot display it -->
<logger message="No records found in Database !!!" level="INFO" doc:name="Logger"/>
</processor-chain>
</when>
<otherwise>
<processor-chain>
<!-- Data exists .. we can display it -->
<logger message="The Data retrieved from the Database" level="INFO" doc:name="Logger"/>
</processor-chain>
</otherwise>
</choice>
Now the issue: whenever I use RetriveQuery (Select * from getData where ID=#[flowVars['id']]),
it goes to the choice branch whose logger prints "No records found in Database !!!". But as you can see, I placed a logger before the SQL query is called by the DB outbound endpoint:
<logger message="ID from req #[flowVars['id']]" level="INFO" doc:name="Logger"/>
<jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="RetriveQuery" queryTimeout="-1" connector-ref="Database_Global" doc:name="Database (JDBC)"/>
which prints #[flowVars['id']], and I am successfully getting the value.
But I don't know why it is going into the <when expression="#[message.payload.isEmpty()]"> block.
If I use Select * from getData where ID=22 in RetriveQuery instead,
then the value of ID is picked up in the query successfully.
Please let me know why it is not getting the value in the SQL query when I use a flowVar.
It executes successfully for insert and update queries, but not for select.
Please note: the value of ID in Select * from getData where ID is an integer.
This is strange.
Try cleaning and rebuilding the project: the version that's running may not be using the latest config.
Yes, this was strange, and I found that cleaning and rebuilding the project in Studio, as David suggested, worked. Maybe there was an issue with picking up the latest config and reflecting it in the project.