Parsing multiple records after polling in Mule and pushing every single record into the queue - rabbitmq

As of now, when I poll the database with a query, it fetches multiple records during a specified duration. The problem is that these multiple records are pushed as a single message into the queue. How do I push every record from the set of records as an individual message?

As you have explained, the JDBC endpoint is fetching a collection of records and sending them as one single message to the queue. There are two options to solve this:
1. Use Mule's For-Each message processor, which iterates through the collection and processes each item as one message.
2. Use Mule's collection splitter to iterate through the collection of records (see the sketch after the option 1 sample below).
The code for a sample flow implementing option 1 looks like this:
<flow name="JDBC-For-Each-JMS-Flow">
    <jdbc-ee:inbound-endpoint queryKey="SelectAll" mimeType="text/javascript" queryTimeout="500000" pollingFrequency="1000" doc:name="Database">
        <jdbc-ee:query key="SelectAll" value="select * from users"/>
    </jdbc-ee:inbound-endpoint>
    <foreach doc:name="For Each" collection="#[payload]">
        <!-- "records" is a placeholder queue name -->
        <jms:outbound-endpoint queue="records" doc:name="JMS"/>
    </foreach>
    <catch-exception-strategy doc:name="Catch Exception Strategy"/>
</flow>
Note: This is a sample flow.
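For option 2, a minimal sketch could look like the following (untested; the flow and queue names are placeholders):
<flow name="JDBC-Splitter-JMS-Flow">
    <jdbc-ee:inbound-endpoint queryKey="SelectAll" queryTimeout="500000" pollingFrequency="1000" doc:name="Database">
        <jdbc-ee:query key="SelectAll" value="select * from users"/>
    </jdbc-ee:inbound-endpoint>
    <!-- splits the list of row maps so each row continues down the flow as its own message -->
    <collection-splitter doc:name="Collection Splitter"/>
    <!-- "records" is a placeholder queue name -->
    <jms:outbound-endpoint queue="records" doc:name="JMS"/>
</flow>
With the splitter, everything after it runs once per record, so no explicit loop is needed.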
Hope this helps.

Related

RabbitMQ Priority Queues not working in MULE

I'm trying to use the Priority Queues mechanism provided by RabbitMQ within Mule ESB.
I have created a queue with x-max-priority = 3 and, in a new Mule project, I've set up 3 amqp:connectors, one for each priority, like this:
<amqp:connector
    name="amqpLocalhostConnector0"
    host="${amqp.host}"
    port="${amqp.port}"
    fallbackAddresses="${amqp.fallbackAddresses}"
    virtualHost="${amqp.virtualHost}"
    username="${amqp.username}"
    password="${amqp.password}"
    priority="0"
    ackMode="AMQP_AUTO"
    prefetchCount="${amqp.prefetchCount}" />
The values within ${} are taken from a properties file. The priorities are "0", "1" and "2".
Then, there is a simple flow that sends messages every second to the queue, with the priority as a string in the body in addition to the priority property set on the connector.
<flow name="rabbitmqFlow1" doc:name="rabbitmqFlow1">
    <poll frequency="1000" doc:name="Poll">
        <set-payload value="PRIORITY 1" doc:name="Set Payload"/>
    </poll>
    <amqp:outbound-endpoint
        exchangeName="amq.direct"
        routingKey="priority"
        connector-ref="amqpLocalhostConnector0">
    </amqp:outbound-endpoint>
</flow>
I manually repeat this process, alternating the priority used by each batch of messages. This way, my queue has several messages with different priorities mixed.
Now, if I manually dequeue messages using the Get Message(s) button in the Management UI, they are delivered according to priority: first those with priority 2, then those with priority 1 and finally those with priority 0.
The problem is when I try to get the messages using an amqp:inbound-endpoint component in Mule.
Here is another simple flow that just gets the messages from the priority queue and logs the content of each one.
<flow name="rabbitmqFlow2" doc:name="rabbitmqFlow2">
    <amqp:inbound-endpoint queueName="priority"/>
    <byte-array-to-string-transformer doc:name="Byte Array to String"/>
    <logger message="#[payload]" level="INFO" doc:name="Logger"/>
</flow>
Here the messages are obtained in the order they were sent to the queue and not according to their priorities.
What can I do to read the messages from the queue respecting priorities?
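One thing worth checking (this is general RabbitMQ behavior, not something confirmed in this thread): priorities only reorder messages that are still waiting in the queue. A consumer with a large prefetchCount buffers messages in arrival order before the broker can re-sort them, so a low prefetch on the consuming connector may help, e.g.:
<amqp:connector
    name="amqpLocalhostConsumerConnector"
    host="${amqp.host}"
    port="${amqp.port}"
    username="${amqp.username}"
    password="${amqp.password}"
    ackMode="AMQP_AUTO"
    prefetchCount="1" />
<!-- low prefetch gives the broker a chance to deliver strictly by priority -->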

How to process a list in parallel in Mule?

I have a list of objects, which right now I am processing in a foreach. The list is nothing but string ids that kick off other stuff internally.
<flow name="flow1" processingStrategy="synchronous">
    <quartz:inbound-endpoint jobName="integration" repeatInterval="86400000" responseTimeout="10000" doc:name="Quartz">
        <quartz:event-generator-job/>
    </quartz:inbound-endpoint>
    <component class="RequestFeeder" doc:name="RequestFeeder"/>
    <foreach collection="#[payload]" doc:name="For Each">
        <flow-ref name="createFlow" doc:name="createFlow"/>
        <flow-ref name="queueFlow" doc:name="queueFlow"/>
        <flow-ref name="statusCheckFlow" doc:name="statusCheckFlow"/>
        <flow-ref name="resultsFlow" doc:name="resultsFlow"/>
        <flow-ref name="sftpFlow" doc:name="sftpFlow"/>
        <logger message="RequestType #[flowVars['rqstType']] complete" level="INFO" doc:name="Done"/>
    </foreach>
    <logger message="ALL 15 REQUESTS HAVE BEEN PROCESSED" level="INFO" doc:name="Logger"/>
</flow>
I want to process them in parallel, i.e. execute the same 4 flow-refs in parallel for all 15 requests coming in the list. This seems simple, but I haven't been able to figure it out yet. Any help appreciated.
An alternative to the scatter-gather approach is to simply split the collection and use a VM queue for the items in the list. This method can be simpler if you don't need to wait and collect all 15 results, and will still work if you do.
Try something like this. Mule automatically uses a thread pool to run your flow, so the requestProcessor flow below will process your requests in parallel.
<flow name="scheduleRequests">
    <quartz:inbound-endpoint jobName="integration" repeatInterval="86400000" responseTimeout="10000" doc:name="Quartz">
        <quartz:event-generator-job/>
    </quartz:inbound-endpoint>
    <component class="RequestFeeder" doc:name="RequestFeeder"/>
    <collection-splitter />
    <vm:outbound-endpoint path="requests" />
</flow>
<flow name="requestProcessor">
    <vm:inbound-endpoint path="requests" />
    <flow-ref name="createFlow" doc:name="createFlow"/>
    <flow-ref name="queueFlow" doc:name="queueFlow"/>
    <flow-ref name="statusCheckFlow" doc:name="statusCheckFlow"/>
    <flow-ref name="resultsFlow" doc:name="resultsFlow"/>
    <flow-ref name="sftpFlow" doc:name="sftpFlow"/>
</flow>
I reckon you still want those four flows to run sequentially, right?
If that were not the case you could always change the threading profile.
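As a minimal sketch of that (the strategy name and thread count are illustrative, following the Mule 3 processing-strategy docs):
<queued-asynchronous-processing-strategy name="allow15Threads" maxThreads="15"/>
<flow name="requestProcessor" processingStrategy="allow15Threads">
    <vm:inbound-endpoint path="requests" />
    <flow-ref name="createFlow" doc:name="createFlow"/>
</flow>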
Another thing you could do is to wrap the four flows in an async scope although you may need a processor change.
In any event, I think you'll be better off using the scatter-gather component:
https://developer.mulesoft.com/docs/display/current/Scatter-Gather
https://www.mulesoft.com/exchange#!/scatter-gather-flow-control
Without needing the for-each scope, it will split the list and execute each item in a different thread. You can define how many threads you want to run in parallel (so you don't just spin off new threads, you use a pool).
One final note though: scatter-gather is meant to aggregate the results of all the processed items. I reckon you could change that with a custom aggregation strategy, but I'm not really sure; please take a look at the docs for that.
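For reference, a rough sketch of that wiring (the class name is hypothetical; it would implement Mule's AggregationStrategy interface):
<scatter-gather doc:name="Scatter-Gather">
    <!-- com.example.MyAggregationStrategy is a hypothetical class -->
    <custom-aggregation-strategy class="com.example.MyAggregationStrategy"/>
    <flow-ref name="createFlow" />
    <flow-ref name="queueFlow" />
</scatter-gather>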
HTH
You say 4 flows, but the list contains 5 flows. If you want all flows executed in sequence, but each item in the collection executed in parallel, you will want a splitter followed by a separate vm flow containing all (4/5) flows, as explained here: https://support.mulesoft.com/s/article/Concurrently-processing-Collection-and-getting-the-results.
If you want the flows inside the loop to execute in parallel then you choose a Scatter-Gather component.
It is important to be clear which of the two things you are wanting to achieve as the solution would be very different. So the basic difference is, in Scatter-Gather a single message is sent to multiple recipients for processing in parallel, but in Splitter-Aggregator a single message is split into multiple sub messages and processed individually and then aggregated. See: http://muthurajud.blogspot.com/2016/07/eai-patterns-scattergather-versus.html
Mule's Scatter-Gather component is one of the components that makes parallel processing easy. A simple example would be the following:
<scatter-gather>
    <flow-ref name="flow1" />
    <flow-ref name="flow2" />
    <flow-ref name="flow3" />
</scatter-gather>
So, the flows you want to execute in parallel can be kept inside the scatter-gather.

How to read a value from a CSV file in Mule and retrieve data from the database using that value

I have a flow which reads CSV files and performs CRUD operations in a database. My flow is somewhat like:
<flow name="CsvToFile" doc:name="CsvToFile">
    <file:inbound-endpoint path="C:\Data" responseTimeout="10000" doc:name="CSV" connector-ref="File">
        <file:filename-wildcard-filter pattern="*.csv" caseSensitive="true"/>
    </file:inbound-endpoint>
    <jdbc-ee:csv-to-maps-transformer delimiter="," mappingFile="src/main/abc.xml" ignoreFirstRecord="true" doc:name="CSVTransformer"/>
    <jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="SelectQuery" queryTimeout="-1" connector-ref="jdbcConnectorGlobal" doc:name="Database">
        <jdbc-ee:query key="SelectQuery" value="Select * FROM DBDATA where ID=#[map-payload:ID]"/>
    </jdbc-ee:outbound-endpoint>
    <set-payload value="#[message.payload]" doc:name="Set Payload"/>
</flow>
Now if I use a SELECT SQL statement, I don't see the result and no data is fetched, but if I use an INSERT like
<jdbc-ee:query key="InsertQuery" value="INSERT INTO DBDATA (ID,NAME,AGE) VALUES(#[map-payload:ID],#[map-payload:NAME],#[map-payload:AGE])"/>
or an UPDATE SQL statement, I can see it's working and data is inserted or updated in the database.
My question is: how can I read the ID value from the .CSV file and use it in the SELECT query to retrieve values from the database?
You have exchange-pattern="one-way", meaning the jdbc outbound call will not return to the main flow. Use exchange-pattern="request-response" instead to get a return value.
Also <set-payload value="#[message.payload]" doc:name="Set Payload"/> does not make any sense. You need a transformer of some sort to make the return value readable. You can add a logger to see the returned payload.
UPDATE:
The return value of csv-to-maps-transformer is an ArrayList, not a Map, so you cannot use map-payload:ID. Try splitting the ArrayList, or use #[payload[0].ID] if you have just a single entry.
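A minimal sketch of the splitting approach, reusing the elements from the flow above (untested):
<jdbc-ee:csv-to-maps-transformer delimiter="," mappingFile="src/main/abc.xml" ignoreFirstRecord="true" doc:name="CSVTransformer"/>
<!-- each Map in the ArrayList continues as its own message, so map-payload:ID resolves per row -->
<collection-splitter doc:name="Collection Splitter"/>
<jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="SelectQuery" queryTimeout="-1" connector-ref="jdbcConnectorGlobal" doc:name="Database">
    <jdbc-ee:query key="SelectQuery" value="Select * FROM DBDATA where ID=#[map-payload:ID]"/>
</jdbc-ee:outbound-endpoint>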
Thanks to Anton.
The final solution is #[payload[0].ID]; that is what I needed and it works for me.

Mule - split a big JSON list into multiple smaller JSON lists

I have a list of JSON objects containing about 200 objects. I want to split that list into smaller lists, where each list contains at most 20 objects. I would like to POST each sublist to an HTTP-based endpoint.
<flow name="send-to-next-step" doc:name="send-to-vm-flow">
    <vm:inbound-endpoint exchange-pattern="one-way" path="send-to-next-step-vm" doc:name="VM" />
    <!-- received the JSON List payload with 200 objects -->
    <!-- TODO: do processing here to split the list into sub-lists and call the sub-flow for each sub-list -->
    <flow-ref name="send-to-aggregator-sf" doc:name="Flow Reference" />
</flow>
One possible way is to write a Java component which iterates over the list and, after every 20 objects, calls the sub-flow. Is there any better way of accomplishing this?
If your payload is a Java Collection, the Mule foreach scope has batching built in: http://www.mulesoft.org/documentation/display/current/Foreach
Example:
<foreach batchSize="20">
    <json:object-to-json-transformer/>
    <http:outbound-endpoint ... />
</foreach>
You could use the Groovy collate method for the batching, and then foreach or collection-splitter, depending on your needs:
<json:json-to-object-transformer returnClass="java.util.List"/>
<set-payload value="#[groovy:payload.collate(20)]"/>
<foreach>
    <json:object-to-json-transformer/>
    <http:outbound-endpoint exchange-pattern="request-response" host="0.0.0.0" port="8082" path="xx"/>
</foreach>
<set-payload value="#[groovy:payload.flatten()]"/>
This will send each batch of 20 objects to the http endpoint and then flatten back to the original list.

Mule MEL to read a database result set from a second 'database outbound endpoint'

I have a flow something like this:
1. A 'Database inbound endpoint' which polls (every 5 mins) a MySQL database server and gets a result set via a select query (this automatically becomes the current payload, i.e. #[message.payload]).
2. A 'For each' component with a 'Logger' component in it, using the expression #[message.payload].
3. One more 'Database outbound endpoint' component which executes another select query and obtains a result set.
4. A 'For each' component with a 'Logger' component in it, using the expression #[message.payload].
Note: both loggers print the result set of the first DB query. I mean the second logger is also showing the result set of the first query, because that result set is stored as the payload.
So my questions are:
1. What is the MEL to read the result set of the second database query in the above scenario?
2. Is there another way to read the result set in the flow?
Here is the configuration XML
<jdbc-ee:connector name="oracle_database" dataSource-ref="Oracle_Data_Source" validateConnections="true" queryTimeout="-1" pollingFrequency="0" doc:name="Database"/>
<flow name="testFileSaveFlow3" doc:name="testFileSaveFlow3">
    <poll frequency="1000" doc:name="Poll">
        <jdbc-ee:outbound-endpoint exchange-pattern="one-way" queryKey="selectTable1" queryTimeout="-1" connector-ref="oracle_database" doc:name="get data from table 1">
            <jdbc-ee:query key="selectTable1" value="SELECT * FROM TABLE1"/>
        </jdbc-ee:outbound-endpoint>
    </poll>
    <foreach doc:name="For Each">
        <logger message="#[message.payload]" level="INFO" doc:name="prints result-set of table1"/>
    </foreach>
    <jdbc-ee:outbound-endpoint exchange-pattern="one-way" queryKey="selectTable2" queryTimeout="-1" connector-ref="oracle_database" doc:name="get data from table 2">
        <jdbc-ee:query key="selectTable2" value="SELECT * FROM TABLE2"/>
    </jdbc-ee:outbound-endpoint>
    <foreach doc:name="For Each">
        <logger message="#[message.payload]" level="INFO" doc:name="prints result-set of table2"/>
    </foreach>
</flow>
Thanks in advance.
This is not an issue with MEL; it is an issue with your flow logic.
The second result set is not available in the message.
The JDBC outbound endpoint is one-way, so the Mule flow will not wait for the reply (result set) from the second JDBC outbound in the middle of the flow. That is why the second logger also prints the first result set.
Option 1:
Try making your JDBC outbound request-response instead of one-way.
Option 2:
Try a Mule message enricher to call the JDBC outbound, store the result set in a variable, and loop over that variable (a sketch follows).
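For option 2, a rough sketch (the variable name is an assumption; note the endpoint must be request-response inside the enricher):
<enricher target="#[flowVars.table2Rows]" doc:name="Message Enricher">
    <jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="selectTable2" queryTimeout="-1" connector-ref="oracle_database" doc:name="get data from table 2">
        <jdbc-ee:query key="selectTable2" value="SELECT * FROM TABLE2"/>
    </jdbc-ee:outbound-endpoint>
</enricher>
<!-- the original payload (table 1 rows) is untouched; table 2 rows are read from the variable -->
<foreach collection="#[flowVars.table2Rows]" doc:name="For Each">
    <logger message="#[message.payload]" level="INFO" doc:name="prints result-set of table2"/>
</foreach>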
Hope this helps.