Migrating JDBC transport to Database connector in Mule 3.7

Currently we are using the deprecated JDBC connector and its corresponding JDBC inbound endpoint to poll data from our database.
We are using the .ack (Acknowledgment (ACK) Statements) feature to prevent processing duplicate records.
However, I can't find the same functionality in the new Database connector. We are using the Mule Community edition, so we cannot use the batch components.
Is there a possibility to get the same functionality using the Database connector (in combination with the poll component)? Or will we have to acknowledge our records manually?
<jdbc:connector name="dbPollingConnector" dataSource-ref="dataSource" queryTimeout="1000" pollingFrequency="1000">
    <receiver-threading-profile maxThreadsActive="1" />
    <reconnect-forever frequency="60000" />
    <jdbc:query key="newDataGrouped" value="select * from table where processed = 0" />
    <jdbc:query key="newDataGrouped.ack" value="update table set processed = current_timestamp" />
</jdbc:connector>
<flow name="flowName">
    <jdbc:inbound-endpoint name="groupedInboundComponent" responseTimeout="1000" queryTimeout="100"
        pollingFrequency="1000" connector-ref="dbPollingConnector" queryKey="newDataGrouped" exchange-pattern="request-response" />
    <!-- ... rest of the flow ... -->
</flow>

You can utilize Polling for Updates using Watermarks.
It works somewhat like .ack, but without modifying the physical data in the database. It only needs a minor modification to your current query, e.g.: select * from table where id > #[flowVars['lastModifiedID']]
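A minimal sketch of such a poll using the Database connector, assuming a db:generic-config named dbConfig and an incrementing id column (all names are illustrative):
<flow name="dbPollingFlow" processingStrategy="synchronous">
    <poll doc:name="Poll">
        <fixed-frequency-scheduler frequency="1000" timeUnit="MILLISECONDS"/>
        <!-- remembers the highest id seen so far across polls and restarts -->
        <watermark variable="lastModifiedID" default-expression="#[0]"
            selector="MAX" selector-expression="#[payload.id]"/>
        <db:select config-ref="dbConfig" doc:name="Select new rows">
            <db:parameterized-query><![CDATA[select * from table where id > #[flowVars['lastModifiedID']]]]></db:parameterized-query>
        </db:select>
    </poll>
    <!-- ... rest of the flow ... -->
</flow>
The watermark is persisted in an object store between polls, so no processed flag needs to be written back to the table.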

Related

Getting error when working with poll watermarking in mule

I have an sfdc (Salesforce connector) query inside a poller with watermarking enabled; after that I take the data from sfdc and load it into the database.
<flow name="loadData" processingStrategy="synchronous">
<poll doc:name="Poll">
<fixed-frequency-scheduler frequency="2" timeUnit="MINUTES"/>
<watermark variable="timestamp" default-expression="#[server.dateTime.format("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")]" selector="MAX" selector-expression="#[payload.LastModifiedDate]" object-store-ref="sfdcStore"/>
<processor-chain doc:name="Processor Chain">
<logger message="poller started at #[server.dateTime]" level="INFO" doc:name="start"/>
<sfdc:query config-ref="svccloud_salesforce_configuration" query="SELECT Name, , Id, BillingStreet, BillingCity, BillingState, BillingCountry, BillingPostalCode, Phone, Pathway_Status__c FROM Account where LastModifiedDate < #[flowVars['timestamp']] and RecordTypeId IN (SELECT Id FROM RecordType where Name = 'Customer')" doc:name="Querying Customer Details"/>
</processor-chain>
</poll>
<logger message="process to DB" level="INFO"/>
</flow>
The data is retrieved and loaded into the DB properly, but the latest date is not getting stored in the timestamp variable. I am getting the info message below. If the timestamp value were stored, what message would we get? Can you please help with this?
INFO 2017-08-28 15:06:26,795 [pool-13-thread-1] org.mule.transport.polling.watermark.Watermark: Watermark value will not be updated since poll processor returned no results
The query doesn't actually select LastModifiedDate, so when the poll tries to update the watermark it will always be null and will not update.
Also, the query selects only records from before the timestamp, meaning that the MAX watermark will never move forward.
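A sketch of a corrected query: select LastModifiedDate and compare with > so that only newer records come back and the MAX selector has a value to advance to (otherwise as in the question):
<sfdc:query config-ref="svccloud_salesforce_configuration"
    query="SELECT Id, Name, LastModifiedDate, BillingStreet, BillingCity, BillingState, BillingCountry, BillingPostalCode, Phone, Pathway_Status__c FROM Account where LastModifiedDate > #[flowVars['timestamp']] and RecordTypeId IN (SELECT Id FROM RecordType where Name = 'Customer')"
    doc:name="Querying Customer Details"/>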
You have to clear your application data to resolve this. If you run in Studio, the watermark variable gets stored locally in an object store.
If you clear the application data, it will work as expected.
Follow these steps to clear the application data:
Right-click on the project --> Run As --> Run Configurations --> General tab --> change Clear Application Data to Always (you need to scroll down to see this option).

Mule ESB: File outputpattern doesn't translate the pattern

I'm using Mule ESB CE 3.4. I have a requirement where I'm reading configuration information from the database and using it as the file name for the file outbound endpoint. Here is an example (the code may not work as I have only given an outline):
<file:connector name="File-Data" autoDelete="false" streaming="true" validateConnections="true" doc:name="File" />
.....
<!-- Gets the configuration from database using a transformer. The transformer populates the configuration entries in a POJO and puts that in a session. -->
<custom-transformer class="com.test.DbGetConfigsTransformer" doc:name="Get Integration Configs"/>
....<!-- some code to process data -->
<logger message="$$$: #[sessionVars['currentFeed'].getFilePattern()]" doc:name="Set JSON File Name" /> -->
<file:outbound-endpoint path="/temp" outputPattern="#[sessionVars['currentFeed'].getFilePattern()]" responseTimeout="10000" mimeType="text/plain" connector-ref="File-Data" doc:name="Save File"/>
The above code throws the following error:
1. The filename, directory name, or volume label syntax is incorrect (java.io.IOException)
java.io.WinNTFileSystem:-2 (null)
2. Unable to create a canonical file for /temp/Test_User_#[function:datestamp:YYYYMMddhhmmss.sss] (org.mule.api.MuleRuntimeException)
org.mule.util.FileUtils:354
3. Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=file:///temp, connector=FileConnector
In the database table, the field is called FilePattern and it has the value 'Test_User_#[function:datestamp:YYYYMMddhhmmss.sss]'. If I hardcode the value or move this value to the Mule configuration file
file.name=Test_User_#[function:datestamp:YYYYMMddhhmmss.sss]
and use the configuration property syntax (e.g. ${file.name}) in the outputPattern, it works. But if I read the same value from the db and use it, it does not work and throws the error. The logger displays the value read from the db as:
$$$: Test_#[function:datestamp:YYYYMMddhhmmss.sss]
Any help is much appreciated.
If your datestamp format does not vary, you should just store the environment prefix in your db and use something like:
outputPattern="#[sessionVars['prefix']+server.dateTime.format('YYYYMMddhhmmss.sss')]"
If you need to use your current database values, you can use basic Java string methods to find the correct substrings. For example:
#[sessionVars['currentFeed'].getFilePattern().substring(0,sessionVars['currentFeed'].getFilePattern().indexOf('function')-2)+server.dateTime.format('YYYYMMddhhmmss.sss')]
If you use different datestamp formats, you can find that part as well using similar String methods. However, I still suggest you come up with an implementation that only stores the environment prefix in the db.
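For example, if only the prefix (say Test_User_) were stored in the database and placed in a session variable, the outbound endpoint from the question could look roughly like this (a sketch; sessionVars['prefix'] is an assumed variable set by the transformer):
<file:outbound-endpoint path="/temp"
    outputPattern="#[sessionVars['prefix'] + server.dateTime.format('YYYYMMddhhmmss.sss')]"
    responseTimeout="10000" mimeType="text/plain" connector-ref="File-Data" doc:name="Save File"/>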

Mule Message Enricher Target Expression

I have a flow that takes a pair of dates inbound and then uses a message enricher to get a list of employees that worked on a specific job between those two dates. The result is a simple list of maps returned from a JDBC database. I got that saved into a flow variable without any trouble. The next enrichment is causing me trouble. I set up a for-each loop that uses the employees list from the flow variable. This works great, and I then need to execute another JDBC query for each of these employees to get all the time tickets they turned in between the two dates passed to the flow. The query works, but I am having trouble defining the target expression to hold the result. I would like the target to be a map with the employee id as the key and the tickets for the period (a list of maps) as the value. Is there any way to do this? Is there a better way to save these results? After I get all the tickets, I need to summarize them in various ways and generate a report showing the detail as well as the analysis.
I am currently developing this in Mule Studio for the Community runtime, version 3.4.
Is this something like what you are looking for?
<set-variable variableName="maplistmap" value="#[new java.util.HashMap()]"/>
<foreach>
    <!-- inside the foreach, payload is one employee id from the iterated collection -->
    <jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="selekt" queryTimeout="-1" connector-ref="Database">
        <jdbc-ee:query key="selekt" value="select * from mytable where id = #[payload]"/>
    </jdbc-ee:outbound-endpoint>
    <scripting:component>
        <scripting:script engine="Groovy"><![CDATA[
            // key the map by the employee id of the first returned row;
            // the value is the full list of rows (tickets) for that employee
            flowVars["maplistmap"].put(payload[0].id, payload)
        ]]></scripting:script>
    </scripting:component>
</foreach>
<logger message="#[flowVars.maplistmap]" level="INFO"/>

Multiple triggers to Quartz endpoint in Mule

Is there a way to configure a Quartz inbound endpoint in Mule to have multiple triggers? Say I want an event every day at 9:00, plus one at 1:00 a.m. on the first day of the month.
Here is what might work for you:
<flow name="MultipleIBEndpoints" doc:name="MultipleIBEndpoints">
<composite-source doc:name="Composite Source">
<quartz:inbound-endpoint jobName="QuartzDaily" doc:name="Quartz Daily"
cronExpression="0 0 9 1/1 * ? *">
<quartz:event-generator-job>
<quartz:payload>dummy</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
<quartz:inbound-endpoint jobName="QuartzMonthly" doc:name="Quartz Monthly"
cronExpression="0 0 1 1 1/1 ? *">
<quartz:event-generator-job>
<quartz:payload>dummy</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
</composite-source>
<logger level="INFO" doc:name="Logger" />
</flow>
The above flow uses the composite-source scope, which allows you to embed two or more inbound endpoints into a single message source.
In the case of composite-source, the embedded building blocks are message sources (i.e. inbound endpoints) that listen in parallel on different channels for incoming messages. Whenever any of these receivers accepts a message, the composite-source scope passes it to the first message processor in the flow, thus triggering that flow.
You can also meet your requirement by using just one Quartz endpoint with the required CRON expression:
CRON expression 0 0 1,21 1 * *
Please refer to the links below for more tweaks.
Mulesoft quartz reference
wikipedia reference
List of Cron Expression examples
In that case you need to configure two CronTriggers and add them to the scheduler.
Please go through the link below, where I have described the whole thing:
Configure multiple cron trigger
Hope this helps.

How to read CSV file and insert data into PostgreSQL using Mule ESB, Mule Studio

I am very new to Mule Studio.
I am facing a problem. I have a requirement where I need to insert data from a CSV file to PostgreSQL Database using Mule Studio.
I am using Mule Studio CE (version 1.3.1). I checked on Google and found that we can use the DataMapper for doing so, but it works only for EE, so I cannot use it.
I also searched the net and found an article: Using Mule Studio to read Data from PostgreSQL (Inbound) and write it to File (Outbound) - Step by Step approach.
That seems feasible, but my requirement is just the opposite of the article: I need File as the inbound endpoint and the database as the outbound component.
What is the way to do so?
Any step by step help (like what components to use) and guidance will be greatly appreciated.
Here is an example that inserts a two-column CSV file:
<configuration>
    <expression-language autoResolveVariables="true">
        <import class="org.mule.util.StringUtils" />
        <import class="org.mule.util.ArrayUtils" />
    </expression-language>
</configuration>

<spring:beans>
    <spring:bean id="jdbcDataSource" class=" ... your data source ... " />
</spring:beans>

<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource">
    <jdbc:query key="insertRow"
        value="insert into my_table(col1, col2) values(#[message.payload[0]],#[message.payload[1]])" />
</jdbc:connector>

<flow name="csvFileToDatabase">
    <file:inbound-endpoint path="/tmp/mule/inbox"
        pollingFrequency="5000" moveToDirectory="/tmp/mule/processed">
        <file:filename-wildcard-filter pattern="*.csv" />
    </file:inbound-endpoint>
    <!-- Load all file in RAM - won't work for big files! -->
    <file:file-to-string-transformer />
    <!-- Split each row, dropping the first one (header) -->
    <splitter
        expression="#[rows=StringUtils.split(message.payload, '\n\r');ArrayUtils.subarray(rows,1,rows.size())]" />
    <!-- Transform CSV row in array -->
    <expression-transformer expression="#[StringUtils.split(message.payload, ',')]" />
    <jdbc:outbound-endpoint queryKey="insertRow" />
</flow>
In order to read a CSV file and insert its data into PostgreSQL using Mule, you need to follow these steps:
You need the following things as prerequisites:
PostgreSQL
PostgreSQL JDBC driver
Anypoint Studio IDE and
A database to be created in PostgreSQL
Then configure the PostgreSQL JDBC driver in the Global Element Properties inside Studio.
Create Mule Flow in Anypoint Studio as follows:
Step 1: Wrap CSV file source in File component
Step 2: Convert between object arrays and strings
Step 3: Split each row
Step 4: Transform CSV row in array
Step 5: Dump into the destination Database
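A minimal sketch of such a flow using the Database connector, assuming a two-column table and a global element named Postgres_Configuration (all names, paths, credentials, and the column count are illustrative):
<db:generic-config name="Postgres_Configuration"
    url="jdbc:postgresql://localhost:5432/mydb?user=postgres&amp;password=secret"
    driverClassName="org.postgresql.Driver" doc:name="Generic Database Configuration"/>

<flow name="csvToPostgres">
    <!-- Step 1: read the CSV file -->
    <file:inbound-endpoint path="/tmp/mule/inbox" moveToDirectory="/tmp/mule/processed"
        pollingFrequency="5000" doc:name="File">
        <file:filename-wildcard-filter pattern="*.csv"/>
    </file:inbound-endpoint>
    <!-- Step 2: convert the byte[] payload to a String -->
    <byte-array-to-string-transformer doc:name="Byte Array to String"/>
    <!-- Step 3: split the file into rows (header handling omitted for brevity) -->
    <splitter expression="#[message.payload.split('\n')]" doc:name="Split rows"/>
    <!-- Step 4: turn one CSV row into an array of columns -->
    <expression-transformer expression="#[message.payload.split(',')]" doc:name="Row to array"/>
    <!-- Step 5: insert the row into the destination table -->
    <db:insert config-ref="Postgres_Configuration" doc:name="Insert row">
        <db:parameterized-query><![CDATA[insert into my_table(col1, col2) values (#[payload[0]], #[payload[1]])]]></db:parameterized-query>
    </db:insert>
</flow>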
I would like to suggest DataWeave.
Steps:
1. Read the file using the FTP connector/endpoint.
2. Transform using DataWeave.
3. Use the Database connector to store the data in the DB.
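For illustration, a rough sketch of that approach (note that DataWeave is an Enterprise feature in Mule 3, so it will not run on the CE runtime mentioned in the question; the FTP details, the header names col1/col2, and the Postgres_Configuration element are assumptions):
<flow name="csvToDbWithDataWeave">
    <!-- 1. Read the CSV file from an FTP server -->
    <ftp:inbound-endpoint host="ftp.example.com" port="21" path="/inbox"
        user="user" password="secret" pollingFrequency="5000" doc:name="FTP"/>
    <!-- 2. Parse the CSV and map each row to the target columns -->
    <dw:transform-message doc:name="CSV rows to column maps">
        <dw:input-payload mimeType="application/csv"/>
        <dw:set-payload><![CDATA[%dw 1.0
%output application/java
---
payload map {
    col1: $.col1,
    col2: $.col2
}]]></dw:set-payload>
    </dw:transform-message>
    <!-- 3. Bulk insert the resulting list of maps -->
    <db:insert config-ref="Postgres_Configuration" bulkMode="true" doc:name="Insert rows">
        <db:parameterized-query><![CDATA[insert into my_table(col1, col2) values (#[payload.col1], #[payload.col2])]]></db:parameterized-query>
    </db:insert>
</flow>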