How to read CSV file and insert data into PostgreSQL using Mule ESB, Mule Studio

I am very new to Mule Studio.
I am facing a problem: I have a requirement to insert data from a CSV file into a PostgreSQL database using Mule Studio.
I am using Mule Studio CE (version 1.3.1). I checked on Google and found that DataMapper can be used for this, but it is available only in EE, so I cannot use it.
I also searched the net and found an article, Using Mule Studio to read Data from PostgreSQL (Inbound) and write it to File (Outbound) - Step by Step approach.
That seems feasible, but my requirement is just the opposite of the article: I need File as the inbound component and Database as the outbound component.
What is the way to do this?
Any step-by-step help (like what components to use) and guidance will be greatly appreciated.

Here is an example that inserts a two-column CSV file:
<configuration>
    <expression-language autoResolveVariables="true">
        <import class="org.mule.util.StringUtils" />
        <import class="org.mule.util.ArrayUtils" />
    </expression-language>
</configuration>
<spring:beans>
    <spring:bean id="jdbcDataSource" class=" ... your data source ... " />
</spring:beans>
<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource">
    <jdbc:query key="insertRow"
        value="insert into my_table(col1, col2) values(#[message.payload[0]],#[message.payload[1]])" />
</jdbc:connector>
<flow name="csvFileToDatabase">
    <file:inbound-endpoint path="/tmp/mule/inbox"
        pollingFrequency="5000" moveToDirectory="/tmp/mule/processed">
        <file:filename-wildcard-filter pattern="*.csv" />
    </file:inbound-endpoint>
    <!-- Loads the whole file into RAM: won't work for big files! -->
    <file:file-to-string-transformer />
    <!-- Split into rows, dropping the first one (the header) -->
    <splitter
        expression="#[rows=StringUtils.split(message.payload, '\n\r');ArrayUtils.subarray(rows,1,rows.size())]" />
    <!-- Transform each CSV row into an array of values -->
    <expression-transformer expression="#[StringUtils.split(message.payload, ',')]" />
    <jdbc:outbound-endpoint queryKey="insertRow" />
</flow>

To read a CSV file and insert its data into PostgreSQL using Mule, all you need to do is follow these steps.
You need the following as prerequisites:
PostgreSQL
The PostgreSQL JDBC driver
Anypoint Studio IDE
A database created in PostgreSQL
Then configure the PostgreSQL JDBC driver in Global Element Properties inside Studio, and create the Mule flow in Anypoint Studio as follows (a sketch is given after the steps):
Step 1: Wrap the CSV file source in a File component
Step 2: Convert between object arrays and strings
Step 3: Split each row
Step 4: Transform each CSV row into an array
Step 5: Dump into the destination database
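Here is a minimal flow sketch covering those five steps. It assumes Mule 3.5+ with the Database connector and the PostgreSQL JDBC driver on the classpath; the connection URL, table and column names are illustrative, and StringUtils must be imported via the expression-language configuration shown in the first answer:
<db:generic-config name="PostgreSQL_Config"
    url="jdbc:postgresql://localhost:5432/mydb?user=postgres&amp;password=secret"
    driverClassName="org.postgresql.Driver" doc:name="Generic Database Configuration" />
<flow name="csvToPostgresFlow">
    <!-- Step 1: poll the CSV source directory -->
    <file:inbound-endpoint path="/tmp/mule/inbox" pollingFrequency="5000" doc:name="File">
        <file:filename-wildcard-filter pattern="*.csv" />
    </file:inbound-endpoint>
    <!-- Step 2: convert the file payload to a String -->
    <file:file-to-string-transformer doc:name="File to String" />
    <!-- Step 3: split into one message per row (drop the header first, as in the answer above, if your file has one) -->
    <splitter expression="#[StringUtils.split(message.payload, '\n\r')]" doc:name="Splitter" />
    <!-- Step 4: transform each CSV row into an array of values -->
    <expression-transformer expression="#[StringUtils.split(message.payload, ',')]" doc:name="Row to Array" />
    <!-- Step 5: insert each row into the destination table -->
    <db:insert config-ref="PostgreSQL_Config" doc:name="Database">
        <db:parameterized-query><![CDATA[insert into my_table(col1, col2) values(#[payload[0]], #[payload[1]])]]></db:parameterized-query>
    </db:insert>
</flow>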

I would like to suggest DataWeave (a sketch follows below).
Steps:
Read the file using the FTP connector/endpoint.
Transform using DataWeave.
Use the Database connector to store the data in the DB.
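A rough sketch of that approach, assuming Mule 3.7+ EE (DataWeave is not available in CE); the FTP host, credentials, column names are placeholders, and the database config is assumed to be defined as in the earlier sketch:
<flow name="ftpCsvToDbFlow">
    <!-- read CSV files from the FTP server -->
    <ftp:inbound-endpoint host="ftp.example.com" port="21" path="/inbox"
        user="ftpuser" password="secret" pollingFrequency="5000" doc:name="FTP" />
    <!-- transform the CSV payload into a list of row maps with DataWeave -->
    <dw:transform-message doc:name="Transform Message">
        <dw:input-payload mimeType="application/csv" />
        <dw:set-payload><![CDATA[%dw 1.0
%output application/java
---
payload map {
    col1: $.col1,
    col2: $.col2
}]]></dw:set-payload>
    </dw:transform-message>
    <!-- bulk-insert the rows into the database -->
    <db:insert config-ref="PostgreSQL_Config" bulkMode="true" doc:name="Database">
        <db:parameterized-query><![CDATA[insert into my_table(col1, col2) values(#[payload.col1], #[payload.col2])]]></db:parameterized-query>
    </db:insert>
</flow>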

Related

migrating jdbc transport to database connector in mule 3.7

Currently we are using the deprecated jdbc connector and its corresponding jdbc inbound endpoint to poll data from our database.
We are using the .ack (Acknowledgment (ACK) Statements feature) to prevent processing duplicate records.
However, I can't seem to find the same functionality in the new database connector. We are using the Mule Community edition, so we cannot use the batch components.
Is there a possibility to have the same functionality using the database connector (in combination with the polling component)? Or will we have to acknowledge our records manually?
<jdbc:connector name="dbPollingConnector" dataSource-ref="dataSource" queryTimeout="1000" pollingFrequency="1000">
    <receiver-threading-profile maxThreadsActive="1" />
    <reconnect-forever frequency="60000"></reconnect-forever>
    <jdbc:query key="newDataGrouped" value="select * from table where processed = 0"></jdbc:query>
    <jdbc:query key="newDataGrouped.ack" value="update table set processed = current_timestamp"></jdbc:query>
</jdbc:connector>
<flow name="flowName">
    <jdbc:inbound-endpoint name="groupedInboundComponent" responseTimeout="1000" queryTimeout="100"
        pollingFrequency="1000" connector-ref="dbPollingConnector" queryKey="newDataGrouped" exchange-pattern="request-response">
    </jdbc:inbound-endpoint>
    <!-- ... rest of the flow ... -->
</flow>
You can utilize Polling for Updates using Watermarks.
It works somewhat similarly to .ack, but without modifying data in the database. It should only need a minor modification to your current query, e.g.: select * from table where id > #[flowVars['lastModifiedID']]
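A sketch of that approach, assuming Mule 3.5+ (which provides poll with watermark) and the new Database connector; the config name, table and column names are illustrative:
<flow name="watermarkPollingFlow">
    <poll frequency="1000" doc:name="Poll">
        <!-- remembers the highest id seen so far between polls -->
        <watermark variable="lastModifiedID" default-expression="#[0]"
            selector="MAX" selector-expression="#[payload.id]" />
        <db:select config-ref="Database_Config" doc:name="Database">
            <db:parameterized-query><![CDATA[select * from table where id > #[flowVars['lastModifiedID']]]]></db:parameterized-query>
        </db:select>
    </poll>
    <!-- ... rest of the flow ... -->
</flow>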

Play 2.4 - Display Ebeans SQL statement in logs

How do I display SQL statements in the log? I'm using Ebean and it fails to insert for some reason, but I can't see what the problem is.
I tried to edit my config to:
db.default.logStatements=true
and add this to logback.xml
<logger name="com.jolbox" level="DEBUG" />
to follow some answers I found online, but it doesn't seem to work for 2.4…
Logging has changed with Play 2.4. From now on, to display the SQL statements in the console, simply add the following line to the conf/logback.xml file:
<logger name="org.avaje.ebean.SQL" level="TRACE" />
It should work just fine.
As @Flo354 pointed out in the comments, with Play 2.6 you should use:
<logger name="io.ebean.SQL" level="TRACE" />
From Play 2.5, logging SQL statements is very easy: Play 2.5 has an easy way to log SQL statements, built on jdbcdslog, that works across all JDBC databases, connection pool implementations and persistence frameworks (Anorm, Ebean, JPA, Slick, etc.). When you enable logging you will see each SQL statement sent to your database as well as performance information about how long the statement takes to run.
The SQL log statement feature in Play 2.5 can be configured per database, using the logSql property:
db.default.logSql=true
After that, you can configure the jdbcdslog-exp log level by adding these lines to logback.xml:
<logger name="org.jdbcdslog.ConnectionLogger" level="OFF" /> <!-- Won' log connections -->
<logger name="org.jdbcdslog.StatementLogger" level="INFO" /> <!-- Will log all statements -->
<logger name="org.jdbcdslog.ResultSetLogger" level="OFF" /> <!-- Won' log result sets -->
FYI, there's a nice video tutorial on Ebean's new doc page showing how to capture SQL statements only for selected areas of the code.
Thanks to this you can log statements only in problematic places while developing, and/or use the logged statements for performing tests, as shown in the video.
In short: add the latest avaje-ebeanorm-mocker dependency to your build.sbt as usual, so you can later use it in your code like:
LoggedSql.start();
User user = User.find.byId(123);
// ... other queries
List<String> capturedLogs = LoggedSql.stop();
Note you don't even need to fetch the List of statements if you do not need to process them, as they are displayed in the console as usual. So you can use it like this as well:
if (Play.isDev()) LoggedSql.start();
User user = User.find.byId(345);
// ... other queries
if (Play.isDev()) LoggedSql.stop();
I had success using jdbcdslog. As @Saeed Zarinfam mentioned here, Play 2.5 includes this by default.
Unlike this answer, this solution shows the parameter values instead of question marks.
Here are the steps I followed to get it working for Play 2.4 and MySQL:
Add to build.sbt:
"com.googlecode.usc" % "jdbcdslog" % "1.0.6.2"
Add to logback.xml:
<logger name="org.jdbcdslog.StatementLogger" level="INFO" /> <!-- Will log all statements -->
Create conf/jdbcdslog.properties file containing:
jdbcdslog.driverName=mysql
jdbcdslog.showTime=true
Change db.default.url (example):
jdbc:mysql://127.0.0.1:3306/mydb
changes to
jdbc:jdbcdslog:mysql://127.0.0.1:3306/mydb;targetDriver=com.mysql.jdbc.Driver
Change db.default.driver:
org.jdbcdslog.DriverLoggingProxy

how to replace xml element value in mule

<healthcare>
    <plans>
        <plan1>
            <planid>100</planid>
            <planname>medical</planname>
            <desc>medical</desc>
            <offerprice>500</offerprice>
            <area>texas</area>
        </plan1>
        <plan2>
            <planid>101</planid>
            <planname>dental</planname>
            <desc>dental</desc>
            <offerprice>1000</offerprice>
            <area>texas</area>
        </plan2>
    </plans>
</healthcare>
<splitter evaluator="xpath" expression="/healthcare/plans" doc:name="Splitter"/>
<transformer ref="domToXml" doc:name="Transformer Reference"/>
<logger level="INFO" doc:name="Logger" message=" plans detils...#[message.payload]" />
I have the input XML data shown above. I want to replace the offerprice value in this XML data. I have tried various ways; can anyone shed some light on this requirement in Mule?
In my requirement, I hit an external API and, based on the result value, I need to change the offerprice value in the input XML.
Any help is highly valued. I need this urgently for my work, so please shed some light.
You can use XSLT to transform the XML file to another XML file (with or without the same schema). Here's a small example of how it would look in Mule:
http://marcotello.com/mule-esb/using-the-xslt-transformer-in-mule-esb/
There are also a lot of resources for learning how to create XSLT files online / through google.
There are many ways to do it.
You can use Mule's XSLT transformer in your flow, which will change the value of offerprice in the input XML and give you the required XML as output with the value you want (see the sketch below).
Another way is to use Groovy with XmlSlurper to parse your input XML, replace the value, and rebuild the XML you want.
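For the XSLT route, a small sketch: an identity stylesheet that copies everything unchanged and only rewrites the offerprice text. The $newPrice parameter is illustrative; in Mule 3 you could feed it the external API result via a context property:
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:param name="newPrice" />
    <!-- identity template: copy every node and attribute as-is -->
    <xsl:template match="@*|node()">
        <xsl:copy>
            <xsl:apply-templates select="@*|node()" />
        </xsl:copy>
    </xsl:template>
    <!-- override only the offerprice value -->
    <xsl:template match="offerprice/text()">
        <xsl:value-of select="$newPrice" />
    </xsl:template>
</xsl:stylesheet>
Invoked from the flow along these lines (file name and flow variable are placeholders):
<mulexml:xslt-transformer xsl-file="update-offerprice.xslt" doc:name="XSLT">
    <mulexml:context-property key="newPrice" value="#[flowVars.newPrice]" />
</mulexml:xslt-transformer>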
Reference: XML Mapping in Mule
and
Mule: Enriching an XML with additional information from DB
Also refer to
http://www.ibm.com/developerworks/library/j-pg05199/index.html
Hope this helps.

Mule ESB: File outputpattern doesn't translate the pattern

I'm using Mule ESB CE 3.4. I have a requirement where I'm reading configuration information from a database and using it as the file name for the file outbound endpoint. Here is an example (the code may not work as-is, as I have only given an outline):
<file:connector name="File-Data" autoDelete="false" streaming="true" validateConnections="true" doc:name="File" />
.....
<!-- Gets the configuration from database using a transformer. The transformer populates the configuration entries in a POJO and puts that in a session. -->
<custom-transformer class="com.test.DbGetConfigsTransformer" doc:name="Get Integration Configs"/>
....<!-- some code to process data -->
<logger message="$$$: #[sessionVars['currentFeed'].getFilePattern()]" doc:name="Set JSON File Name" /> -->
<file:outbound-endpoint path="/temp" outputPattern="#[sessionVars['currentFeed'].getFilePattern()]" responseTimeout="10000" mimeType="text/plain" connector-ref="File-Data" doc:name="Save File"/>
The above code throws the following error:
1. The filename, directory name, or volume label syntax is incorrect (java.io.IOException)
java.io.WinNTFileSystem:-2 (null)
2. Unable to create a canonical file for /temp/Test_User_#[function:datestamp:YYYYMMddhhmmss.sss] (org.mule.api.MuleRuntimeException)
org.mule.util.FileUtils:354 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/MuleRuntimeException.html)
3. Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=file:///temp, connector=FileConnector
In the database table, the field is called FilePattern and it has the value 'Test_User_#[function:datestamp:YYYYMMddhhmmss.sss]'. If I hardcode the value or move it to the Mule configuration file
file.name=Test_User_#[function:datestamp:YYYYMMddhhmmss.sss]
and use the configuration property syntax (e.g. ${file.name}) in the outputPattern, it works. But if I read the same value from the db and use it, it does not work and throws the error above. The logger displays the value read from the db as:
$$$: Test_#[function:datestamp:YYYYMMddhhmmss.sss]
Any help is much appreciated.
If your datestamp format does not vary, you should just store the environment prefix in your db and use something like:
outputPattern="#[sessionVars['prefix']+server.dateTime.format('YYYYMMddhhmmss.sss')]"
If you need to use your current database values, you can use basic Java string methods to find the correct substrings. For example:
#[sessionVars['currentFeed'].getFilePattern().substring(0,sessionVars['currentFeed'].getFilePattern().indexOf('function')-2)+server.dateTime.format('YYYYMMddhhmmss.sss')]
If you use different datestamp formats, you can find that part as well using similar String methods. However, I still suggest you come up with an implementation that only stores the environment prefix in the db.
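For example, with only the prefix stored in the database, the endpoint could look like this (a sketch; the session variable name is illustrative, and the datestamp pattern mirrors the expression above):
<file:outbound-endpoint path="/temp"
    outputPattern="#[sessionVars['prefix'] + server.dateTime.format('YYYYMMddhhmmss.sss')]"
    mimeType="text/plain" connector-ref="File-Data" doc:name="Save File" />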

MULE expression-transformer not accepted

I'm trying to learn Mule ESB but am running into problems with the example projects. Why are these lines underlined in red and not represented in the Message Flow?
<expression-transformer name="returnAttachments">
    <return-argument evaluator="attachments-list" expression="*.txt,*.ozb,*.xml" optional="false"/>
</expression-transformer>
I've cut and pasted these lines from mulesoft.org as part of a sample project.
@genjosanzo is right, the MEL equivalent would be:
<expression-transformer
expression="#[($.value in message.inboundAttachments.entrySet() if $.key ~= '(.*\\.txt|.*\\.ozb|.*\\.xml)')]" />
Mule Studio has problems rendering nested elements (bug reported here).
Instead you can use the compact version and replace it with the following:
<expression-transformer expression="#[attachments-list:*.txt,*.ozb,*.xml]" doc:name="Expression" />
On a side note, since Mule 3.3.0 the new Mule Expression Language (MEL) is available, and it is recommended to rely on it whenever possible.