Play 2.4 - Display Ebean SQL statements in logs - sql

How can I display SQL statements in the log? I'm using Ebean and an insert fails for some reason, but I can't see what the problem is.
I tried to edit my config to:
db.default.logStatements=true
and add this to logback.xml
<logger name="com.jolbox" level="DEBUG" />
following some answers I found online, but it doesn't seem to work with 2.4…

Logging has changed with Play 2.4. From now on, to display the SQL statements in the console, simply add the following line to the conf/logback.xml file:
<logger name="org.avaje.ebean.SQL" level="TRACE" />
It should work just fine.
As @Flo354 pointed out in the comments, with Play 2.6 you should use:
<logger name="io.ebean.SQL" level="TRACE" />

Since Play 2.5, logging SQL statements is very easy: Play has a built-in mechanism, based on jdbcdslog, that works across all JDBC databases, connection pool implementations and persistence frameworks (Anorm, Ebean, JPA, Slick, etc.). When you enable logging you will see each SQL statement sent to your database as well as performance information about how long the statement takes to run.
The SQL logging feature in Play 2.5 can be configured per database, using the logSql property:
db.default.logSql=true
After that, you can configure the jdbcdslog-exp log level by adding these lines to logback.xml:
<logger name="org.jdbcdslog.ConnectionLogger" level="OFF" /> <!-- Won't log connections -->
<logger name="org.jdbcdslog.StatementLogger" level="INFO" /> <!-- Will log all statements -->
<logger name="org.jdbcdslog.ResultSetLogger" level="OFF" /> <!-- Won't log result sets -->

FYI, there's a nice video tutorial on Ebean's new documentation page showing how to capture SQL statements only for selected areas of the code.
Thanks to this you can log statements only in problematic places while developing, and/or use the captured statements in tests, as shown in the video.
In short: add the latest avaje-ebeanorm-mocker dependency to your build.sbt as usual, so you can later use it in your code like this:
LoggedSql.start();
User user = User.find.byId(123);
// ... other queries
List<String> capturedLogs = LoggedSql.stop();
Note that you don't even need to fetch the List of statements if you don't need to process them, as they are displayed in the console as usual. So you can use it like this as well:
if (Play.isDev()) LoggedSql.start();
User user = User.find.byId(345);
// ... other queries
if (Play.isDev()) LoggedSql.stop();
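If you want to be sure the capture always stops, even when one of the queries throws, you can wrap the same calls in try/finally:
LoggedSql.start();
try {
    User user = User.find.byId(123);
    // ... other queries
} finally {
    // stop() always runs, so the capture can't be left open
    List<String> capturedLogs = LoggedSql.stop();
}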

I had success using jdbcdslog. As @Saeed Zarinfam mentioned here, Play 2.5 includes this by default.
Unlike this answer, this solution shows the parameter values instead of question marks.
Here are the steps I followed to get it working for Play 2.4 and MySQL:
Add to build.sbt:
"com.googlecode.usc" % "jdbcdslog" % "1.0.6.2"
Add to logback.xml:
<logger name="org.jdbcdslog.StatementLogger" level="INFO" /> <!-- Will log all statements -->
Create conf/jdbcdslog.properties file containing:
jdbcdslog.driverName=mysql
jdbcdslog.showTime=true
Change db.default.url (example):
jdbc:mysql://127.0.0.1:3306/mydb
changes to
jdbc:jdbcdslog:mysql://127.0.0.1:3306/mydb;targetDriver=com.mysql.jdbc.Driver
Change db.default.driver:
org.jdbcdslog.DriverLoggingProxy
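Putting the last two steps together, the relevant application.conf entries would look roughly like this (assuming the example MySQL database above):
db.default.driver=org.jdbcdslog.DriverLoggingProxy
db.default.url="jdbc:jdbcdslog:mysql://127.0.0.1:3306/mydb;targetDriver=com.mysql.jdbc.Driver"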

Related

Apache Camel gets an empty response from Jetty

I'm facing a complex case.
Here is what I'm doing, step by step:
1) <from uri="jetty:http://0.0.0.0:30100/jetty/test"/>
2) <to uri="hazelcast-client:master-test-series" />
3) <to uri="bean:modelSeriesWrapperTest" />
4)
<split parallelProcessing="true" streaming="true">
<simple>${body}</simple> <to uri="direct:dw.model.test"/>
</split>
5) From another route
<from uri="direct:dw.model.test"/>
<aggregate strategyRef="myAggregatorStrategy"
completionTimeout="1000">
<correlationExpression>
<constant>true</constant>
</correlationExpression>
<marshal ref="modelSeriesVariantColourGson" />
<camel:to uri="file:src/data/catask/output?fileName=output.xml"/>
</aggregate>
The problem is that the Jetty response is empty. I used a TCP trace to track the request and response; the Content-Length is 0. But the output.xml file has the correct JSON content.
Even after the message passes through <camel:to uri="file:src/data/catask/output?fileName=output.xml"/>, the Jetty response is still empty.
I tried the InOut pattern; it doesn't work either.
It seems Jetty returns immediately, without waiting for the split to finish. I tried setting the In and Out body, but that doesn't work either. I've Googled every case I can imagine and found nothing helpful.
Could you please help me? Thank you very much.
If you want the Jetty response to include information from your aggregator, then you must use the splitter-only approach, as documented at:
http://camel.apache.org/composed-message-processor.html
The splitter has built-in aggregation, which ensures that when the splitter is done it has also aggregated, and you can then use that result as the Jetty response.
When you use <aggregate>, it becomes a separate exchange. To understand this better, read up on the Aggregate EIP, other SO answers, and the various Camel books.
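For reference, here is a rough sketch of that splitter-only approach in Camel's Java DSL, based on the routes in the question. The aggregation strategy (MyAggregationStrategy) is a made-up example that simply concatenates the split results; adapt it to your Gson marshalling.
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class SplitterOnlyRoute extends RouteBuilder {

    // Illustrative strategy: concatenates the bodies of the split parts.
    static class MyAggregationStrategy implements AggregationStrategy {
        @Override
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            if (oldExchange == null) {
                return newExchange;
            }
            String merged = oldExchange.getIn().getBody(String.class)
                    + newExchange.getIn().getBody(String.class);
            oldExchange.getIn().setBody(merged);
            return oldExchange;
        }
    }

    @Override
    public void configure() {
        from("jetty:http://0.0.0.0:30100/jetty/test")
            .to("hazelcast-client:master-test-series")
            .to("bean:modelSeriesWrapperTest")
            // The strategy passed to split() aggregates the split results back
            // into one message when the split completes, and that merged
            // message is what Jetty returns to the HTTP caller.
            .split(body(), new MyAggregationStrategy())
                .parallelProcessing().streaming()
                .to("direct:dw.model.test")
            .end();
    }
}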

Is Apache Camel's idempotent consumer pattern scalable?

I'm using Apache Camel 2.13.1 to poll a database table which will have upwards of 300k rows in it. I'm looking to use the Idempotent Consumer EIP to filter rows that have already been processed.
I'm wondering, though, whether the implementation is really scalable or not. My Camel context is:
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route id="main">
<from
uri="sql:select * from transactions?dataSource=myDataSource&consumer.delay=10000&consumer.useIterator=true" />
<transacted ref="PROPAGATION_REQUIRED" />
<enrich uri="direct:invokeIdempotentTransactions" />
<!-- Any processors here will be executed on all messages -->
</route>
<route id="idempotentTransactions">
<from uri="direct:invokeIdempotentTransactions" />
<idempotentConsumer
messageIdRepositoryRef="jdbcIdempotentRepository">
<ognl>#{request.body.ID}</ognl>
<!-- Anything here will only be executed for non-duplicates -->
<log message="non-duplicate" />
<to uri="stream:out" />
</idempotentConsumer>
</route>
</camelContext>
It would seem that the full 300k rows are going to be processed every 10 seconds (via the consumer.delay parameter), which seems very inefficient. I would expect some sort of feedback loop as part of the pattern, so that the query that feeds the filter could take advantage of the set of rows already processed.
However, the messageid column in the CAMEL_MESSAGEPROCESSED table has the pattern of
{1908988=null}
where 1908988 is the request.body.ID I've set the EIP to key on, so this doesn't make it easy to incorporate into my query.
Is there a better way of using the CAMEL_MESSAGEPROCESSED table as a feedback loop into my select statement so that the SQL server is performing most of the load?
Update:
I've since found out that it was my OGNL expression that was causing the odd messageid column value. Changing it to
<el>${in.body.ID}</el>
fixed it. Now that I have a usable messageid column, I can change my 'from' SQL query to
select * from transactions tr where tr.ID IN (select cmp.messageid from CAMEL_MESSAGEPROCESSED cmp where cmp.processor = 'transactionProcessor')
but I still think I'm corrupting the Idempotent Consumer EIP.
Does anyone else do this? Any reason not to?
Yes, it is, but you need to use scalable storage for holding the set of already-processed messages. You can use either Hazelcast - http://camel.apache.org/hazelcast-idempotent-repository-tutorial.html - or Infinispan - http://java.dzone.com/articles/clustered-idempotent-consumer - depending on which solution is already in your stack. Of course, a JDBC repository would work too, but only if it meets your performance criteria.
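As a rough sketch of what that could look like in the Java DSL, assuming camel-hazelcast is on the classpath. The repository name and the ${body[ID]} expression are illustrative (the sql component returns each row as a Map, so the key lookup should match your column name):
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.hazelcast.HazelcastIdempotentRepository;

public class IdempotentTransactionsRoute extends RouteBuilder {
    @Override
    public void configure() {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        // The repository name is shared by every node in the cluster,
        // so duplicates are filtered cluster-wide.
        HazelcastIdempotentRepository repo =
                new HazelcastIdempotentRepository(hz, "transactionProcessor");

        from("sql:select * from transactions?dataSource=myDataSource"
                + "&consumer.delay=10000&consumer.useIterator=true")
            .idempotentConsumer(simple("${body[ID]}"), repo)
                // Anything here only runs for non-duplicates
                .log("non-duplicate")
                .to("stream:out")
            .end();
    }
}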

Mule ESB: File outputPattern doesn't translate the pattern

I'm using Mule ESB CE 3.4. I have a requirement where I read configuration information from a database and use it as the file name for the file outbound endpoint. Here is an example (the code may not work as-is, since I've only given an outline):
<file:connector name="File-Data" autoDelete="false" streaming="true" validateConnections="true" doc:name="File" />
.....
<!-- Gets the configuration from database using a transformer. The transformer populates the configuration entries in a POJO and puts that in a session. -->
<custom-transformer class="com.test.DbGetConfigsTransformer" doc:name="Get Integration Configs"/>
....<!-- some code to process data -->
<logger message="$$$: #[sessionVars['currentFeed'].getFilePattern()]" doc:name="Set JSON File Name" />
<file:outbound-endpoint path="/temp" outputPattern="#[sessionVars['currentFeed'].getFilePattern()]" responseTimeout="10000" mimeType="text/plain" connector-ref="File-Data" doc:name="Save File"/>
The above code throws the following error:
1. The filename, directory name, or volume label syntax is incorrect (java.io.IOException)
java.io.WinNTFileSystem:-2 (null)
2. Unable to create a canonical file for /temp/Test_User_#[function:datestamp:YYYYMMddhhmmss.sss] (org.mule.api.MuleRuntimeException)
org.mule.util.FileUtils:354 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/MuleRuntimeException.html)
3. Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=file:///temp, connector=FileConnector
In the database table, the field is called FilePattern and it has the value 'Test_User_#[function:datestamp:YYYYMMddhhmmss.sss]'. If I hardcode the value or move it to the Mule configuration file as
file.name=Test_User_#[function:datestamp:YYYYMMddhhmmss.sss]
and use the configuration property syntax (e.g. ${file.name}) in the outputPattern, it works. But if I read the same value from the database and use it, it doesn't work and throws the error above. The logger displays the value read from the database as:
$$$: Test_#[function:datestamp:YYYYMMddhhmmss.sss]
Any help is much appreciated.
If your datestamp format does not vary, you should just store the environment prefix in your db and use something like:
outputPattern="#[sessionVars['prefix']+server.dateTime.format('YYYYMMddhhmmss.sss')]"
If you need to use your current database values, you can use basic Java string methods to find the correct substrings. For example:
#[sessionVars['currentFeed'].getFilePattern().substring(0,sessionVars['currentFeed'].getFilePattern().indexOf('function')-2)+server.dateTime.format('YYYYMMddhhmmss.sss')]
If you use different datestamp formats, you can find that part as well using similar String methods. However, I still suggest coming up with an implementation that only stores the environment prefix in the db.
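For illustration, here is the same substring idea in plain Java; the pattern value is the one from the question, and the Java date-format letters differ slightly from the Mule datestamp function:
// The value read from the database
String pattern = "Test_User_#[function:datestamp:YYYYMMddhhmmss.sss]";
// Everything before the "#[" marker is the static prefix, e.g. "Test_User_"
String prefix = pattern.substring(0, pattern.indexOf("#["));
// Append a timestamp built in Java instead of evaluating the Mule expression
String fileName = prefix
        + new java.text.SimpleDateFormat("yyyyMMddHHmmss.SSS").format(new java.util.Date());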

JMS Message Selector in Mule using date

In Mule 3.3.1, during async processing, when any of my external services are down, I would like to place the message on a queue (retryQueue) with a particular "next retry" timestamp. The flow that processes messages from this retryQueue selects messages based on the "next retry" time: if the "next retry" time is past the current time, the message is selected for processing. This is similar to what is described in the following link:
Retry JMS queue implementation to deliver failed messages after certain interval of time
Could you please provide sample code to achieve this?
I tried:
<on-redelivery-attempts-exceeded>
<message-properties-transformer scope="outbound">
<add-message-property key="putOnQueueTime" value="#[function:datestamp:yyyy-MM-dd hh:mm:ssZ]" />
</message-properties-transformer>
<jms:outbound-endpoint ref="retryQueue"/>
</on-redelivery-attempts-exceeded>
and on the receiving flow
<jms:inbound-endpoint ref="retryQueue">
<!-- I have no idea how to do the selector....
I tried....<jms:selector expression="#[header:INBOUND:putOnQueueTime > ((function:now) - 30)]"/>, but obviously it doesn't work. Gives me an invalid message selector. -->
</jms:inbound-endpoint>
Another note: if I set the outbound property using
<add-message-property key="putOnQueueTime" value="#[function:now]"/>,
it doesn't get carried over as part of the header. That's why I changed it to:
<add-message-property key="putOnQueueTime" value="#[function:datestamp:yyyy-MM-dd hh:mm:ssZ]" />
The expression in:
<jms:selector expression="#[header:INBOUND:putOnQueueTime > ((function:now) - 30)]"/>
should evaluate to a valid JMS selector, which is not the case here. Try with:
<jms:selector expression="putOnQueueTime > #[XXX]"/>
replacing XXX with an expression that creates the time you want.
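For reference, this is roughly how such a selector looks in plain JMS; the selector string contains a literal value computed before the consumer is created, and this sketch assumes putOnQueueTime is stored as epoch milliseconds rather than a formatted date string:
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class RetryQueueConsumer {
    public MessageConsumer createConsumer(Connection connection, Queue retryQueue) throws Exception {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Select messages whose putOnQueueTime is at least 30 seconds old
        long cutoff = System.currentTimeMillis() - 30000L;
        String selector = "putOnQueueTime <= " + cutoff;
        return session.createConsumer(retryQueue, selector);
    }
}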
We were trying to achieve this in one of the projects I'm working on, and tried what was suggested in the other answer here, with various variations, but it did not work. The problem is that the jms:selector doesn't support MEL, since it relies on ActiveMQ classes.
We filed a support ticket with MuleSoft, and their reply was that this is not supported.
What we ended up doing was this:
Create a simple component that does a Thread.sleep(numberOfMillis), where the number of millis is defined in a property (see the sketch below).
In the flow that was supposed to delay processing, we added this component as the first step after reading the message from the inbound endpoint.
Not the best solution ever made, but it works.
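A minimal sketch of such a component (the class and property names are illustrative):
// Hypothetical delay component: sleeps for a configurable number of millis
// and passes the payload through unchanged.
public class DelayComponent {

    // Injected from a property, e.g. via Spring in the Mule config
    private long delayMillis = 30000;

    public void setDelayMillis(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    public Object process(Object payload) throws InterruptedException {
        Thread.sleep(delayMillis);
        return payload;
    }
}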

How can I read the output of an SQL query into an Ant property?

I would like to feed the result of a simple SQL query (something like select SP_NUMBER from SERVICE_PACK), which I run inside my Ant script (using the sql task), back into an Ant property (e.g. service.pack.number).
The sql task can output to a file, but is there a more direct way?
Although I would have preferred not to create a file, I eventually went with the following solution:
The sql task is called as follows:
<sql ... print="yes" output="temp.properties"
expandProperties="true" showheaders="false" showtrailers="false" >
<![CDATA[
select 'current.sp.version=' || NAME from SERVICE_PACK;
select 'current.major.version=' || NAME from VERSION;
]]>
</sql>
The generated properties file will contain:
current.sp.version=03
current.major.version=5
Then you just load the properties file and delete it:
<property file="temp.properties" />
<delete file="temp.properties" />
<echo message="Current service pack version: ${current.sp.version}" />
<echo message="Current major version: ${current.major.version}" />
This works, and everything is right there in the Ant script (even if it's not the prettiest thing in the world!).
Perhaps the Ant exec task is more useful here? You can execute a standalone command and get the result into a property via outputproperty. You'll have to execute your SQL in some standalone fashion, unfortunately.
Alternatively, is it worth looking at the code of the Ant sql task and modifying it to accept an outputproperty? It sounds like a bit of a pain, but I think it could well be a very simple modification if you can't find anything more direct.
MySQL users: I had to modify the query like so:
SELECT CONCAT('mytable.id=', CAST(ID as CHAR)) from mytable
Without the CONCAT function, I just got back the text "1" (representing my id) in the properties file. Also, the CAST is needed in a MySQL system, otherwise the concatenated field is returned as a BLOB.