Logging query execution time in EclipseLink

I'd like to configure EclipseLink to log executed SQL queries and their execution time.
In persistence.xml I have the following properties set:
<property name="eclipselink.logging.level" value="ALL" />
<property name="eclipselink.logging.level.sql" value="ALL"/>
<property name="eclipselink.logging.parameters" value="true"/>
<property name="eclipselink.logging.timestamp" value="true" />
I can see the SQL queries, bound parameters, and the execution timestamp, but not the execution time.
Here is an example of the messages I see in the log:
[EL Fine]: sql: 2016-07-14 17:22:40.114--Connection(1201360998)--SELECT ID, a, b, c FROM my_table WHERE (id = ?)
bind => [4]
Is there any way to get this information into the logs?
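One possibility worth trying (an assumption, not confirmed in the thread; check your EclipseLink version's documentation): EclipseLink's SQL logger does not record durations, but its performance profiler does, and it is enabled with a single persistence.xml property.

```xml
<!-- Sketch: PerformanceProfiler logs a per-query timing breakdown,
     including execution time; expect considerably more verbose output. -->
<property name="eclipselink.profiler" value="PerformanceProfiler"/>
```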

Related

Not being able to fetch the result to log in POST API

I am trying to make a POST API request and fetch data through a SQL query; however, I cannot figure out how to log the results. Any help would be appreciated. The code I have written is below:
<statement>
<sql><![CDATA[SELECT * FROM TEST2 t WHERE t.CORRELATION_ID = 1]]></sql>
<result column="CORRELATION_ID" name="CORRELATION_ID"/>
<result column="MSISDN" name="MSISDN"/>
<result column="FL" name="FLAG"/>
</statement>
<log level="full">
<property name="After Query" value="Below Query"/>
<property expression="get-property(FLAG)" name="FLAGG"/>
</log>
I am not getting any value in the logs; it just prints "FLAGG = " with an empty value.
Thank You
When you fetch data from the database, the returned values are assigned to message context properties. Your log mediator looks a bit off. Can you try the following?
<log level="full">
<property name="After Query" value="Below Query"/>
<property expression="$ctx:FLAG" name="FLAGG"/>
<property expression="$ctx:CORRELATION_ID" name="CORRELATION_ID"/>
</log>
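For context, a sketch of how the pieces fit together in a WSO2 ESB sequence (the connection settings here are placeholders, not taken from the question): each result element of the dblookup mediator stores a selected column as a message context property under the given name, and $ctx:NAME reads it back in the log mediator.

```xml
<dblookup>
  <connection>
    <pool>
      <!-- placeholder connection settings; substitute your own -->
      <driver>org.h2.Driver</driver>
      <url>jdbc:h2:mem:testdb</url>
      <user>sa</user>
      <password></password>
    </pool>
  </connection>
  <statement>
    <sql><![CDATA[SELECT * FROM TEST2 t WHERE t.CORRELATION_ID = 1]]></sql>
    <!-- each result stores one column value as a context property -->
    <result column="CORRELATION_ID" name="CORRELATION_ID"/>
    <result column="FL" name="FLAG"/>
  </statement>
</dblookup>
<log level="full">
  <property expression="$ctx:FLAG" name="FLAGG"/>
</log>
```

Note that dblookup only captures the first row of the result set; for multiple rows you would need a different approach.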

Pentaho JNDI source name as parameter (Multi-Tenant)

I have googled this for the last half hour and found hits for Pentaho parameters etc., but nothing that appears to ask or answer this question.
I have a set of reports that are the same for each customer, but need to connect to different databases depending upon the customer who is running the report.
So my idea is to pass the JNDI data source name to the report at runtime as a parameter, so that the customer will connect to the correct database.
Is this possible, or is there a better way of managing a common set of reports used by different customers running on different databases, all within the same single instance of the Pentaho engine?
OK, I have found a better solution using the little-documented multi-tenant feature.
1) Stop Pentaho
2) Modify ( pentaho-solutions/system/pentahoObjects.spring.xml )
<!-- Original Code
<bean id="IDBDatasourceService" class="org.pentaho.platform.engine.services.connection.datasource.dbcp.DynamicallyPooledOrJndiDatasourceService" scope="singleton">
<property name="pooledDatasourceService" ref="pooledOrJndiDatasourceService" />
<property name="nonPooledDatasourceService" ref="nonPooledOrJndiDatasourceService" />
</bean>
-->
<!--Begin Tenant -->
<bean id="IDBDatasourceService" class="org.pentaho.platform.engine.services.connection.datasource.dbcp.tenantaware.TenantAwareLoginParsingDatasourceService"
scope="singleton">
<property name="requireTenantId" value="false" />
<property name="datasourceNameFormat" value="{1}-{0}" />
<property name="tenantSeparator" value="#" />
<property name="tenantOnLeft" value="false" />
</bean>
<!-- End Tenant -->
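The datasourceNameFormat is a java.text.MessageFormat pattern that combines the report's data source name with the tenant id parsed from the login. A rough illustration of the resolution (this is not Pentaho code, and the argument positions are an assumption; check the TenantAwareLoginParsingDatasourceService source if the order matters to you):

```java
import java.text.MessageFormat;

public class TenantNameDemo {
    public static void main(String[] args) {
        // Login "someone#xxx" split on tenantSeparator "#" with
        // tenantOnLeft=false yields tenant id "xxx" (assumption).
        String tenantId = "xxx";            // {0} (assumed position)
        String datasourceName = "MYDBSRC";  // {1} (assumed position)
        String resolved = MessageFormat.format("{1}-{0}", tenantId, datasourceName);
        System.out.println(resolved); // MYDBSRC-xxx, matching the context.xml names below
    }
}
```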
3) Add Suffix to Data Sources ( biserver-ce/tomcat/webapps/pentaho/META-INF/context.xml )
<Resource
name="jdbc/MYDBSRC-xxx"
auth="Container"
type="javax.sql.DataSource"
factory="org.apache.commons.dbcp.BasicDataSourceFactory"
maxActive="20"
maxIdle="5"
maxWait="10000"
username="XXXX"
password="XXXX"
driverClassName="net.sourceforge.jtds.jdbc.Driver"
url="jdbc:jtds:sqlserver://192.168.42.0:1433;DatabaseName=SOMEDB"
/>
<Resource
name="jdbc/MYDBSRC-aaa"
auth="Container"
type="javax.sql.DataSource"
factory="org.apache.commons.dbcp.BasicDataSourceFactory"
maxActive="20"
maxIdle="5"
maxWait="10000"
username="XXXX"
password="XXXX"
driverClassName="net.sourceforge.jtds.jdbc.Driver"
url="jdbc:jtds:sqlserver://192.168.42.0:1433;DatabaseName=AOTHERDB"
/>
4) Delete /tomcat/conf/Catalina/localhost/pentaho.xml
5) Restart Pentaho and create a user such as someone#xxx, etc.
6) Create a report using the JNDI Name "MYDBSRC"
7) Log in as someone#xxx and you will get a different report/datasource than if you log in as user or as user#aaa
Tadah !!

How can I print SQL query result log with log4j?

I'm using Spring 3.1.1, MyBatis 3.1.1, MySQL 5.0.67. My Spring configuration is below:
<bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.username}" />
<property name="password" value="${jdbc.password}" />
<property name="validationQuery" value="select 1"/>
<property name="testWhileIdle" value="true"/>
<property name="timeBetweenEvictionRunsMillis" value="14400000"/>
<property name="testOnBorrow" value="false"/>
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="configLocation" value="classpath:mybatis/myBatisConfig.xml"/>
</bean>
<bean id="sqlSessionTemplate" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg ref="sqlSessionFactory"/>
</bean>
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager"
p:dataSource-ref="dataSource"/>
<tx:annotation-driven transaction-manager="transactionManager"/>
And log4j.properties is below:
log4j.logger.org.springframework=DEBUG
log4j.logger.org.apache=DEBUG
log4j.logger.org.mybatis=DEBUG
log4j.logger.java.sql=DEBUG
log4j.logger.java.sql.Connection=DEBUG
log4j.logger.java.sql.Statement=DEBUG
log4j.logger.java.sql.PreparedStatement=DEBUG
log4j.logger.java.sql.ResultSet=DEBUG
With this configuration I can see the SQL statement that is executed and its parameters, but I can't see the query result in the log. My log looks like this:
[org.mybatis.spring.SqlSessionUtils] - Creating a new SqlSession
[org.mybatis.spring.SqlSessionUtils] - SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#4ccdd1f] was not registered for synchronization because synchronization is not active
[org.springframework.jdbc.datasource.DataSourceUtils] - Fetching JDBC Connection from DataSource
[org.mybatis.spring.transaction.SpringManagedTransaction] - JDBC Connection [ProxyConnection[PooledConnection[com.mysql.jdbc.JDBC4Connection#3cfde82]]] will not be managed by Spring
[java.sql.Connection] - ooo Using Connection [ProxyConnection[PooledConnection[com.mysql.jdbc.JDBC4Connection#3cfde82]]]
[java.sql.Connection] - ==> Preparing: SELECT col FROM table WHERE col1=? AND col2=?
[java.sql.PreparedStatement] - ==> Parameters: 93(Integer), 4(Integer)
[org.mybatis.spring.SqlSessionUtils] - Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#4ccdd1f]
[org.springframework.jdbc.datasource.DataSourceUtils] - Returning JDBC Connection to DataSource
Is there any way to print log including query result?
Two ways:
log4j.logger.java.sql.ResultSet=TRACE
Or use the mapper namespaces to configure logging. This is the only logging method in MyBatis 3.2:
http://mybatis.github.io/mybatis-3/logging.html
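Namespace-based logging means enabling TRACE on a logger named after your mapper namespace (or its package) rather than on java.sql.*. A sketch with a hypothetical namespace (com.example.mapper is not from the question; substitute your own):

```
# TRACE on the mapper namespace/package logs the statement, its
# parameters, and each fetched row (the "<== Columns"/"<== Row" lines)
log4j.logger.com.example.mapper=TRACE
```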
What works for me is:
log4j.logger.org.springframework.jdbc.core.StatementCreatorUtils=TRACE
This prints entries like these:
...
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [12345], value class [java.math.BigDecimal], SQL type 3
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 2, parameter value [ADDRESS], value class [java.lang.String], SQL type 12
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 3, parameter value [20130916], value class [java.math.BigDecimal], SQL type 3
...
In my case I am running a query inside a JdbcBatchItemWriter in a Spring Batch application, which uses named params, so this may not be a general solution.

Is an XA transaction really atomic?

It seems that I don't completely understand how an XA transaction works. I thought it was atomic: that when I commit a transaction, the new messages and the new data become available at the same time.
This misunderstanding led me to the following issue:
New rows are inserted into the DB and a message is sent to a queue in a transactional route. In another route the message is received. That route then tries to perform some manipulations on the rows that were inserted in the previous route, but it doesn't see them!
The second route is configured to roll the message back to the queue when an exception happens. And I see that on the second attempt the route does see the rows!
In conclusion, I would ask the following questions:
Is an XA transaction really atomic?
If no, how can I configure commit order for my transactional resources?
Additional note: the issue is found in Fuse ESB/ServiceMix 4.4.1
@Jake: my Camel context configuration looks like the following:
<osgi:reference id="osgiPlatformTransactionManager" interface="org.springframework.transaction.PlatformTransactionManager"/>
<osgi:reference id="osgiJtaTransactionManager" interface="javax.transaction.TransactionManager"/>
<osgi:reference id="myDataSource"
interface="javax.sql.DataSource"
filter="(osgi.jndi.service.name=jdbc/postgresXADB)"/>
<bean id="PROPAGATION_MANDATORY" class="org.apache.camel.spring.spi.SpringTransactionPolicy">
<property name="transactionManager" ref="osgiPlatformTransactionManager"/>
<property name="propagationBehaviorName" value="PROPAGATION_MANDATORY"/>
</bean>
<bean id="PROPAGATION_REQUIRED" class="org.apache.camel.spring.spi.SpringTransactionPolicy">
<property name="transactionManager" ref="osgiPlatformTransactionManager"/>
<property name="propagationBehaviorName" value="PROPAGATION_REQUIRED"/>
</bean>
<bean id="jmstx" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="configuration" ref="jmsTxConfig" />
</bean>
<bean id="jmsTxConfig" class="org.apache.camel.component.jms.JmsConfiguration">
<property name="connectionFactory" ref="jmsXaPoolConnectionFactory"/>
<property name="transactionManager" ref="osgiPlatformTransactionManager"/>
<property name="transacted" value="false"/>
<property name="cacheLevelName" value="CACHE_NONE"/>
<property name="concurrentConsumers" value="${jms.concurrentConsumers}" />
</bean>
<bean id="jmsXaPoolConnectionFactory" class="org.apache.activemq.pool.XaPooledConnectionFactory">
<property name="maxConnections" value="${jms.maxConnections}" />
<property name="connectionFactory" ref="jmsXaConnectionFactory" />
<property name="transactionManager" ref="osgiJtaTransactionManager" />
</bean>
<bean id="jmsXaConnectionFactory" class="org.apache.activemq.ActiveMQXAConnectionFactory">
<property name="brokerURL" value="${jms.broker.url}"/>
<property name="redeliveryPolicy">
<bean class="org.apache.activemq.RedeliveryPolicy">
<property name="maximumRedeliveries" value="-1"/>
<property name="initialRedeliveryDelay" value="2000" />
<property name="redeliveryDelay" value="5000" />
</bean>
</property>
</bean>
DB data source is configured as following:
<bean id="myDataSource" class="org.postgresql.xa.PGXADataSource">
<property name="serverName" value="${db.host}"/>
<property name="databaseName" value="${db.name}"/>
<property name="portNumber" value="${db.port}"/>
<property name="user" value="${db.user}"/>
<property name="password" value="${db.password}"/>
</bean>
<service ref="myDataSource" interface="javax.sql.XADataSource">
<service-properties>
<entry key="osgi.jndi.service.name" value="jdbc/postgresXADB"/>
<entry key="datasource" value="postgresXADB"/>
</service-properties>
</service>
I'm not an expert in this stuff, but my view is that the atomicity XA provides guarantees only that:
Either the entire commit occurs or the entire commit rolls back.
The whole commit/rollback completes before the commit request returns to whoever called it.
I don't think any guarantee is made about the individual participants completing at the same instant, nor is there any kind of 'commit dependency tree' maintained that guarantees subsequent processing only happens on participants that have already committed.
I think that to achieve what you want, you might need to put the message queue outside the main transaction... which destroys the whole point of the transaction in the first place :(
You might just have to put a retry/timeout loop in your downstream processing. The alternative might be to explore the concurrency options to see if you can allow that downstream transaction to 'see' the upstream one.
Hopefully this answer will prompt someone with more knowledge of this stuff to chip in!
For XA transactions, when you commit, the transaction manager runs an XA commit against each database, ensuring that every branch of the XA transaction will be committed eventually, but not simultaneously. Since the data in each database is committed separately, the commit is not atomic in that sense, and it is impossible to make data modifications across databases visible atomically.
For your application, there are two choices to avoid the problem.
Don't use XA transactions; use the outbox pattern instead. You can update part of the data, send a message to the queue, and then return. Any order-dependent operation is put into the queue, where you can easily customize the order.
Building on your XA solution: when you want to read the newest data, issue a select ... for update, which waits on the row locks held by the unfinished XA transaction. Once the XA transaction has completed, the select for update returns the newest data.
Your AMQ_SCHEDULED_DELAY header is a workaround, but it does not work when exceptions happen.
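The second option, sketched against a hypothetical table (the table and column names are illustrative, not from the question): as the answer suggests, the downstream route's query blocks on row locks held by the still-committing upstream XA branch and returns once the data is committed.

```sql
-- Downstream route: waits if another (XA) transaction holds locks on
-- matching rows; returns the latest committed data after its commit.
SELECT * FROM my_table WHERE id = ? FOR UPDATE;
```

Note this helps for rows the query can lock; rows inserted by a not-yet-committed transaction are simply invisible until the commit lands, so a retry loop may still be needed.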

Hibernate 4.1.2.Final property hbm2ddl.import_files doesn't seem to work

Hi, I have an issue with hbm2ddl.import_files: it doesn't seem to work, and nothing about it appears in the log.
This is my configuration:
<property name="hibernateProperties">
<value>
hibernate.dialect=${hibernate.dialect}
hibernate.default_schema=${hibernate.default_schema}
hibernate.jdbc.batch_size=${hibernate.jdbc.batch_size}
hibernate.show_sql=${hibernate.show_sql}
hibernate.hbm2ddl.auto=${hibernate.hbm2ddl.auto}
hibernate.id.new_generator_mappings=${hibernate.id.new_generator_mappings}
hibernate.hbm2ddl.import_files=${hibernate.hbm2ddl.import_files}
<!-- Auto Generated Schemas and tables not good for production
hibernate.hbm2ddl.auto=update-->
</value>
</property>
hibernate.hbm2ddl.import_files is set to /import.sql, and the file contains:
insert into DEPARTAMENTO (NOMBRE_DEPART,REFERENCIA_DEPART) values ('AMAZONAS')
The jdbc.properties:
#org.hibernate.dialect.PostgreSQLDialect
hibernate.default_schema = "DBMERCANCIAS"
hibernate.show_sql = true
hibernate.id.new_generator_mappings = true
hibernate.hbm2ddl.auto = create
hibernate.jdbc.batch_size = 5
#Default the factory to use to instantiate transactions org.transaction.JDBCTransactionFactory
hibernate.transaction.factory_class=org.transaction.JDBCTransactionFactory
#Initialize values statements only on create-drop or create
hibernate.hbm2ddl.import_files = /import.sql
The database is PostgreSQL 9.1.1, with Spring 3.1.0.RELEASE and Hibernate 4.1.2.Final. hibernate.hbm2ddl.auto is set to "create"; the schema and tables are created, but the SQL insert does not run. Why? Nor can I see anywhere in the log where this command ran.
My error was the file location in the Hibernate properties.
hibernate.hbm2ddl.import_files = /META-INF/spring/import.sql
is the correct location.
You could put import.sql on the classpath (/classes/import.sql) and remove the property hibernate.hbm2ddl.import_files from your Hibernate configuration.
NOTE: hibernate.hbm2ddl.auto must be set to create
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">${hibernate.dialect}</prop>
<prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
<prop key="hibernate.hbm2ddl.auto">create</prop>
</props>
</property>
</bean>
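For reference, the default convention needs no property at all: a file named import.sql at the classpath root is executed automatically after schema creation when hbm2ddl.auto is create or create-drop. A sketch (the second value is invented to make the statement valid, since the original lists two columns but only one value):

```sql
-- src/main/resources/import.sql  (ends up at the classpath root)
insert into DEPARTAMENTO (NOMBRE_DEPART, REFERENCIA_DEPART) values ('AMAZONAS', 'AMZ')
```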