How can I print SQL query result logs with log4j?

I'm using Spring 3.1.1, MyBatis 3.1.1, MySQL 5.0.67. My Spring configuration is below:
<bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.username}" />
<property name="password" value="${jdbc.password}" />
<property name="validationQuery" value="select 1"/>
<property name="testWhileIdle" value="true"/>
<property name="timeBetweenEvictionRunsMillis" value="14400000"/>
<property name="testOnBorrow" value="false"/>
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="configLocation" value="classpath:mybatis/myBatisConfig.xml"/>
</bean>
<bean id="sqlSessionTemplate" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg ref="sqlSessionFactory"/>
</bean>
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager"
p:dataSource-ref="dataSource"/>
<tx:annotation-driven transaction-manager="transactionManager"/>
And log4j.properties is below:
log4j.logger.org.springframework=DEBUG
log4j.logger.org.apache=DEBUG
log4j.logger.org.mybatis=DEBUG
log4j.logger.java.sql=DEBUG
log4j.logger.java.sql.Connection=DEBUG
log4j.logger.java.sql.Statement=DEBUG
log4j.logger.java.sql.PreparedStatement=DEBUG
log4j.logger.java.sql.ResultSet=DEBUG
With this configuration I can see the SQL statement that is executed and the parameters passed to it, but I can't see the query results. My log looks like this:
[org.mybatis.spring.SqlSessionUtils] - Creating a new SqlSession
[org.mybatis.spring.SqlSessionUtils] - SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#4ccdd1f] was not registered for synchronization because synchronization is not active
[org.springframework.jdbc.datasource.DataSourceUtils] - Fetching JDBC Connection from DataSource
[org.mybatis.spring.transaction.SpringManagedTransaction] - JDBC Connection [ProxyConnection[PooledConnection[com.mysql.jdbc.JDBC4Connection#3cfde82]]] will not be managed by Spring
[java.sql.Connection] - ooo Using Connection [ProxyConnection[PooledConnection[com.mysql.jdbc.JDBC4Connection#3cfde82]]]
[java.sql.Connection] - ==> Preparing: SELECT col FROM table WHERE col1=? AND col2=?
[java.sql.PreparedStatement] - ==> Parameters: 93(Integer), 4(Integer)
[org.mybatis.spring.SqlSessionUtils] - Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#4ccdd1f]
[org.springframework.jdbc.datasource.DataSourceUtils] - Returning JDBC Connection to DataSource
Is there any way to print a log that includes the query results?

Two ways:
log4j.logger.java.sql.ResultSet=TRACE
Or use the namespaces to configure logging; this is the only logging method in MyBatis 3.2:
http://mybatis.github.io/mybatis-3/logging.html
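For the namespace approach, the logger name is the mapper namespace rather than a java.sql class. A minimal sketch, assuming a hypothetical mapper namespace com.example.mapper.MyMapper:
log4j.logger.com.example.mapper.MyMapper=TRACE
At TRACE level MyBatis logs the fetched rows as well (the lines prefixed with <==), which is exactly the result output missing above.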

What works for me is:
log4j.logger.org.springframework.jdbc.core.StatementCreatorUtils=TRACE
This prints entries like these:
...
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [12345], value class [java.math.BigDecimal], SQL type 3
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 2, parameter value [ADDRESS], value class [java.lang.String], SQL type 12
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 3, parameter value [20130916], value class [java.math.BigDecimal], SQL type 3
...
In my case I am running the query inside a JdbcBatchItemWriter in a Spring Batch application, which uses named parameters, so it may not be a general solution.
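For context, a sketch of that kind of writer (the table and column names here are made up, not from my actual job):
<bean id="itemWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
    <property name="dataSource" ref="dataSource"/>
    <!-- Named parameters are translated to '?' placeholders, which StatementCreatorUtils logs as it binds them -->
    <property name="sql" value="INSERT INTO people (id, address) VALUES (:id, :address)"/>
    <property name="itemSqlParameterSourceProvider">
        <bean class="org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider"/>
    </property>
</bean>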

Related

Apache Ignite SQL query too slow

A 3-node cluster.
Each node has 2 × L5520 physical processors, 64 GB of memory, and a 1 TB HDD.
I used COPY FROM ... FORMAT CSV to import the data into Ignite. Now when I execute a SQL query in the JDBC console it is very slow. Can someone suggest any optimizations?
You are missing indexes on the cache.
<property name="indexes">
<list>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="xyzFromKeyorVal"/>
</bean>
</list>
</property>
Add the above property to your cacheConfiguration. Here 'xyzFromKeyorVal' is simply whichever field of the key or value object you want to index.
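For a fuller picture, a minimal sketch of a cache configuration with an indexed field (the Person type and its fields are hypothetical):
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="personCache"/>
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="java.lang.Long"/>
                <property name="valueType" value="com.example.Person"/>
                <property name="fields">
                    <map>
                        <entry key="name" value="java.lang.String"/>
                        <entry key="age" value="java.lang.Integer"/>
                    </map>
                </property>
                <!-- Index the field that appears in your WHERE clauses -->
                <property name="indexes">
                    <list>
                        <bean class="org.apache.ignite.cache.QueryIndex">
                            <constructor-arg value="age"/>
                        </bean>
                    </list>
                </property>
            </bean>
        </list>
    </property>
</bean>
Without an index, every query falls back to a full scan of the cache, which would explain the slowness at this data size.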

Logging query execution time in Eclipselink

I'd like to configure EclipseLink to log executed SQL queries and query execution time.
In persistence.xml I have following properties set:
<property name="eclipselink.logging.level" value="ALL" />
<property name="eclipselink.logging.level.sql" value="ALL"/>
<property name="eclipselink.logging.parameters" value="true"/>
<property name="eclipselink.logging.timestamp" value="true" />
I can see SQL queries, bound parameters, and the execution timestamp, but not the execution time.
Here is an example of the messages I see in the log:
[EL Fine]: sql: 2016-07-14 17:22:40.114--Connection(1201360998)--SELECT ID, a, b, c FROM my_table WHERE (id = ?)
bind => [4]
Is there any way to get this information into the logs?
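One avenue worth trying (an assumption, not something the properties above provide) is EclipseLink's profiler, which records per-query performance data including execution time:
<property name="eclipselink.profiler" value="PerformanceProfiler"/>
PerformanceProfiler logs a profile for each query execution; the alternative value PerformanceMonitor instead dumps cumulative statistics at intervals.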

How to determine the right size of table entries for CacheJdbcBlobStore?

I am trying to set up a persistent cache store for my Apache Ignite application. The cache store I'm trying to use is CacheJdbcBlobStore. Here is the list of software I am using in my prototype:
JDK 1.8.0_66
Apache Tomcat 7.0.68
Apache Ignite 1.6.0
HyperSQL 2.3.4
My data source is set up as below in Spring:
<bean id="myDataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
<property name="url" value="jdbc:hsqldb:http://localhost"/>
<property name="username" value="sa"/>
<property name="password" value=""/>
</bean>
And the Ignite configuration:
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="cacheMode" value="REPLICATED"/>
<property name="name" value="session-cache"/>
<property name="cacheStoreFactory">
<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory">
<property name="dataSourceBean" value="myDataSource" />
</bean>
</property>
<property name="readThrough" value="true" />
<property name="writeThrough" value="true" />
</bean>
</list>
</property>
</bean>
However, when I ran the application, I got the following exception:
ERROR - root - Failed to update web session: null
class org.apache.ignite.IgniteException: Failed to save session: C56AF4E1DA01A439E43E512950D32D45
Caused by: javax.cache.integration.CacheWriterException: class org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: Failed to update keys (retry update if possible).: [C56AF4E1DA
01A439E43E512950D32D45]
Caused by: class org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: Failed to update keys (retry update if possible).: [C56AF4E1DA01A439E43E512950D32D45]
Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to put object [key=C56AF4E1DA01A439E43E512950D32D45, val=WebSessionEntity [id=C56AF4E1DA01A439E43E512950D32D45, createTime=1464182374328, accessTime=1464182374330, maxInactiveInterval=1800, attributes=[]]]
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:583)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2358)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2246)
... 37 more
Caused by: javax.cache.integration.CacheWriterException: Failed to put object [key=C56AF4E1DA01A439E43E512950D32D45, val=WebSessionEntity [id=C56AF4E1DA01A439E43E512950D32D45, createTime=1464182374328, accessTime=1464182374330, maxInactiveInterval=1800, attributes=[]]]
at org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStore.write(CacheJdbcBlobStore.java:281)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:575)
... 39 more
Caused by: java.sql.SQLDataException: data exception: string data, right truncation; table: ENTRIES column: AKEY
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.fetchResult(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.executeUpdate(Unknown Source)
at org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStore.write(CacheJdbcBlobStore.java:277)
... 40 more
I've figured out that you need to add the createTableQuery property to CacheJdbcBlobStoreFactory:
<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory">
...
<property name="createTableQuery" value="create table if not exists ENTRIES (akey VARBINARY(100) primary key, val BLOB(10k))" />
</bean>
So as to override the default one:
create table if not exists ENTRIES (akey binary primary key, val binary)
My question is: how would you determine the size of the table entry (especially the val column)? Currently I put down 10k. If I set the size too high it will be wasteful, but if I set it too low I'll get the exception mentioned above sooner or later. Thank you.
Perhaps you don't have to worry about the column sizes of the ENTRIES table. For example, in HyperSQL, if you create the table the following way:
create table if not exists ENTRIES (akey VARBINARY(100) primary key, val BLOB(10k))
10k is the maximum size of the val column rather than the space actually allocated, so nothing is wasted if the actual length stays below the limit.
In Oracle, you can create the table as below:
create table ENTRIES (akey VARCHAR2(100) primary key, val BLOB);
And again, you don't have to worry about the size of the val column. In Oracle, a BLOB column can be regarded as effectively unlimited (maximum size: (4 GB - 1) * the DB_BLOCK_SIZE initialization parameter, i.e. 8 TB to 128 TB). The space taken depends on the actual size of the cache, so no space will be wasted.
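Putting that together, a sketch of the store factory configuration for Oracle (assuming your myDataSource bean now points at an Oracle database) could be:
<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory">
    <property name="dataSourceBean" value="myDataSource"/>
    <!-- BLOB needs no explicit size limit in Oracle -->
    <property name="createTableQuery"
              value="create table ENTRIES (akey VARCHAR2(100) primary key, val BLOB)"/>
</bean>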

Pentaho JNDI source name as parameter (Multi-Tenant)

I have googled this for the last half hour and found hits for Pentaho parameters and the like, but nothing that appears to ask or answer this question.
I have a set of reports that are the same for each customer, but need to connect to different databases depending upon the customer who is running the report.
So my idea is to pass the JNDI data source name to the report at runtime as a parameter, so that the customer will connect to the correct database.
Is this possible, or is there a better way of managing a common set of reports used by different customers running against different databases, all within a single instance of the Pentaho engine?
OK, I have found a better solution using the little-documented multi-tenant feature.
1) Stop Pentaho
2) Modify pentaho-solutions/system/pentahoObjects.spring.xml:
<!-- Original Code
<bean id="IDBDatasourceService" class="org.pentaho.platform.engine.services.connection.datasource.dbcp.DynamicallyPooledOrJndiDatasourceService" scope="singleton">
<property name="pooledDatasourceService" ref="pooledOrJndiDatasourceService" />
<property name="nonPooledDatasourceService" ref="nonPooledOrJndiDatasourceService" />
</bean>
-->
<!--Begin Tenant -->
<bean id="IDBDatasourceService" class="org.pentaho.platform.engine.services.connection.datasource.dbcp.tenantaware.TenantAwareLoginParsingDatasourceService"
scope="singleton">
<property name="requireTenantId" value="false" />
<property name="datasourceNameFormat" value="{1}-{0}" />
<property name="tenantSeparator" value="#" />
<property name="tenantOnLeft" value="false" />
</bean>
<!-- End Tenant -->
3) Add a suffix to the data sources in biserver-ce/tomcat/webapps/pentaho/META-INF/context.xml:
<Resource
name="jdbc/MYDBSRC-xxx"
auth="Container"
type="javax.sql.DataSource"
factory="org.apache.commons.dbcp.BasicDataSourceFactory"
maxActive="20"
maxIdle="5"
maxWait="10000"
username="XXXX"
password="XXXX"
driverClassName="net.sourceforge.jtds.jdbc.Driver"
url="jdbc:jtds:sqlserver://192.168.42.0:1433;DatabaseName=SOMEDB"
/>
<Resource
name="jdbc/MYDBSRC-aaa"
auth="Container"
type="javax.sql.DataSource"
factory="org.apache.commons.dbcp.BasicDataSourceFactory"
maxActive="20"
maxIdle="5"
maxWait="10000"
username="XXXX"
password="XXXX"
driverClassName="net.sourceforge.jtds.jdbc.Driver"
url="jdbc:jtds:sqlserver://192.168.42.0:1433;DatabaseName=AOTHERDB"
/>
4) Delete /tomcat/conf/Catalina/localhost/pentaho.xml
5) Restart Pentaho and create a user such as someone#xxx
6) Create a report using the JNDI Name "MYDBSRC"
7) Log in as someone#xxx and you will get a different report/data source than you would logging in as user or as user#aaa.
Tadah!!

Hibernate 4.1.2.Final property hbm2ddl.import_files doesn't seem to work

Hi, I have an issue with hbm2ddl.import_files: it doesn't seem to work, and nothing about it appears in the log.
This is my configuration:
<property name="hibernateProperties">
<value>
hibernate.dialect=${hibernate.dialect}
hibernate.default_schema=${hibernate.default_schema}
hibernate.jdbc.batch_size=${hibernate.jdbc.batch_size}
hibernate.show_sql=${hibernate.show_sql}
hibernate.hbm2ddl.auto=${hibernate.hbm2ddl.auto}
hibernate.id.new_generator_mappings=${hibernate.id.new_generator_mappings}
hibernate.hbm2ddl.import_files=${hibernate.hbm2ddl.import_files}
<!-- Auto Generated Schemas and tables not good for production
hibernate.hbm2ddl.auto=update-->
</value>
</property>
The property is hibernate.hbm2ddl.import_files=/import.sql, and the file is:
insert into DEPARTAMENTO (NOMBRE_DEPART) values ('AMAZONAS')
The jdbc.properties:
#org.hibernate.dialect.PostgreSQLDialect
hibernate.default_schema = "DBMERCANCIAS"
hibernate.show_sql = true
hibernate.id.new_generator_mappings = true
hibernate.hbm2ddl.auto = create
hibernate.jdbc.batch_size = 5
#Default the factory to use to instantiate transactions org.transaction.JDBCTransactionFactory
hibernate.transaction.factory_class=org.transaction.JDBCTransactionFactory
#Initialize values statements only on create-drop or create
hibernate.hbm2ddl.import_files = /import.sql
The database is PostgreSQL 9.1.1, with Spring 3.1.0.RELEASE and Hibernate 4.1.2.Final. hibernate.hbm2ddl.auto is set to "create", and the schema and tables are created, but the SQL insert never runs. Why? I cannot see anywhere in the log that this command ran.
My error was the file location in the Hibernate properties.
hibernate.hbm2ddl.import_files = /META-INF/spring/import.sql
is the correct location.
You could put import.sql on the classpath root (/classes/import.sql) and remove the hibernate.hbm2ddl.import_files property from the Hibernate configuration; Hibernate picks up /import.sql automatically.
NOTE: hibernate.hbm2ddl.auto must be set to create
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">${hibernate.dialect}</prop>
<prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
<prop key="hibernate.hbm2ddl.auto">create</prop>
</props>
</property>
</bean>