Hibernate 4.1.2.FINAL property hbm2ddl.import_files doesn't seem to work - sql

Hi, I have an issue with hbm2ddl.import_files: it doesn't seem to work, and nothing about it appears in the log.
This is my configuration:
<property name="hibernateProperties">
<value>
hibernate.dialect=${hibernate.dialect}
hibernate.default_schema=${hibernate.default_schema}
hibernate.jdbc.batch_size=${hibernate.jdbc.batch_size}
hibernate.show_sql=${hibernate.show_sql}
hibernate.hbm2ddl.auto=${hibernate.hbm2ddl.auto}
hibernate.id.new_generator_mappings=${hibernate.id.new_generator_mappings}
hibernate.hbm2ddl.import_files=${hibernate.hbm2ddl.import_files}
<!-- Auto Generated Schemas and tables not good for production
hibernate.hbm2ddl.auto=update-->
</value>
</property>
hibernate.hbm2ddl.import_files is set to /import.sql, and the file contains:
insert into DEPARTAMENTO (NOMBRE_DEPART,REFERENCIA_DEPART) values ('AMAZONAS')
The jdbc.properties file:
#org.hibernate.dialect.PostgreSQLDialect
hibernate.default_schema = "DBMERCANCIAS"
hibernate.show_sql = true
hibernate.id.new_generator_mappings = true
hibernate.hbm2ddl.auto = create
hibernate.jdbc.batch_size = 5
#Default factory used to instantiate transactions: org.transaction.JDBCTransactionFactory
hibernate.transaction.factory_class=org.transaction.JDBCTransactionFactory
#Import statements run only when hbm2ddl.auto is create-drop or create
hibernate.hbm2ddl.import_files = /import.sql
The database is PostgreSQL 9.1.1, with Spring 3.1.0.RELEASE and Hibernate 4.1.2.Final. hibernate.hbm2ddl.auto is set to "create"; the schema and tables are created, but the SQL insert never runs. Why? I can't see anywhere in the log where this command is executed.

My error was the location in the Hibernate properties:
hibernate.hbm2ddl.import_files = /META-INF/spring/import.sql
is the correct location.

You could put import.sql on the classpath (/classes/import.sql) and remove the hibernate.hbm2ddl.import_files property from the Hibernate configuration.
NOTE: hibernate.hbm2ddl.auto must be set to create.
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">${hibernate.dialect}</prop>
<prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
<prop key="hibernate.hbm2ddl.auto">create</prop>
</property>
</bean>

Related

Affinity Backup Filter

Trying to set up an affinity backup filter. Most of the bits are clear and I am following the details outlined here - https://ignite.apache.org/releases/latest/javadoc/index.html?org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html
which talks about backups by availability zone, where each node can set the value of its AZ.
However, the thing I am not very clear on is where this AZ value goes for each node; the above link says "that the environment variable "AVAILABILITY_ZONE" be set appropriately on each node via some means external to Ignite".
I see a couple of options for where this could be set:
Use System.setProperty() (based on the above comment about an environment variable)
Set it as part of IgniteConfiguration.setUserAttributes() (based on the ClusterNodeAttributeAffinityBackupFilter source, which compares node attributes)
Any inputs around this are helpful.
TIA
I suppose this documentation could be better structured.
The main idea is to set a user-defined attribute, for example "color", to red or blue:
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="userAttributes">
<map>
<entry key="color" value="blue"/>
</map>
</property>
</bean>
Or config.setUserAttributes(F.asMap("color", "red"));
And use it in your backup filter implementation:
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>color</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
In that case, prior to assigning a backup copy, Ignite will check that the "color" attribute of the candidate node differs from that of the node already chosen (i.e. if the primary is "red", then it needs to search for a "blue" node).
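For completeness, the same wiring as a minimal programmatic sketch; the cache name, backup count, and zone value are made up for illustration, and the attribute name follows the javadoc example:

import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class BackupFilterExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Tag this node with its zone; each node sets its own value.
        cfg.setUserAttributes(Collections.singletonMap("AVAILABILITY_ZONE", "zone-a"));

        // Backup filter that rejects candidate nodes whose attribute matches a node already chosen.
        RendezvousAffinityFunction affinity = new RendezvousAffinityFunction();
        affinity.setAffinityBackupFilter(
                new ClusterNodeAttributeAffinityBackupFilter("AVAILABILITY_ZONE"));

        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
        cacheCfg.setBackups(1);
        cacheCfg.setAffinity(affinity);
        cfg.setCacheConfiguration(cacheCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cache("myCache").put(1, "value");
        }
    }
}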

apache ignite SQL query too slow

A 3-node cluster.
Each node has 2 L5520 physical processors, 64 GB of memory, and a 1 TB HDD.
I used COPY FROM ... FORMAT CSV to import the data into Ignite. Now when I execute a SQL query in the JDBC console, it's very slow. Can someone suggest any optimizations?
You are missing indexes on the cache.
<property name="indexes">
<list>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="xyzFromKeyorVal"/>
</bean>
</list>
</property>
Add the above property to the QueryEntity in your cacheConfiguration. 'xyzFromKeyorVal' is simply whatever field from the key or value object you want to index.
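The same index can also be declared programmatically through a QueryEntity. A minimal sketch, assuming a hypothetical com.example.Person value class whose 'xyzFromKeyorVal' field should be indexed:

import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class IndexedCacheConfig {
    public static CacheConfiguration<Integer, Object> personCache() {
        // Declare the queryable field and the index on it.
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("xyzFromKeyorVal", String.class.getName());

        QueryEntity entity = new QueryEntity(Integer.class.getName(), "com.example.Person");
        entity.setFields(fields);
        entity.setIndexes(Collections.singletonList(new QueryIndex("xyzFromKeyorVal")));

        CacheConfiguration<Integer, Object> cacheCfg = new CacheConfiguration<>("personCache");
        cacheCfg.setQueryEntities(Collections.singletonList(entity));
        return cacheCfg;
    }
}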

How to determine the right size of table entries for CacheJdbcBlobStore?

I am trying to set up a persistent cache store for my Apache Ignite application. The cache store I am trying to use is CacheJdbcBlobStore. Here is the list of software I am using in my prototype:
JDK 1.8.0_66
Apache Tomcat 7.0.68
Apache Ignite 1.6.0
HyperSQL 2.3.4
My data source is set up as below in Spring:
<bean id="myDataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
<property name="url" value="jdbc:hsqldb:http://localhost"/>
<property name="username" value="sa"/>
<property name="password" value=""/>
</bean>
And the Ignite configuration:
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="cacheMode" value="REPLICATED"/>
<property name="name" value="session-cache"/>
<property name="cacheStoreFactory">
<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory">
<property name="dataSourceBean" value="myDataSource" />
</bean>
</property>
<property name="readThrough" value="true" />
<property name="writeThrough" value="true" />
</bean>
</list>
</property>
</bean>
However, when I ran the application, I got the following exception:
ERROR - root - Failed to update web session: null
class org.apache.ignite.IgniteException: Failed to save session: C56AF4E1DA01A439E43E512950D32D45
Caused by: javax.cache.integration.CacheWriterException: class org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: Failed to update keys (retry update if possible).: [C56AF4E1DA01A439E43E512950D32D45]
Caused by: class org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: Failed to update keys (retry update if possible).: [C56AF4E1DA01A439E43E512950D32D45]
Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to put object [key=C56AF4E1DA01A439E43E512950D32D45, val=WebSessionEntity [id=C56AF4E1DA01A439E43E512950D32D45, createTime=1464182374328, accessTime=1464182374330, maxInactiveInterval=1800, attributes=[]]]
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:583)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2358)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2246)
... 37 more
Caused by: javax.cache.integration.CacheWriterException: Failed to put object [key=C56AF4E1DA01A439E43E512950D32D45, val=WebSessionEntity [id=C56AF4E1DA01A439E43E512950D32D45, createTime=1464182374328, accessTime=1464182374330, maxInactiveInterval=1800, attributes=[]]]
at org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStore.write(CacheJdbcBlobStore.java:281)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:575)
... 39 more
Caused by: java.sql.SQLDataException: data exception: string data, right truncation; table: ENTRIES column: AKEY
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.fetchResult(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.executeUpdate(Unknown Source)
at org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStore.write(CacheJdbcBlobStore.java:277)
... 40 more
I've figured out that you need to add the createTableQuery property to CacheJdbcBlobStoreFactory:
<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory">
...
<property name="createTableQuery" value="create table if not exists ENTRIES (akey VARBINARY(100) primary key, val BLOB(10k))" />
</bean>
So as to override the default one:
create table if not exists ENTRIES (akey binary primary key, val binary)
My question is: how would you determine the right size for a table entry (especially the val column)? Currently I put down 10k. If I set the size too high it will be wasteful, but if I set it too low, I'll get the exception mentioned above sooner or later. Thank you.
Perhaps you don't have to worry about the column sizes of table ENTRIES. For example, in the HyperSQL database, if you create the table the following way:
create table if not exists ENTRIES (akey VARBINARY(100) primary key, val BLOB(10k))
10k is the maximum size of column val rather than the actual space taken. Therefore nothing is wasted if the actual length is lower than the maximum limit.
In Oracle, you can create the table as below:
create table ENTRIES (akey VARCHAR2(100) primary key, val BLOB);
And again, you don't have to worry about the size of column val. A column of type BLOB can be regarded as effectively unlimited in Oracle (maximum size: (4 GB - 1) * the DB_BLOCK_SIZE initialization parameter, i.e. 8 TB to 128 TB). The actual space taken depends on the actual size of the cached data; no space is wasted.
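For reference, the store wiring with the overridden DDL can also be expressed in Java. A sketch mirroring the XML above (the 10k limit is the value chosen in the question, not a recommendation):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;

public class SessionCacheConfig {
    public static CacheConfiguration<String, Object> sessionCache() {
        CacheJdbcBlobStoreFactory<String, Object> storeFactory = new CacheJdbcBlobStoreFactory<>();
        storeFactory.setDataSourceBean("myDataSource");
        // Override the default DDL so the key and value columns are wide enough.
        storeFactory.setCreateTableQuery(
                "create table if not exists ENTRIES (akey VARBINARY(100) primary key, val BLOB(10k))");

        CacheConfiguration<String, Object> cacheCfg = new CacheConfiguration<>("session-cache");
        cacheCfg.setCacheMode(CacheMode.REPLICATED);
        cacheCfg.setCacheStoreFactory(storeFactory);
        cacheCfg.setReadThrough(true);
        cacheCfg.setWriteThrough(true);
        return cacheCfg;
    }
}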

How can I print SQL query result log with log4j?

I'm using Spring 3.1.1, MyBatis 3.1.1, MySQL 5.0.67. My Spring configuration is below:
<bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.username}" />
<property name="password" value="${jdbc.password}" />
<property name="validationQuery" value="select 1"/>
<property name="testWhileIdle" value="true"/>
<property name="timeBetweenEvictionRunsMillis" value="14400000"/>
<property name="testOnBorrow" value="false"/>
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="configLocation" value="classpath:mybatis/myBatisConfig.xml"/>
</bean>
<bean id="sqlSessionTemplate" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg ref="sqlSessionFactory"/>
</bean>
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager"
p:dataSource-ref="dataSource"/>
<tx:annotation-driven transaction-manager="transactionManager"/>
And log4j.properties is below:
log4j.logger.org.springframework=DEBUG
log4j.logger.org.apache=DEBUG
log4j.logger.org.mybatis=DEBUG
log4j.logger.java.sql=DEBUG
log4j.logger.java.sql.Connection=DEBUG
log4j.logger.java.sql.Statement=DEBUG
log4j.logger.java.sql.PreparedStatement=DEBUG
log4j.logger.java.sql.ResultSet=DEBUG
With this configuration, I can see the SQL statement that is executed and the parameters passed to it, but I can't see the query result in the log. My log looks like this:
[org.mybatis.spring.SqlSessionUtils] - Creating a new SqlSession
[org.mybatis.spring.SqlSessionUtils] - SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#4ccdd1f] was not registered for synchronization because synchronization is not active
[org.springframework.jdbc.datasource.DataSourceUtils] - Fetching JDBC Connection from DataSource
[org.mybatis.spring.transaction.SpringManagedTransaction] - JDBC Connection [ProxyConnection[PooledConnection[com.mysql.jdbc.JDBC4Connection#3cfde82]]] will not be managed by Spring
[java.sql.Connection] - ooo Using Connection [ProxyConnection[PooledConnection[com.mysql.jdbc.JDBC4Connection#3cfde82]]]
[java.sql.Connection] - ==> Preparing: SELECT col FROM table WHERE col1=? AND col2=?
[java.sql.PreparedStatement] - ==> Parameters: 93(Integer), 4(Integer)
[org.mybatis.spring.SqlSessionUtils] - Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#4ccdd1f]
[org.springframework.jdbc.datasource.DataSourceUtils] - Returning JDBC Connection to DataSource
Is there any way to print logs that include the query result?
Two ways:
log4j.logger.java.sql.ResultSet=TRACE
Or use namespace-based logging; this is the only logging method in MyBatis 3.2:
http://mybatis.github.io/mybatis-3/logging.html
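For example, with log4j that means enabling TRACE on a logger named after a mapper namespace (the mapper name below is hypothetical); at TRACE level MyBatis logs the returned rows as well as the statements and parameters:
log4j.logger.com.example.mapper.BlogMapper=TRACE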
What works for me is:
log4j.logger.org.springframework.jdbc.core.StatementCreatorUtils=TRACE
This prints entries like these:
...
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [12345], value class [java.math.BigDecimal], SQL type 3
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 2, parameter value [ADDRESS], value class [java.lang.String], SQL type 12
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 3, parameter value [20130916], value class [java.math.BigDecimal], SQL type 3
...
In my case I am running a query inside a JdbcBatchItemWriter in a Spring Batch application, which uses named parameters, so this may not be a general solution.

EclipseLink very slow on inserting data

I'm using the latest EclipseLink version with MySQL 5.5 (table type InnoDB). I'm inserting about 30900 records (it could also be more) at a time.
The problem is that insert performance is pretty poor: it takes about 22 seconds to insert all the records (compared with plain JDBC: 7 seconds). I've read that using batch writing should help - but it doesn't!?
@Entity
public class TestRecord {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long id;

    public int test;
}
The code to insert the records:
factory = Persistence.createEntityManagerFactory("xx_test");
EntityManager em = factory.createEntityManager();

em.getTransaction().begin();
for (int i = 0; i < 30900; i++) {
    TestRecord record = new TestRecord();
    record.test = 21;
    em.persist(record);
}
em.getTransaction().commit();
em.close();
And finally my EclipseLink configuration:
<persistence-unit name="xx_test" transaction-type="RESOURCE_LOCAL">
    <class>com.test.TestRecord</class>
    <properties>
        <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver" />
        <property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/xx_test" />
        <property name="javax.persistence.jdbc.user" value="root" />
        <property name="javax.persistence.jdbc.password" value="test" />
        <property name="eclipselink.jdbc.batch-writing" value="JDBC" />
        <property name="eclipselink.jdbc.cache-statements" value="true"/>
        <property name="eclipselink.ddl-generation.output-mode" value="both" />
        <property name="eclipselink.ddl-generation" value="drop-and-create-tables" />
        <property name="eclipselink.logging.level" value="INFO" />
    </properties>
</persistence-unit>
What am I doing wrong? I've tried several settings, but nothing seems to help.
Thanks in advance for helping me! :)
-Stefan
Another thing is to add ?rewriteBatchedStatements=true to the JDBC URL used by the connector.
This brought roughly 120300 inserts down from about 60 seconds to about 30 seconds.
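Applied to the persistence unit from the question, only the URL property changes:
<property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/xx_test?rewriteBatchedStatements=true" />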
JDBC batch writing improves performance drastically; please try it.
E.g.: <property name="eclipselink.jdbc.batch-writing" value="JDBC" />
@GeneratedValue(strategy = GenerationType.IDENTITY)
Switch to TABLE sequencing; IDENTITY is never recommended and is a major performance issue, because the generated id has to be fetched back after every single insert, which prevents batch writing.
See
http://java-persistence-performance.blogspot.com/2011/06/how-to-improve-jpa-performance-by-1825.html
I seem to remember that MySQL may not support batch writing without some database config as well; there was another post on this - I forget the URL, but you could probably search for it.
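A minimal sketch of that change on the entity's id field (the generator name and allocation size are illustrative; a larger allocation size means fewer round trips to the sequence table):
@Id
@GeneratedValue(strategy = GenerationType.TABLE, generator = "testRecordGen")
@TableGenerator(name = "testRecordGen", allocationSize = 500)
public Long id;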
Probably the biggest difference, besides the mapping conversion, is the caching. By default EclipseLink places each of the entities into the persistence context (EntityManager) and then, during the finalization of the commit, it needs to add them all to the cache.
One thing to try for now:
measure how long an em.flush() call takes after the loop but before the commit. Then, if you want, you can call em.clear() after the flush so that the newly inserted entities are not merged into the cache.
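A sketch of that measurement, reusing the loop from the question (the timing output is only for illustration):

em.getTransaction().begin();
for (int i = 0; i < 30900; i++) {
    TestRecord record = new TestRecord();
    record.test = 21;
    em.persist(record);
}

// Time the flush separately from the commit.
long start = System.currentTimeMillis();
em.flush();
System.out.println("flush took " + (System.currentTimeMillis() - start) + " ms");

// Optional: drop the newly inserted entities from the persistence context
// so they are not merged into the shared cache on commit.
em.clear();

em.getTransaction().commit();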
Doug