I have a 3-node cluster. Each node has 2 × L5520 physical processors, 64 GB of memory, and a 1 TB HDD.
I used COPY FROM ... FORMAT CSV to import the data into Ignite. Now when I execute SQL queries in the JDBC console, they are very slow. Can someone suggest any optimizations?
You are missing indexes on the cache.
<property name="indexes">
<list>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="xyzFromKeyorVal"/>
</bean>
</list>
</property>
Add the above property to the QueryEntity in your cacheConfiguration. Here 'xyzFromKeyorVal' is simply any field from the key or value object that you want to index.
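For reference, a minimal Java sketch of the equivalent programmatic setup (the cache name and the Person class with its name field are hypothetical placeholders, not from the question):

import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class IndexedCacheConfig {
    // Hypothetical value class; replace with your own key/value types.
    public static class Person {
        public String name;
    }

    public static CacheConfiguration<Long, Person> create() {
        CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("myCache");

        // Declare the queryable type and its fields.
        QueryEntity entity = new QueryEntity(Long.class.getName(), Person.class.getName());
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("name", String.class.getName());
        entity.setFields(fields);

        // Equivalent of the <property name="indexes"> XML above.
        entity.setIndexes(Collections.singletonList(new QueryIndex("name")));

        ccfg.setQueryEntities(Collections.singletonList(entity));
        return ccfg;
    }
}

With the index in place, SQL queries that filter on the indexed field can use an index lookup instead of a full scan.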
Trying to set up an affinity backup filter. Most of the bits are clear, and I am following the details outlined here: https://ignite.apache.org/releases/latest/javadoc/index.html?org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html
which talks about backups by availability zone, where each node can set the value of that AZ.
However, the thing I am not clear on is where this AZ value goes for each node. The above link says "that the environment variable "AVAILABILTY_ZONE" be set appropriately on each node via some means external to Ignite".
I see a couple of options where this could be set:
Use System.setProperty() (based on the above comment around environment variables)
Set it as part of IgniteConfiguration.setUserAttributes() (based on looking at the ClusterNodeAttributeAffinityBackupFilter source, which compares node attributes)
Any inputs around this are helpful.
TIA
I suppose this documentation could be better structured.
The main idea is to set a user-defined attribute, for example "color", to "red" or "blue".
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="userAttributes">
<map>
<entry key="color" value="blue"/>
</map>
</property>
</bean>
Or config.setUserAttributes(F.asMap("color", "red"));
And use it in your backup filter implementation:
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>color</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
In that case, prior to saving a backup copy, Ignite will check whether the "color" attribute of the candidate backup node differs from the current node's (i.e. if it's "red", then we need to search for a "blue" node).
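For completeness, a minimal Java sketch of the same setup, assuming the attribute is named "AVAILABILITY_ZONE" and its value comes from an environment variable (both are illustrative choices, not mandated by Ignite):

import java.util.Collections;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class AzAwareNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Each node advertises its own zone as a user attribute
        // (here read from an environment variable set outside Ignite).
        cfg.setUserAttributes(
            Collections.singletonMap("AVAILABILITY_ZONE", System.getenv("AVAILABILITY_ZONE")));

        // The backup filter compares this attribute across candidate nodes.
        RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
        aff.setAffinityBackupFilter(
            new ClusterNodeAttributeAffinityBackupFilter("AVAILABILITY_ZONE"));

        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setBackups(1);
        ccfg.setAffinity(aff);

        cfg.setCacheConfiguration(ccfg);
        Ignition.start(cfg);
    }
}

This corresponds to the second option from the question: the filter compares node attributes set via setUserAttributes(), not system properties read directly.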
I am trying to set up a persistent cache store for my Apache Ignite application. The cache store I am trying to use is CacheJdbcBlobStore. Here is the list of software I am using in my prototype:
JDK 1.8.0_66
Apache Tomcat 7.0.68
Apache Ignite 1.6.0
HyperSQL 2.3.4
My data source is set up as below in Spring:
<bean id="myDataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
<property name="url" value="jdbc:hsqldb:http://localhost"/>
<property name="username" value="sa"/>
<property name="password" value=""/>
</bean>
And the Ignite configuration:
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="cacheMode" value="REPLICATED"/>
<property name="name" value="session-cache"/>
<property name="cacheStoreFactory">
<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory">
<property name="dataSourceBean" value="myDataSource" />
</bean>
</property>
<property name="readThrough" value="true" />
<property name="writeThrough" value="true" />
</bean>
</list>
</property>
</bean>
However, when I ran the application, I got the following exception:
ERROR - root - Failed to update web session: null
class org.apache.ignite.IgniteException: Failed to save session: C56AF4E1DA01A439E43E512950D32D45
Caused by: javax.cache.integration.CacheWriterException: class org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: Failed to update keys (retry update if possible).: [C56AF4E1DA01A439E43E512950D32D45]
Caused by: class org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: Failed to update keys (retry update if possible).: [C56AF4E1DA01A439E43E512950D32D45]
Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to put object [key=C56AF4E1DA01A439E43E512950D32D45, val=WebSessionEntity [id=C56AF4E1DA01A439E43E512950D32D45, createTime=1464182374328, accessTime=1464182374330, maxInactiveInterval=1800, attributes=[]]]
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:583)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2358)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2246)
... 37 more
Caused by: javax.cache.integration.CacheWriterException: Failed to put object [key=C56AF4E1DA01A439E43E512950D32D45, val=WebSessionEntity [id=C56AF4E1DA01A439E43E512950D32D45, createTime=1464182374328, accessTime=1464182374330, maxInactiveInterval=1800, attributes=[]]]
at org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStore.write(CacheJdbcBlobStore.java:281)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:575)
... 39 more
Caused by: java.sql.SQLDataException: data exception: string data, right truncation; table: ENTRIES column: AKEY
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.fetchResult(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.executeUpdate(Unknown Source)
at org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStore.write(CacheJdbcBlobStore.java:277)
... 40 more
I've figured out that you need to add the createTableQuery property to CacheJdbcBlobStoreFactory:
<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory">
...
<property name="createTableQuery" value="create table if not exists ENTRIES (akey VARBINARY(100) primary key, val BLOB(10k))" />
</bean>
So as to override the default one:
create table if not exists ENTRIES (akey binary primary key, val binary)
My question is: how would you determine the size of the table entry (especially the val column)? Currently I put down 10k. If I set the size too high, it will be wasteful, but if I set it too low, I'll get the exception mentioned above sooner or later. Thank you.
Perhaps you don't have to worry about the column sizes of table ENTRIES. For example, in the HyperSQL database, if you create the table the following way:
create table if not exists ENTRIES (akey VARBINARY(100) primary key, val BLOB(10k))
10k is the maximum size of column val, not the actual space taken. Therefore nothing is wasted if the actual length stays below the maximum limit.
In Oracle, you can create the table as below:
create table ENTRIES (akey VARCHAR2(100) primary key, val BLOB);
And again, you don't have to worry about the size of column val. In Oracle, a BLOB column can be regarded as effectively unlimited (maximum size: (4 GB - 1) × the DB_BLOCK_SIZE initialization parameter, i.e. 8 TB to 128 TB). The actual space taken depends on the actual size of the cache; no space will be wasted.
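Along those lines, a hedged Java sketch of pointing the factory at the Oracle-style table (it mirrors the XML from the question; the createTableQuery uses the default ENTRIES/akey/val naming, and the wrapper class name is arbitrary):

import org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;

public class BlobStoreCacheConfig {
    public static CacheConfiguration<String, Object> create() {
        // "myDataSource" must be a data source bean visible to Ignite,
        // as in the Spring configuration above.
        CacheJdbcBlobStoreFactory<String, Object> factory = new CacheJdbcBlobStoreFactory<>();
        factory.setDataSourceBean("myDataSource");

        // Oracle variant: BLOB is effectively unlimited, so nothing is wasted.
        factory.setCreateTableQuery(
            "create table ENTRIES (akey VARCHAR2(100) primary key, val BLOB)");

        CacheConfiguration<String, Object> ccfg = new CacheConfiguration<>("session-cache");
        ccfg.setCacheStoreFactory(factory);
        ccfg.setReadThrough(true);
        ccfg.setWriteThrough(true);
        return ccfg;
    }
}

One caveat: Oracle has no "if not exists" clause, so the store's automatic table creation may fail on restart once the table already exists; in that case, create the table manually up front and skip the schema initialization.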
I'm trying to understand the use of 'boosting' properties in the indexing configuration for CQ5. I thought I understood from http://wiki.apache.org/jackrabbit/IndexingConfiguration that setting a boost determined how far up the list an item would be returned as a search result. So I tried adding the following boost lines to my default CQ5 indexing configuration:
<index-rule nodeType="nt:base">
<property boost="5.0">jcr:title</property>
<property boost="5.0">history:title</property>
<property boost="3.0">history:description</property>
<property boost="3.0">history:caption</property>
<property boost="2.0">text</property>
<property nodeScopeIndex="false">analyticsProvider</property>
<property nodeScopeIndex="false">analyticsSnippet</property>
<property nodeScopeIndex="false">hideInNav</property>
<property nodeScopeIndex="false">offTime</property>
<property nodeScopeIndex="false">onTime</property>
...
<property isRegexp="true">.*:.*</property>
</index-rule>
The intent was that, in a full-text search, text found in the jcr:title or history:title properties would be most relevant, followed by history:description, history:caption and, finally, text.
I deleted the index information from the repository and from the workspace, then restarted CQ and let it rebuild all of the indexes.
Now when I do a full-text search, I'm only getting results if the search text is in the node name itself - nothing from description, caption, etc.
Obviously I've done something wrong but I'm not sure what. Any help would be greatly appreciated.
I'm using Spring 3.1.1, MyBatis 3.1.1, and MySQL 5.0.67. My Spring configuration is below:
<bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.username}" />
<property name="password" value="${jdbc.password}" />
<property name="validationQuery" value="select 1"/>
<property name="testWhileIdle" value="true"/>
<property name="timeBetweenEvictionRunsMillis" value="14400000"/>
<property name="testOnBorrow" value="false"/>
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="configLocation" value="classpath:mybatis/myBatisConfig.xml"/>
</bean>
<bean id="sqlSessionTemplate" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg ref="sqlSessionFactory"/>
</bean>
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager"
p:dataSource-ref="dataSource"/>
<tx:annotation-driven transaction-manager="transactionManager"/>
And log4j.properties is below:
log4j.logger.org.springframework=DEBUG
log4j.logger.org.apache=DEBUG
log4j.logger.org.mybatis=DEBUG
log4j.logger.java.sql=DEBUG
log4j.logger.java.sql.Connection=DEBUG
log4j.logger.java.sql.Statement=DEBUG
log4j.logger.java.sql.PreparedStatement=DEBUG
log4j.logger.java.sql.ResultSet=DEBUG
With this configuration, I can see the SQL statements that are executed and their parameters, but I can't see the query results. My log looks like this:
[org.mybatis.spring.SqlSessionUtils] - Creating a new SqlSession
[org.mybatis.spring.SqlSessionUtils] - SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#4ccdd1f] was not registered for synchronization because synchronization is not active
[org.springframework.jdbc.datasource.DataSourceUtils] - Fetching JDBC Connection from DataSource
[org.mybatis.spring.transaction.SpringManagedTransaction] - JDBC Connection [ProxyConnection[PooledConnection[com.mysql.jdbc.JDBC4Connection#3cfde82]]] will not be managed by Spring
[java.sql.Connection] - ooo Using Connection [ProxyConnection[PooledConnection[com.mysql.jdbc.JDBC4Connection#3cfde82]]]
[java.sql.Connection] - ==> Preparing: SELECT col FROM table WHERE col1=? AND col2=?
[java.sql.PreparedStatement] - ==> Parameters: 93(Integer), 4(Integer)
[org.mybatis.spring.SqlSessionUtils] - Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#4ccdd1f]
[org.springframework.jdbc.datasource.DataSourceUtils] - Returning JDBC Connection to DataSource
Is there any way to print a log that includes the query results?
Two ways:
log4j.logger.java.sql.ResultSet=TRACE
Or use namespaces to configure logging. This is the only logging method in MyBatis 3.2:
http://mybatis.github.io/mybatis-3/logging.html
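For example, if your mapper namespace were com.example.mapper.UserMapper (a hypothetical name), the namespace-based configuration would look like:
log4j.logger.com.example.mapper.UserMapper=TRACE
At TRACE level MyBatis logs the returned rows as well, not just the statements and parameters.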
What works for me is:
log4j.logger.org.springframework.jdbc.core.StatementCreatorUtils=TRACE
This prints entries like these:
...
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [12345], value class [java.math.BigDecimal], SQL type 3
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 2, parameter value [ADDRESS], value class [java.lang.String], SQL type 12
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 3, parameter value [20130916], value class [java.math.BigDecimal], SQL type 3
...
In my case I am running a query inside a JdbcBatchItemWriter in a Spring Batch application, which uses named params, so it may not be a general solution.
Hi, I have an issue with hbm2ddl.import_files: it doesn't seem to work, and nothing about it appears in the log.
This is my configuration:
<property name="hibernateProperties">
<value>
hibernate.dialect=${hibernate.dialect}
hibernate.default_schema=${hibernate.default_schema}
hibernate.jdbc.batch_size=${hibernate.jdbc.batch_size}
hibernate.show_sql=${hibernate.show_sql}
hibernate.hbm2ddl.auto=${hibernate.hbm2ddl.auto}
hibernate.id.new_generator_mappings=${hibernate.id.new_generator_mappings}
hibernate.hbm2ddl.import_files=${hibernate.hbm2ddl.import_files}
<!-- Auto Generated Schemas and tables not good for production
hibernate.hbm2ddl.auto=update-->
</value>
</property>
hibernate.hbm2ddl.import_files is set to /import.sql, and the file is:
insert into DEPARTAMENTO (NOMBRE_DEPART,REFERENCIA_DEPART) values ('AMAZONAS')
The jdbc.properties:
#org.hibernate.dialect.PostgreSQLDialect
hibernate.default_schema = "DBMERCANCIAS"
hibernate.show_sql = true
hibernate.id.new_generator_mappings = true
hibernate.hbm2ddl.auto = create
hibernate.jdbc.batch_size = 5
#Default the factory to use to instantiate transactions org.transaction.JDBCTransactionFactory
hibernate.transaction.factory_class=org.transaction.JDBCTransactionFactory
#Initialize values statements only on create-drop or create
hibernate.hbm2ddl.import_files = /import.sql
The database is PostgreSQL 9.1.1, with Spring 3.1.0.RELEASE and Hibernate 4.1.2.Final. hibernate.hbm2ddl.auto is set to "create", and the tables and the schema are created, but the SQL insert command does not run. Why? I cannot see in the log where this command runs.
My error was the location in the hibernate properties.
hibernate.hbm2ddl.import_files = /META-INF/spring/import.sql
is the correct location.
You could put import.sql on the classpath (/classes/import.sql) and remove the hibernate.hbm2ddl.import_files property from the Hibernate configuration/properties.
NOTE: hibernate.hbm2ddl.auto must be set to create
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">${hibernate.dialect}</prop>
<prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
<prop key="hibernate.hbm2ddl.auto">create</prop>
</props>
</property>
</bean>