H2 and EclipseLink - very slow on tables with one row

I have 7 tables; some are empty and some have only one row. Each table has at most 5 columns, there are no extra indexes, and I run quite simple SQL queries.
This is how I run H2:
java -jar h2-1.3.176.jar
Queries are generated and executed by EclipseLink 2.6.3. These are my settings in persistence.xml:
<properties>
    <property name="eclipselink.connection-pool.default.initial" value="5"/>
    <property name="eclipselink.connection-pool.default.min" value="5"/>
    <property name="javax.persistence.jdbc.driver" value="org.h2.Driver"/>
    <property name="javax.persistence.jdbc.url" value="jdbc:h2:tcp://localhost//home/somepath;DATABASE_TO_UPPER=FALSE"/>
    <property name="javax.persistence.jdbc.user" value="sa"/>
    <property name="javax.persistence.jdbc.password" value=""/>
    <property name="eclipselink.logging.level" value="FINE"/>
    <property name="eclipselink.logging.thread" value="false"/>
    <property name="eclipselink.logging.exceptions" value="true"/>
    <property name="eclipselink.orm.throw.exceptions" value="true"/>
    <property name="eclipselink.left-join-fetch" value="true"/>
    <property name="eclipselink.weaving" value="true"/>
</properties>
Yet every SQL query takes about one second to execute(!), and I can hear the hard drive working hard on every query.
However, when I press Connect in the H2 web console to the same database, my Java program becomes very fast. It stays fast until I press Disconnect in the web console. How can this be explained?
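A plausible explanation (an assumption, not confirmed in the question): by default H2 closes a database once the last connection to it is closed, so the database file may be reopened from disk again and again, while an open web-console session keeps it open the whole time. One way to test this theory is to keep the database open by appending DB_CLOSE_DELAY=-1 to the JDBC URL:
<property name="javax.persistence.jdbc.url" value="jdbc:h2:tcp://localhost//home/somepath;DATABASE_TO_UPPER=FALSE;DB_CLOSE_DELAY=-1"/>
If the slowdown disappears with this setting, the repeated open/close cycle was the cause.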

Related

Apache Ignite data loss when one node goes down

I'm new to Ignite and I'm trying to test the data quality and availability of an Ignite cluster.
I use the XML configuration below to set up the cluster:
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="socketTimeout" value="50000" />
        <property name="networkTimeout" value="50000" />
        <property name="reconnectCount" value="5" />
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                <property name="addresses">
                    <list>
                        <value>x.x.x.1:47500..47509</value>
                        <value>x.x.x.2:47500..47509</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>
</property>
And the CacheConfiguration is:
<bean id="cache-template-bean" class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="CACHE_TEMPLATE*"/>
    <property name="cacheMode" value="PARTITIONED" />
    <property name="backups" value="1" />
    <!-- <property name="backups" value="2" />
    <property name="backups" value="3" /> -->
    <property name="atomicityMode" value="TRANSACTIONAL" />
    <property name="writeSynchronizationMode" value="PRIMARY_SYNC" />
    <property name="rebalanceBatchSize" value="#{4 * 1024 * 1024}" />
    <property name="rebalanceMode" value="ASYNC" />
    <property name="statisticsEnabled" value="true" />
    <property name="rebalanceBatchesPrefetchCount" value="4" />
    <property name="defaultLockTimeout" value="5000" />
    <property name="readFromBackup" value="true" />
    <property name="queryParallelism" value="6" />
    <property name="nodeFilter">
        <bean class="org.apache.ignite.util.AttributeNodeFilter">
            <constructor-arg>
                <map>
                    <entry key="ROLE" value="data.compute"/>
                </map>
            </constructor-arg>
        </bean>
    </property>
</bean>
My scenario is:
Load 5 million entries while all 3 nodes are up.
Bring one node down.
The count then shows 3.75 million (data loss).
Bring the node back up: the count shows 5 million again.
I tried backups of 1, 2 and 3; all resulted in the same data loss. As per the Ignite documentation, it appears that this data loss should not happen. Once this is fixed, I can try adding data while the node is down and check how it behaves.
Any suggestions, please?
Ash
The main idea of baseline topology and persistence is to prevent unnecessary rebalancing and to store data only on the specified server nodes. When a baseline node is stopped, it is expected to come back soon, so the rebalance process is not triggered. You can exclude the node from the baseline using the API or the control.sh utility.
IgniteCache.size() returns the number of primary entries, so when a baseline node is stopped, size() shows a smaller number, indicating that some primary entries are not accessible. (Backup copies can be counted as well by passing a peek mode, e.g. size(CachePeekMode.ALL).)
In your case the data is not lost, for two reasons:
1. The data is persisted in backup entries on the baseline nodes that are still alive.
2. The primary and backup entries located on the stopped node will come back to the cluster after the node is started again.
[1] https://apacheignite.readme.io/docs/baseline-topology
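To exclude a stopped node from the baseline as mentioned above, a sketch with the control script (the consistent ID below is a hypothetical placeholder for your node's ID):
./control.sh --baseline remove nodeConsistentId1
After the baseline is shrunk, the remaining nodes rebalance the missing partitions and size() should reflect the full data set again. The same can be done programmatically through the cluster API.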

Multiple Persistence Store for Apache Ignite

I have a use case where I have to support multiple persistence stores for my Ignite cluster. For example, cache A1 should be primed from database db1 and cache B1 from database db2. Can this be done? In the Ignite configuration XML I can only provide the details of one persistence store:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util.xsd">

    <!-- Datasource for Persistence. -->
    <bean name="dataSource"
          class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
        <property name="url" value="jdbc:oracle:thin:@localhost:1521:roc12c" />
        <property name="username" value="test" />
        <property name="password" value="test" />
    </bean>
In my CacheStore implementation I can only access this one database, right?
I've not tried this, but if it's similar to other bean-configured systems, you should be able to create another bean with a different name and configuration, and then specify the different data sources in your cache configurations for A1 and B1. That said, I'm only speculating.
It may be that you are already doing so, but I can't tell from your question. If you instead implement your caches as described at https://apacheignite.readme.io/docs/persistent-store, you can definitely configure two caches with different data sources. This is how I'm currently implementing multiple caches; in the cache store I use, I specifically call out which database to go to.
Here is a cache configuration I use:
<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <!-- Set a cache name. -->
        <property name="name" value="recordData"/>
        <property name="rebalanceMode" value="ASYNC"/>
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="backups" value="1"/>
        <!-- Enable Off-Heap memory with max size of 10 Gigabytes (0 for unlimited). -->
        <property name="memoryMode" value="OFFHEAP_TIERED"/>
        <property name="offHeapMaxMemory" value="0"/>
        <property name="swapEnabled" value="false"/>
        <property name="cacheStoreFactory">
            <bean class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
                <constructor-arg value="com.company.util.MyDataStore"/>
            </bean>
        </property>
        <property name="readThrough" value="true"/>
        <property name="writeThrough" value="true"/>
    </bean>
</property>
A cache store is configured per cache, so you just need to inject different data sources into different stores. What you showed is just a standalone data source bean; it's not even part of the IgniteConfiguration. You can have multiple data source beans with different IDs.
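As a hedged sketch of that idea (the bean names and URLs here are hypothetical, not from the question), two data source beans can be declared side by side; each cache's store implementation then looks up its own bean, e.g. via Ignite's @SpringResource(resourceName = "...") injection:
<bean name="db1DataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
    <property name="url" value="jdbc:oracle:thin:@host1:1521:db1"/>
    <property name="username" value="test"/>
    <property name="password" value="test"/>
</bean>
<bean name="db2DataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
    <property name="url" value="jdbc:oracle:thin:@host2:1521:db2"/>
    <property name="username" value="test"/>
    <property name="password" value="test"/>
</bean>
The cache store configured for A1 would read from db1DataSource and the one for B1 from db2DataSource.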

JPA PagingItemReader with PostgreSQL

I have written a job that reads data from a DB and writes it to a file. It works fine with an Oracle DB. However, when I use it with Postgres I get the following error:
org.postgresql.util.PSQLException: ERROR: subquery in FROM must have an alias
Hint: For example, FROM (SELECT ...) [AS] foo.
Position: 15
Error Code: 0
The reader is defined as follows:
<bean id="myReader"
      class="org.springframework.batch.item.database.JpaPagingItemReader">
    <property name="entityManagerFactory" ref="entityManagerFactory" />
    <property name="queryString" value="select c from CountryEntity c" />
    <property name="pageSize" value="1000"/>
</bean>
Does anybody know if this is a common issue related to Postgres? Do I need to use a specific configuration?
You need to configure your JPA provider to use the PostgreSQL dialect.
E.g. for Hibernate, you would use a setup (persistence.xml) like this:
<persistence-unit name="somename" transaction-type="RESOURCE_LOCAL">
    <properties>
        <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
        <property name="hibernate.connection.url" value="jdbc:postgresql:sample"/>
        <property name="javax.persistence.jdbc.driver" value="org.postgresql.Driver"/>
        <property name="hibernate.format_sql" value="true"/>
        <property name="hbm2ddl.auto" value="update"/>
    </properties>
</persistence-unit>
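If the provider is EclipseLink rather than Hibernate (a guess; the question does not say which is in use), the analogous setting would be the target-database property:
<property name="eclipselink.target-database" value="PostgreSQL"/>
Either way, the dialect is what makes the provider wrap paging subqueries with the alias PostgreSQL requires.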

Is it possible to generate artifacts for all the tables at once with MyBatis Generator?

I am using MyBatis Generator 1.3.1 from the command line. I read in the documentation that I need to specify at least one table for object generation, but I was hoping it might be possible to use some wildcard and have mappers for all the tables generated at once. We don't want to use Hibernate, because MyBatis seems to handle custom types in the database better.
Thank you for your help!
You can use SQL wildcards, i.e.:
<table tableName="%">
    <property name="useActualColumnNames" value="true"/>
</table>
You can reference: link
If you use MySQL, the crucial points are:
<table schema="dbName" tableName="%"></table> and
<property name="nullCatalogMeansCurrent" value="true" />
Reference: http://www.mybatis.org/generator/usage/intro.html
The settings look like this:
<generatorConfiguration>
    <properties resource="mybatis-generator/generator.properties"></properties>
    <classPathEntry location="${driverLocation}"/>
    <context id="default" targetRuntime="MyBatis3">
        <property name="javaFileEncoding" value="UTF-8"/>
        <commentGenerator type="org.zhang.generator.MyCommentGenerator">
            <property name="suppressDate" value="true"/>
            <property name="suppressAllComments" value="true"/>
        </commentGenerator>
        <jdbcConnection
                driverClass="${driverClassName}"
                connectionURL="${url}"
                userId="${username}"
                password="${password}">
            <property name="nullCatalogMeansCurrent" value="true" />
        </jdbcConnection>
        <javaTypeResolver>
            <property name="forceBigDecimals" value="true"/>
        </javaTypeResolver>
        <javaModelGenerator targetPackage="com.entity" targetProject="src/main/java">
            <property name="enableSubPackages" value="true"/>
        </javaModelGenerator>
        <sqlMapGenerator targetPackage="daoMappers" targetProject="src/main/resources">
            <property name="enableSubPackages" value="true"/>
        </sqlMapGenerator>
        <javaClientGenerator targetPackage="com.dao" targetProject="src/main/java" type="XMLMAPPER">
            <property name="enableSubPackages" value="true"/>
        </javaClientGenerator>
        <table schema="dbName" tableName="%"
               enableSelectByPrimaryKey="true"
               enableCountByExample="false"
               enableUpdateByExample="false"
               enableDeleteByExample="false"
               enableSelectByExample="false"
               selectByExampleQueryId="false"
               enableDeleteByPrimaryKey="false"
               enableUpdateByPrimaryKey="false"
               enableInsert="false">
            <property name="useActualColumnNames" value="true"></property>
        </table>
    </context>
</generatorConfiguration>

"Connection reset on log" from Commons DBCP

I am using Spring and Hibernate for CRUD operations, and Apache Commons DBCP's BasicDataSource for connection pooling.
The problem is that when I use the following configuration in the data source:
<property name="maxActive" value="100"/>
<property name="maxWait" value="10000"/>
<property name="removeAbandoned" value="true"/>
<property name="removeAbandonedTimeout" value="60"/>
<property name="logAbandoned" value="true"/>
<property name="maxIdle" value="10"/>
then, after all connections have been used, I get the "Connection reset" error in the log, and it takes a long time for connections to come back.
If I instead remove the following lines from the data source:
<property name="removeAbandoned" value="true"/>
<property name="removeAbandonedTimeout" value="60"/>
<property name="logAbandoned" value="true"/>
and add the line below to the SessionFactory (hibernateProperties):
<prop key="hibernate.connection.release_mode">after_statement</prop>
then I get no errors on the console, but the problem is that it takes a connection and closes it as soon as each statement completes.
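One hedged reading of the first symptom (an assumption, since no stack trace is shown): with removeAbandonedTimeout=60, DBCP forcibly reclaims any connection checked out for more than 60 seconds, which can reset connections that are still legitimately in use. A common alternative sketch is to validate connections instead of abandoning them:
<property name="testOnBorrow" value="true"/>
<property name="validationQuery" value="SELECT 1"/>
<property name="timeBetweenEvictionRunsMillis" value="30000"/>
<property name="minEvictableIdleTimeMillis" value="60000"/>
With this, stale idle connections are evicted in the background and every borrowed connection is checked before use, while long-running transactions keep their connections.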