Cannot resolve property tag with job parameter - properties

I am trying to concatenate the job parameter #{jobParameters['arg1']} with myfeed.query to dynamically pick the right query from the properties file, but it is not getting resolved.
Below is the exception log:
Caused by: org.springframework.jdbc.BadSqlGrammarException: Executing query; bad SQL grammar [${myfeed.queryZONE1}]
Below is the code snippet from the XML file:
<bean id="itemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader" scope="step">
<property name="dataSource" ref="dataSource" />
<property name="sql">
<value>${myfeed.query#{jobParameters['arg1']}}</value>
</property>
<property name="rowMapper">
<bean class="com.sgcib.loa.matrix.mapper.MyFeedRowMapper" />
</property>
</bean>

To do that, you will need to declare an explicit properties bean for your PropertyPlaceholderConfigurer:
<bean id="propertiesConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="properties" ref="properties" />
</bean>
<bean id="properties" class="org.springframework.beans.factory.config.PropertiesFactoryBean">
<property name="location">
<value>file:xxxxxxx.properties</value>
</property>
</bean>
Then, using the Spring Expression Language (SpEL), you can get the right property with:
<property name="sql" value="#{properties.getProperty('myfeed.query' + jobParameters['arg1'])}" /></property>
Note that this solution maintains compatibility with ${...} syntax.
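For this to resolve anything, the properties file referenced above needs one entry per possible value of arg1, along these lines (the ZONE1 key comes from the exception log above; ZONE2 and the SQL are only placeholders):
myfeed.queryZONE1=SELECT ...
myfeed.queryZONE2=SELECT ...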

The above solution does not work. A tested solution uses one ItemReader with two SQL queries and a JdbcTemplate:
http://incomplete-code.blogspot.in/2013/06/dynamically-switch-sql-statements-in.html
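For reference, here is a rough sketch of the same "pick the query from a property plus a job parameter" idea in Java configuration rather than XML. It is not the linked post's exact solution; the class name, properties file name and the MyFeed type are made up, and it assumes the properties file is registered with the Spring Environment:
import javax.sql.DataSource;

import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.core.env.Environment;

import com.sgcib.loa.matrix.mapper.MyFeedRowMapper;

@Configuration
@PropertySource("classpath:myfeed.properties") // placeholder file name
public class FeedReaderConfig {

    // Step-scoped so jobParameters['arg1'] is available; MyFeed stands in for
    // whatever type MyFeedRowMapper actually maps to.
    @Bean
    @StepScope
    public JdbcCursorItemReader<MyFeed> itemReader(DataSource dataSource,
                                                   Environment env,
                                                   @Value("#{jobParameters['arg1']}") String arg1) {
        JdbcCursorItemReader<MyFeed> reader = new JdbcCursorItemReader<>();
        reader.setDataSource(dataSource);
        // e.g. arg1 = ZONE1 resolves the myfeed.queryZONE1 property as the SQL
        reader.setSql(env.getProperty("myfeed.query" + arg1));
        reader.setRowMapper(new MyFeedRowMapper());
        return reader;
    }
}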

Related

GridGain Near Cache Not storing data

I have a question regarding the setup of the GridGain near cache. We have a single server node with the config listed below and a single thick client connecting successfully to it:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- PEER CLASS LOADING -->
<property name="peerClassLoadingEnabled" value="true"/>
<!-- CACHE CONFIG-->
<property name="cacheConfiguration">
<list>
<!-- ENTER CACHE TEMPLATE-->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache1"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="rebalanceMode" value="SYNC"/>
<property name="nearConfiguration">
<bean class="org.apache.ignite.configuration.NearCacheConfiguration">
<property name="nearEvictionPolicyFactory">
<bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
<property name="maxSize" value="100000"/>
</bean>
</property>
</bean>
</property>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache2"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="rebalanceMode" value="SYNC"/>
<property name="nearConfiguration">
<bean class="org.apache.ignite.configuration.NearCacheConfiguration">
<property name="nearEvictionPolicyFactory">
<bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
<property name="maxSize" value="100000"/>
</bean>
</property>
</bean>
</property>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache3"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="rebalanceMode" value="SYNC"/>
<property name="nearConfiguration">
<bean class="org.apache.ignite.configuration.NearCacheConfiguration">
<property name="nearEvictionPolicyFactory">
<bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
<property name="maxSize" value="100000"/>
</bean>
</property>
</bean>
</property>
</bean>
</list>
</property>
<!-- DISCOVERY-->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="gridgain"/>
<property name="serviceName" value="gridgain-service"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
In setting the server up like this, my understanding, per the documentation, was that "Once configured in this way, the near cache is created on any node that requests data from the underlying cache, including both server nodes and client nodes. When you get an instance of the cache, as shown in the following example, the data requests go through the near cache."
IgniteCache<Integer, Integer> cache = ignite.cache("myCache");
int value = cache.get(1);
Based on this, I do not believe I need to create the near cache config on our client, and have just implemented code as:
IgniteCache<Object, Object> cache = ignite.cache(ourCacheName);
The issue I see is that when I peek at the local cache to try and find values in there, after searching for them:
cache_.localPeek(key, CachePeekMode.NEAR)
The objects are not found, despite being searched for several times; it looks like they are not added to our near cache, and everything just goes to the underlying cache. Previously we had programmatically created the near cache on the client and it had worked, but we would like to configure the solution on the server if possible. Our client node is just using the default config, if that makes a difference.
Any thoughts on why we are not seeing a near cache?
Thanks,
LS
In order to use the near cache on the client, I suggest you create it explicitly using the following syntax:
IgniteCache<Integer, Integer> clientCache = client.getOrCreateNearCache(cacheCfg.getName(), nearCfg);
...
clientCache.get(1);
System.out.println(clientCache.localPeek(1, CachePeekMode.NEAR));
There are some tickets, like IGNITE-15960 or IGNITE-1163, with discussions about improving this API. I suppose the cache has to be declared on the servers first, and then you would be able to create the near cache explicitly on the clients. Agreed, the docs and API are confusing and need to be reworked.
Also, the near cache is local to a node, i.e. you might want it on some clients/servers and not create it on others.
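For completeness, a minimal client-side sketch of that suggestion, using the cache name "cache1" from the server config above; the eviction settings simply mirror the server-side config, and "ignite" is assumed to be the client node's Ignite instance (e.g. returned by Ignition.start(...)):
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CachePeekMode;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.NearCacheConfiguration;

// Attach a near cache on this client to the server-declared cache "cache1".
NearCacheConfiguration<Object, Object> nearCfg = new NearCacheConfiguration<>();
nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100000));

IgniteCache<Object, Object> clientCache = ignite.getOrCreateNearCache("cache1", nearCfg);

clientCache.get(1);
// Should now show the entry held in this client's near cache.
System.out.println(clientCache.localPeek(1, CachePeekMode.NEAR));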

Apache Ignite CacheConfiguration repeat for each data set?

I am trying to modify default-config.xml by adding cacheConfiguration tags. Do I need to repeat the cacheConfiguration XML tag for each data set (RDD) that I am trying to keep in memory? Can I set backups to 0 if I don't want any?
ex:
<property name="cacheConfiguration">
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="TEST1_RDD"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="0"/>
</bean>
</property> <property name="cacheConfiguration">
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="TEST2_RDD"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="0"/>
</bean>
</property>
Also, do I need to explicitly specify the write synchronization mode? And which one does Ignite use by default?
ex:
<property name="writeSynchronizationMode" value="FULL_SYNC"/>
Appreciate your response.
Yes, you have to write a configuration for each cache, since each cache may serve a different purpose and its configuration should be set accordingly.
For backups the default value is 0, and the default CacheWriteSynchronizationMode is PRIMARY_SYNC.
If you don't want to repeat the same configuration for every cache, there is also the possibility to define cache templates: https://apacheignite.readme.io/docs/cache-template
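A rough sketch of what that can look like (the template name is made up). Note that all cache beans go into a single cacheConfiguration list rather than repeating the property, and, as I understand the template mechanism, a configuration whose name ends with an asterisk is treated as a template and applied to caches later created with a matching name (e.g. ignite.getOrCreateCache("rdd-TEST1_RDD")):
<property name="cacheConfiguration">
  <list>
    <!-- Template (name ends with *): applied to caches whose names match "rdd-*" -->
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
      <property name="name" value="rdd-*"/>
      <property name="cacheMode" value="PARTITIONED"/>
      <property name="backups" value="0"/>
      <!-- Explicit here for illustration, though PRIMARY_SYNC is already the default -->
      <property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
    </bean>
  </list>
</property>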

Apache Camel JPA: Connect to two database instances

Is there an obvious way to use two JPA consumers/producers in the Camel Spring DSL to talk to two different database instances? I tried to configure two EntityManagerFactory instances pointing to two persistence units, but I end up with the following error :(
Caused by: org.apache.camel.NoSuchBeanException: No bean could be found in the registry for: Found 2 beans of type: interface javax.persistence.EntityManagerFactory. Only one bean expected.
Camel Version: 2.13.2
You might have to make two EntityManagerFactory beans and point them at different persistence units:
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean">
<property name="persistenceUnitName" value="primary" />
</bean>
<bean id="entityManagerFactory2"
class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean">
<property name="persistenceUnitName" value="secondary" />
</bean>
Then, when you set up the JPA components, you can give each one a different EntityManagerFactory:
<bean id="jpa" class="org.apache.camel.component.jpa.JpaComponent">
<property name="entityManagerFactory" ref="entityManagerFactory" />
<property name="transactionManager" ref="transactionManager" />
</bean>
<bean id="jpa2" class="org.apache.camel.component.jpa.JpaComponent">
<property name="entityManagerFactory" ref="entityManagerFactory2" />
<property name="transactionManager" ref="transactionManager" />
</bean>
and then use the corresponding component name in your endpoint URIs:
<from uri="jpa://..." />
or
<from uri="jpa2://..." />

JPA PagingItemReader with PostgreSQL

I have written a job that reads data from a database and writes it to a file. It works fine with an Oracle DB. However, when I use it with Postgres I get the following error:
org.postgresql.util.PSQLException: ERROR: subquery in FROM must have an alias
Hint: For example, FROM (SELECT ...) [AS] foo.
Position: 15
Error Code: 0
The reader is defined as follows:
<bean id="myReader"
class="org.springframework.batch.item.database.Jpa PagingItemReader">
<property name="entityManagerFactory" ref="entityManagerFactory" />
<property name="queryString" value="select c from CountryEntity c" />
<property name="pageSize" value="1000"/>
</bean>
Does anybody know if this is a common issue related to Postgres? Do I need to use a specific configuration?
You need to configure your JPA provider to use the PostgreSQL dialect.
E.g. for Hibernate, you would use a setup (persistence.xml) like this:
<persistence-unit name="somename" transaction-type="RESOURCE_LOCAL">
  <properties>
    <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
    <property name="hibernate.connection.url" value="jdbc:postgresql:sample"/>
    <property name="javax.persistence.jdbc.driver" value="org.postgresql.Driver"/>
    <property name="hibernate.format_sql" value="true"/>
    <property name="hibernate.hbm2ddl.auto" value="update"/>
  </properties>
</persistence-unit>

Inject Weblogic JDBC datasource (JNDI name) in spring applicationContext.xml

Currently, I am creating the dataSource in the Spring applicationContext.xml by reading DB credentials from a property file.
<!-- property config -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="location"><value>/WEBINF/resources/springConfig.properties</value></property>
</bean>
<!-- Database connection Oracle 10g jdbc -->
<bean id="dataSource" class="oracle.jdbc.pool.OracleDataSource" destroy-method="close">
<property name="URL" value="${url}" />
<property name="user" value="${user}" />
<property name="password" value="${password}" />
<property name="connectionCachingEnabled" value="true" />
</bean>
Then I am referencing it using context.getBean:
DataSource dataSource = (DataSource)context.getBean("dataSource");
I need to modify my applicationContext to create the dataSource not by reading a property file, but by using a WebLogic JDBC data source (I am not sure if that involves jndiTemplate or jdbcTemplate).
Please provide an example. Also, do I need to change the way I call getBean("dataSource") once I use the JNDI lookup?
You want to do a JNDI datasource lookup. Here's an example:
http://middlewaremagic.com/weblogic/?p=5106
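The gist of it: replace the OracleDataSource bean with a JNDI lookup. A sketch, where jdbc/myAppDS is a placeholder for whatever JNDI name you gave the data source in the WebLogic console:
<!-- Looks up the container-managed data source instead of creating one -->
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
  <property name="jndiName" value="jdbc/myAppDS"/>
</bean>
Or, with Spring's jee namespace declared, the equivalent shorthand:
<jee:jndi-lookup id="dataSource" jndi-name="jdbc/myAppDS"/>
Since the bean id is still "dataSource", your existing getBean("dataSource") call does not need to change, and the ${url}/${user}/${password} properties are no longer needed.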