Error during readThrough operation in Apache Ignite

I have implemented preload and read-through operations for my database by referring to the demo example at schema-import. I was able to preload data from an Oracle database, but when I try to read a record through the cache I get the following error:
[18:50:42,299][SEVERE][sys-#97%null%][GridPartitionedSingleGetFuture] Failed to get values from dht cache [fut=GridCompoundIdentityFuture [super=GridCompoundFuture [rdc=Collection reducer: null, initFlag=1, lsnrCalls=0, done=true, cancelled=false, err=class o.a.i.IgniteCheckedException: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore., futs=[false]]]]
class org.apache.ignite.IgniteCheckedException: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore.
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:337)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.load(GridCacheStoreManagerAdapter.java:293)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAllFromStore(GridCacheStoreManagerAdapter.java:426)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAll(GridCacheStoreManagerAdapter.java:392)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$16.call(GridCacheAdapter.java:1985)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$16.call(GridCacheAdapter.java:1983)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6521)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:922)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.cache.integration.CacheLoaderException: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore.
... 12 more
Caused by: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore.
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.entryMapping(CacheAbstractJdbcStore.java:693)
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.load(CacheAbstractJdbcStore.java:813)
at org.apache.ignite.internal.processors.cache.CacheStoreBalancingWrapper.load(CacheStoreBalancingWrapper.java:97)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:326)
... 11 more
[18:50:42] Ignite node stopped OK [uptime=00:00:02:031]
Exception in thread "main" javax.cache.integration.CacheLoaderException: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore.
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:337)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.load(GridCacheStoreManagerAdapter.java:293)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAllFromStore(GridCacheStoreManagerAdapter.java:426)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAll(GridCacheStoreManagerAdapter.java:392)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$16.call(GridCacheAdapter.java:1985)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$16.call(GridCacheAdapter.java:1983)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6521)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:922)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore.
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.entryMapping(CacheAbstractJdbcStore.java:693)
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.load(CacheAbstractJdbcStore.java:813)
at org.apache.ignite.internal.processors.cache.CacheStoreBalancingWrapper.load(CacheStoreBalancingWrapper.java:97)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:326)
... 11 more
I have used the following CacheConfig file:
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import java.sql.*;
import java.util.*;
import org.apache.ignite.cache.*;
import org.apache.ignite.cache.store.jdbc.*;
import org.apache.ignite.configuration.*;
/**
* CacheConfig definition.
*
* Code generated by Apache Ignite Schema Import utility: 10/21/2016.
*/
public class CacheConfig {
/**
* Create JDBC type for WAREHOUSE.
*
* @param cacheName Cache name.
* @return Configured JDBC type.
*/
private static JdbcType jdbcTypeWarehouse(String cacheName) {
JdbcType jdbcType = new JdbcType();
jdbcType.setCacheName(cacheName);
jdbcType.setDatabaseSchema("C##TPCCTEST");
jdbcType.setDatabaseTable("WAREHOUSE");
jdbcType.setKeyType("org.apache.ignite.WarehouseKey");
jdbcType.setValueType("org.apache.ignite.Warehouse");
// Key fields for WAREHOUSE.
Collection<JdbcTypeField> keys = new ArrayList<>();
keys.add(new JdbcTypeField(Types.INTEGER, "W_ID", int.class, "wId"));
jdbcType.setKeyFields(keys.toArray(new JdbcTypeField[keys.size()]));
// Value fields for WAREHOUSE.
Collection<JdbcTypeField> vals = new ArrayList<>();
vals.add(new JdbcTypeField(Types.INTEGER, "W_ID", int.class, "wId"));
vals.add(new JdbcTypeField(Types.VARCHAR, "W_NAME", String.class, "wName"));
vals.add(new JdbcTypeField(Types.VARCHAR, "W_STREET_1", String.class, "wStreet1"));
vals.add(new JdbcTypeField(Types.VARCHAR, "W_STREET_2", String.class, "wStreet2"));
vals.add(new JdbcTypeField(Types.VARCHAR, "W_CITY", String.class, "wCity"));
vals.add(new JdbcTypeField(Types.CHAR, "W_STATE", String.class, "wState"));
vals.add(new JdbcTypeField(Types.CHAR, "W_ZIP", String.class, "wZip"));
vals.add(new JdbcTypeField(Types.FLOAT, "W_TAX", Double.class, "wTax"));
vals.add(new JdbcTypeField(Types.FLOAT, "W_YTD", Double.class, "wYtd"));
jdbcType.setValueFields(vals.toArray(new JdbcTypeField[vals.size()]));
return jdbcType;
}
/**
* Create SQL Query descriptor for WAREHOUSE.
*
* @return Configured query entity.
*/
private static QueryEntity queryEntityWarehouse() {
QueryEntity qryEntity = new QueryEntity();
qryEntity.setKeyType("org.apache.ignite.WarehouseKey");
qryEntity.setValueType("org.apache.ignite.Warehouse");
// Query fields for WAREHOUSE.
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("wId", "java.lang.Integer");
fields.put("wName", "java.lang.String");
fields.put("wStreet1", "java.lang.String");
fields.put("wStreet2", "java.lang.String");
fields.put("wCity", "java.lang.String");
fields.put("wState", "java.lang.String");
fields.put("wZip", "java.lang.String");
fields.put("wTax", "java.lang.Double");
fields.put("wYtd", "java.lang.Double");
qryEntity.setFields(fields);
// Aliases for fields.
Map<String, String> aliases = new HashMap<>();
aliases.put("wId", "W_ID");
aliases.put("wName", "W_NAME");
aliases.put("wStreet1", "W_STREET_1");
aliases.put("wStreet2", "W_STREET_2");
aliases.put("wCity", "W_CITY");
aliases.put("wState", "W_STATE");
aliases.put("wZip", "W_ZIP");
aliases.put("wTax", "W_TAX");
aliases.put("wYtd", "W_YTD");
qryEntity.setAliases(aliases);
// Indexes for WAREHOUSE.
Collection<QueryIndex> idxs = new ArrayList<>();
idxs.add(new QueryIndex("wId", true, "SYS_C0011180"));
qryEntity.setIndexes(idxs);
return qryEntity;
}
/**
* Configure cache.
*
* @param cacheName Cache name.
* @param storeFactory Cache store factory.
* @return Cache configuration.
*/
public static <K, V> CacheConfiguration<K, V> cache(String cacheName, CacheJdbcPojoStoreFactory<K, V> storeFactory) {
if (storeFactory == null)
throw new IllegalArgumentException("Cache store factory cannot be null.");
CacheConfiguration<K, V> ccfg = new CacheConfiguration<>(cacheName);
ccfg.setCacheStoreFactory(storeFactory);
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);
// Configure JDBC types.
Collection<JdbcType> jdbcTypes = new ArrayList<>();
jdbcTypes.add(jdbcTypeWarehouse(cacheName));
storeFactory.setTypes(jdbcTypes.toArray(new JdbcType[jdbcTypes.size()]));
// Configure query entities.
Collection<QueryEntity> qryEntities = new ArrayList<>();
qryEntities.add(queryEntityWarehouse());
ccfg.setQueryEntities(qryEntities);
return ccfg;
}
}
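For reference, the exception complains about typeId=class WarehouseKey while the JdbcType above registers the key type as org.apache.ignite.WarehouseKey, so the key class passed to cache.get() has to carry exactly that fully qualified name. Below is a minimal read-through sketch under that assumption; it presumes the schema-import generated classes (WarehouseKey, Warehouse, CacheConfig) are on the classpath, and the data source bean name and the key setter are hypothetical.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.Warehouse;      // generated value class (assumed package per the config above)
import org.apache.ignite.WarehouseKey;   // generated key class (assumed package per the config above)
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;

public class ReadThroughCheck {
    public static void main(String[] args) {
        CacheJdbcPojoStoreFactory<WarehouseKey, Warehouse> storeFactory = new CacheJdbcPojoStoreFactory<>();
        storeFactory.setDataSourceBean("oracleDataSource"); // hypothetical data source bean name

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<WarehouseKey, Warehouse> cache =
                ignite.getOrCreateCache(CacheConfig.<WarehouseKey, Warehouse>cache("WarehouseCache", storeFactory));

            // The key must be an org.apache.ignite.WarehouseKey, i.e. exactly the class registered
            // via JdbcType.setKeyType(); a WarehouseKey with the same simple name but a different
            // package will not match the mapping and reproduces "Failed to find mapping description".
            WarehouseKey key = new WarehouseKey();
            key.setWId(1); // assumed setter generated by the schema-import utility
            System.out.println(cache.get(key));
        }
    }
}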

Related

Having the same configuration on the Ignite client when it already exists on the Ignite server

Here is my Ignite server configuration code:
#Bean("serverCfg")
public IgniteConfiguration createConfiguration() throws Exception {
IgniteConfiguration cfg = new IgniteConfiguration(); cfg.setIgniteInstanceName("CcPlatformUserRolesOrganizationAssociationServer");
cfg.setSqlSchemas("public");
TcpDiscoverySpi discovery = new TcpDiscoverySpi();
TcpDiscoveryMulticastIpFinder ipFinder = new
TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47510"));
discovery.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discovery);
// cfg.setPeerClassLoadingEnabled(true);
cfg.setCacheConfiguration(cacheOrganizationsCache()
,
cacheRolesCache(), cacheUsersCache(),
cacheUsersRolesCache(), cacheGroupsCache(),
cacheGroupusersCache(), cacheGlobalPermissionsCache(),
cacheTemplatesCache(), cachePasswordsCache()
);
return cfg;
}
And here is my Ignite client code:
@Bean
public Ignite createConfiguration() throws Exception {
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setIgniteInstanceName("CcPlatformUserRolesOrganizationAssociationServerClient");
TcpDiscoverySpi discovery = new TcpDiscoverySpi();
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47510"));
discovery.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discovery);
cfg.setCacheConfiguration(cacheOrganizationsCache(), cacheRolesCache(), cacheUsersCache(),
cacheUsersRolesCache(), cacheGroupsCache(), cacheGroupusersCache());
Ignite ignite = Ignition.start(cfg);
ignite.cluster().active(true);
return ignite;
}
So my question is: do I have to have the same piece of code, containing all cache configurations including the data source, on the client side as well?
How can I avoid this code redundancy?
You don't have to supply all cache configurations on the client. Once the first server node comes up, it will start all caches, and other nodes will be able to use them regardless of whether they have them in their own configs. Any new caches will be created when nodes join. Cache configurations are never changed when a new node joins with a config that differs from an existing cache's.
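To illustrate, here is a minimal client sketch under that behaviour (the cache name is hypothetical): it joins the cluster with no CacheConfiguration and no data source of its own, and simply looks the server-started cache up by name.

import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;

public class ThinCacheClient {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);

        TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47510"));
        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);
        cfg.setDiscoverySpi(discovery);

        // No CacheConfiguration (and no data source) on the client: the cache was already
        // started by the server node, so the client just looks it up by name.
        try (Ignite ignite = Ignition.start(cfg)) {
            IgniteCache<Long, Object> organizations = ignite.cache("OrganizationsCache"); // hypothetical cache name
            System.out.println("Cache size: " + organizations.size());
        }
    }
}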

RedisSystemException: java.lang.ClassCastException: [B cannot be cast to java.lang.Long

I hit this exception when using Jedis with spring-data-redis in a multithreaded environment:
org.springframework.data.redis.RedisSystemException: Unknown redis exception; nested exception is java.lang.ClassCastException: [B cannot be cast to java.lang.Long
at org.springframework.data.redis.FallbackExceptionTranslationStrategy.getFallback(FallbackExceptionTranslationStrategy.java:48)
at org.springframework.data.redis.FallbackExceptionTranslationStrategy.translate(FallbackExceptionTranslationStrategy.java:38)
at org.springframework.data.redis.connection.jedis.JedisConnection.convertJedisAccessException(JedisConnection.java:241)
at org.springframework.data.redis.connection.jedis.JedisConnection.rPush(JedisConnection.java:1705)
at org.springframework.data.redis.core.DefaultListOperations$14.doInRedis(DefaultListOperations.java:187)
at org.springframework.data.redis.core.DefaultListOperations$14.doInRedis(DefaultListOperations.java:184)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:207)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:169)
at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:91)
at org.springframework.data.redis.core.DefaultListOperations.rightPush(DefaultListOperations.java:184)
at XXXXXXXXXXXXXXX
Caused by: java.lang.ClassCastException: [B cannot be cast to java.lang.Long
at redis.clients.jedis.Connection.getIntegerReply(Connection.java:265)
at redis.clients.jedis.BinaryJedis.rpush(BinaryJedis.java:1053)
at org.springframework.data.redis.connection.jedis.JedisConnection.rPush(JedisConnection.java:1703)
... 19 common frames omitted
jedis version: 2.9.0
spring-data-redis version: 1.8.12.RELEASE
redis server version: 3.0.6
My Client Java Code:
// Init JedisConnectionFactory
JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory();
jedisPoolConfig.setMaxTotal(maxActive);
jedisPoolConfig.setMaxIdle(maxIdle);
jedisPoolConfig.setMaxWaitMillis(maxWait);
jedisPoolConfig.setTestOnBorrow(true);
jedisConnectionFactory.setPoolConfig(jedisPoolConfig);
jedisConnectionFactory.setHostName(host);
jedisConnectionFactory.setPort(port);
jedisConnectionFactory.setTimeout(timeout);
jedisConnectionFactory.setPassword(password);
jedisConnectionFactory.afterPropertiesSet();
// Create RedisTemplate
redisTemplate = new RedisTemplate<String, Object>();
redisTemplate.setConnectionFactory(jedisConnectionFactory);
redisTemplate.setEnableTransactionSupport(true);
StringRedisSerializer serializer = new StringRedisSerializer();
redisTemplate.setKeySerializer(serializer);
redisTemplate.setValueSerializer(serializer);
redisTemplate.setHashKeySerializer(serializer);
redisTemplate.setHashValueSerializer(serializer);
redisTemplate.afterPropertiesSet();
Finally, I solved my problem by removing this line, after reading the source code of spring-data-redis:
redisTemplate.setEnableTransactionSupport(true);
You should share the pool and get a different Jedis from it in every thread.
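A minimal sketch of that pattern with plain Jedis (host, port, and key are placeholders): one shared JedisPool for the application, with each worker thread borrowing its own Jedis for the duration of a command and returning it via try-with-resources.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class SharedPoolExample {
    public static void main(String[] args) {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(16);

        // One pool for the whole application; individual Jedis instances are not thread-safe.
        JedisPool pool = new JedisPool(poolConfig, "localhost", 6379);

        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            final int n = i;
            workers.submit(() -> {
                // Each thread borrows its own connection and returns it to the pool when done.
                try (Jedis jedis = pool.getResource()) {
                    jedis.rpush("myList", "value-" + n);
                }
            });
        }

        workers.shutdown();
        // ... awaitTermination, then pool.close() on application shutdown.
    }
}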

Apache camel SSL connection to restful service

I am busy with a project where I have to do a GET on an exposed REST service using specific certificates. I am using the Apache Camel framework with the https4 component. I created a keystore and tested it with SoapUI, which connected successfully, but I am unable to connect through my project.
I used the following page as reference: http://camel.apache.org/http4.html
I set up the SSL for the HTTP Client through the following configuration:
<spring:sslContextParameters id="sslContextParameters">
<spring:keyManagers keyPassword="xxxx">
<spring:keyStore resource="classpath:certificates/keystore.jks" password="xxxx"/>
</spring:keyManagers>
</spring:sslContextParameters>
<setHeader headerName="CamelHttpMethod">
<simple>GET</simple>
</setHeader>
My endpoint is configured as:
<to uri="https4://endpointUrl:9007/v1/{id}?sslContextParametersRef=sslContextParameters"/>
The stacktrace I am receiving:
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1904)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:279)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:273)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1446)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:209)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:901)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:837)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1023)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:394)
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:353)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:141)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at org.apache.camel.component.http4.HttpProducer.executeMethod(HttpProducer.java:301)
at org.apache.camel.component.http4.HttpProducer.process(HttpProducer.java:173)
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:145)
at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:163)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:468)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:197)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:121)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:83)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:197)
at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:62)
at org.apache.camel.impl.InterceptSendToEndpoint$1.process(InterceptSendToEndpoint.java:164)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:145)
at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:163)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:468)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:197)
at org.apache.camel.processor.ChoiceProcessor.process(ChoiceProcessor.java:117)
at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:163)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:468)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:197)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:121)
at org.apache.camel.processor.Pipeline.access$100(Pipeline.java:44)
at org.apache.camel.processor.Pipeline$1.done(Pipeline.java:139)
at org.apache.camel.processor.CamelInternalProcessor$InternalCallback.done(CamelInternalProcessor.java:257)
at org.apache.camel.processor.RedeliveryErrorHandler$1.done(RedeliveryErrorHandler.java:480)
at org.apache.camel.processor.interceptor.TraceInterceptor$1.done(TraceInterceptor.java:180)
at org.apache.camel.processor.SendProcessor$1.done(SendProcessor.java:155)
at org.apache.camel.processor.CamelInternalProcessor$InternalCallback.done(CamelInternalProcessor.java:257)
at org.apache.camel.processor.Pipeline$1.done(Pipeline.java:148)
at org.apache.camel.processor.CamelInternalProcessor$InternalCallback.done(CamelInternalProcessor.java:257)
at org.apache.camel.processor.RedeliveryErrorHandler$1.done(RedeliveryErrorHandler.java:480)
at org.apache.camel.processor.interceptor.TraceInterceptor$1.done(TraceInterceptor.java:180)
at org.apache.camel.processor.SendProcessor$1.done(SendProcessor.java:155)
at org.apache.camel.component.cxf.CxfClientCallback.handleResponse(CxfClientCallback.java:61)
at org.apache.cxf.endpoint.ClientImpl.onMessage(ClientImpl.java:827)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1672)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream$1.run(HTTPConduit.java:1168)
at org.apache.cxf.workqueue.AutomaticWorkQueueImpl$3.run(AutomaticWorkQueueImpl.java:428)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at org.apache.cxf.workqueue.AutomaticWorkQueueImpl$AWQThreadFactory$1.run(AutomaticWorkQueueImpl.java:353)
at java.lang.Thread.run(Thread.java:745)
Any help would be much appreciated!
Same here: I followed the documented instructions and also got stuck on "PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target". There is a quick fix, but if you want to tie the configuration to the specific client HTTP session at stake, it becomes a complex set-up.
Method 1:
Doc pages, forums, and this other article will tell you that setting the JVM launch options "-Djavax.net.ssl.trustStore=myKeystore.jks -Djavax.net.ssl.trustStorePassword=mystorepass" does solve the issue, provided the remote parties' certificates (self-signed, or CA-signed together with the full certificate chain) have all been imported as trusted certificates into the supplied keystore. The fact is, HTTP4 is based on JSSE, and these Java launch options configure the stack JVM-wide.
As an alternative, you can also import the peers' certificates (complete chains) into the default JVM keystore jre\lib\security\cacerts (initial password: "changeit") and thus not even need the JVM options.
If you have a few outgoing client connections and a few peer certificates, this is the simplest way.
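If you prefer to apply Method 1 programmatically rather than via launch flags, the equivalent is to set the same JSSE system properties before the first HTTPS call is made; a minimal sketch (the path and password are placeholders):

public class TrustStoreBootstrap {
    public static void main(String[] args) {
        // JVM-wide truststore used by JSSE (and therefore by camel-http4);
        // must be set before the first SSL handshake is attempted.
        System.setProperty("javax.net.ssl.trustStore", "/etc/myapp/myKeystore.jks"); // placeholder path
        System.setProperty("javax.net.ssl.trustStorePassword", "mystorepass");       // placeholder password

        // ... start the Spring application context / Camel routes after this point.
    }
}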
Method 2:
In our context, with more than 100 remote parties, each requiring certificate updates on average every two years, that method implies a JVM restart on an updated keystore roughly every week. Our highly available gateway is no longer highly available. So I searched for a dynamic, per-connection, programmatic way.
Below is a simplified excerpt of code from a Camel Processor that we use to connect to remote parties as a REST or plain-vanilla HTTP client, with or without SSL/TLS, and with or without a client-side certificate (i.e. 2-way SSL/TLS versus 1-way SSL/TLS), and which can also combine HTTP Basic Auth as required by peers.
For various reasons the now-old Camel version 2.16.3 is still used in our context. I have not yet tested newer versions, but I suspect no changes given the libraries at stake under the Apache Camel layer.
I have added many comments in the code below detailing variant APIs to the same effect, so you have clues to further simplify the code or try alternatives with newer HTTP4 versions. As is, the code works with 2.16, as a Camel Processor bean within a Spring application context that contains the entire Camel route definition in the DSL.
In our context we use Java code to configure entirely dynamic SSL/TLS outbound connections per session. You should have no difficulty freezing part of the configuration, which we set dynamically via Java below, into the Camel XML DSL as suitable for your context.
Maven dependencies at stake:
<properties>
<camel-version>2.16.3</camel-version>
</properties>
...
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-core</artifactId>
<version>${camel-version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-http4</artifactId>
<version>${camel-version}</version>
<scope>provided</scope>
</dependency>
Code extracted from our org.apache.camel.Processor (I have removed much of the exception handling and simplified the code below in order to focus on the solution):
// relevant imports (partial)
import java.security.KeyStore;
import java.security.SecureRandom;
import java.security.Security;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.component.http4.HttpClientConfigurer;
import org.apache.camel.component.http4.HttpComponent;
import org.apache.http.config.Registry;
import org.apache.http.config.RegistryBuilder;
import org.apache.http.conn.HttpClientConnectionManager;
import org.apache.http.conn.socket.ConnectionSocketFactory;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.impl.conn.BasicHttpClientConnectionManager;
...
@Override
public void process(Exchange exchange) throws Exception {
// assume here that we have previously fetched all dynamic connection parameters into a set of java Properties. Of course you can use numerous means to inject connection parameters
Properties params= ... ;
// Trick! 'targetURL' is the URI of the http server to call. It's not the same as the Camel endpoint URI (see further "httpUrlToken" placeholder), on which you configure endpoint options
// Fact is, we prefer to pass just the target URL as a parameter and keep full control over building the Camel endpoint URI in java
String targetURL= params.getProperty("targetURL"); // URL to call, e.g. "http://remoteHost.com/some/servlet/path". Will override the placeholder URL set on the endpoint.
// default plain HTTP without SSL/TLS:
String endPointURI = "http4://httpUrlToken?throwExceptionOnFailure=false"; // with option to prevent exceptions from being thrown for failed response codes. It allows us to process all the response codes in a response Processor
// Oh yes! we have to manage a map of HttpComponent instances, because the Camel doc clearly states that each instance can only support a single configuration
// and our true connector is multithreaded, where each request may go to a different (dynamic) destination with different SSL settings,
// so we actually use a Map of HttpComponent instances of size MAX_THREADS, indexed by the thread ID, plus ageing and re-use strategies... but that goes too far here.
// So, for a single thread per client instance, you can just do:
HttpComponent httpComponent = exchange.getContext().getComponent("http4", HttpComponent.class);
// overload in case of SSL/TLS
if (targetURL.startsWith("https")) {
try {
endPointURI = "https4://httpUrlToken?throwExceptionOnFailure=false";
httpComponent = exchange.getContext().getComponent("https4", HttpComponent.class); // well: "https4" and "http4" are the same, so you may skip this line! (our true HttpComponent map is common to secured and unsecured client connections)
// basic SSL context setup as documented elsewhere, should be enough in theory
SSLContext sslctxt = getSSLContext(exchange, params.getProperty("keystoreFilePath"), params.getProperty("keystorePassword"), params.getProperty("authenticationMode")); // cfr helper method below
HttpClientConfigurer httpClientConfig = getEndpointClientConfigurer(sslctxt); // cfr helper method below
httpComponent.setHttpClientConfigurer(httpClientConfig);
// from here, if you skip the rest of the configuration, you'll get the exception "sun.security.provider.certpath.SunCertPathBuilderException:unable to find valid certification path to requested target"
// the SSL context covers certificate validation but not the host name verification process
// we de-activate it here at the connection factory level (systematically... you may not want that), and link the latter to the HTTP component
HostnameVerifier hnv = new AllowAll();
SSLConnectionSocketFactory sslSocketFactory = new SSLConnectionSocketFactory(sslctxt, hnv);
// You may choose to enforce the BasicHttpClientConnectionManager or PoolingHttpClientConnectionManager, cfr CAMEL docs
// In addition, the following linkage of the connection factory through a Registry that captures the 'https' scheme to your factory is required
Registry<ConnectionSocketFactory> lookup = RegistryBuilder.<ConnectionSocketFactory>create().register("https", sslSocketFactory).build();
HttpClientConnectionManager connManager = new BasicHttpClientConnectionManager(lookup);
// Does not work in 2.16, as documented at http://camel.apache.org/http4.html#HTTP4-UsingtheJSSEConfigurationUtility
// ... keystore and key manager setup ...
// SSLContextParameters scp = new SSLContextParameters();
// scp.setKeyManagers(...);
// httpComponent.setSslContextParameters(scp);
// Not as good as using a connection manager on the HTTP component, although same effects in theory
// HttpClientBuilder clientBuilder = HttpClientBuilder.create();
// clientBuilder.set... various parameters...
// httpClientConfig.configureHttpClient(clientBuilder);
// Commented-out alternative method to set BasicAuth with user and password
// HttpConfiguration httpConfiguration = new HttpConfiguration();
// httpConfiguration.setAuthUsername(authUsername);
// ... more settings ...
// httpComponent.setHttpConfiguration(httpConfiguration);
// setClientConnectionManager() is compulsory to prevent "SunCertPathBuilderException: unable to find valid certification path to requested target"
// if instead we bind the connection manager to a clientBuilder, that doesn't work...
httpComponent.setClientConnectionManager(connManager);
} catch (Exception e) { ... ; }
}
// (back to code common to secured and unsecured client sessions)
// additional parameters on the endpoint as needed, cfr API docs
httpComponent.set...(...) ;
// you may want to append these 3 URI options in case of HTTP[S] with Basic Auth
if (... basic Auth needed ...)
endPointURI += "&authUsername="+params.getProperty("user")+"&authPassword="+params.getProperty("password")+"&authenticationPreemptive=true";
// *********** ACTUAL TRANSMISSION ********************
exchange.getIn().setHeader(Exchange.HTTP_URI, targetURL); // needed to overload the "httpUrlToken" placeholder in the endPointURI
// Next, there are many ways to get a CAMEL Producer or ProducerTemplate
// e.g. httpComponent.createEndpoint(endPointURI).createProducer()
// ... in our case we use a template injected from a Spring application context (i.e. <camel:template id="producerTemplate"/>) via constructor arguments on our Processor bean
try {
producerTemplate.send(httpComponent.createEndpoint(endPointURI),exchange);
} catch (Exception e) { ...; }
// you can then process the HTTP response here, or better dedicate the next
// Processor on the CAMEL route to such handlings...
...
}
Supporting helper methods, invoked by above code
private HttpClientConfigurer getEndpointClientConfigurer(final SSLContext sslContext) {
return new HttpClientConfigurer(){
@Override
public void configureHttpClient(HttpClientBuilder clientBuilder) {
// I put a logger trace here to see if/when the ssl context is actually applied, the outcome was ... weird, try it!
clientBuilder.setSSLContext(sslContext);
}
};
}
/**
* Build a SSL context with keystore and other parameters according to authentication mode.
* The keystore may just contain a trusted peer's certificate for 1way cases, and the associated certificate chain up to a trusted root as applicable.
* The keystore shall too contain one single client private key and certificate for 2way modes. We assume here a same password on keystore and private key.
* @param authenticationMode one of "1waySSL" "1wayTLS" "2waySSL" "2wayTLS" each possibly suffixed by "noCHECK" as in "1waySSLnoCHECK"
* @param keystoreFilePath can be null for "noCHECK" modes
* @param keystorePassword would be null if above is null
*/
private SSLContext getSSLContext(Exchange exchange, String keystoreFilePath, String keystorePassword, String authenticationMode) throws GeneralSecurityException, FileNotFoundException, IOException {
SSLContext sslContext = SSLContext.getInstance(authenticationMode.substring(4,7).toUpperCase(),"SunJSSE");
//enforce Trust ALL ? pass a trust manager that does not validate certificate chains
if (authenticationMode.endsWith("noCHECK")) {
TrustManager[] trustAllCerts = new TrustManager[]{ new TrustALLManager()};
sslContext.init(null , trustAllCerts, null);
return sslContext;
}
// we use https and validate remote certs by default, hence keystore and password become compulsory
if (null == keystoreFilePath || null == keystorePassword)
throw new GeneralSecurityException("Config ERROR: using https://... and implicit default AUTHMODE=1waySSL altogether requires to supply keystore parameters");
KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509");
trustStore.load(new FileInputStream(keystoreFilePath), keystorePassword.toCharArray());
tmf.init(trustStore);
KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509");
if (authenticationMode.charAt(0)=='2') { // our authenticationMode starts with 1way.. or 2way...
// 2way... case: set the keystore parameters accordingly
keyStore.load(new FileInputStream(keystoreFilePath), keystorePassword.toCharArray());
kmf.init(keyStore, keystorePassword.toCharArray());
sslContext.init(kmf.getKeyManagers() , tmf.getTrustManagers(), new SecureRandom());
} else { // 1way... case
sslContext.init(null , tmf.getTrustManagers(), new SecureRandom());
}
return sslContext;
}
// Create a trust manager that does not validate certificate chains
private class TrustALLManager implements X509TrustManager {
@Override
public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException { }
@Override
public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException { }
@Override
public X509Certificate[] getAcceptedIssuers() {
return new X509Certificate[0];
}
}
private static class AllowAll implements HostnameVerifier
{
@Override
public boolean verify(String arg0, SSLSession arg1) {
return true;
}
}
}
Hope this helps. I spent many hours trying to get it working (although I know SSL/TLS principles, security, X509, etc. well)... This code is far from my taste for clean and lean Java code. In addition, I assumed that you know how to build a keystore, supply all needed certificate chains, define a Camel route, etc. As such, it works with Camel 2.16 within a Spring application context, and has no other pretension than providing clues that could save you hours.

How to restore cache after Ignite server reconnected

I would really appreciate it if someone could help me out.
I have an Ignite server written in Java and a client written in C#; the client can connect to the server and read the server's cache correctly.
Once the server is restarted, the client receives the EVT_CLIENT_NODE_RECONNECTED event from the server, but the cache can no longer be used.
Server code:
CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.setName("Sample");
cacheConfiguration.setCacheMode(CacheMode.REPLICATED);
cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfiguration.setRebalanceMode(CacheRebalanceMode.ASYNC);
cacheConfiguration.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheConfiguration.setBackups(0);
cacheConfiguration.setCopyOnRead(true);
cacheConfiguration.setStoreKeepBinary(false);
cacheConfiguration.setReadThrough(false);
cacheConfiguration.setWriteThrough(true);
cacheConfiguration.setWriteBehindEnabled(true);
cacheConfiguration.setWriteBehindFlushFrequency(2000);
cacheConfiguration.setWriteBehindFlushThreadCount(2);
DriverManagerDataSource theDataSource = new DriverManagerDataSource();
theDataSource.setDriverClassName("org.postgresql.Driver");
theDataSource.setUrl("jdbc:postgresql://192.168.224.128:5432/sample");
theDataSource.setUsername("postgres");
theDataSource.setPassword("password");
CacheJdbcPojoStoreFactory jdbcPojoStoreFactory = new CacheJdbcPojoStoreFactory<Long, SampleModel>()
.setParallelLoadCacheMinimumThreshold(0)
.setMaximumPoolSize(1)
.setDataSource(theDataSource);
cacheConfiguration.setCacheStoreFactory(jdbcPojoStoreFactory);
Collection<JdbcType> jdbcTypes = new ArrayList<JdbcType>();
JdbcType jdbcType = new JdbcType();
jdbcType.setCacheName("Sample");
jdbcType.setDatabaseSchema("public");
jdbcType.setKeyType("java.lang.Long");
Collection<JdbcTypeField> keys = new ArrayList<JdbcTypeField>();
keys.add(new JdbcTypeField(Types.BIGINT, "id", long.class, "id"));
jdbcType.setKeyFields(keys.toArray(new JdbcTypeField[keys.size()]));
Collection<JdbcTypeField> vals = new ArrayList<JdbcTypeField>();
jdbcType.setDatabaseTable("sample");
jdbcType.setValueType("com.nmf.SampleModel");
vals.add(new JdbcTypeField(Types.BIGINT, "id", long.class, "id"));
vals.add(new JdbcTypeField(Types.VARCHAR, "name", String.class, "name"));
jdbcType.setValueFields(vals.toArray(new JdbcTypeField[vals.size()]));
jdbcTypes.add(jdbcType);
((CacheJdbcPojoStoreFactory)cacheConfiguration.getCacheStoreFactory()).setTypes(jdbcTypes.toArray(new JdbcType[jdbcTypes.size()]));
IgniteConfiguration icfg = new IgniteConfiguration();
icfg.setCacheConfiguration(cacheConfiguration);
Ignite ignite = Ignition.start(icfg);
SampleModel:
public class SampleModel implements Serializable {
private long id;
private String Name;
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
public String getName() {
return Name;
}
public void setName(String name) {
Name = name;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof SampleModel)) return false;
SampleModel that = (SampleModel) o;
return id == that.id;
}
@Override
public int hashCode() {
return (int) (id ^ (id >>> 32));
}
}
Client Code:
ExecutorService executor = Executors.newSingleThreadExecutor(r -> new Thread(r, "worker"));
CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.setName("Sample");
cacheConfiguration.setCacheMode(CacheMode.REPLICATED);
cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfiguration.setRebalanceMode(CacheRebalanceMode.ASYNC);
cacheConfiguration.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheConfiguration.setBackups(0);
cacheConfiguration.setCopyOnRead(true);
cacheConfiguration.setStoreKeepBinary(false);
IgniteConfiguration icfg = new IgniteConfiguration();
icfg.setCacheConfiguration(cacheConfiguration);
icfg.setClientMode(true);
final Ignite ignite = Ignition.start(icfg);
ignite.events().localListen(new IgnitePredicate<Event>() {
public boolean apply(Event event) {
if (event.type() == EVT_CLIENT_NODE_RECONNECTED) {
System.out.println("Reconnected");
executor.submit(()-> {
IgniteCache<Long, SampleModel> cache = ignite.getOrCreateCache("Sample");
System.out.println("Got the cache");
SampleModel model = cache.get(1L);
System.out.println(model.getName());
});
}
return true;
}
}, EVT_CLIENT_NODE_RECONNECTED);
IgniteCache<Long, SampleModel> cache = ignite.getOrCreateCache("Sample");
SampleModel model = cache.get(1L);
System.out.println(model.getName());
Error log on Client:
SEVERE: Failed to reinitialize local partitions (preloading will be stopped): GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], nodeId=dea5f59b, evt=DISCOVERY_CUSTOM_EVT]
class org.apache.ignite.IgniteCheckedException: Failed to start component: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8726)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1486)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1931)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1833)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:379)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:688)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:529)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1806)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.start(CacheAbstractJdbcStore.java:298)
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8722)
... 9 more
July 25, 2017 12:58:38 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to wait for completion of partition map exchange (preloading will not start): GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false, reassign=false, discoEvt=DiscoveryCustomEvent [customMsg=null, affTopVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=dea5f59b-bdda-47a1-b31d-1ecb08fc746f, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.224.1, 192.168.6.15, 192.168.80.1, 2001:0:9d38:90d7:c83:fac:98d7:5fc1], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, Ares-W11/169.254.194.93:0, /2001:0:9d38:90d7:c83:fac:98d7:5fc1:0, /192.168.6.15:0, windows10.microdone.cn/192.168.224.1:0, /192.168.80.1:0], discPort=0, order=2, intOrder=0, lastExchangeTime=1500958697559, loc=true, ver=2.0.0#20170430-sha1:d4eef3c6, isClient=true], topVer=2, nodeId8=dea5f59b, msg=null, type=DISCOVERY_CUSTOM_EVT, tstamp=1500958718133]], crd=TcpDiscoveryNode [id=247d2926-010d-429b-aef2-97a18fbb3b5d, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.224.1, 192.168.6.15, 192.168.80.1, 2001:0:9d38:90d7:c83:fac:98d7:5fc1], sockAddrs=[/192.168.6.15:47500, /2001:0:9d38:90d7:c83:fac:98d7:5fc1:47500, windows10.microdone.cn/192.168.224.1:47500, /192.168.80.1:47500, Ares-W11/169.254.194.93:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1500958718083, loc=false, ver=2.0.0#20170430-sha1:d4eef3c6, isClient=false], exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], nodeId=dea5f59b, evt=DISCOVERY_CUSTOM_EVT], added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=false, hash=842035444], init=false, lastVer=null, partReleaseFut=null, affChangeMsg=null, skipPreload=true, clientOnlyExchange=false, initTs=1500958718133, centralizedAff=false, changeGlobalStateE=null, exchangeOnChangeGlobalState=false, forcedRebFut=null, evtLatch=0, remaining=[247d2926-010d-429b-aef2-97a18fbb3b5d], srvNodes=[TcpDiscoveryNode [id=247d2926-010d-429b-aef2-97a18fbb3b5d, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.224.1, 192.168.6.15, 192.168.80.1, 2001:0:9d38:90d7:c83:fac:98d7:5fc1], sockAddrs=[/192.168.6.15:47500, /2001:0:9d38:90d7:c83:fac:98d7:5fc1:47500, windows10.microdone.cn/192.168.224.1:47500, /192.168.80.1:47500, Ares-W11/169.254.194.93:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1500958718083, loc=false, ver=2.0.0#20170430-sha1:d4eef3c6, isClient=false]], super=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=class o.a.i.IgniteCheckedException: Failed to start component: class o.a.i.IgniteException: Failed to initialize cache store (data source is not provided)., hash=1281081640]]
class org.apache.ignite.IgniteCheckedException: Failed to start component: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8726)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1486)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1931)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1833)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:379)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:688)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:529)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1806)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.start(CacheAbstractJdbcStore.java:298)
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8722)
... 9 more
Only server nodes store caches (except for LOCAL caches), so when you restarted the server node, this cache was stopped. The problem here is that the client node was reconnected to the cluster, but did not join it as a new node; that's why the cache was not created again.
I think this is wrong behavior and the cache should be recreated when the client reconnects.
I've created an issue for that.
As a workaround, you can use the Ignite.GetOrCreateCache("Sample") method instead of Ignite.GetCache("Sample").
Are you still having issues where Ignite.GetOrCreateCache("Sample") hangs? Make sure you aren't making that call from a thread in the System Pool. I was listening for the EVT_CLIENT_NODE_RECONNECTED event and calling Ignite.GetOrCreateCache("Sample") when I ran into a similar issue. For more information, see the answer to this question: Closures stuck in 2.0 when try to add an element into the queue

Unable to access a read-only Embedded Derby database from within EAR file deployed on JBoss server

I am trying to access a read-only embedded Derby database. It is available as myDB.jar; this jar contains one folder of the Apache Derby database, myDB (the log and seg0 folders and the service.properties file). The code works fine when I run it from a class with a main method, but when I package it into an EAR and deploy it on the server, it gives an error.
This database is packaged inside the EAR file and deployed on a JBoss 5.0.1 server.
The EAR has following contents:
• myWebApp.war
• myEjbs.jar
• myDB.jar
• META-INF/MANIFEST.MF and META-INF/application.xml
Contents of MANIFEST.MF:
Manifest-Version: 1.0
Class-Path: myDB.jar
myDB.jar is not registered in application.xml
EJB-JAR i.e. myEjbs.jar has the following contents:
• derby.properties
• META-INF/MANIFEST.MF and others such as persistence.xml, etc.
Contents of MANIFEST.MF:
Manifest-Version: 1.0
Class-Path: myDB.jar
• com.xxx.common.DbUtility.class that has the following code accessing the database:
private static String dbURL = "jdbc:derby:jar:(myDB.jar)";
private static String dbName = "myDB";
private static String user = "";
private static String password = "";
Connection con = DriverManager.getConnection(dbURL+ dbName, user, password);
The output of this class is then used by the EJBs in com.xxx.ejbs package.
Following is the error I get:
INFO Loaded database driver: org.apache.derby.jdbc.EmbeddedDriver
INFO SQLException: Failed to start database 'jar:(myDB.jar)myDB' with class loader BaseClassLoader#127627{vfsfile:/C:/jboss-5.0.1.GA/server/default/conf/jboss-service.xml}, see the next exception for details.
INFO java.sql.SQLException: Failed to start database 'jar:(myDB.jar)myDB' with class loader BaseClassLoader#127627{vfsfile:/C:/jboss-5.0.1.GA/server/default/conf/jboss-service.xml}, see the next exception for details.
INFO at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
INFO [STDOUT] (http-127.0.0.1-8080-1) at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
INFO at java.sql.DriverManager.getConnection(DriverManager.java:582)
INFO at java.sql.DriverManager.getConnection(DriverManager.java:185)
INFO Caused by: java.sql.SQLException: Failed to start database 'jar:(myDB.jar)myDB' with class loader BaseClassLoader#127627{vfsfile:/C:/jboss-5.0.1.GA/server/default/conf/jboss-service.xml}, see the next exception for details.
INFO at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
INFO at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
INFO Caused by: java.sql.SQLException: Java exception: 'myDB.jar (The system cannot find the file specified): java.io.FileNotFoundException'.
INFO at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
INFO at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
INFO at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
INFO at org.apache.derby.impl.jdbc.Util.javaException(Unknown Source)
INFO Caused by: java.io.FileNotFoundException: myDB.jar (The system cannot find the file specified)
INFO at java.util.zip.ZipFile.open(Native Method)
INFO at java.util.zip.ZipFile.<init>(ZipFile.java:114)
INFO at java.util.zip.ZipFile.<init>(ZipFile.java:131)
INFO at org.apache.derby.impl.io.JarStorageFactory.doInit(Unknown Source)
INFO at org.apache.derby.impl.io.BaseStorageFactory.init(Unknown Source)
Thank you for your reply. I have now tried the following:
(I)
String path = getClass().getClassLoader().getResource("myDB.jar").getPath();
System.out.println("Path found = " + path);
private static String dbURL = "jdbc:derby:jar:" + "(" + path + ")";
private static String dbName = "myDB";
private static String user = "";
private static String password = "";
Connection con = DriverManager.getConnection(dbURL+dbName, user, password);
It still gives the same error. Following is the server log.
INFO Path found = /C:/jboss-5.0.1.GA/server/default/deploy/Main.ear/myDB.jar/
INFO Loaded database driver: org.apache.derby.jdbc.EmbeddedDriver
INFO SQLException: Failed to start database 'jar:(/C:/jboss-5.0.1.GA/server/default/deploy/Main.ear/myDB.jar/)myDB' with class loader BaseClassLoader#e6c6d7{vfsfile:/C:/jboss-5.0.1.GA/server/default/conf/jboss-service.xml}, see the next exception for details.
INFO at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
INFO Caused by: java.sql.SQLException: Java exception: 'C:\jboss-5.0.1.GA\server\default\deploy\Main.ear\ myDB.jar (The system cannot find the path specified): java.io.FileNotFoundException'.
INFO at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
INFO Caused by: java.io.FileNotFoundException: C:\jboss-5.0.1.GA\server\default\deploy\Main.ear\ myDB.jar (The system cannot find the path specified)
INFO at java.util.zip.ZipFile.open(Native Method)
INFO at java.util.zip.ZipFile.<init>(ZipFile.java:114)
INFO at java.util.zip.ZipFile.<init>(ZipFile.java:131)
INFO at org.apache.derby.impl.io.JarStorageFactory.doInit(Unknown Source)
Following is the output when classes are being loaded by JBoss initially:
BaseClassLoader#a75818{vfszip:/C:/jboss-5.0.1.GA/server/default/deploy/Main.ear/} with policy VFSClassLoaderPolicy#88a588{name=vfszip:/C:/jboss-5.0.1.GA/server/default/deploy/Main.ear/ domain=null roots=[MemoryContextHandler#19639558[path= context=vfsmemory://ak42v-bfhwq-ger46v84-1-ger477uj-20 real=vfsmemory://ak42v-bfhwq-ger46v84-1-ger477uj-20], DelegatingHandler#7111491[path=Main.ear context=file:/C:/jboss-5.0.1.GA/server/default/deploy/ real=file:/C:/jboss-5.0.1.GA/server/default/deploy/Main.ear], DelegatingHandler#1948811[path=Main.ear/myEJBs.jar context=file:/C:/jboss-5.0.1.GA/server/default/deploy/ real=file:/C:/jboss-5.0.1.GA/server/default/deploy/Main.ear/myEJBs.jar], DelegatingHandler#4545587[path=Main.ear/ myDB.jar context=file:/C:/jboss-5.0.1.GA/server/default/deploy/ real=file:/C:/jboss-5.0.1.GA/server/default/deploy/Main.ear/ myDB.jar], com.xxx.common, com.xxx.ejb, myDB, myDB.seg0, META-INF, myDB.log, …
So it looks like myDB.jar is on the classpath and the database folder myDB is also loaded.
(II)
Then I tried the following:
private static String dbURL = "jdbc:derby:/";
private static String dbName = "myDB";
private static String user = "";
private static String password = "";
Connection con = DriverManager.getConnection(dbURL+dbName, user, password);
I again get an error, but now I do not get the FileNotFoundException:
INFO java.sql.SQLException: Database '/myDB' not found.
INFO Caused by: java.sql.SQLException: Database '/myDB' not found.
It looks like you have pointed me in the right direction, but I am not able to find the reason for this error.
(III)
I also tried the following:
private static String dbURL = "jdbc:derby:"; // no trailing slash
private static String dbName = "myDB";
private static String user = "";
private static String password = "";
Connection con = DriverManager.getConnection(dbURL+dbName, user, password);
But I get the same SQLException.
Is it possible that JBoss is treating myDB as a Java package and not a simple file folder?
The following worked:
private static String dbURL = "jdbc:derby:classpath:/";
private static String dbName = "myDB";
private static String user = "";
private static String password = "";
Connection con = DriverManager.getConnection(dbURL+dbName, user, password);
Thank you so much for leading me in the right direction. Appreciate your help!!!
Since your database jar is inside the same EAR package, I think it is supposed to be "in the classpath", so you should try following the "in the classpath" section of the docs at http://db.apache.org/derby/docs/10.6/devguide/cdevdvlp24155.html#cdevdvlp24155.
That is, I don't think you want to use the "jar" sub-protocol.
Alternatively, if you are going to use the "jar" sub-protocol, then I think the part inside the parentheses should be the full filesystem path of your EAR file.
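For completeness, here is a minimal standalone sketch of the classpath subprotocol that finally worked above; the explicit driver load mirrors the original code (optional with JDBC 4 drivers) and the sanity-check query is illustrative.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DerbyClasspathLookup {
    public static void main(String[] args) throws Exception {
        // Explicit driver load, as in the original code (optional with JDBC 4 drivers).
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");

        // The myDB folder (with its log/ and seg0/ subfolders) sits at the root of myDB.jar,
        // which is on the classpath, so the classpath subprotocol resolves it without any filesystem path.
        try (Connection con = DriverManager.getConnection("jdbc:derby:classpath:/myDB");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1 FROM SYSIBM.SYSDUMMY1")) { // illustrative sanity check
            while (rs.next()) {
                System.out.println("Connected, got: " + rs.getInt(1));
            }
        }
    }
}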