How to restore cache after Ignite server reconnected

I'd really appreciate it if someone could help me out.
I have an Ignite server written in Java and a client written in C#. The client can connect to the server and read the server's cache correctly.
But once the server is restarted, the client receives the EVT_CLIENT_NODE_RECONNECTED event from the server, and the cache can no longer be used.
Server code:
// Replicated cache with write-through/write-behind persistence
CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.setName("Sample");
cacheConfiguration.setCacheMode(CacheMode.REPLICATED);
cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfiguration.setRebalanceMode(CacheRebalanceMode.ASYNC);
cacheConfiguration.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheConfiguration.setBackups(0);
cacheConfiguration.setCopyOnRead(true);
cacheConfiguration.setStoreKeepBinary(false);
cacheConfiguration.setReadThrough(false);
cacheConfiguration.setWriteThrough(true);
cacheConfiguration.setWriteBehindEnabled(true);
cacheConfiguration.setWriteBehindFlushFrequency(2000);
cacheConfiguration.setWriteBehindFlushThreadCount(2);
// JDBC data source for the backing PostgreSQL database
DriverManagerDataSource theDataSource = new DriverManagerDataSource();
theDataSource.setDriverClassName("org.postgresql.Driver");
theDataSource.setUrl("jdbc:postgresql://192.168.224.128:5432/sample");
theDataSource.setUsername("postgres");
theDataSource.setPassword("password");
CacheJdbcPojoStoreFactory jdbcPojoStoreFactory = new CacheJdbcPojoStoreFactory<Long, SampleModel>()
.setParallelLoadCacheMinimumThreshold(0)
.setMaximumPoolSize(1)
.setDataSource(theDataSource);
cacheConfiguration.setCacheStoreFactory(jdbcPojoStoreFactory);
Collection<JdbcType> jdbcTypes = new ArrayList<JdbcType>();
JdbcType jdbcType = new JdbcType();
jdbcType.setCacheName("Sample");
jdbcType.setDatabaseSchema("public");
jdbcType.setKeyType("java.lang.Long");
Collection<JdbcTypeField> keys = new ArrayList<JdbcTypeField>();
keys.add(new JdbcTypeField(Types.BIGINT, "id", long.class, "id"));
jdbcType.setKeyFields(keys.toArray(new JdbcTypeField[keys.size()]));
Collection<JdbcTypeField> vals = new ArrayList<JdbcTypeField>();
jdbcType.setDatabaseTable("sample");
jdbcType.setValueType("com.nmf.SampleModel");
vals.add(new JdbcTypeField(Types.BIGINT, "id", long.class, "id"));
vals.add(new JdbcTypeField(Types.VARCHAR, "name", String.class, "name"));
jdbcType.setValueFields(vals.toArray(new JdbcTypeField[vals.size()]));
jdbcTypes.add(jdbcType);
((CacheJdbcPojoStoreFactory)cacheConfiguration.getCacheStoreFactory()).setTypes(jdbcTypes.toArray(new JdbcType[jdbcTypes.size()]));
IgniteConfiguration icfg = new IgniteConfiguration();
icfg.setCacheConfiguration(cacheConfiguration);
Ignite ignite = Ignition.start(icfg);
SampleModel:
public class SampleModel implements Serializable {
    private long id;
    private String name;

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof SampleModel)) return false;
        SampleModel that = (SampleModel) o;
        return id == that.id;
    }

    @Override
    public int hashCode() {
        return (int) (id ^ (id >>> 32));
    }
}
Client Code:
ExecutorService executor = Executors.newSingleThreadExecutor(r -> new Thread(r, "worker"));
CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.setName("Sample");
cacheConfiguration.setCacheMode(CacheMode.REPLICATED);
cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfiguration.setRebalanceMode(CacheRebalanceMode.ASYNC);
cacheConfiguration.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheConfiguration.setBackups(0);
cacheConfiguration.setCopyOnRead(true);
cacheConfiguration.setStoreKeepBinary(false);
IgniteConfiguration icfg = new IgniteConfiguration();
icfg.setCacheConfiguration(cacheConfiguration);
icfg.setClientMode(true);
final Ignite ignite = Ignition.start(icfg);
ignite.events().localListen(new IgnitePredicate<Event>() {
    public boolean apply(Event event) {
        if (event.type() == EVT_CLIENT_NODE_RECONNECTED) {
            System.out.println("Reconnected");
            // Recreate the cache from a dedicated thread, not from the listener itself.
            executor.submit(() -> {
                IgniteCache<Long, SampleModel> cache = ignite.getOrCreateCache("Sample");
                System.out.println("Got the cache");
                SampleModel model = cache.get(1L);
                System.out.println(model.getName());
            });
        }
        return true;
    }
}, EVT_CLIENT_NODE_RECONNECTED);
IgniteCache<Long, SampleModel> cache = ignite.getOrCreateCache("Sample");
SampleModel model = cache.get(1L);
System.out.println(model.getName());
Error log on Client:
SEVERE: Failed to reinitialize local partitions (preloading will be stopped): GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], nodeId=dea5f59b, evt=DISCOVERY_CUSTOM_EVT]
class org.apache.ignite.IgniteCheckedException: Failed to start component: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8726)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1486)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1931)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1833)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:379)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:688)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:529)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1806)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.start(CacheAbstractJdbcStore.java:298)
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8722)
... 9 more
July 25, 2017 12:58:38 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to wait for completion of partition map exchange (preloading will not start): GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false, reassign=false, discoEvt=DiscoveryCustomEvent [customMsg=null, affTopVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=dea5f59b-bdda-47a1-b31d-1ecb08fc746f, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.224.1, 192.168.6.15, 192.168.80.1, 2001:0:9d38:90d7:c83:fac:98d7:5fc1], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, Ares-W11/169.254.194.93:0, /2001:0:9d38:90d7:c83:fac:98d7:5fc1:0, /192.168.6.15:0, windows10.microdone.cn/192.168.224.1:0, /192.168.80.1:0], discPort=0, order=2, intOrder=0, lastExchangeTime=1500958697559, loc=true, ver=2.0.0#20170430-sha1:d4eef3c6, isClient=true], topVer=2, nodeId8=dea5f59b, msg=null, type=DISCOVERY_CUSTOM_EVT, tstamp=1500958718133]], crd=TcpDiscoveryNode [id=247d2926-010d-429b-aef2-97a18fbb3b5d, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.224.1, 192.168.6.15, 192.168.80.1, 2001:0:9d38:90d7:c83:fac:98d7:5fc1], sockAddrs=[/192.168.6.15:47500, /2001:0:9d38:90d7:c83:fac:98d7:5fc1:47500, windows10.microdone.cn/192.168.224.1:47500, /192.168.80.1:47500, Ares-W11/169.254.194.93:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1500958718083, loc=false, ver=2.0.0#20170430-sha1:d4eef3c6, isClient=false], exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], nodeId=dea5f59b, evt=DISCOVERY_CUSTOM_EVT], added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=false, hash=842035444], init=false, lastVer=null, partReleaseFut=null, affChangeMsg=null, skipPreload=true, clientOnlyExchange=false, initTs=1500958718133, centralizedAff=false, changeGlobalStateE=null, exchangeOnChangeGlobalState=false, forcedRebFut=null, evtLatch=0, remaining=[247d2926-010d-429b-aef2-97a18fbb3b5d], srvNodes=[TcpDiscoveryNode [id=247d2926-010d-429b-aef2-97a18fbb3b5d, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.224.1, 192.168.6.15, 192.168.80.1, 2001:0:9d38:90d7:c83:fac:98d7:5fc1], sockAddrs=[/192.168.6.15:47500, /2001:0:9d38:90d7:c83:fac:98d7:5fc1:47500, windows10.microdone.cn/192.168.224.1:47500, /192.168.80.1:47500, Ares-W11/169.254.194.93:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1500958718083, loc=false, ver=2.0.0#20170430-sha1:d4eef3c6, isClient=false]], super=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=class o.a.i.IgniteCheckedException: Failed to start component: class o.a.i.IgniteException: Failed to initialize cache store (data source is not provided)., hash=1281081640]]
class org.apache.ignite.IgniteCheckedException: Failed to start component: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8726)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1486)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1931)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1833)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:379)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:688)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:529)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1806)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.start(CacheAbstractJdbcStore.java:298)
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8722)
... 9 more

Only server nodes store caches (except Local caches), so when you restarted the server node, this cache was stopped. The problem here is that the client node reconnected to the cluster rather than joining it as a new node, which is why the cache was not created again.
I think this is wrong behavior and the cache should be recreated when the client reconnects.
I've created an issue for that.
As a workaround, you can use the Ignite.GetOrCreateCache("Sample") method instead of Ignite.GetCache("Sample").

Are you still having issues where Ignite.GetOrCreateCache("Sample") hangs? Make sure you aren't making that call from a thread in the System Pool. I was listening for the EVT_CLIENT_NODE_RECONNECTED event and calling Ignite.GetOrCreateCache("Sample") when I ran into a similar issue. For more information, see the answer to this question: Closures stuck in 2.0 when try to add an element into the queue
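For reference, here is a minimal sketch of the combined advice (assuming the same Ignite client instance and "Sample" cache as in the question): recreate the cache with getOrCreateCache(), but hand the call to your own executor so it never runs on an Ignite system pool thread.
// Sketch only: recreate the stopped cache after reconnect, off the system pool.
ExecutorService reconnectExec = Executors.newSingleThreadExecutor();
ignite.events().localListen(evt -> {
    // Calling getOrCreateCache() directly inside this callback would run on a
    // system pool thread and can hang; submit it to a dedicated thread instead.
    reconnectExec.submit(() -> ignite.getOrCreateCache("Sample"));
    return true; // keep the listener registered
}, EventType.EVT_CLIENT_NODE_RECONNECTED);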

Related

Having same configuration on Ignite client when already have configuration on Ignite server

Okay, here is my Ignite server config code.
#Bean("serverCfg")
public IgniteConfiguration createConfiguration() throws Exception {
IgniteConfiguration cfg = new IgniteConfiguration(); cfg.setIgniteInstanceName("CcPlatformUserRolesOrganizationAssociationServer");
cfg.setSqlSchemas("public");
TcpDiscoverySpi discovery = new TcpDiscoverySpi();
TcpDiscoveryMulticastIpFinder ipFinder = new
TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47510"));
discovery.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discovery);
// cfg.setPeerClassLoadingEnabled(true);
cfg.setCacheConfiguration(cacheOrganizationsCache()
,
cacheRolesCache(), cacheUsersCache(),
cacheUsersRolesCache(), cacheGroupsCache(),
cacheGroupusersCache(), cacheGlobalPermissionsCache(),
cacheTemplatesCache(), cachePasswordsCache()
);
return cfg;
}
And here is my Ignite client code.
@Bean
public Ignite createConfiguration() throws Exception {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setClientMode(true);
    cfg.setIgniteInstanceName("CcPlatformUserRolesOrganizationAssociationServerClient");
    TcpDiscoverySpi discovery = new TcpDiscoverySpi();
    TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
    ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47510"));
    discovery.setIpFinder(ipFinder);
    cfg.setDiscoverySpi(discovery);
    cfg.setCacheConfiguration(cacheOrganizationsCache(), cacheRolesCache(),
        cacheUsersCache(), cacheUsersRolesCache(), cacheGroupsCache(),
        cacheGroupusersCache());
    Ignite ignite = Ignition.start(cfg);
    ignite.cluster().active(true);
    return ignite;
}
So my question is: do I have to have the same piece of code, containing all cache configurations including the data source, on the client side as well?
How can I avoid this code redundancy?
You don't have to supply all cache configurations on the client. Once the first server node comes up, it will start all caches, and other nodes will be able to use them regardless of whether the caches appear in their own configs. Any new caches will be created when nodes join, but cache configurations will never be changed when a new node joins with a differing configuration for an existing cache.
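For example, a minimal client sketch (bean style and discovery settings borrowed from the code above): no CacheConfiguration is supplied, and existing caches are simply looked up by name.
@Bean
public Ignite createClient() throws Exception {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setClientMode(true);
    TcpDiscoverySpi discovery = new TcpDiscoverySpi();
    TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
    ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47510"));
    discovery.setIpFinder(ipFinder);
    cfg.setDiscoverySpi(discovery);
    // No setCacheConfiguration(...): caches started from the server's
    // configuration become available to this client once it joins.
    Ignite ignite = Ignition.start(cfg);
    // Use ignite.cache("<cache name>") to access a cache defined on the server.
    return ignite;
}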

RedisSystemException: java.lang.ClassCastException: [B cannot be cast to java.lang.Long

I get this exception when using Jedis with spring-data-redis in a multi-threaded environment:
org.springframework.data.redis.RedisSystemException: Unknown redis exception; nested exception is java.lang.ClassCastException: [B cannot be cast to java.lang.Long
at org.springframework.data.redis.FallbackExceptionTranslationStrategy.getFallback(FallbackExceptionTranslationStrategy.java:48)
at org.springframework.data.redis.FallbackExceptionTranslationStrategy.translate(FallbackExceptionTranslationStrategy.java:38)
at org.springframework.data.redis.connection.jedis.JedisConnection.convertJedisAccessException(JedisConnection.java:241)
at org.springframework.data.redis.connection.jedis.JedisConnection.rPush(JedisConnection.java:1705)
at org.springframework.data.redis.core.DefaultListOperations$14.doInRedis(DefaultListOperations.java:187)
at org.springframework.data.redis.core.DefaultListOperations$14.doInRedis(DefaultListOperations.java:184)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:207)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:169)
at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:91)
at org.springframework.data.redis.core.DefaultListOperations.rightPush(DefaultListOperations.java:184)
at XXXXXXXXXXXXXXX
Caused by: java.lang.ClassCastException: [B cannot be cast to java.lang.Long
at redis.clients.jedis.Connection.getIntegerReply(Connection.java:265)
at redis.clients.jedis.BinaryJedis.rpush(BinaryJedis.java:1053)
at org.springframework.data.redis.connection.jedis.JedisConnection.rPush(JedisConnection.java:1703)
... 19 common frames omitted
jedis version: 2.9.0
spring-data-redis version: 1.8.12.RELEASE
redis server version: 3.0.6
My Client Java Code:
// Init JedisConnectionFactory
JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory();
jedisPoolConfig.setMaxTotal(maxActive);
jedisPoolConfig.setMaxIdle(maxIdle);
jedisPoolConfig.setMaxWaitMillis(maxWait);
jedisPoolConfig.setTestOnBorrow(true);
jedisConnectionFactory.setPoolConfig(jedisPoolConfig);
jedisConnectionFactory.setHostName(host);
jedisConnectionFactory.setPort(port);
jedisConnectionFactory.setTimeout(timeout);
jedisConnectionFactory.setPassword(password);
jedisConnectionFactory.afterPropertiesSet();
// Create RedisTemplate
redisTemplate = new RedisTemplate<String, Object>();
redisTemplate.setConnectionFactory(jedisConnectionFactory);
redisTemplate.setEnableTransactionSupport(true);
StringRedisSerializer serializer = new StringRedisSerializer();
redisTemplate.setKeySerializer(serializer);
redisTemplate.setValueSerializer(serializer);
redisTemplate.setHashKeySerializer(serializer);
redisTemplate.setHashValueSerializer(serializer);
redisTemplate.afterPropertiesSet();
Finally, I solved my problem by removing this line, after reading the source code of spring-data:
redisTemplate.setEnableTransactionSupport(true);
You should share the pool and get a different Jedis from it in every thread.
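A minimal sketch of that pattern (host, port, and key names are placeholders): the JedisPool is created once and shared, and each thread borrows its own Jedis for the duration of a call.
// One shared, thread-safe pool for the whole application.
JedisPool pool = new JedisPool(new JedisPoolConfig(), "localhost", 6379);

Runnable worker = () -> {
    // Each thread takes its own connection; Jedis instances are not thread-safe.
    try (Jedis jedis = pool.getResource()) {
        jedis.rpush("myList", "value"); // close() returns the connection to the pool
    }
};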
See more on GitHub

Error during readThrough operation in Apache ignite

I have implemented preload and read-through operations for my database by referring to the demo example at schema-import. I am getting the following error during the read-through operation in Apache Ignite while trying to read data from an Oracle database. I was able to preload data from the database, but reading a record fails with:
[18:50:42,299][SEVERE][sys-#97%null%][GridPartitionedSingleGetFuture] Failed to get values from dht cache [fut=GridCompoundIdentityFuture [super=GridCompoundFuture [rdc=Collection reducer: null, initFlag=1, lsnrCalls=0, done=true, cancelled=false, err=class o.a.i.IgniteCheckedException: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore., futs=[false]]]]
class org.apache.ignite.IgniteCheckedException: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore.
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:337)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.load(GridCacheStoreManagerAdapter.java:293)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAllFromStore(GridCacheStoreManagerAdapter.java:426)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAll(GridCacheStoreManagerAdapter.java:392)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$16.call(GridCacheAdapter.java:1985)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$16.call(GridCacheAdapter.java:1983)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6521)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:922)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.cache.integration.CacheLoaderException: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore.
... 12 more
Caused by: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore.
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.entryMapping(CacheAbstractJdbcStore.java:693)
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.load(CacheAbstractJdbcStore.java:813)
at org.apache.ignite.internal.processors.cache.CacheStoreBalancingWrapper.load(CacheStoreBalancingWrapper.java:97)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:326)
... 11 more
[18:50:42] Ignite node stopped OK [uptime=00:00:02:031]
Exception in thread "main" javax.cache.integration.CacheLoaderException: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore.
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:337)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.load(GridCacheStoreManagerAdapter.java:293)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAllFromStore(GridCacheStoreManagerAdapter.java:426)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAll(GridCacheStoreManagerAdapter.java:392)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$16.call(GridCacheAdapter.java:1985)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$16.call(GridCacheAdapter.java:1983)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6521)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:922)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.cache.CacheException: Failed to find mapping description [cache=WarehouseCache, typeId=class WarehouseKey]. Please configure JdbcType to associate cache 'WarehouseCache' with JdbcPojoStore.
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.entryMapping(CacheAbstractJdbcStore.java:693)
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.load(CacheAbstractJdbcStore.java:813)
at org.apache.ignite.internal.processors.cache.CacheStoreBalancingWrapper.load(CacheStoreBalancingWrapper.java:97)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:326)
... 11 more
I have used the following CacheConfig file:
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import java.sql.*;
import java.util.*;
import org.apache.ignite.cache.*;
import org.apache.ignite.cache.store.jdbc.*;
import org.apache.ignite.configuration.*;
/**
* CacheConfig definition.
*
* Code generated by Apache Ignite Schema Import utility: 10/21/2016.
*/
public class CacheConfig {
    /**
     * Create JDBC type for WAREHOUSE.
     *
     * @param cacheName Cache name.
     * @return Configured JDBC type.
     */
    private static JdbcType jdbcTypeWarehouse(String cacheName) {
        JdbcType jdbcType = new JdbcType();
        jdbcType.setCacheName(cacheName);
        jdbcType.setDatabaseSchema("C##TPCCTEST");
        jdbcType.setDatabaseTable("WAREHOUSE");
        jdbcType.setKeyType("org.apache.ignite.WarehouseKey");
        jdbcType.setValueType("org.apache.ignite.Warehouse");
        // Key fields for WAREHOUSE.
        Collection<JdbcTypeField> keys = new ArrayList<>();
        keys.add(new JdbcTypeField(Types.INTEGER, "W_ID", int.class, "wId"));
        jdbcType.setKeyFields(keys.toArray(new JdbcTypeField[keys.size()]));
        // Value fields for WAREHOUSE.
        Collection<JdbcTypeField> vals = new ArrayList<>();
        vals.add(new JdbcTypeField(Types.INTEGER, "W_ID", int.class, "wId"));
        vals.add(new JdbcTypeField(Types.VARCHAR, "W_NAME", String.class, "wName"));
        vals.add(new JdbcTypeField(Types.VARCHAR, "W_STREET_1", String.class, "wStreet1"));
        vals.add(new JdbcTypeField(Types.VARCHAR, "W_STREET_2", String.class, "wStreet2"));
        vals.add(new JdbcTypeField(Types.VARCHAR, "W_CITY", String.class, "wCity"));
        vals.add(new JdbcTypeField(Types.CHAR, "W_STATE", String.class, "wState"));
        vals.add(new JdbcTypeField(Types.CHAR, "W_ZIP", String.class, "wZip"));
        vals.add(new JdbcTypeField(Types.FLOAT, "W_TAX", Double.class, "wTax"));
        vals.add(new JdbcTypeField(Types.FLOAT, "W_YTD", Double.class, "wYtd"));
        jdbcType.setValueFields(vals.toArray(new JdbcTypeField[vals.size()]));
        return jdbcType;
    }

    /**
     * Create SQL Query descriptor for WAREHOUSE.
     *
     * @return Configured query entity.
     */
    private static QueryEntity queryEntityWarehouse() {
        QueryEntity qryEntity = new QueryEntity();
        qryEntity.setKeyType("org.apache.ignite.WarehouseKey");
        qryEntity.setValueType("org.apache.ignite.Warehouse");
        // Query fields for WAREHOUSE.
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("wId", "java.lang.Integer");
        fields.put("wName", "java.lang.String");
        fields.put("wStreet1", "java.lang.String");
        fields.put("wStreet2", "java.lang.String");
        fields.put("wCity", "java.lang.String");
        fields.put("wState", "java.lang.String");
        fields.put("wZip", "java.lang.String");
        fields.put("wTax", "java.lang.Double");
        fields.put("wYtd", "java.lang.Double");
        qryEntity.setFields(fields);
        // Aliases for fields.
        Map<String, String> aliases = new HashMap<>();
        aliases.put("wId", "W_ID");
        aliases.put("wName", "W_NAME");
        aliases.put("wStreet1", "W_STREET_1");
        aliases.put("wStreet2", "W_STREET_2");
        aliases.put("wCity", "W_CITY");
        aliases.put("wState", "W_STATE");
        aliases.put("wZip", "W_ZIP");
        aliases.put("wTax", "W_TAX");
        aliases.put("wYtd", "W_YTD");
        qryEntity.setAliases(aliases);
        // Indexes for WAREHOUSE.
        Collection<QueryIndex> idxs = new ArrayList<>();
        idxs.add(new QueryIndex("wId", true, "SYS_C0011180"));
        qryEntity.setIndexes(idxs);
        return qryEntity;
    }

    /**
     * Configure cache.
     *
     * @param cacheName Cache name.
     * @param storeFactory Cache store factory.
     * @return Cache configuration.
     */
    public static <K, V> CacheConfiguration<K, V> cache(String cacheName, CacheJdbcPojoStoreFactory<K, V> storeFactory) {
        if (storeFactory == null)
            throw new IllegalArgumentException("Cache store factory cannot be null.");
        CacheConfiguration<K, V> ccfg = new CacheConfiguration<>(cacheName);
        ccfg.setCacheStoreFactory(storeFactory);
        ccfg.setReadThrough(true);
        ccfg.setWriteThrough(true);
        // Configure JDBC types.
        Collection<JdbcType> jdbcTypes = new ArrayList<>();
        jdbcTypes.add(jdbcTypeWarehouse(cacheName));
        storeFactory.setTypes(jdbcTypes.toArray(new JdbcType[jdbcTypes.size()]));
        // Configure query entities.
        Collection<QueryEntity> qryEntities = new ArrayList<>();
        qryEntities.add(queryEntityWarehouse());
        ccfg.setQueryEntities(qryEntities);
        return ccfg;
    }
}
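For context, a minimal sketch of how this generated CacheConfig is typically wired up at node start (the Oracle URL, credentials, and the WarehouseKey constructor below are placeholders, not from the question). The JdbcType registered inside CacheConfig.cache() is what lets the store map WarehouseKey during read-through, so it must be present on every node that starts this cache with the store.
// Sketch only; connection details and WarehouseKey's constructor are assumptions.
OracleDataSource dataSource = new OracleDataSource();
dataSource.setURL("jdbc:oracle:thin:@//localhost:1521/orcl");
dataSource.setUser("C##TPCCTEST");
dataSource.setPassword("password");

CacheJdbcPojoStoreFactory<WarehouseKey, Warehouse> storeFactory = new CacheJdbcPojoStoreFactory<>();
storeFactory.setDataSource(dataSource);

IgniteConfiguration cfg = new IgniteConfiguration();
// CacheConfig.cache() registers jdbcTypeWarehouse() on the store factory.
cfg.setCacheConfiguration(CacheConfig.cache("WarehouseCache", storeFactory));

try (Ignite ignite = Ignition.start(cfg)) {
    IgniteCache<WarehouseKey, Warehouse> cache = ignite.cache("WarehouseCache");
    Warehouse w = cache.get(new WarehouseKey(1)); // read-through goes to the store
}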

Error configuring SSL using Spring Data Cassandra

I am using the Spring Data Cassandra project v1.3.0 and am unable to configure SSL for my Cassandra cluster (v2.0.17). The Spring Data Cassandra documentation says it supports Cassandra 2.X using the DataStax Java Driver (2.0.X), so there shouldn't be an issue there. Here is my Java Cassandra configuration that initializes the Cassandra cluster bean:
@Autowired
private Environment env;

@Bean
public CassandraClusterFactoryBean cluster() {
    SSLContext context = null;
    try {
        context = getSSLContext(
            env.getProperty("cassandra.connection.ssl.trustStorePath"),
            env.getProperty("cassandra.connection.ssl.trustStorePassword"),
            env.getProperty("cassandra.connection.ssl.keyStorePath"),
            env.getProperty("cassandra.connection.ssl.keyStorePassword"));
    } catch (Exception ex) {
        log.warn("Error setting SSL context for Cassandra.");
    }
    // Default cipher suites supported by C*
    String[] cipherSuites = { "TLS_RSA_WITH_AES_128_CBC_SHA",
        "TLS_RSA_WITH_AES_256_CBC_SHA" };
    CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
    cluster.setContactPoints(env.getProperty("cassandra.contactpoints"));
    cluster.setPort(Integer.parseInt(env.getProperty("cassandra.port")));
    cluster.setSslOptions(new SSLOptions(context, cipherSuites));
    cluster.setSslEnabled(true);
    return cluster;
}
@Bean
public CassandraMappingContext mappingContext() {
    return new BasicCassandraMappingContext();
}

@Bean
public CassandraConverter converter() {
    return new MappingCassandraConverter(mappingContext());
}

@Bean
public CassandraSessionFactoryBean session() throws Exception {
    CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
    session.setCluster(cluster().getObject());
    session.setKeyspaceName(env.getProperty("cassandra.keyspace"));
    session.setConverter(converter());
    session.setSchemaAction(SchemaAction.NONE);
    return session;
}

@Bean
public CassandraOperations cassandraTemplate() throws Exception {
    return new CassandraTemplate(session().getObject());
}
private static SSLContext getSSLContext(String truststorePath,
        String truststorePassword, String keystorePath,
        String keystorePassword) throws Exception {
    FileInputStream tsf = new FileInputStream(truststorePath);
    FileInputStream ksf = new FileInputStream(keystorePath);
    SSLContext ctx = SSLContext.getInstance("SSL");
    KeyStore ts = KeyStore.getInstance("JKS");
    ts.load(tsf, truststorePassword.toCharArray());
    TrustManagerFactory tmf = TrustManagerFactory
        .getInstance(TrustManagerFactory.getDefaultAlgorithm());
    tmf.init(ts);
    KeyStore ks = KeyStore.getInstance("JKS");
    ks.load(ksf, keystorePassword.toCharArray());
    KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory
        .getDefaultAlgorithm());
    kmf.init(ks, keystorePassword.toCharArray());
    ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(),
        new SecureRandom());
    return ctx;
}
I have verified that the environment properties used to set the SSL context are populated properly and point to the same keystore and truststore used in the Cassandra configuration file. Below is my Cassandra configuration for enabling client-to-node encryption:
server_encryption_options:
    internode_encryption: all
    keystore: /usr/share/ssl/cassandra_client.jks
    keystore_password: cassandra
    truststore: /usr/share/ssl/cassandra_client_trust.jks
    truststore_password: cassandra
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    store_type: JKS
    cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA] #,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
    # require_client_auth: true

# enable or disable client/server encryption.
client_encryption_options:
    enabled: true
    keystore: /usr/share/ssl/cassandra_client.jks
    keystore_password: cassandra
    require_client_auth: true
    # Set trustore and truststore_password if require_client_auth is true
    truststore: /usr/share/ssl/cassandra_client_trust.jks
    truststore_password: cassandra
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    store_type: JKS
    cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA] #,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
When launching my client application, I get the following error while the Cassandra cluster is being initialized:
17:02:39,330 WARN [org.springframework.web.context.support.XmlWebApplicationContext] (ServerService Thread Pool -- 58) Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraServiceImpl': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private org.springframework.data.cassandra.core.CassandraOperations com.cloudistics.cldtx.mwc.service.CassandraServiceImpl.cassandraOperations; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraTemplate' defined in com.cloudistics.cldtx.mwc.conn.CassandraConfig: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.data.cassandra.core.CassandraOperations]: Factory method 'cassandraTemplate' threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'session' defined in com.cloudistics.cldtx.mwc.conn.CassandraConfig: Invocation of init method failed; nested exception is com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.ConnectionException: [/127.0.0.1:9042] Unexpected error during transport initialization (com.datastax.driver.core.OperationTimedOutException: [/127.0.0.1:9042] Operation timed out)))
If anybody has any insight into this, it would be greatly appreciated. I followed these instructions from DataStax to prepare the server certificates and enable client-to-node encryption.
I was able to do this with the following code. Hope this helps.
cassandra.properties
# KeyStore Path
cassandra.cassks=classpath:cass.keystore.p12
# KeyStore Password
cassandra.casskspass=defkeypass
# KeyStore Type
cassandra.casskstype=pkcs12
# TrustStore Path
cassandra.cassts=classpath:cass.truststore.p12
# TrustStore Password
cassandra.casstspass=deftrustpass
# TrustStore Type
cassandra.casststype=pkcs12
CassandraProperties.java
@Configuration
@ConfigurationProperties("cassandra")
@PropertySource(value = "${classpath:conf/cassandra.properties}")
@Validated
@Data
public class CassandraProperties {
    @NotNull
    private Boolean ssl;
    @NotNull
    private String sslver;
    private Resource cassks;
    private String casskspass;
    private String casskstype;
    private Resource cassts;
    private String casstspass;
    private String casststype;
}
CassandraConfig.java
public class CassandraConfig extends AbstractCassandraConfiguration {
    @Autowired
    private CassandraProperties cassandraProp;

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(cassandraProp.getContactpoints());
        cluster.setPort(cassandraProp.getPort());
        // getSsl() is the Lombok-generated getter for the Boolean "ssl" field.
        if (Boolean.TRUE.equals(cassandraProp.getSsl())) {
            File keyStoreFile = null;
            File trustStoreFile = null;
            InputStream keyStoreIS = null;
            InputStream trustStoreIS = null;
            KeyStore keyStore = null;
            KeyStore trustStore = null;
            TrustManagerFactory tmf = null;
            KeyManagerFactory kmf = null;
            SSLContext sslContext = null;
            RemoteEndpointAwareJdkSSLOptions sslOptions = null;
            try {
                keyStoreFile = cassandraProp.getCassks().getFile();
                keyStoreIS = new FileInputStream(keyStoreFile);
                keyStore = KeyStore.getInstance(cassandraProp.getCasskstype());
                keyStore.load(keyStoreIS, cassandraProp.getCasskspass().toCharArray());
                trustStoreFile = cassandraProp.getCassts().getFile();
                trustStoreIS = new FileInputStream(trustStoreFile);
                trustStore = KeyStore.getInstance(cassandraProp.getCasststype());
                trustStore.load(trustStoreIS, cassandraProp.getCasstspass().toCharArray());
                tmf = TrustManagerFactory.getInstance("SunX509");
                tmf.init(trustStore);
                kmf = KeyManagerFactory.getInstance("SunX509");
                kmf.init(keyStore, cassandraProp.getCasskspass().toCharArray());
                sslContext = SSLContext.getInstance(cassandraProp.getSslver());
                sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
                sslOptions = new RemoteEndpointAwareJdkSSLOptions.Builder().withSSLContext(sslContext).build();
            } catch (NoSuchAlgorithmException | KeyStoreException | CertificateException | IOException
                    | KeyManagementException | UnrecoverableKeyException e) {
                e.printStackTrace();
            }
            cluster.setSslEnabled(true);
            cluster.setSslOptions(sslOptions);
        }
        return cluster;
    }
}
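Note that RemoteEndpointAwareJdkSSLOptions comes from a newer DataStax Java driver (3.x); on the 2.0.x driver referenced in the question, the closest equivalent is the SSLOptions(sslContext, cipherSuites) constructor already used in the question's cluster() bean.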

Cannot Instantiate InitialContext

I have a class that acts as a standalone client for a GlassFish v3 JMS queue. This class works fine from my localhost, i.e. both the GlassFish server and the standalone client are on my local PC.
Now I need to install this client on a Linux machine. GlassFish v3 is already running on this machine. I have added appserv-rt.jar from the GlassFish installation directory to the standalone client's directory and set the classpath. But I keep getting this error:
javax.naming.NoInitialContextException: Cannot instantiate class: com.sun.enterprise.naming.SerialInitContextFactory [Root exception is java.lang.ClassNotFoundException: com.sun.enterprise.naming.SerialInitContextFactory]
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:657)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
at javax.naming.InitialContext.init(InitialContext.java:223)
at javax.naming.InitialContext.<init>(InitialContext.java:197)
at com.cisco.zbl.controller.ZblBulkUploadThread.run(ZblBulkUploadThread.java:55)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.ClassNotFoundException: com.sun.enterprise.naming.SerialInitContextFactory
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at com.sun.naming.internal.VersionHelper12.loadClass(VersionHelper12.java:46)
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:654)
... 5 more
Here is my Java code:
public class ZblBulkUploadThread implements Runnable, MessageListener {
    private static final Category log = Category.getInstance(ZblBulkUploadThread.class);
    private Queue queue;

    public void run() {
        try {
            ZblConfig zblConfig = new ZblConfig();
            InitialContext jndiContext = null;
            MessageConsumer messageConsumer = null;
            Properties props = new Properties();
            props.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
            props.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
            props.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");
            jndiContext = new InitialContext(props);
            log.debug(zblConfig.getProperty("JMSConnectionFactoryName"));
            //System.setProperty("java.naming.factory.initial","com.sun.jndi.ldap.LdapCtxFactory");
            ConnectionFactory connectionFactory = (ConnectionFactory) jndiContext.lookup(zblConfig.getProperty("JMSConnectionFactoryName"));
            Connection connection = connectionFactory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            queue = (Queue) jndiContext.lookup(zblConfig.getProperty("JMSQueueName"));
            messageConsumer = session.createConsumer(queue);
            connection.start();
            while (true) {
                Message message = messageConsumer.receive();
                ObjectMessage om = ((ObjectMessage) message);
                try {
                    RedirectFile file = (RedirectFile) om.getObject();
                    log.debug("filePath " + file.getFilePath());
                    log.debug(" userName " + file.getUserName());
                    log.debug(" mode is " + file.getMode());
                    processMessage(file, zblConfig);
                } catch (Exception ex) {
                    log.error("ERROR " + ex.getMessage());
                    ex.printStackTrace();
                }
            }
        } catch (Exception ex) {
            ex.printStackTrace();
            log.error("Error " + ex.getMessage());
        }
    }
The error comes at this line: jndiContext = new InitialContext(props);
It does not make any difference if I use the no-arg constructor of InitialContext.
Here is my Unix shell script that invokes this Java program (standalone client):
APP_HOME=/local/scripts/apps/bulkUpload;
CLASSPATH=.:$APP_HOME/lib/gf-client.jar:$APP_HOME/lib/zbl.jar:$APP_HOME/lib/log4j-1.2.4.jar:$APP_HOME/lib/javaee.jar:$APP_HOME/lib/poi-3.8-beta5-20111217.jar:$APP_HOME/lib/poi-examples-3.8-beta5-20111217:$APP_HOME/lib/poi-excelant-3.8-beta5-20111217:$APP_HOME/lib/poi-ooxml-3.8-beta5-20111217:$APP_HOME/lib/poi-ooxml-schemas-3.8-beta5-20111217:$APP_HOME/lib/poi-scratchpad-3.8-beta5-20111217:$APP_HOME/lib/appserv-rt.jar:
echo "CLASSPATH=$CLASSPATH";
export APP_HOME;
export CLASSPATH;
cd $APP_HOME;
#javac -d . ZblBulkUploadThread.java
java -cp $CLASSPATH -Dzbl.properties=zbl-stage.properties -Djava.naming.factory.initial=com.sun.enterprise.naming.SerialInitContextFactory com.cisco.zbl.controller.ZblBulkUploadThread
Please help me - I have been stuck on this problem for a long time.
Do a "which java" command and see if the JDK is being picked up correctly. I doubt that Linux is picking up gcj.
Update: change this line
CLASSPATH=.:$APP_HOME/lib/gf-client.jar:$APP_HOME/lib/zbl.jar:$APP_HOME/lib/log4j-1.2.4.jar:$APP_HOME/lib/javaee.jar:$APP_HOME/lib/poi-3.8-beta5-20111217.jar:$APP_HOME/lib/poi-examples-3.8-beta5-20111217:$APP_HOME/lib/poi-excelant-3.8-beta5-20111217:$APP_HOME/lib/poi-ooxml-3.8-beta5-20111217:$APP_HOME/lib/poi-ooxml-schemas-3.8-beta5-20111217:$APP_HOME/lib/poi-scratchpad-3.8-beta5-20111217:$APP_HOME/lib/appserv-rt.jar:
to this (the only change is removing the trailing colon):
CLASSPATH=.:$APP_HOME/lib/gf-client.jar:$APP_HOME/lib/zbl.jar:$APP_HOME/lib/log4j-1.2.4.jar:$APP_HOME/lib/javaee.jar:$APP_HOME/lib/poi-3.8-beta5-20111217.jar:$APP_HOME/lib/poi-examples-3.8-beta5-20111217:$APP_HOME/lib/poi-excelant-3.8-beta5-20111217:$APP_HOME/lib/poi-ooxml-3.8-beta5-20111217:$APP_HOME/lib/poi-ooxml-schemas-3.8-beta5-20111217:$APP_HOME/lib/poi-scratchpad-3.8-beta5-20111217:$APP_HOME/lib/appserv-rt.jar