Starting server nodes on a cluster through a service and calling loadCache - Ignite
I debugged the code and found that all parameters are set appropriately, and the console even shows that the server on the remote node has started and that the cache has been initialized.
All the required parameters are passed through the DB.
When I assert on the cache without lazy load (i.e. hot-loading it from the persistent store), I get the error log shown below.
I am not able to understand what is going wrong in the cluster, so I have only attached the code that does the job of starting the servers.
initializeCaches() internally calls loadCache after all the key fields and JdbcTypes are set.
class ROCCacheService {

    private void startNodes() {
        logger.info("Starting Ignite Nodes");
        IgniteCluster igniteCluster = rocCachemanager.getCluster();

        // HashMaps for holding host and default configurations
        HashMap<String, Object> defaults = new HashMap<>();
        HashMap<String, Object> hmHosts;

        // Get Ignite configuration from DB
        List<IgniteConfigPojo> list = igniteConfigImpl.getIgniteConfigList();
        IgniteConfigPojo configPojo = list.get(0);
        List<IgniteNodeMapPojo> listNodeMap = configPojo.getIgniteNodeMap();

        // Collection of host configurations
        Collection<Map<String, Object>> hosts = new ArrayList<>();

        // Prepare the map with all the Ignite server host information
        prepareHostList(listNodeMap, hosts);

        // Actual start of remote nodes via SSH call
        try {
            if (listNodeMap.size() != igniteCluster.forServers().nodes().size()) {
                Collection<ClusterStartNodeResult> result = igniteCluster.startNodes(hosts, defaults, false, 10000, 1);
                for (ClusterStartNodeResult res : result) {
                    if (!res.isSuccess()) {
                        throw new ROCCacheException(res.getError());
                    } else {
                        logger.info("Ignite server start successfully triggered on machine " + res.getHostName());
                    }
                }
            }

            // Wait until all expected server nodes have joined the topology
            int waitTime = 0;
            while (listNodeMap.size() != igniteCluster.forServers().nodes().size()) {
                if (waitTime >= MAX_TIME_FOR_SERVER_START) {
                    int serverNodes = igniteCluster.forServers().nodes().size();
                    throw new ROCCacheException("All the Server nodes have not joined the Ignite Cluster, Expected servers :"
                            + listNodeMap.size() + " , actual :" + serverNodes);
                }
                synchronized (this) {
                    wait(2000);
                }
                waitTime += 2000;
            }
            logger.info("Successfully started all the ignite servers");
        } catch (IgniteException e) {
            throw new ROCCacheException("Error while starting the Ignite Servers", e);
        } catch (InterruptedException e) {
            throw new ROCCacheException("Error while starting the Ignite Servers, Received Interrupt signal", e);
        }
    }

    @Override
    public void onLeaderStart() {
        startNodes();
        initializeBookeeperCache();
        initializeCaches();
    }
}
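For context, initializeCaches() itself is not shown. A minimal sketch of what it presumably does for the "Person" cache, based on the description above (set the key fields and JdbcTypes on a JDBC POJO store, then hot-load via loadCache), could look like the following. The data source bean name, table and column names, and the PersonPojo mapping are assumptions for illustration only, not the actual implementation (PersonPojo is the value class used in the test). The test that reproduces the failure follows.

// Hypothetical sketch only - names such as "personDataSource" and the column names are assumptions.
import java.sql.Types;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.configuration.CacheConfiguration;

public class PersonCacheInitSketch {

    static IgniteCache<Long, PersonPojo> initPersonCache(Ignite ignite) {
        // Describe how the Person table maps to the key and value classes
        JdbcType jdbcType = new JdbcType();
        jdbcType.setCacheName("Person");
        jdbcType.setDatabaseTable("person");                          // assumed table name
        jdbcType.setKeyType(Long.class.getName());
        jdbcType.setValueType(PersonPojo.class.getName());
        jdbcType.setKeyFields(new JdbcTypeField(Types.BIGINT, "id", Long.class, "id"));
        jdbcType.setValueFields(
                new JdbcTypeField(Types.VARCHAR, "name", String.class, "name"),
                new JdbcTypeField(Types.INTEGER, "age", Integer.class, "age"));

        // Persistent store backed by the database the configuration is read from
        CacheJdbcPojoStoreFactory<Long, PersonPojo> storeFactory = new CacheJdbcPojoStoreFactory<>();
        storeFactory.setDataSourceBean("personDataSource");           // assumed Spring bean name
        storeFactory.setTypes(jdbcType);

        CacheConfiguration<Long, PersonPojo> cfg = new CacheConfiguration<>("Person");
        cfg.setCacheStoreFactory(storeFactory);
        cfg.setReadThrough(true);
        cfg.setWriteThrough(true);

        IgniteCache<Long, PersonPojo> cache = ignite.getOrCreateCache(cfg);
        cache.loadCache(null);   // hot-load from the store; this is the call that fails remotely below
        return cache;
    }
}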
@Test
@Transactional(propagation = Propagation.SUPPORTS)
public void startNodeTest() {
    try {
        roccacheservice.onLeaderStart();

        Collection<ClusterNode> colClusterClientNodes = rocCacheManager.getCluster().forClients().nodes();
        for (ClusterNode clientNode : colClusterClientNodes) {
            assertEquals(clientNode.addresses().contains("10.113.56.110"), true);
        }

        Collection<ClusterNode> colClusterServerNodes = rocCacheManager.getCluster().forServers().nodes();
        for (ClusterNode serverNode : colClusterServerNodes) {
            assertEquals(serverNode.addresses().contains("10.113.56.231"), true);
            System.out.println(serverNode.metrics());
        }

        // **************************** works fine till here ****************************

        ROCCacheConfiguration<Long, PersonPojo> new4 = new ROCCacheConfiguration<>();
        new4.setName("Person");
        ROCCache<Long, PersonPojo> orgCache4 = rocCacheManager.createCache(new4);
        assertEquals(orgCache4.get(1L).getName(), "Abhishek");
        assertEquals(orgCache4.get(1L).getAge(), 25);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/abhisheks/.m2/repository/org/slf4j/slf4j-simple/1.7.19/slf4j-simple-1.7.19.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/abhisheks/.m2/repository/org/slf4j/slf4j-log4j12/1.7.10/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
[main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Loaded default TestExecutionListener class names from location [META-INF/spring.factories]: [org.springframework.test.context.web.ServletTestExecutionListener, org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener, org.springframework.test.context.support.DependencyInjectionTestExecutionListener, org.springframework.test.context.support.DirtiesContextTestExecutionListener, org.springframework.test.context.transaction.TransactionalTestExecutionListener, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener]
[main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Could not instantiate TestExecutionListener [org.springframework.test.context.web.ServletTestExecutionListener]. Specify custom listener classes or make the default listener classes (and their required dependencies) available. Offending class: [org/springframework/web/context/request/RequestAttributes]
[main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Using TestExecutionListeners: [org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener#5383967b, org.springframework.test.context.support.DependencyInjectionTestExecutionListener#2ac273d3, org.springframework.test.context.support.DirtiesContextTestExecutionListener#71423665, org.springframework.test.context.transaction.TransactionalTestExecutionListener#20398b7c, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener#6fc6f14e]
[main] INFO org.springframework.context.support.GenericApplicationContext - Refreshing org.springframework.context.support.GenericApplicationContext#d44fc21: startup date [Tue Apr 19 15:32:01 IST 2016]; root of context hierarchy
[main] WARN org.springframework.context.annotation.ConfigurationClassEnhancer - @Bean method IgniteStoreConfig.getPropertySourcesPlaceholderConfigurer is non-static and returns an object assignable to Spring's BeanFactoryPostProcessor interface. This will result in a failure to process annotations such as @Autowired, @Resource and @PostConstruct within the method's declaring @Configuration class. Add the 'static' modifier to this method to avoid these container lifecycle issues; see @Bean javadoc for complete details.
[main] INFO org.springframework.context.support.PropertySourcesPlaceholderConfigurer - Loading properties file from class path resource [ignitePersistentStore.properties]
[main] INFO org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor - JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
[main] INFO org.springframework.jdbc.datasource.DriverManagerDataSource - Loaded JDBC driver: com.mysql.jdbc.Driver
[main] INFO org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean - Building JPA container EntityManagerFactory for persistence unit 'ben'
HHH000204: Processing PersistenceUnitInfo [
name: ben
...]
HHH000412: Hibernate Core {5.0.7.Final}
HHH000206: hibernate.properties not found
HHH000021: Bytecode provider name : javassist
HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
HHH000400: Using dialect: org.hibernate.dialect.MySQLDialect
HHH000457: Joined inheritance hierarchy [com.subex.roc.schema.md.TraitValue] defined explicit #DiscriminatorColumn. Legacy Hibernate behavior was to ignore the #DiscriminatorColumn. However, as part of issue HHH-6911 we now apply the explicit #DiscriminatorColumn. If you would prefer the legacy behavior, enable the `hibernate.discriminator.ignore_explicit_for_joined` setting (hibernate.discriminator.ignore_explicit_for_joined=true)
HHH000228: Running hbm2ddl schema update
HHH000262: Table not found: SREG_Field
HHH000262: Table not found: SREG_Field
HHH000262: Table not found: SREG_Model
HHH000262: Table not found: SREG_Model
HHH000262: Table not found: SREG_Trait
HHH000262: Table not found: SREG_Trait
HHH000262: Table not found: SREG_TraitGroup
HHH000262: Table not found: SREG_TraitGroup
HHH000262: Table not found: SREG_TraitMultiValue
HHH000262: Table not found: SREG_TraitMultiValue
HHH000262: Table not found: SREG_TraitSingleValue
HHH000262: Table not found: SREG_TraitSingleValue
HHH000262: Table not found: SREG_TraitValueBase
HHH000262: Table not found: SREG_TraitValueBase
HHH000262: Table not found: SREG_TraitValueStore
HHH000262: Table not found: SREG_TraitValueStore
HHH000397: Using ASTQueryTranslatorFactory
Hibernate: select igniteconf0_.icf_id as icf_id1_0_, igniteconf0_.enable_peerclassload as enable_p2_0_, igniteconf0_.grid_name as grid_nam3_0_, igniteconf0_.join_timeout as join_tim4_0_ from ignite_config igniteconf0_
Hibernate: select ignitenode0_.icf_id as icf_id3_1_0_, ignitenode0_.inm_id as inm_id1_1_0_, ignitenode0_.inm_id as inm_id1_1_1_, ignitenode0_.icf_id as icf_id3_1_1_, ignitenode0_.nod_id as nod_id4_1_1_, ignitenode0_.port_range as port_ran2_1_1_, rocnodepoj1_.nod_id as nod_id1_4_2_, rocnodepoj1_.nod_address as nod_addr2_4_2_, rocnodedea2_.rnd_id as rnd_id1_3_3_, rocnodedea2_.nod_id as nod_id2_3_3_, rocnodedea2_.rnd_ignite_home as rnd_igni3_3_3_, rocnodedea2_.rnd_numberof_nodes as rnd_numb4_3_3_, rocnodedea2_.rnd_password as rnd_pass5_3_3_, rocnodedea2_.rnd_ssh_port as rnd_ssh_6_3_3_, rocnodedea2_.rnd_user_name as rnd_user7_3_3_ from ignite_node_map ignitenode0_ left outer join roc_nodes rocnodepoj1_ on ignitenode0_.nod_id=rocnodepoj1_.nod_id left outer join roc_node_detail rocnodedea2_ on rocnodepoj1_.nod_id=rocnodedea2_.rnd_id where ignitenode0_.icf_id=?
[main] INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from URL [file:/home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin/conf/spring_igniteConfig.xml]
[main] INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from URL [file:/home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin/conf/ignite_Config.xml]
[main] INFO org.springframework.context.support.GenericApplicationContext - Refreshing org.springframework.context.support.GenericApplicationContext#5d8ab698: startup date [Tue Apr 19 15:32:04 IST 2016]; root of context hierarchy
[main] INFO org.springframework.jdbc.datasource.DriverManagerDataSource - Loaded JDBC driver: com.mysql.jdbc.Driver
>>> __________ ________________
>>> / _/ ___/ |/ / _/_ __/ __/
>>> _/ // (7 7 // / / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/
>>>
>>> ver. 1.5.0-final#20151229-sha1:f1f8cda2
>>> 2015 Copyright(C) Apache Software Foundation
>>>
>>> Ignite documentation: http://ignite.apache.org
Config URL: file:/home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin/conf/spring_igniteConfig.xml
Daemon mode: off
OS: Linux 2.6.32-504.el6.x86_64 amd64
OS user: abhisheks
Language runtime: Java Platform API Specification ver. 1.8
VM information: Java(TM) SE Runtime Environment 1.8.0_66-b17 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.66-b17
VM total memory: 1.7GB
Remote Management [restart: off, REST: on, JMX (remote: off)]
IGNITE_HOME=/home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin
VM arguments: [-Dfile.encoding=UTF-8]
Configured caches ['ignite-marshaller-sys-cache', 'ignite-sys-cache', 'ignite-atomics-sys-cache']
3-rd party licenses can be found at: /home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin/libs/licenses
Initial heap size is 122MB (should be no less than 512MB, use -Xms512m -Xmx512m).
Non-loopback local IPs: 10.113.56.110, 192.168.122.1, fe80:0:0:0:c634:6bff:fe4f:784d%eth1
Enabled local MACs: 5254004ABB26, C4346B4F784D
Configured plugins:
^-- None
IPC shared memory server endpoint started [port=48100, tokDir=/home/abhisheks/Desktop/apache-ignite-fabric-1.5.0.final-bin/work/ipc/shmem/8f12688b-fef6-4981-a5f4-aa6781438930-23547]
Successfully bound shared memory communication to TCP port [port=48100, locHost=0.0.0.0/0.0.0.0]
Successfully bound to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0]
Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
Collision resolution is disabled (all jobs will be activated upon arrival).
Swap space is disabled. To enable use FileSwapSpaceSpi.
Security status [authentication=off, tls/ssl=off]
Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0]
Started cache [name=ignite-sys-cache, mode=REPLICATED]
Started cache [name=ignite-atomics-sys-cache, mode=PARTITIONED]
Started cache [name=ignite-marshaller-sys-cache, mode=REPLICATED]
Performance suggestions for grid 'subexIgnite' (fix if possible)
To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
^-- Disable grid events (remove 'includeEventTypes' from configuration)
^-- Enable client mode for TcpDiscoverySpi (set TcpDiscoverySpi.forceServerMode to false)
To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
>>> +----------------------------------------------------------------------------+
>>> Ignite ver. 1.5.0-final#20151229-sha1:f1f8cda2f3f62231f42a59951bf34c39577c1bec
>>> +----------------------------------------------------------------------------+
>>> OS name: Linux 2.6.32-504.el6.x86_64 amd64
>>> CPU(s): 8
>>> Heap: 1.7GB
>>> VM name: 23547#abhisheks
>>> Grid name: subexIgnite
>>> Local node [ID=8F12688B-FEF6-4981-A5F4-AA6781438930, order=1, clientMode=true]
>>> Local node addresses: [192.168.122.1/0:0:0:0:0:0:0:1%lo, abhisheks/10.113.56.110, /127.0.0.1, /192.168.122.1]
>>> Local ports: TCP:11211 TCP:47100 TCP:47500 TCP:48100
Topology snapshot [ver=1, servers=0, clients=1, CPUs=8, heap=1.7GB]
[main] INFO org.springframework.test.context.transaction.TransactionContext - Began transaction (1) for test context [DefaultTestContext#2478b629 testClass = StartServiceTest, testInstance = com.subex.roc.cache.startserviceintegration.StartServiceTest#39023dbf, testMethod = startNodeTest#StartServiceTest, testException = [null], mergedContextConfiguration = [MergedContextConfiguration#2c2c3947 testClass = StartServiceTest, locations = '{}', classes = '{class com.subex.roc.cache.IgniteJPAConfiguration, class com.subex.roc.cache.IgniteEnvConfiguration}', contextInitializerClasses = '[]', activeProfiles = '{}', propertySourceLocations = '{}', propertySourceProperties = '{}', contextLoader = 'org.springframework.test.context.support.DelegatingSmartContextLoader', parent = [null]]]; transaction manager [org.springframework.orm.jpa.JpaTransactionManager#1a2ac487]; rollback [true]
[main] INFO com.subex.roc.cache.ROCCacheService - Starting Ignite Nodes
Hibernate: select igniteconf0_.icf_id as icf_id1_0_, igniteconf0_.enable_peerclassload as enable_p2_0_, igniteconf0_.grid_name as grid_nam3_0_, igniteconf0_.join_timeout as join_tim4_0_ from ignite_config igniteconf0_
Hibernate: select ignitenode0_.icf_id as icf_id3_1_0_, ignitenode0_.inm_id as inm_id1_1_0_, ignitenode0_.inm_id as inm_id1_1_1_, ignitenode0_.icf_id as icf_id3_1_1_, ignitenode0_.nod_id as nod_id4_1_1_, ignitenode0_.port_range as port_ran2_1_1_, rocnodepoj1_.nod_id as nod_id1_4_2_, rocnodepoj1_.nod_address as nod_addr2_4_2_, rocnodedea2_.rnd_id as rnd_id1_3_3_, rocnodedea2_.nod_id as nod_id2_3_3_, rocnodedea2_.rnd_ignite_home as rnd_igni3_3_3_, rocnodedea2_.rnd_numberof_nodes as rnd_numb4_3_3_, rocnodedea2_.rnd_password as rnd_pass5_3_3_, rocnodedea2_.rnd_ssh_port as rnd_ssh_6_3_3_, rocnodedea2_.rnd_user_name as rnd_user7_3_3_ from ignite_node_map ignitenode0_ left outer join roc_nodes rocnodepoj1_ on ignitenode0_.nod_id=rocnodepoj1_.nod_id left outer join roc_node_detail rocnodedea2_ on rocnodepoj1_.nod_id=rocnodedea2_.rnd_id where ignitenode0_.icf_id=?
Starting remote node with SSH command: nohup "/home/benakaraj/Downloads/apache-ignite-fabric-1.5.0.final-bin/bin/ignite.sh" -v "conf/spring_igniteConfig.xml" -J-DIGNITE_SSH_HOST="10.113.56.231" -J-DIGNITE_SSH_USER_NAME="root" > ignite-startNodes/04-19-2016--15-32-05-521bc7ca.log 2>& 1 &
[main] INFO com.subex.roc.cache.ROCCacheService - Ignite server start successfully triggered on machine 10.113.56.231
Your version is up to date.
Local java version is different from remote [loc=8, rmt=7]
Added new node to topology: TcpDiscoveryNode [id=e93bc2fa-8a37-4a50-9a22-071abece643f, addrs=[0:0:0:0:0:0:0:1%1, 10.113.56.231, 127.0.0.1, 192.168.122.1], sockAddrs=[/192.168.122.1:47500, /0:0:0:0:0:0:0:1%1:47500, /10.113.56.231:47500, /10.113.56.231:47500, /127.0.0.1:47500, /192.168.122.1:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1461060127285, loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false]
Topology snapshot [ver=2, servers=1, clients=1, CPUs=16, heap=2.7GB]
[main] INFO com.subex.roc.cache.ROCCacheService - Successfully started all the ignite servers
Started cache [name=bookeeperCache, mode=PARTITIONED]
Hibernate: select roccacheco0_.rcc_id as rcc_id1_2_, roccacheco0_.automicity_mode as automici2_2_, roccacheco0_.backup_count as backup_c3_2_, roccacheco0_.cache_mode as cache_mo4_2_, roccacheco0_.cache_writeorder_mode as cache_wr5_2_, roccacheco0_.eviction_policy as eviction6_2_, roccacheco0_.filterClass as filterCl7_2_, roccacheco0_.is_lazy_load as is_lazy_8_2_, roccacheco0_.is_near_cache as is_near_9_2_, roccacheco0_.is_read_through as is_read10_2_, roccacheco0_.is_write_behind as is_writ11_2_, roccacheco0_.is_write_through as is_writ12_2_, roccacheco0_.key_class as key_cla13_2_, roccacheco0_.max_cache_entries as max_cac14_2_, roccacheco0_.rcc_cache_name as rcc_cac15_2_, roccacheco0_.rcc_table_name as rcc_tab16_2_, roccacheco0_.schema_version as schema_17_2_, roccacheco0_.value_class as value_c18_2_, roccacheco0_.writebehind_batch_size as writebe19_2_, roccacheco0_.writebehind_flush_freq as writebe20_2_, roccacheco0_.writebehind_flush_size as writebe21_2_ from roc_cache_config roccacheco0_
Hibernate: select model0_.id as id1_6_, model0_.description as descript2_6_, model0_.name as name3_6_, model0_.version as version4_6_ from SREG_Model model0_ where model0_.name=? and model0_.version=?
Hibernate: select fields0_.model_id as model_id5_5_0_, fields0_.id as id1_5_0_, fields0_.id as id1_5_1_, fields0_.name as name2_5_1_, fields0_.position as position3_5_1_, fields0_.type as type4_5_1_ from SREG_Field fields0_ where fields0_.model_id=?
Hibernate: select traitgroup0_.field_id as field_id3_8_0_, traitgroup0_.id as id1_8_0_, traitgroup0_.id as id1_8_1_, traitgroup0_.name as name2_8_1_ from SREG_TraitGroup traitgroup0_ where traitgroup0_.field_id=?
Hibernate: select traitgroup0_.field_id as field_id3_8_0_, traitgroup0_.id as id1_8_0_, traitgroup0_.id as id1_8_1_, traitgroup0_.name as name2_8_1_ from SREG_TraitGroup traitgroup0_ where traitgroup0_.field_id=?
Hibernate: select traitgroup0_.field_id as field_id3_8_0_, traitgroup0_.id as id1_8_0_, traitgroup0_.id as id1_8_1_, traitgroup0_.name as name2_8_1_ from SREG_TraitGroup traitgroup0_ where traitgroup0_.field_id=?
Hibernate: select traitgroup0_.model_id as model_id4_8_0_, traitgroup0_.id as id1_8_0_, traitgroup0_.id as id1_8_1_, traitgroup0_.name as name2_8_1_ from SREG_TraitGroup traitgroup0_ where traitgroup0_.model_id=?
Hibernate: select traits0_.group_id as group_id5_7_0_, traits0_.id as id1_7_0_, traits0_.id as id1_7_1_, traits0_.data_type as data_typ2_7_1_, traits0_.name as name3_7_1_, traits0_.trait_id as trait_id4_7_1_, traitvalue1_.id as id2_11_2_, traitvalue1_2_.value as value1_10_2_, traitvalue1_.trait_type as trait_ty1_11_2_ from SREG_Trait traits0_ left outer join SREG_TraitValueBase traitvalue1_ on traits0_.trait_id=traitvalue1_.id left outer join SREG_TraitMultiValue traitvalue1_1_ on traitvalue1_.id=traitvalue1_1_.id left outer join SREG_TraitSingleValue traitvalue1_2_ on traitvalue1_.id=traitvalue1_2_.id where traits0_.group_id=?
Started cache [name=Person, mode=REPLICATED]
Failed to obtain remote job result policy for result from ComputeTask.result(..) method (will fail the whole task): GridJobResultImpl [job=C2 [], sib=GridJobSiblingImpl [sesId=e4f38fd2451-8f12688b-fef6-4981-a5f4-aa6781438930, jobId=15f38fd2451-e93bc2fa-8a37-4a50-9a22-071abece643f, nodeId=e93bc2fa-8a37-4a50-9a22-071abece643f, isJobDone=false], jobCtx=GridJobContextImpl [jobId=15f38fd2451-e93bc2fa-8a37-4a50-9a22-071abece643f, timeoutObj=null, attrs={}], node=TcpDiscoveryNode [id=e93bc2fa-8a37-4a50-9a22-071abece643f, addrs=[0:0:0:0:0:0:0:1%1, 10.113.56.231, 127.0.0.1, 192.168.122.1], sockAddrs=[/192.168.122.1:47500, /0:0:0:0:0:0:0:1%1:47500, /10.113.56.231:47500, /10.113.56.231:47500, /127.0.0.1:47500, /192.168.122.1:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1461060127285, loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false], ex=class o.a.i.IgniteException: null, hasRes=true, isCancelled=false, isOccupied=true]
class org.apache.ignite.IgniteException: Remote job threw user exception (override or implement ComputeTask.result(..) method if you would like to have automatic failover for this exception).
at org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
at org.apache.ignite.internal.processors.task.GridTaskWorker$3.apply(GridTaskWorker.java:909)
at org.apache.ignite.internal.processors.task.GridTaskWorker$3.apply(GridTaskWorker.java:902)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6429)
at org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:902)
at org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:798)
at org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:995)
at org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1219)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:821)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1600(GridIoManager.java:103)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:784)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: null
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1792)
at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6397)
at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1166)
at org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1770)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:821)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1600(GridIoManager.java:103)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:784)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
Caused by: java.lang.NullPointerException
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5769)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5716)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1789)
... 13 more
java.lang.NullPointerException
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5769)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5716)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1789)
at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6397)
at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1166)
at org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1770)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:821)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1600(GridIoManager.java:103)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:784)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[main] INFO org.springframework.test.context.transaction.TransactionContext - Rolled back transaction for test context [DefaultTestContext#2478b629 testClass = StartServiceTest, testInstance = com.subex.roc.cache.startserviceintegration.StartServiceTest#39023dbf, testMethod = startNodeTest#StartServiceTest, testException = [null], mergedContextConfiguration = [MergedContextConfiguration#2c2c3947 testClass = StartServiceTest, locations = '{}', classes = '{class com.subex.roc.cache.IgniteJPAConfiguration, class com.subex.roc.cache.IgniteEnvConfiguration}', contextInitializerClasses = '[]', activeProfiles = '{}', propertySourceLocations = '{}', propertySourceProperties = '{}', contextLoader = 'org.springframework.test.context.support.DelegatingSmartContextLoader', parent = [null]]].
Invoking shutdown hook...
[Thread-3] INFO org.springframework.context.support.GenericApplicationContext - Closing org.springframework.context.support.GenericApplicationContext#d44fc21: startup date [Tue Apr 19 15:32:01 IST 2016]; root of context hierarchy
Command protocol successfully stopped: TCP binary
Stopped cache: ignite-marshaller-sys-cache
Stopped cache: ignite-sys-cache
Stopped cache: ignite-atomics-sys-cache
Stopped cache: bookeeperCache
Stopped cache: Person
>>> +---------------------------------------------------------------------------------------+
>>> Ignite ver. 1.5.0-final#20151229-sha1:f1f8cda2f3f62231f42a59951bf34c39577c1bec stopped OK
>>> +---------------------------------------------------------------------------------------+
>>> Grid name: subexIgnite
>>> Grid uptime: 00:00:14:747
[Thread-3] INFO org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean - Closing JPA EntityManagerFactory for persistence unit 'ben'
This NPE can no longer occur in the latest Ignite version (1.6.0), which can be downloaded here: ignite.apache.org/download.cgi#binaries
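Since the server nodes here are started over SSH from a separate IGNITE_HOME (and the log above already shows a Java version mismatch between local and remote), it may be worth confirming after the upgrade that every node in the topology really runs the fixed version before calling loadCache. A small illustrative check, not part of the original code, could be:

// Illustrative helper (not from the original code): print the Ignite version each node reports,
// so a remote host still running the old 1.5.0 binary is easy to spot after the upgrade.
import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterNode;

public class ClusterVersionCheck {

    static void logNodeVersions(Ignite ignite) {
        System.out.println("Local node version: " + ignite.version());
        for (ClusterNode node : ignite.cluster().nodes()) {
            System.out.println(node.id() + " " + node.addresses() + " -> " + node.version());
        }
    }
}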
Related
Ktor-server-test-host does not clean up Exposed database instance across tests
I'm working on a web service using Ktor 1.6.8 and Exposed 0.39.2. My application module and database are set up as follows:

fun Application.module(testing: Boolean = false) {
    val hikariConfig = HikariConfig().apply {
        driverClassName = "org.postgresql.Driver"
        jdbcUrl = environment.config.propertyOrNull("ktor.database.url")?.getString()
        username = environment.config.propertyOrNull("ktor.database.username")?.getString()
        password = environment.config.propertyOrNull("ktor.database.password")?.getString()
        maximumPoolSize = 10
        isAutoCommit = false
        transactionIsolation = "TRANSACTION_REPEATABLE_READ"
        validate()
    }
    val pool = HikariDataSource(hikariConfig)
    val db = Database.connect(pool, {}, DatabaseConfig { useNestedTransactions = true })
}

I use ktor-server-test-host, Testcontainers and JUnit 5 to test the service. My test looks similar to the one below:

@Testcontainers
class SampleApplicationTest {
    companion object {
        @Container
        val postgreSQLContainer = PostgreSQLContainer<Nothing>(DockerImageName.parse("postgres:13.4-alpine")).apply {
            withDatabaseName("database_test")
        }
    }

    @Test
    internal fun `should make request successfully`() {
        withTestApplication({
            (environment.config as MapApplicationConfig).apply {
                put("ktor.database.url", postgreSQLContainer.jdbcUrl)
                put("ktor.database.user", postgreSQLContainer.username)
                put("ktor.database.password", postgreSQLContainer.password)
            }
            module(testing = true)
        }) {
            handleRequest(...)
        }
    }
}

I observed that if I run multiple test classes together, some requests end up using the old Exposed db instance that was set up in a previous test class, causing those test cases to fail because the underlying database was already stopped. When I run one test class at a time, everything works fine. Please refer to the log below for the error stack trace:

2022-10-01 08:00:36.102 [DefaultDispatcher-worker-5 #request#103] WARN Exposed - Transaction attempt #1 failed: java.sql.SQLTransientConnectionException: HikariPool-4 - Connection is not available, request timed out after 30001ms.. Statement(s): INSERT INTO cards (...)
org.jetbrains.exposed.exceptions.ExposedSQLException: java.sql.SQLTransientConnectionException: HikariPool-4 - Connection is not available, request timed out after 30001ms.
at org.jetbrains.exposed.sql.statements.Statement.executeIn$exposed_core(Statement.kt:49) at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:143) at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:128) at org.jetbrains.exposed.sql.statements.Statement.execute(Statement.kt:28) at org.jetbrains.exposed.sql.QueriesKt.insert(Queries.kt:73) at com.example.application.services.CardService$createCard$row$1.invokeSuspend(CardService.kt:53) at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt$suspendedTransactionAsyncInternal$1.invokeSuspend(Suspended.kt:127) at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106) at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42) at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95) at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570) at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750) at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677) at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664) Caused by: java.sql.SQLTransientConnectionException: HikariPool-4 - Connection is not available, request timed out after 30001ms. at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:695) at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:197) at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:162) at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100) at org.jetbrains.exposed.sql.Database$Companion$connect$3.invoke(Database.kt:142) at org.jetbrains.exposed.sql.Database$Companion$connect$3.invoke(Database.kt:139) at org.jetbrains.exposed.sql.Database$Companion$doConnect$3.invoke(Database.kt:127) at org.jetbrains.exposed.sql.Database$Companion$doConnect$3.invoke(Database.kt:128) at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManager$ThreadLocalTransaction$connectionLazy$1.invoke(ThreadLocalTransactionManager.kt:69) at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManager$ThreadLocalTransaction$connectionLazy$1.invoke(ThreadLocalTransactionManager.kt:68) at kotlin.UnsafeLazyImpl.getValue(Lazy.kt:81) at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManager$ThreadLocalTransaction.getConnection(ThreadLocalTransactionManager.kt:75) at org.jetbrains.exposed.sql.Transaction.getConnection(Transaction.kt) at org.jetbrains.exposed.sql.statements.InsertStatement.prepared(InsertStatement.kt:157) at org.jetbrains.exposed.sql.statements.Statement.executeIn$exposed_core(Statement.kt:47) ... 19 common frames omitted Caused by: org.postgresql.util.PSQLException: Connection to localhost:49544 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections. 
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:303) at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51) at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223) at org.postgresql.Driver.makeConnection(Driver.java:465) at org.postgresql.Driver.connect(Driver.java:264) at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358) at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477) at com.zaxxer.hikari.pool.HikariPool.access$100(HikariPool.java:71) at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:725) at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:711) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.net.ConnectException: Connection refused at java.base/sun.nio.ch.Net.pollConnect(Native Method) at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672) at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:542) at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327) I tried to add some cleanup code for Exposed's TransactionManager in my application module as following: fun Application.module(testing: Boolean = false) { // ... val db = Database.connect(pool, {}, DatabaseConfig { useNestedTransactions = true }) if (testing) { environment.monitor.subscribe(ApplicationStopped) { TransactionManager.closeAndUnregister(db) } } } However, the issue still happened, and I also observed additional error as following: 2022-10-01 08:00:36.109 [DefaultDispatcher-worker-5 #request#93] ERROR Application - Unexpected error java.lang.RuntimeException: database org.jetbrains.exposed.sql.Database#3bf4644c don't have any transaction manager at org.jetbrains.exposed.sql.transactions.TransactionApiKt.getTransactionManager(TransactionApi.kt:149) at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.closeAsync(Suspended.kt:85) at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.access$closeAsync(Suspended.kt:1) at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt$suspendedTransactionAsyncInternal$1.invokeSuspend(Suspended.kt:138) (Coroutine boundary) at org.mpierce.ktor.newrelic.KtorNewRelicKt$runPipelineInTransaction$2.invokeSuspend(KtorNewRelic.kt:178) at org.mpierce.ktor.newrelic.KtorNewRelicKt$setUpNewRelic$2.invokeSuspend(KtorNewRelic.kt:104) at io.ktor.routing.Routing.executeResult(Routing.kt:154) at io.ktor.routing.Routing$Feature$install$1.invokeSuspend(Routing.kt:107) at io.ktor.features.ContentNegotiation$Feature$install$1.invokeSuspend(ContentNegotiation.kt:145) at io.ktor.features.StatusPages$interceptCall$2.invokeSuspend(StatusPages.kt:102) at io.ktor.features.StatusPages.interceptCall(StatusPages.kt:101) at io.ktor.features.StatusPages$Feature$install$2.invokeSuspend(StatusPages.kt:142) at io.ktor.features.CallLogging$Feature$install$2.invokeSuspend(CallLogging.kt:188) at io.ktor.server.testing.TestApplicationEngine$callInterceptor$1.invokeSuspend(TestApplicationEngine.kt:296) 
at io.ktor.server.testing.TestApplicationEngine$2.invokeSuspend(TestApplicationEngine.kt:50) Caused by: java.lang.RuntimeException: database org.jetbrains.exposed.sql.Database#3bf4644c don't have any transaction manager at org.jetbrains.exposed.sql.transactions.TransactionApiKt.getTransactionManager(TransactionApi.kt:149) at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.closeAsync(Suspended.kt:85) at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt.access$closeAsync(Suspended.kt:1) at org.jetbrains.exposed.sql.transactions.experimental.SuspendedKt$suspendedTransactionAsyncInternal$1.invokeSuspend(Suspended.kt:138) at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106) at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42) at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95) at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570) at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750) at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677) at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664) Could someone show me what could be the issue here with my application code & test setup? Thanks and regards.
When using the node driver, notarisation in flows hangs with a handshake failure
Whenever I try and test using the node driver, I find at the point of notarisation, my flows will hang. After examining the node logs, it shows that the notary's message broker was unreachable: [INFO ] 09:33:26,653 [nioEventLoopGroup-3-3] (AMQPClient.kt:91) netty.AMQPClient.run - Retry connect {} [INFO ] 09:33:26,657 [nioEventLoopGroup-3-4] (AMQPClient.kt:76) netty.AMQPClient.operationComplete - Connected to localhost:10001 {} [INFO ] 09:33:26,658 [nioEventLoopGroup-3-4] (AMQPChannelHandler.kt:49) O=Notary Service, L=Zurich, C=CH.channelActive - New client connection db926eb8 from localhost/127.0.0.1:10001 to /127.0.0.1:63781 {} [INFO ] 09:33:26,658 [nioEventLoopGroup-3-4] (AMQPClient.kt:86) netty.AMQPClient.operationComplete - Disconnected from localhost:10001 {} [ERROR] 09:33:26,658 [nioEventLoopGroup-3-4] (AMQPChannelHandler.kt:98) O=Notary Service, L=Zurich, C=CH.userEventTriggered - Handshake failure SslHandshakeCompletionEvent(java.nio.channels.ClosedChannelException) {} [INFO ] 09:33:26,659 [nioEventLoopGroup-3-4] (AMQPChannelHandler.kt:74) O=Notary Service, L=Zurich, C=CH.channelInactive - Closed client connection db926eb8 from localhost/127.0.0.1:10001 to /127.0.0.1:63781 {} [INFO ] 09:33:26,659 [nioEventLoopGroup-3-4] (AMQPBridgeManager.kt:115) peers.DLF1ZmHt1DXc9HbxzDNm6VHduUABBbNsp7Mh4DhoBs6ifd -> localhost:10001:O=Notary Service, L=Zurich, C=CH.onSocketConnected - Bridge Disconnected {} While the notary logs display the following: [INFO ] 13:24:21,735 [main] (ActiveMQServerImpl.java:540) core.server.internalStart - AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.2.0 [localhost, nodeID=7b3df3b8-98aa-11e8-83bd-ead493c8221e] {} [DEBUG] 13:24:21,735 [main] (ArtemisRpcBroker.kt:51) rpc.ArtemisRpcBroker.start - Artemis RPC broker is started. {} [INFO ] 13:24:21,737 [main] (ArtemisMessagingClient.kt:28) internal.ArtemisMessagingClient.start - Connecting to message broker: localhost:10001 {} [ERROR] 13:24:22,298 [main] (NettyConnector.java:713) core.client.createConnection - AMQ214016: Failed to create netty connection {} java.nio.channels.ClosedChannelException: null at io.netty.handler.ssl.SslHandler.channelInactive(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final] [DEBUG] 13:24:22,362 [main] (PersistentIdentityService.kt:137) identity.PersistentIdentityService.verifyAndRegisterIdentity - Registering identity O=Notary Service, L=Zurich, C=CH {} [WARN ] 13:24:22,363 [main] (AppendOnlyPersistentMap.kt:79) utilities.AppendOnlyPersistentMapBase.set - Double insert in net.corda.node.utilities.AppendOnlyPersistentMap for entity class class net.corda.node.services.identity.PersistentIdentityService$PersistentIdentity key 69ACAA32A0C7934D9454CB53EEA6CA6CCD8E4090B30C560A5A36EA10F3DC13E8, not inserting the second time {} [ERROR] 13:24:22,368 [main] (NodeStartup.kt:125) internal.Node.run - Exception during node startup {} org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException: AMQ119007: Cannot connect to server(s). Tried with all available servers. at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:787) ~[artemis-core-client-2.2.0.jar:2.2.0] at net.corda.nodeapi.internal.ArtemisMessagingClient.start(ArtemisMessagingClient.kt:39) ~[corda-node-api-3.2-corda.jar:?] at net.corda.nodeapi.internal.bridging.AMQPBridgeManager.start(AMQPBridgeManager.kt:195) ~[corda-node-api-3.2-corda.jar:?] 
at net.corda.nodeapi.internal.bridging.BridgeControlListener.start(BridgeControlListener.kt:35) ~[corda-node-api-3.2-corda.jar:?] at net.corda.node.internal.Node.startMessagingService(Node.kt:301) ~[corda-node-3.2-corda.jar:?] How do I fix this?
IntelliJ Ultimate ships with the YourKit profiler, which by default starts when IntelliJ starts and listens on port 10001 - the default port for the notary in the node driver. You can locate the config for this as described here and alter it to use a different port as per this. Your new config line will look something like this: -agentlib:yjpagent=delay=10000,probe_disable=*,port=30000
Unable to join Akka.NET cluster
I am having a problem joining and debugging joining to Akka.NET cluster. I am using version 1.3.8. My setup is following: Lighthouse Almost default code from github. Runs in console akka.hocon is following: lighthouse { actorsystem: "sng" } petabridge.cmd{ host = "0.0.0.0" port = 9110 } akka { loglevel = DEBUG loggers = ["Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog"] actor { provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster" debug { receive = on autoreceive = on lifecycle = on event-stream = on unhandled = on } } remote { log-sent-messages = on log-received-messages = on log-remote-lifecycle-events = on enabled-transports = ["akka.remote.dot-netty.tcp"] dot-netty.tcp { transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote" applied-adapters = [] transport-protocol = tcp hostname = "0.0.0.0" port = 4053 } log-remote-lifecycle-events = DEBUG } cluster { auto-down-unreachable-after = 5s seed-nodes = [] roles = [lighthouse] } } Working node Also console (net461) application with as simple as possible startup and joining. It works as excpected. akka.hocon: akka { loglevel = DEBUG loggers = ["Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog"] actor { provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster" } remote { log-sent-messages = on log-received-messages = on log-remote-lifecycle-events = on dot-netty.tcp { transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote" applied-adapters = [] transport-protocol = tcp hostname = "0.0.0.0" port = 0 } } cluster { auto-down-unreachable-after = 5s seed-nodes = ["akka.tcp://sng#127.0.0.1:4053"] roles = [monitor] } } Not working node An .NET 4.6.1 library, registerd as COM and started in other (Media Monkey) application with VBA code: Sub OnStartup Set o = CreateObject("MediaMonkey.Akka.Agent.MediaMonkeyAkkaProxy") o.Init(SDB) End Sub Akka system is, as in console aplikation, created with standard ActorSystem.Create("sng", config); akka.hocon: akka { loglevel = DEBUG loggers = ["Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog"] actor { provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster" } remote { log-sent-messages = on log-received-messages = on log-remote-lifecycle-events = on dot-netty.tcp { transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote" applied-adapters = [] transport-protocol = tcp hostname = "0.0.0.0" port = 0 } } cluster { auto-down-unreachable-after = 5s seed-nodes = ["akka.tcp://sng#127.0.0.1:4053"] roles = [mediamonkey] } } Debugging workflow Startup Lighthouse application: Configuration Result: [Success] Name sng.Lighthouse [Success] ServiceName sng.Lighthouse Topshelf v4.0.0.0, .NET Framework v4.0.30319.42000 [Lighthouse] ActorSystem: sng; IP: 127.0.0.1; PORT: 4053 [Lighthouse] Performing pre-boot sanity check. Should be able to parse address [akka.tcp://sng#127.0.0.1:4053] [Lighthouse] Parse successful. [21:01:35 INF] Starting remoting [21:01:35 INF] Remoting started; listening on addresses : [akka.tcp://sng#127.0.0.1:4053] [21:01:35 INF] Remoting now listens on addresses: [akka.tcp://sng#127.0.0.1:4053] [21:01:35 INF] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Starting up... [21:01:35 INF] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Started up successfully The sng.Lighthouse service is now running, press Control+C to exit. 
[21:01:35 INF] petabridge.cmd host bound to [0.0.0.0:9110] [21:01:35 INF] Node [akka.tcp://sng#127.0.0.1:4053] is JOINING, roles [lighthouse] [21:01:35 INF] Leader is moving node [akka.tcp://sng#127.0.0.1:4053] to [Up] Started and stopped working console node Lighthouse logs: [21:05:40 INF] Node [akka.tcp://sng#0.0.0.0:37516] is JOINING, roles [monitor] [21:05:40 INF] Leader is moving node [akka.tcp://sng#0.0.0.0:37516] to [Up] [21:05:54 INF] Connection was reset by the remote peer. Channel [[::ffff:127.0.0.1]:4053->[::ffff:127.0.0.1]:37517](Id=1293c63a) [21:05:54 INF] Message AckIdleCheckTimer from akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%400.0.0.0%3A37516-1/endpointWriter to akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%400.0.0.0%3A37516-1/endpointWriter was not delivered. 1 dead letters encountered. [21:05:55 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 2 dead letters encountered. [21:05:55 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 3 dead letters encountered. [21:05:56 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 4 dead letters encountered. [21:05:56 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 5 dead letters encountered. [21:05:57 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 6 dead letters encountered. [21:05:57 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 7 dead letters encountered. [21:05:58 INF] Message GossipStatus from akka://sng/system/cluster/core/daemon to akka://sng/deadLetters was not delivered. 8 dead letters encountered. [21:05:58 INF] Message Heartbeat from akka://sng/system/cluster/core/daemon/heartbeatSender to akka://sng/deadLetters was not delivered. 9 dead letters encountered. [21:05:59 WRN] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Marking node(s) as UNREACHABLE [Member(address = akka.tcp://sng#0.0.0.0:37516, Uid=1060233119 status = Up, role=[monitor], upNumber=2)]. Node roles [lighthouse] [21:06:01 WRN] AssociationError [akka.tcp://sng#127.0.0.1:4053] -> akka.tcp://sng#0.0.0.0:37516: Error [Association failed with akka.tcp://sng#0.0.0.0:37516] [] [21:06:01 WRN] Tried to associate with unreachable remote address [akka.tcp://sng#0.0.0.0:37516]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://sng#0.0.0.0:37516] Caused by: [System.AggregateException: One or more errors occurred. 
---> Akka.Remote.Transport.InvalidAssociationException: No connection could be made because the target machine actively refused it tcp://sng#0.0.0.0:37516 at Akka.Remote.Transport.DotNetty.TcpTransport.<AssociateInternal>d__1.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Akka.Remote.Transport.DotNetty.DotNettyTransport.<Associate>d__22.MoveNext() --- End of inner exception stack trace --- at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification) at Akka.Remote.Transport.ProtocolStateActor.<>c.<InitializeFSM>b__11_54(Task`1 result) at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke() at System.Threading.Tasks.Task.Execute() ---> (Inner Exception #0) Akka.Remote.Transport.InvalidAssociationException: No connection could be made because the target machine actively refused it tcp://sng#0.0.0.0:37516 at Akka.Remote.Transport.DotNetty.TcpTransport.<AssociateInternal>d__1.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Akka.Remote.Transport.DotNetty.DotNettyTransport.<Associate>d__22.MoveNext()<--- ] [21:06:04 INF] Cluster Node [akka.tcp://sng#127.0.0.1:4053] - Leader is auto-downing unreachable node [akka.tcp://sng#127.0.0.1:4053] [21:06:04 INF] Marking unreachable node [akka.tcp://sng#0.0.0.0:37516] as [Down] [21:06:05 INF] Leader is removing unreachable node [akka.tcp://sng#0.0.0.0:37516] [21:06:05 WRN] Association to [akka.tcp://sng#0.0.0.0:37516] having UID [1060233119] is irrecoverably failed. UID is now quarantined and all messages to this UID will be delivered to dead letters. Remote actorsystem must be restarted to recover from this situation. Working node logs: [21:05:38 INF] Starting remoting [21:05:38 INF] Remoting started; listening on addresses : [akka.tcp://sng#0.0.0.0:37516] [21:05:38 INF] Remoting now listens on addresses: [akka.tcp://sng#0.0.0.0:37516] [21:05:38 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37516] - Starting up... [21:05:38 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37516] - Started up successfully [21:05:40 INF] Welcome from [akka.tcp://sng#127.0.0.1:4053] [21:05:40 INF] Member is Up: Member(address = akka.tcp://sng#127.0.0.1:4053, Uid=439782041 status = Up, role=[lighthouse], upNumber=1) [21:05:40 INF] Member is Up: Member(address = akka.tcp://sng#0.0.0.0:37516, Uid=1060233119 status = Up, role=[monitor], upNumber=2) //shutdown logs are missing Started and stopped COM node Lighthouse logs: [21:12:02 INF] Connection was reset by the remote peer. Channel [::ffff:127.0.0.1]:4053->[::ffff:127.0.0.1]:37546](Id=4ca91e15) COM node logs: [WARNING][18. 07. 2018 19:11:15][Thread 0001][ActorSystem(sng)] The type name for serializer 'hyperion' did not resolve to an actual Type: 'Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion' [WARNING][18. 07. 2018 19:11:15][Thread 0001][ActorSystem(sng)] Serialization binding to non existing serializer: 'hyperion' [21:11:15 DBG] Logger log1-SerilogLogger [SerilogLogger] started [21:11:15 DBG] StandardOutLogger being removed [21:11:15 DBG] Default Loggers started [21:11:15 INF] Starting remoting [21:11:15 DBG] Starting prune timer for endpoint manager... 
[21:11:15 INF] Remoting started; listening on addresses : [akka.tcp://sng#0.0.0.0:37543] [21:11:15 INF] Remoting now listens on addresses: [akka.tcp://sng#0.0.0.0:37543] [21:11:15 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37543] - Starting up... [21:11:15 INF] Cluster Node [akka.tcp://sng#0.0.0.0:37543] - Started up successfully [21:11:15 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+Subscribe [21:11:15 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+Subscribe [21:11:16 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+JoinSeedNodes [21:11:16 DBG] [Uninitialized] Received Akka.Cluster.InternalClusterAction+Subscribe [21:11:26 WRN] Couldn't join seed nodes after [2] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053] [21:11:31 WRN] Couldn't join seed nodes after [3] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053] [21:11:36 WRN] Couldn't join seed nodes after [4] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053] [21:11:40 ERR] No response from remote. Handshake timed out or transport failure detector triggered. [21:11:40 WRN] AssociationError [akka.tcp://sng#0.0.0.0:37543] -> akka.tcp://sng#127.0.0.1:4053: Error [Association failed with akka.tcp://sng#127.0.0.1:4053] [] [21:11:40 WRN] Tried to associate with unreachable remote address [akka.tcp://sng#127.0.0.1:4053]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://sng#127.0.0.1:4053] Caused by: [Akka.Remote.Transport.AkkaProtocolException: No response from remote. Handshake timed out or transport failure detector triggered. at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Akka.Remote.Transport.AkkaProtocolTransport.<Associate>d__19.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Akka.Remote.EndpointWriter.<AssociateAsync>d__23.MoveNext()] [21:11:40 DBG] Disassociated [akka.tcp://sng#0.0.0.0:37543] -> akka.tcp://sng#127.0.0.1:4053 [21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 1 dead letters encountered. [21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 2 dead letters encountered. [21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 3 dead letters encountered. [21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 4 dead letters encountered. [21:11:40 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 5 dead letters encountered. [21:11:40 INF] Message AckIdleCheckTimer from akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%40127.0.0.1%3A4053-1/endpointWriter to akka://sng/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsng%40127.0.0.1%3A4053-1/endpointWriter was not delivered. 6 dead letters encountered. 
[21:11:41 WRN] Couldn't join seed nodes after [5] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053] [21:11:41 INF] Message InitJoin from akka://sng/system/cluster/core/daemon/joinSeedNodeProcess-1 to akka://sng/deadLetters was not delivered. 7 dead letters encountered. [21:11:46 WRN] Couldn't join seed nodes after [6] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053] [21:11:51 WRN] Couldn't join seed nodes after [7] attempts, will try again. seed-nodes=[akka.tcp://sng#127.0.0.1:4053] Do you have any idea how to debug and/or resolve this?
The first thing I notice is that the HOCON configuration of the non-working node contains a different "seed-nodes" address from the working node. IMHO the "seed-nodes" in all the applications (the nodes of the cluster) need to be the same. So in the non-working node, instead of seed-nodes = ["akka.tcp://songoulash#127.0.0.1:4053"], use the value from the working node: seed-nodes = ["akka.tcp://sng#127.0.0.1:4053"]. Also, please check this GitHub link for a sample, https://github.com/AJEETX/Akka.Cluster, and another link, https://github.com/AJEETX/AkkaNet.Cluster.RoundRobinGroup. @Rok, kindly let me know if this was helpful or I can investigate further.
JavaLite Async event processing fails with error [client] - AMQ214008: Failed to handle packet java.lang.UnsupportedOperationException
please anybody help to fix this issue?<br/> **I am getting issue [client] - AMQ214008: Failed to handle packet java.lang.UnsupportedOperationException while processing the command data in javalite async?**<br/> [2018-03-30 10:27:16,303] - [DEBUG] [client] - Calling close on session ClientSessionImpl [name=d13aa760-33d6-11e8-b4fb-844bf530b8f3, username=null, closed=false, factory = org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl#58e64301, metaData=(jms-session=,)]#6a6c5fb3 <br/> [2018-03-30 10:27:16,306] - [DEBUG] [server] - QueueImpl[name=jms.queue.eventQueue, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=70d74287-3283-11e8-8a66-844bf530b8f3]]#39533a61 doing deliver. messageReferences=0 <br/> [2018-03-30 10:27:16,308] - [DEBUG] [client] - calling cleanup on ClientSessionImpl [name=d13aa760-33d6-11e8-b4fb-844bf530b8f3, username=null, closed=false, factory = org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl#58e64301, metaData=(jms-session=,)]#6a6c5fb3 <br/> [2018-03-30 10:27:16,335] - [DEBUG] [HttpAsyncRequestExecutor] - http-outgoing-0 [ACTIVE] [content length: 42355; pos: 42355; completed: true] <br/> [2018-03-30 10:27:16,336] - [DEBUG] [ThreadLocalRandom] - -Dio.netty.initialSeedUniquifier: 0xad1a1d5891abf66a <br/> **[2018-03-30 10:27:16,337] - [ERROR] [client] - AMQ214008: Failed to handle packet <br/> java.lang.UnsupportedOperationException<br/> at java.nio.ByteBuffer.array(Unknown Source)**<br/> at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.handleCompressedMessage(ClientConsumerImpl.java:600)<br/> at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.handleMessage(ClientConsumerImpl.java:532)<br/> at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.handleReceiveMessage(ClientSessionImpl.java:824)<br/> at org.apache.activemq.artemis.spi.core.remoting.SessionContext.handleReceiveMessage(SessionContext.java:97)<br/> at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.handleReceivedMessagePacket(ActiveMQSessionContext.java:712)<br/> at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.access$400(ActiveMQSessionContext.java:111)<br/> at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext$ClientSessionPacketHandler.handlePacket(ActiveMQSessionContext.java:755)<br/> at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:594)<br/> at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:368)<br/> at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:350)<br/> at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl$DelegatingBufferHandler.bufferReceived(ClientSessionFactoryImpl.java:1140)<br/> at org.apache.activemq.artemis.core.remoting.impl.invm.InVMConnection$1.run(InVMConnection.java:183)<br/> at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:100)<br/> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)<br/> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)<br/> at java.lang.Thread.run(Unknown Source)<br/> <br/><br/> When i execute the code in standalone project, its working fine. But while running the same in Tomcat server its throwing the above error..? 
The source code is below:

public class TestCommand extends Command {
    private TestEvent event;

    public TestCommand(MsgEvent event) {
        this.event = (TestEvent) event;
    }

    public TestCommand() {
    }

    @Override
    public void execute() {
        // code stuff
    }
}

async = new Async(filePath, false, new QueueConfig("eventQueue", new CommandListener(), threadCount));
async.start();

public void test(EventCommand ev) {
    async.send("eventQueue", ev);
}

The following libraries are loaded into the classpath. Can anybody help me fix this issue?
The evidence suggests to me that when this code is executed in Tomcat it is using a different java.nio.ByteBuffer implementation than when it is run standalone (perhaps due to different versions of Netty). The code causing the exception calls java.nio.ByteBuffer.array(), which implementations are not required to support (i.e. throwing an UnsupportedOperationException is valid here). This was dealt with in Artemis via this commit, which is available in Artemis 1.4. That said, there's no reason to use such an old version of Artemis; I would recommend you upgrade to the latest 2.5 release as soon as possible.
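To make the failure mode concrete, here is a minimal standalone sketch (my own illustration, not code from the question or the Artemis sources) showing that array() is an optional operation: a direct ByteBuffer has no accessible backing array, so calling array() on it throws exactly the UnsupportedOperationException seen in the stack trace above.

import java.nio.ByteBuffer;

public class ByteBufferArrayDemo {
    public static void main(String[] args) {
        // A heap buffer is backed by a byte[], so the optional array() method works.
        ByteBuffer heap = ByteBuffer.allocate(16);
        System.out.println("heap.hasArray()   = " + heap.hasArray());   // true

        // A direct buffer has no accessible backing array ...
        ByteBuffer direct = ByteBuffer.allocateDirect(16);
        System.out.println("direct.hasArray() = " + direct.hasArray()); // false
        try {
            direct.array(); // ... so array() throws UnsupportedOperationException
        } catch (UnsupportedOperationException e) {
            System.out.println("direct.array() threw " + e);
        }
    }
}

Whether the client ends up reading from a heap buffer or a direct/pooled buffer can depend on the runtime environment and the Netty version on the classpath, which is consistent with the same code working standalone but failing under Tomcat.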
How to handle Spark with multiple Cassandra servers with different SSL policies
One Cassandra cluster does not have SSL enabled and the other Cassandra cluster has SSL enabled. How can I interact with both Cassandra clusters from a single Spark job? I have to copy a table from one server (without SSL) and put it into the other server (with SSL).

Spark job:

object TwoClusterExample extends App {
  val conf = new SparkConf(true).setAppName("SparkCassandraTwoClusterExample")
  println("Starting the SparkCassandraLocalJob....")
  val sc = new SparkContext(conf)

  val connectorToClusterOne = CassandraConnector(sc.getConf.set("spark.cassandra.connection.host", "localhost"))
  val connectorToClusterTwo = CassandraConnector(sc.getConf.set("spark.cassandra.connection.host", "remote"))

  val rddFromClusterOne = {
    implicit val c = connectorToClusterOne
    sc.cassandraTable("test", "one")
  }

  {
    implicit val c = connectorToClusterTwo
    rddFromClusterOne.saveToCassandra("test", "one")
  }
}

Cassandra conf:

spark.master spark://ip:6066
spark.executor.memory 1g
spark.cassandra.connection.host remote
spark.cassandra.auth.username iccassandra
spark.cassandra.auth.password pwd1
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.eventLog.enabled true
spark.eventLog.dir /Users/test/logs/spark
spark.cassandra.connection.ssl.enabled true
spark.cassandra.connection.ssl.trustStore.password pwd2
spark.cassandra.connection.ssl.trustStore.path truststore.jks

Submitting the job:

spark-submit --deploy-mode cluster --master spark://ip:6066 --properties-file cassandra-count.conf --class TwoClusterExample target/scala-2.10/cassandra-table-assembly-1.0.jar

Below is the error I am getting:

17/10/26 16:27:20 DEBUG STATES: [/remote:9042] preventing new connections for the next 1000 ms
17/10/26 16:27:20 DEBUG STATES: [/remote:9042] Connection[/remote:9042-1, inFlight=0, closed=true] failed, remaining = 0
17/10/26 16:27:20 DEBUG ControlConnection: [Control connection] error on /remote:9042 connection, no more host to try
com.datastax.driver.core.exceptions.TransportException: [/remote] Cannot connect
    at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:157)
    at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:140)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:222)
    at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
    at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /remote:9042
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:220)
    ... 6 more
17/10/26 16:27:20 DEBUG Cluster: Shutting down
Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:58)
    at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: java.io.IOException: Failed to open native connection to Cassandra at {remote}:9042
    at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:162)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
    at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
    at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
    at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81)
    at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
    at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:120)
    at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:304)
    at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.tableDef(CassandraTableRowReaderProvider.scala:51)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef$lzycompute(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:146)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:143)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1143)
    at cassandraCount$.runJob(cassandraCount.scala:27)
    at cassandraCount$delayedInit$body.apply(cassandraCount.scala:22)
    at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
    at scala.App$$anonfun$main$1.apply(App.scala:71)
    at scala.App$$anonfun$main$1.apply(App.scala:71)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
    at scala.App$class.main(App.scala:71)
    at cassandraCount$.main(cassandraCount.scala:10)
    at cassandraCount.main(cassandraCount.scala)
    ... 6 more
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /remote:9042 (com.datastax.driver.core.exceptions.TransportException: [/remote] Cannot connect))
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:231)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
    at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:393)
    at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:155)
    ... 37 more
17/10/26 16:27:23 INFO SparkContext: Invoking stop() from shutdown hook
17/10/26 16:27:23 INFO SparkUI: Stopped Spark web UI at http://10.7.10.138:4040

Working code:

object ClusterSSLTest extends App {
  val conf = new SparkConf(true).setAppName("sparkCassandraLocalJob")
  println("Starting the ClusterSSLTest....")
  val sc = new SparkContext(conf)

  val sourceCluster = CassandraConnector(
    sc.getConf.set("spark.cassandra.connection.host", "localhost"))

  val destinationCluster = CassandraConnector(
    sc.getConf.set("spark.cassandra.connection.host", "remoteip1,remoteip2")
      .set("spark.cassandra.auth.username", "uname")
      .set("spark.cassandra.auth.password", "pwd")
      .set("spark.cassandra.connection.ssl.enabled", "true")
      .set("spark.cassandra.connection.timeout_ms", "10000")
      .set("spark.cassandra.connection.ssl.trustStore.path", "../truststore.jks")
      .set("spark.cassandra.connection.ssl.trustStore.password", "pwd")
      .set("spark.cassandra.connection.ssl.trustStore.type", "JKS")
      .set("spark.cassandra.connection.ssl.protocol", "TLS")
      .set("spark.cassandra.connection.ssl.enabledAlgorithms", "TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA")
  )

  val rddFromSourceCluster = {
    implicit val c = sourceCluster // connect to the source cluster in this code block
    val tbRdd = sc.cassandraTable("analytics", "products")
    println(s"no of rows ${tbRdd.count()}")
    tbRdd
  }

  val rddToDestinationCluster = {
    implicit val c = destinationCluster // connect to the destination cluster in this code block
    rddFromSourceCluster.saveToCassandra("analytics", "products")
  }

  sc.stop()
}
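For readers comparing the two versions: the main difference appears to be that the working job builds each CassandraConnector from its own copy of the driver configuration (SparkContext.getConf returns a copy, so each connector can be customized independently), with the destination connector carrying that cluster's auth and SSL settings explicitly, rather than relying on the single set of spark.cassandra.* properties in the submitted properties file, which can only describe one cluster's SSL policy.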