Apache Ignite - Node running on remote machine not discovered

Apache Ignite version: 2.1.0
I am using TcpDiscoveryVmIpFinder to configure the nodes of an Apache Ignite cluster to set up a compute grid. Below is my configuration, which is just the example-default.xml edited with the IP addresses:
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead os static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
<!-- <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"> -->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>xxx.40.16.yyy:47500..47509</value>
<value>xx.40.16.zzz:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
If I start multiple nodes on a single machine, the nodes on that machine discover each other and form a cluster. But the nodes on the remote machines do not discover each other.
Any advice would be helpful.

First of all, make sure that you really use this config file and not the default config. With the default configuration, nodes can find each other only on the same machine.
Once you've checked that, you also need to verify that it's possible to connect from host 106.40.16.64 to 106.40.16.121 (and vice versa) on ports 47500..47509. It's possible that a firewall is blocking connections or that these ports are simply closed.
For example, you can check this with netcat; run the following from the 106.40.16.64 host:
nc -z 106.40.16.121 47500
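If netcat is not available, the same reachability check can be sketched with plain Java sockets (the host below is taken from the nc example above; adjust it and the port range to your setup):
import java.net.InetSocketAddress;
import java.net.Socket;

String host = "106.40.16.121"; // remote host from the nc example above
for (int port = 47500; port <= 47509; port++) {
    try (Socket socket = new Socket()) {
        socket.connect(new InetSocketAddress(host, port), 1000); // 1-second connect timeout
        System.out.println(port + ": reachable");
    } catch (Exception e) {
        System.out.println(port + ": not reachable (" + e.getMessage() + ")");
    }
}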

Related

Ignite client returns: Failed to establish connection with any host

I am new to Ignite and I am trying to run this simple example.
I start a node with this configuration:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
  <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Enabling Apache Ignite Persistent Store. -->
    <property name="dataStorageConfiguration">
      <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
          <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
            <property name="persistenceEnabled" value="true"/>
          </bean>
        </property>
      </bean>
    </property>
    <!-- Explicitly configure TCP discovery SPI to provide a list of initial nodes. -->
    <property name="discoverySpi">
      <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
          <!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
          <!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
          <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
            <property name="addresses">
              <list>
                <!-- In distributed environment, replace with actual host IP address. -->
                <value>198.168.0.1:47500..47502</value>
              </list>
            </property>
          </bean>
        </property>
      </bean>
    </property>
  </bean>
</beans>
After that I am trying to run:
IgniteClientConfiguration mConfiguration;
mConfiguration.SetEndPoints("198.168.0.1:47500..47502");
mClient = IgniteClient::Start(mConfiguration);
but Start throws an exception: Failed to establish connection with any host.
Does anybody know why? I am running my node and the program on the same machine (Ubuntu 20).
I believe it happens because you are trying to connect a thin client to a discovery port (the discovery ports are for thick clients and servers). Try this instead (10800 is the default thin client port):
mConfiguration.SetEndPoints("198.168.0.1:10800");
A localhost address (e.g. 127.0.0.1:10800) will also work if you wish to connect to a node deployed locally.

Ignite Thin Client in Kubernetes

I'm trying to set up a distributed cache using Ignite and my java app through a thin client in a Kubernetes environment.
In my Kubernetes cluster, I have two pods running the Java app and two Ignite pods. For the Java pods to communicate with the Ignite pods, I have configured a thin client to connect to the Ignite Kubernetes service. With this configuration, I was expecting load balancing to be handled on the Kubernetes side. Here's what I have done in Java code:
ClientConfiguration cfg = new ClientConfiguration()
.setAddresses("ignite-service.default.svc.cluster.local:10800")
.setUserName("user")
.setUserPassword("password");
IgniteClient igniteClient = Ignition.startClient(cfg);
While storing and getting objects from ignite, I deleted one of the ignite pods and, after a while, I was getting errors saying that "Ignite cluster is unavailable":
org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable
With this behavior, I assume that the method setAddresses in ClientConfiguration class stores one of the IPs of the pods and channels all communication to that pod.
Is this what's happening in this method?
Ignite version 2.7
Kubernetes version 1.12.3
You need to pass several addresses to enable failover (i.e., automatic reconnect) on the thin client side.
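For example, a minimal sketch of such a configuration (the per-pod hostnames below are assumptions for illustration, not addresses from your cluster):
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

ClientConfiguration cfg = new ClientConfiguration()
    .setAddresses(
        "ignite-0.ignite-service.default.svc.cluster.local:10800",  // assumed address of the first Ignite pod
        "ignite-1.ignite-service.default.svc.cluster.local:10800")  // assumed address of the second Ignite pod
    .setUserName("user")
    .setUserPassword("password");
// If the current connection drops (e.g. a pod is deleted), the thin client
// fails over to another address from the list instead of throwing
// ClientConnectionException immediately.
IgniteClient igniteClient = Ignition.startClient(cfg);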
You might have resolved the issue already, since the question was posted a long time ago, but I'm still putting an answer here for others.
With Apache Ignite 2.7+, you can modify your deployment to use the Kubernetes IP finder. With this, Kubernetes will take care of discovering and connecting all server and client nodes.
The TcpDiscoveryKubernetesIpFinder module will help you achieve this.
This is the discovery SPI that needs to be added to your configuration (replace the namespace and service name with the appropriate values):
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<constructor-arg>
<bean class="org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration">
<property name="namespace" value="default" />
<property name="serviceName" value="ignite" />
</bean>
</constructor-arg>
</bean>
</property>
</bean>
</property>
Official documentation can be found here: https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment

Ignite cluster communication with multiple nics

Ignite uses one network card during a re-balance. It should use multiple.
Our cluster uses more than 1 Gbps of bandwidth during rebalancing, so we tried network bonding, but then the ARP cache needs to be refreshed. Instead we want to use separate network devices on a virtual machine, but Ignite uses only one of them during rebalancing. The virtual machines run CentOS 7; Ignite is 2.7.0-1.
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<value>ip1:47500..47509</value>
<value>ip2:47500..47509</value>
<value>ip3:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
We expect Ignite to rebalance through ip1, ip2, and ip3 at the same time.
UPDATE
We've made a bonded virtual network device by combining multiple devices; unfortunately it required some downtime. Problem solved.
Two options here:
Can you create a virtual NIC which will join two physical ones together? I think this should be doable.
Failing that, you can have two nodes per VM, one with localAddress set to nic1 and the other to nic2. Please note that you should define localAddress on TcpCommunicationSpi, since that's where the rebalancing traffic goes (see the sketch below). Share RAM fairly between those two nodes.
Maybe you could also write a custom TcpCommunicationSpi that uses two NICs, but I'm not sure the traffic would be distributed fairly even then.
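A minimal sketch of the second option in Java (the IP address is hypothetical; each of the two nodes on a VM would get the address of its own NIC):
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

IgniteConfiguration cfg = new IgniteConfiguration();

// Pin this node's Communication SPI (rebalancing/data traffic) to one NIC.
TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setLocalAddress("10.0.0.11"); // hypothetical IP bound to the NIC this node should use
cfg.setCommunicationSpi(commSpi);

Ignition.start(cfg);
The second node on the same VM would use the same configuration with the other NIC's address, so that rebalancing traffic is spread across both devices.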

Apache Ignite SQLClient Connection from outside cluster

Apache Ignite is running in a 5-node Hadoop cluster. The Ignite Visor top command shows all the recognized nodes accurately. Only one node is exposed outside the cluster as an edge node, via an external IP. I am unable to connect to the Apache Ignite cluster from outside the cluster using the exposed IP of the edge node.
Working within cluster : jdbc:ignite:thin://127.0.0.1/
Working within cluster : jdbc:ignite:thin://internal-ip.labs.net/
Not Working Outside cluster : jdbc:ignite:thin://external-ip.labs.net/
Please advise if any additional configuration is needed on the edge node to make the JDBC URL work with the external IP address as well. I want to connect to the Ignite cluster from outside using a SQL client so that I can run queries.
My Current Config
<bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.sharedfs.TcpDiscoverySharedFsIpFinder">
<property name="path" value="/storage/softwares/ignite/addresses"/>
</bean>
</property>
</bean>
</property>
</bean>
The Apache Ignite JDBC thin driver operates over port 10800 by default. You need to forward that port from the external IP to your Ignite node to be able to connect to the cluster using JDBC.
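Once the port is forwarded, a connection from outside the cluster would look roughly like this (a sketch; the hostname is the placeholder from the question, and ignite-core must be on the client's classpath):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Register the thin JDBC driver (auto-registered on JDBC 4+ classpaths).
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

// Port 10800 on the edge node must be forwarded to an Ignite server node.
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://external-ip.labs.net:10800/");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT 1")) {
    while (rs.next()) {
        System.out.println(rs.getLong(1)); // prints 1 if the connection works
    }
}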

Ignite on server

I have a server with a container on it, in which Ignite node(s) are started.
I know the server's configuration (IP, container port, etc.).
I want to connect to (discover) this node from my PC (from IntelliJ IDEA).
Specifically, I want to start another Ignite node that must connect to the node on the server.
How should I configure my new node?
With TcpDiscoverySpi or CommunicationSpi, and how do I specify the IP and port?
You need to start a node on your PC with a configuration where the IP finder set on TcpDiscoverySpi contains the list of IPs and ports of your remote cluster.
Most likely it will be enough to configure the static IP finder on your side.
You can simply create the static IP finder as below and set this discovery bean in the configuration of all the nodes (servers and clients):
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>server_1_ip:47500..47509</value>
<value>server_2_ip:47500..47509</value>
<value>server_3_ip:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
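If you prefer to start the node from IntelliJ IDEA programmatically, the same discovery setup can be sketched in Java (server_1_ip is the placeholder from the XML above):
import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

// Static IP finder pointing at the remote cluster's discovery port range.
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("server_1_ip:47500..47509")); // placeholder from the XML above

TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
discoverySpi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(discoverySpi);
cfg.setClientMode(true); // optional: join as a client node if your PC should not store data

Ignite ignite = Ignition.start(cfg);
Note that the container must also publish the discovery ports (47500..47509 here) and the communication port (47100 by default) to the host, otherwise your local node will not be able to join the remote cluster.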