Apache Ignite | TCP discovery SPI

What is the TCP discovery SPI in Apache Ignite, and what does it do?
What does 127.0.0.1:47500..47509 mean in example-cache.xml of Apache Ignite?

Apache Ignite nodes are all peers and join together automatically to form a cluster. "Discovery" is the mechanism by which they find each other: a node uses the IP address and port of at least one other Ignite node to join the rest of the cluster.
TCP Discovery does this over a TCP connection to the addresses and ports specified.
127.0.0.1:47500..47509 is shorthand notation meaning: contact IP 127.0.0.1 (your localhost) on ports 47500 through 47509.
This is used in the example code so you can easily run a few Ignite instances on your computer for testing; they will all connect to each other as a cluster, even though they are running on the same machine.
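For context, this is roughly the discovery fragment the question refers to in example-cache.xml (a sketch; the enclosing IgniteConfiguration bean and unrelated settings are omitted):

<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <!-- ten ports, so up to ten local nodes can discover each other -->
            <value>127.0.0.1:47500..47509</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>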

This is the component responsible for node discovery. Please refer to this page for details: https://apacheignite.readme.io/docs/tcpip-discovery

Related

How to set up a Redis cluster behind a load balancer?

We want to set up Redis 6.2 clustering behind an LB. There are only master nodes, and no Redis Sentinel is being used. Each cluster-enabled Redis instance runs on a different host with the same configuration (e.g., all of them are configured with port 6379). Is this possible with some port configuration on the LB, such that a unique port on the LB maps to a unique_ip:6379?
Our idea is to use a cluster-aware Redis client like Lettuce's RedisClusterClient, which would issue CLUSTER NODES/SLOTS commands and react to MOVED/ASK redirection. It would also take care of splitting a pipeline across separate connections based on the slot for each command.
It seems this is not possible to achieve if the same port is used on all Redis hosts. Using https://docs.redis.com/latest/rs/networking/cluster-lba-setup/ as a guide, the best we could manage was to configure each Redis instance with a unique port, set cluster-announce-ip to the virtual IP (which points to the LB), and then manually make sure the same port is used on the LB as on the Redis host. With this, the CLUSTER SLOTS and MOVED responses from the Redis hosts could be correctly acted upon by the client. But this complicates our setup whenever a Redis host has to be added or removed.
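For reference, the per-node configuration described above looks roughly like this (a sketch; the IP and ports are placeholders):

# redis.conf on one node; each host gets its own unique port
port 7001
cluster-enabled yes
cluster-announce-ip 10.0.0.100    # virtual IP that points at the LB
cluster-announce-port 7001        # the LB must expose this same port for this node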
You can use Route 53 if you're on AWS to achieve this.
Create an A-record setup like this:
Add all the hosts (IP addresses) in Route 53 and set the TTL to a small value, like 30 seconds. Route 53 will return one of these Redis IP addresses; using this endpoint, Redis clients like Lettuce or Jedis will discover all the Redis nodes.
You can use any other DNS system as well; the record type should be A.
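For example, a Lettuce client can be pointed at that single DNS name and left to discover the rest of the cluster itself; a minimal sketch, assuming a hypothetical record redis.example.internal and Lettuce 5+:

import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import java.time.Duration;

public class RedisDnsSeedExample {
    public static void main(String[] args) {
        // The DNS A record is only a seed; the full topology is then
        // learned via CLUSTER commands against whichever node answers.
        RedisClusterClient client = RedisClusterClient.create("redis://redis.example.internal:6379");

        // Refresh topology periodically and on MOVED/ASK redirects so added or
        // removed hosts are picked up without touching client configuration.
        ClusterTopologyRefreshOptions refresh = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofSeconds(30))
                .enableAllAdaptiveRefreshTriggers()
                .build();
        client.setOptions(ClusterClientOptions.builder()
                .topologyRefreshOptions(refresh)
                .build());

        StatefulRedisClusterConnection<String, String> conn = client.connect();
        conn.sync().set("greeting", "hello");
        conn.close();
        client.shutdown();
    }
}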

Gcloud load balancing to the same host for two TCP connections

I'm using GCP like in the following schema:
TCP balancer -> backend service -> MIG (my app) with autoscaling.
"My app" accepts commands on one TCP port (A) and sends notifications on another TCP port (B) to subscribers.
I'm running my tests against the TCP LB's IP: the tests connect to port B at startup (i.e., to one of the instances of "my app") and also open a connection to port A for each test.
As a result, I've hit a case where the port A and port B connections are terminated on different hosts.
I am not sure how to avoid this.
I mitigated the issue by using --session-affinity=CLIENT_IP in the backend-service configuration, i.e., all connections from one client IP are directed to the same target.
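For reference, the flag can be applied to an existing backend service like this (a sketch; the service name and region are placeholders):

gcloud compute backend-services update my-backend-service \
    --region=us-central1 \
    --session-affinity=CLIENT_IP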

Aerospike: How to find from any aerospike server which clients are accessing it?

We had multiple clients configured to talk to this cluster of Aerospike nodes. Even though we have now removed the configuration from all the clients we are aware of, some read/write requests are still coming to this cluster, as shown in AMC.
I looked at the log file generated in /var/log/aerospike/aerospike.log, but could not get any information.
Update
The netstat command mentioned in the answer by @kporter shows the number of connections, with statuses such as ESTABLISHED, TIME_WAIT, and CLOSE_WAIT. But that does not mean those connections are currently being used for get/set operations. How do I get the IPs from which Aerospike operations are currently being performed?
Update 2 (Solved)
As mentioned in the comments on @kporter's answer, a tcpdump command on the culprit client showed packets still being sent to the Aerospike cluster that was no longer referenced in the config file. This was happening even while AMC for that cluster showed no read/write TPS.
I later found that this stopped after restarting the nginx service on the client. Note that the client's config file now references a new Aerospike cluster, and packets sent to that cluster did not stop after the nginx restart. This is weird, but it worked.
Clients connect to Aerospike over port 3000:
The following command, when run on the server nodes, will show the addresses of hosts connecting to the server over port 3000.
netstat --tcp --numeric-ports | grep 3000
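If you only care about peers whose sockets are live (see the connection statuses mentioned in the update above), filtering on ESTABLISHED narrows the list; a variant of the same command, assuming the default service port 3000:

netstat --tcp --numeric-ports | grep 3000 | grep ESTABLISHED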

Clustering doesn't work with mod_cluster on JBoss AS7 - Stateful Application

I'm going to explain my situation.
Background:
I'm running three virtual machines with Debian Jessie on Open Nebula, one as master and the other two as slaves. On them I've installed JBoss AS 7.1 and mod_cluster 1.2.
Goal:
Run a stateful app, so that when I shut down the master server the cluster lets me continue using the app, with the session shared and the variable values maintained.
I followed this guide with the given web application.
Errors:
I can't access the app directly at http://master/cluster-demo/ as in the guide above; I have to specify the port (8330 for server-three).
When I shut down server-three, the slaves notice that the server is down, but the session is not shared and the application is no longer accessible. This is the output on a slave when I shut down server-three on the master.
Configuration Files
I attach my configuration files:
/opt/jboss/domain/configuration/domain.xml
/opt/jboss/httpd/httpd/conf/httpd.conf
/opt/jboss/domain/configuration/host.xml in the master
/opt/jboss/domain/configuration/host.xml in the slaves
Answer
mod_cluster does not have anything in common with the messaging (JMS, HornetQ) subsystems. The mod_cluster setting also does not have anything in common with the clustering subsystem, i.e. Infinispan and its workhorse, JGroups.
What the AS7 mod_cluster subsystem does is listen for UDP multicast advertisement messages emitted by the Apache HTTP Server mod_cluster modules. When it receives such a message, it registers itself with your Apache HTTP Server load balancer. From that moment on, your registered AS7 "worker" node keeps sending specialized HTTP messages (over TCP), informing the Apache HTTP Server about:
its name (jvmRoute or generated)
its current load
its deployments, i.e. application contexts
aliases etc.
When there are no worker nodes registered with your Apache HTTP Server balancer, there are no contexts, hence there is nowhere to forward your requests to.
According to the configuration you posted, you rely on UDP multicast messages being sent to/received from 224.0.1.105:23364.
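For orientation, the advertising side of that exchange is configured in httpd.conf; a minimal sketch, assuming mod_cluster 1.3.x module names and a placeholder balancer address of 10.0.0.5:

LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule manager_module modules/mod_manager.so
LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so
LoadModule advertise_module modules/mod_advertise.so

Listen 10.0.0.5:6666
<VirtualHost 10.0.0.5:6666>
    EnableMCPMReceive
    ServerAdvertise on
    AdvertiseGroup 224.0.1.105:23364
</VirtualHost>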
Open Nebula, firewall and UDP multicast
It is possible that Open Nebula doesn't allow UDP multicast between hosts or that your iptables are blocking it. Try this:
use curl on your worker host to access the balancer host -- exactly the VirtualHost where you have the directive EnableMCPMReceive defined.
if it doesn't work, you must fix iptables, SELinux, httpd's allow/deny settings, and the like
if it works, it's a good sign that worker can talk to the balancer
go to your AS7 XML, the modcluster subsystem, and add an attribute to the config: <mod-cluster-config advertise-socket="modcluster" proxy-list="your-httpd-address:port"> -- the address you've just tried with curl (see the sketch after this list)
now it should work even without UDP multicast
if you would like to debug your UDP multicast settings in Open Nebula, give it a shot with Advertize.java
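Putting the static-proxy advice together, the worker-side config from the list above would look roughly like this (a sketch; the proxy address is a placeholder for your httpd host and MCPM port):

<mod-cluster-config advertise-socket="modcluster"
                    proxy-list="10.0.0.5:6666"
                    advertise="false"
                    connector="ajp"/>

With advertise="false" and an explicit proxy-list, the worker registers with the balancer directly over TCP, so UDP multicast is no longer needed.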
1.2.0 is too old, do not use vulnerable code
Please, do not use mod_cluster 1.2.0 with your Apache HTTP Server. That version is completely obsolete and contains serious bugs, including a code-injection CVE and a severe performance issue. Download mod_cluster 1.3.1.Final for httpd 2.4.x, or build your own from the sources if you need httpd 2.2.x support. If you happen to need any help with that, ask.

Configure load-balancing ActiveMQ servers which have only one static load-balancing IP

This is my scenario.
I have two consumer servers:
Server A has IP: 192.168.0.1
Server B has IP: 192.168.0.2
Both servers have ActiveMQ configured with the transport connector below:
<transportConnector uri="tcp://192.168.0.X:61616" updateClusterClients="true"/>
In my system, I have one load-balancing server (a hardware load balancer) with IP 192.168.0.100, which load-balances all requests to the above servers.
In the past, my client had to configure the connection URL as below:
failover:(tcp://192.168.0.1:61616,tcp://192.168.0.2:61616)
to send ActiveMQ messages.
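For context, a minimal sketch of how a JMS client would use that failover URL, assuming the standard ActiveMQ Java client:

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverSendExample {
    public static void main(String[] args) throws JMSException {
        // The failover transport picks one of the listed brokers and
        // reconnects to the other if the connection drops.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://192.168.0.1:61616,tcp://192.168.0.2:61616)");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create a session and producer, send messages ...
        connection.close();
    }
}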
Currently, we cannot send messages directly to each server and must send them through the load-balancing IP. But when I configure the URL as below:
failover:(tcp://192.168.0.100:61616)
Nothing happens: we can reach port 61616, but messages cannot be sent.
I cannot use the ActiveMQ load-balancing model because the client cannot reach the child servers. Can someone help me? Can we configure ActiveMQ to have a virtual client which has the load-balancing IP?
Thanks a lot.
In the end, I found that I had been researching in the wrong direction. OpenWire is a two-way communication protocol that requires connectivity from both ends, so we cannot put the brokers behind a load balancer with a shared IP. Need to find another way :)