Clustering in VerneMQ gives node not discoverable - mqtt-vernemq

I have two nodes named VMQ1@127.0.0.1 and VMQ2@127.0.0.1 in two different containers on the same virtual machine,
but when I try, from node 2, to join the cluster with vmq-admin as below:
vmq-admin cluster join discovery-node=VMQ1@127.0.0.1
it says it cannot discover the node, or node not found.
Where am I going wrong?
And yes, the cookies are set to the same value, and all the ports that need to be opened are exposed.
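For reference, the settings the question mentions (node name and cookie) live in vernemq.conf. A minimal sketch for the first container, with the node name and cookie value assumed from the question rather than taken from a real deployment:

```
# vernemq.conf on the first container (values assumed from the question)
nodename = VMQ1@127.0.0.1
distributed_cookie = vmq

# clustering listener; port 44053 is VerneMQ's default and must be
# reachable from the other container for cluster join to succeed
listener.vmq.clustering = 0.0.0.0:44053
```

Note that if both containers bind their node name to 127.0.0.1, each container's loopback address points at itself, so the nodes would need addresses that are routable between the containers.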

Related

How to connect to redis-ha cluster in Kubernetes cluster?

So I recently installed the stable/redis-ha cluster (https://github.com/helm/charts/tree/master/stable/redis-ha) on my G-Cloud based Kubernetes cluster. The cluster was installed as a "Headless Service" without a ClusterIP. There are 3 pods that make up this cluster, one of which is elected master.
The cluster installed with no issues and can be accessed via redis-cli from my local PC (after port-forwarding with kubectl).
The output from the cluster install provided me with a DNS name for the cluster. Because the service is headless, I am using the following DNS name:
port_name.port_protocol.svc.namespace.svc.cluster.local (as specified by the documentation)
When attempting to connect, I get the following error:
"redis.exceptions.ConnectionError: Error -2 connecting to
port_name.port_protocol.svc.namespace.svc.cluster.local:6379. Name does not
resolve."
Not sure what to do here. Any help would be greatly appreciated.
The DNS name appears to be incorrect. It should be in the following format:
<redis-service-name>.<namespace>.svc.cluster.local:6379
Say the Redis service name is redis and the namespace is default; then it should be
redis.default.svc.cluster.local:6379
You can also use the pod DNS name, like below:
<redis-pod-name>.<redis-service-name>.<namespace>.svc.cluster.local:6379
Say the Redis pod name is redis-0, the service name is redis, and the namespace is default; then it should be
redis-0.redis.default.svc.cluster.local:6379
This assumes the service port is the same as the container port, i.e. 6379.
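The two naming schemes above can be captured in a small helper. This is just an illustration of the string format; the redis-0/redis/default values are taken from the example in the answer:

```python
def redis_dns(service, namespace, pod=None, port=6379):
    """Build the in-cluster DNS name for a (headless) Kubernetes service.

    With pod=None this yields the service-level name; passing a pod name
    yields the per-pod name that headless services also expose.
    """
    host = f"{service}.{namespace}.svc.cluster.local"
    if pod is not None:
        host = f"{pod}.{host}"
    return f"{host}:{port}"

print(redis_dns("redis", "default"))                 # redis.default.svc.cluster.local:6379
print(redis_dns("redis", "default", pod="redis-0"))  # redis-0.redis.default.svc.cluster.local:6379
```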
Not sure if this is still relevant. Just enhance the chart, similar to other charts that support NodePort (e.g. rabbitmq-ha), so that you can use any node IP and the configured node port if you want to access Redis from outside the cluster.

cluster_formation.classic_config.nodes does not work for RabbitMQ

I have 2 rabbitmq nodes.
Their node names are: rabbit@testhost1 and rabbit@testhost2.
I'd like them to cluster automatically.
On testhost1:
# cat /etc/rabbitmq/rabbitmq.conf
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit@testhost1
cluster_formation.classic_config.nodes.2 = rabbit@testhost2
On testhost2:
# cat /etc/rabbitmq/rabbitmq.conf
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit@testhost1
cluster_formation.classic_config.nodes.2 = rabbit@testhost2
I start rabbit@testhost1 first and then rabbit@testhost2.
The second node didn't join the first node's cluster.
However, node rabbit@testhost1 can join rabbit@testhost2 with the rabbitmqctl command: rabbitmqctl join_cluster rabbit@testhost2,
so the network between them should not be the problem.
Could you give me some idea of why they can't form a cluster? Is the configuration not correct?
I have enabled the debug log, and the information related to rabbit_peer_discovery_classic_config is very sparse:
2019-01-28 16:56:47.913 [info] <0.250.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping registration.
The RabbitMQ version is 3.7.8.
Did you start the nodes without cluster config before you attempted clustering?
I had started the individual peers with the default configuration once before I added the cluster formation settings to the config file. By starting a node without clustering config, it seems to form a cluster of its own, and on further starts it would only contact the last known cluster (itself).
From https://www.rabbitmq.com/cluster-formation.html
How Peer Discovery Works
When a node starts and detects it doesn't have a previously initialised database, it will check if there's a peer discovery mechanism configured. If that's the case, it will then perform the discovery and attempt to contact each discovered peer in order. Finally, it will attempt to join the cluster of the first reachable peer.
You should be able to reset the node with rabbitmqctl reset (warning: this removes all data from the management database, such as configured users and vhosts, and deletes all persistent messages along with the clustering information) and then use the clustering config.
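A sketch of that reset sequence, run on the node that had formed its own cluster (assumed here to be testhost2); the commands are standard rabbitmqctl subcommands:

```shell
# on testhost2: stop the RabbitMQ application but keep the Erlang VM up
rabbitmqctl stop_app
# wipe this node's database, including users, vhosts, and cluster membership
rabbitmqctl reset
# start again; with an empty database, peer discovery from rabbitmq.conf runs
rabbitmqctl start_app
```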

GoogleCloud Kubernetes node cannot connect to Redis Memorystore possibly due to overlap in IP ranges

I have a GoogleCloud Kubernetes cluster consisting of multiple nodes and a GoogleCloud Redis Memorystore. Distributed over these nodes are replicas of a pod containing a container that needs to connect to the Redis Memorystore. I have noticed that one of the nodes is not able to connect to Redis, i.e. any container in a pod on that node cannot connect to Redis.
The Redis Memorystore has the following properties:
IP address: 10.0.6.12
Instance IP address range: 10.0.6.8/29 (10.0.6.8 - 10.0.6.15)
The node from which no connection to Redis can be made has the following properties:
Internal IP: 10.132.0.5
PodCIDR: 10.0.6.0/24 (10.0.6.0 - 10.0.6.255)
I assume this problem is caused by the overlap in IP ranges of the Memorystore and this node. Is this assumption correct?
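The suspected overlap is easy to confirm with Python's ipaddress module (the two ranges are copied from above):

```python
import ipaddress

memorystore = ipaddress.ip_network("10.0.6.8/29")  # Redis Memorystore range
pod_cidr = ipaddress.ip_network("10.0.6.0/24")     # podCIDR of the failing node

# The /29 Memorystore range sits entirely inside the node's /24 podCIDR,
# so traffic from that node to 10.0.6.12 is treated as pod-local traffic
# instead of being routed to Memorystore.
print(memorystore.overlaps(pod_cidr))   # True
print(memorystore.subnet_of(pod_cidr))  # True
```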
If this is the problem I would like to change the IP range of the node.
I have tried to do this by editing spec.podCIDR in the node config:
$ kubectl edit node <node-name>
However this did not work and resulted in the error message:
# * spec.podCIDR: Forbidden: node updates may not change podCIDR except from "" to valid
# * []: Forbidden: node updates may only change labels, taints, or capacity (or configSource, if the DynamicKubeletConfig feature gate is enabled)
Is there another way to change the IP range of an existing Kubernetes node? If so, how?
Sometimes I need to temporarily increase the number of pods in a cluster. When I do this I want to prevent Kubernetes from creating a new node with the IP range 10.0.6.0/24.
Is it possible to tell the Kubernetes cluster to not create new nodes with the IP range 10.0.6.0/24? If so, how?
Thanks in advance!
Not for a node. The podCIDR gets defined when you install your network overlay, in the initial steps of setting up a new cluster.
Yes for the cluster, but it's not that easy. You have to change the podCIDR for the network overlay in your whole cluster. It's a tricky process that can be done, but if you are doing that you might as well deploy a new cluster. Keep in mind that some network overlays expect a very specific podCIDR; Calico, for example, defaults to 192.168.0.0/16.
You could:
Create a new cluster with a new CIDR and move your workloads gradually.
Change the IP address CIDR where your GoogleCloud Redis Memorystore lives.
Hope it helps!

Apache Ignite | TCP discovery SPI

What is the TCP discovery SPI in Apache Ignite, and what does it do?
What is 127.0.0.1:47500..47509 in example-cache.xml of Apache Ignite?
Apache Ignite nodes are all the same and join together automatically to form a cluster. The "discovery" mechanism is how they find each other: a node uses the IP address and port of at least one other Ignite node to join the rest of the cluster.
TCP discovery uses a TCP connection to the addresses and ports specified.
127.0.0.1:47500..47509 is a shortened notation that means: contact IP 127.0.0.1 (also known as your localhost) on ports 47500 through 47509.
This is used in the example code so you can easily run a few instances of Ignite for testing on your computer, and they will all connect to each other as a cluster, even though they are all running on the same machine.
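For context, the relevant fragment of the example configuration looks roughly like this; the static-IP finder shown is Ignite's standard TcpDiscoveryVmIpFinder, and this is a sketch of the shape of the XML, not the exact contents of example-cache.xml:

```xml
<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <!-- contact localhost on ports 47500 through 47509 -->
            <value>127.0.0.1:47500..47509</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>
```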
This is a component responsible for nodes discovery. Please refer to this page for details: https://apacheignite.readme.io/docs/tcpip-discovery

REDIS node doesn't update its IP in cluster-config-file

I have a Redis cluster consisting of 3 masters and 3 slaves. The platform on which the Redis nodes run is such that a node can change its IP address when it is restarted.
After a restart of a node, the 5 other nodes correctly update the IP address of the restarted node, but that particular node keeps the old IP address in its own configuration.
What needs to be done so that the node that has changed its IP updates its configuration?