I have an issue with the statement below. I haven't tried using it because I don't understand what it is used for.
What should I pass as values for server.1 and zoo1?
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
What does the above statement mean?
What values should I pass, and what is it used for?
Can you explain with an example?
I have a cluster of 4 computers with high availability enabled
Machines 1 and 2 - Zookeeper, ZKFC, Namenode, ResourceManager, Journal node
Machine 3 - Zookeeper, Journal node
Machine 4 - Datanode
Kindly help
These entries define the quorum of Zookeeper servers.
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
They follow the pattern:
server.X=server_name:port1:port2
server.X, where X is the server's id number. Create a file named myid under the ZooKeeper data directory on each ZooKeeper server; this file should contain only the number X, written as plain text.
server_name is the hostname of the node where the ZooKeeper service is started.
port1 is the port followers use to connect to the leader.
port2 is the port used for leader election.
When a new leader arises, a follower opens a TCP connection to the leader using this port. Because the default leader election also uses TCP, we currently require another port for leader election. This is the second port in the server entry.
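For example, in your cluster only Machines 1-3 run ZooKeeper, so a minimal sketch of the relevant part of zoo.cfg (identical on all three nodes) could look like the following, where machine1, machine2 and machine3 are placeholders for your actual hostnames or IPs and /var/lib/zookeeper is an assumed data directory:

dataDir=/var/lib/zookeeper
# quorum members: Machine 1, Machine 2 and Machine 3
server.1=machine1:2888:3888
server.2=machine2:2888:3888
server.3=machine3:2888:3888

Then, on each node, create the myid file under dataDir containing only that node's number, e.g. on Machine 2:

echo 2 > /var/lib/zookeeper/myid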
What is the TCP discovery SPI in Apache Ignite, and what does it do?
What does 127.0.0.1:47500..47509 mean in example-cache.xml of Apache Ignite?
Apache Ignite nodes are all the same and join together automatically to form a cluster. The "Discovery" mechanism is how they find each other: a node uses the IP address and port of at least one other Ignite node to join the rest of the cluster.
TCP Discovery uses a TCP connection to the addresses and ports specified.
127.0.0.1:47500..47509 is a shortened notation that says contact IP 127.0.0.1 (also known as your localhost) and use ports from 47500 to 47509.
This is used in the example code so you can easily run a few instances of Ignite for testing on your computer and they all will connect to each other as a cluster, even though they are all running on the same machine.
This is the component responsible for node discovery. Please refer to this page for details: https://apacheignite.readme.io/docs/tcpip-discovery
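As a rough sketch (not taken from either answer), this is roughly how that same address range is typically wired up through Ignite's Java API, mirroring what example-cache.xml does in XML:

import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class StartNode {
    public static void main(String[] args) {
        // Static IP finder: try localhost ports 47500-47509 to find other nodes.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        // Each process started with this config joins (or forms) the same local cluster.
        Ignite ignite = Ignition.start(cfg);
    }
}

Run this main class a few times and the instances discover each other and form one cluster, just as with the XML example.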
We had multiple clients configured to talk to a cluster of Aerospike nodes. We have since removed the configuration from all the clients we are aware of, yet some read/write requests are still reaching this cluster, as shown in AMC.
I looked at the log file generated in /var/log/aerospike/aerospike.log, but could not find any useful information there.
Update
The netstat command mentioned in the answer by @kporter shows the connections and their states (ESTABLISHED, TIME_WAIT, CLOSE_WAIT, etc.), but that does not mean those connections are currently being used for get/set operations. How do I get the IPs from which Aerospike operations are currently being performed?
Update 2 (Solved)
As mentioned in the comments on @kporter's answer, a tcpdump command on the culprit client showed packets still being sent to the Aerospike cluster that was no longer referenced in the config file. This was happening even while the AMC of that cluster showed no read/write TPS.
I later found that this stopped after restarting the nginx service on the client. Note that the client's config file now references a new Aerospike cluster, and packets sent to that cluster did not stop after the nginx restart. This is weird, but it worked.
Clients connect to Aerospike over port 3000:
The following command, when run on the server nodes, will show the addresses of hosts connecting to the server over port 3000.
netstat --tcp --numeric-ports | grep 3000
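netstat only shows connection state; to see which of those clients are actively sending traffic (the situation described in the update above), a packet capture helps. A rough example, assuming tcpdump is available on the node and run as root:

tcpdump -n -i any tcp port 3000

Any source IP that keeps appearing in the output is still issuing requests to the Aerospike port.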
I want to set up HAProxy for a RabbitMQ cluster. I have the following questions:
(1) Suppose I have a scenario where my RabbitMQ server, client, and haproxy are on different machines.
RabbitMQ node1 -> Machine1
RabbitMQ node2 -> Machine2
HAPROXY -> Machine3
RabbitMQ client -> Machine4
node1 and node2 have been clustered. Is this a correct configuration? The rationale behind this question: can HAProxy be set up on a machine that does not host any RabbitMQ node, or does it have to be set up on a machine that hosts at least one RabbitMQ server node?
(2) If the above setup is valid, then my RabbitMQ client needs to know only the HAProxy machine; in that case, how should I connect my client to HAProxy? The client code that works when the client connects directly to a machine hosting a RabbitMQ server node will not work here.
I investigated and found the answers to my questions:
1. This setup is valid in the sense that it is a possible scenario.
2. The client connects to the HAProxy server.
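To make that concrete, here is a minimal sketch of both pieces; the hostnames and the default AMQP port 5672 are assumptions, not details from the question. On Machine3, HAProxy simply forwards TCP to the two cluster nodes:

listen rabbitmq
    bind *:5672
    mode tcp
    balance roundrobin
    server node1 machine1:5672 check inter 5s
    server node2 machine2:5672 check inter 5s

And the client (here using the RabbitMQ Java client) points only at Machine3, exactly as if it were a broker:

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConnectViaHaproxy {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("machine3"); // the HAProxy host, not a RabbitMQ node
        factory.setPort(5672);       // the port HAProxy listens on
        try (Connection connection = factory.newConnection()) {
            // use the connection as usual
        }
    }
}

So the usual client code still works; only the host (and possibly port) it connects to changes.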
I have a Redis cluster of three instances, and the cluster is powered by Redis Sentinel; they are running as [master, slave, slave].
An HAProxy instance is also running to route traffic to the master node, and the two slaves are read-only and used by other applications.
It was easy to configure HAProxy to select the master node when the same auth key was used for all instances, but now every instance has its own auth key, different from the others.
listen redis-16
    bind ip_address:6379 name redis
    mode tcp
    default_backend bk_redis_16

backend bk_redis_16
    # mode tcp
    option tcp-check
    tcp-check connect
    tcp-check send AUTH\ auth_key\r\n
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server R1 ip_address:6379 check inter 1s
    server R2 ip_address:6380 check inter 1s
    server R3 ip_address:6381 check inter 1s
The above config works only when we have one password across {R1, R2, R3}. How do I configure HAProxy for different passwords?
That is, how do I make HAProxy use each server's own auth key, like the following:
R1 : abc
R2 : klm
R3 : xyz
You have two primary options:
1. Set up an HA Proxy config for each set of servers which have different passwords.
2. Set up HA Proxy to not use auth but rather pass all connections through transparently.
You have other problems with the setup you list. Your read-only slaves will not have a role of "master". Thus even if you could assign each a different password, your check would refuse the connection. Also, in the case of a partition your check will allow split-brain conditions.
When using HA Proxy in front of a Sentinel managed Redis pod[1], if you want HA Proxy to figure out where to route connections, you must have HA Proxy check all Sentinels to ensure that the instance it routes to is the one the majority of Sentinels have agreed is the master. Otherwise you can suffer from split-brain, where two or more instances report themselves as the master. There is actually a moment after a failover when you can see this happen.
If your master goes down and a slave is promoted, when the master comes back up it will report itself as master until Sentinel detects the master and reconfigures it to be a slave. During this time your HA Proxy check will send writes to the original master. These writes will be lost when Sentinel reconfigures it to be a slave.
For the case of option 1:
You can either run a separate configured instance of HA Proxy or you can set up front ends and multiple back ends (paired up). Personally I'd go with multiple instances of HA Proxy as it allows you to manage them without interference with each other.
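If you go the multiple front end/back end route, a minimal sketch looks like the following, reusing the auth keys from the question; each server gets its own listen section and therefore its own client-facing port (the 700x ports are made up). Note the role:master check is dropped, since as noted above the read-only slaves would never pass it:

listen redis_r1
    bind ip_address:7001
    mode tcp
    option tcp-check
    tcp-check connect
    tcp-check send AUTH\ abc\r\n
    tcp-check expect string +OK
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    server R1 ip_address:6379 check inter 1s

# plus a matching listen section for R2 (AUTH\ klm) on :7002 and for R3 (AUTH\ xyz) on :7003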
For the case of option 2:
You'll need to glue Sentinel's notification mechanism to HA Proxy being reconfigured. This can easily be done using a script triggered on Sentinel to reach out and reconfigure HA Proxy on the switch-master event. The details on doing this are at http://redis.io/topics/sentinel and more directly at the bottom of the example file found at http://download.redis.io/redis-stable/sentinel.conf
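For reference, the hook for that lives in sentinel.conf; the master name and script path below are placeholders for your own:

sentinel client-reconfig-script mymaster /path/to/reconfigure-haproxy.sh

Sentinel invokes the script when the master changes because of a failover, and the script is then responsible for rewriting the HA Proxy backend (and reloading HA Proxy) to point at the new master.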
In a Redis Pod + Sentinel setup with direct connectivity the clients are able to gather the information needed to determine where to connect to. When you place a non-transparent proxy in between them your proxy needs to be able to make those decisions - or have them made for it when topology changes occur - on behalf of the client.
Note: what you describe is not a Redis cluster, it is a replication setup. A Redis cluster is entirely different. I use the term "pod" to apply to a replication based setup.
We want to use a single Redis server for servers that span two subnets.
If we put Redis on just subnet A, the servers on B will have to go through a router to get to Redis.
Our thought is to make the Redis server multi-homed (multiple NICs), attached to both subnets A and B.
1) Will this work?
2) Will Redis then attach to both IP's?
Thanks!
You can provide the bind address in the redis configuration file (bind parameter).
If you comment out that directive and do not provide a bind address, Redis will listen on its port on all interfaces (i.e. it will bind to 0.0.0.0).
I did not try, but I would say a configuration with 2 addresses should work.
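A sketch of what that would look like in redis.conf, with two made-up addresses, one on each subnet:

# listen on both NICs explicitly instead of on all interfaces
bind 10.0.1.10 10.0.2.10

Clients on subnet A would then connect to 10.0.1.10:6379 and clients on subnet B to 10.0.2.10:6379.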