Akka.net Cluster without seeds

I just came across Akka.net today, and it looks like a perfect fit for one of my projects. But I need a kind of zero-config cluster, where users just start up the app on multiple machines on their (local) network and the instances automatically form a cluster. I'm not sure if this is possible with Akka.net, as I wouldn't have seed nodes to put into the configuration file.
I guess if there's an option to set seed nodes programmatically, I could broadcast to find other nodes, but it wouldn't really be guaranteed that all nodes start with the same set of seed nodes. Is it possible to start node A with seed node B, node C with seed node A, and so on?

You can make a node join a cluster from code using the Cluster plugin, i.e. Cluster.Get(Context.System).Join(nodeAddress). If you want to initialize the current node as a cluster seed, just have it join itself (cluster.SelfAddress).
In order to join any other node to a cluster, you just need to know the address of at least one node that is already part of the cluster. So yes, you can join A ⇒ B and C ⇒ A in the scenario you described.
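A minimal sketch of both cases in C# (the system name, host, and port are placeholders, and the remoting/cluster HOCON configuration is assumed to be in place):

    using Akka.Actor;
    using Akka.Cluster;

    // "discovered" is a hypothetical result of the broadcast discovery the
    // question describes: null when we are the first node on the network.
    string discovered = null; // e.g. "akka.tcp://my-cluster@192.168.1.10:4053"

    var system = ActorSystem.Create("my-cluster");
    var cluster = Cluster.Get(system);

    if (discovered == null)
        cluster.Join(cluster.SelfAddress);        // become the initial seed
    else
        cluster.Join(Address.Parse(discovered));  // join the existing cluster

Since any existing member's address works as a join target, nodes discovered in any order can still converge on a single cluster.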

Related

Two Failure Support in 3 node Redis Cluster

We have a 3-node Redis cluster with Redis and Sentinel running on all three nodes.
One of the nodes is the master and the other two are replicas.
There are situations when one node goes down, and in those cases one of the replica nodes is promoted to master without any issue.
Now we have a use case where two nodes go down, and we want the last remaining node to be promoted to master. We don't want to set the quorum to 1, as this may lead to unnecessary failovers. Please suggest possible solutions.
Assuming you run both Sentinel and Redis processes on each of the 3 nodes, your deployment can handle the failure of a single node only.
This is because after two nodes go down, there is only one running Sentinel process left, which (as you said) can't form a quorum.
If you need to tolerate 2 concurrent node failures, you will need to increase the size of your cluster, and preferably also separate the Sentinel nodes from the Redis nodes.
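For illustration, a hedged sentinel.conf fragment (the master name, address, port, and timeouts are placeholders): with five Sentinel processes and a quorum of 3, two hosts can fail and the remaining three Sentinels can still agree on the failure and authorize a failover.

    # sentinel.conf on each of five Sentinel hosts
    sentinel monitor mymaster 192.168.1.10 6379 3
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000

Keep in mind that the quorum only controls failure detection; the failover itself still requires a majority of all known Sentinels, which is why a single surviving Sentinel can never promote anyone.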

Redis Cluster minimal configuration

I'm currently using a Redis master/slave configuration with HAProxy for Wordpress to get high availability. This configuration is nice and works perfectly (I'm able to take any server down for maintenance without downtime). The problem with this configuration is that only one Redis server gets all the traffic while the others just wait in case that server dies, so on a very high-load site this can be a problem, and adding more servers is not a solution because only one will ever be master.
With this in mind, I'm wondering if I can just use a Redis Cluster to allow reads/writes on all nodes, but I'm not really sure whether it will work with my setup.
My setup is limited to three nodes most of the time, and I've read in some places that the minimal Redis Cluster setup is three nodes, but six are recommended. That is rational, because that setup allows slave nodes to become masters if their master dies, so no data is lost. But what happens if the data doesn't matter? I mean, in my setup the data is just cached objects, so if one doesn't exist it is simply created again. So, when a node dies, which of these happens:
The data is lost (don't care), and the other nodes get the objects from clients again and serve them on later requests (as happens when I flush the data).
The nodes answer that the data doesn't exist and refuse to cache it, because the object would have to live on another node that is dead.
Does anyone know?
Thanks!!
When a master dies, the Redis cluster goes into a down state, and any command involving a key served by the failed instance will fail.
This may differ from some other distributed software, because Redis Cluster is not the kind of system where every master holds all the data. In fact, the key space is horizontally partitioned and each key is served by only one master.
This is mentioned in the specification:
The key space is split into 16384 slots...
a single hash slot will be served by a single node...
The base algorithm used to map keys to hash slots is the following:
HASH_SLOT = CRC16(key) mod 16384
When you set up a cluster, you assign each node a set of slots, and each slot can only be served by one node. If a node dies, you lose the slots on that node unless a slave fails over to serve them, and any command involving keys mapped to those slots will fail.
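To see this concretely (a hedged sketch; the port and key are arbitrary, and the exact error text varies across Redis versions), you can ask the cluster which slot a key maps to, and observe the failure mode described above:

    $ redis-cli -p 7000 CLUSTER KEYSLOT user:1000   # which of the 16384 slots?
    $ redis-cli -c -p 7000 GET user:1000            # slot's master down, no replica:
    (error) CLUSTERDOWN The cluster is down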

Minimum amount of Nodes in Redis Cluster [duplicate]

I know I'm asking something very obvious about cluster failover.
I read on redis.io that if any master cluster node fails, it will affect the other master nodes until a slave takes over. In my structure, I'm not defining any slaves and am just working with 3 masters.
I'm thinking of modifying the redis-trib.rb file so that it removes the failed server and starts the cluster with the other 2 nodes. I'm confused about a couple of things:
1) Resharding
Not possible until the failed server comes back online
2) Minimum 3-node limitation for creating a cluster
As far as I understand, redis-trib.rb does not allow me to create a cluster with two nodes
There might be some solution in the code file :)
3) Automatic way to re-create the structure with live nodes
From a programmer's point of view, I'm looking for something automatic for my system: something that triggers one command when the Redis Cluster fails, after which some tasks happen internally, like:
Shut down all other Redis cluster servers
Remove the nodes-[port].conf files from all cluster node folders
Start the Redis cluster servers
Run "redis-trib.rb create ip:port ip:port"
I'm just trying to minimize administration work :). Otherwise I'd need to implement some other "data consistency" algorithm here.
If any of you guys have any solution or idea, kindly share.
Thanks,
Sanjay Mohnani
In a cluster with only master nodes, if a node fails, its data is lost, and therefore no resharding is possible, since the data (hash slots) cannot be migrated out of the failed node.
To keep the cluster working when a master fails, you need slave nodes (one per master). This way, when a master fails, its slave fails over (becomes the new master with the same copy of the data).
The redis-trib.rb script does not handle cluster creation with fewer than 3 masters; however, in Redis Cluster itself a cluster can be of any size (at least one node).
Therefore, adding slave nodes can be considered an automatic solution to your problem.
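A hedged sketch of creating such a cluster with one replica per master (the addresses and ports are placeholders; --replicas 1 makes redis-trib pair each of the first three nodes with one of the last three as its slave):

    $ ./redis-trib.rb create --replicas 1 \
        192.168.1.1:7000 192.168.1.2:7000 192.168.1.3:7000 \
        192.168.1.4:7000 192.168.1.5:7000 192.168.1.6:7000

With this layout, the loss of any single master triggers an automatic failover instead of the manual re-create procedure sketched in the question.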

ZooKeeper - adding peers dynamically?

I'm new to ZooKeeper. This is what I need.
I have a network of peers.
At t=t_1 -> [peer-1 (Leader), peer-2]
peer-1 is the master, and all clients connect to this node.
At t=t_2 -> [peer-1 (Leader), peer-2, peer-3]
At some later time peer-3 joins the group. Is it possible to add peer-3 to the list of ZooKeeper servers "dynamically" (i.e., without restarting ZooKeeper on peer-1)?
At t=t_3 -> [peer-3 (Leader), peer-4]
After a while, both peer-1 and peer-2 leave the group (e.g., they die or are switched off). Assuming there is a way to dynamically add peer-3 and peer-4 to the group, peer-3 becomes the leader and all client requests are sent to peer-3.
Are there any other options I can use, apart from ZooKeeper, to do something like this?
Thanks.
At the moment, you can't dynamically change the configuration of a ZooKeeper cluster without restarting. There is an open issue to fix this, ZOOKEEPER-107. The paper describing the cluster membership algorithm is quite interesting and can be found here.
You can change the configuration of the cluster by restarting server nodes one at a time. For example, if your cluster has servers A, B, C and you want to replace server C with D, you can do something like the following (a zoo.cfg sketch follows the list):
Bring down C
Bring up D; its peer list is A,B,D
Take down B
Change B's peer list to A,B,D
Bring up B
Take down A
Change A's peer list to A,B,D
Bring up A
Change the client configuration of all clients to point to A,B,D
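A hedged zoo.cfg sketch of the final peer list (hostnames are placeholders; 2888 and 3888 are the conventional quorum and leader-election ports, and each server ID must match the node's myid file):

    # zoo.cfg on A, B, and D after the rolling restart
    server.1=hostA:2888:3888
    server.2=hostB:2888:3888
    server.4=hostD:2888:3888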
At t=t_1, you have a cluster with 2 ZooKeeper nodes. This is quite brittle: if either node goes down, you will not be able to establish a quorum (floor(N / 2) + 1), and the cluster will be unavailable. This is why ZooKeeper clusters generally have an odd number of nodes.
I'm not sure what you are trying to do when you say,
peer-3 becomes the leader and all client requests are sent to peer-3.
You can't specify which node in a ZooKeeper cluster is the leader; the nodes themselves elect their leader, and leadership changes as nodes go up and down. Also, clients don't typically connect to the leader; rather, clients are given a list of machines in the cluster and connect randomly to one, reconnecting if the server they are connected to goes down. You can set the leaderServes option to specify that the leader does NOT serve client connections.
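For example (a hedged sketch; zookeeper.leaderServes is the documented system property, but the classpath and invocation below are illustrative), each server could be started with:

    # Keep the leader free of client connections
    java -Dzookeeper.leaderServes=no \
        -cp zookeeper.jar:lib/*:conf \
        org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo.cfg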
I would not suggest using the above for any production situation.
The above solution only works if you are OK with losing ZK quorum for a while, until all the changes are complete.
Here's why:
"Bring down C. Bring up D; its peer list is A,B,D."
-> At this point A and B don't know about D,
-> and D knows about A and B.
So at this point you have only A and B functioning in the quorum; next you take down B, and you lose the quorum.
You will lose access to ZK data until the migration is complete and the quorum is restored. Most well-designed apps using ZK fail over to a read-only mode in this case and recover gracefully.
Until ZOOKEEPER-107 is released in ZooKeeper 3.5, you will need to choose your poison wisely.
It's better to:
just set up a new ZK ensemble (ZK cluster)
restore it from a snapshot (a hedged sketch follows this list)
migrate apps from the old ZK ensemble to the new ZK ensemble
shut down the old ZK ensemble after the migration is complete
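One common way to do the restore step (a hedged sketch, not from the original answer: the paths are placeholders, and it assumes the old ensemble is quiesced so the copy is consistent) is to copy the old data directory's version-2 contents into each new server's dataDir before starting it:

    # Seed each new ensemble member from the old ensemble's on-disk state
    rsync -a oldhost:/var/lib/zookeeper/version-2/ /var/lib/zookeeper/version-2/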