Promoting a master in replication

I am designing a replication algorithm to promote a master among many slaves. I want it to be faster and simpler than Paxos. The basic idea is:
Assign each node a 'Promotion Priority'; for example, for 5 nodes the priorities would be 50, 40, 30, 20 and 10, with 50 the highest and 10 the lowest.
When a master needs to be elected, all slaves will send (at the same time) the other 4 nodes a message requesting to become master, but only the node that is confirmed by all slaves with a confirmation message will be elected. A slave will send a confirmation message if its own 'Promotion Priority' is lower than the asking node's, or if an asking node with higher priority fails to issue a rejection message for its request before a timeout.
If a slave receives a rejection message from a slave with a higher 'Promotion Priority', it will abort the procedure.
There should be no nodes with the same priority.
There will be a minimum number of confirmation messages that a slave must collect in order to become master.
This algorithm should be faster because all the slaves will be electing a master in parallel, and the priority will help speed up the process.
What do you think of it? Does any other algorithm for master promotion with priority exist?
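To make the idea concrete, here is a rough sketch of the election rule described above; the node names, priorities, and quorum value are purely illustrative, and real message passing is replaced by direct comparisons:

```python
# Sketch of the proposed priority-based election (illustrative, not production
# code). Assumes fixed, known membership and unique priorities, as stated above.

def elect(priorities, alive, quorum):
    """priorities: dict node -> priority; alive: set of responsive nodes;
    quorum: minimum confirmations a candidate must collect."""
    winners = []
    for candidate in alive:
        confirmations = 0
        for peer in alive:
            if peer == candidate:
                continue
            # A peer confirms if its own priority is lower than the candidate's.
            if priorities[peer] < priorities[candidate]:
                confirmations += 1
        if confirmations >= quorum:
            winners.append(candidate)
    # With unique priorities, only the highest-priority live node can
    # collect confirmations from every other live node.
    return max(winners, key=lambda n: priorities[n]) if winners else None

priorities = {"A": 50, "B": 40, "C": 30, "D": 20, "E": 10}
print(elect(priorities, alive={"A", "B", "C", "D", "E"}, quorum=4))  # A
print(elect(priorities, alive={"B", "C", "D"}, quorum=2))            # B
```

Note that when the highest-priority node is down (second call), the quorum must be lowered for anyone to win, which is exactly where the membership assumptions discussed below start to matter.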

What do you think about it?
It is hard to completely assess the validity of your algorithm without knowing the details of your requirements. Overall, it looks like a valid approach, but there are a few issues that I think deserve some attention.
Your question has some similarities to A distributed algorithm to assign a shared resource to one node out of many. Consequently, some of the arguments raised in my answer to that question hold for this question as well.
When master needs to be elected, all slaves will send (at the same time) the other 4 nodes a message requesting to become a master, but only that master will be elected that will be confirmed by all slaves with a confirmation message.
This approach assumes that all slaves know how many slaves are present at any time -- otherwise the supposed master can never conclude that it has received a confirmation from all slaves. Implicitly, this means that no slave can leave or join the system without breaking the algorithm.
In practice though, these slaves will come and go, because of crashes, reboots, network outages etc. The chances of this increase with the number of slaves, but whether or not this is a problem depends on your requirements. How fault tolerant does your system have to be?
By the way, since you mention that there are many slaves, I assume that you are using multicast or broadcast to send the request messages. Otherwise, depending on what many means to you, your set-up could be error-prone with regard to administering where all the slaves reside.
A slave will send confirmation message if its own 'Promotion Priority' is lower than the asking node, or if the asking node with higher priority times out to issue rejection message for its own request.
Similar to the previous remark: a slave might draw the wrong conclusion if some slave has a problem responding for whatever reason. In fact, if one slave is down or has a network problem, all other slaves will draw the same (most likely erroneous) conclusion that the non-responsive slave is the master.
This algorithm should be faster because all the slaves will be electing a master in parallel
The issues raised in this answer are almost inherent to doing the master selection in a distributed fashion though, and hard to resolve without introducing some kind of centralized decision maker. You gain some, you lose some...
Does any other algorithm for master promotion with priority exists?
Another approach would be to have all slaves in the system constantly maintain administration about who the current master is. This could be done (at the cost of some network bandwidth) by having every slave multicast/broadcast its priority periodically, via some sort of heartbeat message. As a result, every slave will be aware of every other slave, and at the moment that a master needs to be selected, every slave can do that instantly. Network issues or other "system health" problems will be detected because heartbeats are missed. This algorithm is flexible with regard to slaves joining and leaving the system. The higher the heartbeat frequency, the more responsive your system will be to topology changes. However, you might still run into issues of slaves drawing independent conclusions because of a disconnected network. If that is a problem, then you might not be able to solve this in a completely parallel fashion.
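A minimal sketch of that heartbeat bookkeeping, assuming each heartbeat carries the sender's id and priority; the timeout value and class names are illustrative:

```python
import time

# Sketch of the heartbeat-based alternative: every node periodically broadcasts
# (node_id, priority); each node records when it last heard from every peer and
# can pick the master instantly. The timeout value here is illustrative and
# should be tuned to the heartbeat frequency.

HEARTBEAT_TIMEOUT = 3.0  # seconds

class MembershipView:
    def __init__(self):
        self.last_seen = {}    # node_id -> timestamp of last heartbeat
        self.priority = {}     # node_id -> advertised priority

    def on_heartbeat(self, node_id, priority, now=None):
        now = time.monotonic() if now is None else now
        self.last_seen[node_id] = now
        self.priority[node_id] = priority

    def current_master(self, now=None):
        now = time.monotonic() if now is None else now
        alive = [n for n, t in self.last_seen.items()
                 if now - t <= HEARTBEAT_TIMEOUT]
        # Highest advertised priority among nodes that are still heartbeating.
        return max(alive, key=lambda n: self.priority[n]) if alive else None

view = MembershipView()
view.on_heartbeat("A", 50, now=0.0)
view.on_heartbeat("B", 40, now=0.0)
print(view.current_master(now=1.0))   # A
print(view.current_master(now=10.0))  # None: every heartbeat has timed out
```

During a network partition, each side of the partition computes its own `current_master`, which is exactly the split-brain risk mentioned above.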

Related

Keep queues and exchanges when vhost is corrupted

We've recently come across a problem when using RabbitMQ: when the hard drive of our server is full, RabbitMQ's vhosts get corrupted and become unusable.
The only way to make RabbitMQ functional again is to delete and recreate the corrupted vhosts.
Doing so, all of our queues and exchanges, along with the data in them, are then gone.
While this situation should not happen in prod, we're searching for a way to prevent data loss, if such an event does occur.
We've been looking at the official RabbitMQ documentation, as well as on Stack Exchange, but haven't found any solution to prevent data loss when a vhost is corrupted.
We plan on setting up a cluster at a later stage of development, which should at least help in reducing the loss of data when a vhost is corrupted, but it's not possible for now.
Is there any reliable way to either prevent vhost corruption, or to fix the vhost without losing data?
Some thoughts on this (in no particular order):
RabbitMQ has multiple high-availability configurations - relying upon a single node provides no protection against data loss.
In general, you can have one of two possible guarantees with a message, but never both:
At least once delivery - a message will be delivered at least one time, and possibly more.
At most once delivery - a message may or may not be delivered, but if it is delivered, it will never be delivered a second time.
Monitoring the overall health of your nodes (i.e. disk space, processor use, memory, etc.) should be done proactively by a tool specific to that purpose. You should never be surprised by running out of a critical system resource.
If you are running one node, and that node is out of disk space, and you have a bunch of messages on it, and you're worried about data loss, wondering how RabbitMQ can help you, I would say you have your priorities mixed up.
RabbitMQ is not a database. It is not designed to reliably store messages for an indefinite time period. Please don't count on it as such.

Redis primary/secondary without replication

I am new to Redis. I read their documentation on Sentinel and Replication in which they talk about how the replicas try to remain in sync with the master as much as possible, but it is still possible that if the master fails after a successful write, the replica might not receive that write. If Sentinel then marks this replica as the new master, it is possible that the replica serves stale data.
If I cannot afford to lose consistency and prefer it over availability, how can I turn off replication so that when Sentinel marks a new replica as master, all the first requests would be cache misses and my cache can slowly warm up instead of returning potentially stale data?
Also, is that a good idea? Are there other good alternatives?
I cannot afford to lose consistency and prefer it over availability
It's not clear that Redis automated failover is appropriate for your application. It looks like each client would need to carefully keep track of server availability.
Suppose you have a few clients, a master M1, and three replicas R2, R3, and R4. Client C5 writes a new bank account balance to M1, which immediately and permanently fails, and R2 is promoted to become master M2. The master did not obtain an acknowledgement from a replica before replying to the client. No Paxos-like consensus protocol happens between the servers prior to the reply being sent to C5.
C5 could remember counters/timestamps embedded in each write request, forget the write payload, and detect stale reads. But client C6 could not, unless you supply such data quickly and reliably outside the protocol. Nathan Fritz observes that your app could issue a write and then a PUBLISH event, and monitor multiple replicas with a SUBSCRIBE for that event, delaying its report of success to the end user. Consider incorporating Derecho into your app if the solid promises of virtual synchrony are necessary. Production releases of Redis are targeted at a different part of the problem space than your primary interest.
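The counter-based stale-read detection described above could look roughly like this; a plain dict stands in for the Redis server, and all names are illustrative:

```python
# Sketch of counter-based stale-read detection: the client embeds a
# monotonically increasing version in every write and remembers only the
# version, not the payload. A dict stands in for the Redis connection here.

class VersionedClient:
    def __init__(self, server):
        self.server = server           # stand-in for a Redis connection
        self.expected_version = {}     # key -> highest version this client wrote

    def write(self, key, value):
        version = self.expected_version.get(key, 0) + 1
        self.server[key] = (version, value)
        self.expected_version[key] = version

    def read(self, key):
        version, value = self.server[key]
        if version < self.expected_version.get(key, 0):
            raise RuntimeError("stale read: replica is behind")
        return value

master = {}
client = VersionedClient(master)
client.write("balance", 100)
client.write("balance", 90)

# Failover to a replica that missed the last write:
stale_replica = {"balance": (1, 100)}
client.server = stale_replica
try:
    client.read("balance")
except RuntimeError as e:
    print(e)  # stale read: replica is behind
```

As the answer notes, this only protects the client that issued the writes; any other client would need the version information delivered to it out of band.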

Zookeeper vs In-memory-data-grid vs Redis

I've found different ZooKeeper definitions across multiple resources. Maybe some of them are taken out of context, but please take a look at them:
A canonical example of Zookeeper usage is distributed-memory computation...
ZooKeeper is an open source Apache™ project that provides a centralized infrastructure and services that enable synchronization across a cluster.
Apache ZooKeeper is an open source file application program interface (API) that allows distributed processes in large systems to synchronize with each other so that all clients making requests receive consistent data.
I've worked with Redis and Hazelcast, so it would be easier for me to understand ZooKeeper by comparing it with them.
Could you please compare Zookeeper with in-memory-data-grids and Redis?
If distributed-memory computation, how does zookeeper differ from in-memory-data-grids?
If synchronization across the cluster, then how does it differ from all other in-memory storages? The same in-memory data grids also provide cluster-wide locks. Redis also has some kind of transactions.
If it's only about in-memory consistent data, then there are other alternatives. IMDGs allow you to achieve the same, don't they?
https://zookeeper.apache.org/doc/current/zookeeperOver.html
By default, Zookeeper replicates all your data to every node and lets clients watch the data for changes. Changes are sent very quickly (within a bounded amount of time) to clients. You can also create "ephemeral nodes", which are deleted within a specified time if a client disconnects. ZooKeeper is highly optimized for reads, while writes are very slow (since they generally are sent to every client as soon as the write takes place). Finally, the maximum size of a "file" (znode) in Zookeeper is 1MB, but typically they'll be single strings.
Taken together, this means that ZooKeeper is not meant to store much data, and it is definitely not a cache. Instead, it's for managing heartbeats/knowing what servers are online, storing/updating configuration, and possibly message passing (though if you have large numbers of messages or high throughput demands, something like RabbitMQ will be much better for this task).
Basically, ZooKeeper (and Curator, which is built on it) helps in handling the mechanics of clustering -- heartbeats, distributing updates/configuration, distributed locks, etc.
It's not really comparable to Redis, but for the specific questions...
It doesn't support any computation and for most data sets, won't be able to store the data with any performance.
It's replicated to all nodes in the cluster (there's nothing like Redis clustering where the data can be distributed). All messages are processed atomically in full and are sequenced, so there are no real transactions. It can be USED to implement cluster-wide locks for your services (it's very good at that, in fact), and there are a lot of locking primitives on the znodes themselves to control which nodes access them.
Sure, but ZooKeeper fills a niche. It's a tool for making distributed applications play nice with multiple instances, not for storing/sharing large amounts of data. Compared to using an IMDG for this purpose, ZooKeeper will be faster, manages heartbeats and synchronization in a predictable way (with a lot of APIs for making this part easy), and has a "push" paradigm instead of "pull", so nodes are notified very quickly of changes.
The quotation from the linked question...
A canonical example of Zookeeper usage is distributed-memory computation
... is, IMO, a bit misleading. You would use it to orchestrate the computation, not provide the data. For example, let's say you had to process rows 1-100 of a table. You might put 10 ZK nodes up, with names like "1-10", "11-20", "21-30", etc. Client applications would be notified of this change automatically by ZK, and the first one would grab "1-10" and set an ephemeral node clients/192.168.77.66/processing/rows_1_10.
The next application would see this and go for the next group to process. The actual data to compute would be stored elsewhere (e.g. Redis, a SQL database, etc.). If a node failed partway through the computation, another node could see this (after 30-60 seconds) and pick up the job again.
I'd say the canonical example of ZooKeeper is leader election, though. Let's say you have 3 nodes -- one is master and the other 2 are slaves. If the master goes down, a slave node must become the new leader. This type of thing is perfect for ZK.
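For reference, the usual ZooKeeper recipe for leader election is based on sequential ephemeral znodes: every candidate creates one, the holder of the lowest sequence number is the leader, and each node watches its predecessor. A toy simulation of just that logic, with no live ZooKeeper involved (a real implementation would use a client library such as Curator or kazoo):

```python
# Toy simulation of ZooKeeper's leader-election recipe: each candidate creates
# a sequential ephemeral znode, and whoever holds the lowest sequence number
# leads. This only illustrates the logic; it talks to no real ZooKeeper.

class FakeElection:
    def __init__(self):
        self.seq = 0
        self.znodes = {}   # znode name -> candidate id

    def join(self, candidate):
        name = "n_%010d" % self.seq   # sequential znode, e.g. n_0000000000
        self.seq += 1
        self.znodes[name] = candidate
        return name

    def leader(self):
        # The candidate holding the lowest sequence number is the leader.
        return self.znodes[min(self.znodes)] if self.znodes else None

    def session_expired(self, name):
        # An ephemeral znode vanishes when its owner's session dies.
        del self.znodes[name]

e = FakeElection()
a = e.join("master-candidate-1")
b = e.join("master-candidate-2")
print(e.leader())        # master-candidate-1
e.session_expired(a)     # leader crashes; its ephemeral znode disappears
print(e.leader())        # master-candidate-2
```

Watching only the immediate predecessor (rather than the leader) avoids a "herd effect" where every candidate wakes up on each leader change.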
Consistency Guarantees
ZooKeeper is a high performance, scalable service. Both read and write operations are designed to be fast, though reads are faster than writes. The reason for this is that in the case of reads, ZooKeeper can serve older data, which in turn is due to ZooKeeper's consistency guarantees:
Sequential Consistency
Updates from a client will be applied in the order that they were sent.
Atomicity
Updates either succeed or fail -- there are no partial results.
Single System Image
A client will see the same view of the service regardless of the server that it connects to.
Reliability
Once an update has been applied, it will persist from that time forward until a client overwrites the update. This guarantee has two corollaries:
If a client gets a successful return code, the update will have been applied. On some failures (communication errors, timeouts, etc.) the client will not know whether the update has been applied. We take steps to minimize such failures, but the guarantee is only present with successful return codes. (This is called the monotonicity condition in Paxos.)
Any updates that are seen by the client, through a read request or successful update, will never be rolled back when recovering from server failures.
Timeliness
The client's view of the system is guaranteed to be up-to-date within a certain time bound (on the order of tens of seconds). Either system changes will be seen by a client within this bound, or the client will detect a service outage.

How to ensure flow control in RabbitMQ is never triggered?

I have a publisher pushing to a queue at a slightly higher rate than the consumers can consume. For small numbers it is okay, but for a very large number of messages RabbitMQ starts writing them to disk. At a certain point the disk becomes full and flow control is triggered. From then on, the rates are really slow. Is there any way to decrease or share this load between cluster nodes? How should I design my application so that flow control is never triggered?
I am using RabbitMQ 3.2.3 on three nodes with 13G of RAM and 10G of system disk space each, connected to each other in a cluster. Two of these are RAM nodes, and the remaining one is a disk node, also used for the RabbitMQ management plugin.
You can tweak the configuration, upgrade hardware, etc., and in the end you'd probably want to put a load balancer in front of your RabbitMQ servers to balance the load between multiple RabbitMQ nodes. The problem is that if you are publishing at a higher rate than you are consuming, eventually you will run into this problem again and again.
I think the best way to prevent this from happening is to implement logic on the publisher side that keeps track of the number of requests waiting to be processed in the queue. If the number of requests exceeds X, the publisher should either wait until the number of messages has gone down, or publish new messages at a slower rate. This type of solution of course depends on where the published messages are coming from; if they are user submitted (e.g. through a browser or client), you could show a loading bar while the queue builds up.
Ideally though you should focus on making the processing on the consumer side faster, and maybe scale that part up, but having something to throttle the publisher when it gets busy should help prevent buildups.
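A minimal sketch of that publisher-side throttle, with an in-memory deque standing in for the broker queue; with a real broker you would query the queue depth from the server instead, and both the high-water mark and the sleep interval are illustrative:

```python
import time
from collections import deque

# Sketch of publisher-side backpressure: the publisher backs off whenever the
# queue depth exceeds a high-water mark. A deque stands in for the broker
# queue; with a real broker you would fetch the depth from the server.

HIGH_WATER_MARK = 1000   # illustrative threshold ("X" in the answer above)

def publish_with_backpressure(queue, message, sleep=time.sleep):
    while len(queue) >= HIGH_WATER_MARK:
        sleep(0.5)   # wait for consumers to drain the queue
    queue.append(message)

queue = deque(range(HIGH_WATER_MARK))   # the queue is at the limit
waits = []
publish_with_backpressure(
    queue, "msg",
    sleep=lambda s: (waits.append(s), queue.popleft()))  # "drain" while waiting
print(len(waits) > 0)  # True -- the publisher had to back off before publishing
```

The `sleep` hook is injected only so the example can simulate consumers draining the queue; in production code the publisher would simply sleep or slow its publish rate.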

How to approach wcf services synchronization?

I have implemented a WCF service and now my client wants three copies of it, working independently on different machines: a master-slave approach. I need to find a solution that enables the following behavior:
The first service that is instantiated asks the other two whether they are alive. If not, it becomes the master and is the one that is active on the net. The other two, once instantiated, see that there is already a master alive, so they become slaves and start sleeping. There needs to be some mechanism to periodically check whether the master is dead and, if so, choose the next copy that is alive to become the master (until it dies in turn).
This, I think, should be a kind of architectural pattern, so I would be more than glad to be given any advice.
thanks
I would suggest looking at the WCF peer channel (System.Net.PeerToPeer) to facilitate each node knowing about the other nodes. Here is a link that offers a decent introduction.
As for determining which node should be the master, the trick will be negotiating which node should be the master if two or more nodes come online at about the same time. Once the nodes become aware of each other, there needs to be some deterministic mechanism for establishing the master. For example, you could use the earliest creation time, the lowest value of the last octet of each node's IP address, or anything really. You'll just need to define some scheme that allows the nodes to negotiate this automatically.
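One way to sketch such a deterministic scheme, combining the suggestions above (earliest creation time, with the lowest last octet of the IP address as a tie-breaker); all values are illustrative, and the sketch is language-agnostic even though it is shown in Python:

```python
# Sketch of a deterministic tie-break between nodes that come online at about
# the same time: earliest start time wins, lowest last octet breaks ties.
# The node records and field names are illustrative.

def election_key(node):
    start_time, ip = node["start_time"], node["ip"]
    last_octet = int(ip.rsplit(".", 1)[1])
    return (start_time, last_octet)   # lower tuple wins

def choose_master(nodes):
    # Every node runs the same pure function over the same peer list,
    # so all nodes reach the same conclusion without extra negotiation.
    return min(nodes, key=election_key)

nodes = [
    {"ip": "192.168.1.12", "start_time": 1000.0},
    {"ip": "192.168.1.7",  "start_time": 1000.0},  # same start; lower octet
    {"ip": "192.168.1.3",  "start_time": 1005.0},
]
print(choose_master(nodes)["ip"])  # 192.168.1.7
```

The key property is that the function is pure and total-ordered over distinct nodes, so as long as every node sees the same peer list, no further negotiation is needed.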
Finally, as for checking if the master is still alive, I would suggest using the event-based mechanism described here. The master could send out periodic health-and-status events that the other nodes would register for. I'd put a try/catch/finally block at the code entry point so that if the master were to crash, it could publish one final MasterClosing event to let the slaves know it's going away. What this does not account for is a system crash, e.g., power failure, etc. To handle this, provide a timeout in the slaves so that when the timeout expires, the slaves can query the master to see if it's still there. If not, the slaves can negotiate between themselves using your deterministic algorithm about who should be the next master.