When a new node is added to an Aerospike cluster, a rebalance is triggered for that node. For large data sets this takes time, and some requests to the new node fail until the rebalance is complete. The only solution I could figure out is to retry the request until it gets the data.
Is there a better way?
I don't think it is possible to keep the node out of the cluster for requests until it has finished replicating, because it also becomes the master for some of the partitions.
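For reference, a minimal sketch of this kind of retry workaround using the Aerospike Java client's built-in policy retries might look like the following (the host, namespace, set, and key are placeholders):

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.policy.Policy;

public class RetryReadExample {
    public static void main(String[] args) {
        // Placeholder host/port for illustration only.
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

        Policy readPolicy = new Policy();
        readPolicy.maxRetries = 5;             // retry the read a few times
        readPolicy.sleepBetweenRetries = 100;  // milliseconds between attempts

        Key key = new Key("test", "demo", "user-42");
        Record record = client.get(readPolicy, key);  // null if the key does not exist
        System.out.println(record);

        client.close();
    }
}
```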
If you are performing batch reads, there is an improvement in 3.6.0. While the cluster is in flux, if the client directs the read transaction to Node_A but the partition containing the record has been moved to Node_B, Node_A proxies the request to Node_B.
Is that what you are doing?
You should not be in a position where the client cannot connect to the cluster, or it cannot complete a transaction.
I know that SO frowns on this, but can you provide more detail about the failures? What kinds of transactions are you performing? What versions are you using?
I hope this helps,
-DM
Requests shouldn't be failing; the new node will proxy the request to the node that currently has the data.
Prior to Aerospike 3.6.0 batch read requests were the exception. I suspect this is your problem.
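If batch reads are what you are doing, a minimal batch-read sketch with the Aerospike Java client looks roughly like this (namespace, set, and keys are placeholders); on 3.6.0+ these should be proxied like single-record reads:

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.policy.BatchPolicy;

public class BatchReadExample {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

        Key[] keys = new Key[] {
            new Key("test", "demo", "user-1"),
            new Key("test", "demo", "user-2"),
            new Key("test", "demo", "user-3")
        };

        // On 3.6.0+ the server proxies batch sub-requests whose partitions
        // have moved during a rebalance, as described above.
        Record[] records = client.get(new BatchPolicy(), keys);
        for (Record r : records) {
            System.out.println(r);  // null entries for keys that do not exist
        }

        client.close();
    }
}
```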
I am very new to Redis cache implementation.
Could you please let me know what the replication factor means?
How does it work, and what is its impact?
Thanks.
At the base of Redis replication (excluding the high availability features provided as an additional layer by Redis Cluster or Redis Sentinel) there is a very simple to use and configure leader follower (master-slave) replication: it allows replica Redis instances to be exact copies of master instances. The replica will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it regardless of what happens to the master.
This system works using three main mechanisms:
When a master and a replica instance are well connected, the master keeps the replica updated by sending it a stream of commands, in order to replicate the effects on the dataset happening on the master side due to client writes, key expirations or evictions, and any other action that changes the master's dataset.
When the link between the master and the replica breaks, for network issues or because a timeout is sensed in the master or the replica, the replica reconnects and attempts to proceed with a partial resynchronization: it means that it will try to just obtain the part of the stream of commands it missed during the disconnection.
When a partial resynchronization is not possible, the replica will ask for a full resynchronization. This will involve a more complex process in which the master needs to create a snapshot of all its data, send it to the replica, and then continue sending the stream of commands as the dataset changes.
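As a concrete illustration, this is roughly how one instance can be turned into a replica of another from a client such as Jedis (the addresses are placeholders); the same thing is usually done with the replicaof directive in redis.conf:

```java
import redis.clients.jedis.Jedis;

public class MakeReplicaExample {
    public static void main(String[] args) {
        // Placeholder addresses: the instance on 6380 becomes a replica of 6379.
        try (Jedis replica = new Jedis("127.0.0.1", 6380)) {
            // Equivalent to the REPLICAOF/SLAVEOF command: the replica performs
            // a full sync and then follows the master's command stream.
            replica.slaveof("127.0.0.1", 6379);
        }
    }
}
```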
By default Redis uses asynchronous replication, which, being low latency and high performance, is the natural replication mode for the vast majority of Redis use cases.
Synchronous replication of certain data can be requested by clients using the WAIT command. However, WAIT only ensures that the specified number of acknowledged copies exist on other Redis instances; it does not turn a set of Redis instances into a CP system with strong consistency: acknowledged writes can still be lost during a failover, depending on the exact configuration of Redis persistence. With WAIT, though, the probability of losing a write after a failure event is greatly reduced, down to certain hard-to-trigger failure modes.
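A small sketch of requesting that from Jedis (the key and the replica count/timeout are made up for illustration):

```java
import redis.clients.jedis.Jedis;

public class WaitExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.set("order:1001", "pending");

            // WAIT 1 100: block until at least 1 replica has acknowledged the
            // preceding writes, or until the 100 ms timeout expires. Returns
            // the number of replicas that acknowledged.
            long acked = jedis.waitReplicas(1, 100);
            System.out.println("Replicas that acknowledged: " + acked);
        }
    }
}
```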
I want to use Redis for a particular use case. I am not sure to go with a Redis Cluster or with Twemproxy + Sentinel.
I know the Cluster is a winner any day. I am just skeptical due to the MOVED responses. On a MOVED response the client has to connect to another node, and during resharding it may have to connect to yet another one. But Twemproxy knows where the data resides, so it will never get a MOVED response.
Twemproxy has problems of its own: the added hop may increase overall turnaround time; adding new nodes is awkward; and if it ejects a node, it can no longer serve requests for the keys on that node. There is also an extra maintenance burden: Sentinels for my Redis instances, plus a mechanism for HA of Twemproxy itself.
Can anyone suggest whether I should go with Twemproxy or Cluster? I am leaning towards Twemproxy, since I would not be going back and forth on MOVED responses, but I am skeptical given the concerns above.
P.S. I am planning to use the Jedis client for Redis (if that helps).
First of all, I'm not familiar with Twemproxy, so I'll only talk about your concerns on Redis Cluster.
A Redis client can get the complete slot-to-node mapping, i.e. the location of keys, from Redis Cluster. It can cache the mapping on the client side and send each request to the right node, so most of the time it won't be redirected, i.e. it won't get a MOVED message.
However, if you add/remove nodes or reshard the data set, the client will receive MOVED messages, since it is still using the old mapping. In that case the client can update its local cache, and any subsequent requests will be sent to the right node, i.e. no more MOVED messages.
A decent client library applies this optimization. So if your client library has it, you don't need to worry about the MOVED penalty.
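Since you mention Jedis: JedisCluster implements exactly this slot-map caching. A minimal sketch (node addresses and keys are placeholders):

```java
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterClientExample {
    public static void main(String[] args) {
        // Seed nodes (placeholders); the client discovers the rest of the
        // cluster and builds the slot-to-node map from them.
        Set<HostAndPort> seedNodes = new HashSet<>();
        seedNodes.add(new HostAndPort("127.0.0.1", 7000));
        seedNodes.add(new HostAndPort("127.0.0.1", 7001));

        try (JedisCluster cluster = new JedisCluster(seedNodes)) {
            // JedisCluster caches the slot map locally, routes each command to
            // the right node, and refreshes the map when it sees MOVED.
            cluster.set("user:42", "alice");
            System.out.println(cluster.get("user:42"));
        }
    }
}
```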
I know I'm asking something very obvious about cluster failover.
I read on redis.io that if any master node fails, it affects the other master nodes until a slave takes over. In my setup I am not defining any slaves and am working with just 3 masters.
I'm thinking of modifying the redis-trib.rb script so that it removes the failed server and starts the cluster with the other 2 nodes. I'm confused about a couple of things:
1) Resharding
Not possible until the failed server comes back online.
2) Minimum 3-node limitation for creating a cluster
As far as I understand, redis-trib.rb does not allow me to create a cluster with only two nodes.
There might be a workaround in the script :)
3) An automatic way to re-create the structure with the live nodes
From a programmer's point of view, I'm looking for something automatic for my system: something that, when the Redis Cluster fails, triggers one command and internally performs tasks like:
Shut down all the other Redis Cluster servers
Remove the nodes-[port].conf files from every cluster node's folder
Start the Redis Cluster servers
Run "redis-trib.rb create ip:port ip:port"
I'm just trying to minimize administration work :). Otherwise I need to implement some other data-consistency mechanism here.
If any of you guys have any solution or idea, kindly share.
Thanks,
Sanjay Mohnani
In a cluster with only master nodes, if a node fails, data is lost. Therefore no resharding is possible, since it is not possible to migrate the data (hash slots) out of the failed node.
To keep the cluster working when a master fails, you need slave nodes (one per master). This way, when a master fails, its slave fails over (becomes the new master with the same copy of the data).
The redis-trib.rb script does not handle cluster creation with fewer than 3 masters; however, in Redis Cluster itself a cluster can be of any size (at least one node).
Therefore adding slave nodes can be considered an automatic solution to your problem.
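For illustration, joining a new node and making it a replica of an existing master can be done from a client like Jedis roughly as follows (the addresses and the master node id are placeholders; redis-trib.rb's add-node can achieve the same thing):

```java
import redis.clients.jedis.Jedis;

public class AddReplicaExample {
    public static void main(String[] args) {
        // Placeholder addresses: a freshly started, empty, cluster-mode node
        // on port 7006 is joined to the cluster and made a replica of an
        // existing master.
        try (Jedis newNode = new Jedis("127.0.0.1", 7006)) {
            // CLUSTER MEET: introduce the new node to any existing member.
            // In practice you may need to wait briefly for the handshake
            // to propagate before the next step.
            newNode.clusterMeet("127.0.0.1", 7000);

            // CLUSTER REPLICATE <master-node-id>: the node id below is a
            // placeholder; look it up with CLUSTER NODES on the master.
            newNode.clusterReplicate("6f5cd78ee6e1e7a26f0c1b5460123456789abcde");
        }
    }
}
```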
Looking through the Infinispan getting started guide, it states [when in replication mode]:
Infinispan only replicates data to nodes which are already in the
cluster. If a node is added to the cluster after an entry is added, it
won’t be replicated there.
I read this as saying that a cluster member will always be ignorant of any data that existed in the cluster before it became a member.
Is there a way to force Infinispan to replicate all existing data to a new cluster member?
I see two options currently but I'm hoping I can just get Infinispan to do the work.
Use a distributed cache and live with the increase in access times inherent in the model, but this at least leaves Infinispan to handle its own state.
Create a Listener to listen for a new cache member joining and iterate through the existing data, pushing it into the new member. Unfortunately this would in effect cause every entry to replicate out to the existing cluster members again. I don't think this option will fly.
That information sounds misleading/outdated. When a node joins a cluster, a rebalance process is initiated, and if you query that node for data during the rebalance, before the data has been delivered to it, the entry is fetched via a remote RPC.
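For instance, on a reasonably recent Infinispan, a programmatically configured replicated cache with state transfer enabled (it is typically on by default) looks roughly like this, so a joining node receives the existing entries during the rebalance; the cache name and key below are placeholders:

```java
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class ReplicatedCacheExample {
    public static void main(String[] args) {
        // Clustered cache manager with the default JGroups transport.
        DefaultCacheManager manager = new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder().build());

        // Replicated cache with state transfer explicitly enabled, so a
        // joining node fetches the existing in-memory state.
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.clustering()
               .cacheMode(CacheMode.REPL_SYNC)
               .stateTransfer()
               .fetchInMemoryState(true);

        manager.defineConfiguration("replicatedCache", builder.build());
        Cache<String, String> cache = manager.getCache("replicatedCache");
        cache.put("greeting", "hello");

        manager.stop();
    }
}
```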
My understanding could be amiss here. As I understand it, Couchbase uses a smart client to automatically select which node to write to or read from in a cluster. What I DON'T understand is, when this data is written/read, is it also immediately written to all other nodes? If so, in the event of a node failure, how does Couchbase know to use a different node from the one that was 'marked as the master' for the current operation/key? Do you lose data in the event that one of your nodes fails?
This sentence from the Couchbase Server Manual gives me the impression that you do lose data (which would make Couchbase unsuitable for high availability requirements):
With fewer larger nodes, in case of a node failure the impact to the
application will be greater
Thank you in advance for your time :)
By default, when data is written to Couchbase, the client returns success as soon as the data has been written to one node's memory. After that, Couchbase persists it to disk and replicates it.
If you want to ensure that the data has been persisted to disk, most client libraries have functions that let you do that. With those functions you can also ensure that the data has been replicated to another node. This mechanism is called observe.
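For example, with the Couchbase Java SDK 2.x these observe-style durability requirements look roughly like this (the host, bucket, and document id are placeholders):

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.PersistTo;
import com.couchbase.client.java.ReplicateTo;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;

public class DurableWriteExample {
    public static void main(String[] args) {
        // Placeholder host and bucket name.
        Cluster cluster = CouchbaseCluster.create("127.0.0.1");
        Bucket bucket = cluster.openBucket("default");

        JsonDocument doc = JsonDocument.create("user::42",
                JsonObject.create().put("name", "alice"));

        // Blocks until the document has been persisted to disk on the active
        // node and replicated to at least one replica.
        bucket.upsert(doc, PersistTo.MASTER, ReplicateTo.ONE);

        cluster.disconnect();
    }
}
```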
When a node goes down, it should be failed over. Couchbase Server can do that automatically when an auto-failover timeout is set in the server settings. For example, if you have a 3-node cluster, your data has 2 replicas, and one node goes down, you will not lose data. If a second node fails, you still won't lose the data; it will be available on the last node.
If the node that was active ("master") for a key goes down and is failed over, another live node becomes the active one. In your client you point to all servers in the cluster, so if it is unable to retrieve data from one node, it tries to get it from another.
Also, if you only have 2 nodes at your disposal, you can install 2 separate Couchbase servers, configure XDCR (cross datacenter replication) between them, and check server availability yourself with HAProxy or something similar. That way you get a single IP to connect to (the proxy's), which will automatically fetch data from the live server.
Fortunately, Couchbase is a good fit for HA systems.
Let me explain in a few sentences how it works. Suppose you have a 5-node cluster. The application, using the client API/SDK, is always aware of the topology of the cluster (and of any change to it).
When you set/get a document in the cluster, the client API uses the same algorithm as the server to choose which node it should be written to. The client selects the node using a CRC32 hash of the key and writes to that node. Then, asynchronously, the cluster copies 1 or more replicas to the other nodes (depending on your configuration).
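To make the hashing idea concrete, here is a deliberately simplified sketch (not the exact algorithm the SDK uses) of mapping a key to one of the bucket's 1024 vBuckets; the key is a placeholder:

```java
import java.util.zip.CRC32;

public class VBucketSketch {
    // Simplified illustration only; the real SDK uses its own variant of the
    // CRC32-based mapping plus a vBucket-to-node map from the cluster.
    static int vBucketFor(String key, int numVBuckets) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes());
        return (int) (crc.getValue() % numVBuckets);
    }

    public static void main(String[] args) {
        // Each vBucket has exactly one active node at any time, which is how
        // the client decides where to send the operation.
        System.out.println(vBucketFor("user::42", 1024));
    }
}
```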
Couchbase has only 1 active copy of a document at any time, so it is easy to stay consistent: applications get and set against this active copy.
In case of failure the cluster has some work to do: once the failure is discovered (automatically or by a monitoring system), a failover occurs. This means the replicas are promoted to active, and it is now possible to work as before. Usually you then rebalance the cluster to distribute the data properly.
The sentence you are quoting simply says that the fewer nodes you have, the bigger the impact of a failure/rebalance, since the same number of requests has to be routed to a smaller number of nodes. You do not lose data ;)
You can find some very detailed information about this way of working on the blog of Couchbase's CTO:
http://damienkatz.net/2013/05/dynamo_sure_works_hard.html
Note: I am working as developer evangelist at Couchbase