Let's say I have two server nodes in one data center, DC1, and two more server
nodes in another data center, DC2. There is some network delay between the two data centers.
I'm running SQL SELECT statements on replicated caches, and those
caches' write synchronization mode is FULL_SYNC.
At any given time we have working client nodes in only one DC, not both.
Let's say we have two clients in DC1.
So there are 6 nodes in total (2 client nodes and 2 server nodes in DC1, plus 2
server nodes in DC2).
Our use case is as follows:
The 2 clients should query only the 2 server nodes in DC1 and not the other 2
servers in DC2.
All cache operations should be FULL_SYNC across the 2 server nodes in DC1,
while synchronization between DC1 and DC2 should be asynchronous.
One doubt I have: if, in the client node's DiscoverySpi, I list (X, Y) as the
server node IPs, will queries always reach X and Y even though the
entire topology contains X, Y, and Z as server nodes?
Could someone please suggest a solution for this?
Note: I saw that GridGain has a cluster-to-cluster replication capability, but that is part of the paid version. I am looking for a solution that works with the community edition.
One doubt I have: if, in the client node's DiscoverySpi, I list (X, Y) as the server node IPs, will queries always reach X and Y
even though the entire topology contains X, Y, and Z as server nodes?
No. DiscoverySpi is used only for connecting to the cluster; after that, the client node works with all nodes in the cluster.
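For reference, a minimal sketch (X and Y stand for the DC1 server addresses, as in the question; the config file name is a placeholder) of a client whose discovery IP finder lists only those two servers. This affects only how the client joins the cluster, not which nodes serve its cache operations:

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

    import java.util.Arrays;

    public class ClientJoin {
        public static void main(String[] args) {
            // Only the DC1 server addresses (placeholders X and Y) are listed here.
            TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
            ipFinder.setAddresses(Arrays.asList("X:47500..47509", "Y:47500..47509"));

            IgniteConfiguration cfg = new IgniteConfiguration()
                .setClientMode(true)
                .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));

            // The client joins through X or Y, but once in the topology it
            // communicates with every server node in the cluster.
            Ignition.start(cfg);
        }
    }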
All cache operations should be FULL_SYNC across the 2 server nodes in
DC1, while synchronization between DC1 and DC2 should be asynchronous.
That's not possible: only one write synchronization mode can be used for a given cache in the cluster.
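For reference, a minimal sketch (cache name is a placeholder) showing that the write synchronization mode is set once per cache configuration and applies to that cache on every node in the cluster:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.cache.CacheWriteSynchronizationMode;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class CacheSyncModeExample {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("myCache");
            ccfg.setCacheMode(CacheMode.REPLICATED);
            // One mode for the whole cache, on every node that stores it;
            // it cannot be FULL_SYNC towards DC1 and asynchronous towards DC2.
            ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

            ignite.getOrCreateCache(ccfg);
        }
    }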
The 2 clients should query only the 2 server nodes in DC1 and not the other 2 servers in DC2.
That's not possible for cache operations, but you can do it with compute operations: you can send a job to a specific node in DC1 that holds a primary or backup copy of the data, and the job will read the local partition (see the sketch below). However, compute adds some overhead compared to plain cache operations when it is used only to fetch entries.
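For illustration, a minimal sketch (assuming the server nodes are started with a user attribute "DC" set to "DC1" or "DC2", and a REPLICATED cache named "myCache"; all names are placeholders) of sending a compute job to a DC1 node and reading the local copy there:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CachePeekMode;
    import org.apache.ignite.cluster.ClusterGroup;

    public class Dc1LocalRead {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start("client-config.xml"); // placeholder client config

            // Restrict the compute target to server nodes tagged with the "DC" = "DC1" attribute.
            ClusterGroup dc1 = ignite.cluster().forServers().forAttribute("DC", "DC1");

            String key = "someKey";

            // Run the read as a closure on a DC1 server; the cache is REPLICATED,
            // so that server holds a local copy and localPeek avoids a remote hop.
            String value = ignite.compute(dc1).call(() -> {
                IgniteCache<String, String> cache = Ignition.localIgnite().cache("myCache");
                return cache.localPeek(key, CachePeekMode.ALL);
            });

            System.out.println("Value read from DC1: " + value);
        }
    }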
So, as you mentioned, the best option here is Data Center Replication, which is available as part of GridGain, because, based on your requirements, you really need 2 separate clusters.
Related
I have a virtual data layer cluster set up with a Netscaler load balancer. This virtual data layer dispatches queries for the client to different data sources and returns the data to the client.
In Netscaler, my VIP uses the same weight of "1" for each node in the cluster (we have 2 nodes), the idea being to send the same number of queries to each node in the cluster. The issue is that some days one node is underloaded with queries and that same node can be overloaded the next day. I checked the virtual layer logs and noticed that the SESSIONS are balanced, but a session can contain a different number of queries. So Netscaler sends an equal number of sessions to the nodes, yet one server may end up processing more queries than the other. What we need to do is balance the queries, not the sessions. So my question is: is there a way to differentiate between sessions and requests (or queries) in the Netscaler settings?
I have 2 machines on which I'm trying to run 4 Ignite servers (2 on each machine) and 16 Ignite clients (8 on each machine). I am using replicated cache mode. I can see that the load on the cluster is not distributed evenly across all servers.
My intention in having 2 servers per machine is to split the load of the 8 local clients across the local servers, with each server working in write-behind mode to replicate the data to all servers.
But I can see that only one server is taking the load, running at 200% CPU, while the other 3 servers sit at around 20% CPU. How can I set up the cluster so the client load is distributed evenly across all servers? Thanks in advance.
I'm generating load by inserting the same value 1 million times and then reading it back using the same key.
Here is your problem: the same key is always stored on the same Ignite node, according to the affinity function (see https://apacheignite.readme.io/docs/data-grid), so only one node takes the read and write load.
You should use a wide range of keys instead.
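For illustration, a minimal sketch (cache name and values are placeholders) of spreading the load over many distinct keys, so the affinity function maps them to different server nodes:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;

    public class WideKeyLoad {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("perfCache");

            // One million distinct keys: the affinity function maps them across
            // all server nodes, so writes are spread over the cluster.
            for (int i = 0; i < 1_000_000; i++)
                cache.put(i, "value-" + i);

            // Reads hit different nodes depending on the key, instead of
            // hammering the single node that owns one hot key.
            for (int i = 0; i < 1_000_000; i++)
                cache.get(i);
        }
    }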
For a startup-company project, we are renting three Linux servers from the same data center in France (from OVH).
We are using three VPS at the moment. We will later switch to dedicated servers in case of commercial success.
We want to install a replicated, distributed database on these 3 VPS, using a replication factor of 2 to provide a minimum of fault tolerance.
If possible, we'd like to use Aerospike, as we prefer it over MongoDB and CouchDB.
So my question is: is it possible to use Aerospike Community Edition to replicate the database records across these 3 VPS without XDR? And how can we achieve that?
Sure, XDR is only needed for replication across datacenters. To replicate within a cluster in a datacenter, configure your namespace's replication-factor to the desired value.
If you want your data replicated on two separate but identically configured Aerospike clusters (clusterA with 3 VPS, clusterB with 3 VPS; is that what you are asking for?) on CE without using XDR, you can instantiate two client objects in your application: use the clientA object to write to clusterA, then use the clientB object to repeat the operation on clusterB, as sketched below. You will take a performance hit, but it may work for you.
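A minimal sketch of that two-client approach with the Aerospike Java client (hostnames, namespace, set, and bin names are placeholders):

    import com.aerospike.client.AerospikeClient;
    import com.aerospike.client.Bin;
    import com.aerospike.client.Key;

    public class DualClusterWrite {
        public static void main(String[] args) {
            // One client per cluster (clusterA and clusterB each have 3 VPS nodes).
            AerospikeClient clientA = new AerospikeClient("clusterA-node1", 3000);
            AerospikeClient clientB = new AerospikeClient("clusterB-node1", 3000);

            Key key = new Key("test", "demo", "user:42");
            Bin bin = new Bin("name", "alice");

            // Write the same record to both clusters; the second write is the
            // application-level "replication" (at the cost of doubled latency).
            clientA.put(null, key, bin);
            clientB.put(null, key, bin);

            clientA.close();
            clientB.close();
        }
    }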
If you just have one cluster of 3 VPS, setting a replication factor of two in your namespace configuration automatically keeps one master record and one replica within the cluster, with record-level data evenly distributed across the cluster and the master and replica of any record always on different nodes.
I want to keep two Redis instances (server A and server B), which are installed on different hardware, synchronized: when data "X" is written to server A, I want it to be synchronized to server B as well.
The reason is that, from my client application, whenever I need to read data I can pick either of the two servers at random, load-balancing connections across multiple requests. This also allows a high-availability architecture, so that if one server goes down the data is still in the other's cache.
I am currently doing the above through client code only: whenever I write, I write to both servers (A and B).
Is there a way to specify, at the server configuration level, that server A will be in charge of replicating data writes to B? Something like a trigger on any write that replicates it to server B, and vice versa (writes to server B get replicated to A)?
It is all covered here: Redis replication
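As a minimal sketch (hostnames and key are placeholders), here is how that looks with the Jedis client: server B is made a replica of server A (the same effect as the replicaof/slaveof directive in redis.conf), writes go to A, and B serves the replicated copy. Note that standard Redis replication is one-directional, so replica B does not replicate its own writes back to A.

    import redis.clients.jedis.Jedis;

    public class RedisReplicationDemo {
        public static void main(String[] args) throws InterruptedException {
            try (Jedis serverA = new Jedis("server-a", 6379);
                 Jedis serverB = new Jedis("server-b", 6379)) {

                // Make B replicate from A: server-side replication, no client dual-writes.
                serverB.slaveof("server-a", 6379);

                serverA.set("X", "hello");

                Thread.sleep(100); // replication is asynchronous

                System.out.println(serverB.get("X")); // "hello" once replication catches up
            }
        }
    }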
You might instead want to implement local caching in the application; it is much faster than fetching from Redis (which is in fact pretty fast too), and if you're hosting in a half-decent place, uptime is around 99.9%, so availability shouldn't be a problem.
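A minimal sketch (hostname and key are placeholders) of that idea: keep a small in-process cache in front of Redis and only hit Redis on a miss.

    import redis.clients.jedis.Jedis;
    import java.util.concurrent.ConcurrentHashMap;

    public class LocalCachingRead {
        private static final ConcurrentHashMap<String, String> LOCAL = new ConcurrentHashMap<>();

        public static String read(Jedis redis, String key) {
            // computeIfAbsent limits the Redis round trip to the first read of each key.
            return LOCAL.computeIfAbsent(key, redis::get);
        }

        public static void main(String[] args) {
            try (Jedis redis = new Jedis("server-a", 6379)) {
                System.out.println(read(redis, "X")); // first call hits Redis
                System.out.println(read(redis, "X")); // second call is served locally
            }
        }
    }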
My understanding could be amiss here. As I understand it, Couchbase uses a smart client to automatically select which node to write to or read from in a cluster. What I DON'T understand is, when this data is written/read, is it also immediately written to all other nodes? If so, in the event of a node failure, how does Couchbase know to use a different node from the one that was 'marked as the master' for the current operation/key? Do you lose data in the event that one of your nodes fails?
This sentence from the Couchbase Server Manual gives me the impression that you do lose data (which would make Couchbase unsuitable for high availability requirements):
With fewer larger nodes, in case of a node failure the impact to the
application will be greater
Thank you in advance for your time :)
By default, when data is written to Couchbase, the client returns success as soon as the data is written to one node's memory. After that, Couchbase saves it to disk and performs replication.
If you want to ensure that data is persisted to disk, most client libraries provide functions for that. With those functions you can also ensure that the data has been replicated to another node. This mechanism is called observe (see the sketch below).
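A minimal sketch of that (assuming the Couchbase Java SDK 2.x; node addresses, bucket, and document id are placeholders), using the PersistTo/ReplicateTo options that are built on observe:

    import com.couchbase.client.java.Bucket;
    import com.couchbase.client.java.Cluster;
    import com.couchbase.client.java.CouchbaseCluster;
    import com.couchbase.client.java.PersistTo;
    import com.couchbase.client.java.ReplicateTo;
    import com.couchbase.client.java.document.JsonDocument;
    import com.couchbase.client.java.document.json.JsonObject;

    public class DurableWrite {
        public static void main(String[] args) {
            Cluster cluster = CouchbaseCluster.create("node1", "node2", "node3");
            Bucket bucket = cluster.openBucket("default");

            JsonDocument doc = JsonDocument.create("user::42",
                    JsonObject.create().put("name", "alice"));

            // Block until the write is persisted to disk on the active node and
            // replicated to at least one replica node.
            bucket.upsert(doc, PersistTo.MASTER, ReplicateTo.ONE);

            cluster.disconnect();
        }
    }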
When one node goes down, it should be failed over. Couchbase Server can do that automatically when the auto-failover timeout is set in the server settings. For example, if you have a 3-node cluster, the stored data has 2 replicas, and one node goes down, you will not lose data. If a second node fails you still won't lose all the data; it will be available on the last node.
If a node that was the master goes down and is failed over, another live node becomes the master. In your client you point to all the servers in the cluster, so if it is unable to retrieve data from one node, it tries to get it from another.
Also, if you have 2 nodes at your disposal, you can install 2 separate Couchbase servers, configure XDCR (cross data center replication), and check server availability manually with HA proxies or something similar. That way you have only one IP to connect to (the proxy's IP), which will automatically serve data from the live server.
Couchbase is a good system for HA requirements.
Let me explain in a few sentences how it works. Suppose you have a 5-node cluster. The application, using the client API/SDK, is always aware of the cluster topology (and any changes to it).
When you set/get a document in the cluster, the client API uses the same algorithm as the server to choose the node it should be written to. So the client selects the node using a CRC32 hash and writes to that node. Then, asynchronously, the cluster copies 1 or more replicas to the other nodes (depending on your configuration).
Couchbase has only 1 active copy of a document at a time, so it is easy to stay consistent: applications get and set against this active copy.
In case of failure, the server has some work to do. Once the failure is discovered (automatically or by a monitoring system), a "fail over" occurs. This means that the replicas are promoted to active and it is now possible to work as before. Usually you then rebalance the cluster to distribute the data properly.
The sentence you are quoting simply says that the fewer nodes you have, the bigger the impact of a failure/rebalance will be, since you have to route the same number of requests to a smaller number of nodes. And no, you do not lose data ;)
You can find some very detailed information about this way of working on the Couchbase CTO's blog:
http://damienkatz.net/2013/05/dynamo_sure_works_hard.html
Note: I work as a developer evangelist at Couchbase.