Ganglia gmetad failover - replication

I want to know if it is possible to run gmetad in a failover/replica scenario. My problem is the following:
I have 100 nodes that speak to each other over multicast and sync their gmond info. I have a separate machine running gmetad (let's call it master1) that polls metrics from the various gmonds (so far so good).
Now I want to be sure that if master1 dies, I have a second gmetad (master2) holding the same data. So I configured a second gmetad that reads the same gmonds. Now, if master1 dies and comes back up after, let's say, 3 days, is there any way to get all the missed data from master2 and have a complete timeline on master1?
If there is no way to do that, can I use an NFS directory and point both gmetads to write their rrds to the same directory?

If you are working in a multicast environment, your rrd data is effectively available in multiple places. So if you want master1 to have the complete timeline, you can back up its rrds and restart the gmond and gmetad processes; Ganglia will then rebuild the rrds from the multicast nodes.
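For reference, a minimal sketch of the relevant gmetad.conf entries, with hypothetical cluster and host names; both master1 and master2 would carry the same data_source line so they poll the same gmonds:
# identical on master1 and master2
data_source "mycluster" node01:8649 node02:8649
# where this gmetad writes its rrds
rrd_rootdir "/var/lib/ganglia/rrds"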

Minimum amount of Nodes in Redis Cluster

I know I'm asking something very obvious about cluster failover.
I read on redis.io that if any master node in a cluster fails, it affects the other master nodes until a slave takes over. In my setup I'm not defining any slaves, just working with 3 masters.
I'm thinking of modifying the redis-trib.rb script so that it removes the failed server and restarts the cluster with the other 2 nodes. I'm confused about a couple of things:
1) Resharding
Not possible until the failed server comes back up.
2) Minimum 3-node limitation for creating a cluster
As far as I understand, redis-trib.rb will not let me create a cluster with two nodes. There might be a workaround in the script :)
3) An automatic way to re-create the structure with the live nodes
From a programmer's point of view, I'm looking for something automatic for my system: when the Redis Cluster fails, one trigger command runs some tasks internally (sketched as a script after this list), like:
Shut down all other redis cluster servers
Remove nodes-[port].conf files from all cluster nodes folder
Start redis cluster servers
Run "redis-trib.rb create ip:port ip:port"
I'm just trying to minimize administration work :). Otherwise I'll need to implement some separate "data consistency" logic here.
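A rough sketch of that sequence as a shell script, under loud assumptions: the hostnames, ports, and file paths are hypothetical, the redis.conf files have daemonize yes, and redis-trib.rb will still refuse to create a cluster with fewer than 3 masters, as I noted in point 2:
for node in host1:7000 host2:7001; do
  host=${node%%:*}; port=${node##*:}
  redis-cli -h "$host" -p "$port" shutdown nosave        # stop the server without saving
  ssh "$host" "rm -f /var/lib/redis/nodes-$port.conf"    # wipe the old cluster state
  ssh "$host" "redis-server /etc/redis/$port.conf"       # start it again (daemonize yes)
done
redis-trib.rb create host1:7000 host2:7001               # refused: needs at least 3 masters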
If any of you guys have any solution or idea, kindly share.
Thanks,
Sanjay Mohnani
In a cluster with only master nodes, if a node fails, data is lost. Therefore no resharding is possible, since it is not possible to migrate the data (hash slots) out of the failed node.
To keep the cluster working when a master fails, you need slave nodes (one per master). This way, when a master fails, its slave fails over (becomes the new master with the same copy of the data).
The redis-trib.rb script does not handle cluster creation with fewer than 3 masters; however, in Redis Cluster itself a cluster can be of any size (at least one node).
Therefore adding slave nodes can be considered an automatic solution to your problem.
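For example, the usual way to get that (hypothetical addresses below) is to create the cluster with one replica per master; redis-trib.rb then pairs each slave with a master, preferring one on a different host:
redis-trib.rb create --replicas 1 10.0.0.1:7000 10.0.0.2:7000 10.0.0.3:7000 10.0.0.1:7001 10.0.0.2:7001 10.0.0.3:7001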

Redis cache in a clustered web farm? Sync between two member nodes?

Ok, so what I have are 2 web servers running inside of a Windows NLB clustered environment. The servers are identical in every respect, and as you'd expect in an NLB clustered environment, everybody is hitting the cluster name and not the individual members. We also have affinity turned off on the members in the cluster.
But, what I'm trying to do is to turn on some caching for a few large files (MP3s). It's easy enough to dial up a Redis node on one particular member and hit it, everything works like you'd expect. I can pull the data from the cache and serve it up as needed.
Now, let's add the overhead of the NLB. With an NLB in play, you may not be hitting the same web server each time. You might make your first hit to member 01, and the second hit to 02. So, I'd need a way to sync between the two servers. That way it doesn't matter which cluster member you hit, you are going to get the same data.
I don't need to worry about one cache being out of date, the only thing I'm storing in there is read only data from an internal web service.
I've only got 2 servers and it looks like redis clusters need 3. So I guess that's out.
Is this the best approach? Or perhaps there is something else better?
Reasons for Redis: we want the cache to be in-memory only, with no writes to the database. I thought this would be a good fit, but I need to make sure the data is available on both servers.
It's not possible to have Redis multi-master (writing to both). That said, its replication is blazing fast (check the slaveof command of Redis).
But why do you need it on the same server? Access it as a service, so every node reads the same data. If the main server goes down, the slave can promptly be promoted to master.
One observation: you might notice that Redis does use the disk, but asynchronously - an append-only file that it compacts from time to time, depending on its size.
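A minimal sketch of that service arrangement, assuming the two NLB members are named web01 and web02 (hypothetical names): run the master on web01, a slave on web02, and point both sites at web01 for writes:
# on web02, in redis.conf (or at runtime via redis-cli):
slaveof web01 6379
# if web01 dies, promote web02 by hand (or with Redis Sentinel):
redis-cli -h web02 slaveof no one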

Redis Read-Replicas On Web Servers

I am currently developing a system that makes heavy use of redis for a series of web services.
One of the key criteria of this system is fast responses.
At present the layout (ignoring load balancers etc) is as follows:
2 x Front End Play Framework 2.x Servers
2 x Job Handling/Persistence Play Framework 2.x Servers
1 x MySQL Server
2 x Redis Servers, 1 master, 1 slave
In this setup, redis serves 2 tasks - as a shared cache and also as a message bus.
Currently the front end servers host a service which interacts in its entirety with Redis.
The front end servers try to balance reads across the pool of read servers (currently the master and 1 slave), but being Redis they need to make their writes to the master server. They handle cache updates etc by sending messages on the queues, which are picked up by the job handling servers.
The job handling servers do blocking listens (BLPOP) to the Redis write server and process tasks when necessary. They have the only connection to MySQL.
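As a sketch of that queue pattern with redis-cli (the queue name and master hostname are hypothetical; the real services would go through a client library):
# a front end server enqueues a cache-update task on the write master:
redis-cli -h redis-master rpush jobs "update-cache:item42"
# a job handling server blocks until a task arrives:
redis-cli -h redis-master blpop jobs 0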
At present the read replica is a dedicated server - it's mostly there so that we can switch it to write master if the current master fails.
I was thinking of putting a read replica slave of redis on each of the front end servers which means that read latency would be even less, and writes (messages for queues) get pushed to the write server on a separate connection.
If I need to scale, I could just add more front end servers with read slaves.
It sounds like a win/win to me as even if the write server temporarily drops out, the front end servers can still read data at least from their local slave and act accordingly.
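Concretely, each front end would carry something like this (hypothetical master hostname), with the app reading locally and writing remotely:
# /etc/redis/redis.conf on each front end server
slaveof redis-master 6379
slave-read-only yes
# app reads: localhost:6379; writes and queue messages: redis-master:6379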
Can anyone think of reasons why this might not be such a good idea?
I understand the advantages of this approach... but consider this: what happens when you need to scale just one component (i.e. FE server or Redis) but not the other? For example, more traffic could mean you'll need more app servers to handle it while the Redises will be considerably less loaded. On the other hand, if your dataset grows and/or more load is put on the Redises - you'll need to scale these and not the app.
The design should fit your requirements, and the simplicity of your suggested setup has a definite appeal (i.e. to scale, just add another identical lego block), but from my meager experience - anything that sounds too good to be true usually is. In the longer run, even if this works for you now, you may find yourself in a jam down the road. My advice - separate your Redis(es) from your app servers, deal with and/or work around the network, and make sure each layer is available and scalable in its own right.

Redis active-active replication

I am using Redis version 2.8.3. I want to build a Redis cluster, but in this cluster there should be multiple masters. This means I need multiple nodes that have write access and replicate their writes to all other nodes.
I could build a cluster with one master and multiple slaves. I just configured the slaves' redis.conf files and added this:
slaveof myMasterIp myMasterPort
That's all. Then I tried writing something to the db via the master. It was replicated to all slaves, and I really like that.
But when I tried to write via a slave, it told me that slaves have no right to write. So I set the slave's read-only flag in redis.conf to false, and then I could write to the db.
But I realized that such a write is not replicated to my master, and therefore not to any of the other slaves either.
This means I could not build an active-active cluster.
I tried to find out whether Redis has active-active cluster capability, but I could not find an exact answer.
Is it possible to build an active-active cluster with Redis?
If it is, how can I do it?
Thank you!
Redis v2.8.3 does not support multi-master setups. The real question, however, is why do you want to set one up? Put differently, what challenge/problem are you trying to solve?
It looks like the challenge you're trying to solve is how to reduce the network load (more on that below) by eliminating over-the-net reads. Since Redis isn't multi-master (yet), the only way to do it is by setting up each app server with a master and a slave (to the other master) - i.e. grand total of 4 Redis instances (and twice the RAM).
The simple scenario is when each app updates only a mutually-exclusive subset of the database's keys. In that scenario this kind of setup may actually be beneficial (at least in the short term). If, however, both apps can touch all keys or if even just one key is "shared" for writes between the apps, then you'll need to bake locking/conflict resolution/etc... logic into your apps to consolidate local master and slave differences (and that may be a bit of an overkill). In either case, however, you'll end up with too many (i.e. more than 1) Redises, which means more admin effort at the very least.
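A sketch of that 4-instance layout, assuming the app servers are app1 and app2 (hypothetical names), each running its own master on port 6379 plus a slave of the other box's master on port 6380:
# on app1, the second instance's config:
port 6380
slaveof app2 6379
# on app2, the mirror image:
port 6380
slaveof app1 6379
# each app writes to localhost:6379 and reads the other
# app's writes from localhost:6380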
Also note that by colocating app and database on the same server you're setting yourself up for near-certain scalability failure. What will happen when you need more compute resources for your apps or Redis? How will you add yet another app server to the mix?
Which brings me back to the actual problem you are trying to solve - network load. Why exactly is that an issue? Are your apps so throughput-heavy, or is the network so thin, that you are willing to go to such lengths? Or maybe latency is the issue you want to resolve? Be that as it may, I recommend that you consider a time-proven design instead, namely separating Redis from the apps and putting it on its own resources. True, the network will hit you in the face and you'll have to work around/with it (which is what everybody else does). On the other hand, you'll have more flexibility and control over your much simpler setup, and that, in my book, is a huge gain.
Redis Enterprise has had this feature for quite a while, but if you are looking for an open source solution, KeyDB is a fork with Active Active support (called Active Replica).
Setting it up is just a little more work than standard replication:
1) Both servers must have "active-replica yes" in their respective configuration files.
2) On server B execute the command "replicaof [A address] [A port]". Server B will drop its database and load server A's dataset.
3) On server A execute the command "replicaof [B address] [B port]". Server A will drop its database and load server B's dataset (including the data it just transferred in the prior step).
4) Both servers will now propagate writes to each other. You can test this by writing to a key on server A and ensuring it is visible on server B, and vice versa.
https://github.com/JohnSully/KeyDB/wiki/KeyDB-(Redis-Fork):-Active-Replica-Support
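A minimal sketch of those steps with redis-cli against two KeyDB instances, hostA and hostB (hypothetical names, default port):
# in both keydb.conf files:
active-replica yes
# on server B, then on server A:
redis-cli -h hostB replicaof hostA 6379
redis-cli -h hostA replicaof hostB 6379
# verify that writes propagate both ways:
redis-cli -h hostA set foo bar
redis-cli -h hostB get foo    # should return "bar"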

Couchbase node failure

My understanding could be amiss here. As I understand it, Couchbase uses a smart client to automatically select which node to write to or read from in a cluster. What I DON'T understand is, when this data is written/read, is it also immediately written to all other nodes? If so, in the event of a node failure, how does Couchbase know to use a different node from the one that was 'marked as the master' for the current operation/key? Do you lose data in the event that one of your nodes fails?
This sentence from the Couchbase Server Manual gives me the impression that you do lose data (which would make Couchbase unsuitable for high availability requirements):
"With fewer larger nodes, in case of a node failure the impact to the application will be greater"
Thank you in advance for your time :)
By default, when data is written to Couchbase, the client returns success as soon as the data is written to one node's memory. After that, Couchbase persists it to disk and replicates it.
If you want to ensure that the data has been persisted to disk, most client libraries have functions for that; with their help you can also ensure that the data has been replicated to another node. This mechanism is called observe.
When a node goes down, it should be failed over. Couchbase Server can do that automatically when the auto-failover timeout is set in the server settings. E.g. if you have a 3-node cluster, the stored data has 2 replicas, and one node goes down, you won't lose data. If a second node fails, you still won't lose all the data - it will be available on the last node.
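For example, enabling auto-failover from the command line could look like this (the exact couchbase-cli flags vary between versions, so treat the switches as illustrative):
couchbase-cli setting-autofailover -c 192.168.0.1:8091 -u Administrator -p password --enabled=1 --auto-failover-timeout=60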
If a node that was the master for some data goes down and is failed over, another live node becomes the master for it. In your client you point to all the servers in the cluster, so if it is unable to retrieve data from one node, it tries to get it from another.
Also, if you have only 2 nodes at your disposal, you can install 2 separate Couchbase servers, configure XDCR (cross datacenter replication) between them, and check server availability yourself with HAProxy or something similar. That way you get a single IP to connect to (the proxy's), which automatically serves data from the live server.
Couchbase is in fact a good fit for HA systems.
Let me explain in a few sentences how it works. Suppose you have a 5-node cluster. The application, using the client API/SDK, is always aware of the topology of the cluster (and of any change in the topology).
When you set/get a document in the cluster, the client API uses the same algorithm as the server to choose the node it should be written to: the client hashes the key with CRC32, which maps it to a partition owned by one node, and writes to that node. Then, asynchronously, the cluster copies 1 or more replicas to the other nodes (depending on your configuration).
Couchbase has only 1 active copy of a document at a time, so it is easy to stay consistent: applications get and set against this active copy.
In case of failure, the server has some work to do. Once the failure is discovered (automatically or by a monitoring system), a "failover" occurs. This means the replicas are promoted to active, and it is now possible to work as before. Usually you then rebalance the cluster to distribute the data properly.
The sentence you are commenting on simply says that the fewer nodes you have, the bigger the impact of a failure/rebalance, since the same number of requests will have to be routed to a smaller number of nodes. You do not lose data, though ;)
You can find some very detailed information about this way of working on the Couchbase CTO's blog:
http://damienkatz.net/2013/05/dynamo_sure_works_hard.html
Note: I work as a developer evangelist at Couchbase