Redis HA using Docker & Kubernetes (1 Master 2 slaves) Ubuntu 16.04

I'm trying to find a viable solution using Redis in a master/slave (at least 2 slaves) configuration. I have Docker containers with Ubuntu 16.04 and Redis server/sentinel installed (latest stable).
I'm not looking for a clustered setup. I would like to have the master Redis DB on one pod and each slave on its own pod (all three will be on separate VMs or physical boxes). I want to use a Kubernetes nodeSelector in the YAML to assign where they can spin up.
From my research, it appears I want to run a Redis Sentinel service on each pod as well. The key here is that I want to specify where each master/slave pod can run. I've investigated https://github.com/kubernetes/kubernetes/tree/master/examples/redis but that does not give me the control I want. Maybe Redis 4.x helps, but I can't find any examples. Any pointers would be appreciated. I've searched all over this site for an answer without any luck.
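To illustrate what I'm after, here is a rough sketch of a pinned master pod (all names and labels are placeholders, and the slave variant assumes a redis-master Service exists for DNS):

apiVersion: v1
kind: Pod
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  nodeSelector:
    kubernetes.io/hostname: node-a   # pin the master to a specific box
  containers:
  - name: redis
    image: redis:4.0
    command: ["redis-server"]

A slave pod would be the same shape with a different hostname in the nodeSelector and command: ["redis-server", "--slaveof", "redis-master", "6379"]; Sentinel could run as a second container in each pod.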

Related

Ignite error upgrading the setup in Kubernetes

While upgrading the Ignite deployment in Kubernetes (EKS) for the Log4j vulnerability, I got the error below:
[ignite-1] Caused by: class org.apache.ignite.spi.IgniteSpiException: BaselineTopology of joining node (54b55de4-7742-4e82-9212-7158bf51b4a9) is not compatible with BaselineTopology in the cluster. Joining node BlT id (4) is greater than cluster BlT id (3). New BaselineTopology was set on joining node with set-baseline command. Consider cleaning persistent storage of the node and adding it to the cluster again.
The setup is a 3-node cluster with native persistence enabled (PVC). This has happened many times in our journey with Apache Ignite, even though we followed the official guide.
I cannot clean the storage because the pod keeps restarting; by the time I get a shell on the pod, it crashes and restarts again.
This might be due to a wrong startup order; starting the nodes manually in reverse order may resolve it, but I'm not sure that is possible in K8s. Another possible cause is baseline auto-adjustment, which can change your baseline unexpectedly; I suggest you turn it off if it's enabled.
One workaround to clean the DB of a failing pod (quite tricky) is to replace the Ignite image with a simple image like plain Debian or Alpine (just to be able to access the CLI) while keeping the same PVC attached, then set the Ignite image back once you have fixed the persistence issue. The other is to access the underlying PV directly, if possible, and do the surgery in place.
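A rough sketch of both suggestions (the statefulset and pod names here are hypothetical; control.sh ships with Ignite, and the baseline auto-adjust commands exist since Ignite 2.8):

# turn off baseline auto-adjustment from any surviving node's shell:
control.sh --baseline auto_adjust disable

# swap the crashing container for an idle Debian image so the pod stays up
# long enough to exec in and clean the persistence files on the PVC:
kubectl patch statefulset ignite --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "debian:stable"},
  {"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["sleep", "infinity"]}]'
kubectl exec -it ignite-1 -- bash    # clean the node's persistence directory, then patch the image back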

RabbitMQ and EC2: Clusters can't join

I'm trying to create a RabbitMQ cluster.
The instances have been set up identically (they've been installed identically), they can resolve each other's hostnames (both with dig and rabbitmqctl resolve_hostname), and their cookie hash is the same.
I'm wondering whether or not there are more steps to setting up a RabbitMQ cluster when in EC2.
I'm running RabbitMQ 3.9.13 and Ubuntu 20.04
Thank you all in advance
-brej
Basically, that should be sufficient. Make sure to declare all these settings in the RabbitMQ config file; this way, each time a node starts, it will be able to rejoin the cluster when needed.
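As a sketch, with classic config peer discovery (hostnames here are hypothetical) each node knows its peers from the config file and rejoins the cluster on restart. On EC2, also check that the security group opens port 4369 (epmd) and port 25672 (inter-node communication) between the instances:

# /etc/rabbitmq/rabbitmq.conf
cluster_formation.peer_discovery_backend = classic_config
cluster_formation.classic_config.nodes.1 = rabbit@node1
cluster_formation.classic_config.nodes.2 = rabbit@node2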

Migrate a (storm+nimbus) cluster to a new Zookeeper, without losing information or having downtime

I have a nimbus+storm cluster using Zookeeper, and I wish to move my cluster and point it to a new Zookeeper. Do you know if this is possible? Can I keep all the information of the old zookeeper and save it in the new one? Is it possible to do it without downtime?
I have looked on the internet for this procedure but have not found much.
Would it be as simple as changing the storm.yml file on both the master and worker nodes? Do I need a restart afterwards?
# storm.zookeeper.servers:
# - "server1"
# - "server2"
If you just change storm.yml, you'd be pointing Storm at a new empty Zookeeper cluster, and it will be like you just installed Storm from scratch. More likely, you want to grow your Zookeeper cluster to include your new machines, then update storm.yml to point at the new machines, then shrink the cluster to exclude the machines you want to move away from. That way, your Zookeeper quorum is preserved even though you've moved to other physical machines.
This is easier to do on Zookeeper 3.5 with dynamic reconfiguration http://zookeeper.apache.org/doc/r3.5.5/zookeeperReconfig.html. I'm unsure whether Storm will run on Zookeeper 3.5, but you may consider investigating whether you can upgrade to 3.5 before growing/shrinking the cluster.
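Roughly, the 3.5 flow looks like this from zkCli.sh (the server IDs and hostnames are hypothetical, and reconfigEnabled=true must be set on all servers):

# add the new ensemble members, wait for them to sync, then drop the old ones:
reconfig -add server.4=zk-new1:2888:3888;2181
reconfig -add server.5=zk-new2:2888:3888;2181
reconfig -remove 1,2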
Otherwise you will have to do a rolling restart to add the new Zookeeper nodes, then do another one to remove the old machines once the cluster has stabilized.
Let me suggest a hack here. This is a script provided by Microsoft for migration on HDInsight clusters, but you can change it and use it for your needs.
The script can be downloaded from https://github.com/hdinsight/hdinsight-storm-examples/tree/master/tools/zkdatatool-1.0 and you can read more about it at https://blogs.msdn.microsoft.com/azuredatalake/2017/02/24/restarting-storm-eventhub/
I have used it in the past when I had to migrate data between PaaS clusters, and I can confirm it works OK!

How to monitor a Hadoop cluster using Ambari on CentOS 7

I have a small Hadoop cluster, i.e. one master and three slave nodes, and I need to monitor it. I have found that we can use Ambari. CentOS 7 is installed on all machines. Please provide complete details on how I can do that. I have read that Ambari can only be used for a new cluster, i.e. you have to install a new cluster with it. Does it not work with an already running cluster?
At the moment Ambari does not support CentOS 7, so that's not going to work.
However, Ambari does not perform cluster monitoring on its own; it uses Nagios for that purpose. Nagios is an independent software project that you can set up on its own, though that is somewhat painful to do.
ambari-server for Ambari 2.2+ can be installed and works well on CentOS 7.
You have to install ambari-server on one of the hosts (the master node) and can then use the web UI at hostname:8080 to install Ambari agents on the other hosts. Alternatively, the agents can be installed manually on the other hosts and configured to communicate with the ambari-server.
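A rough sketch of the server-side install on CentOS 7 (the repo URL is the historical Hortonworks location for Ambari 2.2 and is an assumption; pick the version you need):

wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.2.0/ambari.repo
yum install -y ambari-server
ambari-server setup -s    # silent setup with defaults
ambari-server start       # then open http://<master-host>:8080 (default login admin/admin)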

Redis Cluster Support in Redis 2.8.19

I just started evaluating Redis. I am using Redis 2.8.19, which is the latest stable release. Redis 2.9 is still unstable and Redis 3.0 is only available as a developer's preview (not recommended for production). I was trying to set up a Redis cluster, and when I changed my redis.conf and appended
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
and started my Redis server with
src/redis-server ./redis.conf
it gave me an error as follows
* FATAL CONFIG FILE ERROR *
Reading the configuration file, at line 2
'cluster-enabled yes'
Bad directive or wrong number of arguments
I googled the error and got to know that my version (2.8.19) does not support clustering. I was still unable to find any such specification in the Redis docs. My question is simple: does Redis 2.8.19 support the Redis cluster configuration, or do I have to upgrade to Redis 2.9 or Redis 3.0? I am evaluating Redis because I need to deploy it in production. Please guide.
Redis Cluster support is only for versions >= 3.0.0. Redis 3.0.0 will be released as a stable version in a matter of days, it's a good idea to use it if you want to use Cluster. The cluster support is considered to be stable, however for it to be considered mature we want to see adoption. Btw there is already at least a very large site using it in production. Currently the most sane thing to do if you need Redis Cluster is to test it for your use case, and if it looks great, use it.
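If you want to test it, a minimal single-box setup might look like this (the ports are arbitrary; redis-trib.rb ships in the Redis 3.0 source tree, and one replica per master needs six nodes):

# redis.conf for each node, repeated for ports 7000-7005 with its own data dir:
port 7000
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-node-timeout 5000
# then create the cluster:
src/redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005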
Redis cluster is supported only in Redis 3.0+ (which is now stable). I have written a simple API called "Simple Redis Cluster Client" which can be used with Redis versions below 3.0 to run in a cluster-like mode (not precisely a cluster; it just distributes keys among Redis nodes based on the key's hashcode). You can have a look at https://github.com/prash-mi/simple-redis-cluster-client
Cluster support for Redis is only from v3 - v2.8.19 doesn't do clustering.