How to set node priority in a Windows Server 2016 failover cluster (Hyper-V)

I have 4 nodes and each node has 10-15 resources. Is there a way to set node priority so that whenever a failover happens, the resource always fails over to the priority 1 node?
Server: Windows Server 2016
Failover cluster manager: 10.0
Resource configuration version: 8.0
Thanks in advance.

Change the preferred owner setting for the role in the failover cluster. Follow these steps:
1.) Open the Failover Cluster Manager console on one of the nodes.
2.) Right-click the role you want to make highly available (for example, a VM) and select Properties.
3.) Under Preferred Owners, select all the nodes that should be able to host the role and move Node 1 to the top of the list.
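The same setting can also be applied with the FailoverClusters PowerShell module; a minimal sketch, assuming the role is named "MyVM" (placeholder name):
# List the current preferred owners of the role
Get-ClusterOwnerNode -Group "MyVM"
# Set the preferred owner order, with Node1 first
Set-ClusterOwnerNode -Group "MyVM" -Owners Node1,Node2,Node3,Node4
# Optionally allow failback, so the role returns to Node1 when it comes back online
(Get-ClusterGroup "MyVM").AutoFailbackType = 1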

Related

Azure VM SQL Availability Group Listener Connection Problem

We are building a two-node SQL Availability Group with SQL Server 2016 SP3.
Steps taken:
1.) Build two VMs in Azure in the same region, but different Zones
2.) Install the Windows Failover Clustering feature on both nodes
3.) Install SQL Server 2016 SP3 on each node
4.) Create a failover cluster with each node and a cloud witness
5.) Enable Always On availability groups on the SQL Server engine service
6.) Create an availability group and add both nodes and a database
7.) Add a listener to the availability group
At this point, we can connect to the listener name if we try from the primary node with SSMS. The DNS entry has been created and is assigned the IP address given to the listener.
If I go to node 2 and try to connect to the listener name, I get a connection timeout. If I run nslookup, the correct IP is returned.
When I fail over from node 1 to node 2, the connection to the listener stops working on node 1 and starts working on node 2.
We have moved node 2 to a separate subnet and still see the same behavior.
I know there are some intricacies with Azure VMs and failover clustering communications, and we have tried the suggestions we found about this.
The only thing we have been hesitant to try is the standard load balancer.
Does anyone have a direction we can look at next?
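For reference, this is roughly how we are testing the listener from each node (the listener name and port are examples):
# check name resolution and then the SQL connection to the listener
nslookup aglistener
sqlcmd -S tcp:aglistener,1433 -Q "SELECT @@SERVERNAME"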

Average_ttl is 0 on one of the Redis cluster nodes

I have a Drupal cluster of 3 servers that use HAProxy (TCP mode) to handle communication with a Redis cluster of 3 nodes (used for caching), on which the Sentinel service is also active.
The Redis cluster has 1 main (master) node and 2 secondary (slave) nodes in replication mode.
I recently noticed that the avg_ttl is zero on one of the secondary (slave) nodes.
That seems odd: the data is synced between these nodes, so they should have the same keys.
I checked the configuration and the nodes have almost the same settings in their redis.conf files.
Any idea what this could mean?
Thanks!
(avg_ttl and Replication Info output omitted)
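For reference, this is roughly how I am reading the values on each node (host name is a placeholder):
# keyspace stats, including avg_ttl per database
redis-cli -h redis-node-2 INFO keyspace
# replication role and link status
redis-cli -h redis-node-2 INFO replication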

Ambari cluster: poor connection between ambari-agent and ambari-server

We have an Ambari cluster with 872 data-node machines, running Ambari version 2.6.x.
We currently have some network problems.
After a long investigation we found that the Ambari agent running on some machines does not communicate well with the Ambari server.
As a result we see some strange behavior, such as 5 dead DataNodes on the Ambari dashboard, while the DataNode machines are definitely healthy.
Is it possible to set a more tolerant value in the Ambari agent configuration, so that the acknowledgement between the Ambari agent and the Ambari server is given a little more time and the network problems are ignored?
Something like a timeout or connection time between the Ambari agent and the Ambari server.
First of all, you need to find the root cause of why the DataNode is showing as dead.
The Ambari agent runs on every node. It is responsible for sending metrics and heartbeats to the Ambari server, which then publishes them to your Ambari web UI.
The NameNode waits about 10 minutes before it declares a DataNode dead and re-replicates its blocks to other DataNodes.
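For reference, that interval comes from the HDFS heartbeat settings (shown here with their usual defaults): the dead-node timeout is 2 * dfs.namenode.heartbeat.recheck-interval + 10 * dfs.heartbeat.interval, i.e. roughly 10.5 minutes.
<!-- hdfs-site.xml, default values shown -->
<property>
  <name>dfs.namenode.heartbeat.recheck-interval</name>
  <value>300000</value> <!-- milliseconds -->
</property>
<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value> <!-- seconds -->
</property>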
If a DataNode shows as dead, check the Ambari agent status on that node by running service ambari-agent status. In parallel, check ambari-agent.log on the worker node to see why the Ambari agent stopped working.
You can configure the HTTP timeouts the Ambari agent uses for service tasks; see
https://github.com/apache/ambari/blob/trunk/ambari-agent/conf/unix/ambari-agent.ini
There is an HTTP timeout section that you can tune based on your network throughput.
On each node the file is at /etc/ambari-agent/conf/ambari-agent.ini.
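For example, on one of the nodes that shows as dead (paths are the usual defaults and may differ in your installation):
# check whether the agent process is running
ambari-agent status
# look for errors or lost-heartbeat messages
tail -n 200 /var/log/ambari-agent/ambari-agent.log
# locate the timeout-related settings mentioned above
grep -i timeout /etc/ambari-agent/conf/ambari-agent.ini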

Couchbase 2.5, 2 nodes with 1 replica: when one node fails, the service is no longer available

We are testing Couchbase with a two-node cluster with one replica.
When we stop the service on one node, the other one does not respond until we restart the service or manually fail over the stopped node.
Is there a way to maintain the service from the good node when one node is temporarily unavailable?
If a node goes down, then in order to activate the replicas on the other node you will need to manually fail it over. If you want this to happen automatically you can enable auto-failover, but in order to use that feature I'm pretty sure you must have at least a three-node cluster. When you want to bring the failed node back, you can just re-add it to the cluster and rebalance.
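As a sketch with couchbase-cli (cluster address and credentials are examples; check the CLI reference for your Couchbase version, as exact flag names can differ):
# manually fail over the stopped node so its replicas are promoted
couchbase-cli failover -c 192.168.0.1:8091 -u Administrator -p password --server-failover=192.168.0.2:8091
# later, re-add the recovered node and rebalance
couchbase-cli server-readd -c 192.168.0.1:8091 -u Administrator -p password --server-add=192.168.0.2:8091
couchbase-cli rebalance -c 192.168.0.1:8091 -u Administrator -p password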

How to use a master/slave configuration in ActiveMQ using Apache ZooKeeper?

I'm trying to set up a master/slave configuration using Apache ZooKeeper. I have only 2 application servers, on which I'm running ActiveMQ. As per the tutorial at
http://activemq.apache.org/replicated-leveldb-store.html we should have at least 3 ZooKeeper servers running. Since I have only 2 machines, can I run 2 ZooKeeper servers on one machine and the remaining one on the other? Also, can I run just 2 ZooKeeper servers and 2 ActiveMQ servers respectively on my 2 machines?
I will answer the ZooKeeper part of the question.
You can run two ZooKeeper nodes on a single server by specifying different port numbers. You can find more details at http://zookeeper.apache.org/doc/r3.2.2/zookeeperStarted.html under the Running Replicated ZooKeeper header.
Remember to use this for testing purposes only, as running two ZooKeeper nodes on the same server does not help in failure scenarios.
You can have just 2 ZooKeeper nodes in an ensemble. This is not recommended, as it is less fault tolerant: the failure of one ZooKeeper node makes the ZooKeeper cluster unavailable, since more than half of the nodes in the ensemble must be alive to service requests.
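As a sketch, a zoo.cfg for a 3-member ensemble where two members run on the same machine (host names, ports, and paths are examples; each instance needs its own dataDir, its own clientPort, and its own myid file):
# zoo.cfg for the first instance on machine1
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper-1
clientPort=2181
server.1=machine1:2888:3888
server.2=machine1:2889:3889
server.3=machine2:2888:3888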
If you just want a proof of concept of ActiveMQ, one ZooKeeper server is enough:
zkAddress="192.168.1.xxx:2181"
You need at least 3 ActiveMQ servers to validate your HA configuration. Yes, you can create 2 ActiveMQ instances on the same node: http://activemq.apache.org/unix-shell-script.html
bin/activemq create /path/to/brokers/mybroker
Note: don't forget to change the port numbers in the activemq.xml and jetty.xml files.
Note: when stopping one broker, I noticed that all of them stop.
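For reference, the replicated LevelDB store from the linked page is configured in each broker's activemq.xml roughly like this (addresses, ports, and paths are examples):
<persistenceAdapter>
  <!-- all brokers in the master/slave group point at the same zkAddress and zkPath -->
  <replicatedLevelDB
    directory="activemq-data"
    replicas="3"
    bind="tcp://0.0.0.0:61619"
    zkAddress="192.168.1.xxx:2181"
    zkPath="/activemq/leveldb-stores"/>
</persistenceAdapter>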