Uninstalling 2 instances on one node in a 2 node SQL Server 2012 cluster - sql-server-2012

I need some pointers on uninstalling two instances from a SQL Server 2012 cluster.
We have a SQL 2012 Active-Active cluster with Instance A on Node 1 and Instance B, Instance C and 3 services on Node 2. I need to uninstall Instance A and Instance B from Node 2. The services can be removed/disabled. So I will have an Active-Active cluster with Instance C on Node 1 and no instance on Node 2.
I have never uninstalled an instance on a cluster and we don't have a test system, so I have a couple of questions.
1. When there is no instance on Node 2, does it become an Active-Passive cluster?
2. Do I have to take the node offline before the uninstall?
3. Can I get any links on uninstalling an instance? I have googled but have only come up with uninstalling nodes. I found one that said it was about uninstalling an instance, but at the end of it, it uninstalled the node: https://www.mssqltips.com/sqlservertip/2172/uninstalling-a-sql-server-clustered-instance/
4. Am I right in understanding that you need the installation media to uninstall the instance, and that I cannot use the "uninstall a program" method for this?
5. I have a link below with a comment from Perry Whittle: https://www.sqlservercentral.com/Forums/Topic1470694-1550-1.aspx So does it mean that if I use the remove node option, it will remove only the instance BUT the node will still be there? (See the sketch below this list.)
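From what I have read so far, the removal is driven from the installation media's setup.exe rather than from Programs and Features, with something roughly like the following run on the node that the instance should be removed from (the instance name is a placeholder, and I have not been able to test this):
setup.exe /ACTION=RemoveNode /INSTANCENAME="InstanceA"
(with /Q or /QS added for an unattended run)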
I had initially disabled the two instances on the node, thinking that would be a safer option, but that gave rise to continuous alerts on our Nagios monitoring system, and Operations got errors during the monthly patching.
Thanks

Related

Setup docker-swarm to use specific nodes as backup?

Is there a way to set up docker-swarm to only use specific nodes (workers or managers) as fail-over nodes? For instance, if one specific worker dies (or if a service on it dies), only then should it use another node; before that happens, it should be as if the node wasn't in the swarm.
No, that is not possible. However, docker-swarm does have the features to build that up. Let's say that you have 3 worker nodes on which you want to run service A. Two of the three nodes will always be available and node 3 will be the backup.
1. Add a label to the 3 nodes, e.g. runs=serviceA, and constrain the service to that label. This makes sure that your service only runs on those 3 nodes (see the sketch after this list).
2. Make the 3rd node unable to schedule tasks by running docker node update --availability drain <NODE-ID>
3. Whenever you need your node back, run docker node update --availability active <NODE-ID>
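A rough sketch of the label and constraint side of this, with placeholder node IDs, service name and image:
docker node update --label-add runs=serviceA node-1
docker node update --label-add runs=serviceA node-2
docker node update --label-add runs=serviceA node-3
docker service create --name serviceA --replicas 2 --constraint 'node.labels.runs == serviceA' myimage:latest
docker node update --availability drain node-3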

Apache Ignite unable to find a deployed service

I've noticed a strange behaviour of Apache Ignite which occurs fairly reliably on my 5-node Apache Ignite cluster but can be replicated even with a two-node cluster. I use Apache Ignite 2.7 for .NET in a Linux environment deployed in a Kubernetes cluster (each pod hosts one node).
The problem is as follows. Assume we've got a cluster which consists of 2 Apache Ignite nodes, A and B. Both nodes start and initialize. A couple of Ignite services are deployed on each node during the initialization phase. Among them, a service named QuoteService is deployed on node B.
So far so good. The cluster works as expected. Then node B crashes or gets stopped for whatever reason and then restarts. All the Ignite services hosted on node B get redeployed. The node rejoins the cluster.
However, when a service on node A tries to call the QuoteService that is expected to be available on node B, an exception gets thrown with the following message: Failed to find deployed service: QuoteService. It is strange, as the line registering the service did run during the restart of node B:
services.DeployMultiple("QuoteGenerator", new Services.Ignite.QuoteGenerator(), 8, 2);
(deploying the service as singleton does not make any difference)
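(For clarity, the singleton variant referred to here would be along the lines of the following IServices call, e.g. a cluster-wide singleton:)
services.DeployClusterSingleton("QuoteGenerator", new Services.Ignite.QuoteGenerator());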
A restart of either node A or node B separately does not help. The problem can only be resolved by shutting down the entire Ignite cluster and restarting all the nodes.
This condition can be reproduced even when 5 nodes are running.
This bug report may look a bit unspecific, but it is hard to give concrete reproduction steps, as replication involves setting up at least two Ignite nodes and stopping and restarting them in sequence. So let me pose the questions this way:
1. Have you ever noticed such a condition, or have you received similar reports from other users?
2. If so, what steps can you recommend to address this problem?
3. Should I wait for the next version of Apache Ignite as I read that the service deployment mechanism is currently being overhauled?
UPD:
I am getting a similar problem on a running cluster even when I don't stop/start nodes. It seems to have a different genesis, so I will open another question on SA.
I've figured out what caused the described behavior (although I don't understand why exactly).
I wanted to ensure that the Ignite service is only deployed on the current node so I used the following C# code to deploy the service:
var services = ignite.GetCluster().ForLocal().GetServices();
services.DeployMultiple("FlatFileService", new Services.Ignite.FlatFileService(), 8, 2);
When I changed my code to rely only on a NodeFilter to limit the deployment of the service to a specific set of nodes and got rid of "GetCluster().ForLocal().", the bug disappeared. The final code is as follows:
var flatFileServiceCfg = new ServiceConfiguration
{
    Service = new Services.Ignite.FlatFileService(),
    Name = "FlatFileService",
    NodeFilter = new ProductServiceNodeFilter(),
    MaxPerNodeCount = 2,
    TotalCount = 8
};
var services = ignite.GetServices();
services.DeployAll(new[] { flatFileServiceCfg, /* ... other services ... */ });
It is still strange, however, that the old code worked until the topology changed.
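For completeness, a minimal sketch of what a node filter like ProductServiceNodeFilter can look like - it is just a serializable class implementing IClusterNodeFilter, and the attribute name here is only an example, not the one from my actual code:
using System;
using Apache.Ignite.Core.Cluster;

[Serializable]
public class ProductServiceNodeFilter : IClusterNodeFilter
{
    // Deploy only on nodes started with the example user attribute
    // "service.group" = "product" (set via IgniteConfiguration.UserAttributes).
    public bool Invoke(IClusterNode node)
    {
        return node.TryGetAttribute<string>("service.group", out var group)
               && group == "product";
    }
}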

Re-join cluster node after seed node got restarted

Let's imagine the following scenario. I have three nodes in my Akka cluster (nodes A, B and C). Each node is deployed to a different physical device inside a network.
All of those nodes are wrapped inside Topshelf Windows services.
Node A is my seed node; the other ones are simply 'worker' nodes with a port specified.
When I run the cluster and stop node (service) B or C and then restart them, the nodes rejoin with no issues.
I'd like to ask whether it's possible to handle another scenario: I stop the seed node (node A) while the other node services keep running, then I restart node service A, and I'd like nodes B and C to rejoin the cluster and make the whole ecosystem work again.
Is such a scenario possible to implement? If yes, how should I do that?
In an Akka.NET cluster any node can serve as a seed node for others as long as it's part of the cluster. "Seeds" are just a configuration concept, so you can define a list of well-known node addresses that you know are part of the cluster.
Regarding your case, there are several solutions I can think of:
A quite common approach is to define more than one seed node in the configuration, so that a single node doesn't become a single point of failure. As long as at least one of the configured seed nodes is alive, everything should work fine. Keep in mind that the seed nodes should be defined in exactly the same order in each node's configuration (see the configuration sketch after this list).
If your "worker" nodes have statically assigned endpoints, they can be used as seed nodes as well.
Since you can initialize the cluster programmatically from code, you can also use a third-party service for node discovery. You can use e.g. Consul for that - I've started a project which provides such functionality. While it's not yet published, feel free to fork it or contribute if it helps you.
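As a rough illustration of the first option, the relevant HOCON fragment could look like this (the system name, host names and ports are placeholders; the list must be identical, and in the same order, on every node):
akka {
  cluster {
    seed-nodes = [
      "akka.tcp://MyClusterSystem@node-a:4053",
      "akka.tcp://MyClusterSystem@node-b:4054"
    ]
  }
}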

RabbitMq queue being removed when node stopped

I have created two RabbitMQ nodes (say A and B) and I have clustered them. I have then done the following in the management UI:
(note that node A is initially the master)
On node A I created a queue (durable=true, auto-delete=false) and can see it shared on node B
Stopped node A, I can still see it on B (great)
Started node A again
Stopped node B, the queue has been removed from node A
This seems strange, as node B was not even involved in the creation of the queue.
I then tried the same from node B:
On node B I created a queue (durable=true, auto-delete=false) and can see it shared on node A
Stopped node A, I can still see it on B (great)
Started node A again
Stopped node B, the queue has been removed from node A
The situation I am looking for is that no matter which node is stopped, the queue is still available on the other node.
I just noticed that the policies I set up had been removed from each node... no idea why. Just in case somebody else is having the same issue, you can create policies using, e.g.:
rabbitmqctl set_policy ha-all "^com\.mydomain\." '{"ha-mode":"all","ha-sync-mode":"automatic"}'
It's immediately noticeable in the RabbitMQ Web UI as you can see the policy on the queue definition (in this case "ha-all").
See https://www.rabbitmq.com/ha.html for creating policies, and the Policy Management section of http://www.rabbitmq.com/man/rabbitmqctl.1.man.html for administration.
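To double-check that the policy is in place and actually matches the queue, something like the following should show it:
rabbitmqctl list_policies
rabbitmqctl list_queues name policy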

Couchbase 2.5, 2 nodes with 1 replica: 1 node fails and the service is no longer available

We are testing Couchbase with a two-node cluster and one replica.
When we stop the service on one node, the other one does not respond until we restart the service or manually fail over the stopped node.
Is there a way to maintain the service from the good node while one node is temporarily unavailable?
If a node goes down, then in order to activate the replicas on the other node you will need to manually fail it over. If you want this to happen automatically you can enable auto-failover, but in order to use that feature I'm pretty sure you must have at least a three-node cluster. When you want to add the failed node back, you can just re-add it to the cluster and rebalance.
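For reference, the manual failover and the auto-failover setting can also be driven from couchbase-cli, roughly along these lines (host, credentials and timeout are placeholders, and the exact option syntax may differ between versions, so check couchbase-cli --help):
couchbase-cli failover -c 192.168.0.1:8091 -u Administrator -p password --server-failover=192.168.0.2:8091
couchbase-cli setting-autofailover -c 192.168.0.1:8091 -u Administrator -p password --enable-auto-failover=1 --auto-failover-timeout=30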