Failover cluster manager for Analysis Services - SSAS

I have SSAS running on the primary node (and on the secondary node as well), with Failover Cluster Manager running and both nodes added. Now I need to make sure that if the SSAS service stops, it fails over to the secondary server (active/passive).
I added SSAS as a generic service to the role. I can see the owner of the role switch to the secondary node when the service stops, but the current host server stays on the primary.
Should I add the service as a resource type to the cluster as well? If so, it seems to require a DLL file. Any idea which SSAS DLL I should add?

Related

How to restart a Service Fabric Application

I have a gMSA service account running a stateless Service Fabric application. The account has recently been added as a member of a new security group. The application doesn't appear to be working, and I think it's because the user claims were loaded at application start-up. I've seen that to get this to work with Windows Services we need to restart the service (mmc -> Services, right-click, Restart). I would like to do something similar in Service Fabric.
I see the option of restarting the node, but that is a more heavy-handed approach than I want to use. This is in production and I want to scope the solution to the problem. The other applications on the node do not have an issue, so I would prefer not to bring them down.
Service Fabric Deactivate (pause) vs Deactivate (restart)?
Thanks in advance,
Greg
What you are looking for is the Restart-ServiceFabricDeployedCodePackage command.
The Restart-ServiceFabricDeployedCodePackage cmdlet ends the code package process, which restarts all of the user service replicas hosted in that process. This restart simulates code package process failures in the cluster, which tests the failover recovery paths of your service.
You can specify a code package, or you can specify a ReplicaSelector to restart the node and code package combination where the replica is hosted. This simplifies tests on the primary host node by not having to determine which Service Fabric node is the primary node before restarting that node.

What to check if automatic failover does not work on an Always On availability group?

On one of my environments, automatic failover does not work. What do I need to check? Please help me with this.
https://support.microsoft.com/en-us/help/2833707/troubleshooting-automatic-failover-problems-in-sql-server-2012-alwayson-environments
The symptoms when automatic failover is unsuccessful
If an automatic failover event is not successful, the secondary replica does not successfully transition to the primary role. Therefore, the availability replica will report that this replica is in Resolving status. Additionally, the availability databases report that they are in Not Synchronizing status, and applications cannot access these databases.
For example, SQL Server Management Studio may report that the secondary replica is in Resolving status because the automatic failover process was unable to transition the secondary replica into the primary role.

How does a GlassFish cluster find active IIOP endpoints?

I am curious about something and have been searching for an answer without any result. In the GlassFish documentation it is written:
If the GlassFish Server instance on which the application client is deployed participates in a cluster, the GlassFish Server finds all currently active IIOP endpoints in the cluster automatically. However, a client should have at least two endpoints specified for bootstrapping purposes, in case one of the endpoints has failed.
But I am asking myself how this list is created.
I've done some tests with a stand-alone client that runs in a JVM and makes RMI calls to an application deployed in a GlassFish cluster. I can see from the logs that the IIOP endpoint list is completed automatically and set as the com.sun.appserv.iiop.endpoints system property, but if I stop a server instance or start another one while the client is running, the list remains the one that was created when the JVM was started.
GlassFish clustering is managed by the GMS (Group Management Service), which usually uses UDP multicast but can use TCP where that is not available.
See section 4, "Administering GlassFish Server Clusters", in the HA Administration Guide (PDF).
The Group Management Service (GMS) enables instances to participate in a cluster by detecting changes in cluster membership and notifying instances of the changes. To ensure that GMS can detect changes in cluster membership, a cluster's GMS settings must be configured correctly.
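On the client side, the application-client documentation quoted in the question suggests supplying at least two bootstrap endpoints yourself. Below is a minimal sketch of doing that, assuming the GlassFish client libraries (for example gf-client.jar) are on the classpath; the host names and the JNDI name are illustrative:

```java
import javax.naming.Context;
import javax.naming.InitialContext;

public class IiopBootstrapClient {
    public static void main(String[] args) throws Exception {
        // Give the ORB at least two bootstrap endpoints, as the quoted docs
        // recommend; the host names here are illustrative.
        System.setProperty("com.sun.appserv.iiop.endpoints",
                "instance1.example.com:3700,instance2.example.com:3700");

        // The endpoint list is read when the ORB is initialised in this JVM,
        // which matches the behaviour described in the question: instances
        // started or stopped later are not reflected in the running client.
        Context ctx = new InitialContext();
        Object bean = ctx.lookup("java:global/myapp/MyRemoteBean"); // illustrative JNDI name
        System.out.println("Looked up: " + bean);
    }
}
```

Note that the endpoint list is established at ORB initialisation, which is consistent with the behaviour described in the question: instances stopped or started afterwards are not reflected in the running client.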

Weblogic migratable JMS consumer doesn't follow the service to the new managed server if the old server remains running

I have a JMS service targeted at a migratable target (using an Auto-Migrate Exactly-Once policy) in a cluster that consists of two managed servers. At any point in time the service is hosted on one of them, and the consumer (which is targeted at the cluster) is supposed to receive messages seamlessly no matter where the service is hosted.
When I manually switch the host of the migratable target (by clicking Migrate) without turning the hosting managed server off, the consumer fails to receive messages sent to the queues, unless I turn off the previously hosting managed server, forcing the consumer over to the new host.
I can rule out sender problems; I can see the messages in the queue right after they are sent.
I'll be grateful if anyone can advise on how to configure either the consumer or the migratable service to work seamlessly when migration happens.
I think this may just be a misunderstanding of how migration works. The docs state that Auto-Migrate Exactly-Once:
indicates that if at least one Managed Server in the candidate list is running, then the JMS service will be active somewhere in the cluster if servers should fail or are shut down (either gracefully or forcibly). For example, a migratable target hosting a path service should use this option so if its hosting server fails or is shut down, the path service will automatically migrate to another server and so will always be active in the cluster. Note that this value can lead to target grouping. For example, if you have five exactly-once migratable targets and only one server member is started, then all five migratable targets will be activated on that server member.
The docs also state:
Manual Service Migration: the manual migration of pinned JTA and JMS-related services (for example, JMS server, SAF agent, path service, and custom store) after the host server instance fails
Your server/service has neither failed nor shut down; you are forcing it to migrate while a healthy host is still running, so it has not met the criteria for migration.
See more here as well.
I have some experience with something reminiscent of what you're looking at. There is some WLS-specific capability for recognizing reconfiguration of JMS destinations as part of their clustered server design.
In one case I had to call a WLS-specific method: weblogic.jms.extensions.WLSession.setExceptionListener(). This was on their implementation of the JMS Session interface. This is analogous to the standard JMS Connection.setExceptionListener().
With this WLS-specific capability, the WLSession.setExceptionListener() callback would occur at a point where the consuming client should tear down and re-establish the connection / session / consumer in reaction to a reconfiguration (migration) that had happened.
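For reference, here is a minimal sketch of wiring that callback up, assuming the WebLogic client libraries are on the classpath; the URL and JNDI names are illustrative, and the reconnect logic is only a placeholder:

```java
import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;
import weblogic.jms.extensions.WLSession;

public class MigrationAwareConsumer implements ExceptionListener, MessageListener {

    // Illustrative cluster URL and JNDI names; substitute your own.
    private static final String CLUSTER_URL = "t3://host1:7001,host2:7001";
    private static final String CF_JNDI = "jms/MyConnectionFactory";
    private static final String QUEUE_JNDI = "jms/MyQueue";

    private Connection connection;

    public void start() throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, CLUSTER_URL);
        InitialContext ctx = new InitialContext(env);

        ConnectionFactory cf = (ConnectionFactory) ctx.lookup(CF_JNDI);
        Destination queue = (Destination) ctx.lookup(QUEUE_JNDI);

        connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // WLS-specific hook: an exception listener on the *session*, so the client
        // is notified of reconfiguration (such as the migratable target moving)
        // and can tear down and re-establish its consumer.
        ((WLSession) session).setExceptionListener(this);

        session.createConsumer(queue).setMessageListener(this);
        connection.start();
    }

    @Override
    public void onException(JMSException e) {
        // Rebuild the connection/session/consumer so the consumer re-attaches
        // to wherever the JMS service is now hosted.
        try { connection.close(); } catch (JMSException ignored) { }
        try { start(); } catch (Exception ex) { /* schedule a retry */ }
    }

    @Override
    public void onMessage(Message message) {
        // Process the message.
    }
}
```

The point is simply that the rebuild happens inside onException, so the consumer ends up attached to wherever the migratable target is currently hosted.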

How to load balance URL requests to a dedicated WebLogic node?

For a performance reason, I need to process one kind of request on a dedicated node. For example, I need to process all requests like http://hostname/report* on node1. So I added a rule to the load balancer to redirect http://hostname/report* to http://node1name/report*. But node1 asks me to log in again, even though I was already logged in at http://hostname/. How can I access it directly without logging in again?
As @JoseK mentioned, it looks like you don't have session replication and failover configured between the servers. You will need all of your application servers to be inside the same WebLogic cluster, and you will also have to pick the secondary session-replication node to be the destination for in-memory replication. You can dictate this by assigning the dedicated node to a specific machine, which is then selected as the secondary replication target for all cluster members.
Also, for session replication to work, all objects within your session have to implement Serializable.
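For example, here is a minimal sketch of a session attribute that is eligible for in-memory replication; the class and field names are illustrative:

```java
import java.io.Serializable;

// Every object stored in the HTTP session must be Serializable, otherwise
// WebLogic cannot copy it to the secondary node during in-memory replication.
public class UserProfile implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String userName;

    public UserProfile(String userName) {
        this.userName = userName;
    }

    public String getUserName() {
        return userName;
    }
}

// In a servlet or filter (illustrative):
//   request.getSession().setAttribute("profile", new UserProfile("jdoe"));
```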