Does anyone know of a free solution to perform failover for VMware ESXi?

I would like to setup a free/custom solution to perform failover for VMware ESXi.
The setup is as follows:
2x Physical servers each with independent storage.
Each physical server runs 2x Win2k8 Enterprise guest VMs.
If a physical server fails completely, we want the other (for convenience's sake, call it the slave) to resume operation.
For this to happen, we need some form of continuous replication of the virtual machines; if the primary server fails, the slave must take over the IP, start the virtual machines, and continue operation.
I am new to VMware ESXi, but I am researching alternatives to the expensive VMware licensing for failover.
Thanks.

Take a look at Veeam Backup & Replication.
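If you end up rolling your own instead, the takeover side can be scripted with VMware's free PowerCLI once the VM files have been replicated to the slave's datastore (Veeam replication jobs or a scripted copy can handle that part). A minimal sketch, where the host name, credentials, and VMX path are placeholders:

# Hypothetical takeover script, run against the surviving (slave) host.
Connect-VIServer -Server esxi-slave.example.com -User root -Password 'xxxx'
# Register a replicated VM from the slave's local datastore, then power it on.
$vm = New-VM -VMFilePath "[datastore1] win2k8-a/win2k8-a.vmx" -VMHost (Get-VMHost)
Start-VM -VM $vm

Detecting the failure and deciding when to trigger this, without ending up with both hosts running the same VMs, is the hard part; that is exactly what the paid HA products solve for you.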

Hyper-V Anti-Affinity

I'm trying to set up anti-affinity in a clustered Hyper-V setup but am struggling to get the VMs to stay apart. It seems that the anti-affinity rules are simply not honored.
Setup:
3 x Hyper-V servers (server1, server2, server3)
3 x VMs (web_test_1, web_test_2, web_test_3)
Attempt 1:
I ran the below script on server1:
# Put all three VMs in the same anti-affinity class
$WEBAntiAffinity = New-Object System.Collections.Specialized.StringCollection
$WEBAntiAffinity.Add("WEB Servers")
(Get-ClusterGroup -Name WEB_TEST_1).AntiAffinityClassNames = $WEBAntiAffinity
(Get-ClusterGroup -Name WEB_TEST_2).AntiAffinityClassNames = $WEBAntiAffinity
(Get-ClusterGroup -Name WEB_TEST_3).AntiAffinityClassNames = $WEBAntiAffinity
# Confirm the class names were applied to each group
Get-ClusterGroup | Select-Object -Property Name,AntiAffinityClassNames
All three VMs were powered off before I ran the above, and all had been created on server1.
When I powered them on, they all started and stayed on server1.
Attempt 2:
I ran the same script on the additional servers (server2 and server3).
I powered off the VMs and powered them back on again; once more they all remained on server1.
Attempt 3:
After running the script on all the servers, I restarted the servers one by one. The VMs moved between nodes as normal during the reboots, but once everything was back up I stopped all the VMs, moved them to server1, and started them again.
My assumption was that two of them would move before powering on, but that didn't happen; they all started on server1.
Does anyone know what I'm doing wrong here? Am I missing some prerequisites? I haven't been able to find many examples online.
This is not called out specifically in Microsoft's documentation, but to make anti-affinity rules work in Hyper-V you also need System Center Virtual Machine Manager (SCVMM). SCVMM reads the anti-affinity rules (and other rules such as affinity, priority, etc.) and performs the migrations to apply those rules.
This is analogous to the VMware stack, where ESXi is the hypervisor, the VM configuration contains the rules, and vCenter (through DRS) actually applies the rules.
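If bringing in SCVMM is not an option, a crude workaround is to enforce the separation yourself with the FailoverClusters module: find groups in the class that share a node and live-migrate the extras. A minimal sketch, assuming at least as many cluster nodes as VMs in the class (class and VM names follow the question):

# Hypothetical enforcement script; run after VM startup or on a schedule.
Import-Module FailoverClusters
$vms = Get-ClusterGroup | Where-Object { $_.AntiAffinityClassNames -contains "WEB Servers" }
$usedNodes = @()
foreach ($vm in $vms) {
    if ($usedNodes -contains $vm.OwnerNode.Name) {
        # This node already owns a VM from the class; pick an unused node and migrate.
        $target = Get-ClusterNode | Where-Object { $usedNodes -notcontains $_.Name } | Select-Object -First 1
        if ($target) {
            Move-ClusterVirtualMachineRole -Name $vm.Name -Node $target.Name
            $usedNodes += $target.Name
        }
    } else {
        $usedNodes += $vm.OwnerNode.Name
    }
}

This only reproduces the placement effect; it does none of the other things SCVMM's placement engine does.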

Redis on Azure VM vs Azure Redis Cache

We have evaluated both Redis installed on an Azure VM and Azure Redis Cache; both work the same, and I can't see a difference in performance. Has anyone used both in a large-scale application? If so, can you share the performance and durability of both?
I have analysed the following:
Monitoring
In-zone replication
Multi-zone replication
Auto fail-over
Data persistence
Backup
Pricing
SSL Authentication & Encryption
On all of the above, Azure Redis Cache has the upper hand.
Still, I want to make sure which one is best.
Does using a VM have any bottlenecks?
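One way to put numbers on the comparison is redis-benchmark, which ships with Redis: run it from a client VM in the same region against both endpoints. The hostnames below are placeholders, and testing Azure Redis Cache this way assumes its non-SSL port 6379 has been enabled temporarily:

# Against Redis on the Azure VM
redis-benchmark -h my-redis-vm.example.com -p 6379 -t get,set -n 100000 -c 50 -d 1024
# Against Azure Redis Cache (access key used as the password)
redis-benchmark -h mycache.redis.cache.windows.net -p 6379 -a <access-key> -t get,set -n 100000 -c 50 -d 1024

Compare the requests-per-second and latency figures; differences often come down to network proximity and VM size rather than Redis itself.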
I would go for Azure Redis Cache, mainly because it's fully managed. At the end of the day you do have nodes under the hood, but why should you care about maintaining a VM? Hotfixes, patches, security updates, and so on.
I would ask the question the other way around: why should you use VMs at all?
MG

Unison sync across more than 2 computers

I am currently using Unison across 2 computers (a server and a laptop). I need to create another connection so that I can periodically back up my data from the server.
laptop <--> server -> backup
Here the connection to backup from server can be unidirectional. Is there any way to accomplish this?
This is a very common thing to do. When setting up Unison across multiple machines, you should prefer a star topology. So, if you can, run another instance of Unison, along with any backup-related scripts, on the machine where you store your backups. It should look about the same as your setup on your laptop (depending on where your backups are stored).
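For the unidirectional server -> backup leg, Unison's -force option makes one replica always win, which effectively turns the sync into a push. A minimal sketch, run from the backup machine, with the paths and host name as placeholders:

# Always prefer the server's copy, so changes flow server -> backup only.
unison /srv/data ssh://server//srv/data -force ssh://server//srv/data -batch

Since this leg never needs conflict resolution, a plain rsync from the server to the backup machine would also do the job.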

Just how volatile is a Bluemix Virtual Server's own storage?

The Bluemix documentation leads a reader to believe that the only persistent storage for a virtual server is Bluemix Block Storage, and that a virtual server's own storage will not persist across restarts or failures. In practice, however, this doesn't seem to be the case, at least as far as restarts are concerned. We haven't suffered any virtual server outages yet.
So we want a clearer understanding of the rationale for separating the virtual server's own storage from its attached Block Storage.
Use case: I am moving our Git server and a couple of small LAMP-based assets to a Bluemix Virtual Server as we simultaneously develop new mobile apps using Cloud Foundry. In our case, we don't anticipate scaling up the work that the virtual server does any time soon. We just want a reliable new home for an existing website.
Even if you separate application files and databases out into block storage, re-provisioning the virtual server in the event of its loss is not trivial, even when the provisioning is automated with Ansible or the like. So we are hoping not to have to regularly re-provision the non-persistent storage of a Bluemix Virtual Server.
The Bluemix doc you reference is a bit misleading and is being corrected. The virtual server's storage on local disk does persist across restart, reboot, suspend/resume, and VM failure. If such was not the case then the OS image would be lost during any such event.
One of the key advantages of storing application data in a block storage volume is that the data will persist beyond the VM's lifecycle. That is, even if the VM is deleted, the block storage volume can be left intact to preserve the data. As you mentioned, block storage volumes are often used to back DB servers so that the user data is isolated, which lends itself well to providing a higher class of storage specifically for application data, backup, recovery, etc.
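On the VM side, putting application data on an attached block storage volume is ordinary Linux administration; a minimal sketch, where the device name /dev/xvdb is an assumption that varies by environment:

# Format the attached volume (first time only), mount it, and persist the mount.
mkfs.ext4 /dev/xvdb
mkdir -p /data
mount /dev/xvdb /data
echo '/dev/xvdb /data ext4 defaults,nofail 0 2' >> /etc/fstab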
In use cases where VM migration is desired the VMs can be set up to boot from a block storage volume, which enables one to more easily move the VM to a different hypervisor and simply point to the same block storage boot volume.
Based on your use case description you should be fine using VM local storage.

Switching state server to another machine in cluster

We have a number of web apps running on IIS 6 in a cluster of machines. One of those machines is also the state server for the cluster. We do not use sticky IPs.
When we need to take down the state server machine, the entire cluster has to be offline for a few minutes while the role is switched from one machine to another.
Is there a way to switch a state server from one machine to another with zero downtime?
You could use Velocity, which is a distributed caching technology from Microsoft. You would install the cache on two or more servers. Then you would configure your web app to store session data in the Velocity cache. If you needed to reboot one of your servers, the entire state for your cluster would still be available.
You could use the SQL server option to store state. I've used this in the past and it works well as long as the ASPState table it creates is in memory. I don't know how well it would scale as an on-disk table.
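For reference, pointing ASP.NET at SQL Server session state is a two-step change; the server name SQLHOST below is a placeholder:

REM Create the session state database; the default -sstype t keeps the
REM session tables in tempdb (the in-memory setup described above).
aspnet_regsql.exe -S SQLHOST -E -ssadd -sstype t

Then in web.config:

<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=SQLHOST;Integrated Security=SSPI;"
                timeout="20" />
</system.web>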
If SQL server is not an option for whatever reason, you could use your load balancer to create a virtual IP for your state server and point it at the new state server when you need to change. There'd be no downtime, but people who are on your site at the time would lose their session state. I don't know what you're using for load balancing, so I don't know how difficult this would be in your environment.