What is vMotion?

How are VM migration and datastore migration different from vMotion in virtualization?
I was reading about VM migration and datastore migration. I understand the difference between host and datastore migration, but what is vMotion, and how is it related to host or datastore migration?

This should help: https://www.techopedia.com/7/31098/technology-trends/virtualization/what-is-the-difference-between-vmotion-vm-migration-and-live-migration
From URL:
The difference between the three terms above is that vMotion™ is a trademarked name for a company product, while the other two terms are general terms referencing the methods of migrating virtual machines in a network.
With VM migration, IT engineers move virtual machines between physical machines without interaction.

Related

Redis setup for development environment

I am new to Redis and I am in the process of setting up Redis OSS in a development region for my project. I have a few questions about the deployment model which I want to have validated.
1) Will Redis run on just one node? Since my request is for a development region, I do not need high availability.
2) Can I create multiple databases to support various projects with the one instance I am setting up?
3) I am going with Red Hat Linux, since for production I plan to use Redis Enterprise considering its support model.
1) Yes - the redis-server always runs on one node.
2) Yes - you can create multiple logical databases on the same server using the SELECT command. That, however, is considered bad practice. You should, instead, use a different redis-server for each database. These redis-server processes CAN be run on the same physical server (see the sketch after this answer).
3) You can use the trial version of Redis Enterprise for development.
Disclaimer: I work for Redis Labs, home of OSS Redis and provider of the Enterprise product line.
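A minimal sketch of both options, assuming the redis-py client (the client choice and the second port are illustrations, not from the thread):

    import redis

    # Same redis-server, two logical databases selected by index (0-15 by default)
    project_a = redis.Redis(host="localhost", port=6379, db=0)
    project_b = redis.Redis(host="localhost", port=6379, db=1)

    project_a.set("greeting", "hello from project A")
    project_b.set("greeting", "hello from project B")
    print(project_a.get("greeting"), project_b.get("greeting"))

    # Preferred: a separate redis-server process per project, e.g. a second
    # process listening on an assumed port 6380
    # project_b = redis.Redis(host="localhost", port=6380, db=0)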
Redis provides three deployment models: single (the default), sentinel, and cluster. Whichever model you use, you can just start a redis instance to build a test/development setup.
If you want to support various projects with one instance, you can map each project to one database. Redis supports at most 16 databases by default. Your application then needs to SELECT the database assigned to its project before doing its work.
Redis is free and open-source software, so you do not need to consider an enterprise edition unless you buy a customized version from Redis Labs or another software company.
Thanks all ..
On creating multiple databases: per http://www.rediscookbook.org/multiple_databases.html it is mentioned that we can create multiple databases.
Since this is for a development/test region, I would think creating multiple databases should be fine, compared with production where we would need the best performance.
Redis is designed to be single-threaded, which has its own pros and cons. Redis's answer to multi-core is sharding - https://redis.io/topics/partitioning.
Yes, you can create multiple databases in the same redis instance, though it is not recommended. But since you are setting up a staging model, it's your call.
Hope this helps.
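A toy sketch of the sharding idea linked above (the ports and key names are assumptions, not from the thread): keys are hashed to one of several redis-server processes, which can run on the same box.

    import redis
    from zlib import crc32

    # Two redis-server processes; the second port (6380) is an assumed example.
    shards = [
        redis.Redis(host="localhost", port=6379),
        redis.Redis(host="localhost", port=6380),
    ]

    def shard_for(key: str) -> redis.Redis:
        # Simple hash-mod placement; real setups often use consistent hashing.
        return shards[crc32(key.encode()) % len(shards)]

    shard_for("project-a:user:42").set("name", "alice")
    print(shard_for("project-a:user:42").get("name"))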

Is there any relation between VMware vMotion and VMFS?

I was studying VMware's vSphere suite, a cloud computing virtualization platform.
I could not figure out whether there is any relation between vMotion and VMFS in the suite.
vMotion enables the live migration of running virtual machines from one physical server to another with zero downtime.
VMFS is a clustered file system that leverages shared storage to allow multiple physical hosts to read and write to the same storage simultaneously.
Is there any relation between them?
No.
As you mention, VMFS is the file system we use by default on "block" shared storage (i.e. LUNs). This allows us to have the same LUN mounted for read/write on multiple ESXi hosts, which is not allowed with many file systems.
vMotion is when we move a running VM from one ESXi host to another. We do this by copying the running memory state from one host to another. We then "stun" the VM for a short period of time and quickly move its virtual NIC to the new server. The VM "starts" on the far side in the same state, so it appears as if the VM has always been running. That is to say, we "move" the running VM even though we are actually just creating a new VM with exactly the same memory state and disk.
The only relationship is that if you have a VM whose VMDKs live in a datastore which is shared across multiple ESXi hosts, the vMotion process doesn't have to copy the VMDK, which makes the process much simpler and faster. Since VMFS is one way we can support shared storage, it is common to have VMDKs on VMFS-based datastores (in this case one datastore = one VMFS-formatted LUN). Since VMFS is our oldest shared storage technology, it is the most common and usually the best understood by our customers.
However, any shared storage will work just fine for vMotion, including vSAN, vVols, and NFS-based shared storage.

To virtualize or not to virtualize a bare metal server for a Kubernetes deployment

I'd like to deploy kubernetes on a large physical server (24 cores) and I'm uncertain as to a number of things.
What are the pros and cons of creating virtual machines for the k8s cluster rather than running it on bare metal?
I have the following considerations:
Creating VMs will allow for workload isolation. New VMs for experiments can be created and assigned to devs.
On the other hand, with k8s running on bare metal, a new namespace can be created for each developer for experimentation and they can run their code in it. After all, their code should be running in Docker containers.
Security:
Having VMs would limit the amount of access given to future maintainers, limiting the amount of damage that could be done. On the other hand, the primary task for any future maintainers would be adding/deleting nodes, and they would require bare metal access to do that.
Authentication:
At the moment devs would only touch the server when their code runs through the CI pipeline and their deployments are rolled out. But what about viewing logs? Could we set up tiered kubectl authentication to allow devs to only access whatever namespaces have been assigned to them (I believe this should be possible with the k8s namespace authorization plugin)?
A number of vms already exist on the server. Would this be an issue?
128 cores and doubts.... That is a lot of cores for a single server.
For Kubernetes, however, this is not relevant:
Kubernetes can use different-sized servers and utilize them to the maximum. However, if you combine the master processes and the node/worker processes on a single server, you might create unwanted resource issues. You can manage those with namespaces, as you already mention.
What we do is use continuous integration with namespaces in a single dev/QA Kubernetes environment, in which each change gets its own namespace (so we run many, many namespaces) and we run full environment deployments in those namespaces. A bunch of shell scripts are used to manage this. This works with a large server like yours as well as with smaller (or virtual) boxes. The benefit of virtualization for you could mainly be in splitting the large box into smaller ones, so that you can also use it for purposes other than just Kubernetes (workloads Kubernetes does not host: MS Windows, desktops, kernel modules for VPN purposes, etc.).
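A minimal sketch of that per-change namespace step, assuming the official Kubernetes Python client instead of shell scripts (the change name and label are hypothetical):

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster
    core = client.CoreV1Api()

    change_id = "feature-1234"  # hypothetical CI change identifier
    ns = client.V1Namespace(
        metadata=client.V1ObjectMeta(name=change_id, labels={"purpose": "ci-review"})
    )
    core.create_namespace(ns)

    # ...deploy the full environment into this namespace and run the tests...

    core.delete_namespace(name=change_id)  # tear down when the change is merged or abandoned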
I would separate dev and prod in the form of different VMs. I once had a webapp inside Docker which used too many threads, so the Docker daemon on the host crashed. Luckily it was limited to one host. You can protect against this by setting limits, but it's a risk: one mistake in dev could bring down prod as well.
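As a hedged sketch of the "setting limits" idea, using the Docker SDK for Python (the image name and the numbers are made up for illustration):

    import docker

    client = docker.from_env()
    container = client.containers.run(
        "example/webapp:latest",   # hypothetical image
        detach=True,
        pids_limit=256,            # cap processes/threads so a runaway app can't exhaust the host
        mem_limit="512m",          # cap memory
        nano_cpus=1_000_000_000,   # cap CPU to roughly one core
    )
    print(container.short_id, container.status)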
I think the answer is "it depends!", which is not really an answer. Personally, I would split up the machine using VMs and deploy that way. You've got better flexibility as to how much of the server's resources you carve out, and you can easily create new environments and destroy them just as easily.
Even if these VMs are really big, I think it's still easier to manage, especially given that you have existing VMs on the machine.
That said, there's no technical reason that you can't run a single-node server, but you may run into problems with downtime during upgrades (if that's an issue), and if that server needs to be patched or rebooted, your entire cluster is down.
I would look at your environment's needs for HA and uptime, as well as how you are going to deploy VMs (if you go that route), and decide what works best for you.

Migrate 100+ virtual machines from on-prem to Azure

Apologies if this is the wrong platform for this question.
If I want to migrate 100 VMs onto Azure VMs, what do I need to consider and how can I migrate them?
This is not a comprehensive answer but some things to consider are:
- Start with a thorough inventory of the VMs to migrate (see the sketch after this list). Issues to watch out for include:
- Any unsupported OS versions, including 32-bit.
- Large numbers of attached drives.
- Disk drives >1TB.
- Gen 2 VHDs.
- Application and network interdependencies which need to be maintained.
- Specific performance requirements (e.g. any VMs that would need Azure premium storage, SSD drives, etc.).
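A hedged sketch of such an inventory pass, assuming a VMware vSphere source and the pyVmomi library (the vCenter hostname and credentials are placeholders):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab shortcut; verify certificates in production
    si = SmartConnect(host="vcenter.example.local", user="inventory@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)

    for vm in view.view:
        cfg = vm.summary.config
        print(cfg.name,
              cfg.guestFullName,       # OS version - watch for unsupported or 32-bit guests
              cfg.numCpu,
              cfg.memorySizeMB,
              cfg.numVirtualDisks)     # large disk counts and >1TB disks need extra attention

    view.Destroy()
    Disconnect(si)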
In developing a migration strategy some important considerations are:
- How much downtime can you tolerate? To minimize downtime, look at solutions like Azure Site Recovery, which supports rapid switchover. If downtime is more flexible, there are more offline migration tools and scripts available.
- Understand whether to move to the new Azure Resource Manager or the Service Management deployment model. See https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/.
- Which machines to move first (pick the simplest, with the fewest dependencies).
- Consider cases where it may be easier to migrate the data or application to a new VM rather than migrate the VM itself.
A good forum to ask specific migration questions is: Microsoft Azure Site Recovery
Appending to sendmarsh's reply
The things you will have to consider are:
Version of the virtual environment, i.e. VMware or Hyper-V.
OS version, RAM size, OS disk size, OS disk count, number of data disks, capacity of each disk, virtual hard disk format, number of processor cores, number of NICs, processor architecture, network configuration such as IP addresses, and the generation type if the environment is Hyper-V.
I could have missed a few more things, like checking whether VMware Tools are installed. Some configurations are not supported; for example, an iSCSI disk will not be supported. Microsoft does not support all naming conventions for the machines, so be careful when setting the name, as that might affect things later.
A full list of prerequisites is at:
https://azure.microsoft.com/en-us/documentation/articles/site-recovery-best-practices/#azure-virtual-machine-requirements
Update: Using PowerShell to automate the migration would make your life easier.
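The answer above suggests PowerShell; as a rough equivalent in Python (an assumption, not from the thread), a post-migration sanity check with the Azure SDK could list what actually landed in the subscription (the subscription ID below is a placeholder):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    credential = DefaultAzureCredential()                      # picks up env vars, CLI login, etc.
    subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder subscription ID
    compute = ComputeManagementClient(credential, subscription_id)

    # List every VM in the subscription with its size and region, to compare against
    # the on-prem inventory gathered before the migration.
    for vm in compute.virtual_machines.list_all():
        print(vm.name, vm.location, vm.hardware_profile.vm_size)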

What are the advantages of running Docker on a VM?

Docker is an abstraction at the OS (kernel) level and below; a VM is an abstraction of the hardware. What is the point of running Docker on a VM (like Azure), apart from app portability? Should they not be hosting Docker directly on the hardware?
Docker doesn't provide effective isolation for kernel-level security exploits (there's only one ring 0, and it's shared across all containers). Thus, one could reasonably wish to have the additional isolation provided by a virtualization mechanism.
Keep in mind that much of Docker's value is not about security, but about containerization -- building and distributing portable applications in such a way as to ensure that coupling between layers occurs only where and how intended.
The advantage of a cloud system like Azure is that you can go online with your credit card and get a machine up and running in a few minutes. This is enabled by that machine being virtual. Also VMs let you share hardware across multiple users with hardware-level isolation.
If everything else was equal, i.e. you didn't need any of the features of a VM, then you would be correct that a physical machine should be used, as it will run more efficiently.