How does VPS work?

Technically speaking, how does a virtual private server work? How can a VPS guarantee performance while sharing physical resources with other VPSs? (I'm assuming that one physical server can host many VPSs, not just a few.)

Let's begin with an important consideration:
Nowadays, 99% of servers are virtual.
Cloud providers will give you virtual servers, and even organizations managing their own private infrastructures will work with a layer of virtual instances on top of physical bare metal machines.
Of course you can still use a bare metal server directly, but it's usually not convenient.
Machines have become (relatively) inexpensive, and it's easy and cheap to have physical servers powerful enough to support clusters of smaller virtual instances.
It is true that there is a performance penalty, since part of the bare metal resources is spent just executing the virtual machines. However, there are huge gains in other areas: flexibility, scalability, and so on.
With this out of the way, let's answer your question.
From a software standpoint, a virtual server behaves just like a bare metal server. It doesn't matter that the underlying physical RAM and CPUs are shared with other virtual machines; your virtual server doesn't know about them.
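As a quick illustration: from inside a Linux guest the OS sees perfectly ordinary CPUs and RAM, and about the only giveaways that the hardware is virtual are a vendor string and a CPU flag set by the hypervisor. A minimal sketch (Linux-only; these paths are standard but not guaranteed on every distro):

```python
from pathlib import Path

def virtualization_hints():
    """Collect the few hints a Linux guest has that it is virtual."""
    hints = {}
    dmi = Path("/sys/class/dmi/id/sys_vendor")
    if dmi.exists():
        # e.g. "QEMU", "Xen", "Microsoft Corporation", "VMware, Inc."
        hints["dmi_vendor"] = dmi.read_text().strip()
    cpuinfo = Path("/proc/cpuinfo").read_text()
    # the "hypervisor" CPUID bit is set by the VMM, never by real hardware
    hints["hypervisor_flag"] = "hypervisor" in cpuinfo
    return hints

if __name__ == "__main__":
    print(virtualization_hints())
```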
When you say "guarantee performance", I imagine that you are comparing VPSs to traditional shared hosting plans. In fact, hosting companies often sell VPSs as a premium alternative to default shared hosting.
In that case a VPS is an improvement over shared hosting because you get a machine of your own, which you can use and customize however you want.
On a shared host, on the other hand, the server resources are shared with other users. This means that if your neighbour's PHP website experiences a spike in traffic, yours will be penalized.

Related

How to scale host server resources to run multiple applications at once?

I have to set up a relatively big system consisting of virtual machines, where I will need to run several different applications. The applications will be provided to me as black boxes, either in the form of software to be installed by myself (on a new VM), or in the form of a virtual machine that already contains everything for an application.
My task is to set up a host server and estimate its overall resources, which will then be distributed between all the virtual machines in my system. Some of the applications are more demanding than others, and I also have time deadlines, so it could happen that all the applications need to be executed simultaneously.
For each application I have a description of the resources it needs (but no corresponding time or performance estimates), so I know how many processors and processor cores a single app normally needs. But what should I do with all of them running simultaneously? Should I simply add the requirements together, or is there some common formula for sizing the host server's overall CPU, memory, and storage resources?
And one more question: is such a system, with real physical resources distributed between several VMs, already a cluster? Or not yet?
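For what it's worth, there is no universal formula; a common back-of-the-envelope approach is to sum the per-VM requirements, apply an overcommit ratio per resource, and leave headroom for the hypervisor. A sketch with made-up numbers (the VM specs and ratios below are illustrative assumptions, not recommendations):

```python
# Back-of-the-envelope host sizing from per-VM requirements.
vms = [
    {"name": "app-a", "vcpus": 4, "ram_gb": 8,  "disk_gb": 100},
    {"name": "app-b", "vcpus": 8, "ram_gb": 16, "disk_gb": 200},
    {"name": "db",    "vcpus": 4, "ram_gb": 32, "disk_gb": 500},
]

CPU_OVERCOMMIT = 2.0   # vCPUs per physical core; CPU time-slices well
RAM_OVERCOMMIT = 1.0   # RAM overcommit is risky; plan 1:1
HEADROOM = 1.2         # ~20% for the hypervisor itself and for spikes

cores = sum(v["vcpus"] for v in vms) / CPU_OVERCOMMIT * HEADROOM
ram = sum(v["ram_gb"] for v in vms) * RAM_OVERCOMMIT * HEADROOM
disk = sum(v["disk_gb"] for v in vms)  # storage is usually just summed

print(f"physical cores: {cores:.0f}, RAM: {ram:.0f} GB, disk: {disk} GB")
```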

Difference between bare metal (hypervisor based) and host virtualization types

What is the difference between bare-metal (hypervisor-based) and host virtualization types?
A well-known example of a hosted hypervisor is Oracle VM VirtualBox. Others include VMWare Server and Workstation, Microsoft Virtual PC, KVM, QEMU and Parallels.
Can I say that OpenStack relies on hosted virtualization, since KVM and QEMU are listed in that branch?
Bare metal: metal (hardware) plus the minimum things required to run an OS. So in that sense, a bare metal hypervisor is also "hosted".
Host virtualization: on an existing host, alongside your favourite music player and text editor, the hypervisor is installed just like any other app.
Bare Metal Servers
• Physical servers dedicated to a single tenant
• Bare metal servers provision quickly and can be custom-built
• They are physically isolated, powerful, and consistent, and they scale seamlessly
• They are faster than virtual servers
• Bare metal servers can offer better performance and security for certain workloads and are better suited to heavy workloads.
Host Virtual Servers
• Easily deployed; can be created on shared or dedicated infrastructure; customizable, quickly provisioned, scalable, and seamlessly integrated
• Some of the deployment options: public virtual servers, transient virtual servers, and dedicated virtual servers
• Host virtual servers provision more quickly and offer a more flexible and scalable environment than bare metal.
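On the OpenStack question above: management stacks like OpenStack usually drive whichever hypervisor sits underneath through an abstraction layer such as libvirt, so the hosted-vs-bare-metal distinction is largely invisible to them. A small sketch using the libvirt Python bindings (assumes the libvirt-python package and a local libvirt daemon; the connection URI is an example):

```python
import libvirt  # pip install libvirt-python

# "qemu:///system" targets KVM/QEMU; "xen:///system" would target Xen
conn = libvirt.open("qemu:///system")
print("driver:  ", conn.getType())     # hypervisor driver name, e.g. "QEMU"
print("version: ", conn.getVersion())  # hypervisor version as an integer
conn.close()
```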

What are the advantages of running Docker on a VM?

Docker is an abstraction of the OS (kernel) and below; a VM is an abstraction of the hardware. What is the point of running Docker on a VM (like Azure), apart from app portability? Shouldn't Docker be hosted directly on the hardware?
Docker doesn't provide effective isolation for kernel-level security exploits (there's only one ring 0, and it's shared across all containers). Thus, one could reasonably wish to have the additional isolation provided by a virtualization mechanism.
Keep in mind that much of Docker's value is not about security, but about containerization -- building and distributing portable applications in such a way as to ensure that coupling between layers occurs only where and how intended.
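The shared-kernel point is easy to see for yourself: a container reports the host's kernel version, because there is no other kernel. A small sketch (assumes Docker is installed and can pull the alpine image):

```python
import platform
import subprocess

host_kernel = platform.release()
container_kernel = subprocess.check_output(
    ["docker", "run", "--rm", "alpine", "uname", "-r"], text=True
).strip()

# Both print the same version: every container runs on the host's kernel,
# which is why a kernel-level exploit escapes all containers at once.
print("host:     ", host_kernel)
print("container:", container_kernel)
```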
The advantage of a cloud system like Azure is that you can go online with your credit card and get a machine up and running in a few minutes. This is enabled by that machine being virtual. Also VMs let you share hardware across multiple users with hardware-level isolation.
If everything else was equal, i.e. you didn't need any of the features of a VM, then you would be correct that a physical machine should be used, as it will run more efficiently.

Best practices for setting up a development environment

I use Linux as my primary OS. I need some suggestions on how I should set up my desktop and development environment. I mostly work on .NET and Drupal, but sometimes on other LAMP products, as well as C/C++ and Qt. I'm also interested in mobile (Android...) and embedded development.
Currently I install everything on my main OS, even things I use only a little. I use VMs a little (for a LAMP server).
Should I use a separate VM for each kind of development (like one for .NET/Mono, another for C++, one for mobile and one for the db only, one for xyz things, etc.), or keep the primary development environment on the main OS and move the others into VMs?
My constraints:
the main OS should not be messed up
keep things easy to organize (must)
performance should be optimal (optimal settings for best performance of components)
I'm interested to know how others are doing this.
There are both pros and cons with VMs.
Pros:
portability: you can move an image to a different server
easy backup (but lengthy)
replication (a new member joins the team)
Cons:
performance
hardware requirements
size of backups (20-40 GB per VM ...)
management of backed-up images (what the difference between them is, is not obvious)
keeping all images up to date (patching / Windows updates)
For your scenario, I would create a base VM with the core OS and shared components (web server, database), replicate it, and install the specific tools into separate VMs. If you combine tools within one VM, you may end up with the same mess as when using the base OS; the advantage is that it is much easier to get rid of it ;-)
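A sketch of that base-then-replicate idea using VirtualBox's CLI (the VM and toolchain names are examples; assumes a "base-dev" VM exists and VBoxManage is on PATH):

```python
import subprocess

# Clone the shared base VM once per toolchain, then install the
# toolchain-specific software inside each clone.
for toolchain in ["dotnet", "cpp", "mobile", "db"]:
    subprocess.run(
        ["VBoxManage", "clonevm", "base-dev",
         "--name", f"{toolchain}-dev", "--register"],
        check=True,
    )
```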
Optimal performance != using VMs
If you need to use VMs anyway, then yes: it could be better to use a separate VM for each thing that needs one, unless you need to run more than one at once.
Now that OCI containers are stable and well supported, using those through docker, podman or other similar tool is an increasingly popular option.
They are isolated, but run under the same kernel, so:
they are almost as portable as virtual machines,
like virtual machines they can have their own virtual IP addresses, so they can run services not visible from the outside and without occupying ports on the host, but
they don't reserve any extra space on disk or in memory the way virtual machines do,
they are not slowed down by any virtualization layer, and
mounting directories from the host is easy and does not require any special support.
The usual approach is to have the checkout in the developer's normal home directory and mount it into containers for building, testing and running.
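A minimal sketch of that workflow (the image name and paths are examples, not prescriptions; assumes Docker on the host):

```python
import os
import subprocess

checkout = os.path.expanduser("~/src/myproject")  # checkout stays on the host
subprocess.run([
    "docker", "run", "--rm", "-it",
    "-v", f"{checkout}:/workspace",  # bind-mount the checkout into the container
    "-w", "/workspace",              # start in the mounted directory
    "gcc:13",                        # any toolchain image would do
    "make", "test",
], check=True)
```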
Building inside containers is also supported by the Remote Development extension for Visual Studio Code.

What are the benefits of a Hypervisor VM?

I'm looking into using virtual machines to host multiple OSes, and I'm looking at the free solutions, of which there are a lot. I'm confused about what a hypervisor is and why it is different from, or better than, a "standard" virtual machine. By "standard" I mean the benchmark virtual machine, VMWare Server 2.0.
For a dual core system with 4 GB of RAM, that would be capable of running a max of 3 VMs. Which is the best choice, hypervisor or non-hypervisor, and why? I've already read the Wikipedia article, but the technical details are over my head. I need a basic answer about what these different VM flavors can do for me.
My main question relates to how I would do testing on multiple environments. I am concerned about the isolation of OSes, so I can test applications on multiple OSes at the same time. Also, which flavor gives an experience closer to how a real machine operates?
I'm considering the following:
(hypervisor)
Xen
Hyper-V
(non-hypervisor)
VirtualBox
VMWare Server 2.0
Virtual PC 2007
*The classifications of the VMs I've listed may be incorrect.
The main difference is that Hyper-V doesn't run on top of the OS; instead, it runs, along with the system, on top of a thin layer called the hypervisor. A hypervisor is computer hardware platform virtualization software that allows multiple operating systems to run on a host computer concurrently.
Many other virtualization solutions use other techniques, like emulation. For more details, see Wikipedia.
Disclaimer: everything below is (broadly) my opinion.
It's helpful to consider a virtual machine monitor (a hypervisor) as a very small microkernel. It has very few jobs beyond accessing the underlying hardware, such as monitoring event channels and granting guest domains access to specific resources, all while enforcing some kind of scheduler.
All guest machines are completely oblivious to the others; the isolation is real. Guests do not share memory with the privileged guest (or with each other). So, in this instance, you could (roughly) think of each guest (even the privileged one) as a process, as far as the VMM is concerned. Typically, the first guest gets extra privileges so that it can manage the rest. This is the ideal technology to use when virtual machines are put into production and exposed to the world.
Additionally, some guests can be patched to become aware of the hypervisor (paravirtualization), significantly increasing their performance.
On the other hand, we have things like VMWare and QEMU, which rely on the host kernel to give them access to bare metal and enough memory to exist. They assume that all guests need to be presented with a complete machine, and the limits put on the process presenting these (more or less) become the limits of the virtual machine. I say more or less because device mapper QoS is not commonly implemented. This is the ideal solution for trying code on some other OS, or some other architecture. A lot of people will call QEMU, Simics or even sometimes VMWare (depending on the product) a 'simulator'.
For production roll-outs I use Xen; for testing something I just cross-compiled, I use QEMU, Simics or VirtualBox.
If you are just testing / rolling out new code on various operating systems and architectures, I highly recommend the second kind. If your need is introspection (i.e. watching guest memory change as bad programs run in a guest) ... I'd need more explanation before answering.
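As an aside on the QEMU-for-cross-compiled-code workflow: user-mode emulation is often enough for a quick test of a single binary. A minimal sketch (the binary name and sysroot path are example assumptions; assumes the qemu-user package is installed):

```python
import subprocess

# Run an ARM binary on an x86 host; -L points QEMU at the ARM sysroot
# so the dynamic linker and shared libraries can be found.
subprocess.run(
    ["qemu-arm", "-L", "/usr/arm-linux-gnueabi", "./hello-arm"],
    check=True,
)
```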
Benefits of a hypervisor:
A hypervisor separates virtual machines logically, assigning each its own slice of the underlying computing power, memory, and storage, thus preventing the virtual machines from interfering with each other.
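A toy illustration of those slices for CPU (the weights and core count are made-up numbers; real schedulers, such as Xen's credit scheduler, are far more sophisticated):

```python
PHYSICAL_CORES = 16

# Scheduling weight per guest; a guest's guaranteed share under full
# contention is proportional to its weight.
guests = {"web": 4, "db": 8, "batch": 4}
total_weight = sum(guests.values())

for name, weight in guests.items():
    share = weight / total_weight * PHYSICAL_CORES
    print(f"{name}: ~{share:.1f} cores guaranteed when all guests are busy")
```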