I received a rather puzzling question from my lecturer about Docker after giving a presentation on the differences between docker.io and virtual machines. I told him that the main purpose of docker.io is to deploy software applications without the need for a virtual machine's hypervisor.
The question is: is it possible for Docker to deploy images with a CentOS base to several servers with no OS installed?
Docker uses an existing OS kernel that it makes available to the containers, so: no, it cannot run on bare metal; you need an underlying OS to provide the kernel.
But the host does not have to be CentOS to run CentOS-based containers (as long as it provides a CentOS-compatible kernel).
In addition to that, the Docker software itself needs some userland utilities to run, too.
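To see the kernel sharing concretely, here is a minimal sketch (assuming a Linux host, say Ubuntu, with Docker installed; the centos:7 tag is just an example):

    uname -r                                 # the host's kernel version
    docker run --rm centos:7 uname -r        # prints the same kernel version
    docker run --rm centos:7 cat /etc/centos-release   # but a CentOS userland

The container sees a CentOS filesystem and tools, yet every system call goes to the host's kernel.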
I'm trying to understand the basic concepts of Docker, and lots of docs say that "Docker is not a virtual machine, but a process". To me, this sentence looks quite awkward, since as far as I know, a virtual machine itself also runs on a host OS, which makes it a process too.
Is there any big difference between the way a virtual machine works and the way other normal applications/processes do?
Docker is a brand name of a container management software system.
TL;DR:
Containers are a packaging concept.
VMs are a compatibility concept.
VMs are a security concept.
A container is not a process; it is an isolation of a collection of processes within a single system image. What is isolated? First and foremost, the path name space. Processes within a given container share a path name space, so that they agree that /usr/bin/env is the same thing. Two processes in different containers, or perhaps in the non-containerized environment, would not necessarily see the same file for /usr/bin/env. This functionality has been a feature of UNIX-derived systems for at least 40 years, provided by chroot().
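As a minimal sketch of that path-namespace isolation (assuming a statically linked busybox is available, as on many distributions; the paths are illustrative):

    mkdir -p /tmp/jail/bin
    cp /bin/busybox /tmp/jail/bin/            # static binary, needs no shared libraries
    sudo chroot /tmp/jail /bin/busybox sh     # inside this shell, / is really /tmp/jail
    # 'ls /' now shows only bin; the host's /usr/bin/env is not visible here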
More recently, containers have come to isolate things that are not in the filesystem namespace, like processes, user ids and network interfaces. In older chroot-based systems, running ps in a container would show processes that were not in that container, although special handling was hacked in to prevent a chrooted root user from gaining root access on the underlying system.
In these modern systems, not only is the pid space partitioned, but also user ids, so that root in a container does not correspond to root on the overall system.
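You can poke at these newer isolations directly with util-linux's unshare; a rough sketch, assuming a modern Linux kernel:

    sudo unshare --pid --fork --mount-proc ps ax    # lists only PIDs in the new namespace
    unshare --user --map-root-user id -u            # prints 0: root in here, unprivileged on the host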
All this is accomplished by controlling many features of the kernel within a single system image. The software that controls these features: Docker, amongst others.
A virtual machine is not part of a single system image. Each VM is its own logical computer, running its own kernel, shell, etc. With some careful configuration, you can make various files appear within many of the VMs, but that is no different from mounting file systems exported by a network file system.
Why choose one over the other? Containers share my OS and are handy for escaping the .so versionitis hell caused by conflicting software systems; I can package my software in a container, and it is isolated from whatever the running system is. I cannot, however, package the kernel I need; so if my software requires Ubuntu 14.04 and I am running 18.04, containers will not save me. Containers are a packaging concept.
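As a sketch of that packaging idea (myapp is a hypothetical binary built against 14.04-era libraries):

    # On an 18.04 host, run the app against a 14.04 userland:
    docker run --rm -v "$PWD:/work" -w /work ubuntu:14.04 ./myapp
    # Same kernel inside and out; only the libraries/userland differ.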
VMs are handy for supporting multiple versions or types of operating systems on a single computer. Since each VM runs its own system software, I can run my 14.04 app on my 18.04 system and no one is the wiser. VMs are a compatibility concept.
VMs are also handy as a security layer. Imagine that a web page has a js-bomb that can corrupt my kernel (I know, quite a stretch). If I run my browser in a container, I have corrupted my kernel. If I run it in a VM, I have corrupted that VM's kernel; I merely have to delete it, or rewind it, and the corruption is gone. VMs are a security concept.
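The delete-or-rewind part can be, for instance, a VirtualBox snapshot (the VM name here is made up):

    VBoxManage snapshot "browser-vm" take clean-state
    # ...browse something dubious inside the VM...
    VBoxManage controlvm "browser-vm" poweroff
    VBoxManage snapshot "browser-vm" restore clean-state   # the corruption is gone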
Currently running Windows 10 (native) and VMware Workstation 12 Player. I am running various LTS releases of Ubuntu in VMware.
I am wondering if there is a way for me to run SikuliX on my main OS, Windows 10, and have the script interact with a virtual machine, running an Ubuntu OS, that I have open.
The quickstart documentation on the download site isn't very specific about the limitations of SikuliX on this topic. It simply says that you can't run it on a headless system (which VMware is not), and you need to have a monitor - the only problem is that I have no idea if SikuliX considers VMware to be a legitimate monitor or not.
I am aware of the fact that you can install SikuliX on the virtual machine itself, but this is not preferable, as I would possibly have to reconfigure my VM settings to allocate more memory, or just deal with running the script at a slower pace.
Any help would be greatly appreciated.
The answer is yes: if you run SikuliX on a native host, it is possible to interact with the interface of the virtual machine the same as if running SikuliX on the virtual machine itself.
Now that I think about it, I should have probably tested this out before posting the question, but hey, if anyone has the same question as I do, now you know.
I'm working on an application that needs to be tested in an HPC cluster.
I'm thinking about using xCAT as a resource manager.
I don't have many hardware resources; I have one HP desktop and a MacBook laptop.
The question: is it possible to set up a virtual cluster (using VirtualBox or KVM) on one hardware resource?
Thanks,
The short answer here is yes, depending on how much memory and disk you have available on your one machine. I've done this numerous times on a MacBook Pro with 8 GB of RAM.
The long answer is that there is absolutely nothing magical about an HPC cluster. All you need to test basic parallel applications in a simulated cluster environment are two or more VMs which meet these criteria:
1. Same OS, as identical as possible.
2. Passwordless authentication (SSH key-based auth).
3. Same software stack in the same location on all nodes (see #4, or use rsync).
4. At least one shared filesystem, e.g. an NFS-mounted $HOME.
5. Shared network with name resolution configured (correct /etc/hosts on all nodes).
None of this requires job schedulers, provisioning tools, or any complex networking. You can find many NFS setup howtos to help get one node set up to share $HOME with the others; this might be the most complicated part. VirtualBox does a good job of setting up local networking.
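For reference, a rough sketch of those shared pieces (hostnames and addresses are made up; run as root where needed):

    # /etc/hosts entries on every node
    192.168.56.10 node1
    192.168.56.11 node2

    # on node1: export /home over NFS
    echo '/home 192.168.56.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra

    # on node2: mount it
    mount -t nfs node1:/home /home

    # passwordless ssh from node1 to the others
    ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
    ssh-copy-id node2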
On top of this you can layer a job scheduler like SLURM (highly recommended), provisioning tools like Warewulf or xCAT, parallel filesystems across the VMs (BeeGFS is easy to set up and a great introduction), etc. I have had a full-featured stateless cluster simulated on my MacBook Pro a number of times using tools from this list and VirtualBox VMs. It's a great way to learn about setting up an HPC cluster.
I like the Docker Hub with Dockerfiles idea very much.
Is there a similar way to get a small working Linux VirtualBox instance in a few commands, one that could also be controlled from the command line?
Vagrant is a great tool that does just what you want, and much more! It's a Ruby application written for fast and simple setup of minimal development environments.
By default it creates VirtualBox images, but it supports VMware and many others too. The whole setup of a box is managed by a single Vagrantfile! Your VM options, network settings, and provisioning are all done there.
Setting up a VirtualBox box is as easy as executing just two shell commands, as sketched below. Check out the Getting Started guide for an example using Ubuntu.
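Roughly (the box name here is one published by HashiCorp; any box from the catalog works):

    vagrant init hashicorp/bionic64   # writes a Vagrantfile in the current directory
    vagrant up                        # downloads the box and boots the VM
    vagrant ssh                       # then log in from the command line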
You can use a vast range of prepared images from the HashiCorp Atlas, or build your own.
Also, Vagrant doesn't limit you to one virtual machine per development setup; it enables you to model cluster setups on a single machine using multiple VMs. I myself use Docker for that part, though.
I've read this article:
How is Docker different from a normal virtual machine?
I have a huge intent to convert all my virtual images into Docker instances.
I can't see an angle where VMs still make sense...
So what's the point of VMs now? OK... maybe desktop virtualization, to get PulseAudio working?
Once Docker solves this, what else?
UPDATE
Okay... So I can't run Docker on "non-Linux" flavour hosts...
For one, you can't run an operating system within your container that is different from the OS on the host.
On Windows and Mac OS X, boot2docker is used to run Docker: it is VirtualBox running a reduced Linux OS, which in turn runs Docker.
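The workflow of that era looked roughly like this (boot2docker CLI, since superseded by Docker Machine and Docker Desktop):

    boot2docker init            # create the VirtualBox VM
    boot2docker up              # boot the reduced Linux OS
    $(boot2docker shellinit)    # point the docker client at the daemon in the VM
    docker run hello-world      # the container actually runs inside the VM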
The benefits of containers are clear and well known, but the disadvantages have been glossed over somewhat.
Specifically, you don't just need the same OS type (i.e. Linux); you get the same version of the kernel (including any mods you want). Since containers are an OS construct, there are resource islands per OS kernel version (and different implementations for Windows, BSD, or any non-Linux OS, if they exist).
VMs are secured with CPU-level isolation; containers are secured with OS-level isolation (with arguably a bigger attack surface).
There are many claims out there that containers become as slow and as big as VMs once you load up your container with everything you need for production and add lots of overlays, but these are all anecdotal, and no large-scale survey or trustworthy data is available yet.