What does provisioning mean for a VM (virtual machine)?

Does it differ from "setting up" a machine?
I can't really tell.
It seems to, if I read the Vagrant documentation, but there must be something more to it.
It says:
On the first vagrant up that creates the environment, provisioning is
run. If the environment was already created and the up is just
resuming a machine or booting it up, they will not run unless the
--provision flag is explicitly provided.
So some vagrant up runs need to do the "provisioning" and some do not.

Provisioning generally refers to the distribution and installation of software. In the context of a virtual machine, it refers to configuring what software and capabilities each instance of a virtual machine will contain. Think of it here as a virtual machine template, where each new VM instance that is spun up will contain the same software that you've asked to provision.
"Setting up" is a more generic term that appears to be used in the Vagrant documentation as referring to the creation and destruction of each virtual machine instance, e.g. "setting up" vs. "tearing down", as per the "up" and "destroy" commands. "setting up" here has nothing to do with what's actually configured in the VM instance itself, that's the provisioning part.
Put another way, when you set up a new virtual machine instance using the "up" command, it creates a basic virtual machine instance, then triggers the provisioning system to actually install the software you want into that instance. Here's the part of the documentation that hilights this:
Provisioners in Vagrant allow you to automatically install software,
alter configurations, and more on the machine as part of the vagrant
up process.
This is useful since boxes typically are not built perfectly for your
use case. Of course, if you want to just use vagrant ssh and install
the software by hand, that works. But by using the provisioning
systems built-in to Vagrant, it automates the process so that it is
repeatable.
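As a rough sketch of how that plays out on the command line (assuming a Vagrantfile with at least one provisioner configured):
vagrant up                # first run: creates the machine and runs the provisioners
vagrant up --provision    # boot/resume an existing machine and force the provisioners to run
vagrant provision         # re-run the provisioners on a machine that is already running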

Related

Can I run one WSL2 virtual machine instance on two systems?

I'm new to WSL2 and wondering if it's possible to run the same WSL2 Ubuntu instance on both my desktop and laptop.
Right now I am able to use the wsl --export and wsl --import methods to save and load the system to/from my portable hard drive. But these methods take a long time.
I notice that wsl --import loads a file named ext4.vhdx. Is there a way to load directly from this file?
Update v2.0:
I was able to get a workaround and it works great.
Thanks to "Booting from vhdx" (linked here), I was able to load directly from the vhdx file on my portable hard disk. Windows tracks its subsystems in the registry, so we can write our own entry (P.S.: make sure to get BasePath right; it starts with "\\\\?", or you will not be able to access the subsystem's filesystem from your host system):
Windows Registry Editor Version 5.00
[HKEY_USERS\【your SID here】\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss\{【UUID here】}]
"State"=dword:00000001
"DistributionName"="distribution name"
"Version"=dword:00000002
"BasePath"="vhdx folder path" 【 e.g. "\\\\?\\E:\\S061\\WSL\\ubuntu-20"】
"Flags"=dword:0000000f
"DefaultUid"=dword:000003e8
I suppose the best way to do this would be to store ext4.vhd on a network storage device accessible to both devices.
I have previously mentioned how to move ext4.vhd. You can check that out here.
Basically you need to export from one machine and import it on the other, while making sure the vhd file is placed where WSL can access it on the network storage.
Since this is *officially* not supported, expect some performance hits.
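For reference, the export/import round trip looks roughly like this (distribution name and paths are only examples):
wsl --export Ubuntu-20.04 Z:\wsl\ubuntu.tar                  # on the first machine; Z: is the shared/portable drive
wsl --import Ubuntu-20.04 C:\wsl\ubuntu Z:\wsl\ubuntu.tar    # on the second machine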
Another way would be to run WSL on one computer and SSH/remote desktop to it from another device on the network.
I'm of the strong belief that sharing the same ext4 vhd between two VMs simultaneously would be a bad idea. See this and this question on Unix & Linux StackExchange, including the part about ...
note that sharing LVs/partitions on a single disk between the servers at the same time is NOT very safe. You should only access whole disks from any of the servers at one time.
However, as dopewind's answer mentioned, you can access the WSL instance on one computer (probably the desktop) from another (e.g. the laptop). There are several techniques you can use:
If you have Windows 10 Professional or Enterprise on one of the computers, you can enable Remote Desktop, which allows you to access pretty much everything on one computer from another. RDP ("Remote Desktop Protocol") even works from other devices such as an iPad or Android tablet (or even a phone, although that's a bit of a small screen for a "desktop"). That said, there are better, more idiomatic solutions for WSL ...
You could enable SSH server on the Windows 10 computer with the WSL instance (instructions). This may sound counterintuitive to some people, since Linux itself running in the WSL instance also includes an SSH server (by default). But by SSH'ing from (for example) your laptop into your desktop's Windows 10, you can then launch any WSL instance you have installed (if you choose to install more than one) via wsl -d <distroName>. You also avoid a lot of the network unpleasantness in the next option ...
You could, as mentioned above, enable SSH on the WSL instance (usually something like sudo service ssh start) and then ssh directly into it. However, note that WSL2 instances are NAT'd, so there's a whole lot more hackery you have to do to get access to the network interface. There's a whole huge thread on the WSL Github about it. Personally, I'd recommend starting with the "Windows SSH Server" option mentioned above; you can worry about direct SSH access later if you need it.
Side note: Even though I have SSH enabled on my WSL instances, I still use Windows SSH to proxy to them, to avoid these networking issues.
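A minimal sketch of the Windows SSH server approach (host name, user name and distribution name are placeholders):
ssh myuser@desktop-pc     # SSH into the Windows OpenSSH server on the machine hosting WSL
wsl -d Ubuntu-20.04       # then launch the WSL distribution from that Windows session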

Isn't a virtual machine just a type of process?

I'm trying to understand the basic concepts of Docker, and lots of docs say that "Docker is not a virtual machine, but a process". To me, this sentence looks quite awkward, since as far as I know, a virtual machine itself also runs on the host OS, which makes it a 'process' too.
Is there any big difference between the way a virtual machine works and the way other normal applications/processes do?
Docker is a brand name of a container management software system.
TL;DR:
Containers are a packaging concept.
VMs are a compatibility concept.
VMs are a security concept.
A container is not a process; it is an isolation of a collection of processes within a single-system-image. What is isolated? First, and foremost, the path name space. Processes within a given container share a path name space, so they agree that /usr/bin/env is the same thing. Two processes in different containers, or perhaps in the non-containered environment, would not necessarily see the same file for /usr/bin/env. This functionality has been a feature of UNIX-derived systems for at least 40 years, via the chroot() system call.
More recently, containers have taken to isolating things that are not in that name space, like process ids, user ids and network interfaces. In older chroot-based systems, running ps in a container would show processes that were not in that container, although special handling was hacked in to prevent a chrooted root user from gaining root access on the underlying system.
In these modern systems, not only is the pid space partitioned, but also user ids, so that root in a container does not correspond to root on the overall system.
All this is accomplished by controlling many features of the kernel in a single-system-image. The software that controls these features: Docker, amongst others.
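A quick way to see the pid isolation in practice, assuming Docker is installed and can pull the alpine image:
ps -e | wc -l                # on the host: the full process list
docker run --rm alpine ps    # inside a container: only the container's own processes, with ps itself as PID 1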
A Virtual Machine is not part of a single-system-image. Each VM is its own logical computer, running its own kernel, shell, etc. With some careful configuration, you can make it so various files appear within many of the VMs; but that is no different from mounting file systems exported over a network file system.
Why choose one over the other? Containers share my OS, and are handy for escaping the .so versionitis hell caused by conflicting software systems; I can package my software in a container, and it is isolated from whatever the running system is. I cannot, however, package the kernel I need; so if my software requires Ubuntu 14.04 and I am running 18.04, containers will not save me. Containers are a packaging concept.
VMs are handy to support multiple versions or types of operating systems on a single computer. Since each VM runs its own system software, I can run my 14.04 app on my 18.04 system and no one is the wiser. VMs are a compatibility concept.
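The kernel-sharing point is easy to see from the kernel version (assuming Docker and the ubuntu:14.04 image are available): a container reports the host's kernel, while a VM would report its own:
uname -r                                  # kernel version on the host
docker run --rm ubuntu:14.04 uname -r     # same kernel version, even though the userland is 14.04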
VMs are also handy as a security layer. Imagine that a web page has a js-bomb that can corrupt my kernel (I know, quite a stretch). If I run my browser in a container, I have corrupted my kernel. If I run it in a VM, I have corrupted that VM's kernel -- I merely have to delete or rewind the VM, and the corruption is gone. VMs are a security concept.

Detect physical machine, as opposed to detecting a VM

Hypervisor presence can be detected via WMI like this:
select * from Win32_ComputerSystem
From that, read HypervisorPresent and if true, then it is present.
Equivalent WMIC command:
ComputerSystem get HypervisorPresent
This gives "true" for systems running under VMWare and VirtualBox and Hyper-V.
The problem is that it also gives "true", when run on physical machines when Hyper-V is installed, i.e. outside of a virtualized system.
So, my question is this:
Is it somehow possible to detect if the system is an actual physical machine even when Hyper-V is installed?
I had an idea of also checking if the Hyper-V service/role is installed, but that isn't enough since you can do nested virtualization with Hyper-V.
Check the "HKLM\SOFTWARE\Microsoft\Virtual Machine\Auto" key; most of the host details are stored under this key on the guest.
Using "HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters" you get more information about the host.
However, this only works for Hyper-V, so you need to validate twice: once for other hypervisors and a second time for Hyper-V.
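A minimal sketch of that check from the command line; these keys are only present inside a Hyper-V guest, so a "key not found" error suggests the system is not a Hyper-V guest:
reg query "HKLM\SOFTWARE\Microsoft\Virtual Machine\Auto"
reg query "HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters"
Combining this with the HypervisorPresent check from the question helps distinguish a Hyper-V guest from a physical Hyper-V host; other hypervisors typically need their own vendor-specific checks.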
There is also a class (linked here) that shares VM and host details. You can check this for more helpful information.

Using saltstack ssh

Is there a difference between using salt-proxy ssh and directly salt-ssh? I'm interested because according to documentation both aimed to run remote commands without agent installation on the end machine.
You can't simply use salt-ssh on a proxy minion; you would have to write your own custom SSH interface to the remote system, because the device behind a proxy minion may not support salt-ssh.
How to choose between salt-ssh and salt-proxy depends entirely on the type of the minion system.
As stated in the saltstack documentation - https://docs.saltstack.com/en/latest/topics/ssh/index.html and
https://docs.saltstack.com/en/latest/topics/proxyminion/index.html
For salt-ssh to be used, the remote system must have Python installed - that is one of the criteria. For example, controlling an Ubuntu machine from a CentOS master.
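As a small illustration, with the target name being hypothetical and assumed to be defined in the salt-ssh roster file:
salt-ssh 'web01' test.ping       # run a test over plain SSH; no minion installed on web01
salt-ssh 'web01' state.apply     # apply configured states to the same target over SSH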
As stated in the salt-proxy doc,
Proxy minions are a developing Salt feature that enables controlling
devices that, for whatever reason, cannot run a standard salt-minion.
Examples include network gear that has an API but runs a proprietary
OS, devices with limited CPU or memory, or devices that could run a
minion, but for security reasons, will not.

How to Install a Vagrant Box on a Bare Metal Machine?

Is there an established way to take a Vagrant box and use it as the operating system for a "bare metal" machine, i.e. a normal computer and not a hypervisor, without having to sit through an installation process?
Now I understand the common response will probably be "install an OS regularly and then use a proper configuration management tool like Puppet or Chef" but hear me out. Our IT organization would like to create a base Vagrant box with all security-related protocols and applications enforced. Then a configuration management tool like Puppet could install "useful" applications like databases and web servers on top of it.
This works best when a software developer wants to deploy a new utility to development environments or servers - they can write the Puppet code to install exactly what they want, which can be turned over to IT to run it on top of the validated Vagrant box to create a virtual machine server.
By hosting the Vagrant box internally, we can hide the security details from the developer while they write new Puppet code, they can test their Puppet code on the same environment they will run it on, and it will provision much faster during testing since the box is just downloaded once. Most "production" deployments will stay as Virtual Machines.
In rare circumstances, we may want a real, bare-metal server, not a VM, probably when we get new hardware to run more VMs or if the utility we need is very computationally intensive. It would be nice if the existing Vagrant box could be repurposed so bare-metal and virtual servers were indistinguishable.
EDIT: I found a post on askubuntu (https://askubuntu.com/questions/32499/migrate-from-a-virtual-machine-vm-to-a-physical-system) which seems to do what I want. Can anyone verify whether such a procedure would work on a Vagrant disk image, whether any cleanup would be necessary (like removing the Vagrant SSH keys), or whether it could be generalized to non-Ubuntu operating systems (since it uses a Live CD)?
A Vagrant box packaged for VirtualBox is essentially a virtual disk with metadata. Most likely it's going to have the VirtualBox tools and drivers installed, which won't do much good on a physical system. Not only that, the drivers for the physical system would need to be installed on the box image.
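To illustrate (the file name is just an example): a VirtualBox-provider .box file is simply a (usually gzip-compressed) tar archive, so you can inspect it directly:
tar -tf mybase.box                                     # typically lists box.ovf, a .vmdk disk image, metadata.json and perhaps a Vagrantfile
mkdir extracted && tar -xf mybase.box -C extracted/    # unpack it to get at the virtual disk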
What you're talking about doing is a good use case for some sort of "ghosting" software that simply copies blocks of data to a physical disk. There's really no advantage to using Vagrant here that I can see.