How to Install a Vagrant Box on a Bare Metal Machine?

Is there an established way to take a Vagrant box and use it as the operating system for a "bare metal" machine, i.e. a normal computer and not a hypervisor, without having to sit through an installation process?
Now I understand the common response will probably be "install an OS regularly and then use a proper configuration management tool like Puppet or Chef" but hear me out. Our IT organization would like to create a base Vagrant box with all security-related protocols and applications enforced. Then a configuration management tool like Puppet could install "useful" applications like databases and web servers on top of it.
This works best when a software developer wants to deploy a new utility to development environments or servers: they can write the Puppet code to install exactly what they want, which can be turned over to IT to run on top of the validated Vagrant box to create a virtual machine server.
By hosting the Vagrant box internally, we can hide the security details from developers while they write new Puppet code, let them test their Puppet code on the same environment they will run it on, and provision much faster during testing since the box is only downloaded once. Most "production" deployments will stay as virtual machines.
In rare circumstances, we may want a real, bare-metal server, not a VM, probably when we get new hardware to run more VMs or if the utility we need is very computationally intensive. It would be nice if the existing Vagrant box could be repurposed so bare-metal and virtual servers were indistinguishable.
EDIT: I found a post on askubuntu (https://askubuntu.com/questions/32499/migrate-from-a-virtual-machine-vm-to-a-physical-system) which seems to do what I want. Can anyone verify whether such a procedure would work on a Vagrant disk image, whether there would be necessary cleanup (like the Vagrant SSH keys), or whether it could be generalized to non-Ubuntu operating systems (since it uses an Ubuntu Live CD)?

A Vagrant box packaged for VirtualBox is essentially a virtual disk with metadata. Most likely it's going to have the VirtualBox tools and drivers installed, which won't do much good on a physical system. Not only that, the drivers for the physical system would need to be installed on the box image.
What you're talking about doing is a good use case for some sort of "ghosting" software that simply copies blocks of data to a physical disk. There's really no advantage to using Vagrant here that I can see.
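If you do want to try it anyway, the mechanics are roughly this (a sketch only; the box filename, disk image name, and target device below are hypothetical, and dd will destroy whatever is on the target):
# A .box file is just a (gzipped) tar archive; unpack it to get the virtual disk
mkdir box && tar -C box -xf mybox.box
# Convert the VMDK to a raw image (qemu-img ships with QEMU)
qemu-img convert -f vmdk -O raw box/box-disk001.vmdk disk.raw
# Block-copy the raw image onto the physical disk (destructive; check the device twice)
sudo dd if=disk.raw of=/dev/sdX bs=4M status=progress
Even then you would still need to reinstall the bootloader, strip the VirtualBox guest additions, replace the well-known insecure Vagrant SSH key, and install drivers for the physical hardware.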

Related

Can I run one WSL2 virtual machine instance on two systems?

I'm new to WSL2 and wondering if it's possible to run the same WSL2 Ubuntu instance on both my desktop and laptop.
Right now I am able to use the wsl --export and wsl --import commands to save and load the system to/from my portable hard drive, but these methods take a long time.
I notice that wsl --import loads a file named ext4.vhdx. Is there a way to load directly from this file?
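For reference, the slow round trip I mean is the documented one (the distro name and paths are just examples):
wsl --export Ubuntu D:\backup\ubuntu.tar
wsl --import Ubuntu C:\wsl\ubuntu D:\backup\ubuntu.tar --version 2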
Update v2.0:
I was able to get a workaround and it works great.
Thanks to Booting from vhdx here, I was able to load directly from the vhdx file on my portable hard disk. Windows tracks its subsystems through the registry, so we can write our own entry (P.S.: make sure to get BasePath right; it starts with "\\\\?", or you will not be able to access the subsystem's filesystem from your host system):
Windows Registry Editor Version 5.00
[HKEY_USERS\【your SID here】\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss\{【UUID here】}]
"State"=dword:00000001
"DistributionName"="distribution name"
"Version"=dword:00000002
"BasePath"="vhdx folder path" 【 e.g. "\\\\?\\E:\\S061\\WSL\\ubuntu-20"】
"Flags"=dword:0000000f
"DefaultUid"=dword:000003e8
I suppose the best way to do this would be to store ext4.vhd on a network storage device accessible to both devices.
I have previously mentioned how to move ext4.vhd; you can check that out here.
Basically, you need to export from one machine and import on the other, while making sure the vhd file is configured so WSL can access it from the network storage.
Since this is *officially* not supported, expect some performance hits.
Another way would be to run WSL on one computer and ssh/remote desktop to it from another device on the network
I'm of the strong belief that sharing the same ext4 vhd between two VMs simultaneously would be a bad idea. See this and this on Unix & Linux Stack Exchange, including the part about ...
note that sharing LVs/partitions on a single disk between the servers at the same time is NOT very safe. You should only access whole disks from any of the servers at one time.
However, as dopewind's answer mentioned, you can access the WSL instance on one computer (probably the desktop) from another (e.g. the laptop). There are several techniques you can use:
If you have Windows 10 Professional or Enterprise on one of the computers, you can enable Remote Desktop, which allows you to access pretty much everything on one computer from another. RDP ("Remote Desktop Protocol") even works from other devices such as an iPad or Android tablet (or even a phone, although that's a bit of a small screen for a "desktop"). That said, there are better, more idiomatic solutions for WSL ...
You could enable SSH server on the Windows 10 computer with the WSL instance (instructions). This may sound counterintuitive to some people, since Linux itself running in the WSL instance also includes an SSH server (by default). But by SSH'ing from (for example) your laptop into your desktop's Windows 10, you can then launch any WSL instance you have installed (if you choose to install more than one) via wsl -d <distroName>. You also avoid a lot of the network unpleasantness in the next option ...
You could, as mentioned above, enable SSH on the WSL instance (usually something like sudo service ssh start) and then ssh directly into it. However, note that WSL2 instances are NAT'd, so there's a whole lot more hackery that you have to do to get access to the network interface. There's a whole huge thread on the WSL Github about it. Personally, I'd recommend the "Windows SSH Server" option mentioned above to start out with; then you can worry about direct SSH access later if you need it.
Side note: Even though I have SSH enabled on my WSL instances, I still use Windows SSH to proxy to them, to avoid these networking issues.
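For reference, a minimal sketch of that Windows-SSH-server route (PowerShell as admin on the desktop; the capability name matches recent Windows 10 builds, and the hostnames are examples):
# On the desktop: install and start the built-in OpenSSH server
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
Start-Service sshd
Set-Service -Name sshd -StartupType 'Automatic'
# From the laptop: SSH into Windows, then drop into the WSL distro
ssh user@desktop-hostname
wsl -d Ubuntu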

Isn't a virtual machine just a type of process?

I'm trying to understand the basic concepts of Docker, and lots of docs say that "Docker is not a virtual machine, but a process". To me, this sentence looks quite awkward, since as far as I know, a virtual machine itself also runs on a host OS, which makes it a 'process' too.
Is there any big difference between the way a virtual machine works and the way other normal applications/processes do?
Docker is a brand name of a container management software system.
TL;DR:
Containers are a packaging concept.
VMs are a compatibility concept.
VMs are a security concept.
A container is not a process; it is an isolation of a collection of processes within a single-system-image. What is isolated? First, and foremost, the path name space. Processes within a given container share a path name space, so that they agree that /usr/bin/env is the same thing. Two processes in different containers, or perhaps inside the non-containered environment, would not necessarily see the same file for /usr/bin/env. This functionality has been a feature of UNIX-derived systems for at least 40 years, in the form of the chroot() system call.
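A minimal illustration of that mechanism, assuming a directory populated with a statically linked shell (the paths here are hypothetical):
# Build a tiny root filesystem containing only a static busybox shell
mkdir -p /srv/jail/bin
cp /bin/busybox /srv/jail/bin/sh    # assumes busybox is statically linked
# Enter the jail: for this process tree, / now means /srv/jail
sudo chroot /srv/jail /bin/sh
# Inside, the host's /usr/bin/env (and everything else outside the jail) is invisible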
More recently, containers have taken to isolating things that are not in the file namespace, like process ids, user ids and network interfaces. In older chroot-based systems, running ps in a container would show processes that were not in that container, although special handling was hacked in to prevent a chrooted root user from gaining root access on the underlying system.
In these modern systems, not only is the pid space partitioned, but also user ids, so that root in a container does not correspond to root on the overall system.
All this is accomplished by controlling many features of the kernel in a single-system-image. The software that controls these features: Docker, amongst others.
A Virtual Machine is not part of a single-system-image. Each VM is its own logical computer, running its own kernel, shell, etc. With some careful configuration, you can make it so various files appear within many of the VMs; but that is no different than mounting file systems exported by a network file system.
Why choose one over the other? Containers share my OS and are handy to escape the .so versionitis hell caused by conflicting software systems; I can package my software in a container, and it is isolated from whatever the running system is. I cannot, however, package the kernel I need; so if my software requires Ubuntu 14.04 and I am running 18.04, containers will not save me. Containers are a packaging concept.
VMs are handy to support multiple versions or types of operating systems in a single computer. Since each VM runs unique system software, I can run my 14.04 app on my 18.04 system and none is the wiser. VMs are a compatibility concept.
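You can see both points from a shell: a container reports the host's kernel, while a VM reports its own (the image tag below is just an example):
# On an Ubuntu 18.04 host:
uname -r                                  # e.g. 4.15.0-xx-generic
docker run --rm ubuntu:14.04 uname -r     # prints the same kernel: it is shared
docker run --rm ubuntu:14.04 cat /etc/lsb-release   # but the userland is 14.04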
VMs are also handy as a security layer. Imagine that a web page has a js-bomb that can corrupt my kernel (I know, quite a stretch). If I run my browser in a container, I have corrupted my kernel. If I run it in a VM, I have corrupted that VM's kernel -- I merely have to delete it, or rewind it, and the corruption is gone. VMs are a security concept.

Is there a fast, painless way to set up a Linux distro on VirtualBox?

I like the Docker Hub with dockerfiles idea very much.
Is there a similar way to get a small working linux VirtualBox instance in a few commands, that could also be controlled from a command line?
Vagrant is a great tool that does just what you want and much more! It's a ruby application written for fast and simple setup of minimal development environments.
By default it creates VirtualBox images, but it supports VMware and many others too. The whole setup of a box is managed by a single Vagrantfile! Your VM options, network settings and provisioning are all done there.
Setting up a VirtualBox box is as easy as executing just two shell commands. Check out the Getting Started Guide for an example using Ubuntu.
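Those two commands are essentially the following (the box name is just an example):
vagrant init hashicorp/bionic64   # writes a Vagrantfile pointing at the box
vagrant up                        # downloads the box if needed and boots the VM
After that, vagrant ssh drops you into the running machine.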
You can use a vast range of prepared images from the Hashicorp Atlas, or build your own.
Also, Vagrant doesn't limit you to one virtual machine per development setup; it enables you to model cluster setups on a single machine using multiple VMs. I myself use Docker for that part, though.

MAMP/LAMP native or virtual (Virtualbox/VMware)?

What is your preferred development environment?
Native
WAMP/MAMP/LAMP (Apache, MySQL, PHP) on Windows/MacOS/Linux
Working copy local, SVN/CVS on server
IDE/Editor on the same system (Eclipse, Aptana, Zend...)
Virtual/Native (Server on VM)
LAMP on VirtualBox/VMware
working copy in the VM
IDE/Editor on host, access to the VM with Samba, FTP, SFTP (possibly mapping with tools like WebDrive)
Virtual (VM)
Complete development environment running in a VM (server, tools, IDE)
Host is only used for special tools not available on the OS running in the VM
All have pros and cons.
With BitNami stacks you can run the exact same XAMP environment locally or remotely (and make sure everybody on your team is running the exact same stack). It is free and works on Windows, Linux, Mac.
I like having the SVN repository somewhere on a web server.
It's reasonably secure (using Apache WebDAV), and it gives me a good chance of recovering quickly from any disasters that may befall my main development machine. I have the luxury of control over my own web server, but there are lots of cheap hosts that will do the job at low cost.
As regards VM or no VM:
Advantages of VM:
Very fast recovery from screwing up your development environment
Ability to try out different versions or upgrades quickly
If you have many systems running the VM host, ability to quickly move the whole environment
Can choose any host
Disadvantages of VM:
Performance impact; extra setup complexity
On balance, I go for "no VM" if all the tools are available on my host system, but I do use VM when I need to run a different OS (the host system is a Mac Pro, so if I need Visual Studio, I do it with Parallels).

Setting up a development environment INSIDE a virtual machine

Here's the problem: I use around three different machines for development. My partner is using two. We have to go through the same freaking setup procedure on all five machines to get to work.
Working with a php project here, so:
Install and configure PDT, a PHP debugger, and some version of XAMPP.
Then possibly install an SVN client, and any other tools.
Again, on each of the five machines.
What if, instead, we did all of this once, in a virtual machine that is set up with the same stack, same versions, as the production server? Then each of us could grab a copy of the VM image, run that image on each of the five machines, and do all of our development in that VM. Put Eclipse, Apache, MySQL, the works, all in that VM.
The only negative of this approach, and please correct me if I'm wrong on this point, is performance. Is it really that big of an issue, though? The slowest machine out of the five is a Samsung NC10 powered by an Intel Atom 1.6 GHz processor.
Do you think this is possible and practically usable? Or am I crazy?
I use a VM for development (running on my laptop) and have never had performance problems. Another approach that you could take would be to image the drive in the state that you want. Use Acronis or Ghost to re-image each machine when you need to. Only takes about 5-10 minutes to restore an image on any modern PC.
I use a VM for all my "work" as it keeps it away from my "play". This setup allows me to use the office VPN without exposing my whole machine to the office environment (which I trust about as much as the internets ;-) ). Also I don't have to worry about messing up my development environment by trying out games or other software. My work VM is currently running inside VirtualBox, but I have used VMware in the past. I have only noticed performance issues when using graphics-intensive programs like Webex or the Terminal Server Client.
It can certainly be done. What turns me off is the size of the VM image, which would normally be several GBs. Having it on a network share means it can take longer to transfer than your current setup process takes. I guess an external hard drive would be the easiest way to move it around.
Performance wouldn't be an issue with any web development.
I have to ask why your current machines need to be "re-imaged" each time you sit down for work?
If you're using Windows you'll probably want to run Sysprep on the master image so that the 'mini-setup' runs when you boot each virtual machine for the first time.
Otherwise, from Windows' point of view, the machines all have the exact same SID, hostname and other identifiers, and running multiple machines with the same SID on the same network can cause tons of headaches, even more so if you want them to communicate with each other.
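For reference, generalizing the master image usually comes down to one command run inside the VM before you clone it (standard Sysprep flags):
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
On the next boot of each clone, the mini-setup generates a fresh SID and hostname.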
I've run WebSphere for zSeries on a VMware virtual machine with no problem, and WebSphere is more resource-intensive than any PHP stack. I find that having a multi-core machine, or at least hyper-threading, makes it run a lot faster.
With VMware, disk operations are slower. For PHP development I doubt it would be a problem, but you'd definitely notice it if you were compiling a large C++ project. There is also Sun's VirtualBox, which is free, and the latest version is rather nice (but I haven't looked at how slow disk operations are yet).
I am using that idea in practice. Virtual machines are generally great for development:
They let you run on multiple operating systems and in multiple separate development environments.
Older development environments can be preserved for later support.
They can be easily backed up; when a hard drive crashes there is no need to start from the beginning.
They can be copied from one developer to another, so not everyone has to do tedious installations and configurations.
Downsides are:
Virtual machines are slower, so you need a more powerful computer than you would otherwise. I would recommend having at least 4 GB of RAM, but preferably more like 16, plus fast multi-core processors and fast hard drives.
When copying Windows OS virtual machines, each copy in use should have its own product key; when you make a copy, it needs to be registered with a new product key.
Did you think about a software configuration manager like Ansible, Chef or Puppet? With such software, automating these tasks is very easy! It can even create a fresh VM and then configure it.
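For example, a minimal Vagrant-plus-Ansible flow from the shell might look like this (the box name and playbook are hypothetical):
vagrant init ubuntu/focal64   # define a fresh VM from an example box
vagrant up                    # create and boot it
vagrant ssh-config > ssh.cfg  # emit the SSH settings Ansible needs
ansible-playbook -i 'default,' --ssh-common-args '-F ssh.cfg' site.yml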