Vagrant: How to configure vagrant to use different Vagrantfiles - testing

How do I use Vagrant for testing, e.g. an application running on a Debian, Ubuntu, SuSE, etc. VM?
My current project has a single Vagrantfile with the configuration inside. To start the Vagrant VM I run
vagrant up
vagrant provision
Now I'm wondering how to do something like
vagrant up suse ... and later vagrant up debian

You can do one of the following:
Keep a git repo with a different branch per OS, so the Vagrantfile differs depending on which branch is checked out. Be careful though: your .vagrant directory will not match your Vagrantfile, so you will need to up and provision each VM and switch the .vagrant directory along with the branch.
Use Vagrant's multi-machine feature. Generally you use it to reproduce a multi-tier environment where every VM is up and running at the same time, but you can also up and provision each VM independently (see the sketch below).
Use a separate project folder for each OS, each with its own specific Vagrantfile and .vagrant directory, each for a specific purpose.
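For the suse/debian use case in the question, a minimal multi-machine Vagrantfile might look like this (the box names are illustrative, not a recommendation):
Vagrant.configure("2") do |config|
  config.vm.define "debian" do |debian|
    debian.vm.box = "debian/jessie64"              # any Debian box will do
  end
  config.vm.define "suse" do |suse|
    suse.vm.box = "opensuse/openSUSE-42.1-x86_64"  # any SuSE box will do
  end
end
With this in place, vagrant up debian boots only the Debian machine and vagrant provision suse provisions only the SuSE one; each machine keeps its own state under .vagrant.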

Related

Why doesn't WSL create /run/user/${uid} directory?

I wonder why there is no directory like /run/user/1000 when using WSL2 with the Ubuntu 18.04 image?
How can I fix this (if possible)?
The user runtime directory /run/user/$UID is a tmpfs created and mounted by systemd on user login.
Since WSL instances do not support systemd, there is no daemon that creates this directory. This also means that systemd/systemctl commands do not work in your WSL boxes. Please refer to "why systemd is disabled in WSL?" for more details and for discussion of how to hack systemd into your WSL box.
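If an application insists on XDG_RUNTIME_DIR, one workaround is to create the directory yourself at login (a sketch, e.g. appended to ~/.profile; the sudo calls are only seamless with passwordless sudo):
uid="$(id -u)"
if [ ! -d "/run/user/$uid" ]; then
    sudo mkdir -p "/run/user/$uid"     # recreate what systemd-logind would normally do
    sudo chown "$uid" "/run/user/$uid"
    sudo chmod 700 "/run/user/$uid"
fi
export XDG_RUNTIME_DIR="/run/user/$uid"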

Is it bad to mount WSL paths into a container running under Docker for Windows?

I'm aware that it's not a good idea to access WSL Linux files (located in %LOCALAPPDATA%\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\) directly from Windows, but does that recommendation also apply to mounting a WSL path as a volume in a container running under Docker for Windows?
For example, if I first do this on Windows:
mklink /j %USERPROFILE%\wsl %LOCALAPPDATA%\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs
Then do this in WSL with Docker already configured:
$ docker run --rm -v /c/Users/$USER/wsl/home/$USER/myapp:/myapp -ti ubuntu:18.04 bash
The above assumes the requisite "root=/" in "/etc/wsl.conf" and that the user has the same name in both environments.
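For reference, that setting lives in the [automount] section of /etc/wsl.conf (shown here minimally; the distro must be restarted for it to take effect):
[automount]
root = /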
I can see my files inside the container under "/myapp" just fine, but I'm not sure whether it's safe to write to that path. If both WSL and the container are running Ubuntu, is it any safer?
I really prefer to work full-time from WSL with my home directory containing the familiar Linux dot files.
And just for kicks, what if in WSL "$HOME/myapp" is a symlink to "/c/myapp"? Yes, I should then just use -v /c/myapp:/myapp for simplicity, but is traversing through the rootfs paths really bad?
Accessing the file paths through Docker on Windows still uses Windows semantics to access the files, so you risk corrupting your WSL distro instance. However, the newest Windows Insider builds include a Plan 9 server embedded into the proprietary /init that exposes Linux files to Windows, essentially via a network share. See https://blogs.msdn.microsoft.com/commandline/2019/02/15/whats-new-for-wsl-in-windows-10-version-1903/
An alternative would be to use ssh/scp with the Win32 OpenSSH port on the same Windows host (or another one), or from a Linux host.
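A hypothetical example of that alternative, assuming sshd has been configured inside the WSL distro to listen on port 2222 and the user is named me in both environments:
scp -P 2222 -r me@localhost:/home/me/myapp ./myapp    # copy out of WSL without touching rootfs directly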

Can Vagrant suffice my requirement?

I have been looking for ways to set up an automation environment, and I found an application named Vagrant. I read the docs on the site; however, I wanted to know from the experts out there whether Vagrant with Oracle VirtualBox would suffice for my needs.
I need a script that will call Vagrant to initialize a VM [the VM image is always the same: Windows Server 2008 R2].
I need to copy some of my project-related files from a shared location onto the VM.
Call a batch file that will take care of test runs for me inside the VM.
Once my test run is complete, the VM needs to destroy itself.
Also, I would like to know whether the image can be a custom .iso file.
Sounds like Vagrant and VirtualBox will work for that scenario. You may also find that running commands in the VM using WinRM or SSH is the easiest way to launch tests.
If you haven't already seen it, the blog post about Windows support in Vagrant 1.6 is informative: https://www.vagrantup.com/blog/feature-preview-vagrant-1-6-windows.html
Creating a VirtualBox/Vagrant base VM from an .iso should work, and you can then do all of your work using the VM from that point onward.
To get started, you might try these steps:
Create a VirtualBox VM from your Windows .iso, using the VirtualBox GUI or cmdline tools.
Once you have the VM in the state you want, shut it down and package it as a Vagrant box. For example, on a Mac that step looks like this (where Win7x64 is the directory containing the VirtualBox VM):
cd ~/VirtualBox\ VMs
vagrant package --base Win7x64 --output win7x64_base.box
Once that finishes, tell Vagrant about the new base box (the box name must match what you vagrant init below):
vagrant box add win7x64 /path/to/win7x64_base.box
Then you can vagrant init/vagrant up the VM:
mkdir win7 && cd win7
vagrant init win7x64
vagrant up
To enable SSH access, I installed Cygwin in the VM and configured sshd. After launching, you can then SSH in by running vagrant ssh.
Note that if there's no Windows user in the VM named 'vagrant', you can specify the SSH username to use with vagrant ssh by placing this in your Vagrantfile:
config.ssh.username = 'user1'
As mentioned above, WinRM is also an option for remotely running commands.
And Vagrant apparently has some convenience features to make it easy to RDP into the VM, but I haven't looked at that.
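Putting the steps together, the driver script asked about in the question could be as small as this sketch. Here run_tests.bat is a hypothetical batch file; it is reachable inside the VM because Vagrant shares the project folder as C:\vagrant by default, which also covers copying project files in:
vagrant up
vagrant ssh -c 'cmd /c C:\\vagrant\\run_tests.bat'    # run the tests over SSH
vagrant destroy -f                                    # tear the VM down afterwards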

Vagrant and / or Docker workflow with full OS X filesystem integration for seamless local feel?

Recently I've been dabbling with Vagrant and Docker. These are quite interesting tools, but I haven't quite been able to convince myself that they're the way to go on my OS X machine. Being an old Unix hand, I have to say that I like having a consolidated and sandboxed environment for development purposes.
I've seen a lot of chatter, and a number of friends have been using Vagrant with just stock vim for editing. I'm not really a fan of that approach and would probably prefer to use the VM provider's sharing mechanism or, more likely, NFS.
Personally I'd like to be able to edit directly in TextMate, SublimeText, Emacs (on OS X), or even perhaps use RubyMine and its various IDE features, etc.
Is there any way to really get the workflow down so that such an environment will be essentially like working on a local environment without having to pull a lot of additional background strings to make things work out?
I suppose a few well placed scripts could go a long way, but I've not found any solid answers on really making this a seamless environment.
What actually worked for me was to use boot2docker, which makes it easy to install a lightweight virtual machine (with VirtualBox) that hosts your docker daemon and images. The only thing you need in order to run docker commands is to run eval "$(boot2docker shellinit)" when you open a new Terminal.
If you need to also have your files on an OS X folder and share them with a running docker image, you need some additional setup, but once you do it, you won't have to do it again.
Have a look here for a nice walkthrough on how to do it. The steps, in short, are:
Get a special boot2docker image that allows you to use shared folders for VirtualBox
Configure VirtualBox to share a folder:
VBoxManage sharedfolder add boot2docker-vm -name home -hostpath /Users
This shares your /Users folder with the boot2docker VM that hosts docker.
From your Mac, mount the folder you need into a docker container, like:
docker run -it -v /Users/me/dev/my-project:/root/src:rw ubuntu /bin/bash
One small annoyance that I haven't found a way around is that you can no longer access your software through localhost, because it actually runs on the boot2docker instance. You have to run boot2docker ip and access that IP instead.
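For example, to reach a containerized web server from OS X (a sketch; older boot2docker releases print a human-readable message alongside the IP, hence the stderr redirect):
docker run -d -p 8080:80 nginx                       # publish the container port on the boot2docker VM
open "http://$(boot2docker ip 2>/dev/null):8080"     # browse to the VM's IP, not localhost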
Hope that helps!

NFS unavailable until NFS server restart

I have quite a strange problem with NFS. I have two systems. One is my workstation, Ubuntu 13.04 with Linux kernel 3.8.0. Here I've got a directory with the code I am working on: /home/user/source. The other is a virtual machine running on a remote server. It runs CentOS 6.3 and mounts the directory at /opt/source. The point is that the VM has the whole development environment needed to run my code, but I want to store the code itself on my local machine for easier access from Eclipse and other development tools.
Unfortunately, when I reboot my local machine, the NFS filesystem is unavailable on the virtual machine until I run /etc/init.d/nfs-kernel-server restart. I cannot figure out why. Here's the only line in /etc/exports on my local machine:
/home/user/source 10.0.19.192(rw,sync,subtree_check)
And here's the line from /etc/fstab on virtual machine, where the NFS is described:
10.10.1.205:/home/user/source /opt/source nfs defaults,nofail 0 0
It looks like your NFS server isn't being started at boot time. I think, even on Ubuntu 13.04, you can still manage this with the rcconf program:
sudo apt-get install rcconf dialog
sudo rcconf
Then, check off nfs-kernel-server.
If for some reason this isn't working, try a similar process with the sysv-rc-conf package.
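To verify the fix, something like this should show the service registered to start at boot (a sketch for a sysvinit-style Ubuntu):
sudo update-rc.d nfs-kernel-server defaults   # register the init script, in case the links are missing
ls /etc/rc2.d | grep nfs                      # an S??nfs-kernel-server link means it starts in runlevel 2
sudo exportfs -v                              # after a reboot, this should list /home/user/source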