How to mount a directory in VirtualBox with Salt-Stack?

I'm currently trying to provision a VirtualBox VM with Salt. There is a Salt state to mount directories, but I don't know what the device should be in the state.
The state currently looks like:
directory:
  file.directory:
    - user: ...
    - ...
  mount.mounted:
    - device: <somedevicenamewhichIcantgetright>
    - fstype: auto
Whatever I try for somedevicenamewhichIcantgetright isn't found, and I've looked in the docs but I can't find anything about this.
Does anyone know what it should look like? I've tried username#machine:/path/to/directory and file://path/to/directory (although that should be on the VirtualBox itself; I was taking a shot in the dark).

Are you trying to mount a directory from the host to your VM with VirtualBox? If so, you can do that in your Vagrantfile like this:
config.vm.synced_folder "saltstack/salt/", "/srv/salt"
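If you do want to handle the mount from the Salt state itself, the device for a VirtualBox shared folder is the share name (not a path) and the fstype is vboxsf. A minimal sketch, assuming a share named saltshare and /srv/salt as the mount point:
/srv/salt:
  mount.mounted:
    - device: saltshare
    - fstype: vboxsf
    - mkmnt: True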

Related

"Windows Subsystem for Linux has no installed distributions" even though 'Ubuntu' is installed

I recently moved my wsl directory to another drive due to low storage on my C: drive. As per the answer provided in this StackOverflow post, I used the lxrunoffline tool and moved my Ubuntu distribution to another drive (E:\wsl in my case). As soon as the distribution was moved successfully, I ran wsl to test and it worked like a charm.
Everything went fine until one day I accidentally renamed the E:\wsl folder to something else. Well, as expected, wsl didn't work. Then I reverted to the name wsl and expected it to work, but to my surprise it didn't find any installed distribution after that, even though it's installed... 😕
E:\> wsl
Windows Subsystem for Linux has no installed distributions.
Distributions can be installed by visiting the Microsoft Store:
https://aka.ms/wslstore
Is there any way to revert back to the old directory or make wsl point to a manual location?
EDIT: I don't want to reset Ubuntu as I want to retain the installed packages and preferences...
Well, I finally found a solution to this problem. 😊
This is as simple as re-registering the distribution with the lxrunoffline rg (or register) command, supplying the distribution name and directory.
E:\LxRunOffline\LxRunOffline-v3.3.3>lxrunoffline rg
[ERROR] the option '-d' is required but missing
Options:
-n arg Name of the distribution
-d arg The directory containing the distribution.
-c arg The config file to use. This argument is optional.
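For example, pointing it back at the moved directory looked something like this (the distribution name and path are illustrative; use whatever matches your setup):
E:\LxRunOffline\LxRunOffline-v3.3.3>lxrunoffline rg -n Ubuntu -d E:\wsl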
After running the register command, I was able to start wsl as usual. But that would log you in as the root user and thus start in the /root directory. I ran the following command to start wsl as a different user (this is for Ubuntu):
ubuntu config --default-user <user-name>

How to fix Vagrant error: `private_key_path` file must exist:

I've been using PuPHPet to create virtual development environments.
Yesterday I generated a config file for a new box. When I try to spin it up using the vagrant up command, I get the following error message:
C:\xx>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
There are errors in the configuration of this machine.
Please fix the following errors and try again:
SSH:
* private_key_path file must exist: P://.vagrant.d/insecure_private_key
I came across this question and moved the insecure_private_key from puphpet\files\dot\ssh to the same directory as the Vagrantfile. However, this gives the same error.
I'm also confused by the directory given in the error message:
P://.vagrant.d/insecure_private_key
Why is the 'P' drive mentioned?
My Vagrantfile can be found here.
Appreciate any advice on solving this error.
I fixed the problem by hard-coding the path to the insecure_private_key file.
So it went from:
config.ssh.private_key_path = [
  customKey,
  "#{ENV['HOME']}/.vagrant.d/insecure_private_key"
]
To:
config.ssh.private_key_path = [
  customKey,
  "C:/Users/My.User/.vagrant.d/insecure_private_key"
]
It looks like it's because you may have performed a vagrant destroy, which deleted the insecure_private_key.
But the Vagrantfile looks at the puphpet\files\dot\ssh files; if they are there, it looks for the insecure_private_key.
Delete (or rename) the id_rsa files in puphpet\files\dot\ssh.
This fixed it for me!
When you are sharing your puphpet configuration with your teammates, hardcoding the private_key_path is not advisable, as per the accepted answer.
My host computer is Windows, so I added a new environment variable VAGRANT_HOME with the value %USERPROFILE%, since this is where my .vagrant.d folder resides. When you add this variable, just make sure that you close any open command prompts so the variable will be applied.
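For example, from a Windows command prompt you can set it persistently with setx (assuming %USERPROFILE% is indeed where your .vagrant.d folder lives):
setx VAGRANT_HOME "%USERPROFILE%"
Command prompts opened after this will pick up the variable.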
Hope this helps
You can also just delete all the files in the puphpet folder (rm -rf puphpet/files/dot/ssh/*) and the VM should regenerate them when you run vagrant provision.
I'm not sure what's wrong with your Vagrant installation, but this line:
vagrant_home = (ENV['VAGRANT_HOME'].to_s.split.join.length > 0) ? ENV['VAGRANT_HOME'] : "#{ENV['HOME']}/.vagrant.d"
is what sets up the variable that is later on used here:
config.ssh.private_key_path = [
  customKey,
  "#{vagrant_home}/insecure_private_key"
]
The reason this is happening is that as of Vagrant 1.7, it generates a unique private key for each VM you have. There's what I consider to be a bug in that Vagrant completely ignores a user-defined private_key_path if it detects that it generated a unique key previously.
What PuPHPet is doing here is letting Vagrant generate its unique SSH key, then once the VM boots up and has SSH access, it goes in and generates another key to replace it.
The reason we're replacing it is because this new Vagrant feature only works on OSX/Linux hosts, due to Windows not having the required tools.
My way works across all OS because it does the SSH key generation within the VM itself.
All this is semi-related to your question, but the answer is that something's wrong with your Vagrant installation if those environment variables have not been defined.
Adding to PunctuationMark's answer, you can also set the VAGRANT_HOME environment variable in your Vagrantfile: ENV['VAGRANT_HOME'] = ENV['USERPROFILE']
Editing this following line in Vagrantfile worked for me.
PRIVATE_KEY_SOURCE = '~/.vagrant.d/insecure_private_key'

Changing permissions of added file to a Docker volume

In the Docker best practices guide it states:
You are strongly encouraged to use VOLUME for any mutable and/or user-serviceable parts of your image.
And by looking at the source code of e.g. the cpuguy83/nagios image, this can clearly be seen done, as everything from the nagios to the apache config directories is made available as volumes.
However, looking at the same image, the apache service (and the cgi-scripts for nagios) runs as the nagios user by default. So now I'm in a pickle, as I can't seem to figure out how to add my own config files in order to e.g. define more hosts for nagios monitoring. I've tried:
FROM cpuguy83/nagios
ADD my_custom_config.cfg /opt/nagios/etc/conf.d/
RUN chown nagios: /opt/nagios/etc/conf.d/my_custom_config.cfg
CMD ["/opt/local/bin/start_nagios"]
I build as normal and try to run it with docker run -d -p 8000:80 <image_hash>; however, I get the following error:
Error: Cannot open config file '/opt/nagios/etc/conf.d/my_custom_config.cfg' for reading: Permission denied
And sure enough, the permissions in the folder look like this (whilst the apache process runs as nagios):
# ls -l /opt/nagios/etc/conf.d/
-rw-rw---- 1 root root 861 Jan 5 13:43 my_custom_config.cfg
Now, this has been answered before (why doesn't chown work in Dockerfile), but no proper solution other than "change the original Dockerfile" has been proposed.
To be honest, I think there's some core concept here I haven't grasped (as I can't see the point of declaring config directories as VOLUME, nor of running services as anything other than root). So, given a Dockerfile as above (whose base image follows Docker best practices by declaring multiple volumes), is the solution/problem:
To change NAGIOS_USER/APACHE_RUN_USER to 'root' and run everything as root?
To remove the VOLUME declarations in the Dockerfile for nagios?
Other approaches?
How would you extend the nagios dockerfile above with your own config file?
Since you are adding your own my_custom_config.cfg file directly into the container at build time, just change the permissions of the my_custom_config.cfg file on your host machine and then build your image using docker build. The host machine permissions are copied into the container image.
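For example, something along these lines should work (assuming the config file sits next to the Dockerfile on the host; the image tag is arbitrary):
chmod 644 my_custom_config.cfg
docker build -t my-nagios .
docker run -d -p 8000:80 my-nagios
With 644 the file is world-readable, so the nagios user inside the container can open it.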

Vagrant corrupted index file C:\Users\USERNAME\.vagrant.d/data/machine-index/index

My Windows 8.1 just crashed. Now I have some files on my disk that are corrupted. This includes my Vagrant machine index (not sure if the naming is right, but I know it is this file -> C:\Users\USERNAME\.vagrant.d/data/machine-index/index).
So there is a lot of binary or hexadecimal stuff in there (again, not sure, because I don't deal with this stuff usually, so correct me if I'm wrong!), and Vagrant spits out the following message when I try to start everything after boot.
vagrant up returns this:
The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.
Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually correct it, then remove the data file at the path below.
This will leave all existing Vagrant environments "orphaned" and
they'll have to be destroyed manually.
Path: C:/Users/Username/.vagrant.d/data/machine-index/index
Same thing happened to me. So I just deleted the index file and the .lock file from the machine-index folder to get Vagrant working again.
When using Vagrant 2.2.5 in Windows 10, I had to navigate to /Users/{yourname}/.vagrant.d/data/machine-index and remove both index and index.lock, so rm index then rm index.lock.
Finally I navigated back to Homestead folder and ran vagrant up.
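Put together, the steps were roughly as follows (the Homestead path is just an example; use wherever your Vagrantfile lives):
cd ~/.vagrant.d/data/machine-index
rm index index.lock
cd ~/Homestead
vagrant up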
When my laptop accidentally crashed, I had the same Vagrant issue (the same corrupt machine index message as above) on my first attempt to run vagrant up.
Unfortunately, my issue was not solved by deleting the index and index.lock files as the most voted answer suggested. I rebooted my VM using the VirtualBox GUI (used as the VM provider) and the following message showed up.
Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.
I realised that the crash had produced errors on the VM's filesystem. So after searching and investigating, I overcame the issue by executing the command below.
xfs_repair -v -L /dev/dm-0
Environment info: OS Windows 10, VirtualBox 6.1, Vagrant 2.2.7, VM OS CentOS 7.

Is there a way I can have a VM gain access to my computer?

I would like to have a VM to look at how applications appear and to develop OS-specific applications; however, I want to keep all my code on my Windows machine so that if I decide to nuke a VM or anything like that, it's all still there.
If it matters, I'm using VirtualBox.
This is usually handled with network shares. Share your code folder from your host machine and access it from the VMs.
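With VirtualBox specifically, a shared folder is one way to do this. A sketch using VBoxManage (the VM name and host path are just examples):
VBoxManage sharedfolder add "MyVM" --name code --hostpath "C:\Users\me\code" --automount
With Guest Additions installed, a Linux guest will automount the share under /media/sf_code.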
Aside from network shares, another tool to use for this is a version-control system.
You should always be able to make a normal network connection between the VM and the hosting OS, as though it were another computer on the same network. Which, in some sense, it is.
I do this all the time.
I have a directory in a Windows drive that I mount in my host ubuntu 12.04.
I run virtualbox ubuntu 13.04 as a guest.
I want the guest to mount the Windows directory with full non-root permissions.
I do almost all my work from a bash shell, so this method is natural for me.
When searching for methods to automatically mount VirtualBox shared folders,
it is hard to distinguish reliable and correct methods from those that fail.
Failures include problems getting and setting permissions, as well as other issues.
Methods that fail include:
modifying /etc/fstab
modifying /etc/rc.local
I am fairly certain that rc.local can be used,
but no methods I have tried worked.
I welcome improvements on these guidelines.
On VirtualBox 4.2.14, running Nautilus (bash terminal) on an Ubuntu 13.04 guest,
below is a working method to mount Common (share name)
on /home/$USER/Desktop/Common (mount point) with full permissions.
(Note the '\' command continuation character in the find command.)
First time only: create your mountpoint, modify your .bashrc file, and run it.
Respond with password when requested.
These are the four command-lines needed:
mkdir $HOME/Desktop/Common
echo "$USER ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers # tee runs the write as root; 'sudo echo ... >>' would fail, as the redirect runs as your user
find $HOME/Desktop/Common -maxdepth 0 -type d -empty -exec sudo \
mount -t vboxsf -o \
uid=`id -u $USER`,gid=`id -g $USER` Common $HOME/Desktop/Common \;
source ~/.bashrc # Needed if you want to mount Common in this bash.
All other times: simply launch a bash shell.
The find command mounts the shared directory if the mountpoint directory is empty.
If the mountpoint directory is not empty, it does not run the mount command.
I hope this is error-free and sufficiently general.
Please let me know of corrections and improvements.