Vagrant fails to mount NFS shared folders because of corrupted /etc/exports. How do I fix that file? - nfs

I recently tried to set up a VM with Vagrant, but "vagrant up" always failed with the error:
Mounting NFS shared folders failed. This is most often caused by the NFS
client software not being installed on the guest machine. Please verify
that the NFS client software is properly installed, and consult any resources
specific to the linux distro you're using for more information on how to
do this.
The NFS client was properly installed on my machine, so I looked for other causes and found a blog post explaining that my /etc/exports might be corrupted. I restored exportsbak (which contains only commented examples), hoping that Vagrant would reconfigure the file properly... but it doesn't, and the error is still there.
How can I force Vagrant to regenerate or fix that file? Thanks.

Just delete the file.
sudo rm -f /etc/exports
The file will be recreated during the vagrant up process.
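If you would rather keep the corrupted file around for comparison, a minimal variant of the same fix (the backup name here is arbitrary) is to move it aside instead of deleting it:
sudo mv /etc/exports /etc/exports.corrupted
vagrant up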

I was not able to get NFS running on my Ubuntu host because I had used the Vagrant package from apt (v1.2.2).
I installed the latest Vagrant version (1.5) from here: http://www.vagrantup.com/downloads
and NFS worked.

Check whether the NFS server is installed:
dpkg -l | grep nfs-kernel-server
If it is not installed, install the required packages and restart the services:
sudo apt-get install nfs-kernel-server
sudo apt-get install nfs-common
sudo service nfs-kernel-server restart
sudo service portmap restart
sudo mkdir -p /var/exports
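To confirm that the server side is up before wiring it into Vagrant, you can check the service and the active export list (service names as used above):
sudo service nfs-kernel-server status
sudo exportfs -v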
Then, in your Vagrantfile, add this line under # shared folders:
config.vm.synced_folder "www", "/var/www", :nfs => { :mount_options => ["dmode=755", "fmode=755"] }
When Vagrant is starting it will ask for the root password. To run it without a root password, you can edit /etc/sudoers and add the following lines:
Cmnd_Alias VAGRANT_EXPORTS_ADD = /usr/bin/tee -a /etc/exports
Cmnd_Alias VAGRANT_NFSD_CHECK = /etc/init.d/nfs-kernel-server status
Cmnd_Alias VAGRANT_NFSD_START = /etc/init.d/nfs-kernel-server start
Cmnd_Alias VAGRANT_NFSD_APPLY = /usr/sbin/exportfs -ar
Cmnd_Alias VAGRANT_EXPORTS_REMOVE = /bin/sed -r -e * d -ibak /etc/exports
%sudo ALL=(root) NOPASSWD: VAGRANT_EXPORTS_ADD, VAGRANT_NFSD_CHECK, VAGRANT_NFSD_START, VAGRANT_NFSD_APPLY, VAGRANT_EXPORTS_REMOVE
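Rather than editing /etc/sudoers directly, a safer way to add these aliases (assuming your sudo supports /etc/sudoers.d drop-ins, as Ubuntu's does) is a separate file edited through visudo, which syntax-checks it before saving:
sudo visudo -f /etc/sudoers.d/vagrant-nfs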

If your host is Windows, then you need to install the Vagrant plugin vagrant-winnfsd:
$ vagrant plugin install vagrant-winnfsd
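With the plugin installed, NFS is enabled per synced folder in the Vagrantfile as usual; a minimal sketch (NFS synced folders also require a private network, and the paths here are placeholders):
config.vm.network "private_network", type: "dhcp"
config.vm.synced_folder ".", "/vagrant", type: "nfs"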

Related

wrong entry in limits.conf, unable to ssh to host

We have a VirtualBox (Vagrant) environment. By mistake I made an entry in /etc/security/limits.conf [without having a root shell open :( ] and now I am unable to ssh in (the connection drops immediately).
Previously we had one such scenario (limits broken by someone else) and were able to fix it using the vboxmanage guestcontrol copyto CLI to overwrite limits.conf, after which ssh was allowed again. This time around the vboxmanage CLI also hangs.
I tried to open the VM in the GUI, went to the console, and tried a few options, but could not get to single-user mode.
Since you already tried the vbox CLI commands and they hang, it means even VirtualBox cannot access the system or get a shell to open.
In this case you will have to bring up an Ubuntu VM and use the qemu-nbd module to fix this. The steps are given below.
Bring up a very simple Ubuntu VM using HashiCorp's bionic64 box on the same host machine by executing the following steps.
mkdir bionic
cd bionic
vagrant box add hashicorp/bionic64
vagrant init
Open the Vagrantfile and change config.vm.box = "base" to config.vm.box = "hashicorp/bionic64".
Also mount the folder on the host where the .vdi file for the VM is located by adding the following line to the Vagrantfile (replace the path with the correct one for your system; here /nbd2 will be created on the Ubuntu machine and will contain the files, including the .vdi file):
config.vm.synced_folder "/home/topcat/VirtualBox\ VMs/your_vm", "/nbd2"
Now do vagrant up.
Once the machine boots up:
vagrant ssh #to ssh as vagrant
sudo su #to become root
apt-get update #This will refresh the apt cache
apt-get install qemu #provides the qemu-nbd tool
modprobe nbd #load the nbd module; it exits without output on success
qemu-nbd -c /dev/nbd1 "/nbd2/box-disk001.vdi" #change the path to whatever you gave in config.vm.synced_folder
mkdir -p /mnt/vdi-boot
mount /dev/nbd1p1 /mnt/vdi-boot
cd /mnt/vdi-boot/etc/security #this folder has all the files as they were in your VM
touch limits.conf #if the corrupted file is already there, delete it first, then recreate it empty
chmod 644 limits.conf
chown root:root limits.conf
Open the /mnt/vdi-boot/etc/nsswitch.conf file and check that the following three lines are present:
passwd: files
shadow: files
group: files
umount /mnt/vdi-boot #unmount the mounted path
qemu-nbd -d /dev/nbd1 #disconnect from qemu-nbd
Exit the helper VM and start your original VM.
Open another shell and try to ssh. It should go through fine this time.
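If the mount step fails because your box uses a different partition layout, listing the partitions after connecting the image shows which one holds the root filesystem (an illustrative check, not part of the original steps):
lsblk /dev/nbd1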

chown: invalid user: ‘nfsnobody’ in Fedora 32 after installing nfs

I installed NFS using this command in Fedora 32:
sudo dnf install nfs-utils
and then I created a dir to export storage:
[dolphin@MiWiFi-R4CM-srv infrastructure]$ cat /etc/exports
/home/dolphin/data/k8s/monitoring/infrastructure/jenkins *(rw,no_root_squash)
Now I can mount this dir as the root user like this:
sudo mount -t nfs -o v3 192.168.31.2:/home/dolphin/data/k8s/monitoring/infrastructure/jenkins /mnt
Now I want to go a step further and make it available to any user from any IP (so the client can mount the NFS share without using sudo). I first tried to change the permissions of this folder:
chmod 777 jenkins
and then I want to make this jenkins folder's user and group nfsnobody:
[dolphin@MiWiFi-R4CM-srv infrastructure]$ chown -R nfsnobody jenkins
chown: invalid user: ‘nfsnobody’
and I do not find any nfsnobody entry in /etc/passwd. What should I do to fix the invalid user: ‘nfsnobody’ problem? Should nfs-utils have added it automatically?
The nobody user is used by default now, probably since RedHat/CentOS version 8.
You can simply use
chown -R nobody jenkins
Or
Change it in /etc/idmapd.conf:
[Mapping]
Nobody-User = nfsnobody
Nobody-Group = nfsnobody
To put the changes into effect, restart the rpcidmapd service and remount the NFSv4 filesystem:
service rpcidmapd restart
mount -o remount /nfs/mnt/point
On Red Hat Enterprise Linux 6, if the above settings have been applied and UIDs/GIDs match on server and client but users are still being mapped to nobody:nobody, then clearing the idmapd cache may be required:
# nfsidmap -c
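Note that on Fedora 32 itself there is no rpcidmapd sysvinit service; assuming the current nfs-utils packaging, the systemd equivalent is:
sudo systemctl restart nfs-idmapd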

OpenfireHome - Home not found

I have an XMPP server (openfire_3.9.3) running on Ubuntu 14.04.1 LTS.
I installed Openfire by following these steps:
1. $ sudo tar -zxvf openfire_x_x_x.tar.gz
2. $ sudo mv openfire /opt
then I moved to the openfire bin directory to start Openfire:
$ cd /opt/openfire/bin
$ sudo ./openfire start
Then during setup through the admin console I always get this error:
Home not found. Define system property "openfireHome" or create and add the openfire_init.xml file to the classpath
Where do I need to set openfireHome? Or how can I fix this?
Well, it seems your user account might have a permissions issue. Can you keep Openfire in your home directory, try to run it from there, and share the results?
For me, it was a permissions issue.
I'm using the server (Openfire 4.7.0, build e020f58) on my local computer (macOS Monterey 12.1 (21C52)).
My SOLUTION is:
sudo chmod -R 777 /usr/local/openfire
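Alternatively, following what the error message itself suggests, you can define the openfireHome system property when launching the JVM manually; a sketch assuming the stock tarball layout with lib/startup.jar:
cd /opt/openfire/lib
sudo java -DopenfireHome=/opt/openfire -jar startup.jar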

Apache fails to start on Vagrant

In my Vagrant environment I have a guest Ubuntu VirtualBox VM with a LAMP stack in its default configuration.
I have my source code on the host machine in the same folder as my Vagrantfile, so on the guest Ubuntu I can access the files in the mounted /vagrant dir like this:
/vagrant
/mysite
/index.php
/Vagrantfile
Now in my Apache config I add the line:
Alias /mysite /vagrant/mysite
After reloading the config and restarting Apache, I can go to localhost:8558/mysite/index.php and it works.
The problem is that when I reload the VM with vagrant reload, it starts the Apache service before mounting the /vagrant folder, so Apache can't find the aliased dir and fails to start. I then have to start it manually.
My question is - is there a way to delay Apache start so that it starts after the mounting?
Update: As a workaround I added script to the crontab that starts apache 30 seconds after the boot as described here. But I wonder if there is a better solution.
While upstart probably is a valid option, I had several issues using it with Vagrant. I had to run several tasks as a privileged user, which I did not manage to get working with upstart.
Starting from version 1.6.0 (May 6, 2014), vagrant provides the option to run a specific provisioner every time, so also after booting a halted VM with vagrant up.
In your Vagrantfile, add:
# a file, eg after-boot.sh
config.vm.provision "shell", path: "after-boot.sh", run: "always"
# or just inline
config.vm.provision "shell", inline: "service apache2 restart", run: "always"
Note the run: "always"; this will force Vagrant to always run the provisioner. Obviously it works just as well with any other provisioning system like Chef or Puppet.
I would like to add a little to Zauberfisch's answer above.
What needed to happen was that this command needed to be run as a superuser (sudo), so this was the command that was needed:
config.vm.provision "shell", inline: "sudo service apache2 restart", run: "always"
The reason why this didn't work for you without the sudo appears to be that Vagrant tries to run the command without /usr/sbin in PATH. For me, this worked just as well:
config.vm.provision "shell", inline: "/usr/sbin/service apache2 restart", run: "always"
If upstart is installed (as on Ubuntu), Vagrant emits a "vagrant-mounted" event. See https://serverfault.com/a/568033/179583 to get the idea. In your script you can (re)start the Apache server.
Btw, I have a feeling that newer Apache versions just warn but still start even if the doc root doesn't exist. The same goes for nginx.
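To make the linked approach concrete, a sketch of such an upstart job (the file name is arbitrary; this assumes the vagrant-mounted event mentioned above):
# /etc/init/apache-on-vagrant.conf
start on vagrant-mounted
exec service apache2 restart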

libvirt and VirtualBox / Getting Started

I'm trying to get started with libvirt, using VirtualBox as the virtualization solution. I installed everything, and VirtualBox itself runs fine when using its VBoxHeadless command.
However, libvirt fails to connect to VirtualBox:
# virsh -c vbox:///session
libvir: error : could not connect to vbox:///session
error: failed to connect to the hypervisor
I could not find any hints in the libvirt documentation as to whether I have to do any domain-specific configuration before using virsh.
Does anyone have a hint? Or, even better, a tutorial that works through using libvirt, virsh, or its APIs (my later goal) from the ground up?
If you are doing this on Ubuntu, the problem is that their libvirt package is built without VirtualBox support.
You can rebuild the package with support very easily. Something like:
apt-get source -d libvirt
sudo apt-get build-dep libvirt
dpkg-source -x libvirt*dsc
Go into the libvirt directory and edit debian/rules so that instead of --without-vbox it says --with-vbox. You can add an entry to the top of debian/changelog so the package is compiled as a different version (e.g., append ~local1 to the version).
dpkg-buildpackage -us -uc -b -rfakeroot
You'll get new .debs built in the directory above. Use dpkg -i to install the relevant ones (libvirt0, libvirt-bin, and whatever else you want).
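For example (the exact file names depend on the version string your rebuild produced):
sudo dpkg -i libvirt0_*.deb libvirt-bin_*.deb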
Double-check that you have write access to /var/run/libvirt/libvirt-sock.
The socket file should have permissions similar to:
$ sudo ls -la /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 2010-08-24 14:54 /var/run/libvirt/libvirt-sock
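If the permissions look like the above, access is normally granted through group membership; adding your user to the socket's group and logging in again should be enough (the group is libvirtd here, as in the listing; some distros name it libvirt):
sudo usermod -aG libvirtd $USER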
I think it could also be helpful to increase libvirt's logging verbosity by running this in your shell:
export LIBVIRT_DEBUG=1
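For example, to capture debug output from the failing connection attempt:
LIBVIRT_DEBUG=1 virsh -c vbox:///session list --all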
There is an Ubuntu PPA for libvirt with VirtualBox support: https://launchpad.net/~cxl/+archive/ubuntu/libvirt