Programmatically remounting USB hubs after boot on Ubuntu 10.04

I'm running Ubuntu 10.04 with kernel 2.6.32-28-server.
There are four USB hubs connected to the server, each with four USB ports. On boot only two or three of them get mounted; if I plug them in after boot, all of them mount just fine. On a different server with a more powerful power supply the hubs mount fine on boot, which makes me think there isn't enough power reaching the hubs during boot. I don't have the luxury of replugging them every time, so I have to figure out a way to remount the USB hubs programmatically sometime after boot.
So far I have tried suggestions like the following.
Removing and reloading usb_storage:
sudo modprobe -w -r usb_storage; sudo modprobe usb_storage
I've also tried:
sudo modprobe -vr ehci_hcd
sudo modprobe -v ehci_hcd
But this just results in:
FATAL: Module ehci_hcd is builtin
Getting really desperate, I copied some of the files from /sys/devices (after successfully mounting the hubs by replugging them post-boot) into a separate folder. After rebooting the system I tried to copy those files back into /sys/devices, but even as root I couldn't copy them.
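One technique that may be worth trying instead is forcing a re-enumeration by unbinding and rebinding the hub from sysfs, which simulates a replug without touching the hardware. A minimal sketch, assuming a device ID of 1-1 (hypothetical; read the real IDs from the listing first):

# List the device IDs currently bound to the generic usb driver
ls /sys/bus/usb/drivers/usb/
# Unbind and rebind the hub (replace 1-1 with your hub's actual ID)
echo '1-1' | sudo tee /sys/bus/usb/drivers/usb/unbind
sleep 2
echo '1-1' | sudo tee /sys/bus/usb/drivers/usb/bind

Put in a boot script such as /etc/rc.local, this could replay the replug automatically after every boot.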

Related

Ubuntu Server Backup and Restore via tar

I'm trying to learn how to back up and restore my Ubuntu Server via tar so that I know I have a safe system. After I untar and reboot, I have several issues, but they seem to be caused by a read-only file system. The source and destination servers both run Ubuntu Server on the same version, 18.04.5 LTS. The source server is a VPS with 6 GB RAM and 4 vCPUs. The destination server is a VM on my FreeNAS machine with 6 GB RAM and 2 vCPUs.
The primary applications that need to work are my Graylog server and Nagios server. I've mostly followed the instructions from Ubuntu's documentation.
First, my tar command is:
sudo tar -c --use-compress-program=pigz -f backup.tar.gz --exclude=/backup.tar.gz --exclude=/dev --exclude=/usr --exclude=/sbin --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/run --exclude=/mnt --exclude=/media --exclude=/lost+found --exclude=/home/*/.cache --exclude=/home/*/.gvfs --exclude=/home/*/.local/share/Trash --exclude=/var/log --exclude=/var/cache/apt/archives --exclude=/usr/src/linux-headers* --one-file-system /
I use pigz to utilize the VPS's 4 vCPUs and shorten the backup time. I transfer this to my VM, which has a fresh copy of Ubuntu Server 18.04.5, and untar with:
sudo tar -xvpzf backup.tar.gz -C / --numeric-owner
After restoring and rebooting, I get the following as soon as I boot:
Unable to setup logging. [Errno 30] Read-only file system: '/var/log/landscape/sysinfo.log'
run-parts: /etc/update-motd.d/50-lanscape-sysinfo exited with return code 1
mktemp: failed to create file via template '/var/lib/update-notifier/tmp.XXXXXXXXXX': Read-only file system
run-parts: /etc/update-motd.d/95-hwe-eol exited with return code 1
/usr/lib/update-notifier/update-motd-fsck-at-reboot: 33: /usr/lib/update-motd-fsck-at-reboot: cannot create /var/lib/update-notifier/fsck-at-reboot: Read-only file system
I do see that some areas of the system work like the original source: my SSH port changed, the hostname changed, etc. But I get the errors above, and my Graylog and Nagios servers do not work.
So I'm wondering where I went wrong in my process; any help would be appreciated. The source is a live server with backups, so I'm safe there. I'm just making sure I have my ducks in a row for the future.
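One thing worth checking in this situation (a sketch, not a confirmed diagnosis): after a full-filesystem restore, /etc/fstab still contains the source machine's filesystem UUIDs, and a UUID mismatch is a classic cause of the root filesystem coming up read-only. Comparing the restored fstab against the destination disk would look like:

# Compare the UUIDs fstab expects with what the destination disk actually has
cat /etc/fstab
sudo blkid
# If they differ, remount read-write so fstab can be edited, then fix the UUIDs
sudo mount -o remount,rw /
sudoedit /etc/fstab

After correcting the UUIDs (or switching fstab to device names), a reboot should come up read-write again.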

Is it possible to use nmcli in WSL to create a wifi hotspot?

I am trying to set up a wifi hotspot on my laptop in Ubuntu 18.04 running under the Windows Subsystem for Linux (WSL), terminal only.
Following basic tutorials, I tried to run this command:
~$ nmcli device wifi hotspot con-name my-hotspot ssid my-hotspot band bg password 123456
Error: Could not create NMClient object: Could not connect: No such file or directory.
Trying to start the networkmanager also fails:
~$ sudo service network-manager start
* Starting network connection manager NetworkManager [ OK ]
~$ sudo service network-manager status
* NetworkManager is not running
I tried NetworkManager again after installing network-manager from this PPA:
sudo add-apt-repository ppa:nilarimogard/webupd8
Is there another way to create a wifi hotspot from Ubuntu running under WSL? Or does it not have the right access to the Windows host to pull it off?
At this time, I don't believe it is possible, according to https://github.com/microsoft/WSL/issues/2438. WSL was designed to ignore calls that set interface properties, so nmcli and other commands that change interface properties do not work. Microsoft has marked it as a bug and will fix it in the future.
"WSL currently ignores the call (which was intentional at the time of the design) to set interface properties" - sunilmut
I hope this helps in some way :)
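As a possible workaround (a sketch from the Windows side, not something WSL itself can do): if the wireless driver still supports Windows' legacy hosted-network feature, the hotspot can be created in an elevated Command Prompt on the host instead. Note that a WPA2 key must be at least 8 characters, so the 6-character password from the question would be rejected either way:

netsh wlan set hostednetwork mode=allow ssid=my-hotspot key=12345678
netsh wlan start hostednetwork
netsh wlan show hostednetwork

Newer Windows 10 builds expose the same thing through Settings > Network & Internet > Mobile hotspot, which works even where hosted-network driver support was dropped.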

NFS client under WSL - mount.nfs: No such device

I am getting the following error trying to mount an NFS export.
sudo mount 192.168.1.175:/mnt/nas /mnt/c/nas
mount.nfs: No such device
Any ideas on how to fix this?
As of October 2020: you can mount NFS with WSL2, but WSL2 itself requires hardware virtualization to be available. See here: https://github.com/microsoft/WSL/issues/5838
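Under WSL2 that would look something like the sketch below (assuming the server and export path from the question, and a WSL2 kernel built with NFS client support):

# Install the NFS client utilities inside the WSL2 distro
sudo apt install nfs-common
# Create a mount point and mount the export
sudo mkdir -p /mnt/nas
sudo mount -t nfs 192.168.1.175:/mnt/nas /mnt/nas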
If, like me, you are stuck on WSL1, you can work around this issue by mapping the drive in Windows. Use the Map Network Drive feature and create a drive letter for your NFS mount, e.g. G:
Now in WSL you can mount that drive letter:
sudo mkdir /mnt/g
sudo mount -t drvfs G: /mnt/g
from: How to Mount Windows Network Drives in WSL
I have not tested the access speed to a drive mapped through to WSL like this, but I would expect it to be slow!
The error indicates the NFS kernel modules are not loaded correctly. Also verify whether the exported path "/mnt/nas" actually exists on the server "192.168.1.175".
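A quick way to check the export from the client side (assuming showmount from the nfs-common package is available) is:

# List the exports the server is actually offering
showmount -e 192.168.1.175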
First of all, understand that NFS is a TCP/IP protocol, so it needs one client and one server. Here the purpose is sharing a dir on Windows or WSL with another Linux machine, which means Windows/WSL is the server. Everyone is right that NFS doesn't work from inside WSL, but we can run an NFS server on Windows instead and configure it to share dirs that are also visible from WSL, e.g. /mnt/d/WORK/tftpserverDir; after that we can mount successfully. These are my tips:
1. Make an NFS server on Windows. I downloaded one from https://www.hanewin.net/nfs-e.htm
2. Configure the shared dir in the exports file:
D:\WORK\tftpserverDir -name:nfsroot -umask:000 -public -mapall:0
3. Mount the shared dir on your destination Linux box:
mount -t nfs -o nolock -o tcp -o rsize=32768,wsize=32768 172.10.10.80:/nfsroot /sdcard/mnt

Vagrant can't connect to the VM

EDIT6: Submitted an official bug for the path issue: https://github.com/mitchellh/vagrant/issues/7512
EDIT5: When I do vagrant destroy and vagrant up, everything works easily. But when I turn the VM off and back on (you have to restart your PC some day), it won't work again. Either the vagrant up sequence when the VM is created is bugged, or VirtualBox is bugged. Destroying and rebuilding the VM is not an option, because the DB migration and everything takes ~30 minutes at least. Either way, DON'T USE VAGRANT ON WINDOWS 10.
EDIT4: I downgraded to VirtualBox 5.0.0.10, which fixed the wrong-path problem, but the error Command not in installer persists.
EDIT3: When I ran vagrant up --debug, I found out that it cycles. It gets to the line
INFO subprocess: Starting process: ["C:/Program Files/Oracle/VirtualBox/VBoxManage.exe", "showvminfo", "8aaee3a3-806f-48ad-9928-91e2b7baba5d", "--machinereadable"]
and then it does
INFO subprocess: Command not in installer, restoring original environment...
The path to the VM uses forward slashes instead of backslashes. Is this a bug? Is there a way to manually set the path to the VM? I have put C:\Program Files\Oracle\VirtualBox in my PATH.
EDIT2: DON'T USE VAGRANT ON WINDOWS 10, it's bugged in many ways, and the VM side is not optimized for Win10 yet; you'll get a bunch of issues that you won't be able to solve. Also tried Otto from HashiCorp, not working either. Rip.
EDIT: Okay, so when I do vagrant destroy and vagrant up, after 10 minutes of installation it works like a charm. But after I restart my PC or log out in any way, Vagrant is unable to connect to the VM, neither with a private key nor with login/password. Is that a bug?
When I do vagrant up, the VM starts properly, but Vagrant is unable to connect. All it says is Warning: Remote connection disconnect. Retrying...
When I try to connect via vagrant ssh, I get only ssh_exchange_identification: read: Connection reset by peer. When I check the GUI of the VM, it is waiting for login, and when I log in with the default login/password it works as intended, so the problem must be Vagrant not being able to connect to the VM.
I tried:
checking if my pc supports virtualization and checking if it is on
trying to connect with password instead of a key
configuring networking adapters
turning off firewall
clean reinstall
I am using Vagrant 1.8.1 and VirtualBox 5.0.20 on Windows 10.
This is my vagrant file:
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provider :virtualbox do |vb|
    vb.memory = 2048
    vb.gui = true
    vb.cpus = 2
  end

  config.vm.network :private_network, type: "dhcp"
  config.vbguest.auto_update = false
  config.ssh.insert_key = false
  config.vm.provision :shell, path: "bootstrap.sh"
end
[Edit 17/06/2016]
The problem should be resolved with VirtualBox 5.0.22.
https://www.virtualbox.org/wiki/Changelog
https://www.virtualbox.org/ticket/15412
[Original answer below]
In contrast to my earlier answer, I now don't think that I'm encountering exactly the same problem you have described here. However, I still think that you are encountering a different variation of the same problem.
Based on feedback received from VirtualBox development (https://www.virtualbox.org/ticket/15412), I learned that VirtualBox 5.0.20 includes changes to the NAT forwarding rules to address other bugs. When a VM is saved and started again, VirtualBox now disconnects the virtual network cable for 5 seconds. This is supposed to trigger the guest's DHCP client to request a new lease; VirtualBox in turn uses that information to infer the IP address, and NAT should then work.
In my particular case I encounter this problem with Ubuntu 16.04 as the guest VM, whereas with Ubuntu 14.04 it works. This indicates to me that dhclient on Ubuntu 14.04 does request a new lease after the cable is disconnected by VirtualBox, whereas this is not the case on Ubuntu 16.04.
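For reference, forcing a DHCP renewal inside the guest is one quick manual way to test the stale-lease theory (a sketch; eth0 is an assumption, substitute the guest's actual interface name):

# Release the current lease, then request a fresh one
sudo dhclient -r eth0
sudo dhclient eth0

If SSH from the host starts working right after that, the stale lease is the culprit.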
In order to verify that you encounter the same problem, I wonder if you could run the below test and let me know.
Login to the Trusty VM console (i.e. the one that you get displayed when you run the VM in the foreground)
Install 'arping' (sudo apt-get -y install arping)
Create the below script 'sendARP.sh'
#!/bin/bash
# Find the first Ethernet interface listed by ifconfig
IFACE=$(ifconfig | grep 'Link encap:Ethernet' | awk '{print $1}')
# Extract the interface's IPv4 address, ignoring the loopback address
IP=$(ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1')
# Send a single ARP ping for the guest's own IP so VirtualBox relearns it
arping -c 1 -i $IFACE $IP
Make it an executable 'chmod +x sendARP.sh'
Save the Trusty VM (vagrant suspend)
Start your Trusty VM from saved state (vagrant up)
Login to the Trusty VM console (i.e. the one that you get displayed when you run the VM in the foreground)
Run the script 'sudo ./sendARP.sh'
Test whether you can connect via SSH from the remote location / VirtualBox host
Bugs:
https://github.com/mitchellh/vagrant/issues/7306
https://www.virtualbox.org/ticket/15412

How do I check if a USB drive is bootable?

I just created a USB drive and would like to check that it's correctly bootable without rebooting my actual computer. How can I do this?
Under Linux, you first have to know which device path the drive got, for example with dmesg | tail after insertion; let's assume it's /dev/sdb.
Qemu
sudo qemu -hda /dev/sdb, or sudo qemu-system-x86_64 -hda /dev/sdb for 64-bit.
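On newer systems the bare qemu binary may not exist and -hda is legacy syntax; an equivalent invocation (a sketch, assuming KVM is available for speed) would be:

sudo qemu-system-x86_64 -enable-kvm -m 1024 -drive file=/dev/sdb,format=raw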
VirtualBox
VBoxManage internalcommands createrawvmdk -filename ~/usb.vmdk -rawdisk /dev/sdb
sudo chmod 666 /dev/sdb*
then add ~/usb.vmdk as a disk in a VM and boot from it.
Don't hesitate to add other ways to do this.
While it won't show whether the stuff on the filesystem is capable of handling the whole boot process, you can check the boot flag with fdisk -l <drive> from a shell on a reasonably good *nix. (The flag essentially tells the BIOS whether it should try to boot the thing or not.)
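For example (the sizes and partition type below are illustrative assumptions), a bootable MBR stick shows an asterisk in the Boot column:

sudo fdisk -l /dev/sdb
# Device     Boot Start      End  Sectors  Size Id Type
# /dev/sdb1  *     2048 15728639 15726592  7.5G  c W95 FAT32 (LBA)

Note this only applies to MBR/BIOS booting; GPT/UEFI drives don't use the boot flag, so the Qemu test above is more conclusive there.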