Create a headless Ubuntu VM on an Ubuntu server without a GUI

I need to create an Ubuntu_64 virtual machine (VM1) on our lab server, which is also an Ubuntu machine (U1) with no GUI, to host my web server tool. The goal is to make it public for anyone to use without affecting our lab server. I successfully created VM1 with VirtualBox, but now I have no idea how to SSH from U1 to VM1, or from any other computer to VM1. In fact, I'm stuck at the step of "VM1 has been successfully started".
I've looked at several guides, but most of them use the VirtualBox GUI to configure the VM.
Here are my main questions:
1) How do I get VM1's IP address?
2) How do I set the username and login for VM1? Say I run ssh user@127.0.0.1, what is the "user"?
Below are the commands I used to create VM1:
# Create and register the VM (note: --basefolder takes a directory path)
VBoxManage createvm --name VM1 --ostype Ubuntu_64 --register --basefolder `pwd`
# Enable the I/O APIC and set RAM/video memory
VBoxManage modifyvm VM1 --ioapic on
VBoxManage modifyvm VM1 --memory 1024 --vram 128
# Use bridged networking on NIC 1 (note: vboxnet0 is a host-only interface;
# bridging normally targets a physical interface such as eth0)
VBoxManage modifyvm VM1 --bridgeadapter1 vboxnet0
VBoxManage modifyvm VM1 --nic1 bridged
# Create an 80 GB disk (size is in MB) and attach it to a SATA controller
VBoxManage createhd --filename `pwd`/VM1/VM1_DISK.vdi --size 80000 --format VDI
VBoxManage storagectl VM1 --name "SATA Controller" --add sata --controller IntelAhci
VBoxManage storageattach VM1 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium `pwd`/VM1/VM1_DISK.vdi
# Attach the Ubuntu installer ISO to an IDE DVD drive
VBoxManage storagectl VM1 --name "IDE Controller" --add ide --controller PIIX4
VBoxManage storageattach VM1 --storagectl "IDE Controller" --port 1 --device 0 --type dvddrive --medium `pwd`/ubuntu-16.04.6-server-amd64.iso
# Boot from DVD first, then disk
VBoxManage modifyvm VM1 --boot1 dvd --boot2 disk --boot3 none --boot4 none
# Enable remote desktop (VRDE) on port 10001, then start headless
VBoxManage modifyvm VM1 --vrde on
VBoxManage modifyvm VM1 --vrdemulticon on --vrdeport 10001
VBoxManage startvm VM1 --type headless
Any suggestions would be appreciated!!
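For readers stuck on the same two questions: the "user" is simply the account created during the Ubuntu installation, and the guest's IP address can be queried from the host once the Guest Additions are installed in the guest. Below is a minimal sketch, assuming NAT networking with an SSH port forward instead of the bridged adapter above (run the modifyvm commands while the VM is powered off):
# Ask the Guest Additions for the guest's IPv4 address
VBoxManage guestproperty get VM1 "/VirtualBox/GuestInfo/Net/0/V4/IP"
# Or switch NIC 1 to NAT and forward host port 2222 to guest port 22
VBoxManage modifyvm VM1 --nic1 nat
VBoxManage modifyvm VM1 --natpf1 "guestssh,tcp,,2222,,22"
# Then, from U1 (or any machine that can reach U1):
ssh -p 2222 <user-created-during-install>@<address-of-U1>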

It's great that you're breaking this down and learning how VirtualBox works. However, you should know that automation frameworks exist to solve these problems. The commands you're running and the things you wish to configure, like IP addresses and usernames, can be declared in a single file that is passed to a binary in one command: for example, Vagrant by HashiCorp.
Vagrant allows you to provision a VM using Ruby syntax in a declarative Vagrantfile. I searched for an example of a Vagrant box that runs Ubuntu with a desktop environment and found the one used below.
So install Vagrant. Then run these commands:
vagrant init dmhughes/ubuntu-18.04-desktop-gui \
--box-version 0.0.1
vagrant up
Using Vagrant will let you do a lot of useful things: pass around variables, declare multiple VMs, share secrets between them, create bridged networks between their virtual network interfaces, and so on.
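As a sketch of what that looks like in practice, the Vagrantfile can declare the hostname, network, and resources in one place (the hostname, IP, and memory values below are placeholder assumptions; the box is the one from above):
# Write a minimal Vagrantfile pinning the box and declaring networking
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "dmhughes/ubuntu-18.04-desktop-gui"
  config.vm.hostname = "vm1"                                # placeholder
  config.vm.network "private_network", ip: "192.168.56.10"  # placeholder
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
  end
end
EOF
vagrant up   # create and boot the VM
vagrant ssh  # SSH in as the default 'vagrant' user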

Related

Wrong entry in limits.conf, unable to SSH to host

We have a VirtualBox (via Vagrant) environment. By mistake I made a bad entry in /etc/security/limits.conf [without having a root shell open :( ] and now I am unable to SSH in (the connection drops immediately).
We had one such scenario previously (a limits.conf broken by someone else); that time I was able to fix it using the vboxmanage guestcontrol copyto CLI to overwrite limits.conf, after which SSH was allowed. This time around, the vboxmanage CLI also hangs.
I tried opening the VM in the GUI, went to the console, and tried a few options, but could not get to single-user mode.
Since you already tried the vbox CLI commands and they hang, it means even VirtualBox cannot access the system or get a shell open.
In this case you will have to bring up an Ubuntu VM and use the qemu-nbd module to fix this. The steps are given below.
Bring up a very simple Ubuntu VM using HashiCorp's bionic64 box on the same host machine by executing the following steps:
mkdir bionic
cd bionic
vagrant box add hashicorp/bionic64
vagrant init
Open the Vagrantfile and change config.vm.box = "base" to config.vm.box = "hashicorp/bionic64".
Also mount the host folder where the VM's .vdi file is located by adding the following line to the Vagrantfile (replace the path with the correct one for your system; here /nbd2 will be created in the Ubuntu guest and will contain the files, including the .vdi):
config.vm.synced_folder "/home/topcat/VirtualBox\ VMs/your_vm", "/nbd2"
Now do vagrant up
Once the machine boots up:
vagrant ssh # SSH in as the vagrant user
sudo su # become root
apt-get update # refresh the apt cache
apt-get install qemu # on some releases, qemu-nbd ships in the separate qemu-utils package
modprobe nbd # load the nbd kernel module; exits silently on success
qemu-nbd -c /dev/nbd1 "/nbd2/box-disk001.vdi" # attach the .vdi as a block device; adjust the path to whatever you gave in config.vm.synced_folder
mkdir -p /mnt/vdi-boot
mount /dev/nbd1p1 /mnt/vdi-boot # mount the first partition of the guest disk
cd /mnt/vdi-boot/etc/security # this folder has the files exactly as they are in your VM
touch limits.conf # if the file is already there, delete it first and recreate it empty
chmod 644 limits.conf
chown root:root limits.conf # restore the default permissions and ownership
Open the /mnt/vdi-boot/etc/nsswitch.conf file (note: nsswitch.conf lives in /etc, not /etc/security) and check that the following three lines are present:
passwd: files
shadow: files
group: files
umount /mnt/vdi-boot # unmount the guest filesystem
qemu-nbd -d /dev/nbd1 # disconnect the .vdi from the nbd device
Exit the helper VM and start your original VM.
Open another shell and try to SSH. It should go through fine this time.
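As an aside, when the Guest Additions are still responsive, the guestcontrol route mentioned in the question looks roughly like this (a sketch; the VM name, credentials, and host path are placeholders, and the exact flags vary across VirtualBox versions):
# Push a known-good limits.conf from the host into the guest
VBoxManage guestcontrol "your_vm" copyto \
    --username youruser --password yourpass \
    /path/on/host/limits.conf /etc/security/limits.conf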

Is it possible to use nmcli in WSL and create a wifi hotspot?

I am trying to set up a wifi hotspot on my laptop in Ubuntu 18 running as the Windows Subsystem for Linux (WSL), terminal only.
Following basic tutorials, I wanted to run the following command:
~$ nmcli device wifi hotspot con-name my-hotspot ssid my-hotspot band bg password 123456
Error: Could not create NMClient object: Could not connect: No such file or directory.
Trying to start NetworkManager also fails:
~$ sudo service network-manager start
* Starting network connection manager NetworkManager [ OK ]
~$ sudo service network-manager status
* NetworkManager is not running
I tried NetworkManager again after installing network-manager from a PPA:
sudo add-apt-repository ppa:nilarimogard/webupd8
Is there another way to create a wifi hotspot from Ubuntu running as WSL? Or does it not have the right access to the Windows host to pull it off?
At this time, I don't believe it is possible, according to https://github.com/microsoft/WSL/issues/2438. WSL was designed to ignore calls that set interface properties, so nmcli and other commands that change interface properties do not work. They marked it as a bug and will fix it in the future.
"WSL currently ignores the call (which was intentional at the time of the design) to set interface properties" - sunilmut
I hope this helps in some way :)

Is it possible to run VM with ppc64le architecture on a host machine with x86_64 architecture?

I want to test some use cases which need to run on the ppc64le architecture, but I don't have a host machine with it.
My host system is x86_64. Is it possible to run a ppc64le VM on an x86_64 host machine?
Absolutely! The only caveat is that since you're not running natively, the virtual machine needs to emulate the target (ppc64le) instruction set. This can be much slower than running native instructions.
The way to do this will depend on which tools you're using to manage your virtual machine instances. For example, virt-manager lets you select the architecture type when creating a new virtual machine. If you set this to ppc64el, you'll get a ppc64el machine. Other options (like disk and network devices) can be set just like for native VMs.
If you're not using any specific VM management tools, the following qemu invocation will get a ppc64el machine going easily:
# -M pseries : use the pseries machine model
# -m 4G      : give the guest 4 GB of RAM
# -hda ...   : attach the Ubuntu installer as a virtual disk
qemu-system-ppc64le \
    -M pseries \
    -m 4G \
    -hda ubuntu-18.04-server-ppc64el.iso
Depending on your usage, you may want to use the following options too:
-nographic -serial pty to use a text console instead of an emulated graphics device. qemu will print the console pty on startup - something like /dev/pts/X. Run screen /dev/pts/X to access it.
-M powernv -bios skiboot.lid to use the non-virtualised ppc64el machine model, which is closer to current OpenPOWER hardware. The skiboot.lid firmware may be included in your distro's install of qemu.
-drive, -device and -netdev to configure virtual disks and networking. These work in the same manner as for x86 VMs on qemu.
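Putting a few of those together, a fuller invocation might look like the sketch below (the disk image and ISO names are placeholders):
# Text-mode ppc64el guest with a virtio disk and user-mode networking
qemu-system-ppc64le \
    -M pseries \
    -m 4G \
    -nographic -serial pty \
    -drive file=ppc64el-disk.qcow2,if=virtio \
    -netdev user,id=net0 \
    -device virtio-net-pci,netdev=net0 \
    -cdrom ubuntu-18.04-server-ppc64el.iso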
I hosted a centos7-ppc64le guest on my x86_64 machine (OS: RHEL 7), using qemu + virt-install. First, install qemu:
wget https://download.qemu.org/qemu-3.1.0-rc1.tar.xz
tar xvJf qemu-3.1.0-rc1.tar.xz
cd qemu-3.1.0-rc1
./configure
make
make install
After installation, check that qemu-system-ppc64le is available from the command line. Then install virt-manager, virt-install, virt-viewer, and libvirt for managing the VMs. Then I started the VM as follows:
virt-install --name centos7-ppc64le \
--disk centos7-ppc64le.qcow2 \
--machine pseries \
--arch ppc64 \
--vcpus 2 \
--cdrom CentOS-7-ppc64le-Minimal-1804.iso \
--memory 2048 \
--network=bridge:virbr0 \
--graphics vnc
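Once the install kicks off, the console can be reached with the usual libvirt tooling (assuming the default libvirt connection; the serial console only works if one is configured in the guest):
virt-viewer centos7-ppc64le    # graphical (VNC) console
virsh console centos7-ppc64le  # serial console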

Running a command on a vagrant box via ssh keeps asking for a password

I am trying to run a command on/in a vagrant box using ssh.
According to the documentation, vagrant ssh -c <command> should connect to the machine via SSH and run the command.
I tried this using a simple Ubuntu Server 16.04 box, but every time I am prompted for a password. Simply running vagrant ssh lets me connect without providing a password.
I used the following Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "osslack/ubuntu-server-16.04-no-unattended-upgrades"
  config.vm.box_version = "1.0"
end
I tried to test it with the following command: vagrant ssh -c "ls".
How can I run a command via ssh without being prompted for a password?
So, after playing around with it some more, I found a workaround/solution.
When using vagrant ssh, anything after -- is passed directly to ssh.
So running vagrant ssh -- ls will tell ssh to run the command ls.
This does not prompt for a password.
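The same mechanism passes arbitrary ssh options through as well; for example (the command itself is an arbitrary illustration):
vagrant ssh -- -t 'sudo whoami'   # -t forces a pseudo-tty, useful for interactive or sudo commands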

Vagrant can't connect to the VM

EDIT6: submitted an official bug for the path issue: https://github.com/mitchellh/vagrant/issues/7512
EDIT5: When I do vagrant destroy and vagrant up, everything works easily. But when I turn the VM off and back on (you have to restart your PC some day), it won't work again. Either the vagrant up sequence for an already-created VM is bugged, or VirtualBox is. Destroying and rebuilding the VM is not an option, because the DB migration and everything takes ~30 minutes at least. Either way, DON'T USE VAGRANT ON WINDOWS 10.
EDIT4: I downgraded to VirtualBox 5.0.10, which fixed the wrong-path problem, but the error Command not in installer persists.
EDIT3: When I ran vagrant up --debug, I found out that it cycles. It gets to the line
INFO subprocess: Starting process: ["C:/Program Files/Oracle/VirtualBox/VBoxManage.exe", "showvminfo", "8aaee3a3-806f-48ad-9928-91e2b7baba5d", "--machinereadable"]
and then it does
INFO subprocess: Command not in installer, restoring original environment...
The path to the VM uses forward slashes instead of backslashes. Is this a bug? Is there a way to set the path to the VM manually? I have put C:\Program Files\Oracle\VirtualBox in my PATH.
EDIT2: DON'T USE VAGRANT ON WINDOWS 10; it's bugged in many ways, and VMs are not optimized for Windows 10 yet, so you'll get a bunch of issues that you won't be able to solve. Also tried Otto from HashiCorp; not working either. RIP.
EDIT: okay, so when I do vagrant destroy and vagrant up, after 10 minutes of installation it works like a charm. But after I restart my PC or log out in any way, Vagrant is unable to connect to the VM, neither with a private key nor with login/password. Is that a bug?
When I do vagrant up, the VM starts properly, but Vagrant is unable to connect. All it says is Warning: Remote connection disconnect. Retrying...
When I try to connect via vagrant ssh, I get only ssh_exchange_identification: read: Connection reset by peer. When I check the VM's GUI, it is waiting for login, and when I log in with the default login/password it works as intended, so the problem must be Vagrant not being able to connect to the VM.
I tried:
checking that my PC supports virtualization and that it is enabled
trying to connect with a password instead of a key
configuring networking adapters
turning off the firewall
a clean reinstall
I am using Vagrant 1.8.1 and VirtualBox 5.0.20 on Windows 10.
This is my Vagrantfile:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provider :virtualbox do |vb|
    vb.memory = 2048
    vb.gui = true
    vb.cpus = 2
  end
  config.vm.network :private_network, type: "dhcp"
  config.vbguest.auto_update = false
  config.ssh.insert_key = false
  config.vm.provision :shell, path: "bootstrap.sh"
end
[Edit 17/06/2016]
The problem should be resolved with VirtualBox 5.0.22.
https://www.virtualbox.org/wiki/Changelog
https://www.virtualbox.org/ticket/15412
[Original answer below]
In contrast to my earlier answer, I now don't think that I encountered the same problem you have described here. However, I still think that you encountered a different variation of the problem.
From feedback received from VirtualBox development (https://www.virtualbox.org/ticket/15412), I learned that VirtualBox 5.0.20 includes changes to the NAT forwarding rules to address other bugs. When a VM is saved and started again, VirtualBox now disconnects the virtual network cable for 5 seconds. This is supposed to trigger the DHCP client to request a new lease; that information is in turn used by VirtualBox to infer the IP address, and NAT should work.
In my particular case I hit this problem with Ubuntu 16.04 as the guest VM, whereas with Ubuntu 14.04 it works. This indicates to me that the DHCP client on Ubuntu 14.04 requests a new lease after the cable was disconnected by VirtualBox, whereas this is not the case with Ubuntu 16.04.
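If that is the cause, a quick check is to force a lease renewal from the guest console (a sketch; eth0 is a placeholder for the guest's NAT interface name):
sudo dhclient -r eth0   # release the current DHCP lease
sudo dhclient eth0      # request a new one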
In order to verify that you encounter the same problem, I wonder if you could run the test below and let me know.
Log in to the Trusty VM console (i.e. the one displayed when you run the VM in the foreground)
Install arping (sudo apt-get -y install arping)
Create the script sendARP.sh below:
#!/bin/bash
# Find the first Ethernet interface and its IPv4 address, then send a
# single ARP for that address so the host-side tables get refreshed
IFACE=$(ifconfig | grep 'Link encap:Ethernet' | awk '{print $1}')
IP=$(ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1')
arping -c 1 -i $IFACE $IP
Make it executable: chmod +x sendARP.sh
Save the Trusty VM's state (vagrant suspend)
Start the Trusty VM from its saved state (vagrant up)
Log in to the Trusty VM console again
Run the script: sudo ./sendARP.sh
Test whether you can connect via SSH from the remote location / VirtualBox host
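Independently of the test above, it can help to confirm exactly how Vagrant is trying to connect; vagrant ssh-config prints the host, port, user, and key it uses (the port and key path below are typical defaults, not guaranteed):
vagrant ssh-config
# then try the same connection by hand, e.g.:
ssh -p 2222 -i .vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1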
Bugs:
https://github.com/mitchellh/vagrant/issues/7306
https://www.virtualbox.org/ticket/15412