packer vmware iso local and on esxi - ssh

I am curious to find out why Packer is failing to get SSH access when building on an ESXi server. The same build works just fine locally with vmware_fusion.
Since JSON does not display nicely directly here on SF, here is a link to a gist with the builder configuration: https://gist.github.com/geoHeil/5acf06cb0f3afadfa347d437c2695a7c
When running
packer build -var-file variables.json -only=vmwarevmwareRemote template.json
the kickstart file is loaded and the OS is configured and installed. However, with ESXi as the remote builder, the build seems to be stuck waiting for SSH to become available.
I noticed that /var/log/auth.log keeps logging the same command over and over:
2017-02-08T17:33:20Z sshd[94210]: User 'root' running command 'esxcli --formatter csv network vm list\n'
2017-02-08T17:33:25Z sshd[94210]: User 'root' running command 'esxcli --formatter csv network vm list\n'
Executing this command manually shows
esxcli --formatter csv network vm list
Name,Networks,NumPorts,WorldID,
ubunu-test,"VM Network,",1,87986,
someOther,"VM Network,",1,84833,
What could be wrong here?
Edit
Packer version is the latest, 0.12.2; ESXi is 6.5.
Edit 2
When applying the suggestion of setting a network, the same problem persists. But now I see two commands in the logs:
[root@vm-bd-dev:/var/log] tail -f auth.log
2017-02-09T09:05:56Z sshd[111376]: User 'root' running command 'esxcli --formatter csv network vm list\n'
2017-02-09T09:05:56Z sshd[111376]: User 'root' running command 'esxcli --formatter csv network vm port list -w 111433\n'
The second (new) one has the following output:
ActiveFilters,DVPortID,IPAddress,MACAddress,PortID,Portgroup,TeamUplink,UplinkPortID,vSwitch,
,,0.0.0.0,00:0c:29:47:d5:3d,33554450,VM Network,vmnic2,33554437,vSwitch0,

You probably need some more vmx_data settings for the network, something like:
"vmx_data": {
"ethernet0.networkName": "VM Network",
"ethernet0.present": "true",
"ethernet0.virtualDev": "vmxnet3",
"ethernet0.startConnected": "true",
"ethernet0.addressType": "generated"
}

Switching the kickstart network line to something that is not hard-coded to a specific device, like
network --bootproto=dhcp --ipv6=auto --activate
solved the problem for me.
Apparently different interface names (not eth0) were available in the VM on ESXi.
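For what it's worth, if you want to see what the interfaces are actually called inside the installed guest (with vmxnet3 on ESXi they typically show up as something like ens192 rather than eth0), a quick check from inside the VM is:
ip -o link show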

Related

Minikube on Windows and HyperV: Stuck on prompt "minikube login"

I'm "extremely" new to Kubernetes, and I wanted to try it out on my local machine, which is running Windows 10 along with HyperV. I saw that minikube is used for local development, and I was able to find in on Chocolatey, so I installed it using that:
choco install minikube -y
(I think this also installs kubectl)
The problem I have is that I'm not able to start it; I'm running the following command:
minikube start --vm-driver=hyperv
I have an external switch configured in HyperV (I found it as a suggestion somewhere), but when I run the command, it's stuck in Creating VM ...
I thought maybe it would give me a clue if I looked at the VM created in HyperV, and when I open its console, I see a "minikube login" prompt.
So, it seems that it's waiting for input, and that's why it's stuck! I tried searching for the problem, but to no avail.
I would appreciate any help
PS: It seems to me that if I wait long enough, the following message appears on the console:
Temporary Error: provisioning: error getting ssh client: Error dialing
tcp via ssh client: ssh: handshake failed: ssh: unable to authenticate,
attempted methods [none publickey], no supported methods remain
So, somehow by chance, I think I found how to resolve the issue.
First of all: the fact that the VM displays that prompt (minikube login) seems to be normal, and it does NOT prevent minikube start from succeeding.
To resolve the issue, this is what I did:
Delete ~/.kube directory
Delete ~/.minikube directory (in case it exists)
The MOST IMPORTANT step: stop/start the Hyper-V Virtual Machine Management Windows service (see the commands below)
These steps seem to have solved the issue for me.
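For the service restart step, an elevated command prompt and the service's short name should do it; as far as I know the Hyper-V Virtual Machine Management service is registered as vmms, but verify the name in services.msc if this fails:
net stop vmms
net start vmms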
PS: I used this command to start minikube and enable verbose logging:
minikube start --vm-driver hyperv -v 7 --alsologtostderr
Farzad, what resources have you used for setting up minikube? Can you please clarify what you mean by "unable to start"? Are the regular kubectl commands working?
For example, kubectl get nodes? That is, of course, in case the steps below don't help you.
The screenshot you shared shows a running VM:
Minikube runs a single-node Kubernetes cluster inside a VM on your
laptop for users looking to try out Kubernetes or develop with it
day-to-day.
You mentioned that you've created the vSwitch; you should be using the flag that points minikube at that external vSwitch:
minikube start --vm-driver hyperv --hyperv-virtual-switch "vSwitch name"
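If you are not sure about the exact switch name, you can list the switches from an elevated PowerShell first (Get-VMSwitch comes with the Hyper-V module):
Get-VMSwitch | Select-Object Name, SwitchType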
You also mentioned choco: did you install kubernetes-cli (you did not mention it in the question)? That might be the reason why your kubectl commands do not work (although it seems the newer version of choco install minikube downloads kubectl as well):
kubectl is a command line interface for running commands against
Kubernetes clusters
At this moment I recommend stopping the minikube VM:
minikube stop
Delete the cluster
minikube delete
Sometimes the regular minikube stop and minikube delete do not work, so you might have to manually turn off the minikube VM in Hyper-V; then I recommend going to c:\users\%username%\ and deleting the .kube and .minikube directories.
Use cuninst minikube
Restart and install again as specified in the minikube documentation:
choco install minikube
choco install kubernetes-cli
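After the reinstall, a start-and-verify sequence might look like the following; "ExternalSwitch" is only a placeholder for whatever you named your external vSwitch:
minikube start --vm-driver hyperv --hyperv-virtual-switch "ExternalSwitch" -v 7 --alsologtostderr
kubectl get nodes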
As for the error you mentioned, let's try to run the cluster properly, and if this persists, we will take care of it.
Try this:
kubectl config use-context minikube
I encountered the same issue. The reason was that I chose the wrong disk file to start my VM after creating it in VirtualBox.
This solved my issue.
minikube delete
minikube start --vm-driver hyperv -v 7 --alsologtostderr

virsh console hangs whenever I connect to Virtual Machine

Whenever I try to connect to a VM using virsh console <vm name>, my screen hangs and displays:
Connected to domain <vm name>
Escape character is ^]
I have found many solutions on the internet, but nothing has worked for me, and I am not even able to find the /etc/init directory, as CentOS 7 has a different directory structure.
I need the /etc/init directory to create a script that I found on the internet as a solution.
I am using only an SSH connection, no GUI, and I do not have any access to the physical machine.
I think you should start a getty on a console (e.g. ttyS0).
For example, on my Debian 8 I enable it with systemd:
systemctl enable getty@tty1.service
Enable Serial Console on CentOS/RHEL 7
On the virtual machine, add 'console=ttyS0' at the end of the kernel lines in the /boot/grub2/grub.cfg file:
grubby --update-kernel=ALL --args="console=ttyS0"
Note: Alternatively, you can edit the /etc/default/grub file, add console=ttyS0 to the GRUB_CMDLINE_LINUX variable, and set the serial terminal options:
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
Then regenerate the config with:
grub2-mkconfig -o /boot/grub2/grub.cfg
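On the guest side (CentOS 7 is systemd-based), you may also need a getty listening on that serial port; a minimal sketch:
systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service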
I had the same issue right after virt-install, and then again when trying to connect to the guest. I tried all the suggested solutions but none of them helped. Then I realized that I had forgotten to install KVM. A simple 'yum -y install kvm' resolved the issue.

"virsh list" command not showing VM created by "qemu-system-x86_64" command

I created a VM using the qemu-system-x86_64 command. The VM is up and running; I can access it, and it shows up in "ps -ef | grep qemu-system-x86_64".
But if I try to list the VM using the "virsh list" command, I do not see it there. Could you please point out what the reason could be?
Why is the "virsh list" command not able to list VMs created with the "qemu-system" command? I thought that virsh is an application that uses libvirt to access KVM/Linux virtualization capabilities, so even if a VM is created by some other method, virsh should still be able to query KVM for the VMs already running on the host.
qemu-system-x86_64 is the backend that virsh/libvirt uses to start a VM, but it does not itself depend on libvirt. When you launch it directly, the running instance is not registered in virsh/libvirtd metadata, which is why virsh list does not show it.
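In other words, virsh only lists domains that libvirt knows about. To make the VM show up there, create it through libvirt (for example with virt-install) or register it from a domain XML; a rough sketch, where guest.xml is just a placeholder for your own domain definition:
virsh define guest.xml
virsh start mydomain    # "mydomain" = the <name> set inside guest.xml
virsh list --all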

Crashplan on FreeNAS missing /var/lib/crashplan/.ui_info

So I have spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server. I have found lots of tutorials on how to do this. However, the fact is that I'm missing the .ui_info file on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try and find the elusive .ui_info file.
I've tried creating it manually with information copied from a desktop PC, but that does not get my CrashPlan Pro app to connect to the CrashPlan server service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
Crashplan 3.6.3_1 Plugin
The CrashPlan remote access behaviour changed several times during the last updates; however, with version 3.6.3_1 you should find the .ui_info file in
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan has updated itself; please check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want your Crashplan to update itself anyway. If the update process produces an error related to bash, please run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
And restart crashplan while checking the log output with the tail -f command from above:
service crashplan restart
Once you finally reach a recent version (>4.4.1), it's time to connect to CrashPlan remotely.
The only change necessary on the server for the easiest method (without an SSH tunnel) is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml:
<serviceUIConfig>
<serviceHost>0.0.0.0</serviceHost>
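If you prefer doing that edit from the shell inside the jail, something along these lines should work; I'm assuming the tag currently contains 127.0.0.1, so check the file first:
sed -i.bak 's|<serviceHost>127.0.0.1</serviceHost>|<serviceHost>0.0.0.0</serviceHost>|' /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml
service crashplan restart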
Either do the following every time you want to connect (because the token will change after every CrashPlan restart), or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
Copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address), for example:
4339,7f1d655f-*****,192.168.1.20
That's it: you can start CrashPlan on your remote machine and it will connect properly; there are no other changes necessary. The latest CrashPlan (>4.4.1) will actually use the IP address from .ui_info.
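A minimal sketch of that copy step from the desktop side, assuming SSH access to the server as root; the destination is deliberately a placeholder, since it depends on where your local CrashPlan client keeps its .ui_info:
scp root@192.168.1.20:/var/lib/crashplan/.ui_info "<local .ui_info path>"
Then edit the trailing field of the copied file so it points at the server, as in the 4339,7f1d655f-*****,192.168.1.20 example above.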
Install JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file

Vagrant stuck in "Waiting for VM to Boot"

I want to preface this question by mentioning that I have indeed looked over most if not all vagrant "Waiting for VM to Boot" troubleshooting threads:
Things I've tried include:
vagrant failed to connect VM
https://superuser.com/questions/342473/vagrant-ssh-fails-with-virtualbox
https://github.com/mitchellh/vagrant/issues/410
http://vagrant.wikia.com/wiki/Usage
http://scotch.io/tutorials/get-vagrant-up-and-running-in-no-time
And more.
Here's how I setup my Vagrant:
Note: We are using Vagrant 1.2.2 since we do not at the moment have time to change configs to newer versions. I am also using VirtualBox 4.2.26.
My office has an official folder which includes, among other things, a Vagrantfile. Inside my Vagrantfile are these custom settings:
config.vm.box = "my_box"
config.ssh.private_key_path = "~/.ssh/github_rsa"
config.ssh.forward_agent = true
config.ssh.forward_x11 = true
config.ssh.max_tries = 300
config.vm.provision :shell, :inline => "/etc/init.d/networking restart"
I installed our custom box (called package.box) via vagrant box add my_box absolute_path/package.box which went without a hitch.
Running vagrant up, I would look at the "preview" of the VirtualBox VM, and it would simply be stuck at the login page. My terminal would also only say: Waiting for VM to boot. This can take a few minutes. As far as I know, this is an SSH issue, or a private key issue, though in my Vagrantfile I explicitly pointed to my private key location.
Interesting Notes:
Running dhclient within the VirtualBox GUI says command not found. Running sudo dhclient eth0 was one of the suggested fixes.
This fix (https://superuser.com/a/343775/298915), "modify the /etc/rc.local file to include the line sh /etc/init.d/networking restart just before exit 0", did nothing to resolve the issue.
Conclusion:
Having tried to reinstall everything, thinking I had messed up a file, the issue did not improve, and I am unable to make progress. Could someone give me some insight?
So after around twelve hours of dejected troubleshooting, I was able to (finally) get the VM to boot.
Set up your private/public keys using the link provided. My box is a Debian Linux 3.2.0-4-amd64, so instead of /root/.ssh/id_rsa.pub you have to use /home/vagrant/.ssh/id_rsa.pub (and the respective id_rsa path for the private key).
Note: make sure your files have the right permissions. Check using ls -l path, and change using chmod. Your machine may not have /home/vagrant/.ssh/authorized_keys, so generate that file with touch /home/vagrant/.ssh/authorized_keys.
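For the permissions part, a rough sketch of what that can look like inside the VM (standard SSH requirements: the .ssh directory mode 700, authorized_keys mode 600, both owned by the vagrant user):
mkdir -p /home/vagrant/.ssh
touch /home/vagrant/.ssh/authorized_keys
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/.ssh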
Boot your VM with the GUI (either through the Vagrantfile GUI option, or by starting your VM directly from VirtualBox). Log in using vagrant / vagrant when prompted.
Within the GUI, manually start dhclient using sudo dhclient eth0 -v. Why is it off by default? I have no idea. I found out that it was off when I tried to wget the private/public keys in the tutorial above, but was unable to.
Go to your local machine's command line and reload vagrant using vagrant reload. It should boot, and no longer hang at "Waiting for VM to Boot."
This worked for me, though it may be different for other machines; for whatever reason, Vagrant likes to break.
Suggestion: can this be saved as a script so we don't need to do this manually every time?
EDIT: Update to the latest version of Vagrant, and you will never see this issue again. About time, huh?