Installing VMware Tools on virtual machines using Ansible

I am trying to install VMware Tools on guest machines running various operating systems. This is the code I have now:
---
- hosts: all
  tasks:
    - name: debian | installing open-vm-tools
      apt: name=open-vm-tools state=present
      when: ansible_os_family == "Debian"

    - name: install vmware tools via Chocolatey
      win_chocolatey: name=vmware-tools state=present
      when: ansible_distribution == "Windows"
This is what my hosts.ini file looks like:
[my-host]
myhost.com ansible_ssh_pass=mypw ansible_ssh_user=root
This is the command I am using to run it, which works:
ansible-playbook -i hosts.ini vmwaretools.yml
However, this is the message I get after I run it.
ok: [myhost.com]
TASK [debian | installing open-vm-tools] ***************************************
task path: /Users/Desktop/Ansible/vmwaretools.yml:5
skipping: [myhost.com] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [install vmware tools via Chocolatey] *************************************
task path: /Users/Desktop/Ansible/vmwaretools.yml:9
skipping: [myhost.com] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

PLAY RECAP *********************************************************************
myhost.com : ok=1    changed=0    unreachable=0    failed=0
Why does it say the conditional check failed? I am sure I have VMs running Debian and Windows. Any idea why this is happening?

From your comment:
My assumption is that, once you connect to the host system, it has access to each VM and checks to see if the distribution matches, and if it does, it installs vmware tools on the VM.
No. Ansible must connect to each individual virtual machine and run the playbook on that machine. There is no way to delegate these tasks to the host machine.
Even when you run an ESXi host and select "Install VMware Tools" for a particular machine, all it does is mount an ISO image in that machine. The installation process then takes place locally (either by manual administrator action or through autorun).
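In practice that means the inventory has to list every guest VM you want to manage, not just the ESXi host. A minimal sketch (host and group names below are placeholders, not from your setup), with the Windows guests reached over WinRM rather than SSH:

[debian_vms]
debian-vm1.example.com

[windows_vms]
win-vm1.example.com

[windows_vms:vars]
ansible_connection=winrm
ansible_port=5986

The Windows hosts would additionally need the usual WinRM credentials (ansible_user, ansible_password, and so on) set as group or host variables.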
Why does it say the conditional check failed?
You are running the playbook against the VMware host machine, which is not Debian. The second condition will never be true either:
when: ansible_distribution == "Windows"
ansible_distribution contains more detailed information, like:
"ansible_distribution": "Microsoft Windows NT 10.0.14366.0"
You need to use:
when: ansible_os_family == "Windows"
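With that change in place, the playbook from the question would look roughly like this (a sketch; it still assumes the inventory targets the Debian and Windows guests themselves, as described above):

---
- hosts: all
  tasks:
    - name: debian | installing open-vm-tools
      apt: name=open-vm-tools state=present
      when: ansible_os_family == "Debian"

    - name: install vmware tools via Chocolatey
      win_chocolatey: name=vmware-tools state=present
      when: ansible_os_family == "Windows"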

Related

Installing Xfce on CentOS 7 via ansible yum

I have a CentOS 7 machine freshly started from the centos/7 Vagrant base box. I provision the box via Ansible, but the following does not work:
- name: Install xfce
  yum:
    name: "@^Xfce"
    state: present
    enablerepo: "epel"
The error is:
TASK [desktop : Install xfce] **************************************************
fatal: [xxx]: FAILED! => {"changed": false, "changes": {"installed": ["@^Xfce"]}, "msg": "Error: Nothing to do\n", "rc": 1, "results": ["Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: mirror.ratiokontakt.de\n * epel: mirror.de.leaseweb.net\n * extras: centos.schlundtech.de\n * updates: centos.schlundtech.de\nGroup Xfce does not exist.\n"]}
Yet, running the following command from within the machine works:
sudo yum --enablerepo=epel -y groups install "Xfce"
What am I doing wrong?
According to the yum module documentation:
Yum itself has two types of groups. “Package groups” are specified in the rpm itself while “environment groups” are specified in a separate file (usually by the distribution). Unfortunately, this division becomes apparent to ansible users because ansible needs to operate on the group of packages in a single transaction and yum requires groups to be specified in different ways when used in that way. Package groups are specified as “@development-tools” and environment groups are “@^gnome-desktop-environment”. Use the “yum group list hidden ids” command to see which category of group the group you want to install falls into.
You're specifying your group as @^Xfce, which is the syntax for an "environment group", but if you look at the output of yum group list hidden ids there is no "Xfce" environment group. There is a package group by that name, and this playbook seems to install it successfully:
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: install xfce
      yum:
        name: "@Xfce"
        state: "present"
        enablerepo: "epel"
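If you are not sure which category a group falls into, the "yum group list hidden ids" check from the documentation can also be run from a playbook before choosing between "@name" and "@^name" (a sketch; the task name and registered variable are my own):

- hosts: localhost
  gather_facts: false
  tasks:
    - name: list yum group ids
      command: yum --enablerepo=epel group list hidden ids
      register: yum_group_ids
      changed_when: false

    - debug:
        var: yum_group_ids.stdout_lines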

Shared connection to server failed (trying to run an Ansible playbook)

I am quite new to SSH servers and Ansible so this might be a dumb question.
Tried to run an Ansible playbook whilst accessing the server with a private key using the bash command below.
ansible-playbook dbserv.yml -i hosts --limit local-servers --private-key=(where I put the private key)
However, I am getting this error:
fatal: [xxx]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_name": "setup"
    },
    "module_stderr": "Shared connection to xxx closed.\r\n",
    "module_stdout": "/bin/sh: 1: /usr/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE"
}
I have Python installed on my computer so I do not understand why I am getting this error.
OS environment:
Ubuntu 16.04.1
The error message you get is:
/usr/bin/python2.7: not found
Ansible requires the target machine to have Python installed in order to work properly (see Managed node requirements).
The most likely reason is that your target is Ubuntu 16.04 which does not come with Python 2 installed. In this case you need to install it or try the experimental support for Python 3.
If Python 2.7 is installed in a different directory, you can add a host variable, for example in your inventory file (assuming the hostname is xxx as in your question):
xxx ansible_python_interpreter=/path/to/python2.7
To run modules with Python 3 (experimental), set:
xxx ansible_python_interpreter=/usr/bin/python3
Note: Ansible by default looks for /usr/bin/python, so it's likely that your playbook, inventory file, or ansible.cfg already contain settings for /usr/bin/python2.7 which does not exist on the target machine.
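If the target really is a bare Ubuntu 16.04 image without any Python 2, one common workaround (a sketch, not part of the original answer) is to bootstrap it with the raw module, which needs no Python on the remote side, and only then run the normal modules:

---
- hosts: xxx
  gather_facts: false
  become: true          # assumes the connecting user can sudo; drop if you already connect as root
  tasks:
    - name: bootstrap python 2 so regular ansible modules can run
      raw: test -e /usr/bin/python || (apt-get -y update && apt-get -y install python-minimal)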

How to check that httpd is enabled and running using InSpec with kitchen-docker on CentOS?

Running my tests with InSpec, I am unable to verify that httpd is enabled and running.
InSpec test
describe package 'httpd' do
  it { should be_installed }
end

describe service 'httpd' do
  it { should be_enabled }
  it { should be_running }
end

describe port 80 do
  it { should be_listening }
end
The output for kitchen verify is:
System Package
✔ httpd should be installed
Service httpd
✖ should be enabled
expected that `Service httpd` is enabled
✖ should be running
expected that `Service httpd` is running
Port 80
✖ should be listening
expected `Port 80.listening?` to return true, got false
Test Summary: 1 successful, 3 failures, 0 skipped
Recipe for httpd installation:
if node['platform'] == 'centos'
  # do centos installation
  package 'httpd' do
    action :install
  end

  execute "chkconfig httpd on" do
    command "chkconfig httpd on"
  end

  execute 'apache start' do
    command '/usr/sbin/httpd -DFOREGROUND &'
    action :run
  end
end
I do not know what I am doing wrong.
More info
CentOS version on docker instance
kitchen exec --command 'cat /etc/centos-release'
-----> Execute command on default-centos-72.
CentOS Linux release 7.2.1511 (Core)
Chef version installed in my host
Chef Development Kit Version: 1.0.3
chef-client version: 12.16.42
delivery version: master (83358fb62c0f711c70ad5a81030a6cae4017f103)
berks version: 5.2.0
kitchen version: 1.13.2
UPDATE 1: Kitchen yml with driver attributes
The platform has the configuration recommended by coderanger:
---
driver:
  name: docker
  use_sudo: false

provisioner:
  name: chef_zero

verifier: inspec

platforms:
  - name: centos-7.2
    driver:
      platform: rhel
      run_command: /usr/lib/systemd/systemd
      provision_command:
        - /bin/yum install -y iniscripts net-tools wget

suites:
  - name: default
    run_list:
      - recipe[apache::default]
    verifier:
      inspec_tests:
        - test/integration
    attributes:
And this is the output when running kitchen test:
... some docker steps...
Step 16 : RUN echo ssh-rsa\ AAAAB3NzaC1yc2EAAAADAQABAAABAQDIp1HE9Zbtl3zAH2KKL1mVzb7BU1WxK7mi5xpIxNRBar7EZAAzxi1pVb1JwUXFSCVoAmUyfn/lBsKlgXnUD49pKrqkeLQQW7NoG3uCFiXBUTof8nFVuLYtw4CTiAudplyMvu5J7HQIP1Hve1caY27tFs/kpkQaXHCEuIkqgrM2rreMKK0n8im9b36L2SwWyM/GwqcIS1z9mMttid7ux0\+HOWWHqZ\+7gumOauh6tLRbtjrm3YYoaIAMyv945MIX8BFPXSQixThBVOlXGA9iTwUZWjU6WvZThxVFkKPR9KZtUTuTCT7Y8\+wFtQ/9XCHpPR00YDQvS0Vgdb/LhZUDoNqV\ kitchen_docker_key >> /home/kitchen/.ssh/authorized_keys
---> Using cache
---> c0e6b9e98d6a
Successfully built c0e6b9e98d6a
d486d7ebfe000a3138db06b1424c943a0a1ee7b2a00e8a396cb8c09f9527fb4b
0.0.0.0:32841
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
.....
You cannot, at least not out of the box. This is one area where kitchen-docker shows its edges. We try to pretend that a container is like a tiny VM but in reality it isn't, and one notable place where the pretending breaks down is init systems. With CentOS 7, it uses systemd. It is possible to get systemd to run inside the container (see https://github.com/poise/yolover-example/blob/master/.kitchen.yml#L17-L33) but not all features are supported and it can generally be a bit odd :-/ That example should be enough to make your tests work though. For completeness, CentOS 6 uses Upstart which just flat out won't run inside Docker so no love there either.
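For reference, the relevant part of that example boils down to a platform definition along these lines (a sketch based on the linked kitchen.yml and the UPDATE above; the exact package list is an assumption):

platforms:
  - name: centos-7.2
    driver:
      platform: rhel
      run_command: /usr/lib/systemd/systemd          # run systemd as PID 1 inside the container
      provision_command:
        - /bin/yum install -y initscripts net-tools wget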

trouble with pysphere - ansible

I am trying to deploy a VM via Ansible on my ESXi host.
I am using the following role for this:
- vsphere_guest:
    vcenter_hostname: emea-esx-s18t.****.net
    username: ****
    password: ****
    guest: newvm001
    state: powered_off
    vm_extra_config:
      vcpu.hotadd: yes
      mem.hotadd: yes
      notes: This is a test VM
    vm_disk:
      disk1:
        size_gb: 10
        type: thin
        datastore: ****
    vm_nic:
      nic1:
        type: vmxnet3
        network: VM Network
        network_type: standard
    vm_hardware:
      memory_mb: 4096
      num_cpus: 4
      osid: windows7Server64Guest
      scsi: paravirtual
    esxi:
      datacenter: MyDatacenter
      hostname: esx-s18t.****.net
When I execute this role now via a playbook, I get the following message:
root@ansible1:~/ansible# ansible-playbook -i Inventory vmware_deploy.yml
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [172.20.22.5]
TASK [vmware : vsphere_guest] **************************************************
fatal: [172.20.22.5]: FAILED! => {"changed": false, "failed": true, "msg": "pysphere module required"}
PLAY RECAP *********************************************************************
172.20.22.5 : ok=1 changed=0 unreachable=0 failed=1
So it seems the "pysphere" module is missing. I've already checked that with the command:
root@ansible1:~/ansible# pip install pysphere
Requirement already satisfied (use --upgrade to upgrade): pysphere in /usr/local/lib/python2.7/dist-packages/pysphere-0.1.7-py2.7.egg
Then I did the "upgrade" and got the following message back:
root@ansible1:~/ansible# pip install pysphere --upgrade
Requirement already up-to-date: pysphere in /usr/local/lib/python2.7/dist-packages/pysphere-0.1.7-py2.7.egg
So it seems it is already installed and up to date. Why do I get this error message then?
How can I fix it so that my goddamn role finally works?
Jesus, Ansible makes me crazy...
I hope you guys can help me, thanks in advance!
kind regards,
kgierman
EDIT:
So I've written a new playbook with the old stuff; the new playbook looks like this (I've added your localhost and connection: local stuff):
---
- hosts: localhost
  connection: local
  tasks:
    vsphere_guest:
      vcenter_hostname: emea-esx-s18t.****.net
      username: ****
      password: ****
      guest: newvm001
      state: powered_off
      vm_extra_config:
        vcpu.hotadd: yes
        mem.hotadd: yes
        notes: This is a test VM
      vm_disk:
        disk1:
          size_gb: 10
          type: thin
          datastore: ****
      vm_nic:
        nic1:
          type: vmxnet3
          network: VM Network
          network_type: standard
      vm_hardware:
        memory_mb: 4096
        num_cpus: 4
        osid: windows7Server64Guest
        scsi: paravirtual
      esxi:
        datacenter: MyDatacenter
        hostname: esx-s18t.****.net
So when I execute this playbook, I get the following error:
root@ansible1:~/ansible# ansible-playbook vmware2.yml
ERROR! Syntax Error while loading YAML.
The error appears to have been in '/root/ansible/vmware2.yml': line 7, column 19, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
vcenter_hostname: emea-esx-s18t.sddc-hwl-family.net
username: root
^ here
the struggle is real -.-
You should generally execute provisioning modules such as vsphere_guest on your local Ansible machine.
I suspect that 172.20.22.5 is actually your ESX host, and Ansible tries to execute the module there, where pysphere is surely absent.
Use:
- hosts: localhost
  tasks:
    - vsphere_guest:
        ...
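Alternatively (a sketch of my own, not part of the original answer), you can keep the play targeted at your inventory host and delegate just the provisioning task to the control machine, where pysphere is installed:

- hosts: all
  gather_facts: false
  tasks:
    - vsphere_guest:
        vcenter_hostname: emea-esx-s18t.****.net   # masked as in the question
        username: "{{ vcenter_username }}"          # hypothetical variables
        password: "{{ vcenter_password }}"
        guest: newvm001
        state: powered_off
      delegate_to: localhost                        # run this task on the control machine, not the ESX host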
Ran into this issue once again on macOS / OSX...
It seems to be related to PYTHONPATH.
I have this in my .profile:
export PYTHONPATH="/usr/local/lib/python2.7/site-packages"
[ ... further down ... ]
export PYTHONPATH="/usr/local/Cellar/ansible/2.1.2.0/libexec/lib/python2.7/site-packages:/usr/local/Cellar/ansible/2.2.1.0/libexec/vendor/lib/python2.7/site-packages:$PYTHONPATH"
The first line with PYTHONPATH is where pysphere and other system modules reside.
Also take note of the specific version of Ansible!
Anyway, this seems to resolve the issue.
Source: https://github.com/debops/debops-tools/issues/159#issuecomment-236536195

Vagrant can't connect to the VM

EDIT6: submitted an official path bug: https://github.com/mitchellh/vagrant/issues/7512
EDIT5: When I do vagrant destroy and vagrant up, everything works easily. But when I turn off the VM and turn it back on (you have to restart your PC some day), it won't work again. Either the sequence for vagrant up when the VM already exists is bugged, or VirtualBox is bugged. Destroying and rebuilding the VM is not an option, because the DB migration and everything takes ~30 mins at least. Either way, DON'T USE VAGRANT ON WINDOWS 10.
EDIT4: I downgraded to VirtualBox 5.0.0.10, which fixed the wrong path problem, but the error Command not in installer persists.
EDIT3: When I ran vagrant up --debug, I found out that it cycles. It gets to the line
INFO subprocess: Starting process: ["C:/Program Files/Oracle/VirtualBox/VBoxManage.exe", "showvminfo", "8aaee3a3-806f-48ad-9928-91e2b7baba5d", "--machinereadable"]
and then it does
INFO subprocess: Command not in installer, restoring original environment...
The path to the VM uses forward slashes instead of backslashes. Is this a bug? Is there a way to manually set the path to the VM? I have put C:\Program Files\Oracle\VirtualBox in my PATH.
EDIT2: DON'T USE VAGRANT ON WINDOWS 10, it's bugged in many ways, and the VM is not optimized for Win10 yet; you'll get a bunch of issues that you won't be able to solve. Also tried Otto from HashiCorp, not working either. Rip.
EDIT: okay, so when I do vagrant destroy and vagrant up, after 10 minutes of installation it works like a charm. But after I restart my PC or logout in any way, Vagrant is unable to connect to the VM, neither with a private key, nor with login/password. Is that a bug?
When I do vagrant up, VM starts properly, but Vagrant is unable to connect. All it says is Warning: Remote connection disconnect. Retrying...
When I try to connect via vagrant ssh, I get only ssh_exchange_identification: read: Connection reset by peer. When I check the GUI of the VM, it is waiting for login, and when I log in with the default login/password, it is working as intended, so the problem must be Vagrant not being able to connect to the VM.
I tried:
checking if my pc supports virtualization and checking if it is on
trying to connect with password instead of a key
configuring networking adapters
turning off firewall
clean reinstall
I am using Vagrant 1.8.1 and VirtualBox 5.0.20 on Windows 10.
This is my vagrant file:
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provider :virtualbox do |vb|
    vb.memory = 2048
    vb.gui = true
    vb.cpus = 2
  end

  config.vm.network :private_network, type: "dhcp"
  config.vbguest.auto_update = false
  config.ssh.insert_key = false
  config.vm.provision :shell, path: "bootstrap.sh"
end
[Edit 17/06/2016]
The problem should be resolved with Virtualbox 5.0.22.
https://www.virtualbox.org/wiki/Changelog
https://www.virtualbox.org/ticket/15412
[Original answer below]
In contrast to my earlier answer, I now don't think that I encountered the same problem as you have described here. However, I still think that you are encountering a different variation of the problem.
As of feedback received from Virtualbox development https://www.virtualbox.org/ticket/15412 I learned that Virtualbox 5.0.20 includes changes to the NAT Forwarding Rules to address other bugs. When a VM is saved and started again, Virtualbox now removes the network cable for 5 seconds. This is supposed to trigger the DHCP client to request a new lease. This information in turn is then used by Virtualbox to infer the IP address and NAT should work.
In my particular case I encounter this problem with Ubuntu 16.04 as guest VM whereas with Ubuntu 14.04 it works. This indicates to me that the DHClient on Ubuntu 14.04 does request a new lease after the cable was disconnected by Virtualbox whereas this is not the case with Ubuntu 16.04.
In order to verify that you encounter the same problem, I wonder if you could run the below test and let me know.
Login to the Trusty VM console (i.e. the one that you get displayed when you run the VM in the foreground)
Install 'arping' (sudo apt-get -y install arping)
Create the below script 'sendARP.sh'
#!/bin/bash
IFACE=$(ifconfig | grep 'Link encap:Ethernet' | awk '{print $1}')
IP=$(ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1')
arping -c 1 -i $IFACE $IP
Make it an executable 'chmod +x sendARP.sh'
Save the Trusty VM (vagrant suspend)
Start your Trusty VM from saved state (vagrant up)
Login to the Trusty VM console (i.e. the one that you get displayed when you run the VM in the foreground)
Run the script 'sudo ./sendARP.sh'
Test whether you can connect via SSH from the remote location/ Virtualbox host
Bugs:
https://github.com/mitchellh/vagrant/issues/7306
https://www.virtualbox.org/ticket/15412