trouble with pysphere - ansible - module

I am trying to deploy a VM via Ansible on my ESXi host.
I am using the following role for this:
- vsphere_guest:
    vcenter_hostname: emea-esx-s18t.****.net
    username: ****
    password: ****
    guest: newvm001
    state: powered_off
    vm_extra_config:
      vcpu.hotadd: yes
      mem.hotadd: yes
      notes: This is a test VM
    vm_disk:
      disk1:
        size_gb: 10
        type: thin
        datastore: ****
    vm_nic:
      nic1:
        type: vmxnet3
        network: VM Network
        network_type: standard
    vm_hardware:
      memory_mb: 4096
      num_cpus: 4
      osid: windows7Server64Guest
      scsi: paravirtual
    esxi:
      datacenter: MyDatacenter
      hostname: esx-s18t.****.net
When I execute this role via a playbook, I get the following message:
root@ansible1:~/ansible# ansible-playbook -i Inventory vmware_deploy.yml
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [172.20.22.5]
TASK [vmware : vsphere_guest] **************************************************
fatal: [172.20.22.5]: FAILED! => {"changed": false, "failed": true, "msg": "pysphere module required"}
PLAY RECAP *********************************************************************
172.20.22.5 : ok=1 changed=0 unreachable=0 failed=1
So it seems the "pysphere" module is missing. I've already checked that with the command:
root@ansible1:~/ansible# pip install pysphere
Requirement already satisfied (use --upgrade to upgrade): pysphere in /usr/local/lib/python2.7/dist-packages/pysphere-0.1.7-py2.7.egg
Then I did the upgrade and got the following message back:
root@ansible1:~/ansible# pip install pysphere --upgrade
Requirement already up-to-date: pysphere in /usr/local/lib/python2.7/dist-packages/pysphere-0.1.7-py2.7.egg
So it seems it is already installed and up to date. Why do I get this error message then?
How can I fix this so that my role finally works?
Ansible is driving me crazy...
I hope you guys can help me, thanks in advance!
kind regards,
kgierman
EDIT:
So I've written a new playbook with the old stuff; the new playbook looks like this (I've added your localhost and connection: local suggestions):
---
- hosts: localhost
  connection: local
  tasks:
    vsphere_guest:
      vcenter_hostname: emea-esx-s18t.****.net
      username: ****
      password: ****
      guest: newvm001
      state: powered_off
      vm_extra_config:
        vcpu.hotadd: yes
        mem.hotadd: yes
        notes: This is a test VM
      vm_disk:
        disk1:
          size_gb: 10
          type: thin
          datastore: ****
      vm_nic:
        nic1:
          type: vmxnet3
          network: VM Network
          network_type: standard
      vm_hardware:
        memory_mb: 4096
        num_cpus: 4
        osid: windows7Server64Guest
        scsi: paravirtual
      esxi:
        datacenter: MyDatacenter
        hostname: esx-s18t.****.net
So when I execute this playbook, I get the following error:
root@ansible1:~/ansible# ansible-playbook vmware2.yml
ERROR! Syntax Error while loading YAML.
The error appears to have been in '/root/ansible/vmware2.yml': line 7, column 19, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
vcenter_hostname: emea-esx-s18t.sddc-hwl-family.net
username: root
^ here
the struggle is real -.-

You should generally execute provisioning modules such as vsphere_guest on your local Ansible machine.
I suspect that 172.20.22.5 is actually your ESX host, and Ansible tries to execute the module from there, where pysphere is surely absent.
Use:
- hosts: localhost
  tasks:
    - vsphere_guest:
        ...
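
For completeness, a corrected version of the playbook from the question could look like this (a sketch reusing the question's own parameters; the key points are the dash that makes vsphere_guest an item in the tasks list, and connection: local so the module runs with the control machine's Python, where pysphere lives):

---
- hosts: localhost
  connection: local
  tasks:
    - vsphere_guest:
        vcenter_hostname: emea-esx-s18t.****.net
        username: ****
        password: ****
        guest: newvm001
        state: powered_off
        vm_hardware:
          memory_mb: 4096
          num_cpus: 4
          osid: windows7Server64Guest
          scsi: paravirtual
        esxi:
          datacenter: MyDatacenter
          hostname: esx-s18t.****.net

The omitted vm_extra_config, vm_disk and vm_nic sections go at the same indentation level as vm_hardware.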

Ran into this issue once again on macOS / OSX...
It seems to be related to PYTHONPATH.
I have this in my .profile:
export PYTHONPATH="/usr/local/lib/python2.7/site-packages"
[ ... further down ... ]
export PYTHONPATH="/usr/local/Cellar/ansible/2.1.2.0/libexec/lib/python2.7/site-packages:/usr/local/Cellar/ansible/2.2.1.0/libexec/vendor/lib/python2.7/site-packages:$PYTHONPATH"
The first line with PYTHONPATH is where pysphere and other system modules reside.
Also take note of the specific version of Ansible!
Anyway, this seems to resolve the issue.
Source: https://github.com/debops/debops-tools/issues/159#issuecomment-236536195
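
A quick way to confirm this kind of mismatch is to compare the interpreter Ansible reports with the one that can import pysphere (a diagnostic sketch; paths will differ per machine):

# prints "ansible python module location" and the Python version in use
ansible --version
# check that the same interpreter can import pysphere
python -c "import pysphere; print(pysphere.__file__)"

If the import fails under the interpreter Ansible reports, pysphere is installed for a different Python than the one Ansible runs with.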

Related

Ansible failed to connect to the host via ssh, but ssh command works

From inventory.yml:
myhost:
  ansible_host: myhost  # actually it was ansible_ssh_host (see my answer)
  ansible_user: myuser  # actually it was ansible_ssh_user (see my answer)
  ansible_pass: mypass  # actually it was ansible_ssh_pass (see my answer)
So far, Ansible worked fine. I could also ssh myuser@myhost.
Then, I changed the ssh port from default 22 to 23 and edited the inventory.yml:
myhost:
  ansible_host: myhost
  ansible_user: myuser
  ansible_pass: mypass  # THE PROBLEM! Must be ansible_ssh_pass. (see my answer)
  ansible_port: 23
As expected, I can ssh myuser@myhost -p 23, but Ansible gives the error:
fatal: [staging]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: myuser@myhost: Permission denied (publickey,password).", "unreachable": true}
What could be causing the error?
The solution is quite unexpected and slightly embarrassing:
While changing the SSH port, I also read this:
Ansible 2.0 has deprecated the “ssh” from ansible_ssh_user,
ansible_ssh_host, and ansible_ssh_port to become ansible_user,
ansible_host, and ansible_port.
I edited inventory.yml a bit too eagerly, as I also changed ansible_ssh_pass to ansible_pass. Hence: missing password -> permission denied.
So, my question had been phrased in a wrong way. I have updated it accordingly.
The variable for the password is ansible_password. See the documentation on building an inventory to create your inventory.yml properly.
Note that you should never store your password in plain text, but use a vault instead.
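For example, ansible-vault (the encrypt_string subcommand is available from Ansible 2.3 on) can encrypt just the password value; the output can be pasted into inventory.yml or a host_vars file (a sketch; 'mypass' is a placeholder):

ansible-vault encrypt_string 'mypass' --name 'ansible_password'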
In my case, where only key-based SSH is allowed, ansible ping was failing with:
"msg": "Failed to connect to the host via ssh: Permission denied (publickey).",
The inventory file had an entry ansible_user=$user. Removing the entry from the inventory file helped.
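A sketch of the change in the inventory file (hostname and user are illustrative):

# before (failing)
myhost ansible_user=some_user
# after (working; Ansible falls back to the current local user for SSH)
myhost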
My environment:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.7 LTS
Release: 16.04
Codename: xenial
$ ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/USER_NAME/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Mar 1 2021, 11:38:31) [GCC 5.4.0 20160609]

Installing Xfce on CentOS 7 via ansible yum

I have a CentOS 7 machine freshly started from the centos/7 Vagrant base box. I provision the box via Ansible, but the following does not work:
- name: Install xfce
  yum:
    name: "@^Xfce"
    state: present
    enablerepo: "epel"
The error is:
TASK [desktop : Install xfce] **************************************************
fatal: [xxx]: FAILED! => {"changed": false, "changes": {"installed": ["@^Xfce"]}, "msg": "Error: Nothing to do\n", "rc": 1, "results": ["Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: mirror.ratiokontakt.de\n * epel: mirror.de.leaseweb.net\n * extras: centos.schlundtech.de\n * updates: centos.schlundtech.de\nGroup Xfce does not exist.\n"]}
Yet, running the following command from within the machine works:
sudo yum --enablerepo=epel -y groups install "Xfce"
What am I doing wrong?
According to the yum module documentation:
Yum itself has two types of groups. “Package groups” are specified in the rpm itself while “environment groups” are specified in a separate file (usually by the distribution). Unfortunately, this division becomes apparent to ansible users because ansible needs to operate on the group of packages in a single transaction and yum requires groups to be specified in different ways when used in that way. Package groups are specified as “@development-tools” and environment groups are “@^gnome-desktop-environment”. Use the “yum group list hidden ids” command to see which category of group the group you want to install falls into.
You're specifying your group as @^Xfce, which is the syntax for an "environment group", but if you look at the output of yum group list hidden ids there is no "Xfce" environment group. There is a package group by that name, and this playbook seems to install it successfully:
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: install xfce
      yum:
        name: "@Xfce"
        state: "present"
        enablerepo: "epel"
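
To check up front which category a group belongs to, and therefore whether to use the @ or @^ prefix, you can run the command the documentation mentions on the target machine (a verification sketch):

yum --enablerepo=epel group list hidden ids
# "Xfce" should be listed among the package groups, not the environment groups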

Ansible not picking up custom module

I'm having issues with Ansible picking up a module that I've added.
The module is called 'passwordstore' https://github.com/morphje/ansible_pass_lookup/.
I'm using Ansible 2.2
Next to my playbook, I've added a 'library' folder and put the contents of that GitHub repository into it. I've also tried uncommenting library = /usr/share/ansible/modules and adding the module files there, and it still doesn't get picked up.
I have also tried setting the environment variable ANSIBLE_LIBRARY=/usr/share/ansible/modules.
My Ansible playbook looks like this:
---
- name: example play
  hosts: all
  gather_facts: false
  tasks:
    - name: set password
      debug: msg="{{ lookup('passwordstore', 'files/test create=true')}}"
And when I run this, I get this error:
ansible-playbook main.yml
PLAY [example play] ******************************************************
TASK [set password] ************************************************************
fatal: [backend.example.name]: FAILED! => {"failed": true, "msg": "lookup plugin (passwordstore) not found"}
fatal: [mastery.example.name]: FAILED! => {"failed": true, "msg": "lookup plugin (passwordstore) not found"}
to retry, use: --limit @/etc/ansible/roles/test-role/main.retry
Any idea what I'm missing? It may just be the way I'm trying to add the custom module, but any guidance would be appreciated.
It's a lookup plugin (not a module), so it should go into a directory named lookup_plugins (not library).
Alternatively, add the path to the cloned repository in ansible.cfg using the lookup_plugins setting.
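A minimal sketch of both options (paths are illustrative, and the plugin file name is assumed from the lookup name). Either put the plugin file into a lookup_plugins directory next to the playbook:

mkdir -p lookup_plugins
cp ansible_pass_lookup/passwordstore.py lookup_plugins/

or point ansible.cfg at the cloned repository:

[defaults]
lookup_plugins = /path/to/ansible_pass_lookup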

Ansible provisioning ERROR! Using a SSH password instead of a key is not possible

I would like to provision my three nodes from the last one by using Ansible.
My host machine is Windows 10.
My Vagrantfile looks like:
Vagrant.configure("2") do |config|
  (1..3).each do |index|
    config.vm.define "node#{index}" do |node|
      node.vm.box = "ubuntu"
      node.vm.box = "../boxes/ubuntu_base.box"
      node.vm.network :private_network, ip: "192.168.10.#{10 + index}"
      if index == 3
        node.vm.provision :setup, type: :ansible_local do |ansible|
          ansible.playbook = "playbook.yml"
          ansible.provisioning_path = "/vagrant/ansible"
          ansible.inventory_path = "/vagrant/ansible/hosts"
          ansible.limit = :all
          ansible.install_mode = :pip
          ansible.version = "2.0"
        end
      end
    end
  end
end
My playbook looks like:
---
# my little playbook
- name: My little playbook
  hosts: webservers
  gather_facts: false
  roles:
    - create_user
My hosts file looks like:
[webservers]
192.168.10.11
192.168.10.12
[dbservers]
192.168.10.11
192.168.10.13
[all:vars]
ansible_connection=ssh
ansible_ssh_user=vagrant
ansible_ssh_pass=vagrant
After executing vagrant up --provision I got the following error:
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node3' up with 'virtualbox' provider...
==> node3: Running provisioner: setup (ansible_local)...
node3: Running ansible-playbook...
PLAY [My little playbook] ******************************************************
TASK [create_user : Create group] **********************************************
fatal: [192.168.10.11]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
fatal: [192.168.10.12]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
PLAY RECAP *********************************************************************
192.168.10.11 : ok=0 changed=0 unreachable=0 failed=1
192.168.10.12 : ok=0 changed=0 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
I extended my Vagrantfile with ansible.limit = :all and added [all:vars] to the hosts file, but I still cannot get past the error.
Has anyone encountered the same issue?
Create a file ansible/ansible.cfg in your project directory (i.e. ansible.cfg in the provisioning_path on the target) with the following contents:
[defaults]
host_key_checking = false
This works provided that your Vagrant box has sshpass already installed. It's unclear whether it does: the error message in your question suggests it was installed (otherwise it would be "ERROR! to use the 'ssh' connection type with passwords, you must install the sshpass program"), but in your answer you add it explicitly (sudo apt-get install sshpass), as if it were not.
I'm using Ansible version 2.6.2 and the solution with host_key_checking = false doesn't work.
Adding the environment variable export ANSIBLE_HOST_KEY_CHECKING=False skips the fingerprint check.
This error can also be solved by simply exporting the ANSIBLE_HOST_KEY_CHECKING variable.
export ANSIBLE_HOST_KEY_CHECKING=False
source: https://github.com/ansible/ansible/issues/9442
This SO post gave the answer.
I just extended the known_hosts file on the machine that is responsible for the provisioning like this:
Snippet from my modified Vagrantfile:
...
if index == 3
  node.vm.provision :pre, type: :shell, path: "install.sh"
  node.vm.provision :setup, type: :ansible_local do |ansible|
...
My install.sh looks like:
# add web/database hosts to known_hosts (IP is defined in Vagrantfile)
ssh-keyscan -H 192.168.10.11 >> /home/vagrant/.ssh/known_hosts
ssh-keyscan -H 192.168.10.12 >> /home/vagrant/.ssh/known_hosts
ssh-keyscan -H 192.168.10.13 >> /home/vagrant/.ssh/known_hosts
chown vagrant:vagrant /home/vagrant/.ssh/known_hosts
# reload ssh in order to load the known hosts
/etc/init.d/ssh reload
I had a similar challenge when working with Ansible 2.9.6 on Ubuntu 20.04.
When I run the command:
ansible all -m ping -i inventory.txt
I get the error:
target | FAILED! => {
"msg": "Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."
}
Here's how I fixed it:
When you install Ansible, it creates a file called ansible.cfg, which can be found in the /etc/ansible directory. Simply open the file:
sudo nano /etc/ansible/ansible.cfg
Uncomment this line to disable SSH host key checking:
host_key_checking = False
Now save the file and you should be fine now.
Note: You could also try to add the host's fingerprint to your known_hosts file by SSHing into the server from your machine, this prompts you to save the host's fingerprint to your known_hosts file:
promisepreston@ubuntu:~$ ssh myusername@192.168.43.240
The authenticity of host '192.168.43.240 (192.168.43.240)' can't be established.
ECDSA key fingerprint is SHA256:9Zib8lwSOHjA9khFkeEPk9MjOE67YN7qPC4mm/nuZNU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.43.240' (ECDSA) to the list of known hosts.
myusername@192.168.43.240's password:
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-53-generic x86_64)
That's all.
I hope this helps
Run the command below; it resolved my issue:
export ANSIBLE_HOST_KEY_CHECKING=False && ansible-playbook -i
All the provided solutions require changes in the global config file or adding an environment variable, which creates problems when onboarding new people.
Instead, you can add the following variable to your inventory or host vars:
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
Adding ansible_ssh_common_args='-o StrictHostKeyChecking=no'
to either your inventory
like:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
[all:children]
servers
[servers]
host1
OR:
[servers]
host1 ansible_ssh_common_args='-o StrictHostKeyChecking=no'

Shared connection to server failed (trying to run an Ansible playbook)

I am quite new to SSH servers and Ansible, so this might be a dumb question.
I tried to run an Ansible playbook while accessing the server with a private key, using the bash command below.
ansible-playbook dbserv.yml -i hosts --limit local-servers --private-key=(where I put the private key)
However, I am getting this error:
fatal: [xxx]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "setup"
},
"module_stderr": "Shared connection to xxx closed.\r\n",
"module_stdout": "/bin/sh: 1: /usr/bin/python2.7: not found\r\n",
"msg": "MODULE FAILURE" }
I have Python installed on my computer, so I do not understand why I am getting this error.
OS environment:
Ubuntu 16.04.1
The error message you get is:
/usr/bin/python2.7: not found
Ansible requires the target machine to have Python installed in order to work properly (see Managed node requirements).
The most likely reason is that your target is Ubuntu 16.04, which does not come with Python 2 installed. In this case you need to install it or try the experimental support for Python 3.
If Python 2.7 is installed in a different directory, you can add a host variable, for example in your inventory file (assuming the hostname is xxx as in your question):
xxx ansible_python_interpreter=/path/to/python2.7
To run modules with Python 3 (experimental), set:
xxx ansible_python_interpreter=/usr/bin/python3
Note: Ansible by default looks for /usr/bin/python, so it's likely that your playbook, inventory file, or ansible.cfg already contain settings for /usr/bin/python2.7 which does not exist on the target machine.
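
If Python really is missing on the target, a common bootstrap is the raw module, which works without Python on the remote side (a sketch, assuming an apt-based target such as Ubuntu 16.04 and the inventory host xxx):

# raw runs over plain SSH, so it works before Python exists on the target
ansible xxx -i hosts -m raw -a "apt-get update && apt-get install -y python-minimal" --become

After that, regular modules (including the implicit setup task) should run.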