I am new to Ansible and ran into the issue below.
I can SSH into my client machine, but I am unable to run a playbook.
I am getting the following error:
[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: a
Here a is my group name. My hosts file is shown below:
---------
[a]
172.31.26.93
[all:vars]
ansible_user=vagrant
ansible_ssh_pass=vagrant
ansible_ssh_host=172.31.26.93
ansible_ssh_port=22
ansible_ssh_user='ansibleuser'
ansible_ssh_private_key_file=/home/ansibleuser/.ssh
------- my playbook file is given below -------
- hosts: a
  tasks:
    - name: create a directory
      file: path=/home/ansiblesuser/www state=directory
This is the first time I am getting this issue.
Before running the playbook, first run the following command:
ansible all --list-hosts
If the error persists, go to /etc/ansible/ansible.cfg and edit the inventory path so it points to your specific hosts file.
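For reference, a minimal sketch of that setting (the path shown is a placeholder; point it at wherever your hosts file actually lives):
[defaults]
# placeholder path; adjust to your actual inventory file
inventory = /path/to/your/hosts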
I had the same issue; the ini and yaml inventory plugins were not enabled in ansible.cfg:
[inventory]
enable_plugins = yaml, ini
I suddenly experienced the same issue with an inventory that had been in use for many years and hadn't changed recently.
It turned out that a plugin I had enabled caused this issue.
I had enabled the vmware_vm_inventory plugin, which was the source of the message. This showed up when running ansible-playbook -vvvv <host>.
I figured I should define the plugin in an ansible.cfg file in the folder where I run the playbooks that use it, and leave it out of /etc/ansible/ansible.cfg.
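As a sketch, such a per-directory ansible.cfg could look like this (the exact plugin list is an assumption; keeping ini and yaml enabled ensures regular inventories still parse):
[inventory]
# enable the VMware dynamic inventory only here, alongside the default parsers
enable_plugins = vmware_vm_inventory, host_list, script, auto, yaml, ini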
The following solved the problem for me:
Go to the root directory /
cd etc (or mkdir etc and cd etc)
mkdir ansible, then cd ansible
vi hosts (then add the hosts)
chmod 777 hosts
Check with ansible all -m ping
If you are still having this issue when you run
ansible-playbook -i path/to/inventory/file playbook.yml
simply create an empty ansible.cfg file in the directory where your playbook lives.
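For example (a sketch; the playbook directory is a placeholder):
cd /path/to/playbook-dir   # hypothetical directory containing playbook.yml
touch ansible.cfg          # a local ansible.cfg takes precedence over /etc/ansible/ansible.cfg
ansible-playbook -i path/to/inventory/file playbook.yml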
CentOS 7
This was causing the above error:
ansible-playbook -i host_test -v tasks.yml
This fixed it:
ansible-playbook -i hosttest -v tasks.yml
A hosts file with no read permissions caused the following error message:
[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
Solution: change permissions
sudo chmod 744 /etc/ansible/hosts
On another occasion, I got an error message due to incorrect formatting of the hosts file:
[WARNING]: * Failed to parse /etc/ansible/hosts with yaml plugin: YAML
inventory has invalid structure, it should be a dictionary, got: <class
'ansible.parsing.yaml.objects.AnsibleUnicode'>
[WARNING]: * Failed to parse /etc/ansible/hosts with ini plugin:
/etc/ansible/hosts:3: Expected key=value host variable assignment, got: ;
[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
and I fixed the issue by correcting the format error.
As suggested in another answer, check whether the hosts are available with
ansible all --list-hosts
Also check the actual location of the inventory file in the Ansible configuration file ansible.cfg.
You'll see this error if the file doesn't exist on disk as well.
This will be in the -vvvv log:
Skipping due to inventory source not existing or not being readable by the current user
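To confirm which ansible.cfg and inventory path are actually in effect (and whether the inventory file exists at all), something like this helps (ansible-config is available from Ansible 2.4 onward):
ansible --version                        # shows which config file is in use
ansible-config dump | grep -i HOST_LIST  # DEFAULT_HOST_LIST is the inventory path
ls -l /etc/ansible/hosts                 # verify the file exists and is readable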
Removing the spaces around the = worked for me; after that, Ansible managed to parse my host.ini file.
(Using Ansible version 2.9.27.)
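In other words, in an ini-style inventory the inline variable assignments must be written without spaces around =. A sketch, reusing the host and variable from the question above:
# fails to parse with the ini plugin
[a]
172.31.26.93 ansible_user = vagrant

# parses correctly
[a]
172.31.26.93 ansible_user=vagrant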
I was able to resolve this by giving the complete (absolute) path to the inventory. For me, the error looked like this:
[WARNING]: Unable to parse /root/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: all
Use the following command:
ansible-playbook -i /etc/ansible/hosts showfiles.yml
I had the same problem.
The cause was whitespace issues in the vault token inside the inventory file:
$ ansible-playbook -i inventory/forem/setup.yml playbooks/providers/aws.yml
[WARNING]: * Failed to parse /var/home/core/selfhost/inventory/forem/setup.yml with ini plugin: Invalid host pattern '---' supplied, '---' is normally a sign this is a YAML file.
[WARNING]: * Failed to parse /var/home/core/selfhost/inventory/forem/setup.yml with yaml plugin: We were unable to read either as JSON nor YAML, these are the errors we got from each: JSON: Expecting value: line 1 column 1 (char 0) Syntax Error while loading YAML. could not find expected ':' The error appears to be in '/var/home/core/selfhost/inventory/forem/setup.yml': line 85, column 11, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: $ANSIBLE_VAULT;1.1;AES256 62376137383864393461613561353234643230666431643935303533346631393537363564366334 ^ here
[WARNING]: Unable to parse /var/home/core/selfhost/inventory/forem/setup.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Deploy Forem to AWS]
I am trying to use the SSHOperator to SSH into a remote machine and run an external application through the command line. I have set up the SSH connection via the admin page.
This section of code is used to define the commands and the SSH connection to the external machine.
sshHook = SSHHook(ssh_conn_id='remote_comp')
command_1 ="""
cd /files/232-065/Rans
bash run.sh
"""
Where 'run.sh' is the following shell script:
#!/bin/sh
starccm+ -batch run_export.java Rans_Model.sim
Which simply runs the commercial software starccm+ with some options I have specified.
This section defines the task:
inlet_profile = SSHOperator(
task_id='inlet_profile',
ssh_hook=sshHook,
command=command_1
)
I have confirmed the SSH connection works by giving a simple 'ls' command and checking the output.
The error that I get is:
bash run.sh, error: run.sh: line 2: starccm+: command not found
The command in 'run.sh' works when I am logged into the machine directly (it does not require a GUI). This makes me think the SSH session Airflow opens is not the same as the one I log into, but I am not sure how to solve this problem.
Does anyone have any experience with this?
There is no issue with the SSH connection (at least judging from the error message). The issue is with the starccm+ installation path.
Please check the installation path of starccm+.
Check whether that path is part of the $PATH environment variable:
$ echo $PATH
If not, either install it in a standard location such as /bin or /usr/bin (provided those are included in $PATH), or export the installation directory into the PATH variable like this:
$ export PATH=$PATH:/<absolute_path>
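Since a non-interactive SSH session may not pick up the same profile as a login shell, another option is to set PATH inside run.sh itself. A minimal sketch, assuming a hypothetical install location /opt/Siemens/starccm/bin (adjust to wherever starccm+ is actually installed):
#!/bin/sh
# prepend the (assumed) starccm+ install directory so the non-interactive
# SSH session started by Airflow can find the binary
export PATH="$PATH:/opt/Siemens/starccm/bin"
starccm+ -batch run_export.java Rans_Model.sim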
It is not ideal, but if you struggle with setting the PATH variable you can run starccm+ by giving the full path, like:
/directory/where/star/is/installed/starccm+ -batch run_export.java Rans_Model.sim
I have a weird problem with running Ansible.
I'd like to print to log or anywhere else the extra-vars I'm executing the playbook with.
I am running the following command:
ansible-playbook /foo/main.yml --extra-vars "main_playbook=app_install start_path=work" --extra-vars '{"db_config":{"db_multi_config":["value1","value2"]}}'
with the following main.yml playbook
- name: start playbook
  import_playbook: "{{ playbook_dir }}/{{ main_playbook }}/{{ start_path }}.yml"
I'd like to print the values of all extra-vars passed on the command line before it gets to main.yml. Is there any way to do this?
You can get this information in the output by running ansible-playbook in full debug mode.
Given the following test.yml playbook:
- hosts: localhost
  gather_facts: false
and running it with the command:
ansible-playbook test.yml -vvvv -e toto=bla -e '{"test1":2}'
I get the following result (see the debug lines for the playbook before the play starts):
ansible-playbook [core 2.11.3]
config file = None
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.8.10 (default, Jun 2 2021, 10:49:15) [GCC 9.4.0]
jinja version = 2.11.3
libyaml = True
No config file found; using defaults
The vault password file /home/user/bin/vault-keyring-client is a client script.
Executing vault password client script: /home/user/bin/vault-keyring-client --vault-id avaultid1
The vault password file /home/user/bin/vault-keyring-client is a client script.
Executing vault password client script: /home/user/bin/vault-keyring-client --vault-id avaultid2
The vault password file /home/user/bin/vault-keyring-client is a client script.
Executing vault password client script: /home/user/bin/vault-keyring-client --vault-id avaultid3
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python3.8/dist-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml **********************************************************************************************************************
Positional arguments: test.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
extra_vars: ('toto=bla', '{"test1":2}')
forks: 5
1 plays in test.yml
PLAY [all] ******************************************************************************************************************************
skipping: no hosts matched
PLAY RECAP ******************************************************************************************************************************
You can't separate extra vars from all other vars, because they are merged. But you can see all variables for a host (except for setup facts, which you can browse separately with -m setup).
The trick is to print all variables from hostvars for the current host, inventory_hostname:
ansible -i inventory -e extra_vars_here -m debug -a 'msg={{hostvars[inventory_hostname]}}' all
(Replace all with a specific hostname if you wish.)
If you are overwhelmed, you can filter it with .keys(): hostvars[inventory_hostname].keys().
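Another option, as a sketch (variable names are the ones from the question; db_config is given a default so the task does not fail when it is not passed), is to add a small play at the top of main.yml, before the import, and print the values with the debug module:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: show the extra vars passed on the command line
      debug:
        msg: "main_playbook={{ main_playbook }} start_path={{ start_path }} db_config={{ db_config | default({}) }}"

- name: start playbook
  import_playbook: "{{ playbook_dir }}/{{ main_playbook }}/{{ start_path }}.yml"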
I am using Ubuntu 14.04.01 and Apache 2.2.31.
In httpd.conf I have
User build
Group build
Trying to start apache -
apache/logs$ cat stdout.log
httpd: bad user name build
Before that I tried to run:
. bin/envvars
When I created a local user "test":
useradd -m test -G sudo -s /bin/bash
and specified it in httpd.conf, I was able to start Apache.
But I need to use the LDAP user "build".
Finally, I solved this issue:
sudo ltrace -f sh apachectl configtest 2>out.log
I investigated the output in out.log and found a call to getpwnam("build") that returned 0. According to the documentation this means "The given name or uid was not found.", yet when I ran id build I could see that the user exists, along with the network groups it belongs to. I then connected to another VM, where I was able to start Apache with the LDAP user, ran
ldd bin/httpd
and compared the output. One of the VMs was missing "libldap-2.3.so.0".
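To reproduce that comparison, something along these lines can be run on both machines (a sketch; the library name is the one from the output above):
# list the shared libraries httpd is linked against and look for LDAP support
ldd bin/httpd | grep -i ldap
# check whether the LDAP client library is known to the dynamic linker at all
ldconfig -p | grep libldap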
localrc file
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
I followed this website: http://www.ibm.com/developerworks/cloud/library/cl-ldap-keystone/
I am assuming the above snippet is from a file written in shell syntax. Your example looks OK.
I checked the link you provided and noted that the line you say failed is written in the IBM example as:
KEYSTONE_IDENTITY_BACKEND = ldap
This is not legal sh (or bash) and would cause the error message you described:
KEYSTONE_IDENTITY_BACKEND = ldap
-bash: KEYSTONE_IDENTITY_BACKEND: command not found
I suspect you copied and pasted the bad example from the link into your localrc file, which caused the error you saw, but somehow when you wrote the SO question, you corrected the mistake by removing the spaces around the "=".
Edit: Investigation
TL;DR
Create a file in the root of the devstack repo, devstack/local.conf with the contents:
[[local|localrc]]
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
Full Description
I installed devstack on Centos7 (using the Devstack Quick Start Guide):
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
./stack.sh
I entered passwords as prompted, but eventually it failed with the error:
Error: pg_config executable not found.
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
I traced the problem to a limited PATH in the sudoers entry, and because my postgreSQL install is in a non-standard location, I linked pg_config into /usr/local/bin and ran stack.sh again:
sudo ln -s /usr/pgsql-9.3/bin/pg_config /usr/local/bin/pg_config
./stack.sh
(You probably won't have to do this if Postgres is in a standard location).
The install took a long time:
This is your host IP address: 192.168.200.181
This is your host IPv6 address: ::1
Horizon is now available at http://192.168.200.181/dashboard
Keystone is serving at http://192.168.200.181/identity/
The default users are: admin and demo
The password: 12345678
2016-07-17 18:16:32.834 | WARNING:
2016-07-17 18:16:32.834 | Using lib/neutron-legacy is deprecated, and it will be removed in the future
2016-07-17 18:16:32.834 | stack.sh completed in 1447 seconds.
I killed the devstack session and did it all again with a clean git repo and a local.conf file.
./unstack.sh
cd ..
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
cat << __EOF > local.conf
[[local|localrc]]
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
__EOF
./stack.sh
This time there were no password prompts, so the local config was definitely read.
I'm getting this error while Ansible (1.9.2) is trying to unpack the file.
19:06:38 TASK: [jmeter | unpack jmeter] ************************************************
19:06:38 fatal: [jmeter01.veryfast.server.jenkins] => input file not found at /tmp/apache-jmeter-2.13.tgz or /tmp/apache-jmeter-2.13.tgz
19:06:38
19:06:38 FATAL: all hosts have already failed -- aborting
19:06:38
I checked on the target server: the file /tmp/apache-jmeter-2.13.tgz exists and has valid permissions (for testing I even set 777, though that is not required, and I still got the above error message).
I also checked the md5sum of this file (and compared it with the one on the Apache JMeter site) -- it matches!
# md5sum apache-jmeter-2.13.tgz|grep 53dc44a6379b7b4a57976936f3a65e03
53dc44a6379b7b4a57976936f3a65e03 apache-jmeter-2.13.tgz
When I use tar -xvzf on this file, tar is able to list/extract its contents without a problem.
What could I be missing? At this point, I'm wondering whether the unarchive module in Ansible has some bug.
My last resort (if I can't get unarchive in Ansible to work) would be to use command with "tar -xzvf /tmp/....." but I don't want that as my first preference.
The default behavior of unarchive is to find the file on your local system, copy it to the remote host, and unpack it there. If you're getting a file not found error, I suspect you need to specify copy=no in your task.
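A minimal sketch of such a task (the dest path is a placeholder; on Ansible 1.9 the option is copy=no, while newer releases use remote_src: yes instead):
# dest is a hypothetical directory on the remote host; it must already exist
- name: unpack jmeter on the remote host
  unarchive: src=/tmp/apache-jmeter-2.13.tgz dest=/opt/jmeter copy=no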