"Could not resolve hostname" Ansible - ssh

I have created my first ansible playbook according to this tutorial, so it looks like this:
---
- hosts: hostb
  tasks:
    - name: Create file
      file:
        path: /tmp/yallo
        state: touch

- hosts: my_hosts
  sudo: yes
  tasks:
    - name: Create user
      user:
        name: mario
        shell: /bin/zsh
    - name: Install zlib
      yum:
        name: zlib
        state: latest
However, I cannot figure out which hosts I should put into my hosts file. I have something like this for now:
[my_hosts]
hostA
hostB
Obviously, it is not working and I get this:
ssh: Could not resolve hostname hostb: Name or service not known
So how should I change my hosts file? I am new to ansible so I would be very grateful for some help!

OK, so the Ansible inventory can be based on the following formats:
HostName => IP address
HostName => DHCP or hosts-file hostname reference, e.g. localhost or cassie.local
Create your own alias => hostname ansible_host=IP address
Group of hosts => [group_name]
That is the most basic structure you can use.
Example
# Grouping
[test-group]
# IP reference
192.168.1.3
# Local hosts file reference
localhost
# Create your own alias
test ansible_host=192.168.1.4
# Create your alias with port and user to login as
test-2 ansible_host=192.168.1.5 ansible_port=1234 ansible_user=ubuntu
A host grouping only ends at the end of the file or when another group is detected. So if you wish to have hosts that don't belong to any group, make sure they're defined above the first group definition.
I.e. everything in the above example belongs to test-group, and if you run the following, it will execute on all of the hosts:
ansible test-group -u ubuntu -m ping
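Applied to the question's inventory: hostA and hostB are not resolvable names on the control machine, so give each an ansible_host alias pointing at its real IP. A sketch (the IPs below are placeholders, not from the question):

```ini
# hosts file sketch - substitute the real IPs of your machines
[my_hosts]
hostA ansible_host=192.168.1.10
hostB ansible_host=192.168.1.11
```

With this, ssh no longer needs DNS to resolve hostA/hostB, since Ansible connects to the ansible_host address directly.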

Ansible is case-sensitive: the host name in your inventory file is hostB, but in your playbook it is hostb. I think that is why it shows the "Name or service not known" error.
Change the host name in your playbook to hostB.
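I.e. the first play from the question becomes:

```yaml
- hosts: hostB   # matches the spelling in the inventory file
  tasks:
    - name: Create file
      file:
        path: /tmp/yallo
        state: touch
```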

Related

Ansible "Failed to connect to the host via ssh: ubuntu@" because target machine uses ec2-user

So I'm trying to use roles in Ansible and I'm not sure how to tell Ansible to use a specific user to ssh
So I have 2 files
site.yml
- hosts: _uat_web
- import_playbook: ../static-assignments/uat-webservers.yml
uat-webservers.yml
---
- hosts: _uat_web
  remote_user: ec2-user
  roles:
    - webservers
So if I run ansible-playbook uat-webservers.yml everything works as expected, but the idea is for site.yml to call uat-webservers.yml.
So when I run ansible-playbook site.yml I get this issue:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ubuntu@ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).", "unreachable": true}
I know the issue is that the target machine is running Red Hat, therefore I need the user ec2-user for ssh to work.
I tried putting remote_user: ec2-user in site.yml; it did not work. FYI, I'm executing the Ansible playbooks on an Ubuntu machine, which is why it defaults to the ubuntu user:
- hosts: _uat_web #uat-webservers
- remote_user: ec2-user
- import_playbook: ../static-assignments/uat-webservers.yml
In addition, I'm using the aws_ec2 dynamic inventory. I know that with a static inventory you can specify the user in the inventory. I would love a solution in the playbook itself, such as remote_user, which doesn't seem to work when using the import. Thank you.
In site.yml, this line by itself doesn't do anything (aside from gather facts). So it is redundant and can be removed.
- hosts: _uat_web
So if you remove that line, your import_playbook should work on its own, i.e.:
# site.yml
- import_playbook: ../static-assignments/uat-webservers.yml
If you really wanted that section because you wanted to run some stuff before importing the playbook, then do:
# site.yml
- hosts: _uat_web
  remote_user: ec2-user # Notice this line doesn't start with a "-" like it did in your example
  tasks: # or roles:
    ...

- import_playbook: ../static-assignments/uat-webservers.yml
Edit:
Once the UNREACHABLE error is resolved, I think you may encounter another error where it cannot find the role. I'm not sure how your directory structure is setup, but when you use import_playbook, the imported playbook will look for the roles relative to itself.
I.e. if your ../static-assignments/uat-webservers.yml playbook calls the webservers role, it will try to find it in ../static-assignments/roles/webservers, which may not exist at that path.
Some potential solutions are to look into the roles_path setting in ansible.cfg, or to use a symlink pointing to your main roles directory.
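For instance, a minimal ansible.cfg next to site.yml could point at the shared roles directory. The paths below are assumptions about your layout; adjust them as needed:

```ini
# ansible.cfg (sketch - search paths are colon-separated and checked in order)
[defaults]
roles_path = ./roles:../static-assignments/roles
```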

Ansible Inventory Specifying the Same Host with Different Users and Keys for Initial SSH User Setup and Disabling Root Access

I am attempting to have playbooks that run once to set up a new user and disable root ssh access.
For now, I am doing that by declaring all of my inventory twice. Each host needs an entry that accesses with the root user, used to create a new user, set up ssh settings, and then disable root access.
Then each host needs another entry with the new user that gets created.
My current inventory looks like this. It's only one host for now, but with a larger inventory, the repetition would just take up a ton of unnecessary space:
---
# ./hosts.yaml
all:
  children:
    master_roots:
      hosts:
        demo_master_root:
          ansible_host: a.b.c.d # same ip as below
          ansible_user: root
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
Is there a cleaner way to do this?
Is this an anti-pattern in any way? It is not idempotent. It would be nice to have this run in a way that running the same playbook twice always has the same output - either "success", or "no change".
I am using DigitalOcean and they have a functionality to have this done via a bash script before the VM comes up for the first time, but I would prefer a platform-independent solution.
Here is the playbook for setting up the users & ssh settings and disabling root access
---
# ./initial-host-setup.yaml
#
# References
# Digital Ocean recommended droplet setup script:
# - https://docs.digitalocean.com/droplets/tutorials/recommended-setup
# Digital Ocean tutorial on installing kubernetes with Ansible:
# - https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-debian-9
# Ansible Galaxy (Community) recipe for securing ssh:
# - https://github.com/vitalk/ansible-secure-ssh
- hosts: master_roots
  become: 'yes'
  tasks:
    - name: create the 'infraops' user
      user:
        state: present
        name: infraops
        password_lock: 'yes'
        groups: sudo
        append: 'yes'
        createhome: 'yes'
        shell: /bin/bash

    - name: add authorized keys for the infraops user
      authorized_key: 'user=infraops key="{{ item }}"'
      with_file:
        - '{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}.pub'

    - name: allow infraops user to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'infraops ALL=(ALL) NOPASSWD: ALL'
        validate: visudo -cf %s

    - name: disable empty password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitEmptyPasswords'
        line: PermitEmptyPasswords no
      notify: restart sshd

    - name: disable password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^(#\s*)?PasswordAuthentication '
        line: PasswordAuthentication no
      notify: restart sshd

    - name: Disable remote root user login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
Everything after this would use the masters inventory.
EDIT
After some research I have found that "init scripts"/"startup scripts"/"user data" scripts are supported across AWS, GCP, and DigitalOcean, potentially via cloud-init (this is what DigitalOcean uses, didn't research the others), which is cross-provider enough for me to just stick with a bash init script solution.
I would still be interested & curious if someone had a killer Ansible-only solution for this, although I am not sure there is a great way to make this happen without a pre-init script.
Regardless of any Ansible limitations, it seems that without using a cloud-init script you can't have this: either the server starts with root or a similarly privileged user that can perform these actions, or it starts without such a user, in which case you can't perform them.
Further, I have seen Ansible playbooks and bash scripts that try to solve the desired "idempotence" (complete with no errors even if root is already disabled) by testing root ssh access, then falling back to another user, but "I can't ssh with root" is a poor test for "is the root user disabled" because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh.
EDIT 2 placing this here, since I can't use newlines in my response to a comment:
β.εηοιτ.βε responded to my assertion:
"but 'I can't ssh with root' is a poor test for 'is the root user disabled' because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh"
with:
"then, try to ssh with infraops and assert that PermitRootLogin no is in the ssh daemon config file?"
It sounds like the suggestion is:
- attempt ssh with root
- if success, we know user/ssh setup tasks have not completed, so run those tasks
- if failure, attempt ssh with infraops
- if success, go ahead and run everything except the user creation again to ensure ssh config is as desired
- if failure... ? something else is probably wrong, since I can't ssh with either user
I am not sure what this sort of if-then failure recovery actually looks like in an Ansible script
You can overwrite host variables for a given play by using vars.
- hosts: masters
  become: 'yes'
  vars:
    ansible_ssh_user: "root"
    ansible_ssh_private_key_file: "~/.ssh/id_rsa_infra_ops"
  tasks:
You could only define the demo_master group and alter the ansible_user and ansible_ssh_private_key_file at run time, using command flags --user and --private-key.
So with a hosts.yaml containing:
all:
  children:
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
And a play running on - hosts: masters, the first run would, for example, be:
ansible-playbook initial-host-setup.yaml \
  --user root \
  --private-key ~/.ssh/id_rsa_root
while subsequent runs would simply be:
ansible-playbook subsequent-host-setup.yaml
since all the required values are already in the inventory.

Ansible unable to create folder on localhost with different user

I'm executing an Ansible playbook as appuser, whereas I wish to create a folder as user webuser on localhost.
ssh keys are set up for webuser on my localhost, so after logging in as appuser I can simply ssh webuser@localhost to switch to webuser.
Note: I do not have sudo privileges, so I cannot sudo to switch from appuser to webuser.
Below is my playbook, which is run as appuser but needs to create a folder 04May2020 on localhost as webuser:
- name: "Play 1"
  hosts: localhost
  remote_user: "webuser"
  vars:
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    ansible_ssh_private_key_file: /app/misc_automation/ssh_keys_id_rsa
  tasks:
    - name: create folder for today's print
      file:
        path: "/webWeb/htdocs/print/04May2020"
        state: directory
      remote_user: webuser
However, the output shows that the folder is created by appuser instead of webuser. See the output below, showing ssh connectivity as appuser instead of webuser:
ansible-playbook /app/Ansible/playbook/print_oracle/print.yml -i /app/Ansible/playbook/print_oracle/allhosts.hosts -vvv
TASK [create folder for today] ***********************************
task path: /app/Ansible/playbook/print_oracle/print.yml:33
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0'
Can you please suggest if it is possible without sudo?
Putting all my comments together in a comprehensive answer.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
This indicates that you are connecting to localhost through the local connection plugin, either because you explicitly re-declared the host as such or because you are using the implicit localhost. From the discussion, you are in the second situation.
When using the local connection plugin, as indicated in the documentation, the remote_user is ignored. Trying to change the user has no effect, as you can see in the below test run (user (u)ids changed):
# Check we are locally running as user1
$ id -a
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Running the same command through ansible returns the same result
$ ansible localhost -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Trying to change the remote user has no effect
$ ansible localhost -u whatever -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
Without changing your playbook and/or inventory, the only solution is to launch the playbook as the user who needs to create the directory.
Since you have ssh available, another solution is to declare a new host that you will use only for this purpose, targeting the local IP through ssh. (Note: you can explicitly declare localhost like this, but then all connections will go through ssh, which might not be what you want.)
Somewhere at the top of your inventory, add the line:
localssh ansible_host=127.0.0.1
And in your playbook, change the hosts line to:
hosts: localssh
Now the connection to your local machine will go through ssh and the remote_user will be obeyed correctly.
Another way you can try is overriding the connection settings for localhost through host_vars. To do this, in the directory from which you are running ansible commands, create a host_vars directory. In that sub-directory, create a file named localhost, containing the line ansible_connection: smart.

Can Ansible match hosts passed as parameter without using add_hosts module

Is it possible to pass the IP address as parameter 'Source_IP' to ansible playbook and use it as hosts ?
Below is my playbook ipinhost.yml:
---
- name: Play 2- Configure Source nodes
  hosts: "{{ Source_IP }}"
  serial: 1
  tasks:
    - name: Copying from "{{ inventory_hostname }}" to this ansible server.
      debug:
        msg: "MY IP IS: {{ Source_IP }}"
The playbook fails to run with the message "Could not match supplied host pattern." Output below:
ansible-playbook ipinhost.yml -e Source_IP='10.8.8.11'
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: 10.8.8.11
PLAY [Play 2- Configure Source nodes] ***********************************************************************************************************************
skipping: no hosts matched
PLAY RECAP **************************************************************************************************************************************************
I do not wish to use Ansible's add_host, i.e. I do not wish to build a dynamic host list, as the Source_IP will always be a single server.
Please let me know if this is possible and how can my playbook be tweaked to make it run with hosts matching '10.8.8.11'?
If it is always a single host, a possible solution is to pass a static inline inventory to ansible-playbook:
target your play at the 'all' group => hosts: all
call your playbook with an inline inventory of one host. Watch out: the trailing comma after the IP in the command is important:
ansible-playbook -i 10.8.8.11, ipinhost.yml
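Put together, a sketch of the adjusted playbook (the debug task is adapted from the question; inventory_hostname replaces the Source_IP variable since the host now comes from the inline inventory):

```yaml
# ipinhost.yml
---
- name: Play 2- Configure Source nodes
  hosts: all          # matches whatever the inline inventory supplies
  serial: 1
  tasks:
    - name: Show the target host
      debug:
        msg: "MY IP IS: {{ inventory_hostname }}"
```

The trailing comma in `-i 10.8.8.11,` is what makes Ansible treat the argument as an inline host list rather than as the path to an inventory file.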

Ansible get the value of the "hosts" key from an ansible play/playbook

Is there any way that I can get the group name for the set of hosts that a play is executing on? I know that ansible has a variable called ansible_play_hosts which is a list of all the hosts that a particular play is executing on. I want the actual group name that encompasses all these hosts.
I am using ansible version 2.3.2.0
Example:
# file: hosts
[my-host-group]
hostname-1
hostname-2
# file: playbook.yml
---
- hosts: my-host-group
  tasks:
    - name: "Print group name for 'hosts'"
      debug:
        msg: "Hosts var is '{{ hosts }}'"
I want the message to print Hosts var is 'my-host-group'
{{ hostvars[inventory_hostname].group_names[0] }}
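For example, dropped into the question's playbook (note that group_names is itself a magic variable, so the hostvars[inventory_hostname] prefix is optional; a host belonging to several groups will have more than one entry in that list):

```yaml
- hosts: my-host-group
  tasks:
    - name: "Print group name for 'hosts'"
      debug:
        msg: "Hosts var is '{{ hostvars[inventory_hostname].group_names[0] }}'"
```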