Ansible: configure an nginx role to use a custom port read from a variable

I'm trying to create a custom role that installs nginx with Ansible.
I defined this defaults/main.yml:
---
defaults:
  user: nginx
  group: nginx
  version: "1.19.2-1"
  download_path: "/tmp/nginx-1.19.2-1"
  rpm: "/tmp/nginx-1.19.2-1.el7.ngx.x86_64.rpm"
  directories:
    log: /var/log/nginx
    config: /etc/nginx
    custom_config: /etc/nginx/conf.d
    pid: /var/run
  config:
    - name: main
      content: |
        upstream backend {
          ip_hash;
          server localhost:9090;
          server 127.0.0.1:9090;
        }

        server {
          listen 9443 ssl;
          ssl_certificate /etc/ssl/certs/cert.crt;
          ssl_certificate_key /etc/ssl/private/cert.key;
          location / {
            proxy_pass http://backend;
          }
        }
  server:
    port:
      listen:
        - 9443
And this is my tasks/main.yml:
---
- set_fact:
    default_vars: "{{ defaults }}"
    host_vars: "{{ hostvars[ansible_host]['nginx'] | default({}) }}"
    install_nginx: true

- set_fact:
    combined_vars: "{{ default_vars | combine(host_vars, recursive=True) }}"

- name: Gather package facts
  package_facts:
    manager: auto

- set_fact:
    install_nginx: false
  when: "'nginx' in ansible_facts.packages"

- name: Install NginX
  yum:
    name: "{{ combined_vars.rpm }}"
    state: present
    disable_gpg_check: true
  become: true
  when:
    - install_nginx

- name: Make sure Ports Open
  community.general.seport:
    ports: "{{ port.listen }}"
    proto: tcp
    setype: http_port_t
    state: present
  loop_control:
    loop_var: "port"
  when: 'port.listen is defined'
  with_items: "{{ combined_vars.config.server }}"
  become: true
  ignore_errors: true
Now, when I try to start nginx, I receive the error:
nginx: [emerg] bind() to 0.0.0.0:9443 failed (13: Permission denied)
This happens because my playbook skips the "Make sure Ports Open" section, where I open port 9443 (read from the config), and nginx won't start on a non-default port unless that port is allowed in SELinux (the equivalent OS command to allow port 9443 is: semanage port -a -t http_port_t -p tcp 9443).
This is part of my log:
ok: [10.x.x.8] => {
    "ansible_facts": {
        "combined_vars": {
            "config": [
                {
                    "content": "upstream backend {\n ip_hash;\n server 10.x.x.:9090;\n server 10.x.x.10:9090;\n}\n\nserver {\n listen 9443 ssl;\n ssl_certificate /etc/ssl/certs/cert.crt;\n ssl_certificate_key /etc/ssl/private/cert.key;\n location / {\n proxy_pass http://backend;\n }\n}\n",
                    "name": "main"
                }
            ],
            "directories": {
                "config": "/etc/nginx",
                "custom_config": "/etc/nginx/conf.d",
                "log": "/var/log/nginx",
                "pid": "/var/run"
            },
            "download_path": "/tmp/nginx-1.19.2-1",
            "group": "nginx",
            "rpm": "/tmp/nginx-1.19.2-1.el7.ngx.x86_64.rpm",
            "server": {
                "port": {
                    "listen": [
                        9443
                    ]
                }
            },
            "user": "nginx",
            "version": "1.19.2-1"
        }
    },
    "changed": false
}
...
fatal: [10.x.x.8]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'port' is undefined\n\nThe error appears to be in '/opt/Developments/GitLab/harrisburg-infrastructure/roles/nginx/tasks/main.yml': line 95, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Make sure Ports Open Mod\n ^ here\n"
}
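The root cause is visible in the fact dump above: `config` is a list of file definitions, and `server` is its sibling key, so `combined_vars.config.server` does not exist and the `port` loop variable is never set. A small Python sketch of the same structure (plain dicts standing in for the Jinja2 data) makes this concrete:

```python
# Mimic the shape of combined_vars as shown in the log output.
combined_vars = {
    "config": [{"name": "main", "content": "..."}],  # a list of config files
    "server": {"port": {"listen": [9443]}},          # a sibling key, NOT nested under config
}

# combined_vars["config"] is a list, so there is no "server" member on it:
assert isinstance(combined_vars["config"], list)
assert "server" not in combined_vars["config"][0]

# The ports the loop was meant to read actually live here:
ports = combined_vars["server"]["port"]["listen"]
print(ports)  # -> [9443]
```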

I solved it this way, in tasks/main.yml:
- name: Make sure Ports Open Mod
  community.general.seport:
    ports: "{{ combined_vars.server.port.listen }}"
    proto: tcp
    setype: http_port_t
    state: present
  loop_control:
    loop_var: "listen"
  when: 'combined_vars.server.port.listen is defined'
  with_items: "{{ combined_vars.server }}"
  become: true
  ignore_errors: true
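Since the module arguments reference the full `combined_vars.server.port.listen` path directly, the `with_items` loop is arguably redundant here; `seport` accepts a list for `ports`, so a loop-free variant could look like this (an untested sketch, not part of the original solution):

```yaml
- name: Make sure ports are open (loop-free sketch)
  community.general.seport:
    ports: "{{ combined_vars.server.port.listen }}"
    proto: tcp
    setype: http_port_t
    state: present
  when: combined_vars.server.port.listen is defined
  become: true
```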

Related

Is there a way in Ansible to retrieve a variable located on a different host, to get it on the localhost

I have the following playbook:
- name: Some action on the workers
  hosts: workers
  gather_facts: false
  run_once: true
  tasks:
    - name: Set var for server information
      set_fact:
        server_info:
          "name": "winserver"
          "os": "Microsoft Windows Server 2019 Datacenter"

- name: Some action on the localhost
  hosts: localhost
  gather_facts: false
  run_once: true
  tasks:
    - name: Show script stdout
      debug:
        msg:
          - "{{ server_info }}"
The hosts line actually refers to a group of servers named workers (for example server1, server2 and server3), where just one is chosen (arbitrarily) to run this task. Now I need to retrieve the value of this variable on the localhost, but since I don't know which server the first task ran on, I cannot reference it explicitly by using:
"{{ hostvars['server2']['server_info'] }}"
Does someone know if there is a way to retrieve this variable on the localhost?
Q: "I don't know on which server the first task runs."
A: It's irrelevant which server the first task runs on. The variable server_info will be declared on all of them. For example, given the inventory
shell> cat hosts
[workers]
server1
server2
server3
The playbook
- hosts: workers
  gather_facts: false
  run_once: true
  tasks:
    - set_fact:
        server_info: winserver
    - debug:
        var: hostvars[item]['server_info']
      loop: "{{ ansible_play_hosts_all }}"
gives
TASK [debug] *********************************************************
ok: [server1] => (item=server1) =>
  ansible_loop_var: item
  hostvars[item]['server_info']: winserver
  item: server1
ok: [server1] => (item=server2) =>
  ansible_loop_var: item
  hostvars[item]['server_info']: winserver
  item: server2
ok: [server1] => (item=server3) =>
  ansible_loop_var: item
  hostvars[item]['server_info']: winserver
  item: server3
You can pick any host you like. For example,
- hosts: localhost
  gather_facts: false
  run_once: true
  tasks:
    - debug:
        var: hostvars.server2.server_info
gives
TASK [debug] ************************************************************
ok: [localhost] =>
  hostvars.server2.server_info: winserver
I just found the answer to this question myself: through the use of
delegate_to: localhost
delegate_facts: true
This way the variable gets stored on the localhost.
- name: Some action on the workers
  hosts: workers
  gather_facts: false
  run_once: true
  tasks:
    - name: Set var for server information
      set_fact:
        server_info:
          "name": "winserver"
          "os": "Microsoft Windows Server 2019 Datacenter"
      delegate_to: localhost
      delegate_facts: true

- name: Some action on the localhost
  hosts: localhost
  gather_facts: false
  run_once: true
  tasks:
    - name: Show script stdout
      debug:
        msg:
          - "{{ server_info }}"

Ansible hostname and IP address

How can I use both the hostname and the IP address of a host from the inventory file?
For example, I have only one host in the hosts file, named by its FQDN, which is registered on the DNS server.
I tried several variables, but I always get only the hostname, and I need both of them.
Output of request to DNS server:
nslookup host1.dinamarca.com
Server: 10.10.1.1
Address: 10.10.1.1#53
Name: host1.dinamarca.com
Address: 192.168.1.10
Example hosts file (it contains only one host):
host1.dinamarca.com
I invoke Ansible with the command:
ansible-playbook --ask-pass -i hosts test.yml
My test.yml file:
---
- name: test1
  hosts: host1.dinamarca.com
  remote_user: usertest
  tasks:
    - name: show ansible_ssh_host
      debug:
        msg: "{{ ansible_ssh_host }}"
    - name: show inventary_hostname
      debug: var=inventory_hostname
    - name: show ansible_hostname
      debug: var=ansible_hostname
...
Output is:
TASK [show ansible_ssh_host] ****************************************************************************************************************************************
ok: [host1.dinamarca.com] => {
    "msg": "host1.dinamarca.com"
}

TASK [show inventary_hostname] **************************************************************************************************************************************
ok: [host1.dinamarca.com] => {
    "inventory_hostname": "host1.dinamarca.com"
}

TASK [show ansible_hostname] ****************************************************************************************************************************************
ok: [host1.dinamarca.com] => {
    "ansible_hostname": "host1"
}

PLAY RECAP **************************************************************************************************************************************************************
host1.dinamarca.com : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
There is an Ansible fact called ansible_fqdn. If you need both the hostname and FQDN, you can have tasks like this:
tasks:
  - name: show ansible_ssh_host
    debug:
      msg: "{{ ansible_ssh_host }}"
  - name: show inventory_hostname
    debug:
      msg: "{{ inventory_hostname }}"
  - name: show ansible_fqdn
    debug:
      msg: "{{ ansible_fqdn }}"
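The question also asked for the IP address; assuming fact gathering is enabled, the ansible_default_ipv4 fact (populated on hosts that have a default IPv4 route) is one way to get it — a sketch, not part of the original answer:

```yaml
- name: show the host's default IPv4 address (assumes gather_facts is on)
  debug:
    msg: "{{ ansible_default_ipv4.address | default('no default IPv4 fact') }}"
```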

Ansible not reporting distribution info on Ubuntu 20.04?

Example on Ubuntu 18.04 reporting distribution info in 'ansible_facts':
$ ansible -i hosts ubuntu1804 -u root -m setup -a "filter=ansible_distribution*"
ubuntu1804 | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Ubuntu",
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/os-release",
        "ansible_distribution_file_variety": "Debian",
        "ansible_distribution_major_version": "18",
        "ansible_distribution_release": "bionic",
        "ansible_distribution_version": "18.04"
    },
    "changed": false
}
Example of same command against Ubuntu 20.04:
$ ansible -i hosts ubuntu2004 -u root -m setup -a "filter=ansible_distribution*"
ubuntu2004 | SUCCESS => {
    "ansible_facts": {},
    "changed": false
}
Is this an issue with Ubuntu or Ansible? Is there a workaround?
Issue resolved with today's update to Ansible 2.9.7.
Another workaround: after some research into detecting the Ubuntu 20.04 version, I got the following working using Ansible 2.5.1:
- hosts: localhost
  become: true
  gather_facts: yes
  tasks:
    - name: System details
      debug:
        msg: "{{ ansible_facts['lsb']['release'] }}"
    - name: ubuntu 18
      shell: echo "hello 18"
      register: ub18
      when: ansible_facts['lsb']['release'] == "18.04"
    - debug:
        msg: "{{ ub18 }}"
    - name: ubuntu 20
      shell: echo "hello 20"
      register: ub20
      when: ansible_facts['lsb']['release'] == "20.04"
    - debug:
        msg: "{{ ub20 }}"
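Note that the lsb facts used in this workaround are only populated when the lsb_release utility is installed on the target; ansible_distribution_version has no such dependency, so a variant along these lines may be more robust (a sketch, not from the original answer):

```yaml
- name: Show distribution without relying on lsb_release
  debug:
    msg: "{{ ansible_distribution }} {{ ansible_distribution_version }}"
```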

"Failed to connect to the host via ssh" error Ansible

I am trying to run the following playbook on Ansible:
- hosts: localhost
  connection: local
  remote_user: test
  gather_facts: no
  vars_files:
    - files/aws_creds.yml
    - files/info.yml
  tasks:
    - name: Basic provisioning of EC2 instance
      ec2:
        assign_public_ip: no
        aws_access_key: "{{ aws_id }}"
        aws_secret_key: "{{ aws_key }}"
        region: "{{ aws_region }}"
        image: "{{ standard_ami }}"
        instance_type: "{{ free_instance }}"
        key_name: "{{ ssh_keyname }}"
        count: 3
        state: present
        group_id: "{{ secgroup_id }}"
        wait: no
        #delete_on_termination: yes
        instance_tags:
          Name: Dawny33Template
      register: ec2

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: launched
      with_items: "{{ ec2.instances }}"

    ## Here lies the SSH code
    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        delay: 60
        timeout: 320
        state: started
      with_items: "{{ ec2.instances }}"

- name: Configure instance(s)
  hosts: launched
  become: True
  gather_facts: True
  #roles:
  #  - my_awesome_role
  #  - my_awesome_test

- name: Terminate instances
  hosts: localhost
  connection: local
  tasks:
    - name: Terminate instances that were previously launched
      ec2:
        state: 'absent'
        instance_ids: '{{ ec2.instance_ids }}'
I am getting the following error:
TASK [setup] *******************************************************************
fatal: [52.32.183.176]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.32.183.176' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
fatal: [52.34.255.16]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.34.255.16' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
fatal: [52.34.253.51]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.34.253.51' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
My ansible.cfg file already has the following:
[defaults]
host_key_checking = False
Yet, the playbook run is failing. Can someone help me with what I am doing wrong?
The answer has to lie in:
Permission denied (publickey).
You got past host key checking - your problem is with authentication.
Are you intending to use key-based authentication? If so, does
ssh <host> -l <ansible_user>
work for you, or does it produce a password prompt?
Are you trying to use password authentication? If so, it looks like your node does not allow it.
Edit:
Adding -vvvv to your ansible-playbook invocation enables SSH debugging.
Is SSH set up properly? The logs indicate your public key isn't working.
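Publickey failures against fresh EC2 instances usually come down to the wrong remote user or key for the AMI. One way to pin both when registering the instances is via add_host — a sketch, where ec2-user and the .pem path are assumptions that depend on your AMI and key pair:

```yaml
- name: Add new instance to host group (explicit SSH user and key)
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: launched
    ansible_user: ec2-user  # assumption: default user for Amazon Linux AMIs
    ansible_ssh_private_key_file: "~/.ssh/{{ ssh_keyname }}.pem"  # assumed key location
  with_items: "{{ ec2.instances }}"
```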

Ansible: setting user on dynamic ec2

I don't appear to be connecting to the remote host. Why not?
Command-line: ansible-playbook -i "127.0.0.1," -c local playbook.yml
This is the playbook. The role, create_ec2_instance, creates the variable ec2hosts used within the second portion of the playbook (ansible/playbook.yml):
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  roles:
    - create_ec2_instance

# Configure and install all we need
- hosts: ec2hosts
  remote_user: admin
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project
This is just a simple ec2 module creation. This works as desired. (ansible/roles/create-ec2-instance/tasks/main.yml):
- name: Create instance
  ec2:
    region: "{{ instance_values['region'] }}"
    zone: "{{ instance_values['zone'] }}"
    keypair: "{{ instance_values['key_pair'] }}"
    group: "{{ instance_values['security_groups'] }}"
    instance_type: "{{ instance_values['instance_type'] }}"
    image: "{{ instance_values['image_id'] }}"
    count_tag: "{{ instance_values['name'] }}"
    exact_count: 1
    wait: yes
    instance_tags:
      Name: "{{ instance_values['name'] }}"
  when: ec2_instances.instances[instance_values['name']]|default("") == ""
  register: ec2_info

- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].public_dns_name }}"
    port: 22
  when: ec2_info|changed

- name: Add new instance to ec2hosts group
  add_host:
    hostname: "{{ ec2_info.instances[0].public_ip }}"
    groupname: ec2hosts
    instance_id: "{{ ec2_info.instances[0].id }}"
  when: ec2_info|changed
I've included the extra roles for transparency, though they are really basic (ansible/roles/show-hosts/tasks/main.yml):
- name: List hosts
  debug: msg="groups={{groups}}"
  run_once: true
and we have (ansible/roles/prepare-target-system/tasks/main.yml):
- name: get the username running the deploy
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host

- name: Add necessary system packages
  become: yes
  become_method: sudo
  package: "name={{item}} state=latest"
  with_items:
    - software-properties-common
    - python-software-properties
    - devscripts
    - build-essential
    - libffi-dev
    - libssl-dev
    - vim
Edit: I've updated to remote_user above; below is the error output:
TASK [prepare-target-system : debug] *******************************************
task path: <REDACTED>/ansible/roles/prepare-target-system/tasks/main.yml:5
ok: [35.166.52.247] => {
    "username_on_the_host": {
        "changed": true,
        "cmd": [
            "whoami"
        ],
        "delta": "0:00:00.009067",
        "end": "2017-01-07 08:23:42.033551",
        "rc": 0,
        "start": "2017-01-07 08:23:42.024484",
        "stderr": "",
        "stdout": "brianbruggeman",
        "stdout_lines": [
            "brianbruggeman"
        ],
        "warnings": []
    }
}
TASK [prepare-target-system : Ensure that we can update apt-repository] ********
task path: /<REDACTED>/ansible/roles/prepare-target-system/tasks/Debian.yml:2
Using module file <REDACTED>/.envs/dg2/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt.py
<35.166.52.247> ESTABLISH LOCAL CONNECTION FOR USER: brianbruggeman
<35.166.52.247> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" && echo ansible-tmp-1483799022.33-268449475843769="` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" ) && sleep 0'
<35.166.52.247> PUT /var/folders/r9/kv1j05355r34570x2f5wpxpr0000gn/T/tmpK2__II TO <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py
<35.166.52.247> EXEC /bin/sh -c 'chmod u+x <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/ <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py && sleep 0'
<35.166.52.247> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-owktjrfvqssjrqcetaxjkwowkzsqfitq; /usr/bin/python <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py; rm -rf "<REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/" > /dev/null 2>&1'"'"' && sleep 0'
failed: [35.166.52.247] (item=[u'software-properties-common', u'python-software-properties', u'devscripts', u'build-essential', u'libffi-dev', u'libssl-dev', u'vim']) => {
    "failed": true,
    "invocation": {
        "module_name": "apt"
    },
    "item": [
        "software-properties-common",
        "python-software-properties",
        "devscripts",
        "build-essential",
        "libffi-dev",
        "libssl-dev",
        "vim"
    ],
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE"
}
to retry, use: --limit #<REDACTED>/ansible/<redacted playbook>.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=6 changed=2 unreachable=0 failed=0
35.166.52.247 : ok=3 changed=1 unreachable=0 failed=1
Use become:
remote_user: ansible
become: true
become_user: root
Ansible docs: Become (Privilege Escalation)
For example, in my scripts I connect to the remote host as user 'ansible' (because SSH is disabled for root) and then become 'root'. Occasionally I connect as 'ansible' and then become the 'apache' user. So remote_user specifies the username to connect with, and become_user is the username after connection.
PS: Passwordless sudo for user ansible:
- name: nopasswd sudo for ansible user
  lineinfile: "dest=/etc/sudoers state=present regexp='^{{ ansible_user }}' line='{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'"
This is a known workaround; see: Specify sudo password for Ansible
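Editing /etc/sudoers in place with lineinfile is risky if the regexp ever misses; a sudoers.d drop-in validated by visudo is a safer pattern. A sketch, assuming the remote user is literally named ansible:

```yaml
- name: Passwordless sudo via a validated sudoers.d drop-in
  copy:
    dest: /etc/sudoers.d/ansible
    content: "ansible ALL=(ALL) NOPASSWD: ALL\n"  # assumption: remote user is 'ansible'
    owner: root
    group: root
    mode: "0440"
    validate: "visudo -cf %s"  # reject the file if visudo finds a syntax error
  become: true
```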