Create an arbitrary number of DigitalOcean droplets with Ansible

I want to create an arbitrary number of droplets when I call a playbook with ansible.
For example:
I need to create 10 droplets running some python code.
$ ansible-playbook install_pyapp_commission_new.yml --extra-vars "number_of_droplets_to_create=10"
I tried using with_sequence: count=X, but as far as I know you can't apply it to roles, or inside tasks. My playbook looks something like this:
- name: Digital Ocean Provisioning
  hosts: 127.0.0.1
  gather_facts: false
  connection: local
  roles:
    - { role: do_provision, do_droplet_number: "{{ number_of_droplets_to_create | default(1) }}" }
- name: Setting up application
  hosts: do_instances
  gather_facts: true
  user: root
  roles:
    - { role: application, wait_time: 60 }
So I pass the input number of droplets to do_provision as do_droplet_number, because at the moment I create one droplet per run. This way I can run 10 in parallel from bash, each with a different number, thus achieving my goal, but it's a dirty solution.
I wanted to do something like this:
- name: Digital Ocean Provisioning
  hosts: 127.0.0.1
  gather_facts: false
  connection: local
  roles:
    - { role: do_provision, do_droplet_number: "{{ item }}" }
  with_sequence: count={{ number_of_droplets_to_create }}
But this is not valid.

This should work using loop instead of with_sequence.
It moves the loop into the role, because a loop can't be attached to a role entry in a play.
The when is needed to prevent a droplet from being created when do_droplet_number is 0.
playbook
- name: list hosts
  hosts: all
  gather_facts: false
  vars:
    thiscount: "{{ mycount | default('0') }}"
  roles:
    - { role: do-provision, do_droplet_number: "{{ thiscount }}" }
roles/do-provision/tasks/main.yml
- name: display number
  debug:
    msg: "mycount {{ item }}"
  loop: "{{ range(0, do_droplet_number|int) | list }}"
  when: thiscount|int > 0
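On a current Ansible install you can also skip the role-level workaround and create all droplets in a single looped task with the community.digitalocean collection. A minimal sketch, untested; the token variable, naming scheme, size, region, and image below are assumptions, not taken from the question:

- name: Create N droplets in one task
  community.digitalocean.digital_ocean_droplet:
    state: present
    oauth_token: "{{ do_api_token }}"  # assumed variable holding your DO API token
    name: "pyapp-{{ item }}"           # assumed naming scheme
    unique_name: true                  # re-running the play won't duplicate droplets
    size: s-1vcpu-1gb
    region: nyc1
    image: ubuntu-22-04-x64
  loop: "{{ range(1, number_of_droplets_to_create | int + 1) | list }}"

Because unique_name keys droplets by name, the task is idempotent and can be re-run with a larger count to scale up.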

Related

Is there a way in Ansible to retrieve a variable located on a different host and use it on the localhost?

I have the following playbook:
- name: Some action on the workers
  hosts: workers
  gather_facts: false
  run_once: true
  tasks:
    - name: Set var for server information
      set_fact:
        server_info:
          "name": "winserver"
          "os": "Microsoft Windows Server 2019 Datacenter"

- name: Some action on the localhost
  hosts: localhost
  gather_facts: false
  run_once: true
  tasks:
    - name: Show script stdout
      debug:
        msg:
          - "{{ server_info }}"
The hosts value is actually a group named workers containing several servers (for example server1, server2 and server3), of which just one, chosen arbitrarily, runs this task. Now I need to retrieve the information from this variable on the localhost, but as I don't know on which server the first task runs, I cannot explicitly reference it by using:
"{{ hostvars['server2']['server_info'] }}"
Does someone know if there is a way to retrieve this variable on the localhost?
Q: "I don't know on which server the first task runs."
A: It's irrelevant which server the first task runs on. The variable server_info will be declared on all of them. For example, given the inventory
shell> cat hosts
[workers]
server1
server2
server3
The playbook
- hosts: workers
  gather_facts: false
  run_once: true
  tasks:
    - set_fact:
        server_info: winserver
    - debug:
        var: hostvars[item]['server_info']
      loop: "{{ ansible_play_hosts_all }}"
gives
TASK [debug] *********************************************************
ok: [server1] => (item=server1) =>
    ansible_loop_var: item
    hostvars[item]['server_info']: winserver
    item: server1
ok: [server1] => (item=server2) =>
    ansible_loop_var: item
    hostvars[item]['server_info']: winserver
    item: server2
ok: [server1] => (item=server3) =>
    ansible_loop_var: item
    hostvars[item]['server_info']: winserver
    item: server3
You can pick any host you like. For example,
- hosts: localhost
  gather_facts: false
  run_once: true
  tasks:
    - debug:
        var: hostvars.server2.server_info
gives
TASK [debug] ************************************************************
ok: [localhost] =>
    hostvars.server2.server_info: winserver
I just found the answer to this question myself: through the use of
delegate_to: localhost
delegate_facts: true
This way the variable gets stored on the localhost.
- name: Some action on the workers
  hosts: workers
  gather_facts: false
  run_once: true
  tasks:
    - name: Set var for server information
      set_fact:
        server_info:
          "name": "winserver"
          "os": "Microsoft Windows Server 2019 Datacenter"
      delegate_to: localhost
      delegate_facts: true

- name: Some action on the localhost
  hosts: localhost
  gather_facts: false
  run_once: true
  tasks:
    - name: Show script stdout
      debug:
        msg:
          - "{{ server_info }}"

Ansible: define a new variable with the value of another

I need some help with Ansible variables.
---
- name: create remote ansible account
  hosts: all
  gather_facts: false
  remote_user: admin
  vars:
    ansible_ssh_pass: mypassword
    ansible_become_pass: mypassword
    publickey: "{{ inputvalue }}"
  vars_files:
    - publickey_file.yml
  roles:
    - create account
publickey_file.yml looks like this:
entry1: ssh-rsa AAAAB3....
entry2: ssh-rsa AAAAC3....
The specific task in the role looks like this:
- name: install SSH Key
  authorized_key:
    user: ansible
    key: '{{ publickey }}'
  become: yes
I would like to push a specific public key by passing a variable when I call ansible-playbook. I tried this, but it does not work:
ansible-playbook -i inventory.yml myplaybook.yml -e 'inputvalue=entry1'
This does not insert the value of entry1 but only the literal string 'entry1', so the key that authorized_key installs is wrong. How can I make publickey resolve to the value of entry1 instead of the string 'entry1'?
You need the vars lookup in order to resolve a variable whose name is the string contained in inputvalue:
publickey: "{{ lookup('vars', inputvalue) }}"

How can we access a set_fact of a remote host from localhost in Ansible

I have a remote host group called dest_nodes on which a fact named mailsub is set with set_fact.
I wish to get the value of mailsub in a different play on localhost, and I use map('extract') for that.
However, I don't get the value assigned by set_fact when I debug MAILSUBISX on localhost.
Below is my playbook:
- name: "Play 2 Configure Destination nodes"
hosts: dest_nodes
any_errors_fatal: True
tasks:
- set_fact:
mailsub: "WebSphere | on IP and NUMBER OF PROFILES"
- name: "Play 3-Construct inventory"
hosts: localhost
any_errors_fatal: True
gather_facts: no
tasks:
- debug:
msg: "MAILSUBISX:{{ groups['dest_nodes'] | map('extract', hostvars, 'mailsub') }}"
- set_fact:
mailfrom: "creator#myshop.com"
Here is the error received when I run the playbook and debug the variable on a different host:
Output:
"msg": "MAILSUBISX:<generator object do_map at 0x7fba4b5698c0>"
I also tried the following:
- debug:
    msg: "MAILSUBISX:{{ hostvars[groups['dest_nodes']]['mailsub'] }}"
and
- debug:
    msg: "MAILSUBISX:{{ hostvars['dest_nodes']['mailsub'] }}"
But they too do not work.
This is how dest_nodes is constructed:
---
- name: "Play 1-Construct inventory"
  hosts: localhost
  any_errors_fatal: True
  gather_facts: no
  tasks:
    - name: Add hosts
      include_tasks: "{{ playbook_dir }}/gethosts1.yml"
      loop:
        - 01
        - 02
        - 03
      loop_control:
        loop_var: my_result
cat gethosts1.yml
---
- name: Generate JSON data
  add_host:
    hostname: "{{ item }}_{{ my_result }}"
    groups: dest_nodes
    ansible_host: "{{ item }}_{{ NUMBER }}"
  with_items: "{{ SERVER_IP.split(',') }}"
Can you please suggest?
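For what it's worth, the <generator object do_map at ...> in the output means the Jinja2 map result was never materialized; piping it through the list filter, or indexing a single host, renders the values. A sketch, not from the original thread:

- debug:
    msg: "MAILSUBISX:{{ groups['dest_nodes'] | map('extract', hostvars, 'mailsub') | list }}"

- debug:
    msg: "MAILSUBISX:{{ hostvars[groups['dest_nodes'][0]]['mailsub'] }}"

The hostvars[groups['dest_nodes']] attempt fails because groups['dest_nodes'] is a list, not a single hostname.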

Using variables from one yml file in another playbook

I am new to ansible and am trying to use variables from a vars.yml file in a playbook.yml file.
vars.yml
---
- firstvar:
id: 1
name: One
- secondvar:
id: 2
name: two
playbook.yml
---
- hosts: localhost
  tasks:
    - name: Import vars
      include_vars:
        file: ./vars.yml
        name: vardata
    - name: Use FirstVar
      iso_vlan:
        vlan_id: "{{ vardata.firstvar.id }}"
        name: "{{ vardata.firstvar.name }}"
        state: present
    - name: Use Secondvar
      iso_vlan:
        vlan_id: "{{ vardata.secondvar.id }}"
        name: "{{ vardata.secondvar.name }}"
        state: present
So you can see here that I am treating the imported variable data, which is stored in vardata, as an object and calling it in later tasks. I am pretty sure the vars imported in the first task are only available in that very task. How can I use them in the other tasks? Right now each task fails with an undefined-variable error. Any input is appreciated.
Your vars.yml file isn't formatted correctly.
Try this:
---
firstvar:
  id: 1
  name: One
secondvar:
  id: 2
  name: two
I used this to test it:
---
- hosts: localhost
  tasks:
    - name: Import vars
      include_vars:
        file: ./vars.yml
        name: vardata
    - name: debug
      debug:
        msg: "{{ vardata.firstvar.name }}"
    - name: more debug
      debug:
        msg: "{{ vardata.secondvar.id }}"
On top of the syntax error in the variable declarations (syntax is very important), you can also write include_vars: ./vars.yml without the name option, so that you can reference {{ firstvar.name }} and {{ firstvar.id }} directly, as shown below. Much leaner and shorter.
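A minimal sketch of that shorthand, assuming the corrected vars.yml above:

- hosts: localhost
  tasks:
    - name: Import vars at the top level
      include_vars: ./vars.yml
    - name: Use them without a namespace
      debug:
        msg: "{{ firstvar.name }} has id {{ firstvar.id }}"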

Ansible - How to ssh into an instance without the 'authenticity of host' prompt?

I am using ansible to create several ec2 instances, copy files into those newly created servers and run commands on those servers. The issue is that after creating the servers I still have to enter yes in the following ssh prompt:
TASK [Adding /etc/rc.local2 to consul servers] *********************************
changed: [localhost -> 172.31.52.147] => (item={u'ip': u'172.31.52.147', u'number': 0})
The authenticity of host '172.31.57.20 (172.31.57.20)' can't be established.
ECDSA key fingerprint is 5e:c3:2e:52:10:29:1c:44:6f:d3:ac:10:78:10:01:89.
Are you sure you want to continue connecting (yes/no)? yes
changed: [localhost -> 172.31.57.20] => (item={u'ip': u'172.31.57.20', u'number': 1})
The authenticity of host '172.31.57.19 (172.31.57.19)' can't be established.
ECDSA key fingerprint is 4e:71:15:fe:c9:ec:3f:54:65:e8:a1:66:74:92:f4:ff.
Are you sure you want to continue connecting (yes/no)? yes
How can I have Ansible ignore this prompt and just answer yes automatically? For reference, here is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  sudo: yes
  vars_files:
    - ami-keys.yml
    - ami-image.yml
  tasks:
    - name: create 3 consul servers
      ec2:
        aws_access_key: '{{ aws_access_key }}'
        aws_secret_key: '{{ aws_secret_key }}'
        key_name: terra
        group: default
        instance_type: t2.micro
        image: '{{ ami }}'
        region: '{{ region }}'
        wait: true
        exact_count: 3
        count_tag:
          Name: consul-server
        instance_tags:
          Name: consul-server
      register: ec2

    - name: Wait for SSH to come up
      wait_for: host={{ item }} port=22 delay=1 timeout=480 state=started
      with_items:
        - "{{ ec2['tagged_instances'][0]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][1]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][2]['private_ip'] }}"

    # shows the json data for the instances created
    - name: consul server ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"

    # bootstrapping
    - name: Adding /etc/rc.local2 to consul servers
      template:
        src: template/{{ item.number }}.sh
        dest: /etc/rc.local2
      delegate_to: "{{ item.ip }}"
      with_items:
        - ip: "{{ ec2['tagged_instances'][0]['private_ip'] }}"
          number: 0
        - ip: "{{ ec2['tagged_instances'][1]['private_ip'] }}"
          number: 1
        - ip: "{{ ec2['tagged_instances'][2]['private_ip'] }}"
          number: 2
      ignore_errors: true

    - name: give /etc/rc.local2 permissions to run and starting swarm
      shell: "{{ item[1] }}"
      delegate_to: "{{ item[0] }}"
      with_nested:
        - [ "{{ ec2['tagged_instances'][0]['private_ip'] }}",
            "{{ ec2['tagged_instances'][1]['private_ip'] }}",
            "{{ ec2['tagged_instances'][2]['private_ip'] }}" ]
        - [ "sudo chmod +x /etc/rc.local2",
            "sleep 10",
            "consul reload",
            "docker run --name swarm-manager -d -p 4000:4000 --restart=unless-stopped \
             swarm manage -H :4000 \
             --replication --advertise \
             $(hostname -i):4000 \
             consul://$(hostname -i):8500" ]
      ignore_errors: true
Note: I have already tried running:
ansible-playbook -e 'host_key_checking=False' consul-server.yml
and it does not remove the prompt.
Going into /etc/ansible/ansible.cfg and uncommenting the line host_key_checking=False does remove the prompt; however, I want to avoid editing that file and instead put something in my playbook or on the command line when I run it.
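For the record, -e sets playbook variables, not configuration options, which is why that attempt has no effect. The same configuration setting can, however, be supplied per run as an environment variable instead of editing ansible.cfg:
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook consul-server.yml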
The common recommendation is to set host_key_checking=False in the Ansible configuration. This is a bad idea, because it assumes your network connection will never be compromised.
A much better idea, which only assumes the network isn't being man-in-the-middled when you first create the servers, is to use ssh-keyscan to add the servers' fingerprints to the known hosts file:
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts
  with_items: '{{ ec2.instances }}'
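Adapted to the playbook above, where the instances are registered under tagged_instances and reached by private IP, the same task would look like this (a sketch reusing the question's own field names):

- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.private_ip }} >> ~/.ssh/known_hosts
  with_items: "{{ ec2.tagged_instances }}"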