I need some help with Ansible variables.
---
- name: create remote ansible account
  hosts: all
  gather_facts: false
  remote_user: admin
  vars:
    ansible_ssh_pass: mypassword
    ansible_become_pass: mypassword
    publickey: "{{ inputvalue }}"
  vars_files:
    - publickey_file.yml
  roles:
    - create account
publickey_file.yml looks like this:
entry1: ssh-rsa AAAAB3....
entry2: ssh-rsa AAAAC3....
The specific task in the role looks like this:
- name: install SSH Key
  authorized_key:
    user: ansible
    key: '{{ publickey }}'
  become: yes
I would like to push a specific public key by specifying a variable with ansible-playbook.
I tried this, but it does not work:
ansible-playbook -i inventory.yml myplaybook.yml -e 'inputvalue=entry1'
This does not insert the value of entry1 but only the literal string 'entry1', so the key passed to the authorized_key module is not correct.
How can I make publickey contain the value of the variable entry1 instead of the string 'entry1'?
You need the vars lookup in order to resolve a variable whose name is the string contained in the variable inputvalue:
publickey: "{{ lookup('vars', inputvalue) }}"
I need to deploy TICK.
How do you use variables in kapacitor.conf?
EX: username = "{{ admin }}"
I have a kapacitor.conf with variables to replace, and I have a file default.yml with variables.
Kapacitor.conf
username = "{{ admin }}"
password = "{{ admin_password }}"
default.yml
---
admin: admin
admin_password: admin
An option would be to use lineinfile. Given the variables
> cat default.yml
username: admin
password: admin_password
the playbook below
- hosts: localhost
  vars_files:
    - default.yml
  tasks:
    - lineinfile:
        path: Kapacitor.conf
        regexp: "^{{ item.key }}:"
        line: "{{ item.key }}:{{ item.value }}"
        create: yes
      loop:
        - {key: 'admin', value: "{{ username }}"}
        - {key: 'admin_password', value: "{{ password }}"}
gives:
> cat Kapacitor.conf
admin:admin
admin_password:admin_password
The next option (for some, the first) would be the template module.
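A minimal sketch of the template approach, assuming the config from the question is kept as a Jinja2 template named kapacitor.conf.j2 (with the username = "{{ admin }}" and password = "{{ admin_password }}" lines) and that default.yml defines admin and admin_password as in the question:
- hosts: localhost
  vars_files:
    - default.yml
  tasks:
    - name: render Kapacitor.conf from the template
      template:
        src: kapacitor.conf.j2
        dest: Kapacitor.conf
The template module substitutes every {{ ... }} expression in the template, so all variables in the file are replaced in one task instead of one lineinfile entry per key.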
Is this possible? I have a playbook looking like this:
vars:
  BDNAME: ""

- name: Add a tenant using a JSON string
  aci_bd:
    tenant: "common"
    bd: "{{ BDNAME }}"
    vrf: "PIGGE"
    hostname: '1.1.1.1'
    username: "x"
    password: "x"
    use_ssl: yes
    validate_certs: false
It works if I provide an extra variable on the command line:
ansible-playbook apic.yml -i server.yml --extra-vars BDNAME='pooh'
Then BDNAME gets the value pooh.
But is there any way that I can define pooh as a variable, so that if I run the playbook as I just did, BDNAME gets the value of that variable?
So something like
vars:
  BDNAME: ""
  POOH: nisse
Then BDNAME should be nisse.
Define BDNAME in the playbook directly from the extra variable POOH. That should do what you want, but it would be simpler to just use POOH instead of BDNAME.
Here is an example playbook:
---
- hosts: localhost
  vars:
    BDNAME: "{{ POOH }}"
  tasks:
    - name: print BDNAME
      debug:
        msg: "{{ BDNAME }}"
If you call it with:
ansible-playbook playbook.yml -e '{"POOH": "Oliver"}'
you will see:
TASK [print BDNAME] ************************************************************
ok: [localhost] => {
    "changed": false,
    "msg": "Oliver"
}
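If the play should also run when POOH is not supplied at all, the same pattern can fall back to a default (a small variation on the playbook above):
vars:
  BDNAME: "{{ POOH | default('') }}"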
I am using ansible to create several ec2 instances, copy files into those newly created servers and run commands on those servers. The issue is that after creating the servers I still have to enter yes in the following ssh prompt:
TASK [Adding /etc/rc.local2 to consul servers] *********************************
changed: [localhost -> 172.31.52.147] => (item={u'ip': u'172.31.52.147', u'number': 0})
The authenticity of host '172.31.57.20 (172.31.57.20)' can't be established.
ECDSA key fingerprint is 5e:c3:2e:52:10:29:1c:44:6f:d3:ac:10:78:10:01:89.
Are you sure you want to continue connecting (yes/no)? yes
changed: [localhost -> 172.31.57.20] => (item={u'ip': u'172.31.57.20', u'number': 1})
The authenticity of host '172.31.57.19 (172.31.57.19)' can't be established.
ECDSA key fingerprint is 4e:71:15:fe:c9:ec:3f:54:65:e8:a1:66:74:92:f4:ff.
Are you sure you want to continue connecting (yes/no)? yes
How can I have ansible ignore this prompt and just answer yes automatically? For reference here is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  sudo: yes
  vars_files:
    - ami-keys.yml
    - ami-image.yml
  tasks:
    - name: create 3 consul servers
      ec2:
        aws_access_key: '{{ aws_access_key }}'
        aws_secret_key: '{{ aws_secret_key }}'
        key_name: terra
        group: default
        instance_type: t2.micro
        image: '{{ ami }}'
        region: '{{ region }}'
        wait: true
        exact_count: 3
        count_tag:
          Name: consul-server
        instance_tags:
          Name: consul-server
      register: ec2

    - name: Wait for SSH to come up
      wait_for: host={{ item }} port=22 delay=1 timeout=480 state=started
      with_items:
        - "{{ ec2['tagged_instances'][0]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][1]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][2]['private_ip'] }}"

    # shows the json data for the instances created
    - name: consul server ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"

    # bootstrapping
    - name: Adding /etc/rc.local2 to consul servers
      template:
        src: template/{{ item.number }}.sh
        dest: /etc/rc.local2
      delegate_to: "{{ item.ip }}"
      with_items:
        - ip: "{{ ec2['tagged_instances'][0]['private_ip'] }}"
          number: 0
        - ip: "{{ ec2['tagged_instances'][1]['private_ip'] }}"
          number: 1
        - ip: "{{ ec2['tagged_instances'][2]['private_ip'] }}"
          number: 2
      ignore_errors: true

    - name: give /etc/rc.local2 permissions to run and starting swarm
      shell: "{{ item[1] }}"
      delegate_to: "{{ item[0] }}"
      with_nested:
        - [ "{{ ec2['tagged_instances'][0]['private_ip'] }}",
            "{{ ec2['tagged_instances'][1]['private_ip'] }}",
            "{{ ec2['tagged_instances'][2]['private_ip'] }}" ]
        - [ "sudo chmod +x /etc/rc.local2",
            "sleep 10",
            "consul reload",
            "docker run --name swarm-manager -d -p 4000:4000 --restart=unless-stopped \
             swarm manage -H :4000 \
             --replication --advertise \
             $(hostname -i):4000 \
             consul://$(hostname -i):8500" ]
      ignore_errors: true
Note: I have already tried running:
ansible-playbook -e 'host_key_checking=False' consul-server.yml
and it does not remove the prompt.
Going into /etc/ansible/ansible.cfg and uncommenting the line host_key_checking=False does remove the prompt; however, I want to avoid doing this and instead put something into my playbook or on the command line when I run it.
The common recommendation is to set host_key_checking=False in the Ansible configuration. This is a bad idea, because it assumes your network connection will never be compromised.
A much better idea that only assumes the network isn't MitMed when you first create the servers is to use ssh-keyscan to add the servers' fingerprints to the known hosts file:
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts
  with_items: '{{ ec2.instances }}'
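Adapted to the ec2 result registered in the playbook above (a sketch; it assumes the control host reaches the new instances by their private IPs, as the rest of that playbook does):
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.private_ip }} >> ~/.ssh/known_hosts
  with_items: "{{ ec2.tagged_instances }}"
Run this right after the wait_for task, before any task that delegates to the new instances.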
Could someone tell me what I am doing wrong? I'm working with an Amazon EC2 instance and want to have the agent forwarded to the user rails, but when I run the following task:
- acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"
  sudo: true
I see a failed result:
(item=/tmp/ssh-ULvzaZpq2U) => {"failed": true, "item": "/tmp/ssh-ULvzaZpq2U"}
msg: path not found or not accessible!
When I try it manually, without Ansible, it works fine:
setfacl -m rails:rwx "$SSH_AUTH_SOCK"
setfacl -m rails:x $(dirname "$SSH_AUTH_SOCK")
sudo -u rails ssh -T git@github.com  # Hi KELiON! You've successfully authenticated, but GitHub does not provide shell access.
I even tried launching a new instance and running a test Ansible playbook:
#!/usr/bin/env ansible-playbook
---
- hosts: all
  remote_user: ubuntu
  tasks:
    - user: name=rails
      sudo: true
    - name: Add ssh agent line to sudoers
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: SSH_AUTH_SOCK
        line: Defaults env_keep += "SSH_AUTH_SOCK"
      sudo: true
    - acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
      with_items:
        - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
        - "{{ ansible_env.SSH_AUTH_SOCK }}"
      sudo: true
    - name: Test that git ssh connection is working.
      command: ssh -T git@github.com
      sudo: true
      sudo_user: rails
ansible.cfg is:
[ssh_connection]
pipelining=True
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s
[defaults]
sudo_flags=-HE
hostfile=staging
But the same result. Any ideas?
I had the same issue and found the answer at https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
My solution varied a bit from his, because acl didn’t work for me, so I:
Changed ansible.cfg:
[defaults]
sudo_flags=-HE
[ssh_connection]
# COMMENTED OUT: ssh_args = -o ForwardAgent=yes
Added tasks/ssh_agent_hack.yml containing:
- name: "(ssh-agent hack: grant access to {{ deploy_user }})"
# SSH-agent socket is forwarded for the current user only (0700 file). Let's change it
# See: https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
# See: http://serverfault.com/questions/107187/ssh-agent-forwarding-and-sudo-to-another-user
become: false
file: group={{deploy_user}} mode=g+rwx path={{item}}
with_items:
- "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
- "{{ ansible_env.SSH_AUTH_SOCK }}"
NOTE: the become: false setting is there because I ssh in as root. If you ssh in as something else, you will need to become root to do the fix, and then below become your deploy_user (if it isn't the user you are ssh'ing in as).
And then called it from my deploy.yml playbook:
- hosts: apps
  gather_facts: True
  become: True
  become_user: "{{deploy_user}}"
  pre_tasks:
    - include: tasks/ssh_agent_hack.yml
      tags: [ 'deploy' ]
  roles:
    - { role: carlosbuenosvinos.ansistrano-deploy, tags: [ 'deploy' ] }
Side note: adding ForwardAgent yes to the host entry in ~/.ssh/config didn't affect what worked (I tried all 8 combinations): only setting sudo_flags but not ssh_args works, and it doesn't matter whether forwarding is on or off in ~/.ssh/config for OpenSSH. Tested under Ubuntu Trusty.
Also note: I have pipelining=True in ansible.cfg
This worked for me in ansible v2.3.0.0:
$ vi ansible.cfg
[defaults]
roles_path = ./roles
retry_files_enabled = False
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r -o ForwardAgent=yes
$ vi roles/pull-code/tasks/main.yml
- name: '(Hack: keep SSH forwarding socket)'
  lineinfile:
    dest: /etc/sudoers
    insertafter: '^#?\s*Defaults\s+env_keep\b'
    line: 'Defaults env_keep += "SSH_AUTH_SOCK"'

- name: '(Hack: grant access to the socket to {{app_user}})'
  become: false
  acl: name='{{item}}' etype=user entity='{{app_user}}' permissions="rwx" state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"

- name: Pull the code
  become: true
  become_user: '{{app_user}}'
  git:
    repo: '{{repository}}'
    dest: '{{code_dest}}'
    accept_hostkey: yes
I know this answer is late to the party, but the other answers seemed a bit overly complicated when I distilled my solution to the bare minimum. Here's an example playbook to clone a git repo that requires authentication for access via ssh:
- hosts: all
  connection: ssh
  vars:
    # forward agent so access to git via ssh works
    ansible_ssh_extra_args: '-o ForwardAgent=yes'
    utils_repo: "git@git.example.com:devops/utils.git"
    utils_dir: "/opt/utils"
  tasks:
    - name: Install Utils
      git:
        repo: "{{ utils_repo }}"
        dest: "{{ utils_dir }}"
        update: true
        accept_hostkey: yes
      become: true
      become_method: sudo
      # Need this to ensure we have the SSH_AUTH_SOCK environment variable
      become_flags: '-HE'
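Either way, agent forwarding only works if an ssh-agent is running on the control machine with the deploy key loaded before the playbook starts, e.g.:
eval "$(ssh-agent)" && ssh-add ~/.ssh/id_rsa
(the key path is just an example; use whichever key has access to the repository).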
I want to create an arbitrary number of droplets when I call a playbook with ansible.
For example:
I need to create 10 droplets running some python code.
$ ansible-playbook install_pyapp_commission_new.yml --extra-vars "number_of_droplets_to_create=10"
I tried using with_sequence: count = X but you can't apply it to roles, or inside tasks (as far as I know). My playbook looks something like this:
- name: Digital Ocean Provisioning
  hosts: 127.0.0.1
  gather_facts: false
  connection: local
  roles:
    - { role: do_provision, do_droplet_number: "{{ number_of_droplets_to_create | default(01) }}" }

- name: Setting up application
  gather_facts: true
  user: root
  hosts: do_instances
  roles:
    - { role: application, wait_time: 60 }
So I pass the input number of droplets to do_provision as do_droplet_number because at the moment I create one droplet per run (this way I can run 10 in parallel from bash, each with a different number, thus achieving my goal, but it's a dirty solution).
I wanted to do something like this:
- name: Digital Ocean Provisioning
  hosts: 127.0.0.1
  gather_facts: false
  connection: local
  roles:
    - { role: do_provision, do_droplet_number: "{{ item }}" }
      with_sequence: count={{ number_of_droplets_to_create }}
But this is not valid.
This should work using loop instead of with_sequence.
It shifts the loop into the role, because a loop cannot be applied to a role entry in the play.
The 'when' is needed to prevent a droplet from being created when do_droplet_number is 0.
playbook
- name: list hosts
  hosts: all
  gather_facts: false
  vars:
    thiscount: "{{ mycount | default('0') }}"
  roles:
    - { role: do-provision, do_droplet_number: "{{ thiscount }}" }
roles/do-provision/tasks/main.yml
- name: display number
  debug:
    msg: "mycount {{ item }}"
  loop: "{{ range(0, do_droplet_number|int) | list }}"
  when: thiscount|int > 0
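For example, calling the playbook above with
ansible-playbook playbook.yml -e 'mycount=3'
should run the debug task three times and print mycount 0, mycount 1 and mycount 2 (one iteration per droplet to create); with mycount omitted, thiscount defaults to '0' and the task is skipped.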