I don't appear to be connecting to the remote host. Why not?
Command-line: ansible-playbook -i "127.0.0.1," -c local playbook.yml
This is the playbook. The role, create_ec2_instance, creates the ec2hosts group that is used in the second play of the playbook (ansible/playbook.yml):
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  roles:
    - create_ec2_instance

# Configure and install all we need
- hosts: ec2hosts
  remote_user: admin
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project
This is just a simple use of the ec2 module, and it works as desired (ansible/roles/create-ec2-instance/tasks/main.yml):
- name: Create instance
  ec2:
    region: "{{ instance_values['region'] }}"
    zone: "{{ instance_values['zone'] }}"
    keypair: "{{ instance_values['key_pair'] }}"
    group: "{{ instance_values['security_groups'] }}"
    instance_type: "{{ instance_values['instance_type'] }}"
    image: "{{ instance_values['image_id'] }}"
    count_tag: "{{ instance_values['name'] }}"
    exact_count: 1
    wait: yes
    instance_tags:
      Name: "{{ instance_values['name'] }}"
  when: ec2_instances.instances[instance_values['name']]|default("") == ""
  register: ec2_info

- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].public_dns_name }}"
    port: 22
  when: ec2_info|changed

- name: Add new instance to ec2hosts group
  add_host:
    hostname: "{{ ec2_info.instances[0].public_ip }}"
    groupname: ec2hosts
    instance_id: "{{ ec2_info.instances[0].id }}"
  when: ec2_info|changed
I've included the extra roles for transparency, though they are really basic (ansible/roles/show-hosts/tasks/main.yml):
- name: List hosts
  debug: msg="groups={{ groups }}"
  run_once: true
and we have (ansible/roles/prepare-target-system/tasks/main.yml):
- name: get the username running the deploy
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host

- name: Add necessary system packages
  become: yes
  become_method: sudo
  package: "name={{ item }} state=latest"
  with_items:
    - software-properties-common
    - python-software-properties
    - devscripts
    - build-essential
    - libffi-dev
    - libssl-dev
    - vim
Edit: I've updated the playbook above to use remote_user; below is the error output:
TASK [prepare-target-system : debug] *******************************************
task path: <REDACTED>/ansible/roles/prepare-target-system/tasks/main.yml:5
ok: [35.166.52.247] => {
    "username_on_the_host": {
        "changed": true,
        "cmd": [
            "whoami"
        ],
        "delta": "0:00:00.009067",
        "end": "2017-01-07 08:23:42.033551",
        "rc": 0,
        "start": "2017-01-07 08:23:42.024484",
        "stderr": "",
        "stdout": "brianbruggeman",
        "stdout_lines": [
            "brianbruggeman"
        ],
        "warnings": []
    }
}
TASK [prepare-target-system : Ensure that we can update apt-repository] ********
task path: /<REDACTED>/ansible/roles/prepare-target-system/tasks/Debian.yml:2
Using module file <REDACTED>/.envs/dg2/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt.py
<35.166.52.247> ESTABLISH LOCAL CONNECTION FOR USER: brianbruggeman
<35.166.52.247> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" && echo ansible-tmp-1483799022.33-268449475843769="` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" ) && sleep 0'
<35.166.52.247> PUT /var/folders/r9/kv1j05355r34570x2f5wpxpr0000gn/T/tmpK2__II TO <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py
<35.166.52.247> EXEC /bin/sh -c 'chmod u+x <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/ <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py && sleep 0'
<35.166.52.247> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-owktjrfvqssjrqcetaxjkwowkzsqfitq; /usr/bin/python <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py; rm -rf "<REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/" > /dev/null 2>&1'"'"' && sleep 0'
failed: [35.166.52.247] (item=[u'software-properties-common', u'python-software-properties', u'devscripts', u'build-essential', u'libffi-dev', u'libssl-dev', u'vim']) => {
    "failed": true,
    "invocation": {
        "module_name": "apt"
    },
    "item": [
        "software-properties-common",
        "python-software-properties",
        "devscripts",
        "build-essential",
        "libffi-dev",
        "libssl-dev",
        "vim"
    ],
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE"
}
to retry, use: --limit @<REDACTED>/ansible/<redacted playbook>.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=6 changed=2 unreachable=0 failed=0
35.166.52.247 : ok=3 changed=1 unreachable=0 failed=1
Use become:
remote_user: ansible
become: true
become_user: root
Ansible docs: Become (Privilege Escalation)
For example: in my scripts I connect to the remote host as user 'ansible' (because ssh is disabled for root), and then become 'root'. Occasionally I connect as 'ansible' and then become the 'apache' user. So remote_user specifies the username used to connect, and become_user is the username assumed after the connection.
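Applied to the second play from the question, a minimal sketch might look like this (keeping the question's 'admin' remote user; adjust it to whichever account exists on the AMI):

# Sketch only: connect as the unprivileged 'admin' user, then escalate to root for every task.
- hosts: ec2hosts
  remote_user: admin
  become: true
  become_user: root
  become_method: sudo
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project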
P.S. Passwordless sudo for the ansible user:
- name: nopasswd sudo for ansible user
  lineinfile: "dest=/etc/sudoers state=present regexp='^{{ ansible_user }}' line='{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'"
This is a known workaround; see here: Specify sudo password for Ansible
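A somewhat safer variant (my own sketch, not part of the original answer) writes a drop-in file under /etc/sudoers.d and lets visudo syntax-check it before it is installed:

# Sketch: drop-in sudoers file, validated by visudo before being put in place.
- name: nopasswd sudo for ansible user
  become: yes
  copy:
    dest: /etc/sudoers.d/ansible
    content: "{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL\n"
    owner: root
    group: root
    mode: "0440"
    validate: "visudo -cf %s"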
Related
My issue is that I have one server where the sudoers entry for the ansible user looks like this:
ansible ALL=(root) NOPASSWD: /usr/bin/su - root
Hence, the only way to switch to the root user is:
sudo su - root
When I try to run the Ansible playbook below:
---
- name: Configure Local Repo server address
  hosts: lab
  remote_user: ansible
  become: yes
  become_user: root
  become_method: runas
  tasks:
    - name: test whoami
      become: yes
      shell:
        cmd: whoami
      register: whoami_output

    - debug: var=whoami_output

    - name: Deploy local.repo file to the hosts
      become: yes
      copy:
        src: /etc/ansible/files/local.repo
        dest: /etc/yum.repos.d/local.repo
        owner: ansible
        group: ansible
        mode: 0644
        backup: yes
      register: deploy_file_output

    - debug: var=deploy_file_output
I got the following error:
ansible-playbook --private-key /etc/ansible/keys/ansible_key /etc/ansible/playbooks/local_repo_provisioning.yml
PLAY [Configure Local Repo server address] *****************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************************************************
ok: [10.175.65.12]
TASK [test whoami] *****************************************************************************************************************************************************************************************************************************
changed: [10.175.65.12]
TASK [debug] ***********************************************************************************************************************************************************************************************************************************
ok: [10.175.65.12] => {
    "whoami_output": {
        "changed": true,
        "cmd": "whoami",
        "delta": "0:00:00.003301",
        "end": "2023-01-15 17:53:56.312715",
        "failed": false,
        "msg": "",
        "rc": 0,
        "start": "2023-01-15 17:53:56.309414",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "ansible",
        "stdout_lines": [
            "ansible"
        ]
    }
}
TASK [Deploy local.repo file to the hosts] *****************************************************************************************************************************************************************************************************
fatal: [10.175.65.12]: FAILED! => {"changed": false, "checksum": "2356deb90d20d5f31351c719614d5b5760ab967d", "msg": "Destination /etc/yum.repos.d not writable"}
PLAY RECAP *************************************************************************************************************************************************************************************************************************************
10.175.65.12 : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
When I tried to use become_method: sudo, I got the "Missing sudo password" message. Further, when I tried become_method: su, I got the "Timeout (12s) waiting for privilege escalation prompt:" message.
All in all, could someone explain how Ansible runs commands depending on the "become_method" setting? Is there a way to switch to the root user with that kind of sudoers configuration?
Thanks in advance!
Example on Ubuntu 18.04 reporting distribution info in 'ansible_facts':
$ ansible -i hosts ubuntu1804 -u root -m setup -a "filter=ansible_distribution*"
ubuntu1804 | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Ubuntu",
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/os-release",
        "ansible_distribution_file_variety": "Debian",
        "ansible_distribution_major_version": "18",
        "ansible_distribution_release": "bionic",
        "ansible_distribution_version": "18.04"
    },
    "changed": false
}
Example of same command against Ubuntu 20.04:
$ ansible -i hosts ubuntu2004 -u root -m setup -a "filter=ansible_distribution*"
ubuntu2004 | SUCCESS => {
    "ansible_facts": {},
    "changed": false
}
Is this an issue with Ubuntu or Ansible? Is there a workaround?
Issue resolved with today's update to ansible 2.9.7.
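A quick way to confirm the fix (assuming a pip-based install) is to upgrade and re-run the same ad-hoc command:
$ pip install --upgrade 'ansible>=2.9.7'
$ ansible -i hosts ubuntu2004 -u root -m setup -a "filter=ansible_distribution*"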
After a lot of research into detecting the Ubuntu release (18.04 vs. 20.04), this is what worked for us with Ansible version 2.5.1:
- hosts: localhost
  become: true
  gather_facts: yes
  tasks:
    - name: System details
      debug:
        msg: "{{ ansible_facts['lsb']['release'] }}"

    - name: ubuntu 18
      shell: echo "hello 18"
      register: ub18
      when: ansible_facts['lsb']['release'] == "18.04"

    - debug:
        msg: "{{ ub18 }}"

    - name: ubuntu 20
      shell: echo "hello 20"
      register: ub20
      when: ansible_facts['lsb']['release'] == "20.04"

    - debug:
        msg: "{{ ub20 }}"
I am using Ansible to create several EC2 instances, copy files onto those newly created servers, and run commands on them. The issue is that after creating the servers I still have to type yes at the following SSH prompt:
TASK [Adding /etc/rc.local2 to consul servers] *********************************
changed: [localhost -> 172.31.52.147] => (item={u'ip': u'172.31.52.147', u'number': 0})
The authenticity of host '172.31.57.20 (172.31.57.20)' can't be established.
ECDSA key fingerprint is 5e:c3:2e:52:10:29:1c:44:6f:d3:ac:10:78:10:01:89.
Are you sure you want to continue connecting (yes/no)? yes
changed: [localhost -> 172.31.57.20] => (item={u'ip': u'172.31.57.20', u'number': 1})
The authenticity of host '172.31.57.19 (172.31.57.19)' can't be established.
ECDSA key fingerprint is 4e:71:15:fe:c9:ec:3f:54:65:e8:a1:66:74:92:f4:ff.
Are you sure you want to continue connecting (yes/no)? yes
How can I have Ansible ignore this prompt and just answer yes automatically? For reference, here is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  sudo: yes
  vars_files:
    - ami-keys.yml
    - ami-image.yml

  tasks:
    - name: create 3 consul servers
      ec2:
        aws_access_key: '{{ aws_access_key }}'
        aws_secret_key: '{{ aws_secret_key }}'
        key_name: terra
        group: default
        instance_type: t2.micro
        image: '{{ ami }}'
        region: '{{ region }}'
        wait: true
        exact_count: 3
        count_tag:
          Name: consul-server
        instance_tags:
          Name: consul-server
      register: ec2

    - name: Wait for SSH to come up
      wait_for: host={{ item }} port=22 delay=1 timeout=480 state=started
      with_items:
        - "{{ ec2['tagged_instances'][0]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][1]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][2]['private_ip'] }}"

    # shows the json data for the instances created
    - name: consul server ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"

    # bootstrapping
    - name: Adding /etc/rc.local2 to consul servers
      template:
        src: template/{{ item.number }}.sh
        dest: /etc/rc.local2
      delegate_to: "{{ item.ip }}"
      with_items:
        - ip: "{{ ec2['tagged_instances'][0]['private_ip'] }}"
          number: 0
        - ip: "{{ ec2['tagged_instances'][1]['private_ip'] }}"
          number: 1
        - ip: "{{ ec2['tagged_instances'][2]['private_ip'] }}"
          number: 2
      ignore_errors: true

    - name: give /etc/rc.local2 permissions to run and starting swarm
      shell: "{{ item[1] }}"
      delegate_to: "{{ item[0] }}"
      with_nested:
        - [ "{{ ec2['tagged_instances'][0]['private_ip'] }}",
            "{{ ec2['tagged_instances'][1]['private_ip'] }}",
            "{{ ec2['tagged_instances'][2]['private_ip'] }}" ]
        - [ "sudo chmod +x /etc/rc.local2",
            "sleep 10",
            "consul reload",
            "docker run --name swarm-manager -d -p 4000:4000 --restart=unless-stopped \
             swarm manage -H :4000 \
             --replication --advertise \
             $(hostname -i):4000 \
             consul://$(hostname -i):8500" ]
      ignore_errors: true
Note: I have already tried running:
ansible-playbook -e 'host_key_checking=False' consul-server.yml
and it does not remove the prompt.
Going into /etc/ansible/ansible.cfg and uncommenting the line host_key_checking=False does remove the prompt; however, I want to avoid doing this and instead put something in my playbook or on the command line when I run it.
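(Note: host_key_checking is an Ansible configuration setting rather than a playbook variable, which is why passing it with -e has no effect. It can, however, be turned off for a single run through the environment:
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook consul-server.yml)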
The common recommendation is to set host_key_checking=False in the Ansible configuration. This is a bad idea, because it assumes your network connection will never be compromised.
A much better idea that only assumes the network isn't MitMed when you first create the servers is to use ssh-keyscan to add the servers' fingerprints to the known hosts file:
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts
  with_items: '{{ ec2.instances }}'
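In the playbook above, the new instances were registered under ec2.tagged_instances with private IPs, so under that assumption the loop would look more like:

- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.private_ip }} >> ~/.ssh/known_hosts
  with_items: "{{ ec2.tagged_instances }}"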
Here is the problem I'm working on.
I have an ansible server
I have another server M
I have other servers B1, B2, B3... all known to Ansible
I have a hosts file such as this
[CTRL]
M
[SLAVES]
B1
B2
B3
I want to generate an SSH key on my master (not the Ansible host itself) and deploy it to the slave servers so that the master can connect to the slaves by key.
Here is what I tried:
- hosts: CTRL
  remote_user: root
  vars_prompt:
    - name: ssh_password
      prompt: Please enter password for ssh key copy on remote nodes
      private: yes

  tasks:
    - yum: name=sshpass state=present
      sudo: yes

    - name: generate ssh key on the controller
      shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N /dev/null

    - name: copy ssh key to the other nodes
      shell: sshpass -p '{{ ssh_password }}' ssh-copy-id root@'{{ item }}'
      with_items: "{{ groups['SLAVES'] }}"
      delegate_to: "{{ groups['CTRL'][0] }}"
The key generation works, but no matter what I try I have a problem copying the key to the slave hosts:
failed: [M -> M] => (item=B1) => {"changed": true, "cmd": "sshpass -p 'mypassword' ssh-copy-id root@'B1'", "delta": "0:00:00.101102", "end": "2016-07-18 11:08:56.985623", "item": "B1", "rc": 6, "start": "2016-07-18 11:08:56.884521", "warnings": []}
failed: [M -> M] => (item=B2) => {"changed": true, "cmd": "sshpass -p 'mypassword' ssh-copy-id root@'B2'", "delta": "0:00:00.101102", "end": "2016-07-18 11:08:56.985623", "item": "B1", "rc": 6, "start": "2016-07-18 11:08:56.884521", "warnings": []}
failed: [M -> M] => (item=B3) => {"changed": true, "cmd": "sshpass -p 'mypassword' ssh-copy-id root@'B3'", "delta": "0:00:00.101102", "end": "2016-07-18 11:08:56.985623", "item": "B1", "rc": 6, "start": "2016-07-18 11:08:56.884521", "warnings": []}
Do you know how I could correct my code, or maybe is there a simpler way to do what I want to do?
Thank you.
This is a neater solution, without a file fetch:
---
- hosts: M
  tasks:
    - name: generate key pair
      shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N /dev/null
      args:
        creates: /root/.ssh/id_rsa

    - name: test public key
      shell: ssh-keygen -l -f /root/.ssh/id_rsa.pub
      changed_when: false

    - name: retrieve public key
      shell: cat /root/.ssh/id_rsa.pub
      register: master_public_key
      changed_when: false

- hosts: SLAVES
  tasks:
    - name: add master public key to slaves
      authorized_key:
        user: root
        key: "{{ hostvars['M'].master_public_key.stdout }}"
One of the possible solutions (my first answer):
---
- hosts: M
  tasks:
    - name: generate key pair
      shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N /dev/null

    - name: fetch public key
      fetch:
        src: /root/.ssh/id_rsa.pub
        dest: tmp/
        flat: yes

- hosts: SLAVES
  tasks:
    - name: add master public key to slaves
      authorized_key:
        user: root
        key: "{{ lookup('file', 'tmp/id_rsa.pub') }}"
Could someone tell me what I am doing wrong? I'm working with an Amazon EC2 instance and want to have the SSH agent forwarded to the user rails, but when I run the following task:
- acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"
  sudo: true
I see a failed result:
(item=/tmp/ssh-ULvzaZpq2U) => {"failed": true, "item": "/tmp/ssh-ULvzaZpq2U"}
msg: path not found or not accessible!
When I try it manually, without Ansible, it works fine:
setfacl -m rails:rwx "$SSH_AUTH_SOCK"
setfacl -m rails:x $(dirname "$SSH_AUTH_SOCK")
sudo -u rails ssh -T git@github.com   # Hi KELiON! You've successfully authenticated, but GitHub does not provide shell access.
I even tried launching a new instance and running a test Ansible playbook:
#!/usr/bin/env ansible-playbook
---
- hosts: all
  remote_user: ubuntu
  tasks:
    - user: name=rails
      sudo: true

    - name: Add ssh agent line to sudoers
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: SSH_AUTH_SOCK
        line: Defaults env_keep += "SSH_AUTH_SOCK"
      sudo: true

    - acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
      with_items:
        - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
        - "{{ ansible_env.SSH_AUTH_SOCK }}"
      sudo: true

    - name: Test that git ssh connection is working.
      command: ssh -T git@github.com
      sudo: true
      sudo_user: rails
ansible.cfg is:
[ssh_connection]
pipelining=True
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s
[defaults]
sudo_flags=-HE
hostfile=staging
But the same result. Any ideas?
I had the same issue and found the answer at https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
My solution varied a bit from his, because acl didn’t work for me, so I:
Changed ansible.cfg:
[defaults]
sudo_flags=-HE
[ssh_connection]
# COMMENTED OUT: ssh_args = -o ForwardAgent=yes
Added tasks/ssh_agent_hack.yml containing:
- name: "(ssh-agent hack: grant access to {{ deploy_user }})"
# SSH-agent socket is forwarded for the current user only (0700 file). Let's change it
# See: https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
# See: http://serverfault.com/questions/107187/ssh-agent-forwarding-and-sudo-to-another-user
become: false
file: group={{deploy_user}} mode=g+rwx path={{item}}
with_items:
- "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
- "{{ ansible_env.SSH_AUTH_SOCK }}"
NOTE: the become: false setting is there because I ssh in as root. If you ssh in as something else, you will need to become root to apply the fix, and then, below, become your deploy_user (if it isn't the user you are sshing in as).
And then called it from my deploy.yml playbook:
- hosts: apps
  gather_facts: True
  become: True
  become_user: "{{ deploy_user }}"
  pre_tasks:
    - include: tasks/ssh_agent_hack.yml
      tags: [ 'deploy' ]
  roles:
    - { role: carlosbuenosvinos.ansistrano-deploy, tags: [ 'deploy' ] }
Side note: adding ForwardAgent yes to the host entry in ~/.ssh/config didn't affect what worked (I tried all 8 combinations): only setting sudo_flags, but not ssh_args, works, and it doesn't matter whether forwarding is on or off in ~/.ssh/config for OpenSSH (tested under Ubuntu trusty).
Also note: I have pipelining=True in ansible.cfg
This worked for me in ansible v2.3.0.0:
$ vi ansible.cfg
[defaults]
roles_path = ./roles
retry_files_enabled = False
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r -o ForwardAgent=yes
$ vi roles/pull-code/tasks/main.yml
- name: '(Hack: keep SSH forwarding socket)'
  lineinfile:
    dest: /etc/sudoers
    insertafter: '^#?\s*Defaults\s+env_keep\b'
    line: 'Defaults env_keep += "SSH_AUTH_SOCK"'

- name: '(Hack: grant access to the socket to {{ app_user }})'
  become: false
  acl: name='{{ item }}' etype=user entity='{{ app_user }}' permissions="rwx" state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"

- name: Pull the code
  become: true
  become_user: '{{ app_user }}'
  git:
    repo: '{{ repository }}'
    dest: '{{ code_dest }}'
    accept_hostkey: yes
I know this answer is late to the party, but the other answers seemed a bit overly complicated when I distilled my solution to the bare minimum. Here's an example playbook to clone a git repo that requires authentication for access via ssh:
- hosts: all
  connection: ssh
  vars:
    # forward agent so access to git via ssh works
    ansible_ssh_extra_args: '-o ForwardAgent=yes'
    utils_repo: "git@git.example.com:devops/utils.git"
    utils_dir: "/opt/utils"
  tasks:
    - name: Install Utils
      git:
        repo: "{{ utils_repo }}"
        dest: "{{ utils_dir }}"
        update: true
        accept_hostkey: yes
      become: true
      become_method: sudo
      # Need this to ensure we have the SSH_AUTH_SOCK environment variable
      become_flags: '-HE'
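For the forwarding to have anything to forward, the SSH agent must be running on the control machine with the relevant key loaded before the playbook is invoked; a typical run might look like this (the inventory name, key path, and playbook name are placeholders):
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_rsa
$ ansible-playbook -i inventory playbook.yml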