Ansible ssh error: mux_client_read_packet: read header failed: Broken pipe Received exit status from master - ssh

I have a script /wd/remoteuser/stopALL.sh on the remote host, i.e. 10.0.0.211. It takes about 3 seconds to complete and has permissions 775 for remoteuser.
Note: /wd/remoteuser/stopALL.sh does not exist on the host where Ansible runs.
I wish to trigger the stop script on the remote host from my Ansible host.
Below is how I run my ansible playbook:
ansible-playbook /app/playbook/ovs.yml -i /app/playbook/ovs.hosts -t stop -f 5 -e Environment=PROD -e Country=SRILANKA -vvvv
cat /app/playbook/ovs.yml
---
- name: Play 1- check for login and mount point
  hosts: "*{{ Country }}_{{ Environment }}"
  user: "{{ USER }}"
  any_errors_fatal: true
  vars:
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no -o ConnectTimeout=90 -o ServerAliveInterval=50
    ansible_ssh_private_key_file: /app/ssh_keys/id_rsa
  gather_facts: false
  tasks:
    - name: Execute backup stop1 script
      tags: stop,restart
      script: "{{ stopscript }}"
      args:
        chdir: "{{ stopscript | dirname }}"
      register: stopscriptoutput
    - name: Debug stopscript
      tags: stop,restart
      debug:
        msg: "{{ stopscriptoutput.stdout }}"
cat /app/playbook/ovs.hosts
[APP_SRILANKA_PROD]
10.0.0.211 USER=remoteuser stopscript=/wd/remoteuser/stopALL.sh countrydet=SRILANKA evt=PROD
Output:
<10.0.0.211> (0, '', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256#libssh.org,diffie-hellman-group14-sha1,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,gss-gex-sha1-,gss-group14-sha1-]\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 190236\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
fatal: [10.0.0.211]: FAILED! => {
"changed": false,
"msg": "Could not find or access '/wd/remoteuser/stopALL.sh' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"
}
NO MORE HOSTS LEFT *****************************************************************************************************************************************************
PLAY RECAP *************************************************************************************************************************************************************
10.0.0.211 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I get this ssh read header failed: Broken pipe error even if I use the shell module as shown below.
- name: Execute backup stop1 script
  tags: stop
  shell: "sleep 90; {{ stopscript }}; sleep 90"
  register: stopscriptoutput
Kindly suggest how I can resolve the ssh broken pipe error and get the script to execute remotely.

Setting the proxy in the environment variables made it start working.
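The answer does not show where the proxy was set, so here is a minimal sketch of that fix, assuming the controller needs an HTTP proxy to reach the target network (proxy.example.com:8080 is a hypothetical address). The standard variables are exported in the controller shell before invoking the playbook:

export http_proxy=http://proxy.example.com:8080     # hypothetical proxy URL
export https_proxy=http://proxy.example.com:8080
ansible-playbook /app/playbook/ovs.yml -i /app/playbook/ovs.hosts -t stop -f 5 -e Environment=PROD -e Country=SRILANKA -vvvv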

Related

Overwrite vars_prompt variable in playbook with host variable from inventory in Ansible

I want to overwrite some variables in my playbook file that are defined under vars_prompt, using the inventory file for a host. If I understand it correctly, Ansible shouldn't prompt for variables that were already set, yet it still prompts for them when I execute the playbook.
How can I overwrite the vars_prompt variables from the inventory, or is this not possible because of Ansible's variable precedence rules?
Example:
playbook.yml
---
- name: Install Gateway
  hosts: all
  become: yes
  vars_prompt:
    - name: "hostname"
      prompt: "Hostname"
      private: no
...
inventory.yml
---
all:
  children:
    gateways:
      hosts:
        gateway:
          ansible_host: 192.168.1.10
          ansible_user: user
          hostname: "gateway-name"
...
Q: "If I understand it correctly, Ansible shouldn't prompt for the variables if they were already set before, however, it still prompts for the variables when I try to execute the playbook."
A: You're wrong. Ansible won't prompt for variables defined by the command line --extra-vars. Quoting from Interactive input: prompts:
Prompts for individual vars_prompt variables will be skipped for any variable that is already defined through the command line --extra-vars option, ...
You can't overwrite vars_prompt variables from the inventory. See Understanding variable precedence. Inventory variables (3.-9.) are lower precedence than play vars_prompt (13.). The precedence of extra vars is 22.
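For example, passing the value as an extra var skips the prompt entirely:
shell> ansible-playbook playbook.yml -e hostname=gateway-name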
Use the module pause to ask for the hostname if any variable is not defined. For example, the inventory
shell> cat hosts
host_1
host_2
and the playbook
- hosts: all
  gather_facts: false
  vars:
    hostnames: "{{ ansible_play_hosts_all|
                   map('extract', hostvars, 'hostname')|
                   list }}"
    hostnames_undef: "{{ hostnames|from_yaml|
                         select('eq', 'AnsibleUndefined')|
                         length > 0 }}"
  tasks:
    - debug:
        msg: |
          hostnames: {{ hostnames }}
          hostnames_undef: {{ hostnames_undef }}
      run_once: true
    - pause:
        prompt: "Hostname"
      register: out
      when: hostnames_undef
      run_once: true
    - set_fact:
        hostname: "{{ out.user_input }}"
      when: hostname is not defined
    - debug:
        var: hostname
gives
shell> ansible-playbook pb.yml
PLAY [all] ************************************************************************************
TASK [debug] **********************************************************************************
ok: [host_1] =>
  msg: |-
    hostnames: [AnsibleUndefined, AnsibleUndefined]
    hostnames_undef: True
TASK [pause] **********************************************************************************
[pause]
Hostname:
gw.example.com^Mok: [host_1]
TASK [set_fact] *******************************************************************************
ok: [host_1]
ok: [host_2]
TASK [debug] **********************************************************************************
ok: [host_1] =>
  hostname: gw.example.com
ok: [host_2] =>
  hostname: gw.example.com
PLAY RECAP ************************************************************************************
host_1: ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host_2: ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The playbook won't overwrite variables defined in the inventory. For example
shell> cat hosts
host_1
host_2 hostname=gw2.example.com
gives
TASK [debug] **********************************************************************************
ok: [host_1] =>
  hostname: gw.example.com
ok: [host_2] =>
  hostname: gw2.example.com
I don't know if you can stop the prompts, but you can set a default value directly in vars_prompt. This way you do not need to type "gateway-name" every time.
vars_prompt:
  - name: "hostname"
    prompt: "Hostname"
    private: no
    default: "gateway-name"
Source: https://docs.ansible.com/ansible/latest/user_guide/playbooks_prompts.html
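With the default in place you can simply press Enter at the prompt to accept gateway-name, so nothing has to be typed on each run.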

Ansible: How do you properly skip ssh first connection to fresh host?

Context: I'm trying to automate the provision of a fresh new server, but when a new machine is spawned and my ansible playbook is played against it from my provisioning server the usual message pops out:
The authenticity of host '192.168.1.25 (192.168.1.25)' can't be established.
ECDSA key fingerprint is SHA256:QF/AyFhYXaz5bjZ1O+kvceoOjBzmI8M1PYmg3lukYmE.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
I am aware this question has been answered a couple of times already, but I do not want to add this option to my .cfg file or pass the corresponding argument when I launch an ansible-playbook command.
Problem: So this answer came to my attention https://stackoverflow.com/a/54735937/18647199
I copy-pasted the two tasks into my playbook, and when they run by themselves the script works properly, skipping the aforementioned prompt (even on one server to which I still have to make the first connection); see:
TASK [Check known_hosts for 192.168.1.14] **************************************
ok: [192.168.1.16 -> localhost]
ok: [192.168.1.14 -> localhost]
ok: [192.168.1.25 -> localhost]
TASK [Ignore host key for 192.168.1.14 on first run] ***************************
skipping: [192.168.1.14]
skipping: [192.168.1.16]
skipping: [192.168.1.25]
PLAY RECAP *********************************************************************
192.168.1.14 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
192.168.1.16 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
192.168.1.25 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
But if I add just one more task, the authenticity prompt I'm trying to skip appears again.
p.s. using OpenSSH, latest current version.
What I'm trying to run:
---
#all
- hosts: all
  #connection: local
  become: true
  gather_facts: false # otherwise ssh prompt appears
  tasks:
    - name: Check known_hosts
      local_action: shell ssh-keygen -F "{{ inventory_hostname }}"
      register: is_known
      failed_when: false
      changed_when: false
      ignore_errors: yes
    - name: debug message
      debug:
        msg: the "{{ inventory_hostname }}"" was tested with output "{{ is_known }}"
    - name: Ignore host key for "{{ inventory_hostname }}" on first run
      when: is_known.rc == 1
      set_fact:
        ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
    - name: Bootstrap check
      stat:
        path: /home/bot/bootstrapped-ok
      register: bootstrap_result
[..] more code
Debug output:
ansible-playbook debug-bootstrap.yml
PLAY [all] *********************************************************************
TASK [Check known_hosts] *******************************************************
ok: [192.168.1.16 -> localhost]
ok: [192.168.1.14 -> localhost]
ok: [192.168.1.25 -> localhost]
TASK [debug message] ***********************************************************
ok: [192.168.1.14] => {
"msg": "the \"192.168.1.14\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.14\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.940041', 'end': '2022-04-02 12:30:50.943287', 'delta': '0:00:00.003246', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
ok: [192.168.1.16] => {
"msg": "the \"192.168.1.16\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.16\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.937097', 'end': '2022-04-02 12:30:50.941015', 'delta': '0:00:00.003918', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
ok: [192.168.1.25] => {
"msg": "the \"192.168.1.25\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.25\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.978944', 'end': '2022-04-02 12:30:50.982119', 'delta': '0:00:00.003175', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
TASK [Ignore host key for "192.168.1.14" on first run] *************************
skipping: [192.168.1.14]
skipping: [192.168.1.16]
skipping: [192.168.1.25]
TASK [Bootstrap check] *********************************************************
The authenticity of host '192.168.1.25 (192.168.1.25)' can't be established.
ECDSA key fingerprint is SHA256:QF/AyFhYXaz5bjZ1O+kvceoOjBzmI8M1PYmg3lukYmE.
Are you sure you want to continue connecting (yes/no/[fingerprint])? ok: [192.168.1.16]
ok: [192.168.1.14]
So it seems the command shell ssh-keygen -F "{{ inventory_hostname }}" isn't behaving the way it would if launched from a terminal. (In fact the debug output above shows why the skip never happens: ssh-keygen exits with rc 255 and "hostkeys_foreach failed: No such file or directory" because the known_hosts file does not exist yet, so the condition is_known.rc == 1 never matches and the set_fact task is always skipped.)
Question: Does anyone know how to implement that "one-time skip" or has a better way to do this for a fully automated provisioning / deploy?
(I tried to create a single .yml file, with scarce results; I hit a wall and don't have many ideas left on how to continue with a fully automated provisioning.)
I just added my answer to How to ignore ansible SSH authenticity checking?, which lists lots of options.
This is what we are using for stable hosts (when running the playbook from Jenkins and you simply want to accept the host key when connecting to the host for the first time), in the inventory file:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=accept-new'
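(Note that StrictHostKeyChecking=accept-new requires OpenSSH 7.6 or newer.)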
And this is what we have for temporary hosts (this effectively ignores the host key altogether):
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
There is also the ANSIBLE_SSH_COMMON_ARGS environment variable, or you can add the variable to a group/host variables file. It does not need to be in the inventory; that was just convenient in our case.
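For example, the same setting in a group variables file might look like this (a minimal sketch; group_vars/all.yml is an assumed path):

# group_vars/all.yml (assumed path)
ansible_ssh_common_args: '-o StrictHostKeyChecking=accept-new'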
Maybe this could help?

How to detect unreachable target hosts in ansible

I wish to record in a variable sshreachable whether the target hosts in group all_hosts are reachable or not.
I wrote the below playbook for the same.
- name: Play 3- check telnet nodes
  hosts: localhost
  ignore_unreachable: yes
  tasks:
    - name: Check all port numbers are accessible from current host
      include_tasks: innertelnet.yml
      with_items: "{{ groups['all_hosts'] }}"
cat innertelnet.yml
---
- name: Check ssh connectivity
  block:
    - raw: "ssh -o BatchMode=yes root@{{ item }} echo success"
      ignore_errors: yes
      register: sshcheck
    - debug:
        msg: "SSHCHECK variable:{{ sshcheck }}"
    - set_fact:
        sshreachable: 'SSH SUCCESS'
      when: sshcheck.unreachable == 'false'
    - set_fact:
        sshreachable: 'SSH FAILED'
      when: sshcheck.unreachable == 'true'
    - debug:
        msg: "INNERSSH1: {{ sshreachable }}"
Unfortunately, I get an error like the one below:
Output:
TASK [raw] *********************************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to 10.9.9.126 closed.", "skip_reason": "Host localhost is unreachable", "unreachable": true}
TASK [debug] ***********************************************************************************************************************************************************
task path:
ok: [localhost] => {
"msg": "SSHCHECK variable:{'msg': u'Failed to connect to the host via ssh: Shared connection to 10.9.9.126 closed.', 'unreachable': True, 'changed': False}"
}
TASK [set_fact] ****************************************************************
skipping: [localhost]
TASK [set_fact] ****************************************************************
skipping: [localhost]
TASK [debug] *******************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'sshreachable' is undefined\n\nThe error appears to be in '/app/playbook/checkssh/innertelnet.yml': line 45, column 10, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
PLAY RECAP *********************************************************************
10.0.116.194 : ok=101 changed=1 unreachable=9 failed=0 skipped=12 rescued=0 ignored=95
localhost : ok=5 changed=0 unreachable=1 failed=1 skipped=4 rescued=0 ignored=0
Can you please suggest changes to my code to get this to work?
The error indicates that the sshreachable variable is not getting set because the when: conditions never match. Looking at the registered result, sshcheck.unreachable is a boolean (True) when the host is unreachable and absent entirely on success, so comparing it to the strings 'true'/'false' will not work.
For this purpose, the command module should be enough, and we can evaluate the result of the command in set_fact.
You could do something like:
- block:
    - command: ssh -o BatchMode=yes user@host1 echo success
      ignore_errors: yes
      register: sshcheck
    - set_fact:
        sshreachable: "{{ sshcheck is success }}"
    - debug:
        msg: "Host1 reachable: {{ sshreachable | string }}"
Update:
raw module seems to work the same way. Example (including @mdaniel's valuable input):
- block:
    - raw: ssh -o BatchMode=yes user@host1 echo success
      ignore_errors: yes
      register: sshcheck
    - set_fact:
        sshreachable: SSH SUCCESS
      when: sshcheck is success
    - set_fact:
        sshreachable: SSH FAILED
      when: sshcheck is failed
    - debug:
        msg: "Host1 reachable: {{ sshreachable }}"

Ansible: setting user on dynamic ec2

I don't appear to be connecting to the remote host. Why not?
Command-line: ansible-playbook -i "127.0.0.1," -c local playbook.yml
This is the playbook. The role, create_ec2_instance, creates the ec2hosts group used in the second play of the playbook (ansible/playbook.yml):
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  roles:
    - create_ec2_instance

# Configure and install all we need
- hosts: ec2hosts
  remote_user: admin
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project
This is just a simple ec2 module invocation, and it works as desired (ansible/roles/create-ec2-instance/tasks/main.yml):
- name: Create instance
  ec2:
    region: "{{ instance_values['region'] }}"
    zone: "{{ instance_values['zone'] }}"
    keypair: "{{ instance_values['key_pair'] }}"
    group: "{{ instance_values['security_groups'] }}"
    instance_type: "{{ instance_values['instance_type'] }}"
    image: "{{ instance_values['image_id'] }}"
    count_tag: "{{ instance_values['name'] }}"
    exact_count: 1
    wait: yes
    instance_tags:
      Name: "{{ instance_values['name'] }}"
  when: ec2_instances.instances[instance_values['name']]|default("") == ""
  register: ec2_info

- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].public_dns_name }}"
    port: 22
  when: ec2_info|changed

- name: Add new instance to ec2hosts group
  add_host:
    hostname: "{{ ec2_info.instances[0].public_ip }}"
    groupname: ec2hosts
    instance_id: "{{ ec2_info.instances[0].id }}"
  when: ec2_info|changed
I've included the extra roles for transparency, though they are really basic (ansible/roles/show-hosts/tasks/main.yml):
- name: List hosts
  debug: msg="groups={{groups}}"
  run_once: true
and we have (ansible/roles/prepare-target-system/tasks/main.yml):
- name: get the username running the deploy
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host

- name: Add necessary system packages
  become: yes
  become_method: sudo
  package: "name={{item}} state=latest"
  with_items:
    - software-properties-common
    - python-software-properties
    - devscripts
    - build-essential
    - libffi-dev
    - libssl-dev
    - vim
Edit: I've updated to remote_user above, and below is the error output:
TASK [prepare-target-system : debug] *******************************************
task path: <REDACTED>/ansible/roles/prepare-target-system/tasks/main.yml:5
ok: [35.166.52.247] => {
    "username_on_the_host": {
        "changed": true,
        "cmd": [
            "whoami"
        ],
        "delta": "0:00:00.009067",
        "end": "2017-01-07 08:23:42.033551",
        "rc": 0,
        "start": "2017-01-07 08:23:42.024484",
        "stderr": "",
        "stdout": "brianbruggeman",
        "stdout_lines": [
            "brianbruggeman"
        ],
        "warnings": []
    }
}
TASK [prepare-target-system : Ensure that we can update apt-repository] ********
task path: /<REDACTED>/ansible/roles/prepare-target-system/tasks/Debian.yml:2
Using module file <REDACTED>/.envs/dg2/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt.py
<35.166.52.247> ESTABLISH LOCAL CONNECTION FOR USER: brianbruggeman
<35.166.52.247> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" && echo ansible-tmp-1483799022.33-268449475843769="` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" ) && sleep 0'
<35.166.52.247> PUT /var/folders/r9/kv1j05355r34570x2f5wpxpr0000gn/T/tmpK2__II TO <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py
<35.166.52.247> EXEC /bin/sh -c 'chmod u+x <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/ <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py && sleep 0'
<35.166.52.247> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-owktjrfvqssjrqcetaxjkwowkzsqfitq; /usr/bin/python <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py; rm -rf "<REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/" > /dev/null 2>&1'"'"' && sleep 0'
failed: [35.166.52.247] (item=[u'software-properties-common', u'python-software-properties', u'devscripts', u'build-essential', u'libffi-dev', u'libssl-dev', u'vim']) => {
    "failed": true,
    "invocation": {
        "module_name": "apt"
    },
    "item": [
        "software-properties-common",
        "python-software-properties",
        "devscripts",
        "build-essential",
        "libffi-dev",
        "libssl-dev",
        "vim"
    ],
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE"
}
to retry, use: --limit @<REDACTED>/ansible/<redacted playbook>.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=6 changed=2 unreachable=0 failed=0
35.166.52.247 : ok=3 changed=1 unreachable=0 failed=1
Use become:
remote_user: ansible
become: true
become_user: root
Ansible docs: Become (Privilege Escalation)
For example: in my scripts I connect to the remote host as user 'ansible' (because ssh is disabled for root) and then become 'root'. Occasionally I connect as 'ansible' and become the 'apache' user. So remote_user specifies the username for the connection, while become_user is the username after the connection.
PS Passwordless sudo for user ansible:
- name: nopasswd sudo for ansible user
  lineinfile: "dest=/etc/sudoers state=present regexp='^{{ ansible_user }}' line='{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'"
This is known workaround, see here: Specify sudo password for Ansible
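One caution when touching /etc/sudoers with lineinfile: a syntax error can lock you out of sudo entirely. The module supports a validate option, so a safer variant of the task above (my sketch, not part of the original answer) is:

- name: nopasswd sudo for ansible user
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: '^{{ ansible_user }}'
    line: '{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'
    validate: 'visudo -cf %s'   # refuse the edit if the resulting file fails visudo's syntax check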

Ansible and ForwardAgent for sudo_user

Could someone tell me what I am doing wrong? I'm working with an Amazon EC2 instance and want to have the agent forwarded to user rails, but when I run the following task:
- acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"
  sudo: true
I see a failed result:
(item=/tmp/ssh-ULvzaZpq2U) => {"failed": true, "item": "/tmp/ssh-ULvzaZpq2U"}
msg: path not found or not accessible!
When I try it manually, without Ansible, it works fine:
setfacl -m rails:rwx "$SSH_AUTH_SOCK"
setfacl -m rails:x $(dirname "$SSH_AUTH_SOCK")
sudo -u rails ssh -T git@github.com  # Hi KELiON! You've successfully authenticated, but GitHub does not provide shell access.
I even tried spinning up a new instance and running a test ansible playbook:
#!/usr/bin/env ansible-playbook
---
- hosts: all
  remote_user: ubuntu
  tasks:
    - user: name=rails
      sudo: true
    - name: Add ssh agent line to sudoers
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: SSH_AUTH_SOCK
        line: Defaults env_keep += "SSH_AUTH_SOCK"
      sudo: true
    - acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
      with_items:
        - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
        - "{{ ansible_env.SSH_AUTH_SOCK }}"
      sudo: true
    - name: Test that git ssh connection is working.
      command: ssh -T git@github.com
      sudo: true
      sudo_user: rails
ansible.cfg is:
[ssh_connection]
pipelining=True
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s
[defaults]
sudo_flags=-HE
hostfile=staging
But the same result. Any ideas?
I had the same issue and found the answer at https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
My solution varied a bit from his, because acl didn’t work for me, so I:
Changed ansible.cfg:
[defaults]
sudo_flags=-HE
[ssh_connection]
# COMMENTED OUT: ssh_args = -o ForwardAgent=yes
Added tasks/ssh_agent_hack.yml containing:
- name: "(ssh-agent hack: grant access to {{ deploy_user }})"
# SSH-agent socket is forwarded for the current user only (0700 file). Let's change it
# See: https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
# See: http://serverfault.com/questions/107187/ssh-agent-forwarding-and-sudo-to-another-user
become: false
file: group={{deploy_user}} mode=g+rwx path={{item}}
with_items:
- "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
- "{{ ansible_env.SSH_AUTH_SOCK }}"
NOTE: the become: false setting is there because I ssh in as root. If you ssh in as someone else, you will need to become root to apply the fix, and then below become your deploy_user (if it isn't the user you are ssh'ing in as).
And then called it from my deploy.yml playbook:
- hosts: apps
  gather_facts: True
  become: True
  become_user: "{{deploy_user}}"
  pre_tasks:
    - include: tasks/ssh_agent_hack.yml
      tags: [ 'deploy' ]
  roles:
    - { role: carlosbuenosvinos.ansistrano-deploy, tags: [ 'deploy' ] }
Side note: adding ForwardAgent yes to the host entry in ~/.ssh/config didn't affect what worked. I tried all 8 combinations: only setting sudo_flags but not ssh_args works, and it doesn't matter whether forwarding is on or off in ~/.ssh/config for OpenSSH (tested under Ubuntu Trusty).
Also note: I have pipelining=True in ansible.cfg
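For reference, that is the same option shown in the asker's ansible.cfg:

[ssh_connection]
pipelining=True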
This worked for me in ansible v2.3.0.0:
$ vi ansible.cfg
[defaults]
roles_path = ./roles
retry_files_enabled = False
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r -o ForwardAgent=yes
$ vi roles/pull-code/tasks/main.yml
- name: '(Hack: keep SSH forwarding socket)'
  lineinfile:
    dest: /etc/sudoers
    insertafter: '^#?\s*Defaults\s+env_keep\b'
    line: 'Defaults env_keep += "SSH_AUTH_SOCK"'

- name: '(Hack: grant access to the socket to {{app_user}})'
  become: false
  acl: name='{{item}}' etype=user entity='{{app_user}}' permissions="rwx" state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"

- name: Pull the code
  become: true
  become_user: '{{app_user}}'
  git:
    repo: '{{repository}}'
    dest: '{{code_dest}}'
    accept_hostkey: yes
I know this answer is late to the party, but the other answers seemed a bit overly complicated when I distilled my solution to the bare minimum. Here's an example playbook to clone a git repo that requires authentication for access via ssh:
- hosts: all
  connection: ssh
  vars:
    # forward agent so access to git via ssh works
    ansible_ssh_extra_args: '-o ForwardAgent=yes'
    utils_repo: "git@git.example.com:devops/utils.git"
    utils_dir: "/opt/utils"
  tasks:
    - name: Install Utils
      git:
        repo: "{{ utils_repo }}"
        dest: "{{ utils_dir }}"
        update: true
        accept_hostkey: yes
      become: true
      become_method: sudo
      # Need this to ensure we have the SSH_AUTH_SOCK environment variable
      become_flags: '-HE'
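One usage note: agent forwarding only works if the machine running the playbook has an ssh-agent running with the appropriate key loaded (for example via ssh-add) before the play starts; which key that is depends on your environment.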