Ansible and ForwardAgent for sudo_user - ssh

Could someone tell me what I'm doing wrong? I'm working with an Amazon EC2 instance and want to have the agent forwarded to the user rails, but when I run the following task:
- acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"
  sudo: true
I get a failed result:
(item=/tmp/ssh-ULvzaZpq2U) => {"failed": true, "item": "/tmp/ssh-ULvzaZpq2U"}
msg: path not found or not accessible!
When I try it manually, without Ansible, it works fine:
setfacl -m rails:rwx "$SSH_AUTH_SOCK"
setfacl -m rails:x $(dirname "$SSH_AUTH_SOCK")
sudo -u rails ssh -T git@github.com   # Hi KELiON! You've successfully authenticated, but GitHub does not provide shell access.
I even tried launching a new instance and running a test Ansible playbook:
#!/usr/bin/env ansible-playbook
---
- hosts: all
  remote_user: ubuntu
  tasks:
    - user: name=rails
      sudo: true
    - name: Add ssh agent line to sudoers
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: SSH_AUTH_SOCK
        line: Defaults env_keep += "SSH_AUTH_SOCK"
      sudo: true
    - acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
      with_items:
        - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
        - "{{ ansible_env.SSH_AUTH_SOCK }}"
      sudo: true
    - name: Test that git ssh connection is working.
      command: ssh -T git@github.com
      sudo: true
      sudo_user: rails
ansible.cfg is:
[ssh_connection]
pipelining=True
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s
[defaults]
sudo_flags=-HE
hostfile=staging
But I get the same result. Any ideas?

I had the same issue and found the answer at https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
My solution varied a bit from his, because acl didn’t work for me, so I:
Changed ansible.cfg:
[defaults]
sudo_flags=-HE
[ssh_connection]
# COMMENTED OUT: ssh_args = -o ForwardAgent=yes
Added tasks/ssh_agent_hack.yml containing:
- name: "(ssh-agent hack: grant access to {{ deploy_user }})"
# SSH-agent socket is forwarded for the current user only (0700 file). Let's change it
# See: https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
# See: http://serverfault.com/questions/107187/ssh-agent-forwarding-and-sudo-to-another-user
become: false
file: group={{deploy_user}} mode=g+rwx path={{item}}
with_items:
- "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
- "{{ ansible_env.SSH_AUTH_SOCK }}"
NOTE: the become: false setting is there because I ssh in as root. If you ssh in as something else, you will need to become root to apply the fix, and then below become your deploy_user (if it isn't the user you are ssh'ing in as); a sketch of that variant follows.
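For illustration only, a minimal sketch of that variant, assuming you connect as an unprivileged user and escalate to root just for the permission change (everything beyond the original task above is my assumption, not part of the linked answer):

- name: "(ssh-agent hack: grant access to {{ deploy_user }})"
  # Hypothetical variant of the task above: escalate to root to loosen the
  # permissions on the socket that was forwarded for the connecting user.
  become: true
  become_user: root
  file: group={{ deploy_user }} mode=g+rwx path={{ item }}
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"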
And then called it from my deploy.yml playbook:
- hosts: apps
  gather_facts: True
  become: True
  become_user: "{{deploy_user}}"
  pre_tasks:
    - include: tasks/ssh_agent_hack.yml
      tags: [ 'deploy' ]
  roles:
    - { role: carlosbuenosvinos.ansistrano-deploy, tags: [ 'deploy' ] }
Side note: adding ForwardAgent yes to the host entry in ~/.ssh/config didn't affect what worked (I tried all 8 combinations: only setting sudo_flags but not ssh_args works, and it doesn't matter whether forwarding is on or off in ~/.ssh/config for OpenSSH; tested under Ubuntu trusty).
Also note: I have pipelining=True in ansible.cfg

This worked for me in ansible v2.3.0.0:
$ vi ansible.cfg
[defaults]
roles_path = ./roles
retry_files_enabled = False
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r -o ForwardAgent=yes
$ vi roles/pull-code/tasks/main.yml
- name: '(Hack: keep SSH forwarding socket)'
  lineinfile:
    dest: /etc/sudoers
    insertafter: '^#?\s*Defaults\s+env_keep\b'
    line: 'Defaults env_keep += "SSH_AUTH_SOCK"'

- name: '(Hack: grant access to the socket to {{app_user}})'
  become: false
  acl: name='{{item}}' etype=user entity='{{app_user}}' permissions="rwx" state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"

- name: Pull the code
  become: true
  become_user: '{{app_user}}'
  git:
    repo: '{{repository}}'
    dest: '{{code_dest}}'
    accept_hostkey: yes

I know this answer is late to the party, but the other answers seemed a bit overly complicated when I distilled my solution to the bare minimum. Here's an example playbook to clone a git repo that requires authentication for access via ssh:
- hosts: all
  connection: ssh
  vars:
    # forward agent so access to git via ssh works
    ansible_ssh_extra_args: '-o ForwardAgent=yes'
    utils_repo: "git@git.example.com:devops/utils.git"
    utils_dir: "/opt/utils"
  tasks:
    - name: Install Utils
      git:
        repo: "{{ utils_repo }}"
        dest: "{{ utils_dir }}"
        update: true
        accept_hostkey: yes
      become: true
      become_method: sudo
      # Need this to ensure we have the SSH_AUTH_SOCK environment variable
      become_flags: '-HE'
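If you want to sanity-check that the forwarded socket actually survives the sudo -HE escalation, here is a hedged verification task you could append under tasks (the ssh-add check and its exit-code handling are my additions, not part of the original answer):

    # Hypothetical check: ssh-add exits 2 only when it cannot reach an agent
    # socket; exit 1 merely means the agent holds no identities.
    - name: Verify agent forwarding survives become
      command: ssh-add -l
      register: agent_keys
      changed_when: false
      failed_when: agent_keys.rc == 2
      become: true
      become_method: sudo
      become_flags: '-HE'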

Related

how to get inside Vault (ssh) with Ansible playbook?

I'm using Vagrant and I want to start the Vault server from inside the Vagrant box via an Ansible playbook.
To do so without the playbook, I need to execute vagrant ssh; then I'm in the Vagrant box and can start the Vault server using vault server -dev.
I want to execute vault server -dev directly from the playbook. Any ideas how?
This is my playbook:
---
- name: Playbook to install and use Vault
  become: true
  hosts: all
  tasks:
    - name: Update1
      become: true
      become_user: root
      shell: apt update
    - name: gpg
      become: true
      become_user: root
      shell: apt install gpg
    - name: verify key
      become: true
      become_user: root
      shell: wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null
    - name: fingerprint
      become: true
      become_user: root
      shell: gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint
    - name: repository
      become: true
      become_user: root
      shell: echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
    - name: update2
      become: true
      become_user: root
      shell: apt update
    - name: vault install
      become: true
      become_user: root
      shell: apt install vault
    - name: start vault
      become: true
      become_user: vagrant
      shell: vault server -dev -dev-listen-address=0.0.0.0:8200
The last one is my attempt to start the Vault server, but it gets stuck at:
TASK [start vault] *********************************************************
I also tried adding
- name: start vault
  become: true
  shell: vagrant ssh
before it, but then I get:
TASK [start vault] *************************************************************
fatal: [default]: FAILED! => {"changed": true, "cmd": "vagrant ssh", "delta": "0:00:00.003245", "end": "2022-07-03 16:18:31.480702", "msg": "non-zero return code", "rc": 127, "start": "2022-07-03 16:18:31.477457", "stderr": "/bin/sh: 1: vagrant: not found", "stderr_lines": ["/bin/sh: 1: vagrant: not found"], "stdout": "", "stdout_lines": []}
this is my Vagrantfile if needed:
Vagrant.configure("2") do |config|
VAGRANT_DEFAULT_PROVIDER = "virtualbox"
config.vm.hostname = "carebox-idan"
config.vm.provision "ansible", playbook: "playbook.yml"
config.vm.box = "laravel/homestead"
config.vm.network "forwarded_port", guest: 8200, host: 8200, auto_correct: "true"
config.ssh.forward_agent = true
end
Thank you.
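One likely reason the start vault task hangs is that vault server -dev is a foreground process that never returns. A hedged sketch of launching it without blocking the play, using Ansible's async support (the timeout value is an arbitrary assumption, not something from the question):

    - name: start vault
      become: true
      become_user: vagrant
      shell: vault server -dev -dev-listen-address=0.0.0.0:8200
      # Hypothetical change: run the command in the background and move on
      # instead of waiting for it to exit.
      async: 31536000
      poll: 0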

Ansible not reporting distribution info on Ubuntu 20.04?

Example on Ubuntu 18.04 reporting distribution info in 'ansible_facts':
$ ansible -i hosts ubuntu1804 -u root -m setup -a "filter=ansible_distribution*"
ubuntu1804 | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Ubuntu",
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/os-release",
        "ansible_distribution_file_variety": "Debian",
        "ansible_distribution_major_version": "18",
        "ansible_distribution_release": "bionic",
        "ansible_distribution_version": "18.04"
    },
    "changed": false
}
Example of same command against Ubuntu 20.04:
$ ansible -i hosts ubuntu2004 -u root -m setup -a "filter=ansible_distribution*"
ubuntu2004 | SUCCESS => {
    "ansible_facts": {},
    "changed": false
}
Is this an issue with Ubuntu or Ansible? Is there a workaround?
Issue resolved with today's update to ansible 2.9.7.
After further research into detecting the Ubuntu 20.04 version, we got it working on Ansible 2.5.1 by using the LSB facts:
- hosts: localhost
  become: true
  gather_facts: yes
  tasks:
    - name: System details
      debug:
        msg: "{{ ansible_facts['lsb']['release'] }}"
    - name: ubuntu 18
      shell: echo "hello 18"
      register: ub18
      when: ansible_facts['lsb']['release'] == "18.04"
    - debug:
        msg: "{{ ub18 }}"
    - name: ubuntu 20
      shell: echo "hello 20"
      register: ub20
      when: ansible_facts['lsb']['release'] == "20.04"
    - debug:
        msg: "{{ ub20 }}"

"Failed to connect to the host via ssh" error Ansible

I am trying to run the following playbook on Ansible:
- hosts: localhost
  connection: local
  remote_user: test
  gather_facts: no
  vars_files:
    - files/aws_creds.yml
    - files/info.yml
  tasks:
    - name: Basic provisioning of EC2 instance
      ec2:
        assign_public_ip: no
        aws_access_key: "{{ aws_id }}"
        aws_secret_key: "{{ aws_key }}"
        region: "{{ aws_region }}"
        image: "{{ standard_ami }}"
        instance_type: "{{ free_instance }}"
        key_name: "{{ ssh_keyname }}"
        count: 3
        state: present
        group_id: "{{ secgroup_id }}"
        wait: no
        #delete_on_termination: yes
        instance_tags:
          Name: Dawny33Template
      register: ec2

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: launched
      with_items: "{{ ec2.instances }}"

    ## Here lies the SSH code
    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        delay: 60
        timeout: 320
        state: started
      with_items: "{{ ec2.instances }}"

- name: Configure instance(s)
  hosts: launched
  become: True
  gather_facts: True
  #roles:
  #  - my_awesome_role
  #  - my_awesome_test

- name: Terminate instances
  hosts: localhost
  connection: local
  tasks:
    - name: Terminate instances that were previously launched
      ec2:
        state: 'absent'
        instance_ids: '{{ ec2.instance_ids }}'
I am getting the following error:
TASK [setup] *******************************************************************
fatal: [52.32.183.176]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.32.183.176' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
fatal: [52.34.255.16]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.34.255.16' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
fatal: [52.34.253.51]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.34.253.51' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
My ansible.cfg file already has the following:
[defaults]
host_key_checking = False
Yet, the playbook run is failing. Can someone help me with what I am doing wrong?
The answer has to lie in:
Permission denied (publickey).
You got past host key checking - your problem is with authentication.
Are you intending to use key-based authentication? If so, does
ssh <host> -l <ansible_user>
work for you, or does it produce a password prompt?
Are you trying to use password authentication? If so, it looks like your node does not allow it.
Edit:
Adding -vvvv to your ansible-playbook run enables SSH debugging.
Is SSH set up properly? The logs indicate your public key isn't working.
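If key-based authentication is the intent, one hedged adjustment is to tell Ansible which user and key to use for the dynamically added hosts; the user name and key path below are placeholders, not values from the question:

- name: Add new instance to host group
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: launched
    # Hypothetical additions (placeholder values): connection settings for the
    # launched group, matching the key pair used when the instances were created.
    ansible_user: ec2-user
    ansible_ssh_private_key_file: ~/.ssh/my-ec2-key.pem
  with_items: "{{ ec2.instances }}"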

Ansible: setting user on dynamic ec2

I don't appear to be connecting to the remote host. Why not?
Command-line: ansible-playbook -i "127.0.0.1," -c local playbook.yml
This is the playbook. The role, create_ec2_instance, creates the variable ec2hosts used within the second portion of the playbook (ansible/playbook.yml):
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  roles:
    - create_ec2_instance

# Configure and install all we need
- hosts: ec2hosts
  remote_user: admin
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project
This is just a simple use of the ec2 module, and it works as desired (ansible/roles/create-ec2-instance/tasks/main.yml):
- name: Create instance
  ec2:
    region: "{{ instance_values['region'] }}"
    zone: "{{ instance_values['zone'] }}"
    keypair: "{{ instance_values['key_pair'] }}"
    group: "{{ instance_values['security_groups'] }}"
    instance_type: "{{ instance_values['instance_type'] }}"
    image: "{{ instance_values['image_id'] }}"
    count_tag: "{{ instance_values['name'] }}"
    exact_count: 1
    wait: yes
    instance_tags:
      Name: "{{ instance_values['name'] }}"
  when: ec2_instances.instances[instance_values['name']]|default("") == ""
  register: ec2_info

- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].public_dns_name }}"
    port: 22
  when: ec2_info|changed

- name: Add new instance to ec2hosts group
  add_host:
    hostname: "{{ ec2_info.instances[0].public_ip }}"
    groupname: ec2hosts
    instance_id: "{{ ec2_info.instances[0].id }}"
  when: ec2_info|changed
I've included the extra roles for transparency, though they are really basic (ansible/roles/show-hosts/tasks/main.yml):
- name: List hosts
  debug: msg="groups={{groups}}"
  run_once: true
and we have (ansible/roles/prepare-target-system/tasks/main.yml):
- name: get the username running the deploy
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host

- name: Add necessary system packages
  become: yes
  become_method: sudo
  package: "name={{item}} state=latest"
  with_items:
    - software-properties-common
    - python-software-properties
    - devscripts
    - build-essential
    - libffi-dev
    - libssl-dev
    - vim
Edit: I've updated to remote_user above, and below is the error output:
TASK [prepare-target-system : debug] *******************************************
task path: <REDACTED>/ansible/roles/prepare-target-system/tasks/main.yml:5
ok: [35.166.52.247] => {
    "username_on_the_host": {
        "changed": true,
        "cmd": [
            "whoami"
        ],
        "delta": "0:00:00.009067",
        "end": "2017-01-07 08:23:42.033551",
        "rc": 0,
        "start": "2017-01-07 08:23:42.024484",
        "stderr": "",
        "stdout": "brianbruggeman",
        "stdout_lines": [
            "brianbruggeman"
        ],
        "warnings": []
    }
}
TASK [prepare-target-system : Ensure that we can update apt-repository] ********
task path: /<REDACTED>/ansible/roles/prepare-target-system/tasks/Debian.yml:2
Using module file <REDACTED>/.envs/dg2/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt.py
<35.166.52.247> ESTABLISH LOCAL CONNECTION FOR USER: brianbruggeman
<35.166.52.247> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" && echo ansible-tmp-1483799022.33-268449475843769="` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" ) && sleep 0'
<35.166.52.247> PUT /var/folders/r9/kv1j05355r34570x2f5wpxpr0000gn/T/tmpK2__II TO <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py
<35.166.52.247> EXEC /bin/sh -c 'chmod u+x <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/ <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py && sleep 0'
<35.166.52.247> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-owktjrfvqssjrqcetaxjkwowkzsqfitq; /usr/bin/python <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py; rm -rf "<REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/" > /dev/null 2>&1'"'"' && sleep 0'
failed: [35.166.52.247] (item=[u'software-properties-common', u'python-software-properties', u'devscripts', u'build-essential', u'libffi-dev', u'libssl-dev', u'vim']) => {
    "failed": true,
    "invocation": {
        "module_name": "apt"
    },
    "item": [
        "software-properties-common",
        "python-software-properties",
        "devscripts",
        "build-essential",
        "libffi-dev",
        "libssl-dev",
        "vim"
    ],
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE"
}
to retry, use: --limit #<REDACTED>/ansible/<redacted playbook>.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=6 changed=2 unreachable=0 failed=0
35.166.52.247 : ok=3 changed=1 unreachable=0 failed=1
Use become:
remote_user: ansible
become: true
become_user: root
Ansible docs: Become (Privilege Escalation)
For example: in my scripts I connect to the remote host as user 'ansible' (because ssh is disabled for root), and then become 'root'. Occasionally, I connect as 'ansible', then become the 'apache' user. So remote_user specifies the username to connect with, and become_user is the username after the connection.
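In playbook form, that split might look like this (the host group and role names are placeholders, not taken from the question):

- hosts: ec2hosts
  remote_user: ansible   # the account SSH actually logs in as
  become: true
  become_user: root      # the account tasks run as after escalation
  roles:
    - prepare-target-system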
PS Passwordless sudo for user ansible:
- name: nopasswd sudo for ansible user
  lineinfile: "dest=/etc/sudoers state=present regexp='^{{ ansible_user }}' line='{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'"
This is a known workaround; see here: Specify sudo password for Ansible
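Because a syntax error in /etc/sudoers can lock you out of sudo entirely, a slightly safer variant of that task validates the file before writing it (adding the validate option is my suggestion, not part of the linked workaround):

- name: nopasswd sudo for ansible user
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: '^{{ ansible_user }}'
    line: '{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'
    # Refuse to write the file if visudo reports a syntax error.
    validate: 'visudo -cf %s'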

Ansible - How to ssh into an instance without the 'authenticity of host' prompt?

I am using ansible to create several ec2 instances, copy files into those newly created servers and run commands on those servers. The issue is that after creating the servers I still have to enter yes in the following ssh prompt:
TASK [Adding /etc/rc.local2 to consul servers] *********************************
changed: [localhost -> 172.31.52.147] => (item={u'ip': u'172.31.52.147', u'number': 0})
The authenticity of host '172.31.57.20 (172.31.57.20)' can't be established.
ECDSA key fingerprint is 5e:c3:2e:52:10:29:1c:44:6f:d3:ac:10:78:10:01:89.
Are you sure you want to continue connecting (yes/no)? yes
changed: [localhost -> 172.31.57.20] => (item={u'ip': u'172.31.57.20', u'number': 1})
The authenticity of host '172.31.57.19 (172.31.57.19)' can't be established.
ECDSA key fingerprint is 4e:71:15:fe:c9:ec:3f:54:65:e8:a1:66:74:92:f4:ff.
Are you sure you want to continue connecting (yes/no)? yes
How can I have ansible ignore this prompt and just answer yes automatically? For reference here is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  sudo: yes
  vars_files:
    - ami-keys.yml
    - ami-image.yml
  tasks:
    - name: create 3 consul servers
      ec2:
        aws_access_key: '{{ aws_access_key }}'
        aws_secret_key: '{{ aws_secret_key }}'
        key_name: terra
        group: default
        instance_type: t2.micro
        image: '{{ ami }}'
        region: '{{ region }}'
        wait: true
        exact_count: 3
        count_tag:
          Name: consul-server
        instance_tags:
          Name: consul-server
      register: ec2

    - name: Wait for SSH to come up
      wait_for: host={{ item }} port=22 delay=1 timeout=480 state=started
      with_items:
        - "{{ ec2['tagged_instances'][0]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][1]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][2]['private_ip'] }}"

    # shows the json data for the instances created
    - name: consul server ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"

    # bootstrapping
    - name: Adding /etc/rc.local2 to consul servers
      template:
        src: template/{{ item.number }}.sh
        dest: /etc/rc.local2
      delegate_to: "{{ item.ip }}"
      with_items:
        - ip: "{{ ec2['tagged_instances'][0]['private_ip'] }}"
          number: 0
        - ip: "{{ ec2['tagged_instances'][1]['private_ip'] }}"
          number: 1
        - ip: "{{ ec2['tagged_instances'][2]['private_ip'] }}"
          number: 2
      ignore_errors: true

    - name: give /etc/rc.local2 permissions to run and starting swarm
      shell: "{{ item[1] }}"
      delegate_to: "{{ item[0] }}"
      with_nested:
        - [ "{{ ec2['tagged_instances'][0]['private_ip'] }}",
            "{{ ec2['tagged_instances'][1]['private_ip'] }}",
            "{{ ec2['tagged_instances'][2]['private_ip'] }}" ]
        - [ "sudo chmod +x /etc/rc.local2",
            "sleep 10",
            "consul reload",
            "docker run --name swarm-manager -d -p 4000:4000 --restart=unless-stopped \
             swarm manage -H :4000 \
             --replication --advertise \
             $(hostname -i):4000 \
             consul://$(hostname -i):8500" ]
      ignore_errors: true
Note: I have already tried running:
ansible-playbook -e 'host_key_checking=False' consul-server.yml
and it does not remove the prompt.
Going into /etc/ansible/ansible.cfg and uncommenting the line host_key_checking=False does remove the prompt; however, I want to avoid doing this and would rather put something in my playbook or on the command line when I run it instead.
The common recommendation is to set host_key_checking=False in the Ansible configuration. This is a bad idea, because it assumes your network connection will never be compromised.
A much better idea that only assumes the network isn't MitMed when you first create the servers is to use ssh-keyscan to add the servers' fingerprints to the known hosts file:
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts
  with_items: '{{ ec2.instances }}'
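If you prefer to avoid the shell append (which re-adds entries on every run), a hedged alternative is the known_hosts module; restricting ssh-keyscan to a single key type is my assumption to keep the lookup output to one line, and is not part of the answer above:

- name: accept new ssh fingerprints (idempotent variant)
  known_hosts:
    name: "{{ item.public_ip }}"
    # Scan the host key locally and hand it to the module as a one-line entry.
    key: "{{ lookup('pipe', 'ssh-keyscan -t ecdsa ' ~ item.public_ip) }}"
    state: present
  with_items: '{{ ec2.instances }}'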