How to copy a file and add dynamic content on startup of an EC2 instance using CloudFormation?

I need to copy the content of a cert file, which comes from Secrets Manager, into an EC2 instance on startup using CloudFormation.
Edit:
I added an IAM Role, a Policy, and an InstanceProfile in my code to ensure that I can access the Secrets Manager value from UserData.
The code now looks like this:
SecretsManagerAccessRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: CloudFormationSecretsManagerAccessRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
          Action: sts:AssumeRole
    Path: "/"
SecretsManagerInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: "/"
    Roles: [ !Ref SecretsManagerAccessRole ]
SecretsManagerInstancePolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: SecretsManagerAccessPolicy
    PolicyDocument:
      Statement:
        - Effect: Allow
          Action: secretsmanager:GetSecretValue
          Resource: <arn-of-the-secret>
    Roles: [ !Ref SecretsManagerAccessRole ]
LinuxEC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    IamInstanceProfile: !Ref SecretsManagerInstanceProfile
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        yum update -y
        groupadd -g 110 ansible
        adduser ansible -g ansible
        mkdir -p /home/ansible/.ssh
        chmod 700 /home/ansible/.ssh
        aws secretsmanager get-secret-value \
          --secret-id <arn-of-the-secret> \
          --region ${AWS::Region} \
          --query 'SecretString' \
          --output text > /home/ansible/.ssh/authorized_keys
        chmod 000644 /home/ansible/.ssh/authorized_keys
        chown -R ansible.ansible /home/ansible/.ssh/
        cat /home/ansible/.ssh/authorized_keys
During startup of the instance, I get this error:
Unable to locate credentials. You can configure credentials by running "aws configure".
It seems like the instance did not get the necessary role to perform this action in UserData. Why is that?
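A quick way to check, from inside the instance, whether role credentials were actually delivered is to query the instance metadata service and the STS caller identity. This is only a diagnostic sketch and assumes IMDSv1 is reachable:

# Prints the name of the attached role; empty/404 means no instance profile is attached
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Shows which identity the AWS CLI is actually using (fails with the same error if there is none)
aws sts get-caller-identity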

I tried a few things, but they all failed. The only thing that worked was to use UserData.
For example, you could have the following:
LinuxEC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: ami-08f3d892de259504d # AL2 in us-east-1
    InstanceType: t2.micro
    IamInstanceProfile: <name-of-instance-profile>
    KeyName: MyKeyPair
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        yum update -y
        groupadd -g 110 ansible
        adduser ansible -g ansible
        mkdir -p /home/ansible/.ssh
        chmod 700 /home/ansible/.ssh
        secret_value=$(aws secretsmanager get-secret-value \
          --secret-id <arn-of-the-secret> \
          --region ${AWS::Region} \
          --query 'SecretString' \
          --output text)
        # adjust the jq filter below to match your secret's JSON structure
        echo ${!secret_value} | jq -r '.key' > /home/ansible/.ssh/authorized_keys
        chmod 000644 /home/ansible/.ssh/authorized_keys
        chown -R ansible.ansible /home/ansible/.ssh/
You would also need to add an instance role/profile to the instance
so that it can read the secret. The role could contain the following
policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadSecretValue",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "<arn-of-secret>"
    }
  ]
}
Edit:
If KMS is used for encryption of the secret, the instance role would need the corresponding KMS permissions as well.
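To check whether a customer-managed KMS key is involved at all, the secret's key ID can be inspected; a diagnostic sketch reusing the placeholder ARN from above (an empty or None result means the default aws/secretsmanager key is used):

aws secretsmanager describe-secret --secret-id <arn-of-the-secret> --query KmsKeyId --output text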

OK, I got it working. This is the full answer; the code below worked for me. In addition, I needed to add 'kms:GenerateDataKey' and 'kms:Decrypt' to the permissions for it to properly retrieve the secret. Finally, I needed jq to extract the value from the JSON returned by Secrets Manager:
CFNInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: /
    Roles:
      - !Ref CFNAccessRole
CFNAccessRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: CFNAccessRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole
    Path: /
CFNInstancePolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: SecretsManagerAccessPolicy
    PolicyDocument:
      Statement:
        - Effect: Allow
          Action: ['secretsmanager:GetSecretValue', 'kms:GenerateDataKey', 'kms:Decrypt']
          Resource: '*'
    Roles:
      - !Ref CFNAccessRole
# EC2 Instance creation
LinuxEC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    IamInstanceProfile: !Ref CFNInstanceProfile
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        yum update -y
        groupadd -g 110 ansible
        adduser ansible -g ansible
        mkdir -p /home/ansible/.ssh
        chmod 700 /home/ansible/.ssh
        aws secretsmanager get-secret-value \
          --secret-id <arn-of-the-secret> \
          --region ${AWS::Region} \
          --query 'SecretString' \
          --output text | jq -r ".key" > /home/ansible/.ssh/authorized_keys
        chmod 000644 /home/ansible/.ssh/authorized_keys
        chown -R ansible.ansible /home/ansible/.ssh/
        cat /home/ansible/.ssh/authorized_keys
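For reference, the jq -r ".key" filter above assumes the secret was stored as a JSON document with a top-level field named key that holds the public key. A quick local sketch of what that extraction does (the key material is a placeholder):

echo '{"key": "ssh-rsa AAAAEXAMPLE ansible@controller"}' | jq -r '.key'
# prints: ssh-rsa AAAAEXAMPLE ansible@controller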

Related

AWS::CloudFormation::Init fails to create files from S3 Bucket although the retrieval via aws "s3 cp s3" is successful

I was trying to retrieve a file from an S3 bucket during initialization of an EC2 instance.
There are no syntax failures, but something is missing: the initial_setup.sh file is not created in the root directory.
There are so many articles that all state the same (or at least that's my humble understanding as a newbie).
Parameters:
  MyVPC: { Type: String, Default: vpc-000xxx }
  myNstdKeyName: { Type: String, Default: xxx-key-test }
  myNstdBucket: { Type: String, Default: myBucket }
  myNstdEC2HostSubnet: { Type: String, Default: subnet-0xxxxx }
  myNstdImageId: { Type: 'AWS::EC2::Image::Id', Default: 'ami-0xxxx' }
Resources:
  #Allow incoming SSH and all types of outgoing traffic
  SSHSecGrp4Pub:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group to allow SSH connection in public subnet
      VpcId: !Ref MyVPC
      SecurityGroupIngress: [ { IpProtocol: tcp, FromPort: 22, ToPort: 22, CidrIp: 0.0.0.0/0 } ]
      SecurityGroupEgress: [ { IpProtocol: -1, CidrIp: 0.0.0.0/0, FromPort: 1, ToPort: 65535 } ]
  #Assume Role
  SAPEC2Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: EC2AWSAccess
      AssumeRolePolicyDocument:
        Statement: [ { Effect: Allow, Principal: { Service: ec2.amazonaws.com }, Action: [ 'sts:AssumeRole' ] } ]
  #Policy for the above role
  S3RolePolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: "S3DownloadPolicy"
      Roles: [ !Ref SAPEC2Role ]
      PolicyDocument:
        Statement:
          - Effect: Allow
            Action: [ 's3:GetObject' ]
            Resource: !Sub "arn:aws:s3:::${myNstdBucket}/*"
  #Profile for EC2 Instance
  SAPEC2Profile:
    Type: AWS::IAM::InstanceProfile
    Properties: { InstanceProfileName: SAPEC2Profile, Roles: [ !Ref SAPEC2Role ] }
  #My EC2 Instance
  EC2:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: "S3"
          roleName: { Ref: "SAPEC2Role" }
          buckets: [ !Ref myNstdBucket ]
      AWS::CloudFormation::Init:
        config:
          files:
            /root/initial_setup.sh: {
              source: !Sub "https://${myNstdBucket}.s3.eu-central-1.amazonaws.com/initial_setup.sh",
              mode: "000777",
              owner: root,
              group: root,
              authentication: "S3Access"
            }
          commands:
            myStarter:
              command: "/bin/bash /root/initial_setup.sh"
    Properties:
      SubnetId: !Ref myNstdEC2HostSubnet
      ImageId: !Ref myNstdImageId
      InstanceType: t2.micro
      KeyName: !Ref myNstdKeyName
      IamInstanceProfile: !Ref SAPEC2Profile
      SecurityGroupIds: [ !Ref SSHSecGrp4Pub ]
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          echo "my test file" > /root/testfile.txt
After the instance is initialized and I try it out with aws s3 cp s3://mybucket/initial_setup.sh,
it works, but I have to go over the file with dos2unix.
The alternative would be to put it in UserData, but this should also work with the commands section above.
Someone else here had almost the same situation, and it was mentioned that:
"For any commands to work we need to provide a shell environment in UserData, without which it cannot create any files."
So I added that too, as a last line after the security group.
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    echo "my test file" > /root/testfile.txt
So /root/testfile.txt gets created with the specified text in it,
but the required file from the bucket did not show up.
I had the misconception that just by stating the Metadata section it would get executed as well. Now I'm halfway there.
This was the missing part under UserData:
UserData:
  Fn::Base64: !Sub |
    cd /tmp
    wget https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
    gzip -df aws-cfn-bootstrap-latest.tar.gz
    tar -xvf aws-cfn-bootstrap-latest.tar
    chmod -R 755 /tmp/aws-cfn-bootstrap-1.4
    pip install --upgrade pip
    pip install --upgrade setuptools
    pip install awscli --ignore-installed six &> /dev/null
    export PYTHONPATH=/tmp/aws-cfn-bootstrap-1.4
    /tmp/aws-cfn-bootstrap-1.4/bin/cfn-init -v --stack ${AWS::StackName} --resource EC2 --region ${AWS::Region}
It is actually the last line that makes the difference. The lines before it set up the CloudFormation helper scripts, because the image used is not an AWS-provided image and so does not bring that capability out of the box.
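For comparison only: on an Amazon Linux or Amazon Linux 2 AMI the helper scripts are already installed under /opt/aws/bin, so (as a sketch, still inside Fn::Base64: !Sub |) the whole bootstrap section above could shrink to a single cfn-init call:

#!/bin/bash -xe
# cfn-init ships with Amazon Linux AMIs, so no bootstrap download is needed
/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource EC2 --region ${AWS::Region}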

Assign variable within kubernetes yaml job

I would like to run a command within a Kubernetes YAML job file.
Here is the relevant part of the YAML I use.
The idea is to calculate a percent value based on the mapped and unmapped counts. mapped and unmapped are set properly, but the percent line fails.
I think the problem comes from the single quotes in the BEGIN statement of the awk command, which I guess need to be escaped?
If mapped=8 and unmapped=7992,
then percent is (8/(8+7992))*100 = 0.1%.
command: ["/bin/sh","-c"]
args: ['
...
echo "Executing command" &&
map=${grep -c "^#" outfile.mapped.fq} &&
unmap=${grep -c "^#" outfile.unmapped.fq} &&
percent=$(awk -v CONVFMT="%.10g" -v map="$map" -v unmap="$unmap" "BEGIN { print ((map/(unmap+map))*100)}") &&
echo "finished"
']
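As a standalone sanity check of the arithmetic above, and of the quoting that ends up being needed (single quotes around the awk program, $(...) for command substitution), this can be run in any shell:

percent=$(awk -v CONVFMT="%.10g" -v map=8 -v unmap=7992 'BEGIN { print ((map/(unmap+map))*100) }')
echo "$percent"
# prints: 0.1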
Thanks to the community comments from Ed Morton & david.
For the files with the data (outfile.mapped.fq and outfile.unmapped.fq), first create a ConfigMap:
kubectl create configmap config-volume --from-file=/path_to_directory_with_files/
Then create the pod:
apiVersion: v1
kind: Pod
metadata:
  name: awk-ubu
spec:
  containers:
    - name: awk-ubuntu
      image: ubuntu
      workingDir: /test
      command: [ "/bin/sh", "-c" ]
      args:
        - echo Executing_command;
          map=$(grep -c "^#" outfile.mapped.fq);
          unmap=$(grep -c "^#" outfile.unmapped.fq);
          percent=$(awk -v CONVFMT="%.10g" -v map="$map" -v unmap="$unmap" "BEGIN { print ((map/(unmap+map))*100)}");
          echo $percent;
          echo Finished;
      volumeMounts:
        - name: special-config
          mountPath: /test
  volumes:
    - name: special-config
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: config-volume
  restartPolicy: Never
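The manifest can then be applied and watched until the container runs to completion; the file name awk-ubu.yaml below is just an example:

kubectl apply -f awk-ubu.yaml
kubectl get pod awk-ubu --watch   # wait for the STATUS column to show Completed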
Once it has completed, verify the result:
kubectl logs awk-ubu
Executing_command
53.3333
Finished

Ansible: setting user on dynamic ec2

I don't appear to be connecting to the remote host. Why not?
Command-line: ansible-playbook -i "127.0.0.1," -c local playbook.yml
This is the playbook. The role, create_ec2_instance, creates the variable ec2hosts used within the second portion of the playbook (ansible/playbook.yml):
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  roles:
    - create_ec2_instance

# Configure and install all we need
- hosts: ec2hosts
  remote_user: admin
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project
This is just a simple ec2 module invocation, and it works as desired (ansible/roles/create-ec2-instance/tasks/main.yml):
- name: Create instance
  ec2:
    region: "{{ instance_values['region'] }}"
    zone: "{{ instance_values['zone'] }}"
    keypair: "{{ instance_values['key_pair'] }}"
    group: "{{ instance_values['security_groups'] }}"
    instance_type: "{{ instance_values['instance_type'] }}"
    image: "{{ instance_values['image_id'] }}"
    count_tag: "{{ instance_values['name'] }}"
    exact_count: 1
    wait: yes
    instance_tags:
      Name: "{{ instance_values['name'] }}"
  when: ec2_instances.instances[instance_values['name']]|default("") == ""
  register: ec2_info

- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].public_dns_name }}"
    port: 22
  when: ec2_info|changed

- name: Add new instance to ec2hosts group
  add_host:
    hostname: "{{ ec2_info.instances[0].public_ip }}"
    groupname: ec2hosts
    instance_id: "{{ ec2_info.instances[0].id }}"
  when: ec2_info|changed
I've included extra methods for transparency, though these are really basic (ansible/roles/show-hosts/tasks/main.yml):
- name: List hosts
  debug: msg="groups={{groups}}"
  run_once: true
and we have (ansible/roles/prepare-target-system/tasks/main.yml):
- name: get the username running the deploy
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host

- name: Add necessary system packages
  become: yes
  become_method: sudo
  package: "name={{item}} state=latest"
  with_items:
    - software-properties-common
    - python-software-properties
    - devscripts
    - build-essential
    - libffi-dev
    - libssl-dev
    - vim
Edit: I've updated to remote_user above, and below is the error output:
TASK [prepare-target-system : debug] *******************************************
task path: <REDACTED>/ansible/roles/prepare-target-system/tasks/main.yml:5
ok: [35.166.52.247] => {
    "username_on_the_host": {
        "changed": true,
        "cmd": [
            "whoami"
        ],
        "delta": "0:00:00.009067",
        "end": "2017-01-07 08:23:42.033551",
        "rc": 0,
        "start": "2017-01-07 08:23:42.024484",
        "stderr": "",
        "stdout": "brianbruggeman",
        "stdout_lines": [
            "brianbruggeman"
        ],
        "warnings": []
    }
}
TASK [prepare-target-system : Ensure that we can update apt-repository] ********
task path: /<REDACTED>/ansible/roles/prepare-target-system/tasks/Debian.yml:2
Using module file <REDACTED>/.envs/dg2/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt.py
<35.166.52.247> ESTABLISH LOCAL CONNECTION FOR USER: brianbruggeman
<35.166.52.247> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" && echo ansible-tmp-1483799022.33-268449475843769="` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" ) && sleep 0'
<35.166.52.247> PUT /var/folders/r9/kv1j05355r34570x2f5wpxpr0000gn/T/tmpK2__II TO <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py
<35.166.52.247> EXEC /bin/sh -c 'chmod u+x <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/ <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py && sleep 0'
<35.166.52.247> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-owktjrfvqssjrqcetaxjkwowkzsqfitq; /usr/bin/python <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py; rm -rf "<REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/" > /dev/null 2>&1'"'"' && sleep 0'
failed: [35.166.52.247] (item=[u'software-properties-common', u'python-software-properties', u'devscripts', u'build-essential', u'libffi-dev', u'libssl-dev', u'vim']) => {
    "failed": true,
    "invocation": {
        "module_name": "apt"
    },
    "item": [
        "software-properties-common",
        "python-software-properties",
        "devscripts",
        "build-essential",
        "libffi-dev",
        "libssl-dev",
        "vim"
    ],
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE"
}
to retry, use: --limit #<REDACTED>/ansible/<redacted playbook>.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=6 changed=2 unreachable=0 failed=0
35.166.52.247 : ok=3 changed=1 unreachable=0 failed=1
Use become:
remote_user: ansible
become: true
become_user: root
Ansible docs: Become (Privilege Escalation)
For example, in my scripts I connect to the remote host as user 'ansible' (because ssh is disabled for root) and then become 'root'. Occasionally I connect as 'ansible' and then become the 'apache' user. So remote_user specifies the username to connect with, and become_user is the username after the connection.
P.S. Passwordless sudo for the ansible user:
- name: nopasswd sudo for ansible user
  lineinfile: "dest=/etc/sudoers state=present regexp='^{{ ansible_user }}' line='{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'"
This is known workaround, see here: Specify sudo password for Ansible
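Alternatively, instead of configuring passwordless sudo, the privilege-escalation password can simply be prompted for at run time; a sketch reusing the command line from the question:

ansible-playbook -i "127.0.0.1," -c local playbook.yml --ask-become-pass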

Ansible - Moving ssh keys between two nodes

Here is the problem I'm working on.
I have an ansible server
I have another server M
I have other servers B1, B2, B3... all known by ansible
I have a hosts file such as this
[CTRL]
M
[SLAVES]
B1
B2
B3
I want to generate an SSH key on my master (not the Ansible server itself) and deploy it to the slave servers so that the master can connect to the slaves with key-based authentication.
Here is what I tried:
- hosts: CTRL
  remote_user: root
  vars_prompt:
    - name: ssh_password
      prompt: Please enter password for ssh key copy on remote nodes
      private: yes
  tasks:
    - yum: name=sshpass state=present
      sudo: yes
    - name: generate ssh key on the controller
      shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N /dev/null
    - name: copy ssh key to the other nodes
      shell: sshpass -p '{{ ssh_password }}' ssh-copy-id root@'{{ item }}'
      with_items: groups['SLAVES']
      delegate_to: "{{ groups['CTRL'][0] }}"
The key generation works, but no matter what I try I have a problem copying the key to the slave hosts:
failed: [M -> M] => (item=B1) => {"changed": true, "cmd": "sshpass -p 'mypassword' ssh-copy-id root@'B1'", "delta": "0:00:00.101102", "end": "2016-07-18 11:08:56.985623", "item": "B1", "rc": 6, "start": "2016-07-18 11:08:56.884521", "warnings": []}
failed: [M -> M] => (item=B2) => {"changed": true, "cmd": "sshpass -p 'mypassword' ssh-copy-id root@'B2'", "delta": "0:00:00.101102", "end": "2016-07-18 11:08:56.985623", "item": "B1", "rc": 6, "start": "2016-07-18 11:08:56.884521", "warnings": []}
failed: [M -> M] => (item=B3) => {"changed": true, "cmd": "sshpass -p 'mypassword' ssh-copy-id root@'B3'", "delta": "0:00:00.101102", "end": "2016-07-18 11:08:56.985623", "item": "B1", "rc": 6, "start": "2016-07-18 11:08:56.884521", "warnings": []}
Do you know how I could correct my code, or do you maybe have a simpler way to do what I want to do?
Thank you.
This is a neater solution, without a file fetch:
---
- hosts: M
  tasks:
    - name: generate key pair
      shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N /dev/null
      args:
        creates: /root/.ssh/id_rsa
    - name: test public key
      shell: ssh-keygen -l -f /root/.ssh/id_rsa.pub
      changed_when: false
    - name: retrieve public key
      shell: cat /root/.ssh/id_rsa.pub
      register: master_public_key
      changed_when: false

- hosts: SLAVES
  tasks:
    - name: add master public key to slaves
      authorized_key:
        user: root
        key: "{{ hostvars['M'].master_public_key.stdout }}"
One possible solution (my first answer):
---
- hosts: M
  tasks:
    - name: generate key pair
      shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N /dev/null
    - name: fetch public key
      fetch:
        src: /root/.ssh/id_rsa.pub
        dest: tmp/
        flat: yes

- hosts: SLAVES
  tasks:
    - name: add master public key to slaves
      authorized_key:
        user: root
        key: "{{ lookup('file', 'tmp/id_rsa.pub') }}"

Ansible and ForwardAgent for sudo_user

Could someone tell me what I am doing wrong? I'm working with an Amazon EC2 instance and want to have the SSH agent forwarded to the user rails, but when I run the next task:
- acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"
  sudo: true
I see a failed result:
(item=/tmp/ssh-ULvzaZpq2U) => {"failed": true, "item": "/tmp/ssh-ULvzaZpq2U"}
msg: path not found or not accessible!
When I try it manually, without Ansible, it looks good:
setfacl -m rails:rwx "$SSH_AUTH_SOCK"
setfacl -m rails:x $(dirname "$SSH_AUTH_SOCK")
sudo -u rails ssh -T git@github.com  # Hi KELiON! You've successfully authenticated, but GitHub does not provide shell access.
I even tried running a new instance and a test Ansible playbook:
#!/usr/bin/env ansible-playbook
---
- hosts: all
  remote_user: ubuntu
  tasks:
    - user: name=rails
      sudo: true
    - name: Add ssh agent line to sudoers
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: SSH_AUTH_SOCK
        line: Defaults env_keep += "SSH_AUTH_SOCK"
      sudo: true
    - acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
      with_items:
        - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
        - "{{ ansible_env.SSH_AUTH_SOCK }}"
      sudo: true
    - name: Test that git ssh connection is working.
      command: ssh -T git@github.com
      sudo: true
      sudo_user: rails
ansible.cfg is:
[ssh_connection]
pipelining=True
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s
[defaults]
sudo_flags=-HE
hostfile=staging
But the same result. Any ideas?
I had the same issue and found the answer at https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
My solution varied a bit from his, because acl didn’t work for me, so I:
Changed ansible.cfg:
[defaults]
sudo_flags=-HE
[ssh_connection]
# COMMENTED OUT: ssh_args = -o ForwardAgent=yes
Added tasks/ssh_agent_hack.yml containing:
- name: "(ssh-agent hack: grant access to {{ deploy_user }})"
# SSH-agent socket is forwarded for the current user only (0700 file). Let's change it
# See: https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
# See: http://serverfault.com/questions/107187/ssh-agent-forwarding-and-sudo-to-another-user
become: false
file: group={{deploy_user}} mode=g+rwx path={{item}}
with_items:
- "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
- "{{ ansible_env.SSH_AUTH_SOCK }}"
NOTE: the become: false setting is there because I ssh in as root. If you ssh in as something else, you will need to become root to do the fix, and then below become your deploy_user (if it isn't the user you are ssh'ing in as).
And then called it from my deploy.yml playbook:
- hosts: apps
  gather_facts: True
  become: True
  become_user: "{{deploy_user}}"
  pre_tasks:
    - include: tasks/ssh_agent_hack.yml
      tags: [ 'deploy' ]
  roles:
    - { role: carlosbuenosvinos.ansistrano-deploy, tags: [ 'deploy' ] }
Side note: adding ForwardAgent yes to the host entry in ~/.ssh/config didn't affect what worked (I tried all 8 combinations: only setting sudo_flags but not ssh_args works, and it doesn't matter whether forwarding is on or off in ~/.ssh/config for OpenSSH; tested under Ubuntu trusty).
Also note: I have pipelining=True in ansible.cfg
This worked for me in ansible v2.3.0.0:
$ vi ansible.cfg
[defaults]
roles_path = ./roles
retry_files_enabled = False
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r -o ForwardAgent=yes
$ vi roles/pull-code/tasks/main.yml
- name: '(Hack: keep SSH forwarding socket)'
  lineinfile:
    dest: /etc/sudoers
    insertafter: '^#?\s*Defaults\s+env_keep\b'
    line: 'Defaults env_keep += "SSH_AUTH_SOCK"'

- name: '(Hack: grant access to the socket to {{app_user}})'
  become: false
  acl: name='{{item}}' etype=user entity='{{app_user}}' permissions="rwx" state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"

- name: Pull the code
  become: true
  become_user: '{{app_user}}'
  git:
    repo: '{{repository}}'
    dest: '{{code_dest}}'
    accept_hostkey: yes
I know this answer is late to the party, but the other answers seemed a bit overly complicated when I distilled my solution to the bare minimum. Here's an example playbook to clone a git repo that requires authentication for access via ssh:
- hosts: all
  connection: ssh
  vars:
    # forward agent so access to git via ssh works
    ansible_ssh_extra_args: '-o ForwardAgent=yes'
    utils_repo: "git@git.example.com:devops/utils.git"
    utils_dir: "/opt/utils"
  tasks:
    - name: Install Utils
      git:
        repo: "{{ utils_repo }}"
        dest: "{{ utils_dir }}"
        update: true
        accept_hostkey: yes
      become: true
      become_method: sudo
      # Need this to ensure we have the SSH_AUTH_SOCK environment variable
      become_flags: '-HE'
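To confirm that the agent socket really reaches the remote side, a quick ad-hoc check can be run from the control machine. This is only a sketch; it assumes agent forwarding is enabled for the connection (for example via ssh_args in ansible.cfg or ansible_ssh_extra_args as above):

ansible all -m shell -a 'echo $SSH_AUTH_SOCK'
# a non-empty /tmp/ssh-... path means the agent was forwarded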