pinging ec2 instance from ansible - ssh

I have an ec2 amazon linux running which I can ssh in to using:
ssh -i "keypair.pem" ec2-user#some-ip.eu-west-1.compute.amazonaws.com
but when I try to ping the server using ansible I get:
testserver | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
I'm using the following hosts file:
testserver ansible_ssh_host=some-ip.eu-west-1.compute.amazonaws.com ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/Users/me/playbook/key-pair.pem
and running the following command to run ansible:
ansible testserver -i hosts -m ping -vvvvv
The output is:
<some-ip.eu-west-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ansible.cfg set ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set: (-o)(IdentityFile="/Users/me/playbook/key-pair.pem")
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ec2-user)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: PlayContext set ssh_common_args: ()
<some-ip.eu-west-1.compute.amazonaws.com> SSH: PlayContext set ssh_extra_args: ()
<some-ip.eu-west-1.compute.amazonaws.com> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/Users/me/.ansible/cp/ansible-ssh-%h-%p-%r)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/me/playbook/key-pair.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/Users/me/.ansible/cp/ansible-ssh-%h-%p-%r ec2-52-18-106-35.eu-west-1.compute.amazonaws.com '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1462096401.65-214839021792201 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1462096401.65-214839021792201 `" )'"'"''
testserver | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
What am I doing wrong?

Try this solution, it worked fine for me:
ansible ipaddress -m ping -i inventory -u ec2-user
where inventory is the hosts file name.
inventory :
[host]
xx.xx.xx.xx
[host:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=/location of your pem file/filename.pem
I was facing this problem because I hadn't given the location of the hosts file I was referring to.

This is what my host file looks like.
[apache] is the group of hosts on which we are going to install the Apache server.
ansible_ssh_private_key_file should be the path to the downloaded .pem file used to access your instances. In my case both instances have the same credentials.
[apache]
50.112.133.205 ansible_ssh_user=ubuntu
54.202.7.87 ansible_ssh_user=ubuntu
[apache:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/home/hashimyousaf/Desktop/hashim_key_oregon.pem
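With an inventory like this, a quick connectivity check (assuming the file is saved as hosts) would be:
ansible apache -i hosts -m ping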

I was having a similar problem, and reading through Troubleshooting Connecting to Your Instance helped me. Specifically, I was pinging an Ubuntu instance from an Amazon Linux instance but forgot to change the connection username from "ec2-user" to "ubuntu"!
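For example, with an inventory entry like the one from the first question (hostname elided as in the original), the fix is just the user:
testserver ansible_ssh_host=some-ip.eu-west-1.compute.amazonaws.com ansible_ssh_user=ubuntu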

You have to change the hosts file and make sure you have the correct username:
test2 ansible_ssh_host=something.something.eu-west-1.compute.amazonaws.com ansible_ssh_user=theUser
'test2' - the name I have given to the SSH machine in my local Ansible hosts file
'ansible_ssh_host=something.something.eu-west-1.compute.amazonaws.com' - the connection to the EC2 instance
'ansible_ssh_user=theUser' - the user of the instance. (Important)
'ssh' into your instance:
[theUser@Instance:] make sure you copy 'theUser' into the hosts file and use it as the 'ansible_ssh_user' variable,
then try to ping it.
If this does not work, check that ICMP traffic is allowed in your AWS security groups (note that Ansible's ping module actually connects over SSH rather than ICMP, so the SSH port must be reachable as well).

Worked for me ->
vi inventory
[hosts]
serveripaddress ansible_ssh_user=ec2-user
[hosts:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=/home/someuser/ansible1.pem
chmod 400 ansible1.pem
ansible -i inventory hosts -m ping -u ec2-user


Create a config file for ssh command

I have a ssh command as below:
ssh -o ProxyCommand="ssh ubuntu@ip_addr -W %h:%p" ubuntu@ip_addr2 -L port:ip_addr3:port
I want to create a config file for this command, but I don't know what the option for -L is. Here is my config file so far:
Host cassandra-khatkesh
User ubuntu
Hostname ip_addr2
ProxyCommand ssh ubuntu@ip_addr -W %h:%p
Does anyone know how I can add -L to the config file?
-L corresponds to the LocalForward keyword.
Host cassandra-khatkesh
User ubuntu
Hostname ip_addr2
ProxyCommand ssh ubuntu@ip_addr -W %h:%p
LocalForward port ip_addr3:port
Note that the local and remote endpoints are specified separately, not as a single :-delimited string.
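With that entry in place, the whole one-liner collapses to the alias, since ssh reads ~/.ssh/config by default:
ssh cassandra-khatkesh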

Need to use ansible to connect to a host via a jump host using a key in the jump host

I have a machine that is accessible through a jump host.
What I need is this.
A is my local machine
B is the jump host
C is the destination machine
I need to connect to C using ansible via B but use a private key in B.
My current config is the inventory file shown below:
[deployment_host:vars]
ansible_port = 22 # remote host port
ansible_user = <user_to_the_Target_machine> # remote user host
private_key_file = <key file to bastion in my laptop> # laptop key to login to bastion host
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o ProxyCommand="ssh -o \'ForwardAgent yes\' <user>@<bastion> -p 2222 \'ssh-add /home/<user>/.ssh/id_rsa && nc %h 22\'"'
[deployment_host]
10.200.120.218 ansible_ssh_port=22 ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
How can I do that?
I have not made any changes to my SSH config, and when I run Ansible like below:
ansible -vvv all -i inventory.ini -m shell -a 'hostname'
I get this error:
ansible 2.9.0
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 11 2021, 08:20:37) [GCC 10.3.0]
No config file found; using defaults
host_list declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
script declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
auto declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
Parsed /root/temp_ansible/inventory.ini inventory source with ini plugin
META: ran handlers
<10.200.120.218> ESTABLISH SSH CONNECTION FOR USER: <user> # remote user host
<10.200.120.218> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="<user> # remote user host"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o 'ProxyCommand=ssh -o '"'"'ForwardAgent yes'"'"' <user>@35.223.214.105 -p 2222 '"'"'ssh-add /home/<user>/.ssh/id_rsa && nc %h 22'"'"'' -o StrictHostKeyChecking=no -o ControlPath=/root/.ansible/cp/ec0480070b 10.200.120.218 '/bin/sh -c '"'"'echo '"'"'"'"'"'"'"'"'~<user> # remote user host'"'"'"'"'"'"'"'"' && sleep 0'"'"''
<10.200.120.218> (255, b'', b'kex_exchange_identification: Connection closed by remote host\r\nConnection closed by UNKNOWN port 65535\r\n')
10.200.120.218 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: kex_exchange_identification: Connection closed by remote host\r\nConnection closed by UNKNOWN port 65535",
"unreachable": true
}
I figured out the solution.
For me this was adding entries for both servers A and B to ~/.ssh/config:
Host bastion1
HostName <IP/FQDN>
StrictHostKeyChecking no
User <USER>
IdentityFile <File to log into the first bastion server> # should be present in your local machine.
Host bastion2
HostName <IP/FQDN>
StrictHostKeyChecking no
User <USER>
IdentityFile <File to log into the second bastion server> # should be present in your local machine.
ProxyJump bastion1
Then I edited the inventory file as shown below:
[deployment_host]
VM_IP ansible_user=<vm_user> ansible_ssh_extra_args='-o StrictHostKeyChecking=no' ansible_ssh_private_key_file=<file to login to VM>
[deployment_host:vars]
ansible_ssh_common_args='-J bastion1,bastion2'
Then any Ansible command with this inventory should work without issue:
❯ ansible all -i inventory.ini -m shell -a "hostname"
<VM_IP> | CHANGED | rc=0 >>
development-host-1
NOTE: All these SSH keys should be on your local system. You can fetch the bastion2 private key from bastion1 with Ansible, and likewise the VM key from bastion2, using an ad-hoc fetch.
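As a rough sketch of that ad-hoc fetch (assuming bastion1 is also listed in your inventory; the src and dest paths here are placeholders):
ansible bastion1 -i inventory.ini -m fetch -a "src=/home/<USER>/.ssh/id_rsa dest=./keys/ flat=yes"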

How do I configure Ansible to jump through two bastion hosts?

I want to write an Ansible playbook (using Ansible 2.7.5) that will jump through two hosts before reaching the intended server, to do things such as install Docker and Python.
I'm able to get Ansible to jump through one host into server1 by adding this to my hosts file:
[server1:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion"'
I have also updated my ~/.ssh/config file:
Host bastion
Hostname YY.YY.YY.YY
User user
IdentityFile ~/.ssh/bastion_private_key
Host server1
Hostname XX.XX.XX.XX
User user
IdentityFile ~/.ssh/private_key
ProxyJump bastion
However, I now also need to do this through two hosts. I've added the following to ~/.ssh/config:
Host server2
Hostname ZZ.ZZ.ZZ.ZZ
User user
IdentityFile ~/.ssh/private_key_3
ProxyJump server1
This allows me to type ssh server2 and open a shell inside server2. So that seems to be working.
But, I do not know how to change the hosts file to jump through both of these hosts. I've tried:
ansible_ssh_common_args='-o ProxyCommand="ssh -J bastion,server1"'
and
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion ssh -W %h:%p -q server1"'
Neither works, and both result in a timeout. What should I do to make Ansible jump through bastion and then server1 so that it can reach server2?
This is the result when I run -vvvv (with some path and names obfuscated):
ansible-playbook 2.7.5
config file = /path/to/dir/ansible.cfg
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
Using /path/to/dir/ansible.cfg as config file
setting up inventory plugins
/path/to/dir/hosts did not meet host_list requirements, check plugin documentation if this is unexpected
/path/to/dir/hosts did not meet script requirements, check plugin documentation if this is unexpected
/path/to/dir/hosts inventory source with ini plugin
[WARNING]: Found both group and host with same name: server2
statically imported: /path/to/dir/tasks/ansible.yml
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/default.pyc
PLAYBOOK: enable-ansible.yml *********************************************************************************************************************************
1 plays in enable-ansible.yml
PLAY [server2] ****************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************
task path: /path/to/dir/enable-ansible.yml:2
<server2> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<server2> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o 'ProxyCommand=ssh -W %h:%p -q bastion ssh -W %h:%p -q server1' -o ControlPath=/home/user/.ansible/cp/460e3f86d3 server2 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /tmp/ansible-tmp-1546192323.33-48994637286535 `" && echo ansible-tmp-1546192323.33-48994637286535="` echo /tmp/ansible-tmp-1546192323.33-48994637286535 `" ) && sleep 0'"'"''
<server2> (255, '', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.1, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /home/user/.ssh/config\r\ndebug1: /home/user/.ssh/config line 70: Applying options for server2\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/user/.ansible/cp/460e3f86d3" does not exist\r\ndebug1: Executing proxy command: exec ssh -W SERVER2_IP_ADDRESS:22 -q bastion ssh -W SERVER2_IP_ADDRESS:22 -q server1\r\ndebug3: timeout: 10000 ms remain after connect\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/user/.ssh/bastion type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/user/.ssh/bastion-cert type -1\r\ndebug1: Local version string SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.1\r\ndebug1: permanently_drop_suid: 1000\r\nConnection timed out during banner exchange\r\n')
fatal: [server2]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.1, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /home/user/.ssh/config\r\ndebug1: /home/user/.ssh/config line 70: Applying options for server2\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/user/.ansible/cp/460e3f86d3\" does not exist\r\ndebug1: Executing proxy command: exec ssh -W SERVER2_IP_ADDRESS:22 -q bastion ssh -W SERVER2_IP_ADDRESS:22 -q server1\r\ndebug3: timeout: 10000 ms remain after connect\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/user/.ssh/bastion type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/user/.ssh/bastion-cert type -1\r\ndebug1: Local version string SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.1\r\ndebug1: permanently_drop_suid: 1000\r\nConnection timed out during banner exchange\r\n",
"unreachable": true
}
to retry, use: --limit @/home/user/Documents/repos/cloud-devops/enable-ansible.retry
PLAY RECAP ***************************************************************************************************************************************************
server2 : ok=0 changed=0 unreachable=1 failed=0
For some added context, this playbook is logging into the remote server as a non-root account and creating the ansible user in it. And to reiterate, this playbook works when I am only jumping through one host.
Just use
ansible_ssh_common_args='-J bastion,server1'
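For example, as a group variable in the hosts file. This reuses the Host aliases already defined in ~/.ssh/config, and -J requires OpenSSH 7.3 or newer:
[server2:vars]
ansible_ssh_common_args='-J bastion,server1'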

Ansible cannot ssh into VM created by Vagrant

I have a very simple Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.define "one" do |one|
    one.vm.box = "centos/7"
  end
  config.ssh.insert_key = false
end
(Note: it was creating the VM but exiting with a failure until I installed the vbguest plugin.)
After the VM was created I wanted to execute a simple Ansible job. My inventory file (Vagrant forwarded port 22 on the guest to 2222 on the host):
[one]
127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=C:/Users/Lukasz/.vagrant.d/insecure_private_key
And here's the Docker command (from Windows cmd):
docker run --rm -v /c/Users/Lukasz/ansible/ansible:/home:rw -w /home williamyeh/ansible:ubuntu14.04 ansible-playbook -i inventory/testvms site.yml --check -vvvv
Finally, here's the output of the command:
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o 'IdentityFile="C:/Users/Lukasz/.vagrant.d/insecure_private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o PreferredAuthentications=privatekey -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1488381378.63-13786642598588 `" && echo ansible-tmp-1488381378.63-13786642598588="` echo ~/.ansible/tmp/ansible-tmp-1488381378.63-13786642598588 `" ) && sleep 0'"'"''
fatal: [127.0.0.1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g 1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/root/.ansible/cp/ansible-ssh-127.0.0.1-2222-vagrant\" does not exist\r\ndebug2: resolving \"127.0.0.1\" port 2222\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 127.0.0.1 [127.0.0.1] port 2222.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 127.0.0.1 port 2222: Connection refused\r\nssh: connect to host 127.0.0.1 port 2222: Connection refused\r\n",
"unreachable": true
}
I can ssh to this VM manually with no problem - specifying user, port and private key.
Am I doing something wrong?
EDIT 1:
I have mounted the folder with the private key (-v /c/Users/Lukasz/.vagrant.d/:/home/.ssh/) and refer to it from the inventory file: ansible_ssh_private_key_file=/home/.ssh/insecure_private_key. I also assigned a static IP in the Vagrantfile and used it in the docker command. The error now is "Connection timed out".
There's a misunderstanding of how loopback addresses work, and also an underestimation of how complex a system you are actually running.
In the scenario described in your question, you are running four machines with four separate network stacks:
a physical Windows machine
a CentOS VM (supposedly running under VirtualBox, orchestrated by Vagrant)
a Docker Linux machine which is running in the background when you install Docker for Windows (judging from your sentence "the docker command (from windows cmd)")
an Ansible container running under the Docker's Linux machine
Each of these machines has its own loopback address (127.0.0.1) which is not accessible from any other machine.
You have one port mapping:
Vagrant set up a mapping for the CentOS virtual machine under the control of VirtualBox so that the VM's port 22 is accessible on the Windows machine's loopback address (127.0.0.1), port 2222.
And thus you can connect with an SSH client from Windows to the CentOS machine.
However, Docker for Windows runs a separate Linux machine and configures the docker command so that when you execute docker from Windows command-line prompt, you actually work directly on this Linux machine (as you run containers, you don't actually need to access this Docker host directly, so you can be unaware of its existence).
As if that were not enough, each container you run will have its own loopback 127.0.0.1 address.
As a result there is no way an Ansible container would reach the loopback address of your physical Windows machine.
Probably the easiest solution would be to configure the CentOS box to run on a public network, with a static IP address (see Vagrant: Public Networks) by adding for example the following line to the Vagrantfile:
config.vm.network "public_network", ip: "192.168.0.17"
Then you should use this address in the inventory file and follow Konstantin's advice to make the private key available to the container:
[one]
192.168.0.17 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/path/to/insecure_private_key/mapped/inside/container
It seems that you specify a Windows path for ansible_ssh_private_key_file in your inventory, but use this inventory from inside the container.
You should map C:/Users/Lukasz/.vagrant.d/ into your container and set ansible_ssh_private_key_file from container's perspective.
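A minimal sketch of that mapping (the container-side path /keys is an arbitrary choice, not something Docker or Ansible requires):
docker run --rm -v /c/Users/Lukasz/.vagrant.d:/keys:ro -v /c/Users/Lukasz/ansible/ansible:/home:rw -w /home williamyeh/ansible:ubuntu14.04 ansible-playbook -i inventory/testvms site.yml
with the inventory line pointing at the container-side path:
192.168.0.17 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/keys/insecure_private_key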

Ansible through Bastion server SSH Error

Following this guide (and others): running-ansible-through-ssh-bastion-host.
I have my ssh.cfg file set up to allow connecting to a host behind multiple bastions.
proxy -> util -> monitor -> more
I can connect to the util server:
[self@home]$ ssh -F ssh.cfg util
...
[self@util]$
and the monitoring server:
[self@home]$ ssh -F ssh.cfg monitor
...
[self@monitor]$
ssh.cfg:
Host *
ServerAliveInterval 60
ControlMaster auto
ControlPath ~/.ssh/mux-%r@%h:%p
ControlPersist 15m
Host proxy
HostName proxy01.com
ForwardAgent yes
Host util
HostName util01.priv
ProxyCommand ssh -W %h:%p proxy
Host monitor
HostName mon01.priv
ProxyCommand ssh -W %h:%p util
ansible inventory file:
[bastion]
proxy
[utility]
util
monitor
ansible.cfg:
[ssh_connection]
ssh_args = -F ./ssh.cfg -o ControlMaster=auto -o ControlPersist=15m
control_path = ~/.ssh/ansible-%%r@%%h:%%p
When I execute any ansible commands, they appear to hit the proxy host without any problem, but fail to connect to the util host and the monitor host.
> ansible all -a "/bin/echo hello"
util | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh",
"unreachable": true
}
proxy | SUCCESS | rc=0 >>
hello
monitor | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
ADDITIONAL:
After some more hacking around, I have keyed the monitor host, and found that Ansible can connect to the proxy and the monitor, but fails on the util host... which is extremely odd because it has to pass through the util host to reach the monitor.
util | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh",
"unreachable": true
}
proxy | SUCCESS | rc=0 >>
hello
monitor | SUCCESS | rc=0 >>
hello
After trying different guides, this solution worked for me to run Ansible against a server that doesn't have direct SSH access, only access via a proxy/bastion.
Here is my ~/.ssh/config file:
Host *
ServerAliveInterval 60
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
ForwardAgent yes
####### Access to the Private Subnet Server through Proxy/bastion ########
Host proxy-server
HostName x.x.x.x
ForwardAgent yes
Host private-server
HostName y.y.y.y
ProxyCommand ssh -q proxy-server nc -q0 %h %p
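To sanity-check this before involving Ansible, plain ssh should land you on the private server in one hop, and an ad-hoc ping (assuming private-server is listed in your inventory) should then succeed:
ssh private-server
ansible private-server -i inventory -m ping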
Hope that helps you.
For some unknown reason Ansible was ignoring multiple hosts; the following config helped me:
Host 10.*
StrictHostKeyChecking no
GSSAPIAuthentication no
ProxyCommand ssh -W %h:%p -l ubuntu -i ~/.ssh/key.pem 11.22.33.44
ControlMaster auto
ControlPersist 15m
User ubuntu
IdentityFile ~/.ssh/key.pem
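Since this lives in ~/.ssh/config, Ansible's default ssh connection plugin picks it up automatically; a quick check (inventory file name assumed) would be:
ansible all -i inventory -m ping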