Following this guide (and others) on running Ansible through an SSH bastion host, I have my ssh.cfg file set up to allow connecting to a host behind multiple bastions:
proxy -> util -> monitor -> more
I can connect to the util server:
[self@home]$ ssh -F ssh.cfg util
...
[self@util]$
and the monitoring server:
[self@home]$ ssh -F ssh.cfg monitor
...
[self@monitor]$
ssh.cfg:
Host *
ServerAliveInterval 60
ControlMaster auto
ControlPath ~/.ssh/mux-%r@%h:%p
ControlPersist 15m
Host proxy
HostName proxy01.com
ForwardAgent yes
Host util
HostName util01.priv
ProxyCommand ssh -W %h:%p proxy
Host monitor
HostName mon01.priv
ProxyCommand ssh -W %h:%p util
ansible inventory file:
[bastion]
proxy
[utility]
util
monitor
ansible.cfg:
[ssh_connection]
ssh_args = -F ./ssh.cfg -o ControlMaster=auto -o ControlPersist=15m
control_path = ~/.ssh/ansible-%%r@%%h:%%p
When I execute any ansible commands, they appear to hit the proxy host without any problem, but fail to connect to the util host and the monitor host.
> ansible all -a "/bin/echo hello"
util | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh",
"unreachable": true
}
proxy | SUCCESS | rc=0 >>
hello
monitor | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
ADDITIONAL:
After some more hacking around, I have keyed the monitor host, and found that Ansible can connect to the proxy and the monitor, but fails on the util host... which is extremely odd because it has to pass through the util host to reach the monitor.
util | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh",
"unreachable": true
}
proxy | SUCCESS | rc=0 >>
hello
monitor | SUCCESS | rc=0 >>
hello
After trying different guides, this solution worked for me to run Ansible against a server that has no direct SSH access and is only reachable via a proxy/bastion.
Here is my ~/.ssh/config file:
Host *
ServerAliveInterval 60
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
ForwardAgent yes
####### Access to the Private Subnet Server through Proxy/bastion ########
Host proxy-server
HostName x.x.x.x
ForwardAgent yes
Host private-server
HostName y.y.y.y
ProxyCommand ssh -q proxy-server nc -q0 %h %p
Hope that helps you.
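On OpenSSH 5.4 or newer, the same hop can be written with `-W` instead of netcat, which removes the need for `nc` to be installed on the bastion. A sketch using the same host names:

```
Host private-server
    HostName y.y.y.y
    ProxyCommand ssh -q -W %h:%p proxy-server
```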
For some unknown reason Ansible was ignoring multiple hosts; the following config helped me:
Host 10.*
StrictHostKeyChecking no
GSSAPIAuthentication no
ProxyCommand ssh -W %h:%p -l ubuntu -i ~/.ssh/key.pem 11.22.33.44
ControlMaster auto
ControlPersist 15m
User ubuntu
IdentityFile ~/.ssh/key.pem
Related
I have a machine that is accessible through a jump host.
What I need is this.
A is my local machine
B is the jump host
C is the destination machine
I need to connect to C using ansible via B but use a private key in B.
The current config in the inventory file is shown below:
[deployment_host:vars]
ansible_port = 22 # remote host port
ansible_user = <user_to_the_Target_machine> # remote user host
private_key_file = <key file to bastion in my laptop> # laptop key to login to bastion host
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o ProxyCommand="ssh -o \'ForwardAgent yes\' <user>@<bastion> -p 2222 \'ssh-add /home/<user>/.ssh/id_rsa && nc %h 22\'"'
[deployment_host]
10.200.120.218 ansible_ssh_port=22 ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
How can I do that?
I have not made any changes to my ssh config, and when I run ansible like below
ansible -vvv all -i inventory.ini -m shell -a 'hostname'
I get this error
ansible 2.9.0
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 11 2021, 08:20:37) [GCC 10.3.0]
No config file found; using defaults
host_list declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
script declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
auto declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
Parsed /root/temp_ansible/inventory.ini inventory source with ini plugin
META: ran handlers
<10.200.120.218> ESTABLISH SSH CONNECTION FOR USER: <user> # remote user host
<10.200.120.218> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="<user> # remote user host"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o 'ProxyCommand=ssh -o '"'"'ForwardAgent yes'"'"' <user>@35.223.214.105 -p 2222 '"'"'ssh-add /home/<user>/.ssh/id_rsa && nc %h 22'"'"'' -o StrictHostKeyChecking=no -o ControlPath=/root/.ansible/cp/ec0480070b 10.200.120.218 '/bin/sh -c '"'"'echo '"'"'"'"'"'"'"'"'~<user> # remote user host'"'"'"'"'"'"'"'"' && sleep 0'"'"''
<10.200.120.218> (255, b'', b'kex_exchange_identification: Connection closed by remote host\r\nConnection closed by UNKNOWN port 65535\r\n')
10.200.120.218 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: kex_exchange_identification: Connection closed by remote host\r\nConnection closed by UNKNOWN port 65535",
"unreachable": true
}
I figured out the solution.
For me this was adding entries for both bastion servers into ~/.ssh/config:
Host bastion1
HostName <IP/FQDN>
StrictHostKeyChecking no
User <USER>
IdentityFile <File to log into the first bastion server> # should be present in your local machine.
Host bastion2
HostName <IP/FQDN>
StrictHostKeyChecking no
User <USER>
IdentityFile <File to log into the second bastion server> # should be present in your local machine.
ProxyJump bastion1
Then editing the inventory file like shown below.
[deployment_host]
VM_IP ansible_user=<vm_user> ansible_ssh_extra_args='-o StrictHostKeyChecking=no' ansible_ssh_private_key_file=<file to login to VM>
[deployment_host:vars]
ansible_ssh_common_args='-J bastion1,bastion2'
Then any ansible command with this inventory should work without issue
❯ ansible all -i inventory.ini -m shell -a "hostname"
<VM_IP> | CHANGED | rc=0 >>
development-host-1
NOTE: All these SSH keys should be on your local system. You can get
the bastion2 private key from bastion1 using Ansible, and likewise the
VM key from bastion2, using an Ansible ad-hoc fetch.
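The ad-hoc fetch mentioned in the note might look like the following; the inventory name and key paths here are illustrative, not from the original setup:

```shell
# Pull bastion2's private key down to the local machine via bastion1
# (the fetch module copies a file *from* the remote host to the controller).
ansible bastion1 -i inventory.ini -m fetch \
  -a "src=/home/user/.ssh/bastion2_key dest=~/.ssh/bastion2_key flat=yes"
```

`flat=yes` stores the file directly at `dest` instead of under a hostname-based directory tree.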
How can you write the following setup in an ssh config?
### The Bastion Host
Host bastion-host-nickname
HostName bastion-hostname
### The Remote Host
Host remote-host-nickname
HostName remote-hostname
ProxyJump bastion-host-nickname
### The Remote Host VM
Host remote-host-vm-nickname
Hostname remote-vm-hostname
????
I have a bastion server through which my remote host can be reached via SSH. This connection works as expected. On my remote host there are a few virtual machines (the remote host VMs) that can also be reached via SSH.
AllowTcpForwarding is disabled in the sshd_config of the remote host. Therefore neither an SSH tunnel nor a ProxyCommand can be used; with both you get the error message "... administratively prohibited". The sshd_config should stay that way.
My preferred approach is that I connect to the remote-host and execute the following command:
[user@remote-host]$ ssh -t -i keyfile user@remote-vm-hostname "whoami"
How can I describe this ssh command in my ssh_config?
Especially so that this ssh command can only be executed on my remote host.
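One possible sketch, assuming OpenSSH 7.6+ on your machine (for RemoteCommand) and that the keyfile lives on the remote host: land on the remote host as usual and let it run the inner ssh itself, so no TCP forwarding is ever requested:

```
Host remote-host-vm-nickname
    HostName remote-hostname              # we actually connect to the remote host
    ProxyJump bastion-host-nickname
    RequestTTY yes
    RemoteCommand ssh -t -i keyfile user@remote-vm-hostname
```

`ssh remote-host-vm-nickname` would then drop you straight into the VM; whether this satisfies "only executable on the remote host" depends on the keyfile staying on the remote host.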
I have written a ssh config file that specifies a typical jump server setting:
Host host1
HostName 11.11.11.11
User useroo
IdentityFile some/key/file
Host host2
HostName 192.11.11.10
User useroo
IdentityFile some/other/key
ProxyCommand ssh -W %h:%p host1
I can successfully connect with ssh host2 when I save this as ~/.ssh/config. However if I save the config somewhere else as xy_conf, calling ssh -F xy_conf host2 results in an error saying
ssh: Could not resolve hostname host1: Name or service not known
ssh_exchange_identification: Connection closed by remote host
Is this the expected behavior? How else can I set this config temporarily? I don't want to set it as ~/.ssh/config.
OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8, OpenSSL 1.0.1f 6 Jan 2014
Using a different location for the ssh config affects only the first call of ssh, but not the second one (from the ProxyCommand). You need to pass the same `-F` argument to the second ssh too:
ProxyCommand ssh -F xy_conf -W %h:%p host1
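For completeness, a sketch of the whole xy_conf with that fix applied (note that the relative path in `-F xy_conf` is resolved from the directory you run ssh in):

```
Host host1
    HostName 11.11.11.11
    User useroo
    IdentityFile some/key/file

Host host2
    HostName 192.11.11.10
    User useroo
    IdentityFile some/other/key
    ProxyCommand ssh -F xy_conf -W %h:%p host1
```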
I have an ec2 amazon linux running which I can ssh in to using:
ssh -i "keypair.pem" ec2-user@some-ip.eu-west-1.compute.amazonaws.com
but when I try to ping the server using ansible I get:
testserver | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
I'm using the following hosts file:
testserver ansible_ssh_host=some-ip.eu-west-1.compute.amazonaws.com ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/Users/me/playbook/key-pair.pem
and running the following command to run ansible:
ansible testserver -i hosts -m ping -vvvvv
The output is:
<some-ip.eu-west-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ansible.cfg set ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set: (-o)(IdentityFile="/Users/me/playbook/key-pair.pem")
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ec2-user)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: PlayContext set ssh_common_args: ()
<some-ip.eu-west-1.compute.amazonaws.com> SSH: PlayContext set ssh_extra_args: ()
<some-ip.eu-west-1.compute.amazonaws.com> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/Users/me/.ansible/cp/ansible-ssh-%h-%p-%r)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/me/playbook/key-pair.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/Users/me/.ansible/cp/ansible-ssh-%h-%p-%r ec2-52-18-106-35.eu-west-1.compute.amazonaws.com '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1462096401.65-214839021792201 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1462096401.65-214839021792201 `" )'"'"''
testserver | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
What am I doing wrong?
Try this solution; it worked fine for me:
ansible ipaddress -m ping -i inventory -u ec2-user
where inventory is the host file name.
inventory:
[host]
xx.xx.xx.xx
[host:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=/location of your pem file/filename.pem
I was facing this problem because I hadn't given the location of the hosts file I was referring to.
This is what my host file looks like.
[apache] is the group of hosts on which we are going to install the Apache server.
ansible_ssh_private_key_file should be the path of the downloaded .pem file used to access your instances. In my case both instances have the same credentials.
[apache]
50.112.133.205 ansible_ssh_user=ubuntu
54.202.7.87 ansible_ssh_user=ubuntu
[apache:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/home/hashimyousaf/Desktop/hashim_key_oregon.pem
I was having a similar problem, and reading through Troubleshooting Connecting to Your Instance helped me. Specifically, I was pinging an Ubuntu instance from an Amazon Linux instance but forgot to change the connection username from "ec2-user" to "ubuntu"!
You have to change the hosts file and make sure you have the correct username
test2 ansible_ssh_host=something.something.eu-west-1.compute.amazonaws.com ansible_ssh_user=theUser
'test2' is the name I have given to the SSH machine in my local ansible hosts file.
'ansible_ssh_host=something.something.eu-west-1.compute.amazonaws.com' is the connection to the EC2 instance.
'ansible_ssh_user=theUser' is the user of the instance. (Important)
ssh into your instance:
[theUser@Instance:~]$
Make sure you copy 'theUser' into the hosts file as the 'ansible_ssh_user' variable,
then try to ping it.
If this does not work, check whether ICMP traffic is enabled in your AWS security group.
Worked for me ->
vi inventory
[hosts]
serveripaddress ansible_ssh_user=ec2-user
[hosts:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=/home/someuser/ansible1.pem
chmod 400 ansible1.pem
ansible -i inventory hosts -m ping -u ec2-user
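The chmod 400 step is not optional: ssh refuses a private key that other users can read, and through Ansible that surfaces only as a generic connection failure. A quick local check (the filename is just a stand-in for the real downloaded key):

```shell
# ssh rejects private keys readable by group/others, which Ansible
# reports only as "Failed to connect to the host via ssh".
touch ansible1.pem            # stand-in for the real .pem file
chmod 400 ansible1.pem        # owner read-only
stat -c '%a' ansible1.pem     # prints 400
```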
I want to send files from machineA which has opened a reverse tunnel with a server. The reverse tunnel connects port 22 on machineA with port 2222 on the server:
autossh -M 0 -q -f -N -o "ServerAliveInterval 120" -o "ServerAliveCountMax 1" -R 2222:localhost:22 userserver@server.com
If I do:
scp file userserver@server.com:.
then SCP sends the file with a new login over SSH, in my case using public/private key.
But if I do:
scp -P 2222 file userserver@localhost:.
I get a "connection refused" message. The same happens if I replace 2222 above with the port found with:
netstat | grep ssh | grep ESTABLISHED
How can I send files without opening a new SSH connection (without a new handshake)?
You can use the ControlMaster option in your ssh_config (~/.ssh/config), which will create a persistent connection for further ssh/scp/sftp sessions. It is as easy as pie:
Host yourhost
Hostname fqdn.tld
Port port_number # if required, but probably yes, if you do port-forwarding
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h
ControlPersist 5m
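With that config in place, the first connection opens the master socket and later scp calls multiplex over it without a new handshake; for example:

```shell
ssh yourhost             # first call authenticates and creates the master socket
scp file yourhost:.      # reuses the master connection, no new key exchange
ssh -O check yourhost    # asks the running master whether it is still alive
```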