I have a very simple Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.define "one" do |one|
    one.vm.box = "centos/7"
  end
  config.ssh.insert_key = false
end
(Note: it was creating the VM but exiting with a failure until I installed the vbguest plugin.)
After the VM was created I wanted to execute a simple Ansible job. My inventory file (Vagrant forwards port 22 on the guest to port 2222 on the host):
[one]
127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=C:/Users/Lukasz/.vagrant.d/insecure_private_key
And here's the Docker command (from Windows cmd):
docker run --rm -v /c/Users/Lukasz/ansible/ansible:/home:rw -w /home williamyeh/ansible:ubuntu14.04 ansible-playbook -i inventory/testvms site.yml --check -vvvv
Finally, here's the output of the command:
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o 'IdentityFile="C:/Users/Lukasz/.vagrant.d/insecure_private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o PreferredAuthentications=privatekey -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1488381378.63-13786642598588 `" && echo ansible-tmp-1488381378.63-13786642598588="` echo ~/.ansible/tmp/ansible-tmp-1488381378.63-13786642598588 `" ) && sleep 0'"'"''
fatal: [127.0.0.1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g 1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/root/.ansible/cp/ansible-ssh-127.0.0.1-2222-vagrant\" does not exist\r\ndebug2: resolving \"127.0.0.1\" port 2222\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 127.0.0.1 [127.0.0.1] port 2222.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 127.0.0.1 port 2222: Connection refused\r\nssh: connect to host 127.0.0.1 port 2222: Connection refused\r\n",
"unreachable": true
}
I can SSH to this VM manually with no problem, specifying the user, port and private key.
Am I doing something wrong?
EDIT 1:
I have mounted the folder with the private key (-v /c/Users/Lukasz/.vagrant.d/:/home/.ssh/) and refer to it from the inventory file: ansible_ssh_private_key_file=/home/.ssh/insecure_private_key. I also assigned a static IP in the Vagrantfile and used it in the Docker command. The error now is "Connection timed out".
There's a misunderstanding of how loopback addresses work, and also an underestimation of how complex a system you are actually running.
In the scenario described in your question, you are running four machines with four separate network stacks:
the physical Windows machine
a CentOS VM (supposedly running under VirtualBox, orchestrated by Vagrant)
a Docker Linux machine, which runs in the background when you install Docker for Windows (judging from your phrase "the Docker command (from Windows cmd)")
an Ansible container running under Docker's Linux machine
Each of these machines has its own loopback address (127.0.0.1) which is not accessible from any other machine.
You have one port mapping:
Vagrant set up a mapping for the CentOS virtual machine under the control of VirtualBox, so that the VM's port 22 is accessible on the Windows machine's loopback address (127.0.0.1), port 2222.
And thus you can connect with an SSH client from Windows to the CentOS machine.
However, Docker for Windows runs a separate Linux machine and configures the docker command so that when you execute docker from the Windows command prompt, you actually work directly on this Linux machine (as you only run containers, you don't need to access this Docker host directly, so you can be unaware of its existence).
As if that were not enough, each container you run has its own loopback address (127.0.0.1).
As a result, there is no way the Ansible container can reach the loopback address of your physical Windows machine.
Probably the easiest solution would be to configure the CentOS box to run on a public network with a static IP address (see Vagrant: Public Networks), for example by adding the following line to the Vagrantfile:
config.vm.network "public_network", ip: "192.168.0.17"
Then you should use this address in the inventory file and follow Konstantin's advice to make the private key available to the container:
[one]
192.168.0.17 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/path/to/insecure_private_key/mapped/inside/container
It seems that you specify a Windows path for ansible_ssh_private_key_file in your inventory, but use this inventory from inside the container.
You should map C:/Users/Lukasz/.vagrant.d/ into your container and set ansible_ssh_private_key_file from the container's perspective, as sketched below.
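Putting both pieces together, a minimal sketch might look like the following (here /keys is just an illustrative mount point, and 192.168.0.17 is the example address from above; adjust both to your setup):
docker run --rm -v /c/Users/Lukasz/ansible/ansible:/home:rw -v /c/Users/Lukasz/.vagrant.d:/keys:ro -w /home williamyeh/ansible:ubuntu14.04 ansible-playbook -i inventory/testvms site.yml --check
with the inventory pointing at the key by its in-container path:
[one]
192.168.0.17 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/keys/insecure_private_key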
Related
I have a machine that is accessible through a jump host.
What I need is this:
A is my local machine
B is the jump host
C is the destination machine
I need to connect to C using Ansible via B, but using a private key that is on B.
My current config is the inventory file shown below:
[deployment_host:vars]
ansible_port = 22 # remote host port
ansible_user = <user_to_the_Target_machine> # remote user host
private_key_file = <key file to bastion in my laptop> # laptop key to login to bastion host
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o ProxyCommand="ssh -o \'ForwardAgent yes\' <user>@<bastion> -p 2222 \'ssh-add /home/<user>/.ssh/id_rsa && nc %h 22\'"'
[deployment_host]
10.200.120.218 ansible_ssh_port=22 ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
How can I do that?
I have not made any changes to my SSH config, and when I run Ansible like below
ansible -vvv all -i inventory.ini -m shell -a 'hostname'
I get this error
ansible 2.9.0
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 11 2021, 08:20:37) [GCC 10.3.0]
No config file found; using defaults
host_list declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
script declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
auto declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
Parsed /root/temp_ansible/inventory.ini inventory source with ini plugin
META: ran handlers
<10.200.120.218> ESTABLISH SSH CONNECTION FOR USER: <user> # remote user host
<10.200.120.218> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="<user> # remote user host"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o 'ProxyCommand=ssh -o '"'"'ForwardAgent yes'"'"' <user>@35.223.214.105 -p 2222 '"'"'ssh-add /home/<user>/.ssh/id_rsa && nc %h 22'"'"'' -o StrictHostKeyChecking=no -o ControlPath=/root/.ansible/cp/ec0480070b 10.200.120.218 '/bin/sh -c '"'"'echo '"'"'"'"'"'"'"'"'~<user> # remote user host'"'"'"'"'"'"'"'"' && sleep 0'"'"''
<10.200.120.218> (255, b'', b'kex_exchange_identification: Connection closed by remote host\r\nConnection closed by UNKNOWN port 65535\r\n')
10.200.120.218 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: kex_exchange_identification: Connection closed by remote host\r\nConnection closed by UNKNOWN port 65535",
"unreachable": true
}
I figured out the solution.
For me, it was adding entries for both of my servers, A and B, into ~/.ssh/config:
Host bastion1
    HostName <IP/FQDN>
    StrictHostKeyChecking no
    User <USER>
    # key to log into the first bastion server; should be present on your local machine
    IdentityFile <file to log into the first bastion server>

Host bastion2
    HostName <IP/FQDN>
    StrictHostKeyChecking no
    User <USER>
    # key to log into the second bastion server; should be present on your local machine
    IdentityFile <file to log into the second bastion server>
    ProxyJump bastion1
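Before wiring this into Ansible, you can sanity-check the jump chain directly with ssh (an illustrative command; substitute your actual VM user and address):
ssh -J bastion1,bastion2 <vm_user>@<VM_IP> hostname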
Then edit the inventory file as shown below:
[deployment_host]
VM_IP ansible_user=<vm_user> ansible_ssh_extra_args='-o StrictHostKeyChecking=no' ansible_ssh_private_key_file=<file to login to VM>
[deployment_host:vars]
ansible_ssh_common_args='-J bastion1,bastion2'
Then any Ansible command with this inventory should work without issue:
❯ ansible all -i inventory.ini -m shell -a "hostname"
<VM_IP> | CHANGED | rc=0 >>
development-host-1
NOTE: All these SSH keys should be on your local system. You can get the bastion2 private key from bastion1 using Ansible, and likewise the VM key from bastion2, using an Ansible ad-hoc fetch (see the sketch below).
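A minimal sketch of such an ad-hoc fetch, assuming the key sits at /home/<user>/.ssh/id_rsa on bastion1 (the trailing comma makes Ansible treat bastion1 as a one-host inline inventory, so the connection details come from ~/.ssh/config):
ansible all -i 'bastion1,' -m fetch -a 'src=/home/<user>/.ssh/id_rsa dest=./keys/ flat=yes'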
We have a RHEL 7 remote server where I created a dummy user called gitlabci.
While SSH'd into the remote server, I generated a public-private key pair (for use when grabbing files from GitLab)
Uploaded the public key as a deploy key for use later when we get our CI set up
Generated another public-private key pair in my local machine (for use when SSH'ing into the remote server from the GitLab Runner)
Added the public key to the remote server's authorized_keys
Added the private key to the project's CI environment variables
The idea is that when the CI runs, the GitLab Runner will SSH into the remote server as the gitlabci user I created, then fetch the branch into the web directory using the deploy keys.
I thought I had set up the keys properly, but whenever the runner tries to SSH, the connection gets refused.
$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )
...
$ eval $(ssh-agent -s)
Agent pid 457
$ echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
Identity added: (stdin) (GitLab CI)
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ ssh gitlabci@random.server.com
Pseudo-terminal will not be allocated because stdin is not a terminal.
ssh: connect to host random.server.com port 22: Connection refused
ERROR: Job failed: exit code 1
When I tried to SSH into the remote server via Git Bash on my local machine using the key pair I generated, it did work:
$ ssh -i ~/.ssh/gitlabci gitlabci@random.server.com
Last login: Mon Nov 4 13:49:59 2019 from machine01.work.server.com
"Connection refused" means that the ssh client transmitted a connection request to the named host and port, and it received in response a so-called "reset" packet, indicating that the remote server was refusing to accept the connection.
If you can connect to random.server.com from one host but get connection refused from another host, a few possible explanations come to mind:
You might have an entry in your .ssh/config file which substitutes a different name or address for random.server.com. For example, an entry like the following would cause ssh to connect to random2.server.com when you request random.server.com:
Host random.server.com
Hostname random2.server.com
The IP address lookup for "random.server.com" is returning the wrong address somehow, so ssh is trying to connect to the wrong server. For example, someone might have added an entry to /etc/hosts for that hostname.
Some firewall or other packet inspection software is interfering with the connection attempt by responding with a fake reset packet.
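A few quick checks you could run from the failing host to narrow this down (illustrative commands, not from the original post; adjust the hostname):
ssh -G random.server.com | grep -iE 'hostname|port'   # the name and port ssh will actually use after config processing
getent hosts random.server.com                        # the address the resolver returns for that name
nc -vz random.server.com 22                           # whether anything answers on port 22 at that address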
I have an EC2 Amazon Linux instance running which I can SSH into using:
ssh -i "keypair.pem" ec2-user@some-ip.eu-west-1.compute.amazonaws.com
but when I try to ping the server using ansible I get:
testserver | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
I'm using the following hosts file:
testserver ansible_ssh_host=some-ip.eu-west-1.compute.amazonaws.com ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/Users/me/playbook/key-pair.pem
and I'm running Ansible with the following command:
ansible testserver -i hosts -m ping -vvvvv
The output is:
<some-ip.eu-west-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ansible.cfg set ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set: (-o)(IdentityFile="/Users/me/playbook/key-pair.pem")
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ec2-user)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: PlayContext set ssh_common_args: ()
<some-ip.eu-west-1.compute.amazonaws.com> SSH: PlayContext set ssh_extra_args: ()
<some-ip.eu-west-1.compute.amazonaws.com> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/Users/me/.ansible/cp/ansible-ssh-%h-%p-%r)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/me/playbook/key-pair.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/Users/me/.ansible/cp/ansible-ssh-%h-%p-%r ec2-52-18-106-35.eu-west-1.compute.amazonaws.com '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1462096401.65-214839021792201 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1462096401.65-214839021792201 `" )'"'"''
testserver | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
What am I doing wrong?
Try this solution, it worked fine for me:
ansible ipaddress -m ping -i inventory -u ec2-user
where inventory is the hosts file name.
inventory:
[host]
xx.xx.xx.xx
[host:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=/location of your pem file/filename.pem
I was facing this problem because I didn't give the location of the hosts file I was referring to.
This is what my hosts file looks like:
[apache] is the group of hosts on which we are going to install the Apache server.
ansible_ssh_private_key_file should be the path of the downloaded .pem file used to access your instances. In my case both instances have the same credentials.
[apache]
50.112.133.205 ansible_ssh_user=ubuntu
54.202.7.87 ansible_ssh_user=ubuntu
[apache:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/home/hashimyousaf/Desktop/hashim_key_oregon.pem
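With that inventory in place, a quick connectivity check might look like this (assuming the inventory file is named hosts):
ansible apache -i hosts -m ping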
I was having a similar problem, and reading through Troubleshooting Connecting to Your Instance helped me. Specifically, I was pinging an Ubuntu instance from an Amazon Linux instance but forgot to change the connection username from "ec2-user" to "ubuntu"!
You have to change the hosts file and make sure you have the correct username:
test2 ansible_ssh_host=something.something.eu-west-1.compute.amazonaws.com ansible_ssh_user=theUser
'test2' is the name I have given to the SSH machine in my local Ansible hosts file.
'ansible_ssh_host=something.something.eu-west-1.compute.amazonaws.com' is the connection to the EC2 instance.
'ansible_ssh_user=theUser' - The user of the instance. (Important)
SSH into your instance; the prompt will look like [theUser@Instance:]. Make sure you copy 'theUser' into the hosts file as the 'ansible_ssh_user' variable,
then try to ping it.
If this does not work, check whether ICMP traffic is allowed in your AWS security group (note that Ansible's ping module actually connects over SSH, so port 22 must also be open in the security group).
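For example, with the AWS CLI you could inspect and open the relevant inbound rules (a sketch with a made-up group ID and the documentation CIDR 203.0.113.0/24; substitute your own values):
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol icmp --port -1 --cidr 203.0.113.0/24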
Worked for me:
vi inventory
[hosts]
serveripaddress ansible_ssh_user=ec2-user
[hosts:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=/home/someuser/ansible1.pem
chmod 400 ansible1.pem
ansible -i inventory hosts -m ping -u ec2-user
I have an Ansible playbook which connects to a virtual machine via a non-standard SSH port (forwarded to localhost) and with a different user (vagrant) than the local one.
The ssh port is specified in the ansible inventory:
[vms]
localhost:2222
The username is given on the command line to ansible-playbook:
ansible-playbook -i <inventory from above> <some playbook> -u vagrant
The communication with the VM works correctly; however, %p always expands to 22 and %r to the local username.
Consequently, I cannot flush the SSH connection (for the user's changed group membership to take effect) like this:
- name: flush the ssh connection
command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop {{inventory_hostname}}
delegate_to: 127.0.0.1
Am I making a silly mistake somewhere? Alternatively, is there a different way to flush the SSH connection?
The percent placeholders are not expanded by Ansible, but by ssh itself later on.
Sorry, I forgot to add the most important part.
Using
command: ssh -o ControlPath=[...] -O stop {{inventory_hostname}}
will use the default port, because you didn't specify one on the command line. You would also have to specify the port to "flush" the connection this way:
command: ssh -o ControlPath=[...] -O stop -p {{inventory_port}} {{inventory_hostname}}
But I don't think it is needed. Ansible should clean up the connections when the playbook ends, and I don't see any other reason to do that.
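If you do need to flush mid-play (for example so the changed group membership takes effect, as in the question), a fuller sketch would also pass the remote user so that %r matches the control socket; this assumes ansible_port and ansible_user are defined for the host and follows the ControlPath format shown in the question:
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop -p {{ ansible_port }} -l {{ ansible_user }} {{ inventory_hostname }}
  delegate_to: 127.0.0.1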
I've set up a VM on Fedora 17 with KVM and configured a bridged network for it. Both the host and the VM use manual IP configuration, with the host's IP being 192.168.0.2 and the VM's 192.168.0.10.
From the VM I can connect to the host without any problems, but from the host I can't SSH to the VM, even though I can still ping the VM from the host. Trying to SSH just gives me "no route to host".
Oh, and I have iptables disabled, so I don't think this is a firewall problem.
Also ensure that the kernel is configured for IP forwarding:
$ sudo sysctl -a | grep net.ipv4.ip_forward
net.ipv4.ip_forward = 1
It should have a value of 1, not 0. If needed, enable with these commands:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
There are two ways:
* Use an SSH tunnel to create a channel from the guest back to the host:
From the guest, run the following command (a reverse tunnel, so the host can reach the guest's port 22 via its own port 2222):
ssh -R 2222:localhost:22 username@hostip
Then, from the host, connect with: ssh -p 2222 <guest_user>@localhost. Explore the ssh man page for the details.
* More difficult to set up, but the proper solution is to configure the guest's networking correctly when running it:
Follow
http://www.cse.iitd.ernet.in/~prathmesh/random.html#Connecting_qemu_guest_to_real_network