I have a master and a slave.
I can connect via ssh from the master to the slave.
Ansible, however, can't connect from the master to the slave.
Question: What am I doing wrong, so that Ansible can't connect but ssh can?
Successful connection from master to slave via ssh
vagrant@master:~$ ssh slave.local
Enter passphrase for key '/home/vagrant/.ssh/id_rsa':
vagrant@slave.local's password:
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-87-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
17 packages can be updated.
9 updates are security updates.
----------------------------------------------------------------
Ubuntu 16.04.3 LTS built 2017-09-08
----------------------------------------------------------------
Last login: Thu Sep 28 15:20:21 2017 from 10.0.0.10
vagrant@slave:~$
Ansible error: "Permission denied (publickey,password)"
vagrant@master:~$ ansible all -m ping -u vagrant
The authenticity of host 'slave.local (10.0.0.11)' can't be established.
ECDSA key fingerprint is SHA256:tRGlinvTj/c2gpTayZ/mYzyWbs63s+BUX81TdKJ+0jQ.
Are you sure you want to continue connecting (yes/no)? yes
Enter passphrase for key '/home/vagrant/.ssh/id_rsa':
slave.local | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Warning: Permanently added 'slave.local' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,password).\r\n",
"unreachable": true
}
This is my hosts file:
vagrant@master:~$ cat /etc/ansible/hosts
[web]
slave.local
The solution was to add the private key, in OpenSSH format, to the file /home/vagrant/.ssh/id_rsa, which is where Ansible looks for the key by default.
I found this out by running Ansible in verbose mode with the -vvvv flag:
ansible all -m ping -u vagrant -vvvv
The verbose output was:
10.0.0.11 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_7.2p2 Ubuntu-4ubuntu2.2, OpenSSL 1.0.2g 1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/vagrant/.ansible/cp/a72f4dc97e\" does not exist\r\ndebug2: resolving \"10.0.0.11\" port 22\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 10.0.0.11 [10.0.0.11] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: fd 3 clearing O_NONBLOCK\r\ndebug1: Connection established.\r\ndebug3: timeout: 10000 ms remain after connect\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/vagrant/.ssh/id_rsa type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file ...
Related
I have created a virtual machine with Multipass, and I am trying to connect to this instance over ssh with the command:
ssh -vvv -i back_key ubuntu@10.136.38.199
At first, I tried to connect to my instance from a GitHub Action, but I got a timeout error; I thought it might have been a GitHub issue.
But with a second computer, I couldn't connect to the VM either.
The error I got:
ubuntu@laptop-number2:~$ ssh -vvv -i back_key ubuntu@10.136.38.199
OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname 10.136.38.199 is address
debug2: ssh_connect_direct
debug1: Connecting to 10.136.38.199 [10.136.38.199] port 22.
debug1: connect to address 10.136.38.199 port 22: Resource temporarily unavailable
ssh: connect to host 10.136.38.199 port 22: Resource temporarily unavailable
Whether it's from a GitHub Action or from a second computer, I can't connect to the Multipass instance over ssh.
However, I can connect to the instance from the host computer.
I thought it may be a Firewall issue, so I disabled it with:
sudo systemctl stop ufw
I did this on both the VM and the host machine, then restarted sshd inside the instance.
The reason I got those issues was the network I was working on: the ssh port of the server couldn't be reached.
I found that out by using nmap:
nmap -Pn -p 22 <IP_OF_SERVER>
The result: the port is filtered.
Switching to a mobile network didn't solve it either, since my ISP blocks this port. The solution was to use my home network for the ssh server.
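As an aside, if nmap isn't at hand, a plain netcat probe gives a similar answer (assumes nc is installed; <IP_OF_SERVER> is the same placeholder as above):
nc -vz <IP_OF_SERVER> 22
A quick "Connection refused" means the host is reachable but nothing listens on port 22, while a long hang ending in a timeout usually means the port is filtered somewhere along the path, as it was here.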
I'm trying to use rsync on my dev server to download files to my local machine after checking out a branch on the dev server.
Before using wsl2, I used to be able to do the following:
Remote server
rsync -ave "ssh -p 22001" --delete --exclude-from ~/rsync_exclude_list.txt ~/as/ alex@localhost:/home/alexmk92/code/project
Local SSH config
Host dev-tunnel
HostName dev.sever.co.uk
User as
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h:%p
RemoteForward 22001 localhost:22
Host dev
HostName dev.server.co.uk
User as
RequestTTY yes
RemoteCommand cd as; bash
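One detail worth checking with this config: ssh does not create the ControlPath directory for you, so the sockets folder must already exist on the machine initiating the connection:
mkdir -p ~/.ssh/sockets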
I can then run these with ssh dev and ssh -fvN dev-tunnel. If, from the remote server, I type ssh -p 22001 alex@localhost, I get:
debug1: remote forward success for: listen 22001, connect localhost:22
debug1: All remote forwarding requests processed
debug1: client_input_channel_open: ctype forwarded-tcpip rchan 2 win 2097152 max 32768
debug1: client_request_forwarded_tcpip: listen localhost port 22001, originator 127.0.0.1 port 34472
debug1: connect_next: host localhost ([127.0.0.1]:22) in progress, fd=5
debug1: channel 1: new [127.0.0.1]
debug1: confirm forwarded-tcpip
debug1: channel 1: connection failed: Connection refused
connect_to localhost port 22: failed.
debug1: channel 1: free: 127.0.0.1, nchannels 2
I'm guessing this is because WSL2 no longer runs on localhost, and is instead isolated inside a Hyper-V VM, which probably means Windows receives this request on localhost:22 (where no SSH server is running) and then hangs up the connection.
How can I forward the request to my WSL2 SSH process?
It is possible to add a port mapping to WSL2 machines, using the following PowerShell script:
$port = 3000;
$addr = '0.0.0.0';
# Ask the WSL2 distro for its eth0 address
$remoteaddr = bash.exe -c "ifconfig eth0 | grep 'inet '"
$found = $remoteaddr -match '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}';
if ($found) {
    $remoteaddr = $matches[0];
} else {
    echo "Error: ip address of WSL 2 cannot be found";
    exit;
}
# Drop any stale mapping, then forward $addr:$port on Windows to the same port inside WSL2
Invoke-Expression "netsh interface portproxy delete v4tov4 listenport=$port listenaddress=$addr"
Invoke-Expression "netsh interface portproxy add v4tov4 listenport=$port listenaddress=$addr connectport=$port connectaddress=$remoteaddr"
echo "Success: Port mapping added!";
Of course, you need to change the port and maybe the IP address (first two lines).
You may also need to run the script as admin.
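Once it has run, the active mappings can be verified with the standard netsh subcommand:
netsh interface portproxy show v4tov4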
I installed RHEL 8.2 with a free developer license (bare hardware). It looks like sshd is installed and running by default with port 22 already open; I did not have to do anything to install sshd or open the port.
[root@<hostname> etc]# systemctl status sshd
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-08-17 13:35:12 MDT; 1h 7min ago
...
but on Windows 10 Pro (with cygwin ssh client installed),
ssh <user>@<ip-address>
I get this error
ssh: connect to host <ip-address> port 22: Permission denied
On the RHEL 8.2 installation, in a bash terminal, I can successfully ssh locally: ssh <user>@<ip-address> works OK.
Any ideas?
This is what I am getting:
From: 192.168.0.153
To: 192.168.0.106
$ ssh -Tv <user>@<ip-address>
OpenSSH_8.3p1, OpenSSL 1.1.1f 31 Mar 2020
debug1: Connecting to 192.168.0.106 [192.168.0.106] port 22.
debug1: connect to address 192.168.0.106 port 22: Permission denied
ssh: connect to host 192.168.0.106 port 22: Permission denied
but on 192.168.0.106, it is showing sshd running and port 22 open.
On the machine itself, I can ssh ($ ssh <user>@localhost works)
On the server I want to reach, it shows port 22 as open, ssh service enabled (192.168.0.106)
# firewall-cmd --list-all
public (active)
...
interfaces: enp37s0
services: cockpit dhcpv6-client http ssh
ports: 22/tcp
...
First, check the output of ssh -Tv <user>@<ip-address>
It will tell you:
if it can actually contact the server
what local private key it is using
Make sure you have:
generated a public/private key pair in %USERPROFILE%\.ssh, using the OpenSSH ssh-keygen command (ssh-keygen -m PEM -t rsa -P "")
added the content of id_rsa.pub to ~user/.ssh/authorized_keys on the server side (one way to do this is sketched below).
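For that second step, ssh-copy-id automates appending the key and fixing the permissions on the server, assuming your client ships it (the cygwin openssh package normally does):
ssh-copy-id -i ~/.ssh/id_rsa.pub <user>@<ip-address>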
I had this problem: my virtual machine was set up for a wired connection, and I had to turn the wired connection on in the Red Hat settings (Settings -> Network -> Wired toggle: ON).
Once I turned the wired connection on, I was able to make my ssh connections externally.
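For a headless box, the same toggle can be flipped from the command line, assuming NetworkManager manages the NIC (the device name enp37s0 is taken from the firewall output above and may differ on your machine):
nmcli device connect enp37s0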
I am using GitLab secrets to pass the ssh private key for it to connect to a remote server. scp works fine, but running ssh doesn't.
I can even see the ssh logs on the server when the GitLab pipeline runs and tries to ssh.
Here is the output from gitlab-pipeline:
ssh -i /root/.ssh/id_rsa -vvv root@$DEPLOYMENT_SERVER_IP "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY};"
OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolving "157.245.xxx.xxx" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to 157.245.xxx.xxx [157.245.xxx.xxx] port 22.
debug1: connect to address 157.245.xxx.xxx port 22: Connection refused
ssh: connect to host 157.245.xxx.xxx port 22: Connection refused
Here is my GitLab pipeline, which fails:
deploy_production:
  stage: deploy
  image: python:3.6-alpine
  before_script:
    - 'which ssh-agent || ( apk update && apk add openssh-client)'
    - eval "$(ssh-agent -s)"
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$DEPLOY_SERVER_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh-add ~/.ssh/id_rsa
    - apk add gcc musl-dev libffi-dev openssl-dev iputils
    - ssh-keyscan $DEPLOYMENT_SERVER_IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - scp -r ./docker-compose.yml root@${DEPLOYMENT_SERVER_IP}:~/
    - scp -r ./env/production/docker-compose.yml root@${DEPLOYMENT_SERVER_IP}:~/docker-compose-prod.yml
    - ssh -i /root/.ssh/id_rsa -vvv root@$DEPLOYMENT_SERVER_IP "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY};"
  environment: production
  only:
    - "master"
sshd auth logs:
sshd[27552]: Connection closed by 35.231.235.202 port 53870 [preauth]
sshd[27554]: Connection closed by 35.231.235.202 port 53872 [preauth]
sshd[27553]: Connection closed by 35.231.235.202 port 53874 [preauth]
sshd[27558]: Accepted publickey for root from 35.231.235.202 port 53876 ssh2: RSA SHA256:bS8IsyG4kyKcTtfrW+h4kw1JXbBSQfO6Jk6X/JKL1CU
sshd[27558]: pam_unix(sshd:session): session opened for user root by (uid=0)
systemd-logind[945]: New session 649 of user root.
sshd[27558]: Received disconnect from 35.231.235.202 port 53876:11: disconnected by user
sshd[27558]: Disconnected from user root 35.231.235.202 port 53876
sshd[27558]: pam_unix(sshd:session): session closed for user root
systemd-logind[945]: Removed session 649.
sshd[27560]: Received disconnect from 222.186.15.160 port 64316:11: [preauth]
sshd[27560]: Disconnected from authenticating user root 222.186.15.160 port 64316 [preauth]
sshd[27685]: Accepted publickey for root from 35.231.235.202 port 53878 ssh2: RSA SHA256:bS8IsyG4kyKcTtfrW+h4kw1JXbBSQfO6Jk6X/JKL1CU
sshd[27685]: pam_unix(sshd:session): session opened for user root by (uid=0)
systemd-logind[945]: New session 650 of user root.
sshd[27685]: Received disconnect from 35.231.235.202 port 53878:11: disconnected by user
sshd[27685]: Disconnected from user root 35.231.235.202 port 53878
sshd[27685]: pam_unix(sshd:session): session closed for user root
systemd-logind[945]: Removed session 650.
Finally figured out why this is happening. The issue is with the ufw firewall rule for ssh on my server: it is rate limited, and since my GitLab pipeline runs scp twice followed by ssh, the connections arrive too quickly and the server refuses the last one.
It works outside the GitLab pipeline because doing the same steps manually is slow enough to stay under the limit.
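If that is the cause, a sketch of the fix, assuming the server uses ufw's stock limit rule on port 22 (check the rule first, since it may have been added under a different name):
sudo ufw status verbose       # look for a LIMIT rule on 22/tcp
sudo ufw delete limit 22/tcp  # remove the rate-limited rule
sudo ufw allow 22/tcp         # replace it with a plain allow
Alternatively, the limit can be kept and the pipeline slowed down (e.g. a short sleep between the scp and ssh steps) to stay under it.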
So I'm on my local machine, and I'm sshing into a google compute server.
From this google compute server, I'm trying to establish an ssh tunnel to a third party server ($host) using the following command:
ssh username@$host -L 3306:127.0.0.1:3306 -N
And after hanging for 20-30 seconds, I get:
ssh: connect to host $host port 22: Connection timed out
I can use the exact same command on my local machine to the third party server and it works fine.
I've killed anything using the 3306 port on the google compute server.
I've opened ports 22 and 3306 on the google server through the interface (though I can't tell if this applies to outbound connections also).
Not sure where to go from here, any help would be appreciated.
Edit1: The google server can successfully ping the third party server.
Edit2: Just tried it from the company server; it doesn't work there either. Both the google-compute and the company server are Linux (Debian Wheezy and Ubuntu respectively) and the local machine is Windows. The fact that I'm sshing into them shouldn't make a difference, should it?
Edit3: Changed the default SSH port on the google server to 22222 and connected to it using that instead. Trying to connect to third party now with:
sudo ssh -p 22 username@$host -L 3306:127.0.0.1:3306 -N -v -v -v
Debug output is:
OpenSSH_6.6.1, OpenSSL 1.0.1e 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to $host [$host] port 22.
And after that it just hangs.
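Incidentally, a hang at "Connecting to ... port 22" means the TCP handshake itself never completes. A raw probe from bash can confirm this without ssh in the way (uses bash's built-in /dev/tcp; $host is the same placeholder as above):
timeout 10 bash -c "cat < /dev/tcp/$host/22"
If the port is reachable, this prints the server's SSH banner almost immediately; if packets are being silently dropped, it produces no output and exits when the timeout fires, which matches the behaviour seen here.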
Debug output on the local machine using the same command is:
OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
debug2: ssh_connect: needpriv 0
debug1: Connecting to $host [$host] port 22.
debug1: Connection established.
*other junk*
Turns out the third party server had ssh blocked from anywhere outside Australia
-_-