ssh fails to connect but scp works in gitlab-ci - ssh

I am using GitLab secrets to pass the SSH private key used to connect to a remote server. scp works fine, but running ssh doesn't.
I can even see the ssh attempts in the server's auth logs when the GitLab pipeline runs.
Here is the output from the GitLab pipeline:
ssh -i /root/.ssh/id_rsa -vvv root@$DEPLOYMENT_SERVER_IP "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY};"
OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolving "157.245.xxx.xxx" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to 157.245.xxx.xxx [157.245.xxx.xxx] port 22.
debug1: connect to address 157.245.xxx.xxx port 22: Connection refused
ssh: connect to host 157.245.xxx.xxx port 22: Connection refused
Here is my GitLab pipeline, which fails:
deploy_production:
  stage: deploy
  image: python:3.6-alpine
  before_script:
    - 'which ssh-agent || ( apk update && apk add openssh-client)'
    - eval "$(ssh-agent -s)"
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$DEPLOY_SERVER_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh-add ~/.ssh/id_rsa
    - apk add gcc musl-dev libffi-dev openssl-dev iputils
    - ssh-keyscan $DEPLOYMENT_SERVER_IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - scp -r ./docker-compose.yml root@${DEPLOYMENT_SERVER_IP}:~/
    - scp -r ./env/production/docker-compose.yml root@${DEPLOYMENT_SERVER_IP}:~/docker-compose-prod.yml
    - ssh -i /root/.ssh/id_rsa -vvv root@$DEPLOYMENT_SERVER_IP "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY};"
  environment: production
  only:
    - "master"
sshd auth logs:
sshd[27552]: Connection closed by 35.231.235.202 port 53870 [preauth]
sshd[27554]: Connection closed by 35.231.235.202 port 53872 [preauth]
sshd[27553]: Connection closed by 35.231.235.202 port 53874 [preauth]
sshd[27558]: Accepted publickey for root from 35.231.235.202 port 53876 ssh2: RSA SHA256:bS8IsyG4kyKcTtfrW+h4kw1JXbBSQfO6Jk6X/JKL1CU
sshd[27558]: pam_unix(sshd:session): session opened for user root by (uid=0)
systemd-logind[945]: New session 649 of user root.
sshd[27558]: Received disconnect from 35.231.235.202 port 53876:11: disconnected by user
sshd[27558]: Disconnected from user root 35.231.235.202 port 53876
sshd[27558]: pam_unix(sshd:session): session closed for user root
systemd-logind[945]: Removed session 649.
sshd[27560]: Received disconnect from 222.186.15.160 port 64316:11: [preauth]
sshd[27560]: Disconnected from authenticating user root 222.186.15.160 port 64316 [preauth]
sshd[27685]: Accepted publickey for root from 35.231.235.202 port 53878 ssh2: RSA SHA256:bS8IsyG4kyKcTtfrW+h4kw1JXbBSQfO6Jk6X/JKL1CU
sshd[27685]: pam_unix(sshd:session): session opened for user root by (uid=0)
systemd-logind[945]: New session 650 of user root.
sshd[27685]: Received disconnect from 35.231.235.202 port 53878:11: disconnected by user
sshd[27685]: Disconnected from user root 35.231.235.202 port 53878
sshd[27685]: pam_unix(sshd:session): session closed for user root
systemd-logind[945]: Removed session 650.

Finally figured out why this is happening. The issue is with the ufw firewall rule for SSH on my server: it is rate limited, and since my GitLab pipeline runs scp twice followed by ssh in quick succession, the server refuses the connection.
It works outside of the GitLab pipeline because doing it manually is slower than the rate limit.
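A minimal sketch of one possible fix on the deployment server, assuming the default ufw "limit ssh" rule is what triggers the refusals (the runner address below is taken from the sshd logs above and is only an example):
# replace the rate-limited rule with a plain allow
ufw delete limit ssh
ufw allow ssh
# or, more restrictively, allow only the GitLab runner's address
ufw allow from 35.231.235.202 to any port 22 proto tcp
Alternatively, reducing the number of back-to-back connections the job opens also keeps it under the rate limit without loosening the firewall.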

Related

WSL2 SSH RemoteForward connect back

I'm trying to use rsync on my dev server to download files to my local machine after checking out a branch on the dev server.
Before using WSL2, I used to be able to do the following:
Remote server
rsync -ave "ssh -p 22001" --delete --exclude-from ~/rsync_exclude_list.txt ~/as/ alex@localhost:/home/alexmk92/code/project
Local SSH config
Host dev-tunnel
  HostName dev.server.co.uk
  User as
  ControlMaster auto
  ControlPath ~/.ssh/sockets/%r@%h:%p
  RemoteForward 22001 localhost:22

Host dev
  HostName dev.server.co.uk
  User as
  RequestTTY yes
  RemoteCommand cd as; bash
I can then run these with ssh dev and ssh -fvN dev-tunnel. If, from the remote server, I type ssh -p 22001 alex@localhost, I get:
debug1: remote forward success for: listen 22001, connect localhost:22
debug1: All remote forwarding requests processed
debug1: client_input_channel_open: ctype forwarded-tcpip rchan 2 win 2097152 max 32768
debug1: client_request_forwarded_tcpip: listen localhost port 22001, originator 127.0.0.1 port 34472
debug1: connect_next: host localhost ([127.0.0.1]:22) in progress, fd=5
debug1: channel 1: new [127.0.0.1]
debug1: confirm forwarded-tcpip
debug1: channel 1: connection failed: Connection refused
connect_to localhost port 22: failed.
debug1: channel 1: free: 127.0.0.1, nchannels 2
I'm guessing this is because WSL2 no longer runs on localhost and is instead isolated inside a Hyper-V VM, which probably means Windows is receiving this request on localhost:22 (where no SSH server is running) and then hangs up the connection.
How can I forward the request to my WSL2 SSH process?
It is possible to add a port mapping to WSL2 machines, using the following PowerShell script:
$port = 3000;
$addr = '0.0.0.0';
$remoteaddr = bash.exe -c "ifconfig eth0 | grep 'inet '"
$found = $remoteaddr -match '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}';
if( $found ) {
  $remoteaddr = $matches[0];
} else {
  echo "Error: ip address of WSL 2 cannot be found";
  exit;
}
Invoke-Expression "netsh interface portproxy delete v4tov4 listenport=$port listenaddress=$addr"
Invoke-Expression "netsh interface portproxy add v4tov4 listenport=$port listenaddress=$addr connectport=$port connectaddress=$remoteaddr"
echo "Success: Port mapping added!";
Of course, you need to change the port and maybe the IP address (first two lines).
You may need to run the script as admin.
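If Windows Firewall blocks the forwarded port, an inbound rule may also be needed; a minimal sketch, run from an elevated prompt (the rule name and port 3000 are just examples matching the script above):
netsh advfirewall firewall add rule name="WSL2 port 3000" dir=in action=allow protocol=TCP localport=3000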

Remote connection using ssh to oracle-solaris-114-sru29 not working with -o option

Remote SSH connection to Solaris 11.4.29 does not work with the "-o PreferredAuthentications=password" argument, while the same command works fine without it. From any Linux server, when the commands below are executed against the remote Solaris 11.4.29 host:
ssh -2 -vvv -l - works
ssh -2 -vvv -l -o PreferredAuthentications=password - doesn't work
Verbose messages:
debug1: PAM: setting PAM_TTY to "ssh"
debug1: PAM: password authentication failed for : Authentication failed
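One way to narrow this down (not from the original post, just a diagnostic idea) is to check which authentication methods the Solaris sshd actually offers; the verbose client output prints them, and restricting PreferredAuthentications to a method the server does not advertise fails in exactly this way:
ssh -vvv -o PreferredAuthentications=password -l <user> <solaris-host> 2>&1 | grep -i 'Authentications that can continue'
Here <user> and <solaris-host> stand in for the values elided in the question.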

Ansible says "Permission denied (publickey,password)"

I have a master and a slave.
I can connect via ssh from master to the slave.
Ansible can't connect from master to the slave.
Question: What am I doing wrong, so that Ansible can't connect but ssh can?
Successful connection from master to slave via ssh
vagrant@master:~$ ssh slave.local
Enter passphrase for key '/home/vagrant/.ssh/id_rsa':
vagrant@slave.local's password:
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-87-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
17 packages can be updated.
9 updates are security updates.
----------------------------------------------------------------
Ubuntu 16.04.3 LTS built 2017-09-08
----------------------------------------------------------------
Last login: Thu Sep 28 15:20:21 2017 from 10.0.0.10
vagrant@slave:~$
Ansible error: "Permission denied (publickey,password)"
vagrant@master:~$ ansible all -m ping -u vagrant
The authenticity of host 'slave.local (10.0.0.11)' can't be established.
ECDSA key fingerprint is SHA256:tRGlinvTj/c2gpTayZ/mYzyWbs63s+BUX81TdKJ+0jQ.
Are you sure you want to continue connecting (yes/no)? yes
Enter passphrase for key '/home/vagrant/.ssh/id_rsa':
slave.local | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Warning: Permanently added 'slave.local' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,password).\r\n",
"unreachable": true
}
This is my hosts file
vagrant@master:~$ cat /etc/ansible/hosts
[web]
slave.local
The solution was to put the private key, in OpenSSH format, into the file /home/vagrant/.ssh/id_rsa.
This is where Ansible looks for the key by default.
I could find this out by starting Ansible in verbose mode, using the "-vvvv" flag:
ansible all -m ping -u vagrant -vvvv
The verbose output was
10.0.0.11 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_7.2p2 Ubuntu-4ubuntu2.2, OpenSSL 1.0.2g 1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/vagrant/.ansible/cp/a72f4dc97e\" does not exist\r\ndebug2: resolving \"10.0.0.11\" port 22\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 10.0.0.11 [10.0.0.11] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: fd 3 clearing O_NONBLOCK\r\ndebug1: Connection established.\r\ndebug3: timeout: 10000 ms remain after connect\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/vagrant/.ssh/id_rsa type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file ...

Advanced ssh config file

How can I ssh directly to the remote server? Below is a detailed description.
Local machine ---> Jump1 ----> Jump2 ----> Remote Server
From the local machine there is no direct access to the remote server, and direct access to Jump2 is disabled.
The remote server can only be accessed from Jump2.
There is no ssh key set up for the remote server; we have to give the password manually.
From the local machine we access Jump1 by IP on port 2222, then from Jump1 we access Jump2 by hostname on the default port 22.
With the ssh config file we were able to access the Jump2 server without any problem, but my requirement is to access the remote server directly.
Is there any possible way? I don't mind entering the password for the remote server.
Log
ssh -vvv root@ip address
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /root/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to ip address [ip address] port 22.
My Config file
Host jump1
  Hostname ip.109
  Port 2222
  User avdy

Host jump2
  Hostname ip.138
  Port 22
  ProxyCommand ssh -W %h:%p jump1
  User avdy

Host remote-server
  Hostname ip.8
  Port 22
  ProxyCommand ssh -W %h:%p jump2
  User root
Set your ~/.ssh/config:
Host Jump1
  User jump1user
  Port 2222

Host Jump2
  ProxyCommand ssh -W %h:%p Jump1
  User jump2user

Host RemoteServer
  ProxyCommand ssh -W %h:%p Jump2
  User remoteUser

Or with OpenSSH 7.3 and newer:
Host RemoteServer
  ProxyJump jump1user@Jump1,jump2user@Jump2
  User remoteUser

Then you can connect simply using ssh RemoteServer.
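With OpenSSH 7.3 and newer, the same chain can also be given ad hoc on the command line using -J, the command-line form of ProxyJump. A sketch reusing the host aliases and users from the config above (ip.8 is the remote server's address from the question):
ssh -J jump1user@Jump1,jump2user@Jump2 remoteUser@ip.8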

Can't get SSH ProxyCommand to work (ssh_exchange_identification: Connection closed by remote host)

I'm unsuccessfully trying to use SSH ProxyCommand to connect to a server via a jump box. My config is below; I'm running this command:
ssh 10.0.2.54 -F ssh.config -vv
Host x.x.x.x
  User ec2-user
  HostName x.x.x.x
  ProxyCommand none
  IdentityFile /Users/me/.ssh/keys.pem
  BatchMode yes
  PasswordAuthentication no

Host *
  ServerAliveInterval 60
  TCPKeepAlive yes
  ProxyCommand ssh -W %h:%p -q ec2-user@x.x.x.x
  ControlMaster auto
  ControlPersist 8h
  User ec2-user
  IdentityFile /Users/me/.ssh/keys.pem
The result is:
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data ssh.config
debug1: ssh.config line 9: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/Users/me/.ssh/mux-ec2-user@10.0.2.54:22" does not exist
debug2: ssh_connect: needpriv 0
debug1: Executing proxy command: exec ssh -W 10.0.2.54:22 -q ec2-user@x.x.x.x
debug1: identity file /Users/me/.ssh/keys.pem type -1
debug1: identity file /Users/me/.ssh/keys.pem-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.2
debug1: permanently_drop_suid: 501
How can I get this to work/troubleshoot the issue?
Thanks,
ControlPersist in combination with ProxyCommand is not effective, and you are missing the ControlPath option, but that is not the problem here.
First of all, if you are using a non-standard config file and you want it to be used by the proxy command as well, you need to specify it there too. The -q option makes the connection quiet, so you have no idea what is going on under the hood of the proxy command; the LogLevel DEBUG3 option is quite useful instead.
This line:
ProxyCommand ssh -W %h:%p -q ec2-user@x.x.x.x
needs to be (and you don't need the username, as it is already specified above):
ProxyCommand ssh -W %h:%p -F ssh.config x.x.x.x
You also have the wrong order of parameters in your command:
ssh 10.0.2.54 -F ssh.config -vv
needs to be:
ssh -F ssh.config 10.0.2.54
as you can read in the manual page. And -vv is not needed if you use the LogLevel option.
Then it should work for you (at least it did for me; otherwise investigate the log).
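Putting those fixes together, the Host * block might look roughly like this (a sketch of the answer's suggestions, not the asker's verified config; LogLevel DEBUG3 replaces the -q and -vv flags):
Host *
  ServerAliveInterval 60
  TCPKeepAlive yes
  ProxyCommand ssh -F ssh.config -W %h:%p x.x.x.x
  LogLevel DEBUG3
  User ec2-user
  IdentityFile /Users/me/.ssh/keys.pem
invoked as:
ssh -F ssh.config 10.0.2.54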