Forwarding port from the SSH terminal - ssh

In my company we use a terminal server, and the IDE (PhpStorm) runs on a remote machine. I want to use Xdebug with the web apps we are developing, but every time I want to use it I have to open PuTTY and create a tunnel with the following parameters.
After that I log in with my credentials and everything is ready to use Xdebug. If I run a file I get this.
I want to do the same from the SSH terminal in PhpStorm. I can connect to the terminal, but when I try to use the command to forward the ports I get errors every time. Here are some of them. Sorry about the indentation, I don't know how to fix it. I'm trying to use port 220 because I think it is the one for this server.
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000
usage: ssh [-1246AaCfgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
           [-D [bind_address:]port] [-E log_file] [-e escape_char]
           [-F configfile] [-I pkcs11] [-i identity_file]
           [-L [bind_address:]port:host:hostport] [-l login_name] [-m mac_spec]
           [-O ctl_cmd] [-o option] [-p port]
           [-Q cipher | cipher-auth | mac | kex | key]
           [-R [bind_address:]port:host:hostport] [-S ctl_path] [-W host:port]
           [-w local_tun[:remote_tun]] [user@]hostname [command]
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 10.77.82.11
ssh: connect to host 10.77.82.11 port 22: Connection refused
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 ^C
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 http://10.77.82.11
ssh: Could not resolve hostname http://10.77.82.11: Name or service not known
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 10.77.82.11
ssh: connect to host 10.77.82.11 port 22: Connection refused
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 localhost
ssh: connect to host localhost port 22: Connection refused
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 10.77.82.11
ssh: connect to host 10.77.82.11 port 22: Connection refused
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 10.77.82.11
ssh: connect to host 10.77.82.11 port 22: Connection refused
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 10.77.82.11:220
ssh: Could not resolve hostname 10.77.82.11:220: Name or service not known
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 10.77.82.11
ssh: connect to host 10.77.82.11 port 22: Connection refused
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 http://10.77.82.11
ssh: Could not resolve hostname http://10.77.82.11: Name or service not known
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000
usage: ssh [-1246AaCfgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
           [-D [bind_address:]port] [-E log_file] [-e escape_char]
           [-F configfile] [-I pkcs11] [-i identity_file]
           [-L [bind_address:]port:host:hostport] [-l login_name] [-m mac_spec]
           [-O ctl_cmd] [-o option] [-p port]
           [-Q cipher | cipher-auth | mac | kex | key]
           [-R [bind_address:]port:host:hostport] [-S ctl_path] [-W host:port]
           [-w local_tun[:remote_tun]] [user@]hostname [command]
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 10.77.82.11
ssh: connect to host 10.77.82.11 port 22: Connection refused
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 10.77.82.11:220
ssh: Could not resolve hostname 10.77.82.11:220: Name or service not known
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 10.77.82.11
ssh: connect to host 10.77.82.11 port 22: Connection refused
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 localhost
ssh: connect to host localhost port 22: Connection refused
dirsorpor3@da01:~$ ssh -R 9000:10.77.82.31:9000 10.77.82.11
ssh: connect to host 10.77.82.11 port 22: Connection refused
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 10.77.82.31
ssh: connect to host 10.77.82.31 port 22: Connection refused
dirsorpor3@da01:~$ ssh -R 9000:localhost:9000 10.77.82.31:220
ssh: Could not resolve hostname 10.77.82.31:220: Name or service not known

You are trying to create the tunnel while you are already logged onto the server via SSH; that will not work.
You need to start ssh from the Windows machine, not from the server:
ssh -R 9000:localhost:9000 -p 220 dirsorpor3@10.77.82.31
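If the Windows machine has an OpenSSH client available, the same tunnel can be kept as an entry in its ~/.ssh/config so it only takes one short command to start (a sketch using the addresses from the question; the alias xdebug-tunnel is made up):
Host xdebug-tunnel
HostName 10.77.82.31
Port 220
User dirsorpor3
RemoteForward 9000 localhost:9000
Running ssh -N xdebug-tunnel then opens just the tunnel, without an interactive shell.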

Related

ssh fails to connect but scp works in gitlab-ci

I am using GitLab secrets to pass the SSH private key used to connect to a remote server. scp works fine, but running ssh doesn't.
I can even see the SSH logs on the server when the GitLab pipeline runs and tries to ssh.
Here is the output from the GitLab pipeline:
ssh -i /root/.ssh/id_rsa -vvv root@$DEPLOYMENT_SERVER_IP "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY};"
OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolving "157.245.xxx.xxx" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to 157.245.xxx.xxx [157.245.xxx.xxx] port 22.
debug1: connect to address 157.245.xxx.xxx port 22: Connection refused
ssh: connect to host 157.245.xxx.xxx port 22: Connection refused
Here is my GitLab pipeline, which fails:
deploy_production:
  stage: deploy
  image: python:3.6-alpine
  before_script:
    - 'which ssh-agent || ( apk update && apk add openssh-client)'
    - eval "$(ssh-agent -s)"
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$DEPLOY_SERVER_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh-add ~/.ssh/id_rsa
    - apk add gcc musl-dev libffi-dev openssl-dev iputils
    - ssh-keyscan $DEPLOYMENT_SERVER_IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - scp -r ./docker-compose.yml root@${DEPLOYMENT_SERVER_IP}:~/
    - scp -r ./env/production/docker-compose.yml root@${DEPLOYMENT_SERVER_IP}:~/docker-compose-prod.yml
    - ssh -i /root/.ssh/id_rsa -vvv root@$DEPLOYMENT_SERVER_IP "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY};"
  environment: production
  only:
    - "master"
sshd auth logs:
sshd[27552]: Connection closed by 35.231.235.202 port 53870 [preauth]
sshd[27554]: Connection closed by 35.231.235.202 port 53872 [preauth]
sshd[27553]: Connection closed by 35.231.235.202 port 53874 [preauth]
sshd[27558]: Accepted publickey for root from 35.231.235.202 port 53876 ssh2: RSA SHA256:bS8IsyG4kyKcTtfrW+h4kw1JXbBSQfO6Jk6X/JKL1CU
sshd[27558]: pam_unix(sshd:session): session opened for user root by (uid=0)
systemd-logind[945]: New session 649 of user root.
sshd[27558]: Received disconnect from 35.231.235.202 port 53876:11: disconnected by user
sshd[27558]: Disconnected from user root 35.231.235.202 port 53876
sshd[27558]: pam_unix(sshd:session): session closed for user root
systemd-logind[945]: Removed session 649.
sshd[27560]: Received disconnect from 222.186.15.160 port 64316:11: [preauth]
sshd[27560]: Disconnected from authenticating user root 222.186.15.160 port 64316 [preauth]
sshd[27685]: Accepted publickey for root from 35.231.235.202 port 53878 ssh2: RSA SHA256:bS8IsyG4kyKcTtfrW+h4kw1JXbBSQfO6Jk6X/JKL1CU
sshd[27685]: pam_unix(sshd:session): session opened for user root by (uid=0)
systemd-logind[945]: New session 650 of user root.
sshd[27685]: Received disconnect from 35.231.235.202 port 53878:11: disconnected by user
sshd[27685]: Disconnected from user root 35.231.235.202 port 53878
sshd[27685]: pam_unix(sshd:session): session closed for user root
systemd-logind[945]: Removed session 650.
Finally figured out why this is happening: the issue is the ufw firewall rule for SSH on my server. It is rate limited, and since my GitLab pipeline runs scp twice followed by ssh, which probably happens too quickly, the server refuses the connection.
It works outside of the GitLab pipeline because doing the same steps manually is slow enough not to trip the rate limit.
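If you run into the same rate limit, one option is to swap ufw's limit rule for a plain allow, or to allow only the runner's address (a sketch; the rule number and the runner IP are placeholders you need to fill in):
sudo ufw status numbered                                   # find the number of the "22/tcp LIMIT" rule
sudo ufw delete <rule-number>                               # remove the rate-limited rule
sudo ufw allow from <runner-ip> to any port 22 proto tcp    # plain allow for the CI runner only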

Advanced ssh config file

How can I ssh directly to the Remote Server? Below is a detailed description.
Local machine ---> Jump1 ----> Jump2 ----> Remote Server
From the local machine there is no direct access to the Remote Server, and direct access to Jump2 is also disabled.
The Remote Server can only be accessed from Jump2.
There is no SSH key set up for the remote server; we have to enter the password manually.
From the local machine we access Jump1 by IP on port 2222, then from Jump1 we access Jump2 by hostname on the default port 22.
With the ssh config file we were able to access the Jump2 server without any problem, but my requirement is to access the remote server directly.
Is there any possible way to do this? I don't mind entering the password for the remote server.
Log
ssh -vvv root@ip address
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /root/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to ip address [ip address] port 22.
My Config file
Host jump1
Hostname ip.109
Port 2222
User avdy
Host jump2
Hostname ip.138
Port 22
ProxyCommand ssh -W %h:%p jump1
User avdy
Host remote-server
Hostname ip.8
Port 22
ProxyCommand ssh -W %h:%p jump2
User root
Set your ~/.ssh/config:
Host Jump1
User jump1user
Port 2222
Host Jump2
ProxyCommand ssh -W %h:%p Jump1
User jump2user
Host RemoteServer
ProxyCommand ssh -W %h:%p Jump2
User remoteUser
Or, with OpenSSH 7.3 or newer:
Host RemoteServer
ProxyJump jump1user@Jump1,jump2user@Jump2
User remoteUser
Then you can connect simply using ssh RemoteServer.
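For a one-off connection the same chain can also be given on the command line with -J, using the hosts and users from the question's config:
ssh -J avdy@ip.109:2222,avdy@ip.138 root@ip.8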

Ansible through Bastion server SSH Error

I am following this guide (and others): running-ansible-through-ssh-bastion-host.
I have my ssh.cfg file set up to allow connecting to a host behind multiple bastions.
proxy -> util -> monitor -> more
I can connect to the util server:
[self#home]$ ssh -F ssh.cfg util
...
[self#util]$
and the monitoring server:
[self#home]$ ssh -F ssh.cfg monitor
...
[self#monitor]$
ssh.cfg:
Host *
ServerAliveInterval 60
ControlMaster auto
ControlPath ~/.ssh/mux-%r@%h:%p
ControlPersist 15m
Host proxy
HostName proxy01.com
ForwardAgent yes
Host util
HostName util01.priv
ProxyCommand ssh -W %h:%p proxy
Host monitor
HostName mon01.priv
ProxyCommand ssh -W %h:%p util
ansible inventory file:
[bastion]
proxy
[utility]
util
monitor
ansible.cfg:
[ssh_connection]
ssh_args = -F ./ssh.cfg -o ControlMaster=auto -o ControlPersist=15m
control_path = ~/.ssh/ansible-%%r@%%h:%%p
When I execute any ansible commands, they appear to hit the proxy host without any problem, but fail to connect to the util host and the monitor host.
> ansible all -a "/bin/echo hello"
util | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh",
"unreachable": true
}
proxy | SUCCESS | rc=0 >>
hello
monitor | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
ADDITIONAL:
After some more hacking around, I have keyed the monitor host and found that Ansible can connect to the proxy and the monitor, but fails on the util host... which is extremely odd, because it has to pass through the util host to reach the monitor.
util | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh",
"unreachable": true
}
proxy | SUCCESS | rc=0 >>
hello
monitor | SUCCESS | rc=0 >>
hello
After trying different guides, this solution worked for me to run Ansible against a server that has no direct SSH access and must be reached via a proxy/bastion.
Here is my ~/.ssh/config file:
Host *
ServerAliveInterval 60
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
ForwardAgent yes
####### Access to the Private Subnet Server through Proxy/bastion ########
Host proxy-server
HostName x.x.x.x
ForwardAgent yes
Host private-server
HostName y.y.y.y
ProxyCommand ssh -q proxy-server nc -q0 %h %p
Hope that helps you.
For some unknown reason Ansible ignored the multiple Host entries; the following config helped me:
Host 10.*
StrictHostKeyChecking no
GSSAPIAuthentication no
ProxyCommand ssh -W %h:%p -l ubuntu -i ~/.ssh/key.pem 11.22.33.44
ControlMaster auto
ControlPersist 15m
User ubuntu
IdentityFile ~/.ssh/key.pem
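Another way, if you would rather keep the bastion wiring inside Ansible than in ~/.ssh/config, is the ansible_ssh_common_args variable; here is a sketch in group_vars that reuses the bastion address and key from the config above:
# group_vars/all.yml
ansible_user: ubuntu
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q -l ubuntu -i ~/.ssh/key.pem 11.22.33.44"'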

Scp through ssh tunnel opened

I want to send files from machineA which has opened a reverse tunnel with a server. The reverse tunnel connects port 22 on machineA with port 2222 on the server:
autossh -M 0 -q -f -N -o "ServerAliveInterval 120" -o "ServerAliveCountMax 1" -R 2222:localhost:22 userserver@server.com
If I do:
scp file userserver@server.com:.
then scp sends the file with a new login over SSH, in my case using a public/private key.
But if I do:
scp -P 2222 file userserver@localhost:.
I get a "connection refused" message. The same happens if I replace 2222 above with the port found with:
netstat | grep ssh | grep ESTABLISHED
How can I send files without opening a new SSH connection (i.e. without a new handshake)?
You can use the ControlMaster option in your ssh_config (~/.ssh/config), which creates a persistent connection that further ssh/scp/sftp sessions reuse. It is as easy as pie:
Host yourhost
Hostname fqdn.tld
Port port_number # if required, but probably yes, if you do port-forwarding
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h
ControlPersist 5m
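With that entry in place, the first login opens the master socket and later copies reuse it without a new handshake, for example:
ssh yourhost            # first session opens the master connection (log in once)
scp file yourhost:.     # subsequent copies reuse the existing connection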

ssh: connect to host bitbucket.org port 22: Connection timed out fatal

Whole error is:
ssh: connect to host bitbucket.org port 22: Connection timed out
fatal: The remote end hung up unexpectedly
I'm getting this error when I push from two of my projects, which are on different servers (in different countries).
What could be the problem?
UPDATE:
Using
ssh -v
I'm getting this:
usage: ssh [-somecode] [-b bind_address] [-c cipher_spec]
[-D [bind_address:]port] [-e escape_char] [-F configfile]
[-i identity_file] [-L [bind_address:]port:host:hostport]
[-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]
[-R [bind_address:]port:host:hostport] [-S ctl_path]
[-w tunnel:tunnel] [user#]hostname [command]
This may get it working again.
Edit the SSH config file:
nano ~/.ssh/config
make sure you have these lines:
Host bitbucket.org
Hostname altssh.bitbucket.org
Port 443
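Once that entry is saved you can verify the altssh route before pushing again; git@ is Bitbucket's standard SSH user, so if the config is picked up this should authenticate over port 443 instead of timing out:
ssh -T git@bitbucket.org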
Check whether you have iptables rules restricting outgoing SSH connections;
if so, allow port 22.
For multiple ports:
iptables -t filter -A OUTPUT -p tcp --match multiport --dports 22,1111,2222,3333 -j ACCEPT
Check whether Bitbucket is down:
Open a browser
Go to http://www.bitbucket.org
If the page doesn't display, it means Bitbucket is down.
Solution: wait for it :)