SSH works fine with .ssh/config, but Fabric breaks

I have an .ssh/config:
Host host01 host01.in.mynet
    User costello
    HostName 1.2.3.4
    Port 22222
Which is working fine with plain ssh:
ssh host01
costello@host01 ~ »
But Fabric is not using that config:
$ fab deploy:host=host01
[host01] Executing task 'deploy'
Fatal error: Low level socket error connecting to host host01 on port 22: Connection refused (tried 1 time)
Underlying exception:
Connection refused
Aborting.
Why is Fabric not using ssh's configuration? I would really like to avoid duplicating the configuration for Fabric or, even worse, changing the SSH port of my server.
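For what it's worth, Fabric's 1.x line can read ~/.ssh/config, but only if you opt in; a minimal fabfile sketch with env.use_ssh_config enabled (the deploy body here is just a placeholder):

from fabric.api import env, run

env.use_ssh_config = True               # honor Host/HostName/Port/User from ~/.ssh/config
# env.ssh_config_path = "~/.ssh/config" # optional; this is already the default location

def deploy():
    run("uname -a")                     # placeholder task body

With that in place, fab deploy:host=host01 should resolve host01 through the same config plain ssh uses, including Port 22222.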

Related

SSH connection timed out on port 22 even with ufw inactive

I have always connected to my server with ssh (ssh user@ip_address) and had no problems, but suddenly I get this error message:
ssh: connect to host 178.79.131.53 port 22: Connection timed out
I can ping the server and it's fine.
I also disabled ufw, thinking the firewall was the problem, but I still get the same error.
I can connect to other servers.
Also, when I run nmap 178.79.131.53 -p 22 I get this:
Nmap scan report for ServerName (xxx.xx.xxx.xx)
Host is up (0.062s latency).
PORT STATE SERVICE
22/tcp filtered ssh
Nmap done: 1 IP address (1 host up) scanned in 0.70 seconds
Note: it's a Linode server
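In nmap terms, "filtered" means the probes to port 22 are being silently dropped rather than refused, which points at a firewall or network device between you and sshd. A diagnostic sketch, assuming you have out-of-band access (e.g. Linode's Lish console):

sudo ss -tlnp | grep :22        # confirm sshd is actually listening
sudo iptables -L INPUT -n -v    # look for DROP/REJECT rules that survive ufw being disabled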

VS Code SSH trouble for non-22 port

VSCode Version: 1.40.0
Local OS Version: Windows 10
Remote OS Version: CentOS 7
Remote Extension/Connection Type: SSH
Steps to Reproduce:
Use SSH to connect to a host on a non-22 port, declare it in ssh_config, then start connecting.
The host refuses with the error message "ssh: connect to host XXX.XXX.XX.XX port 22: Connection refused".
I think this message indicates that ssh is using port 22 to connect to the host, but I have changed the port in the config file.
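One way this happens is when the Remote-SSH extension is handed a raw user@ip target, which bypasses the Port entry entirely. A sketch of a per-host entry the extension can pick up when you connect to the alias instead (alias, address, port and user are placeholders):

Host myserver
    HostName XXX.XXX.XX.XX
    Port 2222
    User centos

If the extension is reading a different file, the remote.SSH.configFile setting can be pointed at the one containing this entry.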

SSH port-forwarding works on one port but not on the other

I am trying this command
ssh username@example -L 27017:10.230.0.6:27017 -L 9201:10.290.0.8:9200 -L 5601:10.210.0.5:5601
The port forwarding works for 27017 but not for the others. Do I need to override the ports?
I always get the same error which is:
channel 8: open failed: connect failed: Connection timed out
channel 7: open failed: connect failed: Connection timed out
ssh username@example ... -L 9201:10.290.0.8:9200 -L 5601:10.210.0.5:5601
...
channel 8: open failed: connect failed: Connection timed out
When you connect to port 9201 or 5601 on your local system, that connection is tunneled through your ssh link to the remote ssh server. From there, the ssh server makes a TCP connection to the target of the tunnel (10.290.0.8:9200 or 10.210.0.5:5601) and relays data between the tunneled connection and the connection to the target of the tunnel.
The "Connection timed out" error is coming from the remote ssh server when it tries to make the TCP connection to the target of the tunnel. "Connection timed out" means that the ssh server process transmitted a TCP connection request to the target system, and it never received a response.
Common reasons for a connection timeout include:
The target system is down or disconnected from the network.
Some firewall or other network device is blocking traffic between the ssh server and the target system.
The IP address and/or port is incorrect, and the connection attempts are going to the wrong place.
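To narrow it down, you can reproduce exactly what the ssh server does by attempting the TCP connection from the server itself; a sketch using the addresses from the question, assuming netcat is available on the server:

nc -vz -w 5 10.290.0.8 9200    # a timeout here confirms the problem is server-to-target connectivity
nc -vz -w 5 10.210.0.5 5601

(Note that 10.290.0.8 as written is not a valid IPv4 address, since octets cannot exceed 255; it may simply be anonymized here, but it is worth double-checking against the last point above.)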

OS X SSH connect to host operation timed out

When running
ssh -v myuser@xx.xxx.xxx.xx
I connect to the server and can operate the session
When running
ssh myuser@xx.xxx.xxx.xx
the behaviour returns
ssh: connect to host xx.xxx.xxx.xx port 22: Operation timed out
This behaviour appeared after I ran the following on the server:
ssh-add ~/.ssh/id_rsa
So adding the identity to the agent has somehow messed up ssh. How do I fix this?
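Since -v only adds logging and does not change how the connection is made, a first step is to compare what configuration each invocation actually resolves; a diagnostic sketch (ssh -G requires OpenSSH 6.8 or later):

ssh -G myuser@xx.xxx.xxx.xx | grep -Ei 'hostname|port|proxy'    # show the resolved destination, port and any proxy

If both runs resolve the same host and port, the timeout is more likely an intermittent network issue than anything ssh-add did.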

SSH tunnel stops working after EC2 instance restart

I have an SSH tunnel from an EC2 instance (say A) to another with an Elastic IP (say B). It worked perfectly. Yet, B had a failure. So I had to stop it, and start a new instance with the same Elastic IP. And now the exact same SSH tunnel does not work anymore. Yet:
I can still SSH from A to B. So I know my keys are in place
I tried the exact same tunnel from another instance than A, and it works as expected.
So somehow, it is as if A detected a problem when B went down, and it is now blocking the traffic.
Tunnel:
/usr/bin/ssh -o StrictHostKeyChecking=no -i /path_to/id_dsa -f -p 22 -N -L 26:www.foo.com:80 ssh_tunnel@amazon_public_ip
And when I try Curl here is what I get:
curl -v -H "Host: www.foo.com" http://localhost:26/foofoo
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 26 failed: Connection refused
* Failed to connect to localhost port 26: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 26: Connection refused
Am I missing something?
I found the issue. I did not pay attention, but when I was SSH-ing into the instance, I was getting a warning message: WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!. Since it worked anyway, I thought it was not a problem. It turns out it makes the tunnel fail.
So I just removed the offending RSA key from known_hosts and now it works.
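That matches OpenSSH's behaviour: with StrictHostKeyChecking=no, a changed host key still lets the login proceed, but port forwarding is disabled as a man-in-the-middle precaution, so the -L tunnel silently stops working. The cleanup can also be done with ssh-keygen instead of editing known_hosts by hand; a sketch using the placeholder hostname from the tunnel command above:

ssh-keygen -R amazon_public_ip    # drop the stale key, then reconnect once to record the new one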