I'm getting ssh: connect to host xx.xx.xx.xx port 22: Connection refused after updating fstab to mount a Google bucket. The fstab entry I added is as follows:
bucket mount_point fuse rw,nosuid,nodev,relatime,user_id=1004,group_id=1005,default_permissions 0 0
The SSH connection issue came up after updating fstab and restarting the VM. Is there a relationship between the fstab entry and the SSH connection issue? And how am I supposed to change the fstab entry when I can't connect to the VM through SSH?
Changes to /etc/fstab should not normally cause a problem with SSH. One caveat: if the new entry fails to mount at boot and lacks the nofail option, systemd can drop the VM into emergency mode, so sshd never starts and connections are refused.
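If that is what happened here, a hedged fix would be to keep your entry but add the standard nofail and _netdev mount options, so a failed or slow bucket mount can no longer block the boot:

bucket mount_point fuse rw,nosuid,nodev,relatime,user_id=1004,group_id=1005,default_permissions,nofail,_netdev 0 0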
That said, I would start by checking the firewall rules of your project:
gcloud compute firewall-rules list
Add the default-allow-ssh rule if it is missing:
gcloud compute firewall-rules create default-allow-ssh --allow tcp:22
Did you install a firewall in the OS?
If you could share the output of nmap against the external IP of your instance, that would be great.
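Something along these lines would do (the -Pn flag skips nmap's host-discovery ping, which is often blocked; EXTERNAL_IP is a placeholder):

nmap -Pn -p 22 EXTERNAL_IP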
Related
Currently, I have built a small datacenter environment in OTC with Terraform, based on Ubuntu 20.04 images.
The idea is to have a jump host, both during the setup phase and for operational purposes, that allows spontaneous access to service frontends via SSH proxy jumps without permanently exposing them to the public net.
Basic setup works fine so far: I can access the jump host with SSH, and from there I can access the internal machines with SSH once I put the private key onto the jump host. So, cloud-wise, the security setup seems to be fine. The key pair is generated with ed25519; I use the same key for the jump host and the internal servers (for now).
What I cannot achieve is the proxy jump as a chained command from my outside machine.
On the jump host, I set AllowTcpForwarding to "yes" in /etc/ssh/sshd_config and restarted ssh and sshd services.
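For reference, that change amounts to something like the following on the jump host (on Ubuntu 20.04 the service unit is named ssh):

# /etc/ssh/sshd_config
AllowTcpForwarding yes

sudo systemctl restart ssh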
My current local ssh config looks like this:
Host otc
    User ubuntu
    Hostname <FloatingIP-Address>
    Port 22
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    IdentityFile ~/.ssh/ssh_access
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlMaster auto
    ControlPersist 10m

Host 10.*
    User ubuntu
    Port 22
    IdentityFile ~/.ssh/ssh_access
    ProxyJump otc
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
I can use ssh otc with this to reach the jump host.
What I would expect is that I could use e.g. ssh 10.0.0.56 to reach an internal host without further ado. Likewise, I should be able to use commands like ssh -L 8080:10.0.0.56:8080 10.0.0.56 -N to map an internal server's port to a localhost port on my external machine. This is how I have managed it successfully in other public cloud hosting scenarios.
All I get is:
Stdio forwarding request failed: Session open refused by peer
kex_exchange_identification: Connection closed by remote host
The journal on the jump host says:
Jul 30 07:19:04 dev-nc-o-bastion sshd[2176]: refused local port forward: originator 127.0.0.1 port 65535, target 10.0.0.56 port 22
What I checked as well:
ufw is off on the Jump Host.
replaced ProxyJump configuration with ProxyCommand
So I am at the end of my knowledge. Does anyone have a hint as to what else could be the reason? Any help welcome!
OK, the cause is found (though not yet fully explained).
My local SSH config allows multiplexed connections (ControlMaster auto), which causes the creation of a Unix socket file for the ControlPath in ~/.ssh.
I had to log in to the jump host to set AllowTcpForwarding in the first place.
After restarting sshd, I returned to the local machine, and the failure occurred when trying to forward to the remote internal machine.
After deleting the socket file in ~/.ssh, the connection can now be established as needed. Obviously, the persistent tunnel was not impacted by the restarted daemon on the jump host and simply refused to follow the new directive.
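Instead of deleting the socket file by hand, the stale master connection can also be torn down with ssh's control commands; a sketch, using the Host otc alias from the config above:

ssh -O check otc   # reports whether a master process is still running
ssh -O exit otc    # tells the master to exit and remove its socket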
This cost me two days. On the bright side, I learned a lot about ssh :o
I am attempting to connect (via SSH) one GCE VM instance to another GCE VM instance (which will be referred to as Machine 1 and Machine 2 from now on).
So far I have generated (via ssh-keygen -t rsa -f ~/.ssh/ssh_key) a public and private key on Machine 1, and have added the contents of ssh_key.pub to the ~/.ssh/authorized_keys file on Machine 2.
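(A side note: with a non-default key file name like ssh_key, a plain ssh call from Machine 1 will not pick the key up automatically; it has to be passed explicitly. USER and MACHINE_2_IP below are placeholders:)

ssh -i ~/.ssh/ssh_key USER@MACHINE_2_IP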
However, whenever I try to connect via SSH using the following command, it simply times out:
gcloud compute ssh --project [PROJECT_ID] --zone [ZONE] [Machine_2_Name]
Connection timed out. ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I have double-checked that each VM instance has plenty of disk space, that their firewall settings are permissive, and that OS Login is not enabled. I have read through the answer here but nothing is working.
What am I doing wrong? How do I properly SSH from one GCE VM instance to another?
The problem I was having was that each VM was using a different network/sub-network with different firewall configurations. After recreating one on the same network/sub-network, I was able to easily SSH into one from the other via
username@machine1:~$ ssh machine2
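This works because instances on the same VPC network can resolve each other through Compute Engine internal DNS; the fully qualified form, assuming zonal DNS, would be:

ssh machine2.ZONE.c.PROJECT_ID.internal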
I tested the same scenario on my side and got the same result you described. Then I ran this command inside the machine to debug the SSH process and narrow down the issue:
gcloud compute ssh YOUR_INSTANCE_NAME --zone ZONE --ssh-flag="-vvv"
Then I got this result:
debug1: connect to address 35.x.x.x port 22: Connection timed out
ssh: connect to host 35.x.x.x port 22: Connection timed out
So this means instance 1 is unable to connect to the external IP address of instance 2. I only added a new firewall rule and it worked.
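The rule would look something along these lines (the rule name is a placeholder, and SOURCE_RANGE must cover the address instance 1 connects from):

gcloud compute firewall-rules create allow-ssh --network default --allow tcp:22 --source-ranges SOURCE_RANGE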
After running the above command, if you see a permission denied message, it means you did not copy the public key to the target machine properly.
I was setting up a firewall with UFW on an Ubuntu server. I skipped the step sudo ufw allow ssh and instead ran sudo ufw enable. I rebooted the VPS, but now when I try to connect using SSH, I get the following error: ssh: connect to host {IP Address} port 22: Operation timed out.
I am using Google Cloud Compute infrastructure, and I don't understand the details in this article: https://cloud.google.com/compute/docs/ssh-in-browser#ssherror
Is there a way I can rollback?
You can log in to your instance using the serial console. After logging in, run sudo ufw allow ssh to allow SSH access to your instance.
See Interacting with the Serial Console for more information.
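Assuming the gcloud CLI, enabling and connecting to the serial console looks roughly like this (instance name and zone are placeholders):

gcloud compute instances add-metadata INSTANCE_NAME --zone ZONE --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port INSTANCE_NAME --zone ZONE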
I have a remote host/server with SSH access.
My computer is in my work network, which allows SSH connections only within the network itself. I cannot SSH to the outside world because port 22 is blocked by the firewall.
I am trying to create an SSH tunnel to forward, for example, localhost:80 to remote_server:22 (I suppose I would connect via SSH to localhost and be forwarded to my remote server).
I tried, for example, without a proxy:
sudo ssh -L localhost:443:remote_server_ip:22 root@remote_host_name
and with a proxy:
https://wiki.archlinux.org/index.php/Tunneling_SSH_through_HTTP_proxies_using_HTTP_Connect
I have read a lot and checked Stack Overflow, but it is still not clear to me how to resolve this issue.
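For what it's worth, the -L attempt above cannot work on its own, since setting up the tunnel itself already requires an outbound SSH connection through the blocked port. A pattern that may fit instead, assuming you control the remote server, is to have sshd listen on a port the work firewall allows (443 in this sketch) and connect to it directly:

# on the remote server, in /etc/ssh/sshd_config:
Port 22
Port 443

# after restarting sshd, from the work machine:
ssh -p 443 root@remote_host_name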
I set up SSH on my Ubuntu server (running XMonad) and generated a key for my laptop that I use to connect to my home server. I also went into my wireless router and forwarded port 22 for SSH use. I can SSH fine when I'm at home using the standard:
ssh user@ipaddress
However when I'm outside of my local network I get this error:
ssh: connect to host xxx.xx.xx.xxx port 22: Connection refused
Everything I read says I need to either a) check that my port 22 is forwarded (which it is) or b) check that sshd is actually running on my Ubuntu server (which it is).
Any ideas what is preventing my SSH from working when I'm remote?
Add the following lines to your SSH user config file if they don't exist. You can create the config file as shown below:
vi ~/.ssh/config
Host *
    ServerAliveInterval 300
Change the permissions as below:
chmod 600 ~/.ssh/config
Restart the daemon. Hope this helps.
https://serverfault.com/a/371563/617303
For me this was the cause.
In your /etc/ssh/sshd_config or /etc/ssh/ssh_config, check to make sure GSSAPI authentication is disabled (set to no):
GSSAPIAuthentication no
Then restart the service or machine.
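On Ubuntu, for example, restarting just the service would be:

sudo systemctl restart ssh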