I'm having problems accessing one of my VMs (called myvm1 here) after restoring a disk from a snapshot. Here is what I did yesterday (which worked just fine):
I made a snapshot of disk1.
I created a new disk, called disk2, using the snapshot created above.
I attached the disk to myvm1 through the Google Console.
I unmounted disk1 and mounted disk2.
I deleted disk1.
Everything worked fine, and database data on disk2 was accessible as desired. There's not much else on that disk.
Today, what I wanted to do was to "rename" disk2 to disk1 (to avoid future problems with our Terraform setup). I did this as follows:
I made a snapshot of disk2.
I created a new disk, calling it disk1, using the snapshot above.
I attached the disk to myvm1 using the terminal:
gcloud compute --project=myproject instances attach-disk myvm1 --disk disk1
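For reference, the same steps can also be done entirely with gcloud; this is only a sketch, and the snapshot name, zone placeholder and flags below are invented for illustration, not copied from my actual setup:
gcloud compute disks snapshot disk2 --snapshot-names=disk2-snap --zone=<zone>
gcloud compute disks create disk1 --source-snapshot=disk2-snap --zone=<zone>
gcloud compute --project=myproject instances attach-disk myvm1 --disk=disk1 --zone=<zone>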
After this, when I attempted to ssh into myvm1 (to unmount and mount), I got:
ssh: connect to host myvm1 port 22: Connection refused
I have attempted the following to solve this/investigate:
repeatedly stopping and starting the VM (it takes considerably longer than other VMs in the same project)
detaching disk1 (and re-attaching it)
Other information:
other VMs in the same project are still accessible via ssh.
I did nothing else to the VM yesterday or today but what I have written above. The system has not been in use between yesterday and now (it was shut down over night to save money).
Using the Google Console SSH does not work, BUT it does not work for the other VMs either, as we connect using private keys.
"The instance is booting up and sshd is not yet running." - It's listed as RUNNING.
"The instance is not running sshd." I have not manually disabled sshd.
"sshd is listening on a port other than the one you are connecting to." I've made no changes to ports.
"There is no firewall rule allowing SSH access on the port." Also, under "Firewall rules and routes details" port 22 is enabled. Also, firewall rules are identical to the other VMs in the same project.
"The firewall rule allowing SSH access is enabled, but is not configured to allow connections from GCP Console services." We don't want to be able to connect via GCP Console so that doesn't matter.
"The instance is shut down." - It's running.
Debug information for the ssh-call:
me@mycomputer:~/project$ ssh myvm1 -vvv
OpenSSH_7.2p2 Ubuntu-4ubuntu2.4, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /home/me/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolving "myvm1" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to myvm1 [10.23.0.3] port 22.
debug1: connect to address 10.23.0.3 port 22: Connection refused
ssh: connect to host myvm1 port 22: Connection refused
I've looked at the solution mentioned here
Why Google Cloud Compute Engine instance gives ssh connection refused after restart?
but since I have not yet mounted/unmounted any of the disks I don't see how that could be the same problem.
I would very much appreciate any help you can give me. Solutions involving creating a new instance are not relevant, as I want to know what went wrong in the first place, so that this does not happen in a production environment. Thankfully myvm1 is just a sandbox system.
A port 22 error can come from two sources: the firewall not being properly set up on GCP, or port 22 not accepting SSH connections from within your instance. Assuming the firewall is properly set up (since it works on other instances), please try to log in via the serial console and check your iptables rules.
In order to connect to the serial console you will have to do the following:
1). Activate the “Connect to serial console” button.
Go to VM instances, click on your VM, Edit, activate "Enable connecting to serial ports" in the Remote access area, and click on Save.
2). Create a username and password.
Go to VM instances, click on your VM again, Edit, and fill in the Custom metadata section with:
In key: startup-script
In value:
#!/bin/bash
# startup scripts run as root, so sudo is unnecessary here
useradd -G sudo pamela
echo 'pamela:pamela5' | chpasswd
(This is a script that creates the username pamela with password pamela5, which you are going to use later. Please use something else for security purposes.)
3). A reboot is needed for changes to take effect.
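(If you prefer the CLI over the console, roughly the same can be done with gcloud; a sketch, where add-user.sh is a hypothetical file containing the startup script above:)
gcloud compute instances add-metadata myvm1 --metadata serial-port-enable=TRUE
gcloud compute instances add-metadata myvm1 --metadata-from-file startup-script=add-user.sh
gcloud compute instances reset myvm1            # reboot so the startup script runs
gcloud compute connect-to-serial-port myvm1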
I had the same problem. I think the snapshot file is corrupted.
Related
I have built a small datacenter environment in OTC with Terraform, based on Ubuntu 20.04 images.
The idea is to have a jump host, for the setup phase and for operational purposes, that allows spontaneous access to service frontends via ssh proxy jumps without permanently routing them to the public net.
The basic setup works fine so far: I can access the jump host with ssh, and from there I can access the internal machines with ssh when I put the private key onto the jump host. So, cloud-wise, the security seems to be fine. The key pair is generated with ed25519, and I use the same key for the jump host and the internal servers (for now).
What I cannot achieve is the proxy jump as a chained command from my outside machine.
On the jump host, I set AllowTcpForwarding to "yes" in /etc/ssh/sshd_config and restarted ssh and sshd services.
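(A quick way to confirm that the directive actually took effect is to ask sshd on the jump host for its effective configuration; this is just a sketch, not something from my original setup:)
sudo sshd -T | grep -i allowtcpforwarding   # should print "allowtcpforwarding yes"
sudo systemctl restart ssh                  # pick up changes to /etc/ssh/sshd_config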
My current local ssh config looks like this:
Host otc
User ubuntu
Hostname <FloatingIP-Address>
Port 22
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
IdentityFile= ~/.ssh/ssh_access
ControlPath ~/.ssh/cm-%r@%h:%p
ControlMaster auto
ControlPersist 10m
Host 10.*
User ubuntu
Port 22
IdentityFile=~/.ssh/ssh_access
ProxyJump otc
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
I can use this to run ssh otc to reach the jump host.
What I would expect is that I could use, e.g., ssh 10.0.0.56 to reach an internal host without further ado. Likewise, I should be able to use commands like ssh -L 8080:10.0.0.56:8080 10.0.0.56 -N to map an internal server's port to a localhost port on my external machine. This is how I managed it successfully in other public cloud hosting scenarios.
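(For reference, the config-based ProxyJump above should be equivalent, on reasonably recent OpenSSH, to jumping explicitly on the command line; a sketch using the otc alias and an example internal IP:)
ssh -J otc ubuntu@10.0.0.56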
All I get is:
Stdio forwarding request failed: Session open refused by peer
kex_exchange_identification: Connection closed by remote host
Journal on the Jump host says:
Jul 30 07:19:04 dev-nc-o-bastion sshd[2176]: refused local port forward: originator 127.0.0.1 port 65535, target 10.0.0.56 port 22
What I checked as well:
ufw is off on the Jump Host.
replaced ProxyJump configuration with ProxyCommand
So I am at the end of my knowledge. Does anyone have a hint about what else could be the reason? Any help is welcome!
OK, the cause is found (but not yet fully explained).
My local ssh settings allowed multiplexed connections (ControlMaster auto), which caused the creation of a Unix socket file for the ControlPath in ~/.ssh.
I had to log in to the jump host to set AllowTcpForwarding in the first place.
After restarting sshd, I returned to the local machine, and the failure occurred when trying to forward to the remote internal machine.
After deleting the socket file in ~/.ssh, the connection can now be established as needed. Obviously, the persistent tunnel was not impacted by the restarted daemon on the jump host and simply refused to follow the new directive.
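(In case it helps others: rather than deleting the socket file by hand, the stale master connection can also be told to exit with ssh's control commands; a sketch using the otc alias from my config:)
ssh -O check otc   # reports whether a master process is still running for this host
ssh -O exit otc    # asks that master to exit, dropping the stale multiplexed connection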
This cost me two days. On the bright side, I learned a lot about ssh :o
I am attempting to connect (via SSH) one GCE VM instance to another GCE VM instance (which will be referred to as Machine 1 and Machine 2 from now on).
So far I have generated (via ssh-keygen -t rsa -f ~/.ssh/ssh_key) a public and private key on Machine 1, and have added the contents of ssh_key.pub to the ~/.ssh/authorized_keys file on Machine 2.
However, whenever I try to connect from Machine 1 via ssh using the following command: gcloud compute ssh --project [PROJECT_ID] --zone [ZONE] [Machine_2_Name], it simply times out (Connection timed out. ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].)
I have double-checked that each VM instance has plenty of disk space, that their firewall settings are permissive, and that OS Login is not enabled. I have read through the answer here but nothing is working.
What am I doing wrong? How do I properly SSH from one GCE VM instance to another?
The problem I was having was that each VM was using a different network/sub-network with different firewall configurations. After putting them on the same network/sub-network, I was able to easily ssh from one into the other via
username@machine1:~$ ssh machine2
I tested the same scenario on my side and got the same result you described. Then I ran this command inside the machine to debug the SSH process and narrow down the issue:
gcloud compute ssh YOUR_INSTANCE_NAME --zone ZONE --ssh-flag="-vvv"
Then I got this result:
debug1: connect to address 35.x.x.x port 22: Connection timed out
ssh: connect to host 35.x.x.x port 22: Connection timed out
So that means instance 1 is unable to connect to the external IP address of instance 2. I only added a new firewall rule and it works.
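(The exact rule is not spelled out above, so treat this as a rough sketch; the rule name is invented, and in practice you should restrict the source range to whatever fits your setup rather than allowing everything:)
gcloud compute firewall-rules create allow-ssh --network=default --allow=tcp:22 --source-ranges=0.0.0.0/0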
After running the above-mentioned command, if you see any permission denied message, it means the public key was not copied to the target machine properly.
Background
I have a machine in production running an Elixir application (no access to iex, only to erl) and I am tasked with finding out why we are consuming so much CPU. The idea here would be to launch observer, check the Processes tab and see the processes with the most reductions.
How am I connecting?
To connect I am following a tutorial from a blog:
https://sgeos.github.io/elixir/erlang/observer/2016/09/16/elixir_erlang_running_otp_observer_remotely.html
Their instructions are as follows:
launch the app in the production machine with a cookie and a name
from local run: ssh user@public_ip "epmd -names" to get the name of the app and the port used
from local create a ssh tunnel to the remote machine: ssh -L 4369:user@public_ip:4369 -L 42877:user@public_ip:42877 user@public_ip (4369 is the epmd port by default, 42877 is the port of the app)
from local connect to the remote machine using the node's name: erl -name "user@app_name" -setcookie "mah_cookie" -hidden -run observer
Problem
And now in theory I should be able to use observer on the machine. Instead however I am greeted with the following error:
Protocol ‘inet_tcp’: register/listen error: epmd_close
So, after scouring the dark side of the internet, I decided to use sudo journalctl -f to check all the logs on the machine, and I found this:
channel 3: open failed: administratively prohibited: open failed
my_app_name sshd[8917]: error: connect_to flame@99.999.99.999: unknown host (Name or service not known)
/scripts/watchdog.sh")
my_app_name CRON[9985]: pam_unix(cron:session): session closed for user flame
Where:
erlang -name: my_app_name
machine user: flame
machine public ip: 99.999.99.999 (obviously not real)
So it tells me "unknown host"?? I am confused, since 99.999.99.999 is the public IP of the machine itself!
Questions
What am I doing wrong?
I read that in older versions of Erlang I can't monitor a machine with observer if the machines are on different networks (which is the case, because I want to monitor this machine from my localhost), but I didn't find any information on whether this still applies to modern versions.
If this is in fact impossible, what alternatives do I have?
Solution
After 3 days of non-stop searching, I finally found something that works.
To summarize, I am putting here everything I did.
All steps in local machine:
get the ports from the remote server:
> ssh remote-user@remote-ip "epmd -names"
epmd: up and running on port 4369 with data:
name super_duper_app at port 43175
create an ssh tunnel with the ports:
ssh remote-user@remote-ip -L4369:localhost:4369 -L43175:localhost:43175
In another terminal on your local machine, run an iex session with the cookie the app on your remote server is using. Then connect to it and start observer:
iex --name observer@127.0.0.1 --cookie super_duper_cookie
Node.connect :"super_duper_app@127.0.0.1"
> true
:observer.start
With observer started, select the machine from the Nodes menu.
Possible setbacks
If you have tried this and it didn't work, there are a few things you can check:
Check if the EPMD port (4369) on your local machine is free; if not, kill the process using it to free it (see the sketch after this list).
Check your ssh tunneling keys and configurations for permissions. As @Roberto Aloi pointed out, this link can be useful: https://unix.stackexchange.com/questions/14160/ssh-tunneling-error-channel-1-open-failed-administratively-prohibited-open
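A rough sketch of the first check (standard tools; adapt to your system):
lsof -i :4369    # see which process is holding the local EPMD port
epmd -names      # if it is a local epmd, list the nodes it has registered
epmd -kill       # ask the local epmd to stop (only succeeds when no live nodes are registered)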
I stopped and restarted an Ubuntu 14.04 Google Cloud Compute Engine instance, and now my ssh connection is refused with:
ssh: connect to host 146.148.114.98 port 22: Connection refused
This already happened once before; I thought there was a problem with the machine, so I deleted it, recreated it, and it started working again. I don't want to be recreating instances every time. The SSH troubleshooting page of Google Cloud is quite messy. My firewall rules seem to be OK. Does anyone have a solution for this?
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
default-allow-http default 0.0.0.0/0 tcp:80 http-server
default-allow-https default 0.0.0.0/0 tcp:443 https-server
default-allow-icmp default 0.0.0.0/0 icmp
default-allow-internal default 10.128.0.0/9 tcp:0-65535,udp:0-65535,icmp
default-allow-rdp default 0.0.0.0/0 tcp:3389
default-allow-ssh default 0.0.0.0/0 tcp:22
This is the output for: ps aux | grep ssh
root 29 0.0 0.4 55184 2860 ? Ss 11:26 0:00 /usr/sbin/sshd -p 22 -o AuthorizedKeysCommand=/google/devshell/authorized_keys.sh -o AuthorizedKeysCommandUser=root
root 183 0.0 0.9 82692 5940 ? Ss 11:26 0:00 sshd: fbeshox [priv]
fbeshox 218 0.0 0.7 82692 4424 ? S 11:26 0:00 sshd: fbeshox@pts/0
fbeshox 522 0.0 0.3 12728 2200 pts/1 S+ 12:12 0:00 grep ssh
Here are the verbose results of the ssh connection attempt.
ssh -i .ssh/keyname username@130.211.53.51 -vvv
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/xxxx/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: /etc/ssh_config line 102: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 130.211.53.51 [130.211.53.51] port 22.
debug1: connect to address 130.211.53.51 port 22: Connection refused
ssh: connect to host 130.211.53.51 port 22: Connection refused
It is possible that sshguard, a security tool installed on Ubuntu by default, is interfering with your connection. Basically sshguard might have incorrectly decided that your IP address is 'attacking' your instance and blocked the IP.
If you can log in from a different location, such as the Web SSH provided by the Cloud Console, try using sudo iptables -S to see if there are any firewall rules on the instance (different from the GCE firewall) created by sshguard. If so, try disabling sshguard or adding your IP address to the whitelist (http://www.sshguard.net/docs/whitelist/).
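(A sketch of what that might look like; the chain name "sshguard" is the tool's default and may differ on your instance:)
sudo iptables -S                  # dump all rules and look for a DROP matching your IP
sudo iptables -L sshguard -n -v   # inspect the sshguard chain, if present
sudo iptables -F sshguard         # flush it temporarily while you set up the whitelist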
I know this is an old question, but today I had a similar issue and the problem was very simple: the IP of the instance changed after the restart, so I had to update the ssh connection string accordingly.
This issue often happens after you change your default zone or region.
Then you must update the ssh keys in your metadata by running:
sudo gcloud compute config-ssh
You can also see the changes in the web interface under Compute Engine | Metadata | SSH Keys.
Try to SSH into the instance with a different username. Google Compute is a bit shaky at times. Try to SSH into the instance using the VM instances page in Compute Engine. If SSH takes too much time and refuses the connection, then log in with a different username; you can do that using the settings icon in the top right corner of the SSH window. If none of this goes well, I would advise you to create one more instance, as it has also been my experience that Google Compute Engine instances are not stable in terms of SSH accessibility and tend to create problems. It's better to use PuTTY as a client to SSH into Compute Engine than the SSH terminals Google provides. Let me know if that helps you :)
I figured out that my problem arose after detaching a disk from the console while the machine was stopped with that disk still mounted. First unmount the disk, then detach it. I have never seen the problem again.
Also make sure you did not recursively modify permissions for the /etc folder.
For example:
chmod -R 775 /etc
This will prevent you from logging back into the VM, even from the Web console and gcloud cli.
Instead, modify permissions at a more granular level, e.g. /etc/nginx.
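For example, something along these lines (assuming nginx configuration is what actually needs opening up):
sudo chmod -R 775 /etc/nginx   # scoped to one application's directory, leaving the rest of /etc untouched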
I had the same problem and solved it by first connecting to the VM via browser SSH (from the "VM instances" web overview), which also failed, and then choosing the offered option of retrying without Cloud Identity-Aware Proxy, which worked.
Afterwards all SSH-connections work again (both in browser and local shell).
I'm using msysgit (Git-1.7.11-preview20120710.exe). I tried GitStack (a git server) on Win8, and after uninstalling it because it didn't work out, I can no longer access port 22. Here's what msysgit throws at me every time I try to push/clone/etc.:
Welcome to Git (version 1.7.11-preview20120710)
Run 'git help git' to display the help index.
Run 'git help <command>' to display help for specific commands.
Alain@ALAIN-PC /d/Sync/Web/Work/Current/10042012_Madmen
$ cd madmen-intellectuals/
Alain@ALAIN-PC /d/Sync/Web/Work/Current/10042012_Madmen/madmen-intellectuals (dev)
$ git push -v
Pushing to git@barrelstrengthdesign.beanstalkapp.com:/madmen-intellectuals.git
ssh: connect to host barrelstrengthdesign.beanstalkapp.com port 22: Bad file number
fatal: The remote end hung up unexpectedly
Alain@ALAIN-PC /d/Sync/Web/Work/Current/10042012_Madmen/madmen-intellectuals (dev)
$ ssh -Tv github.com
OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
debug1: Connecting to github.com [207.97.227.239] port 22.
debug1: connect to address 207.97.227.239 port 22: Attempt to connect timed out without establishing a connection
ssh: connect to host github.com port 22: Bad file number
This wasn't happening before installing GitStack, and I really haven't done anything else in the meantime. Any suggestions?
Notes:
While installing GitStack, I didn't check the "Install msysgit" option, because I already had it installed. But the application didn't work out of the box; I assume that's the reason. I uninstalled it immediately after that.
I've rebooted and disabled my firewall several times. I checked port 22 with Nmap and it says "filtered" (see the sketch after these notes).
I already saw Git SSH error: "Connect to host: Bad file number", but this is not the solution I'm looking for. I want to go back to the previous state without the error. Besides, I'm not only using github.
It worked before installing GitStack.
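For reference, the Nmap check mentioned above was along these lines (a sketch; the hostnames are the ones from the push output):
nmap -p 22 barrelstrengthdesign.beanstalkapp.com github.com   # port 22 shows up as filtered here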
Thanks!
GitStack does not yet support Windows 8, as this system has not been released yet.
GitStack creates a restore point before its installation. You should try to revert your computer to that previous state.
Also make sure to double-check that your firewall does not block port 22. GitStack does not use or modify port 22.