Google Compute Engine SSH from browser stopped working Error 13 - ssh

A compute instance I had running stopped working, and I am no longer able to SSH to it from the browser. When I try, it hangs forever, and eventually I get this error message:
You cannot connect to the VM instance because of an unexpected error.
Wait a few moments and then try again. (#13)
I looked here for common issues. I made a snapshot and tried recreating the instance with a larger disk, in a different region, and with a bigger compute instance, but I was still unable to connect. When other users try to connect, they have the same problem. I'm using a standard container, so I expect the Google daemon should be running.
This instance was collecting tweets and writing output to GCS regularly. Since SSH stopped working, the instance has also stopped writing output.
Does anyone have any idea what could have gone wrong?

I would also suggest checking the Serial Console of the machine to see if there are any messages which provide clues. For example, if the boot disk has run out of space (which can prevent SSH connectivity), there will be messages in the Serial Console indicating this.
You could also try connecting to the machine via the Serial Console to troubleshoot the issue by following the advice here.
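For example, you can dump the serial console output and, once serial console access is enabled, connect to it with the following commands (INSTANCE_NAME and ZONE are placeholders):
$ gcloud compute instances get-serial-port-output INSTANCE_NAME --zone ZONE
$ gcloud compute connect-to-serial-port INSTANCE_NAME --zone ZONE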
When you try to SSH into the instance from Cloud Shell, for example with the following command, the output should provide some clues as to why you cannot SSH into the machine:
$ gcloud compute ssh INSTANCE_NAME --zone ZONE

If you are on a VPC network, check for a network tag that allows SSH traffic and apply that tag to your instance, because it could be the firewall rules that are blocking your instance from accepting the SSH connection.
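As a sketch, assuming the default network and an example tag name of allow-ssh, the rule and tag could be set up like this:
$ gcloud compute firewall-rules create allow-ssh --network default --allow tcp:22 --target-tags allow-ssh
$ gcloud compute instances add-tags INSTANCE_NAME --tags allow-ssh --zone ZONE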

Related

Is there a way of using dask jobqueue over ssh

Dask jobqueue seems to be a very nice solution for distributing jobs to PBS/Slurm managed clusters. However, if I'm understanding its use correctly, you must create an instance of PBSCluster/SLURMCluster on the head/login node. Then, on the same node, you can create a client instance and start submitting jobs to it.
What I'd like to do is let jobs originate on a remote machine, be sent over SSH to the cluster head node, and then get submitted to dask-jobqueue. I see that Dask has support for sending jobs over SSH to a distributed.deploy.ssh.SSHCluster, but this seems to be designed for immediate execution after SSH, as opposed to taking the further step of putting the job into dask-jobqueue.
To summarize, I'd like a workflow where jobs go remote --ssh--> cluster-head --slurm/jobqueue--> cluster-node. Is this possible with existing tools?
I am currently looking into this. My idea is to set up an SSH tunnel with paramiko and then use Pyro5 to communicate with the cluster object from my local machine.
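If you only need the tunnel part, a plain OpenSSH local port forward (instead of paramiko) is one way to sketch it, assuming the scheduler on the head node listens on port 8786 (user and host names are placeholders):
$ ssh -N -L 8786:localhost:8786 user@cluster-head
With the tunnel up, a dask.distributed Client pointed at tcp://localhost:8786 on the local machine could then reach the scheduler started by dask-jobqueue on the head node.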

Google-Compute-Engine Virtual Machine Instance: Unable to login/SSH the VM instance after adding a disk

GCP VM instance: OS: Ubuntu 18.04 (Bionic), disk size: 10 GB. Later added another disk of 10 GB.
While working on the GCP VM instance, I was facing a 'no disk space left' issue. Then I created another disk of 10 GB and added it to this GCP VM instance as described in https://cloud.google.com/compute/docs/disks/add-persistent-disk?&_ga=2.217520662.-1058595688.1590395241#formatting
Now, I exited the GCP VM instance and stopped it.
Later on, when I restarted the GCP VM instance, I was unable to connect. I tried to connect using the SSH connection option available on GCP, PuTTY, WinSCP, and telnet, but I am still unable to connect.
My understanding is that some services might have stopped on the GCP VM instance. Is there a way to check whether the services are running or not on a GCP VM instance? If yes, then how?
If you think there is some other issue preventing connection to the GCP VM instance, then please let me know.
There may be several reasons:
Firewall rules - check them to be sure nothing blocks SSH traffic to your machine.
Have a look at the serial console output - you can do it via the console GUI or with gcloud compute instances get-serial-port-output instance_name --zone=my_zone.
If your drive gets full, you may not be able to log in at all.
Adding another persistent disk won't help if the first one is full.
You can increase its size though - also via the console or with gcloud compute disks resize example-disk-1 --size=11GB - this will add 1 GB more, and if it's a matter of disk space it should allow you to log in.
If you're still not able to log in, try enabling interaction with the serial console (gcloud compute instances add-metadata instance-name --metadata serial-port-enable=TRUE) and connect to it with gcloud compute connect-to-serial-port instance-name, since this is the most foolproof method if everything else fails.
If you're able to connect via serial console check if the SSH service is listening:
sudo service ssh status - if not, start it with sudo service ssh start and watch for any errors.
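If you do get in over the serial console and the root cause turns out to be a full boot disk, a quick check and cleanup could look like this (assuming a Debian/Ubuntu image; these commands are only examples):
$ df -h /
$ sudo apt-get clean
$ sudo journalctl --vacuum-size=100M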
A similar case was also discussed here.

Google Cloud SSH Server: We are unable to connect to the VM on port 22

I am very new to Google Cloud Platform and was trying to restart my VM instance. I entered $ sudo poweroff into my SSH console, as suggested by https://cloud.google.com/compute/docs/instances/stopping-or-deleting-an-instance#stop_an_instance
and the console did not return anything. Afterwards I started the VM instance again and the SSH console started returning the message "We are unable to connect to the VM on port 22.".
I have a snapshot of my root disk, but I would really like my instance to be running properly again.
When you run either sudo poweroff or sudo shutdown -h now, your VM will shut down right away. This involves flushing any in-memory buffers for disks back to the disks so that you do not lose any unflushed data.
Since you're initiating this command over an SSH session, you will not be able to see any shutdown messages over SSH while the instance is shutting down (since the network service on the VM will also be brought down).
You can use gcloud compute instances list or gcloud compute instances describe VM_NAME commands to find out the status of the VM.
If it says RUNNING, the instance is running and you will be able to SSH to the VM. If it says TERMINATED, the instance was shut down/terminated and you will not be billed for it.
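For example, to check just the status and then start the VM again if it was terminated (VM_NAME and ZONE are placeholders):
$ gcloud compute instances describe VM_NAME --zone ZONE --format='value(status)'
$ gcloud compute instances start VM_NAME --zone ZONE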

gcloud ssh jobs when internet is interrupted

I'm using a Google Cloud instance for one of my long-duration jobs and using gcloud compute ssh 'instance name' to connect from a terminal on my Ubuntu PC.
All goes well, but the job takes a few hours to complete, and when my PC loses its network connection the shell gets killed, and hence the job as well.
I'm wondering if there is a way for the job to continue on Google Cloud when the SSH session from my PC gets killed because of network unavailability?
Thanks
The best answer here is to use mosh to connect to your instance. In order to do this, you will first need to install mosh on your instance via the normal method for your distribution. Second, you will need to modify the firewall that Google runs to allow the required UDP port through:
you#local-pc:~$ gcloud compute firewall-rules create default-allow-mosh --allow=udp:60001
In this case we use the name 'default-allow-mosh', following the network-action-what nomenclature used in the rest of the firewall rules, and tell it to allow the UDP port that mosh needs to have open.
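After that, a rough usage sketch, assuming a Debian/Ubuntu image on the VM and its external IP at hand (EXTERNAL_IP and your-user are placeholders), might be:
you#instance:~$ sudo apt-get install -y mosh
you#local-pc:~$ mosh -p 60001 your-user@EXTERNAL_IP
The -p 60001 flag pins the mosh server to the single UDP port opened by the firewall rule above, and the session will survive network interruptions on the client side.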

Keeping access to a Fedora Amazon EC2 instance

I run a Fedora instance in Amazon EC2. I can access and work on it perfectly via PuTTY.
I also set 'Seconds between keepalives' to 1 in PuTTY so as not to lose the connection due to inactivity.
Nevertheless, if a network or power failure happens on my local computer, the PuTTY connection shuts down, so the session logs off and the processes running on the instance stop.
Can anybody help me in keeping a session alive and being able to connect/disconnect to it whenever I want?
Use the screen command.
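A minimal sketch of that workflow (the session name 'job' is just an example):
$ screen -S job    # start a named session and launch your long-running command inside it
(detach with Ctrl-A then D; the job keeps running on the instance even if the connection drops)
$ screen -r job    # reattach later from a new PuTTY/SSH session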