When I try to access the VM through SSH, it gets stuck at transferring SSH keys to the server.
I have already checked the solutions mentioned at:
https://groups.google.com/forum/#!topic/gce-discussion/zJS0qFFQYlM
but they didn't work.
The following messages are shown on the screen when I try to access the VM through SSH:
The key transfer to project metadata is taking an unusually long time. Transferring instead to instance metadata may be faster, but will transfer the keys only to this VM. If you wish to SSH into other VMs from this VM, you will need to transfer the keys accordingly.
Click here to transfer the key to instance metadata. Note that this setting is persistent and needs to be disabled in the Instance Details page once enabled.
You can drastically improve your key transfer times by migrating to OS Login.
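For reference, the OS Login migration that the last message suggests is a single metadata change; a minimal sketch from Cloud Shell (the instance name and zone below are placeholders):

# Enable OS Login project-wide
$ gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE

# Or enable it for a single instance only
$ gcloud compute instances add-metadata my-instance --zone=us-central1-a \
    --metadata enable-oslogin=TRUE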
Related
I am using a VM instance with 30 GB of disk space and cloning my projects into it. While I was cloning one project, it showed "disk is out of space", so I shut down all processes running on the instance, closed all SSH connections, and stopped the instance. Then I went to Disks and increased the size from 30 GB to 35 GB. Now, when I start a new SSH connection to this instance, it does not connect. Sometimes it keeps trying but never connects; sometimes it shows an error:
Connection via Cloud Identity-Aware Proxy Failed
Code: 4003
Reason: failed to connect to backend
You may be able to connect without using the Cloud Identity-Aware Proxy.
I tried to connect from Cloud Shell, and it tells me that no SSH keys are present, and ~/.ssh/authorized_keys is empty. I tried to copy the key from google_compute_engine.pub to authorized_keys, but it didn't work. I also tried creating a new key and then copying it to authorized_keys; that didn't work either. While copying these keys, I got a "permission denied (publickey)" error, and connecting from Cloud Shell shows the same error.
I am getting an error while trying to SSH into my gcloud instance.
I have removed the old SSH key and generated a new one, but when I try to connect the problem remains.
Please share your suggestions.
Check whether port 22 is open in the firewall for that specific instance; you can follow this document to manage your firewall rules. You can also try connecting via the serial console instead. The issue you are facing could arise for many different reasons, so it is worth trying several SSH troubleshooting steps.
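For example, a quick check and fix from Cloud Shell (the rule and instance names below are placeholders, and the network is assumed to be "default"):

# List firewall rules and confirm tcp:22 is allowed
$ gcloud compute firewall-rules list

# If no SSH rule exists, create one
$ gcloud compute firewall-rules create allow-ssh \
    --network=default --allow=tcp:22 --source-ranges=0.0.0.0/0

# Fall back to the serial console if SSH still fails
# (interactive serial console access must be enabled on the instance)
$ gcloud compute connect-to-serial-port my-instance --zone=us-central1-a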
If you created the new SSH key properly, then check whether you added the key to your instance or project-wide metadata. This article is a good read.
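A sketch of adding a key via metadata from Cloud Shell (the instance name, zone, file name, and username below are placeholders):

# keys.txt must contain the full desired list, one "username:key" line per entry,
# e.g.  myuser:ssh-ed25519 AAAA... myuser
# (add-metadata replaces the existing ssh-keys value rather than appending)
$ gcloud compute instances add-metadata my-instance --zone=us-central1-a \
    --metadata-from-file ssh-keys=keys.txt

# Or add it project-wide instead
$ gcloud compute project-info add-metadata --metadata-from-file ssh-keys=keys.txt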
I've had a Google Cloud instance for some time and I used to SSH into it without any problem. At some point I had to remove the additional disk, on which I just had some files. Now it doesn't allow me to SSH into it anymore. Could the two things be linked? The firewall is set to default and it has the rule to allow SSH from anywhere.
Any advice?
You can try rebooting your cloud instance. What error do you get?
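If you can no longer get a shell on the VM, a reset from Cloud Shell looks like this (instance name and zone are placeholders):

$ gcloud compute instances reset my-instance --zone=us-central1-a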
I am using the AWS Java SDK to launch EC2 instances (running Ubuntu 12.04) and run a distributed tool on them. The tool uses OpenMPI for message passing between the nodes, and OpenMPI uses SSH to connect the nodes to each other.
The problem is that the EC2 instances don't authenticate each other for SSH connections by default. This tutorial shows how to set up SSH by generating keys and adding them to the nodes. However, when I tried to add the generated key to the slaves using the command
$ scp /home/mpiuser/.ssh/id_dsa.pub mpiuser@slave1:.ssh/authorized_keys
I still got "permission denied". Also, after generating new keys, I was no longer able to log in using the ".pem" key that I got from Amazon.
I am not experienced with SSH keys, but I would like some way of configuring each EC2 instance (when it is first created) to authenticate the others, for example by copying a key onto each of them. Is this possible, and how could it be done?
P.S.: I can connect to each instance once it is launched and can execute any commands on them over SSH.
I found the solution: I added the Amazon private key (.pem) to the image (AMI) that I use to create the EC2 instances, and I changed the /etc/ssh/ssh_config file by adding a new identity file:
IdentityFile /path/to/the/key/file
This made SSH recognize the .pem private key when it tries to connect to any other EC2 instance created with the same key.
I also changed StrictHostKeyChecking to no, which suppressed the "authenticity of host xxx can't be established" message that otherwise requires user interaction before connecting to that host.
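Put together, the /etc/ssh/ssh_config additions described above look like this sketch (the key path is a placeholder; note that disabling host-key checking also removes protection against man-in-the-middle attacks):

# /etc/ssh/ssh_config
Host *
    IdentityFile /path/to/the/key/file
    StrictHostKeyChecking no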
I was going to use rsync, but rsync over SSH needs the private key to be present on the second EC2 instance. I'm concerned about copying my private SSH key to the server. That can't be safe, right?
Is there another possibility, e.g. somehow getting authentication via my computer? If it's only a little auth check at the beginning of each sync, I don't mind that.
Or can I securely sync files between EC2 instances without the private key?
Thanks for your input,
MrB
You needn't use your EC2 keys to set up SSH between the two EC2 instances. Look at this guide: http://ask-leo.com/how_can_i_automate_an_sftp_transfer_between_two_servers.html
A simple outline of the process: let's say you want to transfer files from Server1 to Server2. You create a new key for your user on Server1 (note this is different from the key you downloaded to access your EC2 instance, Server1 in this case). Then load the public part into Server2's authorized_keys and you should be able to set up SSH, as sketched below.
If the user that the rsync process is going to run under is not your user, then you will have to set up SSH keys for that user instead.
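A minimal sketch of that outline (hostnames, usernames, and paths are all placeholders):

# On Server1: generate a dedicated key pair for the sync (empty passphrase for unattended runs)
$ ssh-keygen -t ed25519 -f ~/.ssh/sync_key -N ""

# Append the public half to Server2's authorized_keys, authenticating this one time as you normally would
$ cat ~/.ssh/sync_key.pub | ssh user@server2 'cat >> ~/.ssh/authorized_keys'

# From then on, rsync from Server1 to Server2 uses only the new key; the EC2 .pem stays off both servers
$ rsync -avz -e "ssh -i ~/.ssh/sync_key" /source/dir/ user@server2:/dest/dir/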
HTH
Just create a snapshot of the volume that contains your modified files, then detach the outdated volume from your instance and attach a new volume created from that snapshot in its place.
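A rough AWS CLI sketch of that swap (all IDs, the availability zone, and the device name are placeholders):

# Snapshot the volume that holds the modified files
$ aws ec2 create-snapshot --volume-id vol-0aaa --description "modified files"

# Create a fresh volume from the snapshot in the instance's availability zone
$ aws ec2 create-volume --snapshot-id snap-0bbb --availability-zone us-east-1a

# Detach the outdated volume and attach the new one
$ aws ec2 detach-volume --volume-id vol-0ccc
$ aws ec2 attach-volume --volume-id vol-0ddd --instance-id i-0eee --device /dev/sdf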