I am not able to access a Google Compute Engine instance using ssh or gcutil ssh. I have tried adding my local machine's keys to the metadata and to the SSH keys of the specific instance. How can I get access using an SSH client?
These are the links to the guides I followed:
Google Compute Engine - troubleshooting SSH "Connection refused"
https://cloud.google.com/compute/docs/instances/connecting-to-instance#standardssh
The official way is gcloud compute ssh [instance-name]
If this isn't what you did to get the error, let me know and I'll try to help.
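If you want to connect with a plain SSH client instead of gcloud, a minimal sketch (assuming gcloud has already generated ~/.ssh/google_compute_engine for you, and using USERNAME and EXTERNAL_IP as placeholders for your GCE username and the instance's external IP) would be:
ssh -i ~/.ssh/google_compute_engine USERNAME@EXTERNAL_IP
The public half of that key must already be present in the instance or project metadata for this to work.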
Just to let you know, Google Compute Engine recommends that all users transition from gcutil to the gcloud compute tool. gcloud compute is a unified command-line tool that features a number of improvements over gcutil.
To connect to Google Compute Engine you need to run the command below in your Cloud Shell:
gcloud compute ssh [INSTANCE_NAME]
Here is the public documentation. If you want to SSH from inside an instance, just run the same command above.
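If the instance is not in your default zone or project, you may also need to pass those explicitly; for example (my-instance, us-central1-a and my-project are placeholder names):
gcloud compute ssh my-instance --zone us-central1-a --project my-project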
The trick here is to use the -C (comment) parameter to specify your GCE userid.
If the Google user who owns the GCE instance is myname@gmail.com (which you will use as your login userid), then generate the key pair with (for example)
ssh-keygen -b 521 -t ecdsa -C myname -f mykeypair
When you paste mykeypair.pub into the instance's public key list, you should see "myname" appear as the userid of the key.
Setting this up will let you use ssh, scp, etc from your command line.
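Putting it together, a short sketch (203.0.113.10 is a placeholder for your instance's external IP) would be:
ssh-keygen -b 521 -t ecdsa -C myname -f mykeypair
ssh -i mykeypair myname@203.0.113.10
scp -i mykeypair somefile.txt myname@203.0.113.10:/home/myname/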
I have installed Ansible on one of my GCE Debian VM instances (1). Now I want to connect to another GCE Debian VM instance (2).
I have generated the public key on instance 1 and copied the .pub key manually to the authorized_keys file of instance 2.
But when I try to SSH from 1 to 2, it gives permission denied.
Is there any other way around this? I am a little new to this and trying to learn.
Is there any step-by-step guide available? Also, what is the exact IP address to SSH to? Will it be the internal IP or the external IP assigned by GCE when the instance is started?
I'm an Ansible user too and I manage a set of compute engine servers. My scenario is pretty close to yours so hopefully this will work for you as well. To get this to work smoothly, you just need to realise that ssh public keys are metadata and can be used to tell GCE to create user accounts on instance creation.
SSH public keys are project-wide metadata
To get what you want, the SSH public key should be added to the Metadata section under Compute Engine. My keys look like this:
ssh-rsa AAAAB3<long key sequence shortened>Uxh bob
Every time I get GCE to create an instance, it creates /home/bob and puts the key into .ssh/authorized_keys with all of the correct permissions set. This means I can ssh into that server if I have the private key. In my scenario I keep the private key in only two places: LastPass and my .ssh directory on my work computer. While I don't recommend it, you could also copy that private key to the .ssh directory on each server that you want to ssh from, but I really recommend getting to grips with ssh-agent instead.
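If you prefer the command line over the console for this step, one way to set the project-wide ssh-keys metadata (a sketch; ssh_keys.txt is a hypothetical file containing lines of the form USERNAME:KEY, e.g. bob:ssh-rsa AAAA... bob) is:
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=ssh_keys.txt
Note that this replaces the current ssh-keys value rather than appending to it, so include any existing keys you want to keep in the file.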
Getting it to work with Ansible
The core of this is to tell Ansible to skip host key checking and to connect as the user specified in the key (bob in this example). To do that you need to set some SSH options when calling ansible:
ansible-playbook --ssh-common-args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -u bob
Now Ansible will connect to the servers mentioned in your playbook and try to use the local private key to negotiate the SSH connection, which should work since GCE will have set things up for you when the VM is created. Also, since host key checking is off, you can rebuild the VM as often as you like.
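For reference, a fuller invocation might look like this (hosts.ini and site.yml are placeholder names for your inventory and playbook):
ansible-playbook -i hosts.ini -u bob --ssh-common-args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' site.yml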
Saying it again
I really recommend that you run ansible from a small number of secure computers and not put your private key onto cloud servers. If you really need to ssh between servers, look into how ssh-agent passes identity around. A good place to start is this article.
Where did you say the metadata was?
I kind of glossed over that bit but here's an image to get you started.
From there you just follow the options for adding a public key. Don't forget that this works because the third part of the key is the username that you want GCE and Ansible to use when running plays.
It's quite simple: if you have two instances in Google Cloud Platform, they automatically have the guest environment installed (including the gcloud command line), and with it you can SSH to any instance inside your project.
Just run the following command from inside your instance A to reach instance B:
[user@Instance(1)]$ gcloud compute ssh Instance(2) --zone [zone]
That's it. If it's not working, let me know, and verify that your firewall rules allow internal traffic.
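To check the firewall side, you can list the rules in the project (in the default network it is normally the default-allow-internal rule that permits instance-to-instance SSH):
gcloud compute firewall-rules list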
We have a user who is allowed to SSH into a VM on the Google Cloud Platform.
His key is added to the VM and he can SSH using
gcloud compute ssh name-of-vm
However, connecting in this way will always have gcloud try to update the project-wide metadata:
Updating project ssh metadata...failed
It fails because he only has rights for accessing and administering this VM.
However, it's very annoying that every time he connects this way he has to wait for GCP to try to update the metadata (which is not allowed) and then check the SSH keys on the machine.
Is there a flag in the command to skip checking/updating the project-wide SSH keys?
Yes, we can 'block project-wide SSH keys' on the instance, but that would mean that other project admins cannot log in anymore.
I've also tried to minimise access for this user.
But ideally, what rights should he have if he is allowed to SSH to the machine, start and stop the instance, and store data in a bucket?
What you can do is enable OS Login for all the users you need, including admins; enabling OS Login on instances disables metadata-based SSH key configuration on those instances.
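A sketch of turning it on (INSTANCE_NAME is a placeholder; the second command enables it for the whole project instead):
gcloud compute instances add-metadata INSTANCE_NAME --metadata enable-oslogin=TRUE
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE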
The role to start, stop, and connect via SSH to an instance would be roles/compute.instanceAdmin (take into account that this role is currently in beta). You can check here a list of the available Compute Engine roles so you can choose the one that best suits your needs.
To store data in a bucket, I think the most suitable role is roles/storage.objectCreator, which allows users to create objects but not to delete or overwrite them.
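A sketch of granting those roles at the project level (PROJECT_ID and user@example.com are placeholders; for the bucket you may prefer to bind the role on the bucket itself):
gcloud projects add-iam-policy-binding PROJECT_ID --member=user:user@example.com --role=roles/compute.instanceAdmin
gcloud projects add-iam-policy-binding PROJECT_ID --member=user:user@example.com --role=roles/storage.objectCreator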
I found this solution very useful.
Create a file called config under ~/.ssh
Add the following to it. Change nickname to anything you prefer, $IP_OF_INSTANCE to the public IP of the instance, and $USER to your machine username.
Host nickname
    HostName $IP_OF_INSTANCE
    Port 22
    User $USER
    CheckHostIP no
    StrictHostKeyChecking no
    IdentityFile ~/.ssh/google_compute_engine
Now, you can simply SSH using:
ssh nickname
Note that the path on Linux and macOS is ~/.ssh, while the path on Windows is something like C:\Users\<user>\.ssh
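The same alias works for anything that reads your SSH config; for example, copying a file over:
scp /path/to/local_file nickname:~/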
Re: #1: There's no flag on the command to change this behavior on a per-command level instead of a per-instance level ('block-project-ssh-keys', as you mentioned) but you could file a FR at https://issuetracker.google.com/savedsearches/559662.
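For completeness, the per-instance setting mentioned above can also be set from the command line (INSTANCE_NAME is a placeholder), but as noted it blocks all project-level keys on that instance:
gcloud compute instances add-metadata INSTANCE_NAME --metadata block-project-ssh-keys=TRUE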
After changing two passwords (root and the default user), we suddenly noticed that the "in browser" link on Google Compute Engine fails to connect via SSH.
Strangely, however, if we use the SSH command from the command line that Google provides (i.e. $ gcloud compute ssh VM-NAME --zone VM-ZONE) - SSH works.
It appears SSH is working - but the "in browser" SSH link no longer connects. What might have gone wrong and how do we fix this?
ADDENDUM:
Of note, a commenter below suggests it is not related to passwords but purely to SSH keys, so it looks like the answer to this question might rest on whether there is a way to regenerate SSH keys on GCE instances. We are searching. If anyone knows how to regenerate SSH keys for GCE, please post.
GCE VMs, by default, don't allow SSH connections with a clear-text password: they use keys instead. You can specify approved keys during VM instantiation, or at a later time, but one that is always present is the key for the user account you used when creating the machine.
As long as you haven't modified /etc/ssh/sshd_config, this should continue to be the case. Either way, one more option you can use to connect to your instance via SSH is to run the following command:
$ gcloud compute ssh VM-NAME --zone VM-ZONE
while logged in with your authorized user account.
ADDENDUM - In lieu of regenerating previous keys, you can add additional, locally generated SSH keys at both the project and the VM level. The first applies to all VMs and grants access to project owners and editors, while the second applies only to the VM in question. Both methods add the SSH keys you enter to the metadata server, from which they get uploaded to the VM(s) before the SSH connection.
You can do this from the Developers Console:
project-level SSH keys - go to your project -> Compute -> Compute Engine -> Metadata -> "SSH KEYS" (top of the screen) -> click on "Edit"
VM-level SSH keys - go to your project -> Compute -> Compute Engine -> VM instances -> click on the instance name -> "SSH keys" section (scroll down) -> click "Add SSH key"
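The gcloud equivalents, if you prefer the command line, look roughly like this (keys.txt is a hypothetical file of USERNAME:KEY lines; note that --metadata-from-file replaces the current ssh-keys value rather than appending to it):
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=keys.txt
gcloud compute instances add-metadata VM-NAME --metadata-from-file ssh-keys=keys.txt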
We discovered the cause was a chown command we had executed on a directory for the primary user that Google creates on the Google Compute Engine instance.
By reversing that chown back to the Google created user, Google's in-browser SSH began working again.
We chowned the entire user directory and also an SSH config file back to the Google-created user using:
chown -R user_name_com /home/user_name_com
and also on this file
chown user_name_com /etc/ssh/ssh_config
where user_name_com was the username derived from our Gmail email address.
I'm trying to copy files from one GCE instance to another. Instead of using Google Cloud Storage with gsutil (which sometimes hangs for me), I'd like to try using gcutil pull.
I'm trying to do:
gcutil pull someInstance /some/file/path/here .
It works; however, the command asks me to enter a new passphrase (for SSH, I assume). Since I'm running this from a script, is there a way to default this passphrase to empty?
Maybe try to use passwordless SSH:
http://www.linuxproblem.org/art_9.html
I have used gcutil push this way to send files from a master node to slave nodes (Hadoop/Spark).
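One way to avoid the prompt, following the passwordless-SSH idea above (a sketch, assuming gcutil is using the usual ~/.ssh/google_compute_engine key), is to give that key an empty passphrase:
ssh-keygen -t rsa -N "" -f ~/.ssh/google_compute_engine    # create it with no passphrase
ssh-keygen -p -N "" -f ~/.ssh/google_compute_engine        # or strip the passphrase from an existing key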
I installed the Eucalyptus FastStart 3.4.2 ISO from here http://downloads.eucalyptus.com/software/faststart/ and then installed Cloud-in-a-Box. After creating an m1.small instance, I am trying to SSH into the created instance by its IP. The VM is running and I can ping it. When ssh -i euca-demo.private 10.5.20.224 is run, it most probably reaches the VM but asks for a passphrase, which I don't know because the image was provided with the installation that I used to create the instance. The message is
Enter passphrase for key 'euca-demo.private':
How can I get in without knowing the passphrase? How can I find out the passphrase?
Can you try to log in as ec2-user?
$ ssh -i euca-demo.private ec2-user@10.5.20.224
As you have said in your comment, it seems that the public key of euca-demo.private is associated with the cloud-user user in the VM. So you can SSH via this user only.
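So, per that comment, the command to try would be:
$ ssh -i euca-demo.private cloud-user@10.5.20.224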