I have installed Ansible on one of my GCE Debian VM instances (Instance 1). Now I want to connect to another GCE Debian VM instance (Instance 2).
I have generated a key pair on Instance 1 and manually copied the .pub key into the authorized_keys file of Instance 2.
But when I try to SSH from 1 to 2, it gives "permission denied".
Is there any other way around this? I am a little new to this and trying to learn.
Is there a step-by-step guide available? And what is the exact IP address to SSH to: the internal IP, or the external IP assigned by GCE when the instance is started?
I'm an Ansible user too and I manage a set of compute engine servers. My scenario is pretty close to yours so hopefully this will work for you as well. To get this to work smoothly, you just need to realise that ssh public keys are metadata and can be used to tell GCE to create user accounts on instance creation.
SSH public keys are project-wide metadata
To get what you want, the SSH public key should be added to the Metadata section under Compute Engine. My keys look like this:
ssh-rsa AAAAB3<long key sequence shortened>Uxh bob
Every time I get GCE to create an instance, it creates /home/bob and puts the key into .ssh/authorized_keys with all of the correct permissions set. This means I can ssh into that server if I have the private key. In my scenario I keep the private key in only two places: LastPass and my .ssh directory on my work computer. While I don't recommend it, you could also copy that private key to the .ssh directory on each server that you want to ssh from, but I really recommend getting to grips with ssh-agent.
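If you prefer the command line over the console, here is a minimal sketch of publishing such a key as project metadata with gcloud (the username bob and the key path are just examples; note this sets the whole ssh-keys entry, so include any existing keys in the file):
# Each line of the ssh-keys value has the form "USERNAME:ssh-rsa AAAA... COMMENT"
printf 'bob:%s\n' "$(cat ~/.ssh/id_rsa.pub)" > /tmp/gce-ssh-keys.txt
gcloud compute project-info add-metadata \
    --metadata-from-file ssh-keys=/tmp/gce-ssh-keys.txt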
Getting it to work with Ansible
The core of this is to tell Ansible not to do host key checking and to connect as the user specified in the key (bob in this example). To do that you need to set some SSH options when calling ansible-playbook:
ansible-playbook --ssh-common-args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -u bob
Now Ansible will connect to the servers mentioned in your playbook and try to use the local private key to negotiate the ssh connection which should work as GCE will have set things up for you when the VM is created. Also, since hostname checking is off, you can rebuild the VM as often as you like.
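If you don't want to pass those flags on every run, the same behaviour can be set in an ansible.cfg next to your playbook; a sketch, reusing the user bob from above:
[defaults]
remote_user = bob
host_key_checking = False

[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no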
Saying it again
I really recommend that you run ansible from a small number of secure computers and not put your private key onto cloud servers. If you really need to ssh between servers, look into how ssh-agent passes identity around. A good place to start is this article.
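As a teaser, a minimal agent-forwarding sketch (the key path and host names are examples): the private key stays on your workstation and the agent answers the challenge for you when you hop to the second server.
eval "$(ssh-agent -s)"        # start an agent for this shell
ssh-add ~/.ssh/id_rsa         # load your private key into the agent
ssh -A bob@first-server       # -A forwards the agent to the remote host
ssh bob@second-server         # run on first-server; authenticates via the forwarded agent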
Where did you say the metadata was?
I kind of glossed over that bit, but in the Cloud Console you will find it under Compute Engine -> Metadata -> SSH Keys.
From there you just follow the options for adding a public key. Don't forget that this works because the third part of the key is the username that you want GCE and Ansible to use when running plays.
It's quite simple: if you have two instances in Google Cloud Platform, the guest environment (including the gcloud command line) is installed automatically, and with it you can SSH to any instance inside your project.
Just run the following command from inside instance A to reach instance B:
[user@Instance(1)]$ gcloud compute ssh Instance(2) --zone [zone]
That's it. If it's not working, let me know, and verify that your firewall rules allow internal traffic.
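If it still fails, a quick sketch for checking the firewall rules from the command line (rule names vary per project; default-allow-internal only exists on the default network):
gcloud compute firewall-rules list
gcloud compute firewall-rules describe default-allow-internal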
Related
I am doing a WordPress installation on GCP; this is done by deploying a WordPress image from the Marketplace:
After the successful deployment, I also assigned a static IP address to the instance:
I need to use FileZilla or WinSCP to connect to the instance, or at least SSH into the instance, in order to do some customization work.
Can anyone enlighten me on how to get it done? I see SSH keys created for some resource that was most likely deleted during my practice:
[UPDATE]:
I double checked the Firewall rules and see there is a rule allowing SSH:
[Update]
I tried SSH from the console (Compute Engine -> VM Instances) and got in somewhere; here is the detail:
Connected, host fingerprint: ssh-rsa 0 AD:45:62:ED:E3:71:B1:3B:D4:9F:6D:9D:08:16:0C:55:0F:C1:55:70:97:59:5E:C5:35:8E:D6:8E:E8:F9:C2:4A
Linux welynx-vm 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3 (2019-09-02) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
xenonxie@welynx-vm:~$ ls
xenonxie@welynx-vm:~$ pwd
/home/xenonxie
xenonxie@welynx-vm:~$
Where is the WordPress installation?
What is the fingerprint showing up here? The public key of the instance?
[SOLUTION]
Since the issue is now sorted out, I would like to add more specific screenshots here to help future readers with similar questions:
Where is the WordPress installation?
You would need to SSH into the instance to find out; there are a couple of ways to SSH into the instance:
1.1 Once you deploy a WordPress (or other Blog & CMS offering from the Marketplace), an instance is also created for that deployment; go to Compute Engine -> VM instances, and the new instance will be displayed there.
Note: You would need to change the IP address to "static", otherwise the IP changes when the instance is restarted.
1.2 On the far right, you can SSH into the instance directly.
SSH through a third-party tool like PuTTY:
set up a session with config like below:
2.1 Create a new key pair with PuTTY Keygen as below:
2.2 Save the public key in Compute Engine -> Metadata -> SSH Keys
2.3 Save the private key somewhere locally; you will need it later
With the public key on the instance, you can proceed to create a PuTTY session as below:
Note the IP address is the instance's static IP address; remember to include the user name
In the SSH tab, attach the private key saved earlier:
Now connect to the instance:
Similarly you can do this in WinSCP:
Big thanks to @gcptest_cloud. To make the post more intuitive and understandable for future readers, I recap it below:
Where is the WordPress installation?
The original WordPress installation is in /var/www/html (thank you @gcptest_cloud) on the instance created for the WordPress deployment.
How to access the wordpress installation?
You would need to SSH into the instance to find out; there are a couple of ways to SSH into the instance:
1.1 Once you deploy a WordPress (or other Blog & CMS offering from the Marketplace), an instance is also created for that deployment; go to Compute Engine -> VM instances, and the new instance will be displayed there:
Note: You would need to change the IP address to "static", otherwise the IP changes when the instance is restarted.
1.2 On the far right, you can SSH into the instance directly:
SSH through a third-party tool like PuTTY:
2.1 Create a new key pair with PuTTY Keygen as below:
2.2 Save the private key somewhere locally; you will need it later
2.3 Save the public key in Compute Engine -> Metadata -> SSH Keys
Note: You can also add the key manually by copying and pasting it into the .ssh folder in your home directory on the instance (see the sketch after this list)
With the public key on the instance, you can proceed to create a PuTTY session as below:
Note the IP address is the instance's static IP address; remember to include the user name
In the SSH tab, attach the private key saved earlier:
Now connect to the instance:
Similarly you can do this in WinSCP:
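For completeness, the manual copy-and-paste route mentioned in the note above looks roughly like this once you have any working session on the instance (the key line is a placeholder for your own public key):
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-rsa AAAA...your-public-key... your_username" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys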
Since this is a Marketplace image, make sure you have a firewall rule allowing port 22 and attach that rule's target tag to the network tags of your VM.
After that, click on the SSH button in the console, near the VM name. This is the simplest way to log in to GCP instances.
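A hedged example of creating such a rule and attaching the tag with gcloud (the rule name, tag, VM name and zone are placeholders):
gcloud compute firewall-rules create allow-ssh \
    --allow tcp:22 --source-ranges 0.0.0.0/0 --target-tags allow-ssh
gcloud compute instances add-tags VM_NAME --zone ZONE --tags allow-ssh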
We have a user that is allowed to SSH into a VM on the Google Cloud Platform.
His key is added to the VM and he can SSH using
gcloud compute ssh name-of-vm
However, connecting in this way always has gcloud try to update the project-wide metadata:
Updating project ssh metadata...failed
It fails because he only has rights for accessing and administrating this VM.
However, it's very annoying that every time he connects this way he has to wait for GCP to try to update the metadata (which is not allowed) and then check the SSH keys on the machine.
Is there a flag in the command to skip checking/updating project wide ssh keys?
Yes, we can 'block project-wide SSH keys' on the instance, but that would mean that other project admins cannot log in anymore.
I've also tried to minimise access to this user.
But, ideally, what rights should he have if he is allowed to SSH to the machine, start & stop the instance and store data into a bucket?
What you can do is enable OS Login (enable-oslogin) for all the users you need, including admins; enabling OS Login on instances disables metadata-based SSH key configuration on those instances.
The role to start, stop and connect via SSH to an instance would be roles/compute.instanceAdmin (take into account that this role is currently in beta). You can check here for a list of the available Compute Engine roles so you can choose the one that best suits your needs.
To store data into a bucket, I think the most suitable role is roles/storage.objectCreator, which allows users to create objects but not to delete or overwrite them.
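A sketch of wiring that up with gcloud (project ID, user, VM name and zone are placeholders):
# Enable OS Login on one instance (can also be set project-wide in metadata)
gcloud compute instances add-metadata VM_NAME --zone ZONE \
    --metadata enable-oslogin=TRUE
# Grant the user the suggested roles
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member user:someone@example.com --role roles/compute.instanceAdmin
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member user:someone@example.com --role roles/storage.objectCreator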
I found this solution very useful.
Create a file called config under ~/.ssh
Add the following to it. Change nickname to anything you prefer, $IP_OF_INSTANCE to the public IP of the instance, and $USER to your machine username.
Host nickname
    HostName $IP_OF_INSTANCE
    Port 22
    User $USER
    CheckHostIP no
    StrictHostKeyChecking no
    IdentityFile ~/.ssh/google_compute_engine
Now, you can simply SSH using:
ssh nickname
Note that the path on Linux and Mac is ~/.ssh while the path on Windows is something like C:\Users\<user>\.ssh
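A nice side effect is that the other OpenSSH tools pick up the same Host entry, for example:
ssh nickname uptime            # run a one-off remote command
scp backup.tar.gz nickname:~/  # copy a file using the same alias (file name is just an example)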
Re: #1: There's no flag on the command to change this behavior on a per-command level instead of a per-instance level ('block-project-ssh-keys', as you mentioned) but you could file a FR at https://issuetracker.google.com/savedsearches/559662.
I successfully followed these instructions from GitHub on how to generate SSH keys, and my connection with GitHub works.
But when I later want to check my SSH key following these instructions, I don't get the SSH fingerprint I see on my GitHub SSH keys settings page when I use ssh-add -l.
Instead of the SSH key fingerprint I get the message "The agent has no identities." Why? And what does it mean?
This means you haven't successfully added your key to your agent. Use ssh-add to do so, as given in step 3, part 2 of your first link.
Note that this needs to be done for each ssh-agent instance; thus, if you log out and back in, you need to ssh-add your key again. Similarly, if you start ssh-agent twice, in two different terminal windows, they won't have shared private keys between them, so you would need to ssh-add once in each window (or, better, configure your system in such a way as to have an agent shared between all running applications in your desktop session).
Modern desktop environments generally provide an SSH keyring for you, so you shouldn't need to start ssh-agent yourself if your agent is so configured, and the agent instance so provided should be shared across your entire session. gnome-keyring behaves this way, as does Apple's Keychain and KDE's Wallet (with ksshaskpass enabled).
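If you are not using a desktop keyring, a minimal sequence for a plain terminal session looks like this (the key path is an example; use the key you generated):
eval "$(ssh-agent -s)"       # start an agent and export its environment variables
ssh-add ~/.ssh/id_rsa        # add your private key to the agent
ssh-add -l                   # should now list a fingerprint instead of "The agent has no identities."
ssh -T git@github.com        # optional: confirm GitHub accepts the key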
After changing two passwords (root and the default user), we suddenly noticed that the "in browser" SSH link on Google Compute Engine fails to connect.
Strangely, however, if we use the SSH command from the command line that Google provides (i.e. $ gcloud compute ssh VM-NAME --zone VM-ZONE) - SSH works.
It appears SSH is working - but the "in browser" SSH link no longer connects. What might have gone wrong and how do we fix this?
ADDENDUM:
Of note, a commenter below suggests it is not related to passwords but purely SSH keys, so it looks like the answer to this question might revolve around whether there is a way to regenerate SSH keys on GCE instances. We are searching. If anyone knows code to regenerate SSH keys for GCE, please post.
GCE VMs, by default, don't allow SSH connections with a clear-text password: they use keys instead. You can specify approved keys during VM instantiation, or at a later time, but one that is always present is the key for the user account you used when creating the machine.
As long as you haven't modified /etc/ssh/sshd_config, this should continue to be the case. Either way, one more option you can use to connect via SSH to your instance is to run the following command:
$ gcloud compute ssh VM-NAME --zone VM-ZONE
while logged in with your authorized user account.
ADDENDUM - In lieu of regenerating previous keys, you can add additional, locally-generated SSH keys at both the project and the VM level. The first applies to all VMs and grants access to project owners and editors, while the second only applies to the VM in question. Both methods add the entered SSH keys to the metadata server, from which they get uploaded prior to an SSH connection to all VMs / to the VM in question.
You can do this from the Developers Console:
project-level SSH keys - go to your project -> Compute -> Compute Engine -> Metadata -> "SSH KEYS" (top of the screen) -> click on "Edit"
VM-level SSH keys - go to your project -> Compute -> Compute Engine -> VM instances -> click on the instance name -> "SSH keys" section (scroll down) -> click "Add SSH key"
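The same can be done with gcloud instead of the Developers Console; for example, the VM-level variant (the key file and names are placeholders, and each line in the file has the form "USERNAME:ssh-rsa AAAA... COMMENT"):
gcloud compute instances add-metadata VM-NAME --zone VM-ZONE \
    --metadata-from-file ssh-keys=/path/to/ssh-keys.txt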
We discovered the cause was a chown command we had executed on a directory for the primary user that Google creates on the Google Compute Engine instance.
By reversing that chown back to the Google created user, Google's in-browser SSH began working again.
To reverse it, we used chown to set an entire user directory and also an SSH config file back to the Google-created user:
chown -R user_name_com /home/user_name_com
and also on this file
chown user_name_com /etc/ssh/ssh_config
where user_name_com was our gmail email address user.
I am a newbie to Amazon web services, was trying to launch an Amazon instance and SSH to it using putty from windows. These are the steps I followed:
Created a key pair.
Added a security group rule for SSH and HTTP.
Launched an EC2 instance using the above key pair and security group.
Using PuTTYgen converted the *.pem file to *.ppk
Using putty tried connecting to the public DNS of the instance and provided the *.ppk file.
I tried logging in as 'root' and as 'ec2-user', and created the PPK file using both SSH-1 and SSH-2; for all these attempts I get the following error in PuTTY:
"Server refused our key"
Can you guys please help, any suggestions would be greatly appreciated.
I assume that the OP figured this out or otherwise moved on, but the answer is to use ubuntu as the user (if the server is ubuntu).
1) Make sure you have port 22 (SSH) opened in Security Group of EC2 Instance.
2) Try connecting with Elastic IP instead of public DNS name.
I hope you have followed these steps: Connecting to EC2 from a Windows Machine Using PuTTY.
Another situation where I got the "Server refused our key" error when using putty, from windows, to ssh to an EC2 instance running ubuntu:
The private key was wrongly converted from .pem to .ppk.
puttygen has two options for "converting keys".
Load your .pem file into puttygen using the File->Load Private Key option and then save as .ppk file using the Save Private Key Button.
DO NOT use the menu option Conversions->Import Key to load the .pem file generated by EC2.
See the puttygen screenshots below, with the two menu options marked.
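If you have the Unix putty-tools package installed, the same conversion can be done from the command line instead of the GUI (file names are examples):
puttygen my-ec2-key.pem -O private -o my-ec2-key.ppk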
Check the username, it should be "ubuntu" for your machine.
Check if traffic is enabled on port 22 in Security group.
Check if you are using the correct URL, i.e. ubuntu@<public/elastic IP>
Maybe worth checking one more thing. Go to the AWS console, right-click on the instance and choose "Connect...". It will show you the DNS name that you want to use. If you restarted that instance at some point, that DNS name could have changed.
I had a similar problem when I tried to connect to an instance created automatically by the Elastic Beanstalk service. But, once I linked my existing key name to the Beanstalk environment (under Environment Details -> Edit Configuration -> Server Tab -> Existing Key Pair), I was able to log in with 'ec2-user' and my existing key file (converted to .ppk) with PuTTY.
This, however, terminates the running instance and rebuilds a new instance with access through the key pair named above.
Just in case it helps anyone else, I encountered this error after changing the permissions on the home folder within my instance. I was testing something and had executed chmod -R 777 on my home folder. As soon as I logged out after doing that, I was effectively locked out.
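If you can still get a shell some other way (for example via another sudo-capable user), a sketch of restoring permissions that sshd will accept (ec2-user is an assumed username; adjust to yours):
sudo chmod 755 /home/ec2-user                       # home dir must not be group/world writable
sudo chmod 700 /home/ec2-user/.ssh
sudo chmod 600 /home/ec2-user/.ssh/authorized_keys
sudo chown -R ec2-user:ec2-user /home/ec2-user/.ssh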
You won't face this error if you SSH into AWS directly using the ".pem" file instead of the converted ".ppk" file.
1) Use Git Bash instead of PuTTY, since you can run all the Linux commands in Git Bash. By installing Git you get access to the Git Bash terminal.
2) Right-click in the folder where you have the ".pem" file and select "Git Bash Here".
3) Your key must not be publicly viewable for SSH to work, so run "chmod 400 pemfile.pem".
4) Connect to your instance using its public DNS: ssh -i "pemfile.pem" ec2-user@ec2-x-x-x-x.us-west-1.compute.amazonaws.com
5) Make sure to whitelist your Network IP for SSH in your_instance->security_group->inbound_rules
I assume you're following this guide, and connecting using the instructions on the subsequent page. Verify a couple of things:
You converted the key correctly, e.g. selected the right .pem file, saved as private key, 1024-bit SSH-2 RSA
The Auth settings (step 4 in the connection tutorial) are correct
I was having the same trouble (and took the same steps) until I changed the user name to 'admin' for the debian AMI I was using.
You should look up the user name of the AMI you are using. The Debian AMI is documented here:
http://wiki.debian.org/Cloud/AmazonEC2Image/Squeeze
I have had this same problem. The AMI you are using is the one that is also used by the "Cloud Formation" templating solution.
In the end I gave up with that, and created a Red Hat instance. I was then able to connect by SSH fine using the user root.
The instructions here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html work fine using a Red Hat instance but not using an Amazon Linux instance. I assume they have some username that I didn't think to try (root, ec2-user, and many other obvious ones, all were refused)
Hope that helps someone!
I use a Debian AMI and tried ec2-user and root, but the correct login is 'admin'.
I was getting the same error when I created a new key pair and tried to use that new pem/ppk file. I noticed that the Key Pair Name field on the instance was still the old one, and from poking around it appears you can't change an instance's key pair. So I went back to the original key pair. Fortunately, I hadn't deleted anything, so this was easy enough.
Try an alternative SSH client, like Poderosa. It accepts pem files, so you will not need to convert the key file.
If you already have a key pair, follow these steps:
Convert *.pem to *.ppk using PuTTYgen (load the .pem key file, then save as .ppk)
Add the .ppk auth key file in PuTTY's SSH > Auth options
In the "Host Name (or IP address)" field, enter: ubuntu@your-ip-address-of-ubuntu-ec2-host