Connecting to Google Cloud Bitnami via SSH - ssh

The Google Cloud dashboard is very different and relatively difficult compared to EC2. I have created a VM powered by LAMP, and now I want to put files on my server or SSH to it via FileZilla or WinSCP.
It didn't ask me to create keys. I tried using the Bitnami username and password shown on the dashboard (with no key), but it is not authenticating. What should I do now? I don't want to download the whole gcloud SDK just to put files on the server.
Any options?

Add your SSH key to the machine by logging into the Google Cloud Console and navigating to Compute Engine > VM Instances.
Click on your VM's name, then choose Edit at the top.
Once you are in edit mode, you can add an SSH key and save it.
After you have added the key, you can ssh/scp as normal. See: https://cloud.google.com/compute/docs/instances/connecting-to-instance#standardssh
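For example, assuming you generate a key pair locally and paste the contents of the .pub file into that SSH keys field (the bitnami username and EXTERNAL_IP below are placeholders for your own values), the transfer could look roughly like this:
# generate a local key pair (skip if you already have one)
ssh-keygen -t rsa -f ~/.ssh/gce-key -C bitnami
# after saving ~/.ssh/gce-key.pub in the VM's SSH keys, copy files and connect
scp -i ~/.ssh/gce-key /local/path/myfile.zip bitnami@EXTERNAL_IP:/home/bitnami/
ssh -i ~/.ssh/gce-key bitnami@EXTERNAL_IP
FileZilla and WinSCP can use the same private key for SFTP (WinSCP may ask to convert it to .ppk first), so the gcloud SDK is not required.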

Related

How to stop Google Compute Engine from resetting SFTP folder permissions when using an SSH key

I am currently running a Google Compute Engine instance and using SFTP on the server.
I followed the steps to lock a user to the SFTP path listed here: https://bensmann.no/restrict-sftp-users-to-home-folder/
To lock the user to a directory, the home directory of that user needs to be owned by root. Initially the setup worked correctly, but I found that Google Compute Engine sporadically "auto-resets" the permissions back to the user.
I am using an SSH key that is set in the Google Cloud Console, and that key is associated with the username. My guess is that Google Compute Engine is using this metadata and reconfiguring the folder permissions to match those of the user associated with the SSH key.
Is there any way to disable this "auto-reset"? Or, rather, is there a better method for hosting SFTP and locking a single user to an SFTP path without having to change the home folder ownership to root?
Set your sshd rule to apply to the google-sudoers group.
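As a very rough sketch of that idea (untested against the accounts daemon's behaviour; the chroot target still has to be root-owned per sshd's rules, and the paths here are only illustrative), the block in /etc/ssh/sshd_config might look like:
# illustrative only: apply the SFTP restriction to the whole google-sudoers group
Match Group google-sudoers
    ChrootDirectory /sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
# check the syntax with "sudo sshd -t", then reload sshd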
The tool that manages user accounts is the Google accounts daemon. You can turn it off temporarily, but it's not recommended: the daemon syncs the instance metadata's SSH keys with the Linux accounts on the VM, so if you stop it, account changes won't be picked up and SSH from the Cloud Console will probably stop working.
sudo systemctl stop google-accounts-daemon.service
That said, it may be what you want if you ultimately intend to block SSH access to the VM.
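If you only stop it temporarily, the same systemd commands bring it back (the unit name is taken from the command above; on images using the newer guest environment the unit may be named differently):
sudo systemctl start google-accounts-daemon.service
sudo systemctl status google-accounts-daemon.service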

Google Compute Engine: Adding SSH but can't connect like in their video

OK I need help.
I have created a simple instance with Google Compute Engine, and I have added SSH keys through their metadata section. But every time I try to log in with PuTTY (I can do it with their console) I get Permission denied (publickey).
Even when I log in with their browser console, I can see all the users I created with the web UI, and the public keys in authorized_keys.
However, I can't SSH in, even though my private keys are in my .ssh directory.
I did all the checks, and SSH is enabled by default.
http://screencast.com/t/zI9vDr2s
The problem was that I was accessing the site via IP, since for me the site's DNS hadn't propagated yet. Nevertheless, I tried using the domain name instead of the IP and it worked. Weird, because I still can't access the site via the domain ... and my neighbor can.
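If you run into the same propagation issue, you can check from your own machine whether the domain already resolves to the instance's external IP (the domain name below is a placeholder):
nslookup your-domain.example
# or
dig +short your-domain.example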

Unable to connect to Google Compute Engine using "in browser" SSH

After changing two passwords (root and the default user), we suddenly noticed that the "in browser" link on Google Compute Engine fails to connect via SSH.
Strangely, however, if we use the SSH command from the command line that Google provides (i.e. $ gcloud compute ssh VM-NAME --zone VM-ZONE) - SSH works.
It appears SSH is working - but the "in browser" SSH link no longer connects. What might have gone wrong and how do we fix this?
ADDENDUM:
Of note, a commenter below suggests it is not related to passwords but purely to SSH keys, so it looks like the answer to this question might rest on whether there is a way to regenerate SSH keys on GCE instances. We are searching. If anyone knows how to regenerate SSH keys for GCE, please post.
GCE VMs, by default, don't allow SSH connections with a clear-text password: they use keys instead. You can specify approved keys during VM instantiation, or at a later time, but one that is always present is the key for the user account you used when creating the machine.
As long as you haven't modified /etc/ssh/sshd_config, this should continue to be the case. Either way, one more option you can use to connect via SSH to your instance is to run the following command:
$ gcloud compute ssh VM-NAME --zone VM-ZONE
while logged in with your authorized user account.
ADDENDUM - In lieu of regenerating previous keys, you can add additional, locally generated SSH keys at both the project and the VM level. The first applies to all VMs and grants access to project owners and editors, while the second applies only to the VM in question. Both methods add the entered SSH keys to the metadata server, from which they are uploaded to the VM(s) before an SSH connection is made.
You can do this from the Developers Console:
project-level SSH keys - go to your project -> Compute -> Compute Engine -> Metadata -> "SSH KEYS" (top of the screen) -> click on "Edit"
VM-level SSH keys - go to your project -> Compute -> Compute Engine -> VM instances -> click on the instance name -> "SSH keys" section (scroll down) -> click "Add SSH key"
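If you would rather script this than click through the console, the same metadata can be set with gcloud (the instance name, zone, and key file below are placeholders; the metadata key used for SSH keys has changed name across gcloud versions, so check the current docs):
# project-level keys, applied to every VM in the project
gcloud compute project-info add-metadata --metadata-from-file sshKeys=my_keys.txt
# VM-level keys, applied only to one instance
gcloud compute instances add-metadata VM-NAME --zone VM-ZONE --metadata-from-file sshKeys=my_keys.txt
# my_keys.txt holds one "username:ssh-rsa AAAA... username" entry per line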
We discovered the cause was a chown command we had executed on a directory for the primary user that Google creates on the Google Compute Engine instance.
By reversing that chown back to the Google created user, Google's in-browser SSH began working again.
We had used chown on the entire user directory and also on an SSH config file; we changed ownership back to the Google-created user using:
chown -R user_name_com /home/user_name_com
and also on this file
chown user_name_com /etc/ssh/ssh_config
where user_name_com was the user name derived from our Gmail address.
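Before retrying the in-browser SSH, it may be worth confirming that ownership really is back to that user (the username below is, again, the one derived from your email address):
ls -ld /home/user_name_com
ls -l /etc/ssh/ssh_config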

Transfer files from one google compute engine instance to another?

I'm trying to copy files from one GCE instance to another. Instead of using Google Cloud Storage with gsutil (which sometimes hangs for me), I'd like to try using gcutil pull.
I'm trying to do:
gcutil pull someInstance /some/file/path/here .
It works; however, the command asks me to enter a new passphrase (for SSH, I assume). Since I'm running this from a script, is there a way to default this passphrase to empty?
Maybe try to use passwordless SSH:
http://www.linuxproblem.org/art_9.html
I have used gcutil push with this setup to send files from the master node to slave nodes (Hadoop/Spark).
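A minimal passwordless setup between two instances, assuming OpenSSH on both (the user and host names below are placeholders), roughly follows the linked article:
# on the source instance: create a key with an empty passphrase
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# copy the public key to the destination instance
ssh-copy-id user@destination-instance
# or, if ssh-copy-id is not available:
cat ~/.ssh/id_rsa.pub | ssh user@destination-instance 'cat >> ~/.ssh/authorized_keys'
After this, gcutil pull/push and scp should stop prompting for a passphrase.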

SSH to Amazon EC2 instance using PuTTY in Windows

I am a newbie to Amazon Web Services. I was trying to launch an Amazon instance and SSH to it using PuTTY from Windows. These are the steps I followed:
Created a key pair.
Added a security group rule for SSH and HTTP.
Launched an EC2 instance using the above key pair and security group.
Using PuTTYgen converted the *.pem file to *.ppk
Using PuTTY, tried connecting to the public DNS of the instance and provided the *.ppk file.
I tried logging in as 'root' and 'ec2-user', and created the PPK file using both SSH1 and SSH2; for all these attempts I get the following error in PuTTY:
"Server refused our key"
Can you guys please help, any suggestions would be greatly appreciated.
I assume that the OP figured this out or otherwise moved on, but the answer is to use the user ubuntu (if the server is Ubuntu).
1) Make sure you have port 22 (SSH) opened in the Security Group of the EC2 instance.
2) Try connecting with the Elastic IP instead of the public DNS name.
I hope you have followed these steps: Connecting EC2 from a Windows Machine Using PuTTY.
Another situation where I got the "Server refused our key" error when using PuTTY from Windows to SSH to an EC2 instance running Ubuntu:
The private key was wrongly converted from .pem to .ppk.
PuTTYgen has two options for "converting keys".
Load your .pem file into puttygen using the File->Load Private Key option and then save as .ppk file using the Save Private Key Button.
DO NOT use the menu option Conversions->Import Key to load the .pem file generated by EC2.
Check the username; it should be "ubuntu" for your machine.
Check that traffic is enabled on port 22 in the security group.
Check that you are using the correct URL, i.e. ubuntu@your-public-or-elastic-ip.
One more thing may be worth checking: go to the AWS console, right-click on the instance and choose "Connect...". It will show you the DNS name that you want to use. If you restarted that instance at some point, that DNS name could have changed.
I had a similar problem when I tried to connect to an instance created automatically by the Elastic Beanstalk service (EBS). But once I linked my existing key name to the EBS environment (under Environment Details -> Edit Configuration -> Server Tab -> Existing Key Pair), I was able to log in with 'ec2-user' and my existing key file (converted to .ppk) with PuTTY.
This, however, terminates the running instance and rebuilds a new instance with access through the key pair named above.
Just in case it helps anyone else, I encountered this error after changing the permissions on the home folder within my instance. I was testing something and had executed chmod -R 777 on my home folder. Once I logged out after doing this, I was effectively locked out.
You won't face this error if you SSH to AWS directly using the ".pem" file instead of a converted ".ppk" file.
1) Use Git Bash instead of PuTTY, since you can run all the Linux commands in Git Bash. By installing Git you get access to the Git Bash terminal.
2) Right-click in the folder where you have the ".pem" file and select "Git Bash Here".
3) Your key must not be publicly viewable for SSH to work, so run "chmod 400 pemfile.pem".
4) Connect to your instance using its public DNS: "ssh -i "pemfile.pem" ec2-user@ec2-x-x-x-x.us-west-1.compute.amazonaws.com"
5) Make sure to whitelist your network IP for SSH in your_instance -> security_group -> inbound_rules.
I assume you're following this guide, and connecting using the instructions on the subsequent page. Verify a couple of things:
You converted the key correctly, e.g. selected the right .pem file, saved as private key, 1024-bit SSH-2 RSA
The Auth settings (step 4 in the connection tutorial) are correct
I was having the same trouble (and took the same steps) until I changed the user name to 'admin' for the Debian AMI I was using.
You should look up the user name of the AMI you are using. The Debian AMI is documented here:
http://wiki.debian.org/Cloud/AmazonEC2Image/Squeeze
I have had this same problem. The AMI you are using is the one that is also used by the "CloudFormation" templating solution.
In the end I gave up on that and created a Red Hat instance. I was then able to connect fine by SSH using the user root.
The instructions here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html work fine with a Red Hat instance but not with an Amazon Linux instance. I assume it has some username that I didn't think to try (root, ec2-user, and many other obvious ones were all refused).
Hope that helps someone!
I used the Debian AMI and tried ec2-user and root, but the correct login is 'admin'.
I was getting the same error when I tried to create a new key pair and use that new pem/ppk file. I noticed that the Key Pair Name field on the instance was still the old one, and in poking around I learned that apparently you can't change an instance's key pair. So I went back to the original key pair. Fortunately, I hadn't deleted anything, so this was easy enough.
Try an alternative SSH client, like Poderosa. It accepts pem files, so you will not need to convert the key file.
If you already have a key pair, follow these steps:
Convert the *.pem file to *.ppk using PuTTYgen (load the pem key, then save it as a ppk).
Add the ppk key file in PuTTY's SSH > Auth options.
In the "Host Name (or IP address)" field, enter: ubuntu@your-ip-address-of-ubuntu-ec2-host