SFTP - From WinSCP to Terminal Access - ssh

I have been able to set up SSH access to my Google Cloud Platform VM via SFTP using WinSCP, but I now wish to do the same using another VM.
I have tried the ssh-keygen -t rsa, ssh-copy-id demo@198.51.100.0 method but always come up against the "Permission denied (publickey)" error, which from researching seems to be a pretty widespread issue with few reliable fixes (all the ones I tried didn't work).
I used PuTTYgen to create the public and private keys, and inserted the public key onto the server through the GCP settings, adding it under the SSH keys section for my instance.
I am just confused about what to do with the private key when simply trying to sftp through the terminal on a separate VM, as before I would load the private key into the WinSCP settings. Is there a folder I need to place it in, or something else?

Regarding your first issue, the "Permission denied (publickey)" error, please follow the troubleshooting in this link and this one.
About your other question, "what to do with the private key when simply trying to sftp through the terminal": that depends on the settings of the specific third-party SFTP tool you are using. To find where your SSH keys are stored after generating them, please review this document.

Once you have added the public key to the VM, you would need to reboot the VM for the public key to take effect. Try rebooting it and connecting again.
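For the terminal sftp client there is no settings dialog like WinSCP's: you either point it at the key explicitly with -i, or save the key where OpenSSH looks by default (e.g. ~/.ssh/id_rsa). A minimal sketch, assuming the PuTTYgen key is first exported to OpenSSH format and saved as ~/.ssh/gcp_key (the file names and the demo@198.51.100.0 address are just examples):
# convert the .ppk to OpenSSH format (requires the puttygen command-line tool)
puttygen mykey.ppk -O private-openssh -o ~/.ssh/gcp_key
# ssh/sftp refuse keys with loose permissions
chmod 600 ~/.ssh/gcp_key
# point sftp at the key explicitly
sftp -i ~/.ssh/gcp_key demo@198.51.100.0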

Related

Can't connect to SFTP (with private key file) in Copy Data Tool

I am trying to copy data from SFTP to blob but got stuck when creating SFTP source.
I have the connection details and can easily connect in FileZilla or WinSCP. However, I am unable to get it to work in Azure Data Factory.
I am not using code but the user interface.
The connection details on the page creating the SFTP source:
Connect via integration runtime: AutoResolveIntegrationRuntime (default)
Host: xyz
Port: 22 (can't remove it as it doesn't like it)
SSH Host Key Validation: Enable SSH Host Key Validation
SSH Host Key Finger-print: taken from WinSCP - Session - Server/protocol information
Authentication type: SSH Public Key Authentication (can't use Basic as the private key holds the security info)
User name: XXX
Private Key Type: Use Key Content
Private key content: loaded the .ppk file; also tried loading the .pem file and got different errors
Pass Phrase: none
When setting up this sftp in WinSCP or FileZilla it automatically converted the provided .pem file into .ppk.
When I loaded the .ppk file into ADF I got an error: Invalid Sftp credential provided for 'SshPublicKey' authentication type.
When I loaded the .pem file I got: Meet network issue when connect to Sftp server 'spiderftp.firstgroup.com', SocketErrorCode: 'TimedOut'.
I have also tried 'Disable SSH Host Key Validation' in SSH Host Key Validation and made no difference.
I have also opened the .ppk file in PuTTYgen and used that host key fingerprint, and still no luck.
Only getting these 2 errors depending on which file I load.
Can't find anything about this online so would be grateful for some advice.
Have you read this note in this doc?
https://learn.microsoft.com/en-us/azure/data-factory/connector-sftp#using-ssh-public-key-authentication
SFTP connector supports RSA/DSA OpenSSH key. Make sure your key file content starts with "-----BEGIN [RSA/DSA] PRIVATE KEY-----". If the private key file is a ppk-format file, please use Putty tool to convert from .ppk to OpenSSH format.
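In practice that conversion can be done either in the PuTTYgen GUI (Conversions > Export OpenSSH key) or, if the puttygen command-line tool is available, with something like the following (file names are illustrative):
puttygen sftpkey.ppk -O private-openssh -o sftpkey.pem
# the first line of the result should read -----BEGIN RSA PRIVATE KEY-----
head -n 1 sftpkey.pem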
Got this working today. Like you, I could connect using WinSCP but failed when using ADF.
The link Fang Liu shared contains our answers, but my issue was not the private key. I suspect Fang's suggestion resolved your problem, and I'm sharing my answer here to help others who may encounter something similar.
My issue:
When using Private Key Authentication in ADF, the password becomes a Pass Phrase and you no longer have the ability to supply a password. To overcome the problem, we disabled password authentication for the user on the SFTP server and the connection started working.
As stated in the documentation, the Pass Phrase is used to decrypt the private key if it is encrypted.
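For reference, disabling password authentication for a single user is typically a server-side change in the SFTP host's sshd_config; a rough sketch, assuming an OpenSSH server and a user called sftpuser (both are assumptions, not details from the original post):
# /etc/ssh/sshd_config on the SFTP server
Match User sftpuser
    PasswordAuthentication no
    PubkeyAuthentication yes
# reload sshd afterwards, e.g. sudo systemctl reload sshd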
Also worth noting:
If you store the contents of the private key in Key Vault, you need to base64 encode the entire contents of the exported key and use that string. This includes the "-----BEGIN RSA PRIVATE KEY-----" line and the matching end line. The same applies if you want to paste the value into the textbox of the SFTP linked service edit screen.
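A quick way to produce that base64 string as a single unwrapped line (assuming a file named sftpkey.pem; the -w flag is GNU coreutils, while the macOS base64 does not wrap by default):
base64 -w 0 sftpkey.pem        # Linux (GNU coreutils)
base64 -i sftpkey.pem          # macOS equivalent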
I did not try to manually edit the JSON of the Linked Service to explicitly provide a password, and this could be a workaround for someone to test if they are unable to disable the password.
I used PuTTYgen to export the PPK to a private key and had the same fingerprint issue too, so I just disabled host key validation. Funnily enough, you can use the fingerprint provided by the error and it passes validation, so I'm not sure where the bug lies. :-)

How to SSH between 2 Google Cloud Debian Instances

I have installed Ansible on one of my GCE Debian VM instances (1). Now I want to connect to another GCE Debian VM instance (2).
I have generated the public key on instance 1 and copied the .pub key manually to the authorized_keys file of instance 2.
But when I try to SSH from 1 to 2, it gives permission denied.
Is there any other way round? I am a little new to this, trying to learn.
Is there any step-by-step guide available? And also, what is the exact IP address to SSH to? Will it be the internal IP or the external IP assigned by GCE when the instance is started?
I'm an Ansible user too and I manage a set of compute engine servers. My scenario is pretty close to yours so hopefully this will work for you as well. To get this to work smoothly, you just need to realise that ssh public keys are metadata and can be used to tell GCE to create user accounts on instance creation.
SSH public keys are project-wide metadata
To get what you want, the SSH public key should be added to the Metadata section under Compute Engine. My keys look like this:
ssh-rsa AAAAB3<long key sequence shortened>Uxh bob
Every time I get GCE to create an instance, it creates /home/bob and puts the key into the .ssh/authorized_keys file with all of the correct permissions set. This means I can ssh into that server if I have the private key. In my scenario I keep the private key in only two places: LastPass and my .ssh directory on my work computer. While I don't recommend it, you could also copy that private key to the .ssh directory on each server that you want to ssh from, but I really recommend getting to grips with ssh-agent instead.
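If you prefer the command line to the console for this step, the same project-wide metadata can be set with gcloud; a sketch, assuming the keys are in a local file called ssh-keys.txt with one key per line in the form username:ssh-rsa AAAA... comment. Note that this sets the whole ssh-keys value, so include any existing keys in that file as well:
# upload the file contents as the project-wide ssh-keys metadata value
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=ssh-keys.txt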
Getting it to work with Ansible
The core of this is to tell Ansible to skip host key checking and to connect as the user specified in the key (bob in this example). To do that you need to set some ssh options when calling ansible-playbook:
ansible-playbook --ssh-common-args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -u bob
Now Ansible will connect to the servers mentioned in your playbook and try to use the local private key to negotiate the ssh connection which should work as GCE will have set things up for you when the VM is created. Also, since hostname checking is off, you can rebuild the VM as often as you like.
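If you would rather not type those flags on every run, the same settings can live in an ansible.cfg next to your playbook; a sketch of what that might look like:
# ansible.cfg in the playbook directory
[defaults]
remote_user = bob
host_key_checking = False

[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no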
Saying it again
I really recommend that you run ansible from a small number of secure computers and not put your private key onto cloud servers. If you really need to ssh between servers, look into how ssh-agent passes identity around. A good place to start is this article.
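If you do go the ssh-agent route, the basic workflow on the machine that holds the private key looks roughly like this (the host name is a placeholder, and only forward the agent with -A to hosts you trust):
eval "$(ssh-agent -s)"     # start an agent for this shell session
ssh-add ~/.ssh/id_rsa      # load the private key (asks for its passphrase once)
ssh -A bob@some-server     # -A forwards the agent so some-server can hop onward without holding the key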
Where did you say the metadata was?
I kind of glossed over that bit but here's an image to get you started.
From there you just follow the options for adding a public key. Don't forget that this works because the third part of the key is the username that you want GCE and Ansible to use when running plays.
It's quite simple: if you have two instances in Google Cloud Platform, they automatically have the guest environment installed (including the gcloud command line), and with it you can SSH to any instance inside your project.
Just run the following command from inside your instance A to reach instance B:
[user@Instance(1)]$ gcloud compute ssh Instance(2) --zone [zone]
That's it. If it's not working, let me know, and verify that your firewall rules allow internal traffic.

How does the GitHub authentication work?

If you follow the GitHub HowTo "Generating SSH Keys", you get three files in your ~/.ssh directory: known_hosts, id_rsa, and id_rsa.pub.
The file known_hosts is used for server authentication, while id_rsa is used for client authentication (here is an article that explains the difference).
Why should I create / why does GitHub need both a host and a user authentication file? How does the GitHub authentication work?
Thx
This is just plain old SSH authentication; nothing about it is specific to GitHub.
id_rsa and id_rsa.pub are the two halves of your key: the private key and the public key. Effectively, the public key is the lock for the private key. You put the lock (public key) on whatever servers you want easy access to, without too much worry that someone else will see it, because it's just a lock. You keep the (private) key on your machine, and use it to log into those servers; they see you have a key fitting the lock, and let you in.
(Not to say that you should put your public key on completely untrustworthy machines; there are malicious tricks that can take advantage of shortcuts like ssh -A.)
known_hosts doesn't actually have much to do with this; it's just where ssh stores the fingerprints of all the servers you've connected to, so it can throw up a big scary warning if the fingerprint changes. (That would mean it's not the same machine: either something has changed radically on the server side, or your connection has been hijacked.)
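If a server's fingerprint changes for a legitimate reason (say, the machine was rebuilt), you can drop the stale entry rather than editing known_hosts by hand; the hostname below is a placeholder:
ssh-keygen -R old-server.example.com    # removes that host's entry from ~/.ssh/known_hosts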
So, anyway, one of the protocols Git itself understands is SSH. When you use git@github.com:... as a repository URL, Git is just connecting over SSH. Of course, GitHub doesn't want you mucking around on their machines, so they only let you do Git things, not get a full shell.
As usual, the Arch wiki has a whole lot more words on this.
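You can see this whole exchange work end to end by asking GitHub's SSH server to authenticate you without running any Git command:
ssh -T git@github.com
# on success GitHub replies along the lines of:
# Hi username! You've successfully authenticated, but GitHub does not provide shell access.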
known_hosts stores the server's identity the first time you connect, so that you know the next time that you're connecting to the same server. This prevents someone from pretending to be the server the next time you connect (but sadly not the first time).
id_rsa is your secret key that proves that you are really you. Never give this away.
id_rsa.pub is the public key; its purpose for authentication is basically just to prove that you have the secret key without giving it out. This key you can give to anyone who needs it, since there's nothing secret about it.
When you connect to the server, SSH first checks that the server has the correct key (i.e. it should match the one in known_hosts). If the client is comfortable that the server is genuine, it uses its private key to sign the following data and sends it to the server:
string session identifier
byte SSH_MSG_USERAUTH_REQUEST
string user name
string service name
string "publickey"
boolean TRUE
string public key algorithm name
string public key to be used for authentication
The server verifies the signature using the public key (which you earlier uploaded to Github), and if it is correct, the client is authenticated.
The known_hosts file is used by ssh whenever you actually connect to a host via SSH. It stores the server's public host key, so if it changes, you will know.
ssh-keygen -t rsa -C yourgithub@accountemail.com is used to generate the SSH key pair whose id_rsa.pub you give to GitHub. Then, when you connect to GitHub, the private key id_rsa in your ~/.ssh folder is used to prove your identity to GitHub.
This is a very low-level explanation, but the private key (non-.pub) file is your end, the .pub is for GitHub, and known_hosts is for your box to know what is what.
You can also create a config file in ~/.ssh to specify which key goes to which host.
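For example, a ~/.ssh/config entry that pins a dedicated key to GitHub might look like this (the key filename is just an example):
Host github.com
    User git
    IdentityFile ~/.ssh/id_rsa_github
    IdentitiesOnly yes
The IdentitiesOnly line stops ssh from offering every other key you have loaded before trying this one.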
authorized_keys and known_hosts are entirely different.
Your SSH server (sshd) uses authorized_keys, or whatever file is defined in your /etc/ssh/sshd_config, to know the public side of another key. So when a user connects to your server, they prove they hold the matching private key (the key itself is never sent); your SSH server verifies that proof against the public key it has in authorized_keys, and if it doesn't match, the login fails.
GitHub maintains an authorized_keys of sorts for its users. Your public key goes into the keys on your account, and then when you connect via ssh to clone, push, etc., it checks the proof made with your private key against the public key it already knows.

SSH to Amazon EC2 instance using PuTTY in Windows

I am a newbie to Amazon Web Services; I was trying to launch an Amazon instance and SSH to it using PuTTY from Windows. These are the steps I followed:
Created a key pair.
Added a security group rule for SSH and HTTP.
Launched an instance of EC2 using the above key pair and security group.
Using PuTTYgen, converted the *.pem file to *.ppk.
Using PuTTY, tried connecting to the public DNS of the instance and provided the *.ppk file.
I tried logging in using 'root' and 'ec2-user', and created the PPK file using both SSH-1 and SSH-2; for all these attempts I get the following error in PuTTY:
"Server refused our key"
Can you guys please help, any suggestions would be greatly appreciated.
I assume that the OP figured this out or otherwise moved on, but the answer is to use ubuntu as the user (if the server is ubuntu).
1) Make sure you have port 22 (SSH) opened in Security Group of EC2 Instance.
2) Try connecting with Elastic IP instead of public DNS name.
I hope you have followed these steps: Connecting EC2 from a Windows Machine Using PuTTY.
Another situation where I got the "Server refused our key" error when using PuTTY from Windows to SSH to an EC2 instance running Ubuntu:
The private key was wrongly converted from .pem to .ppk.
PuTTYgen has two options for "converting keys".
Load your .pem file into puttygen using the File->Load Private Key option and then save as .ppk file using the Save Private Key Button.
DO NOT use the menu option Conversions->Import Key to load the .pem file generated by EC2.
See the puttygen screenshots below, with the two menu options marked.
Check the username, it should be "ubuntu" for your machine.
Check if traffic is enabled on port 22 in Security group.
Check if you are using the correct URL, i.e. ubuntu@public/elastic-ip
Maybe worth checking one more thing. Go to the AWS console, right-click on the instance and choose "Connect...". It will show you the DNS name that you want to use. If you restarted that instance at some point, that DNS name could have changed.
I had a similar problem when I tried to connect an instance created automatically by the Elastic Beanstalk service (EBS). But, once I linked my existing key name to the EBS (under Environment Details -> Edit Configuration -> Server Tab -> Existing Key Pair), I was able to login with 'ec2-user' and my existing key file (converted to .ppk) with putty.
This, however, terminates the running instance and rebuilds a new instance with access through the key pair named above.
Just in case it helps anyone else, I encountered this error after changing the permissions on the home folder within my instance. I was testing something and had executed chmod -R 777 on my home folder. As soon as this had occurred, once I had logged out I was effectively locked out.
You won't face this error if you SSH to AWS directly using the ".pem" file instead of the converted ".ppk" file.
1) Use Git Bash instead of PuTTY, since you can run all the Linux commands in Git Bash. By installing Git you get access to the Git Bash terminal.
2) Right click from the folder where you have ".pem" and select "Git Bash Here".
3) Your key must not be publicly viewable for SSH to work. So run "chmod 400 pemfile.pem".
4) Connect to your instance using its Public DNS - "ssh -i "pemfile.pem" ec2-user@ec2-x-x-x-x.us-west-1.compute.amazonaws.com"
5) Make sure to whitelist your Network IP for SSH in your_instance->security_group->inbound_rules
I assume you're following this guide, and connecting using the instructions on the subsequent page. Verify a couple of things:
You converted the key correctly, e.g. selected the right .pem file, saved as private key, 1024-bit SSH-2 RSA
The Auth settings (step 4 in the connection tutorial) are correct
I was having the same trouble (and took the same steps) until I changed the user name to 'admin' for the Debian AMI I was using.
You should look up the user name of the AMI you are using. The Debian AMI is documented here:
http://wiki.debian.org/Cloud/AmazonEC2Image/Squeeze
I have had this same problem. The AMI you are using is the one that is also used by the "Cloud Formation" templating solution.
In the end I gave up with that, and created a Red Hat instance. I was then able to connect by SSH fine using the user root.
The instructions here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html work fine using a Red Hat instance but not using an Amazon Linux instance. I assume they have some username that I didn't think to try (root, ec2-user, and many other obvious ones, all were refused)
Hope that helps someone!
I use a Debian AMI and I tried ec2-user and root, but the correct login is 'admin'.
I was getting the same error when I tried to create a new key pair and use that new pem/ppk file. I noticed that the Key Pair Name field on the instance was still the old one, and in poking around I found that, apparently, you can't change an instance's key pair. So I went back to the original key pair. Fortunately, I didn't delete anything, so this was easy enough.
Try an alternative SSH client, like Poderosa. It accepts pem files, so you will not need to convert the key file.
If you already have a key pair, follow these steps:
Convert *.pem to *.ppk using PuTTYgen (Load pem file key then Save ppk)
Add the ppk auth key file under PuTTY's SSH > Auth options
Enter the "Host Name (or IP address)" field: ubuntu@your-ip-address-of-ubuntu-ec2-host

Can I use SSH keys in something other than PuTTY (on Mac)?

Bluehost only recommends PuTTY. However, is it possible to use SSH keys without any extra, visible programs on a Mac?
I would like to have a connection to my server to be a breeze, so that I can control my server in Terminal.
Of course! On Unix and OS X, the ssh-keygen command will generate public and private keys for SSH public-key authentication. The usual way to invoke this command (on the client) is:
ssh-keygen -t rsa
This command will ask you where to place your private key; the default place is ~/.ssh/id_rsa, and the public key will be placed in the file of the same name with a .pub extension added (for example: ~/.ssh/id_rsa.pub). The command also asks you to create a password ("passphrase") for the private key; you can leave it blank for no password as I do, but I don't recommend this practice.
Once you have your public and private keys on the client computer, you need to make your server recognize that public key. If you have shell access to the server, you can upload the public key file with scp, then use ssh to run the following command on the server:
cat id_rsa.pub >> ~/.ssh/authorized_keys
If your hosting company doesn't give you shell access (though Bluehost does), or this procedure doesn't work, it will likely give you a web interface to the same functionality.
Once your server is set up to recognize your public key, it will allow you access without a password when ssh on the client tries to use your private key for authentication. You may still have to enter your private key's password, but typically you only need to do this once for each client login session.
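If your client machine has ssh-copy-id (it ships with OpenSSH on most Linux distributions and recent macOS versions, or can be installed via Homebrew), it does the upload-and-append step in one command; the user and host below are placeholders:
ssh-copy-id -i ~/.ssh/id_rsa.pub user@yourserver.example.com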
Sure, I do this all the time. Just follow these directions to generate an SSH key and copy it to your server. The instructions should work on both Mac and Linux.
SSHKeychain is pretty much ideal for this. It lives unobtrusively on the menu bar and integrates seamlessly with OS X's Keychain and SSH implementations.
You will need to use ssh-keygen as described in other answers, but once you've done that you can use SSHKeychain to avoid having to type your private key passphrase all the time.
OpenSSH should be available to you on OS X; open a terminal and check out "man ssh". SSH keys get stored (in a format different from PuTTY) in ~/.ssh. Having a config in ~/.ssh/config can make your life easier, too; you'll be able to say "Use this $SHORTNAME for this $HOST using this $KEY" and similar.
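A minimal ~/.ssh/config illustrating that idea, with placeholder names throughout:
Host myserver                        # the $SHORTNAME: now "ssh myserver" just works
    HostName server.example.com      # the real $HOST
    User deploy
    IdentityFile ~/.ssh/id_rsa_server   # the $KEY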
At the terminal prompt do
$ apropos ssh
You should get a list of all the programs Mac OS X comes with related to ssh.
Using the ssh* tools, your ssh keys will be stored under ~/.ssh. PuTTY is nice, but compared to the standard OpenSSH tools, it's really only useful on Windows systems.
Sure can! First run:
ssh-keygen
And go through the steps. It is a good idea to give it a passphrase and such. Then you can:
cat ~/.ssh/id_rsa.pub
and copy-paste the result into the Bluehost public key textarea.
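On a Mac you can also put the public key straight onto the clipboard instead of selecting it in the terminal:
pbcopy < ~/.ssh/id_rsa.pub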