How to forward a local keypair in an SSH session? - ssh

I manually deploy websites through SSH, and I manage source code on github/bitbucket. For every new site I'm currently generating a new keypair on the server and adding it to github/bitbucket, so that I can pull changes from the server.
I came across a feature in capistrano to use local machine's key pair for pulling updates to server, which is ssh_options[:forward_agent] = true
How can I do something like this and forward my local machine's keypair to the server I'm SSH-ing into, so that I can avoid adding keys to github/bitbucket for every new site?

This turned out to be very simple; the complete guide is here: Using SSH Forwarding
In essence, you need to create a ~/.ssh/config file, if it doesn't exist.
Then, add the hosts (either domain name or IP address) in the file and set ForwardAgent yes.
Sample Code:
Host example.com
ForwardAgent yes
Makes SSH life a lot easier.
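Once you're connected with forwarding enabled, a quick sanity check from the server is to authenticate against GitHub with the forwarded key (Bitbucket works the same way with git@bitbucket.org); this assumes your local key is already registered with your account:
ssh -T git@github.com
If it greets you by your username, git pull over SSH will use your local key instead of a per-server one.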

1. Create ~/.ssh/config
2. Fill it with (host address is the address of the host you want to allow creds to be forwarded to):
Host [host address]
ForwardAgent yes
3. If you haven't already run ssh-agent, run it:
ssh-agent
4. Take the output from that command and paste it into the terminal. This will set the environment variables that need to be set for agent forwarding to work. Optionally, you can replace steps 3 and 4 with:
eval "$(ssh-agent)"
5. Add the key you want forwarded to the ssh agent:
ssh-add [path to key if there is one]/[key_name].pem
6. Log into the remote host:
ssh -A [user]@[hostname]
7. From here, if you log into another host that accepts that key, it will just work:
ssh [user]@[hostname]
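One way to confirm the whole chain worked: once on the remote host, list the keys visible through the forwarded agent socket. It should show the key you added locally:
ssh-add -l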

To use it simply with the default identity (id_rsa) you can use the following couple of commands:
ssh-add
ssh -A [username]@[server-address]

The configuration file is very helpful, but the actual trick for agent forwarding is done by the ssh-add command. It seems this has to be triggered once before any remote connection, or after a restart of the computer. To permanently add the key, try the following solution from the user daminetreg:
Add private key permanently with ssh-add on Ubuntu
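As a sketch of an alternative (assuming OpenSSH 7.2 or newer): the AddKeysToAgent client option loads a key into the running agent automatically the first time it is used, which avoids re-running ssh-add after every restart. In ~/.ssh/config:
Host *
AddKeysToAgent yes
You'll be asked for the key's passphrase once, and the agent keeps it from then on.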

It is very useful:
ssh -i [private-key] -A [user]@[host]
You can set it up as one command in .bash_aliases or another shell startup file.
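For example, a hypothetical alias (the key path, user, and host are placeholders to replace with your own):
alias deploy='ssh -i ~/.ssh/deploy_key -A deployuser@example.com'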

Related

Access to jumpbox as normal user and change to root user in ansible

Here is my situation. I want to access a server through a jumpbox/bastion host.
So I will log in as a normal user on the jumpbox, then change user to root, and after that log in to the remote server as root. I don't have direct access to root on the jumpbox.
$ ssh user@jumpbox
user@jumpbox:~$ su - root
Enter Password:
root@jumpbox:~# ssh root@remoteserver
Enter Password:
root@remoteserver:~#
Above is the manual workflow. I want to achieve this in ansible.
I have seen something like this.
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@jumpbox"'
This does not work when we need to switch to root and log in to the remote server.
There are a few things to unpack here:
General Design / Issue:
This isn't an Ansible issue, it's an ssh issue/proxy misconfiguration.
A bastion host/ssh proxy isn't meant to be logged into and have commands run directly on it interactively (like su - root, enter password, then ssh...). That's not really a bastion, that's just a server you're logging into and running commands on. It's not an actual ssh proxy/bastion/jump role. At that point you might as well just run Ansible on the host.
That's why things like ProxyJump and ProxyCommand aren't working. They are designed to work with ssh proxies that are configured as ssh proxies (bastions).
Running Ansible Tasks as Root:
Ansible can run with sudo during task execution (it's called "become" in Ansible lingo), so you should never need to SSH as the literal root user with Ansible (you really shouldn't ever ssh as root).
Answering the question:
There are a lot of workarounds for this, but the straightforward answer here is to configure the jump host as a proper bastion and your issue will go away. An example...
1. As the bastion "user", create an ssh key pair, or use an existing one.
2. On the bastion, edit the user's ~/.ssh/config file to access the target server with the private key and desired user.
EXAMPLE user@bastion's ~/.ssh/config (I cringe seeing root here)...
Host remote-server
User root
IdentityFile ~/.ssh/my-private-key
3. Add the public key created in step 1 to the target server's ~/.ssh/authorized_keys file for the user you're logging in as.
After that type of config, your jump host is working as a regular ssh proxy. You can then use ProxyCommand or ProxyJump as you had tried to originally without issue.
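To tie it back to the question, a minimal sketch of the Ansible inventory variables could then look like this (user, host, and key names are placeholders; become covers the root part during task execution, per the point above):
ansible_user: deploy
ansible_ssh_common_args: '-o ProxyJump=user@jumpbox'
ansible_become: true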

Connecting to my remote site using git bash shell SSH

I can connect using these credentials through ftp but not through ssh.
Timothy#ement MINGW64 ~
$ ssh timothy@mywebsite.com
ssh: connect to host mywebsite.com port 22: Connection timed out
I'm sure this question has been asked a million times before. Does it have anything to do with ssh keys?
I'm using siteground and in the ssh/shell access area i've added this:
t r timothy#mywebsite.com KtV/T4QvP4K9n7Zki9n+ZWp6 0.0.0.0/0 - ALL Remove Key | Add IP | Private Key
any help would be appreciated. Thank you.
Does it have anything to do with ssh keys?
Yes: see the official SiteGround documentation How to use SSH.
You need to enable SSH access and register your public SSH key.
Then you can use ssh (provided in your <path-to-git>/usr/bin) in order to access:
ssh -p18765 <user>@yourdomain
SiteGround chooses to run its sshd on port 18765, not the default 22.
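To avoid typing the port every time, a small ~/.ssh/config entry can pin it (the Host alias and user are placeholders):
Host siteground
HostName yourdomain
User <user>
Port 18765
After that, plain ssh siteground picks up the right host, user, and port.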
The siteground tutorials are junk, two out of the three chat support staff I spoke with just referred me to the tutorials when I was attempting to make a connection to my siteground server over ssh.
These are the steps that finally worked:
From the cPanel Advanced section select SSH/Shell Access
Generate a new key using their utility (make note of the password you used for later use).
*** They have a tutorial that should allow you to create a private key on linux then upload the public key to their site. That is "not recommended" and I was unable to get that to work.
Once you have their key listed in the current keys table, click the Private Key link.
Copy the Private Key to a file in your local .ssh directory (make sure the file mode is 0600).
Run the following command:
ssh-add [path to the private key file you saved]
Enter the passphrase you used when generating the key with their utility.
If you get the response "Identity added: ..." you are all set.
You should now be able to use the command:
ssh <user>@yourdomain -p18765
It doesn't look like they have X11 forwarding enabled though so if you use ssh -X you will get:
X11 forwarding request failed on channel 0

Jenkins won't use SSH key

I'm sorry to have to ask this question, but I feel like I've tried every answer so far on SO with no luck.
I have my local machine and my remote server. Jenkins is up and running on my server.
If I open up a terminal and do something like scp /path/to/file user@server:/path/to/wherever then my ssh works fine without requiring a password.
If I run this command inside of my Jenkins job I get 'Host Key Verification Failed'
So I know my SSH is working correctly the way I want, but why can't I get Jenkins to use this SSH key?
Interesting thing is, it did work fine when I first set up Jenkins and the key, then I think I restarted my local machine, or restarted Jenkins, then it stopped working. It's hard to say exactly what caused it.
I've also tried several options regarding ssh-agent and ssh-add but those don't seem to work.
I verified the local machine .pub is on the server in the /user/.ssh folder and is also in the authorized keys file. The folder is owned by user.
Any thoughts would be much appreciated and I can provide more info about my problem. Thanks!
Update:
Per Kenster's suggestion I did su - jenkins, then ssh server, and it asked me to add the host to known hosts. So I thought this was a step in the right direction. But the same problem persisted afterward.
Something I did not notice before: I can ssh server without a password when using my myUsername account. But if I switch to the jenkins user, then it asks me for my password when I do ssh server.
I also tried ssh-keygen -R server as suggested to no avail.
Try
su jenkins
ssh-keyscan YOUR-HOSTNAME >> ~/.ssh/known_hosts
The SSH Slaves Plugin doesn't support ECDSA. The command above should add an RSA key for the ssh slave.
Host Key Verification Failed
ssh is complaining about the remote host key, not the local key that you're trying to use for authentication.
Every SSH server has a host key which is used to identify the server to the client. This helps prevent clients from connecting to servers which are impersonating the intended server. The first time you use ssh to connect to a particular host, ssh will normally prompt you to accept the remote host's host key, then store the key locally so that ssh will recognize the key in the future. The widely used OpenSSH ssh program stores known host keys in a file .ssh/known_hosts within each user's home directory.
In this case, one of two things is happening:
The user ID that Jenkins is using to run these jobs has never connected to this particular remote host before, and doesn't have the remote host's host key in its known_hosts file.
The remote host key has changed for some reason, and it no longer matches the key which is stored in the Jenkins user's known_hosts file.
You need to update the known_hosts file for the user which jenkins is using to run these ssh operations. You need to remove any old host key for this host from the file, then add the host's new host key to the file. The simplest way is to use su or sudo to become the Jenkins user, then run ssh interactively to connect to the remote server:
$ ssh server
If ssh prompts you to accept a host key, say yes, and you're done. You don't even have to finish logging in. If it prints a big scary warning that the host key has changed, run this to remove the existing host from known_hosts:
$ ssh-keygen -R server
Then rerun the ssh command.
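If you'd rather script it than log in interactively, something along these lines run on the Jenkins machine should have the same effect (server is a placeholder for your remote host; this assumes the jenkins user's ~/.ssh directory exists):
sudo -u jenkins -H sh -c 'ssh-keygen -R server; ssh-keyscan server >> ~/.ssh/known_hosts'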
One thing to be aware of: you can't use a passphrase when you generate a key that you're going to use with Jenkins, because it gives you no opportunity to enter such a thing (seeing as it runs automated jobs with no human intervention).

How can I force ssh to accept a new host fingerprint from the command line?

I'm getting the standard
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
error message. However, the system (Appworx) that executes the command (sftp I think, not that it matters) is automated and I can't easily accept the new key, even after checking with the third party vendor that it is a valid change. I can add a new shell script that I can execute from the same system (and user), but there doesn't seem to be a command or command-line argument that will tell ssh to accept the key. I can't find anything in the man page or on Google. Surely this is possible?
The answers here are terrible advice. You should never turn off StrictHostKeyChecking in any real-world system (e.g. it's probably okay if you're just playing on your own local home network – but for anything else don't do it).
Instead use:
ssh-keygen -R hostname
That will force the known_hosts file to be updated to remove the old key for just the one server that has updated its key.
Then when you use:
ssh user#hostname
It will ask you to confirm the fingerprint – as it would for any other "new" (i.e. previously unseen) server.
While common wisdom is not to disable host key checking, there is a built-in option in SSH itself to do this. It is relatively unknown, since it's new (added in Openssh 6.5).
This is done with -o StrictHostKeyChecking=accept-new.
WARNING: use this only if you absolutely trust the IP/hostname you are going to SSH to:
ssh -o StrictHostKeyChecking=accept-new mynewserver.example.com
Note, StrictHostKeyChecking=no will add the public key to ~/.ssh/known_hosts even if the key was changed.
accept-new is only for new hosts. From the man page:
If this flag is set to “accept-new” then ssh will automatically add new host keys to the user known hosts files, but will not permit connections to hosts with changed host keys. If this flag is set to “no” or “off”, ssh will automatically add new host keys to the user known hosts files and allow connections to hosts with changed hostkeys to proceed, subject to some restrictions.
If this flag is set to ask (the default), new host keys will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed.
The host keys of known hosts will be verified automatically in all cases.
Why is -o StrictHostKeyChecking=no evil?
When you do not check the host key you might land with an SSH session on a different computer (yes, this is possible with IP hijacking). A hostile server, which you don't own, can then be used to steal a password and all sorts of data.
Accepting a new unknown key is also pretty dangerous.
One should only do it if there is an absolute trust in the network or that the server was not compromised.
Personally, I use this flag only when I boot machines in a cloud environment with cloud-init immediately after the machine started.
Here's how to tell your client to trust the key. A better approach is to give it the key in advance, which I've described in the second paragraph. This is for an OpenSSH client on Unix, so I hope it's relevant to your situation.
You can set the StrictHostKeyChecking parameter. It has options yes, no, and ask. The default is ask. To set it system wide, edit /etc/ssh/ssh_config; to set it just for you, edit ~/.ssh/config; and to set it for a single command, give the option on the command line, e.g.
ssh -o "StrictHostKeyChecking no" hostname
An alternative approach if you have access to the host keys for the remote system is to add them to your known_hosts file in advance, so that SSH knows about them and won't ask the question. If this is possible, it's better from a security point of view. After all, the warning might be right and you really might be subject to a man-in-the-middle attack.
For instance, here's a command that will retrieve the key and add it to your known_hosts file (note that the host name has to be prepended for the entry to be valid):
echo "hostname $(ssh -o 'StrictHostKeyChecking no' hostname cat /etc/ssh/ssh_host_dsa_key.pub)" >> ~/.ssh/known_hosts
Since you are trying to automate this by running a bash script on the host that is doing the ssh-ing, and assuming that:
You don't want to ignore host keys because that's an additional security risk.
Host keys on the host you're ssh-ing to rarely change, and if they do there's a good, well-known reason such as "the target host got rebuilt"
You want to run this script once to add the new key to known_hosts, then leave known_hosts alone.
Try this in your bash script:
# Remove the old key
ssh-keygen -R "$target_host"
# Add the new key
ssh-keyscan "$target_host" >> ~/.ssh/known_hosts
You just have to remove the outdated fingerprint recorded for that server. Just type in the following and you'll be good to go :)
ssh-keygen -f "/home/your_user_name/.ssh/known_hosts" -R "server_ip"
Just adding the most 'modern' approach.
Like all other answers - this means you are BLINDLY accepting a key from a host. Use CAUTION!
HOST=hostname; ssh-keygen -R "$HOST" && ssh-keyscan -Ht ed25519 "$HOST" >> "$HOME/.ssh/known_hosts"
First remove any entry using -R, and then generate a hashed (-H) known_hosts entry which we append to the end of the file.
As with this answer prefer ed25519.
Get a list of SSH host IPs (or DNS names) and write them, one per line, to a file: ssh_hosts
Run a one-liner to populate ~/.ssh/known_hosts on the control node (I often do this to prepare target nodes for an Ansible run)
NOTE: Assume we prefer the ed25519 type of host key
# add the target hosts' key fingerprints
while read -r line; do ssh-keyscan -t ed25519 "$line" >> ~/.ssh/known_hosts; done < ssh_hosts
# add the SSH key's public part to the target hosts' authorized_keys file
while read -r line; do ssh-copy-id -i /path/to/key -f "user@$line"; done < ssh_hosts
ssh -o UserKnownHostsFile=/dev/null user@host
Add the following file:
~/.ssh/config
with this line as its content:
StrictHostKeyChecking no
This setting will make sure that ssh never asks for the fingerprint check again. Add it very carefully, as it is really dangerous: it disables host key verification for every host you connect to.

ssh: The authenticity of host 'hostname' can't be established

When I ssh to a machine, sometimes I get this warning and it prompts me to say "yes" or "no". This causes some trouble when running from scripts that automatically ssh to other machines.
Warning Message:
The authenticity of host '<host>' can't be established.
ECDSA key fingerprint is SHA256:TER0dEslggzS/BROmiE/s70WqcYy6bk52fs+MLTIptM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pc' (ECDSA) to the list of known hosts.
Is there a way to automatically say "yes" or ignore this?
Depending on your ssh client, you can set the StrictHostKeyChecking option to no on the command line, and/or send the key to a null known_hosts file. You can also set these options in your config file, either for all hosts or for a given set of IP addresses or host names.
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
EDIT
As @IanDunn notes, there are security risks to doing this. If the resource you're connecting to has been spoofed by an attacker, they could potentially replay the destination server's challenge back to you, fooling you into thinking that you're connecting to the remote resource while in fact they are connecting to that resource with your credentials. You should carefully consider whether that's an appropriate risk to take on before altering your connection mechanism to skip HostKeyChecking.
Reference.
Old question that deserves a better answer.
You can prevent interactive prompt without disabling StrictHostKeyChecking (which is insecure).
Incorporate the following logic into your script:
if [ -z "$(ssh-keygen -F "$IP")" ]; then
  ssh-keyscan -H "$IP" >> ~/.ssh/known_hosts
fi
It checks whether the public key of the server is already in known_hosts. If not, it requests the public key from the server and adds it to known_hosts.
In this way you are exposed to Man-In-The-Middle attack only once, which may be mitigated by:
ensuring that the script connects first time over a secure channel
inspecting logs or known_hosts to check fingerprints manually (to be done only once)
To disable (or control disabling), add the following lines to the beginning of /etc/ssh/ssh_config...
Host 192.168.0.*
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
Options:
The Host subnet can be * to allow unrestricted access to all IPs.
Edit /etc/ssh/ssh_config for global configuration or ~/.ssh/config for user-specific configuration.
See http://linuxcommando.blogspot.com/2008/10/how-to-disable-ssh-host-key-checking.html
Similar question on superuser.com - see https://superuser.com/a/628801/55163
Make sure ~/.ssh/known_hosts is writable. That fixed it for me.
The best way to go about this is to use 'BatchMode' in addition to 'StrictHostKeyChecking'. This way, your script will accept a new hostname and write it to the known_hosts file, but won't require yes/no intervention.
ssh -o BatchMode=yes -o StrictHostKeyChecking=no user@server.example.com "uptime"
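Note that BatchMode=yes also disables password and passphrase prompts, so this only works once key-based authentication is already set up.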
This warning is issued due to a security feature; do not disable it.
It's just displayed once.
If it still appears after second connection, the problem is probably in writing to the known_hosts file.
In this case you'll also get the following message:
Failed to add the host to the list of known hosts
You may fix it by changing the owner, or by changing the permissions of the file so that it is writable by your user.
sudo chown -v $USER ~/.ssh/known_hosts
Edit your config file, normally located at ~/.ssh/config, and at the beginning of the file add the lines below:
Host *
User your_login_user
StrictHostKeyChecking no
IdentityFile ~/my_path/id_rsa
User set to your_login_user says that these settings belong to your_login_user
StrictHostKeyChecking set to no will avoid the prompt
IdentityFile is the path to the RSA key (note: the private key, not the .pub file)
This works for me and my scripts, good luck to you.
Ideally, you should create a self-managed certificate authority. Start with generating a key pair:
ssh-keygen -f cert_signer
Then sign each server's public host key:
ssh-keygen -s cert_signer -I cert_signer -h -n www.example.com -V +52w /etc/ssh/ssh_host_rsa_key.pub
This generates a signed public host key:
/etc/ssh/ssh_host_rsa_key-cert.pub
In /etc/ssh/sshd_config, point the HostCertificate to this file:
HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
Restart the sshd service:
service sshd restart
Then on the SSH client, add the following to ~/.ssh/known_hosts:
@cert-authority *.example.com ssh-rsa AAAAB3Nz...cYwy+1Y2u/
The above contains:
@cert-authority
The domain *.example.com
The full contents of the public key cert_signer.pub
The cert_signer public key will trust any server whose public host key is signed by the cert_signer private key.
Although this requires a one-time configuration on the client side, you can trust multiple servers, including those that haven't been provisioned yet (as long as you sign each server, that is).
For more details, see this wiki page.
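If you want to sanity-check what you signed, ssh-keygen can print the certificate's contents (key ID, principals, validity window) on the server:
ssh-keygen -L -f /etc/ssh/ssh_host_rsa_key-cert.pub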
Do this -> chmod +w ~/.ssh/known_hosts. This adds write permission to the file at ~/.ssh/known_hosts. After that the remote host will be added to the known_hosts file when you connect to it the next time.
With reference to Cori's answer, I modified it and used the command below, which works. Without exit, the remaining commands were actually logging in to the remote machine, which I didn't want in a script:
ssh -o StrictHostKeyChecking=no user@ip_of_remote_machine "exit"
Add these to your /etc/ssh/ssh_config
Host *
UserKnownHostsFile=/dev/null
StrictHostKeyChecking=no
Generally this problem occurs when you are modifying the keys very often. Depending on the server, it might take some time for the new key that you have generated and pasted into the server to take effect. After generating the key and pasting it on the server, wait for 3 to 4 hours and then try. The problem should be solved. It happened to me.
The following steps are used to authenticate yourself to the host
Generate an ssh key. You will be asked to create a password for the key
ssh-keygen -f ~/.ssh/id_ecdsa -t ecdsa -b 521
(above uses the recommended encryption technique)
Copy the key over to the remote host
ssh-copy-id -i ~/.ssh/id_ecdsa user#host
N.B. the user@host will be different for you. You will need to type in the password for this server, not the key's password.
You can now login to the server securely and not get an error message.
ssh user#host
All source information is located here:
ssh-keygen
For anyone who finds this and is simply looking to prevent the prompt on first connection, but still wants ssh to strictly check the key on subsequent connections (trust on first use), you can set StrictHostKeyChecking to accept-new in ~/.ssh/config, which will do what you're looking for. You can read more about it in man ssh_config. I strongly discourage disabling key checking altogether.
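For reference, the corresponding ~/.ssh/config stanza is just the following (narrow the Host pattern if you don't want it to apply globally):
Host *
StrictHostKeyChecking accept-new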
Run this on the host server; it's a permissions issue:
chmod -R 700 ~/.ssh
I had the same error and wanted to draw attention to the fact that, as it just happened to me, you might simply have the wrong privileges. You've set up your .ssh directory as either the regular or the root user, and thus you need to be that same user. When this error appeared, I was root but had configured .ssh as the regular user. Exiting root fixed it.
This is trying to establish password-less authentication. So if you run that command manually once, it will ask you to provide the password there. After you enter it once, the host is remembered, and it will never again ask you to type 'yes' or 'no'.
For me the reason was that I had the wrong permissions on ~/.ssh/known_hosts. I had no write permission on the known_hosts file, so it asked me again and again.
In my case, the host was unknown, and instead of typing yes to the question "Are you sure you want to continue connecting (yes/no/[fingerprint])?" I was just hitting enter.
I solved the issue which gives the error below:
Error:
The authenticity of host 'XXX.XXX.XXX' can't be established.
RSA key fingerprint is 09:6c:ef:cd:55:c4:4f:ss:5a:88:46:0a:a9:27:83:89.
Solution:
1. Install any OpenSSH tool.
2. Run the ssh command to connect to the host.
3. It will ask whether you want to add this host.
4. Accept with yes. The host will be added to the known hosts list.
5. Now you are able to connect to this host.
This solution is working now.