I know this question has already been asked several times, but I have a different problem. There is a part in my script where I connect through ssh and scp, and every time I run the script it asks for the password. Most of you would probably answer that I should use expect or sshpass, yet I don't have either of those two. I tried running:
compgen -c
and neither expect nor sshpass shows up in the output.
Are there any alternative commands? I would really appreciate your help. Thanks
Update: I also can't install either of these, since I'm only an ordinary user.
First I logged in to server A as testuser and entered the following command:
ssh-keygen -d
Do not enter any passphrase.
This will generate files in the folder ~/.ssh/
Then scp the file id_dsa.pub (the public key) to server B:
scp ~/.ssh/id_dsa.pub testuser@B:/home/testuser/.ssh/authorized_keys2
Do the same in the other direction if you want access both ways. Then you can transfer from one server to the other without being asked for your password.
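If it still prompts for a password after this, the usual culprit is file permissions on server B; a quick sketch of the typical fix (assuming the paths above):
chmod 700 ~/.ssh                     # .ssh must not be group- or world-writable
chmod 600 ~/.ssh/authorized_keys2    # same for the authorized keys file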
If you don't want to set up keys for passwordless access (against the rules?), you can set up "SSH connection sharing".
Insert these lines into your .ssh/config file:
ControlMaster auto
ControlPath /tmp/ssh_%r@%n:%p
ControlPersist 8h
Now, when you log into a server from the machine with that config, it will ask for your password the first time, and won't ask again until 8 hours of idle time have passed (so you'll usually get asked once per day).
What it's doing is keeping the connection open in the background, and then reusing the same connection for all your SSH sessions. This gives a useful connect-speed boost, and means you don't need to re-authenticate. All-in-all, it's great for accelerating scripted SSH and SCP commands.
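For example (a sketch with a placeholder user@server), the first command opens the master connection and prompts for the password once; the later ones reuse it silently:
ssh user@server true                    # prompts for the password, opens the master connection
scp backup.tar.gz user@server:/tmp/     # reuses the master connection, no prompt
ssh user@server "uptime"                # same connection again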
I'm sorry to have to ask this question, but I feel like I've tried every answer so far on SO with no luck.
I have my local machine and my remote server. Jenkins is up and running on my server.
If I open up a terminal and do something like scp /path/to/file user@server:/path/to/wherever, then my ssh works fine without requiring a password.
If I run this command inside my Jenkins job, I get 'Host Key Verification Failed'.
So I know my SSH is working correctly the way I want, but why can't I get Jenkins to use this SSH key?
The interesting thing is, it did work fine when I first set up Jenkins and the key; then I think I restarted my local machine, or restarted Jenkins, and it stopped working. It's hard to say exactly what caused it.
I've also tried several options regarding ssh-agent and ssh-add but those don't seem to work.
I verified the local machine .pub is on the server in the /user/.ssh folder and is also in the authorized keys file. The folder is owned by user.
Any thoughts would be much appreciated and I can provide more info about my problem. Thanks!
Update:
Per Kenster's suggestion I did su - jenkins, then ssh server, and it asked me to add the host to known hosts. So I thought this was a step in the right direction, but the same problem persisted afterward.
Something I did not notice before: I can ssh server without a password when using my myUsername account. But if I switch to the jenkins user, it asks me for my password when I do ssh server.
I also tried ssh-keygen -R server as suggested to no avail.
Try
su jenkins
ssh-keyscan YOUR-HOSTNAME >> ~/.ssh/known_hosts
The SSH Slaves Plugin doesn't support ECDSA. The command above should add an RSA key for the ssh-slave.
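If you want to be sure only the RSA key is fetched, ssh-keyscan can be restricted by key type (YOUR-HOSTNAME is a placeholder, as above):
ssh-keyscan -t rsa YOUR-HOSTNAME >> ~/.ssh/known_hosts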
Host Key Verification Failed
ssh is complaining about the remote host key, not the local key that you're trying to use for authentication.
Every SSH server has a host key which is used to identify the server to the client. This helps prevent clients from connecting to servers which are impersonating the intended server. The first time you use ssh to connect to a particular host, ssh will normally prompt you to accept the remote host's host key, then store the key locally so that ssh will recognize the key in the future. The widely used OpenSSH ssh program stores known host keys in a file .ssh/known_hosts within each user's home directory.
In this case, one of two things is happening:
The user ID that Jenkins is using to run these jobs has never connected to this particular remote host before, and doesn't have the remote host's host key in its known_hosts file.
The remote host key has changed for some reason, and it no longer matches the key which is stored in the Jenkins user's known_hosts file.
You need to update the known_hosts file for the user which Jenkins uses to run these ssh operations: remove any old host key for this host from the file, then add the host's new host key. The simplest way is to use su or sudo to become the Jenkins user, then run ssh interactively to connect to the remote server:
$ ssh server
If ssh prompts you to accept a host key, say yes, and you're done. You don't even have to finish logging in. If it prints a big scary warning that the host key has changed, run this to remove the existing host from known_hosts:
$ ssh-keygen -R server
Then rerun the ssh command.
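If the job has to run unattended, the same two steps can be done non-interactively; a sketch, reusing the host name server from above:
ssh-keygen -R server                        # remove any stale key for this host
ssh-keyscan server >> ~/.ssh/known_hosts    # fetch and record the host's current key
Note that ssh-keyscan trusts whatever key the host presents at that moment, so check the fingerprint if the network isn't trusted.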
One thing to be aware of: you can't use a passphrase on a key that you're going to use with Jenkins, because Jenkins gives you no opportunity to enter one (seeing as it runs automated jobs with no human intervention).
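If you need to generate such a key for the Jenkins user, an empty passphrase can be given on the command line; a sketch (the file name jenkins_key is only an example):
ssh-keygen -t rsa -N "" -f ~/.ssh/jenkins_key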
I'm using rsync to back up our server to another one running an rsync daemon on our LAN, using the command
rsync -av /volume1/Public/ root@192.168.2.20:/shares/Backup/Public/
It's working great, except that it requires manual password entry, so I'd like to automate it with a key pair. Running ssh-keygen, I get the output below, where I hit return 3 times:
ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The ssh-copy-id script isn't on our system, so I used the line below to copy the public key to the backup destination server. I checked, and it copied correctly:
cat /root/.ssh/id_rsa.pub | ssh root@192.168.2.20 "cat >> /root/.ssh/authorized_keys"
As a test, I ssh to the destination server to ensure there's no longer a password prompt, but I'm still getting one:
DiskStation> ssh 192.168.2.20
root@192.168.2.20's password:
I'm not strong in unix, so I'm likely missing something obvious. Suggestions please?
Edit:
I followed up by adding the following settings to sshd_config, but still no luck:
RSAAuthentication yes
PubkeyAuthentication yes
Not sure if it matters, but the machine hosting the public key is a WD Sharespace, which is a Debian Lenny build.
The correct procedure for passwordless SSH is as follows:
Begin by executing the ssh-keygen command to generate a key
ssh-keygen
Once you have the key, you can copy it to the remote server. Use this command, which makes it easier:
ssh-copy-id user@host
The command assumes that you are using port 22 for ssh; if not, use the following, with xxxx being the port number:
ssh-copy-id "user#host -p xxxx"
See here for a detailed description of this command
In your case, when you are editing
/etc/ssh/sshd_config
Make sure you modify PasswordAuthentication from
PasswordAuthentication yes
to
PasswordAuthentication no
then restart sshd with
service sshd restart
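Before turning PasswordAuthentication off, it's worth confirming that key authentication already works, otherwise you can lock yourself out; a quick check (user and host are placeholders):
ssh -o PasswordAuthentication=no -o BatchMode=yes user@host "echo key auth works"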
Make sure the key is in your agent: ssh-add /path/to/private/key; otherwise you need to do ssh -i /path/to/key. Then make sure you're using ssh root@whatever. Then make sure the file was written to the remote node properly; try copying and pasting rather than your cat-and-pipe. And lastly, try restarting ssh on the remote and performing those steps again (so that PermitRootLogin takes effect).
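For the agent part, a minimal sketch assuming the default key path ~/.ssh/id_rsa:
eval "$(ssh-agent -s)"       # start an agent for this shell
ssh-add ~/.ssh/id_rsa        # load the private key into the agent
ssh root@192.168.2.20        # should now log in without a password prompt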
By the way, the fact that you are trying to avoid entering passwords and yet added a passphrase to the key makes this entire process pointless.
The target server is a relatively clean install of Ubuntu 14.04. I generated a new ssh key using ssh-keygen and added it to my server using ssh-copy-id. I also checked that the public key was in the ~/.ssh/authorized_keys file on the server.
Even still, I am prompted for a password every time I try to ssh into the server.
I noticed something weird, however. After I log into my first session using my password, subsequent concurrent sessions don't ask for a password; they seem to be using the ssh key properly. I've noticed this behaviour on two different clients (Mint and OS X).
Are you sure your SSH key isn't protected by a password? Try the following:
How do I remove the passphrase for the SSH key without having to create a new key?
If that's not the case, it may just be that ssh is having trouble locating your private key. Try using the -i flag to explicitly point out its location.
ssh -i /path/to/private_key username@yourhost.com
Thank you Samuel Jun for the link to help.ubuntu.com - SSH Public Key Login Troubleshooting!
Just a little caveat:
If you copy your authorized_keys file outside your encrypted home directory, please make sure your root install is encrypted as well (imho Ubuntu still allows an unencrypted root install coupled with encryption of the home directory).
Otherwise this defeats the whole purpose of using encryption in the first place ;)
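One way to keep the keys outside the encrypted home directory is to point sshd at a different location; a sketch for /etc/ssh/sshd_config (the /etc/ssh/authorized-keys/%u path is just an example; %u expands to the user name):
AuthorizedKeysFile /etc/ssh/authorized-keys/%u
Then put each user's public key in that file and restart sshd.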
If this is happening to you on Windows (I'm on Windows 10)
Try running the program you're using to ssh to the server as administrator.
For me, I was using PowerShell with scoop to install a couple of things so that I could ssh straight from it. Anyway, I ran PowerShell as admin and tried connecting again, and it didn't ask for my password.
For SELinux
Check the SE context with
% ls -dZ ~user/.ssh
It must contain unconfined_u:object_r:ssh_home_t:s0
If not, that is the problem; as root, run
# for i in ~user/.ssh ~user/.ssh/*
  do
    semanage fcontext -a -t ssh_home_t "$i"
  done
# restorecon -v -R ~user/.ssh
It looks like it's related to encryption of your home directory, and therefore the authorized_keys file cannot be read.
https://unix.stackexchange.com/a/238570
Make sure your ssh public key was copied to the remote host in the right format. If you open the key file in an editor, it should read as a single line.
Basically, just do ssh-copy-id username#remote. It will take care of the rest.
When I ssh to a machine, sometimes I get this warning and it prompts me to say "yes" or "no". This causes some trouble when running from scripts that automatically ssh to other machines.
Warning Message:
The authenticity of host '<host>' can't be established.
ECDSA key fingerprint is SHA256:TER0dEslggzS/BROmiE/s70WqcYy6bk52fs+MLTIptM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pc' (ECDSA) to the list of known hosts.
Is there a way to automatically say "yes" or ignore this?
Depending on your ssh client, you can set the StrictHostKeyChecking option to no on the command line, and/or send the key to a null known_hosts file. You can also set these options in your config file, either for all hosts or for a given set of IP addresses or host names.
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
EDIT
As @IanDunn notes, there are security risks to doing this. If the resource you're connecting to has been spoofed by an attacker, they could potentially replay the destination server's challenge back to you, fooling you into thinking that you're connecting to the remote resource while in fact they are connecting to that resource with your credentials. You should carefully consider whether that's an appropriate risk to take on before altering your connection mechanism to skip HostKeyChecking.
Reference.
Old question that deserves a better answer.
You can prevent interactive prompt without disabling StrictHostKeyChecking (which is insecure).
Incorporate the following logic into your script:
if [ -z "$(ssh-keygen -F $IP)" ]; then
ssh-keyscan -H $IP >> ~/.ssh/known_hosts
fi
It checks whether the server's public key is already in known_hosts. If not, it requests the public key from the server and adds it to known_hosts.
In this way you are exposed to a man-in-the-middle attack only once, which may be mitigated by:
ensuring that the script connects first time over a secure channel
inspecting logs or known_hosts to check fingerprints manually (to be done only once)
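For that manual check, ssh-keygen can list what is currently recorded; a sketch (the host key file name on the server may differ):
ssh-keygen -lf ~/.ssh/known_hosts                  # fingerprints of all recorded host keys
ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub     # run on the server, to compare against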
To disable host key checking (or control where it is disabled), add the following lines to the beginning of /etc/ssh/ssh_config...
Host 192.168.0.*
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
Options:
The Host pattern can be * to apply these settings to all hosts.
Edit /etc/ssh/ssh_config for global configuration or ~/.ssh/config for user-specific configuration.
See http://linuxcommando.blogspot.com/2008/10/how-to-disable-ssh-host-key-checking.html
Similar question on superuser.com - see https://superuser.com/a/628801/55163
Make sure ~/.ssh/known_hosts is writable. That fixed it for me.
The best way to go about this is to use 'BatchMode' in addition to 'StrictHostKeyChecking'. This way, your script will accept a new hostname and write it to the known_hosts file, but won't require yes/no intervention.
ssh -o BatchMode=yes -o StrictHostKeyChecking=no user@server.example.com "uptime"
This warning is issued because of a security feature; do not disable it.
It's only displayed once.
If it still appears after the second connection, the problem is probably with writing to the known_hosts file.
In this case you'll also get the following message:
Failed to add the host to the list of known hosts
You may fix it by changing the owner or the permissions of the file so that it is writable by your user:
sudo chown -v $USER ~/.ssh/known_hosts
Edit your config file, normally located at ~/.ssh/config, and at the beginning of the file add the lines below:
Host *
User your_login_user
StrictHostKeyChecking no
IdentityFile ~/my_path/id_rsa
User your_login_user tells ssh which user name to log in as on the remote host
StrictHostKeyChecking set to no will avoid the prompt
IdentityFile is the path to your private RSA key
This works for me and my scripts, good luck to you.
Ideally, you should create a self-managed certificate authority. Start with generating a key pair:
ssh-keygen -f cert_signer
Then sign each server's public host key:
ssh-keygen -s cert_signer -I cert_signer -h -n www.example.com -V +52w /etc/ssh/ssh_host_rsa_key.pub
This generates a signed public host key:
/etc/ssh/ssh_host_rsa_key-cert.pub
In /etc/ssh/sshd_config, point the HostCertificate to this file:
HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
Restart the sshd service:
service sshd restart
Then on the SSH client, add the following to ~/.ssh/known_hosts:
@cert-authority *.example.com ssh-rsa AAAAB3Nz...cYwy+1Y2u/
The above contains:
@cert-authority
The domain *.example.com
The full contents of the public key cert_signer.pub
The cert_signer public key will trust any server whose public host key is signed by the cert_signer private key.
Although this requires a one-time configuration on the client side, you can trust multiple servers, including those that haven't been provisioned yet (as long as you sign each server, that is).
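To double-check what was signed, the certificate can be inspected with ssh-keygen; a sketch using the path from above:
ssh-keygen -L -f /etc/ssh/ssh_host_rsa_key-cert.pub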
For more details, see this wiki page.
Do this -> chmod +w ~/.ssh/known_hosts. This adds write permission to the file at ~/.ssh/known_hosts. After that, the remote host will be added to the known_hosts file the next time you connect to it.
With reference to Cori's answer, I modified it and used the command below, which works. Without exit, the remaining command was actually logging in to the remote machine, which I didn't want in the script:
ssh -o StrictHostKeyChecking=no user@ip_of_remote_machine "exit"
Add these to your /etc/ssh/ssh_config
Host *
UserKnownHostsFile=/dev/null
StrictHostKeyChecking=no
Generally this problem occurs when you are modifying the keys very often. Depending on the server, it might take some time for the new key that you have generated and pasted on the server to take effect. After generating the key and pasting it on the server, wait for 3 to 4 hours and then try again. The problem should be solved. It happened to me.
The following steps are used to authenticate yourself to the host
Generate an ssh key. You will be asked to create a password for the key:
ssh-keygen -f ~/.ssh/id_ecdsa -t ecdsa -b 521
(above uses the recommended encryption technique)
Copy the key over to the remote host
ssh-copy-id -i ~/.ssh/id_ecdsa user@host
N.B. the user@host will be different for you. You will need to type in the password for this server, not the key's password.
You can now login to the server securely and not get an error message.
ssh user@host
All source information is in the ssh-keygen documentation.
For anyone who finds this and is simply looking to prevent the prompt on first connection, but still wants ssh to strictly check the key on subsequent connections (trust on first use), you can set StrictHostKeyChecking to accept-new in ~/.ssh/config, which will do what you're looking for. You can read more about it in man ssh_config. I strongly discourage disabling key checking altogether.
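A minimal sketch of that setting in ~/.ssh/config (supported in newer OpenSSH releases):
Host *
    StrictHostKeyChecking accept-new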
Run this on the host server; it's a permission issue:
chmod -R 700 ~/.ssh
I had the same error and wanted to draw attention to the fact that, as just happened to me, you might simply have the wrong privileges. You set up your .ssh directory as either the regular or the root user, and so you need to be that same user. When this error appeared, I was root but had configured .ssh as the regular user. Exiting root fixed it.
This is about establishing password-less authentication. If you run that command manually once, it will ask for the password and for confirmation of the host; once you accept, the host key is saved permanently, and it will never again ask you to type 'yes' or 'no'.
For me, the reason was that I had the wrong permissions on ~/.ssh/known_hosts.
I had no write permission on the known_hosts file, so it asked me again and again.
In my case, the host was unknown, and instead of typing yes to the question "Are you sure you want to continue connecting (yes/no/[fingerprint])?" I was just hitting Enter.
I solved the issue that gives the error below:
Error:
The authenticity of host 'XXX.XXX.XXX' can't be established.
RSA key fingerprint is 09:6c:ef:cd:55:c4:4f:ss:5a:88:46:0a:a9:27:83:89.
Solution:
1. Install any OpenSSH tool.
2. Run the ssh command.
3. It will ask whether you want to add this host; accept with yes.
4. The host will be added to the known hosts list.
5. Now you are able to connect to this host.
This solution works now.