Everything works fine, but it keeps prompting for the SSH passphrase during provisioning, which is very annoying: when you have 6 VMs, it will prompt you something like 12 times (and the whole automation piece rather loses its point).
I've tried searching the web, but couldn't find an answer to a pretty obvious question.
There are various ways to prevent this.
First of all, the most obvious (but least preferable) option is to remove the passphrase from the key:
ssh-keygen -p -P old_passphrase -N "" -f /path/to/key_file
The other possibility is to use ssh-agent, which holds the decrypted key in memory and performs the required operations with it when asked. You can find many guides and questions about it, but for completeness:
eval $(ssh-agent)
ssh-add /path/to/key_file
do-your-vagrant-stuff
You can use sshpass, which supplies the passphrase to the ssh commands. It can read the passphrase from a command-line argument, from an environment variable, or from a file (all of which can be insecure):
sshpass -p password your-vagrant-stuff
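For completeness, a sketch of the other two modes (hedged: by default sshpass matches a password prompt, so for a key passphrase you may need its -P option to change the expected prompt text):
# read the secret from the SSHPASS environment variable
SSHPASS=your_passphrase sshpass -e your-vagrant-stuff
# or read the first line of a file; keep its permissions tight (chmod 600)
sshpass -f /path/to/passphrase_file your-vagrant-stuff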
There are probably other ways, but you should most likely use ssh-agent.
Related
On Ubuntu 14.04 I have a private key in:
~/.ssh/id_rsa
I have installed the public key on the server I wish to connect to and indeed when I run the following, I do connect as expected:
ssh me@my-server-ip.com
I then deleted the private key on the client, but running the above command still connects me. This leads me to believe that the SSH binary is running in some kind of daemon mode wherein it is caching the private key in memory. Is that correct? Short of a reboot, how do I 'flush' SSH so it stops using the private key? Thanks
Run the following command after removing ~/.ssh/id_rsa
ssh-add -D
This command removes all cached SSH identities from the ssh-agent.
If you type ssh me@my-server-ip.com now, the password prompt will show.
You can check with ssh-add -L what identities the ssh-agent has cached.
I know I'm a little late to this party, but for the enlightenment of others...
It sounds like you have your private SSH key (identity) cached in ssh-agent. Now it is worth noting that ssh-agent does not retain the key cache over a reboot or logout/login cycle, although some systems depending on configuration may add your key during either of those processes. However, in your instance, a reboot or possibly a logout/login cycle would remove the private key from the agent's cache. This is because you have already removed the ~/.ssh/id_rsa file and it therefore cannot be re-initialized into the agent.
For everyone else who may not yet have deleted their ~/.ssh/id_rsa file(s), or if you don't want to reboot or log out/in right now, the following should prove useful.
First, remove any ~user/.ssh/id_rsa files that you no longer wish to be cached by ssh-agent.
Next, verify that there are, in fact, identities still being held open in 'ssh-agent' by running the following command:
ssh-add -L
This will list the public key parameters of all identities that the agent has actively cached. (Note: ssh-add -l will instead list the fingerprints of all keys/identities that are actively cached.) For each that you would like to remove you should run the following:
ssh-add -d /path/to/matching/public/key/file
If you just want to clear out ALL keys/identities from the agent then run this instead:
ssh-add -D
At this point, the key(s) you wanted removed will no longer be accessible to the agent, and with the actual identity file removed, there shouldn't be any way for an attempted remote SSH connection as that user to succeed without using a different authentication method, if one is configured/allowed.
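As a quick double-check, listing the agent's cache should now come back empty (the message below is OpenSSH's standard reply when the agent holds no keys):
$ ssh-add -l
The agent has no identities.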
The target server is a relatively clean install of Ubuntu 14.04. I generated a new ssh key using ssh-keygen and added it to my server using ssh-copy-id. I also checked that the public key was in the ~/.ssh/authorized_keys file on the server.
Even still, I am prompted for a password every time I try to ssh into the server.
I noticed something weird, however. After I log into my first session using my password, the next concurrent sessions don't ask for a password. They seem to be using the SSH key properly. I've noticed this behaviour on two different clients (Mint, OS X).
Are you sure your SSH key isn't protected by a password? Try the following:
How do I remove the passphrase for the SSH key without having to create a new key?
If that's not the case, it may just be that ssh is having trouble locating your private key. Try using the -i flag to explicitly point out its location.
ssh -i /path/to/private_key username@yourhost.com
Thank you Samuel Jun for the link to help.ubuntu.com - SSH Public Key Login Troubleshooting!
Just a little caveat:
If you copy your authorized_keys file outside your encrypted home directory, please make sure your root install is encrypted as well (IMHO Ubuntu still allows an unencrypted root install coupled with encryption of the home directory).
Otherwise this defeats the whole purpose of using encryption in the first place ;)
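For reference, a sketch of one common way to do that copy (the destination path is illustrative; AuthorizedKeysFile and its %u token are standard sshd_config directives):
# copy the keys to a location readable before the encrypted home is unlocked
sudo mkdir -p /etc/ssh/authorized-keys
sudo cp ~/.ssh/authorized_keys /etc/ssh/authorized-keys/$USER
# then point sshd at it in /etc/ssh/sshd_config:
#     AuthorizedKeysFile /etc/ssh/authorized-keys/%u
# and restart sshd, e.g. on Ubuntu:
sudo service ssh restart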
If this is happening to you on Windows (I'm on Windows 10)
Try running the program you're using to connect via ssh to the server as administrator.
For me, I was using PowerShell with scoop to install a couple of things so that I could ssh straight from it. Anyway, I ran PowerShell as admin and tried connecting again, and it didn't ask for my password.
For SELinux
Check the SELinux context with
% ls -dZ ~user/.ssh
Must contain unconfined_u:object_r:ssh_home_t:s0
If not, that was the problem. As root, run
# for i in ~user/.ssh ~user/.ssh/*; do
>   semanage fcontext -a -t ssh_home_t "$i"
> done
# restorecon -v -R ~user/.ssh
It looks like it's related to encryption of your home directory, which means the authorized_keys file cannot be read.
https://unix.stackexchange.com/a/238570
Make sure your SSH public key was copied to the remote host in the right format. If you open the key file in an editor, it should read as one line.
Basically, just do ssh-copy-id username@remote. It will take care of the rest.
I'm getting the standard
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
error message. However, the system (Appworx) that executes the command (sftp, I think, not that it matters) is automated, and I can't easily accept the new key, even after checking with the third-party vendor that it is a valid change. I can add a new shell script that I can execute from the same system (and user), but there doesn't seem to be a command or command-line argument that will tell ssh to accept the key. I can't find anything in the man page or on Google. Surely this is possible?
The answers here are terrible advice. You should never turn off StrictHostKeyChecking in any real-world system (e.g. it's probably okay if you're just playing on your own local home network – but for anything else don't do it).
Instead use:
ssh-keygen -R hostname
That will force the known_hosts file to be updated to remove the old key for just the one server that has updated its key.
Then when you use:
ssh user@hostname
It will ask you to confirm the fingerprint – as it would for any other "new" (i.e. previously unseen) server.
While common wisdom is not to disable host key checking, there is a built-in option in SSH itself to do this. It is relatively unknown, since it's fairly new (added in OpenSSH 7.6).
This is done with -o StrictHostKeyChecking=accept-new.
WARNING: use this only if you absolutely trust the IP/hostname you are going to SSH to:
ssh -o StrictHostKeyChecking=accept-new mynewserver.example.com
Note, StrictHostKeyChecking=no will add the public key to ~/.ssh/known_hosts even if the key was changed.
accept-new is only for new hosts. From the man page:
If this flag is set to “accept-new” then ssh will automatically add new host keys to the user known hosts files, but will not permit connections to hosts with changed host keys. If this flag is set to “no” or “off”, ssh will automatically add new host keys to the user known hosts files and allow connections to hosts with changed hostkeys to proceed, subject to some restrictions. If this flag is set to ask (the default), new host keys will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed. The host keys of known hosts will be verified automatically in all cases.
Why is -o StrictHostKeyChecking=no evil?
When you do not check the host key, you might land in an SSH session on a different computer (yes, this is possible with IP hijacking). A hostile server, which you don't own, can then be used to steal a password and all sorts of data.
Accepting a new unknown key is also pretty dangerous.
One should only do it if one absolutely trusts the network or is certain that the server has not been compromised.
Personally, I use this flag only when I boot machines in a cloud environment with cloud-init immediately after the machine started.
Here's how to tell your client to trust the key. A better approach is to give it the key in advance, which I've described in the second paragraph. This is for an OpenSSH client on Unix, so I hope it's relevant to your situation.
You can set the StrictHostKeyChecking parameter. It has options yes, no, and ask. The default is ask. To set it system wide, edit /etc/ssh/ssh_config; to set it just for you, edit ~/.ssh/config; and to set it for a single command, give the option on the command line, e.g.
ssh -o "StrictHostKeyChecking no" hostname
An alternative approach if you have access to the host keys for the remote system is to add them to your known_hosts file in advance, so that SSH knows about them and won't ask the question. If this is possible, it's better from a security point of view. After all, the warning might be right and you really might be subject to a man-in-the-middle attack.
For instance, here's a script that will retrieve the key and add it to your known_hosts file:
# known_hosts lines start with the host name, so prepend it to the fetched key
echo "hostname $(ssh -o 'StrictHostKeyChecking no' hostname cat /etc/ssh/ssh_host_dsa_key.pub)" >> ~/.ssh/known_hosts
Since you are trying to automate this by running a bash script on the host that is doing the ssh-ing, and assuming that:
You don't want to ignore host keys because that's an additional security risk.
Host keys on the host you're ssh-ing to rarely change, and if they do there's a good, well-known reason such as "the target host got rebuilt"
You want to run this script once to add the new key to known_hosts, then leave known_hosts alone.
Try this in your bash script:
# Remove the old key
ssh-keygen -R "$target_host"
# Add the new key
ssh-keyscan "$target_host" >> ~/.ssh/known_hosts
You just have to update the current fingerprint that's being sent from the server. Just type in the following and you'll be good to go :)
ssh-keygen -f "/home/your_user_name/.ssh/known_hosts" -R "server_ip"
Just adding the most 'modern' approach.
Like all other answers - this means you are BLINDLY accepting a key from a host. Use CAUTION!
HOST=hostname; ssh-keygen -R "$HOST" && ssh-keyscan -Ht ed25519 "$HOST" >> "$HOME/.ssh/known_hosts"
First remove any entry using -R, and then generate a hashed (-H) known_hosts entry which we append to the end of the file.
As with this answer, prefer ed25519.
Get a list of SSH host IPs (or DNS names) and output them to a file > ssh_hosts
Run a one-liner to populate ~/.ssh/known_hosts on the control node (I often do this to prepare target nodes for an Ansible run)
NOTE: Assume we prefer ed25519 type of host key
# add the target hosts' key fingerprints
while read -r line; do ssh-keyscan -t ed25519 "$line" >> ~/.ssh/known_hosts; done < ssh_hosts
# add the SSH key's public part to each target host's `authorized_keys` file
while read -r line; do ssh-copy-id -i /path/to/key -f "user@$line"; done < ssh_hosts
ssh -o UserKnownHostsFile=/dev/null user@host
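This points ssh at an empty, throwaway known_hosts file, so every host looks new and nothing is remembered between sessions. A hedged variant that also skips the confirmation prompt entirely (suitable only for disposable or fully trusted hosts):
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no user@host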
Add the following file
~/.ssh/config
with this as its content
StrictHostKeyChecking no
This setting will make sure that ssh never asks for the fingerprint check again. It should be added very carefully, as it is really dangerous: it disables host key verification for every host you connect to.
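If you go this route at all, a somewhat safer sketch is to scope the override to a single throwaway network in ~/.ssh/config instead of setting it globally (the address pattern below is illustrative):
Host 192.168.56.*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null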
My issue is that every time I have to log in to a given account on a Linux server (there are many), I have to go pull up a text file and look up the username and IP.
Example: "ssh some_user#xxx.xxx.xxx.x -pxxxxx"
I want to make my life a little easier by creating a shortcut, e.g. "ssh some_user"...
I searched and could not find an answer, likely not using the right terminology.
Thanks!
You can use the ssh client configuration file (.ssh/config). If you have to type ssh -p 1234 mylogin@my.server.with.a.long.name.com, you can populate your config file with
Host server
    HostName my.server.with.a.long.name.com
    User mylogin
    Port 1234
Then you can simply type ssh server and it will have the same effect. You can have as many entries in your .ssh/config file as you want and even use wildcards (*)
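For instance, a sketch of a wildcard entry (the pattern is illustrative) that applies one set of defaults to a whole domain:
Host *.long.name.com
    User mylogin
    Port 1234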
If you are using a recent version of bash, you can furthermore make use of the command_not_found_handle function:
command_not_found_handle () {
    if grep -q "host $1" ~/.ssh/config; then
        ssh "$@"
    else
        printf "Sorry: Command not found: %s\n" "$1"
        return 127
    fi
}
Then you can connect simply with
server
I don't know if I understood your problem correctly, but a proper ssh config file makes life muuuch easier. No IP, no domain, no password, not even a username.
See the man page: http://linux.die.net/man/5/ssh_config
I like things like ssh vm, or scp vm:... No more scp blablubb@192.168.226.xy:... plus a passphrase.
Also see ssh-keygen and ssh-copy-id for asymmetric key exchange; that will rid you of typing passwords.
Generally, I recommend reading an SSH tutorial.
How can you make SSH read the password from stdin, which it doesn't do by default?
Based on this post, you can do the following:
Create a command which opens an ssh session using SSH_ASKPASS (see SSH_ASKPASS in man ssh):
$ cat > ssh_session <<EOF
export SSH_ASKPASS="/path/to/script_returning_pass"
setsid ssh "your_user"@"your_host"
EOF
NOTE: To prevent ssh from trying to ask on the tty, we use setsid (ssh falls back to SSH_ASKPASS only when it has no controlling terminal and DISPLAY is set).
Create a script which returns your password (note the nested echo: the file must contain a command that echoes the password):
$ echo "echo your_ssh_password" > /path/to/script_returning_pass
Make them executable
$ chmod +x ssh_session
$ chmod +x /path/to/script_returning_pass
Try it:
$ ./ssh_session
Keep in mind that ssh stands for secure shell; if you store your user, host, and password in plain-text files, you are misleading the tool and creating a possible security gap.
You can use sshpass, which is available, for example, in the official Debian repositories. Example:
$ apt-get install sshpass
$ sshpass -p 'password' ssh username@server
You can't with most SSH clients. You can work around it by using an SSH API, like Paramiko for Python. Be careful not to overrule all security policies.
Distilling this answer leaves a simple and generic script:
#!/bin/bash
[[ $1 =~ password: ]] && cat || SSH_ASKPASS="$0" DISPLAY=nothing:0 exec setsid "$@"
Save it as pass, do a chmod +x pass and then use it like this:
$ echo mypass | pass ssh user@host ...
If its first argument contains password: then it passes its input to its output (cat); otherwise it launches whatever was presented, after setting itself as the SSH_ASKPASS program.
When ssh encounters both SSH_ASKPASS and DISPLAY set, it will launch the program referred to by SSH_ASKPASS, passing it the prompt user@host's password:
Reviving an old post...
I found this question while looking for a solution to the exact same problem; I found something, and I hope someone will one day find it useful:
Install ssh-askpass program (apt-get, yum ...)
Set the SSH_ASKPASS variable (export SSH_ASKPASS=/usr/bin/ssh-askpass)
From a terminal, open a new ssh connection detached from the controlling terminal, so ssh cannot prompt on it (setsid ssh user@host)
This looks simple enough to be secure, but I have not checked yet (I am just using it in a local, secure context).
Here we are.
The FreeBSD mailing list recommends the expect library.
If you need a programmatic ssh login, you really ought to be using public key logins, however -- obviously there are a lot fewer security holes this way as compared to using an external library to pass a password through stdin.
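If you do go the expect route anyway, here is a minimal hedged sketch driving ssh from the shell (the prompt text and names are illustrative):
# answer the password prompt once, then hand the session back to the user
expect -c 'spawn ssh user@host; expect "password:"; send "your_password\r"; interact'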
A better alternative to sshpass is passh:
https://github.com/clarkwang/passh
I had problems with sshpass: if the ssh server is not yet in my known_hosts, sshpass will not show me any message; passh does not have this problem.
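For reference, a hedged usage sketch mirroring the sshpass example above (check the project README or passh -h for the exact flags):
passh -p 'password' ssh username@server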
I'm not sure of the reason you need this functionality, but it seems you can get this behavior with ssh-keygen.
It allows you to log in to a server without using a password, by having a private RSA key on your computer and a public RSA key on the server.
http://www.linuxproblem.org/art_9.html
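A minimal sketch of that flow (hostnames are illustrative; see the linked article for details):
# generate a key pair on your computer (optionally protect it with a passphrase)
ssh-keygen
# install the public half on the server
ssh-copy-id user@server
# subsequent logins then use the key instead of the account password
ssh user@server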