Can I pass the RSA host key of a server as a PuTTY command-line option? - ssh

Is there an option on the PuTTY command line to pass the RSA host key as an argument, similar to WinSCP's -hostkey?
PuTTY command currently used:
putty.exe -ssh -l username -pw password -m command.txt RemoteServerIP
Is there an option, as in WinSCP, where the RSA host key can be passed like this:
open sftp://username:password@RemoteServerIP/ -hostkey="ssh-rsa 2048 11:2c:5d:f5:22:22:ab:12:3a:be:37:1c:cd:f6:13:d1"
Also let me know if using PuTTY for this task is a bad choice.
Detailed explanation for those interested in the entire background:
I have developed a Django application to kick off some remote scripts
and get the task done. It uses PuTTY SSH to run commands in the
background via the subprocess module; parameters are passed from the
Django front end.
The problem I am facing is that multiple users will use this
application to kick off their scripts. The only requirement is that
they store the IP address and RSA key of the server in a config file
on the Django server.
Since all of the servers use RSA keys, the first login asks to
confirm the RSA fingerprint storage prompt.
Usually, when we kick this off manually from our local machines, we
answer Yes the first time, and subsequent runs don't ask for
confirmation.
Since these scripts will be running from a Django server where users
won't have interactive access, is there a way I can still run the
remote scripts using PuTTY?
Please note I am aware of kicking off scripts using WinSCP, but
unfortunately in our environment I cannot kick off scripts from
WinSCP. I can, however, FTP using WinSCP, and I use the -hostkey
option there so it does not prompt for confirmation.

There are several ways of dealing with SSH/SCP/SFTP host key verification.
One way is described in this answer to a similar question on ServerFault. Echo y or n depending on whether you do or don't want the key added to the cache in the registry. Redirect the error output stream to suppress the notification messages.
echo 'y' | plink -l USERNAME HOSTNAME 'COMMANDLINE' 2>$null # cache host key
echo 'n' | plink -l USERNAME HOSTNAME 'COMMANDLINE' 2>$null # do not cache host key
Note, however, that this will fail if you don't want to cache the key and use batch mode:
echo 'n' | plink -batch -l USERNAME HOSTNAME 'COMMANDLINE' # this won't work!
Note, however, that this approach essentially disables host key verification, which was put in place to protect against man-in-the-middle attacks. That is to say, automatically accepting host keys from arbitrary remote hosts is NOT RECOMMENDED.
Better alternatives to automatically accepting arbitrary host keys would be:
Saving a PuTTY session for which you already validated the host key, so you can re-use it from plink like this:
plink -load SESSION_NAME 'COMMANDLINE'
Pre-caching the host key in the registry prior to running plink. There is a Python script that can convert a key in OpenSSH known_hosts format to a registry file that you can import on Windows if you don't want to manually open a session and verify the fingerprint.
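For reference, PuTTY keeps its cached host keys in the registry under HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\SshHostKeys, so you can inspect what is already cached before and after importing such a file, for example:
reg query HKCU\Software\SimonTatham\PuTTY\SshHostKeys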
Providing the fingerprint of the server's host key when running plink:
$user = 'USERNAME'
$server = 'HOSTNAME'
$cmd = 'COMMANDLINE'
$fpr = 'fa:38:b6:f2:a3:...'
plink -batch -hostkey $fpr -l $user $server $cmd
All of these assume that you obtained the relevant information via a secure channel and properly verified it, of course.
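One way to obtain that colon-separated fingerprint over a channel you already trust (for example, a console session on the server itself) is to ask OpenSSH for it directly; a sketch, assuming OpenSSH 6.8 or later for the -E option (older versions print the MD5 colon form by default with plain ssh-keygen -lf):
ssh-keygen -l -E md5 -f /etc/ssh/ssh_host_rsa_key.pub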

PuTTY also has a -hostkey switch, just with a slightly different syntax:
-hostkey 11:2c:5d:f5:22:22:ab:12:3a:be:37:1c:cd:f6:13:d1
And indeed, PuTTY is not the right tool to automate command execution.
Instead, use Plink (PuTTY command-line connection tool):
plink.exe -ssh -l username -pw password -hostkey aa:bb:cc:... hostname command
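Applied to the command line from the question, a rough equivalent would look like the following (the fingerprint is the sample one from the question; -batch makes Plink fail instead of hanging on any interactive prompt):
plink.exe -ssh -batch -l username -pw password -hostkey 11:2c:5d:f5:22:22:ab:12:3a:be:37:1c:cd:f6:13:d1 -m command.txt RemoteServerIP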

Related

Issue remoting into a device and doing a simple ping test with Ansible

After following instructions both online and in a couple of books, I am unsure of why this is happening. I have a feeling there is a missing setting, but here is the setup:
I am attempting to use the command:
ansible all -u $USER -m ping -vvvv
Obviously using the -vvvv for debugging, but not much output aside from the fact it says it's attempting to connect. I get the following error:
S4 | FAILED => FAILED: Authentication failed.
S4 stands for switch 4, a Cisco switch I am attempting to automate configuration and show commands on. I know 100% the password I set in the host_vars file is correct, as it works when I use it from a standard SSH client.
Here are my non-default config settings in the ansible.cfg file:
[defaults]
transport=paramiko
hostfile = ./myhosts
host_key_checking=False
timeout = 5
My myhosts file:
[cisco-switches]
S4
And my host_vars file for S4:
ansible_ssh_host: 192.168.1.12
ansible_ssh_pass: password
My current version is 1.9.1, running on a Centos VM. I do have an ACL applied on the management interface of the switch, but it allows remote connections from this particular IP.
Please advise.
Since you are using Ansible to automate commands on a Cisco switch, I guess you want to perform the SSH connection to the switch without being prompted for a password or being asked to press [Y/N] to confirm the connection.
To do that, I recommend configuring the Cisco IOS SSH server on the switch to perform RSA-based user authentication.
First of all, you need to generate an RSA key pair on your Linux box:
ssh-keygen -t rsa -b 1024
Note: You can use 2048 instead of 1024, but consider that some IOS versions will accept a maximum of 254 characters for an SSH public key.
On the switch side:
conf t
ip ssh pubkey-chain
username test
key-string
Copy the entire public key as it appears in cat ~/.ssh/id_rsa.pub,
including the ssh-rsa prefix and the username@hostname comment.
Please note that some IOS versions will accept
a maximum of 254 characters.
You can paste multiple lines.
exit
exit
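For the copy-and-paste step above, the public key can be printed on the Linux box like this (the comment at the end will be your own user@host):
cat ~/.ssh/id_rsa.pub
# prints one long line such as: ssh-rsa AAAAB3NzaC1yc2E... user@linuxbox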
If you need the 'test' user to be able to execute privileged IOS commands:
username test privilege 15 secret _TEXT_CLEAR_PASSWORD_
Then, test your connection from your Linux box in order to add the switch to the known_hosts file. This will only happen once for each switch/host not found in the known_hosts file:
ssh test@10.0.0.1
The authenticity of host '10.0.0.1 (10.0.0.1)' can't be established.
RSA key fingerprint is xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:d6:4b:d1:67.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.1' (RSA) to the list of known hosts.
ciscoswitch#
ciscoswitch#exit
Finally, test the connection using Ansible over SSH and the raw module, for example:
ansible inventory -m raw -a "show env all" -u test
I hope you find it useful.
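With RSA-based authentication in place, the host_vars file for S4 could then point at the key instead of a password; a sketch, using Ansible 1.9 variable names (the key path is an assumption, adjust it to wherever the private key generated above lives):
ansible_ssh_host: 192.168.1.12
ansible_ssh_user: test
ansible_ssh_private_key_file: ~/.ssh/id_rsa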

How can I force ssh to accept a new host fingerprint from the command line?

I'm getting the standard
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
error message. However, the system (Appworx) that executes the command (sftp I think, not that it matters) is automated and I can't easily accept the new key, even after checking with the third party vendor that it is a valid change. I can add a new shell script that I can execute from the same system (and user), but there doesn't seem to be a command or command-line argument that will tell ssh to accept the key. I can't find anything in the man page or on Google. Surely this is possible?
The answers here are terrible advice. You should never turn off StrictHostKeyChecking in any real-world system (e.g. it's probably okay if you're just playing on your own local home network – but for anything else don't do it).
Instead use:
ssh-keygen -R hostname
That will force the known_hosts file to be updated to remove the old key for just the one server that has updated its key.
Then when you use:
ssh user@hostname
It will ask you to confirm the fingerprint – as it would for any other "new" (i.e. previously unseen) server.
While common wisdom is not to disable host key checking, there is a built-in option in SSH itself to do this. It is relatively unknown, since it's fairly new (added in OpenSSH 7.6).
This is done with -o StrictHostKeyChecking=accept-new.
WARNING: use this only if you absolutely trust the IP/hostname you are going to SSH to:
ssh -o StrictHostKeyChecking=accept-new mynewserver.example.com
Note, StrictHostKeyChecking=no will add the public key to ~/.ssh/known_hosts even if the key was changed.
accept-new is only for new hosts. From the man page:
If this flag is set to “accept-new” then ssh will automatically add
new host keys to the user known hosts files, but will not permit
connections to hosts with changed host keys. If this flag
is set to “no” or “off”, ssh will automatically add new host keys
to the user known hosts files and allow connections to hosts with
changed hostkeys to proceed, subject to some restrictions.
If this flag is set to ask (the default), new host keys will be
added to the user known host files only after the user has confirmed
that is what they really want to do, and ssh will refuse to
connect to hosts whose host key has changed.
The host keys of known hosts will be verified automatically in all cases.
Why is -o StrictHostKeyChecking=no evil?
When you do not check the host key, you might land with an SSH session on a different computer (yes, this is possible with IP hijacking). A hostile server, which you don't own, can then be used to steal a password and all sorts of data.
Accepting a new unknown key is also pretty dangerous.
One should only do it if there is absolute trust in the network, or confidence that the server has not been compromised.
Personally, I use this flag only when I boot machines in a cloud environment with cloud-init, immediately after the machine has started.
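If you want the accept-new behaviour by default rather than per command, a minimal ~/.ssh/config sketch (assuming your OpenSSH is new enough to support accept-new):
Host *
    StrictHostKeyChecking accept-new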
Here's how to tell your client to trust the key. A better approach is to give it the key in advance, which I've described in the second paragraph. This is for an OpenSSH client on Unix, so I hope it's relevant to your situation.
You can set the StrictHostKeyChecking parameter. It has options yes, no, and ask. The default is ask. To set it system wide, edit /etc/ssh/ssh_config; to set it just for you, edit ~/.ssh/config; and to set it for a single command, give the option on the command line, e.g.
ssh -o "StrictHostKeyChecking no" hostname
An alternative approach if you have access to the host keys for the remote system is to add them to your known_hosts file in advance, so that SSH knows about them and won't ask the question. If this is possible, it's better from a security point of view. After all, the warning might be right and you really might be subject to a man-in-the-middle attack.
For instance, here's a one-liner that will retrieve the key and add it to your known_hosts file (the hostname has to be prepended so the line matches the known_hosts format):
echo "hostname $(ssh -o 'StrictHostKeyChecking no' hostname cat /etc/ssh/ssh_host_dsa_key.pub)" >> ~/.ssh/known_hosts
Since you are trying to automate this by running a bash script on the host that is doing the ssh-ing, and assuming that:
You don't want to ignore host keys because that's an additional security risk.
Host keys on the host you're ssh-ing to rarely change, and if they do there's a good, well-known reason such as "the target host got rebuilt"
You want to run this script once to add the new key to known_hosts, then leave known_hosts alone.
Try this in your bash script:
# Remove old key
ssh-keygen -R $target_host
# Add the new key
ssh-keyscan $target_host >> ~/.ssh/known_hosts
You just have to update the current fingerprint that's being sent from the server. Just type in the following and you'll be good to go :)
ssh-keygen -f "/home/your_user_name/.ssh/known_hosts" -R "server_ip"
Just adding the most 'modern' approach.
Like all other answers - this means you are BLINDLY accepting a key from a host. Use CAUTION!
HOST=hostname; ssh-keygen -R "$HOST" && ssh-keyscan -Ht ed25519 "$HOST" >> "$HOME/.ssh/known_hosts"
First remove any entry using -R, and then generate a hashed (-H) known_hosts entry which we append to the end of the file.
As in this answer, prefer ed25519.
Get a list of SSH host IPs (or DNS names) and output them to a file > ssh_hosts
Run a one-liner to populate ~/.ssh/known_hosts on the control node (often done to prepare target nodes for an Ansible run)
NOTE: Assume we prefer the ed25519 type of host key
# add the target hosts key fingerprints
while read -r line; do ssh-keyscan -t ed25519 $line >> ~/.ssh/known_hosts; done<ssh_hosts
# add the SSH key's public part to the target hosts' `authorized_keys` file
while read -r line; do ssh-copy-id -i /path/to/key -f user@$line; done<ssh_hosts
ssh -o UserKnownHostsFile=/dev/null user@host
Add the following file:
~/.ssh/config
with this line as its content:
StrictHostKeyChecking no
This setting will make sure that ssh never asks for the fingerprint check again.
Add it very carefully, as it is really dangerous: it skips host key verification for every host you connect to.

script to ssh to a unix server

It would be helpful if somebody could tell me how to connect to a Unix server using the username and password as arguments. My username and password are both "anitha".
How can I create a shell script which automatically connects to my Unix server with this username and password?
I guess you want to remotely connect to your *nix server over the network. Based on my guess:
To connect to a remote *nix server, everybody uses SSH:
ssh anitha@ip-to-unix-server
To automatically connect, write a simple bash wrapper around your ssh command and do something; this is not suggested, and you should use SSH passwordless login (aka public/private keys) instead:
#!/usr/bin/env bash
ip=172.16.0.1        # replace 172.16.0.1 with your unix server's IP
username=anitha      # your ssh username
password=anitha      # your ssh password
command=who          # what you want to run on the remote server
arguments=           # arguments for your command
expect -c "spawn ssh $username@$ip $command $arguments; expect \"assword:\"; send \"$password\r\"; interact"
To connect without typing a password, you need to use SSH passwordless login.
Use sshpass if you really need to use non-interactive keyboard-interactive authentication (pun intended) or better switch to using pubkey-based authentication.
Note that passing the password in clear to the ssh client is very lame as the password gets exposed in the publicly-readable process list where it can be read by anyone. sshpass works around this problem by creating a pseudo-terminal and communicating with the ssh client using it, so at least the password is not exposed at runtime.
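A minimal sshpass sketch under those caveats (the host and remote command are placeholders; -e reads the password from the SSHPASS environment variable so it never appears in the argument list, and -f FILE is a similar alternative):
SSHPASS='anitha' sshpass -e ssh anitha@ip-to-unix-server 'who'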
Step 1:
jsmith@local-host$ [Note: You are on local-host here]
jsmith@local-host$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/jsmith/.ssh/id_rsa):[Enter key]
Enter passphrase (empty for no passphrase): [Press enter key]
Enter same passphrase again: [Press enter key]
Your identification has been saved in /home/jsmith/.ssh/id_rsa.
Your public key has been saved in /home/jsmith/.ssh/id_rsa.pub.
The key fingerprint is:
33:b3:fe:af:95:95:18:11:31:d5:de:96:2f:f2:35:f9 jsmith@local-host
Step 2:
From local-host, run this one-liner for passwordless SSH connectivity.
cat ~/.ssh/id_rsa.pub | ssh useronanotherserver@anotherservername 'cat >> ~/.ssh/authorized_keys'
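If ssh-copy-id is available on local-host, it does the same job and also creates ~/.ssh and authorized_keys on the remote side if they are missing:
ssh-copy-id -i ~/.ssh/id_rsa.pub useronanotherserver@anotherservername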
You should use expect, which is an extension of tcl that was made specifically for automating login tasks.
For a basic SSH login example, see: could not able to spawn(ssh) using expect
For how to interact with the server programmatically after you have established the session, see: Expect Script to Send Different String Outputs

ssh: The authenticity of host 'hostname' can't be established

When I ssh to a machine, sometimes I get this warning, and it prompts me to say "yes" or "no". This causes some trouble when running from scripts that automatically ssh to other machines.
Warning Message:
The authenticity of host '<host>' can't be established.
ECDSA key fingerprint is SHA256:TER0dEslggzS/BROmiE/s70WqcYy6bk52fs+MLTIptM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pc' (ECDSA) to the list of known hosts.
Is there a way to automatically say "yes" or ignore this?
Depending on your ssh client, you can set the StrictHostKeyChecking option to no on the command line, and/or send the key to a null known_hosts file. You can also set these options in your config file, either for all hosts or for a given set of IP addresses or host names.
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
EDIT
As @IanDunn notes, there are security risks to doing this. If the resource you're connecting to has been spoofed by an attacker, they could potentially replay the destination server's challenge back to you, fooling you into thinking that you're connecting to the remote resource while in fact they are connecting to that resource with your credentials. You should carefully consider whether that's an appropriate risk to take on before altering your connection mechanism to skip HostKeyChecking.
Reference.
Old question that deserves a better answer.
You can prevent interactive prompt without disabling StrictHostKeyChecking (which is insecure).
Incorporate the following logic into your script:
if [ -z "$(ssh-keygen -F $IP)" ]; then
ssh-keyscan -H $IP >> ~/.ssh/known_hosts
fi
It checks whether the public key of the server is in known_hosts. If not, it requests the public key from the server and adds it to known_hosts.
In this way you are exposed to a man-in-the-middle attack only once, which may be mitigated by:
ensuring that the script connects the first time over a secure channel
inspecting logs or known_hosts to check fingerprints manually (to be done only once)
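For that manual check, one way is to compare fingerprints on both sides; a sketch, with file paths assuming a default OpenSSH layout:
# on the client: list the fingerprints recorded in known_hosts
ssh-keygen -lf ~/.ssh/known_hosts
# on the server, over a trusted channel (e.g. the console): print the real host key fingerprint
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub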
To disable (or control disabling), add the following lines to the beginning of /etc/ssh/ssh_config...
Host 192.168.0.*
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
Options:
The Host pattern can be * to apply these settings to all hosts.
Edit /etc/ssh/ssh_config for global configuration or ~/.ssh/config for user-specific configuration.
See http://linuxcommando.blogspot.com/2008/10/how-to-disable-ssh-host-key-checking.html
Similar question on superuser.com - see https://superuser.com/a/628801/55163
Make sure ~/.ssh/known_hosts is writable. That fixed it for me.
The best way to go about this is to use 'BatchMode' in addition to 'StrictHostKeyChecking'. This way, your script will accept a new hostname and write it to the known_hosts file, but won't require yes/no intervention.
ssh -o BatchMode=yes -o StrictHostKeyChecking=no user#server.example.com "uptime"
This warning is issued due to a security feature; do not disable it.
It's only displayed once.
If it still appears after the second connection, the problem is probably with writing to the known_hosts file.
In this case you'll also get the following message:
Failed to add the host to the list of known hosts
You may fix it by changing the owner, or changing the permissions of the file so that it is writable by your user.
sudo chown -v $USER ~/.ssh/known_hosts
Edit your config file, normally located at ~/.ssh/config, and at the beginning of the file add the lines below:
Host *
User your_login_user
StrictHostKeyChecking no
IdentityFile ~/my_path/id_rsa
User set to your_login_user tells ssh which user to log in as
StrictHostKeyChecking set to no will avoid the prompt
IdentityFile is the path to your RSA private key
This works for me and my scripts, good luck to you.
Ideally, you should create a self-managed certificate authority. Start with generating a key pair:
ssh-keygen -f cert_signer
Then sign each server's public host key:
ssh-keygen -s cert_signer -I cert_signer -h -n www.example.com -V +52w /etc/ssh/ssh_host_rsa_key.pub
This generates a signed public host key:
/etc/ssh/ssh_host_rsa_key-cert.pub
In /etc/ssh/sshd_config, point the HostCertificate to this file:
HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
Restart the sshd service:
service sshd restart
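To double-check what was signed (key ID, principals, validity period), the resulting certificate can be inspected with:
ssh-keygen -Lf /etc/ssh/ssh_host_rsa_key-cert.pub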
Then on the SSH client, add the following to ~/.ssh/known_hosts:
@cert-authority *.example.com ssh-rsa AAAAB3Nz...cYwy+1Y2u/
The above contains:
@cert-authority
The domain *.example.com
The full contents of the public key cert_signer.pub
The cert_signer public key will trust any server whose public host key is signed by the cert_signer private key.
Although this requires a one-time configuration on the client side, you can trust multiple servers, including those that haven't been provisioned yet (as long as you sign each server, that is).
For more details, see this wiki page.
Do this -> chmod +w ~/.ssh/known_hosts. This adds write permission to the file at ~/.ssh/known_hosts. After that the remote host will be added to the known_hosts file when you connect to it the next time.
With reference to Cori's answer, I modified it and used the command below, which works. Without exit, the remaining command was actually logging in to the remote machine, which I didn't want in the script.
ssh -o StrictHostKeyChecking=no user@ip_of_remote_machine "exit"
Add these to your /etc/ssh/ssh_config
Host *
UserKnownHostsFile=/dev/null
StrictHostKeyChecking=no
Generally this problem occurs when you are modifying the keys very often. Depending on the server, it might take some time for the new key that you have generated and pasted on the server to be picked up. So after generating the key and pasting it on the server, wait for 3 to 4 hours and then try. The problem should be solved. It happened to me.
The following steps are used to authenticate yourself to the host
Generate an SSH key. You will be asked to create a passphrase for the key:
ssh-keygen -f ~/.ssh/id_ecdsa -t ecdsa -b 521
(the above uses a recommended key type)
Copy the key over to the remote host
ssh-copy-id -i ~/.ssh/id_ecdsa user#host
N.B. the user@host will be different for you. You will need to type in the password for this server, not the key's passphrase.
You can now login to the server securely and not get an error message.
ssh user#host
All source information is located here:
ssh-keygen
For anyone who finds this and is simply looking to prevent the prompt on first connection, but still wants ssh to strictly check the key on subsequent connections (trust on first use), you can set StrictHostKeyChecking to accept-new in ~/.ssh/config, which will do what you're looking for. You can read more about it in man ssh_config. I strongly discourage disabling key checking altogether.
Run this on the host server; it's a permission issue:
chmod -R 700 ~/.ssh
I had the same error and wanted to draw attention to the fact that - as it just happened to me - you might just have the wrong privileges. You've set up your .ssh directory as either the regular or the root user, and thus you need to be the correct user. When this error appeared, I was root, but I had configured .ssh as the regular user. Exiting root fixed it.
This happens while establishing passwordless authentication. If you run that command manually once, it will ask for the password and the confirmation; after that, the host key is saved permanently, and it will never again ask you to type 'yes' or 'no'.
For me the reason was that I had the wrong permissions on ~/.ssh/known_hosts.
I had no write permission on the known_hosts file, so it asked me again and again.
In my case, the host was unknown, and instead of typing yes to the question "Are you sure you want to continue connecting (yes/no/[fingerprint])?" I was just hitting Enter.
I solved the issue that produces the error below:
Error:
The authenticity of host 'XXX.XXX.XXX' can't be established.
RSA key fingerprint is 09:6c:ef:cd:55:c4:4f:ss:5a:88:46:0a:a9:27:83:89.
Solution:
1. Install any OpenSSH tool.
2. Run the ssh command against the host.
3. It will ask whether you want to add this host.
4. Accept YES; the host will be added to the known hosts list.
5. Now you are able to connect to this host.
This solution is working now.

Disabling unidentified host confirmation when connecting to Amazon EC2 instances using SSH

I am writing a script using boto and Python to automatically launch an Amazon EC2 instance and interact with it using SSH. Everything works fine except that every time I establish the connection, SSH prompts me to confirm the authenticity of the host like this:
The authenticity of host 'ec2-174-129-121-25.compute-1.amazonaws.com (174.129.121.25)' can't be established.
RSA key fingerprint is 26:09:bd:21:4f:55:20:3f:0d:fc:5f:cc:3e:08:30:db.
Are you sure you want to continue connecting (yes/no)?
My SSH command is:
ssh -i ssh2.pem root@ec2-174-129-121-25.compute-1.amazonaws.com
Since every EC2 instance is a new host, I have to confirm this every time, but I want an automatic script without any user input. What is the best solution?
Use -o StrictHostKeyChecking=no and, optionally, set UserKnownHostsFile to /dev/null (if you want to be totally insecure about things). But remember, you're bypassing security measures meant to protect you!
Edit: and probably CheckHostIP=no too. See man ssh for all the gory bits.
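Applied to the command from the question, that looks something like this (again, only use it if you accept the reduced security):
ssh -i ssh2.pem -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@ec2-174-129-121-25.compute-1.amazonaws.com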
For PuTTY on Windows you can use:
echo y | plink -pw yourpassword root@yourservername.com