How to disable GSSAPI authentication using PuTTY Plink?

I'm using plink.exe (from PuTTY) to run shell commands over SSH. Trying to authenticate via GSSAPI makes it slow (it "freezes" for ~7 seconds while trying), so I want to disable GSSAPI authentication.
Under PuTTY I can disable GSSAPI authentication and everything is fine (because I don't want to authenticate via GSSAPI).
How to disable GSSAPI authentication using plink.exe?

There's no command-line switch in Plink/PuTTY to disable GSSAPI.
All you can do is configure a stored session in the PuTTY GUI with GSSAPI disabled and use it in Plink via the -load switch.
plink -load "my session with disabled gssapi"
You can combine that with other command-line options, so you can create a stored session that does nothing but disable GSSAPI:
plink -load "disable gssapi" username#hostname

Related

Jenkins CLI who-am-i command always reporting anonymous

I have a user called "jenkins" that has an id_rsa.pub key in its configuration. When I attempt to run java -jar jenkins-cli.jar who-am-i, it always reports back:
Authenticated as: anonymous
Authorities:
This makes me think it's failing to authenticate and defaulting to anonymous.
Any ideas?
You'll need to specify that you want to connect via SSH and specify the username.
java -jar jenkins-cli.jar -s https://your-jenkins-server/jenkins/ -ssh -user "your-user" who-am-i
You will also have to enable the SSH server in Jenkins (Configure Global Security -> SSH Server). Official wiki article:
https://wiki.jenkins.io/display/JENKINS/Jenkins+SSH
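Depending on your jenkins-cli version, you may also be able to point the client at a specific private key with the -i switch (the key path here is just an example):
java -jar jenkins-cli.jar -s https://your-jenkins-server/jenkins/ -ssh -user "your-user" -i ~/.ssh/id_rsa who-am-i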

Can I pass RSA hostkey of server as PuTTY command line option?

Is there an option on the PuTTY command line to pass the server's RSA host key as an argument, similar to WinSCP's -hostkey?
PuTTY command currently used:
putty.exe -ssh -l username -pw password -m command.txt RemoteServerIP
Is there an option, as in WinSCP, where the RSA host key can be passed like below:
open sftp://username:password@RemoteServerIP/ -hostkey="ssh-rsa 2048 11:2c:5d:f5:22:22:ab:12:3a:be:37:1c:cd:f6:13:d1"
Also let me know if using PuTTY for this task is a bad choice.
Detailed explanation for those interested in the entire background:
I have developed a Django application to kick off some remote scripts
and get the task done. It uses PuTTY SSH to run commands in the
background via the subprocess module; parameters are passed from the
Django front end.
The problem I am facing: there are multiple users who will use this
application to kick off their scripts. The only requirement is that
they store the IP address and RSA key of the server in a config file
on the Django server.
Since all of the servers use RSA keys, the first login prompts to
confirm storing the RSA fingerprint. When we kick this off manually
from our local machine, we answer Yes the first time, and subsequent
runs don't ask for confirmation.
Since these scripts will be running from a Django server that users
won't have access to, is there a way I can still run the remote
scripts using PuTTY?
Please note I am aware of kicking off scripts using WinSCP, but
unfortunately in our environment I cannot kick off scripts from
WinSCP. I can FTP using WinSCP, though, and there I use the -hostkey
option so it does not prompt for confirmation.
There are several ways of dealing with SSH/SCP/SFTP host key verification.
One way is described in this answer to a similar question on ServerFault. Echo y or n depending on whether you do or don't want the key added to the cache in the registry. Redirect the error output stream to suppress the notification messages.
echo 'y' | plink -l USERNAME HOSTNAME 'COMMANDLINE' 2>$null # cache host key
echo 'n' | plink -l USERNAME HOSTNAME 'COMMANDLINE' 2>$null # do not cache host key
Note, however, that this will fail if you don't want to cache the key and use batch mode:
echo 'n' | plink -batch -l USERNAME HOSTNAME 'COMMANDLINE' # this won't work!
More importantly, this approach essentially disables host key verification, which exists to protect against man-in-the-middle attacks. That is to say, automatically accepting host keys from arbitrary remote hosts is NOT RECOMMENDED.
Better alternatives to automatically accepting arbitrary host keys would be:
Saving a PuTTY session for which you already validated the host key, so you can re-use it from plink like this:
plink -load SESSION_NAME 'COMMANDLINE'
Pre-caching the host key in the registry prior to running plink (a registry sketch follows this list). There is a Python script that can convert a key in OpenSSH known_hosts format to a registry file that you can import on Windows if you don't want to manually open a session and verify the fingerprint.
Providing the fingerprint of the server's host key when running plink:
$user = 'USERNAME'
$server = 'HOSTNAME'
$cmd = 'COMMANDLINE'
$fpr = 'fa:38:b6:f2:a3:...'
plink -batch -hostkey $fpr -l $user $server $cmd
All of these assume that you obtained the relevant information via a secure channel and properly verified it, of course.
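For the pre-caching alternative, note that PuTTY keeps its host key cache under HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\SshHostKeys. If you have already verified a key interactively on a trusted Windows machine, one sketch using the standard reg tool is to export that cache and import it under the account that runs plink:
reg export "HKCU\Software\SimonTatham\PuTTY\SshHostKeys" putty-hostkeys.reg
reg import putty-hostkeys.reg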
PuTTY also has a -hostkey switch, just with a slightly different syntax:
-hostkey 11:2c:5d:f5:22:22:ab:12:3a:be:37:1c:cd:f6:13:d1
And indeed, PuTTY is not the right tool to automate command execution.
Instead, use Plink (PuTTY command-line connection tool):
plink.exe -ssh -l username -pw password -hostkey aa:bb:cc:... hostname command

How to do other "stuff" in the same terminal where you establish an SSH tunnel

I often use an ssh tunnel. I open up one terminal to create the tunnel (e.g. ssh -L 1111:servera:2222 user@serverb). Then I open a new terminal to do my work. Is there a way to establish the tunnel in a terminal and somehow put it in the background so I don't need to open up a new terminal? I tried putting "&" at the end, but that didn't do the trick. The tunnel went into the background before I could enter the password. Then I did fg, entered the password and I was stuck in the ssh session.
I know one possible solution would be to use screen or tmux or something like that. Is there a simple solution I'm missing?
There are the -f and -N options exactly for that:
-f Requests ssh to go to background just before command execution. This is useful if
ssh is going to ask for passwords or passphrases, but the user wants it in the
background. This implies -n. The recommended way to start X11 programs at a remote
site is with something like ssh -f host xterm.
If the ExitOnForwardFailure configuration option is set to ``yes'', then a client
started with -f will wait for all remote port forwards to be successfully established
before placing itself in the background.
-N Do not execute a remote command. This is useful for just forwarding ports
(protocol version 2 only).
So the full command would be ssh -fNL 1111:servera:2222 user@serverb.
A way to prevent ssh from asking for the password is to use SSH public key authentication with an agent that either caches the passphrase or prompts for it using an external graphical program such as pinentry.
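A minimal key setup could look like this (a sketch assuming an OpenSSH client; serverb is the example host from the question):
ssh-keygen -t ed25519                # generate a key pair (once)
ssh-copy-id user@serverb             # install the public key on the server
eval "$(ssh-agent)" && ssh-add       # load the key into an agent for this shell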
It might also be useful for you to look into autossh, which will reconnect your SSH automatically if the connection drops.
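A typical autossh invocation for the same tunnel could look like this (a sketch assuming key-based authentication is already set up, since autossh cannot prompt for a password once in the background; the keepalive values are just common defaults):
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 1111:servera:2222 user@serverb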

Net::SSH::AuthenticationFailed: Authentication failed

From a workstation (Windows) I am trying to execute
knife ssh 'name:*' 'sudo chef-client'
But it shows an error message:
WARNING: Failed to connect to ******** – Net::SSH::AuthenticationFailed: Authentication failed for user ************
How do I solve this error?
Another question: how can I execute 'sudo chef-client' on all nodes from the workstation without using any passwords?
If you run knife ssh --help you'll get a list of available options. Try adding -VV for verbose output. That's usually helpful as it should tell you what user knife is trying to connect as.
My guess is you'll have to incorporate one or more of the ssh options (a few listed here):
-x, --ssh-user USERNAME
-i, --identity-file IDENTITY_FILE
-P, --ssh-password [PASSWORD] (will prompt if flag specified but no password is given)
The docs (https://docs.getchef.com/knife_ssh.html) also have some helpful examples
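For example, combining those options (the username and key path here are placeholders for your own):
knife ssh 'name:*' 'sudo chef-client' -x deploy-user -i ~/.ssh/deploy_key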
Your SSH authentication isn't working; fix that. Key-based authentication is easy to look up, but in general: put your public key in .ssh/authorized_keys on the nodes and set up your agent on your workstation.

How to use ansible with two factor authentication?

I have enabled two-factor authentication for ssh using duosecurity (using this playbook: https://github.com/CoffeeAndCode/ansible-duo).
How can I use Ansible to manage the server now? The SSH calls fail at gathering facts because of this. I want the person running the playbook to enter the two-factor code before the playbook is run.
Disabling two-factor auth for the deployment user is a possible solution but creates a security issue which I would like to avoid.
It's a hack, but you can tunnel a non-2fac Ansible SSH connection through a 2fac-enabled SSH connection.
Overview
We will set up two users: ansible will be the user Ansible will use. It should be authenticated in a way that's supported by Ansible (i.e., not 2fac). This user will be restricted so it cannot connect from anywhere but 127.0.0.1, making it inaccessible from outside the machine.
The second user, ansible_tunnel, will be open to the outside world, but will be authenticated by two factors and will only allow tunneling of SSH connections to the local machine.
You must be able to configure 2-factor authentication only for some users (not all).
Some info on SSH tunnels.
On the target machine:
Create two users: ansible and ansible_tunnel
Put your public key in ~/.ssh/authorized_keys of both users
Set the shell of ansible_tunnel to /bin/false, or lock the user - it will be used for tunneling exclusively, not running commands
Add the following to /etc/ssh/sshd_config:
AllowTcpForwarding no
AllowUsers ansible@127.0.0.1 ansible_tunnel
Match User ansible_tunnel
AllowTcpForwarding yes
PermitOpen 127.0.0.1:22
ForceCommand echo 'This account can only be used for tunneling SSH sessions'
Setup 2-factor authentication only for ansible_tunnel
Restart sshd
On the machine running Ansible:
Before running Ansible, run the following (on the Ansible machine, not the target):
ssh -N -L 8022:127.0.0.1:22 ansible_tunnel@<host>
You will be authenticated using two factors.
Once the tunnel is up (check with netstat), run Ansible with ansible_ssh_user=ansible, ansible_ssh_port=8022 and ansible_ssh_host=localhost.
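For example, a hypothetical inventory entry using those variables could look like this (the group and host names are illustrative):
[tunneled]
myserver ansible_ssh_host=localhost ansible_ssh_port=8022 ansible_ssh_user=ansible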
Recap
Only ansible_tunnel can connect from the outside, and it will be authenticated using two factors
Once the tunnel is set up, connecting to port 8022 on the local machine is the same as connecting to sshd on the remote machine
We're allowing ansible to connect over SSH only when the connection comes from localhost, so only tunneled connections are allowed
Scale
This will not scale well for multiple servers, due to the need to open a separate tunnel for each machine, which requires manual action. However, if you've chosen 2-factor authentication for your servers, you're already willing to do some manual work to connect to each one, and this solution only adds a little overhead with some script-wrapping, as sketched below.
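A sketch of such a wrapper (hostnames and the starting port are illustrative; each host gets its own local port, and -f backgrounds each ssh only after its 2FA prompt is answered):
port=8022
for host in web1.example.com web2.example.com; do
  ssh -f -N -L "${port}:127.0.0.1:22" "ansible_tunnel@${host}"
  port=$((port + 1))
done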
[EDITED TO ADD]
Bonus
For convenience, we may want to log into the maintenance account directly to do some manual work, without going through the process of setting up a tunnel. We can configure SSH to require 2fac authentication in this case, while maintaining the ability to connect without 2fac through the tunnel:
# All users must authenticate using two factors
AuthenticationMethods publickey,keyboard-interactive
# Allow both maintenance user and tunnel user with no restrictions
AllowUsers ansible ansible_tunnel
# The maintenance user is allowed to authenticate using a single factor only
# when connecting from a local address - it should be impossible to connect to
# this user using a single factor from the outside (the only way to do that is
# having an existing access to the machine, or use the two-factor tunnel)
Match User ansible Address 127.0.0.1
AuthenticationMethods publickey
I use Ansible with SSH and 2FA using the ControlMaster feature of ssh.
My local ssh client is configured to create a ControlPath socket for connection multiplexing, and Ansible is configured to use the same socket.
Local ssh client
This configuration enables multiplexing for all connections. I personally store it in ~/.ssh/config:
Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p.socket
ControlPersist 1m
When a connection is established, a socket appears in the $HOME/.ssh directory. The socket persists for one minute after disconnection.
Configure ansible
Ansible is configured to re-use the local socket.
Add this in your ansible configuration file (for instance, ~/.ansible.cfg):
[ssh_connection]
control_path=~/.ssh/master-%%r@%%h:%%p.socket
Note the doubled % signs, which escape the % characters in Ansible's configuration file.
Usage
Connect to your server using a regular ssh command (ssh user@server) and perform 2FA;
Launch your ansible command as usual.
Step 2 must be performed within the ControlPersist window, or keep an ssh connection open in one terminal while you launch the ansible command in another.
You can also force the connection to close when you no longer need it, using: ssh -O exit user@server.
Note that if you open a third terminal and run ssh user@server, you will not be asked for credentials: the connection established in step 1 will be re-used.
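You can also check explicitly whether the master connection is still alive before launching a long playbook:
ssh -O check user@server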
Drawbacks
In case of bad network conditions
Sometimes, when you lose the connection, the stale socket persists and every further connection hangs. You must manually close it using ssh -O exit user@server. This is the only known drawback of this method.
References:
Ansible parameter ANSIBLE_SSH_CONTROL_PATH
About ssh multiplexing (an older blog post that introduced me to it: https://blog.scottlowe.org/2015/12/11/using-ssh-multiplexing/)
Solution using a Bastion Host
Even using an ssh bastion host, it took me quite a while to get this working. In case it helps anyone else, here's what I came up with. It uses the ControlMaster ssh config options, and since Ansible uses regular ssh, it can be configured to use the same ssh features and re-use the connection to the bastion host regardless of how many connections it opens to remote hosts. I've seen these Control options recommended in general (presumably for performance reasons if you have a lot of hosts), but not in the context of 2FA to a bastion host.
With this approach you don't need any sshd config changes beyond the basics: you'll want AuthenticationMethods publickey,keyboard-interactive as the only authentication-method setting on the bastion server, and publickey alone for all the other servers that you proxy through the bastion to reach. Since the bastion host is the only one that accepts external connections from the internet, it's the only one that requires 2FA; internal hosts rely on agent forwarding for public key authentication and don't use 2FA.
On the client, I created a new ssh config file for my ansible environment in the top-level directory that I run ansible from (so a sibling of ansible.cfg) called ssh.config. It contains:
Host bastion-persistent-connection
HostName <bastion host>
ForwardAgent yes
IdentityFile ~/.ssh/my-key
ControlMaster auto
ControlPath ~/.ssh/ansible-%r@%h:%p
ControlPersist 10m
Host 10.0.*.*
ProxyCommand ssh -W %h:%p bastion-persistent-connection -F ./ssh.config
IdentityFile ~/.ssh/my-key
Then in ansible.cfg I have:
[ssh_connection]
ssh_args = -F ./ssh.config
A few things to note:
My private subnet in this case is 10.0.0.0/16, which maps to the Host wildcard above. The bastion proxies all ssh connections to servers on this subnet.
This is a bit brittle in that I can only run my ssh or ansible commands in this directory, because of the ProxyCommand passing the local path to this config file. Unfortunately I don't think there's an ssh variable that maps to the current config file being used so that I could pass the same config file to the ProxyCommand automatically. Depending on your environment it might be better to use an absolute path for this.
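For example, an absolute-path variant of that ProxyCommand line could look like this (the path is whatever your project directory happens to be):
ProxyCommand ssh -W %h:%p bastion-persistent-connection -F /home/me/project/ssh.config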
The one gotcha is that it makes running ansible more complex. Unfortunately, from what I can tell, ansible has no support whatsoever for 2FA. So if you have no existing ssh connection to the bastion, ansible will print Verification code: once for every private server it's connecting to, but it isn't actually listening for the input, so no matter what you do the connections will fail.
So I first run: ssh -F ssh.config bastion-persistent-connection
This creates the socket file in ~/.ssh/ansible-*, and the local ssh master process will close and remove that socket after the configured idle time (which I have set to 10m).
Once the socket is open I can run ansible commands like normal, e.g. ansible all -m ping and they succeed.
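If you want to confirm that the master connection is still alive before a run, you can query the control socket explicitly:
ssh -F ./ssh.config -O check bastion-persistent-connection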