SSH Public Key Failure - ssh

I ran into a fairly niche issue, and the fix may help others who haven't been able to solve their SSH public key problems even after following all the usual troubleshooting tips.
System:
OpenSSH
RHEL 7
NFS home directory mounts
If you are using NFS home directory mounts, there is an SELinux boolean that you need to enable to allow SSH public key authentication.
The command to enable this is as follows:
setsebool -P use_nfs_home_dirs 1
This change will persist, so no worries about having to redo it on every restart.
Without this boolean enabled, sshd will not be given access to read the authorized_keys file when you SSH in, resulting in a public key authentication failure.
An easy way to see this issue is to run journalctl -f on the server and then attempt to SSH in using public keys. You will see an error saying SELinux is preventing /usr/sbin/sshd from reading ~/.ssh/authorized_keys.
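If you want to verify the boolean before and after making the change, getsebool shows its current state (a quick sketch on RHEL 7; nothing here is host-specific):
getsebool use_nfs_home_dirs
If it reports use_nfs_home_dirs --> off, enable it persistently and check again:
setsebool -P use_nfs_home_dirs 1
getsebool use_nfs_home_dirs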
I hope this saves someone the headache I had.


Related

ssh and sudo: pam_unix(sudo:auth): conversation failed, auth could not identify password for [username]

I'm facing weird behavior trying to run rsync as sudo through ssh with passwordless login.
This is something I do with dozens of servers, but I'm having this frustrating problem connecting to a couple of Ubuntu 18.04.4 servers.
PREMISE
the passwordless SSH from CLIENT to SERVER with account USER works nicely
when I'm logged in to SERVER, I can sudo everything with account USER
On SERVER I've added the following to /etc/sudoers
user ALL=NOPASSWD:/usr/bin/rsync
Now, if I launch this simple test from machine CLIENT as user USER, I receive the following sudo error message:
$ ssh utente@192.168.200.135 -p 2310 sudo rsync
sudo: no tty present and no askpass program specified
Moreover, looking in the SERVER's /var/log/auth.log I found these errors:
sudo: pam_unix(sudo:auth): conversation failed
sudo: pam_unix(sudo:auth): auth could not identify password for [user]
I am not a PAM expert, but I tested the following solution and it works on Ubuntu 16.04.5 and 20.04.1.
NOTE: the configuration in /etc/ssh/sshd_config is left at its defaults
$ sudo visudo -f /etc/sudoers.d/my_config_file
and add the line below:
my_username ALL=(ALL) NOPASSWD:ALL
and don't forget to restart sshd
$ sudo systemctl restart sshd
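Before relying on the new file, it's worth letting visudo check the syntax, since a broken file under /etc/sudoers.d/ can lock you out of sudo (same file name as above):
$ sudo visudo -cf /etc/sudoers.d/my_config_file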
I've found a solution thanks to CentOS. In fact, because of the more complex configuration of /etc/sudoers on CentOS (compared to Ubuntu or Debian), I was forced to put my additional configuration into an external file in /etc/sudoers.d/ rather than directly into /etc/sudoers.
SOLUTION:
Putting additional configurations directly into /etc/sudoers wouldn't work
Putting the needed additional settings in a file within the directory /etc/sudoers.d/ will work
e.g., these are the config lines placed in a file named /etc/sudoers.d/my_config_file:
Host_Alias MYSERVERHOST=192.168.1.135,localhost
# User that will execute Rsync with Sudo from a remote client
rsyncuser MYSERVERHOST=NOPASSWD:/usr/bin/rsync
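To confirm sudo picks the rule up, you can list the user's privileges on SERVER (run as root; the user and host alias are the ones from the file above):
sudo -l -U rsyncuser
The output should include a NOPASSWD entry for /usr/bin/rsync.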
Why didn't /etc/sudoers work? It's unknown to me even after two days' worth of Internet searching. I find this very obscure and awful.
What follows is a quote from this useful article: https://askubuntu.com/a/931207
Unlike /etc/sudoers, the contents of /etc/sudoers.d survive system upgrades, so it's preferable to create a file there rather than to modify /etc/sudoers.
For editing any configuration file used by sudo, the visudo command is preferable.
i.e.
$ sudo visudo -f /etc/sudoers.d/my_config_file
I had a similar problem on a custom Linux server, but the solution was similar to the answers above.
As soon as I removed the line your_user ALL=(ALL) NOPASSWD:ALL from /etc/sudoers, the errors were gone.

Run ssh on Apache -> Failed to get a pseudo terminal: Permission denied

I'm using Flask with Apache (mod_wsgi).
When I use ssh as an external command via subprocess.call("ssh ......", shell=True)
(my Python Flask code, which itself is not the problem):
ssh = "sshpass -p \""+password+"\" ssh -p 6001 "+username+"@"+servername+" \"mkdir ~/MY_SERVER\""
subprocess.call(ssh, shell=True)
I get this error in the Apache error_log: Failed to get a pseudo terminal: Permission denied
How can I fix this?
I've had this problem under RHEL 7. It's due to SELinux blocking the apache user from accessing a pty. To solve it:
Disable SELinux or set it to permissive (check your security needs): edit /etc/selinux/config and reboot.
Allow apache to control its home directory so it can store SSH keys:
chown apache /usr/share/httpd
Then, as the apache user (sudo -u apache), ssh to the desired host once and accept the host key.
I think apache's login shell is "/sbin/nologin".
If you want to allow apache to run shell commands, modify /etc/passwd and change its login shell to another shell like "/bin/bash".
However, this method is a security risk. Many Python SSH modules are available on the Internet; use one of them instead.
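For instance, here is a minimal sketch using the third-party paramiko package (pip install paramiko; servername, username, and password are the same placeholder variables as in the question). Note that exec_command does not allocate a pseudo-terminal, which sidesteps the original error entirely:
import paramiko

client = paramiko.SSHClient()
# accept unknown host keys; in production, load a known_hosts file instead
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(servername, port=6001, username=username, password=password)
# run the remote command without requesting a pty
stdin, stdout, stderr = client.exec_command('mkdir ~/MY_SERVER')
print(stdout.read().decode(), stderr.read().decode())
client.close()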
What you are doing seems frightfully insecure. If you cannot use a Python library for your SSH connections, then you should at least plug the hole that is shell=True. There is very little here which is done by the shell anyway; doing it in Python affords you more control, and removes a big number of moving parts.
subprocess.call(['/usr/bin/sshpass', '-p', password,
                 '/usr/bin/ssh', '-T', '-p', '6001',
                 '{0}@{1}'.format(username, servername),
                 'mkdir ~/MY_SERVER'])
If you cannot hard-code the paths to sshpass and ssh, you should at least make sure you have a limited, controlled PATH variable in your environment before doing any of this.
The fix for Failed to get a pseudo-terminal is usually to add a -T flag to the ssh command line. I did that above. If your real code actually requires a tty (which mkdir obviously does not), perhaps experiment with -t instead, and/or redirecting standard input and standard output.
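If your real command genuinely needs a terminal, a sketch of that variant (same placeholder variables as above; -tt forces pty allocation even when the calling process, such as Apache, has no tty of its own):
subprocess.call(['/usr/bin/sshpass', '-p', password,
                 '/usr/bin/ssh', '-tt', '-p', '6001',
                 '{0}@{1}'.format(username, servername),
                 'sudo mkdir /opt/MY_SERVER'])  # e.g. a command that may insist on a tty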

Cannot ssh into server except with google dev console ssh

I cannot ssh from my computer into the server hosted on Google Cloud.
I tried the normal ssh-keygen with user@domain.com and uploading the public key, which worked last time, but this time it didn't. The issue started after I changed the password for the account. After that I could no longer ssh or sftp into the account, although my existing session stayed connected until I disconnected it.
I then tried gcloud compute ssh user@instance; it ran fine and told me the key just hasn't propagated yet.
I added AllowUsers user to the server's sshd config file and restarted ssh on the server, but still the same result.
Here's the error:
Permission denied (publickey).
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Update:
I've been working with Google tech support and this issue is still unresolved. The ownership and permissions of the authorized_keys file keep getting changed on boot to another user, who I also cannot log in as.
So I change them to:
thisUser:www-data 755
but on boot they change back to:
otherUser:otherUser 600
There are a couple of things you can do to fix this. You can take advantage of the metadata feature in GCE and add a startup script that will automatically change the permissions.
From the developers console, go to your Instance > Metadata and add a pair of Key/value
key : startup-script
value: chmod 755 /home/your_user/.ssh/authorized_keys OR chmod 755 ~/.ssh
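The same metadata can be set from the gcloud CLI (instance name and zone are placeholders):
gcloud compute instances add-metadata INSTANCE_NAME --zone ZONE --metadata startup-script='chmod 755 /home/your_user/.ssh/authorized_keys'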
After rebooting, you should check the Serial Output option further down that page to see if the script ran on startup. It should show you something along these lines:
startup script found in metadata.
startupscript: Running startup script /var/run/google.startup.script
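The serial output can also be fetched from the CLI (same placeholders as above):
gcloud compute instances get-serial-port-output INSTANCE_NAME --zone ZONE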
Further information can be found in Google's documentation on startup scripts.
Hope that helps!
I solved this by deleting the existing ssh key under Custom metadata in the VM settings. I could then log in over ssh.
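If you'd rather do that from the CLI, removing the instance-level ssh-keys metadata entry should be equivalent (instance name and zone are placeholders; project-wide keys then apply again):
gcloud compute instances remove-metadata INSTANCE_NAME --zone ZONE --keys ssh-keys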

SSH keys setup but still asking for password (but not for 2nd, 3rd, etc. sessions)

The target server is a relatively clean install of Ubuntu 14.04. I generated a new ssh key using ssh-keygen and added it to my server using ssh-copy-id. I also checked that the public key was in the ~/.ssh/authorized_keys file on the server.
Even still, I am prompted for a password every time I try to ssh into the server.
I noticed something weird, however. After I log into my first session using my password, subsequent concurrent sessions don't ask for a password; they seem to use the ssh key properly. I've noticed this behaviour on two different clients (Linux Mint and OS X).
Are you sure your SSH key isn't protected by a password? Try the following:
How do I remove the passphrase for the SSH key without having to create a new key?
If that's not the case, it may just be that ssh is having trouble locating your private key. Try using the -i flag to explicitly point out its location.
ssh -i /path/to/private_key username@yourhost.com
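If that works but the default key lookup still fails, running the client in verbose mode shows which keys it tries (same placeholder host as above):
ssh -v username@yourhost.com
Look for the "Offering public key" lines and the server's "Authentications that can continue" responses to see where the negotiation stops.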
Thank you Samuel Jun for the link to help.ubuntu.com - SSH Public Key Login Troubleshooting!
Just a little caveat:
If you copy your authorized_keys file outside your encrypted home directory, please make sure your root install is encrypted as well (IMHO Ubuntu still allows an unencrypted root install coupled with encryption of the home directory).
Otherwise this defeats the whole purpose of using encryption in the first place ;)
If this is happening to you on Windows (I'm on Windows 10)
Try running the program you're using to ssh to the server as administrator.
For me, I was using PowerShell with Scoop to install a couple of things so that I could ssh straight from it. Anyway, I ran PowerShell as admin and tried connecting again, and it didn't ask for my password.
For SELinux
Check the SELinux context with:
% ls -dZ ~user/.ssh
It must contain unconfined_u:object_r:ssh_home_t:s0
If not, that is the problem; as root, run:
# for i in ~user/.ssh ~user/.ssh/*
do
semanage fcontext -a -t ssh_home_t $i
done
# restorecon -v -R ~user/.ssh
It looks like it's related to encryption on your home directory and therefore the authorized_keys file cannot be read.
https://unix.stackexchange.com/a/238570
Make sure your ssh public key was copied to the remote host in the right format. If you open the key file in an editor, it should read as a single line.
Basically, just do ssh-copy-id username@remote. It will take care of the rest.
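A quick way to sanity-check a key file (path is an example) is to ask ssh-keygen for its fingerprint; it will complain if the file is malformed or wrapped across lines:
ssh-keygen -l -f ~/.ssh/id_rsa.pub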

Connecting to a remote CentOS server using SSH Keys

I am trying to connect to a CentOS 6.3 server using an SSH key so I can run a script remotely without it asking for a password every time. I have followed these instructions:
Log in to the server using the normal ssh command and password one time so the server adds your computer to the known hosts
On your computer, using the Cygwin terminal, generate the keys and leave the passphrase blank: ssh-keygen -t rsa
Now set permissions on your private key and ssh folder: chmod 700 ~/.ssh && chmod 600 ~/.ssh/id_rsa
Copy the public key (id_rsa.pub) to the server, log in to the server, and add the public key to the authorized_keys list: cat id_rsa.pub >> ~/.ssh/authorized_keys
Once you've imported the public key, you can delete it from the server. Set file permissions on the server: chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
Restart the ssh daemon on the server: service sshd restart
Test the connection from your computer: ssh root@198.61.220.107
But when I try to ssh to the remote server it still asks me for the password. The .ssh folder was not created on the server, so I had to create it myself. Any ideas of what might be happening? Did I miss something? Is there another way to set up the keys?
Well, it turns out I had stupidly changed the owner of the /root directory when I was setting up the server. Since that is where the .ssh directory lives for the user I was trying to log in as (root), sshd denied access to it because it belonged to another user:
Dec 10 16:25:49 thyme sshd[9121]: Authentication refused: bad ownership or modes for directory /root
I changed the owner back to root and that did it.
chown root /root
Thanks guys for your help.
Apparently this is a known bug. The suggested solution there doesn't actually work, but I found that this does on a CentOS 6.2 system at work:
chmod 600 .ssh/authorized_keys
chmod 700 .ssh
Although the OP had found a solution, I would like to record my solution to a similar problem in the hope that it will be helpful to those who google a similar problem and reach this answer.
The reason for my issue was that the .ssh directory in the user's home folder on the CentOS server was not given the proper mode after being created by the useradd command.
In addition, I needed to manually set the home and .ssh folder modes with the following commands:
chmod g-w /home/user
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/authorized_keys
Other answers are generic; note that CentOS 6 uses SELinux, and SELinux can deny access to the authorized_keys file despite correct permissions and ownership.
From the known issues in the CentOS 6 Release Notes:
Make sure that you setup correctly the selinux context of the public key if you transfer it to a CentOS 6 server with selinux enabled. Otherwise selinux might forbid access to the ~/.ssh/authorized_keys file and by matter of consequence key authentication will not work. In order to setup the correct context you can use:
restorecon -R -v /home/user/.ssh
ssh-copy-id from CentOS 6 is aware of SELinux contexts and the previous workaround is not needed.