When logging in with a public key from the client, a password is always requested even though everything is apparently configured correctly. It's a fresh Fedora 24 installation with the home disk copied from a previous install; access rights to .ssh/authorized_keys are correct. ssh -vvv does not provide any valuable information.
Surprisingly, when sshd is started manually as sshd -dd to see the log messages, it sometimes works, but then there seems to be a firewall problem or similar.
So I edited /etc/ssh/sshd_config to enable verbose logging and restarted the sshd service.
SyslogFacility AUTHPRIV
#LogLevel INFO
LogLevel VERBOSE
systemctl restart sshd
After a login attempt I inspected the log and saw:
AVC avc: denied { read } for pid=3111 comm="sshd" name="authorized_keys" dev="sda2" ino=697179 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:home_root_t:s0 tclass=file permissive=0
USER_AUTH pid=3111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=pubkey acct="myname" exe="/usr/sbin/sshd" hostname=? addr=192.168.56.102 terminal=ssh res=failed'
What is wrong?
I found the solution at https://bugzilla.redhat.com/show_bug.cgi?id=653140: it is an SELinux problem. The cause was probably the home directory being copied from another machine. Restoring the SELinux context fixes it:
restorecon -R -v ~/.ssh/
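To double-check the labels, ls -Z shows the SELinux context on the files; on Fedora the authorized_keys file should end up with the ssh_home_t type rather than the home_root_t seen in the AVC above (a quick sanity check, not part of the original bug report):
ls -Z ~/.ssh ~/.ssh/authorized_keys    # inspect the current labels
restorecon -R -v ~/.ssh/               # relabel according to the loaded policy
ls -Z ~/.ssh/authorized_keys           # the type should now be ssh_home_t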
I ran into a fairly niche issue; the fix may help others who have followed all the normal tips for troubleshooting SSH public key issues but still haven't been able to solve them.
System:
OpenSSH
RHEL 7
NFS home directory mounts
If you are using NFS home directory mounts, there is an SELinux boolean that you need to enable to allow SSH with public keys.
The command to enable this is as follows:
setsebool -P use_nfs_home_dirs 1
This change will persist, so there is no need to repeat it after every restart.
Without this setting enabled in SELinux, sshd will not be given access to read the authorized_keys file when you SSH in, resulting in a public key authentication failure.
An easy way to see this issue is to run journalctl -f on the server and then attempt to SSH in using public keys. You will see an error saying SELinux is preventing /usr/sbin/sshd from reading ~/.ssh/authorized_keys.
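As a quick check before and after applying the boolean (assuming the standard SELinux userspace tools are installed), you can confirm the enforcement mode and the boolean's current value:
sestatus                      # confirm SELinux is enabled and enforcing
getsebool use_nfs_home_dirs   # should report the boolean as "on" once enabled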
I hope this saves someone the headache I had.
I'm facing weird behavior trying to run rsync with sudo through ssh with passwordless login.
This is something I do with dozens of servers, but I'm having this frustrating problem connecting to a couple of Ubuntu 18.04.4 servers.
PREMISE
Passwordless SSH from CLIENT to SERVER with account USER works nicely.
When I'm logged in on SERVER I can sudo everything with account USER.
On SERVER I've added the following to /etc/sudoers
user ALL=NOPASSWD:/usr/bin/rsync
Now, if I launch this simple test from machine CLIENT as user USER, I receive the following sudo error message:
$ ssh utente@192.168.200.135 -p 2310 sudo rsync
sudo: no tty present and no askpass program specified
Moreover, looking at the SERVER's /var/log/auth.log I found these errors:
sudo: pam_unix(sudo:auth): conversation failed
sudo: pam_unix(sudo:auth): auth could not identify password for [user]
I am not a PAM expert, but I tested the following solution and it works on Ubuntu 16.04.5 and 20.04.1.
NOTE: /etc/ssh/sshd_config is left at its default configuration.
$ sudo visudo -f /etc/sudoers.d/my_config_file
Add the line below:
my_username ALL=(ALL) NOPASSWD:ALL
Don't forget to restart sshd:
$ sudo systemctl restart sshd
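To verify it worked, a non-interactive sudo over ssh is a handy test, since sudo -n fails instead of prompting when a password would be required (the host, port, and user below are just placeholders matching the question):
$ ssh my_username@192.168.200.135 -p 2310 sudo -n true && echo "passwordless sudo OK"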
I've found a solution thanks to CentOS. In fact, because of the more complex configuration of /etc/sudoers on CentOS (compared to Ubuntu or Debian), I was forced to put my additional configuration into an external file in /etc/sudoers.d/ instead of putting it directly into /etc/sudoers.
SOLUTION:
Putting the additional configuration directly into /etc/sudoers wouldn't work.
Putting the needed additional settings in a file within the directory /etc/sudoers.d/ will work.
E.g., these are the config lines put in a file named /etc/sudoers.d/my_config_file:
Host_Alias MYSERVERHOST=192.168.1.135,localhost
# User that will execute Rsync with Sudo from a remote client
rsyncuser MYSERVERHOST=NOPASSWD:/usr/bin/rsync
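With that file in place, the original kind of test runs from the client without the password error (the user and host here just mirror my setup, adjust to yours). Sudo resolves the bare command name against its PATH, so a plain rsync matches the /usr/bin/rsync rule:
$ ssh rsyncuser@192.168.1.135 sudo rsync --version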
Why didn't /etc/sudoers work? It's unknown to me even after two days' worth of Internet searching. I find this very obscure and awful.
What follows is a quote from this useful article: https://askubuntu.com/a/931207
Unlike /etc/sudoers, the contents of /etc/sudoers.d survive system upgrades, so it's preferable to create a file there rather than to modify /etc/sudoers.
For editing any configuration file to be used by sudo, the visudo command is preferable.
i.e.
$ sudo visudo -f /etc/sudoers.d/my_config_file
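visudo also lets you validate the whole configuration without editing anything, which is a quick way to confirm that a file under /etc/sudoers.d/ was picked up and parses cleanly:
$ sudo visudo -c    # checks /etc/sudoers and every file in /etc/sudoers.d/ for syntax errors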
I had a similar problem on a custom Linux server, but the solution was similar to the answers above.
As soon as I removed the line your_user ALL=(ALL) NOPASSWD:ALL from /etc/sudoers, the errors were gone.
After I installed SSH, I tried to configure chroot, but then I got the following error when I tried to log in.
Feb 29 11:53:49 tng-ubuntu sshd[15314]: error: /dev/pts/2: No such file or directory
I'm not very sure what happened; I tried many, many options, spent almost a whole afternoon, and still don't know what the issue is.
Can someone help?
Subsystem sftp internal-sftp -l VERBOSE
Whenever I have the following section in my sshd_config, it fails.
I already tried changing /home to /home/%u or %h ...
Match Group sftponly
ChrootDirectory /home
AllowTcpForwarding no
X11Forwarding no
ForceCommand internal-sftp -l VERBOSE
Actually my configuration works, but I was verifying it using SSH login, and the SSH login failed. I still don't know why the SSH login failed, but anyway, SFTP works.
I need to check further why the SSH login failed.
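For what it's worth, the usual explanation for this split is that internal-sftp runs entirely inside the sshd process, while an interactive SSH login needs a shell, libraries, and device nodes such as /dev/pts/* inside the chroot, none of which exist under /home; hence the /dev/pts/2 error for SSH while SFTP keeps working. sshd is also strict about the chroot target itself; a minimal sketch of those requirements (assuming ChrootDirectory /home as above):
# the chroot directory and every path component above it must be root-owned
# and not writable by group or others, or sshd refuses the login outright
chown root:root /home
chmod 755 /home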
I'm using Ansible to provision my CentOS 7 production cluster. Unfortunately, executing the command below results in an Ansible timeout and a Linux Pluggable Authentication Modules (PAM) error: conversation failed.
The same Ansible command works well when executed against a virtual lab made out of Vagrant boxes.
Ansible Command
$ ansible master_server -m yum -a 'name=vim state=installed' -b -K -u lukas -vvvv
123.123.123.123 | FAILED! => {
"msg": "Timeout (7s) waiting for privilege escalation prompt: \u001b[?1h\u001b=\r\r"
}
SSHd Log
# /var/log/secure
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): conversation failed
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): auth could not identify password for [lukas]
I've found the problem. It turned out to be a problem with PAM's auth module! Let me describe how I got to the solution.
Context:
I set up my machine for debugging; that is, I had four terminal windows open.
1st terminal (local machine): Here, I was executing ansible production_server -m yum -a 'name=vim state=installed' -b -K -u username
2nd terminal (production server): Here, I executed journalctl -f (system wide log).
3rd terminal (production server): Here, I executed tail -f /var/log/secure (log for sshd).
4th terminal (production server): Here, I was editing vi /etc/pam.d/sudo file.
Every time I executed the command from the 1st terminal I got these errors:
# ansible error - on local machine
Timeout (7s) waiting for privilege escalation prompt error.
# sshd error - on remote machine
pam_unix(sudo:auth): conversation failed
pam_unix(sudo:auth): auth could not identify password for [username]
I showed my entire setup to my colleague, and he told me that the error had something to do with "PAM". Frankly, it was the first time I had heard about PAM. So, I had to read this PAM Tutorial.
I figured out that the error relates to the auth interface configured in the /etc/pam.d/sudo file. Digging around the internet, I stumbled upon the pam_permit.so module with the sufficient control flag, which fixed my problem!
Solution
Basically, what I added was the auth sufficient pam_permit.so line to the /etc/pam.d/sudo file. Look at the example below.
$ cat /etc/pam.d/sudo
#%PAM-1.0
# Fixing ssh "auth could not identify password for [username]"
auth sufficient pam_permit.so
# Below is original config
auth include system-auth
account include system-auth
password include system-auth
session optional pam_keyinit.so revoke
session required pam_limits.so
session include system-auth
Conclusion:
I spent 4 days arriving at this solution. I stumbled upon over a dozen solutions that did not work for me, ranging from "duplicate the sudo password in the ansible hosts/config file" and "LDAP-specific configuration" to getting advice from always-grumpy system admins!
Note:
Since I'm not an expert in PAM, I'm not aware whether this fix affects other aspects of the system, so be cautious about blindly copy-pasting this code! However, if you are a PAM expert, please share alternative solutions or input. Thanks!
Assuming the lukas user is a local account, you should look at how the pam_unix.so module is declared in your system-auth PAM file. But more information about the user account and PAM configuration is necessary for a specific answer.
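For example, a quick way to see how it is declared (a generic check, not specific to this box):
grep -n pam_unix /etc/pam.d/system-auth /etc/pam.d/password-auth
On a stock CentOS 7 install the auth entry is normally auth sufficient pam_unix.so nullok try_first_pass; if that line has been removed or reordered, local password checks from sudo fail in exactly this way.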
While adding auth sufficient pam_permit.so is enough to gain access, using it in anything but the most insecure test environment is not recommended. From the pam_permit man page:
pam_permit is a PAM module that always permit access. It does nothing else.
So adding pam_permit.so as sufficient for authentication in this manner will completely bypass the security for all users.
Found myself in the same situation, tearing my hair out. In my case, hidden toward the end of the sudoers file, there was the line:
%sudo ALL=(ALL:ALL) ALL
This overrides matching entries that come before it, because sudo applies the last matching rule. If you're not using the sudo group then this line can safely be deleted.
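If you do need to keep the group rule, another option (a sketch of the ordering, not a tested copy of anyone's exact file) is to put the more specific NOPASSWD entry after it, so that it becomes the last match:
%sudo     ALL=(ALL:ALL) ALL
your_user ALL=(ALL) NOPASSWD:ALL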
I had this error after upgrading sudo to version 1.9.4 with pacman. I hadn't noticed that pacman had provided a new sudoers file.
I just needed to merge /etc/sudoers.pacnew.
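If you're not sure whether a pending .pacnew exists, something along these lines will show it (pacdiff ships in the pacman-contrib package):
diff /etc/sudoers /etc/sudoers.pacnew   # see what the packaged file changed
pacdiff                                 # interactively merge or remove *.pacnew files
visudo -c                               # validate the merged result afterwards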
See here for more details: https://wiki.archlinux.org/index.php/Pacman/Pacnew_and_Pacsave
I know that this doesn't answer the original question (which pertains to a CentOS system), but this is the top Google result for the error message, so I thought I'd leave my solution here in case anyone stumbles across this problem coming from an Arch Linux based operating system.
I got the same error when I tried to restart apache2 with sudo service apache2 restart.
When logging in as root I was able to see that the real error lay with the configuration of apache2. It turned out I had removed a site's SSL certificate files a few months ago but didn't disable the site in apache2. a2dissite did the trick.
I forgot that I had enabled SELinux on one of my web servers. So when I went to log into the host with my user account and ssh key, I was getting permission denied errors.
[TimothyDunphy@JEC206429674LM:~] #ssh bluethundr@web1.somedomain.com
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
Hmmm... So I consoled into the server and was able to log in. I tailed the audit logs, and this is what I saw:
type=USER_LOGIN msg=audit(1429981690.809:394593): pid=17074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login acct="bluethundr" exe="/usr/sbin/sshd" hostname=? addr=47.18.111.100 terminal=ssh res=failed'
In googling for the answer to this I got the advice to run this command:
[root@web1:~] #restorecon -R -v /home/bluethundr/.ssh
[root@web1:~] #
But when I go to log in again after doing that, I get the same result: permission denied and the same error in the logs.
The only other thing I can think of is that the home directory for the user is mounted from an NFS share. Might there be some SELinux incantation I can use to allow SSH to a home directory on an NFS share?
Or maybe I'm missing something else?
Thanks,
Tim
If restorecon didn't work, I generally try audit2why and/or audit2allow to find out what policy is being violated. That's not to say that I apply the policy change suggestions that are generated, just that they provide very good information for resolving the issue.
Bingo!!
When I ran audit2why -w this was the output I saw:
[root@web1:~] #grep ssh /var/log/audit/audit.log | audit2why -w
Was caused by:
The boolean use_nfs_home_dirs was set incorrectly.
Description:
Allow use to nfs home dirs
Allow access by executing:
# setsebool -P use_nfs_home_dirs 1
type=AVC msg=audit(1429983513.529:394784): avc: denied { read } for pid=19748 comm="sshd" name="authorized_keys" dev="0:40" ino=275968 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=file
So it looks like my hunch about it being NFS-related, plus your suggestion to use audit2why, allowed me to crack the case!
[TimothyDunphy@JEC206429674LM:~/creds] #ssh bluethundr@web1.jokefire.com
Last login: Sat Apr 25 13:41:02 2015 from ool-2f126f64.dyn.optonline.net
[bluethundr#web1 ~]$
Bam!! It works. Thanks for your help!