SELinux prevents ssh with RSA key - ssh

I forgot that I had enabled SELinux on one of my web servers. So when I went to log into the host with my user account and ssh key, I was getting permission denied errors.
[TimothyDunphy@JEC206429674LM:~] #ssh bluethundr@web1.somedomain.com
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
Hmmm... So I consoled into the server and was able to log in. I tailed the audit logs, and this is what I saw:
type=USER_LOGIN msg=audit(1429981690.809:394593): pid=17074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login acct="bluethundr" exe="/usr/sbin/sshd" hostname=? addr=47.18.111.100 terminal=ssh res=failed'
In googling for the answer to this I got the advice to run this command:
[root@web1:~] #restorecon -R -v /home/bluethundr/.ssh
[root@web1:~] #
But when I go to login again, after doing that, I get the same result. Permission denied and the same error in the logs.
The only other thing I can think of is that the home directory for the user is mounted from an NFS share. Might there be some SELinux incantation I can use to allow SSH to a home directory on an NFS share?
Or maybe I'm missing something else?
Thanks,
Tim

If restorecon didn't work, I generally try audit2why and/or audit2allow to find out which policy is being violated. That's not to say that I apply the policy change suggestions that are generated, just that they lead to very good information for resolving the issue.
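As an illustration, a typical invocation pipes the relevant audit entries into the tool (assuming the audit log is in its default location; the module name passed to -m is arbitrary):
grep sshd /var/log/audit/audit.log | audit2why
grep sshd /var/log/audit/audit.log | audit2allow -m sshd_local   # prints a suggested policy module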

Bingo!!
When I ran audit2why -w this was the output I saw:
[root@web1:~] #grep ssh /var/log/audit/audit.log | audit2why -w
Was caused by:
The boolean use_nfs_home_dirs was set incorrectly.
Description:
Allow use to nfs home dirs
Allow access by executing:
# setsebool -P use_nfs_home_dirs 1
type=AVC msg=audit(1429983513.529:394784): avc: denied { read } for pid=19748 comm="sshd" name="authorized_keys" dev="0:40" ino=275968 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=file
So it looks like my hunch about it being NFS-related and your suggestion to use audit2why allowed me to crack the case!
[TimothyDunphy@JEC206429674LM:~/creds] #ssh bluethundr@web1.jokefire.com
Last login: Sat Apr 25 13:41:02 2015 from ool-2f126f64.dyn.optonline.net
[bluethundr#web1 ~]$
Bam!! It works. Thanks for your help!
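For anyone verifying the change, the boolean's state can be confirmed afterwards with getsebool (a quick sanity check, not part of the original exchange):
getsebool use_nfs_home_dirs
use_nfs_home_dirs --> on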

Related

SSH Public Key Failure

I ran into a fairly niche issue, and the fix may help others who haven't been able to solve SSH public key problems even after following all the normal troubleshooting tips.
System:
OpenSSH
RHEL 7
NFS home directory mounts
If you are using NFS home directory mounts, there is an SELinux setting that you need to enable to allow SSH with public keys.
The command to enable this is as follows:
setsebool -P use_nfs_home_dirs 1
This change will persist, so no worries about having to redo it after every restart.
Without this setting enabled, sshd will not be allowed to read the authorized_keys file, resulting in a public key authentication failure.
An easy way to see this issue is to run journalctl -f on the server and then attempt to SSH using public keys. You will see an error saying SELinux is preventing /usr/sbin/sshd from reading ~/.ssh/authorized_keys.
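If journalctl doesn't show the SELinux detail, the raw AVC denials can also be pulled from the audit log; for example (assuming auditd and the audit userspace tools are installed):
ausearch -m avc -c sshd -ts recent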
I hope this saves someone the headache I had.
The command to enable NFS home directories: setsebool -P use_nfs_home_dirs 1

Failed to setup SSH FTP in Ubuntu

After I installed SSH,
I tried to configure chroot, but then I got the following error when I tried to log in.
Feb 29 11:53:49 tng-ubuntu sshd[15314]: error: /dev/pts/2: No such file or directory
I'm not sure what happened. I tried many options and spent almost a whole afternoon on it, but I still don't know what the issue is.
Can someone help?
Subsystem sftp internal-sftp -l VERBOSE
Whenever I have the following section in my sshd_config, it fails.
I already tried changing /home to /home/%u or %h ...
Match Group sftponly
ChrootDirectory /home
AllowTcpForwarding no
X11Forwarding no
ForceCommand internal-sftp -l VERBOSE
Actually, my configuration works, but I was verifying it using an SSH login, and the SSH login failed. I still don't know why the SSH login fails, but anyway, my SFTP works.
I need to check further why the SSH login fails.
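For what it's worth, the /dev/pts error is typical when an interactive SSH session is forced into a chroot that has no device nodes; SFTP doesn't need a pty, which would explain why SFTP works while shell logins fail. A minimal sketch of the usual chroot prerequisites (paths taken from the question):
# every component of the ChrootDirectory path must be owned by root
# and not writable by group or other, or sshd refuses the chroot
chown root:root /home
chmod 755 /home
# interactive shells additionally need /dev/pts (and a shell) inside the chroot,
# which is why plain SSH logins fail with "/dev/pts/2: No such file or directory"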

pam_unix(sudo:auth): conversation failed, auth could not identify password for [username]

I'm using Ansible to provision my CentOS 7 production cluster. Unfortunately, executing the command below results in an Ansible timeout and a Linux Pluggable Authentication Modules (PAM) "conversation failed" error.
The same Ansible command works fine when executed against a virtual lab made out of Vagrant boxes.
Ansible Command
$ ansible master_server -m yum -a 'name=vim state=installed' -b -K -u lukas -vvvv
123.123.123.123 | FAILED! => {
"msg": "Timeout (7s) waiting for privilege escalation prompt: \u001b[?1h\u001b=\r\r"
}
SSHd Log
# /var/log/secure
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): conversation failed
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): auth could not identify password for [lukas]
I've found the problem. It turned out to be a problem with PAM's auth module! Let me describe how I got to the solution.
Context:
I set up my machine for debugging - that is, I had four terminal windows open.
1st terminal (local machine): Here, I was executing ansible production_server -m yum -a 'name=vim state=installed' -b -K -u username
2nd terminal (production server): Here, I executed journalctl -f (system wide log).
3rd terminal (production server): Here, I executed tail -f /var/log/secure (log for sshd).
4th terminal (production server): Here, I was editing vi /etc/pam.d/sudo file.
Every time I executed the command from the 1st terminal, I got these errors:
# ansible error - on local machine
Timeout (7s) waiting for privilege escalation prompt error.
# sshd error - on remote machine
pam_unix(sudo:auth): conversation failed
pam_unix(sudo:auth): [username]
I showed my entire setup to my colleague, and he told me that the error had something to do with "PAM". Frankly, it was the first time I had heard about PAM, so I had to read this PAM tutorial.
I figured out that the error relates to the auth interface in the /etc/pam.d/sudo configuration. Digging around the internet, I stumbled upon the pam_permit.so module with the sufficient control flag, which fixed my problem!
Solution
Basically, what I added was the auth sufficient pam_permit.so line to the /etc/pam.d/sudo file. Look at the example below.
$ cat /etc/pam.d/sudo
#%PAM-1.0
# Fixing ssh "auth could not identify password for [username]"
auth sufficient pam_permit.so
# Below is original config
auth include system-auth
account include system-auth
password include system-auth
session optional pam_keyinit.so revoke
session required pam_limits.so
session include system-auth
Conclusion:
I spent 4 days arriving at this solution. I stumbled upon over a dozen solutions that did not work for me, ranging from "duplicated sudo password in the ansible hosts/config file" and "LDAP-specific configuration" to getting advice from always-grumpy system admins!
Note:
Since I'm not an expert in PAM, I don't know whether this fix affects other aspects of the system, so be cautious about blindly copy-pasting this code! However, if you are an expert on PAM, please share alternative solutions or input with us. Thanks!
Assuming the lukas user is a local account, you should look at how the pam_unix.so module is declared in your system-auth pam file. But more information about the user account and pam configuration is necessary for a specific answer.
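For reference, on a stock CentOS 7 install the pam_unix.so declaration in /etc/pam.d/system-auth usually looks something like the following (exact options can vary by configuration):
auth        sufficient    pam_unix.so nullok try_first_pass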
While adding auth sufficient pam_permit.so is enough to gain access, using it in anything but the most insecure test environment is not recommended. From the pam_permit man page:
pam_permit is a PAM module that always permit access. It does nothing
else.
So adding pam_permit.so as sufficient for authentication in this manner will completely bypass the security for all users.
Found myself in the same situation, tearing my hair out. In my case, hidden toward the end of the sudoers file, there was the line:
%sudo ALL=(ALL:ALL) ALL
This undoes authorizations that come before it. If you're not using the sudo group then this line can safely be deleted.
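Before deleting it, it may be worth confirming whether any accounts actually rely on the sudo group; for example, using the username from this thread:
id -nG lukas | grep -w sudo   # prints the group list only if lukas is in the sudo group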
I had this error since upgrading sudo to version 1.9.4 with pacman. I hadn't noticed that pacman had provided a new sudoers file.
I just needed to merge /etc/sudoers.pacnew.
See here for more details: https://wiki.archlinux.org/index.php/Pacman/Pacnew_and_Pacsave
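One way to review and merge the two files (any diff tool works; visudo keeps the syntax check):
diff -u /etc/sudoers /etc/sudoers.pacnew   # review what the packaged file changed
visudo                                     # fold the changes in with a syntax check on save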
I know that this doesn't answer the original question (which pertains to a Centos system), but this is the top Google result for the error message, so I thought I'd leave my solution here in case anyone stumbles across this problem coming from an Arch Linux based operating system.
I got the same error when I tried to restart apache2 with sudo service apache2 restart
When I logged in as root I was able to see that the real error lay in the configuration of apache2. It turned out I had removed a site's SSL certificate files a few months ago but hadn't disabled the site in apache2. a2dissite did the trick.
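For reference, the disable-and-reload step looks roughly like this (the site name is hypothetical):
a2dissite broken-ssl-site.conf
systemctl reload apache2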

Ansible sudo run ("as root") on Cygwin

I need to run a bash script as a sudo user on remote hosts using Ansible. My working machine is Win10 + Cygwin (sorry, it wasn't my fault).
So, I tested it with non-sudo scripts (they don't need root access) - and it works.
Well, the first time it didn't work at all: Failed to connect to the host via ssh: my_user@server1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)
So, I used this: "ssh-keygen -t rsa" -> "ssh-copy-id my_user@server1" || "ssh-copy-id my_user@server2" under my_user: I created an SSH key and shared it with the remote hosts. After that I could run scripts as my_user on server1, server2 and so on...
Now I need to run sudo scripts, but I can't understand how that should work.
On Cygwin there is no root user, and I don't know how to generate an SSH key for a nonexistent user.
How do I run an Ansible playbook as root? remote_user: root fails with the error: Failed to connect to the host via ssh: my_user@server1: Permission denied. Look, it's my_user, not root. Does it run as my_user or as root?
Maybe I'm doing this all wrong. Is there a "best practice" way to run sudo scripts?
Please help me solve my problem.
It seems like authentication as root is disabled on the remote server.
In /etc/ssh/sshd_config find PermitRootLogin and set it to yes, but I wouldn't recommend doing that.
Actually, using the root user directly is bad practice.
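For completeness, that change would look something like this on the remote server (again, not recommended):
# /etc/ssh/sshd_config
PermitRootLogin yes
# then reload sshd, e.g. systemctl restart sshd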
Check the permissions for your my_user. Maybe you can grant it sudo rights without a password.
To do that, edit /etc/sudoers as root and find this line:
# Allow members of group sudo to execute any command
And after it add this:
my_user ALL=(ALL) NOPASSWD: ALL
After that you'll be able to execute any sudo command without a password on the remote machine.
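A quick way to confirm the entry took effect is to list the user's sudo privileges on the remote host (run as root):
sudo -l -U my_user   # should report: (ALL) NOPASSWD: ALL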
I did it! But what did I do, exactly?
So, the steps of the solution:
Set become: true in the playbook, about here:
- hosts: test_hosts
  become: true
  vars:
Next, run the playbook with the "-K" option: ansible-playbook ./your_playbook.yml -K
So, it works: it ran and even executed scripts under sudo.
But I still can't understand how to set which user is used as the "executing user".
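For reference, a minimal playbook along those lines might look like this (the host group and script path are hypothetical; become_user is the setting that controls which user the escalated tasks run as, and it defaults to root):
- hosts: test_hosts
  become: true
  become_user: root          # the user tasks run as after escalation
  tasks:
    - name: run a script with sudo
      script: ./my_script.sh # hypothetical script path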

SSH public key authorization does not work

When logging in with a public key from the client, a password is always requested, even though everything is apparently configured. It's a fresh Fedora 24 installation with the home disk copied from the previous install; the access rights on .ssh/authorized_keys are correct. ssh -vvv does not provide any valuable information.
Surprisingly, when sshd is started manually as sshd -dd to see the log messages, it sometimes works, but there seems to be a firewall problem or something similar.
So I edited /etc/ssh/sshd_config to enable logging and restarted the sshd service.
SyslogFacility AUTHPRIV
#LogLevel INFO
LogLevel VERBOSE
systemctl start sshd
After a login attempt I inspected the log, and this is what I saw:
AVC avc: denied { read } for pid=3111 comm="sshd" name="authorized_keys" dev="sda2" ino=697179 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:home_root_t:s0 tclass=file permissive=0
USER_AUTH pid=3111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=pubkey acct="myname" exe="/usr/sbin/sshd" hostname=? addr=192.168.56.102 terminal=ssh res=failed'
What is wrong?
I found the solution at https://bugzilla.redhat.com/show_bug.cgi?id=653140: it is an SELinux problem. It was probably caused by the home directory being copied from another machine.
restorecon -R -v ~/.ssh/
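To verify the fix, the file's SELinux context can be checked before and after; something along these lines (after restorecon the type should be ssh_home_t rather than home_root_t):
ls -Z ~/.ssh/authorized_keys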