Kerberos auth with Apache/PHP on CentOS 7

I am trying to configure Kerberos auth with Apache, with no success.
It looks like the problem is with the keytab file.
When I run:
klist -kte /path/to/website.HTTP.keytab
I get:
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
12 01/01/1970 02:00:00 HTTP/website.domain@DOMAIN
Then I run:
kinit -k -t /path/to/website.HTTP.keytab HTTP/website.domain@DOMAIN
kinit: Client 'HTTP/website.domain@DOMAIN' not found in Kerberos database while getting initial credentials
Any idea what's going wrong?

HTTP/website.domain@DOMAIN has an incomplete realm, which seems the most likely reason kinit can't find the principal in the Kerberos database. A fully-qualified principal, as an example, would look like this: HTTP/website.domain@DOMAIN.COM. To fix it, you will need to re-create the keytab using that fully-qualified syntax. Example:
ktpass -out HTTP.keytab -mapUser AD_Account_Name@DOMAIN.COM +rndPass -mapOp set +DumpSalt -crypto AES128-SHA1 -ptype KRB5_NT_PRINCIPAL -princ HTTP/website.domain@DOMAIN.COM
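After re-creating it, the keytab can be sanity-checked before wiring it into Apache; a quick verification, assuming the keytab path from the question:
klist -kte /path/to/website.HTTP.keytab
# a successful kinit here (no password prompt) means the key and KVNO match the KDC
kinit -kt /path/to/website.HTTP.keytab HTTP/website.domain@DOMAIN.COM
klist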

Related

pam_unix(sudo:auth): conversation failed, auth could not identify password for [username]

I'm using Ansible to provision my CentOS 7 production cluster. Unfortunately, executing the command below results in an Ansible timeout and a Linux Pluggable Authentication Modules (PAM) error, conversation failed.
The same Ansible command works well when executed against a virtual lab made of Vagrant boxes.
Ansible Command
$ ansible master_server -m yum -a 'name=vim state=installed' -b -K -u lukas -vvvv
123.123.123.123 | FAILED! => {
"msg": "Timeout (7s) waiting for privilege escalation prompt: \u001b[?1h\u001b=\r\r"
}
SSHd Log
# /var/log/secure
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): conversation failed
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): auth could not identify password for [lukas]
I found the problem. It turned out to be a problem with PAM's auth module! Let me describe how I got to the solution.
Context:
I set up my machine for debugging, that is, I had four terminal windows open.
1st terminal (local machine): Here, I was executing ansible production_server -m yum -a 'name=vim state=installed' -b -K -u username
2nd terminal (production server): Here, I executed journalctl -f (system wide log).
3rd terminal (production server): Here, I executed tail -f /var/log/secure (log for sshd).
4th terminal (production server): Here, I was editing vi /etc/pam.d/sudo file.
Every time I executed the command from the 1st terminal, I got these errors:
# ansible error - on local machine
Timeout (7s) waiting for privilege escalation prompt error.
# sshd error - on remote machine
pam_unix(sudo:auth): conversation failed
pam_unix(sudo:auth): auth could not identify password for [username]
I showed my entire setup to my colleague, and he told me that the error had something to do with "PAM". Frankly, it was the first time I had heard about PAM, so I had to read this PAM Tutorial.
I figured out that the error relates to the auth interface configured in the /etc/pam.d/sudo file. Digging around the internet, I stumbled upon the pam_permit.so module with the sufficient control flag, and that fixed my problem!
Solution
Basically, what I added was the auth sufficient pam_permit.so line to the /etc/pam.d/sudo file. Look at the example below.
$ cat /etc/pam.d/sudo
#%PAM-1.0
# Fixing ssh "auth could not identify password for [username]"
auth       sufficient   pam_permit.so
# Below is original config
auth       include      system-auth
account    include      system-auth
password   include      system-auth
session    optional     pam_keyinit.so revoke
session    required     pam_limits.so
session    include      system-auth
Conclusion:
I spent 4 days arriving at this solution. I stumbled upon over a dozen solutions that did not work for me, ranging from "duplicated sudo password in ansible hosts/config file" and "ldap specific configuration" to getting advice from always-grumpy system admins!
Note:
Since I'm not an expert in PAM, I don't know whether this fix affects other aspects of the system, so be cautious about blindly copy-pasting this code! However, if you are an expert on PAM, please share alternative solutions or input with us. Thanks!
Assuming the lukas user is a local account, you should look at how the pam_unix.so module is declared in your system-auth pam file. But more information about the user account and pam configuration is necessary for a specific answer.
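For reference, the stock auth stack in /etc/pam.d/system-auth on CentOS 7 looks roughly like the following (exact modules and arguments vary by install); if pam_unix.so is missing, misordered, or overridden here, local password authentication for sudo will fail:
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite    pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so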
While adding auth sufficient pam_permit.so is enough to gain access, using it in anything but the most insecure test environment is not recommended. From the pam_permit man page:
pam_permit is a PAM module that always permit access. It does nothing
else.
So adding pam_permit.so as sufficient for authentication in this manner will completely bypass the security for all users.
Found myself in the same situation, tearing my hair out. In my case, hidden toward the end of the sudoers file, there was the line:
%sudo ALL=(ALL:ALL) ALL
Because sudoers applies the last matching entry, this line overrides matching authorizations that come before it. If you're not using the sudo group then it can safely be deleted; otherwise, ordering matters, as sketched below.
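If you do need the sudo group, a sketch of the ordering fix (the user name is a placeholder): keep the more specific rule after the group rule, since the last matching entry wins.
%sudo     ALL=(ALL:ALL) ALL
# placed after the group rule so it takes precedence for this user
someuser  ALL=(ALL:ALL) NOPASSWD: ALL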
I had this error since upgrading sudo to version 1.9.4 with pacman. I hadn't noticed that pacman had provided a new sudoers file.
I just needed to merge /etc/sudoers.pacnew.
See here for more details: https://wiki.archlinux.org/index.php/Pacman/Pacnew_and_Pacsave
I know that this doesn't answer the original question (which pertains to a Centos system), but this is the top Google result for the error message, so I thought I'd leave my solution here in case anyone stumbles across this problem coming from an Arch Linux based operating system.
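For the merge step, pacdiff from the pacman-contrib package automates locating and resolving .pacnew files; a minimal session, assuming it is installed:
# interactively view, merge, or discard every *.pacnew file on the system
sudo pacdiff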
I got the same error when I tried to restart apache2 with sudo service apache2 restart.
When logging in as root I was able to see that the real error lay with the configuration of apache2. It turned out I had removed a site's SSL certificate files a few months ago but didn't disable the site in apache2. a2dissite did the trick.

Beeline can't find private method "getKeytab"

I am trying to connect to Hive with Kerberos authentication using beeline. I have initialized a ticket with
kinit -V --kdc-hostname=<HOSTNAME> -kt /etc/krb5.keytab <USER@REALM>
and I can see it is active when I run klist but when I try to connect to Hive, I get the well known error message:
SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
I changed the log4j level to debug, and found the following:
DEBUG HiveAuthFactory: Cannot find private method "getKeytab" in class:org.apache.hadoop.security.UserGroupInformation
and after this, beeline is trying to use my unix username to authenticate, which is obviously failing. So I think the problem is that beeline doesn't find my keytab file.
Most probably the problem is with the beeline command.
Make sure you provide the authentication parameter correctly and put double quotes around the connection string.
beeline -u "jdbc:hive2://HOSTNAME:10000/default;principal=hive/hostname@Example.com"
Also check that your Kerberos principal has permission to access Hive.
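It can also help to confirm that beeline's JVM sees the ticket you initialized; a quick sanity check, assuming MIT Kerberos client tools:
# the default principal listed here must match the one the KDC knows
klist
# if this is set to a non-default path, the JVM may be reading a different cache
echo $KRB5CCNAME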

Docker Registry incorrectly claims an expired CA cert

I followed the Docker Registry installation docs precisely, and have a registry running on a remote Ubuntu VM. On that VM, the Docker container is running with the following command:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/auth:/auth \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
registry:2
On the remote VM, I have the following directory structure:
/home/myuser/
    certs/
        registry.crt
        registry.key
/etc/docker/certs.d/myregistry.example.com:5000/
    ca.crt
    ca.key
The ca.crt is the same exact cert as ~/certs/registry.crt (just renamed); same goes for ca.key and registry.key being the same/just renamed. I created the ca* files per a suggestion from the error output you'll see below.
I am almost 100% sure the CA cert is still valid, although any help ruling that out (e.g. how can I actually tell?) would be appreciated. When I start the container and look at the Docker logs, I don't see any errors.
I then attempt to login from my local laptop (Mac):
docker login myregistry.example.com:5000
It queries me for my username, password and email (although I don't recall ever specifying an email when setting up Basic Auth). After entering these correctly (I have checked and double checked...) I get the following error:
myuser@mymachine:~/tmp$ docker login myregistry.example.com:5000
Username: my_ciuser
Password:
Email: myuser@example.com
Error response from daemon: invalid registry endpoint https://myregistry.example.com:5000/v0/:
unable to ping registry endpoint https://myregistry.example.com:5000/v0/ v2 ping attempt failed with error:
Get https://myregistry.example.com:5000/v2/: x509: certificate has expired or is not yet valid
v1 ping attempt failed with error: Get https://myregistry.example.com:5000/v1/_ping: x509:
certificate has expired or is not yet valid. If this private registry supports only HTTP or
HTTPS with an unknown CA certificate, please add
`--insecure-registry myregistry.example.com:5000` to the daemon's
arguments. In the case of HTTPS, if you have access to the registry's CA
certificate, no need for the flag; simply place the CA certificate
at /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
So from my perspective, I guess the following are possible:
The CA cert is invalid (if so, why?!?)
The CA cert is an intermediary cert (if so, how can I tell?)
The CA cert is expired (if so, how do I tell?)
This is a bad error message, and some other facet of the registry is not configured properly (if so, how do I troubleshoot further?)
Perhaps my cert is not located in the correct place on the server, or doesn't have the right permissions set (if so, where does the cert need to be?)
Something else that I would never expect in a million years
Any ideas/thoughts?
As said in the error message:
... In the case of HTTPS, if you have access to the registry's CA
certificate, no need for the flag; simply place the CA certificate
at /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
where myregistry.example.com:5000 is your CN with port.
You should copy your ca.crt onto every Docker daemon host that will connect to your Docker Registry, placing it at /etc/docker/certs.d/myregistry.example.com:5000/ca.crt.
After this you need to restart the Docker daemon, for example via sudo service docker stop && sudo service docker start on CentOS (or the equivalent procedure on your OS).
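To the "how can I actually tell if the cert is expired?" part of the question: openssl can print a certificate's validity window; a quick check, assuming the paths from the question:
# inspect the file on disk
openssl x509 -in /etc/docker/certs.d/myregistry.example.com:5000/ca.crt -noout -subject -issuer -dates
# or inspect the certificate the registry actually serves
openssl s_client -connect myregistry.example.com:5000 </dev/null 2>/dev/null | openssl x509 -noout -dates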
I had a similar error. I fixed it by adding my private registry to the insecure-registries list; in Docker Desktop this is configured in the daemon settings.
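As a sketch, the equivalent daemon.json entry (Docker Desktop exposes the same JSON in its Docker Engine settings) would be:
{
  "insecure-registries": ["myregistry.example.com:5000"]
}
Note that this bypasses TLS verification entirely, so fixing the certificate as described above is preferable outside of testing.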

SELinux prevents ssh with RSA key

I forgot that I had enabled SELinux on one of my web servers. So when I went to log into the host with my user account and ssh key, I was getting permission denied errors.
[TimothyDunphy@JEC206429674LM:~] #ssh bluethundr@web1.somedomain.com
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
Hmmm... So I consoled into the server and was able to login. I tailed the audit logs, and this is what I saw:
type=USER_LOGIN msg=audit(1429981690.809:394593): pid=17074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login acct="bluethundr" exe="/usr/sbin/sshd" hostname=? addr=47.18.111.100 terminal=ssh res=failed'
In googling for the answer to this I got the advice to run this command:
[root@web1:~] #restorecon -R -v /home/bluethundr/.ssh
[root@web1:~] #
But when I go to login again, after doing that, I get the same result. Permission denied and the same error in the logs.
The only other thing I can think of is that the home directory for the user is mounted from an NFS share. Might there be some SELinux incantation I can use to allow SSH to a home directory on an NFS share?
Or maybe I'm missing something else?
Thanks,
Tim
If restorecon didn't work, I generally try audit2why and/or audit2allow to find which policy is being violated. That's not to say that I apply the policy change suggestions that are generated, just that they lead to very good information for resolving the issue.
Bingo!!
When I ran audit2why -w this was the output I saw:
[root@web1:~] #grep ssh /var/log/audit/audit.log | audit2why -w
Was caused by:
The boolean use_nfs_home_dirs was set incorrectly.
Description:
Allow use to nfs home dirs
Allow access by executing:
# setsebool -P use_nfs_home_dirs 1
type=AVC msg=audit(1429983513.529:394784): avc: denied { read } for pid=19748 comm="sshd" name="authorized_keys" dev="0:40" ino=275968 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=file
So it looks like my hunch about it being NFS-related, and your suggestion to use audit2why, allowed me to crack the case!
[TimothyDunphy@JEC206429674LM:~/creds] #ssh bluethundr@web1.jokefire.com
Last login: Sat Apr 25 13:41:02 2015 from ool-2f126f64.dyn.optonline.net
[bluethundr#web1 ~]$
Bam!! It works. Thanks for your help!
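For anyone replaying this fix, the boolean can be checked and set persistently; a minimal sequence, assuming the NFS home directory case above:
# show the current value of the boolean
getsebool use_nfs_home_dirs
# -P persists the change across reboots
setsebool -P use_nfs_home_dirs 1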

Net::SSH::AuthenticationFailed: Authentication failed

From my workstation (Windows) I am trying to execute
knife ssh 'name:*' 'sudo chef-client'
But it shows an error message of
WARNING: Failed to connect to ******** – Net::SSH::AuthenticationFailed: Authentication failed for user ************
How do I solve this error?
Another question: how do I execute 'sudo chef-client' on all nodes from the workstation without using any passwords?
If you run knife ssh --help you'll get a list of available options. Try adding -VV for verbose output. That's usually helpful as it should tell you what user knife is trying to connect as.
My guess is you'll have to incorporate one or more of the ssh options (a few listed here):
-x, --ssh-user USERNAME
-i, --identity-file IDENTITY_FILE
-P, --ssh-password [PASSWORD] (will prompt if flag specified but no password is given)
The docs (https://docs.getchef.com/knife_ssh.html) also have some helpful examples.
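For example, a hypothetical invocation combining those flags (user name and key path are placeholders):
knife ssh 'name:*' 'sudo chef-client' -x deploy -i ~/.ssh/id_rsa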
Your SSH authentication isn't working; fix that. Key-based authentication is something I'm sure you can look up on Google, but in general, set your public key in .ssh/authorized_keys on each node and set up your agent on your workstation; a sketch of both halves follows below.
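A minimal sketch, assuming OpenSSH tooling and a placeholder user named deploy (the chef-client path is also an assumption; check it with which chef-client):
# from the workstation: install your public key on each node
ssh-copy-id deploy@node1.example.com
# on each node: let deploy run chef-client without a sudo password
# (a sudoers drop-in; edit safely with visudo -f /etc/sudoers.d/deploy)
echo 'deploy ALL=(ALL) NOPASSWD: /usr/bin/chef-client' | sudo tee /etc/sudoers.d/deploy
This addresses the second question: with key-based SSH and a scoped NOPASSWD rule, knife ssh needs no passwords at all.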