Configure LDAP with pgAdmin

I'm trying to configure LDAP with pgAdmin.
I have pgAdmin running locally on a cluster, and I'm using Apache Directory Studio as a local LDAP server with the default connection, in which I've created one user.
The logs from Apache Directory Studio are:
#!SEARCH REQUEST (462) OK
#!CONNECTION ldap://0.0.0.0:10389
#!DATE 2021-03-12T09:33:38.565
# LDAP URL : ldap://0.0.0.0:10389/uid=admin,ou=system?*??(objectClass=*)
# command line : ldapsearch -H ldap://0.0.0.0:10389 -x -D "uid=admin,ou=system" -W -b "uid=admin,ou=system" -s base -a always "(objectClass=*)" "*"
# baseObject : uid=admin,ou=system
# scope : baseObject (0)
# derefAliases : derefAlways (3)
# sizeLimit : 0
# timeLimit : 0
# typesOnly : False
# filter : (objectClass=*)
# attributes : *
#!SEARCH RESULT DONE (462) OK
#!CONNECTION ldap://0.0.0.0:10389
#!DATE 2021-03-12T09:33:38.566
# numEntries : 1
In my pgAdmin config_local.py file I have the following:
AUTHENTICATION_SOURCES = ['ldap','internal']
LDAP_SERVER_URI = 'ldap://<my-ip-address>:10389'
LDAP_USERNAME_ATTRIBUTE = 'uid'
LDAP_BASE_DN = 'uid=admin,ou=system'
LDAP_SEARCH_BASE_DN = 'uid=admin,ou=system'
When I try to log into pgAdmin with admin or the created user I get the following error:
ldap3.core.exceptions.LDAPBindError: automatic bind not successful - invalidCredentials
I think I'm getting the base DN wrong, or ApacheDS isn't configured properly. Grateful for any help.
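For what it's worth, pgAdmin (when no dedicated bind user or anonymous bind is configured) appears to build the bind DN as <LDAP_USERNAME_ATTRIBUTE>=<login>,<LDAP_BASE_DN>, so with the settings above a login as "admin" would try to bind as uid=admin,uid=admin,ou=system, which ApacheDS rejects with invalidCredentials. A sketch of config_local.py, assuming the test user was created under the default ApacheDS users container ou=users,ou=system (adjust to wherever the entry actually lives):
# config_local.py -- sketch only; the DNs below are assumptions, not a verified fix.
AUTHENTICATION_SOURCES = ['ldap', 'internal']
LDAP_SERVER_URI = 'ldap://<my-ip-address>:10389'
LDAP_USERNAME_ATTRIBUTE = 'uid'
# Point the base DN at the container holding the user entries,
# not at the admin entry itself:
LDAP_BASE_DN = 'ou=users,ou=system'
LDAP_SEARCH_BASE_DN = 'ou=users,ou=system'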

Related

Icinga2 event plugin command starting a rundeck job via api

I made myself a test environment in Icinga2 with a Tomcat server. I would like to combine the two tools Rundeck and Icinga: my idea is to start a Rundeck job when Icinga detects a problem. In my case I have a Tomcat server where I fill up the swap memory, which should trigger the Rundeck job that clears the swap.
I am using the Icinga2 Director for management. I created an event plugin command which should execute the Rundeck API call as a script called "rundeckapi". It looks like this:
#!/bin/bash
# /usr/lib64/nagios/plugins/rundeckapi
curl --location --request POST 'rundeck-server:4440/api/38/job/9f04657a-eaab-4e79-a5f3-00d3053f6cb0/run' \
--header 'X-Rundeck-Auth-Token: GuaoD6PtH5BhobhE3bAPo4mGyfByjNya' \
--header 'Content-Type: application/json' \
--header 'Cookie: JSESSIONID=node01tz8yvp4gjkly8kpj18h8u5x42.node0' \
--data-raw '{
"options": {
"IP":"192.168.4.13"
}
}'
(I also tried to just paste the command in the command field in the director, but this didn't work either.)
I placed it in the /usr/lib64/nagios/plugins/ directory and set the configuration in Icinga for the command as follows:
#zones.d/director-global/command.conf
object EventCommand "SWAP clear" {
import "plugin-event-command"
command = [ PluginDir + "/rundeckapi" ]
}
The service template looks like this:
#zones.d/master/service_templates.conf
template Service "SWAP" {
check_command = "swap"
max_check_attempts= "5"
check_interval = 1m
retry_interval = 15s
check_timeout = 10s
enable_notifications = true
enable_active_checks = true
enable_passive_checks = true
enable_event_handler = true
enable_flapping = true
enable_perfdata = true
event_command = "SWAP clear"
command_endpoint = host_name
}
Then I added the service to the host.
I enabled debug mode, started filling the swap, and watched the debug log with tail -f /var/log/icinga2/debug.log | grep 'event handler', where I found this:
notice/Checkable: Executing event handler 'SWAP clear' for checkable 'centos_tomcat_3!SWAP'
centos_tomcat_3 is the test host. It seems like the event handler is executing the script, but when I look at the Rundeck server I can't find a running job. When I start the rundeckapi script manually it works and I can see the job in Rundeck.
I already read the Icinga documentation, but it didn't help.
I would be very thankful if someone could help me.
Thanks in advance.
Define the plugin as an event handler and assign it to the host.
I tested using this Docker environment, modified with the official Rundeck image plus an NGINX host:
version: '2'
services:
  icinga2:
    #image: jordan/icinga2
    build:
      context: ./
      dockerfile: Dockerfile
    restart: on-failure:5
    # Set your hostname to the FQDN under which your
    # satellites will reach this container
    hostname: icinga2
    env_file:
      - secrets_sql.env
    environment:
      - ICINGA2_FEATURE_GRAPHITE=1
      # Important:
      # keep the hostname graphite the same as
      # the name of the graphite docker-container
      - ICINGA2_FEATURE_GRAPHITE_HOST=graphite
      - ICINGA2_FEATURE_GRAPHITE_PORT=2003
      - ICINGA2_FEATURE_GRAPHITE_URL=http://graphite
      # - ICINGA2_FEATURE_GRAPHITE_SEND_THRESHOLDS=true
      # - ICINGA2_FEATURE_GRAPHITE_SEND_METADATA=false
      - ICINGAWEB2_ADMIN_USER=admin
      - ICINGAWEB2_ADMIN_PASS=admin
      #- ICINGA2_USER_FULLNAME=Icinga2 Docker Monitoring Instance
      - DEFAULT_MYSQL_HOST=mysql
    volumes:
      - ./data/icinga/cache:/var/cache/icinga2
      - ./data/icinga/certs:/etc/apache2/ssl
      - ./data/icinga/etc/icinga2:/etc/icinga2
      - ./data/icinga/etc/icingaweb2:/etc/icingaweb2
      - ./data/icinga/lib/icinga:/var/lib/icinga2
      - ./data/icinga/lib/php/sessions:/var/lib/php/sessions
      - ./data/icinga/log/apache2:/var/log/apache2
      - ./data/icinga/log/icinga2:/var/log/icinga2
      - ./data/icinga/log/icingaweb2:/var/log/icingaweb2
      - ./data/icinga/log/mysql:/var/log/mysql
      - ./data/icinga/spool:/var/spool/icinga2
      # Sending e-mail
      # See: https://github.com/jjethwa/icinga2#sending-notification-mails
      # If you want to enable outbound e-mail, edit the file msmtp/msmtprc
      # and configure to your corresponding mail setup. The default is a
      # Gmail example but msmtp can be used for any MTA configuration.
      # Change the aliases in msmtp/aliases to your recipients.
      # Then uncomment the rows below
      # - ./msmtp/msmtprc:/etc/msmtprc:ro
      # - ./msmtp/aliases:/etc/aliases:ro
    ports:
      - "80:80"
      - "443:443"
      - "5665:5665"
  graphite:
    image: graphiteapp/graphite-statsd:latest
    container_name: graphite
    restart: on-failure:5
    hostname: graphite
    volumes:
      - ./data/graphite/conf:/opt/graphite/conf
      - ./data/graphite/storage:/opt/graphite/storage
      - ./data/graphite/log/graphite:/var/log/graphite
      - ./data/graphite/log/carbon:/var/log/carbon
  mysql:
    image: mariadb
    container_name: mysql
    env_file:
      - secrets_sql.env
    volumes:
      - ./data/mysql/data:/var/lib/mysql
      # If you have previously used the container's internal DB use:
      #- ./data/icinga/lib/mysql:/var/lib/mysql
  rundeck:
    image: rundeck/rundeck:3.3.12
    hostname: rundeck
    ports:
      - '4440:4440'
  nginx:
    image: nginx:alpine
    hostname: nginx
    ports:
      - '81:80'
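Assuming the file is saved as docker-compose.yml next to the Dockerfile and secrets_sql.env it references, bring the stack up with:
docker-compose up -d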
Rundeck side:
To access Rundeck, open a new tab in your browser using the http://localhost:4440 web address. You can log in with user admin and password admin.
Create a new project and a new job. I created the following one; you can import it into your instance:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: c3e0860c-8f69-42f9-94b9-197d0706a915
  loglevel: INFO
  name: RestoreNGINX
  nodeFilterEditable: false
  options:
  - name: opt1
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "hello ${option.opt1}"
    keepgoing: false
    strategy: node-first
  uuid: c3e0860c-8f69-42f9-94b9-197d0706a915
Now go to the user icon (top right) > Profile, click on the + icon in the "User API Tokens" section, and save the API key string; you will need it for the API call script on the Icinga2 container.
Go to the Activity page (left menu) and click on the "Auto Refresh" checkbox.
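(Not part of the original steps, but as a quick sanity check you can verify the token from inside the icinga2 container before wiring it into a script; rundeck:4440 resolves over the compose network, and <YOUR-API-TOKEN> is the value generated above.)
curl -s -H 'X-Rundeck-Auth-Token: <YOUR-API-TOKEN>' -H 'Accept: application/json' http://rundeck:4440/api/38/system/info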
Icinga2 side:
You can enter Icinga2 by opening a new tab in your browser using the http://localhost URL; I defined username admin and password admin in the docker-compose file.
Add the following script at the /usr/lib/nagios/plugins path (it's a curl API call like in your scenario; the API key is the one generated in the third step of the "Rundeck side" section above):
#!/bin/bash
curl --location --request POST 'rundeck:4440/api/38/job/c3e0860c-8f69-42f9-94b9-197d0706a915/run' --header 'X-Rundeck-Auth-Token: Zf41wIybwzYhbKD6PrXn01ZMsV2aT8BR' --header 'Content-Type: application/json' --data-raw '{ "options": { "opt1": "world" } }'
Also make the script executable: chmod +x /usr/lib/nagios/plugins/restorenginx
In the Icinga2 browser tab, go to the Icinga Director (Left Menu) > Commands. On the "Command Type" list select "Event Plugin Command", on the "Command Name" textbox type "restorenginx" and on the "Command" textbox put the full path of the script (/usr/lib/nagios/plugins/restorenginx). Then click on the "Store" button (bottom) and now click on Deploy (up).
Check how it looks.
This is the config preview (at zones.d/director-global/commands.conf):
object EventCommand "restorenginx" {
import "plugin-event-command"
command = [ "/usr/lib/nagios/plugins/restorenginx" ]
}
Now create the host template (in my example I'm monitoring an Nginx container): go to Icinga Director (left menu) > Hosts and select "Host Templates". Then click on the + Add link (top). In the Name field type the host template name (I used "nginxSERVICE"), in the "Check command" textbox put the command used to check that the host is alive (I used "ping"), and in the "Event command" field select the command created in the previous step.
Check how it looks.
Now it's time to create the host (based on the template from the previous step). Go to Icinga Director (left menu) > Hosts and select "Host". Then click on the + Add link (top). In the hostname field type the server hostname (nginx, as defined in the docker-compose file), in "Imports" select the template created in the previous step ("nginxSERVICE"), type anything in the "Display name" textbox, and in "Host address" add the Nginx container IP. Click on the "Store" button and then on the "Deploy" link at the top.
Check how it looks.
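For reference, this is roughly the object the Director renders for that host (sketch only; the address below is a placeholder for your actual Nginx container IP):
object Host "nginx" {
    import "nginxSERVICE"
    display_name = "nginx"
    address = "172.18.0.5"
}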
To enable the Event Handler on the host, go to Overview (left menu) > Hosts, select "NGINX", scroll down in the right section and enable "Event Handler" in the "Feature Commands" section.
Check how it looks.
Nginx side (it's time to test the script):
Stop the container and go to the Rundeck Activity page browser tab; you'll see the job launched by the Icinga2 monitoring tool.
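For example, from the directory holding the docker-compose file:
docker-compose stop nginx
After a few failed checks Icinga2 should fire the event handler, and the RestoreNGINX execution should appear in Rundeck's Activity view.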

LDAP Authentication - OpenShift - OKD

I have deployed a new OKD cluster (3.11), and as the identity provider I have selected LDAPPasswordIdentityProvider.
The configuration goes like this:
openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=service,cn=users,cn=accounts,dc=myorg,dc=com', 'bindPassword': 'reallysecurepasswordhere', 'insecure': 'false', 'url': 'ldaps://idm.myorg.com:636/dc=myorg,dc=com?uid??(memberof=cn=openshift,cn=accounts,dc=myorg,dc=com)'}]
I have tried two dozen variations of this URL.
On the logs I always get:
I0528 15:23:38.491659 1 ldap.go:122] searching for (&(objectClass=*)(uid=user1))
E0528 15:23:38.494172 1 login.go:174] Error authenticating "user1" with provider "idm": multiple entries found matching "user1"
I don't get why the filter shows up as (&(objectClass=*)(uid=...; it looks as if the filter from the URL isn't being applied, despite the URL being as above.
I also checked the master-config.yaml and it matches my ini file.
If I do ldapsearch I get the expected results:
$ ldapsearch -x -D "uid=service,cn=users,cn=accounts,dc=myorg,dc=com" -W -H ldaps://idm.myorg.com -s sub -b "cn=accounts,dc=myorg,dc=com" '(&(uid=user1)(memberof=cn=openshift,cn=groups,cn=accounts,dc=myorg,dc=com))' uid
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <cn=accounts,dc=myorg,dc=com> with scope subtree
# filter: (&(uid=user1)(memberof=cn=openshift,cn=groups,cn=accounts,dc=myorg,dc=com))
# requesting: uid
#
# user1, users, accounts, myorg.com
dn: uid=user1,cn=users,cn=accounts,dc=myorg,dc=com
uid: user1
# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1
The LDAP Server is FreeIPA.
Help please!
Ok, I found the solution to the problem.
I assumed ... incorrectly ... that running the playbook openshift-ansible/playbooks/openshift-master/config.yml would restart the openshift-master API.
It doesn't.
I noticed this when, instead of editing my ini inventory (where this is set) and re-running the config playbook, I started editing /etc/origin/master/master-config.yaml directly and using master-restart api to restart the API.
Several URL alterations (many of them incorrect, actually) had never actually been applied: the config playbook uploads them, but the master API doesn't restart, so the new config never takes effect, and I kept hitting the wall.
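In short, after changing the identity provider settings the master services have to be restarted by hand on an OKD 3.11 master, for example (restarting the controllers as well is my addition, not something the original fix strictly required):
master-restart api
master-restart controllers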

keycloak admin cli unable to authenticate

I am new to Keycloak. I have been following the admin CLI guide to automate realm creation (inside a Dockerfile). The kcadm call to create the realm is failing with an authentication error: "HTTP error - 401 Unauthorized".
These are the three lines I am trying to execute; the exception is thrown at the last one:
i) $JBOSS_HOME/bin/add-user-keycloak.sh -r master -u uadmin -p ${UADMIN_PWD}
(I started the Keycloak server after this step.)
ii) $JBOSS_HOME/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master -user uadmin --password ${UADMIN_PWD}
iii) $JBOSS_HOME/bin/kcadm.sh create realms -s realm=myrealm -s enabled=true
The top of the stack trace is here:
04:53:48,721 ERROR [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1) RESTEASY002005:
Failed executing POST /admin/realms:org.jboss.resteasy.spi.UnauthorizedException: Bearer
at org.keycloak.services.resources.admin.AdminRoot.authenticateRealmAdminRequest(AdminRoot.java:160)
at org.keycloak.services.resources.admin.AdminRoot.getRealmsAdmin(AdminRoot.java:209)
I inspected the $HOME/.keycloak/kcadm.config file and the content is as below -
$ cat kcadm.config
{
  "serverUrl" : "http://localhost:8080/auth",
  "realm" : "master",
  "endpoints" : { }
}
There is no authentication token that I can see there.
(One more observation: the "config credentials" command does not throw any exception if invalid credentials are passed. It would be helpful if an exception were thrown.)
Any pointers on what I am doing wrong here with the authentication issue during realm creation?
Actually, there was a typo in the command:
"ii) $JBOSS_HOME/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master -user uadmin --password ${UADMIN_PWD} "
The user parameter was "-user", when it should have been "--user".
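With the flag corrected, step ii) becomes:
$JBOSS_HOME/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user uadmin --password ${UADMIN_PWD}
After this the kcadm.config file should contain a token, and the create realms call in step iii) is authorized.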

I'm having trouble authenticating over AD to windows machines from my ansible host. 'Server not found in Kerberos Database' on Ubuntu 16.10

I'm having trouble authenticating over AD to windows machines from my ansible host. I have a valid kerberos ticket -
klist
Credentials cache: FILE:/tmp/krb5cc_1000
Principal: ansible@SOMEDOMAIN.LOCAL
Issued Expires Principal
Mar 10 09:15:27 2017 Mar 10 19:15:24 2017 krbtgt/SOMEDOMAIN.LOCAL@SOMEDOMAIN.LOCAL
My kerberos config looks fine to me -
cat /etc/krb5.conf
[libdefaults]
default_realm = SOMEDOMAIN.LOCAL
# dns_lookup_realm = true
# dns_lookup_kdc = true
# ticket_lifetime = 24h
# renew_lifetime = 7d
# forwardable = true
# The following krb5.conf variables are only for MIT Kerberos.
# kdc_timesync = 1
# forwardable = true
# proxiable = true
# The following encryption type specification will be used by MIT Kerberos
# if uncommented. In general, the defaults in the MIT Kerberos code are
# correct and overriding these specifications only serves to disable new
# encryption types as they are added, creating interoperability problems.
#
# The only time when you might need to uncomment these lines and change
# the enctypes is if you have local software that will break on ticket
# caches containing ticket encryption types it doesn't know about (such as
# old versions of Sun Java).
# default_tgs_enctypes = des3-hmac-sha1
# default_tkt_enctypes = des3-hmac-sha1
# permitted_enctypes = des3-hmac-sha1
# The following libdefaults parameters are only for Heimdal Kerberos.
# v4_instance_resolve = false
# v4_name_convert = {
# host = {
# rcmd = host
# ftp = ftp
# }
# plain = {
# something = something-else
# }
# }
# fcc-mit-ticketflags = true
[realms]
    SOMEDOMAIN.LOCAL = {
        kdc = prosperitydc1.somedomain.local
        kdc = prosperitydc2.somedomain.local
        default_domain = somedomain.local
        admin_server = somedomain.local
    }
[domain_realm]
    .somedomain.local = SOMEDOMAIN.LOCAL
    somedomain.local = SOMEDOMAIN.LOCAL
When running a test command - ansible windows -m win_ping -vvvvv I get
'Server not found in Kerberos database'.
ansible windows -m win_ping -vvvvv
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/windows/win_ping.ps1
<kerberostest.somedomain.local> ESTABLISH WINRM CONNECTION FOR USER: ansible@SOMEDOMAIN.LOCAL on PORT 5986 TO kerberostest.somedomain.local
<kerberostest.somedomain.local> WINRM CONNECT: transport=kerberos endpoint=https://kerberostest.somedomain.local:5986/wsman
<kerberostest.somedomain.local> WINRM CONNECTION ERROR: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/connection/winrm.py", line 154, in _winrm_connect
    self.shell_id = protocol.open_shell(codepage=65001) # UTF-8
  File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/protocol.py", line 132, in open_shell
    res = self.send_message(xmltodict.unparse(req))
  File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/protocol.py", line 207, in send_message
    return self.transport.send_message(message)
  File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/transport.py", line 181, in send_message
    prepared_request = self.session.prepare_request(request)
  File "/home/prosperity/.local/lib/python2.7/site-packages/requests/sessions.py", line 407, in prepare_request
    hooks=merge_hooks(request.hooks, self.hooks),
  File "/home/prosperity/.local/lib/python2.7/site-packages/requests/models.py", line 306, in prepare
    self.prepare_auth(auth, url)
  File "/home/prosperity/.local/lib/python2.7/site-packages/requests/models.py", line 543, in prepare_auth
    r = auth(self)
  File "/home/prosperity/.local/lib/python2.7/site-packages/requests_kerberos/kerberos_.py", line 308, in __call__
    auth_header = self.generate_request_header(None, host, is_preemptive=True)
  File "/home/prosperity/.local/lib/python2.7/site-packages/requests_kerberos/kerberos_.py", line 148, in generate_request_header
    raise KerberosExchangeError("%s failed: %s" % (kerb_stage, str(error.args)))
KerberosExchangeError: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
kerberostest.somedomain.local | UNREACHABLE! => {
    "changed": false,
    "msg": "kerberos: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))",
    "unreachable": true
}
I am able to ssh to the target machine
ssh -v1 kerberostest.somedomain.local -p 5986
OpenSSH_7.3p1 Ubuntu-1, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to kerberostest.somedomain.local [10.10.20.84] port 5986.
debug1: Connection established.
I can also ping all hosts with their hostname. I'm at a loss :(
Here is the ansible host file-
sudo cat /etc/ansible/hosts
# This is the default ansible 'hosts' file.
#
# It should live in /etc/ansible/hosts
#
# - Comments begin with the '#' character
# - Blank lines are ignored
# - Groups of hosts are delimited by [header] elements
# - You can enter hostnames or ip addresses
# - A hostname/ip can be a member of multiple groups
# Ex 1: Ungrouped hosts, specify before any group headers.
## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10
# Ex 2: A collection of hosts belonging to the 'webservers' group
## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110
# If you have multiple hosts following a pattern you can specify
# them like this:
## www[001:006].example.com
# Ex 3: A collection of database servers in the 'dbservers' group
## [dbservers]
##
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## 10.25.1.57
# Here's another example of host ranges, this time there are no
# leading 0s:
## db-[99:101]-node.example.com
[monitoring-servers]
#nagios
10.10.20.75 ansible_connection=ssh ansible_user=nagios
[windows]
#fileserver.somedomain.local#this machine isnt joined to the domain yet.
kerberostest.SOMEDOMAIN.LOCAL
[windows:vars]
#the following works for windows local account authentication
#ansible_ssh_user = prosperity
#ansible_ssh_pass = *********
#ansible_connection = winrm
#ansible_ssh_port = 5986
#ansible_winrm_server_cert_validation = ignore
#vars needed to authenticate on the windows domain using kerberos
ansible_user = ansible@SOMEDOMAIN.LOCAL
ansible_connection = winrm
ansible_winrm_scheme = https
ansible_winrm_transport = kerberos
ansible_winrm_server_cert_validation = ignore
I also tried connecting to the domain with realmd with success, but running the ansible command produced the same result.
This looks like a case of a missing SPN.
Here's the relevant error snippet:
<kerberostest.prosperityerp.local> ESTABLISH WINRM CONNECTION FOR USER: ansible@PROSPERITYERP.LOCAL on PORT 5986 TO kerberostest.prosperityerp.local
<kerberostest.prosperityerp.local> WINRM CONNECT: transport=kerberos endpoint=https://kerberostest.prosperityerp.local:5986/wsman
<kerberostest.prosperityerp.local> WINRM CONNECTION ERROR: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
And that is based on something I noticed in your Ansible configuration file:
[windows]
#fileserver.prosperityerp.local#this machine isnt joined to the domain yet.
kerberostest.PROSPERITYERP.LOCAL
I think the "this machine isnt joined to the domain yet" line in that file is a good indicator that the SPN HTTP/kerberostest.prosperityerp.local does not exist in Active Directory, which would be causing the "server not found" message. You can SSH to kerberostest.prosperityerp.local, probably because it exists in DNS or in a hosts file on the client machine, but unless and until the SPN HTTP/kerberostest.prosperityerp.local is created in Active Directory you will continue to get that error message. Adding that SPN properly at this point would be a whole other topic of discussion.
You could use a command like this to test if you have that SPN defined:
setspn -Q HTTP/kerberostest.prosperityerp.local
SPNs exist to tell a Kerberos client where to find the instance of a given service on the network.
Also run:
nslookup kerberostest.prosperityerp.local
on at least two client machines to make sure the FQDN of the host where the Kerberized service is running exists in DNS. DNS is a requirement for Kerberos to run properly in a network.
Finally, you could use Wireshark on the client for further analysis; use the filter kerberos to show only Kerberos traffic.
In my case, the Server not found in Kerberos database error was a result of the target Windows machine's DNS name not being mapped to the right realm, as hinted at in this line from this Microsoft Technet Article:
The error “Server not found in Kerberos database” is common and can be misleading because it often appears when the service principal is not missing. The error can be caused by domain/realm mapping problems or it can be the result of a DNS problem where the service principal name is not being built correctly. Server logs and network traces can be used to determine what service principal is actually being requested.
I had this playbook, whoami.yaml:
- hosts: windows-machine.mydomain.com
  tasks:
    - name: Run 'whoami' command
      win_command: whoami
Hosts file:
[windows]
windows-machine.mydomain.com
[windows:vars]
ansible_connection=winrm
ansible_winrm_transport=kerberos
ansible_user=user@FOO.BAR.MYDOMAIN.COM
ansible_password=<password>
ansible_port=5985
Since the DNS name was windows-machine.mydomain.com but the AD realm was FOO.BAR.MYDOMAIN.COM, I had to fix the mapping in the /etc/krb5.conf file on my Ansible host:
INCORRECT
This won't work for our case since this mapping rule won't apply to windows-machine.mydomain.com:
[domain_realm]
foo.bar.mydomain.com = FOO.BAR.MYDOMAIN.COM
CORRECT
This will correctly map windows-machine.mydomain.com to realm FOO.BAR.MYDOMAIN.COM
[domain_realm]
.mydomain.com = FOO.BAR.MYDOMAIN.COM
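To verify the fix, something along these lines should then work from the Ansible host (the principal and group name simply mirror the example above):
kinit user@FOO.BAR.MYDOMAIN.COM
ansible windows -m win_ping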

setting up gitlab LDAP-authentication without special gitlab user

I want to set up GitLab with our company's LDAP as a demo. Unfortunately, I have to put an admin password into gitlab.yml to make GitLab access the LDAP service. The actual problem is the administration, as they don't want to set up another account just for GitLab. Is there any way to circumvent this without filling in my own password? Is there a way to make GitLab establish the LDAP connection with only the user credentials provided at login?
Any ideas besides logging in as anonymous?
Already posted here.
I haven't tried it yet, but from the things I've built so far authenticating against LDAP, and from the information in the config file, this user account only seems to be needed when your LDAP does not support anonymous binding and searching.
So I would leave the two entries bind_dn and password commented out and see whether it works or not.
UPDATE
I've implemented LDAP authentication in GitLab and it's fairly easy.
In the gitlab.yml file there is a section called ldap.
There you have to provide the information needed to connect to your LDAP. It seems that all fields have to be given; there seems to be no fallback default! If you want to use anonymous binding for retrieval of the user's DN, supply an empty string for bind_dn and password. Commenting them out does not seem to work! At least I got a 501 error message.
More information can be found at https://github.com/patthoyts/gitlabhq/wiki/Setting-up-ldap-auth and (more outdated but still helpful) https://github.com/intridea/omniauth-ldap
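For illustration, a minimal sketch of such an ldap section with anonymous binding (host and uid are placeholders, and the exact surrounding keys depend on your GitLab version):
ldap:
  enabled: true
  host: 'ldap.mycompany.example'
  port: 389
  uid: 'uid'
  method: 'plain'
  bind_dn: ''   # empty string rather than commented out
  password: ''  # empty string rather than commented out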
I have patched gitlab to work this way and documented the process in https://foivos.zakkak.net/tutorials/gitlab_ldap_auth_without_querying_account/
I shamelessly copy the instructions here for self-completeness.
Note: This tutorial was last tested with gitlab 8.2 installed from source.
This tutorial aims to describe how to modify a Gitlab installation to
use the users' credentials to authenticate with the LDAP server. By
default Gitlab relies on anonymous binding or a special querying user
to ask the LDAP server about the existence of a user before
authenticating her with her own credentials. For security reasons,
however, many administrators disable anonymous binding and forbid the
creation of special querying LDAP users.
In this tutorial we assume that we have a gitlab setup at
gitlab.example.com and an LDAP server running on ldap.example.com, and
users have a DN of the following form:
CN=username,OU=Users,OU=division,OU=department,DC=example,DC=com.
Patching
To make Gitlab work in such cases we need to partly modify its
authentication mechanism regarding LDAP.
First, we replace the omniauth-ldap module with this derivation. To
achieve this we apply the following patch to gitlab/Gemfile:
diff --git a/Gemfile b/Gemfile
index 1171eeb..f25bc60 100644
--- a/Gemfile
+++ b/Gemfile
@@ -44,4 +44,5 @@ gem 'gitlab-grack', '~> 2.0.2', require: 'grack'
# LDAP Auth
# GitLab fork with several improvements to original library. For full list of changes
# see https://github.com/intridea/omniauth-ldap/compare/master...gitlabhq:master
-gem 'gitlab_omniauth-ldap', '1.2.1', require: "omniauth-ldap"
+#gem 'gitlab_omniauth-ldap', '1.2.1', require: "omniauth-ldap"
+gem 'gitlab_omniauth-ldap', :git => 'https://github.com/zakkak/omniauth-ldap.git', require: 'net-ldap', require: "omniauth-ldap"
Now, we need to perform the following actions:
sudo -u git -H bundle install --without development test mysql --path vendor/bundle --no-deployment
sudo -u git -H bundle install --deployment --without development test mysql aws
These commands will fetch the modified omniauth-ldap module in
gitlab/vendor/bundle/ruby/2.x.x/bundler/gems. Now that the module is
fetched, we need to modify it to use the DN our LDAP server expects. We
achieve this by patching lib/omniauth/strategies/ldap.rb in
gitlab/vendor/bundle/ruby/2.x.x/bundler/gems/omniauth-ldap with:
diff --git a/lib/omniauth/strategies/ldap.rb b/lib/omniauth/strategies/ldap.rb
index 9ea62b4..da5e648 100644
--- a/lib/omniauth/strategies/ldap.rb
+++ b/lib/omniauth/strategies/ldap.rb
@@ -39,7 +39,7 @@ module OmniAuth
       return fail!(:missing_credentials) if missing_credentials?
       # The HACK! FIXME: do it in a more generic/configurable way
-      @options[:bind_dn] = "CN=#{request['username']},OU=Test,DC=my,DC=example,DC=com"
+      @options[:bind_dn] = "CN=#{request['username']},OU=Users,OU=division,OU=department,DC=example,DC=com"
       @options[:password] = request['password']
       @adaptor = OmniAuth::LDAP::Adaptor.new @options
With this module, gitlab uses the user's credentials to bind to the LDAP
server and query it, as well as to authenticate the user herself.
This however will only work as long as the users do not use ssh-keys to
authenticate with Gitlab. When authenticating through an ssh-key, by
default Gitlab queries the LDAP server to find out whether the
corresponding user is (still) a valid user or not. At this point, we
cannot use the user credentials to query the LDAP server, since the user
did not provide them to us. As a result we disable this mechanism,
essentially allowing users with registered ssh-keys but removed from the
LDAP server to still use our Gitlab setup. To prevent such users from
being able to still use your Gitlab setup, you will have to manually
delete their ssh-keys from any accounts in your setup.
To disable this mechanism we patch gitlab/lib/gitlab/ldap/access.rb
with:
diff --git a/lib/gitlab/ldap/access.rb b/lib/gitlab/ldap/access.rb
index 16ff03c..9ebaeb6 100644
--- a/lib/gitlab/ldap/access.rb
+++ b/lib/gitlab/ldap/access.rb
@@ -14,15 +14,16 @@ module Gitlab
end
def self.allowed?(user)
- self.open(user) do |access|
- if access.allowed?
- user.last_credential_check_at = Time.now
- user.save
- true
- else
- false
- end
- end
+ true
+ # self.open(user) do |access|
+ # if access.allowed?
+ # user.last_credential_check_at = Time.now
+ # user.save
+ # true
+ # else
+ # false
+ # end
+ # end
end
def initialize(user, adapter=nil)
## -32,20 +33,21 ## module Gitlab
end
def allowed?
- if Gitlab::LDAP::Person.find_by_dn(user.ldap_identity.extern_uid, adapter)
- return true unless ldap_config.active_directory
+ true
+ # if Gitlab::LDAP::Person.find_by_dn(user.ldap_identity.extern_uid, adapter)
+ # return true unless ldap_config.active_directory
- # Block user in GitLab if he/she was blocked in AD
- if Gitlab::LDAP::Person.disabled_via_active_directory?(user.ldap_identity.extern_uid, adapter)
- user.block unless user.blocked?
- false
- else
- user.activate if user.blocked? && !ldap_config.block_auto_created_users
- true
- end
- else
- false
- end
+ # # Block user in GitLab if he/she was blocked in AD
+ # if Gitlab::LDAP::Person.disabled_via_active_directory?(user.ldap_identity.extern_uid, adapter)
+ # user.block unless user.blocked?
+ # false
+ # else
+ # user.activate if user.blocked? && !ldap_config.block_auto_created_users
+ # true
+ # end
+ # else
+ # false
+ # end
rescue
false
end
Configuration
In gitlab.yml use something like the following (modify to your needs):
#
# 2. Auth settings
# ==========================
## LDAP settings
# You can inspect a sample of the LDAP users with login access by running:
#   bundle exec rake gitlab:ldap:check RAILS_ENV=production
ldap:
  enabled: true
  servers:
    ##########################################################################
    #
    # Since GitLab 7.4, LDAP servers get ID's (below the ID is 'main'). GitLab
    # Enterprise Edition now supports connecting to multiple LDAP servers.
    #
    # If you are updating from the old (pre-7.4) syntax, you MUST give your
    # old server the ID 'main'.
    #
    ##########################################################################
    main: # 'main' is the GitLab 'provider ID' of this LDAP server
      ## label
      #
      # A human-friendly name for your LDAP server. It is OK to change the label later,
      # for instance if you find out it is too large to fit on the web page.
      #
      # Example: 'Paris' or 'Acme, Ltd.'
      label: 'LDAP_EXAMPLE_COM'
      host: ldap.example.com
      port: 636
      uid: 'sAMAccountName'
      method: 'ssl' # "tls" or "ssl" or "plain"
      bind_dn: ''
      password: ''
      # This setting specifies if LDAP server is Active Directory LDAP server.
      # For non AD servers it skips the AD specific queries.
      # If your LDAP server is not AD, set this to false.
      active_directory: true
      # If allow_username_or_email_login is enabled, GitLab will ignore everything
      # after the first '@' in the LDAP username submitted by the user on login.
      #
      # Example:
      # - the user enters 'jane.doe@example.com' and 'p@ssw0rd' as LDAP credentials;
      # - GitLab queries the LDAP server with 'jane.doe' and 'p@ssw0rd'.
      #
      # If you are using "uid: 'userPrincipalName'" on ActiveDirectory you need to
      # disable this setting, because the userPrincipalName contains an '@'.
      allow_username_or_email_login: false
      # To maintain tight control over the number of active users on your GitLab installation,
      # enable this setting to keep new users blocked until they have been cleared by the admin
      # (default: false).
      block_auto_created_users: false
      # Base where we can search for users
      #
      #   Ex. ou=People,dc=gitlab,dc=example
      #
      base: 'OU=Users,OU=division,OU=department,DC=example,DC=com'
      # Filter LDAP users
      #
      #   Format: RFC 4515 http://tools.ietf.org/search/rfc4515
      #   Ex. (employeeType=developer)
      #
      #   Note: GitLab does not support omniauth-ldap's custom filter syntax.
      #
      user_filter: '(&(objectclass=user)(objectclass=person))'
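After editing gitlab.yml, the rake task mentioned in the comment above is a quick way to confirm the LDAP connection works (for a source install, run from the GitLab directory):
sudo -u git -H bundle exec rake gitlab:ldap:check RAILS_ENV=production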
GitLab uses omniauth to manage multiple login sources (including LDAP).
So if you can somehow extend omniauth in order to manage the LDAP connection differently, you could fetch the password from a different source.
That would allow you to avoid keeping said password in the ldap section of the gitlab.yml config file.