I have an LDAP server + Kerberos setup in a CentOS VM (running under a boot2docker VM), and I am trying to use them for my web application's authentication (from the host, my MacBook).
For authentication, I need to use the "GSSAPI" mechanism, not simple bind. Simple bind works perfectly, but the GSSAPI-based approach is not working.
I get the following error whenever I try the "ldapwhoami" command (I ran kinit before running ldapwhoami to make sure I have a valid Kerberos TGT):
ldap_sasl_interactive_bind_s: Local error (-2)
additional info: SASL(-1): generic failure: GSSAPI Error: Miscellaneous failure (see text (unable to reach any KDC in realm DEV.EXAMPLE.COM, tried 1 KDC)
Please note that the LDAP and Kerberos servers themselves are working perfectly: I tested them with "ldapsearch" and "ldapwhoami" inside the CentOS VM that hosts the LDAP + Kerberos setup, and I see the proper output there.
I get the above error only when I try the same commands from my laptop (the client).
Note: I even created a host principal (host/mymacbook.dev@DEV.EXAMPLE.COM) from my laptop and added it to my local krb5.keytab file using kadmin.
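For reference, that was roughly the following kadmin session (principal name as above; the admin principal and exact prompts may differ on your setup, and the last two commands run at the kadmin: prompt):
kadmin -p admin/admin@DEV.EXAMPLE.COM
addprinc -randkey host/mymacbook.dev@DEV.EXAMPLE.COM
ktadd -k /etc/krb5.keytab host/mymacbook.dev@DEV.EXAMPLE.COM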
Below are my client-side configurations:
/etc/krb5.conf on the client (MacBook):
[libdefaults]
default_realm = DEV.EXAMPLE.COM
ticket_lifetime = 24000
dns_lookup_realm = false
dns_lookup_kdc = false
[realms]
DEV.EXAMPLE.COM = {
kdc = d4dc7089282c
admin_server = krb.example.com
}
[domain_realm]
.dev.example.com = DEV.EXAMPLE.COM
dev.example.com = DEV.EXAMPLE.COM
.example.com = DEV.EXAMPLE.COM
example.com = DEV.EXAMPLE.COM
[appdefaults]
pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}
[logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
/etc/hosts on the client (MacBook):
127.0.0.1 localhost
192.168.59.3 mymacbook.dev
255.255.255.255 broadcasthost
::1 localhost
192.168.59.103 ldapserver.example.com
192.168.59.103 d4dc7089282c
192.168.59.103 krb.example.com
192.168.59.103 is my boot2docker VM IP, and I am forwarding all the default LDAP and Kerberos ports (88, 389, 464 and 749) from the boot2docker VM to the Docker container.
Any idea why I am getting this error?
ldap_sasl_interactive_bind_s: Local error (-2)
additional info: SASL(-1): generic failure: GSSAPI Error: Miscellaneous failure (see text (unable to reach any KDC in realm DEV.EXAMPLE.COM, tried 1 KDC)
Is it related to DNS or something else? Any suggestions?
On MacOS the default Kerberos client does not fall back to TCP.
In your krb5.conf, prefix your kdc entry with tcp/ to force the client to use TCP if your network blocks UDP traffic (as some network admins do).
kdc = tcp/ds01.int.domain.com:88
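For example, applied to the realm from the question (hostname taken from the question's krb5.conf; port 88 is the Kerberos default, and only the tcp/ prefix is new):
[realms]
DEV.EXAMPLE.COM = {
kdc = tcp/d4dc7089282c:88
admin_server = krb.example.com
}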
You need multiple things to get a containerized KDC reachable from the outside.
Let's assume you are using port 88, as that is the default, and let's also assume your image is called docker-kdc.
1. Make sure your port 88 is exposed.
EXPOSE 88
2. Make sure your KDC daemon listens on that port. For the sake of this example, I am simply using the KDC as the entrypoint; you should be able to extrapolate if that doesn't apply to your specific setup.
ENTRYPOINT ["/usr/lib/heimdal-servers/kdc", "--config-file=/etc/heimdal-kdc/kdc.conf", "-P 88"]
3. When running the container, I am using port forwarding towards 48088. Note that the KDC uses both TCP and UDP.
docker run -d -h kdc --name kdc -p 48088:88/udp -p 48088:88 docker-kdc
From this point on, your KDC should be reachable from within the host system.
4. (OSX only) Given that you are using OSX (boot2docker -> VirtualBox), you will also need to set up port forwarding towards your OSX environment.
VBoxManage controlvm boot2docker-vm natpf1 "48088/tcp,tcp,127.0.0.1,48088,,48088"
VBoxManage controlvm boot2docker-vm natpf1 "48088/udp,udp,127.0.0.1,48088,,48088"
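Once those forwards are in place, you can sanity-check reachability from the OSX side with nc (-z does a connect-only probe, -u switches to UDP; a UDP probe is inherently less reliable):
nc -vz 127.0.0.1 48088
nc -vzu 127.0.0.1 48088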
5. Get the IP address of your docker container if needed.
When using plain docker (on Linux), you can simply use the loopback 127.0.0.1.
When using boot2docker (on OSX), you will get it using: boot2docker ip
6. Prepare a minimal krb5.conf that makes use of the KDC. For the sake of this example, I am using a realm called EXAMPLE.COM on the domain example.com.
Note that you will have to replace IP with the result of step 5.
[libdefaults]
default_realm = EXAMPLE.COM
noaddresses = true
[realms]
EXAMPLE.COM = {
kdc = IP:48088
admin_server = IP:48088
}
[domain_realm]
example.com = EXAMPLE.COM
.example.com = EXAMPLE.COM
7. Now go ahead and test that configuration.
export KRB5_CONFIG=PATH_TO_THE_KRB5.CONF_FILE_FROM_STEP_6
kinit test/foo.example.com@EXAMPLE.COM
Since I had to do this for a project of mine, I packed it all into a little script that might be helpful for your further research:
https://github.com/tillt/docker-kdc
Ensure that the krb5.conf file is in the /etc directory. I had the same issue with no firewall problems, yet still got the same error. Finally, I was able to fix it by moving the krb5.conf file to /etc.
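If moving the file is not an option, most Kerberos clients also honor the KRB5_CONFIG environment variable (the same one used in step 7 of the answer above; the path and principal below are just examples):
export KRB5_CONFIG=/path/to/krb5.conf
kinit user@DEV.EXAMPLE.COM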
I'm running my own GitLab server on Oracle Cloud,
and its domain is handled by Cloudflare,
but GitLab SSH cloning doesn't work at all.
(As far as I can remember, HTTP cloning failed with 413: curl 22 The requested URL returned error: 413.)
(There was also a port 22 network unreachable error.)
I think another process holds port 22, so I tried to change the GitLab SSH port to a different one:
Changed gitlab.rb (gitlab_shell_ssh_port to another port),
opened the Oracle Cloud VCN port,
opened the Ubuntu firewall (ufw allow, and also tried iptables),
added the other port to sshd (/etc/ssh/ssh_config),
and disabled the Cloudflare DNS proxy (set it to DNS only).
But it doesn't work, and the port is still inaccessible.
More than that, nothing listens on that port.
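For reference, the firewall/listener checks above were along these lines (I'm on the Omnibus package, hence gitlab-ctl; port 2222 is a placeholder for the port I actually chose):
sudo ufw allow 2222/tcp            # open the firewall for the new port
sudo gitlab-ctl reconfigure        # apply the gitlab.rb change
sudo ss -tlnp | grep 2222          # confirm whether anything listens there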
What more should I do here?
I've built a docker image based on httpd:2.4. In my k8s deployment I've defined the following securityContext:
securityContext:
privileged: false
runAsNonRoot: true
runAsUser: 431
allowPrivilegeEscalation: false
In order to get this container to run properly as non-root, Apache needs to be configured to bind to a port > 1024, as opposed to the default 80. As far as I can tell this means changing Listen 80 in httpd.conf to Listen {some port > 1024}.
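Concretely, the change I'm describing is this one line (8000 is the port I use below; the path is where the httpd:2.4 image keeps its config, as far as I can tell):
# /usr/local/apache2/conf/httpd.conf
Listen 8000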
When I want to run the docker image I've build normally (i.e. on default port 80) I have the following port settings:
deployment
spec.template.spec.containers[0].ports[0].containerPort: 80
service
spec.ports[0].targetPort: 80
spec.ports[0].port: 8080
ingress
spec.rules[0].http.paths[0].backend.servicePort: 8080
Given these settings the service becomes accessible at the host URL provided in the ingress manifest. Again, this is without the changes to httpd.conf. When I make those changes (using Listen 8000) and add the securityContext section to the deployment, I change the various manifests accordingly:
deployment
spec.template.spec.containers[0].ports[0].containerPort: 8000
service
spec.ports[0].targetPort: 8000
spec.ports[0].port: 8080
ingress
spec.rules[0].http.paths[0].backend.servicePort: 8080
Yet for some reason, when I try to access a URL that should be working I get a 502 Bad Gateway error. Have I set the ports correctly? Is there something else I need to do?
Check if the pod is Running
kubectl get pods
kubectl logs <pod_name>
Check if the URL is accessible within the pod
kubectl exec -it <pod_name> -- bash
$ curl http://localhost:8000
If the above didn't work, check your httpd.conf.
Check with the service name
kubectl exec -it <ingress pod_name> -- bash
$ curl http://<service_name>:8080
You can check ingress logs too.
In order to get this container to run properly as non-root apache
needs to be configured to bind to a port > 1024, as opposed to the
default 80
You got it, that's the hard requirement for making the Apache container run as non-root, so this change needs to be done at the container level, not in Kubernetes abstractions like the Deployment's Pod spec or the Service/Ingress resource definitions. So the only thing left in your case is to build a custom httpd image that listens on a port > 1024. The same approach applies to NGINX Docker containers.
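A minimal sketch of such an image might look like this (the sed expression assumes the stock httpd.conf location in the httpd:2.4 image and an unmodified Listen 80 line):
FROM httpd:2.4
# Rewrite the stock Listen directive to an unprivileged port
RUN sed -i 's/^Listen 80$/Listen 8000/' /usr/local/apache2/conf/httpd.conf
EXPOSE 8000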
One key piece of information concerns the 'containerPort' field in the Pod spec, which you are trying to adjust manually, and which is not so apparent: it is there primarily for informational purposes and does not cause a port to be opened at the container level. According to the Kubernetes API reference:
Not specifying a port here DOES NOT prevent that port from being
exposed. Any port which is listening on the default "0.0.0.0" address
inside a container will be accessible from the network. Cannot be updated.
I hope this helps you move on.
I'm running Samba 4.3.11-Ubuntu on Ubuntu 16.04, and I'm unable to get LDAPS (port 636) to work at all.
Samba is running as an Active Directory Domain Controller, and other AD DC functionality seems to be fine.
This used to work, but now there's nothing listening on that port. I'm not sure what I did to break it, but it stopped working after I updated my server with a trusted certificate.
Here's what I have for /etc/samba/smb.conf:
# Global parameters
[global]
workgroup = AD
realm = AD.<redacted>.COM
netbios name = SAMBADC
server role = active directory domain controller
dns forwarder = 8.8.8.8
idmap_ldb:use rfc2307 = yes
tls enabled = yes
tls keyfile = tls/ad.<redacted>.com.key
tls certfile = tls/c7535fc6c5e8e557.crt
tls cafile = tls/gd_bundle-g2-g1.crt
ldap server require strong auth = allow_sasl_over_tls
[netlogon]
path = /var/lib/samba/sysvol/ad.<redacted>.com/scripts
read only = No
[sysvol]
path = /var/lib/samba/sysvol
read only = No
The error I'm getting is:
nitsadmin#sambadc:/etc/samba$ telnet localhost 636
Trying 127.0.0.1...
Trying ::1...
telnet: Unable to connect to remote host: Cannot assign requested address
Anyone have any idea why this might not work? Any idea what Cannot assign requested address means?
Could you please provide the log file specified by the log file = parameter in your smb.conf, captured while you start the Samba service?
There could be something wrong with your certificates.
One thing you could try is to switch to the autogenerated self-signed certificate and see if that solves the issue. If it does, you have to fix your certificates.
To do this, remove all certificates from the tls folder and reconfigure smb.conf:
tls enabled = yes
tls keyfile = tls/key.pem
tls certfile = tls/cert.pem
tls cafile = tls/ca.pem
Then restart the Samba service and see if it helps.
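Once restarted, a quick way to verify that something is listening and serving TLS on 636 (openssl assumed available; run it on the DC itself):
echo | openssl s_client -connect localhost:636 -showcerts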
I used certbot to generate a Let's Encrypt certificate for my website, but Yaws gives me an SSL accept failed: timeout error when I try to connect to it (after it times out, of course). Interestingly, it works when I redirect example.com to the local IP address of the server in the hosts file on my machine and connect to example.com:8080, but not when I connect to example.com without editing the hosts file, or when I connect from my phone over 4G. Here's my web server's configuration file (it is the only configuration file in conf.d):
<server www.example.com>
port = 8080
listen = 0.0.0.0
docroot = /usr/share/yaws
<ssl>
keyfile = /etc/letsencrypt/live/example.com/privkey.pem
certfile = /etc/letsencrypt/live/example.com/fullchain.pem
</ssl>
</server>
I made sure that the keyfile and the certificate are both readable by the yaws user. Next to the keyfiles is a README that contains the following:
`privkey.pem` : the private key for your certificate.
`fullchain.pem`: the certificate file used in most server software.
`chain.pem` : used for OCSP stapling in Nginx >=1.3.7.
`cert.pem` : will break many server configurations, and should not be used
without reading further documentation (see link below).
We recommend not moving these files. For more information, see the Certbot
User Guide at https://certbot.eff.org/docs/using.html#where-are-my-certificates.
So I'm relatively sure I've used the right file (the other ones gave me errors like badmatch and {tls_alert,"decrypt error"}). I also tried trivial things like writing https:// before the URL, but that didn't fix the issue; also, everything works fine when the server is running without SSL. The version of Erlang running on my server is Erlang/OTP 19. Also, in case it's unclear, the domain isn't actually example.com.
Also, example.com is redirected via CNAME to examplecom.duckdns.org, if that matters.
UPDATE:
My server was listening on port 8080, forwarded from external port 80, for HTTPS connections, while the default HTTPS port is 443. My other mistake was connecting to http://example.com instead of https://example.com. Forwarding external port 443 to internal port 8443 and configuring Yaws to listen on port 8443 fixed everything.
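For reference, the working server block then differs from the one above only in the port (8443, per the fix; paths unchanged):
<server www.example.com>
port = 8443
listen = 0.0.0.0
docroot = /usr/share/yaws
<ssl>
keyfile = /etc/letsencrypt/live/example.com/privkey.pem
certfile = /etc/letsencrypt/live/example.com/fullchain.pem
</ssl>
</server>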
Just to be sure I understand: when you do something like curl -v https://example.com:8080, you get a timeout, is that it? (Here the https protocol and port 8080 are mandatory, of course.)
An SSL timeout during accept can be triggered when an unencrypted request is received on an SSL vhost.
Could you also provide the output of the following command:
echo -e "HEAD / HTTP/1.0\r\n\r\n" | openssl s_client -connect mysite.com:8080 -ign_eof
And finally, which version of Yaws are you running? On which OS?
I'm running the Omnibus version of GitLab in Docker.
I've edited my gitlab.rb file to enable HTTPS by prefixing external_url with https://. This seems to work well enough. However, when I also add my port to the URL:
external_url = 'https://www.example.com:12345'
My browser shows me a "connection refused" error. Why is this?
If you want to change the default port where GitLab is running, you have to put this in your gitlab.rb file:
gitlab_rails['gitlab_host'] = 'example.com'
gitlab_rails['gitlab_port'] = 12345
gitlab_rails['gitlab_https'] = true
After you set these parameters, you'll have to run # gitlab-ctl reconfigure
Also take a look at /opt/gitlab/embedded/service/gitlab-shell/config.yml: you can find some interesting parameters there, such as the SSL certificate paths:
http_settings:
#user: someone
#password: somepass
ca_file: /etc/ssl/cert.pem
ca_path: /etc/pki/tls/certs
self_signed_cert: false