LDAPS Microsoft Active Directory Multiple Certificates RFC6125

We have a Microsoft Active Directory domain with a large pool of domain controllers (DCs) that are all set up for LDAPS. Each uses Certificate Services, via a template, to issue a certificate with the domain name (i.e. test.corp) in the Subject Alternative Name (SAN) for the LDAPS server to serve.
Since these are DCs, DNS is set up so that each of these systems responds to requests for test.corp in round-robin fashion.
Each of these DCs has multiple templates and multiple certificates in the Local Computer\Personal certificate store.
While testing with the nodejs module ldapjs, making an LDAPS request against the domain name test.corp, we noticed that a handful of servers fail with the following message:
Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match
certificate's altnames: Host: test.corp. is not in the cert's
altnames: othername:, DNS:.test.corp
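For context, here is roughly how such a connection is made with ldapjs (a minimal sketch; the bind step is omitted, and the hostname is the one from the question):
const ldap = require('ldapjs');

const client = ldap.createClient({
  url: 'ldaps://test.corp:636',
  // tlsOptions is passed through to Node's tls.connect(); certificate and
  // hostname verification (the SAN check that fails above) is on by default.
  tlsOptions: { rejectUnauthorized: true },
});

// A SAN mismatch surfaces as ERR_TLS_CERT_ALTNAME_INVALID on the error event.
client.on('error', (err) => console.error(err));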
As we investigated, we found that this handful of LDAPS servers is serving the incorrect certificate. We determined this by using the following command:
openssl s_client -connect .test.corp:636
If you take the certificate section of the output, put it in a file, and read the file with a tool such as Certificate Manager or certutil, you can see that the certificate is not the correct one (it does not have the domain "test.corp" in the SAN). We also verified this by comparing the serial numbers.
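If you would rather stay in nodejs for that step, newer Node (15.6+) can read the saved certificate directly; a small sketch, with the file name as an assumption:
const { X509Certificate } = require('crypto');
const fs = require('fs');

// Parse the PEM saved from the openssl output and inspect it.
const cert = new X509Certificate(fs.readFileSync('served-cert.pem'));
console.log(cert.subjectAltName); // should include DNS:test.corp on a good DC
console.log(cert.serialNumber);   // compare against the expected certificate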
As we investigated, since we have DCs with multiple certificates in the Local Computer\Personal certificate store, we came across the following article:
https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx
It suggests moving the certificate from the Local Computer\Personal certificate store to the Active Directory Domain Services\Personal store. We followed the steps outlined, but got the same results.
Upon further investigation, it was suggested that we use a tool called ldp or adsiedit. We edited the hosts file of the machine we were testing from so that the domain (test.corp) pointed at the IP of one of the DCs giving us trouble. After a restart to clear any cache, we used the "ldp" and "adsiedit" tools to connect to test.corp. Neither tool reported any errors.
We found this odd, so we ran the openssl command from the same system to see which certificate was being served, and found it was still the incorrect one.
Upon further research, it appears that the "ldp" tool (with the SSL checkbox selected) and the "adsiedit" tool are not compliant with RFC 6125, specifically appendix B.3
https://www.rfc-editor.org/rfc/rfc6125#appendix-B.3
which basically states that the identity presented in the certificate must match the identity of the request; otherwise the handshake fails. This identity verification is done using the certificate's Common Name (CN) or the SAN. Based on this, it appears that the "ldp" and "adsiedit" tools do not conform to RFC 6125.
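Since the failing client in the question is nodejs, it is worth noting that Node exposes this exact check, so you can run it by hand against whatever certificate a given DC serves; a sketch, using the question's hostname:
const tls = require('tls');

const socket = tls.connect(
  { host: 'test.corp', port: 636, rejectUnauthorized: false }, // fetch the cert without failing
  () => {
    // tls.checkServerIdentity() implements the RFC 6125-style CN/SAN match;
    // it returns undefined on success, or an Error describing the mismatch.
    const err = tls.checkServerIdentity('test.corp', socket.getPeerCertificate());
    console.log(err ? err.message : 'identity matches');
    socket.end();
  }
);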
All this to say: first, we need to fix the handful of domain controllers so that they serve the correct certificate. We are open to suggestions, since we have been working on this problem for the past few months. Second, is there a way to get the MS tools in question to work to the RFC 6125 standard?
This has been moved to:
https://serverfault.com/questions/939515/ldaps-microsoft-active-directory-multiple-certificates-rfc6125

RFC 6125 specifically states that it does not supersede existing RFCs. LDAP certificate handling is defined in RFC 4513. Beyond that, RFC 6125 has significant flaws. See also https://bugzilla.redhat.com/show_bug.cgi?id=1740070#c26

LDP will supposedly validate the certificate against the client's store if you toggle the SSL checkbox on the connection screen.
That said, I'm not surprised that neither it nor ADSI Edit enforces that part of the standard, given that they are often used to configure or repair broken configurations. Out of the box, without Certificate Services, DCs use self-signed certificates for LDAPS. I would wager 80% of DCs never get a proper certificate for LDAP; if the tools enforced validation, most wouldn't be able to connect. A better design decision would have been a toggle to turn validation off.
I use a similar openssl command to verify my own systems; I think it's superior to LDP even if LDP were to validate the certificate. To save you some effort, I would suggest using this variant of the openssl command:
echo | openssl s_client -connect .test.corp:636 2>/dev/null | openssl x509 -noout -dates -issuer -subject -text
That should save you from having to write the output to a file and read it with other tools.
I've found LDAPS on AD to be a huge pain for exactly the reasons you describe. It just seems to pick up the first valid cert it can find. If you've already added the correct cert to the AD DS personal store, I'm not sure where else to suggest you go, other than removing some of the other certs from the DCs' computer store.

Related

Can I use my own Certificate Authority for HTTPS over LAN?

I have a server and a few clients, all running in different docker containers. Users can use the client by entering localhost:3000 in their browser (where the client container is running).
All the containers run on the same LAN. I want to use HTTPS.
Can I sign a public/private key pair using my own CA, then load the CA's public key into the browser?
I want to use the normal flow for public domains, but internally with my own CA.
Or should I look for another solution?
In general, the way PKIX (as used in SSL/TLS, including HTTPS) works is that the server must have a private key and matching certificate; this is the same whether you use a public CA or your own (as you desire). The server should also have any intermediate or 'chain' cert(s) needed to verify its cert. A public CA will always need such chain cert(s), because CA/Browser Forum rules (codifying common best practice) prohibit issuing 'subscriber' (end-entity) certs directly from a root. With your own CA that choice is up to you; you can use intermediate(s) or not, although it is considered best practice to use them and keep the root private key 'offline'. In cryptography, 'offline' means not on any system that communicates with anybody (such as, in this case, servers that request certificates), thus eliminating one avenue of attack; ideally the key lives on a specialized device that is 'airgapped' (not connected, or even able to be connected, to any network) and kept in a locked vault, possibly with 'tamper protection', a polite name for self-destruct. As a known example of the rigor needed to secure something as sensitive as the root key of an important CA, compare Stuxnet.
The client(s) do not need, and should not be configured with, the server cert unless you want to do pinning; they do need the CA root cert. Most clients, and particularly browsers, already have many/most/all public CA root certs built in, so using a cert from such a CA requires no action on the client(s); using your own CA, on the other hand, requires adding your CA cert to the client(s). Chrome on Windows uses the Microsoft-supplied (Windows) store; you can add to it explicitly (using the GUI dialog, the certutil program, or PowerShell), although in domain-managed environments (e.g. businesses) it is also popular to 'push' a CA cert (or certs) using GPO. Firefox uses its own truststore, which you must add to explicitly.
In nodejs you configure the private key, server cert, and if needed chain cert(s), as documented; for example:
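Here is a minimal sketch of both sides, with file and host names as assumptions (in Node, intermediates are served by concatenating them after the server cert in a single PEM file):
const https = require('https');
const fs = require('fs');

// Server: private key plus certificate; 'fullchain.pem' is the server cert
// followed by any intermediate cert(s), concatenated.
https.createServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('fullchain.pem'),
}, (req, res) => res.end('ok')).listen(3000);

// A non-browser client trusts your own CA explicitly via the ca option.
https.get('https://myserver.lan:3000/', {
  ca: fs.readFileSync('my-ca.pem'),
}, (res) => console.log(res.statusCode));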
PS: note that you generally should, and for Chrome (and the new Edge, which is actually Chromium-based) must, have the SubjectAlternativeName (SAN) extension in the server cert specify its domain name(s), or optionally IP address(es), NOT (or not only) the CommonName (CN) attribute, as you will find in many outdated and/or incompetent instructions and tutorials on the Web. The OpenSSL command line makes it easy to set the CommonName but not quite so easy to set the SAN; there are many Qs on several Stacks about this. Any public CA after about 2010 handles the SAN automatically.

Solve boost.asio certificate failed error -without- access to source code, to find out what information Philips Hue Bridge shares

This is a very specific question, but who knows, maybe someone out there can help me.
I happen to have a Philips Hue Bridge, and I would love to know what personal information it is sharing with the outside world. Using tcpdump on my router, I figured out that the Hue Bridge has a rather talkative personality. But because it talks over SSL tunnels, I have no idea what it says. So what I did was set up a SonicWall with SSL-DPI using my own CA, got root access to the Hue Bridge, and found the application that does the talking to wss://ws.meethue.com (it's called websocketcd). I then replaced the root certificate on the Hue Bridge and adjusted the cipher to match the SonicWall, and now I am stuck because boost.asio throws a validation error for my certificate:
error:14090086:lib(20):func(144):reason(134)
For those not too familiar with the error codes, this is what they mean:
lib(20) is ERR_LIB_SSL
func(144) is SSL_F_SSL3_GET_SERVER_CERTIFICATE
reason(134) is SSL_R_CERTIFICATE_VERIFY_FAILED
To verify that it's not my SonicWall or certificate causing the problem, I executed openssl s_client -connect ws.meethue.com:443 -CAfile ca.pem from the Hue Bridge, and that validates the chain perfectly fine, the same way as the original certificate. I also verified that the application is loading my root certificate and cipher correctly (because if I change the cipher, I get a cipher error). Also, in my browser I can visit https://ws.meethue.com without certificate errors. Here's my self-made certificate chain, in case someone wants to check it: https://gofile.io/d/5msjoJ (password for download/key: 1020304050; it's a temporary key that only exists in my local test env, so it's safe to share ;-)
If websocketcd weren't a binary file, the problem would be super easy to solve using set_verify_mode, but unfortunately it is a binary, and that makes life significantly more complicated.
Is there anyone who can give me advice on how to make this blob called websocketcd, with boost.asio in it, accept my root certificate? What I tried too: letting it communicate without SSL, and with SSL but without encryption (eNULL:aNULL ciphers). I am a bit hesitant to share the blob, but for those who have a Hue Bridge too, it's located at /usr/bin/websocketcd.
Perhaps you can use strace (or maybe even ltrace) to spot which certificate paths it is using for root authorities.
If it uses a single file, you might be able to hack it by replacing that file with a CA that verifies your MITM certificate.
Sometimes the file can contain multiple certificates, so it's worth appending/prepending yours.
If you're in luck, there will be a readdir on a directory containing certificates. If so, you should be able to add your root certificate (in PEM form) there; remember to run c_rehash on that directory.
For those interested: after some 20 hours, I figured out that websocketcd requires a certificate revocation list (CRL) for each CA in the chain (the CRLs do not have to contain any revoked serials). These CRLs need to be included in the root CA file that is loaded using the ca-filename argument. I was not aware that Boost.Asio could demand that a CRL be present for each CA, but apparently they (Signify) managed to do so.

Why does validation fail when connecting to a server by IP address instead of hostname?

I am getting a bad certificate error when accessing the server using the IP address instead of the DNS name.
Is this functionality newly introduced in TLS 1.1 and TLS 1.2? It would be good if someone could point out where in the OpenSSL code it fails and returns the bad certificate error.
Why do we get a bad certificate error when accessing the server using the IP address instead of the DNS name?
It depends on the issuing/validation policies, user agents, and the version of OpenSSL you are using. So to give you a precise answer, we need to know more about your configuration.
Generally speaking, suppose www.example.com has an IP address of www.xxx.yyy.zzz. If you connect via https://www.example.com/..., then the connection should succeed. If you connect using a browser via https://www.xxx.yyy.zzz/..., then it should always fail. If you connect using another user agent via https://www.xxx.yyy.zzz/..., then it should succeed if the certificate includes www.xxx.yyy.zzz, and fail otherwise.
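To make that concrete with a small nodejs sketch (the address is a placeholder): the handshake's identity check fails unless the certificate carries a matching iPAddress SAN entry.
const tls = require('tls');

const socket = tls.connect({ host: '192.0.2.10', port: 443 }, () => {
  // Reached only if the certificate contains a matching iPAddress SAN.
  console.log('identity check passed');
  socket.end();
});
// Otherwise the connection errors, e.g. ERR_TLS_CERT_ALTNAME_INVALID in Node.
socket.on('error', (err) => console.error(err.code));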
Issuing/Validation Policies
There are two bodies which dominate issuing/validation policies: the CA/Browser Forum, and the Internet Engineering Task Force (IETF).
Browsers, like Chrome, Firefox, and Internet Explorer, follow the CA/B Baseline Requirements (CA/B BR).
Other user agents, like cURL and Wget, follow IETF issuing and validation policies, like RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile, and RFC 6125, Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS). The RFCs are more relaxed than the CA/B issuing policies.
User Agents
Different user agents have different policies that apply to DNS names. Some want a traditional hostname found in DNS, while others allow IP addresses.
Browsers only allow DNS hostnames in the Subject Alternative Name (SAN). If the hostname is missing from the SAN, then the match will not occur; putting the server name in the Common Name is a waste of time and energy, because browsers require host names in the SAN.
Browsers do not match a public IP address in the SAN. They will sometimes allow a private IP from RFC 1918, Address Allocation for Private Internets.
Other user agents allow any name in the Subject Alternative Name (SAN). They will also match a name in both the Common Name (CN) and the SAN. Names include a DNS name like www.example.com, a public IP address, a private IP address like 192.168.10.10, and a local name like localhost or localhost.localdomain.
OpenSSL Version
OpenSSL version 1.0.2 and below did not perform hostname validation; you had to perform the matching yourself. If you did not, then the connection appeared to always succeed. Also see Hostname Validation and TLS Client on the OpenSSL wiki.
OpenSSL 1.1.0 and above perform hostname matching. If you switch to 1.1.0, then you should begin experiencing failures if you were not performing hostname matching yourself, or were not strictly following issuing policies.
It would be good if someone would point out OpenSSL code where it fails and return the bad certificate error.
The check-ins occurred in early 2015, and they have been available in Master (i.e., 1.1.0-dev) since that time. The code was also available in 1.0.2, but you had to perform special actions to use it. The routines were not available in 1.0.1 or below. Also see Hostname Validation on the OpenSSL wiki. I don't have the Git check-ins to hand because I'm on a Windows machine at the moment.
More information on the rules for names and their locations can be found at How do you sign a Certificate Signing Request with your Certification Authority and How to create a self-signed certificate with OpenSSL. There are at least four or six more documents covering them, like how things need to be presented for HTTP Strict Transport Security (HSTS) and Public Key Pinning with Overrides for HTTP.

OpenSSL in GitLab, what verification for self-signed certificate?

On Debian, using GitLab, I ran into issues with my self-signed certificate.
Reading through the code after a lot of searching on the Internet (I guess it's the last resort; FOSS is helpful), I found the following lines in gitlab-shell/lib/gitlab_net.rb, which left me... perplexed.
if config.http_settings['self_signed_cert']
http.verify_mode = OpenSSL::SSL::VERIFY_NONE
end
Most Stack Overflow answers about the various issues I've had until now have led me to believe that VERIFY_NONE, as you'd expect, doesn't verify anything. VERIFY_PEER seems, based on my reading, to be the correct setting for a self-signed certificate.
As I read it, it feels like taking steps to secure my connection using a certificate, and then just deciding not to use it. Is it a bug, or am I misreading the source?
gitlab-shell (on the GitLab server) has to communicate with the GitLab instance through an HTTPS or SSH URL API.
If it is a self-signed certificate, gitlab-shell doesn't want any error/warning when trying to access those GitLab URLs, hence the SSL::VERIFY_NONE.
But, that same certificate is also used by clients (outside of the GitLab server), using those same GitLab HTTPS URLs from their browser.
For them, the self-signed certificate is useful, provided they install it in their browser keystore.
For those transactions (clients to GitLab), the certificate will be "verified".
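For comparison (this is not GitLab's code; the host and file names are placeholders), the same trade-off expressed in nodejs: turning verification off entirely versus keeping it on and trusting the specific self-signed certificate.
const https = require('https');
const fs = require('fs');

// Equivalent of VERIFY_NONE: accept any certificate (avoid where possible).
https.get('https://gitlab.local/', { rejectUnauthorized: false }, () => {});

// Equivalent of VERIFY_PEER with an explicit trust anchor: a self-signed
// certificate is its own issuer, so passing it as `ca` keeps verification on.
https.get('https://gitlab.local/', { ca: fs.readFileSync('gitlab.crt') }, () => {});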
The OP Kheldar points out this passage in Mislav's post:
OpenSSL expects to find each certificate in a file named by the certificate subject’s hashed name, plus a number extension that starts with 0.
That means you can’t just drop My_Awesome_CA_Cert.pem in the directory and expect it to be picked up automatically.
However, OpenSSL ships with a utility called c_rehash which you can invoke on a directory to have all certificates indexed with appropriately named symlinks.
(See for instance OpenSSL Verify location)
cd /some/where/certs
c_rehash .

How to tell LDAP SSL server with multiple certificates to return the one that I need?

My simple LDAP Java program, using
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.SECURITY_AUTHENTICATION, "simple");
env.put(Context.SECURITY_PRINCIPAL, <UserDN>);
env.put(Context.SECURITY_CREDENTIALS, <Password>);
env.put(Context.SECURITY_PROTOCOL, "ssl");
env.put(Context.PROVIDER_URL, "ldaps://<host>:636");
to perform LDAP SSL authentication, stopped working ever since a second server certificate, with the same CN but different subject details, was installed on the server, to which I have no access at all.
The program fails when I make the initial context
new InitialDirContext(env);
The error is "Failed to initialize directory context: <host>:636"
It returns the 2nd server certificate when I run
openssl s_client -showcerts -connect <host>:636 </dev/null
which makes me believe that the solution will be to find a way to tell the server which certificate to use.
I searched and read a lot of articles on this topic, and I have to admit that I am very confused; it is not clear to me whether these articles are talking about the client certificate or the server certificate, or whether the actions to be taken are for the client side or the server side.
In one article, it says that I can use a custom SSLSocketFactory with the keystore path and
env.put("java.naming.ldap.factory.socket", "com.xxx.MyCustomSSLSocketFactory");
But I don't know the path to the server certificate keystore on the server.
In one Microsoft article, it says the best resolution is to have just one server certificate on the server, or to put the server certificate in the Active Directory Domain Services (NTDS\Personal) certificate store for LDAPS communications. But I don't have access to the server, and the 'fix' to this problem has to be done in my LDAP Java program.
In another article, it says to use Server Name Indication (SNI) extension.
So is there a way for me to tell the server which certificate I want? Or is my problem somewhere else?
Thanks a lot.
Here is the stack trace:
javax.naming.ServiceUnavailableException: <host>:636; socket closed
at com.sun.jndi.ldap.Connection.readReply(Connection.java:419)
at com.sun.jndi.ldap.LdapClient.ldapBind(LdapClient.java:340)
at com.sun.jndi.ldap.LdapClient.authenticate(LdapClient.java:192)
at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2694)
at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:293)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:175)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:193)
at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:136)
at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:66)
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
at javax.naming.InitialContext.init(InitialContext.java:223)
at javax.naming.InitialContext.<init>(InitialContext.java:197)
at javax.naming.directory.InitialDirContext.<init>(InitialDirContext.java:82)
When I used Jxplorer to run the same test, it gave me the same error.
EJP was right to point out that the issue was that the certificate was not trusted. Many thanks EJP.
When I installed the CA certificate in %JAVA_HOME%/lib/security/cacerts, Jxplorer worked. My program still failed; I had to add these lines to make it work (not sure if I need all of them, though):
System.setProperty("javax.net.ssl.keyStore",%JAVA_HOME%/lib/security/cacerts);
System.setProperty("javax.net.ssl.trustStore",%JAVA_HOME%/lib/security/cacerts);
System.setProperty("javax.net.ssl.keyStorePassword=changeit);
System.setProperty("javax.net.ssl.trustStorePassword=changeit);
But since the certificate is not trusted in the first place, I am simply 'forcing' our server to trust it, hence this solution is not acceptable. And neither our server nor the LDAP server runs Java 7, so SNI is out too!
EJP mentioned that I could control the server certificate by restricting the cipher suites or accepted issuers in the client (my webapp), if the server certificates have different algorithms or issuers. The two certificates do have different issuers; however, I don't know how to do that, and I could not find anything on it either.
EJP, can you please elaborate, or point me to some sites...?
If the certificates have different issuers, you can control which certificate you get at the client by controlling which of those issuers is in your truststore. If only one of them is there, that's the one you'll get. If they're both there, you get pot luck. Note that if your truststore also contains a common super-issuer, again it's probably pot luck.
The result isn't pot luck if you specify one and only one certificate in the Certificates - Service (Active Directory Domain Services) - NTDS\Personal location in the Microsoft Management Console. Contrary to the Microsoft docs I've read, though, a domain controller restart seemed to be necessary for the newly specified certificate to take hold.
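For anyone diagnosing which certificate a server actually presents, here is a hedged nodejs sketch (host and port are placeholders): it dumps the certificate served for a given SNI name, so you can also see whether the server selects its certificate based on the name you ask for.
const tls = require('tls');

// Connect and dump the served certificate; the servername option sets the
// SNI extension explicitly.
function showCert(host, servername) {
  const socket = tls.connect(
    { host, port: 636, servername, rejectUnauthorized: false }, // diagnostic only
    () => {
      const cert = socket.getPeerCertificate();
      console.log(servername, '->', cert.subject, cert.subjectaltname, cert.serialNumber);
      socket.end();
    }
  );
  socket.on('error', console.error);
}

showCert('dc1.example.test', 'example.test');     // ask for the domain name
showCert('dc1.example.test', 'dc1.example.test'); // ask for the host's own name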