Solve boost.asio certificate verify failed error, without access to source code, to find out what information the Philips Hue Bridge shares

This is a bit of a super duper specific question, but who knows, maybe there's someone out there who can help me.
I happen to have a Philips Hue Bridge and I would love to know what personal information it is sharing with the outside world. Using tcpdump on my router, I figured out that the Hue Bridge has a rather talkative personality. But because it talks over SSL tunnels, I have no idea what it says. So what I did is set up a SonicWall with DPI-SSL and a CA, got root access to the Hue Bridge, and found the application that does the talking to wss://ws.meethue.com (it's called websocketcd). I then replaced the root certificate on the Hue Bridge and adjusted the cipher to match the SonicWall, and now I am stuck because boost.asio throws a validation error on my certificate:
error:14090086:lib(20):func(144):reason(134)
For those not too familiar with the error codes, this is what they mean:
lib(20) is ERR_LIB_SSL
func(144) is SSL_F_SSL3_GET_SERVER_CERTIFICATE
reason(134) is SSL_R_CERTIFICATE_VERIFY_FAILED
To verify it's not my SonicWall or certificate that is causing the problem, I executed openssl s_client -connect ws.meethue.com:443 -CAfile ca.pem from the Hue Bridge, and that validates the chain perfectly fine, the same way as the original certificate. I also verified that the application is loading my root certificate and cipher correctly (because if I change the cipher, I get a cipher error). Also, in my browser I can visit https://ws.meethue.com without certificate errors. Here's my self-made certificate chain, in case someone wants to check it: https://gofile.io/d/5msjoJ (password for download/key: 1020304050; it's a temporary key that only exists in my local test env, so it's safe to share ;-)
If websocketcd wasn't a binary file, the problem would be super easy to solve using set_verify_mode, but unfortunately it is a binary, and that makes life significantly more complicated.
Is there anyone who can give me advice on how to make this blob called websocketcd, with boost.asio in it, accept my root certificate? What I tried too: letting it communicate without ssl, and with ssl but without encryption (eNULL:aNULL ciphers). I am a bit hesitant to share the blob, but for those who have a Hue Bridge too, it's located at /usr/bin/websocketcd.
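For context, here is what the fix would look like if the source were available: a minimal Boost.Asio sketch (the CA path and the surrounding setup are my assumptions, not websocketcd's actual code):

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

int main() {
    boost::asio::io_context io;
    boost::asio::ssl::context ctx(boost::asio::ssl::context::sslv23_client);
    // Trust the MITM root CA instead of the built-in bundle (assumed path):
    ctx.load_verify_file("/etc/ssl/mitm-root-ca.pem");
    // Keep verifying the peer against the CA loaded above...
    ctx.set_verify_mode(boost::asio::ssl::verify_peer);
    // ...or, with source access, simply disable verification:
    // ctx.set_verify_mode(boost::asio::ssl::verify_none);
    boost::asio::ssl::stream<boost::asio::ip::tcp::socket> sock(io, ctx);
    // connecting and sock.handshake(boost::asio::ssl::stream_base::client) would follow
}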

Perhaps you can use strace (or maybe even ltrace) to spot which certificate paths it is using for root authorities; see the example command below.
If it uses a single file, you might be able to hack it by replacing that file with a CA that verifies your MITM certificate.
Sometimes the file can contain multiple certificates, so it's worth appending/prepending yours.
If you're in luck, there will be a readdir on a directory containing certificates. If so, you should be able to add your root certificate (in PEM form) there; remember to run c_rehash on that directory.
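As a concrete illustration of the strace suggestion above, something along these lines should reveal which certificate files or directories the binary opens (the syscall list and grep pattern are guesses, not verified against websocketcd):

strace -f -e trace=open,openat,stat /usr/bin/websocketcd 2>&1 | grep -i -E 'cert|pem|ssl'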

For those interested: after some 20 hours, I figured out that websocketcd requires a certificate revocation list for each CA in the chain (the CRLs do not have to contain any revoked serials). These CRLs need to be included in the root CA file that is loaded via the ca-filename argument. I was not aware that Boost.Asio could demand that a CRL be present for each CA, but apparently they (Signify) managed to do so.
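For anyone reproducing this: an empty CRL (no revoked serials) can be generated per CA and appended to the file passed via ca-filename. A sketch, assuming a CA set up for openssl ca with a ca.cnf containing the usual CRL settings (default_crl_days etc.):

openssl ca -config ca.cnf -gencrl -out ca.crl
cat ca.crl >> ca.pem

Repeat for each CA in the chain, so that every CA in ca.pem is accompanied by its CRL.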

Related

LDAPS Microsoft Active Directory Multiple Certificates RFC6125

We have a Microsoft Active Directory domain with a large pool of domain controllers (DCs) that are set up with LDAP. These are all set up with LDAPS and use Certificate Services, via a template, to issue a certificate with the domain name (i.e. test.corp) in the Subject Alternative Name (SAN) for the LDAPS server to serve.
Since these are DCs, DNS is set up in a pool for each of these systems to respond to requests for test.corp in a round-robin fashion.
Each of these DCs has multiple templates and multiple certificates in the Local Computer\Personal certificate store.
Upon testing with a Node.js module, ldapjs, when making an LDAPS request using the domain name test.corp, we notice that a handful of servers fail with the following message:
Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match
certificate's altnames: Host: test.corp. is not in the cert's
altnames: othername:, DNS:.test.corp
As we investigated, we found that this handful of LDAPS servers is serving the incorrect certificate. We determined this by using the following command:
openssl s_client -connect .test.corp:636
If you take the certificate section of the output, put it in a file, and use a tool such as Certificate Manager or certutil to read the file, you can see that the certificate is not the correct one (it does not have the domain test.corp in the SAN). We also verified this by comparing the serial numbers.
While investigating, since we have DCs with multiple certificates in the Local Computer\Personal certificate store, we came across the following article:
https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx
It suggests putting the certificate from the Local Computer\Personal certificate store into the Active Directory Domain Services\Personal store. We followed the steps outlined, but found the same results.
Upon further investigation, it was suggested to use a tool called ldp or adsiedit. We proceeded to use these tools and spoofed the hosts file of the local machine we were testing from, pointing the domain (test.corp) to the IP of one of the DCs that are giving us trouble. After a restart to clear any cache, we tested the ldp and adsiedit tools by connecting to test.corp. These tools did not report any errors.
We found this odd, so we then ran the openssl command from this same system to see what certificate it was serving, and we found it was still serving the incorrect certificate.
Upon further research, it appears that the ldp tool (when the SSL checkbox is selected) and the adsiedit tool are not compliant with RFC6125, specifically B.3:
https://www.rfc-editor.org/rfc/rfc6125#appendix-B.3
which basically states that the identity in the certificate must match the identity of the request, otherwise the handshake fails. This identity verification is done using the certificate common name (CN) or the SAN.
Based on this, it appears the tools ldp and adsiedit do not conform to the RFC6125 standard.
All this to say, we need to first fix the handful of domain controllers that are serving the incorrect certificate. We are open to suggestions, since we have been working on this problem for the past few months. Second, is there a way to get the MS tools in question to conform to the RFC6125 standard?
RFC6125 specifically states that it does not supersede existing RFCs. LDAP cert handling is defined in RFC4513. Outside of that, RFC6125 has significant flaws. See also https://bugzilla.redhat.com/show_bug.cgi?id=1740070#c26
LDP will supposedly validate the SSL against the client store if you toggle the ssl checkbox on the connection screen.
That said, I'm not surprised that neither it nor ADSI Edit enforces that part of the standard, given they are often used to configure or repair broken configurations. Out of the box, and without Certificate Services, DCs use self-signed certs for LDAPS. I would wager 80% of DCs never get a proper certificate for LDAP; if the tools enforced validation, most wouldn't be able to connect. A better design decision would have been a toggle to turn the validation off.
I use a similar openssl command to verify my own systems. I think it's superior to LDP even if LDP were to validate the certificate. To save you some effort, I would suggest using this variant of the openssl command:
echo | openssl s_client -connect .test.corp:636 2>/dev/null | openssl x509 -noout -dates -issuer -subject -text
That should save you having to output to a file and having to read it with other tools.
I've found LDAPS on AD to be a huge pain for the exact reasons you describe. It just seems to pick up the first valid cert it can find. If you've already added it to the AD DS personal store, I'm not sure where else to suggest you go, other than removing some of the other certs from the DC's computer store.
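If it helps, a quick way to enumerate what is in a DC's Local Computer\Personal store before pruning it (using built-in Windows tooling; run from an elevated prompt):

certutil -store My

The serial numbers listed there can then be compared against what openssl s_client reports.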

OpenLDAP: TLS error -8179:Peer's Certificate issuer is not recognized

I'm not familiar with certificates and OpenLDAP. I'm trying to port someone else's work from an older OS to CentOS 6 with openldap-2.4.23. On the old OS, an LDAP connection worked without issue. Now on CentOS 6, I get the following error when doing a simple bind:
TLS error -8179:Peer's Certificate issuer is not recognized.
My /etc/openldap/ldap.conf has a single line:
TLS_CACERTDIR /etc/openldap/certs
I tried commenting out that line and putting the following into the file, but that didn't change the error message I received:
tls_reqcert allow
I also tried putting only the following line in ldap.conf, but that didn't change the error. I tried this based on information found in this question:
LDAPTLS_CACERT /etc/ssl/certs/ca-bundle.crt
I copied files into the following directories:
/etc/pki/tls/certs/ca.crt
/etc/pki/tls/certs/server.crt
/etc/pki/tls/private/server.key
I have no choice but to use openldap-2.4.23. Any idea what is causing this error or what I can do to troubleshoot?
Thanks in advance.
SP
As per http://www.zytrax.com/books/ldap/ch6/ldap-conf.html, TLS_CACERT should point to the file containing the CA certificate that the client will use to verify the server's certificate. You need to make sure that your server's CA (the CA that signed your server certificate) is present in the file that TLS_CACERT points to (in your case, /etc/ssl/certs/ca-bundle.crt).
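For example, to check whether the signing CA is already in the bundle, and to append it if it is missing (paths taken from the question; adjust to your system):

openssl verify -CAfile /etc/ssl/certs/ca-bundle.crt /etc/pki/tls/certs/server.crt
cat /etc/pki/tls/certs/ca.crt >> /etc/ssl/certs/ca-bundle.crt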
I had the same error. In my case, the reason was that my client had the wrong certificate in /etc/ipa/ca.crt. To fix this, I just copied /etc/ipa/ca.crt from the KDC server to the client, and the error disappeared.
Depending on the environment, OpenLDAP may completely ignore the value set for TLS_CACERTDIR, because evidently GnuTLS doesn't support that type of certificate store.
From the man page for ldap.conf(5):
TLS_CACERTDIR <path>
        Specifies the path of a directory that contains Certificate
        Authority certificates in separate individual files. The
        TLS_CACERT is always used before TLS_CACERTDIR. This
        parameter is ignored with GnuTLS.
In my case, I suspect that GnuTLS is in use, so TLS_CACERTDIR simply does nothing. Using TLS_CACERT pointed to a file containing the certificate of my server's signing CA seems to have done the trick.
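Concretely, that means a single directive in /etc/openldap/ldap.conf along these lines (the CA file path is an assumption; point it at your signing CA):

TLS_CACERT /etc/pki/tls/certs/ca.crt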
I think https://serverfault.com/questions/437546/centos-openldap-cert-trust-issues is a much more complete answer.

OpenSSL in GitLab, what verification for self-signed certificate?

On Debian, using GitLab, I ran into issues with my self-signed certificate.
Reading through the code after a lot of searching on the Internet (I guess it's the last resort; FOSS is helpful), I found the following lines in gitlab-shell/lib/gitlab_net.rb, which left me... perplexed.
if config.http_settings['self_signed_cert']
http.verify_mode = OpenSSL::SSL::VERIFY_NONE
end
Most Stack Overflow responses about the various issues I've had until now have led me to believe that VERIFY_NONE, as you'd expect, doesn't verify anything. VERIFY_PEER seems, based on my reading, to be the correct setting for a self-signed certificate.
As I read it, it feels like taking steps to secure my connection using a certificate and then just deciding not to use it. Is it a bug, or am I misreading the source?
gitlab-shell (on the GitLab server) has to communicate with the GitLab instance through an HTTPS or SSH URL API.
If it is a self-signed certificate, it doesn't want any error/warning when trying to access those GitLab URLs, hence the SSL::VERIFY_NONE.
But, that same certificate is also used by clients (outside of the GitLab server), using those same GitLab HTTPS URLs from their browser.
For them, the self-signed certificate is useful, provided they install it in their browser keystore.
For those transactions (clients to GitLab), the certificate will be "verified".
The OP Kheldar points out in Mislav's post:
OpenSSL expects to find each certificate in a file named by the certificate subject’s hashed name, plus a number extension that starts with 0.
That means you can’t just drop My_Awesome_CA_Cert.pem in the directory and expect it to be picked up automatically.
However, OpenSSL ships with a utility called c_rehash which you can invoke on a directory to have all certificates indexed with appropriately named symlinks.
(See for instance OpenSSL Verify location)
cd /some/where/certs
c_rehash .
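If c_rehash is not available, the same effect can be achieved by hand, since OpenSSL simply looks for the subject hash as the file name (using the example certificate name from the quote above):

cd /some/where/certs
ln -s My_Awesome_CA_Cert.pem "$(openssl x509 -hash -noout -in My_Awesome_CA_Cert.pem).0"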

How to tell LDAP SSL server with multiple certificates to return the one that I need?

My simple LDAP Java program, which uses
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.SECURITY_AUTHENTICATION, "simple");
env.put(Context.SECURITY_PRINCIPAL, <UserDN>);
env.put(Context.SECURITY_CREDENTIALS, <Password>);
env.put(Context.SECURITY_PROTOCOL, "ssl");
env.put(Context.PROVIDER_URL, "ldaps://<host>:636");
to perform LDAP SSL authentication, stopped working ever since a second server certificate was installed on the server (it has the same CN, but the other details in the subject differ). I don't have access to the server at all.
The program fails when I create the initial context:
new InitialDirContext(env);
The error is "Failed to initialize directory context: <host>:636"
It returns the 2nd server certificate when I run
openssl s_client -showcerts -connect <host>:636 </dev/null
which makes me believe that the solution will be to find a way to tell the server which certificate to use.
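A quick way to see which of the two certificates is being served is to print its serial and issuer directly (a sketch; <host> is the question's placeholder):

openssl s_client -connect <host>:636 </dev/null 2>/dev/null | openssl x509 -noout -serial -issuer -subject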
I searched and read a lot of articles on this topic, and I have to admit that I am very confused; it is not clear to me whether these articles are talking about the client certificate or the server certificate, or whether the actions to be taken are for the client side or the server side.
In one article, it says that I can use a custom SSLSocketFactory with the keystore path and
env.put("java.naming.ldap.factory.socket", "com.xxx.MyCustomSSLSocketFactory");
But I don't know the path to the server certificate keystore on the server.
In one Microsoft article, it says the best resolution is to have just one server certificate on the server, or to put the server certificate in the Active Directory Domain Services (NTDS\Personal) certificate store for LDAPS communications. But I don't have access to the server, and the 'fix' to this problem has to be done in my LDAP Java program.
In another article, it says to use Server Name Indication (SNI) extension.
So, is there a way that I can tell the server which certificate I want? Or is my problem somewhere else?
Thanks a lot.
Here is the stack trace:
javax.naming.ServiceUnavailableException: <host>:636; socket closed
at com.sun.jndi.ldap.Connection.readReply(Connection.java:419)
at com.sun.jndi.ldap.LdapClient.ldapBind(LdapClient.java:340)
at com.sun.jndi.ldap.LdapClient.authenticate(LdapClient.java:192)
at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2694)
at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:293)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:175)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:193)
at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:136)
at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:66)
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
at javax.naming.InitialContext.init(InitialContext.java:223)
at javax.naming.InitialContext.<init>(InitialContext.java:197)
at javax.naming.directory.InitialDirContext.<init>(InitialDirContext.java:82)
When I used Jxplorer to run the same test, it gave me the same error.
EJP was right to point out that the issue was that the certificate was not trusted. Many thanks EJP.
When I installed the CA certificate in %JAVA_HOME%/lib/security/cacerts, Jxplorer worked. My program still failed; I had to add these lines to it to make it work (not sure if I need all of them, though):
System.setProperty("javax.net.ssl.keyStore",%JAVA_HOME%/lib/security/cacerts);
System.setProperty("javax.net.ssl.trustStore",%JAVA_HOME%/lib/security/cacerts);
System.setProperty("javax.net.ssl.keyStorePassword=changeit);
System.setProperty("javax.net.ssl.trustStorePassword=changeit);
But since the certificate is not trusted in the first place, I am simply 'forcing' our server to trust it, hence this solution is not acceptable. And neither our server nor the LDAP server runs on Java 7, so SNI is out too!
EJP mentioned that I could control the server certificate by restricting the cipher suites or the accepted issuers in the client (my webapp), if the server certificates have different algorithms or issuers. The two certificates do have different issuers; however, I don't know how to do that, and I could not find anything on it either.
EJP, can you please elaborate, or point me to some sites...?
If the certificates have different issuers, you can control which certificate you get at the client by controlling which of those issuers is in your truststore. If only one of them is, that's the one you'll get. If they're both there, you get pot luck. Note that if your truststore also contains a common super-issuer, again it's probably pot luck.
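A sketch of setting that up: import only the preferred issuer into a dedicated truststore and point the JVM at it (the alias, file names, and class name are hypothetical):

keytool -importcert -alias preferred-ca -file preferred-ca.crt -keystore onlyone.jks -storepass changeit
java -Djavax.net.ssl.trustStore=onlyone.jks -Djavax.net.ssl.trustStorePassword=changeit MyLdapClient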
The result isn't pot luck if you specify one and only one certificate in the Certificates - Service (Active Directory Domain Services) - NTDS\Personal location in the Microsoft Management Console. Contrary to the Microsoft docs I've read, though, a domain controller restart seemed to be necessary for the newly specified certificate to 'take hold'.

SSL ASN1 Encoding routines and x509 certificate routine errors

Up until yesterday, I was completely new to anything Secure Sockets Layer related. I need to get a self-signed certificate to proceed with an app registration process, so that I can implement OAuth in an app I'm writing. I went through a nice tutorial about how to generate certificates here. (I'm an Ubuntu user, if you didn't click the link to figure that out.) I've been trying to generate a self-signed X.509 certificate in PEM format with a 1024-bit RSA key. I set up the configuration and did everything as in the tutorial, except for adapting the environment-related data to my own environment. The commands to generate a new certificate and key after going through the configuration are:
Force OpenSSL to look for its configuration file in an alternate location (the server configuration file):
export OPENSSL_CONF=~/myCA/exampleserver.cnf
Generate the certificate and key:
openssl req -newkey rsa:1024 -keyout tempkey.pem -keyform PEM -out tempreq.pem -outform PEM
Following those two commands the following is displayed:
Generating a 1024 bit RSA private key
...++++++
...............++++++
writing new private key to 'tempkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
I enter my pass phrase and the error I continually get is:
problems making Certificate Request
3074111688:error:0D06407A:asn1 encoding routines:a2d_ASN1_OBJECT:first num too large:a_object.c:109:
3074111688:error:0B083077:x509 certificate routines:X509_NAME_ENTRY_create_by_txt:invalid field name:x509name.c:285:name=organizationUnitName
I ran into a similar problem while following the same tutorial that you mentioned. In my case, the error was:
problems making Certificate Request
140098671105696:error:0D07A097:asn1 encoding routines:ASN1_mbstring_ncopy:string too long:a_mbstr.c:154:maxsize=2
So I figured out that I had written some string which should have been 2 characters long (maxsize=2) but happened to be way longer. I went back to my config file and quickly found that I had written the full name of the country instead of the 2-character code. This solved my problem.
Not really familiar with the process, but "invalid field name:x509name.c:285:name=organizationUnitName" appears to mean your Organizational Unit Name field is invalid. Note that the field name OpenSSL actually recognizes is organizationalUnitName (short form OU), not organizationUnitName.
According to digicert.com: The Organizational Unit is whichever branch of your company is ordering the certificate such as accounting, marketing, etc.
It depends on what is in your conf file. The openssl ca tool looks for sections in the file, those sections refer to other sections, some of the section names are mandatory, and some of the name/value pairs in sections are mandatory; it's quite a big configuration space offered by this file.
The error you mention comes up when openssl doesn't recognise a name inside a section, in different scenarios; e.g., I've seen it when adding a custom OID for an end-entity cert, and also when customising the contents of a CA cert.
If you post your configuration file and what you expect in the resulting certificate, we can help. Also, can you say what you intend to use the certificate for (e.g., to secure a client session on a production web service, or something else)?
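For reference, a minimal sketch of a req-style configuration with valid distinguished-name field names (all values are placeholders; note the spelling organizationalUnitName):

[ req ]
distinguished_name = req_distinguished_name
prompt = no

[ req_distinguished_name ]
countryName = US
organizationName = Example Org
organizationalUnitName = Example Unit
commonName = example.com

With prompt = no, the values are taken directly from the file, and countryName must be exactly 2 characters.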
I had the same problem: I had C=USA instead of C=US.
I had a similar issue. I followed the advice from GitHub and used the countryName_default parameter. It seems this parameter does not exist in my openssl.exe, contrary to the advice on GitHub.
Once I removed any xxx_default parameters from the [ req_distinguished_name ] section of the SSL xxx.conf file, the creation of the certificate succeeded.
This is working on Windows 10.