I have a site which is served over HTTPS, but which iTunes can't find. My suspicion is that it's related to the iTunes backend server being Java 6, and Java 6 not supporting SNI. SSL Labs seems to hint that my site does require SNI (see this report, and search for SNI), but I can't think why. Have I misunderstood multi-domain certificates? I've got multiple sites running on the same server, but my understanding was that as long as all the URLs were listed as Subject Alternative Names on the certificate, all would be well.
Does anyone know a good way to check if a URL requires SNI support on the client to access it? I don't have a Windows XP/Java 6 install around to play with sadly.
The reports from SSL Labs regarding SNI are usually correct. Your understanding that SNI is not needed if your certificate contains all possible hosts is also correct. But "not needed in theory" does not mean that your particular server setup does not require SNI anyway.
I don't have a Windows XP/Java 6 install around to play with sadly.
Given that you only specify what you don't have, I will assume you have everything else that might be used. A simple way to check is openssl:
# without SNI
$ openssl s_client -connect host:port
# use SNI
$ openssl s_client -connect host:port -servername host
Compare the output of both calls of openssl s_client. If they differ in the certificate they serve, or if the call without SNI fails to establish an SSL connection, then you need SNI to get the correct certificate or to establish an SSL connection at all.
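If comparing the full output by eye is tedious, you can reduce each call to the subject and fingerprint of the served certificate (a small sketch; substitute your own host and port):
# certificate served without SNI
openssl s_client -connect host:port </dev/null 2>/dev/null | openssl x509 -noout -subject -fingerprint
# certificate served with SNI
openssl s_client -connect host:port -servername host </dev/null 2>/dev/null | openssl x509 -noout -subject -fingerprint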
An easy way to check if a site relies on SNI is this:
openssl s_client -servername alice.sni.velox.ch -tlsextdebug -msg \
-connect alice.sni.velox.ch:443 2>/dev/null | grep "server name"
And if in that output you see the following, it means the site is using SNI.
TLS server extension "server name" (id=0), len=0
The above is a summary of an answer at serverfault.
Nginx in general, and your site in particular, accepts but doesn't require SNI. To test this you cannot easily use Oracle Java out of the box, because its cacerts does not include DST Root CA X3, which is the root cert used (initially) by Let's Encrypt, who issued your site's cert; this is true for all versions of Oracle Java up to the current one (8u74). Windows (hence IE and Chrome on Windows) and Firefox do have this root cert; I can't say for other OSes or browsers.
To fix this so you can easily test, either:
use Oracle Java 6 but modify JRE/lib/security/cacerts to add the DSTX3 cert, obtained either from your OS or browser, or by following the link at https://letsencrypt.org/certificates/ to https://www.identrust.com/certificates/trustid/root-download-x3.html -- except that page nonstandardly gives you only the base64 body of the cert, so you must manually add the PEM header and trailer lines before Java keytool will import it (a command sketch of this and the next option appears after this list).
use Oracle Java 6 as-is but configure your application (with system properties) to use a custom truststore which you create containing the DSTX3 cert as above.
use a version of Java 6 that does include this root cert in cacerts. In particular I use CentOS 6 and its openjdk packages (for 6, 7, and 8) use a systemwide CA 'bundle' that includes DSTX3, which is what made it easy for me to do this test. I expect, but can't confirm, that other RedHat variants do the same. For other distros and platforms I can't say; if not, see above.
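A rough command sketch of the first two options (file names, alias, truststore name, and class name are placeholders; 'changeit' is the default cacerts password):
# option 1: import the DSTX3 cert (with PEM header/trailer added) into the JRE's cacerts
keytool -importcert -alias dstrootcax3 -file dst-root-ca-x3.pem \
    -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit
# option 2: leave cacerts alone and point the application at a custom truststore
keytool -importcert -alias dstrootcax3 -file dst-root-ca-x3.pem \
    -keystore mytruststore.jks -storepass changeit
java -Djavax.net.ssl.trustStore=mytruststore.jks \
    -Djavax.net.ssl.trustStorePassword=changeit MyTestClient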
Monitor the connection attempt with wireshark or similar to see that the ClientHello does not contain SNI, but the connection succeeds and is successfully used for an HTTP request.
If you actually want to communicate with the server instead of testing it for SNI, simply omit the final 'monitor' step.
We have a Microsoft Active Directory domain with a large pool of domain controllers (DCs) that are set up with LDAP. These are all set up with LDAPS and use Certificate Services via a template to issue a certificate with the domain name (i.e. test.corp) in the Subject Alternative Name (SAN) for the LDAPS server to serve.
Since these are DCs, DNS is set up in a pool so that each of these systems responds to requests for test.corp in a round-robin fashion.
Each of these DCs has multiple templates and multiple certificates in the Local Computer\Personal certificate store.
Upon testing with the Node.js module ldapjs, when making an LDAPS request using the domain name test.corp, we noticed that a handful of servers fail with the following message:
Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match
certificate's altnames: Host: test.corp. is not in the cert's
altnames: othername:, DNS:.test.corp
As we investigated we found that these handful of LDAPS servers are serving the incorrect certificate. We determined this by using the following command
openssl s_client -connect .test.corp:636
If you take the certificate section of the output, put it in a file, and read the file with a tool such as Certificate Manager or certutil, you can see the certificate is not the correct one (it does not have the domain "test.corp" in its SAN). We also verified this by comparing the serial numbers.
As we investigated, since we have DC's that have multiple certificates in the Local Computer\Personal Certificate store, we came across the following article:
https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx
It suggests copying the certificate from the Local Computer\Personal certificate store into the Active Directory Domain Services\Personal store. We followed the steps outlined, but found the same results.
Upon further investigation, it was suggested to use a tool called ldp or adsiedit. We then proceeded to use these tools and edited the hosts file of the local machine we were testing from to point the domain (test.corp) at the IP of one of the DCs that was giving us trouble. After a restart to clear any cache, we used the "ldp" and "adsiedit" tools to connect to test.corp. These tools did not report any errors.
We found this odd, so we then ran the openssl command to see what certificate was being served from this same system, and found it was still serving the incorrect certificate.
Upon further research, it appears that the "ldp" tool (with the SSL checkbox selected) and the "adsiedit" tool are not compliant with RFC6125, specifically appendix B.3 (https://www.rfc-editor.org/rfc/rfc6125#appendix-B.3), which basically states that the identity in the certificate must match the identity of the request, otherwise the handshake fails. This identity verification is done using the certificate common name (CN) or the SAN.
Based on this, it appears the "ldp" and "adsiedit" tools do not conform to the RFC6125 standard.
All this to say, we first need to fix the handful of domain controllers that are serving the incorrect certificate. We are open to suggestions, since we have been working on this problem for the past few months. Second, is there a way to get the MS tools in question to conform to the RFC6125 standard?
RFC6125 specifically states that it does not supersede existing RFCs. LDAP cert handling is defined in RFC4513. Outside of that, RFC6125 has significant flaws. See also https://bugzilla.redhat.com/show_bug.cgi?id=1740070#c26
LDP will supposedly validate the SSL certificate against the client store if you toggle the SSL checkbox on the connection screen.
That said, I'm not surprised that neither it nor ADSI Edit enforces that part of the standard, given that they are often used to configure or repair broken configurations. Out of the box, and without Certificate Services, DCs use self-signed certs for LDAPS. I would wager 80% of DCs never get a proper certificate for LDAP; if the tools enforced validation, most of them wouldn't be able to connect. A better design decision would have been an option to toggle validation off.
I use a similar openssl command to verify my own systems. I think it's superior to LDP even if LDP were to validate the certificate. To save you some effort, I would suggest using this variant of the openssl command:
echo | openssl s_client -connect .test.corp:636 2>/dev/null | openssl x509 -noout -dates -issuer -subject -text
That should save you having to output to a file and having to read it with other tools.
I've found LDAPS on AD to be a huge pain for the exact reasons you describe. It just seems to pick up the first valid cert it can find. If you've already added it to the AD DS personal store, I'm not sure where else to suggest you go, other than removing some of the other certs from the DC's computer store.
I'm setting up Apache with several distinct SSL certificates for different domains that reside on the same server (and thus sharing the same IP address).
With the Qualys SSL Test I discovered that there are clients (e.g. BingBot, as of December 2013) that do not support the SNI extension.
So I'm thinking about crafting a special default web application that can gather the requests of such clients, but how can I simulate those clients?
I'm on Windows 8, with no access to Linux boxes, if that matters.
You can use the most commonly used SSL library, OpenSSL. Windows binaries are available to download.
The openssl s_client -connect domain.com:443 command serves very well to test an SSL connection from the client side. It doesn't send SNI by default (in older versions; see below). You can append the -servername domain.com argument to enable SNI.
If you are using OpenSSL 1.1.0 or an earlier version, use openssl s_client -connect $ip:$port, and OpenSSL won't send the SNI extension.
If you are using OpenSSL 1.1.1 or later, SNI is sent by default, so you need to add the -noservername flag to openssl s_client to disable it.
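Put together, the behaviour by version (the host name here is a placeholder):
# OpenSSL 1.1.0 and earlier: SNI is only sent if you ask for it
openssl s_client -connect example.com:443                          # no SNI
openssl s_client -connect example.com:443 -servername example.com  # with SNI
# OpenSSL 1.1.1 and later: SNI is sent by default, so suppress it explicitly
openssl s_client -connect example.com:443 -noservername            # no SNI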
Similar to openssl s_client is gnutls-cli
gnutls-cli --disable-sni www.google.com
You could install Strawberry Perl and then use the following script to simulate a client not supporting SNI:
use strict;
use warnings;
use LWP::UserAgent;
my $ua = LWP::UserAgent->new(ssl_opts => {
    # this disables SNI
    SSL_hostname => '',
    # These disable certificate verification, so that we get a connection even
    # if the certificate does not match the requested host or is invalid.
    # Do not use in production code !!!
    SSL_verify_mode => 0,
    verify_hostname => 0,
});
# request some data
my $res = $ua->get('https://example.com');
# show headers
# pseudo header Client-SSL-Cert-Subject gives information about the
# peers certificate
print $res->headers_as_string;
# show response including header
# print $res->as_string;
By setting SSL_hostname to an empty string you disable SNI; removing or commenting out that line enables SNI again.
The approach of using a special default web application simply would not work.
You can't do that, because such limited clients don't just land on a different page; their connection fails completely.
Consider you have a "default" vhost which a non-SNI client will open just fine.
You also have an additional vhost which is supposed to be opened by an SNI-supporting client.
Obviously, these two must have different hostnames (say, default.example.com and www.example.com), else Apache or nginx wouldn't know which site to show to which connecting client.
Now, if a non-SNI client tries to open https://www.example.com, it will be presented with the certificate from default.example.com, which will give it a certificate error. This is a major caveat.
A fix for this error is to make a SAN (multi-domain) certificate that includes both www.example.com and default.example.com. Then, if a non-SNI client tries to open https://www.example.com, it will be presented with a valid certificate, but even then its Host: header will still point to www.example.com, so its request gets routed not to default.example.com but to www.example.com.
As you can see, you either block non-SNI clients completely or forward them to an expected vhost. There's no sensible option for a default web application.
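To illustrate the SAN-certificate arrangement, here is a minimal Apache sketch (host names and paths are placeholders; it assumes one SAN certificate covering both names, and relies on Apache using the first *:443 vhost as the default for non-SNI clients):
<VirtualHost *:443>
    ServerName default.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example-san.crt
    SSLCertificateKeyFile /etc/ssl/private/example-san.key
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example-san.crt
    SSLCertificateKeyFile /etc/ssl/private/example-san.key
</VirtualHost>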
With a Java HTTP client you can disable the SNI extension by setting the system property jsse.enableSNIExtension=false.
More here: Java TLS: Disable SNI on client handshake
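For example, if you launch the client from the command line (the class name here is just a placeholder), the property has to be set before the JVM makes its first TLS connection:
java -Djsse.enableSNIExtension=false MyHttpsClient https://www.example.com/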
I'm trying to start my TURN server with TLS enabled. I use the following line to start the server:
daemon --user=$USER $TURN $OPTIONS --tls-listening-port 3478 --cert /root/cert_2014_11/my_domain_nl.crt --pkey /root/cert_2014_11/my_domain_nl.key --CA-file /root/cert_2014_11/PositiveSSLCA2.crt
The environment variables in there are set in the config file. The server works fine without TLS using the same startup line, but if I add the three SSL-related arguments, the server still isn't reachable over TLS. I tried setting a different port for SSL instead of the standard port, but it still didn't work. Whatever I do, I can reach the server without SSL, but over TLS I can't reach it. The certificate chain I use is fine; I use it for our website as well.
I've run into this exact problem before. Have a look at the documentation for the --CA-file argument:
--CA-file <filename> CA file in OpenSSL format.
Forces TURN server to verify the client SSL certificates.
By default, no CA is set and no client certificate check is performed.
This argument is needed only when you will be verifying client certificates. It's not for the certificate chain for your server certificate.
Drop the --CA-file argument, keeping the --cert and --pkey arguments.
EDIT: FYI, the certificate file you give to the --cert option can contain the entire certificate chain (yours and your CA's).
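A possible way to do that, keeping the file names from the question (the combined file name is just a placeholder):
# build a combined PEM file: your server certificate first, then the CA/intermediate cert
cat /root/cert_2014_11/my_domain_nl.crt /root/cert_2014_11/PositiveSSLCA2.crt > /root/cert_2014_11/my_domain_nl_chain.crt
# then point --cert at the combined file and drop --CA-file, e.g.
daemon --user=$USER $TURN $OPTIONS --tls-listening-port 3478 --cert /root/cert_2014_11/my_domain_nl_chain.crt --pkey /root/cert_2014_11/my_domain_nl.key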
On Debian, using GitLab, I ran into issues with my self-signed certificate.
Reading through the code after a lot of searching on the Internet (I guess it's the last resort; FOSS is helpful), I found the following lines in gitlab-shell/lib/gitlab_net.rb, which left me... perplexed.
if config.http_settings['self_signed_cert']
http.verify_mode = OpenSSL::SSL::VERIFY_NONE
end
Most Stack Overflow responses about the diverse issues I've had until now have led me to believe that VERIFY_NONE, as you'd expect, doesn't verify anything. VERIFY_PEER seems, based on my reading, to be the correct setting for self-signed.
As I read it, it feels like taking steps to secure my connection using a certificate, and then just deciding to not use it? Is it a bug, or am I misreading the source?
gitlab-shell (on the GitLab server) has to communicate to the GitLab instance through an HTTPS or SSH URL API.
If it is a self-signed certificate, it doesn't want any error/warning when trying to access those GitLab URLs, hence the SSL::VERIFY_NONE.
But, that same certificate is also used by clients (outside of the GitLab server), using those same GitLab HTTPS URLs from their browser.
For them, the self-signed certificate is useful, provided they install it in their browser keystore.
For those transactions (clients to GitLab), the certificate will be "verified".
The OP Kheldar points out this passage in Mislav's post:
OpenSSL expects to find each certificate in a file named by the certificate subject’s hashed name, plus a number extension that starts with 0.
That means you can’t just drop My_Awesome_CA_Cert.pem in the directory and expect it to be picked up automatically.
However, OpenSSL ships with a utility called c_rehash which you can invoke on a directory to have all certificates indexed with appropriately named symlinks.
(See for instance OpenSSL Verify location)
cd /some/where/certs
c_rehash .
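If c_rehash isn't available, you can create the expected symlink by hand; the name is the subject hash plus a .0 extension (assuming no hash collisions, and reusing the file name from the quote above):
cd /some/where/certs
ln -s My_Awesome_CA_Cert.pem "$(openssl x509 -hash -noout -in My_Awesome_CA_Cert.pem).0"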
I'm trying to create a self-signed wildcard SSL certificate for use on a number of development and test servers running IIS 6. Following various guides has led to a couple of ways of generating the certificates, but I haven't had any luck getting them to work. The most success I've had was following this OpenSSL guide and using makecert.exe like so:
makecert.exe -r -b 01/01/2009 -e 01/01/2042 -sr LocalMachine -ss MY -a sha1 -n CN="*.example.com" -sky exchange -pe -eku 1.3.6.1.5.5.7.3.1 -sy 12 -sp "Microsoft RSA SChannel Cryptographic Provider" wildcard.cer
Both of these generate certificates that IIS 6 will accept, but when I actually try to view the site I get the following error in Firefox:
Data Transfer Interrupted
The connection to dev.example.com was interrupted while the page was loading.
IE just gives:
Internet Explorer cannot display the webpage
Most likely causes:
You are not connected to the Internet.
The website is encountering problems.
There might be a typing error in the address.
This error happens whether I try to access it by domain name, machine name, localhost, local ip, or loopback ip.
So...how can I create a self-signed wildcard cert that IIS 6 will work with? Or how can I fix the problems I'm experiencing with the ones I've already created?
You can use SelfSSL, a command-line app from the IIS 6 Resource Kit provided by MS. It can generate the SSL certificate and key and import them into your IIS installation.
IIS 6 Resource Kit
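A typical SelfSSL run for a wildcard cert might look roughly like this (site ID, key size, and validity here are placeholders; /T also adds the cert to the local trusted root store):
selfssl /N:CN=*.example.com /K:2048 /V:3650 /S:1 /T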
You can use a wildcard certificate with *.domain.local across multiple SSL sites by setting secure bindings with adsutil.vbs in c:\inetpub\adminscripts, e.g. adsutil.vbs set w3svc/[siteid]/SecureBindings ":443:name.domain.local" (see the command sketch below).
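Roughly, and assuming site ID 1 (adjust the ID and host name for your site), the secure-binding command is run from c:\inetpub\adminscripts like this:
cscript adsutil.vbs set /w3svc/1/SecureBindings ":443:name.domain.local"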
Did you realize that you would need to change from "example.com" to something more appropriate to your situation ("localhost" might be one of them during testing)?
For IIS 7 - there is a wizard to do this. It takes about 30 seconds to set up.
For IIS 6 - it's a bit trickier. It takes about 30 minutes to set up.
Which one are you using?
I strongly recommend moving to IIS 7 - it is very foreign at first, but they've made a lot of improvements.
Given that you probably can't upgrade to IIS 7, here's what I had to do to implement what you want in IIS 6:
1) create certificate server
2) generate request
3) grant request
4) install certificate
It's a bit of a pain to set up the certificate authority server, but it comes with Windows Server and the walkthrough is pretty straightforward.
We discovered that the Certificate Authority wasn't being trusted because of domain settings and was causing the errors. We ended up deploying a star cert generated by a trusted CA and that cleared up the problems.