How to simulate non-SNI browsers (without SNI support)?

I'm setting up Apache with several distinct SSL certificates for different domains that reside on the same server (and thus sharing the same IP address).
With the Qualys SSL Server Test I discovered that there are clients (e.g. BingBot as of December 2013) that do not support the SNI extension.
So I'm thinking about crafting a special default web application that can gather the requests of such clients, but how can I simulate those clients?
I'm on Windows 8, with no access to Linux boxes, if that matters.

You can use the most widely used SSL library, OpenSSL. Windows binaries are available for download.
The command openssl s_client -connect domain.com:443 works well for testing an SSL connection from the client side. Whether SNI is sent by default depends on your OpenSSL version:

If you are using OpenSSL 1.1.0 or an earlier version, openssl s_client -connect $ip:$port does not send the SNI extension; append -servername domain.com to enable SNI.
If you are using OpenSSL 1.1.1 or later, SNI is sent by default, so you need to add the -noservername flag to openssl s_client to disable it.

Similar to openssl s_client is gnutls-cli:
gnutls-cli --disable-sni www.google.com
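If you have Python installed (easy to do on Windows 8), the same test can be scripted. Here is a minimal sketch using only the standard library; example.com is a placeholder for the domain you want to test. Omitting server_hostname in wrap_socket means no SNI is sent:

import socket
import ssl

HOST = "example.com"  # placeholder: the domain you want to test

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False       # the default cert may not match HOST
ctx.verify_mode = ssl.CERT_NONE  # testing only -- do not use in production

with socket.create_connection((HOST, 443)) as sock:
    # no server_hostname argument, so the ClientHello carries no SNI
    with ctx.wrap_socket(sock) as tls:
        der = tls.getpeercert(binary_form=True)
        print(ssl.DER_cert_to_PEM_cert(der))

Run it again with ctx.wrap_socket(sock, server_hostname=HOST) to see which certificate an SNI-capable client would get, and compare the two.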

You could install Strawberry Perl and then use the following script to simulate a client not supporting SNI:
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new(ssl_opts => {
    # this disables SNI
    SSL_hostname => '',
    # These disable certificate verification, so that we get a connection even
    # if the certificate does not match the requested host or is invalid.
    # Do not use in production code !!!
    SSL_verify_mode => 0,
    verify_hostname => 0,
});

# request some data
my $res = $ua->get('https://example.com');

# show headers
# the pseudo header Client-SSL-Cert-Subject gives information about the
# peer's certificate
print $res->headers_as_string;

# show response including header
# print $res->as_string;
By setting SSL_hostname to an empty string you disable SNI; removing that line enables SNI again. Note that HTTPS support in LWP requires the LWP::Protocol::https and IO::Socket::SSL modules, which may need to be installed from CPAN if your Perl distribution does not already ship them.

The approach of using a special default web application simply would not work.
You can't do that, because such limited clients don't just land on a different page; the connection fails completely.
Consider that you have a "default" vhost which a non-SNI client will open just fine.
You also have an additional vhost which is supposed to be opened by an SNI-supporting client.
Obviously, these two must have different hostnames (say, default.example.com and www.example.com), else Apache or nginx wouldn't know which site to serve to which connecting client.
Now, if a non-SNI client tries to open https://www.example.com, it will be presented with the certificate for default.example.com, which produces a certificate error. This is a major caveat.
A fix for this error is to use a SAN (multi-domain) certificate that includes both www.example.com and default.example.com. Then a non-SNI client opening https://www.example.com is presented with a valid certificate, but its Host: header still points to www.example.com, so the request gets routed not to default.example.com but to www.example.com.
As you can see, you either block non-SNI clients completely or serve them the vhost they expected anyway. There's no sensible option for a default web application.
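To see the failure mode concretely, here is a minimal Python sketch (standard library only; www.example.com is a placeholder for the SNI-only vhost) that connects without SNI while keeping chain verification on, then checks the returned names by hand:

import socket
import ssl

HOST = "www.example.com"  # placeholder: the SNI-only vhost

ctx = ssl.create_default_context()
ctx.check_hostname = False  # we send no SNI, so there is nothing to auto-check
# chain verification stays enabled (verify_mode is CERT_REQUIRED by default)

with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock) as tls:  # no server_hostname -> no SNI
        cert = tls.getpeercert()
        names = [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]
        print("server presented a certificate for:", names)
        # naive comparison: wildcard entries are not expanded here
        if HOST not in names:
            print("hostname mismatch -> a non-SNI browser shows a cert error")

Against a server that only has the default.example.com certificate to offer, the mismatch message corresponds exactly to the certificate error described above.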

With a Java HTTP client you can disable the SNI extension by setting the system property jsse.enableSNIExtension=false (for example, by launching with java -Djsse.enableSNIExtension=false ...).
More here: Java TLS: Disable SNI on client handshake

Related

Check SSL installed correctly without domain name

Is there a way to check if SSL is correctly set up on a server before pointing the domain at it? (The site has SSL on its current server, and I want to make sure SSL is ready to go on the new server before I change the A record.)
The site on the new server will not be in the root directory of the web server, so going to the server's IP address in my browser or using online SSL-checker tools won't work (or is there a way to test with just the IP address?).
The new server is Apache.
Thanks
Set up everything on the new server, then populate both its /etc/hosts and yours (or the equivalent on your OS) with a mapping between its IP address and the name.
That way, at least the browser on your machine should, based on /etc/hosts, query the new server before you make the same change in the DNS for everyone else to see.
HTTPS and direct browsing by IP address do not mix well because:
certificates are issued for hostnames, not IP addresses
with SNI, the client needs to pass a hostname at the TLS level for the server to select the proper certificate when multiple sites share a single IP address
If it's enough to test SSL/TLS, and not the HTTP level (including things like redirects and linked resources: CSS, JS, images, etc.), you can use:
openssl s_client -connect address:port -servername hostname_for_SNI </dev/null
# or <NUL: on Windows
# optionally add -quiet to suppress most non-error output
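If you prefer to script the check without editing /etc/hosts, a minimal Python sketch (standard library only; the IP address and hostname are placeholders) can connect to the new server's IP, send the production hostname via SNI, and fully verify the certificate:

import socket
import ssl

NEW_SERVER_IP = "203.0.113.10"  # placeholder: the new server's address
HOSTNAME = "www.example.com"    # placeholder: the site's real hostname

# create_default_context() verifies the chain against the system CA store
# and checks that the certificate matches server_hostname
ctx = ssl.create_default_context()

with socket.create_connection((NEW_SERVER_IP, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
        cert = tls.getpeercert()
        print("handshake OK,", tls.version())
        print("subject:", cert["subject"])
        print("valid until:", cert["notAfter"])

If the handshake completes without an ssl.SSLCertVerificationError, the chain and hostname are in order on the new server, independently of any DNS or hosts-file changes.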

HAProxy dynamic SSL configuration for multiple domains

I have something like 100 similar websites on two VPSes. I would like to use HAProxy to switch traffic dynamically, but at the same time I would like to add an SSL certificate.
I want to add a variable that selects the specific certificate for each website.
For example:
frontend web-https
    bind 0.0.0.0:443 ssl crt /etc/ssl/certs/{{domain}}.pem
    reqadd X-Forwarded-Proto:\ https
    rspadd Strict-Transport-Security:\ max-age=31536000
    default_backend website
I'd also like to check whether the SSL certificate is actually available, and if it is not, fall back to HTTP with a redirect.
Is this possible with HAProxy?
This can be done, but TLS (SSL) does not allow you to do it the way you envision.
First, HAProxy allows you to specify a default certificate and a directory for additional certificates.
From the documentation for the crt keyword:
If a directory name is used instead of a PEM file, then all files found in
that directory will be loaded in alphabetic order unless their name ends with
'.issuer', '.ocsp' or '.sctl' (reserved extensions). This directive may be
specified multiple times in order to load certificates from multiple files or
directories. The certificates will be presented to clients who provide a
valid TLS Server Name Indication field matching one of their CN or alt
subjects. Wildcards are supported, where a wildcard character '*' is used
instead of the first hostname component (eg: *.example.org matches
www.example.org but not www.sub.example.org).
If no SNI is provided by the client or if the SSL library does not support
TLS extensions, or if the client provides an SNI hostname which does not
match any certificate, then the first loaded certificate will be presented.
This means that when loading certificates from a directory, it is highly
recommended to load the default one first as a file or to ensure that it will
always be the first one in the directory.
So, all you need is a directory containing each cert/chain/key in a pem file, and a modification to your configuration like this:
bind 0.0.0.0:443 ssl crt /etc/haproxy/my-default.pem crt /etc/haproxy/my-cert-directory
Note you should also add no-sslv3.
I want to add a variable that selects the specific certificate for each website
As noted in the documentation, if the browser sends Server Name Indication (SNI), then HAProxy will automatically negotiate with the browser using the appropriate certificate.
So configurable cert selection isn't necessary, but more importantly, it isn't possible. SSL/TLS doesn't work that way (anywhere). Until the browser successfully negotiates the secure channel, you don't know what web site the browser will be asking for, because the browser hasn't yet sent the request.
If the browser doesn't speak SNI -- a concern that should be almost entirely irrelevant anymore -- or if there is no cert on file that matches the hostname presented in the SNI, then the default certificate is used for negotiation with the browser.
I'd also like to check whether the SSL certificate is actually available, and if it is not, fall back to HTTP with a redirect
This is also not possible. Remember, encryption is negotiated first, and only then is the HTTP request sent by the browser.
So, a user will never see your redirect unless they bypass the browser's security warning -- which they must necessarily see, because the hostname in the default certificate won't match the hostname the browser expects to see in the cert.
At this point, there's little point in forcing them back to http, because by bypassing the browser security warning, they have established a connection that is -- simultaneously -- untrusted yet still encrypted. The connection is technically secure, but the user has a red × in the address bar because the browser correctly believes that the certificate is invalid (due to the hostname mismatch). But on the user's insistence at bypassing the warning, the browser still uses the invalid certificate to establish the secure channel.
If you really want to redirect even after all of this, you'll need to take a look at the layer 5 fetches. You'll need to verify that the Host header matches the SNI or the default cert, and if your certs are wildcards, you'll need to accommodate that too, but this will still only happen after the user bypasses the security warning.
Imagine if things were so simple that a web server without a valid certificate could hijack traffic by simply redirecting it, without the browser requiring a valid server certificate (or deliberate action by the user to bypass the warning), and it should become apparent why your original idea not only will not work, but in fact should not work.
Note also that the certificates loaded from the configured directory are all loaded at startup. If you need HAProxy to discover new ones or discard old ones, you need a hot restart of HAProxy (usually sudo service haproxy reload).
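If you want to verify from the outside which certificate HAProxy hands out for each name, a small Python sketch like the following can help (standard library only; the address and domain list are placeholders). It compares the certificate served for each SNI name against the no-SNI default:

import socket
import ssl

PROXY = ("203.0.113.10", 443)  # placeholder: HAProxy's address
DOMAINS = ["site1.example.com", "site2.example.com"]  # placeholder names

def fetch_cert(server_hostname=None):
    # returns the DER-encoded certificate the server presents
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # we only want the raw certificate
    ctx.verify_mode = ssl.CERT_NONE  # testing only
    with socket.create_connection(PROXY) as sock:
        with ctx.wrap_socket(sock, server_hostname=server_hostname) as tls:
            return tls.getpeercert(binary_form=True)

default_cert = fetch_cert()  # no SNI -> the first/default certificate
for domain in DOMAINS:
    status = "default" if fetch_cert(domain) == default_cert else "dedicated"
    print(domain, "->", status, "certificate")

A domain that comes back with the default certificate either has no matching PEM file in the directory or happens to be covered by the default certificate itself.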

Testing if a URL requires SNI

I have a site which is served over HTTPS, but which iTunes can't find. My suspicion is that it's related to the iTunes backend server being Java 6, and Java 6 not supporting SNI. SSL Labs seems to hint that my site does require SNI (see this report, and search for SNI), but I can't think why. Have I misunderstood multi-domain certificates? I've got multiple sites running on the same server, but my understanding was that as long as all the URLs were listed as Subject Alternative Names on the certificate, that all would be well.
Does anyone know a good way to check if a URL requires SNI support on the client to access it? I don't have a Windows XP/Java 6 install around to play with sadly.
The reports from SSL Labs regarding SNI are usually correct. Your understanding that SNI is not needed if your certificate contains all possible hosts is correct too. But "not needed in theory" does not mean that your server setup does not require SNI anyway.
I don't have a Windows XP/Java 6 install around to play with sadly.
Given that you only specify what you don't have, I will assume that you have everything else that might be used. A simple way to check is openssl:
# without SNI
$ openssl s_client -connect host:port
# use SNI
$ openssl s_client -connect host:port -servername host
Compare the output of both calls of openssl s_client. If they differ in the certificate they serve, or if the call without SNI fails to establish an SSL connection, then you need SNI to get the correct certificate or to establish an SSL connection at all.
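The comparison can also be scripted; here is a minimal Python sketch (standard library only; the hostname is a placeholder) that reports whether the served certificate changes when SNI is omitted:

import socket
import ssl

HOST = "www.example.com"  # placeholder

def served_cert(sni):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only compare raw certificates
    with socket.create_connection((HOST, 443)) as sock:
        name = HOST if sni else None  # None -> no SNI in the ClientHello
        with ctx.wrap_socket(sock, server_hostname=name) as tls:
            return tls.getpeercert(binary_form=True)

try:
    no_sni = served_cert(sni=False)
except OSError as e:  # ssl.SSLError is a subclass of OSError
    print("handshake without SNI failed -> SNI is required:", e)
else:
    if no_sni == served_cert(sni=True):
        print("same certificate either way -> SNI is not required")
    else:
        print("different certificate without SNI -> SNI is required")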
An easy way to check if a site relies on SNI is this:
openssl s_client -servername alice.sni.velox.ch -tlsextdebug -msg \
-connect alice.sni.velox.ch:443 2>/dev/null | grep "server name"
And if in that output you see the following, it means the site is using SNI.
TLS server extension "server name" (id=0), len=0
The above is a summary of an answer at serverfault.
Nginx in general, and your site in particular, accepts but doesn't require SNI. To test this you cannot easily use Oracle Java out of the box, because its cacerts does not include DST Root CA X3 which is the root cert used (initially) by 'Let's Encrypt' who issued your site's cert; this is true for all versions of Oracle Java up to current (8u74). Windows (hence IE and Chrome on Windows) and Firefox do have this root cert; I can't say for other OS or browsers.
To fix this so you can easily test, either:
use Oracle Java 6 but modify JRE/lib/security/cacerts to add the DSTX3 cert, obtained either from your OS or browser, or by following the link at https://letsencrypt.org/certificates/ to https://www.identrust.com/certificates/trustid/root-download-x3.html -- except that page nonstandardly gives you only the base64 body of the cert, so you must manually add the PEM header and trailer lines before Java keytool will import it (see the sketch after this list).
use Oracle Java 6 as-is but configure your application (with system properties) to use a custom truststore which you create containing the DSTX3 cert as above.
use a version of Java 6 that does include this root cert in cacerts. In particular I use CentOS 6 and its openjdk packages (for 6, 7, and 8) use a systemwide CA 'bundle' that includes DSTX3, which is what made it easy for me to do this test. I expect, but can't confirm, that other RedHat variants do the same. For other distros and platforms I can't say; if not, see above.
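For the PEM wrapping mentioned in the first option, a few lines of Python are enough (the filenames are hypothetical):

# wrap the bare base64 certificate body in PEM armor
body = open("dstrootx3.b64").read().strip().replace("\n", "")
with open("dstrootx3.pem", "w") as f:
    f.write("-----BEGIN CERTIFICATE-----\n")
    for i in range(0, len(body), 64):  # PEM bodies use 64-character lines
        f.write(body[i:i + 64] + "\n")
    f.write("-----END CERTIFICATE-----\n")

keytool should then accept the resulting dstrootx3.pem for import into cacerts.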
Monitor the connection attempt with Wireshark or similar to verify that the ClientHello does not contain SNI, but the connection succeeds and is successfully used for an HTTP request.
If you actually want to communicate with the server instead of testing it for SNI, simply omit the final 'monitor' step.

RFC5766-turn-server with TLS

I'm trying to start my TURN server with TLS enabled. I use the following line to start the server:
daemon --user=$USER $TURN $OPTIONS --tls-listening-port 3478 --cert /root/cert_2014_11/my_domain_nl.crt --pkey /root/cert_2014_11/my_domain_nl.key --CA-file /root/cert_2014_11/PositiveSSLCA2.crt
The environment variables in there are set in the config file. The server works fine without TLS using the same startup line, but if I add the three SSL-related arguments, the server still isn't reachable over TLS. I tried setting a different port for SSL instead of the standard port, but it still didn't work. Whatever I do, I can reach the server without SSL, but over TLS I can't reach it. The certificate chain I use is fine; I use it for our website as well.
I've run into this exact problem before. Have a look at the documentation for the --CA-file argument:
--CA-file <filename> CA file in OpenSSL format.
Forces TURN server to verify the client SSL certificates.
By default, no CA is set and no client certificate check is performed.
This argument is needed only when you will be verifying client certificates. It is not where you supply the chain for your server certificate.
Drop the --CA-file argument, keeping the --cert and --pkey arguments.
EDIT: FYI, the certificate file you give to the --cert option can contain the entire certificate chain (yours and your CA's).
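To confirm that the TLS listener actually comes up after changing the arguments, a quick handshake test is enough. Here is a minimal Python sketch (standard library only; host and port are placeholders, and certificate verification is skipped since we only test reachability):

import socket
import ssl

HOST, PORT = "turn.example.com", 3478  # placeholders: TURN host and TLS port

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # we only test that TLS answers at all

try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("TLS handshake OK:", tls.version())
except OSError as exc:  # ssl.SSLError is a subclass of OSError
    print("TLS handshake failed:", exc)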

Send client certificate to Server in Tcl

Currently my application (in C) authenticates to a web server using an SSL certificate. I'm now moving most of the functions (if not all) to Tcl.
I couldn't find any tutorial or example on how to do it (I'd prefer to use Tcl's ::http:: package, but TclCurl would be fine).
Any suggestions?
Thanks
Johannes's answer is right, except if you want to provide different identities to different sites. In that case you use tls::init, which allows you to set default TLS-related options for tls::socket prior to that command being called.
package require http
package require tls

http::register https 443 ::tls::socket

# Where is our identity?
tls::init -keyfile "my_key.p12" -cafile "the_server_id.pem"

# Now, how to provide the password (don't know what the arguments are)
proc tls::password args {
    return "the_pass";  # Return whatever the password is
}

# Do the secure connection
set token [http::geturl https://my.secure.site/]

# Disable the key
tls::init -keyfile {}
Note that the way of providing the password is bizarre, and I know for sure that this mechanism isn't going to be nice when doing asynchronous connections. (There's a standing Feature Request for improving the integration between the http and tls packages…)
To use https with Tcl you usually use the tls package. The man page for the http package gives you an example of how to do that:
package require http
package require tls
::http::register https 443 ::tls::socket
set token [::http::geturl https://my.secure.site/]
If you read the documentation of the tls package for tls::socket, you find that there are some options to pass client certificates. Combining that gives you:
::http::register https 443 [list ::tls::socket \
    -cafile caPublic.pem -certfile client.pem]
You might have to specify the -password callback parameter if the certificate file is protected by a password.
Note that this solution uses the client certificate for every https request from your application, regardless of the target.
Edit: As Donal suggested, it might be better to use tls::init than to specify it with ::http::register.
An example:
package require http
package require tls

::http::register https 443 ::tls::socket

proc ::get_cert_pass {} {
    return "passw0rd"
}

# Add the options here
::tls::init -cafile caPublic.pem -certfile client.pem -password ::get_cert_pass

set tok [::http::geturl https://my.secure.site/]
Then, to make a request, you always use just the last two lines (the ::tls::init and ::http::geturl calls); the rest is one-time setup.