Using '-servername' param with openssl s_client - apache

I am installing a new SSL certificate on CentOS 6/Apache, and my web browser keeps picking up the old certificate. To test my setup I am using "openssl s_client", but I am seeing different results depending on the "-servername" parameter. No one seems to use this parameter and it does not appear in the man pages, but I saw it mentioned here: OpenSSL: Check SSL Certificate Expiration Date and More.
If I run this command:
echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates
I get the correct date for the certificate.
(notBefore=Apr 20 00:00:00 2017 GMT notAfter=Apr 20 23:59:59 2018 GMT)
However, if I introduce the -servername parameter into the command:
echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates
I then get the expired dates that my browser is showing:
(notBefore=Apr 20 00:00:00 2016 GMT notAfter=Apr 20 23:59:59 2017 GMT)
Can anyone explain why this is happening? It must be related to the reason my SSL certificate shows as expired in my browser.
Thanks
O

The servername argument to s_client is documented (briefly) on this page:
https://www.openssl.org/docs/man1.0.2/apps/s_client.html
Essentially it works a little like a "Host" header in HTTP: it causes the requested domain name to be passed as part of the SSL/TLS handshake, in the SNI (Server Name Indication) extension. A server can then host multiple domains behind a single IP and respond with the appropriate certificate based on the requested domain name.
If you do not request a specific domain name, the server does not know which certificate to give you, so you end up with a default one. In your case, one of the certificates the server is serving for your domain has expired, but the default certificate has not.
You need to make sure you are updating the correct VirtualHost entry for your domain, e.g. see:
https://www.digicert.com/ssl-support/apache-multiple-ssl-certificates-using-sni.htm
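If you are unsure which VirtualHost serves which certificate, a quick check on the server itself can help. A sketch, assuming the usual Apache layout on CentOS 6 (the /etc/httpd and /etc/pki paths, and the certificate filename, may differ on your system):
# List every ServerName and the certificate file each VirtualHost points at
grep -ri -e 'ServerName' -e 'SSLCertificateFile' /etc/httpd/
# Then inspect any certificate file directly to confirm its dates
openssl x509 -in /etc/pki/tls/certs/example.com.crt -noout -subject -dates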

Related

Lookup Let's Encrypt expiry date when behind Cloudflare

My website uses an SSL certificate from Let's Encrypt. The website also goes through Cloudflare, which means it uses Cloudflare's SSL certificate from the user's browser to Cloudflare, and then Let's Encrypt's from Cloudflare to the website server.
When I look up the website's SSL certificate in a browser, all I see is Cloudflare's SSL cert and its expiry date, which is about six months in the future. However, I know that the Let's Encrypt certificate will expire much sooner than that. But when?
All the methods I have seen for looking up this date also only return the client-facing Cloudflare SSL cert dates:
echo | openssl s_client -connect <website>:443 -servername <website> 2>/dev/null | openssl x509 -noout -dates
I obviously need to know the (much sooner) date for when I need to renew the Let's Encrypt certificate. You know, so my website doesn't go down...
The answer is to use localhost, not the domain.
This is how I run it from Ubuntu, on the server where the Let's Encrypt certificate is stored:
echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -dates
If you have more than one certificate on the server, then this might work, but I'm not sure (I only have one):
echo | openssl s_client -connect localhost:443 -servername <website> 2>/dev/null | openssl x509 -noout -dates
This will also tell you the renewal dates if you installed the certificate with certbot:
certbot renew
Note that this will also renew the cert if less than 30 days are left.
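If certbot is installed, you can also read the dates without touching the renewal machinery at all. A sketch, assuming the standard certbot layout on disk (replace <website> with your domain):
openssl x509 -in /etc/letsencrypt/live/<website>/fullchain.pem -noout -dates
certbot certificates
The certbot certificates subcommand lists each managed certificate with its expiry date and does not renew anything.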

Way to check which certificate haproxy uses at runtime?

I use haproxy, which uses SSL certificates in this way:
bind *:443 ssl crt /usr/local/etc/haproxy/ssl/mycertificate.pem /usr/local/etc/haproxy/ssl/net.pem
I changed the SSL certificate, and I want to make sure that haproxy uses the new certificate. Is there any way to check it?
All together on one line, at the command (shell) prompt:
true | openssl s_client -connect example.com:443 -servername example.com -showcerts | openssl x509 -text -noout
Note that you need to specify the server name twice.
You will see everything about the cert, here, including the not-before and not-after validity dates.
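You can also inspect the certificate files from the bind line directly on the server, to compare them with what haproxy serves over the wire. A sketch using the paths from the question:
openssl x509 -in /usr/local/etc/haproxy/ssl/mycertificate.pem -noout -subject -dates
openssl x509 -in /usr/local/etc/haproxy/ssl/net.pem -noout -subject -dates
If the dates on disk are newer than what s_client reports, haproxy most likely has not been reloaded since the files were replaced.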

2 different certificates seen from 2 different VMs

I am having trouble understanding a problem, and I am hoping someone can help me work out what is happening.
First let me give you some context:
One of our providers at work gave us two URLs for accessing their service: one for their primary site and one for their secondary site. Our system always sends requests to the primary site; if the primary site is not available, we try the secondary site.
A few weeks ago, the provider's certificate changed, and we made the change on our side. The certificate is a wildcard certificate (it applies to both URLs). Everything seemed to work perfectly in our qualification environment, but we noticed strange behavior in production.
We performed the following openssl request on our machines:
echo | openssl s_client -connect <PROVIDER_URL_1:443> 2>/dev/null | openssl x509 -noout -dates
For the primary URL everything works fine; the openssl request shows the certificate is valid:
notBefore=Jun 20 00:00:00 2016 GMT
notAfter=Aug 19 23:59:59 2018 GMT
But when I perform the exact same openssl request with the secondary URL, I get the previous certificate:
echo | openssl s_client -connect <PROVIDER_URL_2:443> 2>/dev/null | openssl x509 -noout -dates
notBefore=May 15 00:00:00 2014 GMT
notAfter=Jul 13 23:59:59 2016 GMT
I don't understand why our production environment sees two different certificates for PROVIDER_URL_1 and PROVIDER_URL_2, when in our qualification environment both URLs present the same wildcard certificate.
Do you guys have any idea what the problem might be here?
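One diagnostic sketch, in the spirit of the -servername discussion above: compare what the secondary URL serves with and without SNI (<PROVIDER_URL_2:443> follows the same placeholder style as the question):
echo | openssl s_client -connect <PROVIDER_URL_2:443> 2>/dev/null | openssl x509 -noout -subject -dates
echo | openssl s_client -connect <PROVIDER_URL_2:443> -servername <PROVIDER_URL_2> 2>/dev/null | openssl x509 -noout -subject -dates
If the two commands return different certificates, the expired one is a default (non-SNI) certificate on the provider's side, and the difference between your environments may come down to which address or front end each one reaches.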

Browser, s_client without SNI and expired certificate

When I access one of my subdomains, say https://foo.example.com, in a browser and inspect the certificate, it looks great. When I use openssl from a remote computer, it shows an expired certificate. How can this be?
I tried to reproduce what was found in this question, but my scenario is different. When I run:
echo | openssl s_client -showcerts -connect foo.example.com:443 2>&1 | grep Verify
I see:
Verify return code: 10 (certificate has expired)
When I run:
echo | openssl s_client -showcerts -connect foo.example.com:443 2>&1 | openssl x509 -noout -dates
I get:
notBefore=Sep 27 15:10:20 2014 GMT
notAfter=Sep 27 15:10:20 2015 GMT
It looks expired, but the browser doesn't show it as expired.
See the first comment by @jww. He pointed out that I needed to add -tls1 -servername foo.example.com to my openssl command. His comment:
Try adding -tls1 -servername foo.example.com. I'm guessing you have a front-end server that's providing a default domain for requests without SNI, and the default domain is routed to an internal server with the old certificate. When the browsers connect, they use SNI and get the server for which you have updated the certificate. Or, there could be an intermediate with an expired certificate in the chain that's being served. If you provide real information, it's easier for us to help you with problems like this.
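Putting his suggestion together with the original command gives something like the following (foo.example.com as in the question):
echo | openssl s_client -tls1 -servername foo.example.com -showcerts -connect foo.example.com:443 2>&1 | openssl x509 -noout -dates
With -servername set, the front end should return the renewed certificate, and the dates should then match what the browser shows.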

HTTPS call not using the correct SSL certificate for the corresponding site

I have two ColdFusion applications running on an Apache web server:
https://site1.com
https://site2.com
Both have their own SSL certificate and are configured in the httpd-ssl.conf file with a name-based virtual host for each site.
When I make an HTTPS call from site1.com to site2.com,
httpService = new http();
httpService.setMethod("get");
httpService.setUrl("https://site2.com/comp.cfc?method=amethod&ID=12");
result = httpService.send().getPrefix();
it gives the following error:
I/O Exception: hostname in certificate didn't match: <site2.com> != <site1.com>
It should be using site2's SSL certificate, but for some reason it is picking up site1's certificate and failing with this error.
This looks like a Server Name Indication (SNI) issue. SNI is a TLS extension that allows several HTTPS hosts to be served from the same server.
You can confirm this issue using:
echo "" | openssl s_client -connect site2.com:443 | openssl x509 -noout -subject
If you see something like CN=site1.com, try this:
echo "" | openssl s_client -connect site2.com:443 -servername site2.com | openssl x509 -noout -subject
If you get CN=site2.com, this is an SNI issue.
You can look at this bug, more specifically this comment:
The SNI support has been added in ColdFusion 11. The change required for supporting this is quite big and therefore it can't be backported to ColdFusion 10.
Other workarounds would be to host your two HTTPS sites on two separate servers, to set up a single SSL certificate valid for both names (using the X509 subjectAltName extension), or to disable certificate CN validation (if possible).
You need to import the SSL certificate into the ColdFusion/Java keystore. If this doesn't help, add -Djavax.net.debug=all to jvm.config for ColdFusion (this requires a ColdFusion service restart), then try the SSL call again.
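A sketch of the import with keytool, assuming ColdFusion's bundled JRE (the keystore path, alias, and certificate file below are placeholders, and changeit is the stock Java keystore password):
keytool -importcert -alias site2 -file /path/to/site2.crt -keystore /path/to/coldfusion/jre/lib/security/cacerts -storepass changeit
Restart the ColdFusion service afterwards so the keystore change is picked up.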