My website uses an SSL certificate from Let's Encrypt, and traffic to it goes through Cloudflare. That means the connection uses Cloudflare's certificate from the user's browser to Cloudflare, and the Let's Encrypt certificate from Cloudflare to the origin server.
When I look up the website's SSL certificate in a browser, all I see is Cloudflare's certificate and its expiry date, which is about six months in the future. However, I know the Let's Encrypt certificate will expire much sooner than that, but when?
Every method I have seen for looking up this date also only returns the client-facing Cloudflare certificate's dates.
echo | openssl s_client -connect <website>:443 -servername <website> 2>/dev/null | openssl x509 -noout -dates
I obviously need to know the (much sooner) date by which I have to renew the Let's Encrypt certificate. You know, so my website doesn't go down...
The answer is to connect to localhost, not the domain.
This is how I run it from Ubuntu, on the server where the Let's Encrypt certificate is stored.
echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -dates
If you have more than one certificate on the server, then this might work, but I'm not sure (I only have one):
echo | openssl s_client -connect localhost:443 -servername <website> 2>/dev/null | openssl x509 -noout -dates
If you installed the certificate with certbot, this will also tell you the renewal dates:
certbot renew
Note that this will actually renew the cert if fewer than 30 days are left; the read-only certbot certificates subcommand lists expiry dates without renewing anything.
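Since you are already on the origin server, you can also read the dates straight from the certificate file certbot manages, with no TLS connection at all. This is a sketch: certbot's usual live path is shown as a comment (with the `<website>` placeholder), and a throwaway self-signed certificate stands in for the real file so the example runs anywhere. The `-checkend` flag turns this into a scriptable expiry check.

```shell
# The certbot-managed cert usually lives at
#   /etc/letsencrypt/live/<website>/fullchain.pem
# A throwaway self-signed cert stands in here so the example is runnable.
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -keyout demo-key.pem -out demo-cert.pem -subj "/CN=demo" 2>/dev/null

# Same command you would point at the real file:
openssl x509 -noout -dates -in demo-cert.pem

# Scriptable check: exit status 0 means still valid 30 days from now
openssl x509 -noout -checkend $((30*86400)) -in demo-cert.pem \
  && echo "more than 30 days left" \
  || echo "renew soon"
```

`-checkend` takes a number of seconds, so a cron job can use it to alert well before the 90-day Let's Encrypt lifetime runs out.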
Related
I have an issue where the server rejects the client certificate in the handshake if I invoke openssl with just the cert (with chain) and private key.
The issue goes away if I also set the -CAfile parameter and point it at the same file as the cert.
It seems as if openssl cannot construct the chain without the -CAfile input, even though the information is already in the cert file. Has anyone else run into this? I just find it a bit odd.
To summarize, this works:
sudo openssl s_client -connect <ip>:<port> -cert cert_with_chain.pem -key privkey.pem -CAfile cert_with_chain.pem
This doesn't work (Server reject with "null cert chain"):
sudo openssl s_client -connect <ip>:<port> -cert cert_with_chain.pem -key privkey.pem
OpenSSL version:
OpenSSL 1.0.2k-fips 26 Jan 2017
The problem is not that "openssl cannot construct the chain without the cafile", but that it was never intended to do so. The intended behavior is documented in man s_client:
-cert certname
    The certificate to use, if one is requested by the server.
-CAfile file
    A file containing trusted certificates to use during server authentication and to use when attempting to build the client certificate chain.
I use haproxy, which uses SSL certificates this way:
bind *:443 ssl crt /usr/local/etc/haproxy/ssl/mycertificate.pem /usr/local/etc/haproxy/ssl/net.pem
I changed the SSL certificate, and I want to make sure that haproxy is using the new one. Is there any way to check?
All together on one line, at the command (shell) prompt:
true |
openssl s_client -connect example.com:443 -servername example.com -showcerts |
openssl x509 -text -noout
Note that you need to specify the server name twice: once for -connect and once for -servername.
You will see everything about the cert, here, including the not-before and not-after validity dates.
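A robust way to confirm the listener really serves the new certificate is to compare SHA-256 fingerprints: fingerprint the file on the crt line, then fingerprint whatever the port actually serves. This is a sketch; a throwaway self-signed certificate stands in for the real pem file so it runs anywhere, and the host in the comment is a placeholder.

```shell
# Throwaway self-signed cert standing in for the file haproxy loads:
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -keyout demo.key -out demo.crt -subj "/CN=fingerprint-demo" 2>/dev/null

# Fingerprint of the file on disk (for haproxy, the file on the crt line):
openssl x509 -noout -fingerprint -sha256 -in demo.crt

# Fingerprint of what is actually being served (placeholder host):
#   echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
#     | openssl x509 -noout -fingerprint -sha256
# If the two fingerprints match, the reload picked up the new certificate.
```

Matching fingerprints are stronger evidence than matching dates, since a reissued certificate can share the same validity window.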
I am installing a new SSL certificate on CentOS 6/Apache, and my web browser keeps picking up the old certificate. To test my setup I am using "openssl s_client", but I see different results depending on the "-servername" parameter. No one seems to use this parameter, and it does not appear in the man pages, but I saw it mentioned here: OpenSSL: Check SSL Certificate Expiration Date and More .
If I run this command:
echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates
I get the correct date for the certificate.
(notBefore=Apr 20 00:00:00 2017 GMT notAfter=Apr 20 23:59:59 2018 GMT)
However, if I introduce the -servername parameter into the command
echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates
I then get the expired date that my browser is showing:
(notBefore=Apr 20 00:00:00 2016 GMT notAfter=Apr 20 23:59:59 2017 GMT)
Can anyone explain why this is happening? It must be related to why my SSL certificate shows as expired in my browser.
Thanks
The servername argument to s_client is documented (briefly) on this page:
https://www.openssl.org/docs/man1.0.2/apps/s_client.html
Essentially it works a little like a "Host" header in HTTP, i.e. it causes the requested domain name to be passed as part of the SSL/TLS handshake (in the SNI - Server Name Indication extension). A server can then host multiple domains behind a single IP. It will respond with the appropriate certificate based on the requested domain name.
If you do not request a specific domain name the server does not know which certificate to give you, so you end up with a default one. In your case one of the certificates that the server is serving up for your domain has expired, but the default certificate has not.
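This selection behavior is easy to reproduce locally with openssl s_server, which accepts a second certificate (-cert2/-key2) to serve only when the client's SNI matches -servername. Everything below is a self-contained sketch using throwaway certs, an assumed-free port 4433, and a made-up SNI name sni.test; note that recent s_client versions send the -connect hostname as SNI by default, which here ("localhost") still fails to match sni.test, so the default certificate comes back either way.

```shell
# Two throwaway certs: one default, one served only for SNI name sni.test
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -keyout k1.pem -out c1.pem -subj "/CN=default.test" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -keyout k2.pem -out c2.pem -subj "/CN=sni.test" 2>/dev/null

# Server: c1 is the default context, c2 is used when SNI == sni.test
openssl s_server -accept 4433 -cert c1.pem -key k1.pem \
  -cert2 c2.pem -key2 k2.pem -servername sni.test -quiet &
SERVER=$!
sleep 1

# No matching SNI: the default certificate comes back
echo | openssl s_client -connect localhost:4433 2>/dev/null \
  | openssl x509 -noout -subject | tee no-sni.txt

# Matching SNI: the name-matched certificate comes back
echo | openssl s_client -connect localhost:4433 -servername sni.test 2>/dev/null \
  | openssl x509 -noout -subject | tee with-sni.txt

kill $SERVER 2>/dev/null
```

The two subjects differ, which is exactly the effect the asker saw: the browser (sending SNI) and a bare s_client (not matching any SNI name) were handed different certificates by the same IP and port.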
You need to make sure you are updating the correct VirtualHost entry for your domain, e.g. see:
https://www.digicert.com/ssl-support/apache-multiple-ssl-certificates-using-sni.htm
When I access one of my subdomains: say https://foo.example.com in a browser and inspect the certificates, the certificate looks great. When I use openssl from a remote computer it shows an expired certificate. How can this be?
I tried to reproduce what was found in this question, but my scenario is different. When I run
echo | openssl s_client -showcerts -connect foo.example.com:443 2>&1 | grep Verify
I see:
Verify return code: 10 (certificate has expired)
When I run:
echo | openssl s_client -showcerts -connect foo.example.com:443 2>&1 | openssl x509 -noout -dates
I get:
notBefore=Sep 27 15:10:20 2014 GMT
notAfter=Sep 27 15:10:20 2015 GMT
It looks expired to openssl, but the browser shows the certificate as valid.
See the first comment by @jww. He pointed out that I needed to add -tls1 -servername foo.example.com to my openssl command. His comment:
Try adding -tls1 -servername foo.example.com. I'm guessing you have a front-end server that's providing a default domain for requests without SNI, and the default domain is routed to an internal server with the old certificate. When the browsers connect, they use SNI and get the server for which you have updated the certificate. Or, there could be an intermediate with an expired certificate in the chain that's being served. If you provide real information, its easier for us to help you with problems like this.
I recently had a problem where a server certificate was not set to expire, but the intermediate cert in its chain had expired, thus preventing my WebLogic server from starting. I am familiar with using openssl s_client -connect server:443 -showcerts | openssl x509 -enddate to get the server cert expiration date, but is it possible to do the same for the other certs in the chain? Thanks.
openssl s_client -connect server:443 -showcerts returns all certificates in the chain except the root (which is correct behavior). Just parse those certificates out of the output and run openssl x509 -enddate on each of them.
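The parsing step can be sketched with a small awk helper; split_chain is a name I've made up here, and server:443 in the usage comment is the same placeholder as above. It reads s_client output on stdin and writes each PEM block in the chain to its own numbered file, ready for openssl x509.

```shell
# Write each certificate from s_client output to chain-1.pem, chain-2.pem, ...
split_chain() {
  awk '/-----BEGIN CERTIFICATE-----/ { f = 1; n++ }
       f { print > ("chain-" n ".pem") }
       /-----END CERTIFICATE-----/ { f = 0 }'
}

# Usage (server:443 is a placeholder):
#   echo | openssl s_client -connect server:443 -showcerts 2>/dev/null | split_chain
#   for pem in chain-*.pem; do
#     printf '%s: ' "$pem"; openssl x509 -enddate -noout -in "$pem"
#   done
```

The first file is the server certificate; the rest are the intermediates, so an expired intermediate like the one that blocked WebLogic shows up immediately in the loop's output.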