2 different certificates seen from 2 different VMs - ssl

I am having trouble understanding a problem I'm seeing, and I hope someone can explain what is happening.
First, some context:
One of our providers at work gave us two URLs to access their service: one for their primary site and one for their secondary site. Our system always sends requests to the primary site; if the primary site is unavailable, we fall back to the secondary one.
A few weeks ago, our provider's certificate changed, and we applied the change on our side. The new certificate is a wildcard certificate, so it covers both URLs. Everything seemed to work perfectly in our qualification environment, but we noticed strange behavior in production.
We ran the following openssl command on our machines:
echo | openssl s_client -connect <PROVIDER_URL_1:443> 2>/dev/null | openssl x509 -noout -dates
For the primary URL, everything works fine; the command shows a valid certificate:
notBefore=Jun 20 00:00:00 2016 GMT
notAfter=Aug 19 23:59:59 2018 GMT
But when I run the exact same openssl command against the secondary URL, I get the previous certificate:
echo | openssl s_client -connect <PROVIDER_URL_2:443> 2>/dev/null | openssl x509 -noout -dates
notBefore=May 15 00:00:00 2014 GMT
notAfter=Jul 13 23:59:59 2016 GMT
I don't understand why our production environment sees two different certificates for PROVIDER_URL_1 and PROVIDER_URL_2 when, in our qualification environment, both URLs serve the same wildcard certificate.
Does anyone have an idea what the problem might be here?
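One way to narrow this down (a sketch only; PROVIDER_URL_1 and PROVIDER_URL_2 are placeholders for the real hostnames) is to compare what each site serves with and without SNI, since a front end that selects certificates by server name will answer the two variants differently:

```shell
# Placeholder hostnames - substitute the real provider URLs.
# Note: OpenSSL 1.1.1+ sends SNI by default even without -servername;
# on those versions, add -noservername to get the "without SNI" variant.
for host in PROVIDER_URL_1 PROVIDER_URL_2; do
  echo "== $host without SNI =="
  echo | openssl s_client -connect "$host:443" 2>/dev/null \
    | openssl x509 -noout -subject -dates
  echo "== $host with SNI =="
  echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
    | openssl x509 -noout -subject -dates
done
```

If the two variants disagree for the same host, the server is choosing certificates by SNI and handing an old default certificate to clients that don't send a server name.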

Related

New SSL certificate verification errors - what's the root cause?

My application uses the Close API (https://developer.close.com/) to store user data. Our testing environment is now getting SSL errors when trying to write to it:
Faraday::SSLError (SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (certificate has expired))
What could the cause be? My first assumption reading the error message is that the Close certificates might have expired. But that seems unlikely - the service is generally well-maintained, and we are not having any issues in production.
The next thing I considered was that perhaps the certificate store on our server is out of date. That would not surprise me much, because the server is running Ubuntu 14, and other people are experiencing problems there. But we have multiple machines with the same configuration, and only one of them is giving us trouble. Is it possible this is the root cause?
To reproduce:
$ openssl s_client -CApath /dev/null -showcerts -connect api.close.com:443 -servername api.close.com
CONNECTED(00000003)
depth=3 O = Digital Signature Trust Co., CN = DST Root CA X3
verify error:num=10:certificate has expired
notAfter=Sep 30 14:01:15 2021 GMT
verify return:0
The problem turned out to be the recent expiry of the DST root certificate: the old root (DST Root CA X3) expired on Sep 30, 2021, and the modern one (ISRG Root X1) is now expected.
The testing server still had both the DST and ISRG certificates installed, whereas all the other machines only had ISRG. I assume that something about validating this API's chain was looking preferentially at the DST certificate and ignoring ISRG unless it was the only one available.
To solve it: remove the DST certificate from /usr/share/ca-certificates/mozilla and leave the ISRG one in place. After that, the openssl command above succeeds.

how do I unblock self-signed SSL certificates?

Issue: users can't log into mobile app due to "unable to contact server"
debugging message: "TypeError: Network request failed"
Attempted fixes: restarted the server, verified that the DB is running and nothing has changed, restarted the VM the server runs on, and checked the API using Postman. When I ran a simple POST request, I got the following message:
There was an error connecting to
https://app.something.com/api/Accounts/5076/sometest?filter%5Bwhere%5D%xxxxx%5D=null&access_token=mwVfUBNxxxxxxx5x4A4Y5DktKnTZXeL6CB34MoP.
One of the suggestions I was given was:
Self-signed SSL certificates are being blocked: Fix this by turning
off 'SSL certificate verification' in Settings > General
As soon as I followed this step, I was able to make the POST request and everything seemed to work fine. I'm completely new to this type of error. Also, I did not set up this app/DB/certificates. So, other than unblocking self-signed SSL certificates (which seems like a really bad idea), I'm not sure how to proceed. What are my options?
Here is the result of examining the certificate:
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify error:num=20:unable to get local issuer certificate
verify return:0
poll error
notBefore=Jan 28 11:54:38 2019 GMT
notAfter=Apr 28 11:54:38 2019 GMT
Either purchase a signed certificate from a CA if you plan to expose this to the public,
or use the free option, Let's Encrypt. With this service you are issued free certificates; however, they expire in a relatively short period of time. Most of the time, though, you can run an agent which automatically rotates the certificates before they expire.
The third option is to install the CA certificate that was used to self-sign this into your browser, i.e., like a large company might do.
edit
It seems like it might instead be an expired certificate. Check when it expires with this:
openssl s_client -showcerts -servername www.stackoverflow.com -connect www.stackoverflow.com:443 </dev/null | openssl x509 -noout -dates
Change both instances of stackoverflow to your domain.
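For scripting this kind of check, openssl x509 also has a -checkend flag that turns "does this certificate expire within N seconds?" into an exit status. A minimal local sketch (throwaway file names; it generates a one-day certificate so no network is needed):

```shell
# A throwaway certificate valid for exactly 1 day.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/check.key \
  -out /tmp/check.pem -days 1 -subj "/CN=demo.local" 2>/dev/null

# Exit status 0 means the cert is still valid N seconds from now.
if openssl x509 -checkend 3600 -noout -in /tmp/check.pem >/dev/null; then
  echo "valid for at least another hour"
fi
if ! openssl x509 -checkend 172800 -noout -in /tmp/check.pem >/dev/null; then
  echo "expires within two days"
fi
```

The same flag works on a live endpoint by piping s_client output into openssl x509 -checkend, which makes it easy to cron a renewal warning.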

Lookup Let's Encrypt expiry date when behind Cloudflare

My website uses an SSL certificate from Let's Encrypt. The website also goes through Cloudflare. This means that the website uses Cloudflare's SSL certificate from the user's browser to Cloudflare and then it uses Let's Encrypt's from Cloudflare to the website server.
When I look up the website's SSL certificate in a browser, all I see is Cloudflare's SSL cert and its expiry date, which is about 6 months in the future. However, I know that the Let's Encrypt certificate will expire much sooner than that, but when?
All methods that I have seen for looking up this date also only get the client-facing Cloudflare SSL cert date.
echo | openssl s_client -connect <website>:443 -servername <website> 2>/dev/null | openssl x509 -noout -dates
I obviously need to know the (much sooner) date for when I need to renew the Let's Encrypt certificate. You know, so my website doesn't go down...
The answer is to use localhost, not the domain.
This is how I run it on Ubuntu, on the server where the Let's Encrypt certificate is stored.
echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -dates
If you have more than one certificate on the server, then this might work, but I'm not sure (I only have one):
echo | openssl s_client -connect localhost:443 -servername <website> 2>/dev/null | openssl x509 -noout -dates
If you installed the certificate with certbot, this will also tell you the renewal dates:
certbot renew
Note that this will also renew the cert if fewer than 30 days are left; to list the expiry dates without attempting a renewal, use certbot certificates instead.
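If you have shell access on the origin server, you can also skip the TLS connection entirely and read the certificate file from disk. Assuming certbot's default directory layout (the path below is certbot's convention; substitute your own domain for the placeholder):

```shell
# certbot keeps the live certificates under /etc/letsencrypt/live/<domain>/
openssl x509 -noout -dates -in /etc/letsencrypt/live/<website>/fullchain.pem
```

This reads the leaf certificate directly, so Cloudflare never enters the picture.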

Using '-servername' param with openssl s_client

I am installing a new SSL certificate on CentOS 6/Apache, and my web browser keeps picking up the old certificate. To test my setup, I am using "openssl s_client", but I am seeing different results depending on the "-servername" parameter. Hardly anyone seems to use this parameter and it does not appear in the man pages, but I saw it mentioned here: OpenSSL: Check SSL Certificate Expiration Date and More.
If I run this command:
echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates
I get the correct date for the certificate.
(notBefore=Apr 20 00:00:00 2017 GMT notAfter=Apr 20 23:59:59 2018 GMT)
However, if I introduce the -servername parameter into the command:
echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates
I then get the expired date that my browser is showing -
(notBefore=Apr 20 00:00:00 2016 GMT notAfter=Apr 20 23:59:59 2017 GMT)
Can anyone explain why this is happening? It must be related to the reason my SSL certificate shows as expired in my browser.
Thanks
The servername argument to s_client is documented (briefly) on this page:
https://www.openssl.org/docs/man1.0.2/apps/s_client.html
Essentially it works a little like a "Host" header in HTTP, i.e. it causes the requested domain name to be passed as part of the SSL/TLS handshake (in the SNI - Server Name Indication extension). A server can then host multiple domains behind a single IP. It will respond with the appropriate certificate based on the requested domain name.
If you do not request a specific domain name the server does not know which certificate to give you, so you end up with a default one. In your case one of the certificates that the server is serving up for your domain has expired, but the default certificate has not.
You need to make sure you are updating the correct VirtualHost entry for your domain, e.g. see:
https://www.digicert.com/ssl-support/apache-multiple-ssl-certificates-using-sni.htm
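The default-certificate behavior described above can be reproduced locally with openssl s_server, which accepts a second certificate (-cert2/-key2) that is served only when the client's SNI matches -servername. A sketch with made-up names and an arbitrary port:

```shell
# Two self-signed certs: a default one and one for "foo.local".
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/def.key \
  -out /tmp/def.pem -days 1 -subj "/CN=default.local" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/foo.key \
  -out /tmp/foo.pem -days 1 -subj "/CN=foo.local" 2>/dev/null

# Serve def.pem by default and foo.pem when SNI asks for foo.local.
openssl s_server -accept 8443 -cert /tmp/def.pem -key /tmp/def.key \
  -cert2 /tmp/foo.pem -key2 /tmp/foo.key -servername foo.local -www \
  >/dev/null 2>&1 &
SRV=$!
sleep 1

# Without SNI we get the default certificate...
# (-noservername needs OpenSSL 1.1.1+; older versions omit SNI by default)
NO_SNI=$(echo | openssl s_client -connect localhost:8443 -noservername \
  2>/dev/null | openssl x509 -noout -subject)
# ...with SNI we get the certificate matching the requested name.
WITH_SNI=$(echo | openssl s_client -connect localhost:8443 \
  -servername foo.local 2>/dev/null | openssl x509 -noout -subject)

echo "$NO_SNI"    # subject contains default.local
echo "$WITH_SNI"  # subject contains foo.local

kill $SRV 2>/dev/null || true
```

In the Apache case the "default" is typically the first matching VirtualHost, which is why updating the wrong vhost entry leaves non-SNI clients seeing the old certificate.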

Browser, s_client without SNI and expired certificate

When I access one of my subdomains: say https://foo.example.com in a browser and inspect the certificates, the certificate looks great. When I use openssl from a remote computer it shows an expired certificate. How can this be?
I tried to reproduce what was found in this question, but my scenario is different. When I run
echo | openssl s_client -showcerts -connect foo.example.com:443 2>&1 | grep Verify
I see:
Verify return code: 10 (certificate has expired)
When I run:
echo | openssl s_client -showcerts -connect foo.example.com:443 2>&1 | openssl x509 -noout -dates
I get:
notBefore=Sep 27 15:10:20 2014 GMT
notAfter=Sep 27 15:10:20 2015 GMT
It looks expired, but the browser doesn't show it as expired.
See the first comment by @jww, who pointed out that I needed to add -tls1 -servername foo.example.com to my openssl command. His comment:
Try adding -tls1 -servername foo.example.com. I'm guessing you have a front-end server that's providing a default domain for requests without SNI, and the default domain is routed to an internal server with the old certificate. When the browsers connect, they use SNI and get the server for which you have updated the certificate. Or, there could be an intermediate with an expired certificate in the chain that's being served. If you provide real information, its easier for us to help you with problems like this.