Browser, s_client without SNI and expired certificate

When I access one of my subdomains, say https://foo.example.com, in a browser and inspect the certificate, the certificate looks great. When I use openssl from a remote computer, it shows an expired certificate. How can this be?
I tried to reproduce what was found in this question, but my scenario is different. When I run
echo | openssl s_client -showcerts -connect foo.example.com:443 2>&1 | grep Verify
I see:
Verify return code: 10 (certificate has expired)
When I run:
echo | openssl s_client -showcerts -connect foo.example.com:443 2>&1 | openssl x509 -noout -dates
I get:
notBefore=Sep 27 15:10:20 2014 GMT
notAfter=Sep 27 15:10:20 2015 GMT
It looks expired, but the browser doesn't show it as expired.

See the first comment by @jww. He pointed out that I needed to add -tls1 -servername foo.example.com to my openssl command. His comment:
Try adding -tls1 -servername foo.example.com. I'm guessing you have a front-end server that's providing a default domain for requests without SNI, and the default domain is routed to an internal server with the old certificate. When the browsers connect, they use SNI and get the server for which you have updated the certificate. Or, there could be an intermediate with an expired certificate in the chain that's being served. If you provide real information, it's easier for us to help you with problems like this.
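Applying that suggestion, the original command would become:
echo | openssl s_client -showcerts -tls1 -servername foo.example.com -connect foo.example.com:443 2>&1 | grep Verify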

Related

Lookup Let's Encrypt expiry date when behind Cloudflare

My website uses an SSL certificate from Let's Encrypt. The website also goes through Cloudflare. This means that the website uses Cloudflare's SSL certificate from the user's browser to Cloudflare and then it uses Let's Encrypt's from Cloudflare to the website server.
When I look up the website's SSL certificate in a browser, all I see is Cloudflare's SSL cert and its expiry date. That date is about 6 months in the future. However, I know that the Let's Encrypt certificate will expire much sooner than that. But when?
All methods that I have seen for looking up this date also only return the client-facing Cloudflare SSL cert's date.
echo | openssl s_client -connect <website>:443 -servername <website> 2>/dev/null | openssl x509 -noout -dates
I obviously need to know the (much sooner) date for when I need to renew the Let's Encrypt certificate. You know, so my website doesn't go down...
The answer is to use localhost, not the domain.
This is how I run it from Ubuntu, on the server where the Let's Encrypt certificate is stored.
echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -dates
If you have more than one certificate on the server, then this might work, but I'm not sure (I only have one):
echo | openssl s_client -connect localhost:443 -servername <website> 2>/dev/null | openssl x509 -noout -dates
This will also tell you the renewal dates if you installed the certificate with certbot:
certbot renew
Note that this will also renew the cert if less than 30 days are left.
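If you only want to see expiry dates without any chance of triggering a renewal, certbot can also list the certificates it manages (assuming a reasonably recent certbot version):
certbot certificates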

SSL_connect:error in SSLv3 read server hello A [duplicate]

I am running Windows Vista and am attempting to connect via https to upload a file in a multipart form, but I am having some trouble with the local issuer certificate. I am just trying to figure out why this isn't working now, and will go back to my cURL code later after this is worked out. I'm running the command:
openssl s_client -connect connect_to_site.com:443
It gives me a digital certificate from VeriSign, Inc., but also shoots out an error:
Verify return code: 20 (unable to get local issuer certificate)
What is the local issuer certificate? Is that a certificate from my own computer? Is there a way around this? I have tried using a -CAfile mozilla.pem file, but it still gives me the same error.
I had the same problem and solved it by passing the path to the directory where CA certificates are stored. On Ubuntu it was:
openssl s_client -CApath /etc/ssl/certs/ -connect address.com:443
Solution:
You must explicitly add the parameter -CAfile your-ca-file.pem.
Note: I also tried the -CApath parameter mentioned in other answers, but it does not work for me.
Explanation:
The error unable to get local issuer certificate means that openssl does not know your root CA certificate.
Note: If you have a web server with multiple domains, do not forget to also add the -servername your.domain.net parameter. This parameter will "Set TLS extension servername in ClientHello". Without it, the response will always contain the default SSL certificate, not the certificate that matches your domain.
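Putting the two notes together, a full invocation would look something like this (your-ca-file.pem and your.domain.net are placeholders):
openssl s_client -CAfile your-ca-file.pem -servername your.domain.net -connect your.domain.net:443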
This error also happens if you're using a self-signed certificate with a keyUsage missing the value keyCertSign.
Is your server configured for client authentication? If so you need to pass the client certificate while connecting with the server.
I had the same problem on OS X with OpenSSL 1.0.1i from MacPorts, and also had to specify CApath as a workaround (and, as mentioned in the Ubuntu bug report, even an invalid CApath will make openssl look in the default directory).
Interestingly, connecting to the same server using PHP's openssl functions (as used in PHPMailer 5) worked fine.
Put your CA and root certificates in /usr/share/ca-certificates or /usr/local/share/ca-certificates.
Then
dpkg-reconfigure ca-certificates
or even reinstall the ca-certificates package with apt-get.
After doing this, your certificates are collected into the system's bundle:
/etc/ssl/certs/ca-certificates.crt
Then everything should be fine.
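As a rough sanity check, you can count the certificates in the regenerated bundle; the count should grow by the number of certificates you added:
grep -c 'BEGIN CERTIFICATE' /etc/ssl/certs/ca-certificates.crt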
With client authentication:
openssl s_client -cert ./client-cert.pem -key ./client-key.key -CApath /etc/ssl/certs/ -connect foo.example.com:443
Create the certificate chain file with the intermediate and root ca.
cat intermediate/certs/intermediate.cert.pem certs/ca.cert.pem > intermediate/certs/ca-chain.cert.pem
chmod 444 intermediate/certs/ca-chain.cert.pem
Then verify:
openssl verify -CAfile intermediate/certs/ca-chain.cert.pem \
intermediate/certs/www.example.com.cert.pem
www.example.com.cert.pem: OK
Deploy the certificate chain file.
I faced the same issue.
It got fixed by making the issuer field in the certificate exactly match the subject of the issuer's certificate.
So please check that the issuer of the certificate (cert.pem) equals the subject of the issuer (CA.pem):
openssl verify -CAfile CA.pem cert.pem
cert.pem: OK
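To inspect the two fields directly (using the same file names as above):
openssl x509 -noout -issuer -in cert.pem
openssl x509 -noout -subject -in CA.pem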
I got this problem when my NGINX server did not have a complete certificate chain in the certificate file it was configured with.
My solution was to find a similar server and extract the certificates from that server with something like:
openssl s_client -showcerts -CAfile my_local_issuer_CA.cer -connect my.example.com:443 > output.txt
Then I added the ASCII-armoured certificates from that 'output.txt' file (except the machine certificate) to a copy of my machine's certificate file, pointed NGINX at that copy instead, and the error went away.
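A rough way to spot an incomplete chain (my.example.com is a placeholder) is to count how many certificates the server actually sends; a server that only sends the leaf will report 1:
echo | openssl s_client -showcerts -connect my.example.com:443 2>/dev/null | grep -c 'BEGIN CERTIFICATE'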
This error message means that
the CA bundle was not given (with -CAfile ...),
OR
the CA bundle file is not terminated by a self-signed root certificate.
Don't worry. The connection to the server will still work even if
you get this message from openssl s_client ...
(assuming you have made no other mistakes).
I would update @user1462586's answer by doing the following:
I think it is more suitable to use the update-ca-certificates command, included in the ca-certificates package, than dpkg-reconfigure.
So basically, I would change that useful answer to this:
Retrieve the certificate (as in this stackoverflow answer) and write it to the right directory:
# let's say we call it my-own-cert.crt
openssl s_client -CApath /etc/ssl/certs/ -connect <hostname.domain.tld>:<port> 2>/dev/null </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /usr/share/ca-certificates/my-own-cert.crt
Repeat the operation if you need other certificates.
For example, if you need CA certs for ldaps/starttls with Active Directory, see here for how to obtain them, then use openssl to convert the result to pem/crt:
openssl x509 -inform der -in LdapSecure.cer -out my-own-ca.pem
#and copy it in the right directory...
cp my-own-ca.pem /usr/share/ca-certificates/my-own-ca.crt
Add these certificates to the /etc/ca-certificates.conf configuration file:
echo "my-own-cert.crt" >> /etc/ca-certificates.conf
echo "my-own-ca.crt" >> /etc/ca-certificates.conf
Update the /etc/ssl/certs directory:
update-ca-certificates
Enjoy
Note that if your machines use private domain names instead of legitimate public domain names, you may need to edit your /etc/hosts file so the corresponding FQDN resolves.
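For example (hypothetical IP and host name):
echo '192.0.2.10 foo.internal.example' | sudo tee -a /etc/hosts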
This is due to an SNI certificate binding issue on the VServer or the server itself.

Unable to verify ssl certificate

I am not able to verify the webmaster account of one of my clients.
Google is saying "Verification failed - The connection to your server timed out."
When I tried to wget the URL, I got the error below. Can someone please help me resolve this?
[pdurgapal]$ wget https://atlanticdiscountstore.com
--2017-06-28 11:48:48-- https://atlanticdiscountstore.com
Resolving atlanticdiscountstore.com... 188.241.58.18
Connecting to atlanticdiscountstore.com|188.241.58.18|:443... connected.
ERROR: cannot verify atlanticdiscountstore.com’s certificate, issued by “/CN=baldwincountyunited.com”:
Self-signed certificate encountered.
ERROR: certificate common name “baldwincountyunited.com” doesn’t match requested host name “atlanticdiscountstore.com”.
To connect to atlanticdiscountstore.com insecurely, use ‘--no-check-certificate’.
[pdurgapal]$
You must be using a very old version of wget which has no support for SNI. When using a proper client with support for SNI, the certificate can be verified. Apart from that, the server is terribly slow in responding after the TLS handshake is successfully done, but this is not the issue you asked about.
To demonstrate the problem, here is an access to the site without SNI:
$ openssl s_client -connect atlanticdiscountstore.com:443 |\
openssl x509 -text
...
Subject: CN=baldwincountyunited.com
...
X509v3 Subject Alternative Name:
DNS:baldwincountyunited.com, DNS:mail.baldwincountyunited.com, DNS:www.baldwincountyunited.com
and with SNI:
$ openssl s_client -connect atlanticdiscountstore.com:443 \
-servername atlanticdiscountstore.com |\
openssl x509 -text
...
Subject: ... CN=*.atlanticdiscountstore.com
...
X509v3 Subject Alternative Name:
DNS:*.atlanticdiscountstore.com, DNS:atlanticdiscountstore.com

Using '-servername' param with openssl s_client

I am installing a new SSL certificate on CentOS 6/Apache and my web browser keeps picking up the old certificate. To test my setup, I am using "openssl s_client", but I am seeing different results based on the "-servername" parameter. No one seems to use this parameter and it does not appear in the man pages, but I saw it mentioned here: OpenSSL: Check SSL Certificate Expiration Date and More.
If I run this command:
echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates
I get the correct date for the certificate.
(notBefore=Apr 20 00:00:00 2017 GMT notAfter=Apr 20 23:59:59 2018 GMT)
However, if I introduce the -servername parameter into the command
echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates
I then get the expired date that my browser is showing:
(notBefore=Apr 20 00:00:00 2016 GMT notAfter=Apr 20 23:59:59 2017 GMT)
Can anyone explain why this is happening? It must be related to the reason why my SSL certificate shows as expired in my browser.
Thanks
The servername argument to s_client is documented (briefly) on this page:
https://www.openssl.org/docs/man1.0.2/apps/s_client.html
Essentially it works a little like a "Host" header in HTTP: it causes the requested domain name to be passed as part of the SSL/TLS handshake, in the SNI (Server Name Indication) extension. A server can then host multiple domains behind a single IP and respond with the appropriate certificate based on the requested domain name.
If you do not request a specific domain name the server does not know which certificate to give you, so you end up with a default one. In your case one of the certificates that the server is serving up for your domain has expired, but the default certificate has not.
You need to make sure you are updating the correct VirtualHost entry for your domain, e.g. see:
https://www.digicert.com/ssl-support/apache-multiple-ssl-certificates-using-sni.htm
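On the server itself, dumping Apache's parsed virtual host configuration can help you confirm which VirtualHost actually answers for the domain (assuming the standard control script is installed):
apachectl -S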

PayPal SSL Certificate Change: Testing Verisign G5 Certificate

I'm trying to confirm that our server will be ready for the SSL Certificate Change.
According to the Microsite, migration on www.sandbox.paypal.com is complete.
Running:
openssl s_client -CApath /etc/ssl/certs/ -connect www.sandbox.paypal.com:443
returned 0 (ok)
Does this test definitively confirm that our server is ready?
The openssl connection return code (0) is affirmative for this cert check, but there's a slight change you may want to make to the call.
Run the following line and try the connection one more time (I've added the -showcerts parameter so that the cert chain will be printed out and you can easily identify the Verisign G5 root cert in there):
openssl s_client -connect api-3t.sandbox.paypal.com:443 -showcerts -CApath /etc/ssl/certs/
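To confirm the G5 root actually shows up in the printed chain, you can filter the output, for example:
echo | openssl s_client -connect api-3t.sandbox.paypal.com:443 -showcerts -CApath /etc/ssl/certs/ 2>/dev/null | grep G5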