Ubuntu Server 16.04, PHP 7.4, Apache2 running WordPress, GeoTrust SHA256 certificate.
I have started getting the following error:
cURL error 60: SSL certificate problem: unable to get local issuer certificate
I have read through and tried most of the solutions in the many questions on here, but to no avail.
The latest two things I have tried are adding the following two lines to php.ini, restarting Apache, and rebooting after each one to see if it solves the issue. But it does not.
After downloading a fresh copy of cacert.pem, the first one I tried was:
curl.cainfo = "/path/to/cacert.pem"
Then I tried:
openssl.cafile = "/path/to/cacert.pem"
But I still get the same error.
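To confirm the directive is actually being picked up (the CLI and the Apache SAPI may read different php.ini files, so this is only indicative), something like the following can be run on the server:
php --ini
php -r 'var_dump(ini_get("curl.cainfo"), openssl_get_cert_locations());'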
Any assistance greatly appreciated.
Many thanks
This is likely to be a server problem ("unable to get local issuer certificate" often is):
Even when using a CA bundle to verify a server cert, you might still experience problems if your CA store does not contain the intermediate certificates and the server doesn't provide them.
The TLS protocol mandates that the intermediate certificates are sent in the handshake, but because browsers have ways to survive or work around such omissions, handshakes with missing intermediates still happen, and browser users won't notice.
Browsers work around this problem in two ways: they cache intermediate certificates from previous transfers, and some implement the TLS "AIA" extension that lets the client explicitly download such certificates on demand.
To figure out for sure if this is your problem, use a TLS test service like perhaps this one: https://www.ssllabs.com/ssltest/
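A quick way to check the same thing from a shell (replace the hostname with your own) is to look at the chain the server actually sends:
openssl s_client -connect example.com:443 -servername example.com -showcerts </dev/null
# if only a single (leaf) certificate is printed, the server is not sending its intermediate,
# and verification against a plain CA bundle fails with "unable to get local issuer certificate"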
Related
As part of updating the SSL/TLS certificate of the web server deployed in Kubernetes (the current one will expire soon), I updated the Kubernetes secret (kubernetes.io/tls) with the new crt and key.
After that, the application works fine in the browser.
But the API calls to the server (from some Python applications running in some pods) are hitting an SSLError.
The same calls work if I restore the old certificate on the server.
The error is:
requests.exceptions.SSLError: HTTPSConnectionPool(host='hostname',
port=443): Max retries exceeded with url: URL(Caused by
SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED]
certificate verify failed: unable to get local issuer certificate
(_ssl.c:1131)')))
I tried to resolve this by creating the crt and key in different ways from the pfx file.
But the issue remains.
I searched for anything that needs to be updated in the Kubernetes cluster as part of the certificate change, but I couldn't find a solution.
Any help will be greatly appreciated.
The issue was that the certificate I installed was missing the intermediate certificate. Browsers may "fill in the gap" by searching for the missing certificate themselves. Re-installing the certificate with the complete chain resolved the issue.
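In other words, the tls.crt in the kubernetes.io/tls secret has to contain the leaf certificate followed by the intermediate(s). A rough sketch of recreating the secret that way (the secret name and file names are placeholders):
cat server.crt intermediate.crt > tls-fullchain.crt
kubectl create secret tls my-tls-secret --cert=tls-fullchain.crt --key=server.key --dry-run=client -o yaml | kubectl apply -f -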
I have been looking for a solution for hours but can't find one. I am using a Let's Encrypt SSL certificate via certbot.
My domain is ektaz.com. When I check the certificate in a browser it says:
Expires: 8 November 2021 Monday 16:24:33 GMT+03:00
When I check it from the server side with certbot certificates I get the result:
Expiry Date: 2021-11-08 13:24:33+00:00 (VALID: 39 days)
But all browsers say the certificate is invalid, and I don't understand why.
Also, I have renewed this certificate many times using certbot renew and have had no issue so far. I have cleared all caches and tried again; the result is the same. I restarted Apache many times, and even restarted the server, but nothing changed.
Server OS: Ubuntu 20.04 LTS
Your certificate is likely not invalid at all.
There is a simple fix. I'm using nginx configuration style for this example:
ssl_certificate /usr/local/etc/letsencrypt/live/domain.com/cert.pem;
Lines like that need to be replaced by lines like this:
ssl_certificate /usr/local/etc/letsencrypt/live/domain.com/fullchain.pem;
Then refresh your server's configuration.
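For example, the usual nginx sequence is:
nginx -t          # sanity-check the configuration first
nginx -s reload   # or: systemctl reload nginx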
This problem is popping up all over the place, including with both small and large websites.
The root cause is older web server configuration tutorials that serve the cert.pem file (because it worked) rather than the fullchain.pem file, which makes sure a browser gets the full chain needed to validate the certificate.
Unfortunately, Apple, Mozilla, and some others have dropped the ball and are still using the same certificate (IdenTrust's DST Root CA X3), which expired yesterday afternoon at 2:21:40 pm CST, to check certificates that were chained to it before. iOS 15.0 (19A346) is the only released Apple software version that automatically uses the new intermediate certificate even when the server doesn't send the full chain.
The actual intermediate certificate being used by the server is issued to R3 by ISRG Root X1, but unless you configure your server to tell this to clients explicitly by serving fullchain.pem, many clients sadly won't work it out on their own, because many software companies have dropped the ball here as well.
But once again, this is an easy fix. Just make that slight change ("cert.pem" -> "fullchain.pem") to the lines in your server's configuration and you should be fine.
And there's no reason not to keep on using the fullchain.pem file permanently. In fact, even prior to this situation, various networks (college campus WiFi networks are notorious for this) will screw up your certificate's chain of authority unless you use the fullchain.pem file anyway. Let's Encrypt even recommends this now as the only proper way to configure your web server to use certificates.
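To see what fullchain.pem actually contains (the leaf plus the R3 intermediate issued by ISRG Root X1), something like this can be run on the server, with the path adjusted to your live directory (e.g. /etc/letsencrypt/live/ektaz.com on Ubuntu):
openssl crl2pkcs7 -nocrl -certfile /etc/letsencrypt/live/ektaz.com/fullchain.pem | openssl pkcs7 -print_certs -noout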
I purchased my SSL certificate from GoDaddy.
I made the common name www.mywebsite.com.
In my DNS settings I have the website forwarding from the naked domain to the www.mywebsite.com.
I removed any settings inside Heroku regarding the SSL certificate from the GUI.
Then I went through the instructions here.
To recap, I generated my server.key by first creating the CSR files and sending those to GoDaddy.
I purchased the $20/mo endpoint.
GoDaddy gives me a downloadable ZIP with my certificates: one file with a single certificate, and one with 3 certificates inside of it.
I ran the following command to install the bundled version first; it failed with the error message shown after it:
heroku certs:add server.crt server.key --type endpoint
No certificate given is a domain name certificate.
The reason I even tried to use the bundle is that my SSL doesn't work in Firefox, because the intermediary cert is not being included. After looking around for an answer on this, I couldn't find one.
So to get my website back up and running in the short term, I decided to just do what I did before, and upload the single cert. That works, but not really.
Now I get this message when I run the cURL test:
* error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error
Also, my website is down. :(
How do I fix this?
The answer in my case seems to be that purchasing an SSL cert is not necessary on Heroku. When you purchase a paid hosting package they provide SSL certificates by default without having to buy their SSL add-on endpoint.
There are likely other use-cases for using a paid SSL cert, but in my case I didn't have to do that.
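For those cases, the chain usually has to be uploaded with the intermediates concatenated after the domain certificate; a rough sketch (the bundle file name here is what a typical GoDaddy download uses and may differ):
cat server.crt gd_bundle-g2-g1.crt > server-chained.crt
heroku certs:add server-chained.crt server.key --type endpoint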
If this answer helped you, please upvote this question, as some people seem to think it's a question worth downvoting.
I am using an AWS ELB as the entry point (in proxy mode) to load balance between 2 HAProxy instances behind it, from where traffic goes on to an MQTT broker.
Those 2 HAProxies are responsible for client TLS termination (2-way TLS).
The certificates kind of work. I've tested them on a local setup between 2 servers: I've been able to connect with 2-way TLS, terminate it properly, and publish a message to MQTT. The problem arises when moving everything to AWS.
I am using a self-signed root CA, an intermediate CA, a server certificate and client certificates, all using elliptic curve keys.
The problem might be due to the server's CN. I think it has to be the same as the hostname you connect to with tools like mosquitto_pub.
The error I get is a TLS error, with debug showing "ssl handshake failure". Somehow I am not able to produce more verbose errors. I am using openssl s_client with debug for maximum output, which also produces "ssl handshake failure".
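For reference, the kind of s_client test mentioned above looks roughly like this (the host, port and file names are placeholders for my setup):
openssl s_client -connect elb-endpoint.example.com:8883 -CAfile ca-chain.pem -cert client.crt -key client.key -state -debug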
I would really appreciate any hint/suggestion.
Thanks in advance.
Tomaz
I solved this by using the subjectAltName feature. I edited openssl.cnf, added a new section [alt_names], and referenced it later on in the configuration. Under alt_names I added 1 DNS entry and 2 IPs. Details can be found with man x509v3_config.
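Roughly, the relevant fragment of openssl.cnf looks like this (the section name, hostname and IPs are placeholders; the extension section is referenced with -extfile/-extensions when signing the CSR):
[ server_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = broker.example.com
IP.1  = 10.0.0.11
IP.2  = 203.0.113.10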
Best,
Tomaz
My Nginx server has an SSL certificate that looks really good and works in most browsers perfectly. The server is https://live.evmote.com . You can "hit" the server by going to https://live.evmote.com/primus .
The SSL Cert check is here: https://www.ssllabs.com/ssltest/analyze.html?d=live.evmote.com
So far, so good. The problem is specifically on the Tesla Model S browser (the in-car browser). It gives a "Bad certificate" error. The Tesla browser is notoriously bad and has incomplete support. There's no way to view the cert chain or debug the problem from the Tesla. It's more like an appliance than a computer.
Here's the SSL support from within the Tesla:
http://i.imgur.com/EbIrClM.jpg
On the Nginx server, I'm getting this error in the log:
SSL3_GET_CLIENT_HELLO:no shared cipher
Now, clearly from the Tesla SSL report and the server report, there are shared ciphers. I would expect that they would handshake on this one:
TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
I'm not sure how to troubleshoot from here.
Thanks,
Ryan
The error message might be misleading. What's definitely a problem is that the browser does not support SNI, but your web site requires it. At least, it only serves the valid certificate (for live.evmote.com) to SNI-capable browsers; all the others get a self-signed wildcard certificate, which will not be accepted by a browser doing proper certificate validation.
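This is easy to confirm from a shell by comparing the certificate served with and without SNI:
# with SNI: should return the valid live.evmote.com certificate
openssl s_client -connect live.evmote.com:443 -servername live.evmote.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
# without SNI (roughly what a non-SNI client such as the in-car browser does);
# on OpenSSL 1.1.1+ use -noservername, on older versions simply omit -servername
openssl s_client -connect live.evmote.com:443 -noservername </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer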
We've hit a similar problem from a Java client. The root cause was explicitly set protocols in the SSL context (io.netty.handler.ssl.SslContext):
val ctx = io.netty.handler.ssl.SslContextBuilder.forClient()
...
.protocols("SSLv2Hello,TLSv1.2,TLSv1.3".split(","))
.build()
And precisely SSLv2Hello is the one that should be neither declared nor used here.
Unfortunately, we can't control all our clients, and further investigation showed this problem appeared after an OpenSSL upgrade to 1.1.*.
Downgrading OpenSSL to 1.0.* did the trick: Nginx 1.18.0 + OpenSSL 1.0.2u could handle handshakes with SSLv2Hello without errors and with the highest protocol possible.
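If you need to confirm which OpenSSL your nginx build is actually using (relevant given the 1.0.* vs 1.1.* behaviour described above), something like this can be checked on the server:
nginx -V 2>&1 | grep -i openssl   # shows the OpenSSL version nginx was built with
openssl version                   # the OpenSSL library installed on the system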