We are running two applications on Amazon EC2 (backend.example.com and frontend.example.com), and both use a paid SSL certificate. That certificate expires in June 2021, but today we got this error:
cURL error 60: SSL certificate problem: certificate has expired (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
We checked the certificate expiration date, and there was no problem (June 2021). Then we followed this thread: curl: (60) SSL certificate problem: unable to get local issuer certificate (@Dahomz's answer).
After that, curling the backend with curl -v --url https://backend.example.com --cacert /etc/ssl/ssl.cert/cacert.pem works fine. The response looks like:
* Rebuilt URL to: https://backend.example.com/
* Trying 127.0.0.1...
* Connected to backend.example.com (127.0.0.1) port 443 (#0)
* found 139 certificates in /etc/ssl/ssl.cert/cacert.pem
* found 600 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ******_RSA_***_***_GCM_*****
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: *.example.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: OU=Domain Control Validated,OU=PositiveSSL Wildcard,CN=*.example.com
* start date: Mon, 04 May 2019 00:00:00 GMT
* expire date: Wed, 07 June 2021 23:59:59 GMT
* issuer: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA
* compression: NULL
* ALPN, server accepted to use http/1.1
But when we hit backend.example.com from frontend.example.com via curl, it throws this error:
* Rebuilt URL to: https://backend.example.com/
* Trying 127.0.0.1...
* Connected to backend.example.com (127.0.0.1) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/ssl.cert/cacert.pem
CApath: /etc/ssl/certs
* SSL connection using TLSv1.2 / *****-RSA-*****-GCM-******
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: OU=Domain Control Validated; OU=PositiveSSL Wildcard; CN=*.example.com
* start date: Mar 4 00:00:00 2019 GMT
* expire date: Apr 7 23:59:59 2021 GMT
* issuer: C=GB; ST=Greater Manchester; L=Salford; O=Sectigo Limited; CN=Sectigo RSA Domain Validation Secure Server CA
* SSL certificate verify result: certificate has expired (10), continuing anyway.
My curl code:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://backend.example.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// Write the verbose handshake log to public/c.log for debugging
curl_setopt($ch, CURLOPT_VERBOSE, 1);
curl_setopt($ch, CURLOPT_STDERR, fopen(public_path("c.log"), 'w'));
// Peer and host verification are disabled here, yet the handshake log
// above still reports the expired chain
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
$output = curl_exec($ch);
$error = curl_error($ch);
$info = curl_getinfo($ch);
curl_close($ch);
To fix the problem, remove the expired root certificate from your certificate chain.
Go to https://whatsmychaincert.com
Test your server.
If it confirms that you have an expired root certificate, download and use the .crt bundle without that certificate.
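If you'd prefer to check from the command line, a rough equivalent (a sketch, using the backend host from the question) is to split the chain the server sends into files and print each certificate's subject and expiry:
# Writes chain1.pem, chain2.pem, ... into the current directory, then
# prints subject and notAfter for each; an expired AddTrust/COMODO
# root or intermediate will show up here.
echo | openssl s_client -connect backend.example.com:443 -showcerts 2>/dev/null \
  | awk '/BEGIN CERTIFICATE/{n++} /BEGIN CERTIFICATE/,/END CERTIFICATE/{print > ("chain" n ".pem")}'
for f in chain*.pem; do
  openssl x509 -noout -subject -enddate -in "$f"
done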
If you're having this issue with curl (or similar) on an Ubuntu 16.04 system, here's how we fixed it:
On the Ubuntu 16 system hosting the curl / app that fails:
nano /etc/ca-certificates.conf
Remove (or comment out) the line specifying AddTrust_External_Root.crt
apt update && apt install ca-certificates
update-ca-certificates -f -v
Try curl again with the URL that was failing before - hopefully it works now :)
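The same edit can be scripted; a minimal sketch assuming stock Ubuntu paths (in /etc/ca-certificates.conf a leading "!" marks a certificate as deselected):
# Deselect the expired AddTrust root, then rebuild the bundle:
sudo sed -i 's|^mozilla/AddTrust_External_Root.crt|!mozilla/AddTrust_External_Root.crt|' /etc/ca-certificates.conf
sudo update-ca-certificates -f -v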
For Ubuntu 14.04:
Open your terminal
sudo su
wget https://support.sectigo.com/Com_KnowledgeDetailPage?Id=kA01N000000rfBO -O SHA-2_Root_USERTrust_RSA_Certification_Authority.crt --no-check-certificate
cp SHA-2_Root_USERTrust_RSA_Certification_Authority.crt /usr/share/ca-certificates/mozilla/
Then run dpkg-reconfigure ca-certificates, uncheck mozilla/AddTrust_External_Root.crt and check mozilla/SHA-2_Root_USERTrust_RSA_Certification_Authority.crt, then run sudo update-ca-certificates to apply the changes.
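You can then check that the expired root is really gone; on Ubuntu the active certificates are symlinked under /etc/ssl/certs:
# No output here means the expired AddTrust root is no longer active:
ls /etc/ssl/certs | grep -i addtrust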
You could enable insecure connections by adding this option to your
$HOME/.curlrc file:
$ echo "insecure" >> ~/.curlrc
Keeping this permanently is not recommended, but as a quick, temporary workaround it is a good option.
Reference: How to apply the changes for all HTTPS connections
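To undo the workaround later, delete that line again; a one-line sketch:
# Remove the "insecure" line from ~/.curlrc once the real fix is in place:
sed -i '/^insecure$/d' ~/.curlrc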
It seems like your truststore is not updated with the latest trusted roots. Given that this started happening to you yesterday, 30 May, I am assuming that you have Sectigo as your CA.
Update your truststore and you should be able to connect.
https://support.sectigo.com/articles/Knowledge/Sectigo-AddTrust-External-CA-Root-Expiring-May-30-2020
A permanent solution would be to reissue the SSL certificate from your provider and reinstall it on your server.
The reissued certificate would update the CA bundle.
Cheers!
We had the same error. To solve it, update your "SSLCertificateChainFile" with the newest CA bundle from your trusted SSL vendor; in our case that is Comodo.
Go to your vendor's site and find the "CA-CRT" under your certificates. Copy the content.
Go to /etc/apache2/sites-available
Find the line with "SSLCertificateChainFile".
Edit the file it points to and replace the content with your new CA-CRT values.
Then restart your web server; in our case that is Apache:
service apache2 restart
or
systemctl restart apache2
If you can't restart Apache, the easy way is to reboot your instance.
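Afterwards you can verify the chain the server sends; a sketch, using the backend host from the question:
# Print the certificate chain presented by the server; the expired
# AddTrust root should no longer be part of it:
echo | openssl s_client -connect backend.example.com:443 2>/dev/null \
  | sed -n '/^Certificate chain/,/^---$/p'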
We had the same issue. After some troubleshooting we found that the COMODO root certificates had expired. We tested this via https://www.ssllabs.com/ssltest/index.html, and this is the result we received about the COMODO certificates:
Valid until Sat, 30 May 2020 10:48:38 UTC (expired 3 days, 5 hours ago) EXPIRED
We resolved it by downloading the certificates freshly from our reseller.
I had to fix this issue on a Debian-based server. It was due to the system's use of OpenSSL (curl depends on OpenSSL). Here is how it went:
Remove AddTrust_External_Root.crt from your system (usually found in /etc/ssl/certs)
Remove or comment out the "mozilla/AddTrust_External_Root" line in /etc/ca-certificates.conf
Run sudo update-ca-certificates to update the certificates used by OpenSSL
Maybe it can help you.
Yesterday I ran into the problem @finesse reported above. Since the ca-certificates on our system are updated automatically, I was quite puzzled, because the certificate was valid:
when using curl on the command line
when using a PHP script with php-cli
Yet it did not work from the web site.
The solution was simple:
just restart php-fpm :/
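For example (a sketch; the exact service name depends on your distro and PHP version):
# Debian/Ubuntu style (version suffix varies):
sudo systemctl restart php7.4-fpm
# CentOS/RHEL style:
sudo systemctl restart php-fpm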
Best regards,
Willi
Change or edit the settings below:
Server key: a private encryption/decryption key used by the server.
Intermediate certificate (CA): a Certificate Authority (CA) is an entity that issues digital certificates, which verify the ownership of a public key by the named subject of the certificate.
Domain certificate: an electronic document issued by the Certification Authority that confirms the applicant's permission to use a specific domain name.
I managed to fix the problem by running updates on my server:
sudo yum update
This seems to have fixed the certificate issues with curl.
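If a full update is too heavy-handed, refreshing only the CA trust store may be enough; a sketch for yum-based (RHEL/CentOS) systems:
# Update only the CA bundle package, then rebuild the system trust store:
sudo yum update -y ca-certificates
sudo update-ca-trust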
Related
We would like to have our own custom brew repository so that our developers can easily manage/update our company tools. We decided to keep all these files in an AWS S3 bucket and have the brew formulas point directly to the objects' URLs. The only restriction we have is that access to the S3 bucket must only be possible from behind our VPN network.
So what we did:
Created new bucket, let's say with following name: downloads.example.com
Created an S3 VPC endpoint. AWS created a DNS entry:
*.vpce-XXXXXXXXXXXXXXX-XXXXXX.s3.eu-west-1.vpce.amazonaws.com
In the bucket policy we limited access only to that AWS S3 endpoint:
"Condition": {
"StringEquals": {
"aws:SourceVpce": "vpce-XXXXXXXXXXXXXXX"
}
}
We created a Route 53 DNS entry:
an A record for downloads.example.com as an alias to *.vpce-XXXXXXXXXXXXXXX-XXXXXX.s3.eu-west-1.vpce.amazonaws.com
After that simple configuration, we are able to get/push objects with AWS CLI commands only while connected to our VPN server.
Unfortunately, there is a problem when we want to use curl. For example:
* Trying 10.X.X.X:443...
* Connected to downloads.example.com (10.X.X.X) port 443 (#0)
...
* Server certificate:
* subject: CN=s3.eu-west-1.amazonaws.com
* start date: Dec 16 00:00:00 2021 GMT
* expire date: Jan 14 23:59:59 2023 GMT
* subjectAltName does not match downloads.example.com
* SSL: no alternative certificate subject name matches target host name 'downloads.example.com'
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
If I run the same command skipping CA verification, it works:
20211217 16:56:52 kamil@thor ~$ curl -Ls https://downloads.example.com/getMe.txt -k
test file
Do you know if there is any way to make this work properly?
I know that we could do the following things, but we would like to see other options:
push a route to s3.eu-west-1.amazonaws.com via the VPN and limit access in the bucket policy to our VPN's public IP only
install the right certificates on ingress/nginx to do some redirect/proxy
we tried some combinations with load balancers and ACM, but they didn't work.
Thank you in advance for help
Kamil
I'm afraid it is not possible to do what you want.
When you create an endpoint, AWS does not create certificates for your own domain. It creates a certificate for its own domains.
You can check this yourself.
First, download the certificate:
$ echo | openssl s_client -connect 10.99.16.29:443 2>&1 | sed --quiet '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > vpce.pem
Then you can verify which names are in the certificate:
$ openssl x509 -noout -text -in vpce.pem | grep DNS | tr "," "\n" | sort -u
DNS:s3.eu-central-1.amazonaws.com
DNS:*.accesspoint.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:*.accesspoint.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
DNS:*.bucket.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:*.bucket.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
DNS:*.control.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:*.control.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
DNS:*.s3-accesspoint.eu-central-1.amazonaws.com
DNS:*.s3-control.eu-central-1.amazonaws.com
DNS:*.s3.eu-central-1.amazonaws.com
DNS:bucket.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:bucket.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
Note: for brevity, I've removed some names from the list.
So, to access your endpoint without certificate problems, you need to use one of the names provided in the certificate.
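For example, a sketch reusing the endpoint names above (path-style addressing, with the bucket name from the question as the first path segment; your endpoint ID and region will differ):
# This host name is one the endpoint certificate actually covers:
curl -Ls https://bucket.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com/downloads.example.com/getMe.txt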
I've set up a server on my local network, but I'm stymied troubleshooting HTTPS connections. When I use curl -v to make a request, I'm told, "requested domain name does not match the server's certificate."
But the output itself appears to indicate that the domain does match: "reg.qa"
What's the real issue here?
$ curl -v https://reg.qa/
* Added reg.qa:443:172.18.0.4 to DNS cache
* About to connect() to reg.qa port 443 (#0)
* Trying 172.18.0.4...
* Connected to reg.qa (172.18.0.4) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* Server certificate:
* subject: CN=reg.qa,OU=Web Test,O=MYORG,ST=CA,C=US
* start date: Mar 11 20:53:04 2020 GMT
* expire date: Mar 10 20:53:04 2025 GMT
* common name: reg.qa
* issuer: CN=reg.qa,OU=Web Test,O=MYORG,ST=CA,C=US
* NSS error -12276 (SSL_ERROR_BAD_CERT_DOMAIN)
* Unable to communicate securely with peer: requested domain name does not match the server's certificate.
* Closing connection 0
curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate.
NSS error -12276 (SSL_ERROR_BAD_CERT_DOMAIN)
As far as I know, newer versions of NSS require that the certificate use a subject alternative name (SAN) to describe the valid domains. Relying on the common name (CN) has been obsolete for many years, and several browsers and TLS stacks started enforcing this a while ago.
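If you control this certificate, the fix is to reissue it with a SAN; a minimal self-signed sketch (the -addext flag needs OpenSSL 1.1.1 or newer):
# Self-signed cert whose SAN covers reg.qa; modern NSS ignores the CN:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout reg.qa.key -out reg.qa.crt \
  -subj "/CN=reg.qa" \
  -addext "subjectAltName=DNS:reg.qa"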
I am currently running a Chef Server.
There are two ways to access the server:
<HOSTNAME_OF_SERVER_OR_FQDN>
OR
<ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>
When I try to run knife ssl check, I get:
root@host:/opt/chef-server/embedded/jre# knife ssl check
Connecting to host <ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>:443
ERROR: The SSL certificate of <HOSTNAME_OF_SERVER_OR_FQDN> could not be verified
Certificate issuer data: /C=US/ST=MA/L=Boston/O=YouCorp/OU=Operations/CN=<HOSTNAME_OF_SERVER_OR_FQDN>.com/emailAddress=you@example.com
Configuration Info:
OpenSSL Configuration:
* Version: OpenSSL 1.0.1p 9 Jul 2015
* Certificate file: /opt/chefdk/embedded/ssl/cert.pem
* Certificate directory: /opt/chefdk/embedded/ssl/certs
Chef SSL Configuration:
* ssl_ca_path: nil
* ssl_ca_file: nil
* trusted_certs_dir: "/root/.chef/trusted_certs"
I want the knife ssl check command to succeed; basically, I want it to connect successfully using <ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>.
How can I add the CNAME to the current certificate, which I believe is /opt/chefdk/embedded/ssl/cert.pem?
One strange aspect of the certificate file is that when I try to read it and grep for the hostnames or CNAMEs, I do not find any:
# /opt/chef-server/embedded/jre/bin/keytool -printcert -file /opt/chefdk/embedded/ssl/cert.pem | grep <ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>
No result
# /opt/chef-server/embedded/jre/bin/keytool -printcert -file /opt/chefdk/embedded/ssl/cert.pem | grep <HOSTNAME_OF_SERVER_OR_FQDN>
No result
This is how I did it in the past.
The Chef server can be configured to use SSL certificates by adding the following settings to the server configuration file.
For example:
nginx['ssl_certificate'] = "/etc/pki/tls/certs/your-host.crt"
nginx['ssl_certificate_key'] = "/etc/pki/tls/private/your-host.key"
Save the file, and then run the following command:
$ sudo chef-server-ctl reconfigure
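If the certificate is self-signed or comes from an internal CA, you can also have knife trust it directly; knife ssl fetch stores the served certificate under trusted_certs_dir:
# Download the server's certificate into /root/.chef/trusted_certs, then re-check:
knife ssl fetch https://<ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>
knife ssl check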
Currently, I have set up a registry in the following manner:
docker run -d \
-p 10.0.1.4:443:5000 \
--name registry \
-v `pwd`/certs/:/certs \
-v `pwd`/registry:/var/lib/registry \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/certificate.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/private.key \
registry:latest
Using Docker version 17.06.2-ce, build cec0b72
I have obtained my certificate.crt, private.key, and ca_bundle.crt from Let's Encrypt, and I have been able to establish HTTPS connections using these certs on an nginx server, without having to explicitly trust the certificates on the client machine/browser.
Is it possible to setup a user experience with a docker registry similar to that of a CA certified website being accessed via https, where the browser/machine trusts the root CA and those along the chain, including my certificates?
Note:
I can of course trust the certificate on each client machine as described in this tutorial: https://docs.docker.com/registry/insecure/#use-self-signed-certificates. However, this is not an adequate solution for my needs.
Output of curl -v https://docks.behar.cloud/v2/:
* Trying 10.0.1.4...
* TCP_NODELAY set
* Connected to docks.behar.cloud (10.0.1.4) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: docks.behar.cloud
* Server certificate: Let's Encrypt Authority X3
* Server certificate: DST Root CA X3
> GET /v2/ HTTP/1.1
> Host: docks.behar.cloud
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 2
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< X-Content-Type-Options: nosniff
< Date: Sun, 10 Sep 2017 23:05:01 GMT
<
* Connection #0 to host docks.behar.cloud left intact
Short answer: Yes.
My issue was caused by my OS not having built-in trust of the root certificates that signed my SSL certificate. This is likely due to the age of my OS. See the answer from Matt for more information.
Docker will normally use the OS-provided CA bundle, so certificates signed by trusted roots should work without extra config.
Let's Encrypt certificates are cross-signed by an IdenTrust root certificate (DST Root CA X3), so most CA bundles should already trust them. The Let's Encrypt root cert (ISRG Root X1) is also distributed, but is not as widespread because it is more recent.
Docker 1.13+ will use the host system's CA bundle to verify certificates. Prior to 1.13 this may not happen if you have installed a custom root cert. So if curl works without any TLS warning, docker commands should work the same way.
To have DTR recognize the certificates, edit the configuration file so that your certs are specified correctly. DTR accepts, and has special parameters for, Let's Encrypt certs, and has specific requirements for them. You will need to make a configuration file and mount the appropriate directories; then there should be no further issues with insecure-registry errors and unrecognized certs.
...
http:
  addr: localhost:5000
  prefix: /my/nested/registry/
  host: https://myregistryaddress.org:5000
  secret: asecretforlocaldevelopment
  relativeurls: false
  tls:
    certificate: /path/to/x509/public
    key: /path/to/x509/private
    clientcas:
      - /path/to/ca.pem
      - /path/to/another/ca.pem
    letsencrypt:
      cachefile: /path/to/cache-file
      email: emailused@letsencrypt.com
...
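A sketch of running the stock registry image with a config like the one above (the host paths are assumptions; /etc/docker/registry/config.yml is where the registry image reads its config by default):
# Mount the custom config over the image default and persist the Let's
# Encrypt cache (point the cachefile setting above into this volume):
docker run -d \
  -p 443:5000 \
  --name registry \
  -v `pwd`/config.yml:/etc/docker/registry/config.yml \
  -v `pwd`/letsencrypt:/letsencrypt \
  registry:latest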
I have a UCC SSL certificate which holds up to 10 domain names, and when I browse to a page secured by it I get an ERR_CONNECTION_REFUSED error.
Here's what I've done:
Purchased a UCC certificate
Created a Heroku SSL endpoint
Uploaded the certificate: heroku certs:add server.crt server.key
Checked the certs: heroku certs:info
Info returns:
Fetching SSL Endpoint blah-1234.herokussl.com info for app-name... done
Certificate details:
Common Name(s):domain.com
www.domain.com
others......
Expires At: 2015-05-25 23:48 UTC
Issuer: /OU=Domain Control Validated/CN=www.domain.com
Starts At: 2014-05-25 23:48 UTC
Subject: /OU=Domain Control Validated/CN=www.domain.com
SSL certificate is verified by a root authority.
heroku certs
Gives:
Endpoint Common Name(s) Expires Trusted
------------------------ ------------------------------------------------------------------------------------------- -------------------- -------
blah-1234.herokussl.com www.domain.com, domain.com, .......... 2015-05-25 23:48 UTC True
Updated the domain to point to blah-1234.herokussl.com with a CNAME:
www  blah-1234.herokussl.com  TTL 1hr
Waited for the TTL to pass
Ran a curl test: curl -kvI https://www.domain.com
Response:
* About to connect() to www.domain.com port 443 (#0)
* Trying 50.19.XXX.XXX...
* Connection refused
* Trying 54.204.XXX.XXX...
* Connection refused
* Trying 23.21.XXX.XXX...
* Connection refused
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
The plain HTTP URLs work fine.
Any idea why this is happening? Is it because I'm using a UCC certificate?
The tech support folks at Heroku suggested removing and recreating the SSL endpoint add-on, and that solved the problem.
* Connection refused
These are not certificate errors: either your web server is not listening on port 443, or there is a firewall in between that is blocking connections.
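A quick way to tell which it is (if the TCP connection itself fails, TLS never starts, so no certificate is involved):
# Succeeds only if something is listening and reachable on port 443:
nc -vz www.domain.com 443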