Certificate chain - is my Intermediate correct - ssl

There is one thing I could not find an answer to. I've just received a Comodo SSL certificate (a .crt file and a key) from a client to install on the web server, but I did not receive an intermediate certificate. The certificate's issuer CN is:
Extended Validation Secure Server CA
and I did find this intermediate on the Comodo website:
https://support.comodo.com/index.php?/Knowledgebase/Article/View/931/91/intermediate-2-comodo-ev-secure-server-ca
How can I check whether this particular certificate was issued by this intermediate?
I was trying
openssl verify -verbose -purpose sslserver -CAfile comodoextendedvalidationsecureserverca.crt my_certificate.crt
but got this error:
error 20 at 0 depth lookup:unable to get local issuer certificate
That is what I would expect if the validation fails. But surprisingly I got a similar error (error 2 at 1 depth lookup:unable to get issuer certificate) when I tried the same command on a certificate/intermediate pair I'm sure is correct.
I want to make sure I've run out of options for finding the proper intermediate before I start nagging my client.

As Patrick suggested:
openssl verify -purpose sslserver -untrusted <Intermediate_file.crt> <certificate_file.crt>
is a good way to go. Thanks.
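If you just want a quick sanity check that the intermediate really is the issuer of the leaf certificate, comparing the leaf's issuer with the intermediate's subject also works (file names as in the question):
openssl x509 -in my_certificate.crt -noout -issuer -issuer_hash
openssl x509 -in comodoextendedvalidationsecureserverca.crt -noout -subject -subject_hash
If the issuer/subject strings and the two hashes match, this is the right intermediate.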


ca-bundle.crt seems updated but CA still missing in it

I am trying to connect to a web server from my CentOS machine, but I get an error regarding the certificate:
curl https://mywebsite fails with error 60: Peer's certificate issuer is not recognized.
Looking at /etc/pki/tls/certs/ca-bundle.crt, my website's certificate issuer is missing, which is why I get the error. I have not been able to add my CA certificate (CA_issuer.crt) to the ca-bundle.crt.
Verifying with curl --cacert /path/to/my/CA_issuer.crt https://mywebsite succeeds, so the CA certificate itself works.
So I tried to add my CA_issuer.crt to the ca-bundle.crt:
I put my CA_issuer.crt in /etc/pki/ca-trust/source/anchors/CA_issuer.crt and ran update-ca-certificate. I tried the following:
update-ca-certificate enable
update-ca-certificate force enable
update-ca-certificate extract
My /etc/pki/tls/certs/ca-bundle.crt seems to have been updated (the last-modified date is current), but my CA certificate is still missing from the file and my curl test still fails.
My certificate is a CA certificate: it has X509v3 Basic Constraints: CA:TRUE.
Running openssl verify on my CA_issuer.crt gives:
error 18 at 0 depth lookup:self signed certificate
OK
This CA certificate is deployed on several servers without issue.
I only have a couple of servers with this issue.
Any help in finding a solution is welcome.
Thank you.
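For reference, a minimal sketch of the CentOS 7 trust-store workflow being attempted here, assuming the CA_issuer.crt file name from above (update-ca-trust and the trust tool come from the ca-certificates and p11-kit packages):
sudo cp CA_issuer.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract    # regenerates /etc/pki/tls/certs/ca-bundle.crt
trust list | grep -i "<your CA common name>"    # the anchor should appear here if it was picked up
curl https://mywebsite    # retest without --cacert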

Openssl certificate verification stricter/different than browsers? [duplicate]

I've put together a Linux (CentOS 7) server to serve eye-n-sky.net.
Serving content from that site to browsers on Win10 and Linux systems works beautifully. However, when I use openssl to access the site,
openssl s_client -connect eye-n-sky.net:443
the site certificate is rejected,
Verify return code: 21 (unable to verify the first certificate)
I've concluded that the way a browser verifies the certificate is different from what openssl does. Am I on the right track?
I've tested this on three different openssl instances (Debian, CentOS, FreeBSD) and got consistent results.
OpenSSL as a client to other sites, e.g. www.godaddy.com or microsoft.com, works fine and is able to verify the certificates against the installed CA chain.
Believing that I was missing a CA cert, I used the -CAfile option to specify the possibly missing cert, to no effect.
What am I missing? I'm guessing that openssl has a stricter verification discipline, but I don't know where that gets configured.
Thanks,
Andy
Summary: yes, eye-n-sky was providing only its cert when it needed to include the intermediate and root certs.
However, it took me forever to figure out that my Apache version did not support including the chain in the server cert file. Instead, I had to provide the chain file separately in an SSLCertificateChainFile directive.
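For reference, a sketch of the relevant virtual-host directives (paths are illustrative; SSLCertificateChainFile is the pre-2.4.8 way, since only Apache 2.4.8 and later let you append the chain to the file named in SSLCertificateFile):
SSLEngine on
SSLCertificateFile      /etc/pki/tls/certs/eye-n-sky.net.crt
SSLCertificateKeyFile   /etc/pki/tls/private/eye-n-sky.net.key
SSLCertificateChainFile /etc/pki/tls/certs/eye-n-sky-chain.crt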
OpenSSL's command-line s_client utility has nothing built in to validate the server's certificate. Browsers have a built-in list of trusted certificates to verify the server certificate against.
You have to supply the trusted certificates using options such as -CAfile file or -CApath directory. Per the OpenSSL 1.1.1 s_client man page:
-CApath directory
The directory to use for server certificate verification. This
directory must be in "hash format", see verify(1) for more
information. These are also used when building the client certificate
chain.
-CAfile file
A file containing trusted certificates to use during server
authentication and to use when attempting to build the client
certificate chain.
Note the use of words such as "certificate chain". If you go to godaddy.com you'll see that the server's cert is for *.godaddy.com, but it was signed by Go Daddy Secure Certificate Authority - G2, and that intermediate certificate was signed by Go Daddy Root Certificate Authority - G2 - a different certificate. There's a total of three certificates in that chain.
Verify return code 21 is "no signatures could be verified because the chain contains only one certificate and it is not self signed", so if your CA file only had the certificate from Go Daddy Root Certificate Authority - G2 and not the one from Go Daddy Secure Certificate Authority - G2, OpenSSL would see from the server's cert itself that it was signed by Go Daddy Secure Certificate Authority - G2 and could go no further - it doesn't have that cert to see who signed it.
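As a sketch of that last point (file names are illustrative, assuming you have saved the intermediate and root certificates locally): if the CA file handed to s_client contains the intermediate as well as the root, OpenSSL can complete the chain even when the server only sends its own certificate:
cat godaddy_root_g2.pem godaddy_secure_ca_g2.pem > trusted.pem
openssl s_client -connect eye-n-sky.net:443 -CAfile trusted.pem </dev/null
The verify return code should then come back as 0 (ok).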

Can't make Guzzle accept a certificate

I'm trying to have a server A communicate with a server B through HTTPS requests. Server B has a certificate that was issued to me by my employer, and connecting to it through both Safari and Chrome works without any issues.
However, when trying to send a request from A to B through Guzzle, I get the following error:
GuzzleHttp/Exception/RequestException with message 'cURL error 60: SSL certificate problem:
unable to get local issuer certificate (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)'
I've tried setting the cert file as a parameter (['verify' => '/path/to/cert.pem']), but, first of all, I only had .crt, .csr and .key files; I tried making a .pem file by following these instructions I found elsewhere:
(optional) Remove the password from the Private Key by following the steps listed below:
openssl rsa -in server.key -out nopassword.key
Note: Enter the pass phrase of the Private Key.
Combine the private key, public certificate and any 3rd party intermediate certificate files:
cat nopassword.key > server.pem
cat server.crt >> server.pem
Note: Repeat this step as needed for third-party certificate chain files, bundles, etc:
cat intermediate.crt >> server.pem
This didn't work – the error's the same. The request works with 'verify' set to false, but that's obviously not an option for production.
Certificates are not something I usually work with, so I'm having a lot of trouble just figuring out where the issue might lie, let alone fix it. Any help would be much appreciated.
Edit
I've also tried the solutions suggested in Guzzle Curl Error 60 SSL unable to get local issuer to no avail.
This was happening because the only certificate I had configured on server B was the End User certificate.
I'm new to this, so my explanation will probably be flawed, but from my understanding End User certificates link back to a trusted Certificate Authority (CA) certificate, with zero or more intermediate certificates in-between. Browsers can figure out this certificate chain, and download the required certificates that are missing; cURL does not.
Therefore, the solution was configuring Server B with the missing certificates. How to do this is a whole different issue, so I won't go into it in this answer.
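If you want to confirm what a server is actually sending, openssl's -showcerts option is a quick check (hostname is illustrative):
openssl s_client -connect server-b.example.com:443 -servername server-b.example.com -showcerts </dev/null
If only the end user (leaf) certificate appears under "Certificate chain", the intermediates are missing on the server.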

Unable To Trust Self-Signed SSL Certificate

I have an application running on Centos7 that needs to connect to a remote host over HTTPS. However, it is unable to verify the certificate and fails. Also, if I try to download a file from the server using wget, I get the below error:
[root@foo ~]# wget https://10.65.127.9/index.html
--2017-05-22 09:03:01-- https://10.65.127.9/index.html
Connecting to 10.65.127.9:443... connected.
ERROR: cannot verify 10.65.127.9's certificate, issued by ‘/CN=us6877vnxe7827’:
Unable to locally verify the issuer's authority.
To connect to 10.65.127.9 insecurely, use `--no-check-certificate'.
So I get the certificate from the host:
openssl s_client -connect 10.65.127.9:443 <<<'' | openssl x509 -out /etc/pki/ca-trust/source/anchors/mycert.pem
And execute the following to process it:
update-ca-trust extract
This, however, results in the same issue. If I run:
openssl s_client -connect 10.65.127.9:443 -showcerts -debug
I do get some errors and various messages:
depth=0 CN = us6877vnxe7827
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = us6877vnxe7827
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
 0 s:/CN=us6877vnxe7827
   i:/CN=us6877vnxe7827
---
Server certificate
subject=/CN=us6877vnxe7827
issuer=/CN=us6877vnxe7827
---
No client certificate CA names sent
---
Verify return code: 21 (unable to verify the first certificate)
Any ideas what I may be missing? If any further info helps, please let me know.
For wget you need to provide the certificate authority (CA) certificate(s) that signed the HTTPS server certificate. If you have those CA certificates, add them with the --ca-certificate=file or --ca-directory=directory option.
If you don't have them and you want to skip HTTPS server certificate verification (insecure and can be dangerous), then use the --no-check-certificate option.
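For example, assuming the certificate extracted above really is the issuing certificate (here the server certificate appears to be self-signed, so it is its own issuer):
wget --ca-certificate=/etc/pki/ca-trust/source/anchors/mycert.pem https://10.65.127.9/index.html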
I had the same problem with Jenkins trying to connect to our GitLab server.
The server does have a valid official certificate in our case, but Java didn't accept it.
You are right about downloading the certificate.
However, the application you are mentioning is probably running inside a Java Virtual Machine (as a lot of applications are).
So once you have downloaded the certificate to a PEM file, you may have to add it to the JVM's trusted certificates instead.
This article describes how to do that. Hope it helps.
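For what it's worth, a minimal sketch of importing a PEM certificate into the JVM's default trust store with keytool (alias and path are illustrative; the default store password is normally changeit, and on Java 8 the cacerts file lives under jre/lib/security instead):
keytool -importcert -trustcacerts -alias my-internal-ca -file /path/to/mycert.pem -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit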

OpenSSL: unable to verify the first certificate for Experian URL

I am trying to verify an SSL connection to Experian in Ubuntu 10.10 with OpenSSL client.
openssl s_client -CApath /etc/ssl/certs/ -connect dm1.experian.com:443
The problem is that the connection closes with a Verify return code: 21 (unable to verify the first certificate).
I've checked the certificate list, and the Certificate used to sign Experian (VeriSign Class 3 Secure Server CA - G3) is included in the list.
/etc/ssl/certs/ca-certificates.crt
Yet I don't know why it is not able to verify the first certificate.
The entire response could be seen here:
https://gist.github.com/1248790
The first error message is telling you more about the problem:
verify error:num=20:unable to get local issuer certificate
The issuing certificate authority of the end entity server certificate is
VeriSign Class 3 Secure Server CA - G3
Look closely in your CA file: you will not find this certificate, since it is an intermediate CA. What you found was the similarly named G3 Public Primary CA of VeriSign.
But why does the other connection succeed, but this one doesn't? The problem is a misconfiguration of the servers (see for yourself using the -debug option). The "good" server sends the entire certificate chain during the handshake, therefore providing you with the necessary intermediate certificates.
But the server that is failing sends you only the end entity certificate, and OpenSSL is not capable of downloading the missing intermediate certificate "on the fly" (which would be possible by interpreting the Authority Information Access extension). Therefore your attempt fails using s_client, but it would nevertheless succeed if you browse to the same URL using e.g. Firefox (which does support the "certificate discovery" feature).
Your options to solve the problem are either fixing this on the server side by making the server send the entire chain, too, or by passing the missing intermediate certificate to OpenSSL as a client-side parameter.
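A sketch of the client-side workaround (the intermediate's file name is illustrative): append the missing VeriSign intermediate to a copy of the system bundle and point s_client at it:
cat /etc/ssl/certs/ca-certificates.crt verisign_class3_secure_server_ca_g3.pem > ca-plus-intermediate.pem
openssl s_client -CAfile ca-plus-intermediate.pem -connect dm1.experian.com:443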
Adding additional information to emboss's answer.
To put it simply, there is an incorrect cert in your certificate chain.
For example, your certificate authority will have most likely given you 3 files.
your_domain_name.crt
DigiCertCA.crt # (Or whatever the name of your certificate authority is)
TrustedRoot.crt
You most likely combined all of these files into one bundle.
-----BEGIN CERTIFICATE-----
(Your Primary SSL certificate: your_domain_name.crt)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Your Intermediate certificate: DigiCertCA.crt)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Your Root certificate: TrustedRoot.crt)
-----END CERTIFICATE-----
If you create the bundle, but use an old, or an incorrect version of your Intermediate Cert (DigiCertCA.crt in my example), you will get the exact symptoms you are describing.
SSL connections appear to work from browser
SSL connections fail from other clients
Curl fails with error: "curl: (60) SSL certificate : unable to get local issuer certificate"
openssl s_client -connect gives error "verify error:num=20:unable to get local issuer certificate"
Redownload all certs from your certificate authority and make a fresh bundle.
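One way to sanity-check the bundle you rebuild (file name illustrative) is to print the subject and issuer of every certificate in it, in order; each certificate's issuer should match the subject of the certificate that follows it:
openssl crl2pkcs7 -nocrl -certfile your_bundle.crt | openssl pkcs7 -print_certs -noout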
I came across the same issue installing my signed certificate on an Amazon Elastic Load Balancer instance.
All seemed fine via a browser (Chrome), but accessing the site via my Java client produced the exception javax.net.ssl.SSLPeerUnverifiedException.
What I had not done was provide a "certificate chain" file when installing my certificate on my ELB instance (see https://serverfault.com/questions/419432/install-ssl-on-amazon-elastic-load-balancer-with-godaddy-wildcard-certificate)
We were only sent our signed public key from the signing authority, so I had to create my own certificate chain file. Using my browser's certificate viewer panel I exported each certificate in the signing chain. (The order of the certificates in the chain is important, see https://forums.aws.amazon.com/message.jspa?messageID=222086)
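For anyone assembling the chain file by hand like this, the concatenation itself is just (file names are illustrative; whether the root also belongs at the end depends on what the load balancer expects):
cat godaddy_intermediate.pem godaddy_root.pem > certificate_chain.pem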
Here is what you can do:
Exim SSL certificates
By default, the /etc/exim.conf will use the cert/key files:
/etc/exim.cert
/etc/exim.key
so if you're wondering where to set your files, that's where.
They're controlled by the exim.conf's options:
tls_certificate = /etc/exim.cert
tls_privatekey = /etc/exim.key
Intermediate Certificates
If you have a CA root certificate (CA bundle, chain, etc.), add its contents to exim.cert after your actual certificate.
Probably a good idea to make sure you have a copy of everything elsewhere in case you make an error.
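A sketch of that append step, assuming the new certificate and the CA bundle were uploaded to /root (paths are illustrative):
cp /etc/exim.cert /etc/exim.cert.bak
cat /root/your_domain.crt /root/ca_bundle.crt > /etc/exim.cert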
Dovecot and ProFtpd should also read it correctly, so dovecot no longer needs the ssl_ca option.
So for both cases there is no need to make any changes to either exim.conf or dovecot.conf (/etc/dovecot/conf/ssl.conf).
If you are using macOS, use:
sudo cp /usr/local/etc/openssl/cert.pem /etc/ssl/certs
After this, the "Trust anchor not found" error disappears.
For those using zerossl.com certificates, drag and drop all certificates (as is) to their respective folders.
Cutting and pasting text into existing files may cause UTF-8 issues, depending on the OS, format and character spacing.