CloudKit endpoint SSL certificate hostname issue

I'm using Apache HttpClient 4.5.2 to make CloudKit Web API requests. When I send requests to the CloudKit endpoint URL (https://api.apple-cloudkit.com/[...]), I get the following error:
javax.net.ssl.SSLException: Certificate for <api.apple-cloudkit.com> doesn't match any of the subject alternative names: [*.icloud.com]
I'm no expert on SSL, but it looks like Apple is serving a certificate for api.apple-cloudkit.com that's only valid for *.icloud.com. Am I understanding that right?
Or if the certificate is correct, then why is HttpClient complaining?

It looks like the library you use or the underlying platform does not support Server Name Indication (SNI). Without SNI you get:
$ openssl s_client -connect api.apple-cloudkit.com:443 | openssl x509 -text
...
Subject: CN=*.icloud.com,
But when using SNI you get a different certificate:
$ openssl s_client -connect api.apple-cloudkit.com:443 \
-servername api.apple-cloudkit.com | openssl x509 -text
...
Subject: ...CN=cdn.apple-cloudkit.com
X509v3 Subject Alternative Name:
DNS:api.apple-cloudkit.com, DNS:cdn.apple-cloudkit.com
Older versions of the Apache HttpClient library are known to lack SNI support, so make sure you use a recent version. Note that the JVM matters too: client-side SNI only arrived in Java 7, so even a recent HttpClient cannot send SNI when running on Java 6.
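A quick way to reproduce the two handshakes from the shell and compare just the served subjects (same endpoint as above):
# without SNI (what an old client sends)
echo | openssl s_client -connect api.apple-cloudkit.com:443 2>/dev/null | openssl x509 -noout -subject
# with SNI
echo | openssl s_client -connect api.apple-cloudkit.com:443 -servername api.apple-cloudkit.com 2>/dev/null | openssl x509 -noout -subject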

Related

SSL_connect:error in SSLv3 read server hello A [duplicate]

I am running Windows Vista and am attempting to connect via https to upload a file in a multipart form, but I am having some trouble with the local issuer certificate. I am just trying to figure out why this isn't working now, and will go back to my cURL code later after this is worked out. I'm running the command:
openssl s_client -connect connect_to_site.com:443
It gives me a digital certificate from VeriSign, Inc., but also spits out an error:
Verify return code: 20 (unable to get local issuer certificate)
What is the local issuer certificate? Is that a certificate from my own computer? Is there a way around this? I have tried using -CAfile mozilla.pem but it still gives me the same error.
I had the same problem and solved it by passing the path to a directory where CA certificates are stored. On Ubuntu it was:
openssl s_client -CApath /etc/ssl/certs/ -connect address.com:443
Solution:
You must explicitly add the parameter -CAfile your-ca-file.pem.
Note: I also tried the -CApath parameter mentioned in other answers, but it does not work for me.
Explanation:
The error unable to get local issuer certificate means that openssl does not know your root CA certificate.
Note: If you have a web server serving multiple domains, do not forget to also add the -servername your.domain.net parameter. This parameter will "Set TLS extension servername in ClientHello". Without this parameter, the response will always contain the default SSL certificate (not the certificate that matches your domain).
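Putting the two parameters together, a typical invocation looks like this (your-ca-file.pem and your.domain.net are placeholders):
# verify against your own CA file and request the right virtual host via SNI
openssl s_client -CAfile your-ca-file.pem -servername your.domain.net -connect your.domain.net:443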
This error also happens if you're using a self-signed certificate with a keyUsage missing the value keyCertSign.
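You can inspect that extension on the certificate itself (cert.pem standing in for your certificate file); keyCertSign shows up as "Certificate Sign" in the text output:
# print the X509v3 Key Usage extension of the certificate
openssl x509 -in cert.pem -noout -text | grep -A1 "X509v3 Key Usage"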
Is your server configured for client authentication? If so you need to pass the client certificate while connecting with the server.
I had the same problem on OS X with OpenSSL 1.0.1i from MacPorts, and also had to specify -CApath as a workaround (as mentioned in the Ubuntu bug report, even an invalid -CApath will make openssl look in the default directory).
Interestingly, connecting to the same server using PHP's openssl functions (as used in PHPMailer 5) worked fine.
Put your CA and root certificates in /usr/share/ca-certificates or /usr/local/share/ca-certificates.
Then
dpkg-reconfigure ca-certificates
or even reinstall the ca-certificates package with apt-get.
After doing this, your certificate is collected into the system's bundle:
/etc/ssl/certs/ca-certificates.crt
Then everything should be fine.
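A minimal sketch of that flow on Debian/Ubuntu (my-root-ca.crt and example.com are placeholders):
# copy the CA into the local trust directory and rebuild the bundle
sudo cp my-root-ca.crt /usr/local/share/ca-certificates/
sudo dpkg-reconfigure ca-certificates
# the handshake should now verify against the regenerated bundle
openssl s_client -CAfile /etc/ssl/certs/ca-certificates.crt -connect example.com:443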
With client authentication:
openssl s_client -cert ./client-cert.pem -key ./client-key.key -CApath /etc/ssl/certs/ -connect foo.example.com:443
Create the certificate chain file with the intermediate and root ca.
cat intermediate/certs/intermediate.cert.pem certs/ca.cert.pem > intermediate/certs/ca-chain.cert.pem
chmod 444 intermediate/certs/ca-chain.cert.pem
Then verify:
openssl verify -CAfile intermediate/certs/ca-chain.cert.pem \
intermediate/certs/www.example.com.cert.pem
www.example.com.cert.pem: OK
Deploy the certificate chain.
I faced the same issue.
It got fixed by making the issuer field in the certificate exactly match the subject of the issuer certificate.
So please check that the issuer of the certificate (cert.pem) equals the subject of the issuer (CA.pem):
openssl verify -CAfile CA.pem cert.pem
cert.pem: OK
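To compare the two fields directly, print each one (same cert.pem and CA.pem as above); they must match byte for byte:
# issuer of the leaf certificate
openssl x509 -in cert.pem -noout -issuer
# subject of the CA certificate
openssl x509 -in CA.pem -noout -subject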
I got this problem when my NGINX server did not have a complete certificate chain in the certificate file it was configured with.
My solution was to find a similar server and extract the certificates from that server with something like:
openssl s_client -showcerts -CAfile my_local_issuer_CA.cer -connect my.example.com:443 > output.txt
Then I added the ASCII-armoured certificates from that output.txt file (except the machine certificate) to a copy of my machine's certificate file, pointed NGINX at that copied file instead, and the error went away.
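A quick way to count how many certificates a server actually sends (my.example.com is a placeholder); if it is only one, the chain is incomplete:
openssl s_client -showcerts -connect my.example.com:443 </dev/null 2>/dev/null | grep -c 'BEGIN CERTIFICATE'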
This error message means that either the CA bundle is not given (via -CAfile ...), or the CA bundle file is not terminated by a self-signed root certificate.
Don't worry: the connection to the server will still work even if you get this message from openssl s_client ... (assuming you make no other mistakes).
I would update #user1462586's answer by doing the following:
I think it is more suitable to use the update-ca-certificates command (included in the ca-certificates package) than dpkg-reconfigure.
So basically, I would change that useful answer to this:
Retrieve the certificate (as in this stackoverflow answer) and write it to the right directory:
# let's say we call it my-own-cert.crt
openssl s_client -CApath /etc/ssl/certs/ -connect <hostname.domain.tld>:<port> 2>/dev/null </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /usr/share/ca-certificates/my-own-cert.crt
Repeat the operation if you need other certificates.
For example, if you need CA certs for ldaps/starttls with Active Directory, see here for how to obtain them, then use openssl to convert the result to pem/crt:
openssl x509 -inform der -in LdapSecure.cer -out my-own-ca.pem
#and copy it in the right directory...
cp my-own-ca.pem /usr/share/ca-certificates/my-own-ca.crt
Add these certificates to the /etc/ca-certificates.conf configuration file:
echo "my-own-cert.crt" >> /etc/ca-certificates.conf
echo "my-own-ca.crt" >> /etc/ca-certificates.conf
Update the /etc/ssl/certs directory:
update-ca-certificates
Enjoy
Note that if you use private domain names instead of legitimate public domain names, you may need to edit your /etc/hosts file so that the corresponding FQDNs resolve.
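To confirm the new CA is picked up, repeat the connection and check the verify result (same placeholder host and port as above):
echo | openssl s_client -CApath /etc/ssl/certs/ -connect <hostname.domain.tld>:<port> 2>/dev/null | grep 'Verify return code'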
This can also be due to an SNI certificate-binding issue on the vServer or the server itself.

Unable to verify ssl certificate

I am not able to verify the webmaster account of one of my clients.
Google is saying "Verification failed - The connection to your server timed out."
When I tried to wget the URL, I got the error below. Can someone please help me resolve this?
[pdurgapal]$ wget https://atlanticdiscountstore.com
--2017-06-28 11:48:48-- https://atlanticdiscountstore.com
Resolving atlanticdiscountstore.com... 188.241.58.18
Connecting to atlanticdiscountstore.com|188.241.58.18|:443... connected.
ERROR: cannot verify atlanticdiscountstore.com’s certificate, issued by “/CN=baldwincountyunited.com”:
Self-signed certificate encountered.
ERROR: certificate common name “baldwincountyunited.com” doesn’t match requested host name “atlanticdiscountstore.com”.
To connect to atlanticdiscountstore.com insecurely, use ‘--no-check-certificate’.
[pdurgapal]$
You must be using a very old version of wget which has no support for SNI. When using a proper client with support for SNI, the certificate can be verified. Apart from that, the server is terribly slow in responding after the TLS handshake is successfully done, but this is not the issue you asked about.
To demonstrate the problem an access to the site without SNI:
$ openssl s_client -connect atlanticdiscountstore.com:443 |\
openssl x509 -text
...
Subject: CN=baldwincountyunited.com
...
X509v3 Subject Alternative Name:
DNS:baldwincountyunited.com, DNS:mail.baldwincountyunited.com, DNS:www.baldwincountyunited.com
and with SNI:
$ openssl s_client -connect atlanticdiscountstore.com:443 \
-servername atlanticdiscountstore.com |\
openssl x509 -text
...
Subject: ... CN=*.atlanticdiscountstore.com
...
X509v3 Subject Alternative Name:
DNS:*.atlanticdiscountstore.com, DNS:atlanticdiscountstore.com
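And to confirm that verification itself succeeds once SNI is sent (assuming the issuing CA is in the local trust store):
echo | openssl s_client -connect atlanticdiscountstore.com:443 -servername atlanticdiscountstore.com 2>/dev/null | grep 'Verify return code'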

s_client and gethostbyname failure

I am working with an external company. Let's call them evilcorp.com. I want to use openssl to debug a two-way SSL handshake.
https://evilcorp.com is setup to not require client authentication.
https://evilcorp.com/webservices is setup to require client authentication.
How can I specify this path in openssl? So basically this works:
openssl s_client -connect evilcorp.com:443
But this does not work and gives me a gethostbyname failure:
openssl s_client -connect evilcorp.com/webservices:443
How can I get this to work (if possible)?
You have a very simple error in the address. Here's the fix:
"openssl s_client -connect evilcorp.com:443/webservice"
You had the 443 at the end - it needs to go directly after the domain name.
I'm not sure if this can be done at all, but if it can, you first have to use openssl to connect to the host, specifying the client certificates up front. Then, inside the established connection, you need to speak HTTP to access the relevant page.
I.e. you first connect:
$ openssl s_client -connect host:port -cert cert.pem -key key.pem
... CONNECTED
... Verify return code...
---
And then access the URL using the HTTP protocol
GET /protected_page/ HTTP/1.0
Host: example.org
<empty line>
Note that the last line must be an empty line according to the HTTP protocol. You might also need to use the -crlf option in openssl to get the line endings correct in case you have a strict web server. If all goes right, the server should now issue a renegotiation request to the client, i.e. another TLS handshake is done.

Browser, s_client without SNI and expired certificate

When I access one of my subdomains: say https://foo.example.com in a browser and inspect the certificates, the certificate looks great. When I use openssl from a remote computer it shows an expired certificate. How can this be?
I tried to reproduce what was found in this question, but my scenario is different. When I run
echo | openssl s_client -showcerts -connect foo.example.com:443 2>&1 | grep Verify
I see:
Verify return code: 10 (certificate has expired)
When I run:
echo | openssl s_client -showcerts -connect foo.example.com:443 2>&1 | openssl x509 -noout -dates
I get:
notBefore=Sep 27 15:10:20 2014 GMT
notAfter=Sep 27 15:10:20 2015 GMT
It looks expired, but the browser doesn't show it as expired.
See the first comment by #jww. He pointed out that I needed to add -tls1 -servername foo.example.com to my openssl command. His comment:
Try adding -tls1 -servername foo.example.com. I'm guessing you have a front-end server that's providing a default domain for requests without SNI, and the default domain is routed to an internal server with the old certificate. When the browsers connect, they use SNI and get the server for which you have updated the certificate. Or, there could be an intermediate with an expired certificate in the chain that's being served. If you provide real information, it's easier for us to help you with problems like this.
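Applying that suggestion to the check from the question, the dates of the SNI-selected certificate can be read like this:
echo | openssl s_client -tls1 -servername foo.example.com -showcerts -connect foo.example.com:443 2>/dev/null | openssl x509 -noout -dates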

HTTPS call not using the correct SSL certificate for the corresponding site

I have two ColdFusion applications which run on an Apache web server:
https://site1.com
https://site2.com
Both have their own SSL certificate and are configured in the httpd-ssl.conf file with a name-based virtual host for each site.
When I make an HTTPS call from site1.com to site2.com,
httpService = new http();
httpService.setMethod("get");
httpService.setUrl("https://site2.com/comp.cfc?method=amethod&ID=12");
result = httpService.send().getPrefix();
it gives the following error
I/O Exception: hostname in certificate didn't match: <site2.com> != <site1.com>
It should be using site2's SSL certificate, but it is picking up site1's certificate instead, hence the error.
This looks like a Server Name Indication (SNI) issue. SNI is a TLS extension that allows hosting several HTTPS sites at the same address.
You can confirm this issue using:
echo "" | openssl s_client -connect site2.com:443 | openssl x509 -noout -subject
If you see something like CN=site1.com try this:
echo "" | openssl s_client -connect site2.com:443 -servername site2.com | openssl x509 -noout -subject
If you get CN=site2.com, this is an SNI issue.
You can look at this bug, more specifically this comment:
The SNI support has been added in ColdFusion 11. The change required for supporting this is quite big and therefore it can't be backported to ColdFusion 10.
Other workarounds could be to host your two HTTPS sites on two separate servers, to set up a single SSL certificate valid for both names (using the X509 subjectAltName extension), or to disable certificate CN validation (if possible).
You need to import the SSL certificate into the ColdFusion/Java keystore. If this doesn't help, add -Djavax.net.debug=all to jvm.config for ColdFusion (this requires a CF service restart), then try the SSL call again.
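A minimal sketch of that import with keytool, assuming the default keystore password changeit and a placeholder path to ColdFusion's bundled JRE:
# fetch site2's certificate (with SNI) and import it into the JRE trust store
echo | openssl s_client -connect site2.com:443 -servername site2.com 2>/dev/null | openssl x509 > site2.cer
keytool -importcert -alias site2 -file site2.cer -keystore /path/to/coldfusion/jre/lib/security/cacerts -storepass changeit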