Checking server certificate with openssl versus a web request - apache

Working with a standard MediaTemple server setup with an installed GeoTrust domain certificate, I am getting different responses from openssl and from web requests.
Visiting the site with an online SSL checker, I get a good response and see my domain certificate and the full GeoTrust certificate chain.
When using
openssl s_client -connect subdomain.domain.com:443 -showcerts -ssl3
from my local machine I see
Server certificate
subject=/C=US/ST=Virginia/L=Herndon/O=Parallels/OU=Parallels Panel/CN=Parallels Panel/emailAddress=info@parallels.com
issuer=/C=US/ST=Virginia/L=Herndon/O=Parallels/OU=Parallels Panel/CN=Parallels Panel/emailAddress=info@parallels.com
and Verify return code: 18 (self signed certificate)
openssl version -d reports OPENSSLDIR: "/etc/pki/tls".
It's a CentOS 6.x box.
The Apache httpd.conf file points to a certificate and CA list in a completely different location, /usr/local/psa/var/certificates/, which would seem fine to me.
Where is openssl s_client finding the Parallels certificate? It is not located in /etc/pki/tls. Is there a way to configure the box so that openssl requests and Apache use the same server certificate?
Thanks in advance!

openssl s_client gets the certificate from the server during the SSL handshake. OPENSSLDIR is only the place where any (optional) configuration for the openssl tool is stored.
Note that you might get a different certificate with openssl than the one you have configured on your server, because you need to use SNI (Server Name Indication) like browsers do. This feature is used when multiple certificates are served behind the same IP. To use it with openssl, add the -servername hostname parameter and provide the name you expect. You must also remove the -ssl3 option, since it restricts the connection to SSL 3.0, which is not only insecure but also does not support SNI.
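For example, reusing the hostname from the question, the corrected invocation would look something like:
openssl s_client -connect subdomain.domain.com:443 -showcerts -servername subdomain.domain.com
With the SNI name supplied (and -ssl3 dropped), the server should present the GeoTrust-signed domain certificate rather than the Parallels Panel default.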

Turns out that MediaTemple servers maintain certs in two locations. The Apache server's conf files point to a CA file location that is different from where openssl maintains its CA files.
You can find the Apache location in the conf files, and the openssl location with
openssl version -d
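As a rough sketch (assuming the stock CentOS httpd layout; your conf path may differ), Apache's certificate directives can be located with:
grep -ri 'sslcertificatefile\|sslcacertificatefile' /etc/httpd/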
Within MediaTemple's web administration pages you can use Plesk to install the domain cert into the openssl location as the "server's" cert. The Apache server should already have the cert and CA files in the right location. The MediaTemple custom Apache configuration overrides the standard Apache setup, which would set Apache's cert locations to be the same as openssl's.

Related

Openssl certificate verification stricter/different than browsers?
I've put together a Linux (CentOS 7) server to serve eye-n-sky.net.
Serving content from that site to browsers on Win10 and Linux systems works beautifully. However, when I use openssl to access the site,
openssl s_client -connect eye-n-sky.net:443
the site certificate is rejected,
Verify return code: 21 (unable to verify the first certificate)
I've concluded that the way a browser verifies the certificate is different from what openssl does. Am I on the right track?
I've tested this on three different openssl instances (Debian, Centos, FreeBSD) and have consistent results.
Openssl as a client to other sites, e.g. www.godaddy.com or microsoft.com, works fine, being able to verify the certificate against the installed CA chain.
Believing that I was missing a CA cert, I used the -CAfile option to specify the possibly missing cert, to no effect.
What am I missing? I'm guessing that openssl has a stricter verification discipline, but I don't know where that gets configured.
Thanks,
Andy
Summary: yes, eye-n-sky was providing only its cert when it needed to include the intermediate and root certs.
However, it took me forever to figure out that my Apache version did not support including the chain in the server cert file. Instead, I had to provide the chain file separately in an SSLCertificateChainFile directive.
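For reference, a minimal sketch of that setup in the virtual host (file paths here are hypothetical):
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/eye-n-sky.net.crt
SSLCertificateKeyFile /etc/pki/tls/private/eye-n-sky.net.key
SSLCertificateChainFile /etc/pki/tls/certs/intermediate.crt
(On Apache 2.4.8 and newer, SSLCertificateChainFile is deprecated and the intermediates can simply be appended to the SSLCertificateFile.)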
OpenSSL's command-line s_client utility has nothing built in to validate the server's certificate. Browsers have a built-in list of trusted certificates to verify the server certificate against.
You have to supply the trusted certificates using options such as -CAfile file or -CApath directory. Per the OpenSSL 1.1.1 s_client man page:
-CApath directory
The directory to use for server certificate verification. This
directory must be in "hash format", see verify(1) for more
information. These are also used when building the client certificate
chain.
-CAfile file
A file containing trusted certificates to use during server
authentication and to use when attempting to build the client
certificate chain.
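For example, on a Debian-style system the distribution's CA bundle could be supplied like this (the bundle path varies by OS):
openssl s_client -connect eye-n-sky.net:443 -CAfile /etc/ssl/certs/ca-certificates.crt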
Note the use of words such as "certificate chain". If you go to godaddy.com you'll see that the server's cert is for *.godaddy.com, but it was signed by Go Daddy Secure Certificate Authority - G2, and that intermediate certificate was signed by Go Daddy Root Certificate Authority - G2 - a different certificate. There's a total of three certificates in that chain.
Verify return code 21 is "no signatures could be verified because the chain contains only one certificate and it is not self signed". So if your CA file only had the certificate from Go Daddy Root Certificate Authority - G2 and not the one from Go Daddy Secure Certificate Authority - G2, OpenSSL would see from the server's cert itself that it was signed by Go Daddy Secure Certificate Authority - G2, but could go no further: it doesn't have that cert to see who signed it.
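A quick way to count how many certificates the server actually sends (a sketch, assuming a Unix shell):
openssl s_client -connect eye-n-sky.net:443 -showcerts </dev/null 2>/dev/null | grep -c 'BEGIN CERTIFICATE'
If this prints 1, the server is presenting only its own certificate with no intermediates, which matches the verify error above.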

Root Certificate of website through openssl command

I am trying to obtain the root certificate of various websites for my project, but I am not sure whether the certificates I get back with this command contain the root certificate or not:
openssl s_client -showcerts -connect google.com:443
I was searching for an answer when I came across a post where wget was used to get the root certificate from GoDaddy's certificate repository:
wget https://certs.godaddy.com/repository/gd_bundle.crt -O ~/.cert/mail.nixcraft.net/gd.pem
How do I find the repository for every website?
The server must include the certification chain during the TLS connection (https). The chain may include the CA root certificate, but that is optional, so you have no guarantee that it will be available. The TLS protocol expects the client to already have the root certificate in its truststore to verify the trust.
You can download the server certificate of every site programmatically, but you still need to locate the root CA certificate yourself. As you can see, GoDaddy publishes them on its website. In many cases the certificate itself includes a reference for downloading the issuer's certificate (the Authority Information Access extension).
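As a sketch, you can inspect that extension on the certificate a server sends (assuming a Unix shell):
openssl s_client -connect google.com:443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A 2 'Authority Information Access'
The "CA Issuers" URI in the output, when present, points to the issuing certificate one step up the chain.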

Securing a private IP address (https certificate)

I have an unusual use case:
- a web server on the Internet is serving pages through HTTPS,
- inside those web pages, there are XMLHttpRequest calls to a locally connected device (IP over USB),
- the device supports both HTTP and HTTPS,
- the device is accessible at http(s)://192.168.0.1,
- the HTTP calls fail because of insecure content in an HTTPS page,
- the HTTPS calls fail because the certificate is not trusted (self-signed).
Side question: since the device is locally connected to the PC, the encryption is pretty useless. Does an HTTP header exist that allows insecure connections to a specific URL? (like CORS for cross-domain)
Main question: is it possible to obtain a certificate for a private IP address?
Edit: it seems that Plex had a similar problem and solved it the way described on this blog. This is way too big for me.
Is it possible to obtain a certificate for a private IP address?
A certificate can be bound to an IP address (see this). You can issue a self-signed certificate to a private address, but a trusted CA will not issue a certificate for a private address because it cannot verify its identity.
For example, a certificate issued to 192.168.0.1 would theoretically be valid in any context, so this should not be allowed by a trusted CA.
Plex solves the problem with Dynamic DNS and a wildcard certificate. The connections are made using the name (not the IP) of the device, which resolves to the private IP.
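As an illustration of the scheme (the hash segment is per-user; this is not a literal hostname), a name like 192-168-0-1.<hash>.plex.direct resolves to the private address 192.168.0.1, and a wildcard certificate for *.<hash>.plex.direct makes the HTTPS connection trusted.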
Does an HTTP header exist that allows insecure connections to a specific URL? (like CORS for cross-domain)
No, it does not exist. The browser blocks your XHR connections because they are HTTP connections initiated from an HTTPS page (mixed-content warning). Non-secure content can theoretically be read or modified by attackers, even though the parent page is served over HTTPS, so it is normal and recommended that the browser warns the user.
To fix the mixed-content and HTTPS errors, you could serve the content through HTTPS with a self-signed certificate, and ask users to import your root CA into their browser.
A publicly trusted SSL certificate cannot be issued for reserved IP addresses (RFC 1918 and RFC 4193 ranges), private IP addresses (IPv4 or IPv6), or internal server names with a non-public domain name suffix.
You could however use a 'self-signed' certificate. Here's how to create one:
Creating a Self-signed Certificate for a private IP
(example https://192.168.0.1) :
You need OpenSSL installed.
For example, on Ubuntu, you could install it by: sudo apt-get install openssl
(It may already be installed. Type "openssl version" to find out)
For Windows, you could try this: https://slproweb.com/products/Win32OpenSSL.html
Once OpenSSL is installed, go to OpenSSL prompt by entering 'openssl' on the console (LINUX), or the cmd prompt (WINDOWS).
$ openssl
OpenSSL>
Now do the following steps, using the commands below: create a private key, create a certificate request, self-sign the certificate, and put it all together.
i) Create KEY called mydomain.key:
OpenSSL> genrsa -out mydomain.key 2048
ii) Use the key to create a Certificate request called mydomain.csr
You could accept the default options, or specify your own information:
OpenSSL> req -new -key mydomain.key -out mydomain.csr
iii) Use the above to create a certificate:
OpenSSL> x509 -req -days 1825 -in mydomain.csr -signkey mydomain.key -out mydomain.crt
iv) Put all of the above together to create a PEM certificate:
exit OpenSSL (OpenSSL> q), go to the certificate location, and run:
$ cat mydomain.key mydomain.crt > mylabs.com.pem
mylabs.com.pem is your self-signed certificate. You can use this in requests like https://192.168.0.1 if your server supports https. Remember to check the port number for https(443).
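With OpenSSL 1.1.1 or newer, the same result can be produced in a single command; note that modern browsers expect the IP in a subjectAltName rather than only in the CN (a sketch, adjust the IP to your device):
openssl req -x509 -newkey rsa:2048 -nodes -days 1825 -keyout mydomain.key -out mydomain.crt -subj "/CN=192.168.0.1" -addext "subjectAltName = IP:192.168.0.1"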

HTTPS call not referring to the correct SSL certificate for the corresponding site

I have two ColdFusion applications which run on an Apache web server:
https://site1.com
https://site2.com
Both have their own SSL certificates and are configured in the httpd-ssl.conf file with a name-based virtual host for each site.
When I do an HTTPS call from site1.com to site2.com,
httpService = new http();
httpService.setMethod("get");
httpService.setUrl("https://site2.com/comp.cfc?method=amethod&ID=12");
result = httpService.send().getPrefix();
it gives the following error
I/O Exception: hostname in certificate didn't match: <site2.com> != <site1.com>
It should use site2's SSL certificate, but I'm not sure why it is using site1's certificate and giving this error.
This looks like a Server Name Indication (SNI) issue. SNI is a TLS extension that allows hosting several HTTPS servers behind the same IP address.
You can confirm this issue using:
echo "" | openssl s_client -connect site2.com:443 | openssl x509 -noout -subject
If you see something like CN=site1.com try this:
echo "" | openssl s_client -connect site2.com:443 -servername site2.com | openssl x509 -noout -subject
If you get CN=site2.com, this is an SNI issue.
You can look at this bug, more specifically this comment:
The SNI support has been added in ColdFusion 11. The change required for supporting this is quite big and therefore it can't be backported to ColdFusion 10.
Other workarounds could be to host your two HTTPS sites on two separate servers, to set up a single SSL certificate valid for both names (using the X509 SubjectAltName extension), or to disable certificate CN validation (if possible).
You need to import the SSL certificate into the ColdFusion/Java keystore. If this doesn't help, add -Djavax.net.debug=all in jvm.config for ColdFusion; this requires a CF service restart. Then try the SSL call.
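A sketch of the import, assuming a hypothetical path to the ColdFusion JRE keystore and its usual default password (both vary by install):
keytool -importcert -alias site2 -file site2.crt -keystore /path/to/coldfusion/jre/lib/security/cacerts -storepass changeit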

What Keys/Certs need to be placed at Server/Client Side in HTTPS Connection

I need to check that HTTPS works on my local machine. I am using openssl s_client for that.
I have cert/key files generated with OpenSSL on CentOS.
I am using an Apache server on Windows. I am able to connect successfully with self-signed certs,
but I am getting issues with CA-signed ones.
Can anybody please answer the following questions?
What files do I need to place in the Apache directory? I have placed the CA cert, the CA-signed cert, and the server private key in the Apache directory.
On the client side, i.e. CentOS, I have a directory containing the CA cert, the CA-signed client cert, and the client private key.
I am using the commands below, which work fine for self-signed certs (with no CA):
openssl s_client -connect client_IP:8443 -CAfile server-selfsigned.pem
openssl s_client -connect client_IP:8444 -key client.key -cert selfsigned-client.pem -CAfile server-selfsigned.pem
But with CA-signed certificates I am confused about which files need to be placed on the Apache side and which files are needed on the client (CentOS) side to create a connection.
Thanks in Advance
It depends on the kind of authentication you are doing. I used Apache Tomcat as the server on my machine and didn't place any individual cert files in the Tomcat directory; I only placed JKS files, which contain all the necessary certs, in the Tomcat root directory.
I know my reply may be hard to understand, but could you edit your question and add some more details?
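For illustration, a JKS of that kind can be assembled with keytool (file names, aliases, and passwords here are hypothetical):
keytool -genkeypair -alias server -keyalg RSA -keysize 2048 -keystore server.jks -storepass changeit
keytool -certreq -alias server -file server.csr -keystore server.jks -storepass changeit
(have the CA sign server.csr, producing server.crt)
keytool -importcert -alias ca -file ca.crt -keystore server.jks -storepass changeit
keytool -importcert -alias server -file server.crt -keystore server.jks -storepass changeit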