Is it possible to have each domain use multiple SSL certificates? When I google this, the top result is an article on how to use two ssl_certificates for two domains, but each domain is tied to one ssl_certificate. Is there a way to tie each domain to multiple certificates? The way I'd want it to work is to try the first SSL certificate, then if that fails try the second, and then fall back to other options. We attempted this using techniques from the article, but when we did, nginx gave us these warnings:
2016/12/30 20:31:41 [warn] 186#186: conflicting server name "domain1" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "domain1" on 0.0.0.0:443, ignored
2016/12/30 20:31:41 [warn] 186#186: conflicting server name "domain2" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "domain2" on 0.0.0.0:443, ignored
Why do we want to do this? The ssl_certificate refers to a file that allows access for one inbound domain, and we also want nginx to allow access from another domain. I don't know much about SSL/certificates. Is there an easy way to modify the ssl_certificate to allow multiple domains? That would be an alternative solution to this problem.
Only a single leaf certificate is served inside the TLS handshake. If validation of this certificate fails, the handshake fails. While many browsers will retry with a lower TLS protocol version as a fallback against broken servers, this is not intended as a mechanism for serving different certificates. Apart from that, almost no TLS implementations outside browsers implement this fallback.
Thus servers don't support serving multiple leaf certificates within a single host configuration. They usually do support different certificates for different subdomains, and it is also possible to have different servers for the same domain using different certificates (i.e. on a different IP address or port). Newer servers also allow a single configuration to hold both an RSA and an ECC certificate (i.e. ECDSA authentication), but in this case the server simply picks the relevant certificate based on which ciphers the client supports and still sends only a single leaf certificate.
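For example, nginx 1.11.0 and later accepts multiple ssl_certificate directives of different key types in one server block and picks between them based on the negotiated cipher. A minimal sketch, with placeholder paths:

server {
    listen 443 ssl;
    server_name example.com;

    # RSA certificate and key
    ssl_certificate     /etc/nginx/ssl/example.com.rsa.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.rsa.key;

    # ECDSA certificate and key; nginx serves whichever one
    # matches the ciphers the client offers
    ssl_certificate     /etc/nginx/ssl/example.com.ecdsa.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.ecdsa.key;
}

Even here, only one leaf certificate is ever sent in a given handshake.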
Related
I observed that a site example.com has a CNAME mapping to mysite.com. Both example.com and mysite.com have SSL certificates.
Correct me if I am wrong:
When a browser tries to connect to https://example.com, it checks DNS, finds the CNAME mapping to mysite.com, and connects to the mysite.com web server directly.
But when I inspected the browser, it showed an SSL certificate for the example.com domain. I am having trouble understanding this case.
If the request did not go to the example.com web server, how could the browser get the SSL certificate of example.com?
Or is my understanding of CNAME mapping wrong?
Or are example.com's private and public keys shared with the mysite.com web server?
DNS and TLS operate completely independently of each other.
TLS is used, among other things like encryption, to verify the identity of a server against its FQDN (Fully qualified domain name). This is done by checking whether the server in question is able to present a certificate, containing the FQDN, signed by a trusted certification authority (CA).
DNS is used to resolve host names to IP addresses, in order to establish network connections (like TCP connections) on a lower layer. How this resolution takes place is completely transparent to other components, like TLS. It does not matter whether the name resolution involves A, AAAA, or the mentioned CNAME record - in our context the input is always a single hostname, the output is always one (or more) IP addresses. Intermediate results, like CNAME mappings, are essentially discarded once name resolution is done.
This means that the TLS client always uses the FQDN initially requested by the user, regardless of any CNAME mappings, to verify the certificate. How to present a valid certificate is up to the server - sticking to your example, the server behind FQDN mysite.com will have to present a certificate valid for example.com in order for the client to accept it. How the private/public key of this certificate is generated, and whether it is shared with other certificates or servers, does not matter.
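You can observe this from the command line. A sketch, assuming OpenSSL 1.1.1+ for the -ext option and using the names from the question:

# ask mysite.com's server for a certificate while requesting the name example.com
openssl s_client -connect mysite.com:443 -servername example.com </dev/null 2>/dev/null |
    openssl x509 -noout -subject -ext subjectAltName

If the operator has installed a certificate valid for example.com there, the subject or subjectAltName printed will contain example.com, even though the TCP connection went to mysite.com.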
We would have to explicitly attach the SSL certificates of both domains to the web server/load balancer for both domains to support HTTPS.
To understand this, it's useful to be aware of SNI:
When multiple websites are hosted on one server and share a single IP address, and each website has its own SSL certificate, the server may not know which SSL certificate to show when a client device tries to securely connect to one of the websites. This is because the SSL/TLS handshake occurs before the client device indicates over HTTP which website it's connecting to.
Server Name Indication (SNI) is designed to solve this problem. SNI is an extension for the TLS protocol (formerly known as the SSL protocol), which is used in HTTPS. It's included in the TLS/SSL handshake process in order to ensure that client devices are able to see the correct SSL certificate for the website they are trying to reach. The extension makes it possible to specify the hostname, or domain name, of the website during the TLS handshake, instead of when the HTTP connection opens after the handshake.
From: https://www.cloudflare.com/en-gb/learning/ssl/what-is-sni/
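In practical terms, SNI is what lets a server such as nginx hold one certificate per server_name on the same IP and port; the "conflicting server name" warnings from the first question only appear when the same server_name is declared twice. A minimal sketch with placeholder names and paths:

server {
    listen 443 ssl;
    server_name domain1.example;
    ssl_certificate     /etc/nginx/ssl/domain1.example.crt;
    ssl_certificate_key /etc/nginx/ssl/domain1.example.key;
}

server {
    listen 443 ssl;
    server_name domain2.example;
    ssl_certificate     /etc/nginx/ssl/domain2.example.crt;
    ssl_certificate_key /etc/nginx/ssl/domain2.example.key;
}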
I have something like 100 similar websites on two VPSes. I would like to use HAProxy to switch traffic dynamically, but at the same time I would like to add an SSL certificate.
I want to use a variable to load the specific certificate for each website.
For example:
frontend web-https
bind 0.0.0.0:443 ssl crt /etc/ssl/certs/{{domain}}.pem
reqadd X-Forwarded-Proto:\ https
rspadd Strict-Transport-Security:\ max-age=31536000
default_backend website
I'd also like to check whether the SSL certificate is actually available, and if it is not, fall back to HTTP with a redirect.
Is this possible with HAProxy?
This can be done, but TLS (SSL) does not allow you to do it the way you envision.
First, HAProxy allows you to specify a default certificate and a directory for additional certificates.
From the documentation for the crt keyword:
If a directory name is used instead of a PEM file, then all files found in
that directory will be loaded in alphabetic order unless their name ends with
'.issuer', '.ocsp' or '.sctl' (reserved extensions). This directive may be
specified multiple times in order to load certificates from multiple files or
directories. The certificates will be presented to clients who provide a
valid TLS Server Name Indication field matching one of their CN or alt
subjects. Wildcards are supported, where a wildcard character '*' is used
instead of the first hostname component (eg: *.example.org matches
www.example.org but not www.sub.example.org).
If no SNI is provided by the client or if the SSL library does not support
TLS extensions, or if the client provides an SNI hostname which does not
match any certificate, then the first loaded certificate will be presented.
This means that when loading certificates from a directory, it is highly
recommended to load the default one first as a file or to ensure that it will
always be the first one in the directory.
So, all you need is a directory containing each cert/chain/key in a pem file, and a modification to your configuration like this:
bind 0.0.0.0:443 ssl crt /etc/haproxy/my-default.pem crt /etc/haproxy/my-cert-directory
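Each file in the directory is expected to be a PEM bundle containing the certificate, any intermediate chain, and the private key concatenated together. A hypothetical layout:

/etc/haproxy/my-cert-directory/
    example.com.pem           # leaf cert + chain + key, concatenated
    www.example.org.pem
    wildcard.example.net.pem  # holds a cert for *.example.net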
Note you should also add no-sslv3.
I want to use a variable to load the specific certificate for each website
As noted in the documentation, if the browser sends Server Name Indication (SNI), then HAProxy will automatically negotiate with the browser using the appropriate certificate.
So configurable cert selection isn't necessary, but more importantly, it isn't possible. SSL/TLS doesn't work that way (anywhere). Until the browser successfully negotiates the secure channel, you don't know what web site the browser will be asking for, because the browser hasn't yet sent the request.
If the browser doesn't speak SNI -- a concern that should be almost entirely irrelevant any more -- or if there is no cert on file that matches the hostname presented in the SNI -- then the default certificate is used for negotiation with the browser.
I'd also like to check whether the SSL certificate is actually available, and if it is not, fall back to HTTP with a redirect
This is also not possible. Remember, encryption is negotiated first, and only then is the HTTP request sent by the browser.
So, a user will never see your redirect unless they bypass the browser's security warning -- which they must necessarily see, because the hostname in the default certificate won't match the hostname the browser expects to see in the cert.
At this point, there's little point in forcing them back to http, because by bypassing the browser security warning, they have established a connection that is -- simultaneously -- untrusted yet still encrypted. The connection is technically secure but the user has a red × in the address bar because the browser correctly believes that the certificate is invalid (due to the hostname mismatch). But on the user's insistence at bypassing the warning, the browser still uses the invalid certificate to establish the secure channel.
If you really want to redirect even after all of this, you'll need to take a look at the layer 5 fetches. You'll need to verify that the Host header matches the SNI or the default cert, and if your certs are wildcards, you'll need to accommodate that too, but this will still only happen after the user bypasses the security warning.
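A minimal sketch of the no-SNI part of that idea, assuming HAProxy 1.6+ (ssl_fc_has_sni is a standard fetch; comparing the Host header against the served certificate would need further ACLs on top of this):

frontend web-https
    bind 0.0.0.0:443 ssl crt /etc/haproxy/my-default.pem crt /etc/haproxy/my-cert-directory no-sslv3
    # clients that sent no SNI were handed the default certificate;
    # send them back to plain HTTP
    http-request redirect location http://%[req.hdr(host)]%[path] if !{ ssl_fc_has_sni }
    default_backend website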
Imagine if things were so simple that a web server without a valid certificate could hijack traffic simply by redirecting it, without the browser requiring the server's certificate to be valid (or deliberate action by the user to bypass the warning). It should become apparent why your original idea not only will not work, but in fact should not work.
Note also that the certificates loaded from the configured directory are all loaded at startup. If you need HAProxy to discover new ones or discard old ones, you need a hot restart of HAProxy (usually sudo service haproxy reload).
I am getting a bad certificate error while accessing the server using its IP address instead of its DNS name.
Is this functionality newly introduced in TLS 1.1 and TLS 1.2? It would be good if someone could point out the OpenSSL code where it fails and returns the bad certificate error.
Why do we get a bad certificate error while accessing the server using the IP address instead of the DNS name?
It depends on the issuing/validation policies, user agents, and the version of OpenSSL you are using. So to give you a precise answer, we need to know more about your configuration.
Generally speaking, suppose www.example.com has an IP address of www.xxx.yyy.zzz. If you connect via https://www.example.com/..., then the connection should succeed. If you connect using a browser via https://www.xxx.yyy.zzz/..., then it should always fail. If you connect using another user agent via https://www.xxx.yyy.zzz/..., then it should succeed if the certificate includes www.xxx.yyy.zzz, and fail otherwise.
Issuing/Validation Policies
There are two bodies which dominate issuing/validation policies: the CA/Browser Forum and the Internet Engineering Task Force (IETF).
Browsers, like Chrome, Firefox and Internet Explorer, follow the CA/B Baseline Requirements (CA/B BR).
Other user agents, like cURL and Wget, follow IETF issuing and validation policies, like RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile, and RFC 6125, Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS). The RFCs are more relaxed than the CA/B issuing policies.
User Agents
Different user agents have different policies that apply to DNS names. Some want a traditional hostname found in DNS, while others allow IP addresses.
Browsers only allow DNS hostnames in the Subject Alternative Name (SAN). If the hostname is missing from the SAN, then the match will not occur. Putting the server name in the Common Name is a waste of time and energy because browsers require host names in the SAN.
Browsers do not match a public IP address in the SAN. They will sometimes allow a Private IP from RFC 1918, Address Allocation for Private Internets.
Other user agents allow any name in the Subject Alternative Name (SAN). They will also match a name in both the Common Name (CN) and the Subject Alternative Name (SAN). Names include a DNS name like www.example.com, a public IP address, a private IP address like 192.168.10.10, and a local name like localhost or localhost.localdomain.
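For instance, a self-signed certificate that such user agents would accept by IP address can be created like this (a sketch assuming OpenSSL 1.1.1+ for the -addext option; all names and addresses are placeholders):

# self-signed cert carrying both a DNS name and a private IP in the SAN
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt \
    -subj "/CN=www.example.com" \
    -addext "subjectAltName=DNS:www.example.com,IP:192.168.10.10"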
OpenSSL Version
OpenSSL version 1.0.2 and below did not perform hostname validation. That is, you had to perform the matching yourself. If you did not perform hostname validation yourself, then it appeared the connection always succeeded. Also see Hostname Validation and TLS Client on the OpenSSL wiki.
OpenSSL 1.1.0 and above perform hostname matching. If you switch to 1.1.0, then you should begin experiencing failures if you were not performing hostname matching yourself or were not strictly following issuing policies.
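You can exercise the built-in check from the 1.1.0+ command line; a sketch with a placeholder address and name (a mismatch shows up as a hostname verification error in the output):

# connect by IP but ask OpenSSL to verify the certificate against a DNS name
openssl s_client -connect 203.0.113.10:443 -verify_hostname www.example.com </dev/null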
It would be good if someone could point out the OpenSSL code where it fails and returns the bad certificate error.
The check-ins occurred in early 2015, and they have been available in Master (i.e., 1.1.0-dev) since that time. The code was also available in 1.0.2, but you had to perform special actions. The routines were not available in 1.0.1 or below. Also see Hostname Validation on the OpenSSL wiki. I don't have the Git check-ins because I'm on a Windows machine at the moment.
More information on the rules for names and their locations can be found in How do you sign a Certificate Signing Request with your Certification Authority and How to create a self-signed certificate with openssl. There are several more documents covering them, like how names need to be presented for HTTP Strict Transport Security (HSTS) and Public Key Pinning with Overrides for HTTP.
I'm implementing mutual SSL between service A and service B. Service A uses both 1-way and 2-way SSL: 1-way SSL for the communication between a user and website A, and 2-way SSL to forward requests from that user to service B in a secure way.
1-way SSL in service A is specified in Tomcat's server.xml. 2-way SSL is implemented using JSSE secure socket communication on the client side (service A) and Tomcat config (service B). At the moment, when I try to access service A, I get an ssl_error_rx_record_too_long error.
According to this answer (ssl_error_rx_record_too_long and Apache SSL), one of the reasons may be that I'm using more than one SSL certificate for the same IP. Is it really the case that you can't use the same IP for several certificates? Even if one certificate is a server certificate (for 1-way SSL) and the other is a client certificate (for 2-way SSL)?
This may not be the cause of my problem, but I just want to make sure whether it's actually possible to have several certificates for the same FQDN. Thanks for the help!
ssl_error_rx_record_too_long generally has nothing to do with certificate configuration, but rather with the fact that whatever is talking on that port isn't actually using SSL/TLS.
The answers (and even the update to the question) in the question you linked to also point to this problem (e.g. missing SSLEngine on). You probably forgot something like SSLEnabled="true" in your connector configuration.
As I was saying in an answer to your other question, being able to configure two server certificates on the same IP address isn't really a problem for your case.
it's actually possible to have several certificates for the same FQDN
It is possible to configure multiple certificates on the same IP address and port using the Server Name Indication TLS extension, but both servers and clients would need to support it. In particular, this is not supported by the JSSE in Java 7 on the server side (only on the client side), but there are workarounds if you're willing to put a reverse proxy in front of your Java server.
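One common form of that workaround is to terminate TLS (with SNI support) in the proxy and forward plain HTTP to the Java server. A hypothetical nginx sketch, with placeholder name, paths, and port; forwarding the client-certificate identity for the 2-way case would need additional configuration:

server {
    listen 443 ssl;
    server_name serviceb.example.com;
    ssl_certificate     /etc/nginx/ssl/serviceb.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/serviceb.example.com.key;

    location / {
        # hand the decrypted traffic to the Tomcat connector
        proxy_pass http://127.0.0.1:8080;
    }
}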
This wouldn't be possible with the same FQDN, though, since the FQDN is what allows the server to pick the certificate. That being said, having multiple server certificates for the same FQDN on the same IP address is generally pointless. Supporting multiple certificates is precisely useful when you need to support distinct names.
I am running an Apache web server, and I am supposed to put 2 SSL certs on a single website. Is this possible? How can I do this? I read the Apache user manual, and it says I can only have 1 SSL cert for a single IP and port.
After the comments from the OP:
Set up two subdomains - one for static/to-be-CDN'd content and one for dynamic/not-to-be-CDN'd content.
Get and set up a "wildcard cert" for your domain, i.e. a cert for "*.yourdomain.com"... these are a bit more expensive, but exactly for your situation...
As Yahia points out, a wildcard cert is an option. They are also expensive.
You can certainly have multiple named SSL certs on your server for images.domain.com and static.domain.com or whatever named sites you want and that is not a security issue. In fact, that is considered more secure than a wildcard cert.
It is true that you can only have one named cert per IP, because SSL certs are bound to the IP in the web server config. So you would need multiple IP addresses on the server hosting the sites. If the dynamic and static content are already on different machines, then you're set there, but it sounds like they are on the same machine.
That doesn't mean that the ports need to differ between the sites. You can have both 123.45.67.89 and 123.45.67.88 listening on the same port (443 in this case) on the same machine.
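A minimal sketch of that layout in Apache, with placeholder IPs, names, and paths:

Listen 443

<VirtualHost 123.45.67.89:443>
    ServerName images.domain.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/images.domain.com.crt
    SSLCertificateKeyFile /etc/ssl/private/images.domain.com.key
</VirtualHost>

<VirtualHost 123.45.67.88:443>
    ServerName static.domain.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/static.domain.com.crt
    SSLCertificateKeyFile /etc/ssl/private/static.domain.com.key
</VirtualHost>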
Here is a post I found that looks like it describes the config pretty well.
http://wiki.zimbra.com/wiki/Multiple_SSL_Virtual_Hosts