HAProxy dynamic SSL configuration for multiple domains

I have something like 100 similar websites on two VPSes. I would like to use HAProxy to switch traffic dynamically, but at the same time I would like to add an SSL certificate.
I want to add a variable to select the specific certificate for each website.
For example:
frontend web-https
    bind 0.0.0.0:443 ssl crt /etc/ssl/certs/{{domain}}.pem
    reqadd X-Forwarded-Proto:\ https
    rspadd Strict-Transport-Security:\ max-age=31536000
    default_backend website
I'd also like to check whether the SSL certificate is actually available, and if it is not, fall back to HTTP with a redirect.
Is this possible with HAProxy?

This can be done, but TLS (SSL) does not allow you to do it the way you envision.
First, HAProxy allows you to specify a default certificate and a directory for additional certificates.
From the documentation for the crt keyword:
If a directory name is used instead of a PEM file, then all files found in
that directory will be loaded in alphabetic order unless their name ends with
'.issuer', '.ocsp' or '.sctl' (reserved extensions). This directive may be
specified multiple times in order to load certificates from multiple files or
directories. The certificates will be presented to clients who provide a
valid TLS Server Name Indication field matching one of their CN or alt
subjects. Wildcards are supported, where a wildcard character '*' is used
instead of the first hostname component (eg: *.example.org matches
www.example.org but not www.sub.example.org).
If no SNI is provided by the client or if the SSL library does not support
TLS extensions, or if the client provides an SNI hostname which does not
match any certificate, then the first loaded certificate will be presented.
This means that when loading certificates from a directory, it is highly
recommended to load the default one first as a file or to ensure that it will
always be the first one in the directory.
So, all you need is a directory containing each cert/chain/key in a pem file, and a modification to your configuration like this:
bind 0.0.0.0:443 ssl crt /etc/haproxy/my-default.pem crt /etc/haproxy/my-cert-directory
Note that you should also add no-sslv3.
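Each PEM file in that directory needs to contain the certificate, any intermediate chain, and the matching private key concatenated together. A minimal sketch of preparing one such file (names are illustrative):
cat example.org.crt intermediate-chain.crt example.org.key > /etc/haproxy/my-cert-directory/example.org.pem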
I want to add a variable to select the specific certificate for each website
As noted in the documentation, if the browser sends Server Name Indication (SNI), then HAProxy will automatically negotiate with the browser using the appropriate certificate.
So configurable cert selection isn't necessary, but more importantly, it isn't possible. SSL/TLS doesn't work that way (anywhere). Until the browser successfully negotiates the secure channel, you don't know what web site the browser will be asking for, because the browser hasn't yet sent the request.
If the browser doesn't speak SNI -- a concern that should be almost entirely irrelevant anymore -- or if there is no cert on file matching the hostname presented in the SNI, then the default certificate is used for negotiation with the browser.
I'd also like to check whether the SSL certificate is actually available, and if it is not, fall back to HTTP with a redirect
This is also not possible. Remember, encryption is negotiated first, and only then is the HTTP request sent by the browser.
So, a user will never see your redirect unless they bypass the browser's security warning -- which they must necessarily see, because the hostname in the default certificate won't match the hostname the browser expects to see in the cert.
At this point, there's little point in forcing them back to http, because by bypassing the browser security warning, they have established a connection that is -- simultaneously -- untrusted yet still encrypted. The connection is technically secure but the user has a red × in the address bar because the browser correctly believes that the certificate is invalid (due to the hostname mismatch). But on the user's insistence at bypassing the warning, the browser still uses the invalid certificate to establish the secure channel.
If you really want to redirect even after all of this, you'll need to take a look at the layer 5 fetches. You'll need to verify that the Host header matches the SNI or the default cert, and if your certs are wildcards, you'll need to accommodate that too, but this will still only happen after the user bypasses the security warning.
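For illustration, a minimal sketch of such a check using the strcmp converter (HAProxy 1.7+; the ACL name is made up, and wildcard certificates would need extra logic):
frontend web-https
    bind 0.0.0.0:443 ssl crt /etc/haproxy/my-default.pem crt /etc/haproxy/my-cert-directory
    http-request set-var(txn.host) req.hdr(host),lower
    # true when the SNI sent in the handshake matches the Host header
    acl sni_host_match ssl_fc_sni,lower,strcmp(txn.host) eq 0
    # hypothetical policy: send mismatched requests back to plain HTTP
    http-request redirect location http://%[req.hdr(host)]%[path] code 302 if !sni_host_match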
Imagine if things were so simple that a web server without a valid certificate could hijack traffic simply by redirecting it, without the browser requiring a valid server certificate (or deliberate action by the user to bypass the warning), and it should become apparent why your original idea not only will not work, but in fact should not work.
Note also that the certificates in the configured directory are all loaded at startup. If you need HAProxy to discover new ones or discard old ones, you need a hot reload of HAProxy (usually sudo service haproxy reload).

Related

Using and then removing a self-signed certificate for localhost

Problem Background:
As part of a Computer Networking course assignment, I have been given the task of implementing a proxy server (using the Python socket and ssl modules) that handles HTTPS communication between the browser and the origin server (the real server my browser wants to talk to).
What I have done so far:
I have implemented the above requirement using SSL sockets and also generated self-signed 'cert.pem' and 'key.pem' files.
What I need to do:
Now I just need to tell my browser (Chrome 89 on Kubuntu 20.04) to accept this self-signed certificate so that I can test my proxy server.
Reading this Stack Overflow question, I can see that I have to:
(1) become my own CA, (2) sign my SSL certificate as that CA, and (3) import the CA certificate (not the SSL certificate, which goes onto my server) into Chrome.
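For reference, those first two steps usually come down to a handful of openssl commands along these lines (a sketch; file names and subjects are illustrative):
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -key myCA.key -days 365 -out myCA.pem -subj "/CN=My Test CA"
openssl req -new -key key.pem -out server.csr -subj "/CN=localhost"
openssl x509 -req -in server.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out cert.pem -days 365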
My confusion/question:
So if I do this, when I am eventually done with this assignment, how do I reverse all these steps to return my browser to the state it was in before I made these changes? Also, how do I reverse the "become your own CA" step and delete the SSL certificates signed by my CA?
Basically, I want my system to return to the previous state it was before I would have made all these changes.
UPDATE:
I have done the previously outlined steps but now I get an error.
Here is a snippet of my code:
from socket import socket, AF_INET, SOCK_STREAM, SOL_SOCKET, SO_REUSEADDR
import ssl

serv_socket = socket(AF_INET, SOCK_STREAM)
serv_socket.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)  # set before bind so it takes effect
serv_socket.bind(('', serv_port))
# TLS server context using the self-signed cert/key generated earlier
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.load_cert_chain('cert.pem', 'key.pem')  # returns None, so don't reassign context
context.set_ciphers('EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH')
serv_socket.listen(10)
socket_to_browser, addr = serv_socket.accept()
conn_socket_to_browser = context.wrap_socket(socket_to_browser, server_side=True)
At the last line, conn_socket_to_browser = context.wrap_socket(socket_to_browser, server_side=True), an exception is thrown: [SSL: HTTPS_PROXY_REQUEST] https proxy request (_ssl.c:1123)
What am I doing wrong?
As glamorous as "becoming your own CA" sounds, with openssl it basically comes down to creating a self-signed certificate, and then creating a directory where some CA-specific configuration is stored (I don't fully remember the specifics, but I think it was just some files related to CNs and serial numbers). So reversing the "become your own CA" step is something as mundane as deleting that directory along with the private key and the self-signed certificate you were using for the CA. That's it; the CA is no more.
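In concrete terms, assuming illustrative file names (myCA.key, myCA.pem, plus the .srl serial file openssl creates when signing), the cleanup is just:
rm myCA.key myCA.pem myCA.srl
rm -r my-ca-dir    # only if you created a separate directory of CA bookkeeping files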
And for returning Chrome to its previous state, you would just go to the CA list where you added the CA certificate, select it, and delete it. Chrome will stop accepting certificates signed by your CA.
Regarding your new problem... In my opinion, you have developed some kind of reverse proxy (meaning that you expect normal HTTPS requests that you then forward to the real server), but you have configured Chrome to use it as a forward proxy. In that case, Chrome does not send it a normal HTTPS request; it sends a special unencrypted CONNECT command, and only after receiving the unencrypted response does it negotiate the TLS connection. That's why openssl reports "https proxy request": it detected an HTTPS proxy request (a CONNECT command) instead of the expected TLS negotiation.
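A minimal sketch of handling this, reusing the names from your snippet (hypothetical, with no error handling): read the plaintext CONNECT request first, acknowledge it, and only then wrap the socket in TLS:
request = socket_to_browser.recv(4096)  # e.g. b"CONNECT example.com:443 HTTP/1.1\r\n..."
if request.startswith(b"CONNECT"):
    socket_to_browser.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    conn_socket_to_browser = context.wrap_socket(socket_to_browser, server_side=True)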
You can take a look at How can a Python proxy server (using SSL socket) pretend to be an HTTPS server and specify my own keys to get decrypted data?
It's Python, but I think you'll get the idea.

HTTPS Spoofing in order to support legacy application

I have a legacy application with a hardcoded URL (I don't have access to the source) from which it tries to download a file. The URL takes the form:
https://pre.hostname.org/index.json
but the organization that hosts that site has dropped that hostname and is using a new hostname, so that the url should be of the form:
https://hostname2.org/pre/index.json
I don't own the application source code or either website, but it occurred to me that I might be able to do some spoofing if I set up a redirect on my local webserver and point the old hostname to my webserver using the C:\windows\system32\drivers\etc\hosts file.
On my webserver in a lighttpd conf file:
$HTTP["scheme"] == "https" {
$HTTP["host"] =~ ".*" {
url.redirect = ( "^/(.*)$" => "https://hostname2.org/pre$0" )
}
}
On the client machine with the legacy application in the hosts file:
0.0.0.0 hostname.org
(0.0.0.0 stands in for the IP address of my webserver with the redirect instructions)
With this setup I can, on the client machine, access the old URL in a web browser, and the redirect happens. However, it does not work from the legacy application, and I think that's because the hostname in the SSL certificate does not match.
If I use the Edge browser, for example, I have to work around the warning:
The hostname in the website's security certificate differs from the website you are trying to visit.
Error Code:
DLG_FLAGS_SEC_CERT_CN_INVALID
I have administrator access on the client machine, the webserver, etc. I obviously trust my webserver even though it doesn't match the cert...
I totally accept that this is as it should be -- that this is part of the protection that HTTPS and SSL certificates provide -- what I'm asking is: is there a way to cause my legacy application to ignore this situation? A way to circumvent HTTPS protection for this particular hostname/certificate system-wide, so that it takes effect for whatever API the legacy app uses to download the file over HTTPS?
is there a way to cause my legacy application to ignore this situation?
Since you only have the binary of the application, you might try to replace the hard-coded domain name in the application with a domain you control, i.e. binary patching. Note that this would not work if the application is signed, since it would break the signature.
You could also try to create your own CA, import it as trusted into your system, and use this CA to create your own certificate for the domain in question. If the application just does simple certificate verification, without any pinning of the certificate or CA and using the system's trust store, then it should accept the certificate you've created yourself, because it trusts your CA, and should thus accept the redirect.
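A sketch of that approach with openssl, assuming you have already created a CA key and certificate (myCA.key / myCA.pem; all names illustrative). Note that modern clients expect the hostname in a subjectAltName, not just the CN:
openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr -subj "/CN=pre.hostname.org"
openssl x509 -req -in server.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out server.crt -days 365 -extfile san.cnf
where san.cnf contains the single line subjectAltName=DNS:pre.hostname.org. On the Windows client, the CA certificate can then be imported into the machine trust store (as administrator):
certutil -addstore Root myCA.pem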

NGINX: Each domain shares multiple ssl_certificates

Is it possible to have each domain use multiple SSL certificates? When I google this, the top result is an article on how to have two ssl_certificate directives for two domains, but there each domain is tied to one certificate. Is there a way to tie each domain to multiple certificates? The way I'd want it to work is to try the first SSL certificate, fall back to the second if that fails, and then fall back to other options. We attempted this using techniques from the article, but when we did, nginx gave us this warning:
2016/12/30 20:31:41 [warn] 186#186: conflicting server name "domain1" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "domain1" on 0.0.0.0:443, ignored
2016/12/30 20:31:41 [warn] 186#186: conflicting server name "domain2" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "domain2" on 0.0.0.0:443, ignored
Why do we want to do this? The ssl_certificate directive points to a file that allows access for one inbound domain, and we also want nginx to allow access from another domain. I don't know much about SSL/certificates. Is there an easy way to modify the certificate to allow multiple domains? That would be an alternative solution to this problem.
There is only a single leaf certificate served inside the TLS handshake. If the validation of this certificate fails, the handshake fails. While many browsers will retry with a lower TLS protocol version as a fallback against broken servers, this is not intended to be used to serve different certificates. Apart from that, almost no TLS implementations outside of browsers implement this fallback.
Thus servers don't support serving multiple leaf certificates within a single host configuration. They usually do support having different certificates for different subdomains and it is also possible to have different servers for the same domain using different certificates (i.e. different IP address or port). It is also possible in newer servers that a single configuration allows both RSA and ECC certificates (i.e. ECDSA authentication) but in this case the server will simply pick the relevant certificate based on which ciphers the client supports and will still send only a single leaf certificate.
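For example, the RSA+ECC case mentioned above looks roughly like this in nginx 1.11.0 or later (paths illustrative); nginx picks one certificate per handshake based on the ciphers the client offers:
server {
    listen 443 ssl;
    server_name example.com;
    # one RSA and one ECDSA certificate for the same name
    ssl_certificate     /etc/nginx/certs/example-rsa.crt;
    ssl_certificate_key /etc/nginx/certs/example-rsa.key;
    ssl_certificate     /etc/nginx/certs/example-ecc.crt;
    ssl_certificate_key /etc/nginx/certs/example-ecc.key;
}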

Secure a URL that has a CNAME record

I have a site that has subdomains for each user, and a wildcard SSL cert:
https://user1.mysite.com
https://user2.mysite.com
The question is: can someone set a CNAME record such as user1.theirsite.com -> user1.mysite.com and have it still use HTTPS?
Will it work if they install an SSL cert on their server to secure the connection?
Thanks
The best way for this to work is if they arrange with you to have your SSL certificate include their "alias" as a Subject Alternative Name extension in your X.509 certificate.
This is the approach used by some CDNs when they host https sites for clients - they put all of the known site names that are hosted on one server in one large SSL certificate, and then the clients use CNAMEs to point their domain at the right CDN server.
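You can inspect which names a given certificate actually covers by dumping its Subject Alternative Name extension, e.g.:
openssl x509 -in cert.pem -noout -text | grep -A1 'Subject Alternative Name'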
The host name and certificate verification (and in fact, checking that SSL is used at all) are solely the responsibility of the client.
The host name verification will be done by the client, as specified in RFC 2818, based on the host name they request in their URL. Whether the host name DNS resolution is based on a CNAME entry or anything else is irrelevant.
If users are typing https://user1.theirsite.com/ in their browser, the certificate on the target site should be valid for user1.theirsite.com.
If they have their own server for user1.theirsite.com, distinct from user1.mysite.com, a DNS CNAME entry wouldn't make sense. Assuming the two hosts are effectively distinct, they could have their own valid certificate for user1.theirsite.com and redirect to https://user1.mysite.com/. The redirection would also be visible in the address bar.
If you really wanted to have a CNAME from user1.theirsite.com to user1.mysite.com, they might be able to give you their certificate and private key so that you host it on your site too, using Server Name Indication (assuming same port, and of course same IP address since you're using a CNAME). This would work for clients that support SNI. There would however be a certain risk to them in giving you their private keys (which isn't generally recommended).
The following is set up and working:
DNS entry for a.corp.com -> CNAME b.corp2.com -> A 1.2.3.4
The haproxy at 1.2.3.4 serves up the cert for a.corp.com, and the site loads fine from a webserver backend.
So, on your server you will need the user1.theirsite.com cert, and it will work.

Is it possible to automatically select correct client side certificate?

I have configured an Apache httpd website with SSL client side certificates so that only users who have installed the correct certificate in their web browsers can access the website.
If there is only one client-side certificate installed, the web browser will automatically select it (not by default, but it can be configured somewhere in the settings dialog). But if a user has more than one certificate installed, the browser presents a list of certificates and the user has to pick the right one to continue.
The question is: Is there a way to configure httpd to send a hint so that the web browser can automatically select the required certificate?
The SSL (TLS) protocol only allows the server to specify two constraints on the client certificate:
The type of certificate (RSA, DSA, etc.)
The trusted certificate authorities (CAs) that signed the client certificate
You can use "openssl s_client" to see which CAs your Apache server trusts for client certs. I do not know how to configure Apache to change that list (sorry), but I bet there is a way. So if you can limit the list to (say) your own organization's CA alone, then you will have done all you can to let a web browser select the client cert automatically.
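For example (hostname illustrative), the list appears in the handshake output under "Acceptable client certificate CA names":
openssl s_client -connect www.example.org:443
...
Acceptable client certificate CA names
C = US, O = Example Org, CN = Example Internal CA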
As Eugene said, whether the browser actually does so is up to the particular browser.
I'd say that, as selection of the certificate is a client-side task, there's no definite way to force the client to use this or that certificate from the server side.
In addition to what Nemo and Eugene said, by default Apache httpd will send the list of CAs it gets from its SSLCACertificateFile or SSLCACertificatePath configuration directives.
However, you can force it to send a different list in certificate_authorities by using the SSLCADNRequestFile or SSLCADNRequestPath directives and pointing them at another set of certificates. Only the Subject DN of these certificates is used (and sent in the list). If you want to force certain names, you can even self-sign these certificates with whichever name you want. I've tried this (in conjunction with SSLVerifyClient optional_no_ca), and you can get clients to send certificates from CAs whose certificates the server doesn't actually have. (This isn't necessarily useful, but it works.)
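A sketch of the relevant directives together (paths illustrative):
SSLVerifyClient require
# CAs actually used to verify the client certificate
SSLCACertificateFile /etc/apache2/ssl/client-ca.pem
# certificates whose Subject DNs are advertised in certificate_authorities
SSLCADNRequestFile /etc/apache2/ssl/advertised-ca-names.pem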