The right SSL certificate at the right layer - ssl

I have a web app which requires fairly stringent security. I already have a reasonably secure solution stack, including Cloudflare, HAProxy and Modsecurity. I'm getting close to getting my Dev environment ready for testing before I build out my Staging and Production environments.
I was keen to use a Cloudflare Origin SSL cert on my Load Balancer and Web Server but I've struck an issue.
I was keen to achieve Full (Strict) crypto via Cloudflare, which means Cloudflare will validate the origin's cert on each request. So I wanted to use a Cloudflare Origin cert for that, but it looks like I can only use a Cloudflare Origin cert on my Load Balancer, because it's designed for the Cloudflare-to-origin data flow only, which would leave me having to buy a cert from a CA for my Web Server(s).
That's three SSL certs to cover the three SSL termination points:
End User --> Cloudflare (via Dedicated cert)
Cloudflare --> Load Balancer
Load Balancer --> Web Server
I tried installing the Origin cert on the Web Server, but the validity of that cert could not be verified. I've even updated OpenSSL to the latest stable release (v1.1.1b) to make sure I'm prepared for TLS 1.3.
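For reference, the kind of verification I'm talking about is roughly the following (filenames and hostnames are placeholders, not my actual setup):
# Verify the Origin cert against Cloudflare's Origin CA root
openssl verify -CAfile origin_ca_root.pem origin_cert.pem
# Or look at what the web server actually presents
openssl s_client -connect webserver.internal:443 -servername dev.example.com -CAfile origin_ca_root.pem </dev/null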
So I can only think of two possible approaches:
End User --> Cloudflare (via Dedicated cert)
Cloudflare --> Load Balancer (via Cloudflare Origin cert)
Load Balancer --> Web Server (via DigiCert cert)
or
End User --> Cloudflare (via Dedicated cert)
Cloudflare --> Load Balancer (via DigiCert cert)
Load Balancer --> Web Server (via DigiCert cert)
I would appreciate anyone highlighting if I've missed anything important.

Certs commonly contain the web server's DNS name as the "common name". You need to make things compatible with that; that indeed means either installing an additional trust point on your load balancer or getting a "real" certificate.
Generally your endpoints can use a single certificate to identify themselves. The problem is that currently you are using certificates for which no trust chain can be built at the various endpoints. You can of course solve this by buying a cert for which a chain can be built (e.g. from a commercial CA). However, you can generally also update the trust store to include additional certificates, so that a chain of trust can be built. In that case you can use your own certificates; it is your infrastructure, after all.
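As a rough illustration of supplying that trust anchor when HAProxy is the load balancer (the CA file path is an assumption), the backend can verify the web server's Origin cert against Cloudflare's Origin CA root:
# HAProxy backend sketch: verify the backend's cert against the Origin CA root
backend web_servers
    server web1 10.0.0.11:443 ssl verify required ca-file /etc/haproxy/origin_ca_root.pem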
What you don't want to do is to use the leaf certificates on multiple machines, as that would imply that you copy the private key to more machines. The security of the machines should be separate, so if you start copying PKCS#12 files you might want to rethink your key management solution (and if you don't have an explicit KM solution then this would be the right time).

Related

Automated ACME subdomain SSL certificate generation for resources on different IP addresses

I've been investigating the possibility of migrating to using Let's Encrypt to maintain the SSL certificates we have in place for the various resources we use for our operations. We have the following resources using SSL certificates:
Main website (www.example.com / example.com) - Hosted and maintained by a 3rd party who also maintains the SSL certificate
Client portal website (client.example.com) - IIS site hosted and maintained by us on a server located in a remote data center
FTP server (ftp.example.com) - WS_FTP Server hosted and maintained by us on a server located in a remote data center
Hardware firewall (firewall.example.com) - Local security appliance for our internal network
Remote Desktop Gateway (rd.example.com) - RDP server hosted and maintained by us on a server located locally
As indicated above, the SSL certificate for the main website (www) is maintained by the 3rd-party host, so I don't generally mess with that one. However, as you can tell, the DNS records for each of these endpoints point to a variety of different IP addresses. This is where my inexperience with the overall process of issuing and deploying SSL certificates has me a bit confused.
First of all, since I don't manage or maintain the main website, I'm currently manually generating the CSRs for each of the endpoints from the server/service that provides the endpoint - one from the IIS server, a different one from the RDP server, another from the WS_FTP server, and one from the hardware firewall. The manual process, while not excessively time-consuming, still requires me to go through several steps with different server systems requiring different processes.
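For context, each of those manual steps boils down to some variant of the following (the key and CSR names are placeholders), just produced through a different UI on each system:
# Generate a key and CSR for one endpoint
openssl req -new -newkey rsa:2048 -nodes -keyout client.example.com.key -out client.example.com.csr -subj "/CN=client.example.com"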
I've considered using one of Let's Encrypt's free wildcard SSL certificates to cover all four of these endpoints (*.example.com), but I don't want to "interfere" with what our main website host is doing on that end. I realize the actual certificate itself is presented by the server to which the client is connecting, so it shouldn't matter (right?), but I'd probably still be more comfortable with individual SSL certificates for each of the subdomain endpoints.
So, I've been working on building an application using the Certes ACME client library in an attempt to automatically handle the entire SSL process from CSR to deployment. However, I've run into a few snags:
The firewall is secured against connections on port 80, so I wouldn't be able to serve up the HTTP-01 validation file for that subdomain (firewall.example.com) on the device itself. The same is true for the FTP server's subdomain (ftp.example.com).
My DNS is hosted with a provider that does not currently offer an API (they say they're working on one), so I can't automate the process of the DNS-01 validation by writing the TXT record to the zone file.
I found the TLS-ALPN-01 validation method, but I'm not sure whether or not this is appropriate for the use case I'm trying to implement. According to the description of this method from Let's Encrypt (emphasis mine):
This challenge is not suitable for most people. It is best suited to authors of TLS-terminating reverse proxies that want to perform host-based validation like HTTP-01, but want to do it entirely at the TLS layer in order to separate concerns. Right now that mainly means large hosting providers, but mainstream web servers like Apache and Nginx could someday implement this (and Caddy already does).
Pros:
It works if port 80 is unavailable to you.
It can be performed purely at the TLS layer.
Cons:
It’s not supported by Apache, Nginx, or Certbot, and probably won’t be soon.
Like HTTP-01, if you have multiple servers they need to all answer with the same content.
This method cannot be used to validate wildcard domains.
So, based on my research so far and my environment, my three biggest questions are these:
Would the TLS-ALPN-01 validation method be an effective - or even available - option for generating the individual SSL certificates for each subdomain? Since the firewall and FTP server cannot currently serve up the appropriate files on port 80, I don't see any way to use the HTTP-01 validation for these subdomains. Not being able to use an API to automate a DNS-01 validation would make that method generally more trouble than it's worth. While I could probably do the HTTP-01 validation for the client portal - and maybe the RDP server (I haven't gotten that far in my research yet) - I'd still be left with handling the other two subdomains manually.
Would I be better off trying to do a wildcard certificate for the subdomains? Other than "simplifying" the process by reducing the number of SSL certificates that need to be issued, is there any inherent benefit to going this route versus using individual certificates for each subdomain? Since the main site is hosted/managed by a 3rd-party and (again) I can't currently use an API to automate a DNS-01 validation, I suppose I would need to use an HTTP-01 validation. Based on my understanding, that means that I would need to get access/permission to create the response file, along with the appropriate directories on that server.
Just to be certain, is there any chance of causing some sort of "conflict" if I were to generate/deploy a wildcard certificate to the subdomains while the main website still used its own SSL certificate for the www? Again, I wouldn't think that to be the case, but I want to do my best to avoid introducing more complexity and/or problems into the situation.
I've responded to your related question at https://community.certifytheweb.com/t/tls-alpn-01-validation/1444/2, but the answer is to use DNS validation, and my suggestion is to use Certify DNS (https://docs.certifytheweb.com/docs/dns/providers/certifydns), which is a managed cloud implementation of acme-dns (CNAME delegation of DNS challenge responses).
Certify DNS is compatible with most existing acme-dns clients, so it can be used with those as well as with Certify The Web (https://certifytheweb.com).
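To sketch how the CNAME delegation works in practice (the delegated hostname below is made up for illustration): you create a single CNAME per name at your existing DNS host, and from then on the ACME client writes the TXT challenge records into the delegated zone via the acme-dns/Certify DNS API, so no API at your own DNS provider is required.
# One-time CNAME created manually at your regular DNS host (placeholder target):
#   _acme-challenge.client.example.com  CNAME  <registered-id>.acme-dns.example
# Afterwards you can confirm a pending challenge resolves through the delegation:
dig +short TXT _acme-challenge.client.example.com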

Terraform Init/apply/destroy - SSL Connection Problems

Our company proxy intercepts SSL connections and re-signs them with our own CA.
So I always have to tell the applications I use (RubyGems, Python pip, Azure CLI, ...) to use our company CA certificate.
Does anyone know how I can use our CA certificate with a local Terraform installation?
Is the CA deployed to your OS's certificate store, or can you import it? If so, Terraform (and probably other tools) should just be able to work with a proxy like this with no other configuration. If you need some further direction, tell us what operating system you're using and what access you have to the CA.
Edit:
@Kreikeneka, have you put the cert in the location CentOS expects for importing into the store? There is a command you need to run that actually imports it: update-ca-trust. Have you run this? If the cert is being used for SSL and you just need to trust it when going through your proxy, that is all you should need to do. You shouldn't need to tell your tools (Terraform, pip, etc.) to trust it for SSL with the proxy. Once the cert is imported into your certificate store, it should be usable by any process on the machine for any connection.
If you are using the cert for client authentication to the proxy then just trusting the cert by placing it in the certificate store probably won't work.
I'm not clear from your comments if you need the cert for SSL or for client authentication to the proxy. Check with your IT what it is really used for if you aren't sure and get back to us.
As of CentOS 6+, there is a tool for this. Per this guide, certificates can be installed first by enabling the system shared CA store:
update-ca-trust enable
Then placing the certificates to trust as CA's in /etc/pki/ca-trust/source/anchors/ for high priority (non-overridable), or /usr/share/pki/ca-trust-source/ (lower priority, overridable), and finally updating the system store with:
update-ca-trust extract
Et voila, system tools will now trust those certificates when making secure connections!
Source:
https://serverfault.com/questions/511812/how-does-one-install-a-custom-ca-certificate-on-centos
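Putting that together for the Terraform case, a minimal sketch on CentOS/RHEL might look like this (the CA filename is assumed; the environment variables are only needed for tools that ship their own CA bundle instead of using the system store):
# Import the corporate CA into the system trust store
sudo cp company-ca.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
# Terraform is a Go binary and reads the system store; Go also honours SSL_CERT_FILE
export SSL_CERT_FILE=/etc/pki/ca-trust/source/anchors/company-ca.pem
# Python-based tools such as the Azure CLI typically honour REQUESTS_CA_BUNDLE
export REQUESTS_CA_BUNDLE=/etc/pki/ca-trust/source/anchors/company-ca.pem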

Multi-tenant SSL with Cloudflare and Heroku

I'm currently building an application that will reside at app.mydomain.com, which is running on Heroku. All users will have their own entry points, like app.mydomain.com/client1, app.mydomain.com/client2, etc. I want clients to be able to set up their own domain (www.clientdomain.com) and CNAME it to their entry point. I understand this is pretty straightforward up until now.
All my DNS is handled by Cloudflare, and I believe I can configure Cloudflare into Full (Strict) mode; all I need to do is install their Origin Cert onto my Heroku dyno. This will ensure that all direct connections to my domain will be secure (going to app.mydomain.com/client1).
The question is, how does a client go about getting an SSL'ed connection for their domain? Do I need to get a multi-domain cert and start adding domains to it as I get clients, or am I supposed to install their cert onto Heroku (I believe I can only install one, so that's a no-go), or is it supposed to live on Cloudflare somewhere, or are there additional options I'm not seeing (I hope there are!)?
I'm not wondering what to do for my own domains, but rather, how do clients set up an SSL connection with their domains that resolve onto my servers.
This is rather perplexing!
The flow would be (I think):
User Browser -> Client's DNS -> (CNAME to) My Cloudflare -> Heroku
Hmm, it looks like this might be a pretty solid solution to this issue...
https://blog.cloudflare.com/introducing-ssl-for-saas/
Edit - after clarification
I'm currently building an application that will reside at app.mydomain.com which is running on Heroku. All users will have their own entry points, like app.mydomain.com/client1, app.mydomain.com/client2, etc. Question is, how does a client go about getting an SSL'ed connection for their domain; do I need to get a multi-domain cert and start adding domains to it as I get clients?
If you are going to use the same Heroku app for all of your clients (I think this is a bad idea by the way, but you might be required to) - then yes - you should get a multi-domain certificate and keep adding domains to it as your list of clients expands.
Original answer - which explains SSL + Load Balancing on Heroku.
I'm currently building an application that will reside at app.mydomain.com which is running on Heroku. I want clients to be able to set up their own domain www.clientdomain.com and CNAME it to mine.
You will need a wildcard certificate to cover your subdomain (for app.mydomain.com). You'll have to use that cert in Heroku.
...all I need to do is install their Origin Cert onto my Heroku dyno.
You are correct - except it's not on your Heroku dyno, it's on your Heroku app endpoint. There's a good read here: https://serverfault.com/questions/68753/does-each-server-behind-a-load-balancer-need-their-own-ssl-certificate
If you do your load balancing on the TCP or IP layer (OSI layer 4/3, a.k.a. L4, L3), then yes, all HTTP servers will need to have the SSL certificate installed.
If you load balance on the HTTPS layer (L7), then you'd commonly install the certificate on the load balancer alone, and use plain un-encrypted HTTP over the local network between the load balancer and the webservers (for best performance on the web servers).
So you should install your SSL certificate to your Heroku endpoint and let Heroku handle the rest.
Question is, how does a client go about getting an SSL'ed connection; do I need to get a multi-domain cert and start adding domains to it as I get clients, am I supposed to install their cert onto Heroku (I believe I can only install one, so that's a no-go), or is it supposed to live on Cloudflare somewhere?
If you're referring to adding servers to your service from Heroku, all you need to do is increase the number of web dynos. Heroku will handle the load balancing between these dynos. Your SSL certificate is handled at the load balancer, so your dynos will be serving requests for the same endpoint. You shouldn't need another SSL certificate for the endpoint you've defined - as long as you're serving traffic from multiple dynos attached to it.
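For the clients' own domains, the SSL for SaaS approach linked in the question essentially comes down to each client CNAMEing their hostname to yours and Cloudflare issuing the edge certificate for that custom hostname. A rough sketch (all names are placeholders, and the custom hostname still has to be registered with Cloudflare via their dashboard or API):
# DNS record created by the client at their own DNS host:
#   www.clientdomain.com  CNAME  app.mydomain.com
# Once issued, check which certificate is presented for the client's name:
openssl s_client -connect www.clientdomain.com:443 -servername www.clientdomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer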

How can I get Let's Encrypt certificates before adding the server to production

I am trying to launch new servers automatically when needed, but I am having some difficulty getting the certificate before making the server live. What I want to do is run a setup script which gets all the packages, websites and certificates ready, and after that add the server to production. However, Let's Encrypt wants me to verify that the server requesting the certificate is actually the one which answers requests for the website. How can I get the Let's Encrypt certificate before adding the server to production? I don't want requests to the real website to be routed to the new server until it is fully set up and has the certificates.
One solution I thought of is to save the certificates in an AWS S3 bucket and synchronize them whenever a renewal is needed. Then when I set up a new server I just get the latest certificate from my S3 bucket, and I don't have to worry about getting the certificate from the CA until after the server is added to production. But this solution doesn't seem "clean", and it would require me to have an S3 bucket just for my Let's Encrypt certificates, which also adds another place a certificate could be stolen from.
Is there a simpler solution which I haven't thought of yet?
In a load-balanced (LB) scenario, you should consider having exactly one entity responsible for performing LE certificate acquisition. Things get complicated with multiple entities doing this asynchronously - you'd need to be able to guarantee that the ACME challenges get routed to the relevant server(s), and your LB doesn't have that information (without additional complexity).
So I'd suggest either:
Terminating HTTPS at your load-balancer. Then none of your servers need to care about HTTPS or certificates.
Having one "special" server that's responsible for interacting with LE, and then distributing the cert to the other servers. The details of how you do that is implementation-dependent, because it depends on how you're managing server/service configuration.

SSL certificates with unknown domain name

We're having an issue with securing an intranet/internet website with SSL where we can't know the fully qualified domain name in advance.
Basically, I'm trying to make a program that will be installed on a webserver outside my direct control, to be accessible over intranet or internet. In either case I want it to be secure via SSL (https). To do this, I would like to include and install an SSL certificate on the target machine. My installer is fully prepackaged and should not require any particular during- or post-install intervention from my end. The problem is, I can't know ahead of time the target machine's name or domain name, so as far as I can tell the SSL connection will be returning warnings (or worse?) when accessed, since the certificate I include will (must) have a different name on it.
I really want to avoid those warnings, but I still want to keep it secure. Is there any way to set up an SSL connection without certificate warnings when the domain name isn't known ahead of time?
Thanks for any help you folks can give.
What you want to do is not possible. Here's why.
A certificate will include a set of names (Common Name, possibly along with Subject Alternative Names, possibly including wildcard names).
The client's web browser will do the following:
The user wanted to visit "https://myapp.mydomain.com/blog/posts/1".
The request is via SSL and the domain name in the request is "myapp.mydomain.com".
Get the certificate from the Web server.
Ensure that at least one of the names in the certificate is exactly equal to, or wildcard-matches, the domain name in the request.
Display the page.
Therefore, you need a certificate with the exact domain name (or a wildcard matching the exact domain name) by which the application will be used. And the certificate can only be obtained at the time the exact domain name of the website becomes known, or later; it cannot be made available any earlier.
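To see which names a given certificate would actually match in that comparison, you can inspect it (the filename is a placeholder):
# Show the Common Name and any Subject Alternative Names in a certificate
openssl x509 -in cert.pem -noout -subject
openssl x509 -in cert.pem -noout -text | grep -A1 "Subject Alternative Name"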
You seem to be under the misapprehension that somehow a certificate can "create" or "install" an SSL connection. That is false. The Web server - Apache, IIS, Nginx, LigHTTPD, or whichever one you happen to use - is the program that knows how to handle every aspect of SSL connectivity. The certificate is just a file that the Web server sends to the client, without even opening or using it in any way.
Additionally, the author of a webapp to be distributed is not responsible for creating or distributing certificates, and should not be under the misapprehension that he is responsible. Only the website maintainer should be responsible for obtaining a certificate for his website. As another person remarked, in your installation process or perhaps in a post-installation process, you may ask the person installing the webapp for a certificate. But that is the best you can do.
The best you can do is to buy a wildcard SSL certificate - but wait, it's not what you think. You still need to know the second-level domain (the TLD being ".com") ahead of time. You can effectively ask for a cert that covers *.foo.com - then any site, a.foo.com, b.foo.com, will be covered. Of course, these certs are more expensive than FQDN certs because you are doing the buggers out of some extra coin.
-Oisin
Each of those sites should have their own SSL certificate. Why not prompt the user to provide the cert file during installation?
In most (if not all) cases, the SSL certificate is associated with the webserver (apache, IIS, etc.) and is not part of your application. It's up to the admin of the web server to install the certificate and not you as the author of the program.
If your installation program does have the ability to modify the web server configuration, and you are willing to have it use a self-signed certificate, you can script the creation of the certificate to allow the domain name to be input. However, I sense this is not really available to you. Also, a self-signed certificate will generally cause certificate warnings.
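If scripting the creation of such a self-signed certificate is an option, a minimal sketch with OpenSSL 1.1.1+ (where the domain is whatever name the installer is given) could be:
# DOMAIN is supplied at install time
DOMAIN=target.example.com
openssl req -x509 -newkey rsa:2048 -sha256 -days 825 -nodes \
  -keyout "$DOMAIN.key" -out "$DOMAIN.crt" \
  -subj "/CN=$DOMAIN" -addext "subjectAltName=DNS:$DOMAIN"
It will still trigger warnings unless the client explicitly trusts that certificate, as noted above.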
If I understand you correctly there might be a solution to your problem now. This solution won't help you, however, if you have no control over specifying what SSL certificates are served from the web server where your program is installed (as mentioned by someone else). If your program itself contains a web server you won't have this issue.
If you start with a trusted https website, you can make cross-domain TLS (SSL) XmlHttpRequests to the web servers that are running your application. This is made possible using the opensource Forge project. The project uses a TLS implementation written in JavaScript and a small Flash swf to handle the cross-domain requests. Your program will need to serve an XML Flash policy file that grants the trusted website access to the web server running the application.
Your program will also need to generate a self-signed SSL certificate and upload it to the trusted website. From there, each program's certificate can be included as trusted via the JavaScript TLS implementation. Alternatively, you can have your program upload its certificate to be signed by a CA you create, using a common or subject alternative name that is appropriate for your use (it doesn't have to be the domain name). Then you can use JavaScript to trust the CA certificate and look for the correct name on each certificate.
For more details check out the Forge project at github:
http://github.com/digitalbazaar/forge/blob/master/README
The links to the blog posts at the end provide more in-depth information about how it works.