Traefik: How do I best configure many websites (static virtual hosts) on the same backend container?

I have a webserver (currently nginx, but could just as well be Apache) which hosts many static websites as virtual hosts. With "many", I mean dozens, and I keep adding and removing them.
In my setup, I have a docker container with traefik, and a docker container with nginx. The same nginx container serves all these websites (that point is key to my question).
What is the best way to tell traefik about these host names, so that traefik can create Let's Encrypt certificates for them and route traffic to this container?
The standard way seems to be to use a label on the nginx container, e.g.
docker run ...
-l traefik.backend=webserver \
-l traefik.port=80 \
-l traefik.frontend.rule="Host:example.com,www.example.com,docs.example.com,example.net,www.example.net,docs.example.net,example.org,www.example.org,example.de,www.example.de,development.com,www.development.com"
and so on. That list goes on and on and on. This works, but:
This is not very maintainable.
Worse, Traefik seems to pull one single cert for all these names. Let's say development.com is a completely different entity from example.com, and I don't want both of them to be listed in the same cert.
Even worse, let's say I made a mistake somewhere and misconfigured docs.example.net. Or, worse, they all work now, but at some point in the future I forget to renew the example.net domain, and my Let's Encrypt cert comes up for renewal. That renewal will fail, because if any one of the host names fails to verify, Let's Encrypt will refuse the certificate, which is totally correct. But it means that all my websites will be down, suddenly, at some unforeseeable time in the future, if any one of the hostnames has a problem. That's a big risk, and one I shouldn't take. The websites should be independent in the certificate.
It appears I am not using this right. So, my question is: How can I better configure this, so that each website is independent (in the configuration of traefik, and esp. in the SSL certificate), but I still use only one webserver container for all of them?
Here's what I tried:
I tried to manually configure the certificates in [acme] sections:
[[acme.domains]]
main = "example.com"
sans = [ "www.example.com" ]
[[acme.domains]]
main = "example.org"
sans = [ "www.example.org" ]
That looks more sane to me than the long label line on docker run. traefik apparently tries to get these certs, and writes them to acme.json. But it doesn't seem to use them. Even with these lines, traefik still uses the cert that has all the hostnames from the traefik.frontend.rule instead of the manually configured, more specific cert. That seems ill-advised.
Also, if I remove the hostname from the traefik.frontend.rule, traefik doesn't find the backend and returns a 404 to the client. That's logical, because traefik doesn't know where to route the traffic for this host.
I tried to set up [frontend] rules.
[frontends]
[frontends.example]
backend = "webserver"
[frontends.example.routes.com]
rule = "Host:example.com,www.example.com,docs.example.com"
[frontends.example.routes.org]
rule = "Host:example.org,www.example.org,docs.example.org"
...
That seems to be the right direction, although the configuration directives are very chatty, esp. all the section headers.
But I couldn't get this to work; all I got was "backend not found" in the traefik access log.
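Presumably I would also need to define the backend in the same .toml file, since frontends defined there apparently cannot reference the Docker-provided backend. Something like this, I guess (the nginx address here is made up; it would be the container's name on the shared docker network):
# Hypothetical file-provider backend so the frontends above
# have something to point at:
[backends]
  [backends.webserver]
    [backends.webserver.servers.server1]
      url = "http://nginx:80"
If each website then gets its own frontend, traefik should also request a separate certificate per frontend rule, which would address the shared-cert problem.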

Related

Kubernetes/Ingress/TLS - block access with IP Address in URL

A pod is accessible via nginx-ingress and https://FQDN. That works well with the configured public certificates. But if someone uses https://IP_ADDRESS, they will get a certificate error because of the default "Kubernetes Fake Certificate". Is it possible to block access via the IP_ADDRESS URL completely?
I think you would first need the TLS handshake to complete before Nginx could deny the access.
HAproxy, on the other hand, may be able to close the connection while checking the ServerName, say by setting some ACL in your https frontend that routes applications to their backends. Though I'm not sure this would be doable without mounting a custom HAproxy configuration template into your ingress controller.
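A rough sketch of the kind of ACL I mean, assuming a TCP-mode frontend (names and addresses are placeholders):
frontend https-in
    bind *:443
    mode tcp
    # Wait until the TLS ClientHello has arrived so the SNI is readable
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    # Close the connection unless the client asked for the expected name,
    # e.g. clients connecting by raw IP address send no SNI at all
    tcp-request content reject unless { req.ssl_sni -i app.example.com }
    default_backend app
backend app
    mode tcp
    server app1 10.0.0.10:8443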

How do I create a tls cert for a three node server domain that covers the parent domain as well?

I'm not even sure I asked the question right...
I have three servers running minio in distributed mode. I need all three servers to run with TLS enabled. It's easy enough to run certbot, generate a cert for each node, drop said certs into /etc/minio/certs/ and go! But here's where I start running into issues.
The servers are thus:
node1.files.example.com
node2.files.example.com
node3.files.example.com
I'm launching minio using the following command:
MINIO_ACCESS_KEY=minio \
MINIO_SECRET_KEY=secret \
/usr/local/bin/minio server \
-C /etc/minio --address ":443" \
https://node{1...3}.files.example.com:443/volume/{1...4}/
This works and I am able to connect to all three servers from a web browser using https with good certs. However, users will connect to the servers using the parent domain "files.example.com" (using distributed DNS).
I already ran certbot and generated the certs for the parent domain... and I copied the certs into /etc/minio/certs/ as well as /etc/minio/certs/CAs/ (naming the files "files.example.com-public.crt" and "files.example.com-public.key" respectively)... this did not work. When I try to open the parent domain "files.example.com" I get a cert error (which I can bypass) indicating the certificate is for the node to which I have connected and not for the parent domain.
I'm pretty sure this is just a matter of putting the cert in the right place and naming it correctly... right? Does anyone know how to do that? I also have an idea there might be a way to issue a cert that covers multiple domains... is that how I'm supposed to do this? How?
I already hit up minio's slack channel and posted on their github, but no one's replying to me. Not even a "this won't work."
Any ideas?
I gave up and ran certbot in manual mode. I had to install apache on one of the nodes, and then certbot had me jump through a couple of minor hoops (namely, it had me create a new TXT record with my DNS provider, and then create a file with a text string on the server for verification). I then copied the created certs into my minio config directory (/etc/minio/certs/) on all three nodes. That's it.
To be honest, I'd rather use the plugin as it allows for automated cert renewal, but I'll live with this for now.
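For the record, the manual-mode run looked roughly like this (reconstructed from memory, so treat the exact flags as approximate):
sudo certbot certonly --manual \
    --preferred-challenges dns \
    -d files.example.com
# certbot then prompts for the DNS TXT record mentioned above.
# Afterwards, copy the results into minio's cert directory on each node
# (minio expects the names public.crt and private.key, as far as I know):
sudo cp /etc/letsencrypt/live/files.example.com/fullchain.pem /etc/minio/certs/public.crt
sudo cp /etc/letsencrypt/live/files.example.com/privkey.pem /etc/minio/certs/private.key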
You could also run all of them behind a reverse proxy that handles the TLS termination using a wildcard domain certificate (i.e. *.files.example.com). The reverse proxy would centralize the certificates, DNS, certbot scripts, etc. on a single node, essentially load balancing the TLS and DNS for the minio nodes. The performance hit of "load-balancing" TLS like this may be acceptable depending on your workload, considering the simplification to your current DNS and TLS cert setup.
Digital Ocean example using nginx and certbot plugins: https://www.digitalocean.com/community/tutorials/how-to-create-let-s-encrypt-wildcard-certificates-with-certbot
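A minimal nginx sketch of that idea, assuming the minio nodes are switched to plain HTTP on an internal port (9000 here is an example) and the wildcard cert lives in the usual letsencrypt path:
upstream minio_nodes {
    server node1.files.example.com:9000;
    server node2.files.example.com:9000;
    server node3.files.example.com:9000;
}
server {
    listen 443 ssl;
    server_name files.example.com *.files.example.com;
    ssl_certificate     /etc/letsencrypt/live/files.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/files.example.com/privkey.pem;
    location / {
        # TLS terminates here; minio itself no longer needs certificates
        proxy_pass http://minio_nodes;
        proxy_set_header Host $host;
    }
}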

Let's encrypt SSL certificate on subdomain

I developed an application for a client which I host on a subdomain. Now, the problem is that I don't own the main domain/website. They've added a DNS record to point to the IP on which I host that app. Now I want to request a free & automatic certificate from Let's Encrypt, but when I try the handshake it says:
Getting challenge for subdomain.example.com from acme-server...
Error: http://subdomain.example.com/.well-known/acme-challenge/letsencrypt_**** is not reachable. Aborting the script.
dig output for subdomain.example.com: subdomain.example.com
Please make sure /.well-known alias is setup in WWW server.
Which makes sense cause I don't own that domain on my server. But if I try to generate it without the main domain I get:
You must include your main domain: example.com.
Cannot Execute Your Request
Details
Must include your domain example.com in the LetsEncrypt entries.
So I'm curious on how I can just set up a certificate without owning the main domain. I tried googling the issue but I couldn't find any relevant results. Any help would be much appreciated.
First
You don't need to own the domain, you just need to be able to copy a file to the location serving that domain. (You're all set there it sounds like)
Second
What tool are you using? The error message you gave makes me think the client is misconfigured. The challenge name is usually something like https://example.com/.well-known/acme-challenge/jQqx6qlM8u3wpi88N6lwvFd7SA07oK468mB1x4YIk1g. Compare that to your error:
Error: http://example.com/.well-known/acme-challenge/letsencrypt_example.com is not reachable. Aborting the script.
Third
I'm the author of Greenlock, which is compatible with Let's Encrypt. I'm confident that it will work for you.
Install
# Feel free to read the source first
curl -fsS https://get.greenlock.app/ | bash
Usage with existing webserver:
Let's say that:
You're using Apache or Nginx.
You confirm that ping example.com gives the IP of your server
You're exposing http on port 80 (otherwise verification will fail)
Your website is located in /srv/www/example.com
Your email is jon@example.com (must be a real email address)
You want to store your certificate as /etc/acme/live/example.com/fullchain.pem
This is what the command would look like:
sudo greenlock certonly --webroot \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--root /srv/www/example.com \
--config-dir /etc/acme
If that doesn't work on the first try, then swap --acme-url https://acme-v02.api.letsencrypt.org/directory for --acme-url https://acme-staging-v02.api.letsencrypt.org/directory while you debug. Otherwise your server could become blocked by Let's Encrypt for too many bad requests. Just know that you'll have to delete the certificates from the staging environment and retry with the production URL, since the tool cannot tell which certificates are "production" and which are "testing".
The --community-member flag is optional, but will provide me with analytics and allow me to contact you about important or mandatory changes as well as other relevant updates.
After you get the success message you can then use those certificates in your webserver config and restart it.
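For example, a minimal nginx vhost using those files might look like this (I'm assuming the private key is written as privkey.pem next to fullchain.pem):
server {
    listen 443 ssl;
    server_name example.com;
    # Paths follow the --config-dir /etc/acme layout from above
    ssl_certificate     /etc/acme/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/acme/live/example.com/privkey.pem;
    root /srv/www/example.com;
}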
That will work as a cron job as well. You could run it daily; it will only renew the certificate after about 75 days. You could also add a cron job to send the "update configuration" signal to your webserver (normally HUP or USR1) every few days to make it start using the new certificates without even restarting (...or just have it restart).
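As a sketch, a daily cron entry along these lines would do (same flags as above; the reload command depends on your webserver):
# Hypothetical /etc/cron.d entry: attempt renewal daily at 02:30;
# greenlock only actually renews once the cert is ~75 days old
30 2 * * * root greenlock certonly --webroot --acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory --agree-tos --email jon@example.com --domains example.com --root /srv/www/example.com --config-dir /etc/acme && systemctl reload nginx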
Usage without a web server
If you just want to quickly test without even having a webserver running, this will do it for you:
sudo greenlock certonly --standalone \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--config-dir /etc/acme/
That runs expecting that you DO NOT have a webserver running on port 80, as it will start one temporarily just for the purpose of the certificate.
sudo is required for using port 80 and for writing to root and httpd-owned directories (like /etc and /srv/www). You can run the command as your webserver's user instead if that has the correct permissions.
Use Greenlock as your webserver
We're working on an option to bypass the middleman altogether and simply use greenlock as your webserver, which would probably work great for simple vhosting like it sounds like you're doing. Let me know if that's interesting to you and I'll make sure to update you about it.
Fourth
Let's Encrypt also has an official client called certbot which will likely work just as well, perhaps better, but back in the early days it was easier for me to build my own than to use theirs due to issues which they have long since fixed.
What's important is the subdomain's A record. It should be the IP address of the server from which you are requesting the subdomain's certificate.
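You can verify that quickly (hypothetical commands; ifconfig.me is just one of several what-is-my-IP services):
# The A record of the subdomain...
dig +short subdomain.example.com A
# ...should match the public IP of the machine requesting the cert:
curl -s https://ifconfig.me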

Internal and external services running behind Traefik in Docker Swarm mode

I'm having some trouble finding any way to make my situation workable. I have 2 applications:
1: External service web application running on sub1.domain.com. If I run this application behind traefik with ACME (Let's Encrypt), it works fine. I have a few more backend services (api/auth) that all run with a valid Let's Encrypt certificate and get their http traffic redirected to https by traefik:
[entryPoints.http.redirect]
entryPoint = "https"
I have to have some form of http to https forwarding for this service.
2: Internal service web application running on sub2.domain.com. I have a self-signed trusted certificate (internal CA) which works fine behind traefik if I set it as the default certificate, or if I use it in the application itself (inside tomcat). However, since it is an internal service, I can live without ssl for this if that solves my problem; that does not work with traefik's http to https forwarding, though.
I have been trying to get these 2 services to run behind the same traefik instance, but all the scenarios I could think of fail, because they are either still work in progress or just plain not working.
Scenarios
1: No http to https redirect, don't bother with https for the internal service and just use http. Then inside the backend for the external webservice redirect to https.
Problems:
- Unable to have 2 traefik ports to which traefik forwards
- Unable to forward 1 single port to another proto (since the backend is always either the http or https port)
2: Use ACME over default certificate
Someone else thought this was a good idea. It's just not working yet.
3: Re-use the backend's ssl certificate. Have traefik just forward without "ssl termination". I'm not sure if this is the same thing, but there is an option called "passTLSCert". However, it seems that this is only possible with frontends defined in the .toml file, which do not work (probably because I use docker for backends).
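For reference, the file-based form I tried looked roughly like this (and if I read the docs right there may also be a traefik.frontend.passTLSCert docker label, though I haven't gotten that to work):
[frontends]
  [frontends.internal]
    backend = "internal"
    passTLSCert = true
    [frontends.internal.routes.host]
      rule = "Host:sub2.domain.com"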
4: use DNS-01 challenge to create an SSL certificate for my internal service.
Sounds like this could work, so I'm now using CloudFlare and have an API key. However, it does not seem to work for subdomains, and there is no reply on my issue report: https://github.com/containous/traefik/issues/1953
EDIT: I might be able to fix the issue described in 4 to get this to work. It seems the internal DNS might be conflicting with traefik.
Someone decided that in our internal DNS, zones would be added per subdomain, meaning that the SOA request returned the subdomain as the name. This does not play nice with CloudFlare, since the internal DNS zone is not the same as the CloudFlare DNS.
Changing this to a main zone with A records for the subdomains fixed the issue (in combination with the delayDontCheckDNS option).
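For reference, the working [acme] section looks roughly like this (credentials go into the CLOUDFLARE_EMAIL and CLOUDFLARE_API_KEY environment variables; exact option names may differ between traefik versions):
[acme]
email = "admin@domain.com"
storage = "acme.json"
entryPoint = "https"
# Use the DNS-01 challenge via CloudFlare
dnsProvider = "cloudflare"
# Don't pre-check against our (conflicting) internal DNS
delayDontCheckDNS = 0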

How to set up 2 SSL cert on a single webpage

I am running an Apache web server and I am supposed to put 2 SSL certs on a single website. Is this possible? How can I do this? I read the Apache user manual and it says I can only have 1 SSL cert for a single IP and port.
After the comments from the OP:
Set up two subdomains - one for static/to-be-CDN'd content and one for dynamic/not-to-be-CDN'd content.
Get and set up a "wildcard cert" for your domain, i.e. a cert for "*.yourdomain.com"... these are a bit more expensive, but exactly for your situation...
As Yahia points out, a wildcard cert is an option. They are also expensive.
You can certainly have multiple named SSL certs on your server for images.domain.com and static.domain.com or whatever named sites you want and that is not a security issue. In fact, that is considered more secure than a wildcard cert.
It is true that you can only have one named cert per IP, because SSL certs are bound to the IP in the web server config. So you would need multiple IP addresses on the server hosting the sites. If the dynamic and static content are already on different machines, then you're set there, but it sounds like they are on the same machine.
That doesn't mean that the ports need to be different between the sites. You can have both 123.45.67.89 and 123.45.67.88 listening on the same port (443 in this case) on the same machine.
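A sketch of what that looks like in the Apache config, reusing the example names and IPs from above (cert paths are placeholders):
Listen 443
<VirtualHost 123.45.67.89:443>
    ServerName static.domain.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/static.domain.com.crt
    SSLCertificateKeyFile /etc/ssl/private/static.domain.com.key
</VirtualHost>
<VirtualHost 123.45.67.88:443>
    ServerName images.domain.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/images.domain.com.crt
    SSLCertificateKeyFile /etc/ssl/private/images.domain.com.key
</VirtualHost>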
Here is a post I found that looks like it describes the config pretty well.
http://wiki.zimbra.com/wiki/Multiple_SSL_Virtual_Hosts