How do I create a TLS cert for a three-node server domain that covers the parent domain as well?

I'm not even sure I asked the question right...
I have three servers running minio in distributed mode. I need all three servers to run with TLS enabled. It's easy enough to run certbot, generate a cert for each node, drop said certs into /etc/minio/certs/, and go! But here's where I start running into issues.
The servers are thus:
node1.files.example.com
node2.files.example.com
node3.files.example.com
I'm launching minio using the following command:
MINIO_ACCESS_KEY=minio \
MINIO_SECRET_KEY=secret \
/usr/local/bin/minio server \
-C /etc/minio --address ":443" \
https://node{1...3}.files.example.com:443/volume/{1...4}/
This works, and I am able to connect to all three servers from a web browser over HTTPS with good certs. However, users will connect to the service using the parent domain "files.example.com" (via distributed DNS).
I already ran certbot and generated the certs for the parent domain, and I copied them into /etc/minio/certs/ as well as /etc/minio/certs/CAs/ (naming the files "files.example.com-public.crt" and "files.example.com-public.key" respectively). This did not work: when I try to open the parent domain "files.example.com" I get a cert error (which I can bypass) indicating the certificate is for the node I connected to and not for the parent domain.
I'm pretty sure this is just a matter of putting the cert in the right place and naming it correctly... right? Does anyone know how to do that? I also have an idea there might be a way to issue a cert that covers multiple domains... is that how I'm supposed to do this? How?
I already hit up minio's Slack channel and posted on their GitHub, but no one's replying to me. Not even a "this won't work."
Any ideas?

I gave up and ran certbot in manual mode. It had me install Apache on one of the nodes, then jump through a couple of minor hoops (namely, creating a new TXT record with my DNS provider and creating a file with a text string on the server for verification). I then copied the resulting certs into my minio config directory (/etc/minio/certs/) on all three nodes. That's it.
To be honest, I'd rather use the plugin, as it allows for automated cert renewal, but I'll live with this for now.
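For reference, here is roughly what that looks like as a single certbot command issuing one SAN certificate that covers the parent domain and all three nodes. This is a sketch only: the domain names come from the question, and the copy step assumes minio's default public.crt/private.key naming in its certs directory, so double-check that against your minio version.
sudo certbot certonly --manual --preferred-challenges dns \
  -d files.example.com \
  -d node1.files.example.com \
  -d node2.files.example.com \
  -d node3.files.example.com
# copy the resulting cert and key to every node under the names minio expects
sudo cp /etc/letsencrypt/live/files.example.com/fullchain.pem /etc/minio/certs/public.crt
sudo cp /etc/letsencrypt/live/files.example.com/privkey.pem /etc/minio/certs/private.key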

You could also run all of them behind a reverse proxy that handles TLS termination with a wildcard certificate (i.e. *.files.example.com). The reverse proxy would centralize the certificates, DNS, and (if you prefer) the certbot renewal script on a single node, essentially load balancing TLS and DNS for the minio nodes. The performance hit of "load balancing" TLS like this may be acceptable depending on your workload, given the simplification to your current DNS and TLS cert setup.
[Digital Ocean example using nginx and certbot plugins] https://www.digitalocean.com/community/tutorials/how-to-create-let-s-encrypt-wildcard-certificates-with-certbot
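If you go that route, the wildcard cert itself has to be issued via the DNS-01 challenge. A rough sketch with certbot in manual mode (swap --manual for your DNS provider's certbot plugin, as in the linked tutorial, if you want automated renewal):
sudo certbot certonly --manual --preferred-challenges dns \
  -d files.example.com -d '*.files.example.com'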

Related

Traefik: How do I best configure many websites (static virtual hosts) on the same backend container?

I have a webserver (currently nginx, but could just as well be Apache) which hosts many static websites as virtual hosts. With "many", I mean dozens, and I keep adding and removing them.
In my setup, I have a docker container with traefik, and a docker container with nginx. The same nginx container serves all these websites (that point is key to my question).
What is the best way to tell traefik about these host names, so that traefik can create Let's Encrypt certificates for them and route traffic to this container?
The standard way seems to be to use a label on the nginx container, e.g.
docker run ...
-l traefik.backend=webserver \
-l traefik.port=80 \
-l traefik.frontend.rule="Host:example.com,www.example.com,docs.example.com,example.net,www.example.net,docs.example.net,example.org,www.example.org,example.de,www.example.de,development.com,www.development.com"
and so on. That list goes on and on and on. This works, but:
This is not very maintainable.
Worse, Traefik seems to pull one single cert for all these names. Let's say development.com is a completely different entity from example.com, and I don't want both of them listed in the same cert.
Even worse, let's say I made a mistake somewhere. I misconfigured docs.example.net. Or, worse, they all work, but at some point in the future I forget to renew example.net, and my Let's Encrypt cert needs to be renewed. Now that renewal will fail, because if any one of the host names fails to verify, Let's Encrypt will refuse the certificate, which is totally correct. But it means that all my websites could suddenly be down, at some unforeseeable time in the future, if any one of the hostnames has a problem. That's a big risk, and one I shouldn't take. The websites should be independent at the certificate level.
It appears I am not using this right. So, my question is: How can I better configure this, so that each website is independent (in the configuration of traefik, and esp. in the SSL certificate), but I still use only one webserver container for all of them?
Here's what I tried:
I tried to manually configure the certificates in [acme] sections:
[[acme.domains]]
main = "example.com"
sans = [ "www.example.com" ]
[[acme.domains]]
main = "example.org"
sans = [ "www.example.org" ]
That looks more sane to me than the long label line on docker run. Traefik apparently tries to get these certs and writes them to acme.json, but it doesn't seem to use them. Even with these lines, Traefik still uses the cert that has all the hostnames from the traefik.frontend.rule instead of the manually configured, more specific cert. That seems ill-advised.
Also, if I remove the hostname from the traefik.frontend.rule, traefik doesn't find the backend and returns a 404 to the client. That's logical, because traefik doesn't know where to route the traffic for this host.
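For what it's worth, one way to check which certificate traefik is actually presenting for a given name is an openssl probe like this (example.com is just a placeholder):
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'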
I tried to set up [frontend] rules.
[frontends]
[frontends.example]
backend = "webserver"
[frontends.example.routes.com]
rule = "Host:example.com,www.example.com,docs.example.com"
[frontends.example.routes.org]
rule = "Host:example.org,www.example.org,docs.example.org"
...
That seems to be the right direction, although the configuration directives are very chatty, esp. all the section headers.
But I couldn't get this to work; all I got was "backend not found" in the traefik access log.

Let's Encrypt SSL certificate on subdomain

I developed an application for a client, which I host on a subdomain; the problem is that I don't own the main domain/website. They've added a DNS record to point the subdomain at the IP where I host that app. Now I want to request a free, automatic certificate from Let's Encrypt. But when I try the handshake it says:
Getting challenge for subdomain.example.com from acme-server...
Error: http://subdomain.example.com/.well-known/acme-challenge/letsencrypt_**** is not reachable. Aborting the script.
dig output for subdomain.example.com:subdomain.example.com
Please make sure /.well-known alias is setup in WWW server.
Which makes sense, because I don't host that main domain on my server. But if I try to generate the certificate without the main domain I get:
You must include your main domain: example.com.
Cannot Execute Your Request
Details
Must include your domain example.com in the LetsEncrypt entries.
So I'm curious how I can set up a certificate without owning the main domain. I tried googling the issue but couldn't find any relevant results. Any help would be much appreciated.
First
You don't need to own the domain; you just need to be able to copy a file to the location serving that domain. (It sounds like you're all set there.)
Second
What tool are you using? The error message you gave makes me think the client is misconfigured. The challenge name is usually something like https://example.com/.well-known/acme-challenge/jQqx6qlM8u3wpi88N6lwvFd7SA07oK468mB1x4YIk1g. Compare that to your error:
Error: http://example.com/.well-known/acme-challenge/letsencrypt_example.com is not reachable. Aborting the script.
Third
I'm the author of Greenlock, which is compatible with Let's Encrypt. I'm confident that it will work for you.
Install
# Feel free to read the source first
curl -fsS https://get.greenlock.app/ | bash
Usage with existing webserver:
Let's say that:
You're using Apache or Nginx.
You confirm that ping example.com gives the IP of your server
You're exposing http on port 80 (otherwise verification will fail)
Your website is located in /srv/www/example.com
Your email is jon@example.com (must be a real email address)
You want to store your certificate as /etc/acme/live/example.com/fullchain.pem
This is what the command would look like:
sudo greenlock certonly --webroot \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--root /srv/www/example.com \
--config-dir /etc/acme
If that doesn't work on the first try, swap --acme-url https://acme-v02.api.letsencrypt.org/directory for --acme-url https://acme-staging-v02.api.letsencrypt.org/directory while you debug; otherwise your server could get rate-limited by Let's Encrypt for too many bad requests. Just know that you'll have to delete the certificates from the staging environment and retry with the production URL, since the tool cannot tell which certificates are "production" and which are "testing".
The --community-member flag is optional, but will provide me with analytics and allow me to contact you about important or mandatory changes as well as other relevant updates.
After you get the success message you can then use those certificates in your webserver config and restart it.
That will work as a cron job as well. You could run it daily and it will only renew the certificate after about 75 days. You could also add a cron job to send the "update configuration" signal to your webserver (normally HUP or USR1) every few days so it starts using the new certificates without even restarting (...or just have it restart).
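As an illustration only (the schedule, paths, and reload command are assumptions, not something the tool sets up for you), a daily /etc/cron.d entry along those lines could look like this; note that a cron entry has to stay on one line:
0 3 * * * root greenlock certonly --webroot --acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory --agree-tos --email jon@example.com --domains example.com --root /srv/www/example.com --config-dir /etc/acme && systemctl reload nginx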
Usage without a web server
If you just want to quickly test without even having a webserver running, this will do it for you:
sudo greenlock certonly --standalone \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--config-dir /etc/acme/
That runs expecting that you DO NOT have a webserver running on port 80, as it will start one temporarily just for the purpose of the certificate.
sudo is required for using port 80 and for writing to root and httpd-owned directories (like /etc and /srv/www). You can run the command as your webserver's user instead if that has the correct permissions.
Use Greenlock as your webserver
We're working on an option to bypass the middleman altogether and simply use greenlock as your webserver, which would probably work great for the kind of simple vhosting it sounds like you're doing. Let me know if that's interesting to you and I'll make sure to update you about it.
Fourth
Let's Encrypt also has an official client called certbot which will likely work just as well, perhaps better, but back in the early days it was easier for me to build my own than to use theirs due to issues which they have long since fixed.
What's important is the subdomain's A record. It should point to the IP address of the server from which you are requesting the subdomain's certificate.
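A quick way to check that (subdomain.example.com is the placeholder from the question; the second command uses a third-party what's-my-IP service for comparison):
dig +short subdomain.example.com A     # what the world resolves the name to
curl -4 -s https://ifconfig.me; echo   # this server's public IPv4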

HAProxy won't recognize new certificate

I recently switched my certificate to Let's Encrypt's.
I placed the new certificate in the location of the old one:
cat /etc/haproxy/certs/fullchain.pem /etc/haproxy/certs/privkey.pem > /etc/haproxy/certs/mydomain.com.pem
And in my haproxy.cfg I have:
frontend https
bind :::8443 v4v6 ssl crt /etc/haproxy/certs/mydomain.com.pem no-sslv3
Then I ran systemctl reload haproxy, but it still serves the old one when I access it in my browser or via SSLLabs.
If I use curl -kv mydomain.com it shows the correct certificate.
I had this same issue where, even after reloading the config, haproxy would randomly serve old certs. After looking around for many days, the cause turned out to be that the "reload" operation created a new process without killing the old one, and the old processes were still serving the outdated certs. You can check this with ps aux | grep haproxy.
Fix
If your environment allows for a few seconds of downtime, run service haproxy stop until no haproxy processes are left, and then start haproxy.
OR
Sort the processes by start time and kill the old ones, checking in between that the service is still running (see the sketch below).
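A rough sketch of that second option (check carefully that the newest process stays up before killing anything; pgrep -o picks the oldest matching process):
ps -C haproxy -o pid,lstart,args --sort=start_time   # list haproxy processes, oldest first
kill $(pgrep -o haproxy)                             # kill the oldest one; repeat as needed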
1 Year later EDIT
Instead of manually applying the fix above after every reload, we added a hard-stop-after of 600 seconds to the HAProxy config. We made sure to kill all old processes after adding the parameter and verified with ps aux. Now old processes are forced to exit after 600 seconds and cannot keep serving outdated certs.
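For reference, hard-stop-after is a directive in the global section of haproxy.cfg; a hedged sketch of applying it (file paths assume a stock Debian-style setup):
# in haproxy.cfg, under the "global" section, add:
#     hard-stop-after 600s
sudo haproxy -c -f /etc/haproxy/haproxy.cfg   # validate the config first
sudo systemctl reload haproxy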
If you still have the old pem file in /etc/haproxy/certs, HAProxy might be using it instead of the new one.
I had a similar problem. HAProxy was using an expired certificate that was originally created for only dev.domain.com with Let's Encrypt. Later I changed the certificate creation process to include multiple domains:
domain.com, www.domain.com, and dev.domain.com.
The old dev.domain.com.pem was still in the /etc/haproxy/certs folder. When I visited https://dev.domain.com, HAProxy used the old pem file and Chrome issued a warning for an expired certificate.
When I deleted the dev.domain.com.pem file and reloaded HAProxy, it started using the new certificate and SSL worked correctly again.
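If you want to see which certificate HAProxy is actually serving for a given SNI name, and its validity dates, something like this helps (hostname taken from the answer above; adjust the port if you bind elsewhere, e.g. 8443 as in the question):
echo | openssl s_client -connect dev.domain.com:443 -servername dev.domain.com 2>/dev/null \
  | openssl x509 -noout -subject -dates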
My problem was a historic, outdated wildcard cert that HAProxy (HA-Proxy version 1.8.19-1+deb10u3 2020/08/01) erroneously picked up and served as an outdated subdomain cert, both in the browser and in cURL.
Reloading, restarting, stopping and starting, and even upgrading Debian did not help. What did help was removing the outdated wildcard cert and reloading.

Copied a Let's Encrypt certificate from one server to another, how to auto-renew?

I have just copied an SSL certificate (generated via the Let's Encrypt certbot) from one server (A) to another (B). I created a custom directory on the new server, let's say /home/my-certificate/, and copied fullchain.pem and privkey.pem from (A) to (B). Everything works, the server is alive, and the certificates are OK. Now I want to enable auto-renewal on the new server (B). How can I do that?
Two good options stand out
Copy the Let's Encrypt certbot metadata from A to B as well, then install and continue to use certbot to renew as usual. This metadata is kept in /etc/letsencrypt/ and it tracks how your certificate was issued, from which certbot will conclude how it should renew it.
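A minimal sketch of that first option, assuming certbot's default /etc/letsencrypt layout and root SSH access from B to A (serverA is a placeholder):
sudo rsync -a root@serverA:/etc/letsencrypt/ /etc/letsencrypt/   # run on B: pull certbot's state from A
sudo certbot renew --dry-run                                     # confirm certbot on B knows how to renew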
OR
Install certbot and perform a fresh certificate request on B, any time between now and when the existing certificate would expire. Assuming the certificate is for the exact same list of FQDNs, this will only count against the per-certificate limit of 5 duplicate certificates per week, which is fine unless you're going to do this transition every day, or you keep screwing it up and having to try again.
You need to copy the letsencrypt renewal config to the new server, and then modify the nginx config to point to the new custom location: /home/my-certificate/.
I would suggest moving your certs to the exact same location on the new server; in that case you can just copy and paste the certs and config without any modification.
Here is the list of steps:
Archive certificates on the old servers
Move them to a new server
Extract to the correct location
Create symlinks
Redirect domain
Based on this article
In addition to Druss's answer, a few more steps need to be followed.
The solution provided will encounter problems when you try to renew the certificate.
To resolve this, a new certbot account should be registered and the renewal conf file edited to point to the new account. I followed the steps provided in this link.
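With certbot, that boils down to roughly the following; treat it as a sketch, since the ACME directory path and the renewal conf file name depend on your setup:
sudo certbot register --email you@example.com --agree-tos
ls /etc/letsencrypt/accounts/acme-v02.api.letsencrypt.org/directory/   # the new account ID is the directory name
sudoedit /etc/letsencrypt/renewal/example.com.conf                     # set: account = <new account ID>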

OpenSSL in GitLab, what verification for self-signed certificate?

On Debian, using GitLab, I ran into issues with my self-signed certificate.
Reading through the code after a lot of searching on the Internet (I guess it's the last resort; FOSS is helpful), I found the following lines in gitlab-shell/lib/gitlab_net.rb, which left me... perplexed.
if config.http_settings['self_signed_cert']
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE
end
Most Stack Overflow responses about the diverse issues I've had until now have led me to believe that VERIFY_NONE, as you'd expect, doesn't verify anything. VERIFY_PEER seems, based on my reading, to be the correct setting for self-signed.
As I read it, it feels like taking steps to secure my connection using a certificate, and then just deciding to not use it? Is it a bug, or am I misreading the source?
gitlab-shell (on the GitLab server) has to communicate with the GitLab instance through an HTTPS or SSH API URL.
If it is a self-signed certificate, it doesn't want any error/warning when trying to access those GitLab URLs, hence the SSL::VERIFY_NONE.
But, that same certificate is also used by clients (outside of the GitLab server), using those same GitLab HTTPS URLs from their browser.
For them, the self-signed certificate is useful, provided they install it in their browser keystore.
For those transactions (clients to GitLab), the certificate will be "verified".
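As an aside, on Debian (the OP's platform) an alternative to VERIFY_NONE is to add the self-signed certificate to the system CA store so verification can stay enabled. A hedged sketch (the file name is a placeholder; it must be a PEM file ending in .crt):
sudo cp gitlab.example.com.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates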
The OP Kheldar points out, quoting Mislav's post:
OpenSSL expects to find each certificate in a file named by the certificate subject’s hashed name, plus a number extension that starts with 0.
That means you can’t just drop My_Awesome_CA_Cert.pem in the directory and expect it to be picked up automatically.
However, OpenSSL ships with a utility called c_rehash which you can invoke on a directory to have all certificates indexed with appropriately named symlinks.
(See for instance OpenSSL Verify location)
cd /some/where/certs
c_rehash .
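After rehashing, you can check that OpenSSL actually picks the CA up from that directory (the certificate file name below is a placeholder):
ls -l /some/where/certs                                     # hashed symlinks such as a1b2c3d4.0 should now exist
openssl verify -CApath /some/where/certs my_server_cert.pem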