Let's Encrypt SSL certificate on subdomain

I developed an application for a client, which I host on a subdomain. The problem is that I don't own the main domain/website; they've added a DNS record pointing the subdomain to the IP where I host the app. Now I want to request a free, automatic certificate from Let's Encrypt, but when I run the challenge it says:
Getting challenge for subdomain.example.com from acme-server...
Error: http://subdomain.example.com/.well-known/acme-challenge/letsencrypt_**** is not reachable. Aborting the script.
dig output for subdomain.example.com: subdomain.example.com
Please make sure /.well-known alias is setup in WWW server.
Which makes sense, because that domain isn't served from my server. But if I try to generate the certificate without the main domain, I get:
You must include your main domain: example.com.
Cannot Execute Your Request
Details
Must include your domain example.com in the LetsEncrypt entries.
So I'm curious how I can set up a certificate without owning the main domain. I tried googling the issue, but I couldn't find any relevant results. Any help would be much appreciated.

First
You don't need to own the domain; you just need to be able to copy a file to the location serving that domain. (It sounds like you're all set there.)
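For example, if the /.well-known alias ever turns out to be the problem, a minimal nginx location for it might look like this (a sketch; the webroot path is an assumption):
# serve ACME HTTP-01 challenge files from a fixed webroot
location /.well-known/acme-challenge/ {
    root /srv/www/subdomain.example.com;
}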
Second
What tool are you using? The error message you posted makes me think the client is misconfigured. The challenge name is usually something like https://example.com/.well-known/acme-challenge/jQqx6qlM8u3wpi88N6lwvFd7SA07oK468mB1x4YIk1g. Compare that to your error:
Error: http://example.com/.well-known/acme-challenge/letsencrypt_example.com is not reachable. Aborting the script.
Third
I'm the author of Greenlock, which is compatible with Let's Encrypt. I'm confident that it will work for you.
Install
# Feel free to read the source first
curl -fsS https://get.greenlock.app/ | bash
Usage with existing webserver:
Let's say that:
You're using Apache or Nginx.
You confirm that ping example.com gives the IP of your server
You're exposing http on port 80 (otherwise verification will fail)
Your website is located in /srv/www/example.com
Your email is jon@example.com (it must be a real email address)
You want to store your certificate as /etc/acme/live/example.com/fullchain.pem
This is what the command would look like:
sudo greenlock certonly --webroot \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--root /srv/www/example.com \
--config-dir /etc/acme
If that doesn't work on the first try, swap --acme-url https://acme-v02.api.letsencrypt.org/directory for --acme-url https://acme-staging-v02.api.letsencrypt.org/directory while you debug; otherwise your server could be blocked by Let's Encrypt for too many bad requests. Just know that you'll have to delete the certificates from the staging environment and retry with the production URL, since the tool cannot tell which certificates are "production" and which are "testing".
The --community-member flag is optional, but will provide me with analytics and allow me to contact you about important or mandatory changes as well as other relevant updates.
After you get the success message you can then use those certificates in your webserver config and restart it.
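For example, a minimal nginx TLS block using those paths could look like this (a sketch; it assumes the private key is stored alongside as privkey.pem):
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/acme/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/acme/live/example.com/privkey.pem;   # assumed key path
    root /srv/www/example.com;
}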
That will work as a cron job as well. You could run it daily; it will only renew the certificate after about 75 days. You could also add a cron job that sends the "update configuration" signal to your webserver (normally HUP or USR1) every few days so that it starts using the new certificates without even restarting (...or just have it restart).
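A sketch of such a cron setup (the schedule and the reload command are assumptions):
# /etc/cron.d/acme-renew (hypothetical): attempt a renewal daily at 03:00,
# then reload nginx so it picks up any newly issued certificates
0 3 * * * root greenlock certonly --webroot --acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory --agree-tos --email jon@example.com --domains example.com --root /srv/www/example.com --config-dir /etc/acme && systemctl reload nginx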
Usage without a web server
If you just want to quickly test without even having a webserver running, this will do it for you:
sudo greenlock certonly --standalone \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--config-dir /etc/acme/
That command expects that you DO NOT have a webserver running on port 80; it will start one temporarily just for the purpose of the certificate exchange.
sudo is required for binding port 80 and for writing to root- and httpd-owned directories (like /etc and /srv/www). You can run the command as your webserver's user instead if that user has the correct permissions.
Use Greenlock as your webserver
We're working on an option to bypass the middleman altogether and simply use Greenlock as your webserver, which would probably work great for the kind of simple vhosting it sounds like you're doing. Let me know if that's interesting to you and I'll make sure to update you about it.
Fourth
Let's Encrypt also has an official client called certbot, which will likely work just as well, perhaps better; back in the early days it was easier for me to build my own than to use theirs, due to issues they have long since fixed.

What's important is the subdomain's A record. It should be the IP address of the server from which you are requesting the subdomain's certificate.


Influxdb over SSL connection

I'm a little bit confused about HTTPS communication with InfluxDB. I am running an InfluxDB 1.8 instance on a virtual machine with a public IP. It is an Apache2 server, but for now I am not willing to use it as a webserver to display web pages to clients; I want to use it as a database server for InfluxDB.
I obtained a valid certificate from Let's Encrypt, indeed the welcome page https://datavm.bo.cnr.it works properly over encrypted connection.
Then I followed all the instructions in the docs in order to enable HTTPS: I put the fullchain.pem file in the /etc/ssl directory, set the file permissions (not sure about the meaning of this step though), edited influxdb.conf with https-enabled = true, and set the paths for https-certificate and https-private-key (fullchain.pem for both; is that right?). Then systemctl restart influxdb. When I run influx -ssl -host datavm.bo.cnr.it I get the following:
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
Any help in understanding what I am doing wrong is much appreciated! Thank you.
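To be concrete, the relevant section of my influxdb.conf looks roughly like this (paths shortened; note the actual config key is https-private-key):
[http]
  https-enabled = true
  https-certificate = "/etc/ssl/fullchain.pem"
  https-private-key = "/etc/ssl/fullchain.pem"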
I figured out at least part of the problem. It was related to permissions on the *.pem files. It looks weird, because if I type the following, as the documentation says, it does not connect:
sudo chmod 644 /etc/ssl/<CA-certificate-file>
sudo chmod 600 /etc/ssl/<private-key-file>
If, instead, I set the second line to 644, everything works perfectly. But that way I'm giving anyone permission to read the private key! I'm not able to figure out this point.
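A likely cleaner fix (assuming the Debian package, where the daemon runs as the influxdb user) is to give that user ownership of the key so it can stay at mode 600:
sudo chown influxdb:influxdb /etc/ssl/<private-key-file>
sudo chmod 600 /etc/ssl/<private-key-file>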
UPDATE
If I put symlinks inside /etc/ssl/ pointing to the .pem files that live inside /etc/letsencrypt/live/hostname, the connection is refused. The SSL connection only works if I put a copy of the files there.
The reason I want to put the links inside /etc/ssl/ is the automatic renewal of the certificates.
Can anyone help?
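One way to get automatic renewal without symlinks is a certbot deploy hook that copies the files on each renewal; a sketch, assuming the live directory is /etc/letsencrypt/live/datavm.bo.cnr.it:
certbot renew --deploy-hook \
  'cp /etc/letsencrypt/live/datavm.bo.cnr.it/fullchain.pem /etc/ssl/ &&
   cp /etc/letsencrypt/live/datavm.bo.cnr.it/privkey.pem /etc/ssl/ &&
   chown influxdb:influxdb /etc/ssl/privkey.pem &&
   chmod 600 /etc/ssl/privkey.pem &&
   systemctl restart influxdb'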

How do I create a TLS cert for a three-node server domain that covers the parent domain as well?

I'm not even sure I asked the question right...
I have three servers running MinIO in distributed mode. I need all three servers to run with TLS enabled. It's easy enough to run certbot, generate a cert for each node, drop said certs into /etc/minio/certs/, and go! But here's where I start running into issues.
The servers are thus:
node1.files.example.com
node2.files.example.com
node3.files.example.com
I'm launching minio using the following command:
MINIO_ACCESS_KEY=minio \
MINIO_SECRET_KEY=secret \
/usr/local/bin/minio server \
-C /etc/minio --address ":443" \
https://node{1...3}.files.example.com:443/volume/{1...4}/
This works and I am able to connect to all three servers from a web browser over HTTPS with valid certs. However, users will connect to the cluster using the parent domain files.example.com (using distributed DNS).
I already ran certbot and generated the certs for the parent domain, and I copied the certs into /etc/minio/certs/ as well as /etc/minio/certs/CAs/ (naming the files "files.example.com-public.crt" and "files.example.com-public.key" respectively). This did not work: when I try to open the parent domain files.example.com I get a cert error (which I can bypass) indicating the certificate is for the node I connected to, not for the parent domain.
I'm pretty sure this is just a matter of putting the cert in the right place and naming it correctly... right? Does anyone know how to do that? I also have an idea there might be a way to issue a cert that covers multiple domains... is that how I'm supposed to do this? How?
I already hit up MinIO's Slack channel and posted on their GitHub, but no one's replying to me. Not even "this won't work."
Any ideas?
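On the multi-domain idea: certbot can indeed issue a single SAN certificate covering the parent plus all three nodes. A sketch using the DNS challenge, which does not depend on which host runs it:
certbot certonly --manual --preferred-challenges dns \
  -d files.example.com \
  -d node1.files.example.com \
  -d node2.files.example.com \
  -d node3.files.example.com
You would then copy the resulting cert and key into /etc/minio/certs/ on each node.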
I gave up and ran certbot in manual mode. It had to install Apache on one of the nodes, then certbot had me jump through a couple of minor hoops (namely creating a new TXT record with my DNS provider, and then creating a file with a text string on the server for verification). I then copied the created certs into my MinIO config directory (/etc/minio/certs/) on all three nodes. That's it.
To be honest, I'd rather use the plugin, as it allows for automated cert renewal, but I'll live with this for now.
You could also run all of them behind a reverse proxy that handles TLS termination using a wildcard domain certificate (i.e. *.files.example.com). The reverse proxy would centralize the certificates, DNS, and certbot scripting to a single node, essentially load balancing TLS and DNS for the MinIO nodes. The performance hit of "load balancing" TLS like this may be acceptable depending on your workload, considering the simplification to your current DNS and TLS cert setup.
[Digital Ocean example using nginx and certbot plugins] https://www.digitalocean.com/community/tutorials/how-to-create-let-s-encrypt-wildcard-certificates-with-certbot
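A rough sketch of such an nginx front end (names and ports are assumptions, and note that a *.files.example.com wildcard does not cover the bare files.example.com, so the certificate needs both names):
# terminate TLS for all names and proxy to the MinIO nodes
upstream minio_nodes {
    server node1.files.example.com:9000;
    server node2.files.example.com:9000;
    server node3.files.example.com:9000;
}
server {
    listen 443 ssl;
    server_name files.example.com *.files.example.com;
    ssl_certificate     /etc/letsencrypt/live/files.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/files.example.com/privkey.pem;
    location / {
        proxy_pass http://minio_nodes;
        proxy_set_header Host $host;
    }
}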

LetsEncrypt Certbot rejects DNS TXT record for wildcard Certificate

Task:
I want to create a wildcard certificate for both *.example.com and example.com in one go, using the DNS challenge method provided by the LetsEncrypt Certbot.
Reproduce:
When trying to obtain the certificate files necessary to set up my SSL certificate, I run into a catch-22 with the LetsEncrypt Certbot.
I call the certbot command with these parameters
certbot certonly --agree-tos --manual --preferred-challenges dns --server https://acme-v02.api.letsencrypt.org/directory -d "*.example.com,example.com"
and am then prompted to enter two DNS TXT records for the given domains.
So far, so good. But when I enter the two DNS TXT records exactly as requested by the certbot command, I receive an error message:
IMPORTANT NOTES:
- The following errors were reported by the server:
  Domain: example.com
  Type: unauthorized
  Detail: Incorrect TXT record "[authentication snippet for example.com]"
          found at _acme-challenge.example.com
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
Problem: Certbot does not accept the very same DNS TXT records it has just prompted me to set.
It seems that Certbot cannot cope with the fact that I am requesting the certificate for both "*.example.com" and "example.com" at once, treating them as if they belonged to two different domain realms and not accepting the two TXT records as expected.
It turned out that this error occurred due to a DNS refresh lag at the domain provider. @low_skilled's response helped me figure out that the TXT records I entered took a few minutes to actually be set by the DNS provider, even though the TTL was set to 60 seconds. Thanks for the reply. Problem solved!
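A quick way to avoid that trap is to confirm the records are visible before letting certbot continue, e.g. by querying a public resolver with dig:
dig +short TXT _acme-challenge.example.com @8.8.8.8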
I discovered that you only have to create one TXT record (_acme-challenge) with two values (the hashes given by certbot), one per line. After running certbot, remember to restart your webserver.
I think how you set this up depends on your DNS provider. I did it in AWS Route 53 and it fixed the problem.
Note: consider waiting some seconds (30+) after changing your records.
I know you already fixed your problem, but I think this can help someone learn how certbot works.

DNS NXDOMAIN error with the certbot command

I'm trying to install a Let's Encrypt SSL certificate on my domain and my subdomain.
I was successful installing the SSL certificate on my domain, but I did not succeed on my subdomain.
I use the following command:
certbot certonly --webroot -w /var/www/sub-domain/maxime-mazet.fr/owncloud/ -d cloud.maxime-mazet.fr
/var/www/sub-domain/maxime-mazet.fr/owncloud holds my code.
cloud.maxime-mazet.fr is my subdomain.
My domain maxime-mazet.fr is hosted at OVH.
For cloud.maxime-mazet.fr I created an A record with the IP of the server.
With my domain (maxime-mazet.fr) there is no error, but with my subdomain (cloud.maxime-mazet.fr) the error is:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for cloud.maxime-mazet.fr
Using the webroot path /var/www/sub-domain/maxime-mazet.fr/owncloud for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. cloud.maxime-mazet.fr (http-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: DNS problem: NXDOMAIN looking up A for cloud.maxime-mazet.fr
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: cloud.maxime-mazet.fr
Type: connection
Detail: DNS problem: NXDOMAIN looking up A for
cloud.maxime-mazet.fr
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
The following picture shows my panel with the A records for my domain and my subdomain.
Thanks for your help!
ns13.ovh.net and dns13.ovh.net do not appear to be authoritative for your domain name, as they do not properly reply to queries on it. You will first need to solve that problem; ask OVH if they are indeed the correct hosts to use for your domain. Since you seem to have recently changed something on your domain name, you may just need to wait a little for things to settle.
Have a look at https://www.zonemaster.net/ to run tests on your zone. Until they all pass, do not play with Let's Encrypt.
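You can also check the delegation yourself with dig, e.g.:
# which nameservers does the zone claim?
dig NS maxime-mazet.fr +short
# does the supposed nameserver answer for the subdomain?
dig A cloud.maxime-mazet.fr @ns13.ovh.net +short
If the second query returns nothing, the NXDOMAIN reported by Let's Encrypt is expected.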
Sorry about the screenshot, but ns13 and dns13 are good; I have a new screenshot with all the entries ;)

Ejabberd configuration issue

What I want to achieve is:
I want to run the XMPP service on a separate subdomain like xmpp.domain.com
But at the same time, use usernames like john@domain.com (rather than john@xmpp.domain.com)
Have Jabber clients auto-detect the XMPP service URL xmpp.domain.com while using a username like john@domain.com
Use SSL correctly
Of course, the way I'm thinking about this might not be correct. If you have suggestions, please comment.
What I've done is:
Created a Debian 8 server
Executed
apt-get update && apt-get dist-upgrade
apt-get -y install ejabberd
dpkg-reconfigure ejabberd
A hostname - xmpp.domain.com.
An administrative user - admin, entering the password twice
Placed the SSL pem key for domain.com in /etc/ejabberd/ejabberd.pem
Added all the DNS records (shown in a screenshot)
Then service ejabberd restart
Now when I try to register a new user like
ejabberdctl register admin domain.com 12345
It gives me an error message saying it's not allowed to register such a username. But it works when I enter
ejabberdctl register admin xmpp.domain.com 12345
The problem is...
So basically I can't use the username admin@domain.com while using the server xmpp.domain.com. What am I missing? Any suggestions?
Also, I'm a bit confused about the SSL config and pem file. My SSL certificate currently covers www.domain.com and domain.com. Do I have to buy an SSL cert for xmpp.domain.com as well?
In the dpkg-reconfigure step, you should have used domain.com instead of xmpp.domain.com. ejabberd only needs to know the domain it should use for JIDs; it doesn't need to know the hostname it is actually running on.
Your SRV records and SSL certificate are correct: if you want to use admin@domain.com, you need a certificate for domain.com.
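For reference, the standard XMPP SRV records for this setup look like this (TTL, priority, and weight are example values):
_xmpp-client._tcp.domain.com. 3600 IN SRV 5 0 5222 xmpp.domain.com.
_xmpp-server._tcp.domain.com. 3600 IN SRV 5 0 5269 xmpp.domain.com.
Clients logging in as john@domain.com look up _xmpp-client._tcp.domain.com and are directed to xmpp.domain.com, which is exactly the auto-detection asked about.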