Really appreciate any help with this.
I have a DO droplet (running OpenLiteSpeed on Ubuntu 20.04) that hosts the main domain plus 2 subdomains. On that droplet there are 3 separate folders, one for each website. For example:
maindomain (wordpress)
subdomain1 (wordpress)
subdomain2 (standalone application; I'm not sure how the server is set up since it was set up by a developer)
The SSL certificate is valid for /maindomain and /subdomain1. However, /subdomain2 was set up much later, and for whatever reason the SSL certificate expired only for this subdomain.
I SSH'd into the server and found that certbot is installed and the existing config file for the SSL certificate looked good, so I went ahead and ran the following command:
certbot renew
I got the following error
The following certs are not due for renewal yet:
/etc/letsencrypt/live/subdomain1.domain.com/fullchain.pem expires on 2022-05-30 (skipped)
/etc/letsencrypt/live/domain.com/fullchain.pem expires on 2022-05-30 (skipped)
All renewal attempts failed. The following certs could not be renewed:
/etc/letsencrypt/live/subdomain2.domain.com/fullchain.pem (failure)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1 renew failure(s), 0 parse failure(s)
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: subdomain2.domain.com
Type: unauthorized
Detail: Invalid response from
http://subdomain2.domain.com/.well-known/acme-challenge/RaedorbX25N5YA123TXeUAy43Rsp42_eJmwPYuVfQR8
[IP_ADDRESS]: 404
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
I went ahead and checked the A record, and it's pointing to the correct IP.
I think the easiest solution would be to get a new certificate and install it instead of renewing the existing one; however, I am not sure how to do that, and only for /subdomain2.
I tried to run the following as root:
certbot certonly --standalone -d subdomain2.domain.com
and got the following error
Performing the following challenges:
http-01 challenge for subdomain2.domain.com
Cleaning up challenges
Problem binding to port 80: Could not bind to IPv4 or IPv6.
Thank you for your help in advance
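The standalone authenticator has to bind port 80 itself, which fails while OpenLiteSpeed is already listening there. For what it's worth, a possible workaround is the webroot authenticator, which leaves the running webserver in place; this is only a sketch, and the -w path below is a placeholder for subdomain2's actual document root:

# webroot mode leaves port 80 to OpenLiteSpeed and just drops the challenge file into the site's docroot
certbot certonly --webroot -w /path/to/subdomain2/docroot --cert-name subdomain2.domain.com -d subdomain2.domain.com

Alternatively, the standalone command would work if OpenLiteSpeed were stopped for a moment and restarted afterwards, at the cost of brief downtime.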
SYSTEM INFORMATION
OS type and version: Ubuntu 20.04.3 LTS
Virtualmin version: 6.2.2
I have a Webmin installation with the hostname virtualmin.xxx.com, which is being used for development.
Then I have 2 virtual servers: one called virtualmin and the other xxx.domain.com. Both are used for development.
xxx.domain.com is set as the Default Website for the IP address. So as things stand, when I enter the domain xxx.domain.com it is automatically changed to virtualmin.xxx.com.
I would like to change xxx.domain.com to https://xxx.domain.com but when I go to Server Configuration - SSL Certificate - Let’s Encrypt and enter my domain I get the following error:
Web-based validation failed
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for xxx.domain.com
http-01 challenge for www.xxx.domain.com
Using the webroot path /home/xxx/public_html for all unmatched domains.
Waiting for verification...
Challenge failed for domain www.xxx.domain.com
http-01 challenge for www.xxx.domain.com
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: www.xxx.domain.com
Type: dns
Detail: DNS problem: NXDOMAIN looking up A for
www.xxx.domain.com - check that a DNS record exists for this
domain; DNS problem: NXDOMAIN looking up AAAA for
www.xxx.domain.com - check that a DNS record exists for this
domain
DNS-based validation failed
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for www.xxx.domain.com
Running manual-auth-hook command: /etc/webmin/webmin/letsencrypt-dns.pl
Waiting for verification...
Challenge failed for domain www.xxx.domain.com
dns-01 challenge for www.xxx.domain.com
Cleaning up challenges
Running manual-cleanup-hook command: /etc/webmin/webmin/letsencrypt-cleanup.pl
Some challenges have failed.
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: www.xxx.domain.com
Type: dns
Detail: DNS problem: NXDOMAIN looking up TXT for
_acme-challenge.www.xxx.domain.com - check that a DNS record
exists for this domain
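Both of the failures above are for www.xxx.domain.com, which simply has no DNS record (NXDOMAIN). One option is to add an A record for the www name; another is to request the certificate only for the name that actually resolves. As a rough CLI sketch, reusing the webroot path shown in the log (whether this fits the Virtualmin-managed config is an assumption):

# request only xxx.domain.com and skip the non-existent www.xxx.domain.com
certbot certonly --webroot -w /home/xxx/public_html -d xxx.domain.com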
I have installed Apache on my Raspberry Pi and wanted to create a new certificate for my domain.
I created my domain via No-IP and configured the DynDNS in the FRITZ!Box settings. I have also created a virtual host and installed certbot following this link:
https://certbot.eff.org/instructions?ws=apache&os=debianstretch
But while executing "sudo certbot --apache" I get the following error message:
Certbot failed to authenticate some domains (authenticator: apache). The Certificate Authority reported these problems: Domain: "Domain deleted"
Type: unauthorized
Detail: "Domain deleted": Invalid response from http://"Domain deleted"/.well-known/acme-challenge/HTptNJcGtYB1e0I7jfNU-a8hAeY2upza0daUrEWP0Po: 404
Hint: The Certificate Authority failed to verify the temporary Apache configuration changes made by Certbot. Ensure that the listed domains point to this Apache server and that it is accessible from the internet.
I have tried a lot of things based on various hints, but none of them worked for me.
Is there anybody who could help me?
Thanks in advance :)
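One way to narrow this down is to mimic the http-01 challenge by hand: serve a test file and fetch it via the public hostname. This is only a sketch; /var/www/html is the assumed Debian/Apache webroot and the domain is a placeholder:

# create a test file where certbot would place its challenge
sudo mkdir -p /var/www/html/.well-known/acme-challenge
echo ok | sudo tee /var/www/html/.well-known/acme-challenge/probe
# fetch it through the public hostname; a 404 or timeout points at the FRITZ!Box port forwarding or the vhost config
curl http://your-noip-domain.example/.well-known/acme-challenge/probe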
I am looking for an automated way to add a new domain.
I would like to add erzgebirgstraverse.de
From https://certbot.eff.org/docs/using.html#changing-a-certificate-s-domains :
... to expand the set of domains a certificate contains ...
certbot certonly --cert-name example.com -d example.org,www.example.org
I found a way to list all existing certs:
hz1:/etc/apache2# certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Found the following certs:
Certificate Name: hz1.yz.to
Serial Number: 345a3c36ff032d325e78120c98d8ddc71f7
Domains: hz1.yz.to thomas-guettler.de
Expiry Date: 2021-03-23 09:19:00+00:00 (VALID: 80 days)
Certificate Path: /etc/letsencrypt/live/hz1.yz.to/fullchain.pem
Private Key Path: /etc/letsencrypt/live/hz1.yz.to/privkey.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Now I see the existing domains, and could add erzgebirgstraverse.de with the -d flag:
hz1:/etc/apache2# certbot certonly --cert-name hz1.yz.to -d hz1.yz.to,thomas-guettler.de,erzgebirgstraverse.de
But now an interactive script starts:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
How would you like to authenticate with the ACME CA?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: Apache Web Server plugin (apache)
2: Spin up a temporary webserver (standalone)
3: Place files in webroot directory (webroot)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-3] then [enter] (press 'c' to cancel):
systemctl reload apache2
Is there a way to add a new domain (alternative name) but non-interactive?
By default, Certbot will try to assist you in the process of generating the certificates. In addition, it will prompt you for information to help install them in your Apache/Nginx setup.
To skip this installation step, simply use the certbot certonly ... subcommand. According to the CLI manpage: "Obtain or renew a certificate, but do not install it".
Alternatively, you can use the flag -n/--non-interactive to make sure certbot runs without prompting for anything. In that case, you must ensure all needed information is passed on the command line. In particular, you must agree to the Terms & Conditions (--agree-tos) and provide a valid contact email (-m email@domain). Example:
certbot certonly --agree-tos -m contact@mydomain.com --cert-name hz1.yz.to -d hz1.yz.to,thomas-guettler.de,erzgebirgstraverse.de
In your question, you were prompted for an authentication method. Let's Encrypt must validate that the server on which you run the client is actually associated with the domain(s) for which you are generating certificates. The available methods are:
Apache Web Server plugin (apache) -> certbot will create Apache settings so the HTTP challenge can be used to validate that the domains are actually associated with your server
Spin up a temporary webserver (standalone) -> certbot will run its own webserver to perform the HTTP challenge. This can only work if no other webserver is listening on port 80 (Apache and nginx listen on that port), so this method is of little use on most servers
Place files in webroot directory (webroot) -> if you already have an HTTP server listening on port 80, you can instruct certbot to place the challenge files in its webroot directory so the HTTP challenge can be used
To pre-select one of the 3 available methods from the command line (and avoid the interactive prompt), use the option --apache (1), --standalone (2) or --webroot (3).
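Putting the pieces together, a fully non-interactive run could look like this (only a sketch; the webroot path and contact address are placeholders):

# --webroot answers the authenticator prompt non-interactively; -w must point at the existing site's document root
certbot certonly --webroot -w /var/www/html --non-interactive --agree-tos -m contact@mydomain.com --cert-name hz1.yz.to -d hz1.yz.to,thomas-guettler.de,erzgebirgstraverse.de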
Keep in mind that HTTP challenge is not the only solution to validate your server/domains coherence. DNS and TLS based challenges can be very useful: https://letsencrypt.org/docs/challenge-types/ I'm not sure certbot implements such challenges natively, but you can find third-party plugins that will.
Also, keep in mind that certbot is NOT the only Let's Encrypt client you can use to generate your certificates. There are plenty of clients available: https://letsencrypt.org/docs/client-options/
You'll need a minimum of: --non-interactive, --agree-tos, and -m 'you@your-email.com'. That will allow certbot to run without any interaction.
In addition, it may be useful to specify --nginx or --apache if that's appropriate for your configuration (you didn't specify what webserver type this is), or certonly --manual if you actually just need the certificate.
An example of a finished command to do what you're looking for (assuming nginx here) is: certbot --nginx --non-interactive --agree-tos -m 'you@your-email.com' -d 'erzgebirgstraverse.de'
Note that all this is specified (in a somewhat roundabout way) in https://certbot.eff.org/docs/using.html
I am using the server-side CLI to get an SSL certificate for my web app (following these instructions: https://github.com/dokku/dokku-letsencrypt).
After following the setup I ran:
root@taaalk:~# dokku letsencrypt taaalk
=====> Let's Encrypt taaalk
-----> Updating letsencrypt docker image...
0.1.0: Pulling from dokku/letsencrypt
Digest: sha256:af5f8529c407645e97821ad28eba328f4c59b83b2141334f899303c49fc07823
Status: Image is up to date for dokku/letsencrypt:0.1.0
docker.io/dokku/letsencrypt:0.1.0
Done updating
-----> Enabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
-----> Getting letsencrypt certificate for taaalk...
- Domain 'taaalk.taaalk.co'
darkhttpd/1.12, copyright (c) 2003-2016 Emil Mikulic.
listening on: http://0.0.0.0:80/
2020-04-28 23:12:10,728:INFO:__main__:1317: Generating new account key
2020-04-28 23:12:11,686:INFO:__main__:1343: By using simp_le, you implicitly agree to the CA's terms of service: https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf
2020-04-28 23:12:12,017:INFO:__main__:1406: Generating new certificate private key
2020-04-28 23:12:14,753:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4241725520
2020-04-28 23:12:14,757:INFO:__main__:396: Saving account_key.json
2020-04-28 23:12:14,758:INFO:__main__:396: Saving account_reg.json
Challenge validation has failed, see error log.
Debugging tips: -v improves output verbosity. Help is available under --help.
-----> Certificate retrieval failed!
-----> Disabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
done
root@taaalk:~#
To make it easier to read, the error was:
2020-04-28 23:12:14,753:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4241725520
I did a lot of googling around and the most promising post I found on the subject was this one:
https://veryjoe.com/tech/2019/07/06/HTTPS-dokku.html
The post suggested checking for Dokku domain misconfiguration and missing network listeners.
I ran dokku domains:report to check for the misconfiguration. This returned:
root@taaalk:~# dokku domains:report
=====> taaalk domains information
Domains app enabled: true
Domains app vhosts: taaalk.taaalk.co
Domains global enabled: true
Domains global vhosts: taaalk.co
And I then ran dokku network:report to check for missing listeners:
root@taaalk:~# dokku network:report
=====> taaalk network information
Network attach post create:
Network attach post deploy:
Network bind all interfaces: false
Network web listeners: 172.17.0.4:5000
After talking things through with a friend we tried adding an 'A' record to my DNS with the host 'taaalk.taaalk.co'.
I then ran:
root@taaalk:~# dokku letsencrypt taaalk
=====> Let's Encrypt taaalk
-----> Updating letsencrypt docker image...
0.1.0: Pulling from dokku/letsencrypt
Digest: sha256:af5f8529c407645e97821ad28eba328f4c59b83b2141334f899303c49fc07823
Status: Image is up to date for dokku/letsencrypt:0.1.0
docker.io/dokku/letsencrypt:0.1.0
Done updating
-----> Enabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
-----> Getting letsencrypt certificate for taaalk...
- Domain 'taaalk.taaalk.co'
darkhttpd/1.12, copyright (c) 2003-2016 Emil Mikulic.
listening on: http://0.0.0.0:80/
2020-04-30 13:39:58,623:INFO:__main__:1406: Generating new certificate private key
2020-04-30 13:40:03,879:INFO:__main__:396: Saving fullchain.pem
2020-04-30 13:40:03,880:INFO:__main__:396: Saving chain.pem
2020-04-30 13:40:03,880:INFO:__main__:396: Saving cert.pem
2020-04-30 13:40:03,880:INFO:__main__:396: Saving key.pem
-----> Certificate retrieved successfully.
-----> Installing let's encrypt certificates
-----> Unsetting DOKKU_PROXY_PORT
-----> Setting config vars
DOKKU_PROXY_PORT_MAP: http:80:5000
-----> Setting config vars
DOKKU_PROXY_PORT_MAP: http:80:5000 https:443:5000
-----> Configuring taaalk.taaalk.co...(using built-in template)
-----> Creating https nginx.conf
Enabling HSTS
Reloading nginx
-----> Configuring taaalk.taaalk.co...(using built-in template)
-----> Creating https nginx.conf
Enabling HSTS
Reloading nginx
-----> Disabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
done
Which was successful.
However, now taaalk.taaalk.co has an SSL certificate, but taaalk.co does not.
I don't know where to go from here. I feel it makes sense to change the vhost from taaalk.taaalk.co to taaalk.co, but I am not sure if this is correct or how to do it. The Dokku documentation does not seem to cover changing the vhost name: http://dokku.viewdocs.io/dokku/configuration/domains/
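For reference, a sketch of how the vhost could be switched with the standard dokku domains commands (the app and domain names are taken from the report above):

# add the apex domain as a vhost and drop the old one
dokku domains:add taaalk taaalk.co
dokku domains:remove taaalk taaalk.taaalk.co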
Thank you for any help
Update
I changed the vhost to taaalk.co, so I now have:
root@taaalk:~# dokku domains:report
=====> taaalk domains information
Domains app enabled: true
Domains app vhosts: taaalk.co
Domains global enabled: true
Domains global vhosts: taaalk.co
However, I still get the following error:
root@taaalk:~# dokku letsencrypt taaalk
=====> Let's Encrypt taaalk
-----> Updating letsencrypt docker image...
0.1.0: Pulling from dokku/letsencrypt
Digest: sha256:af5f8529c407645e97821ad28eba328f4c59b83b2141334f899303c49fc07823
Status: Image is up to date for dokku/letsencrypt:0.1.0
docker.io/dokku/letsencrypt:0.1.0
Done updating
-----> Enabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
-----> Getting letsencrypt certificate for taaalk...
- Domain 'taaalk.co'
darkhttpd/1.12, copyright (c) 2003-2016 Emil Mikulic.
listening on: http://0.0.0.0:80/
2020-04-30 17:01:12,996:INFO:__main__:1406: Generating new certificate private key
2020-04-30 17:01:46,068:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4277663330
Challenge validation has failed, see error log.
Debugging tips: -v improves output verbosity. Help is available under --help.
-----> Certificate retrieval failed!
-----> Disabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
done
root@taaalk:~#
Again, reproduced below for ease of reading:
2020-04-30 17:01:46,068:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4277663330
Challenge validation has failed, see error log.
The fix was quite simple. First I created A records for both the www and root versions of my URL, pointing at my server.
I then set my vhosts to be both taaalk.co and www.taaalk.co with dokku domains:add taaalk www.taaalk.co, etc...
I then removed all the certs associated with taaalk.co with dokku certs:remove taaalk.
I then ran dokku letsencrypt taaalk and everything worked fine.
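Collected in one place, the sequence described above looks roughly like this (assuming the A records for taaalk.co and www.taaalk.co already point at the server):

dokku domains:add taaalk taaalk.co       # make the apex domain a vhost for the app
dokku domains:add taaalk www.taaalk.co   # and the www name as well
dokku certs:remove taaalk                # clear the old certificate
dokku letsencrypt taaalk                 # request a fresh certificate for the current vhosts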
To anyone looking on who tried what Joshua did and still didn't get letsencrypt to generate certs:
My problem was that I didn't have any port mapping for port 80 on dokku, so letsencrypt was unable to communicate with the server to authorise the new cert, giving this error:
ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4277663330
Challenge validation has failed, see error log.
Silly me - I had removed the HTTP port 80 mapping in dokku as I thought it was unnecessary.
To fix the problem I just added the port mapping again:
dokku proxy:ports-add myapp http:80:4000
(Note: my app listens on port 4000, hence the mapping above; your port may be different.)
And then ran dokku letsencrypt:
dokku letsencrypt myapp
This sequence is important: setting the proxy ports correctly allows letsencrypt to connect and auto-renew the TLS certs again.
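If it helps, the current mappings can be double-checked with the same proxy plugin before and after (a small sketch; the app name and port are the ones used in this answer):

dokku proxy:ports myapp   # should show http:80:4000 before, and https:443:4000 as well after letsencrypt runs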
I have a domain, which I'll simply refer to as "example.org" for security reasons, running on Amazon EC2 on which I'm trying to get SSL access working. I'm using letsencrypt and certbot to issue my certificates. Everything was working fine for the first 90 days until I tried to renew the SSL cert. I was able to successfully refresh the certificate for my domain name, but for some reason I seem to have a separate certificate for a proxy server I'm using on port 1337 that shows as expired.
When I try to use an app on my site, I see the following error on the JavaScript console in the browser:
example.org:1337 uses an invalid security certificate.
The certificate expired on Tuesday, October 9, 2018, 7:50 PM. The current time is Friday, October 19, 2018, 3:43 PM.
Error code: SEC_ERROR_EXPIRED_CERTIFICATE
(unknown)
My client app is an Angular 6 SPA that directly communicates with "https://example.org:1337".
Using an SSL checker I can see that "example.org" shows a domain-level certificate that will expire in 47 days. However, when I check "example.org:1337" it says the certificate expired 13 days ago. It's my understanding that I only need one SSL certificate per domain and don't have to individually certify each port. I did not originally request a certificate for port 1337, but I do have a ProxyPass specified for it in "/etc/apache2/sites-available/000-default-le-ssl.conf":
# allow us to call a node server without the user having to specify the port number
# directly, e.g. call the proxy server like it's a standard http route.
ProxyRequests off
<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>
<Location /servers/meta-data-proxy>
    ProxyPass https://localhost:1337
    ProxyPassReverse https://localhost:1337
</Location>
When I run an SSL check against another port I use, which does not have a proxy, it says "no ssl certificates found", which is what I would expect.
certbot shows the following certificates:
> sudo certbot certificates
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Found the following certs:
Certificate Name: example.org
Domains: example.org www.example.org
Expiry Date: 2018-12-09 11:08:32+00:00 (VALID: 47 days)
Certificate Path: /etc/letsencrypt/live/example.org/fullchain.pem
Private Key Path: /etc/letsencrypt/live/example.org/privkey.pem
How can I either:
a) Refresh or synchronize the cert on port 1337 to be in sync with the one on the domain name?
b) Somehow delete the expired cert on port 1337 on my machine. I don't see it listed anywhere according to certbot. I'm hoping that by deleting it, it will somehow dynamically set things up again.
I'm not even really certain I need to have a "ProxyPass". It might have been something I just did originally to get things working, but isn't really needed.
Ubuntu 18.04
Apache2 2.4.29
Sorry everyone: this one falls under the category of never mind / user error.
It turns out I had copied /etc/letsencrypt/live/example.org/fullchain.pem and /etc/letsencrypt/live/example.org/privkey.pem into a subdirectory of my proxy server. The proxy server listening on port 1337 then loads these files. Once I refreshed them with the updated versions being used by the main HTTPS server, everything worked again. So in effect, I had two versions of the keys out there, and you could only figure this out by looking at the code.
I shouldn't really have to remember to do this, and I would say this reflects a bad design decision on the part of my app. It's probably not a good idea to have the client directly reference port numbers. I should really have the client reach the server through one port (e.g. 443 for SSL) and then have the server route requests to any internal servers. My app was originally running just HTTP, and I wasn't thinking much about security. Then I added HTTPS at the last second. Not being very familiar with SSL, I made it a much harder problem than it had to be, so that's my defense. Ugh.
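One way to avoid stale copies in the future, sketched under the assumption that the proxy keeps reading certificates from its own directory (the destination path and service name below are hypothetical), is a certbot deploy hook that refreshes the copies whenever the certificate actually renews:

# the deploy hook only runs when a renewal succeeds
sudo certbot renew --deploy-hook 'cp /etc/letsencrypt/live/example.org/fullchain.pem /etc/letsencrypt/live/example.org/privkey.pem /opt/meta-data-proxy/certs/ && systemctl restart meta-data-proxy'

The cleaner long-term fix, as noted above, is to stop exposing port 1337 to the client and let Apache proxy everything over 443.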