I was using certbot and a plugin that matches my domain provider, certbot-dns-aliyun, to generate an SSL certificate for my domain, scp-makerspace.com. However, when I run the following command:
sudo /mnt/certbot/venv/bin/certbot certonly -a certbot-dns-aliyun:dns-aliyun --certbot-dns-aliyun:dns-aliyun-credentials /mnt/certbot/credentials.ini -d "scp-makerspace.com" -d "*.scp-makerspace.com"
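(For reference, /mnt/certbot/credentials.ini contains only the plugin's Aliyun access key pair, along the lines of the sketch below. Values are redacted, and the exact key names follow the common third-party plugin README pattern, so treat them as assumptions that may differ between aliyun plugins:)
certbot_dns_aliyun:dns_aliyun_access_key = <access-key-id>
certbot_dns_aliyun:dns_aliyun_access_key_secret = <access-key-secret>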
This error occurs:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugin legacy name certbot-dns-aliyun:dns-aliyun may be removed in a future version. Please use dns-aliyun instead.
Plugins selected: Authenticator certbot-dns-aliyun:dns-aliyun, Installer None
Requesting a certificate for scp-makerspace.com and *.scp-makerspace.com
Performing the following challenges:
dns-01 challenge for scp-makerspace.com
dns-01 challenge for scp-makerspace.com
Cleaning up challenges
Encountered exception during recovery: certbot.errors.PluginError: Unable to determine zone identifier for scp-makerspace.com using zone names: ['scp-makerspace.com', 'com']
Unable to determine zone identifier for scp-makerspace.com using zone names: ['scp-makerspace.com', 'com']
I searched on the Internet and only about five unique results for the same problem showed up, all of them concerning v0.2x versions of certbot. My certbot version is 1.11.0. Also, this seems to be a problem across multiple versions of DNS plugins (regardless of the DNS provider).
Some of the solutions mention checking whether the domain is "blocked". I'm not entirely sure what this means or how to check it, but I am able to access the web service hosted on my domain directly from a browser.
How could I fix this?
P.S. Heeding the warning in the output about changing certbot-dns-aliyun:dns-aliyun to dns-aliyun doesn't make any difference to the result.
I have three servers running minio in distributed mode. I need all three servers to run with TLS enabled. It's easy enough to run certbot, generate a cert for each node, drop said certs into /etc/minio/certs/, and go! But here's where I start running into issues.
The servers are thus:
node1.files.example.com
node2.files.example.com
node3.files.example.com
I'm launching minio using the following command:
MINIO_ACCESS_KEY=minio \
MINIO_SECRET_KEY=secret \
/usr/local/bin/minio server \
-C /etc/minio --address ":443" \
https://node{1...3}.files.example.com:443/volume/{1...4}/
This works and I am able to connect to all three servers from a web browser over https with good certs. However, users will connect to the cluster via the parent domain "files.example.com" (using distributed DNS).
I already ran certbot and generated the certs for the parent domain... and I copied the certs into /etc/minio/certs/ as well as /etc/minio/certs/CAs/ (naming the files "files.example.com-public.crt" and "files.example.com-public.key" respectively)... this did not work. When I try to open the parent domain "files.example.com" I get a cert error (which I can bypass) indicating the certificate is for the node I happened to connect to and not for the parent domain.
I'm pretty sure this is just a matter of putting the cert in the right place and naming it correctly... right? Does anyone know how to do that? I also have an idea there might be a way to issue a single cert that covers multiple domains... is that how I'm supposed to do this? How?
I already hit up minio's Slack channel and posted on their GitHub, but no one's replying to me. Not even a "this won't work."
Any ideas?
I gave up and ran certbot in manual mode. I had to install apache on one of the nodes, then certbot had me jump through a couple of minor hoops (namely, creating a new TXT record with my DNS provider and placing a file with a text string on the server for verification). I then copied the created certs into my minio config directory (/etc/minio/certs/) on all three nodes. That's it.
To be honest, I'd rather use the plugin, as it allows for automated cert renewal, but I'll live with this for now.
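For the record, the manual run looked roughly like this (a sketch of certbot's standard manual mode, not my exact invocation):
sudo certbot certonly --manual --preferred-challenges dns -d files.example.com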
You could also run all of them behind a reverse proxy that handles TLS termination using a wildcard certificate (i.e. *.files.example.com). The reverse proxy would centralize the certificates, DNS, and certbot scripts on a single node, essentially load balancing TLS and DNS for the minio nodes. The performance hit of "load-balancing" TLS like this may be acceptable depending on your workload, considering the simplification to your current DNS and TLS cert setup.
DigitalOcean example using nginx and certbot plugins: https://www.digitalocean.com/community/tutorials/how-to-create-let-s-encrypt-wildcard-certificates-with-certbot
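For illustration, a minimal nginx sketch of that TLS-termination setup. Hostnames and cert paths follow the examples above; the plain-HTTP backends on port 9000 are assumptions, since your nodes currently serve TLS on 443 themselves:
upstream minio_nodes {
    server node1.files.example.com:9000;
    server node2.files.example.com:9000;
    server node3.files.example.com:9000;
}

server {
    listen 443 ssl;
    server_name files.example.com;

    # wildcard cert covering *.files.example.com (plus the apex as a SAN)
    ssl_certificate     /etc/letsencrypt/live/files.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/files.example.com/privkey.pem;

    location / {
        # terminate TLS here and pass plain HTTP to the minio nodes
        proxy_pass http://minio_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}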
I installed a brand new DigitalOcean droplet from a marketplace image (so on paper everything should be OK out of the box).
When trying to issue certificates, I am getting this error:
[11.13.2019_04-48-28] /root/.acme.sh/acme.sh --issue -d thehouseinkorazim.co.il -d www.thehouseinkorazim.co.il --cert-file /etc/letsencrypt/live/thehouseinkorazim.co.il/cert.pem --key-file /etc/letsencrypt/live/thehouseinkorazim.co.il/privkey.pem --fullchain-file /etc/letsencrypt/live/thehouseinkorazim.co.il/fullchain.pem -w /home/thehouseinkorazim.co.il/public_html --force
[11.13.2019_04-48-28] [Errno 2] No such file or directory [Failed to obtain SSL. [obtainSSLForADomain]]
[11.13.2019_04-48-28] 283 Failed to obtain SSL for domain. [issueSSLForDomain]
[11.13.2019_04-48-34] Trying to obtain SSL for: thehouseinkorazim.co.il and: www.thehouseinkorazim.co.il
I checked and UFW is not installed.
I do have a network firewall, but it's the same one (same rules) used by another droplet that can issue certificates, so I don't think it's the cause.
I searched all the answers online and no luck.
I even installed certbot to manually issue a certificate, but I got the same error. (I did this because I know you need to register initially to get certificates and I hadn't, so I thought that might be the cause.)
Any ideas? Thanks!
Update: I created a clean droplet again; this is the error without any manual changes on my part:
Cannot issue SSL. Error message:
ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.crt': No such file or directory
ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.key': No such file or directory
0,283 Failed to obtain SSL for domain. [issueSSLForDomain]
I checked, and there is no "cert" folder under "conf" in the path written above.
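As a stopgap, creating the missing directory by hand should at least let the ln step proceed (an assumption based on the error above, not a proper fix):
sudo mkdir -p /usr/local/lsws/admin/conf/cert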
There's a known SSL issue in the recent version due to some environment/code changes. We are already aware of it and have submitted a new version with that issue fixed. Please give it a day or two and you should be able to launch the new version from the marketplace, which comes with CyberPanel v1.9.2.
Best
Task:
I want to create a wildcard certificate for both *.example.com and example.com in one go, using the DNS challenge method provided by the LetsEncrypt Certbot.
Reproduce:
When trying to obtain the certificate files necessary to set up my SSL certificate, I run into a catch-22 situation with the LetsEncrypt Certbot.
I call the certbot command with these parameters:
certbot certonly --agree-tos --manual --preferred-challenges dns --server https://acme-v02.api.letsencrypt.org/directory -d "*.example.com,example.com"
and am then prompted to create two DNS TXT records.
So far, so good. But when I enter the two DNS TXT records for the given domains exactly as requested by the certbot command, I receive an error message:
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: example.com
Type: unauthorized
Detail: Incorrect TXT record "[authentication snippet for example.com]" found at _acme-challenge.example.com
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
Problem: Certbot does not accept the very same DNS TXT records it has just prompted me to set.
It seems that Certbot cannot cope with the fact that I am requesting the certificate for both "*.example.com" and "example.com" at once, treating them as if they belonged to two different domain realms and not accepting the two TXT records as expected.
It turned out that this error indeed occurred due to a DNS refresh lag on the domain provider's side. @low_skilled's response helped me figure out that the TXT records I entered took a few minutes to actually be served by the provider, even though their TTL was set to 60 seconds. Thanks for the reply. Problem solved!
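For anyone who hits the same lag: you can confirm that both TXT values are actually visible before letting certbot proceed, for example by querying a public resolver:
dig +short TXT _acme-challenge.example.com @8.8.8.8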
I discovered that you only have to create one TXT record (_acme-challenge) with two values (the hashes given by certbot), one per line. After running certbot, remember to restart your web server.
I think how you set this up depends on your DNS provider. I did it in AWS Route 53 and it fixed the problem.
Note: consider waiting some seconds (30+) after changing your records.
I know you fixed your problem, but I think this can help someone learn how certbot works.
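As a sketch, in zone-file notation the single record with both values looks roughly like this (the quoted strings are placeholders for the hashes certbot prints):
_acme-challenge.example.com. 60 IN TXT "hash-for-wildcard-challenge"
_acme-challenge.example.com. 60 IN TXT "hash-for-apex-challenge"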
I developed an application for a client, which I host on a subdomain. The problem is that I don't own the main domain/website; they've added a DNS record to point the subdomain at the IP on which I host that app. Now I want to request a free & automatic certificate from Let's Encrypt, but when I try the handshake it says:
Getting challenge for subdomain.example.com from acme-server...
Error: http://subdomain.example.com/.well-known/acme-challenge/letsencrypt_**** is not reachable. Aborting the script.
dig output for subdomain.example.com: subdomain.example.com
Please make sure /.well-known alias is setup in WWW server.
Which makes sense, because I don't host that domain on my server. But if I try to generate it without the main domain I get:
You must include your main domain: example.com.
Cannot Execute Your Request
Details
Must include your domain example.com in the LetsEncrypt entries.
So I'm curious how I can set up a certificate without owning the main domain. I tried googling the issue but couldn't find any relevant results. Any help would be much appreciated.
First
You don't need to own the domain; you just need to be able to copy a file to the location serving that domain. (It sounds like you're all set there.)
Second
What tool are you using? The error message you gave makes me think the client is misconfigured. The challenge name is usually something like https://example.com/.well-known/acme-challenge/jQqx6qlM8u3wpi88N6lwvFd7SA07oK468mB1x4YIk1g. Compare that to your error:
Error: http://example.com/.well-known/acme-challenge/letsencrypt_example.com is not reachable. Aborting the script.
Third
I'm the author of Greenlock, which is compatible with Let's Encrypt. I'm confident that it will work for you.
Install
# Feel free to read the source first
curl -fsS https://get.greenlock.app/ | bash
Usage with existing webserver:
Let's say that:
You're using Apache or Nginx.
You confirm that ping example.com gives the IP of your server
You're exposing http on port 80 (otherwise verification will fail)
Your website is located in /srv/www/example.com
Your email is jon@example.com (must be a real email address)
You want to store your certificate as /etc/acme/live/example.com/fullchain.pem
This is what the command would look like:
sudo greenlock certonly --webroot \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--root /srv/www/example.com \
--config-dir /etc/acme
If that doesn't work on the first try, swap --acme-url https://acme-v02.api.letsencrypt.org/directory for --acme-url https://acme-staging-v02.api.letsencrypt.org/directory while you debug; otherwise your server could get blocked by Let's Encrypt for too many bad requests. Just know that you'll have to delete the certificates issued by the staging environment and retry with the production URL, since the tool cannot tell which certificates are "production" and which are "testing".
The --community-member flag is optional, but will provide me with analytics and allow me to contact you about important or mandatory changes as well as other relevant updates.
After you get the success message you can then use those certificates in your webserver config and restart it.
That will work as a cron job as well. You could run it daily and it will only renew the certificate after about 75 days. You could also add a cron job that sends the "update configuration" signal to your webserver (normally HUP or USR1) every few days so that it starts using the new certificates without even restarting (...or just have it restart).
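For example, a crontab sketch along those lines (the greenlock install path, the /etc/cron.d format, and the nginx reload command are assumptions; adjust them to your setup):
# /etc/cron.d/greenlock-renew
# daily run; greenlock only actually renews once the cert is ~75 days old
0 3 * * * root /usr/local/bin/greenlock certonly --webroot --acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory --agree-tos --email jon@example.com --domains example.com --root /srv/www/example.com --config-dir /etc/acme
# reload the webserver every few days so it picks up renewed certs
0 4 */3 * * root /bin/systemctl reload nginx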
Usage without a web server
If you just want to quickly test without even having a webserver running, this will do it for you:
sudo greenlock certonly --standalone \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--config-dir /etc/acme/
That runs expecting that you DO NOT have a webserver running on port 80, as it will start one temporarily just for the purpose of obtaining the certificate.
sudo is required for using port 80 and for writing to root and httpd-owned directories (like /etc and /srv/www). You can run the command as your webserver's user instead if that has the correct permissions.
Use Greenlock as your webserver
We're working on an option to bypass the middleman altogether and simply use greenlock as your webserver, which would probably work great for the kind of simple vhosting it sounds like you're doing. Let me know if that's interesting to you and I'll make sure to update you about it.
Fourth
Let's Encrypt also has an official client called certbot which will likely work just as well, perhaps better, but back in the early days it was easier for me to build my own than to use theirs due to issues which they have long since fixed.
What's important is the subdomain's A record. It should be the IP address of the server from which you are requesting the subdomain's certificate.
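A quick way to check that is to compare the record with the server's public IP (the hostname here is a placeholder):
dig +short A subdomain.example.com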
When I try to connect to the remote PostgreSQL database with a Rails 3.2 project I get this error:
FATAL: no pg_hba.conf entry for host "10.0.0.3", user "projectx", database "projectx", SSL off
My configuration on Rails looks like this:
staging:
adapter: postgresql
database: projectx
username: projectx
password: 123456
host: 10.0.0.3
encoding: utf8
template: template0
min_messages: warning
and on PostgreSQL looks like this:
hostssl all all 0.0.0.0/0 md5
hostssl all all ::/0 md5
Both machines are running on an Ubuntu 12.04.
I found posts saying that it should work automatically, which clearly doesn't happen. Some say that libpq didn't have SSL enabled and that enabling it solved the problem, but they give no explanation of how to enable it. Looking at libpq's dependencies, I can see it depends on SSL packages, so I would assume SSL support is compiled in.
Some posts recommended adding this:
sslmode: require
or this:
sslmode: enabled
to enable SSL mode, but it had no effect for me. I read that the option is silently ignored.
I also tried the database string approach, ending up with:
staging:
adapter: postgresql
database: "host=10.0.0.3 dbname=projectx user=projectx password=123456 sslmode=require"
and then I got the error:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
which seems to indicate that Rails was trying to connect to localhost, or rather to a local PostgreSQL (there is none), instead of to 10.0.0.3.
Any ideas?
As you wrote, the Ubuntu 12.x packages are normally set up so that SSL is activated and works out of the box. In addition, SSL is the first method tried by rails, or by any client that lets libpq deal with this stuff, which means almost all clients.
This automatic enabling is not necessarily true of other PostgreSQL packages or of a self-compiled server, so answers or advice that apply to those other contexts don't help with yours.
As your setup should work directly, this answer is a list of things to check to find out what is going wrong. Preferably, use psql first to test a connection rather than rails, so that generic PostgreSQL issues can be ruled out.
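For example, a test along these lines from the Rails host, reusing the values from your database.yml (a sketch; psql will prompt for the password):
psql "host=10.0.0.3 dbname=projectx user=projectx sslmode=require"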
Client-side
The client-side sslmode parameter controls the sequence of connect attempts.
To voluntarily avoid SSL, a client would need to put sslmode=disable somewhere in the connection string, or PGSSLMODE=disable in the environment, or otherwise mess with one of the other PGSSL* variables. In the unlikely case that your rails process had this in its environment, it would explain the error you're getting, given that pg_hba.conf does not allow non-SSL connections.
Another reason not to try SSL is, obviously, when libpq is not compiled with SSL support, but that's not the case with the Ubuntu packages.
The default for sslmode is prefer, described as:
prefer (default)
first try an SSL connection; if that fails, try a non-SSL connection
The SSL off at the end of your error message relates to the last connect attempt, the one that fails. It may be that SSL was tried and failed, or was not tried at all; we can't know from this message alone. The connect attempt with SSL off is rejected normally by the server per the policy set in pg_hba.conf (hostssl in the first column).
It's more plausible that the problem is server-side, because there are more things that can go wrong there.
Server-side
Here are various things to check server-side:
There should be ssl=on in postgresql.conf (default location: /etc/postgresql/9.1/main/)
when connecting to localhost with psql, you should be greeted with a message like this:
psql (9.1.13)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.
The ca-certificates package should be installed and up-to-date.
The ssl-cert package should be installed and up-to-date.
Inside the postgres data directory (/var/lib/postgresql/9.1/main by default), there should be soft links:
server.crt -> /etc/ssl/certs/ssl-cert-snakeoil.pem or another valid certificate, and
server.key -> /etc/ssl/private/ssl-cert-snakeoil.key or another valid key.
/etc/ssl/certs and parent directories should be readable and cd'able by postgres.
The postgres unix user should be in the ssl-cert unix group (check with id -a postgres) otherwise it can't read the private key.
If changing postgresql.conf, be sure that postgresql gets restarted before doing any other test.
There shouldn't be any suspicious message about SSL in /var/log/postgresql/postgresql-9.1-main.log at startup time or at the time of the failed connection attempt.
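As a quick way to run through the list above from a shell (paths assume the default Ubuntu 12.04 / PostgreSQL 9.1 layout used in this answer):
grep '^ssl' /etc/postgresql/9.1/main/postgresql.conf   # expect: ssl = on (or true)
ls -l /var/lib/postgresql/9.1/main/server.crt /var/lib/postgresql/9.1/main/server.key
id -a postgres                                         # should list the ssl-cert group
sudo service postgresql restart                        # after any postgresql.conf change
tail /var/log/postgresql/postgresql-9.1-main.log       # look for SSL-related messages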
Rails uses the PG gem to connect to Postgres; see here for the implementation:
https://github.com/rails/rails/blob/02a3c0e771b3e09173412f93d8699d4825a366d6/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb#L881
The PG gem uses libpq (a C library), and the documentation for "PG::Connection.new", found here:
http://deveiate.org/code/pg/PGconn.html
suggests the following options:
host: server hostname
hostaddr: server address (avoids hostname lookup, overrides host)
port: server port number
dbname: connecting database name
user: login user name
password: login password
connect_timeout: maximum time to wait for connection to succeed
options: backend options
tty: (ignored in newer versions of PostgreSQL)
sslmode: (disable|allow|prefer|require)
krbsrvname: kerberos service name
gsslib: GSS library to use for GSSAPI authentication
service: service name to use for additional parameters
So this indicates that the connection-string approach will not work (it is not recognised by the adapter; it might be a mysql adapter option).
It also indicates that the sslmode: require option should work, as this is a basic feature of libpq.
So:
database.yml
staging:
...
sslmode: "require"
...
should definitely do the trick. Are you sure you are running in staging mode? Add sslmode to the other environments too, just to be sure.
Also, libpq tries SSL first by default, so you may be seeing the error with SSL off because the SSL attempt failed first, then libpq retried without SSL and eventually raised that error.
Please check your psql version; older versions do not support sslmode=require. It worked for me after upgrading psql to the latest version.