I am trying to set up a self-hosted GitLab instance. Everything works except creating an HTTPS connection with Let's Encrypt; I get the following error when trying to reconfigure the GitLab instance:
There was an error running gitlab-ctl reconfigure:
letsencrypt_certificate[gitlab.***.org] (letsencrypt::http_authorization line 6) had an error: Acme::Client::Error::AccountDoesNotExist: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 41) had an error: Acme::Client::Error::AccountDoesNotExist: No account exists with the provided key
My external_url is https://gitlab.***.org, and on my network I have set up port forwarding for both ports 80 and 443. I also pointed the DNS at my IP; this works, as the site is reachable when not secured.
Hope someone recognizes the error; I looked all over and didn't see it mentioned anywhere.
Best Regards
I had the same problem when I tried to change the URL of my GitLab instance.
I solved the issue thanks to https://gbe0.com/posts/linux/server/gitlab-acme-account-does-not-exist/, by deactivating the old ACME account private key and then reloading the GitLab config:
sudo mv /etc/acme/account_private_key.pem /etc/acme/account_private_key.pem.backup
sudo gitlab-ctl reconfigure
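Once the reconfigure succeeds, you can check the certificate GitLab now serves (a sketch; gitlab.example.org is a placeholder for your real domain):
# show issuer and validity dates of the newly issued certificate
echo | openssl s_client -connect gitlab.example.org:443 -servername gitlab.example.org 2>/dev/null | openssl x509 -noout -issuer -dates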
I am using Google Cloud Deployment Manager's WordPress click-to-deploy solution.
I installed a certificate using Certbot through the virtual machine's SSH on the Compute Engine page. Immediately after I installed the certificate, the page started showing
"ssl_error_bad_cert_domain" and didn't open.
I went back to the SSH session and deleted the certificate with sudo certbot delete. Since that didn't solve the error, I tried turning the VM off and on, and also restarting it, which didn't resolve the issue either.
I could see in the Logs Explorer an error coming from the VM saying Invalid ssh key entry - expired key:[expired_key], so I requested a new key through the VM SSH using:
ssh-keygen -t rsa -f ~/.ssh/gcloud_instance1 -C username
I printed the public key with cd ~/.ssh && cat gcloud_instance1.pub and added it to the VM's ssh-keys text area. That stopped the errors in the Logs Explorer but didn't solve the issue, since the WordPress deployment still doesn't open.
Another thing to add: when the VM was turned off and on, the IP address changed; I am not sure whether that is also causing the crash.
This is how the page currently looks (screenshot: webpage failing).
This is the Logs Explorer output: https://docs.google.com/spreadsheets/d/1cKXWkfaFbmUFakomwM_-TtoSelxWKBDxNBK7fnMy_PA/edit?usp=sharing
Any ideas what could be happening?
Thanks
The "ssl_error_bad_cert_domain" error can happen when the SSL certificate is not properly installed or was not issued for the correct domain.
My guess is that you had a typo in the domain name, or that you need a static IP.
You can verify the installed certificates with the command sudo certbot certificates.
It is also recommended to use a static IP for better stability, especially if you are using SSL certificates.
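For example (a sketch; example.com stands in for your real domain, and this assumes the click-to-deploy stack runs Apache):
# list installed certificates and the domains they cover
sudo certbot certificates
# if the domain was wrong or the certificate was deleted, issue a fresh one
sudo certbot --apache -d example.com -d www.example.com
# reserve a static external IP so DNS keeps matching; name and region are placeholders,
# and the address still needs to be attached to the VM afterwards
gcloud compute addresses create wordpress-ip --region=us-central1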
Thank you for taking the time to help. I am facing an issue with my Apache server, and the story goes like this:
I was running an Ubuntu 18.04 server, and my SSL certificate (a Let's Encrypt certificate obtained through Certbot) expired. When I ran certbot renew, it gave me errors relating to DNS. I then thought it would be a good idea to simply delete the existing certificate and install a new one, so I googled how to delete an SSL certificate using Certbot and learned about sudo certbot delete. It didn't work as expected: when I restarted the server, Apache didn't start, and running apache2ctl configtest returned this error:
AH00526: Syntax error on line 20 of /etc/apache2/sites-enabled/000-default-le-ssl.conf:
SSLCertificateFile: file '/etc/letsencrypt/live/tomebox.in/fullchain.pem' does not exist or is empty
Action 'configtest' failed.
The Apache error log may have more information.
Can anyone please help me understand and resolve the issue so I can get my website back to normal?
You'll need to edit the /etc/apache2/sites-enabled/000-default-le-ssl.conf file to change the file being referenced, since you deleted the certificate it points to. You can create a new certificate for it to reference by following the Certbot instructions, or delete 000-default-le-ssl.conf altogether if you are no longer using SSL.
To edit the file, navigate to the directory in SSH using
cd /etc/apache2/sites-enabled/
and then edit the file as root using
sudo nano 000-default-le-ssl.conf
Ctrl+X will exit and give the option to save when done. You may want to restart Apache afterwards:
sudo service apache2 restart
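If you want to keep SSL, a sketch of the recovery (this assumes Certbot's Apache plugin is installed and DNS for tomebox.in resolves correctly again):
# temporarily disable the broken SSL vhost so Apache can start
sudo a2dissite 000-default-le-ssl
sudo service apache2 restart
# re-issue the certificate that was deleted
sudo certbot --apache -d tomebox.in
# re-enable the vhost and check the config
sudo a2ensite 000-default-le-ssl
sudo apache2ctl configtest
sudo service apache2 restart
Afterwards, line 20 of 000-default-le-ssl.conf should reference the regenerated files, roughly:
SSLCertificateFile /etc/letsencrypt/live/tomebox.in/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/tomebox.in/privkey.pem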
I'm a little bit confused about HTTPS communication with InfluxDB. I am running an InfluxDB 1.8 instance on a virtual machine with a public IP. The machine also runs an Apache2 server, but for now I don't want to use it as a webserver to display pages to clients; I want to use the machine as a database server for InfluxDB.
I obtained a valid certificate from Let's Encrypt; indeed, the welcome page at https://datavm.bo.cnr.it works properly over an encrypted connection.
Then I followed all the instructions in the docs to enable HTTPS: I put the fullchain.pem file in the /etc/ssl directory, set the file permissions (not sure about the meaning of this step, though), edited influxdb.conf with https-enabled = true, and set the paths for https-certificate and https-private-key (fullchain.pem for both; is that right?). Then systemctl restart influxdb. When I run influx -ssl -host datavm.bo.cnr.it I get the following:
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
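For reference, based on the steps above, the [http] section of influxdb.conf ends up looking like this:
[http]
  https-enabled = true
  https-certificate = "/etc/ssl/fullchain.pem"
  https-private-key = "/etc/ssl/fullchain.pem"
(Normally https-private-key would point at the private key file, e.g. privkey.pem, rather than the certificate chain.)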
Any help in understanding what I am doing wrong is much appreciated! Thank you.
I figured out at least part of the problem. It was related to permissions on the *.pem files. This looks weird, because if I type the following, as the documentation says, it does not connect:
sudo chmod 644 /etc/ssl/<CA-certificate-file>
sudo chmod 600 /etc/ssl/<private-key-file>
If, instead, I run the second line with 644, everything works. But this way I'm giving anyone permission to read the private key! I'm not able to figure this out.
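A likely explanation (assuming the stock packaging, where the influxd process runs as the unprivileged influxdb user): a root-owned key with mode 600 is unreadable to that user. Handing the key to the influxdb user keeps the strict mode:
# let the influxdb service user read its key without opening it to everyone
sudo chown influxdb:influxdb /etc/ssl/<private-key-file>
sudo chmod 600 /etc/ssl/<private-key-file>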
UPDATE
If I put symlinks inside /etc/ssl/ that point to the .pem files living inside /etc/letsencrypt/live/hostname, the connection is refused. Only if I put a copy of the files does the SSL connection start.
The reason I want to put the links inside /etc/ssl/ is the automatic renewal of the certificates.
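A possible workaround (a sketch, untested; assumes a recent Certbot with the renewal-hooks directory): the symlinks likely fail because /etc/letsencrypt/live and /etc/letsencrypt/archive are traversable only by root, so the influxdb user cannot follow them. Copying the files on each renewal with a deploy hook, e.g. /etc/letsencrypt/renewal-hooks/deploy/influxdb.sh, avoids this:
#!/bin/sh
# runs as root after every successful renewal; "hostname" is a placeholder
cp /etc/letsencrypt/live/hostname/fullchain.pem /etc/ssl/fullchain.pem
cp /etc/letsencrypt/live/hostname/privkey.pem /etc/ssl/privkey.pem
chown influxdb:influxdb /etc/ssl/privkey.pem
chmod 644 /etc/ssl/fullchain.pem
chmod 600 /etc/ssl/privkey.pem
systemctl restart influxdb
Remember to make the hook executable with chmod +x.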
Can anyone confirm this, or suggest a cleaner approach?
I've just done a fresh install of Ubuntu 20.04 and followed the Digital Ocean instructions to get my apache server up and running:
https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-on-ubuntu-20-04
This worked fine for HTTP traffic; then I used the Digital Ocean instructions (which I already knew, but followed anyway) to set up SSL (HTTPS) access:
https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-20-04
I selected the option to redirect all traffic to https. I opened my firewall using sudo ufw allow 'Apache Full'.
But I am unable to see my sites - the browsers just time out. I have tried disabling ufw just to see, and nope, nothing.
SSL Labs just gives me an "Assessment failed: Unable to connect to the server" error.
I also ran https://check-your-website.server-daten.de/?q=juglugs.com
and it timed out as well.
I have deleted the Let's Encrypt setup and run through it again three times with the same result, and now I'm stuck...
Everything I've found while searching points to a firewall error, but as I've said, I've disabled the firewall and get the same result. The router settings have not been changed since I did my fresh Ubuntu install.
Any help gratefully received.
Thanks in advance.
on8tom answered this one for me: in setting up the new build of Ubuntu, the local IP address of my Apache server had changed, and my Virgin Media Hub was only forwarding port 443 to the old IP address.
Many thanks for pointing me at that (though I should have checked it before posting - kicking myself!).
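For anyone hitting the same symptom, a few quick checks help separate a server problem from a router/port-forwarding problem (a sketch; juglugs.com stands in for your own domain):
# on the server: is Apache listening on 80/443?
sudo ss -tlnp | grep -E ':80|:443'
# does the site answer locally? (-k skips cert validation for a localhost check)
curl -kI https://localhost
# note the current LAN address - the router's forwarding rules must point here
ip addr show
# from outside the network: a timeout here, with the local checks passing,
# points at the router rather than the server
curl -vI https://juglugs.com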
I am trying to use knife from my laptop to connect to a newly configured Chef server hosted on AWS. I know what is listed below is the right direction for me, but I'm not sure how to go about it exactly.
If you are not able to connect to the server using the hostname ip-xx-x-x-xx.ec2.internal, you will have to update the certificate on the server to use the correct hostname.
I had this same problem. The cause is that EC2 instances place their private IP into their hostname file, which causes Chef to self-sign a certificate for the internal IP. When you run knife ssl check you'll probably get an error message that looks like this:
ERROR: The SSL cert is signed by a trusted authority but is not valid for the given hostname
ERROR: You are attempting to connect to: 'ec2-x-x-x-x.us-west-2.compute.amazonaws.com'
ERROR: The server's certificate belongs to 'ip-y-y-y-y.us-west-2.compute.internal'
Connecting to the public IP is correct; however, you'll continue to get this error if you don't configure your Chef server to use your public DNS name when signing the cert.
EDIT: Chef's documentation used to have steps to correct this issue, but since the time I initially answered this question they have removed those steps from their tutorial. The following steps worked for me with Chef 12, Ubuntu 16 on an ec2 instance.
ssh onto your chef server
open your hostname file with the following command sudo vim /etc/hostname
remove the line containing your internal ip and replace it with your public ip, then save the file.
reboot the server with sudo reboot
run sudo chef-server-ctl reconfigure (this signs a new certificate, among other things)
Go back to your workstation and use knife ssl fetch followed by knife ssl check and you should be good to go.
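Put together, the sequence looks like this (a sketch; the placeholder hostnames follow the error messages above):
# on the Chef server
sudo vim /etc/hostname               # replace ip-y-y-y-y.us-west-2.compute.internal with the public name
sudo reboot
sudo chef-server-ctl reconfigure     # re-signs the certificate for the new hostname, among other things
# back on the workstation
knife ssl fetch
knife ssl check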
What you could ALSO do is just complete steps 1-4 below before you even install Chef onto the server:
Update public IP on Chef Server
run chef-server-ctl reconfigure on Server (No reboot needed)
Update the knife.rb on the workstation with the new IP address (see the sketch after this list)
run 'knife ssl fetch' on the Chef Workstation
This should resolve the issue; to confirm, run 'knife client list'.
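A minimal knife.rb for reference (a sketch; the node name, key path, and organization are hypothetical placeholders):
# ~/.chef/knife.rb on the workstation
node_name       "myuser"
client_key      "~/.chef/myuser.pem"
chef_server_url "https://ec2-x-x-x-x.us-west-2.compute.amazonaws.com/organizations/myorg"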
You can't connect to an internal IP (or a DNS name that points to an internal IP) from outside AWS; those are non-routable addresses.
Instead, connect to the public IP of the instance, if you have one.