I have a Sails application hosted on DigitalOcean via dokku. Everything runs and deploys fine, and if I navigate to my domain, I can see that the app is working.
Now I have added a TLS certificate (so that my app is accessible via HTTPS) by:
Creating my private key and CSR.
Using them to get a certificate from a CA.
Adding my private key and issued certificate to config/local.js
Tarballing the key and certificate and adding them to dokku via dokku certs:add (sketched just below)
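For reference, a minimal sketch of that last step, assuming the key and certificate are named server.key and server.crt (your filenames may differ):
tar cvf cert-key.tar server.crt server.key
dokku certs:add my-app < cert-key.tar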
After all that, when I push my app to dokku it boots just fine without any errors during the deployment phase, and the buildpack logs clearly show that the app should be accessible via HTTPS:
...
-----> Creating https nginx.conf
-----> Running nginx-pre-reload
Reloading nginx
-----> Setting config vars
DOKKU_APP_RESTORE: 1
-----> Shutting down old containers in 60 seconds
=====> c302066ebd1ecc0ac5323c3cbbcaf9132eebf905f5616e5b4407cecf2b316969
=====> Application deployed:
http://my-domain-here.com
https://my-domain-here.com
The only problem is that when I navigate to my domain, I get a "502 Bad Gateway" error in the browser, and the app's nginx error log shows the following:
2016/07/14 03:09:30 [error] 7827#0: *391 upstream prematurely closed connection while reading response header from upstream, client: --hidden--, server: my-domain-here.com, request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:5000/", host: "getmocky.com"
What is wrong, and how do I fix it?
OK, I have figured it out. It turns out that if you read the Sails deployment documentation closely, you will find a note like this:
don't worry about configuring Sails to use an SSL certificate. SSL will almost always be resolved at your load balancer/proxy server, or by your PaaS provider
What this means is that I have to drop step 3 from my list (adding the key and certificate to config/local.js); after that, everything works.
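In other words, config/local.js should not contain any ssl settings at all. A minimal sketch of what remains, with the port value only as an example:
// config/local.js
module.exports = {
  // No `ssl` block here: TLS terminates at Dokku's nginx proxy,
  // which forwards plain HTTP to the app container.
  port: process.env.PORT || 5000
};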
Related
I am using the server-side CLI to get an SSL certificate for my web app (following these instructions: https://github.com/dokku/dokku-letsencrypt).
After following the setup I ran:
root@taaalk:~# dokku letsencrypt taaalk
=====> Let's Encrypt taaalk
-----> Updating letsencrypt docker image...
0.1.0: Pulling from dokku/letsencrypt
Digest: sha256:af5f8529c407645e97821ad28eba328f4c59b83b2141334f899303c49fc07823
Status: Image is up to date for dokku/letsencrypt:0.1.0
docker.io/dokku/letsencrypt:0.1.0
Done updating
-----> Enabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
-----> Getting letsencrypt certificate for taaalk...
- Domain 'taaalk.taaalk.co'
darkhttpd/1.12, copyright (c) 2003-2016 Emil Mikulic.
listening on: http://0.0.0.0:80/
2020-04-28 23:12:10,728:INFO:__main__:1317: Generating new account key
2020-04-28 23:12:11,686:INFO:__main__:1343: By using simp_le, you implicitly agree to the CA's terms of service: https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf
2020-04-28 23:12:12,017:INFO:__main__:1406: Generating new certificate private key
2020-04-28 23:12:14,753:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4241725520
2020-04-28 23:12:14,757:INFO:__main__:396: Saving account_key.json
2020-04-28 23:12:14,758:INFO:__main__:396: Saving account_reg.json
Challenge validation has failed, see error log.
Debugging tips: -v improves output verbosity. Help is available under --help.
-----> Certificate retrieval failed!
-----> Disabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
done
root@taaalk:~#
To make it easier to read, the error was:
2020-04-28 23:12:14,753:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4241725520
I did a lot of googling around and the most promising post I found on the subject was this one:
https://veryjoe.com/tech/2019/07/06/HTTPS-dokku.html
The post suggested checking for Dokku domain misconfiguration and for missing network listeners.
I ran dokku domains:report to check for the misconfiguration. This returned:
root@taaalk:~# dokku domains:report
=====> taaalk domains information
Domains app enabled: true
Domains app vhosts: taaalk.taaalk.co
Domains global enabled: true
Domains global vhosts: taaalk.co
And I then ran dokku network:report to check for missing listeners:
root@taaalk:~# dokku network:report
=====> taaalk network information
Network attach post create:
Network attach post deploy:
Network bind all interfaces: false
Network web listeners: 172.17.0.4:5000
After talking things through with a friend we tried adding an 'A' record to my DNS with the host 'taaalk.taaalk.co'.
I then ran:
root@taaalk:~# dokku letsencrypt taaalk
=====> Let's Encrypt taaalk
-----> Updating letsencrypt docker image...
0.1.0: Pulling from dokku/letsencrypt
Digest: sha256:af5f8529c407645e97821ad28eba328f4c59b83b2141334f899303c49fc07823
Status: Image is up to date for dokku/letsencrypt:0.1.0
docker.io/dokku/letsencrypt:0.1.0
Done updating
-----> Enabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
-----> Getting letsencrypt certificate for taaalk...
- Domain 'taaalk.taaalk.co'
darkhttpd/1.12, copyright (c) 2003-2016 Emil Mikulic.
listening on: http://0.0.0.0:80/
2020-04-30 13:39:58,623:INFO:__main__:1406: Generating new certificate private key
2020-04-30 13:40:03,879:INFO:__main__:396: Saving fullchain.pem
2020-04-30 13:40:03,880:INFO:__main__:396: Saving chain.pem
2020-04-30 13:40:03,880:INFO:__main__:396: Saving cert.pem
2020-04-30 13:40:03,880:INFO:__main__:396: Saving key.pem
-----> Certificate retrieved successfully.
-----> Installing let's encrypt certificates
-----> Unsetting DOKKU_PROXY_PORT
-----> Setting config vars
DOKKU_PROXY_PORT_MAP: http:80:5000
-----> Setting config vars
DOKKU_PROXY_PORT_MAP: http:80:5000 https:443:5000
-----> Configuring taaalk.taaalk.co...(using built-in template)
-----> Creating https nginx.conf
Enabling HSTS
Reloading nginx
-----> Configuring taaalk.taaalk.co...(using built-in template)
-----> Creating https nginx.conf
Enabling HSTS
Reloading nginx
-----> Disabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
done
Which was successful.
However, now taaalk.taaalk.co has an SSL, but taaalk.co does not.
I don't know where to go from here. I feel it makes sense to change the vhost from taaalk.taaalk.co to taaalk.co, but I am not sure if this is correct or how to do it. The Dokku documentation does not seem to cover changing the vhost name: http://dokku.viewdocs.io/dokku/configuration/domains/
Thank you for any help
Update
I changed the vhost to taaalk.co, so I now have:
root@taaalk:~# dokku domains:report
=====> taaalk domains information
Domains app enabled: true
Domains app vhosts: taaalk.co
Domains global enabled: true
Domains global vhosts: taaalk.co
However, I still get the following error:
root@taaalk:~# dokku letsencrypt taaalk
=====> Let's Encrypt taaalk
-----> Updating letsencrypt docker image...
0.1.0: Pulling from dokku/letsencrypt
Digest: sha256:af5f8529c407645e97821ad28eba328f4c59b83b2141334f899303c49fc07823
Status: Image is up to date for dokku/letsencrypt:0.1.0
docker.io/dokku/letsencrypt:0.1.0
Done updating
-----> Enabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
-----> Getting letsencrypt certificate for taaalk...
- Domain 'taaalk.co'
darkhttpd/1.12, copyright (c) 2003-2016 Emil Mikulic.
listening on: http://0.0.0.0:80/
2020-04-30 17:01:12,996:INFO:__main__:1406: Generating new certificate private key
2020-04-30 17:01:46,068:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4277663330
Challenge validation has failed, see error log.
Debugging tips: -v improves output verbosity. Help is available under --help.
-----> Certificate retrieval failed!
-----> Disabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
done
root@taaalk:~#
Again, reproduced below for ease of reading:
2020-04-30 17:01:46,068:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4277663330
Challenge validation has failed, see error log.
The fix was quite simple. First I made A records for both the www and root versions of my URL, pointing at my server.
I then set my vhosts to both taaalk.co and www.taaalk.co with dokku domains:add taaalk www.taaalk.co, etc.
I then removed all the certs associated with taaalk.co with dokku certs:remove taaalk.
I then ran dokku letsencrypt taaalk and everything worked fine.
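A consolidated sketch of that sequence, assuming the app is called taaalk (the A records themselves are created at your DNS provider, not on the server):
dokku domains:add taaalk taaalk.co       # make sure the bare domain is a vhost
dokku domains:add taaalk www.taaalk.co   # add the www vhost
dokku certs:remove taaalk                # drop the certs previously issued for the old vhost
dokku letsencrypt taaalk                 # request fresh certificates for all vhosts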
To anyone following along who tried what Joshua did and still didn't get letsencrypt to generate certs:
My problem was that I didn't have any port mapping for port 80 on dokku, so letsencrypt was unable to communicate with the server to authorise the new cert, giving this error:
ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4277663330
Challenge validation has failed, see error log.
Silly me: I had removed the http port 80 mapping in dokku because I thought it was unnecessary.
To fix the problem I just added the port mapping again:
dokku proxy:ports-add myapp http:80:4000
(Note: my app listens on port 4000, hence the mapping above; your port may be different.)
And then I ran dokku letsencrypt:
dokku letsencrypt myapp
This sequence is important: setting the proxy ports correctly allows letsencrypt to connect and auto-renew the TLS certs again.
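If you want to double-check the mappings before re-running letsencrypt, something like this should work in the dokku version used here (myapp is a placeholder):
dokku proxy:report myapp   # shows the current proxy configuration
dokku proxy:ports myapp    # lists the scheme:host-port:container-port mappings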
I currently have an HTTPS load balancer set up with a frontend, backend, and health check on port 443, serving a single-host nginx instance.
When navigating directly to the host via browser the page loads correctly with valid SSL certs.
When trying to access the site through the load balancer IP, I receive a 502 Server Error message. I checked the Google logs and noticed "failed_to_pick_backend" errors at the load balancer. I also noticed that it is failing health checks.
Some digging around leads me to these two links: https://cloudplatform.googleblog.com/2015/07/Debugging-Health-Checks-in-Load-Balancing-on-Google-Compute-Engine.html
https://github.com/coreos/bugs/issues/1195
Issue #1: Not sure if google-address-manager is running on the server (RHEL 7). I do not see an entry for the HTTPS load balancer IP in the routes. The Google SDK is installed. This is a Google-provided image, and if I update the IP address in the console, it also gets updated on the host. How do I check whether google-address-manager is running on RHEL 7?
[root@server]# ip route ls table local type local scope host
10.212.2.40 dev eth0 proto kernel src 10.212.2.40
127.0.0.0/8 dev lo proto kernel src 127.0.0.1
127.0.0.1 dev lo proto kernel src 127.0.0.1
Output of all Google services:
[root@server]# systemctl list-unit-files
google-accounts-daemon.service enabled
google-clock-skew-daemon.service enabled
google-instance-setup.service enabled
google-ip-forwarding-daemon.service enabled
google-network-setup.service enabled
google-shutdown-scripts.service enabled
google-startup-scripts.service enabled
Issue #2: Not receiving a 200 OK response. The certificate is valid and the same on both the LB and the server. When running curl against the app server I receive this response:
root@server.com curl -I https://app-server.com
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Thoughts?
You should add firewall rules for the health check service (see https://cloud.google.com/compute/docs/load-balancing/health-checks#health_check_source_ips_and_firewall_rules) and make sure that your backend service listens on the load balancer IP (the easiest option is to bind to 0.0.0.0). This is definitely true for an internal load balancer; I am not sure about HTTPS with an external IP.
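If those rules are missing, a sketch of creating them with gcloud; the rule name and network are placeholders:
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=default \
    --allow=tcp:443 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16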
A couple of updates and lessons learned:
I have found out that google-address-manager is now deprecated and replaced by google-ip-forwarding-daemon, which is running.
[root@server ~]# sudo service google-ip-forwarding-daemon status
Redirecting to /bin/systemctl status google-ip-forwarding-daemon.service
google-ip-forwarding-daemon.service - Google Compute Engine IP Forwarding Daemon
Loaded: loaded (/usr/lib/systemd/system/google-ip-forwarding-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2017-12-22 20:45:27 UTC; 17h ago
Main PID: 1150 (google_ip_forwa)
CGroup: /system.slice/google-ip-forwarding-daemon.service
└─1150 /usr/bin/python /usr/bin/google_ip_forwarding_daemon
There is an active firewall rule allowing IP ranges 130.211.0.0/22 and 35.191.0.0/16 for port 443. The target is also properly set.
Finally, the health check was using the default "/" path. The developers had put authentication in front of the site during development, so when I bypassed the SSL cert error, curl returned a 401 Unauthorized. This was the root cause of the issue we were experiencing. To remedy it, we modified the nginx basic authentication configuration to disable authentication for a new route (e.g. /health).
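A minimal sketch of that nginx change, assuming basic auth is enabled at the server level and that a plain 200 is enough for the health checker:
location /health {
    auth_basic off;            # opt this path out of basic authentication
    default_type text/plain;
    return 200 'OK';
}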
Once the nginx configuration was updated and the health check path was changed to the new /health route, we received valid 200 responses. This allowed the health check to report healthy instances and let the LB pass traffic through.
I have a DigitalOcean droplet where I have hosted a Laravel application (App Url). I added an SSL certificate using Tutorial Link, but when I run the application over HTTPS it returns a 404 "page not found" error. Can anyone check the issue? The config file (assamgas.tk.conf) is below.
I'm seeing two things wrong here.
1) The web server is not redirecting to port 443 (SSL/HTTPS)
2) The certificate is not present.
I could not find any certs when connecting to your server over HTTPS.
I suggest you run through the tutorial again, or try this DigitalOcean tutorial.
Don't generate too many production certificates while you test. Use the Let's Encrypt staging server for your testing instead; once you successfully get the (untrusted) staging certificate, switch over to the Let's Encrypt production server. Otherwise you will get rate-limited for a week.
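A sketch of that workflow, assuming the tutorial uses certbot with the nginx plugin (switch to --apache if your droplet runs Apache):
sudo certbot --nginx -d assamgas.tk -d www.assamgas.tk --staging   # test issuance against staging
sudo certbot delete --cert-name assamgas.tk                        # remove the staging certificate
sudo certbot --nginx -d assamgas.tk -d www.assamgas.tk             # then request the real one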
I have a server that runs Nexus. I can access Nexus and download artifacts via HTTPS (in a browser) without a problem.
Now I want to get the artifact using wget via https:
wget https://195.20.100.100:8081/repository/myrepo/com/myrepo/program/1.0-SNAPSHOT/program.tar.gz
and it tells me :
WARNING: cannot verify 195.20.100.100's certificate, issued by ‘/C=US/ST=Unspecified/L=Unspecified/O=Sonatype/OU=Example/CN=*.195.20.100.100’:
Self-signed certificate encountered.
Proxy request sent, awaiting response... 401 Unauthorized
Authorization failed.
I want to know the exact steps I have to follow.
Thanks in advance.
This isn't a Nexus Repository Manager issue per se; I believe you just need to do something akin to the answer in this post: wget, self-signed certs and a custom HTTPS server.
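A sketch of what that amounts to here; the credentials and CA file path are assumptions, not values from your setup:
# Trust the server's certificate explicitly and authenticate against Nexus:
wget --ca-certificate=/path/to/nexus-ca.pem --user=myuser --password=mypass \
     https://195.20.100.100:8081/repository/myrepo/com/myrepo/program/1.0-SNAPSHOT/program.tar.gz
# Or skip verification entirely (not recommended outside testing):
wget --no-check-certificate --user=myuser --password=mypass \
     https://195.20.100.100:8081/repository/myrepo/com/myrepo/program/1.0-SNAPSHOT/program.tar.gz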
I use a self-signed certificate for encryption.
After some work, HTTPS is working for git, but the git@xxxxx (SSH) way does not work. Here's the output:
Cloning into 'test'...
/usr/lib/ruby/1.9.1/net/http.rb:762:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
from /usr/lib/ruby/1.9.1/net/http.rb:762:in `open'
from /usr/lib/ruby/1.9.1/net/http.rb:762:in `block in connect'
from /usr/lib/ruby/1.9.1/timeout.rb:54:in `timeout'
from /usr/lib/ruby/1.9.1/timeout.rb:99:in `timeout'
from /usr/lib/ruby/1.9.1/net/http.rb:762:in `connect'
from /usr/lib/ruby/1.9.1/net/http.rb:755:in `do_start'
from /usr/lib/ruby/1.9.1/net/http.rb:744:in `start'
from /home/git/gitlab-shell/lib/gitlab_net.rb:56:in `get'
from /home/git/gitlab-shell/lib/gitlab_net.rb:17:in `allowed?'
from /home/git/gitlab-shell/lib/gitlab_shell.rb:51:in `validate_access'
from /home/git/gitlab-shell/lib/gitlab_shell.rb:21:in `exec'
from /home/git/gitlab-shell/bin/gitlab-shell:16:in `<main>'
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Both SSH and HTTP worked fine before I started the self-signed cert setup, so it's now SSH+SSL that is not working.
I'm using nginx and GitLab 5.3, and I followed the install instructions on the GitLab website.
I did a check, too.
~> sudo -u git -H /home/git/gitlab-shell/bin/check
Check GitLab API access: FAILED. code: 301
Check directories and files:
/home/git/repositories: OK
/home/git/.ssh/authorized_keys: OK
I think the 301 might be caused by this part of my nginx config:
server {
listen 80;
server_name gitlab.MYDOMAIN.com;
rewrite ^ https://$server_name$request_uri? permanent;
}
I don't know if it's related or not.
Thanks.
The issue you're having is that when you enabled SSL you also redirected HTTP to HTTPS.
Accessing the old http:// URL works on most clients, but gitlab-shell (used as part of the login process on the GitLab server) will not follow 3xx redirects and instead returns an error, thus disabling SSH-based access.
The fix is to edit /home/git/gitlab-shell/config.yml and replace the http:// in gitlab_url: with https://.
If you're using self-signed certificates, you may also have to set self_signed_cert: true under http_settings:
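A minimal sketch of the relevant part of /home/git/gitlab-shell/config.yml; the hostname is a placeholder:
gitlab_url: "https://gitlab.mydomain.com/"

http_settings:
  self_signed_cert: true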
For GitLab 6.0 this fixed the error for me: if you're using self-signed certificates, make sure that in gitlab-shell/config.yml your gitlab_url is https://... rather than http://..., and that you specify self_signed_cert: true.
SSH+SSL?
But the two aren't related from the client's perspective (unless you want to do some kind of SSH tunneling through NGiNX).
An SSH connection would talk to the SSH daemon (which doesn't need any certificate) and would require that the correct SSH public key has been registered in the server account's ~/.ssh/authorized_keys (done by GitLab when a user registers said public key in his/her profile page).
The gitlab-shell/bin/check error is another issue, again not related to the SSH issue.
It is gitlab-shell which tries to contact GitLab locally through an HTTPS API.
Solve that locally, and any connection (https or ssh) from the client will succeed.
In particular, check issue 3892, and see if you need to add a CA to the .crt file served by NGiNX.
LJ Vankuiken adds in the comments:
the self-signed flag needs to be set to "true" if the certificate chain presented by your gitlab server cannot be completely verified by the gitlab-shell.
I was able to set the self-signed flag to "false" by adding the signing authority's certificate to the system certificate store.
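A sketch of what that looks like; the CA file name is a placeholder and the commands depend on the distribution:
# Debian/Ubuntu
sudo cp my-signing-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
# RHEL/CentOS
sudo cp my-signing-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust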
For what it's worth, in case anyone hits something similar: I am running GitLab on port 8080, and because gitlab_url in gitlab-shell/config.yml was NOT pointing to port 8080, it failed with a redirect error (which my server running on port 80 was kicking up).
So to summarize, if you access gitlab via http://gitlab.mydomain.com:8080/ make sure gitlab_url points to http://gitlab.mydomain.com:8080/ as well!