I have a Ghost blog hosted on DigitalOcean; my domain can only be accessed over a secure connection (it's a .dev site).
My site is available when I access it with www, e.g. www.androidoss.dev, but not when accessed directly as androidoss.dev.
What could be the issue?
If you deployed Ghost on a DigitalOcean server, it is probably running behind Nginx. During the Ghost installation, the command ghost setup nginx sets up Nginx for you, and then ghost setup ssl sets up a Let's Encrypt SSL certificate for the provided domain name; however, it doesn't create a redirect rule from non-www to www.
You can add that redirect yourself in your Nginx configuration.
You have to add the lines to the server block for HTTP. The file path is /etc/nginx/sites-available/www.example.com and the block will look like this:
server {
    listen 80;
    ...................
    ...................
}
Add the lines below in place of the dotted lines:
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
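Put together, the HTTP server block ends up looking roughly like this (a minimal sketch; substitute your own domain for example.com and keep the separate HTTPS server block that Ghost generated):

server {
    listen 80;
    server_name example.com www.example.com;
    # send every plain-HTTP request to the canonical https://www host
    return 301 https://www.example.com$request_uri;
}

Then test the configuration with nginx -t and reload it with systemctl reload nginx.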
Related
I'm learning how to build and host my own website using Python and Flask, but I'm unable to make my website work as I keep getting an infinite redirect loop when I try to access my website through my domain name.
I've made my website using Python, Flask, and Flask-Flatpages. I uploaded the code to GitHub and pulled it onto a Raspberry Pi 4 that I have at my house. I installed gunicorn on the RasPi to serve the website and set up two workers to listen for requests (the gunicorn command is shown after the nginx config below). I've also set up nginx to act as a reverse proxy and listen for requests from outside. Here is my nginx configuration:
server {
    if ($host = <redacted>.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    # listen on port 80 (http)
    listen 80;
    server_name <redacted>.com www.<redacted>.com;

    location ~ /.well-known {
        root /home/pi/<redacted>.com/certs;
    }

    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}
server {
    # listen on port 443 (https)
    listen 443;
    ssl on;
    server_name <redacted>.com www.<redacted>.com;

    # location of the SSL certificate
    ssl_certificate /etc/letsencrypt/live/<redacted>.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/<redacted>.com/privkey.pem; # managed by Certbot

    # write access and error logs to /var/log
    access_log /var/log/blog_access.log;
    error_log /var/log/blog_error.log;

    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://localhost:8000;
        proxy_redirect off;
        proxy_set_header X_Forwarded_Proto $scheme;
        proxy_set_header Host $host;

        location /static {
            # handle static files directly, without forwarding to the application
            alias /home/pi/<redacted>.com/blog/static;
            expires 30d;
        }
    }
}
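And for reference, gunicorn is started with two workers bound to localhost:8000, roughly like this (blog:app is just a placeholder for my actual WSGI entry point):

gunicorn -w 2 -b 127.0.0.1:8000 blog:app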
When I access the website by typing in the local IP of the RasPi (I've set up a static IP address in /etc/dhcpcd.conf), the website is served just fine, although it seems like my browser won't recognize the SSL certificate, even though Chrome says the certificate is valid when I click on Not Secure > Certificate next to the URL.
To make the website public, I've forwarded port 80 on my router to the RasPi and set up ufw to allow requests only from ports 80, 443, and 22. I purchased a domain name using GoDaddy, then added the domain to CloudFlare by changing the nameservers in GoDaddy (I'm planning to set up cloudflare-ddns later, which is why I added the domain to CloudFlare in the first place). As a temporary solution, I've added the current IP of my router to the A Record in the CloudFlare DNS settings, which I'm hoping will be the same for the next few days.
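Roughly, the ufw setup was along these lines (a sketch of the rules just described):

sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable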
My problem arises when I try to access my website via my public domain name. When I do so, I get ERR_TOO_MANY_REDIRECTS, and I suspect this is due to some problem with my nginx configuration. I've already read this post and tried changing my CloudFlare SSL/TLS setting from Flexible to Full (strict). However, this leads to a different problem, where I get a CloudFlare error 522: connection timed out. None of the solutions in the CloudFlare help page seem to apply to my situation, as I've confirmed that:
I haven't blocked any CloudFlare IPs in ufw
The server isn't overloaded (I'm the only one accessing it right now)
Keepalive is enabled (I haven't changed anything from the default, although I'm unsure whether it is enabled by default)
The IP address in the A record of the DNS table matches the public IP of my router (found by searching "What is my IP" on Google)
Apologies if there is a lot in here for a single question, but any help would be appreciated!
I only see one obvious problem with your config, which is that this block that was automatically added by certbot should probably be removed:
if ($host = <redacted>.com) {
    return 301 https://$host$request_uri;
} # managed by Certbot
Because that behavior is already specified in the location / {} block, and I think the Certbot rule may take effect before the location ~ /.well-known block and break that functionality. I'm not certain about that, and I don't think it would cause the redirect loop, but you can test the .well-known handling yourself by trying to access http://yourhost.com/.well-known and seeing whether it redirects to HTTPS or not.
On that note, the immediate answer to your question is, get more information about what's happening! My next step would be to see what the redirect loop is - your browser may show this in its network requests log, or you can use a command-line tool like curl or httpie or similar to try to access your site via the hostname and see what requests are being made. Is it simply trying to access the same URL over and over, or is it looping through multiple URLs? What are they? What does that point at?
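For example, something like the following prints the status line and Location header of every hop in the chain, so you can see exactly where it loops (yourhost.com is a placeholder for your domain):

curl -sIL https://yourhost.com | grep -iE '^(HTTP|location)'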
And as a side note, it makes sense that Chrome wouldn't like your certificate when accessing it via IP - certificates are tied to one or more hostnames, so when you're accessing it over an IP address, the hostname doesn't match, so Chrome is probably (correctly) pointing that out and warning you that you're not at the hostname the certificate says you should be at.
I have an app that is "multi-domain": other domains just have to point to my server's IP address to run on my app on the web server.
Using Let's Encrypt, I have also generated SSL certificates for those pointed domains using HTTP challenges.
Now my problem is: how do I tell my web server to use the generated SSL files for the pointed domains?
They are not hosted on my server with their own config settings. They just point to my app via the IP address, and my app renders the content based on the domain name.
I am using VestaCP to manage the server, domains, and email.
Pointed domains have no config file on my server; they work at the web-application level.
How do I set up HTTPS for those pointed domains? Note that I already have valid SSL files; I'm just not sure where to put or point them, since there is no config.
Can this be done using .htaccess or at the web-application level?
E.g., my app runs at "http://example.com" and shows content for example.com, and for a second domain that is pointed at my server, "http://anotherExample.com", my app shows the content for "anotherExample.com", and so on. "example.com" is hosted on my server with Nginx and Apache config, so its SSL is set. But anotherExample.com is not hosted at the server level, only at the app level, so where do I set SSL for it? I have already successfully generated SSL for it using Let's Encrypt with an HTTP challenge.
Update: I run a platform like Blogspot.com with multi-domain blogs. How do I serve SSL for the pointed domains?
Thanks
I don't think what you want is directly possible. From your question, I think you are creating multiple A records which point to your application's IP address, and your application decides what data to serve based on the requested domain.
So what you have to do is get an SSL certificate for each and every domain you want to serve, then configure the web server to send the corresponding certificate. This can be done easily with most web servers, e.g. on Nginx:
server {
    listen *:443 ssl;
    server_name domain1.com;
    ssl_certificate /path/to/domain1.crt;
    ssl_certificate_key /path/to/domain1.key;
    ...
}

server {
    listen *:443 ssl;
    server_name domain2.com;
    ssl_certificate /path/to/domain2.crt;
    ssl_certificate_key /path/to/domain2.key;
    ...
}
In case you are serving different subdomains, like domain1.example.com and domain2.example.com, you could get a wildcard certificate instead, which will do the trick.
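Nginx picks the correct server block via SNI (the hostname the client sends during the TLS handshake), so each visitor gets the certificate for the domain they requested. If you want to check which certificate is actually served for a given name, a quick test could look like this (assuming the domain already resolves to your server):

openssl s_client -connect domain1.com:443 -servername domain1.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer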
I'm trying to redirect from an old domain to a new one.
The old domain used to have an SSL cert, but it doesn't any more.
So I need to 301 redirect these:
http://olddomain.co.uk
http://www.olddomain.co.uk
https://olddomain.co.uk
https://www.olddomain.co.uk
All to: https://www.newdomain.co.uk
This is my config:
server {
    listen 80;
    listen 443 ssl;
    server_name olddomain.co.uk www.olddomain.co.uk;
    return 301 https://www.newdomain.co.uk;
}
I'm using http://www.redirect-checker.org to test.
Both of the http URLs redirect fine; however, the https URLs are not found at all, as if this server block doesn't catch the https URLs.
Is that because I need an SSL cert even though I'm not serving anything..?
Is an SSL cert still needed, just to redirect..?
If not, why would this not work..?
EDIT
To be clear, I don't see cert errors; Chrome says "This site can't be reached", it doesn't say anything about a cert. redirect-checker.org says "no URLs found".
EDIT 2
I've found another .conf file which is working (all 4 URLs, including the 2 https ones, redirecting, without a cert installed). This is copy-pasted:
server {
    listen 80;
    listen 443 ssl;
    server_name thepreventduty.com www.thepreventduty.com;
    return 301 $scheme://www.thepreventduty.co.uk$request_uri;
}
These all redirect:
http://thepreventduty.com
http://www.thepreventduty.com
https://thepreventduty.com
https://www.thepreventduty.com
To https://www.thepreventduty.co.uk, and I don't have an SSL cert for thepreventduty.com.
You can see it works here: http://www.redirect-checker.org/
When I add another .conf for another domain (I'm using include websites/*.conf; in nginx.conf), with the exact same server block and just the domain names changed - it doesn't work!
Why..?
An HTTPS connection means HTTP running inside an SSL/TLS session.
To establish the SSL session, the server needs a certificate and a key.
From the official nginx documentation:
To configure an HTTPS server, the ssl parameter must be enabled on listening sockets in the server block, and the locations of the server certificate and private key files should be specified
So you need to specify the locations of the certificate and key.
With an incorrect SSL setup you will get a certificate error before the redirect.
I recommend using acme.sh to get a valid certificate and key.
First, temporarily disable the redirects and specify a root directory for the old domain.
Then follow the instructions at:
https://github.com/Neilpang/acme.sh
Once that is done, enable the redirects again.
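Once the certificate for the old domain is in place, the redirect block ends up looking roughly like this (a sketch; the certificate paths depend on where your client installs them):

server {
    listen 80;
    listen 443 ssl;
    server_name olddomain.co.uk www.olddomain.co.uk;
    # the certificate must cover olddomain.co.uk and www.olddomain.co.uk
    ssl_certificate /etc/nginx/ssl/olddomain.co.uk/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/olddomain.co.uk/privkey.pem;
    return 301 https://www.newdomain.co.uk$request_uri;
}

Keeping $request_uri at the end preserves the requested path in the redirect, as in your working .conf.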
So I created an Ubuntu 16.04 droplet on DigitalOcean and used ServerPilot to set it up. ServerPilot automatically creates a web server on top of your Ubuntu install, using Apache with Nginx as a reverse proxy.
Now I'm not sure how I can go about installing Let's Encrypt SSL on a reverse proxy server. Do I have to run Certbot for nginx since nginx serves the frontend? I'm trying to be able to use HTTPS to access my site.
Is procedure different for reverse proxy servers?
No, there's no difference. You can either use the webroot of Apache or include a rule in Nginx to answer all ACME challenge requests directly.
Example
server {
    listen 80;
    server_name example.org;

    location / {
        return 301 https://$host$request_uri;
    }

    location /.well-known/acme-challenge {
        root /tmp;
    }
}
Then use /tmp or any other path for your challenges. Personally, I use my own client, which is written in PHP and much simpler than the official one; see https://github.com/kelunik/acme-client.
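If you would rather use the official client, its webroot plugin fits this setup; a sketch using the path and name from the example above:

certbot certonly --webroot -w /tmp -d example.org

Certbot drops the challenge files under /tmp/.well-known/acme-challenge/, which is exactly where the location block above points nginx.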
I have a web service hosted on local IP 192.168.1.21:8080 (Apache Tomcat) which is up and running (i.e. I can browse to that IP and get the Tomcat front page as expected).
I'm now trying to set up a proxy rule in my nginx saying that the URL "jft.pdf.home.se" should proxy to that IP (using the nginx rule below):
# GeneratePDF
server {
    listen 80;
    server_name jft.pdf.home.se;
    #GeneratePDF
    location / {
        proxy_pass http://192.168.1.21:8080/;
        include /etc/nginx/proxy_params;
    }
}
When I try to browse to jft.pdf.home.se I get a "page cannot be found" error. Again, if I use 192.168.1.21:8080, it works fine.
I also tried changing server_name to pdf.home.se but with the same result.
Can anyone see what I might be missing?
I soon realized that I hadn't published the DNS record for this hostname yet, which was what caused the "page not found" error!
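If anyone hits the same symptom, it's worth checking whether the hostname resolves at all before digging into the proxy rule, e.g.:

dig +short jft.pdf.home.se

If that returns nothing, the problem is DNS rather than the nginx configuration.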