Until now, I have used my server only for educational purposes and for encoding video. Now I wanted to try hosting a site on it (for my friend) using nginx and Apache, but the problem is that even though the site loads successfully on my computer, and on some other machines too, I have also seen cases where the page didn't load and the "Welcome to nginx on debian" page was shown instead.
How can I make it work every time?
/etc/nginx/sites-available/uterfleru.cz :
server {
    listen 80;
    root /var/uterfleru.cz;
    index index.html index.php index.htm;
    server_name uterfleru.cz;
}
DNS - A:
uterfleru.cz 64.188.46.67
www.uterfleru.cz 64.188.46.67
64.188.46.67 is the IPv4 address of my server, and http://uterfleru.cz/ is the webpage.
server_name uterfleru.cz; matches exactly the uterfleru.cz domain name. To make this server block work for the www subdomain too, you have to modify it like this:
server_name www.uterfleru.cz uterfleru.cz;
To make it work with any subdomain, change it to:
# shorthand for: server_name *.uterfleru.cz uterfleru.cz;
server_name .uterfleru.cz;
To make this server block the default one, you have to remove the /etc/nginx/sites-enabled/default.conf file and modify your listen directive like this:
listen 80 default_server;
The official documentation has all the information you need; it's some of the best software documentation I've ever seen, and I highly recommend learning to make use of it.
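Putting those pieces together, a complete default server block covering the bare domain and every subdomain might look like this (a sketch based on the paths from the question; adjust the root to your own setup):

```nginx
server {
    # "default_server" makes this block answer any request
    # that no other server block's server_name matches
    listen 80 default_server;

    # ".uterfleru.cz" matches uterfleru.cz and all of its subdomains
    server_name .uterfleru.cz;

    root /var/uterfleru.cz;
    index index.html index.php index.htm;
}
```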
Related
I'm learning how to build and host my own website using Python and Flask, but I can't get it to work: I keep getting an infinite redirect loop when I try to access the site through my domain name.
I've made my website using Python, Flask, and Flask-Flatpages. I uploaded the code to GitHub and pulled it onto a Raspberry Pi 4 that I have at my house. I installed gunicorn on the RasPi to serve the website and set up two workers to listen for requests. I've also set up nginx to act as a reverse proxy and listen to requests from outside. Here is my nginx configuration:
server {
    if ($host = <redacted>.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    # listen on port 80 (http)
    listen 80;
    server_name <redacted>.com www.<redacted>.com;
    location ~ /.well-known {
        root /home/pi/<redacted>.com/certs;
    }
    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}
server {
    # listen on port 443 (https)
    listen 443;
    ssl on;
    server_name <redacted>.com www.<redacted>.com;
    # location of the SSL certificate
    ssl_certificate /etc/letsencrypt/live/<redacted>.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/<redacted>.com/privkey.pem; # managed by Certbot
    # write access and error logs to /var/log
    access_log /var/log/blog_access.log;
    error_log /var/log/blog_error.log;
    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://localhost:8000;
        proxy_redirect off;
        proxy_set_header X_Forwarded_Proto $scheme;
        proxy_set_header Host $host;
        location /static {
            # handle static files directly, without forwarding to the application
            alias /home/pi/<redacted>.com/blog/static;
            expires 30d;
        }
    }
}
When I access the website by typing in the local IP of the RasPi (I've set up a static IP address in /etc/dhcpcd.conf), the website is served just fine, although it seems like my browser won't recognize the SSL certificate, even though Chrome says the certificate is valid when I click on Not Secure > Certificate next to the URL.
To make the website public, I've forwarded port 80 on my router to the RasPi and set up ufw to allow requests only from ports 80, 443, and 22. I purchased a domain name using GoDaddy, then added the domain to CloudFlare by changing the nameservers in GoDaddy (I'm planning to set up cloudflare-ddns later, which is why I added the domain to CloudFlare in the first place). As a temporary solution, I've added the current IP of my router to the A Record in the CloudFlare DNS settings, which I'm hoping will be the same for the next few days.
My problem arises when I try to access my website via my public domain name. When I do so, I get ERR_TOO_MANY_REDIRECTS, and I suspect this is due to some problem with my nginx configuration. I've already read this post and tried changing my CloudFlare SSL/TLS setting from Flexible to Full (strict). However, this leads to a different problem, where I get a CloudFlare error 522: connection timed out. None of the solutions in the CloudFlare help page seem to apply to my situation, as I've confirmed that:
I haven't blocked any CloudFlare IPs in ufw
The server isn't overloaded (I'm the only one accessing it right now)
Keepalive is enabled (I haven't changed anything from the default, although I'm unsure whether it is enabled by default)
The IP address in the A Record of the DNS Table matches the Public IP of my router (found through searching "What is my IP" on google)
Apologies if there is a lot in here for a single question, but any help would be appreciated!
I only see one obvious problem with your config, which is that this block that was automatically added by certbot should probably be removed:
if ($host = <redacted>.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
Because that behavior is already specified in the location / {} block, and I think the Certbot rule may take effect before the location ~ /.well-known block and break that functionality. I'm not certain about that, and I don't think that would cause the redirects, but you can test the well-known functionality yourself by trying to access http://yourhost.com/.well-known and seeing if it redirects to HTTPS or not.
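Concretely, with the Certbot if-block dropped, the port-80 server from the question would reduce to something like this (a sketch using the paths from the question):

```nginx
server {
    # listen on port 80 (http)
    listen 80;
    server_name <redacted>.com www.<redacted>.com;

    # serve ACME challenges over plain http so certbot renewals keep working
    location ~ /.well-known {
        root /home/pi/<redacted>.com/certs;
    }

    # redirect everything else to https
    location / {
        return 301 https://$host$request_uri;
    }
}
```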
On that note, the immediate answer to your question is, get more information about what's happening! My next step would be to see what the redirect loop is - your browser may show this in its network requests log, or you can use a command-line tool like curl or httpie or similar to try to access your site via the hostname and see what requests are being made. Is it simply trying to access the same URL over and over, or is it looping through multiple URLs? What are they? What does that point at?
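As a sketch of that kind of command-line check (assuming curl is installed; the hostname in the usage line is a placeholder for your own):

```shell
# Print each hop of a redirect chain: the status line plus any Location header.
# -s: silent, -I: request headers only, -L: follow redirects, capped at 10 hops.
trace_redirects() {
    curl -sIL --max-redirs 10 "$1" | grep -iE '^(HTTP/|location:)'
}

# Usage: trace_redirects http://yourhost.com/
```

If the output shows the same URL repeating, the loop is in one server block; if it alternates between http and https (or between hostnames), that points at a conflict between two layers, such as nginx and CloudFlare both redirecting.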
And as a side note, it makes sense that Chrome wouldn't like your certificate when accessing it via IP - certificates are tied to one or more hostnames, so when you're accessing it over an IP address, the hostname doesn't match, so Chrome is probably (correctly) pointing that out and warning you that you're not at the hostname the certificate says you should be at.
1) How can I make Apache redirect the whole URL, including parameters, in a way that is visible to the client? For example:
when a client goes to:
https://domain1.com/app/index.php?device_id=WeWeWe&ordna_ver=5.0&num=+1234567890
it redirects him to:
https://domain2.com/app/index.php?device_id=WeWeWe&ordna_ver=5.0&num=+1234567890
2) Also, how can I make the same redirect but NOT visible to the client (they still see the domain1.com URL while the content is served from domain2.com)?
3) And third, how can I do the same two things (redirects) with nginx?
Thank you very much for your help.
In nginx, visible to the client:
server_name domain1.com;
return https://domain2.com$request_uri;
In nginx, hiding the redirect from being visible to the client:
server_name domain1.com;
location / {
    proxy_pass https://domain2.com;
}
You might also want to use the optional module http://nginx.org/docs/http/ngx_http_sub_module.html#sub_filter (requires recompiling nginx) if you want to make sure any mention of domain2.com in the proxied web page is replaced with domain1.com:
sub_filter "https://domain2.com" "https://domain1.com";
sub_filter_once off;
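Assembled into complete server blocks, the two nginx variants might look like this sketch (the listen port and domain names are placeholders):

```nginx
# Variant 1: visible redirect -- the client's address bar changes to domain2.com
server {
    listen 80;
    server_name domain1.com;
    # 301 makes it permanent; drop the code for a temporary 302
    return 301 https://domain2.com$request_uri;
}

# Variant 2: transparent proxy -- the client keeps seeing domain1.com
server {
    listen 80;
    server_name domain1.com;
    location / {
        proxy_pass https://domain2.com;
    }
}
```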
With Apache's mod_vhost_alias you can use Directory Interpolation to serve sites based on directory structure. See here http://httpd.apache.org/docs/2.2/mod/mod_vhost_alias.html#interpol
Is this possible with NGINX? If so, how?
The simplest one would be:
server {
    listen 80 default_server;
    root /var/www/$host;
}
For http://www.example.com/directory/file.html this will serve file /var/www/www.example.com/directory/file.html.
I've just found out through randomly searching for something else that the exact same functionality as Apache's Directory Interpolation (and more) can be achieved using regex, for example...
server_name "~^(?<machine>.*?)\.(?<domain>.*?)\.(?<group>.*?)\.dev$";
root "/some/place/projects/$group/$domain/$machine";
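As a complete server block, with a hypothetical hostname showing how the captures map to a path:

```nginx
server {
    listen 80;

    # Named captures become variables:
    # e.g. app.client.acme.dev -> machine=app, domain=client, group=acme
    server_name "~^(?<machine>.*?)\.(?<domain>.*?)\.(?<group>.*?)\.dev$";

    # ...so app.client.acme.dev is served from /some/place/projects/acme/client/app
    root "/some/place/projects/$group/$domain/$machine";
}
```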
For anyone coming here wanting to auto manage their local webserver setup, I found these useful to take care of the DNS side of things
https://echo.co/blog/never-touch-your-local-etchosts-file-os-x-again (mac)
http://mayakron.altervista.org/wikibase/show.php?id=AcrylicHome (win)
At the moment I am using nginx as my webserver. Some time ago I found out about Server Name Indication (SNI), which was very helpful since I have a couple of domains (including all their subdomains) running on my server.
So, excited as I was, I put an SSL certificate on my main domain and a couple of subdomains. Works great, no problem :D
(The reason I didn't just use a wildcard certificate is that I am using http://www.startssl.com/, which provides me with free SSL certificates; for a wildcard I would have to pay, and my server isn't that important. It's merely a hobby project.)
So on to the question:
If someone browses to a non-existing (sub)domain, or one that does not have a certificate installed, they of course get a big warning in their browser, because nginx served the default certificate, which does not match that non-existing domain.
I was wondering: would it be possible to tell nginx, if the SNI system gets asked for a non-existing domain name, to just terminate the connection, or maybe do something so that instead of a name-mismatch warning the browser shows some other warning saying the site does not exist (if such a system even exists)?
I know one solution: take away the wildcard DNS record so non-existing domains will indeed not exist, but that doesn't help for the ones that exist but not on https. Also, with that I cannot easily just add a new subdomain; I would have to edit the DNS settings for it too. (I know, I am lazy, but which nerd isn't :P)
Oh, and if such a thing is not possible I'll just have to live with the way it is now. Not that big of a deal, but it would make my server less 'cool' xd (I'm sorry, I'm just a random tech hobbyist).
Just don't use SSL for subdomains :)
eg
server {
    listen 80;
    listen 443;
    server_name example.com;
    ...
}
server {
    listen 80;
    server_name *.example.com;
    ...
}
Turns out the answer by SuddenHead was almost the solution; it needed just one tiny additional step. I had already done what SuddenHead said, but if you visited a non-existing subdomain, or a subdomain that doesn't have SSL, it would just serve the first SSL-enabled subdomain it could find. The solution turns out to be very simple: add a server block to the nginx config and make sure it is the first one nginx will read:
server {
    listen 80 default_server;
    listen 443 default_server;
    return 404;
}
This will also serve a 404 page for any non-existing subdomain (something I also wanted; if that is not desired, I think omitting the listen 80 line should work). And because no SSL certificate details are specified, accessing this over https gives an error about an SSL connection that could not be established, instead of a domain-mismatch error with warnings that something could be wrong. This is basically exactly what I was looking for :D
Now I'm sure there are nicer ways to do this, but it does everything I need.
EDIT: I derped. By putting default_server in the listen directives, that of course breaks everything, because by default SSL would be broken... oops. So I'm afraid I didn't think this through, and this isn't the solution after all :(
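For what it's worth, newer nginx versions have a directive aimed at exactly this problem: since nginx 1.19.4, a catch-all server can abort the TLS handshake for unknown SNI names without needing any certificate at all. A sketch:

```nginx
# Catch-all for names no other server block matches (requires nginx 1.19.4+)
server {
    listen 443 ssl default_server;
    # Abort the handshake instead of serving a mismatched certificate
    ssl_reject_handshake on;
}
```

The browser then reports a failed connection rather than a certificate-name mismatch.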
Been playing with nginx for about an hour, trying to set up mass dynamic virtual hosts.
If you've ever done it in Apache, you know what I mean.
The goal is to have dynamic subdomains for a few people in the office (more than 50).
Perhaps doing this will get you where you want to be:
server {
    root /sites/$http_host;
    server_name $http_host;
    ...
}
I like this as I can literally create sites on the fly, just create new directory named after the domain and point the DNS to the server ip.
You will need some scripting knowledge to put this together. I would use PHP, but if you are good at bash scripting, use that. I would do it like this:
First, create a folder (/usr/local/etc/nginx/domain.com/).
In the main nginx.conf, add the directive: include /usr/local/etc/nginx/domain.com/*.conf;
Every file in this folder should be a different vhost, named subdomain.conf.
You do not need to restart the nginx server for the config to take effect; you only need to reload it: /usr/local/etc/rc.d/nginx reload
OR you can make only one conf file where all the vhosts are set. This is probably better, so that nginx doesn't need to load 50 files, but only one...
IF you have problems with the scripting, then ask a question about that...
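As a rough sketch of the scripting side in bash (the function name, the domain, and the web root layout are made up for illustration; adjust them to the folder structure above):

```shell
# make_vhost NAME CONFDIR -- write a minimal per-subdomain vhost file.
# e.g. make_vhost blog /usr/local/etc/nginx/domain.com
# creates .../blog.conf serving blog.domain.com from /var/www/blog.domain.com
make_vhost() {
    name="$1"
    confdir="$2"
    cat > "$confdir/$name.conf" <<EOF
server {
    listen 80;
    server_name $name.domain.com;
    root /var/www/$name.domain.com;
}
EOF
}

# After generating files, reload (not restart) nginx to pick them up:
# /usr/local/etc/rc.d/nginx reload
```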
Based on user2001260's answer, later edited by partlov, here's my outcome.
Bear in mind this is for a dev server located on a local virtual machine, where the .dev suffix is used at the end of each domain. If you want to remove it or use something else, the \.dev part in the server_name directive can be edited or removed altogether.
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Match any server name with the format [subdomain[.subdomain...]].domain.tld.dev
    server_name ~^(?<subdomain>([\w-]+\.)*)?(?<domain>[\w-]+\.[\w-]+)\.dev$;

    # Map by default to (projects_root_path)/(domain.tld)/www
    set $rootdir "/var/www/$domain/www";

    # Check if a (projects_root_path)/(subdomain.)(domain.tld)/www directory exists;
    # note that $subdomain already includes its trailing dot, and -d (not -f)
    # is the test for a directory
    if (-d "/var/www/$subdomain$domain/www"){
        # in which case, set that directory as the root
        set $rootdir "/var/www/$subdomain$domain/www";
    }

    root $rootdir;
    index index.php index.html index.htm index.nginx-debian.html;

    # Front-controller pattern as recommended by the nginx docs
    location / {
        try_files $uri $uri/ /index.php;
    }

    # Standard php-fpm setup based on the default config below this point
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
The regex in server_name captures the variables subdomain and domain. The subdomain part is optional and can be empty. I have set it up so that by default, if you have a subdomain, say admin.mysite.com, the root is set to the same root as mysite.com. This way, the same front controller (in my case index.php) can route based on the subdomain. But if you want to keep an altogether different application in a subdomain, you can have an admin.mysite.com dir and it will use that directory for calls to admin.mysite.com.
Careful: The use of if is discouraged in the current nginx version, since it adds extra processing overhead for each request, but it should be fine for use in a dev environment, which is what this configuration is good for. In a production environment, I would recommend not using a mass virtual host configuration and configuring each site separately, for more control and better security.
server_name ~^(?<vhost>[^.]*)\.domain\.com$;
set $rootdir "/var/www/whatever/$vhost";
root $rootdir;
As @Samuurai suggested, here is a short version with Angular 5 and nginx build integration:
server {
    server_name ~^(?<branch>.*)\.staging\.yourdomain\.com$;
    access_log /var/log/nginx/branch-access.log;
    error_log /var/log/nginx/branch-error.log;
    index index.html;
    try_files $uri$args $uri$args/ $uri $uri/ /index.html =404;
    root /usr/share/nginx/html/www/theft/$branch/dist;
}
Another alternative is to have includes a few levels deep so that directories can be categorized as you see fit. For example:
include sites-enabled/*.conf;
include sites-enabled/*/*.conf;
include sites-enabled/*/*/*.conf;
include sites-enabled/*/*/*/*.conf;
As long as you are comfortable with scripting, it is not very hard to put together some scripts that will quickly set up vhosts in nginx. This Slicehost article goes through setting up a couple of vhosts in a way that is easily scriptable and keeps the configurations separate. The only downside is having to restart the server, but that's to be expected with config changes.
Update: If you don't want to maintain the config yourself, then your only two (safe) options would be either to find a program that lets your users manage their own chunk of the nginx config (which would let them create all the subdomains they want), or to create such a user-facing management console yourself.
Doing this yourself would not be too hard, especially if you already have the scripts that do the work of setting things up. The web-based interface can call out to those scripts to do the actual work, so all the interface has to deal with is managing who has access to what.