Apache + NginX reverse proxy: serving static and proxy files within nested URLs

I have Apache running on my server on port 8080, with NginX running as a reverse proxy on port 80. I am attempting to get NginX to serve static HTML files for specific URLs. I am struggling to write the NginX configuration that does this: I have conflicting directives because my URLs are nested within each other.
Here's what I want to have:
One URL is at /example/ and I want NginX to serve a static HTML file located on my server at /path/to/www/example-content.html instead of letting Apache serve the page to NginX.
Another URL is at /example/images/ and I want Apache to serve that page to NginX, just as it does for the rest of the site.
I have set up my nginx.conf file like this:
server {
    listen 80;
    server_name localhost;
    # etc...
    location / {
        proxy_pass http://127.0.0.1:8080/;
    }
    # etc...
My attempt to serve the static file at /example/ from NginX went like this:
location /example/ {
    alias /path/to/www/;
    index example-content.html;
}
This works, but it means everything after the /example/ URL (such as /example/images/) is aliased to that local path also.
I would like to use regex instead but I've not found a way to serve up the file specifically, only the folder. What I want to be able to say is something like this:
location ~ ^/example/$ {
    alias /path/to/www/example-content.html;
}
This specifically matches the /example/ folder, but using a filename like that is invalid syntax. Using the explicit location = /example/ in any case doesn't work either.
If I write this:
location ~ ^/example/$ {
    alias /path/to/www/;
    index example-content.html;
}

location ~ /example/(.+) {
    alias /path/to/www/$1;
}
The second directive attempts to undo the damage of the first directive, but it ends up overriding the first directive and it fails to serve up the static file.
So I'm at a bit of a loss. Any advice at all extremely welcome! Many thanks.

Since you say you have Apache running, I assume it is there to run dynamic content such as PHP (otherwise it is not needed). In that case, the example config below will serve all static content with Nginx and pass everything else to Apache.
server {
    # default index should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    index index.html index.php;

    # default root should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    root /path/to/www/;

    location / {
        try_files $uri $uri/ @proxy;
    }

    location @proxy {
        proxy_pass http://127.0.0.1:8080;
        # Other proxy params
    }

    location ~ \.php$ {
        error_page 418 = @proxy;
        location ~ \..*/.*\.php$ { return 400; }
        return 418;
    }
}
What this assumes is that you are following a structured setup where the default file in every folder is either called "index.html" or "index.php", such as "/example/index.html", "/some-folder/index.html" and "/some-other-folder/index.html".
With this, navigating to "/example/", "/some-folder/" or "/some-other-folder/" will just work with no further action.
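To make this concrete, here is roughly how a few requests would be resolved under the config above (a sketch of the lookup order, not actual nginx output; the paths assume the root and index settings shown):
# GET /example/logo.png
#   "location /" -> try_files finds /path/to/www/example/logo.png
#   -> served directly by nginx
#
# GET /example/
#   "location /" -> try_files finds the /path/to/www/example/ directory
#   -> the index directive serves /path/to/www/example/index.html
#
# GET /some-folder/page.php
#   "location ~ \.php$" -> return 418 -> error_page 418 = @proxy
#   -> proxied to Apache on 127.0.0.1:8080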
If each folder has a default file with a random, different name, such as "/example/example-content.html", "/some-folder/some-folder.html" and "/some-other-folder/yet-another-different-default.html", then it becomes a bit more difficult, as you then need to do something like this:
server {
    # default index should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    index index.html index.php;

    # default root should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    root /path/to/www/;

    location / {
        try_files $uri $uri/ @proxy;
    }

    location @proxy {
        # Proxy params
        proxy_pass http://127.0.0.1:8080;
    }

    location ~ .+\.php$ {
        error_page 418 = @proxy;
        location ~ \..*/.*\.php$ { return 400; }
        return 418;
    }

    location /example/ {
        # Need to keep defining a new index due to lack of structure
        # No need for alias or new root
        index example-content.html;
    }

    location /some-folder/ {
        # Need to keep defining a new index due to lack of structure
        # No need for alias or new root
        index some-folder.html;
    }

    location /some-other-folder/ {
        # Need to keep defining a new index due to lack of structure
        # No need for alias or new root
        index yet-another-different-default.html;
    }

    # Keep adding new location blocks for each folder
    # Obviously not the most efficient arrangement
}
The better option is to have a structured and logical layout of files on the site instead of multiple differing locations.
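That said, if only the /example/ URL itself needs the special file and everything else under it (including /example/images/) should keep falling through to Apache, an exact-match location combined with try_files is a minimal alternative. This is only a sketch using the paths from the question, not something tested against the setup above:
location = /example/ {
    # Exact match: /example/images/ and anything deeper never reaches this
    # block, so those requests still go through the proxy locations.
    root /path/to/www;
    try_files /example-content.html =404;
}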

Related

Serve multiple local dev servers (vue or react) with nginx

I have a folder called my-repo which contains two apps:
my-repo/packages/foo
my-repo/packages/bar
Let's say I run them locally (vue dev server): localhost:3000 → foo and localhost:4000 → bar
I want to serve them on localhost:5000, such that
localhost:5000/foo → localhost:3000 (my-repo/packages/foo)
localhost:5000/bar → localhost:4000 (my-repo/packages/bar)
So I have the following config in my nginx.conf file:
server {
    listen 5000;
    server_name localhost;

    # this works:
    location / {
        alias /Users/my-user/my-repo/packages/foo; # direct path
        proxy_pass http://127.0.0.1:3000/;
    }

    # this does not work:
    location /foo {
        alias /Users/my-user/my-repo/packages/foo; # absolute path
        proxy_pass http://127.0.0.1:3000/;
    }

    # as expected, this does not work:
    location /bar {
        alias /Users/my-user/my-repo/packages/bar; # absolute path
        proxy_pass http://127.0.0.1:4000/;
    }
}
I tried all sorts of regex based try_files and maps and I could not get it to work. I tried building the projects and serving the /dist, I could not get that to work either.
I know there are some options like baseURL, or a homepage setting in package.json, but I could not set these up either.
Why does the default location / work, but not the nested ones?
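For reference, a pattern that is often used for this kind of prefix-to-port mapping is to let proxy_pass strip the prefix, as sketched below. This is only a sketch: the alias directives should not be needed once the requests are proxied, and the dev servers themselves usually also have to be told that they are served under /foo and /bar (for example via the build tool's public/base path setting), otherwise the asset URLs they emit will still point at /.
location /foo/ {
    # The URI part on proxy_pass (the trailing slash) replaces the matched
    # /foo/ prefix, so /foo/app.js reaches the dev server as /app.js
    proxy_pass http://127.0.0.1:3000/;
}

location /bar/ {
    proxy_pass http://127.0.0.1:4000/;
}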

Nuxt.js app behind nginx reverse-proxy loading multiple pages at once

I have a Nuxt.js app behind an nginx reverse proxy. The nginx conf file looks like this:
server {
    listen 80;

    # Match *.lvh.me
    server_name ~^(?<user>.+)\.lvh\.me$;

    location / {
        proxy_pass http://localhost:8080/sites/$user$uri$is_args$args;
    }

    location ~* \.(?:js|css|jpg|jpeg|gif|png|ico|cur|svg)$ {
        proxy_pass http://localhost:8080;
    }
}
As you can see I'm mapping all my site subdomains to a specific path on my site and it is working fine. I'm also mapping all assets to be loaded from the root (because otherwise it throws a 404 error).
The only issue I'm facing is that whenever I visit a subdomain, e.g. subdomain.lvh.me, it loads two pages on top of each other: the original page from the subdomain root (which is expected) but also the page from the main domain root, i.e. lvh.me (which is not expected).
Can you please checkout my conf file to see if I'm doing anything wrong here?
So I've encountered this issue and what I did to fix it was to not rely on Nginx's root nor proxy_pass. Instead, I used a location block with an alias and a try_files inside like so:
location ^~ / {
    alias /path/to/dist;
    try_files $uri $uri/ /index.html =404;
}

Nginx basic HTTP authentication on a subfolder

I need help applying basic HTTP authentication to password protect a subfolder on my site, hosted on DigitalOcean and served using Nginx. I followed this tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-http-authentication-with-nginx-on-ubuntu-12-10.
But my results are:
1. The entire site prompts for credentials rather than the specified subfolder, and
2. All the pages on the site can no longer find css and js files.
Here's what I tried:
1) Generated a .htpasswd file in the subfolder,
2) Added a location block in the nginx.conf file (see below), and
3) Reloaded nginx.
location / {
    auth_basic "Restricted";
    auth_basic_user_file /var/www/pepperslice/current/public/ps/jeffaltman/.htpasswd;
}
The complete nginx.config file is as follows:
upstream unicorn {
    server unix:/tmp/unicorn.pepperslice.sock fail_timeout=0;
}

server {
    listen 80 default;
    root /var/www/pepperslice/current/public;
    try_files $uri/index.html $uri @unicorn;

    location = /images {
        root /var/www/pepperslice/current/public/images;
    }

    location @unicorn {
        proxy_pass http://unicorn;
    }

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /var/www/pepperslice/current/public/ps/jeffaltman/.htpasswd;
    }

    error_page 500 502 503 504 /500.html;
}
At the moment, the location block I added is commented out.
I am seeking help on how I can:
1. password protect the subfolder pepperslice.com/ps/jeffaltman, and
2. password protect other subfolders using different username and password combinations.
Also, any ideas why the css and js paths failed? I am guessing once the authentication problem is fixed, the css/js path problem will go away.
Thanks.
After some more research I found a solution and solved the issue. This guide is where I found it - see section 3 - https://www.howtoforge.com/basic-http-authentication-with-nginx
Instead of the location block starting with "location / {...}" (root for site), for a subfolder it should start with the subfolder path. For example:
location /ps/jeffaltman {
    auth_basic "Restricted";
    auth_basic_user_file /var/www/pepperslice/current/public/ps/jeffaltman/.htpasswd;
}
Also the css and js path problem disappeared.
Cheers.
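As for the second part of the question (different username and password combinations for other subfolders), a separate location block per subfolder, each pointing at its own .htpasswd file, should do it. A sketch with a hypothetical /ps/otheruser path:
location /ps/otheruser {
    auth_basic "Restricted";
    # hypothetical second credentials file, generated separately
    auth_basic_user_file /var/www/pepperslice/current/public/ps/otheruser/.htpasswd;
}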

How do I interpret URLs without extension as files rather than missing directories in nginx?

So I'm trying to install ZoneMinder and have it run under Nginx, but it has a few compiled files that need to be run using Fast-CGI. These files lack an extension.
If the file has an extension, then there is no issue and Nginx interprets it as a file and will just return a 404 if it can't find it. If it has no extension, it will assume it's a directory and then eventually return a 404 when it can't find any sort of index page.
Here is what I have now:
# Security camera domain.
server {
    listen 888;
    server_name mydomain.com;
    root /srv/http/zm;

    # Enable PHP support.
    include php.conf;

    location / {
        index index.html index.htm index.php;
    }

    # Enable CGI support.
    location ~ ^/cgi-bin/(.+)$ {
        alias /srv/cgi-bin/zm/;
        fastcgi_pass unix:/run/fcgiwrap.sock;
        fastcgi_param SCRIPT_FILENAME $document_root/$1;
        include fastcgi.conf;
    }
}
The idea is, anything under the cgi-bin directory goes through the fastcgi pipe.
Any idea on how I can fix this?
Upon closer inspection, I realized ZoneMinder only has two cgi scripts (nph-zms and zms), so it was easier just to state them explicitly in nginx, like so:
# Enable CGI support.
location ~ ^/cgi-bin/(nph-zms|zms)$ {
    alias /srv/cgi-bin/zm/;
    fastcgi_pass unix:/run/fcgiwrap.sock;
    fastcgi_param SCRIPT_FILENAME $document_root/$1;
    include fastcgi.conf;
}
Seems to be the best way to go :)

How to setup mass dynamic virtual hosts in nginx?

I've been playing with nginx for about an hour trying to set up mass dynamic virtual hosts.
If you have ever done it in Apache, you know what I mean.
The goal is to have dynamic subdomains for a few people in the office (more than 50).
Perhaps doing this will get you where you want to be:
server {
    root /sites/$http_host;
    server_name $http_host;
    ...
}
I like this as I can literally create sites on the fly: just create a new directory named after the domain and point the DNS to the server IP.
You will need some scripting knowledge to put this together. I would use PHP, but if you are good at bash scripting, use that. I would do it like this:
First create a folder (/usr/local/etc/nginx/domain.com/).
In the main nginx.conf add the directive: include /usr/local/etc/nginx/domain.com/*.conf;
Every file in this folder should be a separate vhost, named subdomain.conf (a minimal example is sketched after these steps).
You do not need to restart the nginx server for the config to take effect, you only need to reload it: /usr/local/etc/rc.d/nginx reload
OR you can make just one conf file where all the vhosts are set. This is probably better, so that nginx doesn't need to load up 50 files, but only one...
IF you have problems with the scripting, then ask a question about that...
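A minimal per-vhost file along the lines described above might look like this (hypothetical names; the script would drop one such file per subdomain into the included folder):
# /usr/local/etc/nginx/domain.com/subdomain.conf (hypothetical example)
server {
    listen 80;
    server_name subdomain.domain.com;
    root /sites/subdomain.domain.com;
    index index.html index.php;
}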
Based on user2001260's answer, later edited by partlov, here's my outcome.
Bear in mind this is for a dev server located on a local virtual machine, where the .dev suffix is appended to each domain. If you want to remove it, or use something else, the \.dev part of the server_name directive can be edited or removed altogether.
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Match any server name with the format [subdomain[.subdomain...]].domain.tld.dev
    server_name ~^(?<subdomain>([\w-]+\.)*)?(?<domain>[\w-]+\.[\w-]+)\.dev$;

    # Map by default to (projects_root_path)/(domain.tld)/www
    set $rootdir "/var/www/$domain/www";

    # Check if a (projects_root_path)/(subdomain.)(domain.tld)/www directory exists
    if (-d "/var/www/$subdomain.$domain/www") {
        # in which case, set that directory as the root
        set $rootdir "/var/www/$subdomain.$domain/www";
    }

    root $rootdir;
    index index.php index.html index.htm index.nginx-debian.html;

    # Front-controller pattern as recommended by the nginx docs
    location / {
        try_files $uri $uri/ /index.php;
    }

    # Standard php-fpm based on the default config below this point
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
The regex in server_name captures the variables subdomain and domain. The subdomain part is optional and can be empty. I have set it up so that by default, if you have a subdomain, say admin.mysite.com, the root is set to the same root as mysite.com. This way, the same front-controller (in my case index.php) can route based on the subdomain. But if you want to keep an altogether different application in a subdomain, you can have an admin.mysite.com dir and it will use that directory for calls to admin.mysite.com.
Careful: The use of if is discouraged in the current nginx version, since it adds extra processing overhead for each request, but it should be fine for use in a dev environment, which is what this configuration is good for. In a production environment, I would recommend not using a mass virtual host configuration and configuring each site separately, for more control and better security.
server_name ~^(?<vhost>[^.]*)\.domain\.com$;
set $rootdir "/var/www/whatever/$vhost";
root $rootdir;
As @Samuurai suggested, here is a short version with Angular 5 and nginx build integration:
server {
    server_name ~^(?<branch>.*)\.staging\.yourdomain\.com$;
    access_log /var/log/nginx/branch-access.log;
    error_log /var/log/nginx/branch-error.log;
    index index.html;
    try_files $uri$args $uri$args/ $uri $uri/ /index.html =404;
    root /usr/share/nginx/html/www/theft/$branch/dist;
}
Another alternative is to have includes a few levels deep so that directories can be categorized as you see fit. For example:
include sites-enabled/*.conf;
include sites-enabled/*/*.conf;
include sites-enabled/*/*/*.conf;
include sites-enabled/*/*/*/*.conf;
As long as you are comfortable with scripting, it is not very hard to put together some scripts that will quickly set up vhosts in nginx. This slicehost article goes through setting up a couple of vhosts and does it in a way that is easily scriptable and keeps the configurations separate. The only downside is having to restart the server, but that's to be expected with config changes.
Update: If you don't want to do any of the config maintaining yourself, then your only 2 options (the safe ones anyways) would be to either find a program that will let your users manage their own chunk of their nginx config (which will let them create all the subdomains they want), or to create such a user-facing management console yourself.
Doing this yourself would not be too hard, especially if you already have the scripts to do the work of setting things up. The web-based interface can call out to the scripts to do the actual work so that all the web interface has to deal with is managing who has access to what things.