Nginx try_files points to wrong folder - vue.js

I'm hosting a Vue history-mode site using the nginx Docker image, and my nginx config is the following:
server {
    listen 80;

    location / {
        root /var/www/dist/;
        index index.html index.htm;
        try_files $uri $uri/ index.html;
    }
}
I can access "http://the.ip.of.server". However, when I try to visit "http://the.ip.of.server/search", I get the following error: nginx tries to open "/etc/nginx/htmlindex.html" instead of "/var/www/dist/index.html", which makes no sense to me.
2020/05/10 04:24:01 [error] 7#7: *2 open() "/etc/nginx/htmlindex.html" failed (2: No such file or directory),
client: xx.xx.xx.xx, server: , request: "GET /search HTTP/1.1", host: "the.ip.of.server"
I have no idea where "/etc/nginx/htmlindex.html" comes from. The configuration should be right, since I can access /.

The last parameter of try_files is treated as a URI for an internal redirect, so it must begin with a slash. Without one, nginx resolves index.html against its compiled-in default prefix, which is exactly where "/etc/nginx/htmlindex.html" comes from. You should also put the root directive outside the location block:
server {
    listen 80;
    root /var/www/dist;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
After changing the configuration, remember to reload your nginx instance (if using Docker, rebuild the image).

Related

Nginx - how to block endpoint by redirect to vue 403 page

I have a problem with the Vue/Quasar framework with nginx as the host.
My standard endpoint looks like: example.com/#/
I want to block example.com/#/test by rerouting it to a 403. So in my nginx.conf I added the following:
server {
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html$is_args$args;
    }

    location ^~ /test {
        deny all;
    }
}
When I go to example.com/#/test, it simply lets me through. But when I go to example.com/test, I get the 403. So it seems the issue is related to the Vue.js setup?
How can I resolve this?
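One point worth noting (an observation, not part of the original thread): the browser never sends the URL fragment, so for a request to /#/test the server only receives GET /, and no nginx location can ever match /test. A server-side deny can therefore only take effect with history-mode routing, where /test is a real request path:

    # Effective only with history-mode routing, where the browser
    # actually requests /test; with hash mode (#/test) nginx never
    # sees the path and this block is never matched.
    location ^~ /test {
        return 403;
    }

With hash mode, the restriction has to be enforced client-side (e.g. a Vue Router navigation guard), since the fragment never leaves the browser.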

How to enable linked files in nginx proxy_pass sites?

I have a server and two Express-based projects running on ports 3000 and 4000. The landing page has the default nginx HTML template, with the Botkit iframe embed code. The Express server on port 3000 is the Botkit Starter Guide project, running with no modifications. The Express server on port 4000 is just a Hello World project. Both servers are run using pm2.
Below is my /etc/nginx/sites-enabled/default config:
server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    return 301 http://hwsrv-492795.hostwindsdns.com;

    location / {
        try_files $uri $uri/ =404;
    }
}

server {
    listen 80;
    listen [::]:80;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name hwsrv-492795.hostwindsdns.com;

    location / {
        try_files $uri $uri/ =404;
    }

    location /test01/ {
        proxy_pass http://142.11.241.150:3000/;
    }

    location /test02/ {
        proxy_pass http://142.11.241.150:4000/;
    }
}
So here is what I can't solve:
Botkit Chatbot is successfully loaded on http://hwsrv-492795.hostwindsdns.com.
While Botkit Chatbot behaves normally on http://142.11.241.150:3000/, it doesn't do so on http://hwsrv-492795.hostwindsdns.com/test01/. It just loads /index.html, but fails (404) to load /css/styles.css, /embed.js, and /chat.html.
Hello world behaves just fine on both http://142.11.241.150:4000/ and http://hwsrv-492795.hostwindsdns.com/test02/.
I can curl those files from the server terminal, meaning there is no problem accessing them. The question is, how do I make the linked files and folders of sites served by an Express server readable in the browser?
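A likely cause (an inference from the failing requests, not confirmed in the thread): the Botkit page references its assets with absolute paths such as /css/styles.css and /embed.js, so the browser requests them without the /test01/ prefix and they fall through to the static root instead of the proxy. One workaround is to forward those specific paths to the same upstream (the paths below are taken from the 404s above):

    location /test01/ {
        proxy_pass http://142.11.241.150:3000/;
    }

    # Assets referenced by absolute path bypass the /test01/ prefix,
    # so route the ones that 404 to the same upstream directly. Note
    # that proxy_pass inside a regex location must not carry a URI part.
    location ~ ^/(css/|embed\.js$|chat\.html$) {
        proxy_pass http://142.11.241.150:3000;
    }

This breaks down if the app on port 4000 also serves paths under /css/; a cleaner long-term fix is to give each app its own hostname, or to configure the app to emit prefix-relative asset URLs.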

How to set up one common nginx server block (virtual host) for all projects in CentOS

I have a CentOS server with nginx, and multiple site folders exist in wamp.
But for every project I need to write a separate nginx server block in the /etc/nginx/conf.d/websites.conf file. So whenever I create a new project, I have to add the lines below to nginx's websites.conf file:
location /project-folder {
    root path;
    index index.php index.html index.htm;
    rewrite ^/project-folder/(.*)$ /project-folder/app/webroot/$1 break;
    try_files $uri $uri/ /project-folder/app/webroot/index.php?q=$uri&$args;

    location ~ .*\.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:xxxx;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location ~* /project-folder/(.*)\.(css|js|ico|gif|png|jpg|jpeg)$ {
        root path/project-folder/app/webroot/;
        try_files /$1.$2 =404;
    }
}
So is there any other way to make one common block for all site folders, so that I don't need to add a new server block for every new site?
Thanks in advance.
There are multiple ways to implement this. If you are using multiple domain names, you can use a regular expression in the server_name to create named captures (see this document for more). You can use a regular expression in the location directive to capture the value of project-folder (see this document for more).
The main function of this configuration is to insert the text /app/webroot between the project name and the remainder of the URI. The challenge is to do it without creating a redirection loop.
I have tested the following example, which works by placing a generalised version of your rewrite statement in the server block and capturing the project name for use later in one of the try_files statements:
server {
    ...
    root /path;
    index index.php index.html index.htm;

    rewrite ^(?<project>/[^/]+)(/.*)$ $1/app/webroot$2;

    location / {
        try_files $uri $uri/ $project/index.php?q=$uri&$args;
    }

    location ~ .*\.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:xxxx;
        fastcgi_param SCRIPT_FILENAME $request_filename;
    }

    location ~* \.(css|js|ico|gif|png|jpg|jpeg)$ {
        try_files $uri =404;
    }
}
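To make the rewrite concrete, here is how a request is transformed under that configuration (the project name myproject is hypothetical):

    # Incoming request:  GET /myproject/some/page
    # rewrite ^(?<project>/[^/]+)(/.*)$ $1/app/webroot$2;
    #   $project = $1 = /myproject
    #   $2       = /some/page
    # Rewritten URI:  /myproject/app/webroot/some/page
    #
    # If that path is not an existing file or directory, try_files
    # falls back to $project/index.php, i.e. /myproject/index.php;
    # the internal redirect passes through the server-level rewrite
    # again, yielding /myproject/app/webroot/index.php, which lands
    # in the PHP location without looping further.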

nginx configuration for basic auth that allows Let's Encrypt to auto-renew

Currently, I can successfully install a Let's Encrypt SSL certificate by leaving the block below in my nginx configuration:
location / {
    try_files $uri $uri/ /index.php?$query_string;
}
I would like to have basic auth enabled on my website. The below works fine to do so:
location / {
    try_files $uri $uri/ /index.php?$query_string;
    auth_basic "Restricted Content";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
However, the Let's Encrypt SSL certificate must be renewed regularly (the certificates are valid for 90 days). I can automate this renewal using curl. However, Let's Encrypt performs a validation callback to a route with the prefix:
/.well-known/acme-challenge/
but now, I need to add an exception for the above route prefix:
location ^~ /.well-known/acme-challenge/ {
    auth_basic off;
}
I have tried many variations of the above, but none seem to work. I cannot get the Let's Encrypt SSL certificate to install with the above configuration. I even determined that if I remove basic auth entirely (by deleting the two lines that begin with auth_basic), the Let's Encrypt SSL certificate still won't install if all I have is the below:
location ^~ /.well-known/acme-challenge/ {
    try_files $uri $uri/ /index.php?$query_string;
}

location / {
    try_files $uri $uri/ /index.php?$query_string;
}
I would appreciate any suggestions so the callback route will run while basic auth is enabled. I'd also be interested in learning why I can't install Let's Encrypt with the config as shown above where basic auth isn't even enabled. Thanks in advance.
Additional error logs:
/var/log/nginx# cat cert.dev.farm-error.log.1
2016/03/18 13:12:26 [error] 12336#12336: *1 open() "/home/forge/cert.dev.farm/public/.well-known/acme-challenge/jGhldzH8cV3d666a44nRy-Gzf98m1u2qUbkWnNv0aMI" failed (2: No such file or directory), client: 66.133.109.36, server: cert.dev.farm, request: "GET /.well-known/acme-challenge/jGhldzH8cV3d666a44nRy-Gzf98m1u2qUbkWnNv0aMI HTTP/1.1", host: "cert.dev.farm"
2016/03/18 13:17:09 [error] 13202#13202: *1 open() "/home/forge/cert.dev.farm/.well-known/acme-challenge/k9jsDB_mkvU5UzvL-B7hd3iA90ZTq61OaDNixoeRQuQ" failed (2: No such file or directory), client: 66.133.109.36, server: cert.dev.farm, request: "GET /.well-known/acme-challenge/k9jsDB_mkvU5UzvL-B7hd3iA90ZTq61OaDNixoeRQuQ HTTP/1.1", host: "cert.dev.farm"
2016/03/18 13:17:09 [error] 13202#13202: *1 FastCGI sent in stderr: "Unable to open primary script: /home/forge/cert.dev.farm/index.php (No such file or directory)" while reading response header from upstream, client: 66.133.109.36, server: cert.dev.farm, request: "GET /.well-known/acme-challenge/k9jsDB_mkvU5UzvL-B7hd3iA90ZTq61OaDNixoeRQuQ HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "cert.dev.farm"
2016/03/18 13:23:48 [error] 14522#14522: *1 open() "/home/forge/cert.dev.farm/public/.well-known/acme-challenge/5ZBXFn23tOUPcQFNrI3DqUE-l9x3rmdiOkYwabDl-jk" failed (2: No such file or directory), client: 66.133.109.36, server: cert.dev.farm, request: "GET /.well-known/acme-challenge/5ZBXFn23tOUPcQFNrI3DqUE-l9x3rmdiOkYwabDl-jk HTTP/1.1", host: "cert.dev.farm"
You are probably missing a document root for the /.well-known/acme-challenge location block. Note also that try_files needs a fallback as its last argument, so give it an explicit =404:
location ^~ /.well-known/acme-challenge/ {
    root /path/to/your/public/directory;
    try_files $uri =404;
    auth_basic off;
}

location / {
    ...
}
You said you can automate it using curl; did you have a look at the list of available clients?
You can just use the renew command of the CLI client:
➜ sudo letsencrypt renew
-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/domain.com.conf
-------------------------------------------------------------------------------
The following certs are not due for renewal yet:
/etc/letsencrypt/live/domain.com/fullchain.pem (skipped)
No renewals were attempted.
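Putting the pieces together, a full server block for this setup might look like the following (the root path is taken from the error logs above; adjust it to your layout):

    server {
        server_name cert.dev.farm;
        root /home/forge/cert.dev.farm/public;

        # Serve ACME challenges straight from disk, with basic auth
        # switched off; ^~ takes precedence over the plain prefix
        # match of location /.
        location ^~ /.well-known/acme-challenge/ {
            try_files $uri =404;
            auth_basic off;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
            auth_basic "Restricted Content";
            auth_basic_user_file /etc/nginx/.htpasswd;
        }
    }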

nginx - How do I make server-wide configuration for every hosted website

For each website I host with nginx, I have got a different file which holds the server {} block in /etc/nginx/conf.d/
For example...
/etc/nginx/conf.d/website1.co.uk
/etc/nginx/conf.d/website2.org
/etc/nginx/conf.d/website3.com
I find myself repeating the same code in every server {} block and was wondering if it is possible to make a "catch all" server {} block to house the reusable code.
This new "catch all" file would include things such as...
# Redirect all www. attempts to non-www.
server {
    server_name www.$anything;  # hmm?
    return 301 $scheme://$hostname$request_uri;
}

server {
    server_name _;  # hmm?

    # Add expires to static files
    location ~* \.(?:ico|css|js|gif|jpe?g|png|bmp)$ {
        expires max;
        access_log off;
    }

    # Pass PHP files to PHP
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Has anyone done this before?
Do we have to repeat this kind of generic code for each website we host?
All you need is include. Put all the boilerplate in separate files, without server blocks. The include fastcgi_params line in the sample config is a prime example.
Without a leading slash, nginx will look in the directory where the main configuration file is. So:
include fastcgi_params;
include /etc/nginx/fastcgi_params;
are equivalent, if nginx.conf is in /etc/nginx.
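As a minimal sketch of the pattern (the file names are hypothetical): keep the shared location blocks in a snippet file with no server {} wrapper, include it from every per-site file, and use a regex server_name with a named capture for the www redirect.

    # /etc/nginx/snippets/common.conf -- shared boilerplate, no server {} here
    location ~* \.(?:ico|css|js|gif|jpe?g|png|bmp)$ {
        expires max;
        access_log off;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    # /etc/nginx/conf.d/website1.co.uk -- each per-site file stays tiny
    server {
        server_name website1.co.uk;
        root /var/www/website1.co.uk;
        include snippets/common.conf;
    }

    # Catch-all www -> non-www redirect using a named capture
    server {
        server_name ~^www\.(?<domain>.+)$;
        return 301 $scheme://$domain$request_uri;
    }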