I have a directory public, where I will upload various files. I want those files to be downloadable without any authentication.
However, I also want to be able to edit and add new files using WebDAV, if authentication is provided. Is there any way I can have both behaviors on the same directory (location)?
I tried the following hack:
location /public/ {
    if ($remote_user != "") {
        return 302 e;
    }
    auth_basic off;
    fancyindex off;
    dav_methods off;
}
location /public/e {
}
However, I could not find a way to make /public/e serve the index of /public/ instead.
I also tried the following:
location /public/ {
    if ($remote_user = "") {
        auth_basic off;
        fancyindex off;
        dav_methods off;
    }
}
But Nginx complained that I cannot have those directives in an if-statement.
I also tried:
location /public/ {
    set $authenticated off;
    if ($remote_user != "") {
        set $authenticated on;
    }
    auth_basic $authenticated;
    fancyindex $authenticated;
    dav_methods $authenticated;
}
But Nginx said the variable is invalid; "on" or "off" was expected.
I also thought about making two directories, public and public-webdav, both symlinked to the same directory, and assigning different nginx locations to them, but I was hoping for a cleaner solution.
Thanks!
I solved it using this method:
location @public {
    auth_basic off;
}
location /public/ {
    if ($remote_user = "") {
        error_page 418 = @public;
        return 418;
    }
}
That is, I created a custom handler for error 418 (teapot) and forced that error.
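For completeness, a fuller sketch of how this might look with the WebDAV and auth directives filled in; the realm name, htpasswd path, and DAV method list are my assumptions, not part of the original setup:

location @public {
    # Anonymous, read-only access: no auth, listing only.
    auth_basic off;
    fancyindex on;
}
location /public/ {
    auth_basic "WebDAV";                        # assumed realm
    auth_basic_user_file /etc/nginx/htpasswd;   # assumed path
    dav_methods PUT DELETE MKCOL COPY MOVE;     # assumed method list

    # Requests without credentials are rerouted to the anonymous
    # handler via the forced 418.
    if ($remote_user = "") {
        error_page 418 = @public;
        return 418;
    }
}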
Please bear with me as I don't have much experience configuring Nginx.
I want to create micro-services as subfolders, e.g. www.example.com/users.
USERS is the micro-service, so users will also be the name of the subfolder.
The website can be hosted at /var/www/html/ or /var/www/html/example.com/public_html (Assuming this one for this question)
So inside the main folder, /var/www/html/example.com/public_html there are many folders, each representing a micro-service.
/var/www/html/example.com/public_html/users/
/var/www/html/example.com/public_html/students/
/var/www/html/example.com/public_html/teachers/
/var/www/html/example.com/public_html/parents/
Each micro-service will have a latest folder which holds the code to be executed.
/var/www/html/example.com/public_html/users/latest/
Each latest folder has two folders: app, which holds the SPA front-end code, and api, which holds the API code.
/var/www/html/example.com/public_html/users/latest/app/
/var/www/html/example.com/public_html/users/latest/api/
I would like to add and delete folders as required without making changes to the Nginx config file.
If the folder exists, I would like the relevant front-end code to be displayed; else 404.
I am aware that I can add each folder as an Nginx location, but that is not what I want.
(I DON'T want to do the below:)
location /users {
    alias /var/www/html/example.com/public_html/users/latest/app;
}
location /api/users {
    alias /var/www/html/example.com/public_html/users/latest/api;
}
I am trying to figure out if I can configure NGINX to point to the relevant folder based on the url.
This is what I have so far, but it just shows a 404 for everything I try. I am working towards writing the API in Go, but since I have experience with PHP I am taking that route first before switching.
server {
    listen 80;
    listen [::]:80;
    server_name _;
    root /var/www/html/example.com/public_html;
    index index.html;

    location ^/<service>/<additional-optional-parameters-can-go-here>$ {
        root /var/www/html/example.com/public_html/<service>/latest/app;
        try_files $uri $uri/ =404;
    }

    location ^/api/<service>/<additional-optional-parameters-can-go-here>$ {
        root /var/www/html/example.com/public_html/<service>/latest/api;
        try_files $uri $uri/ =404;

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        }
    }

    location ~ /\.ht {
        deny all;
    }
}
I think you can use regexes in the location and server_name lines.
Perhaps this will help:
server_name ~^(?<subdomain>.+)\.domain\.tld$ ~^(?<subdomain>.+)\.domain2\.tld$ domain.tld domain2.tld;

location ~ ^/sitename/[0-9a-z]+/index\.php$ {
    fastcgi_pass phpcgi;
}

location ~ \.php$ {
    return 404;
}

location ~ ^/a/b/(?<myvar>[a-zA-Z]+) {
    # use variable $myvar here
    if ($myvar = "sth") { ... }
}

# Example using the subdomain var created in the steps above
sub_filter_once off;
sub_filter 'Site Text' '$subdomain';
sub_filter 'Site Text' '$myvar';
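Note that phpcgi in the fastcgi_pass above is not a built-in name; it would be an upstream defined elsewhere in the http block. A hypothetical definition, borrowing the php-fpm socket from the question above:

upstream phpcgi {
    # Assumed FastCGI backend; any socket or host:port would do.
    server unix:/var/run/php/php7.2-fpm.sock;
}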
Personally, I like to create a different conf file for each service and then issue a "service nginx reload". Perhaps you could also automate the conf file generation.
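For example (a minimal sketch; the services directory name and file layout are my own choice, not a convention):

# nginx.conf (inside the server block)
server {
    listen 80;
    root /var/www/html/example.com/public_html;
    # Pull in one generated file per micro-service.
    include /etc/nginx/services/*.conf;
}

# /etc/nginx/services/users.conf (generated for the users service)
location /users/ {
    alias /var/www/html/example.com/public_html/users/latest/app/;
}
location /api/users/ {
    alias /var/www/html/example.com/public_html/users/latest/api/;
}

Adding or removing a service then comes down to dropping a file into (or deleting it from) /etc/nginx/services/ and reloading nginx.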
Just in case anyone else is trying a similar approach:
I went on Freelancer.com and hired someone to help me.
location ~ ^/api/([a-z]+)/.*?$ {
    alias $folder_path/$1/$api_path;
    rewrite ^/api/([a-z]+)/.*?$ /$1/$api_path last;
}
location ~ ^/([a-z]+)/(?:(?!\blatest\b).)*$ {
    alias $folder_path/$1/$app_path;
    rewrite ^/([a-z]+)/(?:(?!\blatest\b).)*$ /$1/$app_path last;
}
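The $folder_path, $api_path, and $app_path variables are not shown in the snippet. A guess at how they would be defined, based on the paths in the question (the set values are assumptions, not part of the delivered config):

server {
    listen 80;
    server_name _;

    # Assumed definitions for the variables used above.
    set $folder_path /var/www/html/example.com/public_html;
    set $api_path latest/api;
    set $app_path latest/app;

    # ... the two regex locations from above go here ...
}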
I have two web applications (node.js express apps), web1 and web2. These web apps expect to be hosted on sites that are typically something like http://www.web1.com and http://www.web2.com. I'd like to host them behind an nginx reverse proxy as https://www.example.com/web1 and https://www.example.com/web2. I do not want to expose the two web apps as two subdomains on example.com.
Here is a snippet of my nginx configuration (without SSL termination details) that I had hoped would accomplish this:
server {
    listen 443;
    server_name .example.com;

    location /web1 {
        proxy_pass http://www.web1.com:80;
    }
    location /web2 {
        proxy_pass http://www.web2.com:80;
    }
}
This works, except for the relative links that the web apps use. So web app web1 will have a relative link like /js/script.js which won't be handled correctly.
What is the best/standard way to accomplish this?
You should be able to do this by checking the $http_referer, something like:
location / {
    if ($http_referer ~ ^http://(www\.)?example\.com/web1) {
        proxy_pass http://www.web1.com:80;
    }
    if ($http_referer ~ ^http://(www\.)?example\.com/web2) {
        proxy_pass http://www.web2.com:80;
    }
}
The browser would set the referer to http://example.com/web1/some/page when it requests /js/script.js, so the apps shouldn't need to change, unless they process or care about the referer internally.
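To illustrate, the follow-up request for the script would carry headers roughly like these (host and paths taken from the question):

GET /js/script.js HTTP/1.1
Host: www.example.com
Referer: https://www.example.com/web1/some/page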
The $http_referer variable does not seem to be easy to find in the nginx docs, but it is mentioned on a few sites:
http://nginx.2469901.n2.nabble.com/HTTP-Referer-Module-td3356604.html
https://stackoverflow.com/a/18917016/1422492
I think something like this:
server {
    listen 443;
    server_name .example.com;

    location /web1 {
        proxy_pass http://www.web1.com:80;
    }
    location /web2 {
        proxy_pass http://www.web2.com:80;
    }
    location / {
        if ($http_referer ~* (/web1)) {
            proxy_pass http://www.web1.com:80;
        }
        if ($http_referer ~* (/web2)) {
            proxy_pass http://www.web2.com:80;
        }
    }
}
How about using a cookie and ngx_http_map_module?
Add add_header Set-Cookie "site=web1;Path=/;Domain=.example.com;"; to location /web1 {...} (and likewise for web2).
Then add a map under the http block:
map $cookie_site $site {
    default http://www.web1.com:80;
    "web2"  http://www.web2.com:80;
}
The default location is then:
location / {
    proxy_pass $site;
}
You could pass the value of the cookie to proxy_pass directly, but using a map is the safer way.
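Putting the pieces together, a combined sketch (the add_header lines follow the description above; note that when proxy_pass uses a variable, nginx may need a resolver directive, or matching upstream blocks, to resolve the host names at run time):

map $cookie_site $site {
    default http://www.web1.com:80;
    "web2"  http://www.web2.com:80;
}

server {
    listen 443;
    server_name .example.com;

    location /web1 {
        add_header Set-Cookie "site=web1;Path=/;Domain=.example.com;";
        proxy_pass http://www.web1.com:80;
    }
    location /web2 {
        add_header Set-Cookie "site=web2;Path=/;Domain=.example.com;";
        proxy_pass http://www.web2.com:80;
    }
    location / {
        # Everything else is routed by the cookie set above.
        proxy_pass $site;
    }
}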
I'm having trouble requiring authorization to view my site using nginx. I wonder if anyone can help.
I created a password using htpasswd. It created a file called htpasswd, which is stored in Conf/, directly next to nginx.conf.
The Conf directory is a sibling of hello_world and child of the scta folder seen below in the listed root path.
After restarting the server, my browser asks me for a password. I enter the information, the dialogue box goes away and the browser simply says not authorized. After that, the browser doesn't even give me an opportunity to re-enter my username and password.
I'm not sure what I'm doing wrong.
UPDATE: I realized I'm actually getting through successfully, because when I type the wrong password I get a 401 error. But when I type the correct password, I move past the 401 and instead get a 403 error.
server {
    listen 28005;
    passenger_enabled on;
    root /home/me/app/scta/hello_world/public;
    server_name localhost;

    location / {
        auth_basic "closed site";
        auth_basic_user_file conf/htpasswd;
    }
}
You have to check whether $remote_user exists or not; if it exists, the request is authenticated. So, assuming the user is "admin", you would do the following check:
server {
    listen 28005;
    passenger_enabled on;
    root /home/me/app/scta/hello_world/public;
    server_name localhost;

    location / {
        auth_basic "closed site";
        auth_basic_user_file conf/htpasswd;

        if ($remote_user ~ ^$) { break; }
        set $ok "no";
        if ($remote_user = 'admin') { set $ok "yes"; }
        if ($ok != "yes") {
            return 403;
        }
    }
}
I'm trying to set up Nginx server as follows:
First, the server should check whether the user provides the client SSL certificate (via ssl_client_certificate).
If the SSL certificate is provided, then give access to the site,
If the SSL certificate is NOT provided, then ask the user to enter a password and log in through auth_basic.
I was able to configure both authentication methods at the same time, but that config is superfluous.
To check whether the user provided an SSL certificate, I tried a config like this:
18: if ($ssl_client_verify != SUCCESS) {
19:     auth_basic "Please login";
20:     auth_basic_user_file .passfile;
21: }
But Nginx returns an error:
"auth_basic" directive is not allowed here in .../ssl.conf:19
How can I set the condition in this case?
You can control the auth_basic configuration from an if clause by setting a variable, like this:
server {
    listen 443;
    auth_basic_user_file .htpasswd;
    ssl_client_certificate ca.cert;
    ssl_verify_client optional;
    ...

    location / {
        ...
        if ($ssl_client_verify = SUCCESS) {
            set $auth_basic off;
        }
        if ($ssl_client_verify != SUCCESS) {
            set $auth_basic Restricted;
        }
        auth_basic $auth_basic;
    }
}
Now, authentication falls back to HTTP Basic if no client certificate has been provided (or if validation failed).
I'm unable to test this currently, but would something like this work?
server {
    listen 80;
    server_name www.example.com example.com;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443;
    ...
    if ($ssl_client_verify != SUCCESS) {
        rewrite ^ http://auth.example.com/ permanent;
    }
    location / {
        ...
    }
}

server {
    listen 80;
    server_name auth.example.com;
    location / {
        auth_basic "Please login";
        auth_basic_user_file .passfile;
    }
}
So basically:
- Accept all initial requests (on port 80, for whatever name you're using) and rewrite to SSL.
- Check whether the client is verified.
- If not, rewrite to an alternate domain that uses basic auth.
Like I said, I can't test it right now, but I'll try to get around to it! Let me know if it helps, I'm interested to see if it works.
You may try using a map.
map $ssl_client_verify $var_auth_basic {
    default "Please login";
    SUCCESS off;
}
server {
    ....
    auth_basic $var_auth_basic;
    auth_basic_user_file .passfile;
That way, the value depends on $ssl_client_verify but is also always defined, and auth_basic and auth_basic_user_file stay directly inside the server { } block.
Nginx provides no way to fall back to basic authentication when client cert fails. As an alternative you can use variables to restrict access:
location / {
    if ($ssl_client_verify = "SUCCESS") {
        set $authorized 1;
    }
    if ($authorized != 1) {
        error_page 401 @basicauth;
        return 401;
    }
}

location @basicauth {
    auth_basic "Please login";
    auth_basic_user_file .passfile;
    set $authorized 1;
    rewrite /(.*) /$1;
}
Keep in mind that "if is evil", and these rules may work incorrectly or interfere with other parts of a larger configuration.
Forget about it, it won't work.
The reason it fails is that if is not part of the general configuration module, as one might believe; if is part of the rewrite module, while auth_basic belongs to another module. You just cannot have dynamic vhosts with basic auth.
On the other hand...
You can have dynamic vhosts with their own error pages. The following example is designed for a custom 404 page, but you can adapt it to your own config.
server {
    listen 80;
    server_name _;
    set $site_root /data/www/$host;

    location / {
        root $site_root;
    }

    error_page 404 =404 /404.html;

    location /404.html {
        root $site_root/error_files;
        internal;
        error_page 404 =404 @fallback_404;
    }

    location @fallback_404 {
        root /var/www/;
        try_files /404.html =404;
        internal;
    }

    error_log /var/log/nginx/error.log info;
    access_log /var/log/nginx/access.log;
}
What happens:
- You are telling Nginx to use /404.html in case of HTTP_NOT_FOUND.
- The location root is changed to match the web site's error_pages directory.
- The redirection is internal.
- A 404 HTTP code is returned.
- The fallback 404 page is configured in location @fallback_404: in this location, the root is changed to /var/www/, so files are read from that path instead of $site_root.
- At the last stage, the code returns /var/www/404.html, if it exists, with a 404 HTTP code.
NOTE: According to the Nginx documentation:
Specifies that a given location can only be used for internal requests. For external requests, the client error 404 (Not Found) is returned. Internal requests are the following:
- requests redirected by the error_page, index, random_index, and try_files directives;
- requests redirected by the “X-Accel-Redirect” response header field from an upstream server;
- subrequests formed by the “include virtual” command of the ngx_http_ssi_module module and by the ngx_http_addition_module module directives;
- requests changed by the rewrite directive.
Also:
There is a limit of 10 internal redirects per request to prevent
request processing cycles that can occur in incorrect configurations.
If this limit is reached, the error 500 (Internal Server Error) is
returned. In such cases, the “rewrite or internal redirection cycle”
message can be seen in the error log.
Check this link for more, hope that helps.
I know these two questions (password-protected dirs and autoindex) have been answered, but not together.
I can do both at the same time, but I have a problem with it. Take a look: this is my nginx conf file.
location ~ /(archives|fallen) {
    autoindex on;
    auth_basic "Restricted Area for Private Use Only";
    auth_basic_user_file passwords;
}

location / {
    root /www/mirror;
    index index.html index.htm index.php;
    autoindex on;
    autoindex_exact_size off;
}
As you can see, the archives and fallen dirs are password protected, and autoindex is on for both. But while normal unprotected dirs are autoindexed, the password-protected dirs are not.
If I enter a password-protected dir, it shows me a 404 error, because there is no index.html and the autoindex feature simply does not kick in. On the other hand, as I said before, unprotected dirs are autoindexed as usual.
Does anyone have a solution for this? Please let me know.
Found the solution. We have to put the root and autoindex directives outside of the locations; generally speaking, we have to set them globally in the server { } block.
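A sketch of the fixed config, assuming the same paths and password file as above (the surrounding server block is implied by the original post):

server {
    # Declared once at server level so every location inherits them.
    root /www/mirror;
    autoindex on;
    autoindex_exact_size off;

    location ~ /(archives|fallen) {
        auth_basic "Restricted Area for Private Use Only";
        auth_basic_user_file passwords;
    }

    location / {
        index index.html index.htm index.php;
    }
}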