Nginx map client certificate to REMOTE_USER for uWSGI with fallback to basic auth?

I'm using Nginx with uWSGI to serve Mercurial. It does basic auth over SSL (Nginx is the SSL terminator; SSL doesn't get passed on to Hg), but given the limited security of basic auth even over SSL, as discussed in various places including this site, I also want to allow users to connect with client certificates, something that TortoiseHg, for example, supports.
ssl_verify_client optional;
...
map $ssl_client_s_dn $ssl_client_s_dn_cn
{
default "";
~/CN=(?<CN>[^/]+) $CN;
};
...
location /
{
uwsgi_pass unix:/run/uwsgi/app/hgweb/socket;
include uwsgi_params;
uwsgi_param SERVER_ADDR $server_addr;
uwsgi_param REMOTE_USER $ssl_client_s_dn_cn;
#uwsgi_param REMOTE_USER $remote_user;
#auth_basic "Mercurial repositories";
#auth_basic_user_file /srv/hg/.htpasswd;
}
So I treat the CN as a username.
But how do I make it fall back to basic auth when there's no client certificate (and preferably not when there is a certificate but its verification fails -- just error in that case)? An example I found does it by having a separate server block listening on another port, which I want to avoid: https://github.com/winne27/nginx-cert-and-basic-auth/blob/master/nginx-example.conf
Additionally, in some examples I've seen the following checks in location; are they necessary?
if ($ssl_client_verify != SUCCESS) { return 496; }
if ($ssl_client_s_dn_cn !~ "^[a-z0-9]{1,10}$") { return 496; }
Given http://wiki.nginx.org/IfIsEvil I thought it best to avoid using if.

Nginx 1.11.6 changed the format and quoting of $ssl_client_s_dn (the old format is still available as $ssl_client_s_dn_legacy), so 1.11/1.12 behave differently from older versions.
If you land here with a headache, try this regexp instead:
map $ssl_client_s_dn $ssl_client_s_dn_cn {
default "should_not_happen";
~CN=(?<CN>[^/,\"]+) $CN;
}
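For comparison, the two shapes look roughly like this (the values are made up; only the structure matters):
# before 1.11.6:  /C=US/ST=CA/O=Example/CN=alice
# since 1.11.6:   CN=alice,O=Example,ST=CA,C=US   (RFC 2253: comma-separated, most specific component first)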

I see two possible solutions: you can either overwrite the uwsgi_param, or use $remote_user as the default value for the $ssl_client_s_dn_cn variable.
To overwrite the uwsgi_param (this should also work with fastcgi_param), use the map directive as you suggested (just remove the ";" after the closing "}" of your map) and add the if_not_empty parameter to the second directive:
uwsgi_param REMOTE_USER $remote_user;
uwsgi_param REMOTE_USER $ssl_client_s_dn_cn if_not_empty;
If present, $ssl_client_s_dn_cn should override $remote_user. This approach has the advantage that the two variables remain usable separately elsewhere (for example, in the log format).
See:
http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html#uwsgi_param
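To make that concrete, here is a minimal sketch of the first approach dropped into the location block from the question (the paths are the question's own; the auth_basic lines are simply uncommented so that basic auth supplies $remote_user):
location /
{
    auth_basic "Mercurial repositories";
    auth_basic_user_file /srv/hg/.htpasswd;

    include uwsgi_params;
    uwsgi_pass unix:/run/uwsgi/app/hgweb/socket;
    uwsgi_param SERVER_ADDR $server_addr;

    # basic-auth user by default; a client-certificate CN wins when present
    uwsgi_param REMOTE_USER $remote_user;
    uwsgi_param REMOTE_USER $ssl_client_s_dn_cn if_not_empty;
}
Note that with auth_basic enabled unconditionally, certificate holders are still prompted for credentials; the cookie-based example at the end of this page shows how a map can feed auth_basic the value "off" to make that prompt conditional.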
To use $remote_user as the default value for the $ssl_client_s_dn_cn variable when defining the map:
map $ssl_client_s_dn $ssl_client_s_dn_cn
{
default $remote_user;
~/CN=(?<CN>[^/]+) $CN;
}
Please note that the map directive can only be used in the http context, not inside a server or location block, while the uwsgi_param lines belong in the location. Also note that Nginx variables cannot be overwritten once they are set.
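For placement, here is a minimal sketch of the second approach, with the map at http level and the question's paths reused (everything else is illustrative):
http {
    # the map lives at http level; its value is still evaluated per request,
    # so $remote_user works fine as the default
    map $ssl_client_s_dn $ssl_client_s_dn_cn {
        default $remote_user;
        ~/CN=(?<CN>[^/]+) $CN;
    }

    server {
        listen 443 ssl;
        ssl_verify_client optional;
        ...

        location / {
            auth_basic "Mercurial repositories";
            auth_basic_user_file /srv/hg/.htpasswd;

            include uwsgi_params;
            uwsgi_pass unix:/run/uwsgi/app/hgweb/socket;
            uwsgi_param REMOTE_USER $ssl_client_s_dn_cn;
        }
    }
}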

Related

nginx - proxy facebook/bots to a different server without changing canonical URL

TLDR;
How can I make it so that all scraper/bot requests reaching my frontend https://frontend.example.test/any/path/here are fed the data from https://backend.example.test/prerender/any/path/here without changing the canonical URL?
I have a complex situation where I have a Vue app that pulls data from a php API to render data. These are hosted in China so niceties like netlify prerender and prerender.io are not an option.
Initially I tried:
if ($http_user_agent ~* "googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest\/0\.|pinterestbot|slackbot|vkShare|W3C_Validator|whatsapp") {
rewrite ^/(.*)$ https://backend.example.test/prerender/$1 redirect;
}
which worked, but Facebook used backend.example.test as the canonical URL instead of frontend.example.test.
Setting the og:url to the frontend app caused problems due to a redirect loop. I tried then setting the og:url to the frontend with a query param that skipped the nginx forward, but for some reason this wasn't working properly on the live server and I imagine facebook would still end up pulling the data from the final url anyhow.
Thus I imagine the only solution is to use proxy_pass, but proxy_pass with a URI is not permitted inside an if block (and I have read the "if is evil" article).
I feel like all I need is something like a functioning version of:
location / {
if ($http_user_agent ~* "googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest\/0\.|pinterestbot|slackbot|vkShare|W3C_Validator|whatsapp") {
proxy_pass https://backend.example.test/prerender;
}
...
}
(I am of course aware of the contradiction of having to have Facebook sharing work in China, but the client is requesting this for their international users as well).
Here is the solution for your problem:
https://www.claudiokuenzler.com/blog/934/nginx-proper-way-different-upstream-user-agent-based-reverse-proxying-rule-without-if
I'm copying here the main parts in case the link breaks:
Create a dynamic target upstream with the map directive:
map "$http_user_agent" $targetupstream {
default http://127.0.0.1:8080;
"~^mybot" http://127.0.0.1:8090;
}
Here "~^mybot" is a regular expression, if the user agent matches that expression it will use that upstream server.
If the user-agent does not match any entries, Nginx will use the "default" entry (saving http://127.0.0.1:8080 as $targetupstream variable).
Then you just have to use a that upstream in a proxy pass setting:
location / {
include /etc/nginx/proxy.conf;
proxy_set_header X-Forwarded-Proto https;
proxy_pass $targetupstream;
}
Now, you could use one upstream pointing to localhost on a port where nginx serves the static files (for regular clients) and another port for the server-side renderer.
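As a rough illustration of that last idea (the ports, the frontend root, and the SPA fallback are assumptions, not taken from the original setup):
# regular visitors: nginx serves the built Vue app itself
server {
    listen 127.0.0.1:8080;
    root /var/www/frontend/dist;        # assumed build output directory
    try_files $uri $uri/ /index.html;   # typical single-page-app fallback
}

# bots: hand the same path to the prerender backend
server {
    listen 127.0.0.1:8090;
    location / {
        # the matched prefix "/" is replaced by "/prerender/", so /any/path/here
        # becomes /prerender/any/path/here on the backend
        proxy_pass https://backend.example.test/prerender/;
        # add "proxy_ssl_server_name on;" if the backend's TLS relies on SNI
    }
}
Either way the public server_name stays frontend.example.test, so the canonical URL never changes.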

CGI self-referenced URLs use HTTP instead of HTTPS when using AWS ELB

I have a Perl HTTPS application which runs behind an Elastic Load Balancer (ELB). The HTTPS is terminated by the ELB, which then forwards the requests as HTTP to my instances.
The issue is that, because the instance itself is accessed via HTTP, the self-referencing URLs that the CGI module generates incorrectly use HTTP instead of HTTPS, so form POSTs fail.
GET requests are OK because they're redirected, but anything that uses POST doesn't work.
If I check my CGI environment variables, I have the following...
HTTP_X_FORWARDED_PORT = 443
HTTP_X_FORWARDED_PROTO = https
REQUEST_SCHEME = http
SERVER_PROTOCOL = HTTP/1.1
The CGI module is presumably using REQUEST_SCHEME or SERVER_PROTOCOL to determine that the URLs should use http://.
Is there some way I can fudge the environment variables at the Apache or NGINX level to convince CGI that the site is in fact HTTPS?
I found a relatively simple way to do it.
I just SetEnv HTTPS 'on' in the Apache config. The CGI module is now satisfied that it's HTTPS, so creates URLs accordingly.
I set the other environment variables to match just to be safe, but it just seems to be HTTPS that CGI uses.
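For the nginx side of the question, the equivalent fudge would look roughly like this if the CGI were run via FastCGI (a sketch: the map, the $cgi_https variable name and the fcgiwrap socket path are assumptions; only the X-Forwarded-Proto header comes from the setup above):
# http context: turn the ELB's X-Forwarded-Proto header into an HTTPS flag
map $http_x_forwarded_proto $cgi_https {
    https   on;
    default "";
}

server {
    listen 80;
    location /cgi-bin/ {
        include fastcgi_params;
        ...                                           # the usual SCRIPT_FILENAME / CGI wrapper wiring
        fastcgi_pass unix:/run/fcgiwrap.socket;       # assumed CGI wrapper socket
        fastcgi_param HTTPS $cgi_https if_not_empty;  # CGI.pm's protocol() checks $ENV{HTTPS}
    }
}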
This is the part of CGI.pm that is responsible for determining if HTTPS is being used:
sub https {
    my ($self,$parameter) = self_or_CGI(@_);
    if ( defined($parameter) ) {
        $parameter =~ tr/-a-z/_A-Z/;
        if ( $parameter =~ /^HTTPS(?:_|$)/ ) {
            return $ENV{$parameter};
        }
        return $ENV{"HTTPS_$parameter"};
    }
    return wantarray
        ? grep { /^HTTPS(?:_|$)/ } keys %ENV
        : $ENV{'HTTPS'};
}
The protocol method consults the return value of this method:
return 'https' if uc($self->https()) eq 'ON';
So, your solution of SetEnv HTTPS on would work.
However, if you wanted your program to respect HTTP_X_FORWARDED_PROTO, you could monkey patch CGI:
#!/usr/bin/env perl
use strict;
use warnings;
use CGI qw();

{
    no warnings 'redefine';
    my $original_https = \&CGI::https;
    *CGI::https = sub {
        goto $original_https unless my $proto = $ENV{HTTP_X_FORWARDED_PROTO};
        return 'on' if lc($proto) eq 'https';
        return 'off';
    };
}

my $cgi = CGI->new;
print $cgi->self_url, "\n";
$ HTTP_X_FORWARDED_PROTO=https ./monkey.pl
https://localhost:80
$ ./monkey.pl
http://localhost
Of course, this points out that CGI now appends the port number, and because it does not pay attention to HTTP_X_FORWARDED_PORT, it likely gets that wrong as well. If I am not mistaken, for that you'd need to monkey patch virtual_port.

Meteor, prerender.io, Nginx and SSL

I'm trying to get prerender.io working for my Meteor app through the Nginx configuration, but I'm not sure exactly how to integrate it.
I've done something similar to the following:
https://www.digitalocean.com/community/questions/how-to-setup-prerender-io-on-my-mean-stack-application-running-behind-nginx
By putting the HTTP proxy directives inside this block:
if ($prerender = 0) {
#the directives
}
But I then get this error:
nginx: [emerg] "proxy_http_version" directive is not allowed here in /etc/nginx/sites-enabled/annachristoffer:48
nginx: configuration file /etc/nginx/nginx.conf test failed
Been stuck on this for a while and can't seem to find a source online that explains it.
The error means that the proxy_http_version directive is not allowed to be used inside an if block. The documentation specifies a context for each directive. For example, the proxy_pass directive is allowed to be used inside an if block.
Many nginx directives are inherited from an outer block, so you may be able to restructure your configuration like this:
proxy_http_version ...;
proxy_... ...;
if ($prerender = 0) {
...;
proxy_pass ...;
}
Please be aware that the use of if comes with caveats.
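A hedged sketch of that restructuring for the prerender case, loosely following the sample config linked in the question ($prerender is assumed to be set earlier by the user-agent checks from that config; the token placeholder and the Meteor port are assumptions):
location / {
    # not allowed inside "if", so these sit at location level and are inherited
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Prerender-Token YOUR_TOKEN;   # only if you use the hosted prerender.io service

    if ($prerender = 1) {
        # the prerender service expects the full URL in the path
        rewrite .* /$scheme://$host$request_uri? break;
        proxy_pass http://service.prerender.io;
    }
    if ($prerender = 0) {
        proxy_pass http://127.0.0.1:3000;            # assumed local Meteor app port
    }
}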

Use one instance of the Yourls URL shortener with multiple domains

I've been looking for a way to use Yourls with multiple domains; the main issue is that when configuring Yourls you need to supply the domain name in the config.php file (the YOURLS_SITE constant).
Configuring just one domain while multiple domains actually point to Yourls causes unexpected behavior.
I've looked around and couldn't find a quick hack for this.
I would use this line in config.php...
define('YOURLS_SITE', 'http://' . $_SERVER['HTTP_HOST'] . '');
(remember to add any /subdirectory if that applies)
Then, as long as your Apache hosts config is correct, any domain or subdomain pointing at this directory will work. Keep in mind, though, that any redirect will work with any domain, so domain.one/redirect == domain.two/redirect.
I found this quick-and-dirty solution and thought it might be useful for someone.
In the config.php file I changed the constant definition to be based on the value of $_SERVER['HTTP_HOST']. This works for me because I have a proxy in front of the server that sets this header; you can also define virtual hosts on your Apache server and it should work the same (perhaps you will need to use $_SERVER['SERVER_NAME'] instead).
So in config.php I changed:
define( 'YOURLS_SITE', 'http://domain1.com');
to
if (strpos($_SERVER['HTTP_HOST'], 'domain2.com') !== false) {
    define('YOURLS_SITE', 'http://domain2.com/YourlsDir');
    /* domain2 doesn't use HTTPS */
    $_SERVER['HTTPS'] = 'off';
} else {
    define('YOURLS_SITE', 'https://domain1.com/YourlsDir');
    /* domain1 always uses HTTPS */
    $_SERVER['HTTPS'] = 'on';
}
Note 1: if Yourls is located in the HTML root, you can remove /YourlsDir from the URL.
Note 2: the URL in YOURLS_SITE must not end with /.
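For reference, the proxy mentioned above can be as small as this (a sketch, assuming Yourls is served by Apache on 127.0.0.1:8080 behind nginx; the domain names are the ones from the example):
server {
    listen 80;
    server_name domain1.com domain2.com;   # every short-link domain you use

    location / {
        # pass the original hostname through so $_SERVER['HTTP_HOST'] sees it
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080;
    }
}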
Hopefully this will help anyone else

Very simple authentication using one-time cookie on nginx

I have a site intended only for private consumption by 3 coders. It's simple HTML served by nginx directly but intended for consumption inside and outside the office.
I want to have a simple password or authentication scheme. I could use HTTP auth, but those sessions tend to expire fairly often, which makes it a pain for people to use. I'm also nervous that it's much easier for someone to sniff than cookies.
So I'm wondering if I could just set a cookie on their browsers in JavaScript with a unique long ID and somehow tell nginx to only accept requests (for a particular subdomain) which has this cookie.
Is this simple enough to do? How do I tell nginx to filter by cookie, and how do I set a cookie in the browser that never expires?
There is a really simple-looking solution that I found in a blog post by Christian Stocker. It implements the following rules:
If the user is on an internal IP, they are allowed.
If the user has the cookie set, they are allowed.
If neither matches, the user is presented with HTTP basic authentication, and on successful authentication a long-term cookie is set.
This is really the best of both worlds.
Here's the config:
map $cookie_letmein $mysite_hascookie {
    "someRandomValue" "yes";
    default "no";
}
geo $mysite_geo {
    192.168.0.0/24 "yes"; #some network which should have access
    10.10.10.0/24 "yes";  #some other network which should have access
    default "no";
}
map $mysite_hascookie$mysite_geo $mysite_authentication {
    "yesyes" "off";                      #both cookie and IP are correct => OK
    "yesno"  "off";                      #cookie is ok, but IP is not => OK
    "noyes"  "off";                      #cookie is not ok, but IP is => OK
    default  "Your credentials please";  #everything else => NOT OK
}
server {
    listen 80;
    server_name mysite.example.org;
    location / {
        auth_basic $mysite_authentication;
        auth_basic_user_file htpasswd/mysite;
        add_header Set-Cookie "letmein=someRandomValue;max-age=3153600000;path=/"; #set that special cookie when everything is ok
        proxy_pass http://127.0.0.1:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
To have Nginx filter by a cookie, you could return an error when the cookie isn't present, and put the real behaviour for the three people who have access inside the location, for example:
server {
    listen 443 ssl http2; # Use HTTPS to protect the secret
    ...
    if ($http_cookie !~ 'secretvalue') {
        return 401;
    }
    location / {
        # Auth'd behaviour goes here
    }
}
And to set a cookie that never expires, fire up your browser's JavaScript console on a page that's on your server's hostname, and enter:
document.cookie = 'cookie=secretvalue;max-age=3153600000;path=/;secure';
That's technically not forever, but 100 years ought to do it. You can also use expires= for an absolute date in RFC1123 format if you're so inclined and can easily adjust the path if you need to. This also sets the secure flag on the cookie so it will only get sent over HTTPS.
There are also browser add-ons that will allow you to create arbitrary cookies, but all modern browsers have a JavaScript console.