Ansible: are variables for specific role and inventory possible?

Let's say I have an nginx config file created from a template, which I use to configure certain hosts to redirect a server name from http to https:
server {
    listen 80;
    server_name {{ server_name }};
    rewrite ^ https://$server_name$request_uri? permanent;
}
Say I have two web sites hosted on the same machine:
site A
site B
each has its own server name and each needs the redirection above. At the same time let's say I have at least two separate deployment configurations, each represented by its own inventory file and its group_vars/ folder, for example:
vagrant onebox
production
each using a different server name. So now I have 2*2 = 4 separate server names:
sitea.myonebox.com
siteb.myonebox.com
sitea.production.com
siteb.production.com
I can't figure out how to define all four of those variables. I can't define two separate variables under group_vars/ because the j2 template expects a single variable name, {{ server_name }}, so I'd have to duplicate the template to make that work.
The other option is to make sitea and siteb two separate roles (which I was going to do anyway) and store server_name in roles/sitea/vars/main.yml; however, that setup does not take the inventory into consideration, meaning I'd be down to 2 variables rather than 4.
Is this possible at all without duplicating the template, or does Ansible simply not support this kind of scenario yet?

If you're going to separate them into two roles anyway, define the site names in the inventory and pass them as role parameters:
roles:
  - { role: sitea, server_name: "{{ sitea_server_name }}" }
  - { role: siteb, server_name: "{{ siteb_server_name }}" }
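Each inventory then carries its own pair of values in its group_vars/ folder. A minimal sketch following the question's naming (the exact file layout is an assumption; any group_vars file the inventory loads will do):

```yaml
# vagrant onebox inventory: group_vars/all.yml
sitea_server_name: sitea.myonebox.com
siteb_server_name: siteb.myonebox.com

# production inventory: group_vars/all.yml
sitea_server_name: sitea.production.com
siteb_server_name: siteb.production.com
```

This way the template keeps its single {{ server_name }} variable, and the inventory chosen with -i decides which pair of names gets injected.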

How about trying this? It may not be the answer you expect.
nginx.j2:
server {
    listen 80;
    server_name {% for host in groups['all'] -%}
        {% if hostvars[host]['ansible_eth0']['ipv4']['address'] == ansible_eth0['ipv4']['address'] %}
            {{ hostvars[host]['inventory_hostname'] }} {% endif %}{% endfor %} ;
    rewrite ^ https://$server_name$request_uri? permanent;
}
This template checks all hosts: if a host's IP address matches the current host's IP address, its inventory_hostname is added to the nginx config file.
Even if you don't go this route, you can look up inventory_hostname for other groups or hosts through hostvars.

Related

uWSGI and nginx configuration for multiple flask apps running on different ports on the same server

I have multiple small Flask apps. I want to run each of them on a different port on the same server (single domain).
For example, say I have 3 Flask apps:
tasks.py --> has API endpoints with /task only
users.py --> has API endpoints with /user only
analysis.py --> has API endpoints with /analysis only
domain name : api.test.com
I want to run tasks.py on port 8080, users.py on port 5000, and analysis.py on port 4500.
I want to configure my uWSGI and nginx so that when I hit api.test.com/task/xxxx the request is directed to port 8080 (where tasks.py is running);
similarly, api.test.com/user/xxxx should be directed to port 5000 and api.test.com/analysis/xxxx to port 4500.
It seems to me that you could serve all of this from one single uWSGI instance with one port, but if you like this way of thinking, you can follow the approach below.
Suppose, you already have several uWSGI instances running on different ports: 8080, 5000 and 4500.
Then you need to create an Nginx config with approximately the following content (read the comments, please):
# webserver configuration
server {
    # port to be listened on by the web server
    listen 80;
    # domain name with its aliases, or the ip
    server_name api.test.com;
    # SET UP THREE LOCATION SECTIONS, ONE FOR EACH ACTION
    location /task {
        # AND SPECIFY THE NEEDED ADDRESS:PORT HERE
        uwsgi_pass YOUR_SERVER_IP:8080;
        # ONE MAY ALSO USE A UNIX SOCKET (IF ON THE SAME SERVER) INSTEAD OF A PORT
        # uwsgi_pass unix:/path/to/the/socket.sock;
        include /etc/nginx/uwsgi_params; # standard params with environment variables required by a uwsgi app
    }
    location /user {
        uwsgi_pass YOUR_SERVER_IP:5000;
        include /etc/nginx/uwsgi_params;
    }
    location /analysis {
        uwsgi_pass YOUR_SERVER_IP:4500;
        include /etc/nginx/uwsgi_params;
    }
}
I hope you know how to run three uWSGI instances, one for each of the ports.
In my opinion, this is extremely ugly: in order to add any new action you will have to edit the Nginx config again. So I don't recommend this solution and only suggest this answer as a demonstration of Nginx's capabilities in connection with other web servers.
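For completeness, one of the three uWSGI instances could be started from an ini file along these lines (the module, path, and process counts are assumptions; repeat with the other ports for users.py and analysis.py):

```ini
; tasks.ini -- hypothetical uWSGI config for the /task app
[uwsgi]
; accept uwsgi-protocol traffic from nginx on this port
socket = 0.0.0.0:8080
; import the Flask application object "app" from tasks.py
module = tasks:app
chdir = /path/to/apps
master = true
processes = 2
```

It would then be started with `uwsgi --ini tasks.ini`.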

Nginx map client certificate to REMOTE_USER for uWSGI with fallback to basic auth?

I'm using Nginx with uWSGI to serve Mercurial; it does basic auth over SSL (Nginx is the SSL terminator; it doesn't get passed on to Hg), but due to the limited security of basic auth even over SSL, as discussed at various places including this site, I want to allow users to connect with client certificates as well, something that TortoiseHg for example supports.
ssl_verify_client optional;
...
map $ssl_client_s_dn $ssl_client_s_dn_cn
{
    default "";
    ~/CN=(?<CN>[^/]+) $CN;
};
...
...
location /
{
    uwsgi_pass unix:/run/uwsgi/app/hgweb/socket;
    include uwsgi_params;
    uwsgi_param SERVER_ADDR $server_addr;
    uwsgi_param REMOTE_USER $ssl_client_s_dn_cn;
    #uwsgi_param REMOTE_USER $remote_user;
    #auth_basic "Mercurial repositories";
    #auth_basic_user_file /srv/hg/.htpasswd;
}
So I treat the CN as a username.
But how do I make it fall back to basic auth when there's no client certificate (and preferably not when there is a certificate but its verification fails; in that case it should just error)? An example I found does it by having a separate server block listening on another port, which I want to avoid: https://github.com/winne27/nginx-cert-and-basic-auth/blob/master/nginx-example.conf
Additionally, in some examples I've seen the following checks in location; are they necessary? if ($ssl_client_verify != SUCCESS) { return 496; } if ($ssl_client_s_dn_cn !~ "^[a-z0-9]{1,10}$") { return 496; } Given http://wiki.nginx.org/IfIsEvil I thought it best to avoid using if.
Nginx 1.11 and 1.12 changed the quoting of $ssl_client_s_dn.
If you landed here with a headache, try this regexp instead:
map $ssl_client_s_dn $ssl_client_s_dn_cn {
    default "should_not_happen";
    ~CN=(?<CN>[^/,\"]+) $CN;
}
I see two possible solutions: you can either overwrite the uwsgi_param, or use $remote_user as the default value for the variable $ssl_client_s_dn_cn.
To overwrite the uwsgi_param (this should also work with fastcgi_param), use the map directive as you suggested (just remove the ";" after "}"), and add the if_not_empty parameter to the directive:
uwsgi_param REMOTE_USER $remote_user;
uwsgi_param REMOTE_USER $ssl_client_s_dn_cn if_not_empty;
$ssl_client_s_dn_cn should override $remote_user if present. This approach has the advantage that you can use the two different variable names separately elsewhere (for example, in the log format).
See:
http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html#uwsgi_param
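To illustrate the log-format point, both identities could be recorded side by side like this (the format name and field order are just an illustration, not part of the original config):

```nginx
# hypothetical access-log format showing the basic-auth user and the cert CN
log_format authlog '$remote_addr - basic:[$remote_user] cert:[$ssl_client_s_dn_cn] '
                   '"$request" $status';
access_log /var/log/nginx/access.log authlog;
```

Note that log_format itself belongs at http context.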
To use $remote_user as the default value for the $ssl_client_s_dn_cn variable when defining the map:
map $ssl_client_s_dn $ssl_client_s_dn_cn
{
    default $remote_user;
    ~/CN=(?<CN>[^/]+) $CN;
}
Please note that the map directive can only be used at http context, not inside a server or location block. Also note that Nginx variables cannot be overwritten once set.

Use one instance of Yourls URL shortner with multiple domains

I've been looking for a way to use Yourls with multiple domains. The main issue is that when configuring Yourls you need to supply the domain name in the config.php file (the YOURLS_SITE constant).
Configuring just one domain while multiple domains actually point at Yourls causes unexpected behavior.
I've looked around and couldn't find a quick hack for this.
I would use this line in config.php...
define('YOURLS_SITE', 'http://' . $_SERVER['HTTP_HOST'] . '');
(note: add any /subdirectory or whatever if that applies)
Then, as long as your Apache hosts config is correct, any domain or subdomain pointing at this directory will work. Keep in mind, though, that any redirect will work with any domain, so domain.one/redirect == domain.two/redirect.
I found this quick-and-dirty solution and thought it might be useful for someone.
In the config.php file I changed the constant definition to be based on the value of $_SERVER['HTTP_HOST']. This works for me because I have a proxy in front of the server that sets this header; you can also define virtual hosts on your Apache server and it should work the same (perhaps you will need to use $_SERVER['SERVER_NAME'] instead).
So in config.php I changed:
define( 'YOURLS_SITE', 'http://domain1.com');
to
if (strpos($_SERVER['HTTP_HOST'], 'domain2.com') !== false) {
    define( 'YOURLS_SITE', 'http://domain2.com/YourlsDir' );
    /* domain2 doesn't use HTTPS */
    $_SERVER['HTTPS'] = 'off';
} else {
    define( 'YOURLS_SITE', 'https://domain1.com/YourlsDir' );
    /* domain1 always uses HTTPS */
    $_SERVER['HTTPS'] = 'on';
}
Note 1: if Yourls is located in the html root you can remove /YourlsDir from the URL.
Note 2: the URL in YOURLS_SITE must not end with /.
Hopefully this will help anyone else
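If more than two domains are involved, the same idea scales better as a lookup table in config.php. A sketch (the domain list, paths, and scheme flags are assumptions to adapt):

```php
<?php
// Hypothetical multi-domain table: host => [YOURLS_SITE base URL, HTTPS on/off]
$yourls_domains = [
    'domain1.com' => ['https://domain1.com/YourlsDir', 'on'],
    'domain2.com' => ['http://domain2.com/YourlsDir', 'off'],
];
$host = $_SERVER['HTTP_HOST'] ?? 'domain1.com';
// Fall back to the first entry for unknown hosts
$entry = $yourls_domains[$host] ?? reset($yourls_domains);
define('YOURLS_SITE', $entry[0]);
$_SERVER['HTTPS'] = $entry[1];
```

The same notes apply: no trailing / in the URLs, and drop /YourlsDir if Yourls sits in the html root.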

Apache + NginX reverse proxy: serving static and proxy files within nested URLs

I have Apache running on my server on port 8080 with NginX running as a reverse proxy on port 80. I am attempting to get NginX to serve static HTML files for specific URLs. I am struggling to write the NginX configuration that does this. I have conflicting directives, as my URLs are nested within each other.
Here's what I want to have:
One URL is at /example/ and I want NginX to serve a static HTML file located on my server at /path/to/www/example-content.html instead of letting Apache serve the page to NginX.
Another URL is at /example/images/ and I want Apache to serve that page to NginX, just as it does for the rest of the site.
I have set up my nginx.conf file like this:
server {
    listen 80;
    server_name localhost;
    # etc...
    location / {
        proxy_pass http://127.0.0.1:8080/;
    }
    # etc...
My attempt to serve the static file at /example/ from NginX went like this:
location /example/ {
    alias /path/to/www/;
    index example-content.html;
}
This works, but it means everything after the /example/ URL (such as /example/images/) is aliased to that local path also.
I would like to use regex instead but I've not found a way to serve up the file specifically, only the folder. What I want to be able to say is something like this:
location ~ ^/example/$ {
    alias /path/to/www/example-content.html;
}
This specifically matches the /example/ folder, but using a filename like that is invalid syntax. Using the explicit location = /example/ in any case doesn't work either.
If I write this:
location ~ ^/example/$ {
    alias /path/to/www/;
    index example-content.html;
}
location ~ /example/(.+) {
    alias /path/to/www/$1;
}
The second directive attempts to undo the damage of the first directive, but it ends up overriding the first directive and it fails to serve up the static file.
So I'm at a bit of a loss. Any advice at all extremely welcome! Many thanks.
Since you say you have Apache running, I assume it is there to run dynamic content such as PHP (otherwise it is not needed). In that case, the example config below will serve all static content with Nginx and pass everything else to Apache.
server {
    # default index should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    index index.html index.php;
    # default root should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    root /path/to/www/;
    location / {
        try_files $uri $uri/ @proxy;
    }
    location @proxy {
        proxy_pass http://127.0.0.1:8080;
        # Other Proxy Params
    }
    location ~ \.php$ {
        error_page 418 = @proxy;
        location ~ \..*/.*\.php$ { return 400; }
        return 418;
    }
}
What this assumes is that you are following a structured setup where the default file in every folder is either called "index.html" or "index.php" such as "/example/index.html", "some-folder/index.html" and "/some-other-folder/index.html".
With this, navigating to "/example/", "/some-folder/" or "/some-other-folder/" will just work with no further action.
If each folder has a default file with a randomly differing name, such as "/example/example-content.html", "some-folder/some-folder.html" and "some-other-folder/yet-another-different-default.html", then it becomes a bit more difficult, as you then need to do something like this:
server {
    # default index should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    index index.html index.php;
    # default root should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    root /path/to/www/;
    location / {
        try_files $uri $uri/ @proxy;
    }
    location @proxy {
        # Proxy params
        proxy_pass http://127.0.0.1:8080;
    }
    location ~ .+\.php$ {
        error_page 418 = @proxy;
        location ~ \..*/.*\.php$ { return 400; }
        return 418;
    }
    location /example/ {
        # Need to keep defining new index due to lack of structure
        # No need for alias or new root
        index example-content.html;
    }
    location /some-folder/ {
        # Need to keep defining new index due to lack of structure
        # No need for alias or new root
        index some-folder.html;
    }
    location /some-other-folder/ {
        # Need to keep defining new index due to lack of structure
        # No need for alias or new root
        index yet-another-different-default.html;
    }
    # Keep adding new location blocks for each folder
    # Obviously not the most efficient arrangement
}
The better option is to have a structured and logical layout of files on the site instead of multiple differing locations.

How to setup mass dynamic virtual hosts in nginx?

I've been playing with nginx for about an hour trying to set up mass dynamic virtual hosts.
If you've ever done it in Apache you know what I mean.
The goal is to have dynamic subdomains for a few people in the office (more than 50).
Perhaps doing this will get you where you want to be:
server {
    root /sites/$http_host;
    server_name $http_host;
    ...
}
I like this as I can literally create sites on the fly, just create new directory named after the domain and point the DNS to the server ip.
You will need some scripting knowledge to put this together. I would use PHP, but if you are good at bash scripting, use that. I would do it like this:
First create a folder (/usr/local/etc/nginx/domain.com/).
In the main nginx.conf add the directive: include /usr/local/etc/nginx/domain.com/*.conf;
Every file in this folder should be a different vhost, named subdomain.conf.
You do not need to restart the nginx server for the config to take effect; you only need to reload it: /usr/local/etc/rc.d/nginx reload
Or you can make only one conf file where all the vhosts are set. This is probably better, so that nginx doesn't need to load up 50 files, but only one.
If you have problems with the scripting, then ask a question about that.
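A minimal sketch of that scripted approach in shell (the helper name, domain, and paths are assumptions, not standard tooling):

```shell
# Hypothetical helper: print a minimal vhost config for one subdomain.
mkvhost() {
    sub="$1"
    cat <<EOF
server {
    listen 80;
    server_name ${sub}.domain.com;
    root /sites/${sub}.domain.com;
}
EOF
}

# Usage sketch:
#   mkvhost alice > /usr/local/etc/nginx/domain.com/alice.conf
#   /usr/local/etc/rc.d/nginx reload
```

Each run drops one subdomain.conf into the included folder, after which a reload picks it up.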
Based on user2001260's answer, later edited by partlov, here's my outcome.
Bear in mind this is for a dev server located on a local virtual machine, where the .dev prefix is used at the end of each domain. If you want to remove it, or use something else, the \.dev part in the server_name directive could be edited or altogether removed.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # Match any server name with the format [subdomain[.subdomain...]].domain.tld.dev
    server_name ~^(?<subdomain>([\w-]+\.)*)?(?<domain>[\w-]+\.[\w-]+)\.dev$;
    # Map by default to (projects_root_path)/(domain.tld)/www
    set $rootdir "/var/www/$domain/www";
    # Check if a (projects_root_path)/(subdomain.)(domain.tld)/www directory exists
    if (-d "/var/www/$subdomain.$domain/www"){
        # in which case, set that directory as the root
        set $rootdir "/var/www/$subdomain.$domain/www";
    }
    root $rootdir;
    index index.php index.html index.htm index.nginx-debian.html;
    # Front-controller pattern as recommended by the nginx docs
    location / {
        try_files $uri $uri/ /index.php;
    }
    # Standard php-fpm based on the default config below this point
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
    location ~ /\.ht {
        deny all;
    }
}
The regex in server_name captures the variables subdomain and domain. The subdomain part is optional and can be empty. I have set it up so that by default, if you have a subdomain, say admin.mysite.com, the root is set to the same root as mysite.com. This way, the same front controller (in my case index.php) can route based on the subdomain. But if you want to keep an altogether different application in a subdomain, you can have an admin.mysite.com directory and that directory will be used for calls to admin.mysite.com.
Careful: The use of if is discouraged in the current nginx version, since it adds extra processing overhead for each request, but it should be fine for use in a dev environment, which is what this configuration is good for. In a production environment, I would recommend not using a mass virtual host configuration and configuring each site separately, for more control and better security.
server_name ~^(?<vhost>[^.]*)\.domain\.com$;
set $rootdir "/var/www/whatever/$vhost";
root $rootdir;
As @Samuurai suggested, here is a short version with Angular 5 and nginx build integration:
server {
    server_name ~^(?<branch>.*)\.staging\.yourdomain\.com$;
    access_log /var/log/nginx/branch-access.log;
    error_log /var/log/nginx/branch-error.log;
    index index.html;
    try_files $uri$args $uri$args/ $uri $uri/ /index.html =404;
    root /usr/share/nginx/html/www/theft/$branch/dist;
}
Another alternative is to have includes nested a few levels deep so that directories can be categorized as you see fit. For example:
include sites-enabled/*.conf;
include sites-enabled/*/*.conf;
include sites-enabled/*/*/*.conf;
include sites-enabled/*/*/*/*.conf;
As long as you are comfortable with scripting, it is not very hard to put together some scripts that will quickly set up vhosts in nginx. This slicehost article goes through setting up a couple of vhosts and does it in a way that is easily scriptable and keeps the configurations separate. The only downside is having to restart the server, but that's to be expected with config changes.
Update: If you don't want to do any of the config maintenance yourself, then your only two options (the safe ones, anyway) would be either to find a program that lets your users manage their own chunk of the nginx config (which will let them create all the subdomains they want), or to create such a user-facing management console yourself.
Doing this yourself would not be too hard, especially if you already have the scripts to do the work of setting things up. The web-based interface can call out to the scripts to do the actual work, so that all the web interface has to deal with is managing who has access to what.