Securing Nginx with SSL

I'm securing an Nginx server with SSL and I have a question. I have two virtual servers: one for HTTP listening on port 80 and one for HTTPS listening on 443, like this:
# HTTP server
server {
    listen 80;
    server_name localhost;
    ...
    # many configuration rules here for caching, etc.
}

# HTTPS server
server {
    listen 443 ssl;
    server_name localhost;
    ...
}
The question is: do I need to duplicate all the configuration rules that I have in the HTTP version into my HTTPS version? Is there any way to avoid duplicating all these rules?
UPDATE
I'm trying to set this up with an include according to ibueker's answer. It looks easy, but somehow it is not working. Does the include need to be inside a location? Example attached:
# HTTP server
server {
    listen 80;
    server_name localhost;
    ...
    include ./wpo
}
The wpo file is in the same path, and it looks like this:
# Expire rules for static content
# RCM: WPO

# Images
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
    root /home/ubuntu/env/production/www/yanpy/app;
    expires 1w;
    add_header Cache-Control "public";
}

# CSS and Javascript
location ~* \.(?:css|js)$ {
    root /home/ubuntu/env/production/www/yanpy/app;
    expires 1w;
    add_header Cache-Control "public";
}

# cache.appcache, your document html and data
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
    root /home/ubuntu/env/production/www/yanpy/app;
    expires -1;
}

You can put them in another file and include them for both server blocks.
include /path/to/file;
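As a rough sketch of how that can look, assuming the shared rules are saved as /etc/nginx/shared.conf (the file name and path here are just placeholders):

# /etc/nginx/shared.conf holds the common location/caching rules

# HTTP server
server {
    listen 80;
    server_name localhost;
    include /etc/nginx/shared.conf;   # same rules as the HTTPS block
}

# HTTPS server
server {
    listen 443 ssl;
    server_name localhost;
    include /etc/nginx/shared.conf;   # no duplication needed
}

Note that the include directive takes a trailing semicolon, and a relative path is resolved against nginx's configuration prefix, which may be relevant to the ./wpo example above.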

Related

nginx: proxy_pass subdirectories to other servers

I'm using nginx as a web server and reverse proxy with SSL enabled.
The web server serves a WoltLab Suite Forum 5.0.0 (formerly known as Burning Board) and proxies some subdomains to different hosts, such as a NodeJS backend, a Tomcat backend, and several other services.
This has worked great so far, but now I have the problem that I can no longer use subdomains to accomplish this.
Please don't ask why, please don't.
Now that I can no longer use subdomains, I'm trying to get it working with subdirectories.
An example:
I had xyz.example.com point to my nginx server at 12.13.14.15.
nginx proxied all requests to xyz.example.com to 10.20.30.40:1234
Now i want nginx to proxy all requests to example.com/xyz/ to 10.20.30.40:1234
I got this working with Apache Archiva as the backend service, but all other services, like my NodeJS backend, refuse to work correctly with my current configuration.
It sends me to the BurningBoard, which shows me its Page-Not-Found page.
example.com/xyz/admin/index.php becomes example.com/admin/index.php, which won't work, of course.
The directory that proxies to Archiva has the exact same configuration, just with other directory names, of course.
The Archiva URL looks like this after I call it from the web:
example.com/repo/ becomes example.com/repo/#welcome and shows me Archiva's welcome page.
This is exactly what I want for my other services, too.
Here are my current configuration files for nginx (sensitive data replaced with X):
<=== sites-available/default ===>
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name XXXX.XX www.XXXX.XX;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    include snippets/ssl-XXXXX.XX.conf;
    include snippets/ssl-params.conf;

    root /var/www/html;

    include /etc/nginx/snippets/proxies.conf;

    # the last try_files argument is for SEO friendly URLs for the WSF 5.0
    location / {
        index index.php;
        try_files $uri $uri/ /index.php?$uri&$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }

    # Lets Encrypt automation
    location ~ /.well-known {
        allow all;
    }

    location ~ /\.ht {
        deny all;
    }
}
<=== snippets/proxies.conf ===>
# Apache Archiva
location /repo {
    rewrite ^/repo(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# Git Solution
location /git {
    rewrite ^/git(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# Filehosting
location /cloud {
    rewrite ^/cloud(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# NodeJS
location /webinterface {
    rewrite ^/webinterface(/.*)$ $1 break;
    proxy_pass https://XXXXXXXX:XXXXX;
    include /etc/nginx/snippets/websocket-magic.conf;
    return 404;
}
Any ideas how to solve this problem?
Also, please tell me if you need more information, such as the nginx version.
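For what it's worth, a common alternative for this kind of subdirectory proxying is to let proxy_pass strip the prefix itself by giving it a URI part, instead of using a separate rewrite. A minimal sketch, using the hypothetical /xyz/ prefix and the 10.20.30.40:1234 backend from the example above:

# The trailing "/" on proxy_pass replaces the matched "/xyz/" prefix,
# so example.com/xyz/admin/index.php is forwarded upstream as /admin/index.php.
location /xyz/ {
    proxy_pass http://10.20.30.40:1234/;
    proxy_set_header Host $host;
}

This only affects what is sent upstream; if the backend itself generates absolute links or redirects without the /xyz/ prefix (which is what the NodeJS service appears to do), those still have to be handled, for example by making the application aware of its base path.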

'ERR_TOO_MANY_REDIRECTS' error nginx docker

I am using an nginx Docker container to deploy my app on an AWS server. I have to access my API through an nginx proxy URL, which looks like https://domain.com/api/. This is an HTTPS request, so I have to proxy it to another port where the API service is running; that service runs in another Docker container on the same server instance. So my nginx conf file looks like below:
server {
    listen 80;
    server_name domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name domain.com;

    # add Strict-Transport-Security to prevent man in the middle attacks
    add_header Strict-Transport-Security "max-age=31536000";

    location /api/ {
        proxy_pass http://my-public-ip-address:3000;
    }
}
So my problem is that when I try to access the API endpoint using the above URL, it shows ERR_TOO_MANY_REDIRECTS. Does anyone know about this issue? I also went through all the articles mentioning the same issue, but no luck.
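One frequent cause of such a loop is that the backend on port 3000 only ever sees plain HTTP and responds with its own redirect to HTTPS, which the browser then follows back through nginx, and so on. A minimal sketch of one way to break that loop, assuming the backend honors the conventional X-Forwarded-Proto header (whether it does is an assumption about the API service):

location /api/ {
    proxy_pass http://my-public-ip-address:3000;
    # Tell the backend the original request already arrived over HTTPS,
    # so it should not issue another https:// redirect itself.
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
}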

Only listen to ONE Subdomain and ONE Port

I need to find out how to listen to only one port-hostname combination and to return 404 on every other request.
My example:
I have set up a subdomain for ownCloud that uses SSL and listens on port 12345 instead of 443. Let's assume this subdomain is oc.example.com. So I want nginx to listen only to https://oc.example.com:12345, but not to
http://oc.example.com:12345
https://oc.example.com:X where X != 12345
https://BLABLA.example.com:12345
http://IP.IP.IP.IP:12345
https://IP.IP.IP.IP:12345
and so on
If someone requests any resource that does not exactly match https://oc.example.com:12345, an error (e.g. 404 Not Found) should be returned, or the server simply should not respond.
My config so far looks like:
server {
    # I think there's something wrong here?
    listen 12345;
    server_name oc.example.com;

    ssl on;
    ssl_certificate /etc/ssl/nginx/owncloud/owncloud.crt;
    ssl_certificate_key /etc/ssl/nginx/owncloud/owncloud.key;

    add_header Strict-Transport-Security max-age=31536000;
    add_header X-Frame-Options DENY;

    # Path to the root of your installation
    root /var/some/path/to/my/owncloud/;

    client_max_body_size 10G; # set max upload size
    fastcgi_buffers 64 4K;

    rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
    rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
    rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;

    index index.php;
    error_page 403 = /core/templates/403.php;
    error_page 404 = /core/templates/404.php;

    ... here are the definitions of my locations
}
I read the documentation and found that nginx first looks for the correct server definition by the port number. If there are multiple matches, the server_name is used to find the correct definition. But I only have one definition!
Does somebody know how to solve the problem?
nginx checks the Host header and chooses the appropriate virtual host. In order to have it drop everything else, just add a 'catch-all' using the default_server directive:
server {
    listen 80 default_server;
    return 404;
}
All requests to oc.example.com:X, besides 12345, will be dropped by default, since you didn't define other vhosts listening on other ports.
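To also reject requests that do reach port 12345 but carry the wrong Host header (the https://IP.IP.IP.IP:12345 case above), the same catch-all idea can be applied on that port. A rough sketch, reusing the certificate from the question because TLS is negotiated before nginx sees the Host header:

# Hypothetical catch-all on port 12345: any request whose Host header
# does not match oc.example.com lands in this default server.
server {
    listen 12345 ssl default_server;
    server_name _;
    ssl_certificate /etc/ssl/nginx/owncloud/owncloud.crt;
    ssl_certificate_key /etc/ssl/nginx/owncloud/owncloud.key;
    return 404;
}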

Invalid ports added in redirects on AWS EC2 nginx using SSL decryption offloaded to ELB

On AWS, I'm trying to migrate a PHP Symfony app running on nginx. I want to be able to test the app by directly talking to the EC2 server and via an ELB (the public route in).
I've set up an Elastic Load Balancer to decrypt all the SSL traffic and pass it on to my EC2 server via port 80, as well as pass port 80 directly on to my EC2 server via port 80.
Initially this caused infinite redirects in my app, but I researched and then fixed this by adding
fastcgi_param HTTPS $https;
with some custom logic that looks at $http_x_forwarded_proto to figure out when it's actually via SSL.
There remains one issue I can't solve. When a user logs into the Symfony app, if they come via the ELB, the form POST eventually returns a redirect back to
https://elb.mysite.com:80/dashboard
instead of
https://elb.mysite.com/dashboard
which gives the user an error of "SSL connection error".
I've tried setting
fastcgi_param SERVER_PORT $fastcgi_port;
to force it away from 80, and I've also added the
port_in_redirect off
directive, but neither makes any difference.
The only way I've found to fix this is to alter the ELB 443 listener to pass traffic via HTTPS. The EC2 server has a self-signed SSL certificate configured. But this means the EC2 server is wasting capacity performing this unnecessary second decryption.
Any help is very much appreciated. Maybe there is a separate way within nginx to tell POST requests not to apply port numbers?
Nginx vhost config:
server {
    port_in_redirect off;
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/mysite.com/self-ssl.crt;
    ssl_certificate_key /etc/nginx/ssl/mysite.com/self-ssl.key;

    # Determine if HTTPS being used either locally or via ELB
    set $fastcgi_https off;
    set $fastcgi_port 80;
    if ( $http_x_forwarded_proto = 'https' ) {
        # ELB is using https
        set $fastcgi_https on;
        # set $fastcgi_port 443;
    }
    if ( $https = 'on' ) {
        # Local connection is using https
        set $fastcgi_https on;
        # set $fastcgi_port 443;
    }

    server_name *.mysite.com my-mysite-com-1234.eu-west-1.elb.amazonaws.com;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log error;

    rewrite ^/app\.php/?(.*)$ /$1 permanent;

    location / {
        port_in_redirect off;
        root /var/www/vhosts/mysite.com/web;
        index app.php index.php index.html index.html;
        try_files $uri @rewriteapp;
    }

    location ~* \.(jpg|jpeg|gif|png)$ {
        root /var/www/vhosts/mysite.com/web;
        access_log off;
        log_not_found off;
        expires 30d;
    }

    location ~* \.(css|js)$ {
        root /var/www/vhosts/mysite.com/web;
        access_log off;
        log_not_found off;
        expires 2h;
    }

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }

    location ~ ^/(app|app_dev|config)\.php(/|$) {
        port_in_redirect off;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param HTTPS $fastcgi_https;
        # fastcgi_param SERVER_PORT $fastcgi_port;
        # fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/vhosts/mysite.com/web$fastcgi_script_name;
        include fastcgi_params;
    }
}
References:
FastCGI application behind NGINX is unable to detect that HTTPS secure connection is used
https://serverfault.com/questions/256191/getting-correct-server-port-to-php-fpm-through-nginx-and-varnish
http://nginx.org/en/docs/http/ngx_http_core_module.html#port_in_redirect
I finally got a solution via another channel.
The answer is to comment out the SERVER_PORT line with a # in the fastcgi_params file.
Many thanks to Maxim from Nginx.
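For illustration, a sketch of what that change looks like, assuming the stock fastcgi_params file that ships with nginx (the /etc/nginx/fastcgi_params path is the usual default and may differ on other systems):

# /etc/nginx/fastcgi_params (excerpt)
fastcgi_param  SERVER_ADDR        $server_addr;
# fastcgi_param  SERVER_PORT        $server_port;   # commented out so redirects no longer carry :80
fastcgi_param  SERVER_NAME        $server_name;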

Nginx serving static content and proxy to apache

Is there a configuration I can use with nginx that would serve all static content for all websites on port 80 and forward all dynamic content to Apache on port 8080? Preferably I would not have to change anything in the Apache vhosts other than the port.
Where can I find such working configuration?
Here's a good example: http://wiki.nginx.org/FullExample
Special emphasis on this part:
server { # simple reverse-proxy
    listen 80;
    server_name domain2.com www.domain2.com;
    access_log logs/domain2.access.log main;

    # serve static files
    location ~ ^/(images|javascript|js|css|flash|media|static)/ {
        root /var/www/virtual/big.server.com/htdocs;
        expires 30d;
    }

    # pass requests for dynamic content to rails/turbogears/zope, et al
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
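Since the goal is to forward dynamic content to Apache name-based vhosts on port 8080 without changing them, it may also help to pass the original Host header through. A rough sketch built on the proxy location above (the header directives are standard nginx; the rest mirrors the example):

location / {
    proxy_pass http://127.0.0.1:8080;
    # Preserve the original host so Apache's name-based vhosts keep matching
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}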