I'm using nginx as the proxy server. My application has a feature where users can use their own domain instead of mine. For that, they need to point a CNAME record at my domain.
This is my nginx configuration:
server {
    server_name scan.mydomain.com anonymous.mydomain.com "";

    access_log /etc/nginx/log/local-wc.access.log;
    error_log /etc/nginx/log/local-wc.error.log;

    location / {
        root /var/www/html/qcg-scanning-frontend/dist/webapp/;
        index index.html;
        try_files $uri $uri/ /index.html;

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/anonymous.mydomain.com-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/anonymous.mydomain.com-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = scan.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = anonymous.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name scan.mydomain.com anonymous.mydomain.com "";
    listen 80;
    return 404; # managed by Certbot
}
This configuration works fine when browsing with my own domains, scan.mydomain.com and anonymous.mydomain.com, but with any pointed domain such as new.example.com it returns a 404 page (presumably because of the return 404 statement).
For SSL, I'm using Let's Encrypt with certbot.
How can I configure nginx to:
Allow traffic from all CNAME-pointed domains to my server?
Provide an SSL certificate to all of those domains?
I used Caddy Server, which is far better suited to this than nginx and satisfies all the requirements.
https://caddyserver.com/
Features of Caddy
Support for third-party domain CNAME pointing
JSON-based configuration
API support for configuration
On-demand TLS (see the sketch below)
Serves SSL/TLS to all domains by default in production
No hassle installing and managing SSL certificates for each domain
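As a rough sketch (not from the original answer, and assuming Caddy v2 syntax with placeholder values for the ask endpoint and backend port), a Caddyfile using on-demand TLS for CNAME-pointed customer domains might look like this:
{
    # Caddy asks this endpoint whether it may issue a certificate for a requested host;
    # it should return 200 only for domains that belong to a known customer.
    on_demand_tls {
        ask http://localhost:5555/check-domain
    }
}

https:// {
    tls {
        on_demand
    }
    # Proxy every host, including customer CNAME domains, to the application backend.
    reverse_proxy 127.0.0.1:3000
}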
We have an app that uploads automatically generated SSL certificates to our NGINX load balancers. One time a "bad certificate" got uploaded and an automated nginx reload was executed afterwards; our server went offline for a while, causing DNS issues (DNS not found) for our server's domain and a huge amount of downtime for our clients.
However, it is a feature of our application to allow apps to upload an SSL certificate, which our backend server installs automatically. Is there a way to tell NGINX to ignore bad conf files and crt/key files altogether? Looking at the logs from before the incident, I remember seeing something like an SSL handshake error.
Here's what our main nginx-jelastic.conf looks like:
######## HTTP SECTION PROTOTYPE ########
http {
    server_tokens off;
    ### other settings hidden for simplicity
    include /etc/nginx/conf.d/*.conf;
}
######## TCP SECTION PROTOTYPE ########
So what I am wondering is whether it's possible for nginx to just ignore any bad conf files located there. Here's a sample of what gets uploaded to the conf.d folder:
#
# www.example-domain.com HTTPS server configuration
#
server {
    listen 443 ssl;
    server_name www.example-domain.com;

    ssl_certificate /var/lib/nginx/ssl/www.example-domain.com.crt;
    ssl_certificate_key /var/lib/nginx/ssl/www.example-domain.com.key;

    access_log /var/log/nginx/localhost.access_log main;
    error_log /var/log/nginx/localhost.error_log info;
    proxy_temp_path /var/nginx/tmp/;

    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root html;
    }

    location / {
        set $upstream_name common;
        include conf.d/ssl.upstreams.inc;
        proxy_pass http://$upstream_name;
        proxy_next_upstream error;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Host $http_host;
        proxy_set_header X-Forwarded-For $http_x_forwarded_for;
        proxy_set_header X-URI $uri;
        proxy_set_header X-ARGS $args;
        proxy_set_header Refer $http_refer;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
For some reason the certificate and key referenced in the configuration could be wrong, and that is going to wreck the nginx server. Since our domain is pointed to this server via an A record, it is a total disaster if nginx fails: DNS issues happen and it could take 24-48 hours for DNS to recover.
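One common safeguard, sketched here purely as an assumption about how the upload hook could work (the wrapper script and its flow are hypothetical, not from the question), is to test the new configuration before reloading and refuse the reload if the test fails:
#!/bin/sh
# Hypothetical wrapper the certificate-upload hook could call instead of reloading directly.
# "nginx -t" parses the whole configuration, including everything pulled in via
# include /etc/nginx/conf.d/*.conf, and exits non-zero on syntax errors or on
# certificate/key files it cannot load.
if nginx -t; then
    nginx -s reload
else
    echo "New configuration failed validation; keeping the currently loaded one." >&2
    exit 1
fi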
I created a Node.js site that uses Google authentication. The site is used by 100+ users concurrently, which affects performance. I understand that nginx could help scale the site by running multiple instances of the Node.js app on multiple ports and then using nginx as a load balancer.
So I configured nginx, but the issue is that it does not seem to work with Google authentication. I am able to see the first page of my site and to log in via Google, but it does not work after that point.
Any suggestions as to what could be missing to make this work?
This is my configuration file:
upstream my_app {
    least_conn; # Use Least Connections strategy
    server ip:3001; # NodeJS Server 2 (I changed the actual ip)
    server ip:3002; # NodeJS Server 3
    server ip:3003; # NodeJS Server 4
    server ip:3004; # NodeJS Server 5
    keepalive 256;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    expires epoch;
    add_header Cache-Control "no-cache, public, must-revalidate, proxy-revalidate";

    server_name ip;

    access_log /var/log/nginx/example.com-access.log;
    error_log /var/log/nginx/example.com-error.log error;

    # Browsers and robots always look for these;
    # turn off logging for them
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; }

    # Pass the request to the node.js server
    # with the correct headers for proxy awareness
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_buffers 8 16k;
        proxy_buffer_size 32k;

        proxy_pass http://my_app;
        proxy_redirect off;

        add_header Pragma "no-cache";

        # Handle WebSocket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I just started learning about nginx. I checked with only one IP address in the upstream and it works, i.e. it works as a reverse proxy but not as a load balancer, and my guess is that this is due to the nature of Google authentication.
The error I receive in the error log is "connection refused".
Thanks.
I figured out what was wrong. The least_conn load-balancing technique was not the right choice, since it does not keep a client on the same backend between requests. I changed it to ip_hash (or hash $remote_addr) and it is working.
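For reference, a minimal sketch of the upstream block with ip_hash, so requests from the same client IP keep hitting the same Node.js instance (the placeholder addresses are the ones from the question):
upstream my_app {
    ip_hash;            # pin each client IP to one backend so the session survives
    server ip:3001;
    server ip:3002;
    server ip:3003;
    server ip:3004;
    keepalive 256;
}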
My goal is to redirect from port 80 to 443 (force https), but I can't manage to get a working https configuration first. I get a 503 Server Error and nothing appears in the logs. I've looked at all the posts on SO and SF; none of them worked (the X_FORWARDED_PROTO and X-Forwarded-For headers don't make a difference). I'm on EC2 behind a load balancer, so I don't need to use the SSL-related directives, as I've configured my certificate on the ELB already. I'm using Tornado as the web server.
Here's the config, if anyone has ideas, thank you!
http {
    # Tornado server
    upstream frontends {
        server 127.0.0.1:8002;
    }

    server {
        listen 443;
        client_max_body_size 50M;
        root <redacted>/static;

        location ^~ /static/ {
            root <redacted>/current;
            if ($query_string) {
                expires max;
            }
        }

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }
    }
}
Well, there are two different tasks here.
First, if you need to redirect all your http traffic to https, you'll need to create an http server block in nginx:
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}
Second, if your SSL is terminated at the ELB, then you don't need an SSL-enabled nginx server at all. Simply pass traffic from the ELB to your server's port 80.
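A minimal sketch of that second setup, assuming the ELB terminates TLS and forwards plain HTTP to the instance's port 80 (the upstream name is the one from the question; the server_name is a placeholder):
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        # The ELB adds X-Forwarded-Proto, so the app can still tell https from http.
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_pass http://frontends;
    }
}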
So I've looked at every sample configuration I could find, and yet every time I try to view a page that requires SSL, I end up in a redirect loop. I'm running nginx/0.8.53 and Passenger 3.0.2.
Here's the SSL config:
server {
    listen 443 default ssl;
    server_name <redacted>.com www.<redacted>.com;
    root /home/app/<redacted>/public;
    passenger_enabled on;
    rails_env production;

    ssl_certificate /home/app/ssl/<redacted>.com.pem;
    ssl_certificate_key /home/app/ssl/<redacted>.key;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X_FORWARDED_PROTO https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Url-Scheme $scheme;
    proxy_redirect off;
    proxy_max_temp_file_size 0;

    location /blog {
        rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
    }

    location ~* \.(js|css|jpg|jpeg|gif|png)$ {
        if (-f $request_filename) {
            expires max;
            break;
        }
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
Here's the non-SSL config:
server {
    listen 80;
    server_name <redacted>.com www.<redacted>.com;
    root /home/app/<redacted>/public;
    passenger_enabled on;
    rails_env production;

    location /blog {
        rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
    }

    location ~* \.(js|css|jpg|jpeg|gif|png)$ {
        if (-f $request_filename) {
            expires max;
            break;
        }
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
Let me know if there's any additional info I can give to help diagnose the issue.
It's your line here:
listen 443 default ssl;
change it to:
listen 443;
ssl on;
This is what I'd call the old style of enabling SSL.
Also, that along with
proxy_set_header X_FORWARDED_PROTO https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Url-Scheme $scheme;
proxy_redirect off;
proxy_max_temp_file_size 0;
did the trick for me. I see now I am missing the X-Real-IP line you have, but so far this got rid of my infinite loop problem with ssl_requirement and ssl_enforcer.
I've toyed around with a bunch of these answers but nothing worked for me. Then I realized that since I use Cloudflare, the problem might not be in the server but with Cloudflare. Lo and behold, when I set my SSL mode to Full (Strict), everything works as it should!
I found that it was this line:
proxy_set_header Host $http_host;
which should be changed to:
proxy_set_header Host $host;
According to the nginx documentation, by using $http_host you're passing the "unchanged request-header".
Have you tried using "X-Forwarded-Proto" instead of X_FORWARDED_PROTO?
I've run into a problem with this header before; it wasn't causing redirects, but changing it fixed things for me.
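For reference, the directive as it is usually written, with the scheme taken from the incoming request (the same form already appears in one of the configurations above):
proxy_set_header X-Forwarded-Proto $scheme;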
Since you have a rewrite statement in both the SSL and non-SSL sections:
location /blog {
    rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
}
Where is the server section for blog.<redacted>.com? Could that be the source of the issue?
I had a similar issue with my symfony2 application, albeit from a different cause: I had set fastcgi_param HTTPS off; when of course I needed fastcgi_param HTTPS on; in my nginx configuration.
location ~ ^/(app|app_dev|config)\.php(/|$) {
    satisfy any;
    allow all;

    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param HTTPS on;
}
In case someone else stumbles on this: I was attempting to configure both http and https via the same server {} block, but I only added the "listen 443" directive, believing that "this line is default and implied" meant it would also listen on port 80. It didn't. Uncommenting the "listen 80" line, so that both listen lines were present, corrected the infinite loop. I have no idea why it was getting a redirect at all, but it was.
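For illustration, a rough sketch of the combined block described above; the server name and certificate paths are placeholders:
server {
    listen 80;
    listen 443 ssl;
    server_name example.com;

    # Placeholder certificate paths; only used by the 443 listener.
    ssl_certificate     /etc/ssl/example.com.fullchain.pem;
    ssl_certificate_key /etc/ssl/example.com.key;

    # ... rest of the site configuration ...
}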
For those who are desperately searching for why their ownCloud keeps hitting a redirect loop in spite of having a good configuration file, I've found why it's not working.
My config:
nginx + php-fpm + MySQL on a fresh CentOS 6.5.
When installing php-fpm and nginx, the default ownership of /var/lib/php/session/ is root:apache.
php-fpm (behind nginx) stores PHP sessions here; if the nginx user does not have permission to write to it, it fails miserably to keep any login session, resulting in an infinite redirect loop.
So just add the nginx user to the apache group (usermod -a -G apache nginx) or change the ownership of this folder.
Have a nice day.
X_FORWARDED_PROTO, as in your file, can cause errors, and it did in my case. X-Forwarded-Proto is correct; the hyphens matter more than the letter case.
You can avoid those problems by sticking to conventions ;)
See also: Custom HTTP headers: naming conventions, and http://www.ietf.org/rfc/rfc2047.txt