My situation is as follows:
app 1 running at: server.domain.com (192.168.1.3)
app 2 running at: server.domain.com:8080 (192.168.1.2)
My router is set up to route requests on port 80 to app 1 and port 8080 to app 2.
So far so good, this scenario has been working for ages.
Recently I tried switching to nginx and decided to redirect HTTP traffic to HTTPS for app 1.
I set up a container with nginx and am using the following config:
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
# main server block
server {
    listen 443 ssl default_server;
    root /config/www;
    index index.html index.htm index.php;
    server_name _;
    ssl_certificate /path to cert;
    ssl_certificate_key /path to cert;
    ssl_dhparam /path to cert;
    ssl_ciphers '';
    ssl_prefer_server_ciphers on;
    client_max_body_size 0;
    location / {
        try_files $uri $uri/ /index.html /index.php?$args =404;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # With php7-cgi alone:
        fastcgi_pass 127.0.0.1:9000;
        # With php7-fpm:
        #fastcgi_pass unix:/var/run/php7-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }
}
This successfully redirects http to https and app 1 works as expected.
However, when trying to visit app 2 I am also redirected to https (which shouldn't happen; app 2 doesn't support it).
Now I already figured out why this happens.
Google Chrome caches redirects, so when I visit the non-https URL it gets a 301 redirect to the https version. Chrome saves this in its cache and from then on assumes I always want https, regardless of the port.
The workaround I've found is going to chrome://net-internals and clearing the cache there. Opening app 2 then succeeds, but after visiting app 1 I end up in the same loop all over again.
I've tried several default fixes found all over the net but none of them have worked thus far.
Anyone know what I have to put in my config to fix this?
PS: cert paths, domain names, and ports are fake representations of the real situation
First off, it would be helpful if you labeled which server definition in the nginx config corresponds to app 1 and which to app 2, because there appears to be a mix-up in the configuration. You are also missing some configuration, such as listening on port 8080. So first I'll clarify the requirements you clearly stated for both apps:
App 1:
Listens on port 80
Uses SSL
App 2:
Listens on port 8080
Does not use SSL / doesn't support it.
So I'd recommend config closer to:
# Corresponds better to app 2 given your requirements
server {
    listen 8080 default_server;
    server_name _;
    # NOTE: You may want to handle certain routes here without the redirect, e.g.
    # location /foo/ { . . . }
    return 301 $scheme://$host$request_uri;
}
# main server block - app 1
server {
    listen 443 ssl default_server;
    . . . # The rest of your definition here is fine for an SSL server
}
My main point here is that the server block on port 80, as you've defined it above, is just a hardcoded redirect machine to https. It contradicts the requirements that you "route requests on port 80 to app 1" and "use SSL for app 1", since your SSL configuration is actually in the second server definition. What you've set up in the first server definition is the pattern used to force ssl redirects, which leaves you in a position where you'll never serve non-ssl HTTP traffic. This might clear up the issue somewhat; perhaps I can help more once the server blocks more closely match the stated requirements.
Finally, note that it is possible to listen on multiple ports and handle both http and https traffic within one server definition block:
server {
    listen 80;
    listen 443 ssl;
    # can force some routes to be ssl or non-ssl accordingly
}
Configuration like this may be a better fit if both app servers are hosted on the same machine behind the same nginx service.
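To make that comment concrete, here is a minimal sketch of forcing HTTPS for one route inside such a combined block. This is only an illustration: the /admin/ route and the cert paths are assumptions rather than anything from the question, and the certificate directives are still required for the ssl listener to work.
server {
    listen 80;
    listen 443 ssl;
    server_name server.domain.com;
    ssl_certificate /path/to/fullchain.pem; # placeholder path
    ssl_certificate_key /path/to/privkey.pem; # placeholder path
    # Force HTTPS for this one route; everything else is served on both schemes.
    location /admin/ {
        if ($scheme != "https") {
            return 301 https://$host$request_uri;
        }
        # . . . normal handling for /admin/ here . . .
    }
}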
I just set up a new website.
After setting everything up (SSL with Let's Encrypt), there is a "too many redirects" problem.
It took me hours to figure out that I could solve it just by switching from Flexible to Full in my Cloudflare settings. But why? Can somebody explain the details to me?
Nginx conf:
server {
    server_name mysite.com;
    root /root/mysite;
    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = mysite.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name mysite.com;
    listen 80;
    return 404; # managed by Certbot
}
When Cloudflare is configured in Flexible mode, the connection works like this:
Between the end user and Cloudflare, HTTPS is used
Between Cloudflare and your origin server, HTTP is used
This can be useful if the origin does not support HTTPS but you still want end users to connect securely to Cloudflare. The recommendation is to always have end-to-end TLS enabled with fully valid certificates.
If your origin is configured to redirect HTTP requests to HTTPS, then we enter a loop: the redirected HTTPS request goes back to Cloudflare, then Cloudflare makes an HTTP request to the origin ... and back to where it started!
In your case you seem to have a fully valid Let's Encrypt certificate on your origin server, so you should use Full (Strict). More information is also available here.
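As a side note, if the origin really could not use a certificate and Flexible mode had to stay, a common origin-side workaround is to skip the redirect when the visitor-facing request was already HTTPS. This is a minimal sketch, assuming Cloudflare's X-Forwarded-Proto header reaches the origin; it is an illustration, not part of the fix above.
server {
    listen 80;
    server_name mysite.com;
    root /root/mysite;
    index index.html;
    # Cloudflare reports the visitor's scheme in X-Forwarded-Proto;
    # only redirect when the visitor-facing request was plain HTTP.
    if ($http_x_forwarded_proto != "https") {
        return 301 https://$host$request_uri;
    }
    location / {
        try_files $uri $uri/ =404;
    }
}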
I am a newbie to Nginx config. I have a process, an Express app, running on port 3000 under pm2. I have allowed port 3000 through ufw as well, and have made a server instance in Nginx to proxy it:
server {
    # SSL configuration
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name .mysite.co;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/django/mysite;
    }
    proxy_cache mysite;
    location / {
        include proxy_params;
        proxy_pass http://unix:/home/django/mysite/mysite.sock;
    }
    gzip_comp_level 3;
    gzip_types text/plain text/css image/*;
    ssl_certificate /etc/letsencrypt/live/mysite.co/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.co/privkey.pem; # managed by Certbot
}
server {
    if ($host = www.mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name .mysite.co;
    return 404; # managed by Certbot
}
server {
    listen 3000;
    listen 443 ssl http2;
    server_name .mysite.co:3000;
    location / {
        proxy_pass https://localhost:3000;
    }
}
I ran netstat -napl | grep 3000 and could confirm the process is running; pm2 status also says it's running, and there are no errors in the log either.
How could I make this work? Thanks for the help in advance.
You won't be able to have nginx listen on port 3000 as well as your node process, as only one service can listen on a given port at a time. So you'll need to ensure nginx is listening for connections on a different port. I imagine what you're trying to do is listen on port 80 / 443 and then pass the request on to your Express service, which is listening on port 3000?
In this case your bottom server block is nearly correct. To get this working without TLS/SSL (just on port 80), you'll want to use something like this:
server {
    listen 80;
    server_name node.mysite.co;
    location / {
        proxy_pass http://localhost:3000;
    }
}
This is a very basic example and you'll probably want to toggle some other settings. It will make http://node.mysite.co proxy through to whatever service (in this case an Express server) is listening on port 3000 locally.
You do not need to make a firewall (ufw) exception for port 3000 in this case, as it's a local proxy pass. You should close the port on the firewall so people can't access it directly; this way they must go through nginx.
If you want to get SSL/TLS working, you'll want another block that looks something like the following. Again, this is very basic and doesn't include a lot of settings you'll probably want to research and set (such as cipher choices).
server {
    listen 443 ssl;
    server_name node.mysite.co;
    ssl_certificate certs/mysite/server.crt;
    ssl_certificate_key certs/mysite/server.key;
    location / {
        proxy_pass http://localhost:3000;
    }
}
You'll need to replace the cert and key paths to point to your SSL/TLS certificate and key respectively. This will let you access https://node.mysite.co, and requests will be proxied on to the service on port 3000 as well.
Once you've done that, you might then choose to go back and change the http (port 80) server block into a redirect to https, to force https-only connections, as sketched below.
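A minimal sketch of that redirect, reusing the node.mysite.co name assumed above:
server {
    listen 80;
    server_name node.mysite.co;
    # Send all plain-HTTP requests to the HTTPS server block above.
    return 301 https://$host$request_uri;
}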
Also note that I've made the server_name different from your existing django server_name by using a subdomain (node.mysite.co). You might wish to change this value, but you can't have two server blocks listening on the same port with the same server_name, otherwise nginx would have no idea what to do with the request. I'm sure you're doing this anyway, but I wanted to make sure it was explicit and would work with your existing setup.
If you wish the site to be served only for mysite.co:3000
If for some reason you want the user to go to port 3000 on the domain mysite.co, then you will need to set the listen to 3000 and keep the server_name as "mysite.co" (see the sketch below). This will allow someone to go to mysite.co:3000 in their browser and hit your node service. I imagine this isn't really what you want for a public-facing website, though, and it also won't line up very nicely with your port 443 version.
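Here is a sketch of that variant. Since nginx and the Express app cannot both bind port 3000 (as noted above), this assumes the app itself is moved to some other local port; 3001 is a made-up example.
server {
    listen 3000;
    server_name mysite.co;
    location / {
        proxy_pass http://localhost:3001; # assumed new local port for the Express app
    }
}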
Note: I don't claim to be an nginx expert, but I've used it for all my node projects for the past few years and I find this setup to be pretty clear. There might be some nicer syntax you can use.
I have an nginx server running multiple vhosts. I configured one more vhost and tried to make it https, but when I try to access it, it redirects to the default page. I have configured SSL certs with Let's Encrypt.
My config file is:
server {
    listen 443 ssl;
    root /var/www/html;
    server_name abc.xyz.com;
    include includes/letsencrypt;
    location / {
        proxy_pass http://abc;
        include includes/proxy-config;
    }
}
I have also tried the config below:
server {
    listen 80;
    server_name abc.xyz.com;
    return 301 https://abc.xyz.com$request_uri;
}
server {
    listen 443 ssl;
    server_name abc.xyz.com;
    ssl on;
    include includes/letsencrypt;
    access_log /var/log/nginx/log/abc.access.log;
    error_log /var/log/nginx/log/abc.error.log;
    location /.well-known/acme-challenge {
        root /var/www/letsencrypt;
    }
    location / {
        proxy_pass http://abc;
    }
}
After this, the page redirects to my firewall.
Port 443 is also opened up.
Any ideas what is wrong here?
I have nailed this down by adding a NAT rule in the firewall.
Basically, nothing is wrong in the above configuration.
I had only opened the port on the firewall, but opening the port only covers the path between the Internet and the firewall.
The NAT rule redirects traffic from public-ip:443 -> local-ip:443.
I too had this problem, but for me the solution was eventually found in a problem with the php-fpm configuration file. There was a problem creating/accessing the error log for php-fpm, which I had turned on myself in the php-fpm config file earlier, thinking it was a good thing to do. Turning it back off again and restarting php-fpm and nginx got everything working as expected.
Just in case you're googling around like I was and kept finding this question at the top ;-)
I am using an nginx docker container to deploy my app on an AWS server. I have to access my API using an nginx proxy URL which looks like https://domain.com/api/. This is an https request, so I have to proxy it to another port where the API service is running; that service runs in another docker container on the same server instance. So my nginx conf file looks like this:
server {
    listen 80;
    server_name domain.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    server_name domain.com;
    # add Strict-Transport-Security to prevent man in the middle attacks
    add_header Strict-Transport-Security "max-age=31536000";
    location /api/ {
        proxy_pass http://my-public-ip-address:3000;
    }
}
My problem is that when I try to access the API endpoint using the above URL, it shows ERR_TOO_MANY_REDIRECTS. Does anyone know about this issue? I have also gone through all the articles mentioning the same issue, but with no luck.
I haven't seen anything related to this topic on Google, and since I'm a newbie with Nginx I'd like to ask a question about load balancing: I have a dedicated server currently running Apache with multiple accounts and domains. I'd like to switch to Nginx and set up load balancing for only one of these domains (mydomain1.com), balancing traffic between this dedicated server and another 3. I have the following Nginx config (/etc/nginx/conf.d/default.conf) on my dedicated server:
upstream mywebsite1 {
    ip_hash;
    server xxx.xxx.xxx.196 weight=1 max_fails=3 fail_timeout=15s;
    server xxx.xxx.xxx.67 weight=1 max_fails=3 fail_timeout=15s;
    server xxx.xxx.xxx.201 weight=1 max_fails=3 fail_timeout=15s;
}
server {
    listen 80;
    server_name mywebsite1.com;
    access_log /var/log/nginx/proxy.log;
    location / {
        proxy_pass http://mywebsite1;
    }
    #error_page 404 /404.html;
    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}
}
But this is not working, and when I read proxy.log I see it is balancing traffic not just from mywebsite1.com but also from my other domains: mywebsite2.com, mywebsite3.com, etc. Any help is really appreciated since, as you can see, I'm not an expert! Thanks :)
I know it is a years-old question, but it might still help someone.
To make it work like you want, you must define at least two virtualhosts (server blocks).
The 1st is the so-called "default", which serves everything that is not matched by any other virtualhost. In nginx the default is the server block marked default_server on its listen directive, conventionally combined with the catch-all name:
server_name _;
You can add an index.html to that virtualhost telling visitors where to go, display some sort of error message, or redirect visitors to the right place without any message; whatever suits your purposes.
But some sort of default is required if you want your other virtualhost block(s) to serve only specific domain(s) and nothing else.
The 2nd is "mywebsite1.com", which serves only that particular domain. Your configuration for that domain is correct. And you can add more virtualhost blocks for different domains.
If you only have one virtualhost (even if it is not the "default" type) then every single http request will go to that virtualhost, regardless of whether the domain name matches or not.
Keep in mind that you should define a different root path for every virtualhost, unless you want them all to serve the same content:
root /some/path;
Which domain is served by which virtualhost is defined through the server_name directive.
"_" is the conventional catch-all name for the default virtualhost, which serves anything that does not match some other virtualhost.
You can define more than one domain if you want a virtualhost block to serve several (do not forget to add both with and without www if you want both to work):
server_name www.example.com example.com some.other.domain.com;
You can also use wildcards:
server_name *.example.com;
So a correct config file would be something like this:
# default virtualhost to serve everything that does not match other virtualhosts
server {
    listen 80 default_server;
    server_name _;
    root /some/path/default_site;
    # add other rules for default site
}
# virtualhost to serve only (www.)mywebsite1.com
server {
    listen 80;
    # please note that you need to add both with and without "www." if you want both to work.
    server_name mywebsite1.com www.mywebsite1.com;
    root /some/path/mywebsite1.com;
    # add other rules for mywebsite1.com
}
# virtualhost for example.com (without www)
server {
    listen 80;
    server_name example.com;
    root /some/path/example.com;
    # add other rules for example.com
}
If you send all of your traffic to your Nginx server, it has to do something with it. Since you only have one server block, it will take the traffic for all host names, regardless of what its server_name is configured to be.
If you don't want Nginx to handle traffic for all of your domains, simply don't point all of your domains at it (with DNS).
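If other traffic does keep arriving at nginx, a minimal sketch of a catch-all block that refuses unmatched host names outright (rather than serving a default site, as in the other answer) looks like this:
server {
    listen 80 default_server;
    server_name _;
    # 444 is nginx-specific: close the connection without sending a response.
    return 444;
}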