Nginx: enable HTTPS/SSL when forwarding to another URL

Currently I'm working with an AWS Ubuntu EC2 instance running a Node.js app on port 3000 behind an Nginx reverse proxy. I have been trying to enable HTTPS and add an SSL certificate, and I've been successful in the sense that I don't get any errors in the nginx.conf file. However, I am redirecting my main website, "example.com", to the public DNS of the AWS server, and when I try to load the "http://example.com" or "https://example.com" page I get an "Unable to Connect" error from Firefox, which is my testing browser. Also, when I run sudo nginx -t there are no syntax errors in the configuration file, and when I check the /var/log/nginx/error.log file it is empty. Below is my current nginx.conf file.
Update: I changed server_name from example.com to the public DNS of my server, let's call it amazonaws.com. Now, when I type in https://amazonaws.com the page loads and the SSL certificate shows up when running the website through ssllabs.com. However, when I type in amazonaws.com or http://amazonaws.com I get a blank page like before.
user root;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    # max_clients = worker_processes * worker_connections / 4
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    gzip on;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_buffers 16 8k;

    # backend applications
    upstream nodes {
        server 127.0.0.1:3000;
        keepalive 64;
    }

    server {
        listen 80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;

        ssl_certificate /etc/nginx/ssl/example_com.crt;
        ssl_certificate_key /etc/nginx/ssl/example_com.key;
        ssl_protocols SSLv3 TLSv1;
        ssl_ciphers HIGH:!aNULL:!MD5;

        server_name example.com;

        # everything else goes to backend node apps
        location / {
            proxy_pass http://nodes;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $host;
            proxy_set_header X-NginX-Proxy true;
            proxy_set_header Connection "";
            proxy_http_version 1.1;
        }
    }
}

You should give this server block
server {
    listen 80;
    return 301 https://$host$request_uri;
}
a server_name (e.g. amazonaws.com) as well.
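For illustration, a minimal sketch of how that HTTP block could look, using the amazonaws.com placeholder name from the question (swap in your real public DNS name):

server {
    listen 80;
    server_name amazonaws.com;
    # send every plain-HTTP request to the HTTPS server block
    return 301 https://$host$request_uri;
}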

Related

Very slow page refresh with Nginx as reverse proxy for Express and NuxtJs

I configured my server with Nginx as a reverse proxy for a Nuxt/Express SSR application. For the moment I have a login page and a home page.
I can log in and log out without any problem.
However, when I'm logged in and I refresh the page, the loading time is very long.
I don't know if this is due to the Nginx configuration, the authentication API, or the redirection.
I've noticed that it also happens when I type the URL myself in the address bar.
Thanks in advance for your help.
Update:
Here is my nginx config:
worker_processes 1;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    map_hash_max_size 64;
    map_hash_bucket_size 64;

    map $sent_http_content_type $expires {
        "text/html" epoch;
        "text/html; charset=utf-8" epoch;
        default off;
    }

    server {
        listen 9998 ssl;
        server_name control.serenicity.fr;
        ssl_certificate "C:/nginx-1.19.0/ssl/mydomain.fr.crt";
        ssl_certificate_key "C:/nginx-1.19.0/ssl/mydomain.fr.key";

        location / {
            expires $expires;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_read_timeout 1m;
            proxy_connect_timeout 1m;
            proxy_pass http://10.0.5.11:3000;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Here are the requests when I refresh:
https://drive.google.com/file/d/1PeHzCaHvLNL8_YN9ZOEFNv_gxQe2jpu9/view?usp=sharing
https://drive.google.com/file/d/1b9m3cDL_LOv1d3Kaaucnq4Qi308P8TLT/view?usp=sharing
https://drive.google.com/file/d/1xe_EBnb_XXWo7IlaI47EtlaLE6eFgd2k/view?usp=sharing
Update 2:
After several hours of debugging, it seems the problem is not related to Nginx after all.
It comes from Nuxt in SSR mode: when I disable SSR, the problem goes away.

How to serve devpi with https?

I have an out-of-the-box devpi-server running on http://
I need to get it to work on https:// instead.
I already have the certificates for the domain.
I followed the documentation for nginx-site-config and created the /etc/nginx/conf.d/domain.conf file, which has the server{} block that points to my certificates (excerpt below).
However, my devpi-server --start --init is totally ignoring any/all nginx configuration.
How do I point the devpi-server to use the nginx configuration? Is it even possible, or am I totally missing the point?
/etc/nginx/conf.d/domain.conf file contents:
server {
    server_name localhost $hostname "";
    listen 8081 ssl default_server;
    listen [::]:8081 ssl default_server;
    server_name domain;
    ssl_certificate /root/certs/domain/domain.crt;
    ssl_certificate_key /root/certs/domain/domain.key;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;

    gzip on;
    gzip_min_length 2000;
    gzip_proxied any;
    gzip_types application/json;

    proxy_read_timeout 60s;
    client_max_body_size 64M;

    # set to where your devpi-server state is on the filesystem
    root /root/.devpi/server;

    # try serving static files directly
    location ~ /\+f/ {
        # workaround to pass non-GET/HEAD requests through to the named location below
        error_page 418 = @proxy_to_app;
        if ($request_method !~ (GET)|(HEAD)) {
            return 418;
        }
        expires max;
        try_files /+files$uri @proxy_to_app;
    }

    # try serving docs directly
    location ~ /\+doc/ {
        try_files $uri @proxy_to_app;
    }

    location / {
        # workaround to pass all requests to / through to the named location below
        error_page 418 = @proxy_to_app;
        return 418;
    }

    location @proxy_to_app {
        proxy_pass https://localhost:8081;
        proxy_set_header X-outside-url $scheme://$host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
This is the answer I gave to the same question on superuser.
Devpi doesn't know anything about Nginx; it just serves HTTP traffic. When we want to interact with a web app via HTTPS instead, we as the client need to talk to a front end that can handle it (Nginx), which will in turn communicate with our web app. This use of Nginx is known as a reverse proxy. As a reverse proxy, we also benefit from Nginx's ability to serve static files more efficiently than our web app could do itself (hence the "try serving..." location blocks).
Here is a complete working Nginx config that I use for devpi. Note that this is the /etc/nginx/nginx.conf file rather than a domain config like yours, because I'm running Nginx and devpi in Docker with Compose, but you should be able to pull out what you need:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    # Define the location for devpi
    upstream pypi-backend {
        server localhost:8080;
    }

    # Redirect HTTP to HTTPS
    server {
        listen 80;
        listen [::]:80;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name example.co.uk;  # This is the accessing address, e.g. https://example.co.uk
        root /devpi/server;         # This is where your devpi server directory is

        gzip on;
        gzip_min_length 2000;
        gzip_proxied any;

        proxy_read_timeout 60s;
        client_max_body_size 64M;

        ssl_certificate /etc/nginx/certs/cert.crt;      # Path to certificate
        ssl_certificate_key /etc/nginx/certs/cert.key;  # Path to certificate key
        ssl_session_cache builtin:1000 shared:SSL:10m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/pypi.access.log;

        # try serving static files directly
        location ~ /\+f/ {
            error_page 418 = @pypi_backend;
            if ($request_method !~ (GET)|(HEAD)) {
                return 418;
            }
            expires max;
            try_files /+files$uri @pypi_backend;
        }

        # try serving docs directly
        location ~ /\+doc/ {
            try_files $uri @pypi_backend;
        }

        location / {
            error_page 418 = @pypi_backend;
            return 418;
        }

        location @pypi_backend {
            proxy_pass http://pypi-backend;  # Using the upstream definition
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-outside-url $scheme://$host:$server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
With Nginx using this configuration and devpi running on http://localhost:8080, you should be able to access https://localhost or, with appropriate DNS for your machine, https://example.co.uk. A request will be:
client (HTTPS) > Nginx (HTTP) > devpi (HTTP) > Nginx (HTTPS) > client
This also means that you will need to make sure Nginx is running yourself, as devpi start won't know any better. You should at the very least see an Nginx welcome page.

Is it possible to use SSL in Odoo with Nginx while avoiding the standard ports (80 and 443)?

Following this tutorial I configured my Nginx like this:
upstream odoo8 {
    server 127.0.0.1:8069 weight=1 fail_timeout=0;
}
upstream odoo8-im {
    server 127.0.0.1:8072 weight=1 fail_timeout=0;
}

server {
    # server port and name (instead of 443 port)
    listen 22443;
    server_name _;

    # Specifies the maximum accepted body size of a client request,
    # as indicated by the request header Content-Length.
    client_max_body_size 2000m;

    # add ssl specific settings
    keepalive_timeout 60;
    ssl on;
    ssl_certificate /etc/ssl/nginx/server.crt;
    ssl_certificate_key /etc/ssl/nginx/server.key;
    error_page 497 https://$host:22443$request_uri;

    # limit ciphers
    ssl_ciphers HIGH:!ADH:!MD5;
    ssl_protocols SSLv3 TLSv1;
    ssl_prefer_server_ciphers on;

    # increase proxy buffer to handle some Odoo web requests
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;

    # general proxy settings
    # force timeouts if the backend dies
    proxy_connect_timeout 3600s;
    proxy_send_timeout 3600s;
    proxy_read_timeout 3600s;
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

    # set headers
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;

    # Let the Odoo web service know that we're using HTTPS, otherwise
    # it will generate URLs using http:// and not https://
    proxy_set_header X-Forwarded-Proto https;

    # by default, do not forward anything
    proxy_redirect off;
    proxy_buffering off;

    location / {
        proxy_pass http://odoo8;
    }
    location /longpolling {
        proxy_pass http://odoo8-im;
    }

    # cache some static data in memory for 60mins.
    # under heavy load this should relieve stress on the Odoo web interface a bit.
    location /web/static/ {
        proxy_cache_valid 200 60m;
        proxy_buffering on;
        expires 864000;
        proxy_pass http://odoo8;
    }
}
And I have these ports in my Odoo configuration:
longpolling_port = 8072
xmlrpc_port = 8069
xmlrpcs_port = 22443
proxy_mode = True
When I load https://my_domain:22443/web/database/selector in the browser, it loads well. But when I choose a database or perform any action, the address loses the HTTPS and the port, so it is loaded through port 80. I would then need to add the following to the Nginx configuration, and port 80 would have to be open:
## http redirects to https ##
server {
    listen 80;
    server_name _;

    # Strict Transport Security
    add_header Strict-Transport-Security max-age=2592000;
    rewrite ^/.*$ https://$host:22443$request_uri? permanent;
}
Is there a way to avoid this redirection? That way I could keep port 80 closed in order to avoid spoofing.
Update
I can open the login screen with the address https://my_domain:22443/web/login?db=database_name and work inside without problems, but if I log out in order to choose another database from the drop-down list, it loses the port and the SSL again.
Please try this construction:
## http redirects to https ##
server {
    listen 80;
    server_name _;

    if ($http_x_forwarded_proto = 'http') {
        return 301 https://my_domain.com$request_uri;
    }
}

Nginx reverse proxy configuration for multiple domains

I have multiple accounts/domains on my server. I'm using cPanel with Apache 2.4 and wanted to use Nginx as a front-end reverse proxy. I changed Apache's port, installed Nginx, and it works fine, but for one domain/account only. I want to use it for all my domains on the server and for any future accounts. I tried to enter a $domain variable instead of a specific domain, but realized later that Nginx doesn't support variables there. Same thing with the user directory. Here is my config file:
user nobody;
worker_processes 4;
error_log logs/error.log crit;
worker_rlimit_nofile 8192;

events {
    worker_connections 1024; # you might need to increase this setting for busy servers
    use epoll;               # Linux kernels 2.6.x change to epoll
}

http {
    server_names_hash_max_size 2048;
    server_names_hash_bucket_size 512;
    server_tokens off;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 10;

    # Gzip on
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 32k;
    gzip_types text/plain application/x-javascript text/xml text/css;

    # Other configurations
    ignore_invalid_headers on;
    client_max_body_size 8m;
    client_header_timeout 3m;
    client_body_timeout 3m;
    send_timeout 3m;
    connection_pool_size 256;
    client_header_buffer_size 4k;
    large_client_header_buffers 4 32k;
    request_pool_size 4k;
    output_buffers 4 32k;
    postpone_output 1460;

    # Cache most accessed static files
    open_file_cache max=10000 inactive=10m;
    open_file_cache_valid 2m;
    open_file_cache_min_uses 1;
    open_file_cache_errors on;

    # virtual hosts includes
    include "/etc/nginx/conf.d/*.conf";

    server {
        # this is your access logs location
        access_log /usr/local/apache/domlogs/accountusername/example.com;
        error_log logs/vhost-error_log warn;
        listen 80;

        # change to your domain
        server_name example.com www.example.com;

        location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
            # this is your public_html directory
            root /home/accountusername/public_html;
        }

        location / {
            client_max_body_size 10m;
            client_body_buffer_size 128k;

            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 16 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            proxy_connect_timeout 30s;

            # change to your domain name
            proxy_redirect http://www.example.com:8080 http://www.example.com;
            proxy_redirect http://example.com:8080 http://example.com;

            proxy_pass http://127.0.0.1:8080/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
What I'm trying to do is put in place a configuration that works for all domains on the server and for any future domains that get added. I've seen some forums and blogs explain how to set up virtual hosts (server blocks), but I'm not sure what they're used for. I'd appreciate it if anyone could provide some info about this. Should I set up virtual hosts? What needs to be changed in my configuration file? Thank you.
Your config is almost correct:
server {
    listen frontip:80 default_server;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_redirect http://$host:8080/ http://$host/;
    }
}
But the best way is not to use port 8080 at all. All you need to do is tell Nginx to bind only to the external IP: add the IP and the bind keyword to every listen directive in each server block.
server {
    listen frontip:80 default_server bind;

    location / {
        proxy_pass http://127.0.0.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
If you haven't missed anything, Nginx will not bind 127.0.0.1:80, so Apache can bind to it.
In this case you do not need any proxy_redirect directives, because you don't need any redirect rewrites.
For the root folder you can use variables, but it is much better to use a map:
http {
    ...
    map $host $root {
        hostnames;
        default             /var/www;
        .domain1.com        /home/user1/domain1.com;
        custom.domain1.com  /home/user1/custom;
        domain2.com         /home/user2/domain2.com;
        www.domain2.com     /home/user2/domain2.com;
    }

    server {
        listen frontip:80 default_server;
        root $root;

        location / {
            proxy_pass http://127.0.0.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
        }
    }
}
More about map: http://nginx.org/en/docs/http/ngx_http_map_module.html
Your idea is somewhat fanciful. To operate in a predictable and debuggable way, you should create a "server" block for every domain you serve, and write its domain name into the "proxy_redirect" directive accordingly, as sketched below.
To handle a lot of domains, get a list of them and write a shell/Perl/Python script that generates your actual config. Such a script will be rather simple.
And read the docs to understand clearly what "server blocks" are for. In short, they are the core of Nginx's performance magic.
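For illustration, a minimal per-domain server block of the kind such a script could emit, assuming a hypothetical domain1.com and Apache still listening on 127.0.0.1:8080 as in the question:

server {
    listen 80;
    server_name domain1.com www.domain1.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # rewrite backend redirects that would otherwise leak the :8080 port
        proxy_redirect http://domain1.com:8080/ http://domain1.com/;
        proxy_redirect http://www.domain1.com:8080/ http://www.domain1.com/;
    }
}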

nginx doesn't serve static assets in Rails 3

Stackoverflowers, I have a problem with my Rails Nginx configuration. I'm running a Rails 3.0.12 app, and I'm quite new to Nginx.
I can't seem to get Nginx to serve static assets. For every request to the /public folder I get a 404. I'm posting the Nginx configuration I have so far. Maybe I missed something.
nginx.conf:
user rails;
worker_processes 1;
daemon off;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    server_names_hash_bucket_size 64;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
sites-enabled/project.conf:
upstream project {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    # for UNIX domain socket setups:
    server unix:/tmp/project.socket fail_timeout=0;
}

server {
    listen 80;
    root /srv/www/project/current/public;
    passenger_enabled on;
    server_name dev.project.eu;
    server_name *.dev.project.eu;

    location / {
        # all requests are sent to the UNIX socket
        proxy_pass http://project;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        client_max_body_size 10m;
        client_body_buffer_size 128k;

        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;

        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;

        root /srv/wwww/project/current/public;
    }
}
I've tried removing the location / block from project.conf, but it didn't change anything; the assets are still not served.
I'm also aware of the serve_static_assets switch in Rails, but I'd rather have Nginx serve those assets, as it should.
You need to add something like this (documentation on locations):
location / {
    try_files $uri @ruby;
}

location @ruby {
    proxy_pass http://project;
}
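Putting that together with the question's setup, a rough sketch of how the server block could look (the paths, server names, and upstream are taken from the question; the exact header set is up to you):

server {
    listen 80;
    server_name dev.project.eu *.dev.project.eu;
    root /srv/www/project/current/public;

    location / {
        # serve a matching file from public/ directly if it exists,
        # otherwise hand the request to the Rails upstream
        try_files $uri @ruby;
    }

    location @ruby {
        proxy_pass http://project;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}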
I know this thread is over a year old, but I had the same problem in production.
The thing that made it work for me was running
rake assets:precompile
in development, and uncommenting
load 'deploy/assets'
even though I am using Rails 4.