I'm running my application on CentOS 6.4 with Nginx 1.0.15 and gunicorn 19.1.1. My application works fine if I am just using port 80 and not using SSL. However, when I attempt to add SSL to the site, Nginx redirects to https://, but all I get after the redirect is "web page not available", with no additional information.
upstream apollo2_app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/webapps/apollo2/run/gunicorn.sock fail_timeout=0;
}

#server {
#    listen 80;
#    server_name mysub.example.com;
#    rewrite ^ https://$server_name$request_uri? permanent;
#}

# This works fine like this, but when I uncomment the above
# and the below ssl information, I get "webpage not available."
server {
    listen 80;
    # listen 443;
    # ssl on;
    # ssl_certificate /etc/nginx/ssl/2b95ec8183e5d1asdfasdfsadf.crt;
    # ssl_certificate_key /etc/nginx/ssl/example.com.key;
    # server_name mysub.example.com;

    client_max_body_size 4G;
    keepalive_timeout 70;

    access_log /webapps/apollo2/logs/nginx-access.log;
    error_log /webapps/apollo2/logs/nginx-error.log;

    location /static/ {
        alias /webapps/apollo2/static/;
    }

    location /media/ {
        alias /webapps/apollo2/media/;
    }

    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if and only if you use HTTPS; this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects; we set the Host: header above already.
        proxy_redirect off;

        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff. It's also safe to set if you're
        # only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;

        # Try to serve static files from nginx; no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://apollo2_app_server;
            break;
        }
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /webapps/apollo2/static/;
    }
}
I do not see anything in error logs.
I have checked port 443 here and
it is open: http://www.yougetsignal.com/tools/open-ports/
This is a wildcard certificate that I am using successfully on another subdomain, on a different server running Debian 7 with Nginx, with what I think is the same setup.
What should I be looking at? What am I missing?
I should have also shown my iptables rules, as someone would certainly have figured this out then. I'm no expert in this area, but something in my firewall setup caused the redirect to fail.
I ended up using the example from Linode, and now it works.
https://www.linode.com/docs/security/securing-your-server#creating-a-firewall
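For reference, the key change is allowing inbound traffic on port 443. A minimal rules file in the style of the Linode guide (loadable with iptables-restore; the exact rule set here is illustrative, not my actual one) might look like:

```
*filter
# Allow loopback and already-established connections
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow HTTP and, crucially, HTTPS -- without the 443 rule the browser
# shows "web page not available" after the redirect to https://
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
# Allow SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# Drop everything else
-A INPUT -j DROP
COMMIT
```

Note that an external port scan can report 443 as reachable at the network edge while the host firewall still drops the packets, which is why the yougetsignal check was misleading.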
Related
I have a development server that's down a lot, and I'm trying to use my stable static web server to provide custom error pages for errored-out connections. However, I don't feel comfortable leaving clear-text communication going between the proxy/load-balancer and the dev server. Can I decrypt and re-encrypt communications between client and proxy and between proxy and dev server while intercepting any error responses, and if so, how?
I have a sample config, but I'm pretty sure I'm misunderstanding it.
server {
    listen 443;

    #send to the dev server
    proxy_pass 192.168.1.2:443;

    #decrypt downstream ssl
    ssl_certificate /etc/ssl/certs/frontend.crt;
    ssl_certificate_key /etc/ssl/certs/frontend.key;

    #Serve custom error page
    error_page 500 502 503 504 /custom_50x.html;
    location = /custom_50x.html {
        root /var/www/errors/html;
        internal;
    }

    #Encrypt upstream communication to the dev server
    proxy_ssl on;
    proxy_ssl_certificate /etc/ssl/certs/backend.crt;
    proxy_ssl_certificate_key /etc/ssl/certs/backend.key;
}
The Nginx http server cannot pass through SSL connections (AFAIK), so you must terminate SSL at this server. An upstream SSL connection is established by using https:// in the proxy_pass statement. See this document for details.
For example:
server {
    listen 443 ssl;

    #decrypt downstream ssl
    ssl_certificate /etc/ssl/certs/frontend.crt;
    ssl_certificate_key /etc/ssl/certs/frontend.key;

    location / {
        #send to the dev server
        proxy_pass https://192.168.1.2;

        # Using `https` with an IP address, you will need to provide
        # the correct hostname and certificate name to the upstream
        # server. Use `$host` if it's the same name as this server.
        proxy_set_header Host $host;
        proxy_ssl_server_name on;
        proxy_ssl_name $host;
    }

    #Serve custom error page
    error_page 500 502 503 504 /custom_50x.html;
    location = /custom_50x.html {
        root /var/www/errors/html;
        internal;
    }
}
The proxy_ssl directive relates to the stream server only. The proxy_ssl_certificate directives relate to client certificate authentication, which you may or may not require. Also, you were missing the ssl parameter on the listen statement. See this document for more.
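If the dev server does require a client certificate, the proxy_ssl_certificate directives would then go inside the location block that proxies upstream. A sketch, reusing the certificate paths from the question (the trusted-CA path is a hypothetical example):

```nginx
location / {
    proxy_pass https://192.168.1.2;
    proxy_set_header Host $host;
    proxy_ssl_server_name on;
    proxy_ssl_name $host;

    # Only needed if the upstream demands client certificate
    # authentication (mutual TLS):
    proxy_ssl_certificate /etc/ssl/certs/backend.crt;
    proxy_ssl_certificate_key /etc/ssl/certs/backend.key;

    # Optionally verify the upstream's own certificate as well:
    # proxy_ssl_verify on;
    # proxy_ssl_trusted_certificate /etc/ssl/certs/dev-ca.crt;
}
```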
I am running nginx on port 80 as a proxy server in front of Apache on port 8080, on CentOS 7.
I successfully configured both for HTTP, but after installing a Let's Encrypt certificate for Apache, I noticed Apache was receiving the HTTPS traffic directly. I tried to make nginx receive all HTTP and HTTPS traffic, but ran into issues.
I made a number of changes, such as disabling Apache on port 443 so it only listens on 8080.
I configured nginx to listen on both 80 and 443; additionally, I removed the certificate from Apache and added it to the nginx configuration files.
Currently the nginx configuration is as follows:
server {
    listen 80;
    listen [::]:80 default_server;
    #server_name _;
    server_name www.example.com;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://my.server.ip.add:8080;
        root /usr/share/nginx/html;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

server {
    listen 443 default_server;
    server_name www.example.com;
    root /usr/share/nginx/html;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/www.example.com/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    #ssl_dhparam /etc/pki/nginx/dh2048.pem;

    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA--REMOVED-SOME-HERE-SHA';

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Note: I am using PHP 7.0.
Currently the site is working on both HTTPS and HTTP, with one known issue: user images are not loading. I am not sure whether they are served by Apache or nginx; in the response headers I can see "nginx/1.10.2".
What I was actually going to implement: I was trying to run both Node.js and Apache behind nginx. I have not started Node yet.
My questions:
Is it really beneficial to use nginx in front and Apache at the backend? (I read it protects from DDoS attacks.)
Where should we put the certificate: at nginx or at Apache?
How can I add Node.js to the nginx configuration? I have already installed Node.js.
What is the best configuration for using both nginx and Apache?
Good evening,
First of all, the considerations you have made at the infrastructure level are sound, and in my opinion the proxy configuration, despite its implementation difficulties, is the best choice at this time. I've been using it for some time now and the benefits are enormous. However, I would like to ask what type of cloud infrastructure you are using, because many things change depending on it. For example, I use Google Cloud Platform, which is completely different from CloudFlare or AWS.
Your configuration is too convoluted and unclear in its structure. Try this approach: first, in the http context, declare an upstream for the backend containing the Apache server's IP address; then write the server and location contexts, including the parameters from the proxy_params file and an SSL snippet.
If you help me understand the infrastructure you have adopted, we can work out the configuration together, but it depends, because each infrastructure calls for a different configuration.
This also applies to PHP 7.0. For example, when configuring PrestaShop 1.7.1.1 with PHP 7.0 I had to make many changes to the CMS's php.ini, as I used PHP-FPM rather than CGI, but as I said this varies a great deal.
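A minimal sketch of that structure (the upstream name, backend IP, and snippet/include paths are illustrative assumptions, not the asker's actual values):

```nginx
# http context: name the Apache backend once
upstream apache_backend {
    server 192.0.2.10:8080;   # Apache's IP and port -- adjust to yours
}

server {
    listen 443 ssl;
    server_name www.example.com;

    # reusable SSL settings (certificate, protocols, ciphers) in a snippet
    include snippets/ssl-www.example.com.conf;

    location / {
        # reusable proxy headers kept in proxy_params
        include proxy_params;
        proxy_pass http://apache_backend;
    }
}
```

Keeping the SSL and proxy boilerplate in included files means each additional vhost (for example, a future Node.js upstream) only needs its own upstream and server blocks.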
see https://www.webfoobar.com/node/35
On AWS, I'm trying to migrate a PHP Symfony app running on nginx. I want to be able to test the app by directly talking to the EC2 server and via an ELB (the public route in).
I've set up an Elastic Load Balancer to decrypt all the SSL traffic and pass it to my EC2 server via port 80, and to pass port 80 straight through to the EC2 server as well.
Initially this caused infinite redirects in my app, but I researched and fixed that by adding
fastcgi_param HTTPS $https;
with some custom logic that looks at $http_x_forwarded_proto to figure out when it's actually via SSL.
There remains one issue I can't solve. When a user logs into the Symfony app, if they come via the ELB, the form POST eventually returns a redirect back to
https://elb.mysite.com:80/dashboard
instead of
https://elb.mysite.com/dashboard
which gives the user an error of "SSL connection error".
I've tried setting
fastcgi_param SERVER_PORT $fastcgi_port;
to force it away from 80, and I've also added the
port_in_redirect off
directive, but neither makes a difference.
The only way I've found to fix this is to alter the ELB 443 listener to pass traffic via HTTPS. The EC2 server has a self-signed SSL certificate configured, but this means the EC2 server is wasting capacity performing an unnecessary second decryption.
Any help much appreciated. Maybe there is a separate way within nginx of telling POST requests not to apply port numbers?
Nginx vhost config:
server {
    port_in_redirect off;
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/mysite.com/self-ssl.crt;
    ssl_certificate_key /etc/nginx/ssl/mysite.com/self-ssl.key;

    # Determine if HTTPS being used either locally or via ELB
    set $fastcgi_https off;
    set $fastcgi_port 80;
    if ( $http_x_forwarded_proto = 'https' ) {
        # ELB is using https
        set $fastcgi_https on;
        # set $fastcgi_port 443;
    }
    if ( $https = 'on' ) {
        # Local connection is using https
        set $fastcgi_https on;
        # set $fastcgi_port 443;
    }

    server_name *.mysite.com my-mysite-com-1234.eu-west-1.elb.amazonaws.com;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log error;

    rewrite ^/app\.php/?(.*)$ /$1 permanent;

    location / {
        port_in_redirect off;
        root /var/www/vhosts/mysite.com/web;
        index app.php index.php index.html index.html;
        try_files $uri @rewriteapp;
    }

    location ~* \.(jpg|jpeg|gif|png)$ {
        root /var/www/vhosts/mysite.com/web;
        access_log off;
        log_not_found off;
        expires 30d;
    }

    location ~* \.(css|js)$ {
        root /var/www/vhosts/mysite.com/web;
        access_log off;
        log_not_found off;
        expires 2h;
    }

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }

    location ~ ^/(app|app_dev|config)\.php(/|$) {
        port_in_redirect off;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param HTTPS $fastcgi_https;
        # fastcgi_param SERVER_PORT $fastcgi_port;
        #fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/vhosts/mysite.com/web$fastcgi_script_name;
        include fastcgi_params;
    }
}
References:
FastCGI application behind NGINX is unable to detect that HTTPS secure connection is used
https://serverfault.com/questions/256191/getting-correct-server-port-to-php-fpm-through-nginx-and-varnish
http://nginx.org/en/docs/http/ngx_http_core_module.html#port_in_redirect
Finally got a solution via another channel.
The answer is to comment out the SERVER_PORT line with a # in the fastcgi_params file.
Much thanks to Maxim from Nginx.
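Concretely, the change looks like this in the fastcgi_params file (typically /etc/nginx/fastcgi_params; the surrounding lines shown are from the stock file). With SERVER_PORT no longer passed, PHP falls back to a sensible default and Symfony stops appending :80 to the redirect URLs it builds:

```nginx
# /etc/nginx/fastcgi_params (excerpt)
fastcgi_param  SERVER_PROTOCOL    $server_protocol;
# fastcgi_param  SERVER_PORT      $server_port;   # commented out to fix :80 in redirects
fastcgi_param  SERVER_NAME        $server_name;
```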
I recently bought a Linode 1GB plan, installed Apache on it, and had my website up and running.
However, I then installed GitLab on the server and mapped it to a subdomain. Since the installation guides recommended using Nginx as the server, I installed it, and it is running on port 80. My Apache was not running because of the port conflict, but I fixed that by editing the /etc/apache2/ports.conf file and instructing Apache to serve on a different port.
Now the main domain, which points at Apache on port 8000, doesn't show up. GitLab is installed at http://gitlab.myserver.com and I am able to access it, but if I try to navigate to http://myserver.com, I get the content of http://gitlab.myserver.com.
My gitlab config serving under Nginx is as follows:
# GITLAB
# Maintainer: @randx
# App Version: 5.0

upstream gitlab {
    server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen my_server_ip default_server; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
    server_name gitlab.myserver.com;    # e.g., server_name source.example.com;
    server_tokens off;                  # don't show the version number, a security best practice
    root /home/git/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from defined root folder;
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # then the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 800; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;

        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_pass http://gitlab;
    }
}
My Apache config for myserver.com is as follows:
# domain: myserver.com
# public: /home/me/public/myserver.com/
<VirtualHost *:8000>
    # Admin email, Server Name (domain name), and any aliases
    ServerAdmin webmaster@myserver.com
    ServerName www.myserver.com
    ServerAlias myserver.com

    # Index file and Document Root (where the public files are located)
    DirectoryIndex index.html index.php
    DocumentRoot /home/me/public/myserver.com/public

    # Log file locations
    LogLevel warn
    ErrorLog /home/me/public/myserver.com/log/error.log
    CustomLog /home/me/public/myserver.com/log/access.log combined
</VirtualHost>
Where am I going wrong?
A good solution is to continue to run Nginx on port 80, while adding proxy directives to Nginx to serve as a proxy for specific domains that are running on Apache. An example Nginx configuration:
server {
    server_name www.myserver.com;
    server_name myserver.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
I do this myself, and it works great.
I solved the issue. The issue was with the following line in my gitlab config:
server {
    listen my_ip default_server;
    server_name gitlab.myserver.com; # e.g., server_name source.example.com;
    server_tokens off;               # don't show the version number, a security best practice
    root /home/git/gitlab/public;
    ......
}
I changed this to the following:
server {
    listen 80;
    server_name gitlab.myserver.com; # e.g., server_name source.example.com;
    server_tokens off;               # don't show the version number, a security best practice
    root /home/git/gitlab/public;
    ......
}
And I followed your instructions on serving sites on Apache via nginx using proxy_pass.
Thanks a lot..
Cheers....
My goal is to redirect from port 80 to 443 (force HTTPS), but I can't manage to get a working HTTPS configuration first. I get a 503 Server Error, and nothing appears in the logs. I've looked at all the posts on SO and SF; none of them worked (X_FORWARDED_PROTO and X-Forwarded-For headers don't make a difference). I'm on EC2 behind a load balancer, so I don't need to use the SSL-related directives, as I've configured my certificate on the ELB already. I'm using Tornado as the web server.
Here's the config, if anyone has ideas, thank you!
http {
    # Tornado server
    upstream frontends {
        server 127.0.0.1:8002;
    }

    server {
        listen 443;
        client_max_body_size 50M;
        root <redacted>/static;

        location ^~/static/ {
            root <redacted>/current;
            if ($query_string) {
                expires max;
            }
        }

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }
    }
}
Well, there are two different tasks here:
First, if you need to redirect all your HTTP traffic to HTTPS, you'll need to create an http server block in nginx:
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}
Second, if your SSL is terminated at the ELB, then you don't need an SSL-enabled nginx server at all. Simply pass traffic from the ELB to port 80 on your server.
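In that setup the 503 disappears because nginx never listens on 443 at all; it just serves the plain-HTTP traffic the ELB forwards. A sketch of that server block, reusing the upstream and proxy settings from the question (the X-Scheme change is my assumption about what the Tornado app needs):

```nginx
server {
    # The ELB terminates SSL and forwards both listeners here over HTTP.
    listen 80;
    client_max_body_size 50M;

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        # The original scheme survives in X-Forwarded-Proto, set by the ELB,
        # so pass that instead of $scheme (which is always "http" here).
        proxy_set_header X-Scheme $http_x_forwarded_proto;
        proxy_pass http://frontends;
    }
}
```

If you also want the 80-to-443 redirect, key it off $http_x_forwarded_proto rather than the listening port, since everything arrives on port 80.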