I have a problem with a QGIS system behind an nginx reverse proxy.
Some requests result in 404 when they go through nginx; without nginx the 404 errors do not happen.
Here are the vital parts of the nginx config:
log_format main '$host $remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" "$http_user_agent" '
                '"$http_x_forwarded_for" $request_time $upstream_addr '
                '$upstream_http_status "$upstream_addr$request_uri"';
access_log /data/logs/nginx/access.log main buffer=4k;
upstream backend-geodatenportal {
    ip_hash;
    zone http_backend 256k;
    server 172.28.136.21:80 weight=1 max_fails=3 fail_timeout=30s;
    keepalive 1024;
}
server {
    listen geodatenportal.domain.tld:443 ssl;
    server_name geodatenportal.domain.tld;
    ...
    location / {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $host;
        proxy_set_header ClientProtocol https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend-main;
    }
    location ~ /kommunalportal(.*)$ {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $host;
        proxy_set_header ClientProtocol https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend-geodatenportal/kommunalportal$1;
    }
Now the requests with problems.
In the nginx log:
geodatenportal.domain.tld - $IP - - [18/Sep/2019:09:59:54 +0200] "GET /kommunalportal/index.php/view/media/illustration?repository=kp&project=m01_Klostermansfeld HTTP/2.0" 404 72 "https://geodatenportal.domain.tld/kommunalportal/index.php/view/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" "-" 0.031 172.28.136.21:80 - "172.28.136.21:80/kommunalportal/index.php/view/media/illustration?repository=kp&project=m01_Klostermansfeld"
According to the log format above, the last quoted field ("$upstream_addr$request_uri") should show the request that goes to the backend.
But this is what Apache logs:
194.113.79.210 - - [18/Sep/2019:09:59:54 +0200] "GET /kommunalportal/index.php/view/media/illustration HTTP/1.1" 404 72
So it seems the GET parameters get lost on their way from nginx to Apache.
Can someone explain this?
You use a regex location to match the URI, then reuse the capture in proxy_pass. nginx matches regex locations against the URI without its arguments, so the query string never ends up in $1.
If you want to pass the arguments on, just add them like so:
proxy_pass http://backend-geodatenportal/kommunalportal$1$is_args$args;
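For context, a sketch of the full corrected location block, based on the configuration above:

location ~ /kommunalportal(.*)$ {
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Host $host;
    proxy_set_header ClientProtocol https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # $is_args expands to "?" when the request has a query string,
    # and $args carries the original GET parameters to the backend
    proxy_pass http://backend-geodatenportal/kommunalportal$1$is_args$args;
}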
Related
I've configured Nginx with Express according to this article.
This is my Nginx configuration:
server {
    access_log /var/log/nginx/access.log upstream_time;
    listen 8080;
    server_name _;
    location / {
        # default port, could be changed if you use next with custom server
        proxy_pass http://localhost:4000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
The first request I send to the Node app through nginx is OK, but after that, every request takes 60 seconds to respond. These are the logs:
192.168.13.27 - - [20/Feb/2019:09:16:01 +0100] "GET / HTTP/1.1" 200 302276 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"rt=0.324 uct="0.000" uht="0.002" urt="0.002"
192.168.13.27 - - [20/Feb/2019:09:17:04 +0100] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"rt=60.003 uct="0.000, 0.000" uht="60.002, 0.001" urt="60.002, 0.001"
If I make a direct request to the Node app on port 4000, everything is OK.
I've searched Stack Overflow for similar problems, but mine persists. I tried lowering the proxy_read_timeout and proxy_connect_timeout values, but that only helped for static files served by the Express static middleware; other Express routes then throw a 504 Gateway Timeout error. How can I find and resolve the problem?
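One thing worth ruling out, based on the config above: the hard-coded Connection "upgrade" header is sent to the Node backend on every request, not just WebSocket upgrades, which can interfere with how connections are reused. The nginx WebSocket proxying docs use a map so the header is only sent when the client actually asks for an upgrade; a sketch (the map belongs in the http context):

# Send "Connection: upgrade" only when the client requested an upgrade;
# otherwise tell the upstream to close the connection after the response
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

In the location block, proxy_set_header Connection $connection_upgrade; then replaces the hard-coded "upgrade". This may or may not be the cause of the 60-second stalls, but it removes one variable.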
I plan to manage multiple websites on the same server; currently nginx handles the HTTP request and then hands it off to Apache.
This is the configuration I currently have for my first website:
# Force HTTP requests to HTTPS
server {
    listen 80;
    server_name myfirstwebsite.net;
    return 301 https://myfirstwebsite.net$request_uri;
}

server {
    listen 443 ssl;
    root /var/opt/httpd/ifdocs;
    server_name myfirstwebsite.net;
    # add Strict-Transport-Security to prevent man in the middle attacks
    add_header Strict-Transport-Security "max-age=31536000" always;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/cert.pem;
    ssl_certificate_key /etc/pki/tls/certs/cert.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    access_log /var/log/nginx/iflogs/http/access.log;
    error_log /var/log/nginx/iflogs/http/error.log;
    ###include rewrites/default.conf;
    index index.php index.html index.htm;
    # Make nginx serve static files instead of Apache
    # NOTE this will cause issues with bandwidth accounting as files won't be logged
    location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
        expires max;
    }
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_pass https://127.0.0.1:4433;
    }
    # proxy the PHP scripts to Apache listening on <serverIP>:8080
    location ~ \.php$ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_pass https://127.0.0.1:4433;
    }
    location ~ /\. {
        deny all;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Now, my question is: for the second, third website and so on, I'm thinking of changing the line
proxy_pass https://127.0.0.1:4433;
to
proxy_pass https://secondwebsite.net:4433;
But I don't want the request to go out to the internet, resolve that DNS name, and come back to the same server; I want it served on the same machine (which is why I used localhost:4433 for the first website), so I don't run into latency issues.
Is there any solution for this?
Also, will there be issues if I serve multiple websites on the same port (4433 in this case), or do I have to use a different port for each website?
Thank you in advance.
Multiple server confs
One way to do this would be to have multiple server blocks, ideally in different conf files. Something like this would do for your second server, in a new file (e.g. /etc/nginx/sites-available/mysecondwebsite):
# Force HTTP requests to HTTPS
server {
    listen 80;
    server_name mysecondwebsite.net;
    access_log off; # No need for logging on this
    error_log off;
    return 301 https://mysecondwebsite.net$request_uri;
}

server {
    listen 443 ssl;
    root /var/opt/httpd/ifdocs;
    server_name mysecondwebsite.net;
    # add Strict-Transport-Security to prevent man in the middle attacks
    add_header Strict-Transport-Security "max-age=31536000" always;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/cert.pem;
    ssl_certificate_key /etc/pki/tls/certs/cert.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    access_log /var/log/nginx/iflogs/http/access.log;
    error_log /var/log/nginx/iflogs/http/error.log;
    ###include rewrites/default.conf;
    index index.php index.html index.htm;
    # Make nginx serve static files instead of Apache
    # NOTE this will cause issues with bandwidth accounting as files won't be logged
    location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
        expires max;
    }
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_pass https://127.0.0.1:4434;
    }
    # proxy the PHP scripts to Apache listening on <serverIP>:8080
    location ~ \.php$ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_pass https://127.0.0.1:4434;
    }
    location ~ /\. {
        deny all;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
You would then create a symlink with ln -s /etc/nginx/sites-available/mysecondwebsite /etc/nginx/sites-enabled/ and restart nginx. To answer your question about ports: you can only have one TCP application listening on any single port. This post provides a few more details about that.
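For reference, a minimal sketch of the enable-and-reload sequence, assuming the Debian-style sites-available/sites-enabled layout used above:

# Enable the new site, validate the configuration, then reload nginx
ln -s /etc/nginx/sites-available/mysecondwebsite /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx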
You could also define an upstream in your nginx conf like so:
upstream mysecondwebsite {
    server 127.0.0.1:4434; # Or whatever port you use
}
And then reference this upstream in proxy_pass like so:
proxy_pass http://mysecondwebsite;
This way, if you change the port, you only have to change it in one place in your conf. This is also how you would scale your application across multiple Apache servers and implement load balancing, as sketched below.
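For instance, a sketch of that load-balancing setup; the second backend's address is an assumption:

# Spread one site's traffic across several Apache backends
upstream mysecondwebsite {
    server 127.0.0.1:4434;   # local Apache
    server 10.0.0.12:4434;   # hypothetical second Apache server
}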
I had a working setup for serving static files with nginx over HTTP.
Later, when I switched to HTTPS, changing the port from 80 to 443 and adding an SSL certificate and key, the web server could no longer find the static files. I get 404 responses for all static files. This is my nginx configuration file:
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    upstream http_backend {
        server 127.0.0.1:3001;
        keepalive 32;
    }

    server {
        listen 80;
        server_name test.com;
        # Redirect to https
        location / {
            rewrite ^ https://$host$request_uri permanent;
        }
    }

    server {
        listen 443 default_server ssl;
        server_name localhost;
        root /var/www/myblog/app/resources/public;
        ssl on;
        ssl_certificate /etc/nginx/certificate.crt;
        ssl_certificate_key /etc/nginx/key.pem;
        location / {
            proxy_redirect off;
            proxy_pass http://http_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Forwarded-Proto https;
            access_log /var/www/logs/myblog.access.log;
            error_log /var/www/logs/myblog.error.log;
        }
        location ^~ /(css|js) {
            root /var/www/myblog/app/resources/public;
        }
    }
}
My access log:
37.201.226.254 - - [26/Aug/2018:06:41:25 +0000] "GET /js/compiled/foo.js HTTP/1.1" 404 38 "https://www.test.com/" "Mozilla/5.0 (X11; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0"
37.201.226.254 - - [26/Aug/2018:06:41:25 +0000] "GET /css/mui.css HTTP/1.1" 404 38 "https://www.test.com/" "Mozilla/5.0 (X11; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0"
37.201.226.254 - - [26/Aug/2018:06:41:25 +0000] "GET /css/style.css HTTP/1.1" 304 0 "https://www.test.com/" "Mozilla/5.0 (X11; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0"
The files are in /var/www/myblog/app/resources/public, which is configured as root. I run the web server from /var/www/myblog/app.
Bear in mind this is my first deployment with nginx. Does anyone have an idea what I forgot to configure for serving static files over HTTPS? My OS is Linux.
I found out the problem through random trial and error. Changing
location ^~ /(css|js) {
    root /var/www/myblog/app/resources/public;
}
to
location ~ ^/(css/|js/) {
    root /var/www/myblog/app/resources/public/;
}
works. The likely reason: ^~ is a prefix-match modifier, not a regex modifier, so /(css|js) was treated as a literal prefix that no request URI ever starts with; ~ ^/(css/|js/) is an actual regular expression match.
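Given that, an equivalent fix without a regex would be one plain prefix location per directory; a sketch:

# ^~ prefix locations: matched literally, and regex locations are skipped on a hit
location ^~ /css/ {
    root /var/www/myblog/app/resources/public;
}
location ^~ /js/ {
    root /var/www/myblog/app/resources/public;
}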
I am trying to push a Docker image to a local Docker repository on Artifactory:
docker push myNginxlb:2222/ubuntu
This fails with a 403 (Access is forbidden) error. Following is my reverse proxy configuration under /etc/nginx/sites-enabled/artifactory:
upstream artifactory_lb {
    server mNginxLb.mycompany.com:8081;
    server mNginxLb.mycompany.com backup;
}
log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: '
                       '$request upstream_response_time $upstream_response_time '
                       'msec $msec request_time $request_time';
server {
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/my-certs/myCert.pem;
    ssl_certificate_key /etc/nginx/ssl/my-certs/myserver.key;
    client_max_body_size 2048M;
    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://artifactory_lb;
        proxy_read_timeout 90;
    }
    access_log /var/log/nginx/access.log upstreamlog;
    location /basic_status {
        stub_status on;
        allow all;
    }
}
# Server configuration
server {
    listen 2222 ssl;
    server_name mNginxLb.mycompany.com;
    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }
    rewrite ^/(v1|v2)/(.*) /api/docker/my_local_repo_key/$1/$2;
    client_max_body_size 0;
    chunked_transfer_encoding on;
    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://artifactory_lb;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The access log shows the following HTTP requests:
"GET /v2/ HTTP/1.1" 404 465 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
"GET /v2/ HTTP/1.1" 404 465 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
"GET /v1/_ping HTTP/1.1" 404 469 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
"PUT /v1/repositories/ubuntu/ HTTP/1.1" 403 449 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
I have also configured the Docker local repository in Artifactory to use the v2 API. What am I missing?
I fixed this by appending the intermediate certificate to the SSL certificate referenced in the configuration.
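For reference, a minimal sketch of building such a chained certificate; the file names are assumptions:

# Server certificate first, then the intermediate CA certificate
cat myCert.pem intermediate.pem > myCertChain.pem

The ssl_certificate directive would then point at the combined file, e.g. /etc/nginx/ssl/my-certs/myCertChain.pem.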
I check the IP address in the controller with
request.env['REMOTE_ADDR']
This works fine in my test environment, but on the production server with nginx + Unicorn I always get 127.0.0.1.
This is my nginx config for the site:
upstream unicorn {
    server unix:/tmp/unicorn.urlshorter.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    # server_name example.com;
    root /home/deployer/apps/urlshorter/current/public;
    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }
    try_files $uri/index.html $uri @unicorn;
    location @unicorn {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn;
    }
    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
I had trouble with this too; I found this question, but the other answer didn't help me.
I looked at Rails 3.2.8's implementation of Rack::Request#ip to see how it decides what to return. To make it use an address passed via the environment without filtering out addresses from my local network (it tries to filter out intermediate proxies, which is not what I wanted), I had to set HTTP_CLIENT_IP from my nginx proxy configuration block in addition to what you have above (X-Forwarded-For must be present too for this to work):
proxy_set_header CLIENT_IP $remote_addr;
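Put together, a sketch of the resulting location block, based on the config in the question:

location @unicorn {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # the CLIENT_IP header shows up in the Rack env as HTTP_CLIENT_IP
    proxy_set_header CLIENT_IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://unicorn;
}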
If you use request.remote_addr you'll get the IP of your nginx proxy.
To get the real IP address of your user, you can use request.remote_ip.
According to Rails' source code, it checks various HTTP headers to give you the most relevant one; see the implementation in Rails 3.2 or Rails 4.0.0.beta1.
The answer is in your config file :) The following should do what you want:
real_ip = request.headers["X-Real-IP"]
more here: http://api.rubyonrails.org/classes/ActionDispatch/Request.html#method-i-headers
UPDATE: The proper answer is here in another Q:
https://stackoverflow.com/a/4465588
or in this thread:
https://stackoverflow.com/a/15883610
spoiler:
use request.remote_ip
For ELB - nginx - rails you want to follow this guide:
http://engineering.blopboard.com/resolving-real-client-ip-with-amazon-elb-nginx-and-php-fpm
See:
server {
    listen 443 ssl spdy proxy_protocol;
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;
    location /xxx {
        proxy_http_version 1.1;
        proxy_pass <api-endpoint>;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-By $server_addr:$server_port;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header CLIENT_IP $remote_addr;
        proxy_pass_request_headers on;
    }
...
The proxy_set_header CLIENT_IP $remote_addr; approach didn't work for me. Here's what did.
I found the solution after reviewing the ActionDispatch remote_ip.rb source. Now I get the proper IP in my Devise/Warden processes, as well as anywhere else I look at request.remote_ip.
My config...
Ruby 2.2.1 - Rails 4.2.1 - NGINX v1.8.0 - Unicorn v4.9.0 - Devise v3.4.1
nginx.conf
HTTP_CLIENT_IP vs CLIENT_IP
location @unicorn {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header HTTP_CLIENT_IP $remote_addr; # <-----
    proxy_redirect off;
    proxy_pass http://unicorn;
}
Source actionpack-4.2.1/lib/action_dispatch/middleware/remote_ip.rb
Line 114:
client_ips = ips_from('HTTP_CLIENT_IP').reverse
Line 126:
"HTTP_CLIENT_IP=#{#env['HTTP_CLIENT_IP'].inspect} " +