I've configured Nginx with Express according to this article.
This is my Nginx configuration:
server {
    access_log /var/log/nginx/access.log upstream_time;
    listen 8080;
    server_name _;

    location / {
        # default port, could be changed if you use next with custom server
        proxy_pass http://localhost:4000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
The first request I send to the Node app through Nginx is fine, but after that, every request takes 60 seconds to respond. These are the logs:
192.168.13.27 - - [20/Feb/2019:09:16:01 +0100] "GET / HTTP/1.1" 200 302276 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"rt=0.324 uct="0.000" uht="0.002" urt="0.002"
192.168.13.27 - - [20/Feb/2019:09:17:04 +0100] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"rt=60.003 uct="0.000, 0.000" uht="60.002, 0.001" urt="60.002, 0.001"
If I make a direct request to the Node app on port 4000, everything is fine.
I've searched Stack Overflow, but the problem persists. I tried lowering the proxy_read_timeout and proxy_connect_timeout values, but that only helps for static files served by the Express static middleware; the other Express routes then throw a 504 Gateway Timeout error. How can I find and resolve the problem?
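One pattern worth comparing against (a sketch based on the WebSocket proxying example in the nginx docs, not a confirmed fix for this exact setup): only send Connection: upgrade when the client actually asked for an upgrade, via a map, so ordinary keep-alive requests are not treated as upgrade requests:

# in the http {} context
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    access_log /var/log/nginx/access.log upstream_time;
    listen 8080;
    server_name _;

    location / {
        proxy_pass http://localhost:4000;
        proxy_http_version 1.1;
        # "upgrade" only for real WebSocket requests, "close" otherwise
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}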
I'm trying to set up nginx as a reverse proxy for my web app.
The scheme:
nginx on a server host receives requests and transfers them to another host
nginx serves HTTPS (a wildcard certificate is used globally), the web app serves plain HTTP
at this stage I don't need nginx caching (at least while trying to figure out how to make it work)
nginx redirects all HTTP traffic to HTTPS
nginx blocks hostless requests
Here is a config:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    ssl_certificate ./certs/my_domain.crt;
    ssl_certificate_key ./certs/my_domain.key;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    server {
        listen 80;
        return 444;
    }

    server {
        listen 80;
        server_name my_domain.com www.my_domain.com *.my_domain.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name my_domain.com;

        location / {
            root html;
            index index.html index.htm;
        }
    }

    server {
        listen 443 ssl;
        server_name expert.my_domain.com;

        location / {
            proxy_pass http://192.168.0.83:50000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
While reading the nginx docs I noticed that caching has to be turned on explicitly (proxy_cache is off by default), but I'm not sure whether something else has to be done to turn it off completely.
The sequence:
1. log in to the web app through nginx
2. authorize and get a cookie
3. open a second tab with the app
4. get the cookie from step 2
5. refresh the page
6. lose the cookie (??)
Here are the request headers after step 3:
Request Method: GET
Request Scheme: https
Request Path: /
Request Headers:
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate, br
Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ky;q=0.6
Cookie: .AspNetCore.Identity.Application=*** LONG ASP NET CORE IDENTITY COOKIE ***
Host: expert.my_domain.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36
Upgrade-Insecure-Requests: 1
X-Original-Proto: http
X-Real-IP: 192.168.0.83
Sec-Fetch-Site: cross-site
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
X-Original-For: [::ffff:192.168.0.12]:54035
The cookie and the other headers are there.
And here is how it looks after a simple refresh:
Request Method: GET
Request Scheme: https
Request Path: /
Request Headers:
Cache-Control: max-age=0
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate, br
Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ky;q=0.6
Host: expert.my_domain.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36
Upgrade-Insecure-Requests: 1
X-Original-Proto: http
X-Real-IP: 192.168.0.83
Sec-Fetch-Site: cross-site
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
X-Original-For: [::ffff:192.168.0.12]:54038
Notice the absence of the Cookie header.
I believe this has something to do with caching. It seems that after opening the page in another tab, the browser (Chrome in this case) shows me a cached version where the cookie is present. I'm not sure who caches it though, nginx or Chrome. And after that, whoosh, it's gone.
Why do I think this is an issue with nginx? When doing the same thing with the app directly there are no problems of any kind; it fully works as expected.
What could be missing here? Is a caching problem involved? How can I completely disable caching with nginx?
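For what it's worth, a minimal sketch of how caching could be ruled out on the nginx side for the proxied host (the location mirrors the expert.my_domain.com server above; the proxy_buffering and Cache-Control lines are my additions, the latter aimed at the browser rather than at nginx):

location / {
    proxy_pass http://192.168.0.83:50000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # nginx does not cache proxied responses unless proxy_cache points at a
    # cache zone, but response buffering can still be disabled explicitly:
    proxy_buffering off;

    # Ask the browser not to reuse responses across tabs/refreshes:
    add_header Cache-Control "no-store, no-cache, must-revalidate" always;
}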
I have a problem with a QGIS system behind an nginx reverse proxy.
Some requests result in a 404 when they go through nginx; without nginx the 404 errors do not happen.
I will post some vital parts of the nginx config:
log_format main '$host $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" $request_time $upstream_addr $upstream_http_status "$upstream_addr$request_uri"';
access_log /data/logs/nginx/access.log main buffer=4k;
upstream backend-geodatenportal {
    ip_hash;
    zone http_backend 256k;
    server 172.28.136.21:80 weight=1 max_fails=3 fail_timeout=30s;
    keepalive 1024;
}

server {
    listen geodatenportal.domain.tld:443 ssl;
    server_name geodatenportal.domain.tld;
    ...

    location / {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $host;
        proxy_set_header ClientProtocol https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend-main;
    }

    location ~/kommunalportal(.*)$ {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $host;
        proxy_set_header ClientProtocol https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend-geodatenportal/kommunalportal$1;
    }
Now the problematic requests.
In the nginx log:
geodatenportal.domain.tld - $IP - - [18/Sep/2019:09:59:54 +0200] "GET /kommunalportal/index.php/view/media/illustration?repository=kp&project=m01_Klostermansfeld HTTP/2.0" 404 72 "https://geodatenportal.domain.tld/kommunalportal/index.php/view/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" "-" 0.031 172.28.136.21:80 - "172.28.136.21:80/kommunalportal/index.php/view/media/illustration?repository=kp&project=m01_Klostermansfeld"
With the log format from the top, the last quoted string ($upstream_addr$request_uri) should be the request sent to the backend.
But this is what Apache logs:
194.113.79.210 - - [18/Sep/2019:09:59:54 +0200] "GET /kommunalportal/index.php/view/media/illustration HTTP/1.1" 404 72
So it seems the GET parameters get lost on their way from nginx to Apache.
Can someone explain this?
You use a regex location to match against the URI and then use the capture in proxy_pass. NGINX regex location matching is done against the URI without the query arguments.
If you want to pass them on, just add them like so:
proxy_pass http://backend-geodatenportal/kommunalportal$1$is_args$args;
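In context, the location block from the question would then look roughly like this (a sketch; only the proxy_pass line changes):

location ~/kommunalportal(.*)$ {
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Host $host;
    proxy_set_header ClientProtocol https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # $is_args expands to "?" only when the request has a query string,
    # and $args carries the original parameters to the backend
    proxy_pass http://backend-geodatenportal/kommunalportal$1$is_args$args;
}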
I had a working setup for static file serving with nginx, using HTTP.
Later, when I switched to HTTPS, changing the port from 80 to 443 and adding the SSL certificate and key, the web server could not find the static files any more. I get 404 responses for all static files. This is my nginx configuration file:
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    upstream http_backend {
        server 127.0.0.1:3001;
        keepalive 32;
    }

    server {
        listen 80;
        server_name test.com;

        # Redirect to https
        location / {
            rewrite ^ https://$host$request_uri permanent;
        }
    }

    server {
        listen 443 default_server ssl;
        server_name localhost;
        root /var/www/myblog/app/resources/public;

        ssl on;
        ssl_certificate /etc/nginx/certificate.crt;
        ssl_certificate_key /etc/nginx/key.pem;

        location / {
            proxy_redirect off;
            proxy_pass http://http_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Forwarded-Proto https;
            access_log /var/www/logs/myblog.access.log;
            error_log /var/www/logs/myblog.error.log;
        }

        location ^~ /(css|js) {
            root /var/www/myblog/app/resources/public;
        }
    }
}
My access log:
37.201.226.254 - - [26/Aug/2018:06:41:25 +0000] "GET /js/compiled/foo.js HTTP/1.1" 404 38 "https://www.test.com/" "Mozilla/5.0 (X11; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0"
37.201.226.254 - - [26/Aug/2018:06:41:25 +0000] "GET /css/mui.css HTTP/1.1" 404 38 "https://www.test.com/" "Mozilla/5.0 (X11; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0"
37.201.226.254 - - [26/Aug/2018:06:41:25 +0000] "GET /css/style.css HTTP/1.1" 304 0 "https://www.test.com/" "Mozilla/5.0 (X11; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0"
The files are in /var/www/myblog/app/resources/public, which is configured as the root. I run the web server from /var/www/myblog/app.
Bear in mind this is my first deployment with NGINX. Does anyone have an idea what I forgot to configure to serve static files over HTTPS? My OS is Linux.
I found out the problem through trial and error: changing
location ^~ /(css|js) {
root /var/www/myblog/app/resources/public;
}
to
location ~ ^/(css/|js/) {
root /var/www/myblog/app/resources/public/;
}
seems to work. The likely reason: with the ^~ modifier, /(css|js) is treated as a literal prefix string, which never matches a real request path such as /css/style.css, while the ~ modifier turns it into a regular expression that does match.
I am trying to push a Docker image to a local Docker repo on Artifactory:
docker push myNginxlb:2222/ubuntu
This gets a 403 Access is forbidden error. Following is my reverse proxy configuration under /etc/nginx/sites-enabled/artifactory:
upstream artifactory_lb {
    server mNginxLb.mycompany.com:8081;
    server mNginxLb.mycompany.com backup;
}

log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';

server {
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/my-certs/myCert.pem;
    ssl_certificate_key /etc/nginx/ssl/my-certs/myserver.key;
    client_max_body_size 2048M;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://artifactory_lb;
        proxy_read_timeout 90;
    }

    access_log /var/log/nginx/access.log upstreamlog;

    location /basic_status {
        stub_status on;
        allow all;
    }
}

# Server configuration
server {
    listen 2222 ssl;
    server_name mNginxLb.mycompany.com;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    rewrite ^/(v1|v2)/(.*) /api/docker/my_local_repo_key/$1/$2;
    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://artifactory_lb;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The access log shows the following HTTP requests:
"GET /v2/ HTTP/1.1" 404 465 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
"GET /v2/ HTTP/1.1" 404 465 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
"GET /v1/_ping HTTP/1.1" 404 469 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
"PUT /v1/repositories/ubuntu/ HTTP/1.1" 403 449 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
Also, in Artifactory I have configured the Docker local repo to use the v2 API. What am I missing?
I fixed this by appending the intermediate certificate to the SSL certificate configured above.
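For reference, a rough sketch of what that concatenation can look like on disk (file names other than myCert.pem are hypothetical):

# Build a chained certificate: server certificate first, then the intermediate
cat myCert.pem intermediate.pem > /etc/nginx/ssl/my-certs/myCert-chained.pem
# then point nginx at the chained file:
# ssl_certificate /etc/nginx/ssl/my-certs/myCert-chained.pem;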
I have a problem. Apache listens on a public IP and proxies all /ssd requests to nginx, which proxies /city-dashboard requests to another server using WebSockets. In the Apache config:
ProxyPass /ssd/ http://10.127.32.24
ProxyPassReverse /ssd/ http://10.127.32.24
The nginx config, in nginx.conf:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
include /etc/nginx/conf.d/*.conf;
In default.conf:
location /city-dashboard/stream {
    proxy_pass http://10.127.32.24:5000/stream;
    proxy_set_header Host $host;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
Request headers:
Connection: Upgrade
Upgrade: Websocket
Response headers:
Connection: close
Status Code 400 Bad Request
What am I doing wrong?
What about the Authorization header in the request? It seems like an authorization problem.
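Separately, one thing that may be worth checking on the Apache side (an assumption based on the symptoms, not something the question confirms): depending on the Apache version, a plain http:// ProxyPass may not perform the WebSocket upgrade itself. With mod_proxy_wstunnel loaded, the stream path can be proxied as ws://, roughly like this (the exact external paths here are hypothetical and depend on how /ssd/ is mapped):

# Hypothetical Apache sketch, assuming mod_proxy_wstunnel is enabled
ProxyPass        /ssd/city-dashboard/stream ws://10.127.32.24/city-dashboard/stream
ProxyPassReverse /ssd/city-dashboard/stream ws://10.127.32.24/city-dashboard/stream
ProxyPass        /ssd/ http://10.127.32.24/
ProxyPassReverse /ssd/ http://10.127.32.24/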