I'm trying to turn on Nginx's gzip feature to save some bandwidth, but with no luck. After searching for hours I still can't make it work.
Nginx was installed by Passenger. Here is the information about Nginx on my server:
nginx version: nginx/1.4.1
built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
TLS SNI support enabled
configure arguments: --prefix=/opt/nginx --with-http_ssl_module --with-http_gzip_static_module --with-http_stub_status_module --with-cc-opt=-Wno-error --add-module=/home/lijc/.rvm/gems/ruby-1.9.3-p392@rails3.2/gems/passenger-4.0.2/ext/nginx
Nginx.conf:
worker_processes 1;
events {
worker_connections 1024;
}
http {
passenger_root /home/lijc/.rvm/gems/ruby-1.9.3-p392@rails3.2/gems/passenger-4.0.2;
passenger_ruby /home/lijc/.rvm/wrappers/ruby-1.9.3-p392@rails3.2/ruby;
include mime.types;
default_type application/octet-stream;
sendfile on;
client_max_body_size 100m;
keepalive_timeout 65;
gzip on;
gzip_min_length 1k;
gzip_disable "MSIE [1-6]\.";
gzip_http_version 1.1;
gzip_types text/plain text/css application/x-javascript application/xml application/json application/atom+xml application/rss+xml;
gzip_vary on;
server {
listen 80;
server_name localhost;
root /home/lijc/web/rails/jasli2team/evolution/public;
passenger_enabled on;
}
}
Then I restarted Nginx and Passenger. However, according to Chrome's dev tools, the headers returned by my site still do not include Content-Encoding: gzip.
Here are the headers returned by my site:
Cache-Control:must-revalidate, private, max-age=0
Content-Type:text/html; charset=utf-8
Date:Wed, 05 Jun 2013 02:55:47 GMT
ETag:"f5a1b272c96c5786342ec4bfd9b6e608"
Proxy-Connection:close
Server:nginx/1.4.1 + Phusion Passenger 4.0.2
Status:200 OK
Vary:Accept-Encoding
Via:1.0 localhost.localdomain:1080 (squid/2.6.STABLE16)
X-Cache:MISS from localhost.localdomain
X-Cache-Lookup:MISS from localhost.localdomain:1080
X-Powered-By:Phusion Passenger 4.0.2
X-Rack-Cache:miss
X-Request-Id:8353874387ca7510158dd5bf93b37ab9
X-Runtime:0.042863
X-UA-Compatible:IE=Edge,chrome=1
I have no idea what is wrong or missing. Any help will be appreciated.
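One thing worth noting: the Via and X-Cache headers above show the response passed through a Squid 2.6 proxy, and with gzip_http_version 1.1 set, nginx will not compress responses to HTTP/1.0 requests, which is what older proxies typically forward. A quick way to test the server directly, bypassing any intermediate proxy (the hostname is a placeholder for your own server):

```shell
# Ask nginx for a compressed response directly; gzip negotiation
# depends on the Accept-Encoding request header.
curl -s -I -H 'Accept-Encoding: gzip' http://your-server/ | grep -i 'content-encoding'
```

If Content-Encoding: gzip shows up here but not in the browser, the proxy in between is the likely culprit.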
Related
Windows nginx config
http config:
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$upstream_response_time $status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
sendfile on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;
gzip on;
gzip_static on;
gzip_disable "msie6";
gzip_min_length 100k;
gzip_buffers 4 16k;
gzip_comp_level 5;
gzip_types text/plain application/json application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
gzip_vary off;
server config:
listen 443 ssl http2;
server_name www.xxxxxxx.com;
ssl_certificate C://com.cer;
ssl_certificate_key C://server.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM:!kEDH;
ssl_prefer_server_ciphers on;
keepalive_timeout 10;
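Note that gzip_static on (in the http block above) does not compress anything itself; it only serves a pre-existing .gz file sitting next to the original. The assets have to be pre-compressed, e.g. with a small script like this sketch (the demo_public path is purely illustrative):

```shell
# Create a demo document root with one CSS file (illustration only).
root=demo_public
mkdir -p "$root/assets"
printf 'body{margin:0}' > "$root/assets/app.css"

# Pre-compress text assets so gzip_static can serve app.css.gz directly.
# gzip -c leaves the original file in place next to the compressed copy.
find "$root" -type f \( -name '*.css' -o -name '*.js' \) \
  -exec sh -c 'gzip -c -9 "$1" > "$1.gz"' _ {} \;

ls "$root/assets"   # lists app.css and app.css.gz
```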
In concurrent tests, the HTTP requests' avg/min/max times are less than a tenth of the HTTPS requests' times.
Sorry, I used JMeter like this; when testing a single request, JMeter's result was longer than Chrome's, so I switched to the Apache Bench tool, whose results were close to JMeter's HTTP results.
I have migrated a Drupal site to my server.
The server uses nginx for SSL termination and lets Apache do the rest, i.e. nginx works as a proxy.
However, when using the Drupal Media Browser to upload a file, I get a "502 Bad Gateway" error for the request to /file/progress/xyz (I guess it's the progress bar); the actual file upload works, though.
This is the nginx server block for the site:
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name www.example.com;
port_in_redirect off;
ssl on;
ssl_certificate /etc/ssl/certs/xyz.crt;
ssl_certificate_key /etc/ssl/certs/xyz.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 60m;
add_header Strict-Transport-Security "max-age=31536000";
add_header X-Content-Type-Options nosniff;
location / {
gzip_static on;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header HTTPS "on";
include /etc/nginx/proxy.conf;
}
}
server {
listen 80;
listen [::]:80;
server_name www.example.com;
return 301 https://$server_name$request_uri;
}
and this is my proxy.conf
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_buffering off;
proxy_buffers 32 4m;
proxy_busy_buffers_size 25m;
proxy_buffer_size 512k;
proxy_ignore_headers "Cache-Control" "Expires";
proxy_max_temp_file_size 0;
client_max_body_size 1024m;
client_body_buffer_size 4m;
proxy_connect_timeout 75s;
proxy_read_timeout 300s;
proxy_send_timeout 300s;
proxy_intercept_errors off;
I also tried adding this to the http block of nginx.conf:
fastcgi_temp_file_write_size 10m;
fastcgi_busy_buffers_size 512k;
fastcgi_buffer_size 512k;
fastcgi_buffers 16 512k;
client_max_body_size 50M;
with no success. I've basically tried everything I found on the web on this topic, without success. I'm pretty new to nginx, though, so maybe I'm just overlooking something?
Nginx logs this to error_log:
2019/05/15 08:09:26 [error] 21245#0: *42 upstream prematurely closed connection while reading response header from upstream,
client: 55.10.229.62, server: www.example.com, request: "GET /file/progress/190432132829 HTTP/1.1",
upstream: "http://127.0.0.1:8080/file/progress/190432132829",
host: "www.example.com",
referrer: "https://www.example.com/media/browser?render=media-popup&options=Uusog2IwkXxNr-0EaqD1L6-Y0aBHQVunf-k4J1oUb_U&plugins="
So maybe it's because the upstream is plain http?
What worries me even more is that I get a segfault logged in httpd-error_log:
[core:notice] [pid 21277] AH00052: child pid 21280 exit signal Segmentation fault (11)
I have the latest Drupal 7.67 core and all modules are up to date,
using PHP 7.2.17 on CentOS 7
with nginx 1:1.12.2-2.el7
and httpd 2.4.6-88.el7.centos.
I also added this to Drupal's settings.php:
$conf['reverse_proxy'] = TRUE;
$conf['reverse_proxy_addresses'] = ['127.0.0.1'];
but it doesn't seem to have any effect.
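Since the 502 comes with "upstream prematurely closed connection" and an Apache child segfault, it can help to take nginx out of the picture and hit the Apache backend directly (the progress ID below is just the one from the error log):

```shell
# Request the progress endpoint straight from the Apache upstream;
# if this also fails, the problem is in Apache/PHP, not in the nginx proxy.
curl -v http://127.0.0.1:8080/file/progress/190432132829
```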
Update:
For completeness, here are the details of the failing request (from the Chrome network tab):
Response Headers
Connection: keep-alive
Content-Length: 575
Content-Type: text/html
Date: Wed, 15 May 2019 06:09:26 GMT
Server: nginx/1.12.2
Request Headers
Accept: application/json, text/javascript, */*; q=0.01
Accept-Encoding: gzip, deflate, br
Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
Connection: keep-alive
Cookie: _pk_ses.10.9e92=1; has_js=1; cookie-agreed=2; SSESS812a016a321fb8affaf4=pY3nnqqagiCksF61R45R6Zrmi6g6DdMcYRxSPM1HLP0; Drupal.toolbar.collapsed=0; _pk_id.10.9e92=614e315e332df7.1557898005.1.1557900255.1557898005.
Host: www.example.com
Referer: https://www.example.com/media/browser?render=media-popup&options=Uusog2IwkXxNr-0EaqD1L6-Y0aBHQVunf-k4J1oUb_U&plugins=
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36
X-Requested-With: XMLHttpRequest
When I remove the PHP pecl-uploadprogress package
yum remove php-pecl-uploadprogress.x86_64
the error is gone, but then the progress bar no longer works, even though I have APC. On the pecl-uploadprogress page they mention that SAPI implementations other than Apache with mod_php unfortunately still have issues.
I guess I ran into one of these;
however, I would really appreciate being able to let Apache report the progress.
I'm configuring my backend using nginx as a reverse-proxy for my node/express server, but cannot seem to get it to work.
Right now, if I use curl to ping my site (dcdocs.app) I get the following headers:
curl -I https://dcdocs.app
HTTP/2 200
server: nginx/1.14.0 (Ubuntu)
date: Sat, 24 Nov 2018 03:32:24 GMT
content-type: text/html; charset=UTF-8
content-length: 388
x-powered-by: Express
accept-ranges: bytes
cache-control: public, max-age=0
last-modified: Mon, 19 Nov 2018 15:35:12 GMT
etag: W/"184-1672c9c7c51"
Using curl, the response body also returns my expected index file. However, when I visit this page on a web browser, I don't get any response.
Here's how I currently have my nginx.conf file configured:
user www-data;
worker_processes auto; # Spawn one process per core... To see #, use command nproc
events {
worker_connections 1024; # Number of concurrent requests per worker... To see #, use command ulimit -n
}
http {
include mime.types;
server {
listen 80;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
server_name dcdocs.app;
index index.html;
ssl_certificate /etc/letsencrypt/live/dcdocs.app/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/dcdocs.app/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
location / {
proxy_pass http://localhost:3000;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
}
What is causing the problem here? What am I missing that's causing the page to not load in a browser? The browser currently just hangs if you try to visit the site.
Thanks!
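One way to narrow it down is to make curl behave more like the browser: the successful test above went straight to HTTPS, while a browser typing dcdocs.app first hits port 80 and follows the redirect. A verbose run of that path can show where the hang occurs (dcdocs.app is the domain from the question):

```shell
# Follow the http -> https redirect verbosely, as a browser would;
# -o /dev/null discards the body so only the handshake/headers are shown.
curl -v -L http://dcdocs.app/ -o /dev/null
```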
I'm using the following nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
server {
listen 80;
server_name mydomain.org;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
ssl_certificate /etc/letsencrypt/live/mydomain.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.org/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/certs/dhparam.pem;
add_header Strict-Transport-Security max-age=15768000;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/mydomain.org/chain.pem;
resolver 8.8.8.8 8.8.4.4 valid=86400;
root /var/www/html;
index index.php;
location / {
try_files $uri $uri/ /index.php?$args;
}
rewrite /wp-admin$ $scheme://$host$uri/ permanent;
location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
access_log off; log_not_found off; expires max;
}
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
if (!-f $document_root$fastcgi_script_name) {
return 404;
}
root /var/www/html;
fastcgi_pass wp_db:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
include fastcgi_params;
}
}
}
But the nginx container complains with:
nginx: [emerg] BIO_new_file("/etc/letsencrypt/live/mydomain.org/fullchain.pem") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/mydomain.org/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
I have all the certificates on that path for let's encrypt.
I found this thread
https://serverfault.com/questions/537343/nginx-startup-fails-ssl-no-such-file-or-directory
And did
chown -R root:root /etc/letsencrypt/live/mydomain.org/fullchain.pem
chmod -R 600 /etc/letsencrypt/live/mydomain.org/fullchain.pem
The same error was thrown from the nginx container. I've also placed the certs in /docker-compose/etc/nginx/certs, giving them the same permissions and changing the paths in nginx.conf, but nothing changed.
What am I missing?
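One Docker-specific detail worth checking: in a Let's Encrypt tree, live/ contains only symlinks into ../../archive/, so a volume that mounts just the live/ directory produces exactly this "no such file" error inside the container. Mounting the whole /etc/letsencrypt keeps the symlink targets reachable (container name and image here are placeholders):

```shell
# Mount the entire letsencrypt tree, not only live/, so the symlinks
# in live/mydomain.org/ can still resolve into archive/.
docker run -d --name nginx \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  -p 80:80 -p 443:443 nginx
```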
I was experiencing the same problem deploying Harbor (a Docker registry + access control UI) using the volume mapping /etc/letsencrypt:/etc/letsencrypt.
nginx reported "no such file" when loading the certificate file, even though I could enter the container (docker exec ... bash) and cat the files using the exact same path.
I suspected the problem was caused by Let's Encrypt's use of symlinks, so my solution was to copy the live certs into another folder using cp -rL (to de-reference the symlinks):
root@registry:/etc/letsencrypt# mkdir copy
root@registry:/etc/letsencrypt# cp -rL live/* copy/
Then I changed nginx.conf to refer to 'copy' instead of 'live'.
Now nginx starts correctly inside Docker.
This is not a long-term solution, because when the certs are renewed the copy won't be updated automatically. But since I'll be running letsencrypt renew from a cron job, that task can run the copy step again.
I've also read that nginx must be restarted when the certs change, so that's another issue I'll need to face. But at least nginx starts correctly now.
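The copy step can be scripted so a renewal hook repeats it. Here is a self-contained sketch: the demo_letsencrypt paths stand in for /etc/letsencrypt, and the reload line is commented out because it needs the real container:

```shell
# Build a miniature letsencrypt-style tree: live/ holds symlinks into archive/.
mkdir -p demo_letsencrypt/archive/example.com demo_letsencrypt/live/example.com
printf 'CERT' > demo_letsencrypt/archive/example.com/fullchain1.pem
ln -sf ../../archive/example.com/fullchain1.pem \
  demo_letsencrypt/live/example.com/fullchain.pem

# De-reference the symlinks into a plain directory nginx can read in Docker.
mkdir -p demo_letsencrypt/copy
cp -rL demo_letsencrypt/live/. demo_letsencrypt/copy/

# After a renewal, reload nginx inside the container, e.g.:
# docker exec nginx nginx -s reload
```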
I got this error when I renamed apps in Dokku (0.5.4). What had happened is that the links in the new app directory pointed to the old app name, e.g.
/home/dokku/[new app]/letsencrypt/certs/current -> /home/dokku/[old app]/letsencrypt/certs/f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1
So I manually recreated the links so they pointed to the right place.
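Repointing such a stale link is a one-liner; a sketch with stand-in paths (the directory names are illustrative, not the real Dokku layout):

```shell
# Make a target directory and a stale symlink, then repoint the link
# atomically with ln -sfn (-n keeps ln from descending into the old target).
mkdir -p demo_dokku/newapp/letsencrypt/certs/f1f1
ln -sfn demo_dokku/oldapp/letsencrypt/certs/f1f1 demo_current   # stale link
ln -sfn demo_dokku/newapp/letsencrypt/certs/f1f1 demo_current   # fixed link
readlink demo_current
```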
Try starting the paths with /root:
ssl_certificate /root/etc/letsencrypt/live/mydomain.org/fullchain.pem;
ssl_certificate_key /root/etc/letsencrypt/live/mydomain.org/privkey.pem;
I solved the problem like this.
Joy
I wasted a day on this and found the solution.
Run the nginx Docker container with
-v /etc/letsencrypt/archive/your_domain.com:/nginx/letsencrypt/your_domain.com
and nginx.conf
ssl_certificate /nginx/letsencrypt/your_domain.com/fullchain1.pem;
ssl_certificate_key /nginx/letsencrypt/your_domain.com/privkey1.pem;
How would one enable gzip compression of responses with Content-Type application/json when an ASP.NET 5 app is deployed to IIS 8 on Azure? Typically this would have been done using web.config, but that's gone now... what's the new approach?
You need to reverse-proxy your kestrel application, then you can tell the reverse-proxy to compress.
In nginx, this goes as follows:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name localhost;
gzip on;
gzip_min_length 1000;
#gzip_proxied expired no-cache no-store private auth;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_vary on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
location /
{
proxy_pass http://127.0.0.1:5004;
}
}
So here nginx catches incoming requests on port 80 and forwards them to Kestrel on the same machine, on port 5004. Kestrel then sends the response back to nginx. Since gzip is on, nginx compresses the response and sends it to the user. All you need to ensure is that the application on Kestrel does not set conflicting HTTP headers, such as HTTP/1.1 chunked encoding, when outputting, for example, a file (e.g. when using what used to be Response.TransmitFile).
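Once this is in place, you can confirm that it is nginx doing the compressing by comparing the two ports (localhost and 5004 match the config above):

```shell
# Through nginx on port 80: gzip is negotiated via Accept-Encoding.
curl -s -I -H 'Accept-Encoding: gzip' http://localhost/ | grep -i content-encoding
# Straight to Kestrel on 5004: no Content-Encoding header should appear,
# since Kestrel itself is not compressing.
curl -s -I http://127.0.0.1:5004/ | grep -i content-encoding
```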
IIS 7.5+ supports reverse proxying.
See here for more information:
https://serverfault.com/questions/47537/can-iis-be-configure-to-forward-request-to-another-web-server