System information (AWS EC2 instance, m4.large, behind Elastic Beanstalk):
Region: us-west-1
Memory: 8GB
CPU: 2 core / 2.4GHz
PHP Version: 7.0.22 (ZTS) with FPM
Nginx Version: 1.10.2
There is an API used by web/mobile/other clients. Each endpoint makes database requests and uses a cache (APCu or Redis).
Apache
Apache served ~40 requests per second, with latency of ~500-1200 ms (depending on the API endpoint).
Nginx
Then we decided to move to Nginx, but ran into strange behavior: throughput dropped to ~20 requests per second, and latency keeps increasing over the course of a test (e.g. it starts at ~300 ms and ends above 31,000 ms).
/etc/nginx/nginx.conf:
user webapp;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 10000;
error_log /var/log/nginx/error.log;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 4096;
use epoll;
multi_accept on;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
fastcgi_connect_timeout 60;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
charset utf-8;
client_max_body_size 50m;
gzip on;
gzip_vary on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml application/json;
gzip_disable "MSIE [1-6]\.";
include /etc/nginx/mime.types;
default_type application/octet-stream;
upstream php {
server 127.0.0.1:9000;
}
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
}
/fpm/pools/www.conf:
[www]
user = webapp
group = webapp
listen = 127.0.0.1:9000
pm = dynamic
pm.max_children = 75
pm.start_servers = 30
pm.min_spare_servers = 30
pm.max_spare_servers = 35
pm.max_requests = 500
... the rest is default
Performance is measured with Apache JMeter, using custom scenarios.
Tests are run from the same region (another EC2 instance).
cURL stats (times in seconds):
lookup: 0.125
connect: 0.125
appconnect: 0.221
pretransfer: 0.221
redirect: 0.137
starttransfer: 0.252
total: 0.389
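For reference, these timings come from curl's -w write-out variables, roughly like this (the URL is just a placeholder for one of our endpoints):
curl -s -o /dev/null \
  -w 'lookup: %{time_namelookup}\nconnect: %{time_connect}\nappconnect: %{time_appconnect}\npretransfer: %{time_pretransfer}\nredirect: %{time_redirect}\nstarttransfer: %{time_starttransfer}\ntotal: %{time_total}\n' \
  https://api.example.com/some-endpoint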
tcptraceroute is also perfect (1ms)
Please advise! I cannot find the cause of the problem myself.
Thanks!
Related
I added SSL to my ELB using AWS Certificate Manager, and I'm running Nginx on an ELB-backed EC2 instance.
I configured the following in my Nginx conf file:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
server {
#if the user hits http, they will be redirected to https
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
if ($http_x_forwarded_proto = "http") {
return 301 https://$host$request_uri;
}
sendfile on;
default_type application/octet-stream;
gzip on;
gzip_http_version 1.1;
gzip_disable "MSIE [1-6]\.";
gzip_min_length 256;
gzip_vary on;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_comp_level 9;
root /usr/share/nginx/html;
location / {
try_files $uri $uri/ /index.html =404;
}
}
}
I am facing some latency issues, so how can I add gzip, cache, and SSL configuration in the ELB setup?
I've installed nginx 1.12.1 as a reverse proxy in front of a working Apache httpd 2.4.25 (x64).
I have a VMware virtual machine with CentOS 6.9 and a working stack: Apache httpd 2.4.25 ---(mod_jk 1.2.42)--- Tomcat 7.0.81 ---(JDBC)--- MySQL Server 5.7.19.
Now I've installed and configured Nginx to work in front of Apache (as a reverse proxy).
It has not worked for three days (tested with curl and with the Mozilla browser).
The error message is 400 Bad Request: Request Header Or Cookie Too Large.
Could someone help me?
Here is my /etc/nginx/nginx.conf:
user nginx;
worker_processes 4;
error_log /var/log/nginx/error.core.log warn;
pid /var/run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.core.log main;
sendfile on;
keepalive_timeout 65;
client_max_body_size 200M;
client_body_buffer_size 32k;
client_header_buffer_size 64k;
large_client_header_buffers 4 64k;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
include /etc/nginx/conf.d/*.conf;
}
My default server config:
server {
listen 80;
server_name localhost;
charset utf-8;
access_log /var/log/nginx/access.http.mydomain.log;
error_log /var/log/nginx/error.http.mydomain.log;
location / {
proxy_pass http://127.0.0.1:8080/;
root /opt/rh/httpd24/root/var/www/html/html;
index index.html index.htm;
include /etc/nginx/conf.d/proxy.inc;
client_max_body_size 10m;
client_body_buffer_size 128k;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
I am migrating from the Apache web server and have problems with nginx JavaScript compression (CSS compression works fine). This is my config file:
#user nginx;
worker_processes 1;
#error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
#pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#tcp_nodelay on;
tcp_nodelay on;
gzip on;
gzip_http_version 1.1;
gzip_min_length 10;
#gzip_http_version 1.0;
gzip_vary on;
gzip_comp_level 7;
gzip_proxied any;
gzip_types text/plain text/css application/x-javascript text/javascript text/xml application/xhtml+xml application/xml;
#gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
server_tokens off;
include /etc/nginx/conf.d/*.conf;
}
When I check compression of different files using online check tools, the JavaScript files are not compressed at all; CSS and text files are OK.
What am I missing here?
Thanks
UPDATE: Having spent 5 hours debugging this simple thing, I've found the problem:
after changing the nginx configuration files, reloading nginx (/etc/init.d/nginx reload) is not enough; the nginx service in the Plesk panel has to be restarted (switched off and on). Otherwise the changes are not applied.
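(If you prefer the shell over the panel, a full stop/start of the service, rather than a plain reload, will presumably have the same effect; the exact commands depend on how Plesk manages nginx, so this is only a sketch.)
service nginx restart
# or, on init-script systems:
/etc/init.d/nginx restart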
Try adding application/javascript to gzip_types.
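With the config above, the updated line would look roughly like this (the assumption being that newer nginx mime.types files map .js to application/javascript, so a list containing only application/x-javascript and text/javascript misses it):
gzip_types text/plain text/css application/javascript application/x-javascript text/javascript text/xml application/xhtml+xml application/xml;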
It used to be so easy to set header expiration with Apache mod_headers, but I am having a hard time figuring out where to add it in the nginx config file.
This is my nginx.conf:
#user nginx;
worker_processes 1;
#error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
#pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#tcp_nodelay on;
tcp_nodelay on;
gzip on;
gzip_http_version 1.1;
#gzip_http_version 1.0;
gzip_vary on;
gzip_comp_level 6;
gzip_proxied any;
gzip_types text/plain text/html text/css application/x-javascript text/xml;
#gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
server_tokens off;
include /etc/nginx/conf.d/*.conf;
}
Where should I add the header expiration part, like:
location ~* \.(js|css)$ {
expires 30d;
}
I tried adding it inside "http", or including it in another "server" block, but it generates errors like unknown directive "server" or "location".
It is just as easy to add expires headers in nginx. You need to place your location block inside a server block. There should be a default file in /your/nginx_dir/sites-enabled/.
If there is, you can edit it directly and add your location block inside it, or you can copy the whole content of the default file into the http block of your nginx.conf.
If you choose to edit the default file in place, don't forget that your nginx.conf needs a line like this:
include /etc/nginx/sites-enabled/*;
If you can't find the default file, just edit your nginx.conf so it looks like this:
#....
server_tokens off;
#up to here the conf is the same as yours
#edit starts here, just add this server block
server {
#default_server is not necessary but is useful if you have many servers
#and want to capture requests where the host header doesn't match any server name
listen 80 default_server;
#requests with host header that matches server name will be handled by this server
server_name your.domain.com localhost;
#change your document root accordingly
root /path/to/your/html/root;
#you can have as many location blocks as you need
location ~* \.(js|css)$ {
expires 30d;
}
}
#end of conf changes
include /etc/nginx/conf.d/*.conf;
Since you are coming from Apache, just think of nginx's server block as Apache's VirtualHost. Don't forget to reload nginx after each change to the conf files.
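A quick way to do that from the shell (standard nginx commands; substitute your distribution's service manager as needed):
nginx -t          # check the configuration for syntax errors first
nginx -s reload   # signal the running master process to re-read it
# or: service nginx reload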
Check inside /etc/nginx/conf.d/; you'll probably find a file called default, and the location / block is inside it.
Stackoverflowers, I have a problem with my Rails nginx configuration. I'm running a Rails 3.0.12 app, and I'm quite new to nginx.
I can't seem to get nginx to serve static assets; for every request to the /public folder I get a 404. I'm posting the nginx configuration I have so far. Maybe I missed something.
nginx.conf:
user rails;
worker_processes 1;
daemon off;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 2048;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
gzip on;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
server_names_hash_bucket_size 64;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
sites-enabled/project.conf:
upstream project {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
# for UNIX domain socket setups:
server unix:/tmp/project.socket fail_timeout=0;
}
server {
listen 80;
root /srv/www/project/current/public;
passenger_enabled on;
server_name dev.project.eu;
server_name *.dev.project.eu;
location / {
#all requests are sent to the UNIX socket
proxy_pass http://project;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
root /srv/wwww/project/current/public;
}
}
I've tried removing the location / block from project.conf, but it didn't change anything; the assets are still not served.
I am also aware of the serve_static_assets switch in Rails, but I'd rather have nginx serve those assets, as it should.
You need to add something like this (see the documentation on locations):
location / {
# serve the request as a static file if it exists under root, otherwise fall through
try_files $uri @ruby;
}
location @ruby {
# hand everything else to the Rails upstream
proxy_pass http://project;
}
I know this thread is over a year old, but I had the same problem running in production.
The thing that made it work for me was running rake assets:precompile in development, and uncommenting load 'deploy/assets', even though I am using Rails 4.
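For reference, assuming a Capistrano-style deploy (which is where load 'deploy/assets' normally lives, in the Capfile; your setup may differ), the two pieces look roughly like this:
# Capfile: uncomment so assets are precompiled during deploy
load 'deploy/assets'
# or precompile manually before deploying:
RAILS_ENV=production bundle exec rake assets:precompile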