I am testing nginx as a reverse proxy in front of a XAMPP Apache web server on my local machine. When I open the site in the browser it can't find the CSS, JS and image asset files. Even when I directly include a single CSS file in the header without using a Yii2 asset bundle, it is the same: the file isn't found.
Here is my conf file:
worker_processes 1;
error_log logs/error.log;
events {
worker_connections 1024;
}
http {
include mime.types;
include proxy.conf;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_comp_level 5;
gzip_http_version 1.0;
gzip_min_length 0;
gzip_types text/plain text/css image/x-icon application/x-javascript;
gzip_vary on;
server {
listen 80;
server_name 21pos.witty.localhost;
location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$
{
#root html;
root D:/xamp7.1/htdocs/hr-witty/web;
expires max;
}
#set default location
location / {
proxy_pass http://127.0.0.1:8080/;
}
error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
# Optional: a server block for a subdomain that only serves static files, so no proxy_pass is set up.
server {
listen 80;
server_name s0.jpa.gov.my s1.jpa.gov.my; # Alternately: _
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
access_log logs/static.access.log;
error_log logs/static.error.log;
index index.html;
location / {
expires max;
root D:/xamp7.1/htdocs/;
}
}
}
My proxy.conf file:
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
client_header_buffer_size 64k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 16k;
proxy_buffers 32 16k;
proxy_busy_buffers_size 64k;
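For reference, when the regex location above matches, nginx simply appends the request URI to the root, so a request for /css/site.css must exist at D:/xamp7.1/htdocs/hr-witty/web/css/site.css. A minimal sketch of an alternative layout, reusing the same paths and backend port from the question (adjust to the actual asset URLs), would let nginx serve files that exist on disk and fall back to Apache for everything else:
# A sketch only, not the exact setup from the question: static files are
# served straight from the Yii2 web directory when they exist on disk,
# and everything else (including missing assets) falls through to Apache.
server {
    listen      80;
    server_name 21pos.witty.localhost;
    root        D:/xamp7.1/htdocs/hr-witty/web;

    location ~* \.(jpg|jpeg|gif|png|ico|css|js)$ {
        try_files $uri @apache;   # serve from disk if present, otherwise proxy
        expires max;
    }

    location / {
        try_files $uri @apache;
    }

    location @apache {
        proxy_pass http://127.0.0.1:8080;
    }
}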
Related
I configured my server with nginx as a reverse proxy for a Nuxt/Express SSR application. For the moment I have a login page, and a home page.
I can connect and disconnect without any problem.
However, when I'm connected and I refresh the page, the loading time is very long.
I don't know if this is due to Nginx configuration or the authentication API or redirection.
I've noticed it also happens when I type the URL directly into the address bar.
Thanks in advance for your help.
Update:
Here is my nginx config:
worker_processes 1;
error_log logs/error.log info;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
map_hash_max_size 64;
map_hash_bucket_size 64;
map $sent_http_content_type $expires {
"text/html" epoch;
"text/html; charset=utf-8" epoch;
default off;
}
server {
listen 9998 ssl;
server_name control.serenicity.fr;
ssl_certificate "C:/nginx-1.19.0/ssl/mydomain.fr.crt";
ssl_certificate_key "C:/nginx-1.19.0/ssl/mydomain.fr.key";
location / {
expires $expires;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 1m;
proxy_connect_timeout 1m;
proxy_pass http://10.0.5.11:3000;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
Here are the requests when I refresh:
https://drive.google.com/file/d/1PeHzCaHvLNL8_YN9ZOEFNv_gxQe2jpu9/view?usp=sharing
https://drive.google.com/file/d/1b9m3cDL_LOv1d3Kaaucnq4Qi308P8TLT/view?usp=sharing
https://drive.google.com/file/d/1xe_EBnb_XXWo7IlaI47EtlaLE6eFgd2k/view?usp=sharing
Update 2:
After several hours of debugging, it seems the issue is not related to nginx at all.
The problem comes from Nuxt in SSR mode: when I disable SSR, the problem goes away.
Server configuration:
CentOS 7 + PHP 7 + PHP-FPM + MariaDB 10 + Nginx as reverse proxy for Apache + Virtualmin
I'm new to setting up a server. I'm not sure where I messed up; I have tried searching online and editing the configuration following suggestions from DigitalOcean and many other forums, but still no success. I always restart nginx, httpd and php-fpm after making changes.
I have 2 virtual servers, and both of them open the Nginx test page instead of their respective homepages.
I have been trying to configure them, but with no success. Please help.
Below are the configuration files for my virtual servers.
1) /etc/nginx/conf.d/default.conf
server {
listen 80;
root /home/~;
index index.php index.html index.htm;
server_name localhost;
location / {
try_files $uri $uri/ /index.php;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:8080;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
try_files $uri $uri/ =404;
#fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_pass 127.0.0.1:9000;
#fastcgi_pass php-fpm;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
}
2) /etc/nginx/conf.d/php-fpm.conf
# PHP-FPM FastCGI server
# network or unix domain socket configuration
upstream php-fpm {
server 127.0.0.1:9000;
#server unix:/run/php-fpm/www.sock;
}
3) /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
worker_rlimit_nofile 10000;
# only log critical errors
error_log /var/log/nginx/error.log crit;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
use epoll;
multi_accept on;
}
http {
include mime.types;
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $bytes_sent '
'"$http_referer" "$http_user_agent" '
'"$gzip_ratio"';
log_format download '$remote_addr - $remote_user [$time_local] '
'"$request" $status $bytes_sent '
'"$http_referer" "$http_user_agent" '
'"$http_range" "$sent_http_content_range"';
access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log;
# cache informations about FDs, frequently accessed files
# can boost performance, but you need to test those values
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 2m;
open_file_cache_min_uses 5;
open_file_cache_errors on;
# to boost I/O on HDD we can disable access logs
access_log off;
# copies data between one FD and other from within the kernel
# faster than read() + write()
sendfile on;
# send headers in one piece, it's better than sending them one by one
tcp_nopush on;
# don't buffer data sent, good for small data bursts in real time
tcp_nodelay on;
types_hash_max_size 2048;
index index.php index.html index.htm;
include /etc/nginx/conf.d/*.conf;
index index.php index.html index.htm;
server_names_hash_bucket_size 128;
##
# Gzip Settings
##
# reduce the data that needs to be sent over network -- for testing environment
gzip on;
gzip_http_version 1.1;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_vary on;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
gzip_buffers 16 8k;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
# allow the server to close connection on non responding client, this will free up memory
reset_timedout_connection on;
# request timed out -- default 60
client_body_timeout 3m;
# if client stop responding, free up memory -- default 60
send_timeout 3m;
# server will close connection after this time -- default 75
keepalive_timeout 65;
# number of requests client can make over keep-alive -- for testing environment
keepalive_requests 100000;
ignore_invalid_headers on;
client_max_body_size 100m;
connection_pool_size 256;
request_pool_size 4k;
output_buffers 4 32k;
postpone_output 1460;
# limit the number of connections per single IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
# limit the number of requests for a given session
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;
# if the request body size is more than the buffer size, then the entire (or partial)
# request body is written into a temporary file
client_body_buffer_size 128k;
# header buffer size for the request header from client -- for testing environment
client_header_buffer_size 3m;
# maximum number and size of buffers for large headers to read from client request
large_client_header_buffers 4 256k;
# how long to wait for the client to send a request header -- for testing environment
client_header_timeout 3m;
server_tokens off;
#nginx compression
log_format compression '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" "$gzip_ratio"';
# Upstream to abstract backend connection(s) for PHP.
upstream php {
#this should match value of "listen" directive in php-fpm pool
#server unix:/tmp/php-fpm.sock;
server 127.0.0.1:9000;
}
server {
listen 80;
# listen [::]:80 default_server;
server_name _;
#root /home/~;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
# zone which we want to limit by upper values, we want limit whole server
limit_conn conn_limit_per_ip 10;
limit_req zone=req_limit_per_ip burst=10 nodelay;
location ~* .(woff|eot|ttf|svg|mp4|webm|jpg|jpeg|png|gif|ico|css|js)$ {
expires max;
}
gzip on;
access_log /var/log/nginx/access.log compression;
}
#1st virtual server
server {
listen 80;
server_name website1.co www.website1.co;
root /home/website1/public_html;
index index.html index.htm index.php;
access_log /var/log/virtualmin/website1_access_log;
error_log /var/log/virtualmin/website1_error_log;
# nginx configuration
location / {
#for web application
if (!-e $request_filename){
rewrite ^(/)?api/.*$ /api/index.php;
}
if (!-e $request_filename){
rewrite ^(/)?customer/.*$ /customer/index.php;
}
if (!-e $request_filename){
rewrite ^(/)?backend/.*$ /backend/index.php;
}
if (!-e $request_filename){
rewrite ^(.*)$ /index.php;
}
index index.html index.htm index.php;
#web application end
# [pre-existing configurations, if applicable]
autoindex on;
autoindex_exact_size off;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
proxy_connect_timeout 30s;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:8080;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
try_files $uri $uri/ /index.php?$args;
fastcgi_index index.php;
fastcgi_pass 127.0.0.1:9000;
#fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_read_timeout 600s;
fastcgi_send_timeout 600s;
include fastcgi_params;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
access_log off;
log_not_found off;
deny all;
}
listen 443 ssl;
ssl_certificate /home/website1/ssl.cert;
ssl_certificate_key /home/website1/ssl.key;
}
#2nd virtual server
server {
server_name website2.co www.website2.co;
listen 80;
root /home/website2/public_html;
index index.html index.htm index.php;
access_log /var/log/virtualmin/website2_access_log;
error_log /var/log/virtualmin/website2_error_log;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_FILENAME /home/website2/public_html$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT /home/website2/public_html;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_param HTTPS $https;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
try_files $uri $uri/ =404;
fastcgi_pass 127.0.0.1:9000;
include fastcgi.conf;
}
location / {
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
proxy_connect_timeout 30s;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:8080;
}
listen 443 ssl;
ssl_certificate /home/website2/ssl.cert;
ssl_certificate_key /home/website2/ssl.key;
}
Any help would be much appreciated.
Thanks in advance.
Remove the below section from your nginx.conf
server {
listen 80;
# listen [::]:80 default_server;
server_name _;
#root /home/~;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
# zone which we want to limit by upper values, we want limit whole server
limit_conn conn_limit_per_ip 10;
limit_req zone=req_limit_per_ip burst=10 nodelay;
location ~* .(woff|eot|ttf|svg|mp4|webm|jpg|jpeg|png|gif|ico|css|js)$ {
expires max;
}
gzip on;
access_log /var/log/nginx/access.log compression;
}
The server_name _; block matches any virtual host, and since it comes first it is the first thing to respond in your config; the rest of your virtual hosts are never consulted.
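If a catch-all block is still wanted, one option (a sketch, not specific to this setup) is to mark it explicitly as the default server and have it do nothing except refuse unknown hosts, so it can never shadow the real virtual hosts:
# Hypothetical catch-all: explicitly the default server for port 80, and it
# only closes the connection, so website1.co and website2.co are handled by
# their own server blocks.
server {
    listen      80 default_server;
    server_name _;
    return      444;   # nginx-specific code: close the connection without responding
}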
Edit - Aug 17
There is a lot of mess in your configs, including includes pulled in from different directories. I would suggest you remove nginx, reinstall it, and build the base config up again. You are also using httpd in this setup.
The proxy_pass http://127.0.0.1:8080; means that anything that is not PHP is proxied to your httpd server, so your actual root is never used.
Your try_files $uri $uri/ =404; should not be inside the location ~ \.php$ { block; rather, it should be inside the location / { block. There should also be no proxy_pass in your location / { block.
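Roughly, the suggestion amounts to something like the sketch below (the root, server names and FastCGI address are taken from the question; adjust as needed):
# Sketch of the suggested layout: nginx serves static files and routes
# everything else to the front controller, PHP goes to PHP-FPM, and there
# is no proxy_pass to httpd at all.
server {
    listen      80;
    server_name website1.co www.website1.co;
    root        /home/website1/public_html;
    index       index.php index.html index.htm;

    location / {
        # file first, then directory, then the application's front controller
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass  127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include       fastcgi_params;
    }
}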
I'm using AWS to deploy my Rails app. The request flow looks like this:
request -> AWS ELB (80, 443 SSL) -> EC2 (80) force to use https -> Unicorn
I've just followed the Devise documentation and use the callback link /users/auth/facebook.
When running over HTTP it works fine, but when I force HTTPS on EC2 the callback returns
http://domain.com:443/users/auth/facebook
instead of
https://domain.com/users/auth/facebook
Then it gets stuck there.
What should I check? I have already rechecked the Nginx config and the settings of the Facebook app...
Thanks!
UPDATE
I tried this setup:
80 ELB -> 80 EC2
443 ELB -> 443 EC2
and redirected HTTP requests to HTTPS on EC2, but the same issue happened.
I have two AWS Opsworks instances behind an Elastic Load Balancer.
The OpsWorks instances stack is Ruby on Rails + Nginx + Unicorn.
I want my site to be available over both HTTP and HTTPS, so I configured the nginx server accordingly, and in my Rails app I left this line commented out:
config/environments/production.rb
# config.force_ssl = true
But I was having a problem like yours!
PROBLEM:
When a user logs in over HTTP, everything is fine, but users signing in over HTTPS with Facebook/Twitter/Instagram through Devise OmniAuth were redirected to a bad URL like:
http://www.examplesite.com:443/users/auth/facebook/callback?code=xxx...xxx
I configured the ELB listener (in the AWS console) as you did, in the way below, providing my certificate for the HTTPS part.
Note that HTTPS ==> HTTP (the HTTPS listener forwards to the instance over plain HTTP).
The problem was inside my nginx configuration, and I fixed it by deleting this line inside the port 80 server blocks:
proxy_set_header X-Forwarded-Proto http;
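If that header is still needed on the plain-HTTP side, a possible alternative (a sketch, assuming the ELB terminates SSL and already sets X-Forwarded-Proto) is to forward whatever the load balancer sent instead of hard-coding http:
# Inside the port 80 server block: pass the ELB's value through rather than
# overwriting it with a fixed "http" (assumes the ELB sets this header).
location @unicorn {
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    proxy_set_header Host              $http_host;
    proxy_redirect   off;
    proxy_pass       http://unicorn_examplesite.com;
}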
So in the end this is my nginx file (look at the @unicorn location inside the port 80 server blocks):
upstream unicorn_examplesite.com {
server unix:/srv/www/examplesite_pics/shared/sockets/unicorn.sock fail_timeout=0;
}
server {
listen 443 default deferred;
server_name www.examplesite.com;
access_log /var/log/nginx/examplesite.com.access.log;
root /srv/www/examplesite_pics/current/public;
location ~ ^/(system|assets|img|fonts|css|doc)/ {
add_header "Access-Control-Allow-Origin" "*";
expires max;
access_log off;
allow all;
add_header Cache-Control public;
break;
}
try_files $uri/index.html $uri @unicorn;
ssl on;
ssl_certificate /etc/nginx/ssl/examplesite.com.crt;
ssl_certificate_key /etc/nginx/ssl/examplesite.com.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_read_timeout 60;
proxy_send_timeout 60;
proxy_pass http://unicorn_examplesite.com;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 70;
}
server {
listen 80 default deferred;
server_name www.examplesite.com;
access_log /var/log/nginx/examplesite.com.access.log;
root /srv/www/examplesite_pics/current/public;
location ~ ^/(system|assets|img|fonts|css|doc)/ {
add_header "Access-Control-Allow-Origin" "*";
expires max;
access_log off;
allow all;
add_header Cache-Control public;
break;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_read_timeout 60;
proxy_send_timeout 60;
proxy_pass http://unicorn_examplesite.com;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 70;
}
server {
listen 80;
server_name *.examplesite.com;
access_log /var/log/nginx/examplesite.com.access.log;
root /srv/www/examplesite_pics/current/public;
location ~ ^/(system|assets|img|fonts|css|doc)/ {
add_header "Access-Control-Allow-Origin" "*";
expires max;
access_log off;
allow all;
add_header Cache-Control public;
break;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_read_timeout 60;
proxy_send_timeout 60;
proxy_pass http://unicorn_examplesite.com;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 70;
}
server {
listen 443;
server_name *.examplesite.com;
access_log /var/log/nginx/examplesite.com.access.log;
root /srv/www/examplesite_pics/current/public;
location ~ ^/(system|assets|img|fonts|css|doc)/ {
add_header "Access-Control-Allow-Origin" "*";
expires max;
access_log off;
allow all;
add_header Cache-Control public;
break;
}
try_files $uri/index.html $uri @unicorn;
ssl on;
ssl_certificate /etc/nginx/ssl/examplesite.com.crt;
ssl_certificate_key /etc/nginx/ssl/examplesite.com.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_read_timeout 60;
proxy_send_timeout 60;
proxy_pass http://unicorn_examplesite.com;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 70;
}
server {
listen 443;
server_name examplesite.com www.examplesite.it examplesite.it;
access_log /var/log/nginx/examplesite.com.access.log;
return 301 $scheme://www.examplesite.com$request_uri;
}
server {
listen 80;
server_name examplesite.com www.examplesite.it examplesite.it;
access_log /var/log/nginx/examplesite.com.access.log;
return 301 https://www.examplesite.com$request_uri;
}
Hope it helps!
I've struggled for a few hours to fix this issue, but it still doesn't work. The error I see in my browser is:
POST /users 502 (Bad Gateway)
I know it's a problem with the nginx and Unicorn setup, but I can't solve it. By the way, I deployed my code on DigitalOcean. Here are my config files.
Unicorn config (nginx.conf):
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
server_names_hash_bucket_size 64;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
# Load config files from the /etc/nginx/conf.d directory
# The default server is in conf.d/default.conf
include /etc/nginx/conf.d/*.conf;
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
}
Unicorn config file (/var/nginx/unicorn.conf):
upstream unicorn {
server unix:/tmp/unicorn.sock fail_timeout=0;
}
server {
listen 80;
listen 443 ssl;
ssl_certificate /root/certs/server.crt;
ssl_certificate_key /root/certs/server.key;
client_max_body_size 4G;
keepalive_timeout 15;
root /var/www/quoine/current/public;
try_files $uri @unicorn;
location ~ ^/assets|app/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
location = /app/ {
rewrite $uri $uri/index.html;
}
location = /app/index.html {
add_header Pragma "no-cache";
add_header Cache-Control "no-cache, no-store, max-age=0, must-revalidate";
add_header Expires "Fri, 01 Jan 1990 00:00:00 GMT";
}
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
proxy_pass http://unicorn;
}
if (-f $document_root/system/maintenance.html) {
return 503;
}
error_page 500 502 504 /500.html;
location = /500.html {
root /var/www/quoine/current/public;
}
error_page 503 @maintenance;
location @maintenance {
rewrite ^(.*)$ /system/maintenance.html break;
}
}
I'm using Rails 3. If anyone has any idea about this problem, please tell me. It has taken me 3 hours without any progress. Thanks.
Can you provide the config you've got for Unicorn too? (The files you've provided are both for nginx.) An example of what it should look like is in the first part of the "Configuring Servers" section here: https://www.digitalocean.com/community/tutorials/how-to-deploy-rails-apps-using-unicorn-and-nginx-on-centos-6-5
I ran across this while trying to figure out why I was getting 502 errors after using the DigitalOcean 1-click install and a different version of Ruby.
I found my answer by looking at this guide: https://www.digitalocean.com/community/tutorials/how-to-use-the-1-click-ruby-on-rails-on-ubuntu-14-04-image
My issue was the following from the guide:
Once you have the location of Ruby that you are using by default,
change /etc/default/unicorn pathnames to include /usr/local/rvm/rubies
subfolder and /usr/local/rvm/gems subfolders for the newly installed
version as well as location of unicorn
Hope this helps someone.
Stack Overflowers, I have a problem with my Rails nginx configuration. I'm running a Rails 3.0.12 app, and I'm quite new to nginx.
I can't seem to get nginx to serve static assets. For every request for a file in the public folder I get a 404. I'm posting the nginx configuration I have so far; maybe I missed something.
nginx.conf:
user rails;
worker_processes 1;
daemon off;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 2048;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
gzip on;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
server_names_hash_bucket_size 64;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
sites-enabled/project.conf:
upstream project {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
# for UNIX domain socket setups:
server unix:/tmp/project.socket fail_timeout=0;
}
server {
listen 80;
root /srv/www/project/current/public;
passenger_enabled on;
server_name dev.project.eu;
server_name *.dev.project.eu;
location / {
#all requests are sent to the UNIX socket
proxy_pass http://project;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
root /srv/wwww/project/current/public;
}
}
I've tried removing the location / block from project.conf, but it didn't do anything; the assets are still not visible.
I am also aware of the serve_static_assets switch in Rails, but I'd rather have nginx serve those assets, as it should.
You need to add something like this (see the documentation on locations):
location / {
try_files $uri @ruby;
}
location @ruby {
proxy_pass http://project;
}
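Put together with the rest of the server block, a minimal sketch (reusing the root and upstream from the question) might look like this:
# Sketch: anything that exists under public/ is served by nginx directly;
# only requests that miss on disk are proxied to the Rails upstream.
server {
    listen      80;
    server_name dev.project.eu *.dev.project.eu;
    root        /srv/www/project/current/public;

    location / {
        try_files $uri @ruby;   # static file first, then the app
    }

    location @ruby {
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass       http://project;
    }
}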
I know this thread is over a year old, but I had the same problem running in production.
The thing that made it work for me was running
rake assets:precompile
in development, and uncommenting
load 'deploy/assets'
even though I am using Rails 4.