Phusion Passenger configuration: How passenger_min_instances affects the sub-URI setup - ruby-on-rails-3

We are setting up multiple sub-URIs (three of them) in the Passenger config. We want to ensure that each sub-URI has at least one instance running at all times. How can we use the passenger_min_instances directive to do that? If I set this directive to 3, is that right? Does it mean that three instances run for each sub-URI ALL THE TIME after the first visit to the server? Thanks.
Here is the setup in the nginx server block:
server {
    listen 80;
    server_name 176.95.25.193;
    root /var/www;
    passenger_enabled on;
    rails_env production;
    passenger_base_uri /nbhy;
    passenger_base_uri /bt;
    passenger_base_uri /byop;
    passenger_min_instances 2;
}

You can put the configuration for a specific base URI in a location block, like this:
location ~ ^/nbhy(/.*|$) {
    ...
    passenger_min_instances 2;
}
location ~ ^/bt(/.*|$) {
    ...
    passenger_min_instances 3;
}
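Putting the two together, a full server block might look something like the sketch below. It simply reuses the values from the question and the location blocks above; the count for /byop is an assumption, so adjust the numbers per app as needed.
server {
    listen 80;
    server_name 176.95.25.193;
    root /var/www;
    passenger_enabled on;
    rails_env production;

    passenger_base_uri /nbhy;
    passenger_base_uri /bt;
    passenger_base_uri /byop;

    # Per-app overrides: each location block matches one sub-URI.
    location ~ ^/nbhy(/.*|$) {
        passenger_min_instances 2;
    }
    location ~ ^/bt(/.*|$) {
        passenger_min_instances 3;
    }
    location ~ ^/byop(/.*|$) {
        passenger_min_instances 1;  # assumed value
    }
}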

Related

NGINX: set_by_lua doesn't work

I have NGINX running as a proxy service and want to set the SSL key depending on an environment variable (set in the Docker Compose file).
I added this to nginx.conf:
env ENVKEY;
And then in the config file:
server {
    resolver 10.0.0.4 valid=300s;
    resolver_timeout 60s;
    server_name _;
    listen 443;
    ssl on;
    # perl_set $envkey 'sub { return $ENV{"ENVKEY"}; }';
    set_by_lua $envkey 'return os.getenv("ENVKEY")';
    ssl_certificate /etc/nginx/ssl/jm-website-$envkey.crt;
    ssl_certificate_key /etc/nginx/ssl/jm-website-$envkey.key;
I also tried to use perl_set, but it can only be used in a location block, whereas ssl_certificate belongs in the http or server block.
Using set_by_lua, I get this error:
nginx: [emerg] BIO_new_file("/etc/nginx/ssl/jm-website-$envkey.crt")
failed (SSL: error:02001002:system library:fopen:No such file or
directory:fopen('/etc/nginx/ssl/jm-website-$envkey.crt','r')
error:2006D080:BIO routines:BIO_new_file:no such file)
Although the variable is present in the environment:
root@d0718b0a3361:/etc/nginx# echo $ENVKEY
dev
What am I doing wrong here?
Maybe there is a better approach?
I do know that this is an old thread. I'm posting this for posterity's sake.
I was having the same problem as you were/are.
What is happening is that there's an order in which the Lua blocks get executed; set_by_lua happens right after the certificate is validated.
What you could do is either render your nginx.conf using a rendering engine of your choice (e.g. Python Jinja) or write your own ssl_certificate_by_lua_block that reads from an environment variable. Here is an example of an implementation using said block. You could also check how Kong does it.
Hope it helps :)
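For reference, here is a minimal sketch of that ssl_certificate_by_lua_block approach. It assumes OpenResty (or lua-nginx-module plus lua-resty-core's ngx.ssl module), that env ENVKEY; is declared at the top level of nginx.conf, and it reuses the certificate paths from the question; the placeholder certificate files are hypothetical.
server {
    listen 443 ssl;
    server_name _;

    # nginx still requires a static certificate here; the Lua block below
    # swaps it out during the TLS handshake.
    ssl_certificate     /etc/nginx/ssl/placeholder.crt;
    ssl_certificate_key /etc/nginx/ssl/placeholder.key;

    ssl_certificate_by_lua_block {
        local ssl = require "ngx.ssl"

        -- os.getenv() only sees variables whitelisted with "env ENVKEY;" in nginx.conf
        local envkey = os.getenv("ENVKEY") or "dev"

        local function read_file(path)
            local f, err = io.open(path, "r")
            if not f then
                ngx.log(ngx.ERR, "cannot open ", path, ": ", err)
                return nil
            end
            local data = f:read("*a")
            f:close()
            return data
        end

        local cert_pem = read_file("/etc/nginx/ssl/jm-website-" .. envkey .. ".crt")
        local key_pem  = read_file("/etc/nginx/ssl/jm-website-" .. envkey .. ".key")
        if not cert_pem or not key_pem then
            return ngx.exit(ngx.ERROR)
        end

        -- Replace the placeholder certificate with the environment-specific pair
        ssl.clear_certs()
        ssl.set_der_cert(ssl.cert_pem_to_der(cert_pem))
        ssl.set_der_priv_key(ssl.priv_key_pem_to_der(key_pem))
    }
}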

Nginx giving 502 Bad Gateway error on port 80

I am using nginx, and trying to load the site on port 80 (or the root) gives a 502 Bad Gateway error after trying to connect for a while.
When I run netstat -ltnp | grep :80 I get the results below.
And here is my nginx.conf:
#user nginx;
# The number of worker processes is changed automatically by CustomBuild, according to the number of CPU cores, if it's set to "1"
worker_processes 4;
pid /var/run/nginx.pid;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
events {
    include /etc/nginx/nginx-events.conf;
}
http {
    include /etc/nginx/mime.types;
    # For user configurations not maintained by DirectAdmin. Empty by default.
    include /etc/nginx/nginx-includes.conf;
    # Supplemental configuration
    #include /etc/nginx/nginx-modsecurity-enable.conf;
    include /etc/nginx/nginx-defaults.conf;
    include /etc/nginx/nginx-gzip.conf;
    include /etc/nginx/nginx-proxy.conf;
    include /etc/nginx/directadmin-ips.conf;
    include /etc/nginx/directadmin-settings.conf;
    include /etc/nginx/nginx-vhosts.conf;
    include /etc/nginx/directadmin-vhosts.conf;
    server {
        listen 80;
        root /var/www/html;
        index index.html index.htm index.php;
    }
}
Note: any port other than 80 works fine. Obviously 8080 and 8081 are taken, but besides those, any other port (e.g. 8000) works fine.
Things I have tried so far:
this solution, and using proxy.
What could possibly be causing this?
Alright, so I started commenting out all the includes, and came to realize that the line below was defining another port-80 server:
include /etc/nginx/directadmin-vhosts.conf;
So I defined my servers in that file (directadmin-vhosts.conf) instead of in nginx.conf, and everything was fine after that.
Note that, for some reason, this command wasn't showing all of the listeners that were taking port 80:
netstat -ltnp | grep :80
So I used the command below instead, and that's when I realized that nginx was also using port 80 somewhere else.
netstat -ltnp | grep :*
Noob mistake I reckon, but I hope this answer helps somebody out there who is struggling with the same issue.
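In case it helps someone in a similar DirectAdmin setup, the idea is roughly the sketch below: the extra server block goes into the DirectAdmin-managed vhost include rather than into nginx.conf. The server_name is hypothetical, and DirectAdmin normally generates this file itself, so treat it purely as an illustration.
# /etc/nginx/directadmin-vhosts.conf (DirectAdmin-managed include)
server {
    listen 80;
    server_name example.com;   # hypothetical domain
    root /var/www/html;
    index index.html index.htm index.php;
}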

Nginx Rails website and Passenger with multiple ports

I would like to run two versions of my Rails site, one for production and one for development. The production one will listen on port 80 and the development one on port 9033. Here are my server blocks, which are located in the same config file:
server {
    listen 80 default_server;
    server_name mywebsite.com;
    passenger_enabled on;
    passenger_app_env production;
    root /path/to/public/dir;
}
server {
    listen 9033 default_server;
    server_name mywebsite.com;
    passenger_enabled on;
    passenger_app_env development;
    root path/to/public/dir;
    passenger_friendly_error_pages on;
}
The problem is that when I try to connect to the website through my browser, regardless of which port I use, I always get the version of the website corresponding to the environment specified in the first server block. So in the example I gave above, it would always serve the production version of my website.
Why does the first server block override the second, and how can I make it so that I can access either version of my website without going in and manually changing the config files and reloading nginx?
UPDATE:
None of the suggestions were working, even after clearing the browser cache before sending every HTTP request. I changed my server blocks to the following in the hope that my server would return a different version of the website per domain:
server {
    listen *:80;
    server_name mywebsite.com;
    passenger_enabled on;
    passenger_app_env production;
    root /home/alex/code/m2m/public/;
}
server {
    listen *:80;
    server_name dev.mywebsite.com;
    passenger_enabled on;
    passenger_app_env development;
    root /home/alex/code/m2m/public/;
    passenger_friendly_error_pages on;
}
and then added the following line in my /etc/hosts file
my.ip.addr.ess dev.mywebsite.com
But requests to both domains result in only the production version of my website being returned. Note that I'm using the default nginx.conf file. Is there a way I can debug my browser (Chrome v40.0.2214.111 (64-bit)) to see if/where my requests are being altered? I'm thinking the problem lies client-side, since the advice the commenters have given me seems like it should work.
And if you try this:
listen *:80;
and
listen *:9033;
This was my recommendation for the part of the question that concerns the nginx config.
With those listen directives in place, according to the nginx documentation, nginx first matches server blocks on IP:port, and then looks at the server_name directives of the blocks that matched that IP:port. So if a request to the right port still ends up in the wrong environment, it has something to do with either the app or the Passenger directives.
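Concretely, applied to the original two-port setup from the question, that suggestion would look something like this (paths and environments are simply taken from the question; this is only a sketch):
server {
    listen *:80;
    server_name mywebsite.com;
    passenger_enabled on;
    passenger_app_env production;
    root /path/to/public/dir;
}
server {
    listen *:9033;
    server_name mywebsite.com;
    passenger_enabled on;
    passenger_app_env development;
    root /path/to/public/dir;
    passenger_friendly_error_pages on;
}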

nginx 1.2.0 with rails 3.2.3 and passenger 3.0.12 - 403 error

Folks
I am trying to set up Ruby on Rails 3.2.3 with Passenger 3.0.12 and nginx 1.2. I have followed the instructions to compile nginx with the Passenger module. Following is my nginx configuration. When I try to go to the root page (using curl localhost), it gives me a 403 Forbidden error. It does not seem to pass the request on to Passenger. Let me know if I am missing something simple. Thank you,
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    passenger_root /home/ubuntu/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.12;
    passenger_ruby /home/ubuntu/.rvm/wrappers/ruby-1.9.3-p194/ruby;
    rails_env development;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name localhost;
        passenger_enabled on;
        location / {
            root /home/ubuntu/rails/myapp/public;
        }
    }
}
EDIT
If I do the following:
1) create a new app, "dummy"
2) change config.ru to print 'hello world'
3) change the root to point to the dummy app's public directory
then the error goes away.
Also, if I create a brand new Rails app, I am able to access the default Rails app page. I have also tried making the directory perms 777 for the entire myapp directory structure. No joy.
Solved it. The passenger_enabled directive has to be moved inside the location block.
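In other words, the fixed server block from the question would look roughly like this (same paths as above; just a sketch of the stated fix):
server {
    listen 80;
    server_name localhost;
    location / {
        root /home/ubuntu/rails/myapp/public;
        passenger_enabled on;
    }
}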

Apache + NginX reverse proxy: serving static and proxy files within nested URLs

I have Apache running on my server on port 8080, with NginX running as a reverse proxy on port 80. I am attempting to get NginX to serve static HTML files for specific URLs, but I am struggling to write the NginX configuration that does this: I have conflicting directives because my URLs are nested within each other.
Here's what I want to have:
One URL is at /example/ and I want NginX to serve a static HTML file located on my server at /path/to/www/example-content.html instead of letting Apache serve the page to NginX.
Another URL is at /example/images/ and I want Apache to serve that page to NginX, just as it does for the rest of the site.
I have set up my nginx.conf file like this:
server {
    listen 80;
    server_name localhost;
    # etc...
    location / {
        proxy_pass http://127.0.0.1:8080/;
    }
    # etc...
My attempt to serve the static file at /example/ from NginX went like this:
location /example/ {
    alias /path/to/www/;
    index example-content.html;
}
This works, but it means everything after the /example/ URL (such as /example/images/) is aliased to that local path also.
I would like to use regex instead but I've not found a way to serve up the file specifically, only the folder. What I want to be able to say is something like this:
location ~ ^/example/$ {
    alias /path/to/www/example-content.html;
}
This specifically matches the /example/ folder, but using a filename like that is invalid syntax. Using the explicit location = /example/ in any case doesn't work either.
If I write this:
location ~ ^/example/$ {
    alias /path/to/www/;
    index example-content.html;
}
location ~ /example/(.+) {
    alias /path/to/www/$1;
}
The second location block attempts to undo the damage of the first, but it ends up overriding the first block and fails to serve the static file.
So I'm at a bit of a loss. Any advice at all extremely welcome! Many thanks.
Since you say you have Apache running, I assume it is there to run dynamic content such as PHP (otherwise it is not needed). In that case, the example config below will serve all static content with Nginx and pass everything else to Apache.
server {
    # default index should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    index index.html index.php;
    # default root should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    root /path/to/www/;
    location / {
        try_files $uri $uri/ @proxy;
    }
    location @proxy {
        proxy_pass http://127.0.0.1:8080;
        # Other Proxy Params
    }
    location ~ \.php$ {
        error_page 418 = @proxy;
        location ~ \..*/.*\.php$ { return 400; }
        return 418;
    }
}
What this assumes is that you are following a structured setup where the default file in every folder is either called "index.html" or "index.php" such as "/example/index.html", "some-folder/index.html" and "/some-other-folder/index.html".
With this, navigating to "/example/", "/some-folder/" or "/some-other-folder/" will just work with no further action.
If each folder has default files with random different names, such as "/example/example-content.html", "some-folder/some-folder.html" and "some-other-folder/yet-another-different-default.html", then it becomes a bit more difficult as you then need to do something like
server {
    # default index should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    index index.html index.php;
    # default root should be defined once as high up the tree as possible
    # only override lower down if absolutely required
    root /path/to/www/;
    location / {
        try_files $uri $uri/ @proxy;
    }
    location @proxy {
        # Proxy params
        proxy_pass http://127.0.0.1:8080;
    }
    location ~ .+\.php$ {
        error_page 418 = @proxy;
        location ~ \..*/.*\.php$ { return 400; }
        return 418;
    }
    location /example/ {
        # Need to keep defining new index due to lack of structure
        # No need for alias or new root
        index example-content.html;
    }
    location /some-folder/ {
        # Need to keep defining new index due to lack of structure
        # No need for alias or new root
        index some-folder.html;
    }
    location /some-other-folder/ {
        # Need to keep defining new index due to lack of structure
        # No need for alias or new root
        index yet-another-different-default.html;
    }
    # Keep adding new location blocks for each folder
    # Obviously not the most efficient arrangement
}
The better option is to have a structured and logical layout of files on the site instead of multiple differing locations.