Rails application HTTP request - ruby-on-rails-3

When I run my Rails application under Apache with Passenger, open two browsers, and log each request's thread ID using log4r, I see that both requests use the same thread ID. How is that possible?
If I sleep in one request, the other request is blocked until the sleep expires.
Where can I configure the app to use a different thread for each request, or set a maximum thread count?
Is this the behavior only in the development environment, or in production too? How can I overcome it?

config.threadsafe!
Put it in your production.rb or development.rb.
I had the same problem when calling a local web service inside a controller action.
Puma also has better concurrency, but that threadsafe config made WEBrick multi-threaded for me.
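For reference, a minimal sketch of where that setting goes in a Rails 3.x environment file ("YourApp" is a placeholder for your application's module name):

# config/environments/production.rb (or development.rb) -- a minimal sketch
YourApp::Application.configure do
  # Removes Rails' request-wide lock so the server can run each
  # request on its own thread (your code must be thread-safe)
  config.threadsafe!
end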

rufus-scheduler and delayed_job on Heroku: why use a worker dyno?

I'm developing a Rails 3.2.16 app and deploying to a Heroku dev account with one free web dyno and no worker dynos. I'm trying to determine if a (paid) worker dyno is really needed.
The app sends various emails. I use delayed_job_active_record to queue those and send them out.
I also need to check a notification count every minute. For that I'm using rufus-scheduler.
rufus-scheduler seems able to run a background task/thread within a Heroku web dyno.
On the other hand, everything I can find on delayed_job indicates that it requires a separate worker process. Why? If rufus-scheduler can run a daemon within a web dyno, why can't delayed_job do the same?
I've tested the following for running my every-minute task and working off delayed_jobs, and it seems to work within the single Heroku web dyno:
config/initializers/rufus-scheduler.rb

require 'rufus-scheduler'
require 'delayed/command'

s = Rufus::Scheduler.singleton
s.every '1m', :overlap => false do # Every minute
  Rails.logger.info ">> #{Time.now}: rufus-scheduler task started"
  # Check for pending notifications and queue to delayed_job
  User.send_pending_notifications
  # Work off delayed_jobs without a separate worker process
  Delayed::Worker.new.work_off
end
This seems so obvious that I'm wondering if I'm missing something. Is this an acceptable way to handle the delayed_job queue without the added complexity and expense of a separate worker process?
Update
As @jmettraux points out, Heroku will idle an inactive web dyno after an hour. I haven't set it up yet, but let's assume I'm using one of the various keep-alive methods to keep it from sleeping (see: Easy way to prevent Heroku idling?).
According to this:
https://blog.heroku.com/archives/2013/6/20/app_sleeping_on_heroku
your dyno will go to sleep if it hasn't serviced requests for an hour. No dyno, no scheduling.
This could help as well: https://devcenter.heroku.com/articles/clock-processes-ruby
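Following that article, the clock-process approach boils down to a small standalone script run as its own dyno, with a Procfile entry such as clock: bundle exec ruby clock.rb. A rough sketch using rufus-scheduler (the article itself uses the clockwork gem; the boot line and task body here are assumptions, not the poster's code):

# clock.rb -- sketch of a dedicated clock process
require File.expand_path('../config/environment', __FILE__) # load the Rails app
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '1m' do
  User.send_pending_notifications # the notification check from the question
end

scheduler.join # block so the dedicated clock dyno stays alive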

Play 2.1.0 + Apache 2.2 Reverse proxy => 502 proxy error when idle

Config
We have a Play 2.1.0 app with AngularJS set up in production mode.
We have a reverse-proxy load balancer set up with Apache 2.2, roughly as described here:
http://www.playframework.com/documentation/2.1.0/HTTPServer
The whole app runs in an iframe, navigated to from a JBoss application.
Problem
Most of the time it works, but occasionally, when the connection has been left idle for two or three hours (nobody has hit the reverse-proxy URL to load the JBoss/Play pages), we get a 502 Proxy Error in the iframe content after a wait of a few minutes.
Play receives the request but somehow decides not to respond at all. This occurs only for the first request or two after the wake-up; when we then refresh the page, Play receives the request and responds properly.
Tried
We took a tcpdump on the Play port: in the failed scenario all the requests are received, but no response is sent from Play, whereas the same requests are answered by Play on subsequent attempts.
The X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Server, and Connection: Keep-Alive headers are all present in the tcpdump of the request whose response was lost.
We tried KeepAlive with various timeouts in the proxy server, without much help. Why doesn't Play respond to the initial connections after the idle period? Is there any configuration we can set to keep it alive?
Workaround
Polling the Play server URL from the same server every half hour makes the issue unreproducible (a trivial poller along those lines is sketched below).
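For illustration, such a poller can be as small as the following sketch (Ruby here purely as an example; the URL is a placeholder, not the actual host):

# keepalive.rb -- hypothetical keep-alive poller
require 'net/http'

loop do
  Net::HTTP.get_response(URI('http://your-proxy-host/')) # touch the app
  sleep 30 * 60 # wait half an hour
end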
Still, any help or suggestions to fix this issue properly would be really appreciated.
I tried to solve this problem myself. Approaches like the answers mentioned here and here did not change anything.
I then decided to go back to nginx, which I have used with Play applications before. The setup can be found here. Since then the problem has been gone.

puma hot restarting on_restart function: necessary for Rails apps?

In the configuration file example for Puma, it says the following for the on_restart function:
Code to run before doing a restart. This code should close log files,
database connections, etc.
Do I need to implement this for a Rails app, to close connections to the db and the logfile, or is that taken care of automatically? If not, how do I actually do all that?
No, you don't; Rails takes care of reloading your code automatically. But this code-reloading support is limited: for example, changes to application.rb are not applied until you restart the app server.
But I would recommend Phusion Passenger over Puma. Phusion Passenger is a lot easier to set up, especially once you hit production. It integrates directly into Apache and Nginx and provides advanced features like dynamic worker management. Phusion Passenger is very mature, stable, and performant, and is used by the likes of New York Times, Symantec, AirBnB, etc.
I've found that using Redis as my Rails.cache provider causes an error page upon the first request every time my Rails/Puma server is restarted. The error I got was:
Redis::InheritedError (Tried to use a connection from a child process
without reconnecting. You need to reconnect to Redis after forking.)
To get around this error, I didn't add anything to on_restart, but I did have to add code to on_worker_boot (I am running Puma with workers=4):
puma-config.rb

on_worker_boot do
  puts "Reconnecting Rails.cache"
  Rails.cache.reconnect
end
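A fuller puma-config.rb along these lines might look like the sketch below; the thread counts and the ActiveRecord reconnect are assumptions, shown because every connection-like resource (database, cache, Redis) needs the same post-fork treatment:

# puma-config.rb -- a sketch, not the poster's exact file
workers 4      # forked worker processes (matches workers=4 above)
threads 1, 16  # assumed min/max threads per worker
preload_app!   # load the app once in the parent, before forking

on_worker_boot do
  # Forked children must not reuse the parent's sockets;
  # re-establish each connection-like resource here.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
  Rails.cache.reconnect if Rails.cache.respond_to?(:reconnect)
end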

concurrent handling in thin, unicorn, puma, webrick

If I have the following action in a controller
def give_a
  print a
  a = a + 1
end
What happens in each web server when a request comes in, and when multiple requests are received?
I know that WEBrick and Thin are single-threaded, so I guess that means a second request doesn't get processed until the current request is done.
What happens in concurrent web servers such as Puma or Unicorn (and perhaps others)?
If two requests come in and two Unicorn workers handle them, would both responses give the same a value (in a situation where both requests enter the method at the same time)? Or does it all depend on the server itself, with access to the data being serialized?
Is there a way to have a mutex/semaphore for the concurrent web servers?
AFAIK, the Rails application instantiates a fresh controller (YourController.new) for each request env.
From what you posted it is not possible to tell what a refers to. If it is some shared class variable, then it is mutable state and could be modified from both request threads.
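To the mutex question: Ruby's standard Mutex works inside any of these servers, as in the sketch below (assuming a really is a process-wide class variable; note this only serializes threads within one process, so separate Unicorn worker processes would still each hold their own copy):

# Hypothetical controller illustrating a mutex around shared state
class GiveAController < ApplicationController
  @@a = 0            # process-wide shared state (assumption)
  @@lock = Mutex.new # guards @@a against concurrent threads

  def give_a
    value = @@lock.synchronize { @@a += 1 }
    render :text => value.to_s
  end
end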

How to test a REST API with Zend?

I don't know how to write a unit test for my REST controller. Here is my code:
public function testPostAction()
{
    // Set up the request before dispatching; otherwise the POST data is ignored
    $this->request->setHeader('Content-Type', 'text/json')
                  ->setMethod('POST')
                  ->setPost(array(
                      'chain_name' => 'mychaintest'
                  ));
    $this->dispatch('/chain');
    $this->assertAction('post'); // is this the right assertion?
}
How do I make a POST?
Not sure if this is what you need, but if you want to make a POST call (over HTTP) to test your REST service, you can use Zend_Http_Client:
http://framework.zend.com/manual/en/zend.http.client.html
Anyway, if this is for unit testing it will be more complicated, since you'll need your application (the current build being tested) to be live and accessible on a server. That depends on how you have configured your build environment.
There should be a staging (virtual) machine where the build is (automatically) deployed before the tests are run. That machine should be visible to the machine running the tests.
Hope this helped. Cheers!
So, basically your question is how to emulate calling PUT and DELETE in your controller tests?
Since this apparently doesn't work:
$this->request->setMethod('PUT');
You can access both these actions with a plain HTTP POST by providing a _method parameter.
So to call PUT:
$this->request->setMethod('POST');
$this->dispatch('articles/123?_method=put');
To call DELETE:
$this->request->setMethod('POST');
$this->dispatch('articles/123?_method=delete');
More reading on how to deal with RESTful routing here - http://framework.zend.com/manual/en/zend.controller.router.html#zend.controller.router.routes.rest