Rails process taking too long to respond

When I send a GET request to the Rails server, it takes an extremely long time to respond (about 29 minutes).
Below is the log snippet
The log says there is an error in the code, which is fine, but why does it take so long to respond (1723579 ms)? I am unable to find any reason for this kind of behavior. Previously, when the server was working fine, this JS request took only 9 ms to respond, but suddenly it started behaving like this. How should I debug the application to trace the root cause of such unexpected behavior?
Started GET "/my-server/jobs/workers?_=1356363515400" for 27*.*.*.* at 2012-12-24 21:08:35 +0530
ActionView::Template::Error ():
1: <% @jobs.each do |job| %>
2: $('#cron_<%= job.id %>').attr('data-content', '<%= distance_of_time_in_words_to_now(job.next_fire_info, true) %>');
3: <% end %>
4:
5: <% @workers.each do |worker| %>
app/models/job.rb:16:in `next_fire_info'
app/views/jobs/workers.js.erb:2:in `block in _app_views_jobs_workers_js_erb__101155230_81985760'
app/views/jobs/workers.js.erb:1:in `_app_views_jobs_workers_js_erb__101155230_81985760'
Rendered jobs/workers.js.erb (1718348.7ms)
Completed 500 Internal Server Error in 1723579ms
I am on Rails 3.1.3,
Ruby 1.9.3p194,
MongoDB v2.2.0 (pdfile version 4.5),
32-bit Ubuntu 12.04 with 2 GB RAM.

Versions of Rails 3.1 before 3.1.5 have a bug whereby, when an exception is raised from a view, Rails takes a very long time to generate the exception message.
If you can't update to 3.1.5, the fix is very simple (see the commit that fixes it) - you just need to monkey patch inspect:
module ActionDispatch
  module Routing
    class RouteSet
      # The route set is huge, and the default inspect walks the whole
      # structure; aliasing it to the cheap to_s avoids the slow error pages.
      alias inspect to_s
    end
  end
end
I used to dump this in an initializer. There's also a gem (safe_inspect) that claims to do this for you, although I never tried it.

Finally, I have found the reason it took so long to respond. The workers run periodically based on cron expressions. The root cause of this particular issue is that the cron expression for one job was entered erroneously; that's why evaluating "distance_of_time_in_words_to_now" took so much time.
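If you want to guard against this class of failure, one option is to validate cron expressions when a job is saved, so a bad expression can never reach the view. A minimal sketch, assuming ActiveModel-style validations are available on the model; CronExpression.valid? is a hypothetical stand-in for whatever cron parser the app actually uses:

# app/models/job.rb - hedged sketch; CronExpression is hypothetical.
class Job
  validate :cron_expression_must_parse

  private

  # Reject unparseable expressions up front, so next_fire_info never
  # has to grind through an impossible schedule at render time.
  def cron_expression_must_parse
    unless CronExpression.valid?(cron_expression)
      errors.add(:cron_expression, 'is not a valid cron expression')
    end
  end
end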

I had a similar problem and it was caused by authentication, so this is just for the record in case someone has the same problem in the future. The server did not let the user in and then redirected to a page that was restricted as well, and so on.

Does chef 'notifies :action, 'resource', :before' work at all as it should?

I've recently tried to perform a simple task: install a package if it does not exist, by pulling the distribution out of a web location (an in-house repo) and deleting it once it is no longer needed.
Having learned about the :before notification, I came up with the following elegant code (in this example the variable "pkg" holds the name of the distribution image, "pkg_src_location" is the URL of my web repository, and "name_of_package" is the name of the installed package):
local_image = "#{Chef::Config['file_cache_path']}/#{pkg}"

# Download the package image only when notified.
remote_file 'package_image' do
  path local_image
  source "#{pkg_src_location}/#{pkg}"
  action :nothing
end

# Install the package, pulling the image down just before the install
# and deleting it again at the end of the run.
package name_of_package do
  source local_image
  notifies :create, 'remote_file[package_image]', :before
  notifies :delete, 'remote_file[package_image]', :delayed
end
I was quite surprised that it does not work... The 'package' resource is actually converged without 'remote_file' ever being created - and it fails because source local_image is not in place...
I did a simple test:
log 'before' do
  action :nothing
end

log 'after' do
  action :nothing
end

log 'at-the-end' do
  action :nothing
end

log 'main' do
  notifies :write, 'log[before]', :before
  notifies :write, 'log[at-the-end]', :delayed
  notifies :write, 'log[after]', :immediately
end
What I learned is that 'main' is actually converged twice! Once when first encountered, and once again after the 'before' resource is converged...
Recipe: notify_test::default
* log[before] action nothing (skipped due to action :nothing)
* log[after] action nothing (skipped due to action :nothing)
* log[at-the-end] action nothing (skipped due to action :nothing)
* log[main] action write
* log[before] action write
* log[main] action write
* log[after] action write
* log[at-the-end] action write
Is it a bug or a feature? If this is a 'feature', it is a really bad one and Chef shouldn't have it at all. It is simply useless the way it works and only wastes people's time...
Can anyone with a more in-depth understanding of Chef comment on it? Is there any way to make ':before' work? Maybe I'm just doing something wrong here?
To be a bit more specific, the before timing uses the "why run" system to guess whether the resource needs to be updated. In this case, the package resource is invalid to begin with, so why-run can't tell that an update is needed.
After thinking about it a little more, I get what's going wrong here:
The before notification is fired if the actual resource will have to be updated; in Chef's internals this means getting the actual resource state to compare against the desired resource state (load_current_resource in providers).
Here you want to install a package, so Chef will ask the system about this package and its version, and then compare the result with the source you provided.
And here comes the problem: you can't compare against the source package, because it hasn't been downloaded yet.
For your case, the best bet is to leave the file on the system and get rid of the notifications.
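A minimal sketch of that workaround, reusing the placeholder names from the question:

local_image = "#{Chef::Config['file_cache_path']}/#{pkg}"

# Fetch the image unconditionally and leave it in Chef's cache;
# remote_file is idempotent, so later runs only re-download on change.
remote_file local_image do
  source "#{pkg_src_location}/#{pkg}"
end

package name_of_package do
  source local_image
end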
The before notification could be of interest if, for example, you want to launch a database backup before upgrading the DB system, or, as mentioned in the RFC for :before, to stop a service before upgrading its package.
But it should not be used to trigger another resource that provides one of the notifying resource's properties.
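For illustration, a hedged sketch of that legitimate use case (the service and package names are invented):

service 'myapp' do
  action :nothing
end

# Here why-run can tell an upgrade is needed, so :before fires reliably:
# the service stops just before the new package version is installed.
package 'myapp-server' do
  action :upgrade
  notifies :stop, 'service[myapp]', :before
  notifies :start, 'service[myapp]', :delayed
end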

Better logging from Capybara/RSpec?

I'm having a really tough time investigating the cause of a test failure. I'm a very experienced programmer and am well versed in general debugging techniques, but I'm new to Capybara and RSpec so I'm hoping there's some kind of facility I'm ignorant of that can help me.
In short, I have a test something like this:
expect { click('.fake_button'); sleep 1 }.to change { clicks.count }.by(1)
When the fake button is clicked, it triggers an AJAX call to the Rails app which, among other things, adds a click record to the database. I can think of dozens of things that could be causing this test to fail, and have had only limited success getting information out of logs. The test does not fail in development; it fails only sporadically in the test environment. One of the differences of the test environment is that the tests are run on a server in our office against a server in the cloud, so there are network delays along with other possible issues.
This is very hard to diagnose because there's so little information coming out of the failed test and of course all the database information is thrown away by the time I read about the failure. I know clicks.count didn't change in the test and I can infer that click('.fake_button') succeeded, but due to server time sync issues I can't even be sure that the click happened on the right button or that the AJAX call fired.
What I'd like are some tools to help me follow this test case in the web server logs (maybe using automatic URL parameters, for example), detailed logging about what Capybara did, and a record of the web page as it was when the failure occurred, including cookie values. Can I get any of that? Anything like that?
Capybara simulates human actions. The test code does exactly what is needed; it's what a real user would do. I don't think you should blame the code.
I think it's okay to increase the wait time, say from 1 to 2 seconds, to allow for your network latency, but it should not exceed a reasonable value, otherwise the app is not behaving as a real user would expect.
To debug Capybara tests, there are three methods, as I summarize below:
Add "save_and_open_page" at the place where you want to see the result. A saved HTML page will then open during the test. (I forget whether the "launchy" gem needs to be added.)
Temporarily mark this test as JS to see how it goes:
scenario "a fake test", js: true do
# code here
end
By doing this, a real browser will pop up and Capybara will show you, step by step, how it plays through the code.
Just run $ tail log/test.log to show what happened recently.
Building off what @Billy suggested, log/test.log was not giving me any useful information and I was already using js: true, so I tried this:
begin
  expect { click('.fake_button'); sleep 1 }.to change { clicks.count }.by(1)
rescue Exception => e
  begin
    timestamp = Time.now.strftime('%Y%m%d%H%M%S%L')
    begin
      screenshot_name = "tmp/capybara/capybara-screenshot-#{timestamp}.png"
      $stderr.puts "Trying to save screenshot #{screenshot_name} due to test failure"
      page.save_screenshot(screenshot_name)
    rescue Exception => inner
      $stderr.puts "Ignoring exception #{inner} while trying to save screenshot of test page"
    end
    begin
      # Page saved by Capybara under tmp/capybara/ by default
      save_page "capybara-html-#{timestamp}.html"
    rescue Exception => inner
      $stderr.puts "Ignoring exception #{inner} while trying to save HTML of failed test page"
    end
  ensure
    raise e
  end
end
Later I changed the test itself to take advantage of Capybara's AJAX synchronization features by doing something like this:
starting_count = clicks.count
click('.fake_button')
page.should have_css('.submitted') # Capybara is smart enough to wait for this to happen
clicks.count.should == starting_count + 1
Note that the CSS I'm looking for is something added to the page in JavaScript by the AJAX callback, so it showing up is a signal that the AJAX call completed.
The rescue blocks are important because the screenshot has a high failure rate from not having enough memory to render the full page and convert it to an image.
EDIT
Though I haven't tried it, a promising solution is Capybara::Screenshot, which automatically saves the screenshot and HTML on any failure. Just reading the code, it looks like it will have problems when the screenshot fails, and I can't tell what state the page will be in by the time the screenshot is triggered, but it certainly looks worth a try.
A nice way to debug tests is to use irb to watch what's actually happening in the browser. RSpec failures usually give decent information for simple cases, but for more complicated things I either split the case up until it is simple, or chuck it into irb for a live session to make sure it's doing what it should.
Make sure to use :selenium as your driver, and you should see Firefox come up and be driven by your irb session.
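A minimal sketch of such a session (the URL and link text are made-up examples, and this assumes your app is already running locally):

require 'capybara'
require 'capybara/dsl'

Capybara.default_driver = :selenium  # drives a real Firefox window
include Capybara::DSL

visit 'http://localhost:3000/'       # full URL, since no in-process app is configured
click_link 'Sign in'                 # hypothetical link, for illustration
save_and_open_page                   # needs the launchy gem to auto-open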

Rails Fragment Cache Won't Expire

I'm using fragment caching in my Rails 3.1 app and have one fragment that isn't expiring, and I don't know why. It's a fragment that I want to expire based on time (every hour). I'm on Heroku, using the MemCachier add-on for my caching. I don't seem to have any other caching issues.
In my app, there are three models: User, Community, and Activity. On Community#index, there is a fragment that shows Activity by Users in this Community, which I want to expire hourly. The calculation, a method in the Activity model, works fine - it's just that the fragment isn't expiring hourly (and refreshing).
In my view, I have:
<% cache("activity_#{community.id}", :expires_in => 1.hour) do %>
<-- content >
<% end %>
I've also tried making it a scheduled task by adding an expiration method for the cache in the User model:
def self.expire_activity
  Community.find_each do |community|
    ActionController::Base.new.expire_fragment('activity_#{community.id}')
  end
end
I tried to follow the answer to this question to determine how to expire the cache from a model, but with this code I get the error:
NoMethodError: undefined method 'expire_fragment' for #<Class:0x5ffc3e8>
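For what it's worth, two details in the snippet above are worth double-checking (a hedged sketch, not a confirmed fix): single quotes stop #{community.id} from interpolating, and the "#<Class:...>" in the error suggests expire_fragment ended up being called on a class rather than on a controller instance:

def self.expire_activity
  controller = ActionController::Base.new  # an instance, not the class itself
  Community.find_each do |community|
    # Double quotes so the community id actually interpolates.
    controller.expire_fragment("activity_#{community.id}")
  end
end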
I was facing a similar issue on Rails 4.2.1. It was caused by a mismatch between the server's configured timezone and that of the Rails app. Setting both to the same value fixed my issue.
On the server, assuming Ubuntu, as root run this command and follow the prompts:
dpkg-reconfigure tzdata
Within the application.rb config file, make sure the following line matches the server (the default is UTC):
# config.time_zone = 'Central Time (US & Canada)'
To get a list of the time zones available to Rails, run:
bundle exec rake time:zones:all
I know this is an old question, but it is returned high up in Google when searching for a solution.

Constant 500 Errors on Heroku

I recently switched from Heroku's Bamboo stack to the Cedar one (Rails 3.1.4, Ruby 1.9.2, Thin gem for the web server). Since then I keep getting 500 errors like the one below, where it seems the query is not acting right:
207 <13>1 2012-05-06T16:10:51+00:00 d. app web.1 - - ActiveRecord::StatementInvalid (Mysql::Error: : SELECT `foos`.* FROM `foos` WHERE `foos`.`id` = ? LIMIT 1)
It's not an error in the code, though, because the page eventually renders successfully (i.e. status 200) when I refresh it. Sometimes it takes 1 refresh, but it can take up to 4 refreshes before I get a 200.
I thought it was the database, because I was on ClearDB's free plan, but I upgraded to ClearDB's next plan with better I/O performance and it still happens.
This never happened when I was on Bamboo.
It happens on just about every page that queries the DB.
It doesn't always happen, but I'd say it happens on at least 1 in 5 page views.
The model/query doesn't matter; the same error occurs (just indicating a different model/fields than the example above).
Do you get the same errors if you are in the console (heroku run console)? I've never seen this before. Try upgrading your MySQL gem - which one are you using? (See http://api.rubyonrails.org/classes/ActiveRecord/StatementInvalid.html.) I think the correct one is mysql2: https://rubygems.org/gems/mysql2
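If the app is still on the old mysql gem, the switch is small (a sketch; the version pin is an assumption for a Rails 3.1-era app):

# Gemfile
gem 'mysql2', '~> 0.3'    # replaces: gem 'mysql'

# config/database.yml (production section)
#   adapter: mysql2       # was: adapter: mysql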

Raising route not found error

I'm writing a book on Rails 3 at the moment, and past-me has written in Chapter 3 or so that when a specific feature is run, a routing error is generated. Now, it's unlike me to write things that aren't true, so I'm pretty sure this happened once in the past.
I haven't yet been able to duplicate the scenario myself, but I'm pretty confident it's one of the forgotten settings in the environment file.
To duplicate this issue:
Generate a new rails project
important: Remove the public/index.html file
Add cucumber-rails and capybara to the "test" group in your Gemfile
run bundle install
run rails g cucumber:skeleton
Generate a new feature, call it features/creating_projects.feature
Inside this feature, put:
Feature: Creating projects
  In order to value
  As a role
  I want feature

  Scenario: title
    Given I am on the homepage
When you run this feature using bundle exec cucumber features/creating_projects.feature, it should fail with a "No route matches /" error, because you didn't define the root route. However, what I and others are seeing is that it doesn't.
Now, I've set a setting in test.rb that gets this exception page to show, but I would rather Rails did a hard raise of the exception so that it showed up in Cucumber as a failing step, like I'm pretty sure it used to, rather than a passing step.
Does anybody know what could have changed since May-ish of last year for Rails to stop doing this? I'm pretty confident it's some setting in config/environments/test.rb, but for the life of me I cannot figure it out.
After investigating the Rails source code, it seems the ActionDispatch::ShowExceptions middleware, which is responsible for raising the ActionController::RoutingError exception, is missing in the test environment. This is confirmed by running rake middleware and rake middleware RAILS_ENV=test.
You can see in https://github.com/josh/rack-mount/blob/master/lib/rack/mount/route_set.rb#L152 that it returns an 'X-Cascade' => 'pass' header, and it's ActionDispatch::ShowExceptions's responsibility to pick it up (in https://github.com/rails/rails/blob/master/actionpack/lib/action_dispatch/middleware/show_exceptions.rb#L52).
So the reason your test case is passing is that rack-mount returns the "Not Found" text with status 404.
I'll git blame people and get it fixed for you. It's this conditional here: https://github.com/rails/rails/blob/master/railties/lib/rails/application.rb#L159. If the setting is true, the error gets translated correctly, but we get the error page output. If it's false, the middleware doesn't get loaded at all. Hold on ...
Update: To clarify the previous block, you're hitting a dead end here. If you set action_dispatch.show_exceptions to false, that middleware is not loaded, resulting in the 404 from rack-mount being rendered. Whereas if you set action_dispatch.show_exceptions to true, the middleware is loaded, but it rescues the error and renders a nice "exception" page for you.
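In configuration terms, the dead end looks like this (a sketch of the setting under discussion; neither value makes the RoutingError propagate to Cucumber as a raised exception):

# config/environments/test.rb
config.action_dispatch.show_exceptions = false  # middleware not loaded:
                                                # rack-mount's plain 404 is returned
# config.action_dispatch.show_exceptions = true # middleware loaded: RoutingError is
                                                # raised but rescued into an error page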