Rails test fails: "Requests did not finish in 60 seconds" (Ruby on Rails 5)

After upgrading Rails from 4.2 to 5.2, my test gets stuck on a request that works fine against the development server. I'm getting the following failure when running the test suite:
Failures:
1) cold end overview shows cold end stats
Failure/Error: example.run
RuntimeError:
Requests did not finish in 60 seconds
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara/server.rb:94:in `rescue in wait_for_pending_requests'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara/server.rb:91:in `wait_for_pending_requests'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara/session.rb:130:in `reset!'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara.rb:314:in `block in reset_sessions!'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara.rb:314:in `reverse_each'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara.rb:314:in `reset_sessions!'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara/rspec.rb:22:in `block (2 levels) in <top (required)>'
# ./spec/spec_helper.rb:43:in `block (3 levels) in <top (required)>'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/database_cleaner-1.6.2/lib/database_cleaner/generic/base.rb:16:in `cleaning'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/database_cleaner-1.6.2/lib/database_cleaner/base.rb:98:in `cleaning'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/database_cleaner-1.6.2/lib/database_cleaner/configuration.rb:86:in `block (2 levels) in cleaning'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/database_cleaner-1.6.2/lib/database_cleaner/configuration.rb:87:in `cleaning'
# ./spec/spec_helper.rb:37:in `block (2 levels) in <top (required)>'
# ------------------
# --- Caused by: ---
# Timeout::Error:
# execution expired
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara/server.rb:92:in `sleep'
Top 1 slowest examples (62.59 seconds, 97.0% of total time):
cold end overview shows cold end stats
62.59 seconds ./spec/features/cold_end_overview_spec.rb:13
Finished in 1 minute 4.51 seconds (files took 4.15 seconds to load)
1 example, 1 failure
My spec_helper.rb has this configuration:
RSpec.configure do |config|
  config.include FactoryBot::Syntax::Methods

  config.around(:each) do |example|
    DatabaseCleaner[:active_record].clean_with(:truncation)
    DatabaseCleaner.cleaning do
      if example.metadata.key?(:js) || example.metadata[:type] == :feature
        # VCR.configure { |c| c.ignore_localhost = true }
        WebMock.allow_net_connect!
        VCR.turn_off!
        VCR.eject_cassette
        example.run
      else
        # WebMock.disable_net_connect!
        VCR.turn_on!
        cassette_name = example.metadata[:full_description]
                          .split(/\s+/, 2)
                          .join('/')
                          .underscore.gsub(/[^\w\/]+/, '_')
        # VCR.configure { |c| c.ignore_localhost = false }
        VCR.use_cassette(cassette_name) { example.run }
        VCR.turn_off!
        WebMock.allow_net_connect!
      end
    end
  end

  config.expect_with :rspec do |expectations|
    expectations.include_chain_clauses_in_custom_matcher_descriptions = true
  end

  config.mock_with :rspec do |mocks|
    mocks.verify_partial_doubles = true
  end

  config.filter_run :focus
  config.run_all_when_everything_filtered = true
  config.example_status_persistence_file_path = "spec/examples.txt"

  if config.files_to_run.one?
    config.default_formatter = 'doc'
  end

  # Print the 10 slowest examples and example groups at the
  # end of the spec run, to help surface which specs are running
  # particularly slow.
  config.profile_examples = 10

  # Run specs in random order to surface order dependencies. If you find an
  # order dependency and want to debug it, you can fix the order by providing
  # the seed, which is printed after each run.
  #     --seed 1234
  config.order = :random

  # Seed global randomization in this process using the `--seed` CLI option.
  # Setting this allows you to use `--seed` to deterministically reproduce
  # test failures related to randomization by passing the same `--seed` value
  # as the one that triggered the failure.
  Kernel.srand config.seed
end
# Selenium::WebDriver.logger.level = :debug
# Selenium::WebDriver.logger.output = 'selenium.log'
Capybara.register_driver :selenium_chrome_headless do |app|
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: { args: %w[headless no-sandbox disable-dev-shm-usage disable-gpu window-size=1200,1500] },
    loggingPrefs: { browser: 'ALL' }
  )
  Capybara::Selenium::Driver.new(app, browser: :chrome, desired_capabilities: capabilities)
end

Chromedriver.set_version '2.39'
Capybara.javascript_driver = :selenium_chrome_headless
Capybara::Screenshot.prune_strategy = :keep_last_run
In my spec, the line sign_in current_user takes too much time: it redirects to a page and never gets a response, even after a long wait, while the same flow works in the development environment.
What could be the reason? If you need anything else, please comment.

I've just arrived here myself after upgrading from 4.2 to 5.1 and now 5.2, and I'm seeing the same thing in my testing: when I have frozen a test in mid-request with binding.pry, I get the message Requests did not finish in 60 seconds. What a great story; skip to the end for the tl;dr (I may have figured it out).
I have upgraded all of the gems incrementally, so I can preserve the ability to bisect and observe the source of interesting changes like this one. I only noticed this new 60-second timeout after changing over from chromedriver-helper (which reported it had been deprecated) to the new webdrivers gem that's taking over. But that seems to be unrelated: I searched webdrivers for any timeout or 60-second value and found only references to an unrelated Pull Request #60 (fixes Issue #59).
I searched my gem source directory for this message, Requests did not finish in 60 seconds, and found that it was not in fact from an older version of Capybara: it has been raised by versions dating back to at least 3.9.0, and it is still present in the most current version, 3.24.0, in lib/capybara/server.rb.
The object used there is a Timer; you can find its interface here, in the helper:
https://github.com/teamcapybara/capybara/blob/320ee96bb8f63ac9055f7522961a1e1cf8078a8a/lib/capybara/helpers.rb#L79
This particular message is raised out of the method wait_for_pending_requests, which passes a hard-coded 60 into the :expire_in named parameter and afterwards re-raises any errors that were encountered in the server thread. This means the time is not configurable. Sixty seconds is probably a reasonable length of time to wait for an in-progress web request to complete, although it's a bit inconvenient for my test.
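In current Capybara the logic boils down to something like this paraphrased sketch (not the verbatim source; see lib/capybara/server.rb for the real code):
# Paraphrased sketch of Capybara::Server#wait_for_pending_requests (Capybara 3.x)
def wait_for_pending_requests
  timer = Capybara::Helpers.timer(expire_in: 60) # the hard-coded 60 seconds
  while pending_requests?
    raise "Requests did not finish in 60 seconds" if timer.expired?
    sleep 0.01
  end
  # ...after which any error captured in the server thread is re-raised
end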
That method is only called in one place, reset!, which you can find defined here in capybara/session.rb: https://github.com/teamcapybara/capybara/blob/320ee96bb8f63ac9055f7522961a1e1cf8078a8a/lib/capybara/session.rb#L126
The reset! method is an interesting one that comes with some documentation about how it's used. @server&.wait_for_pending_requests looks like it calls wait_for_pending_requests if there is an active server, and then raise_server_error!, which similarly acts only if @server&.error is truthy.
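Roughly paraphrased (again, trust the linked source over this sketch):
# Paraphrased sketch of Capybara::Session#reset!
def reset!
  if @touched
    driver.reset!
    @touched = false
  end
  @server&.wait_for_pending_requests # may block for up to 60 seconds
  raise_server_error!                # re-raises an error from the server thread, if any
end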
Now we find that reset! comes with two aliases: this reset! message is received whenever Capybara calls cleanup! or reset_session!. At this point we can probably understand what happened, but it's still a little mysterious, since I've been using chromedriver-helper and Selenium testing for several years and never recall seeing this 60-second timeout before. I'm hesitant to point the finger at webdrivers, but I don't have any other answer for why this timeout is new. I haven't really done anything that could account for it except upgrading to this gem and the other gems, plus clearing out deprecation warnings.
It seems possible that in Rails 5.1+, Capybara calls reset! a lot more often, maybe more than just between test examples. Especially when you read the documentation of that method, think about the Single-Page Application focus there has been lately, and consider all the things the reset! documentation tells you it does not reset (browser cache, HTML5 local storage, IndexedDB, Web SQL database, etc.). Or maybe I'm imagining it, and this isn't new. I'd also guess there are a lot of ways reset! can be called without landing in this timeout code, and that they are likely driver-dependent.
Did you happen to change to the webdrivers gem when you did your Rails upgrade?
Edit: I reverted to chromedriver-helper just to be sure, and that wasn't it. What's actually happening is that my test is failing in one thread while the server has been left with a binding.pry session open. Capybara has moved on to the next test, and to get a fresh session it has called reset!; 60 seconds later I am still in my pry session, and the server is still not ready to serve a root request. I have a feeling the threading behavior of Capybara has changed: in my memory, a pry session opened during a server request would block the test from failing until it had returned. Apparently that's no longer what happens.
How did you arrive here? Unfortunately I have no idea, but this is a fair description of what's happening when you receive that message.

Related

Code coverage not working when used in conjunction with the parallel_spec task

I would like to know why parallel RSpec shows a different coverage percentage and different missed resources compared to when I run without parallelisation.
Here is the output:
Sysctl[net.ipv6.conf.all.accept_redirects]
Sysctl[net.ipv6.conf.all.disable_ipv6]
Sysctl[net.ipv6.conf.default.accept_ra]
Sysctl[net.ipv6.conf.default.accept_redirects]
Sysctl[net.ipv6.conf.default.disable_ipv6]
Sysctl[net.ipv6.conf.lo.disable_ipv6]
Sysctl[vm.min_free_kbytes]
Sysctl[vm.swappiness]
Systemd::Unit_file[puppet_runner.service]
Users[application]
Users[global]
F
Failures:
1) Code coverage. Must be at least 95% of code coverage
Failure/Error: RSpec::Puppet::Coverage.report!(95)
expected: >= 95.0
got: 79.01
# /usr/local/bundle/gems/rspec-puppet-2.6.11/lib/rspec-puppet/coverage.rb:104:in `block in coverage_test'
# /usr/local/bundle/gems/rspec-puppet-2.6.11/lib/rspec-puppet/coverage.rb:106:in `coverage_test'
# /usr/local/bundle/gems/rspec-puppet-2.6.11/lib/rspec-puppet/coverage.rb:95:in `report!'
# ./spec/spec_helper.rb:22:in `block (2 levels) in <top (required)>'
Finished in 42.12 seconds (files took 2.11 seconds to load)
995 examples, 1 failure
Failed examples:
rspec # Code coverage. Must be at least 95% of code coverage
2292 examples, 2 failures
....................................................................
Total resources: 1512
Touched resources: 1479
Resource coverage: 97.82%
Untouched resources:
Apt::Source[archive.ubuntu.com-lsbdistcodename-backports]
Apt::Source[archive.ubuntu.com-lsbdistcodename-security]
Apt::Source[archive.ubuntu.com-lsbdistcodename-updates]
Apt::Source[archive.ubuntu.com-lsbdistcodename]
Apt::Source[postgresql]
Finished in 1 minute 25.3 seconds (files took 1.43 seconds to load)
2292 examples, 0 failures
Because it is not entirely clear from the question, I assume here that you have set up code coverage by adding a line to your spec/spec_helper.rb like:
at_exit { RSpec::Puppet::Coverage.report!(95) }
The coverage report is a feature provided by rspec-puppet.
Also, I have assumed that you have more than one spec file containing your tests, and that these are being run in parallel by calling the parallel_spec task provided by puppetlabs_spec_helper.
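For context, that task usually comes from requiring the gem's rake tasks in your Rakefile; the snippet below is a sketch of the standard setup:
# Rakefile
require 'puppetlabs_spec_helper/rake_tasks'
# provides `rake spec` (one process) and `rake parallel_spec`
# (several worker processes, splitting the spec files between them)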
The problem is this:
For code coverage to work properly, all of the RSpec examples need to run within the same process (see the code here).
Meanwhile, for parallelisation to occur, there must be multiple spec files, which are run in parallel in separate processes. That limitation arises from the parallel_tests library that is used by the parallel_spec task. See its README.
The code coverage report, therefore, only counts resources that were seen inside each process.
Example:
class test {
  file { '/tmp/foo':
    ensure => file,
  }
  file { '/tmp/bar':
    ensure => file,
  }
}
Spec file 1:
require 'spec_helper'

describe 'test' do
  it 'is expected to contain file /tmp/foo' do
    is_expected.to contain_file('/tmp/foo').with({
      'ensure' => 'file',
    })
  end
end
Spec file 2:
require 'spec_helper'

describe 'test' do
  it 'is expected to contain file /tmp/bar' do
    is_expected.to contain_file('/tmp/bar').with({
      'ensure' => 'file',
    })
  end
end
spec_helper.rb:
require 'puppetlabs_spec_helper/module_spec_helper'
at_exit { RSpec::Puppet::Coverage.report!(95) }
Run in parallel:
Total resources: 2
Touched resources: 1
Resource coverage: 50.00%
Untouched resources:
File[/tmp/bar]
Finished in 0.10445 seconds (files took 1.03 seconds to load)
1 example, 0 failures
Total resources: 2
Touched resources: 1
Resource coverage: 50.00%
Untouched resources:
File[/tmp/foo]
Must be at least 95% of code coverage (FAILED - 1)
4 examples, 0 failures
Took 1 seconds
Run without parallelisation:
Finished in 0.12772 seconds (files took 1.01 seconds to load)
2 examples, 0 failures
Total resources: 2
Touched resources: 2
Resource coverage: 100.00%
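The split above can be mimicked in plain Ruby (a hypothetical illustration, not rspec-puppet code): each worker process keeps its own in-memory list of touched resources and fires its own at_exit hook, so neither worker ever sees the other's results.
# Hypothetical illustration of per-process coverage state (POSIX fork).
TOTAL = 2

if fork.nil?
  # Child process, analogous to the worker running spec file 2.
  touched = ['File[/tmp/bar]']
  at_exit { puts "worker 2: #{100.0 * touched.length / TOTAL}%" } # => worker 2: 50.0%
else
  # Parent process, analogous to the worker running spec file 1.
  touched = ['File[/tmp/foo]']
  at_exit { puts "worker 1: #{100.0 * touched.length / TOTAL}%" } # => worker 1: 50.0%
  Process.wait
end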

Is there support for country-select gem in Rails 5?

I am getting the following error in Rails 5 when I click the create-new-user button via ActiveAdmin. FYI, the User table has a 'country' field, and I have gem 'country-select' in my Gemfile.
wrong number of arguments (given 4, expected 0)
I just resolved it by changing the gem: I replaced gem 'country-select' with gem 'country_select' in my Gemfile.
This works fine now.
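For reference, a minimal usage sketch with the underscored gem; the form below is an assumption, and only the :country attribute comes from the question:
<%# app/views/users/_form.html.erb %>
<%= form_for @user do |f| %>
  <%= f.country_select :country %>
  <%= f.submit %>
<% end %>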
It looks like country_select supports Rails 5. I installed Rails 5 and executed the gem's test suite, and all specs pass.
Run options: include {:focus=>true}
All examples were filtered out; ignoring {:focus=>true}
Randomized with seed 45263
...................
Finished in 1.87 seconds (files took 0.94449 seconds to load)
19 examples, 0 failures
Randomized with seed 45263
Otherwise, maybe your usage syntax is wrong.

Cucumber - perform ActiveJob `perform_later` jobs immediately

I have many jobs that call other nested jobs using perform_later. However, in some Cucumber tests, I'd like to execute those jobs immediately so I can proceed with the rest of the test.
I thought it would be enough to add
# features/support/active_job.rb
World(ActiveJob::TestHelper)
and to call jobs using this in a step definition file:
perform_enqueued_jobs do
  # call a step that calls MyJob.perform_later(*args)
end
However, I run into something like this:
undefined method `perform_enqueued_jobs' for #<ActiveJob::QueueAdapters::AsyncAdapter:0x007f98fd03b900> (NoMethodError)
What am I missing / doing wrong?
I switched to the :test adapter in tests and it worked out for me:
# config/environments/test.rb
config.active_job.queue_adapter = :test

# features/support/env.rb
World(ActiveJob::TestHelper)
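With the :test adapter in place, perform_enqueued_jobs from ActiveJob::TestHelper performs jobs inline as they are enqueued inside the block, so nested perform_later calls should run too. A sketch with hypothetical step and argument names:
# features/step_definitions/my_job_steps.rb
When(/^my job runs immediately$/) do
  perform_enqueued_jobs do
    MyJob.perform_later('some-arg') # performed inline, not merely enqueued
  end
end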
It seems that as long as you call .perform_now inside the Cucumber step, it works too, even if there are nested jobs using .perform_later inside:
# support/active_job.rb
World(ActiveJob::TestHelper)

# my_job_steps.rb
Given(/^my job starts$/) do
  MyJob.perform_now(logger: 'stdout')
end

# jobs/my_job.rb
...
MyNestedJob.perform_later(*args) # is triggered during the step
...
Also, in my config/environments/test.rb file I didn't set anything concerning ActiveJob; the default was working fine. I believe the default adapter in tests is :inline, so calling .perform_later rather than .perform_now shouldn't matter.

ThinkingSphinx3: How to prevent searchd threads from freezing?

I am using Rails 3.2.12, RSpec-rails 2.13.0 and ThinkingSphinx 3.0.10
The problem:
When I run bundle exec rspec spec/controllers/ads_controller_spec.rb, Thinking Sphinx spawns 3 searchd processes which become frozen; my tests just lock up until I manually kill the searchd processes, after which the tests continue running.
The setup:
Here is my sphinx_env.rb file, in which I set up TS for testing:
require 'thinking_sphinx/test'

def sphinx_environment(*tables, &block)
  obj = self
  begin
    before(:all) do
      obj.use_transactional_fixtures = false
      ThinkingSphinx::Test.init
      ThinkingSphinx::Test.start
      sleep(0.5)
    end
    yield
  ensure
    after(:all) do
      ThinkingSphinx::Test.stop
      sleep(0.5)
      obj.use_transactional_fixtures = true
    end
  end
end
Here is my test script:
describe "GET index" do
  before(:each) do
    @web_origin = FactoryGirl.create(:origin)
    @api_origin = FactoryGirl.create(:api_origin)
    @first_ad = FactoryGirl.create(:ad, :origin_id => @web_origin.id)
    ThinkingSphinx::Test.index # index the ads created above
    sleep 0.5
  end

  sphinx_environment :ads do
    it 'should return a collection of all live ads' do
      get :index, {:format => 'json'}
      response.code.should == '200'
    end
  end
...
UPDATE
No progress made; however, here are some additional details:
When I run my tests, Thinking Sphinx always starts 3 searchd processes.
The pid in my test.sphinx.pid file is always just one of the searchd pids, and it's always the second searchd process's pid.
Here is the output from my test.searchd.log file:
[ 568] binlog: finished replaying total 49 in 0.006 sec
[ 568] accepting connections
[ 568] caught SIGHUP (seamless=1, in queue=1)
[ 568] rotating index 'ad_core': started
[ 568] caught SIGHUP (seamless=1, in queue=2)
[ 568] caught SIGTERM, shutting down
Any help is appreciated; I have been trying to sort out this issue for over a day and am a bit lost.
Thanks.
Sphinx 2.0.x releases with threaded workers (which is what Thinking Sphinx v3 uses, hence the multiple searchd processes) are buggy on OS X, but this was fixed in Sphinx 2.0.6. (This was one of the main things holding back TS v3 development: my own tests wouldn't run due to problems like the ones you've been seeing.)
I'd recommend upgrading Sphinx to 2.0.6; I'm pretty sure that should resolve these issues.

Rails 3.2.2 log files unordered, requests intertwined

I recollect getting log files that were nicely ordered, so that you could follow one request, then the next, and so on.
Now, the log files are, as my 4 year old says "all scroggled up", meaning that they are no longer separate, distinct chunks of text. Loggings from two requests get intertwined/mixed up.
For instance:
Started GET /foobar
...
Completed 200 OK in 2ms (Views: 0.4ms | ActiveRecord: 0.8ms)
Patient Load (wait, that's from another request that has nothing to do with foobar!)
[ blank space ]
Something else
This is maddening, because I can't tell what's happening within one single request.
This is running on Passenger.
I tried to search for the same answer but couldn't find any good info. I'm not sure whether you should fix the server or the Rails code.
If you want more info about the issue, here is the commit that removed the old way of logging: https://github.com/rails/rails/commit/04ef93dae6d9cec616973c1110a33894ad4ba6ed
If you value production log readability over everything else, you can use the
PassengerMaxInstancesPerApp 1
configuration. It might cause some scaling issues. Alternatively, you could put something like this in application.rb:
process_log_filename = Rails.root + "log/#{Rails.env}-#{Process.pid}.log"
log_file = File.open(process_log_filename, 'a')
Rails.logger = ActiveSupport::BufferedLogger.new(log_file)
Yep! They have made some changes to ActiveSupport::BufferedLogger, so it no longer waits until the request has ended to flush the logs:
http://news.ycombinator.com/item?id=4483390
https://github.com/rails/rails/commit/04ef93dae6d9cec616973c1110a33894ad4ba6ed
But they have added ActiveSupport::TaggedLogging, which is very handy: you can stamp every log line with any kind of mark you want.
In your case, it could be good to stamp the logs with the request UUID, like this:
# config/application.rb
config.log_tags = [:uuid]
Then, even if the logs are interleaved, you can still tell which lines correspond to the request you are following.
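For example, the interleaved sample from the question would come out something like this (the UUIDs are illustrative):
[a9f84c52-...] Started GET /foobar
[b31c0d77-...] Patient Load ...
[a9f84c52-...] Completed 200 OK in 2ms (Views: 0.4ms | ActiveRecord: 0.8ms)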
You can do more clever things with this feature to help you study your logs:
How to log user_name in Rails?
http://zogovic.com/post/21138929607/running-time-in-rails-logs
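Along the same lines, config.log_tags also accepts Procs that receive the request object, so you can tag with more than the UUID (a sketch; the remote-IP tag is just an example):
# config/application.rb
config.log_tags = [
  :uuid,                             # request UUID
  ->(request) { request.remote_ip }  # also tag each line with the client IP
]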
Well, for me the TaggedLogging solution is a no-go. I can live with some logs getting lost if the server crashes badly, but I want my logs to be perfectly ordered. So, following advice from the issue comments, I'm applying this to my app:
# lib/sequential_logs.rb
module ActiveSupport
  class BufferedLogger
    def flush
      @log_dest.flush
    end

    def respond_to?(method, include_private = false)
      super
    end
  end
end
# config/initializers/sequential_logs.rb
require 'sequential_logs.rb'

Rails.logger.instance_variable_get(:@logger).instance_variable_get(:@log_dest).sync = false
As far as I can tell this hasn't affected my app; it is still running, and now my logs make sense again.
They should add some quasi-random request ID and write it on every line belonging to a single request. That way you wouldn't get confused.
I haven't used it, but I believe Lumberjack's unit_of_work method may be what you're looking for. You call:
Lumberjack.unit_of_work do
  yield
end
And all logging done either in that block or in the yielded block is tagged with a unique ID.
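A minimal sketch of how that might look (the logger setup here is an assumption; check the lumberjack gem's README for specifics):
require 'lumberjack'

logger = Lumberjack::Logger.new('app.log')

Lumberjack.unit_of_work do
  # Both entries should carry the same generated unit-of-work ID,
  # which lumberjack includes in its default entry format.
  logger.info('Started GET /foobar')
  logger.info('Completed 200 OK')
end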