Code coverage not working when used in conjunction with the parallel_spec task

I would like to know why RSpec run in parallel shows a different coverage percentage and different missed resources compared to when I run without parallelisation.
Here is the output:
Sysctl[net.ipv6.conf.all.accept_redirects]
Sysctl[net.ipv6.conf.all.disable_ipv6]
Sysctl[net.ipv6.conf.default.accept_ra]
Sysctl[net.ipv6.conf.default.accept_redirects]
Sysctl[net.ipv6.conf.default.disable_ipv6]
Sysctl[net.ipv6.conf.lo.disable_ipv6]
Sysctl[vm.min_free_kbytes]
Sysctl[vm.swappiness]
Systemd::Unit_file[puppet_runner.service]
Users[application]
Users[global]
F
Failures:
1) Code coverage. Must be at least 95% of code coverage
Failure/Error: RSpec::Puppet::Coverage.report!(95)
expected: >= 95.0
got: 79.01
# /usr/local/bundle/gems/rspec-puppet-2.6.11/lib/rspec-puppet/coverage.rb:104:in `block in coverage_test'
# /usr/local/bundle/gems/rspec-puppet-2.6.11/lib/rspec-puppet/coverage.rb:106:in `coverage_test'
# /usr/local/bundle/gems/rspec-puppet-2.6.11/lib/rspec-puppet/coverage.rb:95:in `report!'
# ./spec/spec_helper.rb:22:in `block (2 levels) in <top (required)>'
Finished in 42.12 seconds (files took 2.11 seconds to load)
995 examples, 1 failure
Failed examples:
rspec # Code coverage. Must be at least 95% of code coverage
2292 examples, 2 failures
....................................................................
Total resources: 1512
Touched resources: 1479
Resource coverage: 97.82%
Untouched resources:
Apt::Source[archive.ubuntu.com-lsbdistcodename-backports]
Apt::Source[archive.ubuntu.com-lsbdistcodename-security]
Apt::Source[archive.ubuntu.com-lsbdistcodename-updates]
Apt::Source[archive.ubuntu.com-lsbdistcodename]
Apt::Source[postgresql]
Finished in 1 minute 25.3 seconds (files took 1.43 seconds to load)
2292 examples, 0 failures

Because it is not entirely clear from the question, I assume here that you have set up code coverage by adding a line to your spec/spec_helper.rb like:
at_exit { RSpec::Puppet::Coverage.report!(95) }
The coverage report is a feature provided by rspec-puppet.
Also, I have assumed that you have more than one spec file containing your tests and that these are being run in parallel by calling the parallel_spec task provided by puppetlabs_spec_helper.
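For reference, that task is normally made available by requiring puppetlabs_spec_helper's rake tasks; a minimal sketch of the usual wiring (assuming a standard Puppet module layout):
# Rakefile
require 'puppetlabs_spec_helper/rake_tasks'
It is then invoked with bundle exec rake parallel_spec, which splits your spec files across several RSpec worker processes.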
The problem is this:
For code coverage to work properly, all of the RSpec examples need to run within the same process (see the coverage code in rspec-puppet).
Meanwhile, for parallelisation to occur, there must be multiple spec files, which are run in parallel in separate processes. That limitation arises from the parallel_tests library that is used by the parallel_spec task. See its README.
The code coverage report, therefore, only counts resources that were seen inside each process.
Example:
class test {
  file { '/tmp/foo':
    ensure => file,
  }
  file { '/tmp/bar':
    ensure => file,
  }
}
Spec file 1:
require 'spec_helper'

describe 'test' do
  it 'is expected to contain file /tmp/foo' do
    is_expected.to contain_file('/tmp/foo').with({
      'ensure' => 'file',
    })
  end
end
Spec file 2:
require 'spec_helper'

describe 'test' do
  it 'is expected to contain file /tmp/bar' do
    is_expected.to contain_file('/tmp/bar').with({
      'ensure' => 'file',
    })
  end
end
spec_helper.rb:
require 'puppetlabs_spec_helper/module_spec_helper'
at_exit { RSpec::Puppet::Coverage.report!(95) }
Run in parallel:
Total resources: 2
Touched resources: 1
Resource coverage: 50.00%
Untouched resources:
File[/tmp/bar]
Finished in 0.10445 seconds (files took 1.03 seconds to load)
1 example, 0 failures
Total resources: 2
Touched resources: 1
Resource coverage: 50.00%
Untouched resources:
File[/tmp/foo]
Must be at least 95% of code coverage (FAILED - 1)
4 examples, 0 failures
Took 1 seconds
Run without parallelisation:
Finished in 0.12772 seconds (files took 1.01 seconds to load)
2 examples, 0 failures
Total resources: 2
Touched resources: 2
Resource coverage: 100.00%

Is there a better way to reference sub-features so that this test finishes?

When running the following scenario, the tests finish running but execution hangs immediately afterwards and the Gradle test command never finishes. The Cucumber report isn't built, so it hangs before that point.
It seems to be caused by having two call read() calls to different scenarios that both call a third scenario. That third scenario references the parent context to inspect the current request.
When that parent request is stored in a variable, the tests hang. When that variable is cleared before leaving that third scenario, the test finishes as normal. So something about holding a reference to that context hangs the tests at the end.
Is there a reason this doesn't complete? Am I missing some important code that lets the tests finish?
I've added * def currentRequest = {} at the end of the special-request scenario and that allows the tests to complete, but that seems like a hack.
This is the top-level test scenario:
Scenario: Updates user id
  * def user = call read('utils.feature#endpoint=create-user')
  * set user.clientAccountId = user.accountNumber + '-test-client-account-id'
  * call read('utils.feature#endpoint=update-user') user
  * print 'the test is done!'
The test scenario calls two different scenarios in the same utils.feature file.
utils.feature:
#ignore
Feature: /users

Background:
  * url baseUrl

#endpoint=create-user
Scenario: create a standard user for a test
  Given path '/create'
  * def restMethod = 'post'
  * call read('special-request.feature')
  When method restMethod
  Then status 201

#endpoint=update-user
Scenario: set a user's client account ID
  Given path '/update'
  * def restMethod = 'put'
  * call read('special-request.feature')
  When method restMethod
  Then status 201
  And match response == {"status":"Success", "message":"Update complete"}
Both of the util scenarios call the special-request feature with different parameters/requests.
special-request.feature:
#ignore
Feature: Builds a special

Scenario: special-request
  # The next line causes the test to sit for a long time
  * def currentRequest = karate.context.parentContext.getRequest()
  # Without the clear of currentRequest below, the test never finishes;
  # de-referencing the parent context's request allows the test to finish
  * def currentRequest = {}
Without * def currentRequest = {}, these are the last lines of output I get before the tests seem to stop:
12:21:38.816 [ForkJoinPool-1-worker-1] DEBUG com.intuit.karate - response time in milliseconds: 8.48
1 < 201
1 < Content-Type: application/json
{
"status": "Success",
"message": "Update complete"
}
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.818 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print] the test is done!
12:21:38.818 [pool-1-thread-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
<==========---> 81% EXECUTING [39s]
With currentRequest = {}, the test completes and the Cucumber report generates successfully, which is what I would expect to happen even without that line.
Two comments:
* karate.context.parentContext.getRequest()
Wow, these are internal APIs not intended for users; I would strongly advise passing values around as variables instead. So all bets are off if you run into trouble with that.
It does sound like you have a null-pointer in the above (no surprises here).
There is a bug in 0.9.4 where failures in some edge cases, such as the things you are doing, the pre-test life-cycle, or karate-config.js, can hang the parallel runner. You should see something in the logs that indicates a failure; if not, do try to help us replicate this problem.
This should be fixed in the develop branch, so you could help if you can build from source and test locally. Instructions are here: https://github.com/intuit/karate/wiki/Developer-Guide
And if you still see a problem, please do this: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue

Rails test fails with "Requests did not finish in 60 seconds"

After upgrading Rails from 4.2 to 5.2, my test gets stuck on a request that works fine against the development server. I'm getting the following failure when running the test suite:
Failures:
1) cold end overview shows cold end stats
Failure/Error: example.run
RuntimeError:
Requests did not finish in 60 seconds
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara/server.rb:94:in `rescue in wait_for_pending_requests'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara/server.rb:91:in `wait_for_pending_requests'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara/session.rb:130:in `reset!'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara.rb:314:in `block in reset_sessions!'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara.rb:314:in `reverse_each'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara.rb:314:in `reset_sessions!'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara/rspec.rb:22:in `block (2 levels) in <top (required)>'
# ./spec/spec_helper.rb:43:in `block (3 levels) in <top (required)>'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/database_cleaner-1.6.2/lib/database_cleaner/generic/base.rb:16:in `cleaning'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/database_cleaner-1.6.2/lib/database_cleaner/base.rb:98:in `cleaning'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/database_cleaner-1.6.2/lib/database_cleaner/configuration.rb:86:in `block (2 levels) in cleaning'
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/database_cleaner-1.6.2/lib/database_cleaner/configuration.rb:87:in `cleaning'
# ./spec/spec_helper.rb:37:in `block (2 levels) in <top (required)>'
# ------------------
# --- Caused by: ---
# Timeout::Error:
# execution expired
# /home/asnad/.rvm/gems/ruby-2.5.0/gems/capybara-2.18.0/lib/capybara/server.rb:92:in `sleep'
Top 1 slowest examples (62.59 seconds, 97.0% of total time):
cold end overview shows cold end stats
62.59 seconds ./spec/features/cold_end_overview_spec.rb:13
Finished in 1 minute 4.51 seconds (files took 4.15 seconds to load)
1 example, 1 failure
My spec_helper.rb has this configuration:
RSpec.configure do |config|
  config.include FactoryBot::Syntax::Methods

  config.around(:each) do |example|
    DatabaseCleaner[:active_record].clean_with(:truncation)
    DatabaseCleaner.cleaning do
      if example.metadata.key?(:js) || example.metadata[:type] == :feature
        # VCR.configure { |c| c.ignore_localhost = true }
        WebMock.allow_net_connect!
        VCR.turn_off!
        VCR.eject_cassette
        example.run
      else
        # WebMock.disable_net_connect!
        VCR.turn_on!
        cassette_name = example.metadata[:full_description]
                        .split(/\s+/, 2)
                        .join('/')
                        .underscore.gsub(/[^\w\/]+/, '_')
        # VCR.configure { |c| c.ignore_localhost = false }
        VCR.use_cassette(cassette_name) { example.run }
        VCR.turn_off!
        WebMock.allow_net_connect!
      end
    end
  end

  config.expect_with :rspec do |expectations|
    expectations.include_chain_clauses_in_custom_matcher_descriptions = true
  end

  config.mock_with :rspec do |mocks|
    mocks.verify_partial_doubles = true
  end

  config.filter_run :focus
  config.run_all_when_everything_filtered = true
  config.example_status_persistence_file_path = "spec/examples.txt"

  if config.files_to_run.one?
    config.default_formatter = 'doc'
  end

  # Print the 10 slowest examples and example groups at the
  # end of the spec run, to help surface which specs are running
  # particularly slow.
  config.profile_examples = 10

  # Run specs in random order to surface order dependencies. If you find an
  # order dependency and want to debug it, you can fix the order by providing
  # the seed, which is printed after each run.
  #   --seed 1234
  config.order = :random

  # Seed global randomization in this process using the `--seed` CLI option.
  # Setting this allows you to use `--seed` to deterministically reproduce
  # test failures related to randomization by passing the same `--seed` value
  # as the one that triggered the failure.
  Kernel.srand config.seed
end

# Selenium::WebDriver.logger.level = :debug
# Selenium::WebDriver.logger.output = 'selenium.log'

Capybara.register_driver :selenium_chrome_headless do |app|
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: { args: %w[headless no-sandbox disable-dev-shm-usage disable-gpu window-size=1200,1500] },
    loggingPrefs: { browser: 'ALL' }
  )
  Capybara::Selenium::Driver.new(app, browser: :chrome, desired_capabilities: capabilities)
end

Chromedriver.set_version '2.39'
Capybara.javascript_driver = :selenium_chrome_headless
Capybara::Screenshot.prune_strategy = :keep_last_run
In my spec, the line sign_in current_user takes too much time: it redirects to a page and never gets a response even after a long time, while the same flow works in the development environment.
What could be the reason? If you need anything else, please comment.
I've just arrived here myself after upgrading from 4.2 to 5.1 and now 5.2, and I'm seeing the same thing in my testing: when I have frozen a test mid-request with binding.pry, I get the message Requests did not finish in 60 seconds. What a great story; skip to the end for the tl;dr (I may have figured it out).
Now, I have upgraded all of the gems incrementally so I can preserve the ability to bisect and observe the source of interesting changes like this one. I only noticed this new 60-second timeout after changing over from chromedriver-helper (which reported it had been deprecated) to the new webdrivers gem that is taking over, but that seems to be unrelated: I searched webdrivers for any timeout or 60-second value and only found references to an unrelated Pull Request #60 (fixes Issue #59).
I checked my gem source directory for this message, Requests did not finish in 60 seconds, and found that it is not in fact from an older version of Capybara: it has been raised in versions dating back to at least 3.9.0, and is still raised in the most current version, 3.24.0, in lib/capybara/server.rb.
The object used there is a Timer, whose interface you can find here, in the helper:
https://github.com/teamcapybara/capybara/blob/320ee96bb8f63ac9055f7522961a1e1cf8078a8a/lib/capybara/helpers.rb#L79
This particular message is raised out of the method wait_for_pending_requests, which passes a hard 60 into the :expire_in named parameter and afterwards raises any errors that were encountered in the server thread. This means the time is not configurable; 60 seconds is probably a reasonable length of time to wait for an in-progress web request to complete, although it's a bit inconvenient for my test.
That method is only called in one place, reset!, which you can find defined here in capybara/session.rb: https://github.com/teamcapybara/capybara/blob/320ee96bb8f63ac9055f7522961a1e1cf8078a8a/lib/capybara/session.rb#L126
The reset! method is an interesting one that comes with some documentation about how it's used. @server&.wait_for_pending_requests looks like it calls wait_for_pending_requests only if there is an active server thread in a request, and then raise_server_error!, which similarly acts only if @server&.error is truthy.
Now we find that reset! comes with two aliases: the message reset! is received whenever Capybara calls cleanup! or reset_session!. At this point we can probably understand what happened, but it's still a little mysterious, since I've been using chromedriver-helper and Selenium testing for several years and never recall seeing this 60-second timeout before. I'm hesitant to point the finger at webdrivers, but I don't have any other answer for why this timeout is new. I haven't really done anything that could account for it except upgrading to this gem and other gems, plus clearing out deprecation warnings.
It seems possible that in Rails 5.1+, Capybara calls reset! a lot more, maybe more often than just between test examples, especially when you read the documentation of the method, think about the single-page-application focus there is now, and consider all of the things the reset! documentation tells you it doesn't reset (browser cache, HTML5 local storage, IndexedDB, Web SQL database, etc.). Or maybe I'm imagining it, and this isn't new. But I imagine there are a lot of ways that it can call reset! and not land in this timeout code, and those are likely to be driver-dependent.
Did you change to webdrivers gem by any chance when you did your Rails upgrade?
Edit: I reverted to chromedriver-helper just to be sure, and that wasn't it. What's actually happening is that my test is failing in one thread, but the server has left a binding.pry session open. Capybara has moved on to the next test, and to get a fresh session it has called reset!, but 60 seconds later I am still in my pry session and the server is still not ready to serve a root request. I have a feeling that the threading behaviour of Capybara has changed; in my memory, a pry session opened during a server request would block the test from failing until it had returned. But that's apparently not what's happening anymore.
How did you arrive here? I have no idea unfortunately, but this is a fair description of what's happening when that message is received.

How to build a histogram of methods by time spent inside them with Mono?

I have tried the following:
mono --profile=log myprog.exe
to collect profiler data. Then to interpret those I invoke:
> mprof-report output.mlpd
Mono log profiler data
Profiler version: 2.0
Data version: 14
Arguments: log
Architecture: x86-64
Operating system: linux
Mean timer overhead: 51 nanoseconds
Program startup: Fri Jul 20 00:11:12 2018
Program ID: 19840
Server listening on: 59374
JIT summary
Compiled methods: 8349
Generated code size: 2621631
JIT helpers: 0
JIT helpers code size: 0
GC summary
GC resizes: 0
Max heap size: 0
Object moves: 0
Metadata summary
Loaded images: 16
Loaded assemblies: 16
Exception summary
Throws: 0
Thread summary
Thread: 0x7fb49c50a700, name: ""
Thread: 0x7fb49d27b700, name: "Threadpool worker"
Thread: 0x7fb49d07a700, name: "Threadpool worker"
Thread: 0x7fb49ce79700, name: "Threadpool worker"
Thread: 0x7fb49cc78700, name: "Threadpool worker"
Thread: 0x7fb49d6b9700, name: ""
Thread: 0x7fb4bbff1700, name: "Finalizer"
Thread: 0x7fb4bfe3f740, name: "Main"
Domain summary
Domain: (nil), friendly name: "myprog.exe"
Domain: 0x1d037f0, friendly name: "(null)"
Context summary
Context: (nil), domain: (nil)
However, there is no information about which methods were called often and took a long time to complete, which was the one thing I expected from profiling.
How do I use Mono profiling to gather and output information about the total run time of method calls, like hprof with cpu=times generates?
The Mono docs are "slightly" wrong, as method calls are not tracked by default. Enabling call tracking creates huge profile log output and massively slows down "total" execution time, and when combined with other options like alloc, it affects the execution time of the methods and thus any timings that are being collected.
Personally, I would recommend using calls profiling by itself, adjusting the calldepth to a level that matters to your profiling, i.e. do you need to profile into the framework calls or not? A smaller call depth also greatly decreases the size of the log produced.
Example:
mono --profile=log:calls,calldepth=10 Console_Ling.exe
Produces:
Method call summary
Total(ms) Self(ms) Calls Method name
53358 0 1 (wrapper runtime-invoke) <Module>:runtime_invoke_void_object (object,intptr,intptr,intptr)
53358 2 1 Console_Ling.MainClass:Main (string[])
53340 2 1 Console_Ling.MainClass:Stuff ()
53337 0 3 System.Linq.Enumerable:ToList<int> (System.Collections.Generic.IEnumerable`1<int>)
53194 13347 1 System.Linq.Enumerable/WhereListIterator`1<int>:ToList ()
33110 13181 20000000 Console_Ling.MainClass/<>c__DisplayClass0_0:<Stuff>b__0 (int)
19928 13243 20000000 System.Collections.Generic.List`1<int>:Contains (int)
6685 6685 20000000 System.Collections.Generic.GenericEqualityComparer`1<int>:Equals (int,int)
~~~~
Re: http://www.mono-project.com/docs/debug+profile/profile/profiler/#profiler-option-documentation
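To get the summary above, you still run mprof-report against the .mlpd file produced by the profiled run. If I remember correctly, mprof-report can also sort the method call summary for you (this flag is from memory, so check mprof-report --help to confirm your version supports it):
mprof-report --method-sort=total output.mlpd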

Cucumber - perform ActiveJob `perform_later` jobs immediately

I have many jobs that call other nested jobs using perform_later. However, during some Cucumber tests, I'd like to execute those jobs immediately so I can proceed with the rest of the tests.
I thought it would be enough to add
# features/support/active_job.rb
World(ActiveJob::TestHelper)
And to call the jobs like this in a step definition file:
perform_enqueued_jobs do
# call step that calls MyJob.perform_later(*args)
end
However, I run into something like this:
undefined method `perform_enqueued_jobs' for #<ActiveJob::QueueAdapters::AsyncAdapter:0x007f98fd03b900> (NoMethodError)
What am I missing / doing wrong?
I switched to the :test adapter in tests and it worked out for me:
# initialisers/test.rb
config.active_job.queue_adapter = :test
# features/support/env.rb
World(ActiveJob::TestHelper)
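With the :test adapter in place, the perform_enqueued_jobs helper from the question works inside a step definition; a minimal sketch (the step text, MyJob, and its argument are placeholders):
# features/step_definitions/my_job_steps.rb
Given(/^my job runs immediately$/) do
  # perform_enqueued_jobs comes from ActiveJob::TestHelper (mixed in via World above);
  # with the :test adapter it executes jobs enqueued inside the block straight away
  perform_enqueued_jobs do
    MyJob.perform_later('some-arg')
  end
end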
It would seem that as long as you call .perform_now inside the Cucumber step, even if there are nested jobs called with .perform_later inside, it does work too:
# support/active_job.rb
World(ActiveJob::TestHelper)

# my_job_steps.rb
Given(/^my job starts$/) do
  MyJob.perform_now(logger: 'stdout')
end

# jobs/my_job.rb
...
MyNestedJob.perform_later(*args) # is triggered during the step
...
Also, in my environments/test.rb file I didn't write anything concerning ActiveJob; the default was working fine. I believe the default adapter for tests is :inline, so calling .deliver_later / .deliver_now shouldn't matter.
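As an aside, if you are not sure which queue adapter is actually active in a given environment, you can inspect it directly, for example from a step definition or the rails console (the NoMethodError in the question shows it resolving to the async adapter before the switch to :test):
puts ActiveJob::Base.queue_adapter.class.name  # e.g. ActiveJob::QueueAdapters::TestAdapter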

ThinkingSphinx3: How to prevent searchd threads from freezing?

I am using Rails 3.2.12, RSpec-rails 2.13.0 and ThinkingSphinx 3.0.10
The problem:
When I run bundle exec rspec spec/controllers/ads_controller_spec.rb, Thinking Sphinx spawns 3 searchd processes which become frozen; my tests just lock up until I manually kill the searchd processes, after which the tests continue running.
The setup:
Here is my sphinx_env.rb file, in which I set up TS for testing:
require 'thinking_sphinx/test'

def sphinx_environment(*tables, &block)
  obj = self
  begin
    before(:all) do
      obj.use_transactional_fixtures = false
      ThinkingSphinx::Test.init
      ThinkingSphinx::Test.start
      sleep(0.5)
    end
    yield
  ensure
    after(:all) do
      ThinkingSphinx::Test.stop
      sleep(0.5)
      obj.use_transactional_fixtures = true
    end
  end
end
Here is my test script:
describe "GET index" do
before(:each) do
#web_origin = FactoryGirl.create(:origin)
#api_origin = FactoryGirl.create(:api_origin)
#first_ad = FactoryGirl.create(:ad, :origin_id => #web_origin.id)
ThinkingSphinx::Test.index #index ads created above
sleep 0.5
end
sphinx_environment :ads do
it 'should return a collection of all live ads' do
get :index, {:format => 'json'}
response.code.should == '200'
end
end
...
UPDATE
No progress made; however, here are some additional details:
When I run my tests, Thinking Sphinx always starts 3 searchd processes.
The pid in my test.sphinx.pid file always contains just one of the searchd pids; it's always the pid of the second searchd process.
Here is the output from my test.searchd.log file:
[ 568] binlog: finished replaying total 49 in 0.006 sec
[ 568] accepting connections
[ 568] caught SIGHUP (seamless=1, in queue=1)
[ 568] rotating index 'ad_core': started
[ 568] caught SIGHUP (seamless=1, in queue=2)
[ 568] caught SIGTERM, shutting down
Any help is appreciated; I have been trying to sort out this issue for over a day and I'm a bit lost.
Thanks.
Sphinx 2.0.x releases with threaded Sphinx workers (which is what Thinking Sphinx v3 uses, hence the multiple searchd processes) are buggy on OS X, but this was fixed in Sphinx 2.0.6 (which was one of the main things holding back TS v3 development - my own tests wouldn't run due to problems like what you've been seeing).
I'd recommend upgrading Sphinx to 2.0.6 and I'm pretty sure that should resolve these issues.