capybara webkit-driver reset? - ruby-on-rails-3

Having a bit of an oddity with some capybara webkit-driver (:js => true) tests.
Tests run fine when run on their own, but run in sequence they somehow fail.
For example, I have a request test that looks something like:
describe "A", :js => true do
# tests here run fine
end
describe "B", :js => true do
# tests here fail
end
However, when I split the describe B section into its own file and run it with bundle exec rspec spec/requests/b_spec.rb, the tests run fine and pass.
Debugging this, it looks like when both sections are in the same file, somehow the webkit driver loads a 'dirty' browser session. Trying things like page.driver.reset! or Capybara.reset_sessions! or Capybara.reset! doesn't seem to have any effect...
When using spectator/spork this isn't a problem, since I can split the tests into different files and run them independently, but when running the full suite using bundle exec rspec, these tests fail...
How can I reset the webkit driver / session properly between tests? Or am I chasing the wrong problem?
p.s. These tests are not hitting the database or altering state in any particular way, so I'm pretty sure it's some driver related problem.

Sometimes just writing out the question helps the solution pop up.
The key for me was:
These tests are not hitting the database or altering state in any particular way, so I'm pretty sure it's some driver related problem.
Turns out there was a state change after all. In my particular case it was putting OmniAuth into test_mode, which required setting it back to false after the earlier test had run...
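Concretely, the fix amounted to undoing that flag after each example. A minimal sketch (where exactly the hook lives depends on your spec_helper setup):

# spec/spec_helper.rb - a minimal sketch of the fix
RSpec.configure do |config|
  config.after(:each) do
    # undo the state change so later :js => true examples start clean
    OmniAuth.config.test_mode = false
  end
end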

Preventing asserts from failing tests in Codeception

I've just started exploring automated testing, specifically Codeception, as part of my QA work at a web design studio. The biggest issue I'm experiencing is having Codeception fail a test as soon as an assert fails, no matter where it's placed in the code. If my internet connection hiccups or is too slow, things can become difficult. I was wondering if there were methods to provide more control over when Codeception will fail and terminate a test session, or even better, a way to retry or execute a different block or loop of commands when an assert does fail. For example, I would like to do something similar to the following:
if ($I->see('Foo')) {
    echo 'Pass';
} else {
    echo 'Fail';
}
Does anyone have any suggestions that could help accomplish this?
You can use a conditional assertion:
$I->canSeeInCurrentUrl('/user/miles');
$I->canSeeCheckboxIsChecked('#agree');
$I->cantSeeInField('user[name]', 'Miles');
The codeception documentation says:
Sometimes you don't want the test to be stopped when an assertion fails. Maybe you have a long-running test and you want it to run to the end. In this case you can use conditional assertions. Each see method has a corresponding canSee method, and dontSee has a cantSee method.
I'm not sure if I understand it correctly, but I think you should try using a Cest.
$ php codecept.phar generate:cest suitename CestName
That way you write each test in its own test method. If a test fails, it aborts. You can also configure Codeception so that it does not abort, and instead shows only the failing tests in a summary at the end of the run.
See here in the documentation: https://github.com/Codeception/Codeception/blob/2.0/docs/07-AdvancedUsage.md
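For illustration, a minimal Cest sketch (the class, method, and actor names here are mine, not from the question):

<?php
// FooCest.php - illustrative names; scaffold with generate:cest
class FooCest
{
    public function tryToSeeFoo(AcceptanceTester $I)
    {
        $I->amOnPage('/');
        // conditional assertion: records the failure but lets the test continue
        $I->canSee('Foo');
    }
}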
Maybe it's better to use:
$I->dontSee('Foo');
Regards

delete cookies using capybara webkit

I recently started using capybara-webkit in order to speed up my acceptance tests. 90% of my tests run using the standard capybara DSL but some are slightly different.
One of the main ones that I am having trouble with is deleting cookies. Previously I used the following:
page.driver.browser.manage.delete_all_cookies
but this does not work with capybara-webkit. I receive this error:
undefined method `delete_cookie' for #<Selenium::WebDriver::Driver:0x007f86cb068b88> (NoMethodError)
Does anyone know how I can delete the cookies using capybara-webkit?
Thanks!
You can use the clear_cookies method:
page.driver.browser.clear_cookies
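For example, to start every JS-driven spec with a clean cookie jar, something like this should work (a sketch; it assumes the Capybara DSL is available in your hooks):

# spec_helper.rb - a sketch, not tested against your setup
RSpec.configure do |config|
  config.before(:each, :js => true) do
    page.driver.browser.clear_cookies
  end
end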

Better logging from Capybara/RSpec?

I'm having a really tough time investigating the cause of a test failure. I'm a very experienced programmer and am well versed in general debugging techniques, but I'm new to Capybara and RSpec so I'm hoping there's some kind of facility I'm ignorant of that can help me.
In short, I have a test something like this:
expect { click('.fake_button'); sleep 1 }.to change { clicks.count }.by(1)
When the fake button is clicked, it triggers an AJAX call to the Rails app which, among other things, adds a click record to the database. I can think of dozens of things that could be causing this test to fail and have had only limited success getting information out of the logs. The test does not fail in development and only fails sporadically in test. One difference in the test environment is that the tests are run on a server in our office against a server in the cloud, so there are network delays along with other possible issues.
This is very hard to diagnose because there's so little information coming out of the failed test and of course all the database information is thrown away by the time I read about the failure. I know clicks.count didn't change in the test and I can infer that click('.fake_button') succeeded, but due to server time sync issues I can't even be sure that the click happened on the right button or that the AJAX call fired.
What I'd like are some tools to help me follow this test case in the web server logs (maybe using automatic URL parameters, for example), detailed logging about what Capybara did, and a record of the web page as it was when the failure occurred, including cookie values. Can I get any of that? Anything like that?
Capybara simulates human actions. The test code does exactly what is needed; it behaves the way a real user would. I don't think you should blame the code.
I think it's okay to increase the wait time, say from 1 to 2 seconds, given your network latency, but it should not exceed a reasonable value; otherwise the app does not behave the way a real user would expect.
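As an aside, if you do raise the wait time, Capybara of this era exposes it as a global setting (later versions renamed it default_max_wait_time):

# spec_helper.rb - Capybara 1.x/2.x setting name
Capybara.default_wait_time = 2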
To debug Capybara code, there are three methods, as I summarize below:
1. Add save_and_open_page at the place where you want to see the result. A saved HTML page will then open during the test. (I forget whether the launchy gem needs to be added.)
2. Temporarily set the test as JS to see how it goes:
scenario "a fake test", js: true do
  # code here
end
By doing this a real browser will pop up, and Capybara will show you step by step how it plays through the code.
3. Run $ tail log/test.log to see what happened recently.
Building off what @Billy suggested, log/test.log was not giving me any useful information and I was already using js: true, so I tried this:
begin
  expect { click('.fake_button'); sleep 1 }.to change { clicks.count }.by(1)
rescue Exception => e
  begin
    timestamp = Time.now.strftime('%Y%m%d%H%M%S%L')
    begin
      screenshot_name = "tmp/capybara/capybara-screenshot-#{timestamp}.png"
      $stderr.puts "Trying to save screenshot #{screenshot_name} due to test failure"
      page.save_screenshot(screenshot_name)
    rescue Exception => inner
      $stderr.puts "Ignoring exception #{inner} while trying to save screenshot of test page"
    end
    begin
      # Page saved by Capybara under tmp/capybara/ by default
      save_page "capybara-html-#{timestamp}.html"
    rescue Exception => inner
      $stderr.puts "Ignoring exception #{inner} while trying to save HTML of failed test page"
    end
  ensure
    raise e
  end
end
Later I changed the test itself to take advantage of Capybara's AJAX synchronization features by doing something like this:
starting_count = clicks.count
click('.fake_button')
page.should have_css('.submitted') # Capybara is smart enough to wait for this to happen
clicks.count.should == starting_count + 1
Note that the CSS I'm looking for is something added to the page in JavaScript by the AJAX callback, so it showing up is a signal that the AJAX call completed.
The rescue blocks are important because the screenshot has a high failure rate from not having enough memory to render the full page and convert it to an image.
EDIT
Though I haven't tried it, a promising solution is Capybara::Screenshot, which automatically saves the screenshot and HTML on any failure. Just reading the code, it looks like it will have problems when the screenshot fails, and I can't tell what state the page will be in by the time the screenshot is triggered, but it certainly looks worth a try.
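If you do try it, the wiring is small (a sketch assuming the capybara-screenshot gem and its RSpec integration):

# Gemfile
gem 'capybara-screenshot', :group => :test

# spec_helper.rb
require 'capybara-screenshot/rspec'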
A nice way to debug tests is to use irb to watch what's actually happening in the browser. RSpec failures usually give decent information for simple cases, but for more complicated things I either split the case up until it is simple, or chuck it into irb for a live session to make sure it's doing what it should.
Make sure to use :selenium as your driver, and you should see Firefox come up, ready to be driven from your irb session.
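A rough sketch of such an irb session (the URL and selector are illustrative, and this assumes your app server is already running):

# $ irb
require 'capybara'
require 'capybara/dsl'
include Capybara::DSL

Capybara.default_driver = :selenium
Capybara.run_server = false   # drive an already-running server

visit 'http://localhost:3000'   # illustrative URL
find('.fake_button').click      # watch Firefox perform the click
page.has_css?('.submitted')     # poke at the resulting state interactively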

Integration test error with capybara for ajax call with rails 3.2.12

Here is an integration test case with Capybara in Rails 3.2.12. click_link 'New Log' is an AJAX call. However, the page that opens starts with $() and has a bunch of JS escapes like \n and \log-log.
it "should work with link on show customer_comm_record page" do
visit customer_customer_comm_records_path(#cust)
#visit customer_customer_comm_record_path(#cust, #crecord)
click_link #crecord.id.to_s
click_link 'New Log'
save_and_open_page
end
We also tried wrapping the case in describe "", :js => true do, but then there is an error saying:
An error occurred in an after hook ActiveRecord::StatementInvalid: SQLite3::BusyException: database is locked:
The code itself runs without error. What's wrong with the RSpec case? Thanks for the help.
It looks like your server is locking the database so that the test cannot clean up after it has run.
When you use Capybara without JavaScript, the tests and the server all run in a single process and thread, so they can share the same database connection and transaction. That is why RSpec can use a simple database transaction to roll back changes at the end of each test, and why you don't see any lock contention between the test and the server.
When you run with :js => true, things are a little different: the server starts up in a separate thread or process from your tests, so each uses a separate database connection and transaction. This means the database transaction strategy that RSpec uses by default to clean up won't work, and it is what is causing the lock errors in your case.
The Capybara README talks about this problem and recommends the database_cleaner gem as an alternative in this situation. You will need to configure database_cleaner to use the truncation strategy rather than transactions.
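A typical setup looks something like the following sketch (adjust to your spec_helper):

# spec/spec_helper.rb - database_cleaner with truncation, a common sketch
RSpec.configure do |config|
  config.use_transactional_fixtures = false

  config.before(:suite) do
    DatabaseCleaner.strategy = :truncation
  end

  config.before(:each) do
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end
end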

Some of my unit tests are not finishing in Xcode 4.4

I have seen people posting about this here and elsewhere, but I haven't found any solution that works. I am using Xcode 4.4 and have a bunch of unit tests set up. I have run them all before on this project, so I know that they pass/fail as they are supposed to when they are actually run.
I have about 15 test suites, and each one contains 1-7 tests. On most attempts, all of the test suites finish (and pass) except for one (FooTests), which gives the warning:
FooTests did not finish
testFoo did not finish
Xcode will report that testing was successful, regardless of what happens in unfinished tests. Another thing to note: sometimes it is a different test that will not finish, and sometimes multiple suites will not finish. I have not seen a run where all tests finish, but judging by this seemingly random behaviour I believe it is possible.
So, is this a bug in Xcode? I can't think of any other reason that tests randomly don't finish and then cause Xcode to report that everything was successful. Are there any solutions?
I am on Xcode 4.5.2. For application unit tests, if your test suites finish so quickly that the main application has not correctly loaded yet, you will get this warning. You can avoid the problem by simply adding a sleep at the end of your test, like the following. It doesn't happen for logic unit tests.
[NSThread sleepForTimeInterval:1.0];
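In context, a padded application unit test would look something like this (SenTestingKit era; the assertion is a placeholder):

- (void)testFoo
{
    STAssertTrue(YES, @"placeholder assertion");
    // give Xcode time to finish loading the host app before the suite ends
    [NSThread sleepForTimeInterval:1.0];
}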
I've just had this problem with XC4.5DP4.
I had a test which does some work in a loop, and does nothing else when it falls out of the loop, and I was getting the "did not finish" error.
In an attempt to prove that the test was finishing, I added this as the very last line:
NSLog(#"done");
Not only does "done" get printed to the output - now Xcode says that the test has finished.
Seems to be a workaround... go figure.
I'm using Xcode 4.6 DP3 and I've just resolved this problem in my tests. I have several tests that start a web server and then make HTTP calls to it; at the end of the test the web server is stopped. In the last week these tests began producing the 'did not finish' warning.
For me it was enough to add the following sleep at the end of these tests (precisely, I added it in tearDown):
- (void)tearDown {
    [self.httpServer stop];
    [NSThread sleepForTimeInterval:1.0];
    self.httpServer = nil;
    self.urlComposer = nil;
}
The problem seems to be that your tests terminate too quickly for Xcode to receive and parse the log messages that indicate failure or success. Sleeping for 1 second in the last test case run worked reliably for me, where 0.0 didn't. To see which test case is the last, check the test action in the Scheme dialog.
I created a separate WorkaroundForTestsFinishingTooFast test case, with a single method:
- (void)testThatMakesSureWeDontFinishTooFast
{
    [NSThread sleepForTimeInterval:1.0];
}
The problem is that as you add more test cases, you'll have to ensure this test remains the last method run. That means removing and re-adding this class in the Test action, since reordering of test cases is not allowed. On the other hand, you're only delaying your entire test bundle by one second.
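Spelled out as a complete file, the workaround test case is just this (SenTestingKit assumed, as shipped with Xcode 4.x):

#import <SenTestingKit/SenTestingKit.h>

@interface WorkaroundForTestsFinishingTooFast : SenTestCase
@end

@implementation WorkaroundForTestsFinishingTooFast

- (void)testThatMakesSureWeDontFinishTooFast
{
    [NSThread sleepForTimeInterval:1.0];
}

@end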
I had the same warnings in the Log Navigator. Here is how I fixed it.
In my project I have two schemes, one for running the project and one for the unit tests.
1. Go to Product --> Edit Scheme...
2. Select the UnitTest scheme in the scheme picker
3. Select the "Test" icon on the left
4. Change the debugger from LLDB to GDB and press OK
Tests should finish now. (For me it worked fine.)
For me the solution was to slim down the logging output from the parts of the app the tests were exercising. I think Xcode couldn't parse the test output in time because of the other output coming from the app.
I had the same problem running with Xcode 4.6. The reason, in my case, was an inconsistency between the scheme and the actual unit tests in the test suites.
In the scheme I had some suites checked, but in their .m files some unit tests were commented out.
To solve the problem: either uncomment the tests or deselect the file/suite in the scheme, and all shall become green again :)
For people like me who forgot how to reach the scheme, these are the required steps:
1. Right-click on the project name in the scheme section (to the right of the stop button)
2. Choose Edit Scheme
3. Choose the test debug configuration
4. Click on the triangle next to the unit test project; next to each file you have a check box
5. Uncheck the files whose unit tests are commented out
Hope this helps