Better logging from Capybara/RSpec? - ruby-on-rails-3

I'm having a really tough time investigating the cause of a test failure. I'm a very experienced programmer and am well versed in general debugging techniques, but I'm new to Capybara and RSpec so I'm hoping there's some kind of facility I'm ignorant of that can help me.
In short, I have a test something like this:
expect { click('.fake_button'); sleep 1 }.to change { clicks.count }.by(1)
When the fake button is clicked, it triggers an AJAX call to the Rails app which, among other things, adds a click record to the database. I can think of dozens of things that could be causing this test to fail and have had only limited success getting information out of logs. The test does not fail in development and only fails sporadically in the test environment. One difference in the test environment is that the tests are run on a server in our office against a server in the cloud, so there are network delays along with other possible issues.
This is very hard to diagnose because there's so little information coming out of the failed test and of course all the database information is thrown away by the time I read about the failure. I know clicks.count didn't change in the test and I can infer that click('.fake_button') succeeded, but due to server time sync issues I can't even be sure that the click happened on the right button or that the AJAX call fired.
What I'd like are some tools to help me follow this test case in the web server logs (maybe using automatic URL parameters, for example), detailed logging about what Capybara did, and a record of the web page as it was when the failure occurred, including cookie values. Can I get any of that? Anything like that?

Capybara simulates human actions, and your test code does exactly what a real user would do, so I don't think the problem is the test itself.
I think it's okay to increase the wait time, say from 1 to 2 seconds, to allow for your network latency, but it shouldn't exceed a reasonable value, otherwise the app isn't behaving the way a real user would expect.
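A related knob, mentioned only as an aside since it does not affect a plain sleep: Capybara finders and matchers retry for a configurable amount of time, which you can raise in spec_helper.rb if network latency is the issue:
# Capybara retries finders/matchers (e.g. have_css) for up to this many seconds.
# (This setting was renamed to Capybara.default_max_wait_time in later versions.)
Capybara.default_wait_time = 5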
To debug Capybara code, there are three methods I can summarize:
1. Add save_and_open_page at the point where you want to inspect the result; a saved HTML page will open during the test (I forget whether the "launchy" gem needs to be added for it to open automatically). See the sketch after this list.
2. Temporarily mark the test as JS to watch how it runs:
scenario "a fake test", js: true do
# code here
end
By doing this a real browser will pop up and Capybara will show you, step by step, how it plays through the test.
3. Run $ tail log/test.log to see what happened most recently.
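For example, a minimal sketch of the first method; the path and selector are placeholders, and it assumes the "launchy" gem is in your test group so the saved page opens automatically:
scenario "clicking the fake button records a click", js: true do
  visit '/some_page'            # placeholder path
  find('.fake_button').click
  save_and_open_page            # dumps the current DOM to an HTML file and opens it
end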

Building off what Billy suggested: log/test.log was not giving me any useful information, and I was already using js: true, so I tried this:
begin
  expect { click('.fake_button'); sleep 1 }.to change { clicks.count }.by(1)
rescue Exception => e
  begin
    timestamp = Time::now.strftime('%Y%m%d%H%M%S%L')
    begin
      screenshot_name = "tmp/capybara/capybara-screenshot-#{timestamp}.png"
      $stderr.puts "Trying to save screenshot #{screenshot_name} due to test failure"
      page.save_screenshot(screenshot_name)
    rescue Exception => inner
      $stderr.puts "Ignoring exception #{inner} while trying to save screenshot of test page"
    end
    begin
      # Page saved by Capybara under tmp/capybara/ by default
      save_page "capybara-html-#{timestamp}.html"
    rescue Exception => inner
      $stderr.puts "Ignoring exception #{inner} while trying to save HTML of failed test page"
    end
  ensure
    raise e
  end
end
Later I changed the test itself to take advantage of Capybara's AJAX synchronization features by doing something like this:
starting_count = clicks.count
click('.fake_button')
page.should have_css('.submitted') # Capybara is smart enough to wait for this to happen
clicks.count.should == starting_count + 1
Note that the CSS class I'm looking for is something added to the page by JavaScript in the AJAX callback, so its appearance is a signal that the AJAX call completed.
The rescue blocks are important because the screenshot step fails fairly often when there isn't enough memory to render the full page and convert it to an image.
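If you want this capture logic on every JS test instead of wrapping a single expectation, one option is to move it into a global hook. This is only a sketch of the idea, assuming RSpec 3 style hooks (the example is yielded to the block) and that Capybara's DSL (page, save_page) is available inside the hook:
# spec/support/failure_artifacts.rb (hypothetical file name)
RSpec.configure do |config|
  config.after(:each, js: true) do |example|
    next unless example.exception   # only capture artifacts for failures

    timestamp = Time.now.strftime('%Y%m%d%H%M%S%L')
    begin
      page.save_screenshot("tmp/capybara/capybara-screenshot-#{timestamp}.png")
    rescue StandardError => e
      $stderr.puts "Ignoring #{e.class} while saving screenshot: #{e.message}"
    end
    begin
      save_page "capybara-html-#{timestamp}.html"   # saved under tmp/capybara by default
    rescue StandardError => e
      $stderr.puts "Ignoring #{e.class} while saving HTML: #{e.message}"
    end
  end
end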
EDIT
Though I haven't tried it, a promising solution is Capybara::Screenshot, which automatically saves the screenshot and HTML on any failure. Just from reading the code, it looks like it will have problems when the screenshot itself fails, and I can't tell what state the page will be in by the time the screenshot is triggered, but it certainly looks worth a try.
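For reference, the basic setup for the capybara-screenshot gem looks something like this; this is from memory rather than part of the original answer, so check the gem's README for the current API:
# Gemfile
group :test do
  gem 'capybara-screenshot'
end

# spec/spec_helper.rb, after capybara/rspec is required
require 'capybara-screenshot/rspec'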

A nice way to debug tests is to use irb to watch what's actually happening in the browser. RSpec failures usually give decent information for simple cases, but for more complicated things I either split the case up until it is simple, or drop into a live irb session to make sure it's doing what it should.
Make sure to use :selenium as your driver, and you should see Firefox come up and be driven by your irb session.
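A rough sketch of such an irb session; the URL is a placeholder, and it assumes the selenium-webdriver gem is installed and your app is reachable from the machine running irb:
require 'capybara'
require 'selenium-webdriver'

session = Capybara::Session.new(:selenium)
session.visit 'http://your-test-server.example.com/'   # placeholder URL
session.find('.fake_button').click
session.has_css?('.submitted')    # waits and returns true/false, like the matcher
session.save_and_open_page        # see exactly what the browser sees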

Related

How to continue test when the page still not completely loaded in selenium

I am creating automated tests for an e-commerce website. The website uses lazy loading, and I am testing it on a UAT server, which loads pages slowly because of the server's limited resources: it takes 60 seconds or more to load all the resources on a page. So when I run my Selenium automation, it always waits more than 60 seconds before continuing to the next step (because it is waiting for the page to fully load). Can someone give me tips on how to continue to the next test step after waiting only 10 seconds for the page to load, without throwing an exception?
Not possible.
If you find an element and try to execute an action on it while the page is still loading, you will get a stale element error, and because of the loading issues you will have a lot of failed tests and spend a lot more time debugging.
Automation is supposed to execute fast and give reliable results.
It seems that this environment is not built for automation; you should request more resources.
As an alternative, maybe you can use a headless driver, or see if you can put the same build on a VM.
Why this is an issue: Selenium needs to wait for each request to complete. For example, when you request a page, if the page is not received entirely and the server is still sending data, then the request is not done; it is logical that you need a complete request in order to continue.
You should raise this with your Project Manager/QA Lead and ask for advice/options on how to handle it.
Please note that these costs should be included in the automation price. You can frame it simply:
good server -> automation runs smoothly and fast, and the testing is done faster
bad server -> unable to run automation since it is not reliable and each test has a high rate of failure => the alternative is X day(s) of manual testing for each build
If this were a coding issue, like some delayed AJAX request, then there would be solutions and the devs could help, but if it is an infrastructure/resources issue it does not depend on you, and you cannot solve it from the test side.
You could try any type of wait, implicit or explicit (an explicit wait will throw an exception on timeout), but this is not a solution for poor resources.
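For completeness, an explicit wait looks roughly like this; shown with Ruby's selenium-webdriver bindings since the question doesn't say which language is in use, and the URL and selector are placeholders:
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
# Note: navigate.to itself still blocks until the browser reports the page as
# loaded, which is exactly the problem described above.
driver.navigate.to 'http://uat.example.com/products'   # placeholder URL

# Wait up to 10 seconds for one specific element instead of the whole page;
# raises a timeout error if it never appears.
wait = Selenium::WebDriver::Wait.new(timeout: 10)
wait.until { driver.find_element(css: '.product-list').displayed? }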

capybara webkit-driver reset?

Having a bit of an oddity with some capybara-webkit driver (:js => true) tests.
The tests run fine on their own, but when run in sequence, they fail.
For example, I have a request test that looks something like
describe "A", :js => true do
# tests here run fine
end
describe "B", :js => true do
# tests here fail
end
When I split the describe B section into its own file and run it with bundle exec rspec spec/requests/b_spec.rb, however, the tests run fine and pass.
Debugging this, it looks like when both sections are in the same file, the webkit driver somehow loads a 'dirty' browser session. Trying things like page.driver.reset! or Capybara.reset_sessions! or Capybara.reset! doesn't seem to have any effect...
When using spectator/spork this isn't a problem, since I can split the tests into different files and run them independently, but when running the full suite with bundle exec rspec, these tests fail...
How can I reset the webkit driver / session properly between tests? Or I am chasing the wrong problem?
P.S. These tests are not hitting the database or altering state in any particular way, so I'm pretty sure it's some driver-related problem.
Sometimes just writing up the question is enough for the solution to pop up.
The key for me was:
These tests are not hitting the database or altering state in any particular way, so I'm pretty sure it's some driver related problem.
Turns out there was a state change after all. In my particular case it was setting OmniAuth into test_mode, which required setting it back to false after the previous test had run...
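In case it helps anyone else, the fix looked roughly like this; it's a sketch using the standard OmniAuth test-mode API, and the provider, uid, and metadata tag are placeholders:
# spec/support/omniauth.rb
RSpec.configure do |config|
  config.before(:each, omniauth: true) do
    OmniAuth.config.test_mode = true
    OmniAuth.config.mock_auth[:facebook] =
      OmniAuth::AuthHash.new(provider: 'facebook', uid: '12345')   # placeholders
  end

  config.after(:each, omniauth: true) do
    # Without this, later describe blocks inherit a "dirty" OmniAuth state.
    OmniAuth.config.test_mode = false
    OmniAuth.config.mock_auth[:facebook] = nil
  end
end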

Is there a browser-agnostic way to detect client-side script errors with Watin?

We're using WatiN to test our web portals. During the course of an E2E test, we'll occasionally see client-side script errors on the IE status bar. I'd like to chain a handler onto the script error event and record the error for later analysis and bug filing.
Problem is, I don't know that there's a global script error event or how to chain into it. And if there's not a browser-agnostic way to accomplish this, I can create MyIE and MyFF subclasses but then this becomes two browser-specific questions.
In essence, I'm thinking of something like this entirely made-up call:
browser.ScriptEngine.SetCustomErrorHandler(LogScriptingError);
... where LogScriptingError is my code that does the obvious.
Many of our client-side scripting errors don't necessarily prevent the test from continuing (a pretty UI element didn't animate, for example, but the underlying form is still submittable), so I'd like to log the error and forge ahead in most cases.
You're probably looking for this:
window.onerror = function(message, url, line){ logError(); };
You can add this code to your pages to handle errors in logError(), but it may not work in all browsers (it works in IE); check this page for browser compatibility:
http://www.quirksmode.org/dom/events/error.html
Or you may try this commercial product:
exceptionhub.com/
You could maybe co-opt the ability to inject eval code (described under "Added Eval functionality") to add a script that caught all errors, not just errors from the eval'ed script. I'm not sure if this would work, but it's an area to explore. Another resource might be this blog post, which discusses how to evaluate Javascript in WatiN.

Selenium RC drops error when it tries open the popup

When Selenium tries to open a popup window I'm getting a JS "permission denied" error in file
file:///C:/DOCUME~1//LOCALS~1/Temp/customProfileDir8708f7f69e14482ba857f4b2e74775c1/core/RemoteRunner.hta
This breaks script execution; could you assist? I saw related topics at MSDN and OpenQA but didn't find a resolution that could help me.
I've just encountered this error. In the end it was because I was running IE in 'Offline' mode. Open the File menu and make sure that "Work Offline" does not have a tick next to it.
I've just updated a section about that in the Selenium docs. The website build is not working right now, so if you go to the site you will find the old version.
I'll paste the raw text here. I think your case is the second one: JS trying to access sections that are not yet loaded, so your solution would be a waitForPopUp command:
Why am I getting a permission denied error?
The most common reason for this error is that your session is attempting to violate the same-origin policy by crossing domain boundaries (e.g., accesses a page from http://domain1 and then accesses a page from http://domain2) or switching protocols (moving from http://domainX to https://domainX). For this to be solved, try using the Heightened Privileges Browsers if you're working with the Proxy Injection browsers. This is covered in some detail in the tutorial. Make sure you read the sections about The Same Origin Policy and Proxy Injection carefully.
If the previous situation was not your case, it can also occur when JavaScript attempts to look at objects which are not yet available (before the page has completely loaded), or tries to look at objects which are no longer available (after the page has started to be unloaded). This is most typically encountered with AJAX pages which are working with sections of a page or subframes that load and/or reload independently of the larger page. For this type of problem, it is common that the error is intermittent. Often it is impossible to reproduce the problem with a debugger because the trouble stems from race conditions which are not reproducible when the debugger's overhead is added to the system. Try first adding a static pause to make sure this is the situation and then moving on to the waitFor kind of commands.

Try catch in assert statement

I am doing regression testing using NUnit, WatiN and VB.NET. What I am doing is opening an IE page, selecting some data, making a registration, and then testing the registration with assertions on the view-registration page.
I want to ask whether it is a good idea to wrap every assert in a try/catch. I am doing it because if one assert fails, execution stops and the remaining tests don't run. So now I have put a try/catch around every assert and write the failure message to a log file. Please let me know if it is okay to go with this approach, or suggest a better one.
Hello Ray
For instance, if I am checking an airline reservation booking: after creating a booking, on the View Booking Summary page I test whether the Cancel Booking button is displayed or not. For this I am using the following code:
Try
    Assert.IsTrue(_internetExplorer.Button(Find.ById(New Regex("CBooking"))).Exists)
Catch ex As Exception
    d_logger.LogResultTextFile("Cancel Button doesnot Exist", True, False)
End Try
I run this in a loop over the number of bookings created. I want the test to keep running even if it doesn't find the button for one booking, and keep checking the other bookings. That's why I am using it. What I want to know is whether this is a good approach or not.
That is how it should work: if one Assert fails in your test, the remaining asserts in that test should not run. The best approach is to run your tests, fix the assert that failed, and run again.