I am doing regression testing using NUnit, WatiN and VB.NET. I open an IE page, select some data, create a registration, and then verify the registration on the View Registration page with assertions.
I want to ask: is it a good idea to use Try/Catch around every assert? I am using it because if some assert fails, it stops executing the rest of the statements and quits without running the rest of the tests. So I have put Try/Catch around every assert and I write the failure message to a log file. Is it OK to go with this approach, or can you suggest a better one?
Hello Ray
For instance, suppose I am checking an airline reservation booking. After creating a booking, on the View Booking Summary page I test whether the Cancel Booking button is displayed or not. For this I am using the following code:

Try
    Assert.IsTrue(_internetExplorer.Button(Find.ById(New Regex("CBooking"))).Exists)
Catch ex As Exception
    d_logger.LogResultTextFile("Cancel Button does not exist", True, False)
End Try

I run this in a loop over the number of bookings created. I want the test to keep running even if it doesn't find the button for one booking, and to keep checking the other bookings. That is why I am using Try/Catch. What I want to know is whether this is a good approach or not.
This should be the case: if one assert fails in your test, no further asserts in that test should run. The best way is to run your tests, fix the assert that failed, and run again.
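If you do want every booking checked without one failure hiding the others, a common alternative is to make each booking its own test case instead of swallowing assertion failures. A minimal sketch, assuming NUnit 2.5+ and that BookingIds and OpenBookingSummary are hypothetical members of your fixture:

<Test(), TestCaseSource("BookingIds")> _
Public Sub CancelButtonExists(ByVal bookingId As String)
    ' Hypothetical helper that navigates to that booking's summary page.
    OpenBookingSummary(bookingId)
    ' NUnit records a failure for this booking only; the remaining cases still run.
    Assert.IsTrue(_internetExplorer.Button(Find.ById(New Regex("CBooking"))).Exists, _
                  "Cancel button does not exist for booking " & bookingId)
End Sub

This gives you one pass/fail result per booking in the test report, with no Try/Catch needed.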
I have a series of tests that are dependent on a step in the middle (such as creating an account). The API that I'm using for this is kind of brittle (which is a separate problem) and sometimes fails. I'd like to be able to just quit the tests in the middle when that step fails, instead of waiting for TestCafe to fail the initial assertions of the next few tests that follow. Is there a way to get the test controller to stop, or to signal to the fixture that the tests should stop? I immediately thought of Spock's #Stepwise annotation, but I can't find anything like that in the TestCafe docs.
The Stop on First Fail option stops the entire run once a test fails. If I understand your scenario correctly, you could add an assertion for successful account creation; if it fails, this option will end the entire run.
CLI Documentation
API Documentation (under Parameters)
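For reference, a minimal sketch of both forms (browser name and test path are placeholders):

testcafe chrome tests/ --stop-on-first-fail

// or, via the programmatic API:
const failedCount = await runner
    .src('tests/')
    .browsers('chrome')
    .run({ stopOnFirstFail: true });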
I want to send emails a day (or a few days) before a particular due date.
I tried to set this up with automated actions but was confused about how they would work,
and also about the server action for that automated action.
I would like to know whether "based on timed condition" works or not; as far as I have tried and researched, this seems to be a bug, or it simply does not work.
Automated Actions do work, and are quite useful.
One catch with timed conditions is that they are triggered once and only once for each document/record, when the time condition is reached for the first time.
If you are playing around with timed conditions and reuse the same document/record for your tests, later attempts will seem not to work: once the action has been triggered for a record, it won't be triggered for it again.
In this scenario, you need to test changes to the Automated Action using a different test record.
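For reference, a sketch of such a rule in XML, with the caveat that the field names follow the v8-era base.action.rule model and differ between Odoo versions; the record id, target model, date field and server action here are placeholders:

<record id="rule_due_date_reminder" model="base.action.rule">
    <field name="name">Send reminder before due date</field>
    <field name="model_id" ref="account.model_account_invoice"/>
    <field name="kind">on_time</field>
    <!-- trigger relative to the record's due-date field -->
    <field name="trg_date_id" ref="account.field_account_invoice_date_due"/>
    <!-- a negative range means N days BEFORE the date -->
    <field name="trg_date_range">-3</field>
    <field name="trg_date_range_type">day</field>
    <!-- placeholder server action that sends the reminder email -->
    <field name="server_action_ids" eval="[(4, ref('action_send_reminder_email'))]"/>
</record>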
Other things that might be wrong:
Your outgoing email might not be working properly.
The filter in your Automated Action might not be correct. Make sure you test it on a list view, and that the "User" field is blank.
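For example, the filter is a domain expression along these lines (field names are placeholders for your model):

[('date_due', '!=', False), ('state', '=', 'open')]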
I've just started exploring automated testing, specifically Codeception, as part of my QA work at a web design studio. The biggest issue I'm experiencing is having Codeception fail a test as soon as an assert fails, no matter where it's placed in the code. If my internet connection hiccups or is too slow, things can become difficult. I was wondering if there were methods to provide more control over when Codeception will fail and terminate a test session, or even better, a way to retry or execute a different block or loop of commands when an assert does fail. For example, I would like to do something similar to the following:
if ( $I->see('Foo') )
{
echo 'Pass';
}
else
{
echo 'Fail';
}
Does anyone have any suggestions that could help accomplish this?
You can use a conditional assertion:
$I->canSeeInCurrentUrl('/user/miles');
$I->canSeeCheckboxIsChecked('#agree');
$I->cantSeeInField('user[name]', 'Miles');
The Codeception documentation says:
Sometimes you don't want the test to be stopped when an assertion fails. Maybe you have a long-running test and you want it to run to the end. In this case you can use conditional assertions. Each see method has a corresponding canSee method, and dontSee has a cantSee method.
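Put together, that looks something like this (page path and strings are placeholders):

$I->amOnPage('/dashboard');
$I->canSee('Foo');          // soft: a failure is recorded, the test keeps running
$I->canSeeLink('Settings'); // soft
$I->see('Logout');          // hard: the test stops here if this fails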
I'm not sure if I understand it correctly, but I think you should try using a Cest.
$ php codecept.phar generate:cest suitename CestName
That way you write each test in its own test function. If one test fails, only that test aborts. You can also configure Codeception so that it does not abort the run and instead shows the failing test in a summary at the end of all tests.
See here in the documentation: https://github.com/Codeception/Codeception/blob/2.0/docs/07-AdvancedUsage.md
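A minimal sketch of such a Cest (class, method and page names are placeholders; AcceptanceTester is the default actor in an acceptance suite):

<?php
class FooCest
{
    // Each public method is a separate test, so a failing assertion
    // aborts only that method, not the whole suite.
    public function seeFooOnHomepage(AcceptanceTester $I)
    {
        $I->amOnPage('/');
        $I->see('Foo');
    }

    public function seeBarOnAboutPage(AcceptanceTester $I)
    {
        $I->amOnPage('/about');
        $I->see('Bar');
    }
}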
Maybe it's better to use:
$I->dontSee('Foo');
Regards
I'm having a really tough time investigating the cause of a test failure. I'm a very experienced programmer and am well versed in general debugging techniques, but I'm new to Capybara and RSpec so I'm hoping there's some kind of facility I'm ignorant of that can help me.
In short, I have a test something like this:
expect { click('.fake_button'); sleep 1 }.to change { clicks.count }.by(1)
When the fake button is clicked, it triggers an AJAX call to the Rails app which, among other things, adds a click record to the database. I can think of dozens of things that could be causing this test to fail and have had only limited success getting information out of logs. The test does not fail in development, and it fails only sporadically in the test environment. One difference in the test environment is that the tests are run on a server in our office against a server in the cloud, so there are network delays along with other possible issues.
This is very hard to diagnose because there's so little information coming out of the failed test and of course all the database information is thrown away by the time I read about the failure. I know clicks.count didn't change in the test and I can infer that click('.fake_button') succeeded, but due to server time sync issues I can't even be sure that the click happened on the right button or that the AJAX call fired.
What I'd like are some tools to help me follow this test case in the web server logs (maybe using automatic URL parameters, for example), detailed logging about what Capybara did, and a record of the web page as it was when the failure occurred, including cookie values. Can I get any of that? Anything like that?
Capybara simulates human actions. The test code does exactly what is needed; it behaves the way a real user would. I don't think you should blame the code.
I think it's okay to increase the wait time, say from 1 to 2 seconds, to allow for your network latency, but it should not exceed a reasonable value; otherwise the test no longer reflects what a real user would experience.
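(If the wait in question is Capybara's own rather than a literal sleep, the knob is the one below; it was called default_wait_time before Capybara 2.5.)

# e.g. in spec_helper.rb
Capybara.default_max_wait_time = 5 # seconds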
To debug Capybara code, there are three methods, as I summarize below:
Add save_and_open_page at the point where you want to inspect the result; the saved HTML page will then open during the test. (I forget whether the launchy gem needs to be added.)
Temporarily mark this test as JS to watch how it goes:
scenario "a fake test", js: true do
# code here
end
By doing this, a real browser will pop up and Capybara will show you, step by step, how it plays through the code.
Just run $ tail log/test.log to see what happened recently.
Building off what @Billy suggested, log/test.log was not giving me any useful information and I was already using js: true, so I tried this:
begin
  expect { click('.fake_button'); sleep 1 }.to change { clicks.count }.by(1)
rescue Exception => e
  begin
    timestamp = Time::now.strftime('%Y%m%d%H%M%S%L')
    begin
      screenshot_name = "tmp/capybara/capybara-screenshot-#{timestamp}.png"
      $stderr.puts "Trying to save screenshot #{screenshot_name} due to test failure"
      page.save_screenshot(screenshot_name)
    rescue Exception => inner
      $stderr.puts "Ignoring exception #{inner} while trying to save screenshot of test page"
    end
    begin
      # Page saved by Capybara under tmp/capybara/ by default
      save_page "capybara-html-#{timestamp}.html"
    rescue Exception => inner
      $stderr.puts "Ignoring exception #{inner} while trying to save HTML of failed test page"
    end
  ensure
    raise e
  end
end
Later I changed the test itself to take advantage of Capybara's AJAX synchronization features by doing something like this:
starting_count = clicks.count
click('.fake_button')
page.should have_css('.submitted') # Capybara is smart enough to wait for this to happen
clicks.count.should == starting_count + 1
Note that the CSS I'm looking for is something added to the page in JavaScript by the AJAX callback, so it showing up is a signal that the AJAX call completed.
The rescue blocks are important because the screenshot step fails frequently when there isn't enough memory to render the full page and convert it to an image.
EDIT
Though I haven't tried it, a promising solution is Capybara::Screenshot, which automatically saves the screenshot and HTML on any failure. Just reading the code, it looks like it will have problems when the screenshot itself fails, and I can't tell what state the page will be in by the time the screenshot is triggered, but it certainly looks worth a try.
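Setup is small, if I'm reading the gem's README correctly (RSpec assumed):

# Gemfile
gem 'capybara-screenshot', group: :test

# spec_helper.rb
require 'capybara-screenshot/rspec'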
A nice way to debug tests is to use irb to watch what's actually happening in the browser. RSpec failures usually give decent information for simple cases, but for more complicated things I either split the case up until it is simple, or chuck it into irb for a live session to make sure it's doing what it should do.
Make sure to use :selenium as your driver, and you should see Firefox come up and be drivable from your irb session.
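Something along these lines, where the URL and link text are placeholders:

# in irb
require 'capybara'
session = Capybara::Session.new(:selenium)
session.visit 'http://localhost:3000'
session.click_link 'Sign in' # watch the browser respond to each line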
We have a bank of tests that all start by logging in.
They're recorded by QA, so they are HTML tests.
However, occasionally something goes wrong and a test fails. When that happens, the logout at the end of the test doesn't get called, so the next test tries to log in again, using open ./Login.
If you're logged out, that works fine.
However, if you didn't log out because the test failed, that command puts you on a different path, and then the rest of the tests in that suite all fail.
How do I tell Selenium to log out if the test fails?
Or how do I tell Selenium: if the LogOut link is available, log out; otherwise continue?
From my point of view, I would prefer the following steps:
Create a library with all the test cases, and a suite which calls the required functions from the libraries. In the suite, use the following flow:
Call login.
If the login function returns zero, call the required function to execute.
If the called function returns zero, call logout.
If one of the functions returns non-zero, store it in a variable or array together with the function name and the error; e.g. if a function returns a non-zero value, call errorLogout instead.
Let me know if you want more details.
You can use one of the approaches below:
Approach 1: a TestNG annotation such as @AfterMethod, which runs after each test whether it passed or failed.
Approach 2: a try/catch block; in the catch block, call the logout function and then rethrow the exception.
Let me know if you need more explanation.
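A sketch of both approaches, where login, logout and isLoggedIn are hypothetical helpers in your test class:

import org.testng.annotations.AfterMethod;
import org.testng.annotations.Test;

public class BookingTests {

    // Approach 1: @AfterMethod runs after every test, pass or fail,
    // so we always log out if we are still logged in.
    @AfterMethod
    public void tearDown() {
        if (isLoggedIn()) { // e.g. check whether the LogOut link is present
            logout();
        }
    }

    // Approach 2: try/catch around the test body; log out, then rethrow.
    @Test
    public void bookingTest() throws Exception {
        login();
        try {
            // ... test steps ...
        } catch (Exception e) {
            logout();
            throw e;
        }
    }

    // Placeholder helpers standing in for your real page logic.
    private boolean isLoggedIn() { return true; }
    private void login() {}
    private void logout() {}
}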