Supply parameters to NUnit tests at run time

NUnit 2.5 adds support for parameterized tests with attributes like ValuesAttribute and ValueSourceAttribute, so that one can write something like:
[Test]
public void MoneyTransfer(
    [Values("USD", "EUR")] string currency,
    [Values(0, 100)] long amount)
{
}
and get all permutations of the specified parameters. Priceless. However, it would be cool to specify (override) those parameters directly in the NUnit GUI before pressing 'Run'. Unfortunately there is no such functionality in NUnit (yet?). Is there an alternative tool or testing framework that lets me specify parameters before running a test (something like the way I can provide parameters in WcfTestClient.exe)?

One option could be to try out the TestCaseSource attribute that's supported - basically, you can define an IEnumerable method as the source of data for a test - and within that, you can look anywhere you like for test data: pull from a database, a flat file, iterate over the files in a given directory, etc.
Have a look at that, it's a handy thing to know about.
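For example, a minimal sketch of how that might look (the class name and values here are illustrative; the source method could just as well read them from a file or database at run time):
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class MoneyTransferTests
{
    // The source can be any IEnumerable; the values here are hard-coded,
    // but they could equally be read from a database, a flat file, or a
    // directory listing when the tests are run.
    public static IEnumerable<TestCaseData> TransferCases()
    {
        yield return new TestCaseData("USD", 0L);
        yield return new TestCaseData("USD", 100L);
        yield return new TestCaseData("EUR", 100L);
    }

    [Test, TestCaseSource("TransferCases")]
    public void MoneyTransfer(string currency, long amount)
    {
        // assertions would go here
    }
}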

Unit tests are supposed to run automatically and be reproducible. By changing a test at run time you break that behavior.
So I don't think this is something you want to do...

Related

How to set outcome of XUnit test

Can you manually set the outcome of a test using xUnit? In one of my tests I have to fulfill a prerequisite, and if it fails I need to set the outcome of the test to inconclusive. In NUnit you can set the outcome with Assert.Inconclusive(), Assert.Fail(), etc. Can I do something similar with xUnit? Is there some best practice for this?
There is no out-of-the-box way of doing what you want in xUnit. See https://xunit.github.io/docs/comparisons.html for more information.
However, Assert.Fail("This test is failing") can be replicated using Assert.True(false, "This test is failing").
The closest equivalent of Assert.Inconclusive in XUnit would be the use of the Skip attribute. But as far as I can see, there is no way to invoke this part way through a method, like you can with NUnit's Assert.Inconclusive.
Someone did write an Assert.Skip method for v1.9 here: https://github.com/xunit/xunit/blob/v1/samples/AssertExamples/DynamicSkipExample.cs; however, it no longer compiles in v2.2.0. Actually, the xUnit authors seem to be antagonistic to inconclusive tests (https://xunit.codeplex.com/workitem/9691).
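A minimal sketch of the Assert.True(false, ...) workaround (the prerequisite check here is hypothetical):
using Xunit;

public class PrerequisiteTests
{
    [Fact]
    public void MoneyTransferWorks()
    {
        // Hypothetical prerequisite check; stands in for real environment setup.
        bool prerequisiteMet = CheckPrerequisite();

        // xUnit has no Assert.Fail or Assert.Inconclusive, but a failing
        // Assert.True carries a custom message, which is the usual workaround.
        Assert.True(prerequisiteMet, "Prerequisite failed: environment unavailable");
    }

    private static bool CheckPrerequisite()
    {
        return true; // stand-in for a real check
    }
}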
I think you can use Assume.That in your test method. It won't work during setup, but you can use it in the test method itself.

Any way to run code in SenTest only on a success?

In my Mac Cocoa unit tests, I would like to output some files as part of the testing process, and delete them when the test is done, but only when there are no failures. How can this be done (and/or what's the cleanest way to do so)?
Your question made me curious so I looked into it!
I guess I would override the failWithException: method in the class SenTestCase (the class your tests run in inherits from this), and set a "keep output files" flag or something before calling the super's method.
Here's what SenTestCase.h says about that method:
/*"Failing a test, used by all macros"*/
- (void) failWithException:(NSException *) anException;
So, provided you only use the SenTest macros to test and/or fail (and chances are this is true in your case), that should cover any test failure.
I've never dug into the scripts for this, but it seems like you could customize how you call the script that actually runs your tests to do this. In Xcode 4, look at the last step in the Build Phases tab of your test target. Mine contains this:
# Run the unit tests in this test bundle.
"${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"
I haven't pored through the contents of this script or the many subscripts it pulls in on my machine, but presumably they call otest or some other test rig executable and the test results would be returned to that script. After a little time familiarizing yourself with those scripts you would likely be able to find a straightforward way to conditionally remove the output files based on the test results.

TestNG & Selenium: Separate tests into "groups", run ordered inside each group

We use TestNG and Selenium WebDriver to test our web application.
Now our problem is that we often have several tests that need to run in a certain order, e.g.:
login to application
enter some data
edit the data
check that it's displayed correctly
Now obviously these tests need to run in that precise order.
At the same time, we have many other tests which are totally independent from the list of tests above.
So we'd like to be able to somehow put tests into "groups" (not necessarily groups in the TestNG sense), and then run them such that:
tests inside one "group" always run together and in the same order
but different test "groups" as a whole can run in any order
The second point is important, because we want to avoid dependencies between tests in different groups (so different test "groups" can be used and developed independently).
Is there a way to achieve this using TestNG?
Solutions we tried
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We considered writing a method interceptor. However, this has the disadvantage that running tests from inside an IDE becomes more difficult (because directly invoking a test on a class would not use the interceptor). Also, tests using dependsOnMethods cannot be ordered by the interceptor, so we'd have to stop using that. We'd probably have to create our own annotation to specify ordering, and we'd like to use standard TestNG features as far as possible.
The TestNG docs propose using preserve-order to order tests. That looks promising, but only works if you list every test method separately, which seems redundant and hard to maintain.
Is there a better way to achieve this?
I am also open to any other suggestions on how to handle tests that build on each other, without having to impose a total order on all tests.
PS
alanning's answer points out that we could simply keep all tests independent by doing the necessary setup inside each test. That is in principle a good idea (and some tests do this), however sometimes we need to test a complete workflow, with each step depending on all previous steps (as in my example). To do that with "independent" tests would mean running the same multi-step setup over and over, and that would make our already slow tests even slower. Instead of three tests doing:
Test 1: login to application
Test 2: enter some data
Test 3: edit the data
we would get
Test 1: login to application
Test 2: login to application, enter some data
Test 3: login to application, enter some data, edit the data
etc.
In addition to needlessly increasing testing time, this also feels unnatural - it should be possible to model a workflow as a series of tests.
If there's no other way, this is probably how we'll do it, but we are looking for a better solution, without repeating the same setup calls.
You are mixing "functionality" and "test". Separating them will solve your problem.
For example, create a helper class/method that executes the steps to log in, then call that class/method in your Login test and all other tests that require the user to be logged in.
Your other tests do not actually need to rely on your Login "Test", just the login class/method.
If later back-end modifications introduce a bug in the login process, all of the tests which rely on the Login helper class/method will still fail as expected.
Update:
Turns out this already has a name, the Page Object pattern. Here is a page with Java examples of using this pattern:
http://code.google.com/p/selenium/wiki/PageObjects
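As an illustration, here is a minimal C# sketch of such a helper with WebDriver (the URL, element locators, and class name are made up; the same idea applies in any language):
using OpenQA.Selenium;

// Hypothetical page object: every test that needs a logged-in user
// calls LogIn instead of repeating the click sequence.
public class LoginPage
{
    private readonly IWebDriver driver;

    public LoginPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    public void LogIn(string username, string password)
    {
        driver.Navigate().GoToUrl("http://example.com/login");
        driver.FindElement(By.Id("username")).SendKeys(username);
        driver.FindElement(By.Id("password")).SendKeys(password);
        driver.FindElement(By.Id("loginButton")).Click();
    }
}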
Try dependsOnGroups along with dependsOnMethods. Put all the methods in the same class into one group.
For example
@Test(groups = {"cls1", "other"})
public void cls1test1() {
}
@Test(groups = {"cls1", "other"}, dependsOnMethods = "cls1test1", alwaysRun = true)
public void cls1test2() {
}
In class 2:
@Test(groups = {"cls2", "other"}, dependsOnGroups = "cls1", alwaysRun = true)
public void cls2test1() {
}
@Test(groups = {"cls2", "other"}, dependsOnMethods = "cls2test1", dependsOnGroups = "cls1", alwaysRun = true)
public void cls2test2() {
}
There is an easy (whilst hacky) workaround for this if you are comfortable with your first approach:
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We had a similar problem: we needed our tests to be run class-wise because we couldn't guarantee that the test classes wouldn't interfere with each other.
This is what we did:
Put a
@Test(dependsOnGroups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
annotation on an abstract test class or interface that all your tests inherit from.
This will put all your methods in the "first group" (group in the sense described in this paragraph, not a TestNG group). Inside the groups, the ordering is class-wise.
Thanks to Cedric Beust, he provided a very quick answer for this.
Edit:
The group dummyGroupToMakeTestNGTreatThisAsDependentClass actually has to exist, but you can just add a dummy test case for that purpose.

How can I add code reuse to my Selenium tests?

Here's the situation that I'm working with:
Build tests in Selenium
Get all the tests running correctly (in Firefox)
Export all the tests to MSTest (so that each test can be run in IE, Chrome and FF)
If any test needs to be modified, do that editing in Selenium IDE
So it's a very one-way workflow. However, I'd now like to do a bit more automation. For instance, I'd like every test to run under each of two accounts. I'm running into a maintenance issue: if I have 6 tests that I want to run under two accounts, suddenly I'd need 12 tests in Selenium IDE. That's too much editing, and yet a ton of that code is exactly the same.
How can I share chunks of Selenium tests among tests? Should I use Selenium IDE to develop each test the first time and then never use it again (only doing edits in VS after that)?
Selenium code is very linear after you export it from the IDE.
For example (ignore syntax):
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    selenium.type("usernameField", "foo");
    selenium.type("passwordField", "bar");
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * foo"));
}
This is the login page. Every single one of your tests will have to use it. You should refactor it into a method.
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    String username = "foo";
    String password = "bar";
    performLogin(username, password);
}
performLogin(String username, String password) {
    selenium.type("usernameField", username);
    selenium.type("passwordField", password);
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * " + username));
}
The performLogin() method does not have to be in the same file as your test code itself. You can create a separate class for it with your methods and share it between your tests.
We have classes that correspond to certain functionalities in our UI. For example, we have many ways to search in our app. All methods that help you with the search functionality are in the SearchUtil class, as sketched below.
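A minimal C# sketch of such a utility class (the locators and method name are made up, and I am assuming the Selenium RC .NET client here to match the examples above):
using Selenium; // Selenium RC .NET client

// Hypothetical utility class: everything related to searching lives here,
// so a changed locator or flow is fixed in exactly one place.
public static class SearchUtil
{
    public static void SearchFor(ISelenium selenium, string term)
    {
        selenium.Type("searchField", term);
        selenium.Click("searchButton");
        selenium.WaitForPageToLoad("30000");
    }
}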
Structuring your tests similarly will give you the following advantages:
If the UI changes (an id of a field), you go to your one method, update the id and you are good to go
If the flow of your logic changes you also have only one place to update
To test whether your changes worked, you only have to run one of the tests to verify. All other tests use the same code so it should work.
The code becomes a lot more expressive. With well-named methods, you create a higher level of abstraction that is easier to read and understand.
Flexible and extensible! The possibilities are limitless. At this point you can use conditions, loops, exceptions, you can do your own reporting, etc...
This website is an excellent resource on what you are trying to accomplish.
Good Luck!
There are two aspects to consider regarding code reuse:
Eliminating code duplication in your own code base -- c_maker touched on this.
Eliminating code duplication from code generated by Selenium IDE.
I should point out that my comments lean heavily toward the one-way workflow that you are using, jcollum, but even more so: I use the IDE to generate code just once for a given test case. I never go back to the IDE to modify the test case and re-export it. (I do keep the IDE test case around as a diagnostic tool for experimenting with things while I am fine-tuning and customizing my test case in code, in my case C#.)
The reasons I favor using IDE tests only as a starting point are:
IDE tests will always have a lot of code duplication from one test to another; sometimes even within one test. That is just the nature of the beast.
In code I can make the test case more "user-friendly", i.e. I can encapsulate arcane locators within a meaningfully named property or method so it is much clearer what the test case is doing.
Working in code rather than the IDE just provides much greater flexibility.
So back to IDE-generated code: it always has massive amounts of duplication. Example:
verifyText "//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span" Home
generates this block of code:
try
{
    Assert.AreEqual("Home",
        selenium.GetText("//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
}
catch (AssertionException e)
{
    verificationErrors.Append(e.Message);
}
Each subsequent verifyText command generates an identical block of code, differing only by the two parameters.
My solution to this pungent code smell was to develop Selenium Sushi, a Visual Studio C# project template and library that lets you eliminate most if not all of this duplication. With the library I can simply write this one line of code to match the original line of code from the IDE test case:
Verify.AreEqual("Home",
    selenium.GetText("//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
I have an extensive article covering this (Web Testing with Selenium Sushi: A Practical Guide and Toolset) that was just published on Simple-Talk.com in February, 2011.
You can also put some fragments or one-liners, e.g.
note( "now on page: " . $sel->get_location() . ", " . $sel->get_title() );
into the "code snippets" collection of your IDE (I use Eclipse).
That's not true reuse, but hey, it works for me for throwaway test scripts or quick enhancements of existing test scripts.

TestCase scripting framework

For our webapp testing environment we're currently using WatiN with a bunch of unit tests, and we're looking to move to Selenium and use more frameworks.
We're currently looking at Selenium 2 + Gallio + xUnit.net.
However, one of the things we're really looking to get around is compiled test cases. Ideally we want test cases that can be edited in VS with IntelliSense, but that don't require re-compiling the assembly every single time we make a small change.
Are there any frameworks likely to help with this issue?
Are there any nice UI tools to help manage massive amounts of test cases?
Ideally we want the test-case writing process to be simple so that more testers can aid in writing them.
cheers
You can write them in a language like Ruby (e.g., IronRuby) or Python, which doesn't have an explicit compile step of that sort.
If you're using a compiled language, it needs to be compiled. Make the assemblies a reasonable size and a quick Shift F6 (I rewire it to Shift Ins) will compile your current project. (Shift Ctrl-B will typically do lots of redundant stuff.) Then get NUnit to automatically re-run the tests when it detects the assembly change (or go vote on http://xunit.codeplex.com/workitem/8832 and get it into the xunit GUI runner).
You may also find that CR, R# and/or TD.NET have something to offer in speeding up your flow. E.g., I believe CR detects which tests have changed and does things based on that (at the moment it doesn't support the more advanced xunit.net testing styles, so I don't use it day to day).
You won't get around compiling the test framework when you add new tests.
However, there are a few possibilities.
First:
You could develop a little test language of your own, as I did in XML (or a similar format). It would look something like this:
<action name="OpenProfile">
    <parameter name="Username" value="TestUser" />
</action>
After you have this, you could simply write an interpreter that deserializes the XML into an object. Then, with reflection, you could call the appropriate function in the corresponding class. Once you have a lot of actions implemented, in a carefully designed and well-modularized structure (e.g. every page has its own object, plus a base object that every page inherits from), you will be able to add XML-based tests on your own without needing to rebuild the framework itself.
You see, you have actions like login, go to profile, go to edit profile, change password, save, check email, etc. Then you could have tests like: login and change password; login and edit the profile username... and so on and so forth. And you would only be creating new XML files.
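As a rough illustration of the interpreter idea, here is a minimal C# sketch (the XML shape, class, and method names are all hypothetical):
using System;
using System.Linq;
using System.Reflection;
using System.Xml.Linq;

// Hypothetical action class; one such class per page of the application.
public class ProfilePage
{
    public void OpenProfile(string username)
    {
        Console.WriteLine("Opening profile for " + username);
    }
}

public static class XmlTestRunner
{
    // Reads an <action> element and invokes the matching method by reflection.
    public static void Execute(XElement action, object page)
    {
        string name = (string)action.Attribute("name");
        object[] args = action.Elements("parameter")
                              .Select(p => (object)(string)p.Attribute("value"))
                              .ToArray();
        MethodInfo method = page.GetType().GetMethod(name);
        method.Invoke(page, args);
    }

    public static void Main()
    {
        var xml = XElement.Parse(
            "<action name=\"OpenProfile\">" +
            "<parameter name=\"Username\" value=\"TestUser\" /></action>");
        Execute(xml, new ProfilePage());
    }
}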
You could look for frameworks supporting similar behavior; there are a few out there. The best of them are Cucumber and FitNesse. These all support high-level test case writing and low-level functionality building.
So basically, once you have your framework ready, all you have to do is write tests.
Hope that helped.
Gergely.