Nancy.Testing seems slow, is there anything I should be doing to improve performance?

I love Nancy.Testing; it's an excellent way to test my application. But I'm finding it quite slow: one of my test fixtures has 26 tests using the Browser object, and it takes about 1 minute 20 seconds.
I create a single ConfigurableBootstrapper and Browser object in the test fixture setup and reuse them for each request (per test fixture). I've tried loading a single module rather than all discoverable ones, but it doesn't make any difference.
I do have a lot of mocks for my repository interfaces that are loaded into the ConfigurableBootstrapper; surely once they are loaded they shouldn't affect speed? Also, most of the tests use CSS selectors; is that known to be slow?
The environment in a nutshell:
Test framework: NUnit
Mock framework: Moq
Bootstrapper: ConfigurableBootstrapper
Nancy version: 0.23
Test runner: ReSharper/TeamCity
Is there anything I should be doing to speed up the tests?

I found the answer to this: the problem was the number of dependencies being loaded.
I had the following lines in the ConfigurableBootstrapper:
with.AllDiscoveredModules();
with.EnableAutoRegistration();
This loads the whole universe into the test instance.
I removed these lines and manually registered the dependencies needed, almost test by test. I also did some refactoring in my application to reduce the number of dependencies injected to satisfy each request. For example, chances are that if you are editing a customer record you don't need the products repository, so I split several classes into more focused ones (it was a code smell anyway).
Testing time was reduced from 8 minutes to 1.5 minutes.
Word is you can go further:
with.DisableAutoRegistrations();
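Why trimming the registrations helps can be illustrated with a small, self-contained sketch. This is Java rather than Nancy's C#, and all names are invented: a toy container that eagerly builds every "discovered" registration does far more construction work per test setup than one that holds only what the test under consideration needs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class Main {
    static int constructed = 0;

    // Toy container: each registration is a factory that is invoked at resolution time.
    static Map<String, Supplier<Object>> container = new HashMap<>();

    static void register(String name) {
        container.put(name, () -> { constructed++; return new Object(); });
    }

    public static void main(String[] args) {
        // "Auto-registration": every discovered type is registered, and a
        // bootstrapper that wires up everything constructs all of them.
        for (int i = 0; i < 500; i++) register("Service" + i);
        container.values().forEach(Supplier::get);
        System.out.println(constructed); // all 500 built for every fixture setup

        // Manual registration: only the two dependencies this test needs.
        constructed = 0;
        container.clear();
        register("CustomerRepository");
        register("CustomerModule");
        container.values().forEach(Supplier::get);
        System.out.println(constructed); // just 2
    }
}
```

The numbers are arbitrary, but the ratio is the point: the per-fixture cost scales with how much you register, not with how much the tests actually use.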

I had the same problem: my fourteen tests took more than three minutes. It was so slow that, before writing more unit tests, I began searching for a solution. This question inspired me to find the cause of the slowness.
Before optimizing the tests, my code was:
var browser = new Browser(new DefaultNancyBootstrapper());
After optimization, my code became:
var browser = new Browser(with => with.Module(new SomeModule()));
or
var browser = new Browser(with => with.Modules(typeof(SomeModule), typeof(AnotherModule)));
That's all. The tests that took 180+ seconds now need only 3.8 seconds.

Related

Selenium Best Practice: One Long Test or Several Successively Long Tests?

In Selenium I often find myself making tests like ...
// Test #1
login();
// Test #2
login();
goToPageFoo();
// Test #3
login();
goToPageFoo();
doSomethingOnPageFoo();
// ...
In a unit testing environment, you'd want separate tests for each piece (i.e. one for login, one for goToPageFoo, etc.) so that when a test fails you know exactly what went wrong. However, I'm not sure this is a good practice in Selenium.
It seems to result in a lot of redundant tests, and the "know what went wrong" problem doesn't seem so bad, since it's usually clear what went wrong by looking at what step the test was on. And it certainly takes longer to run a bunch of "build-up" tests than to run just the last ("built-up") test.
Am I missing anything, or should I just have a single long test and skip all the shorter ones building up to it?
I have built a large test suite in Selenium using a lot of smaller tests (like in your code example). I did it for exactly the same reasons you did. To know "what went wrong" on a test failure.
This is a common best practice for standard unit tests, but if I had to do it over again, I would go mostly with the second approach: larger built-up tests, with some smaller tests where needed.
The reason is that Selenium tests take an order of magnitude longer than standard unit tests to run, particularly on longer scenarios. This makes the whole test suite unbearably long with most of the time being spent on running the same redundant code over and over again.
When you do get an error, say in a step that is repeated at the beginning of 20+ different tests, it does not really help to know you got the same error 20+ times. My test runner runs my tests out of order, so my first error isn't even in the first incremental test of the "build-up" series; I end up looking at the first test failure and its error message to see where the failure came from, which is the same thing I would do if I had used larger "built-up" tests.
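One way to keep the "which step failed" information inside a single long test is to give each step a name and report that name on failure. A minimal, Selenium-free sketch (the step names and the simulated failure are entirely hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        // Named steps of one long "built-up" test; bodies are stand-ins
        // for real Selenium actions.
        Map<String, Runnable> steps = new LinkedHashMap<>();
        steps.put("login", () -> {});
        steps.put("goToPageFoo", () -> {});
        steps.put("doSomethingOnPageFoo", () -> {
            throw new IllegalStateException("element not found"); // simulated failure
        });

        for (Map.Entry<String, Runnable> step : steps.entrySet()) {
            try {
                step.getValue().run();
                System.out.println("PASS " + step.getKey());
            } catch (RuntimeException e) {
                // The failing step is named, so one long test still
                // pinpoints where things went wrong.
                System.out.println("FAIL " + step.getKey() + ": " + e.getMessage());
                break;
            }
        }
    }
}
```

With this shape you get the diagnostic value of many small tests while paying the setup cost (login, navigation) only once.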

Unit test project takes ~40 seconds to build via nCrunch with MS Fakes

I've got about 27 tests in a unit testing project, one of which uses System.Fakes to fake a System.DateTime call. It seems that the unit test project recreates the System.Fakes extensions with every build, which makes NCrunch VERY slow to show unit test results. I haven't experienced this when using Rhino Mocks to mock interfaces in tests, and I was wondering if anyone was aware of a way to improve this performance when using Microsoft Fakes.
I allocated an additional processor to NCrunch and set the fast lane thread option, and this made a huge difference. Tests were as responsive as I've experienced with Rhino Mocks. I'm happy.

Running GWT 2.4 Tests With JUnitCore

I have several GWTTestCases in my test suite, and I'm currently using a homegrown test script, written in Java, that runs the tests as follows:
for (Class<?> testClass : allTestClasses) {
    final JUnitCore core = new JUnitCore();
    final Result result = core.run(testClass);
}
Now, the first GWT test will pass and all subsequent tests will fail. It doesn't matter which test runs first, and I can run the tests successfully from the command line.
Looking through the logs, the specific error is typically like:
java.lang.RuntimeException: deepthought.test.JUnit:package.GwtTestCaseClass.testMethod: could not instantiate the requested class
I think it has something to do with GWTTestCase static state, but I'm unsure. If I do one run where I pass all the test classes to the core, they all pass, and then subsequently any individual test will pass.
My guess is that GWT compiles and caches the tests you are running, storing them by module. But in this case the compiler misses my other test cases, because it doesn't see a dependency on them. Then, for the next test, it comes back to the cache, hits it, and fails to find the test I want.
Any thoughts on a workaround, other than just passing all the tests in at once?
The workaround I discovered is to first add all the GWTTestCase classes to a GWTTestSuite, which you can then throw away. You don't incur the cost of compilation at this point, but it somehow makes GWT aware of all the test cases, so when you compile the first one, they all get compiled.
If you ask me, this is a GWT bug.
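The caching behaviour guessed at above can be mimicked with a tiny self-contained simulation (no GWT involved; a Map stands in for GWT's per-module compilation cache, and all names are invented):

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class Main {
    // Stand-in for GWT's module cache: module name -> compiled test classes.
    static Map<String, Set<String>> cache = new HashMap<>();

    // "Compiling" a module only includes the test classes declared up front;
    // a later compile for the same module hits the cache and adds nothing.
    static void compileModule(String module, Collection<String> declaredTests) {
        cache.computeIfAbsent(module, m -> new HashSet<>(declaredTests));
    }

    static boolean canInstantiate(String module, String test,
                                  Collection<String> declaredTests) {
        compileModule(module, declaredTests);
        return cache.get(module).contains(test);
    }

    public static void main(String[] args) {
        // Running tests one by one: the first compile only sees one class...
        System.out.println(canInstantiate("deepthought", "TestA", List.of("TestA")));
        // ...so the cached module lacks TestB and instantiation fails.
        System.out.println(canInstantiate("deepthought", "TestB", List.of("TestB")));

        cache.clear();
        // Declaring all tests up front (the GWTTestSuite trick) compiles them together.
        List<String> all = List.of("TestA", "TestB");
        System.out.println(canInstantiate("deepthought", "TestA", all));
        System.out.println(canInstantiate("deepthought", "TestB", all));
    }
}
```

This is only a model of the guessed mechanism, but it matches the observed symptoms: the first test passes, later ones fail, and declaring everything up front fixes it.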

How can I add code reuse to my Selenium tests?

Here's the situation that I'm working with:
Build tests in Selenium
Get all the tests running correctly (in Firefox)
Export all the tests to MSTest (so that each test can be run in IE, Chrome and FF)
If any test needs to be modified, do that editing in Selenium IDE
So it's a very one-way workflow. However, I'd now like to do a bit more automation. For instance, I'd like every test to run under each of two accounts, and I'm running into a maintenance issue: if I have 6 tests that I want to run under two accounts, I suddenly need 12 tests in Selenium IDE. That's too much editing, and a ton of that code is exactly the same.
How can I share chunks of Selenium tests among tests? Should I use Selenium IDE to develop each test the first time and then never use it again (doing all subsequent edits in VS)?
Selenium code is very linear after you export it from the IDE.
For example (ignore syntax):
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    selenium.type("usernameField", "foo");
    selenium.type("passwordField", "bar");
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * foo"));
}
This is the login page, and every single one of your tests will have to use it. You should refactor it into a method.
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    String username = "foo";
    String password = "bar";
    performLogin(username, password);
}

performLogin(String username, String password) {
    selenium.type("usernameField", username);
    selenium.type("passwordField", password);
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * " + username));
}
The performLogin() method does not have to be in the same file as your test code. You can create a separate class for your helper methods and share it between your tests.
We have classes that correspond to certain functionality in our UI. For example, we have many ways to search in our app, so all the methods that help you with search functionality live in the SearchUtil class.
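A self-contained sketch of such a shared helper class (a FakeSelenium stub stands in for the real Selenium API so the example runs on its own; all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the Selenium API: it just records the
// commands a real test would send to the browser.
class FakeSelenium {
    final List<String> commands = new ArrayList<>();
    void open(String url)             { commands.add("open " + url); }
    void type(String loc, String v)   { commands.add("type " + loc + "=" + v); }
    void click(String locator)        { commands.add("click " + locator); }
    void waitForPageToLoad(String ms) { commands.add("wait " + ms); }
}

// Shared helper class: every test reuses this flow instead of
// repeating the raw selenium calls.
class LoginUtil {
    static void performLogin(FakeSelenium selenium, String user, String pass) {
        selenium.open("http://someLoginPage.com");
        selenium.type("usernameField", user);
        selenium.type("passwordField", pass);
        selenium.click("loginButton");
        selenium.waitForPageToLoad("30000");
    }
}

public class Main {
    public static void main(String[] args) {
        FakeSelenium selenium = new FakeSelenium();
        LoginUtil.performLogin(selenium, "foo", "bar");
        System.out.println(selenium.commands.size());
        System.out.println(selenium.commands.get(2));
    }
}
```

In a real suite, LoginUtil would hold the actual Selenium instance or take it as a constructor argument, and each functional area (search, login, checkout, ...) would get its own helper class.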
Structuring your tests similarly will give you the following advantages:
If the UI changes (e.g. the id of a field), you go to your one method, update the id, and you are good to go.
If the flow of your logic changes, you also have only one place to update.
To verify that your changes worked, you only have to run one of the tests. All the other tests use the same code, so it should work for them too.
The code is a lot more expressive. With well-named methods, you create a higher level of abstraction that is easier to read and understand.
Flexible and extensible! The possibilities are limitless. At this point you can use conditions, loops, and exceptions, and you can do your own reporting, etc.
This website is an excellent resource on what you are trying to accomplish.
Good Luck!
There are two aspects to consider regarding code reuse:
Eliminating code duplication in your own code base -- c_maker touched on this.
Eliminating code duplication from code generated by Selenium IDE.
I should point out that my comments lean heavily toward the one-way workflow that you are using, jcollum, but even more so: I use the IDE to generate code just once for a given test case. I never go back to the IDE to modify the test case and re-export it. (I do keep the IDE test case around as a diagnostic tool for experimenting with things while I am fine-tuning and customizing my test case in code; in my case, C#.)
The reasons I favor using IDE tests only as a starting point are:
IDE tests will always have a lot of code duplication from one test to another; sometimes even within one test. That is just the nature of the beast.
In code I can make the test case more "user-friendly", i.e. I can encapsulate arcane locators within a meaningfully named property or method, so it is much clearer what the test case is doing.
Working in code rather than the IDE just provides much greater flexibility.
So back to IDE-generated code: it always has massive amounts of duplication. Example:
verifyText "//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span" Home
generates this block of code:
try
{
    Assert.AreEqual("Home",
        selenium.GetText("//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
}
catch (AssertionException e)
{
    verificationErrors.Append(e.Message);
}
Each subsequent verifyText command generates an identical block of code, differing only by the two parameters.
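The pattern behind those generated blocks, collecting verification failures instead of throwing, can be distilled into one small helper. A minimal sketch (in Java rather than the C# of the export; names and messages are invented):

```java
public class Main {
    // Minimal "soft assert" helper in the spirit of the verificationErrors
    // pattern the IDE export uses: failures are collected, not thrown,
    // so one mismatch doesn't abort the remaining checks.
    static final StringBuilder verificationErrors = new StringBuilder();

    static void verifyEquals(String expected, String actual, String what) {
        if (!expected.equals(actual)) {
            verificationErrors.append("expected '").append(expected)
                .append("' but was '").append(actual)
                .append("' for ").append(what).append('\n');
        }
    }

    public static void main(String[] args) {
        // Each verifyText command collapses to one call instead of a try/catch block.
        verifyEquals("Home", "Home", "nav header");     // passes, records nothing
        verifyEquals("Logout", "Login", "session link"); // fails, recorded
        System.out.print(verificationErrors);
    }
}
```

In a real test you would assert at the end that verificationErrors is empty; here we just print the collected messages.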
My solution to this pungent code smell was to develop Selenium Sushi, a Visual Studio C# project template and library that lets you eliminate most, if not all, of this duplication. With the library I can write a single line of code to match the original line from the IDE test case:
Verify.AreEqual("Home",
    selenium.GetText("//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
I have an extensive article covering this (Web Testing with Selenium Sushi: A Practical Guide and Toolset), published on Simple-Talk.com in February 2011.
You can also put some fragments or one-liners, e.g.
note("now on page: " . $sel->get_location() . ", " . $sel->get_title());
into the "code snippets" collection of your IDE (I use Eclipse).
That's not true reuse, but hey, it works for me for throwaway test scripts or quick enhancements of existing test scripts.

Is there a tool for creating historical reports out of JUnit/NUnit results?

I'm looking for a way to get a visual report showing:
overall test success percentage over time (whether and how quickly the tests are going greener)
visualised individual test results over time (to easily notice when a test that has been green for a long time goes red, or, vice versa, to pay attention to a test that has just gone green)
any other visual statistics that would benefit the testers and the project as a whole
Basically, a tool that would generate results from the whole test results directory, not just from a single (daily) run.
It seems this could generally be done using XSLT, but XSLT doesn't seem to offer much flexibility for working with multiple files at the same time.
Does such a tool exist already?
I feel fairly confident claiming that most continuous integration engines, such as Hudson (for Java), provide this capability either natively or through plugins. In Hudson's case there are a few code coverage plugins available already, and I think it draws basic graphs from unit test results automatically.
Oh, and remember to configure the CI properly; for example, our Hudson polls CVS every 10 minutes, and if it sees any changes it does all the associated tricks (gets the updated .java files, compiles, runs the tests, verifies dependencies, etc.) to see whether the build is still OK.
Hudson will do this, and it will work with NUnit (here), JUnit (natively), and MSTest.exe tests using the steps I outline here. It does all that you require and more; even if you want it to ONLY run tests and give you feedback on those, it can.
There's a new report for NUnit/JUnit results called Allure. To retrieve information from NUnit you need to use the NUnit adapter; for JUnit, read the following wiki page. You can use it with Jenkins via the respective plugin.