Unit test project takes ~40 seconds to build via NCrunch with MS Fakes

I've got about 27 tests in a unit testing project, one of which uses System.Fakes to fake a System.DateTime call. The test project seems to regenerate the Fakes assembly for System with every build, which makes NCrunch VERY slow to show unit test results. I've not experienced this when using RhinoMocks to mock interfaces in tests, so I was wondering whether anyone knows of a way to improve performance when using Microsoft Fakes.
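For reference, the DateTime fake in question looks roughly like this (a minimal sketch of the standard MS Fakes shim pattern; Order and CreatedOn are hypothetical stand-ins for the code under test):

// ShimsContext lives in Microsoft.QualityTools.Testing.Fakes.
using (ShimsContext.Create())
{
    // Redirect every DateTime.Now call to a fixed date for the duration of the context.
    System.Fakes.ShimDateTime.NowGet = () => new DateTime(2014, 1, 1);

    var order = new Order(); // hypothetical class that reads DateTime.Now
    Assert.AreEqual(new DateTime(2014, 1, 1), order.CreatedOn);
}

It's this shim machinery that requires the System.Fakes assembly to be generated, which appears to be what the build is paying for.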

I allocated an additional processor to NCrunch and enabled the fast lane thread option, and this made a huge difference. Tests are now as responsive as I've experienced with RhinoMocks. I'm happy.

Related

Nancy.Testing seems slow, is there anything I should be doing to improve performance?

I love Nancy.Testing; it's an excellent way to test my application. But I'm finding it quite slow. One of my test fixtures has 26 tests using the Browser object, and it takes about 1m20s.
I'm creating a single ConfigurableBootstrapper and Browser object in the test fixture setup and reusing them for each request (per test fixture). I've tried loading just a single module rather than all discoverable ones, but it doesn't make any difference.
I do have a lot of mocks for my repository interfaces that are loaded into the ConfigurableBootstrapper; surely once they are loaded they shouldn't affect speed? Also, most of the tests use CSS selectors; are those known to be slow?
The environment in a nutshell:
Test framework: NUnit
Mock framework: Moq
Bootstrapper: ConfigurableBootstrapper
Nancy version: 0.23
Test runner: ReSharper/TeamCity
Is there anything I should be doing to speed up the tests?
Got the answer to this. The problem was the number of dependencies being loaded.
I had the following lines in the ConfigurableBootstrapper:
with.AllDiscoveredModules();
with.EnableAutoRegistration();
This loads the whole universe into the test instance.
I removed these lines and manually added the dependencies needed, almost test by test. I also did some refactoring in my application to reduce the number of dependencies injected to satisfy each request. For example, chances are that if you are editing a customer record you don't need the products repository, so I split several classes down to be more focused (it was a code smell anyway).
Testing time was reduced from 8 minutes to 1.5 minutes.
Word is you can go further:
with.DisableAutoRegistrations();
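Put together, a narrowly scoped setup looks something like this (a sketch only; CustomerModule, ICustomerRepository, and the Moq mock are hypothetical stand-ins for your own types):

var customerRepository = new Mock<ICustomerRepository>();

var bootstrapper = new ConfigurableBootstrapper(with =>
{
    // Register only the module under test instead of AllDiscoveredModules().
    with.Module<CustomerModule>();
    // Register only the dependencies that module actually needs.
    with.Dependency<ICustomerRepository>(customerRepository.Object);
});

var browser = new Browser(bootstrapper);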
I had the same problem. My fourteen tests took more than three minutes. It was so slow that before writing more unit tests I began searching for a solution. This answer inspired me to find the cause of the slowness.
Before optimizing the tests, my code was:
var browser = new Browser(new DefaultNancyBootstrapper());
After optimization, my code became:
var browser = new Browser(with => with.Module(new SomeModule()));
or
var browser = new Browser(with => with.Modules(typeof(SomeModule), typeof(AnotherModule)));
That's all. The tests that took 180+ seconds now need only 3.8 seconds.
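If, as in the question above, you also build the Browser once per fixture and reuse it, the combined pattern looks roughly like this (a sketch using NUnit 2.x attributes; SomeModule and the route are placeholders):

[TestFixture]
public class SomeModuleTests
{
    private Browser browser;

    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
        // Build the narrowly scoped Browser once and reuse it for every test in the fixture.
        browser = new Browser(with => with.Module(new SomeModule()));
    }

    [Test]
    public void Root_returns_OK()
    {
        var response = browser.Get("/", with => with.HttpRequest());
        Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
    }
}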

Running test coverage separately from test execution

We're configuring build steps in TeamCity. Since we have huge problems with test coverage reports (they were there and then inexplicably vanished), we're trying to find a workaround (asking and bountying a question directly related to our issue yielded a very cold response).
Please note that I'm not looking for an opinion but rather a technical knowledge base to support (or kill) this choice of ours. And yes, I've checked the build logs; those are posted in the other thread. This question is about the (in?)sanity of trying an alternative approach. :)
Is it recommended to run one build step for the tests and then another build step for test coverage?
Does it even make sense to run these in separate build steps?
What gains and disadvantages are there to running coverage bundled with, versus separately from, the tests themselves?
Test coverage reports are generated during unit test runs. Unless your problem is with reading the generated reports, it doesn't make sense to run them in separate build steps. Test coverage tells you which parts of your code were run while the tests were running; I don't see how the two could be independent.
It may make more sense to ask for help with the test coverage reports no longer being generated...
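To make the coupling concrete: a typical coverage run wraps the test runner itself, so there is nothing left to split into a second step. In TeamCity, dotCover coverage is likewise a setting on the test runner build step rather than a step of its own. As one illustration, with OpenCover and the NUnit console runner (paths and the assembly name are placeholders):

OpenCover.Console.exe -register:user -target:"nunit3-console.exe" -targetargs:"MyProject.Tests.dll" -output:coverage.xml

The instrumented test run and the coverage data come out of the same invocation.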

Grails, Hudson, and Cobertura, which tests are covering my code?

I just started working on an existing Grails project where there is a lot of code written and not much is covered by tests. The project is using Hudson with the Cobertura plugin, which is nice. As I'm going through things, I'm noticing that even though no specific test classes are written for some code, it shows as covered. Is there an easy way to see which tests are covering the code? It would save me a bit of time to know that information.
Thanks
What you want to do is collect test coverage data per test. Then when some block of code isn't exercised by a test, you can trace it back to the test.
You need a test coverage tool that will do that; AFAIK, this is straightforward to organize: just run one test and collect the test coverage data.
However, most folks also want to know: what is the coverage of the application given all the tests? You could run the tests twice, once to get what-does-this-test-cover information, and then the whole batch to get what-does-the-batch-cover. Some tools (ours included) will let you combine the coverage from the individual tests to produce coverage for the set, so you don't have to run them twice.
Our tools have one nice extra: if you collect test-specific coverage, the tool can tell you, when you modify the code, which individual tests need to be re-run. You need a bit of straightforward scripting for this, to compare the instrumentation data for the changed sources against the results for each test.

Stop unit tests from running

I would like to prevent all subsequent unit tests from running when certain conditions are met in a unit test. Is this possible in Visual Studio 2005?
This sounds like a code smell. You should set up your unit tests so that they are independent of one another, so that one failing test has no implications for any other test. It sounds like you are doing something other than true unit testing at the moment.
This doesn't sound good to me. Unit tests should not have any reliance on ordering.
If you're just trying to save time during a build, consider factoring out the conditional tests into their own assembly and using a build script to determine whether the second set of tests should be run.
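That said, if you genuinely must short-circuit a run, one rough approximation in MSTest is a static guard flag checked during test initialization, so the remaining tests come back inconclusive instead of executing (a sketch only, and the smell noted above still applies):

[TestClass]
public class GuardedTests
{
    // Set when a critical precondition fails; subsequent tests check it.
    private static bool abortRemainingTests;

    [TestInitialize]
    public void CheckForAbort()
    {
        if (abortRemainingTests)
            Assert.Inconclusive("Skipped: an earlier test signalled an abort.");
    }

    [TestMethod]
    public void CriticalPrecondition()
    {
        try
        {
            // ... assert the condition the rest of the suite depends on ...
        }
        catch
        {
            abortRemainingTests = true;
            throw;
        }
    }
}

Note the flag only guards tests that share it, and test order is still not guaranteed across classes.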

Is there a tool for creating historical report out of j/nunit results

Looking for a way to get a visual report about:
overall test success percentage over time (information about whether and how quickly tests are going greener)
visualised single test results over time (to easily notice a test that has gone red after being green for a long time, or, vice versa, to pay attention to a test that has just gone green)
any other visual statistics that would benefit testers and the project as a whole
Basically, a tool that would generate results from the whole test results directory, not just from a single (daily) run.
Generally it seems this could be done using XSLT, but XSLT doesn't seem to have much flexibility for working with multiple files at the same time.
Does such a tool exist already?
I feel fairly confident claiming that most continuous integration engines, such as Hudson (for Java), provide this capability either natively or through plugins. In Hudson's case there are a few code coverage plugins available already, and I think it draws basic graphs from unit test results automatically by itself.
Oh, and remember to configure the CI properly; for example, our Hudson polls CVS every 10 minutes, and if it sees any changes, it does all the associated tricks (get the updated .java files, compile, run tests, verify dependencies, etc.) to see whether the build is still OK.
Hudson will do this, and it will work with NUnit (here), JUnit (natively), and MSTest.exe tests using the steps I outline here. It does all that you require and more. Even if you want it ONLY to run tests and give you feedback on those, it can.
There's a new report framework supporting NUnit/JUnit called Allure. To retrieve information from NUnit you need to use the NUnit adapter; for JUnit, read the following wiki page. You can use it with Jenkins via the respective plugin.