How to stop further execution of Tests within a TestFixture if one of them fails in NUnit?

I want to stop further execution of Tests within a TestFixture if one of them fails in NUnit.
Of course the common and advised practice is to make tests independent of each other. However, the case I would like to use NUnit for requires that all tests and test fixtures following a failed one are not executed. In other words, a test failure should stop the whole NUnit run (or proceed with the next [TestFixture]; both scenarios should be configurable).
The simple, yet unacceptable, solution would be to force NUnit to terminate by sending some kind of signal to the NUnit process.
Is there a way to do this in an elegant way?

I believe you can use NAnt to do this. Specifically, the nunit or nunit2 tasks have a haltonfailure parameter that allows the test run to stop if a test fails.
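In the build file that might look roughly like the following (a hedged, untested sketch; the assembly name is invented, and the attribute names are as I recall them from the NAnt documentation):
<nunit2 haltonfailure="true">
  <formatter type="Plain" />
  <test assemblyname="MyCompany.Tests.dll" />
</nunit2>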

Related

Apache POI - Read from Excel Exceptions in Selenium

If an exception occurs while fetching data from the Excel file, will the execution stop? Will only the current test case stop, or all the test cases?
TestNG behaves differently for exceptions thrown at different stages, so it depends.
Basically, no matter which exception is thrown (except TestNG's SkipException, which is an edge case I'll skip here), you can expect the following (a short sketch illustrating the configuration cases follows this list):
Before configurations
In this case all dependent test and configuration methods will be skipped (unless some of them have the alwaysRun = true annotation attribute).
Test method
The test is marked as failed. All tests that depend on this method will also be skipped.
After configurations
Usually this does not affect your test results, but it may fail the build even if all the tests passed. After-configuration failures may also affect subsequent tests if those expect some state that the cleanup was supposed to restore (though that is not TestNG functionality as such).
DataProvider
All the related tests will be skipped; everything else is unaffected.
Test Class constructor
This breaks your run; no tests will be executed.
Factory method
I don't remember the exact behaviour (it needs rechecking); this might fail the whole launch or just some test classes. Either way, an exception here is a serious issue, so try to avoid it.
TestNG Listeners
This breaks your whole test launch. Try to implement listeners error-free, wrapping risky code in try/catch blocks.
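To make the before-configuration case concrete, here is a minimal hedged sketch (class and method names are invented for illustration):
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class ExceptionStagesExample {

    @BeforeClass
    public void setUp() {
        // An exception here causes the tests below to be skipped, not failed.
        throw new IllegalStateException("before-configuration failure");
    }

    @Test
    public void skippedBecauseSetUpFailed() {
        // never runs
    }

    // alwaysRun = true lets this cleanup execute despite the configuration failure.
    @AfterClass(alwaysRun = true)
    public void tearDown() {
        System.out.println("cleanup still runs");
    }
}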

How to limit the number of test threads in Cargo.toml?

I have tests which share a common resource and can't be executed concurrently. These tests fail with cargo test, but work with RUST_TEST_THREADS=1 cargo test.
I can modify the tests to wait on a global mutex, but I don't want to clutter them if there is a simpler way to force Cargo to set this environment variable for me.
As of Rust 1.18, there is no such option. In fact, there is not even a simpler option to disable parallel testing.
However, what might help you is cargo test -- --test-threads=1, which is the recommended way of doing what you are doing over the RUST_TEST_THREADS environment variable. Keep in mind that this only sets the number of threads used for running tests, which run in addition to the main thread.
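If you do end up serializing just the conflicting tests with a global mutex, a minimal sketch might look like this (it assumes a toolchain new enough for Mutex::new in a static to be const; on older compilers such as the 1.18 mentioned above, the lazy_static crate fills the same role):
use std::sync::Mutex;

// One lock shared by every test that touches the common resource.
static RESOURCE_LOCK: Mutex<()> = Mutex::new(());

#[test]
fn exclusive_test_one() {
    // unwrap_or_else recovers the guard even if an earlier test panicked
    // while holding the lock and poisoned the mutex.
    let _guard = RESOURCE_LOCK.lock().unwrap_or_else(|e| e.into_inner());
    // ... exercise the shared resource ...
}

#[test]
fn exclusive_test_two() {
    let _guard = RESOURCE_LOCK.lock().unwrap_or_else(|e| e.into_inner());
    // ... exercise the shared resource ...
}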

Ordering tests in TFS 2012

There are a few tests in my testing solution that must be run first or else later tests will fail. I want a way to ensure these are run first and in a specific order. Is there any way of doing this other than using a .orderedtest file?
Some problems with the .orderedtest:
Certain tests should be run in a random order after the "set up" tests are finished
Ordered test does not seem to call the ClassInitialize method
Isn't an orderedtest a form of test list, which is deprecated in VS/TFS 2012?
My advice would be to fix your tests to remove the dependencies (i.e. make them proper "unit" tests) - otherwise they are bound to cause problems later, e.g.:
causing a simple failure to cascade so that hundreds of tests fail and make it hard to find the root cause
failing unexpectedly because someone has inadvertently modified the execution order
reporting passes when they should in fact be failing, just because the initial state is not what they require
You could try approaches like:
keep the tests separate but make each of them set up and tear down the test environment it requires (a shared class providing the initial state would be helpful here; see the sketch after this list)
merge the related tests into a single one, so that you can control the setup, execution, and close-down in a robust way.
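For the first approach, a rough sketch of such a shared fixture in MSTest (the TestEnvironment type and its methods are hypothetical stand-ins for whatever state your tests need):
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Each test rebuilds the state it needs instead of relying on an
// earlier "set up" test having run first.
[TestClass]
public abstract class SharedStateTestBase
{
    protected TestEnvironment Env;   // hypothetical helper type

    [TestInitialize]
    public void SetUp()
    {
        Env = new TestEnvironment();
        Env.SeedKnownState();        // hypothetical seeding method
    }

    [TestCleanup]
    public void TearDown()
    {
        Env.Dispose();
    }
}

[TestClass]
public class CustomerTests : SharedStateTestBase
{
    [TestMethod]
    public void FindCustomer_ReturnsSeededRecord()
    {
        Assert.IsNotNull(Env.FindCustomer("known-id"));   // hypothetical query
    }
}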

How do you compare the results of two nunit test runs?

We currently have a situation where several tests are failing. Someone else is working on this; I have been tasked with other work. So I plan on running the tests in NUnit before I begin, to get a baseline of which tests fail and with what failure messages. I would like to use this result to verify that those tests fail with exactly the same result while I test my own code. Are there any resources that would allow me to do this?
Update
I'm aware of the ExpectedException attribute, but that will not work for the tests that fail on an assertion. Also, there are thousands of tests, of which only about 100 are failing. I was hoping for something that would compare the two test runs and show me the differences.
I'd throw an
[Ignore("SomeCustomStringICanFindLater")]
attribute on the failing tests until they are fixed.
See IgnoreAttribute.
And try to convince your manager that a broken build should be everyone's top priority.
After doing some research while waiting on an answer, I found that the console runner produces XML output. I can use a diff tool to compare the two test runs and see which tests failed differently from the baseline run.
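If a raw diff turns out to be too noisy, a small hedged sketch along these lines could reduce each run to a name-to-result map first (it assumes the test-case elements carry name and result attributes, as the NUnit 2.5+ TestResult.xml does; adjust for your version):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Prints every test whose result differs between two NUnit XML result files.
class CompareTestRuns
{
    static void Main(string[] args)
    {
        var baseline = Load(args[0]);
        var current = Load(args[1]);

        foreach (var name in baseline.Keys.Union(current.Keys).OrderBy(n => n))
        {
            baseline.TryGetValue(name, out var before);
            current.TryGetValue(name, out var after);
            if (before != after)
                Console.WriteLine("{0}: {1} -> {2}", name, before ?? "absent", after ?? "absent");
        }
    }

    // Maps fully qualified test name -> result ("Success", "Failure", ...).
    // Assumes unique test names; parameterized tests may need extra handling.
    static Dictionary<string, string> Load(string path) =>
        XDocument.Load(path)
                 .Descendants("test-case")
                 .ToDictionary(tc => (string)tc.Attribute("name"),
                               tc => (string)tc.Attribute("result"));
}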

Stop unit tests from running

I would like to prevent all subsequent unit tests from running when certain conditions are met in a unit test. Is this possible in Visual Studio 2005?
This sounds like a code smell. You should set up your unit tests so that they are independent of one another, so that one failing test has no implications for any other test. It sounds like you are doing something other than true unit testing at the moment.
This doesn't sound good to me. Unit tests should not have any reliance on ordering.
If you're just trying to save time during a build, consider factoring out the conditional tests into their own assembly and using a build script to determine whether the second set of tests should be run.
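As a hedged sketch of that build-script idea (the assembly names are invented; it relies on MSTest returning a non-zero exit code when a test fails, which makes the first Exec fail the target and prevents the second from running):
<Target Name="RunTests">
  <!-- If CriticalTests fails, Exec fails the target and the full suite never runs. -->
  <Exec Command="mstest /testcontainer:CriticalTests.dll" />
  <Exec Command="mstest /testcontainer:RemainingTests.dll" />
</Target>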