Pex: How to indicate in a parameterized test that I expect an exception under certain conditions? - pex

I'm just getting started with Pex and running into an issue as described in the title. It seems that any parameterized tests generated by Pex or added by hand will create failing test cases for any inputs that cause an exception to be raised. Is there a way for me to indicate that certain inputs should raise exceptions, and therefore not cause a specific test to fail?

If you select the failing test, there is an 'Allow Exception' option which tells Pex that the exception is correct behaviour.
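If you prefer to declare this in code rather than through the UI, Pex also supports attributes for allowed exceptions. A minimal sketch, assuming the `PexAllowedException` attribute from `Microsoft.Pex.Framework` (as I recall it); `StringExtensions.Truncate` is a hypothetical method under test:

```csharp
using System;
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[PexClass(typeof(StringExtensions))]
public partial class StringExtensionsTest
{
    // Tell Pex that ArgumentNullException is expected behaviour
    // for some inputs, rather than a test failure.
    [PexMethod]
    [PexAllowedException(typeof(ArgumentNullException))]
    public string Truncate(string value, int maxLength)
    {
        return StringExtensions.Truncate(value, maxLength);
    }
}
```

With the attribute in place, generated inputs that trigger that exception type are reported as passing explorations instead of failures.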

Related

Apache POI - Read from Excel Exceptions in Selenium

If an exception occurs while fetching the data from the Excel file, will the execution stop? Will it stop only the current test case, or all the test cases?
TestNG behaves differently for exceptions raised at different stages, so it depends.
Basically, no matter which exception is thrown (except TestNG's SkipException, which is an edge case I'll skip here), you can expect the following:
Before configurations
In this case, all dependent test and configuration methods will be skipped (unless some of them have the alwaysRun=true annotation attribute).
Test method
The test will be marked as failed. All tests that depend on this method will also be skipped.
After configurations
Usually this does not affect your test results, but it may fail the build even if all the tests passed. After-configuration failures may also affect ongoing tests if they expect some state from the cleanup (but this is not related to TestNG functionality).
DataProvider
All the related tests will be skipped; everything else will be unaffected.
Test Class constructor
This will break your run; no tests will be executed.
Factory method (need to recheck)
I don't remember the exact behaviour. It might fail the whole launch or just some test classes. Either way, an exception here is a serious issue, so try to avoid it.
TestNG Listeners
This will break your whole test launch. Try to implement listeners error-free, surrounding risky code with try/catch blocks.
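As an illustration of the test-method and configuration behaviour above, here is a sketch using standard TestNG annotations (the method names are hypothetical):

```java
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class StagesExample {

    // alwaysRun = true lets this configuration run even if an
    // earlier configuration method threw an exception.
    @BeforeMethod(alwaysRun = true)
    public void setUp() { }

    @Test
    public void login() {
        // If this throws, the test is marked failed...
    }

    // ...and this dependent test is skipped rather than run.
    @Test(dependsOnMethods = "login")
    public void checkout() { }
}
```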

Using Rubberduck unit tests, how can I find out which one of multiple asserts failed?

I'm using Rubberduck to unit test my VBA implementations. When using multiple Asserts of the same kind (e.g. Assert.IsTrue) in one TestMethod, the test result does not tell me which of them failed, as far as I can see.
Is there a way to find out which Assert failed, or is this on Rubberduck's future roadmap? Of course, I could add my own information, e.g. by using Debug.Print before each Assert, but that would add a lot of extra code.
I know there are different opinions about multiple Asserts in one test, but I chose to have them in my situation and this discussion is already covered elsewhere.
Disclaimer: I'm heavily involved in Rubberduck's development.
The IAssert interface that both the Rubberduck.AssertClass and the Rubberduck.PermissiveAssertClass implement includes an optional message parameter on every single member.
Simply include a different, descriptive message for each assertion:
Assert.AreEqual expected, actual, "oops, didn't expect this"
Assert.IsTrue result, "truth is an illusion"
The Test Explorer toolwindow will display the custom message in the Message column, but only when the assertion fails.
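For example, in a full test method it might look like this (a sketch; `Computed` and `IsReady` are hypothetical functions in the module under test):

```vba
'@TestMethod
Public Sub MultipleAssertsWithMessages()
    ' Each assertion carries its own message, so the failing one
    ' is identifiable in the Test Explorer's Message column.
    Assert.AreEqual 42, Computed(), "Computed() returned an unexpected value"
    Assert.IsTrue IsReady(), "IsReady() was False after initialization"
End Sub
```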

Can I throw an exception in Cleanup to fail a test?

I am running some UI tests using WebDriver and MSpec. I added a check in Cleanup that no JavaScript errors were raised. But, throwing an exception in here doesn't fail the tests. How can I get this to work? I need to fail any test, and don't really want to do this separately in each test.
If I remember correctly, there isn't really a way to do this in a Cleanup. Cleanups happen after tests, so it would be too late to fail them. As a matter of principle, it may be better to write the assertion that no JavaScript errors were raised as its own spec at the end of each context.
Even if it can be done from the Cleanup code, it should not be done that way.
Reason: how would you know which of your multiple tests failed?
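One way to follow that advice is to make the check an explicit observation in each spec class instead of a Cleanup. A sketch in MSpec style; `Browser` and `GetJavaScriptErrors` are hypothetical helpers (the latter wrapping whatever WebDriver log access you use):

```csharp
using Machine.Specifications;

[Subject("Checkout page")]
public class When_loading_the_checkout_page
{
    Because of = () => Browser.Navigate("/checkout");

    It should_show_the_order_summary = () =>
        Browser.PageContains("Order summary").ShouldBeTrue();

    // Explicit spec instead of a Cleanup check, so a JavaScript
    // error is reported as a failure of this specific context.
    It should_not_raise_javascript_errors = () =>
        GetJavaScriptErrors().ShouldBeEmpty();
}
```

The duplication can be reduced with a behaviour (`[Behavior]`/`Behaves_like<>`) shared across contexts, which keeps the failure attributed to the right spec.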

Testing Exceptions using SenTestingKit/OCUnit

The only solution I seem to be able to find for testing for exceptions is using SenTestingKit's STAssertThrows and STAssertThrowsSpecific; however, in both cases, when the exception is thrown, the application under test hangs until I manually ask it to continue. Surely the exceptions should be swallowed by the testing framework? And if not, does anyone have a better suggestion for testing exceptions?
I was going to delete this question, but here is the solution for anyone else who finds themselves in the same situation:
The reason the application was breaking was that I had an Exception Breakpoint set up. This breaks as soon as an exception is raised, not when it bubbles up, so execution was being halted before it had even got as far as my assertion. I just need to toggle off breakpoints (or just the exception breakpoint) when running tests.

What is the difference between a bug and a script issue?

What is the difference between a bug and a script issue?
And whenever a test case fails, how do I work out why? I mean, what are the basic points to check to determine how the test case failed?
The term "test case" is too generic - what do you mean by it in your context?
However, generally a test case should identify the condition(s) that make the test "pass". If the condition(s) are not satisfied, the test fails.
Generally, an issue is a problem with something (for example, a problem with the script). It may be a system configuration issue that prevents the script from executing successfully, or it may be an error in the script code, which would be a bug in the script.