How to set outcome of XUnit test

Can you manually set the outcome of a test using xUnit? In one of my tests I have to fulfill a prerequisite, and if it fails I need to set the outcome of the test to inconclusive. In NUnit you can set the outcome with Assert.Inconclusive(), Assert.Fail(), etc. Can I do something similar with xUnit? Is there a best practice for this?

There is no out-of-the-box way of doing what you want in xUnit. See https://xunit.github.io/docs/comparisons.html for more information.
However, Assert.Fail("This test is failing") can be replicated with Assert.True(false, "This test is failing").
The closest equivalent of Assert.Inconclusive in xUnit is the Skip attribute. But as far as I can see, there is no way to invoke it partway through a method, as you can with NUnit's Assert.Inconclusive.
Someone did write an Assert.Skip method for v1.9 here: https://github.com/xunit/xunit/blob/v1/samples/AssertExamples/DynamicSkipExample.cs; however, it no longer compiles in v2.2.0. In fact, the xUnit authors seem to be opposed to the idea of inconclusive tests (https://xunit.codeplex.com/workitem/9691).
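For illustration, the two workarounds above might look like this in xUnit 2.x (the method names and messages are my own, not from the original question):

```csharp
using Xunit;

public class PrerequisiteTests
{
    // Closest thing to Assert.Fail: an assertion that always fails.
    [Fact]
    public void FailsExplicitly()
    {
        Assert.True(false, "This test is failing");
    }

    // Closest thing to Assert.Inconclusive: skip the whole test.
    // Note the skip reason is fixed in the attribute; it cannot be
    // decided partway through the method body.
    [Fact(Skip = "Prerequisite not met")]
    public void SkippedTest()
    {
    }
}
```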

I think you can use NUnit's Assume.That in your test method; it won't work during setup, but you can use it inside the test method itself.


Running SpecFlow tests with different test cases

NUnit (and the like) has method attributes which allow tests to be run multiple times with different arrange values. Is something similar possible with SpecFlow?
What I am aiming for is a way to run the same scenario tests in a feature file with as many browser drivers as I can, in one test run.
You can use scenario outlines. In the Examples section of a scenario outline you can specify the driver name, and your code logic should take action according to that driver. Please see more details about scenario outlines below:
https://github.com/cucumber/cucumber/wiki/Scenario-outlines
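As a sketch, a scenario outline with a browser column might look like this (the scenario name, steps, and browser values are illustrative, not from the original question):

```gherkin
Scenario Outline: Checkout works in every browser
  Given I open the shop in "<browser>"
  When I add an item to the cart
  Then the cart shows 1 item

  Examples:
    | browser |
    | chrome  |
    | firefox |
```

Your step definitions would then read the browser value and create the matching web driver before running the steps.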
Examples are one solution, but in your case a little cumbersome, as you have to specify them at every scenario.
In your case, please have a look at the targets feature of the SpecFlow+ Runner. With that you can "multiply" your scenarios for different configurations. If you put the web driver that should be used into each configuration, you can test as many web drivers as you want.
Have a look at this example: https://github.com/techtalk/SpecFlow.Plus.Examples/tree/master/SeleniumWebTest
Full Disclosure: I am one of the developers of SpecFlow & SpecFlow+
Use scenario outlines and this tool if you want to use browsers as tags:
https://github.com/unickq/Unickq.SeleniumHelper

Is there a way to repeat some tests for a week by using testNG framework?

I need to run a test for a week. I tried several ways.
1. Use a factory to add tests. All cases run only after the factory method has finished, so I could not judge how many tests to add.
2. Changed the TestNG class and rewrote some methods. This only works when I need to repeat all tests.
Is there any good way to implement that?
I have thought of changing both XmlSuite and the TestNG class: add isRepeat and time2Repeat properties to XmlSuite and change the TestNG method runSuitesSequentially to support repeated running. I will give it a try later.
This solution might work, but it feels like a hack. If anyone can suggest a cleaner way, please share your thoughts.
Thanks!
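The time-boxed repeat idea can be prototyped without touching TestNG internals at all; here is a minimal framework-agnostic sketch (the class and method names are my own, not TestNG API):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

public class TimedRepeat {
    // Repeatedly invokes the given check until the deadline passes.
    // Returns the number of iterations that ran; throws on the first failure.
    static long repeatFor(Duration duration, BooleanSupplier check) {
        Instant deadline = Instant.now().plus(duration);
        long runs = 0;
        while (Instant.now().isBefore(deadline)) {
            if (!check.getAsBoolean()) {
                throw new AssertionError("check failed on iteration " + runs);
            }
            runs++;
        }
        return runs;
    }

    public static void main(String[] args) {
        // Demo with a short duration; a week-long run would use Duration.ofDays(7).
        long runs = repeatFor(Duration.ofMillis(50), () -> 1 + 1 == 2);
        System.out.println("iterations: " + runs);
    }
}
```

A real suite could call such a loop from a single TestNG test method, keeping the repeat logic out of the framework classes entirely.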

Tool or Eclipse-based plugin available for generating test cases for Salesforce platform Apex classes

Can anyone please tell me whether there are any tools or Eclipse-based plugins available for generating relevant test cases for Apex classes on the Salesforce platform? It seems that with code coverage they are not expecting outcomes like we expect with JUnit; they want to check whether the test cases go through the flows of the source classes (i.e., the paths the code takes).
Please don't take this post the wrong way; I don't want anyone to write test cases for my code :). I have posted this question because of the way Salesforce expects code coverage to work. Thanks.
Although Salesforce requires a certain percentage of code coverage for your test cases, you really need to be writing cases that check the results to ensure that the code behaves as designed.
So, even if there was a tool that could generate code to get 100% coverage of your test class, it wouldn't be able to test the results of those method calls, leaving you with a false sense of having "tested code".
I've found that breaking up long methods into separate, sometimes static, methods makes it easier to do unit testing. You can test each individual method, and not worry so much about tweaking parameters to a single method so that it covers all execution paths.
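As a small Apex sketch of that advice (the class, method, and values are invented for illustration): a short static method can be asserted on directly, rather than only "covered".

```apex
public class PriceCalculator {
    // Small, static, single-purpose: easy to test in isolation.
    public static Decimal applyDiscount(Decimal price, Decimal percent) {
        return price - (price * percent / 100);
    }
}

@isTest
private class PriceCalculatorTest {
    @isTest
    static void appliesDiscount() {
        // Assert on the result, not just the coverage.
        System.assertEquals(90, PriceCalculator.applyDiscount(100, 10));
    }
}
```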
It's now possible to generate test classes automatically for your class/trigger/batch. You can install the "Test Class Generator" app from the AppExchange and see it working.
This will really help you generate test classes and save a lot of development time.

Supply parameters to NUnit tests at run time

NUnit 2.5 adds support for parameterized tests with attributes like ValuesAttribute and ValueSourceAttribute so that one can write something like:
[Test]
public void MoneyTransfer(
    [Values("USD", "EUR")] string currency,
    [Values(0, 100)] long amount)
{
}
and get all permutations of the specified parameters. Priceless. However, it would be cool to specify (override) those parameters directly in the NUnit GUI before pressing 'Run'. Unfortunately, there is no such functionality in NUnit (yet?). Is there an alternative tool or testing framework that allows me to specify parameters before running a test (something like the way I can provide parameters in WcfTestClient.exe)?
One option could be to try the TestCaseSource attribute that's supported - basically, you can define an IEnumerable method as the source of data for a test, and within that you can look anywhere you like for test data - it could pull from a database, a flat file, iterate round the files in a given directory, etc.
Have a look at that, it's a handy thing to know about.
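A minimal sketch of what that could look like (the fixture name, data values, and source-method name are illustrative, not from the original question):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class MoneyTransferTests
{
    // The source method can read its values from a file, a database,
    // or anywhere else at run time.
    public static IEnumerable<object[]> TransferCases()
    {
        yield return new object[] { "USD", 0L };
        yield return new object[] { "EUR", 100L };
    }

    [Test, TestCaseSource("TransferCases")]
    public void MoneyTransfer(string currency, long amount)
    {
        // ... exercise the transfer with the supplied values ...
    }
}
```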
Unit tests are supposed to run automatically and be reproducible. By changing tests at runtime you break this behavior.
So I don't think this is something you want to do...

How to run tests conditionally in NUNIT?

I'd like to be able to set a condition from which to decide whether certain tests run in NUNIT.
For example if a global variable x = 1 then only run the tests from a certain class/assembly or only run the first test.
Is anything like this possible?
How would one go about doing this?
Thanks!!
I would recommend using Categories with your NUnit tests. This would allow you to run groups of them at a time, or all at once.
While Categories seem like the "purer" way to do this, you could also programmatically skip tests, as this post asks: Programmatically skip an NUnit test. It seems to me that this approach is what you're looking for, though.
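For example, categories are applied with an attribute and then selected when the run starts (the category and test names here are invented):

```csharp
using NUnit.Framework;

[TestFixture]
public class ConditionalTests
{
    [Test, Category("Smoke")]
    public void RunsInSmokeSuite() { /* ... */ }

    [Test, Category("Nightly")]
    public void RunsInNightlySuite() { /* ... */ }
}

// The NUnit 2.x console runner can then select a group, e.g.:
//   nunit-console yourtests.dll /include:Smoke
```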