Using Mocks with multiple scenarios in NBehave - rhino-mocks

I'm using NBehave to write out my stories and using Rhino Mocks to mock out dependencies of the System(s) Under Test.
However I'm having a problem resetting expected behaviour in my mock dependencies when moving from one scenario to the next.
I only want to assert that the save method on my repository was called in two scenarios:
dependancyRepository.AssertWasCalled(ear =>
    ear.Save(
        Arg<IDependancy>.Is.Equal(dependency)
    )
);
But this is being called in each scenario and fails in my second scenario because Rhino Mocks expects it to be called just once. I don't want to be forced to use explicit expectations, but it kind of looks like I'll have to.
There are a few examples out there of NBehave with Rhino Mocks, but I can't find one that has multiple scenarios. And there are a few with NBehave and multiple scenarios, but no mocks.
Anybody else run into this issue?
Cheers

If you don't want to assert that .Save(...) was called in each scenario, then don't set up that expectation for each scenario; set it up only for the scenarios where you expect it to be called.
If this doesn't answer your question, please clarify your question with more information; it's unclear what you're trying to do.

Make the AssertWasCalled call during your Then clause of the relevant scenario, and not in any others.
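For illustration, here is a minimal sketch of that idea. The scenario-building calls below are only indicative of NBehave's fluent style, and names such as IDependancyRepository, SystemUnderTest and the step texts are placeholders built around the question, not a confirmed API:

// Build a fresh mock per scenario so calls recorded in one scenario
// cannot leak into the assertions of the next.
var dependancyRepository = MockRepository.GenerateMock<IDependancyRepository>();
var dependency = MockRepository.GenerateStub<IDependancy>();
var sut = new SystemUnderTest(dependancyRepository);

story.WithScenario("saving a dependency")
    .Given("a dependency to persist", () => { /* arrange test data */ })
    .When("the system processes it", () => sut.Process(dependency))
    // The verification lives only in this scenario's Then step, so the
    // other scenarios never assert against this mock at all.
    .Then("the repository save is called", () =>
        dependancyRepository.AssertWasCalled(ear =>
            ear.Save(Arg<IDependancy>.Is.Equal(dependency))));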

Related

Pytest BDD - select stubbed or live API calls

I'm working on developing some Behavior Driven Development (BDD) style tests using pytest-bdd. We want to re-use the same features and more or less the same step definitions to have both stubbed and live calls to a third-party API, i.e. we want to reuse test code for integration and end-to-end testing.
I'm wondering whether there is a convention for how to switch between mocked and real calls in pytest-bdd or pytest.
This question is similar: Running pytest tests against multiple backends? Its answer is to add a parser option with a pytest_addoption hook placed in the top-level conftest.py.
It looks like a good approach to selecting a stubbed or live API call is to add a parser option with a pytest_addoption hook. Conditional logic will then need to check that option in the relevant tests.
This answer to a similar question is the source for this approach and has more detail: https://stackoverflow.com/a/50686439/961659

Cucumber #before and #after hooks usage

I am designing a BDD automation framework where we are thinking of using Cucumber @Before and @After hooks. Can anyone suggest the best use of these?
There are many different hooks for before and after:
BeforeFeature: Automation logic that has to run before each feature. Say the feature file has 10 scenarios; BeforeFeature runs once and is common to all 10 of them. One use case: say you want to read a global value from config that is used by all scenarios in a feature. You don't need to fetch it during each scenario; create a BeforeFeature hook instead.
BeforeScenario: Runs before each scenario. Say you have 10 scenarios and one step is common to all of them; put it in a BeforeScenario hook. For example, if you need a REST client that is shared by all 10 scenarios, create it in a BeforeScenario hook.
AfterFeature: If you want to clean up something that was common to the whole feature, do it in AfterFeature. This runs only after all scenarios in the feature have completed.
AfterScenario: Runs after each scenario, for cleanup that has to happen per scenario. For example, if you want to dispose of the REST client created in BeforeScenario, you can do that here.
A short sketch of these hooks follows; see the documentation for the full list.
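The hook names above match SpecFlow, the Cucumber-style tool for .NET (in Cucumber-JVM the scenario-level equivalents are @Before and @After). As a hedged sketch only, here is roughly how such hooks might be wired up in a SpecFlow binding class; the class and method names are illustrative:

using TechTalk.SpecFlow;

[Binding]
public class Hooks
{
    // Runs once before any scenario in a feature starts (static in SpecFlow).
    [BeforeFeature]
    public static void LoadSharedConfig()
    {
        // e.g. read config values used by every scenario in the feature
    }

    // Runs before every scenario; a good place to build a shared REST client.
    [BeforeScenario]
    public void CreateRestClient()
    {
    }

    // Runs after every scenario; dispose of what BeforeScenario created.
    [AfterScenario]
    public void DisposeRestClient()
    {
    }

    // Runs once after all scenarios in the feature have finished.
    [AfterFeature]
    public static void CleanUpFeature()
    {
    }
}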

How to set outcome of XUnit test

Can you manually set the outcome of a test using xUnit? In one of my tests I have to fulfill a prerequisite, and if it fails I need to set the outcome of the test to inconclusive. In NUnit you can set the outcome with Assert.Inconclusive(), Assert.Fail(), etc. Am I able to do something similar with xUnit? Is there a best practice for how to do this?
There is no out-of-the-box way of doing what you want in xUnit. See https://xunit.github.io/docs/comparisons.html for more information.
However, Assert.Fail("This test is failing") can be replicated using Assert.True(false, "This test is failing").
The closest equivalent of Assert.Inconclusive in xUnit would be the Skip attribute. But as far as I can see, there is no way to invoke this part way through a method, like you can with NUnit's Assert.Inconclusive.
Someone did write an Assert.Skip method for v1.9 here: https://github.com/xunit/xunit/blob/v1/samples/AssertExamples/DynamicSkipExample.cs; however, it no longer compiles in v2.2.0. The xUnit authors actually seem to be antagonistic to inconclusive tests (https://xunit.codeplex.com/workitem/9691).
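To make those two suggestions concrete, here is a small sketch; the test names and skip reason are illustrative:

using Xunit;

public class OutcomeExamples
{
    [Fact]
    public void FailsExplicitly()
    {
        // xUnit has no Assert.Fail, but a failing assertion with a message
        // serves the same purpose.
        Assert.True(false, "This test is failing");
    }

    // The Skip parameter is the closest thing to Inconclusive, but it is
    // declared on the attribute, not raised part way through the method.
    [Fact(Skip = "Prerequisite environment not available")]
    public void SkippedBecausePrerequisiteIsMissing()
    {
    }
}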
I think you can use Assume.That in your unit test method. It won't work during setup, but you will have to use it in the test method itself.

Tool or Eclipse-based plugin available for generating test cases for Salesforce platform Apex classes

Can anyone please tell me whether there is any kind of tool or Eclipse-based plugin available for generating relevant test cases for Salesforce platform Apex classes? It seems that with code coverage they are not expecting outcomes the way we expect with JUnit; they want to check whether the test cases exercise the flows of the source classes (i.e. the code paths the code goes through).
Please don't take this post the wrong way; I don't want anyone to write test cases for my code :). I have posted this question because of the way Salesforce expects code coverage to be achieved. Thanks.
Although Salesforce requires a certain percentage of code coverage for your test cases, you really need to be writing cases that check the results to ensure that the code behaves as designed.
So, even if there was a tool that could generate code to get 100% coverage of your test class, it wouldn't be able to test the results of those method calls, leaving you with a false sense of having "tested code".
I've found that breaking up long methods into separate, sometimes static, methods makes it easier to do unit testing. You can test each individual method, and not worry so much about tweaking parameters to a single method so that it covers all execution paths.
It's now possible to generate test classes automatically for your class/trigger/batch. You can install the "Test Class Generator" app from AppExchange and see it working.
This would really help you generate test classes and save a lot of your development time.

TestNG & Selenium: Separate tests into "groups", run ordered inside each group

We use TestNG and Selenium WebDriver to test our web application.
Now our problem is that we often have several tests that need to run in a certain order, e.g.:
login to application
enter some data
edit the data
check that it's displayed correctly
Now obviously these tests need to run in that precise order.
At the same time, we have many other tests which are totally independent from the list of tests above.
So we'd like to be able to somehow put tests into "groups" (not necessarily groups in the TestNG sense), and then run them such that:
tests inside one "group" always run together and in the same order
but different test "groups" as a whole can run in any order
The second point is important, because we want to avoid dependencies between tests in different groups (so different test "groups" can be used and developed independently).
Is there a way to achieve this using TestNG?
Solutions we tried
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We considered writing a method interceptor. However, this has the disadvantage that running tests from inside an IDE becomes more difficult (because directly invoking a test on a class would not use the interceptor). Also, tests using dependsOnMethods cannot be ordered by the interceptor, so we'd have to stop using that. We'd probably have to create our own annotation to specify ordering, and we'd like to use standard TestNG features as far as possible.
The TestNG docs propose using preserve-order to order tests. That looks promising, but only works if you list every test method separately, which seems redundant and hard to maintain.
Is there a better way to achieve this?
I am also open for any other suggestions on how to handle tests that build on each other, without having to impose a total order on all tests.
PS
alanning's answer points out that we could simply keep all tests independent by doing the necessary setup inside each test. That is in principle a good idea (and some tests do this), however sometimes we need to test a complete workflow, with each step depending on all previous steps (as in my example). To do that with "independent" tests would mean running the same multi-step setup over and over, and that would make our already slow tests even slower. Instead of three tests doing:
Test 1: login to application
Test 2: enter some data
Test 3: edit the data
we would get
Test 1: login to application
Test 2: login to application, enter some data
Test 3: login to application, enter some data, edit the data
etc.
In addition to needlessly increasing testing time, this also feels unnatural - it should be possible to model a workflow as a series of tests.
If there's no other way, this is probably how we'll do it, but we are looking for a better solution, without repeating the same setup calls.
You are mixing "functionality" and "test". Separating them will solve your problem.
For example, create a helper class/method that executes the steps to log in, then call that class/method in your Login test and all other tests that require the user to be logged in.
Your other tests do not actually need to rely on your Login "Test", just the login class/method.
If later back-end modifications introduce a bug in the login process, all of the tests which rely on the Login helper class/method will still fail as expected.
Update:
Turns out this already has a name, the Page Object pattern. Here is a page with Java examples of using this pattern:
http://code.google.com/p/selenium/wiki/PageObjects
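The question's stack is Java and TestNG, but the pattern is language-agnostic; here is a minimal hedged sketch of a Page Object / login helper using Selenium's .NET bindings, with purely illustrative locators and names:

using OpenQA.Selenium;

// Page object: wraps the login page so tests call LoginAs(...) instead of
// depending on a separate login "test".
public class LoginPage
{
    private readonly IWebDriver driver;

    public LoginPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    public void LoginAs(string user, string password)
    {
        driver.FindElement(By.Id("username")).SendKeys(user);     // locator is illustrative
        driver.FindElement(By.Id("password")).SendKeys(password); // locator is illustrative
        driver.FindElement(By.Id("login")).Click();
    }
}

Any test that needs an authenticated session reuses the helper (for example, new LoginPage(driver).LoginAs("user", "secret")); if the login flow breaks, every test that depends on it fails, as expected.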
Try dependsOnGroups along with dependsOnMethods. Add all the methods in the same class to one group.
For example
#Test(groups={"cls1","other"})
public void cls1test1(){
}
#Test(groups={"cls1","other"}, dependsOnMethods="cls1test1", alwaysrun=true)
public void cls1test2(){
}
In class 2
#Test(groups={"cls2","other"}, dependsOnGroups="cls1", alwaysrun=true)
public void cls2test1(){
}
#Test(groups={"cls2","other"}, dependsOnMethods="cls2test1", dependsOnGroups="cls1", alwaysrun=true)
public void cls2test2(){
}
There is an easy (whilst hacky) workaround for this if you are comfortable with your first approach:
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We had a similar problem: we needed our tests to run class-wise because we couldn't guarantee that the test classes would not interfere with each other.
This is what we did:
Put a
@Test(dependsOnGroups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
annotation on an abstract test class or interface that all your tests inherit from.
This will put all your test methods in the "first group" (group in the sense used in the question, not a TestNG group). Within each group, the ordering is class-wise.
Thanks to Cedric Beust, he provided a very quick answer for this.
Edit:
The group dummyGroupToMakeTestNGTreatThisAsDependentClass actually has to exist, but you can just add a dummy test case for that purpose.