Setting the TestNG group name at class level

The TestNG groups annotation attribute is declared at the test method level; that is, you can specify which group or groups a specific test method belongs to. For example:
@Test(groups = { "group1", "group2" }, priority = 10, enabled = true)
public void doTest() {}
For a given scenario where all test methods only belong to a single group, it almost seems like an overhead to specify the group name in each test method. Is there a way, in this case, to set the group name at a higher (class) level?

Yes, you can do it this way at class level:
@Test(groups = GROUP_EXAMPLE) // GROUP_EXAMPLE is a String constant, e.g. "group1"
public class ClassExample {

    @Test
    public void methodExample() {
    }
}
Every test method in the class then belongs to the class-level group without repeating it.
Hopefully it helped!

Related

What are the other ways to set the priority of test cases in Selenium TestNG, apart from the priority attribute?

Suppose there are 10 tests in a test class and I want to run them in a particular order. I can use the priority attribute to set the priority of test cases. Is there any other way to set the order of test cases?
Scenario: test classes have one or more @Test methods defined with priorities:
import org.testng.annotations.Test;

public class MyTests1 {

    @Test(priority = 1)
    public void test1() {
        System.out.println("test1 from " + getClass().getSimpleName() + " class");
    }

    @Test(priority = 2)
    public void test2() {
        System.out.println("test2 from " + getClass().getSimpleName() + " class");
    }
}
It also works this way, using dependsOnMethods:
@Test
public void Test1() {
}

@Test(dependsOnMethods = {"Test1"})
public void Test2() {
}

@Test(dependsOnMethods = {"Test2"})
public void Test3() {
}
TestNG uses priority to "suggest" an order of execution, based on the priority you give to the test. This is not the same as enforcing an order.
A strict way to establish an order for certain tests is to use test dependencies.
If TestA has priority=1 and TestB has priority=2, but A depends on B, then TestNG will run B first, ignoring the priority; otherwise A would fail.
A combination of the two practices will give you something similar to an "order of execution".
I would qualify what JeffC says: he is right that it's good practice to keep your tests as independent of each other as possible, but that advice really only holds for unit testing.
For example:
You might have a regression suite like:
@Test(priority = 2)
public void validateAddingMilkToShoppingCart() {
    putMilkInCart();
    validateMilkIsInCart();
}

@Test(priority = 1, dependsOnMethods = {"validateAddingMilkToShoppingCart"})
public void validateRemovingMilkToShoppingCart() {
    verifyMilkIsInCart();
    removeMilkFromCart();
    validateCartIsEmpty();
}
In this scenario, "validateRemovingMilkToShoppingCart" might have a higher priority because the sprint is working on emptying the shopping cart, or because it recently had a bug associated with it. But you should only run that test if you can put the milk in the cart in the first place; otherwise you'll spend time and resources running a test that you already know will fail based on a previous test. Plus, by doing this, your report will look cleaner, showing a Skip if the feature couldn't be tested because of a bug in a previous test.
Hope this answers your question.

which is not annotated with @Test or not included

I created two classes under the same package: one is called Preparations, the other is X. When I use dependsOnMethods to point to the test case in Preparations, I get an exception.
Class X:
@Test(enabled = true, dependsOnMethods = {"com.selenium.scripts.passkey.regression.delegateprofile.Preparations.TC_01"})
public void TC_01() {
    // something ...
}
Class Preparations:
@Test(enabled = true, description = "Preparation: create a new hotel.")
public void TC_01() { ... }
Here is the error:
com.selenium.scripts.passkey.regression.delegateprofile.DProfile.TC_01() is depending on method public void com.selenium.scripts.passkey.regression.delegateprofile.Preparations.TC_02(), which is not annotated with @Test or not included.
The method you depend on should also be included in the .xml file; a dependency on a method that isn't part of the run produces exactly this error.
The method on which your test method depends should be in the same class, not a different one. That keeps the dependency unambiguous.
As far as I know, dependsOnMethods only accepts a plain method name, not a fully qualified class + method name.
What you can try instead is the groups and dependsOnGroups attributes, as sketched below.
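For illustration, a minimal sketch of the groups-based approach, assuming both classes are included in the same testng.xml run (the class and method names come from the question; the group name is made up):
// Preparations.java
import org.testng.annotations.Test;

public class Preparations {
    // Put the setup test in a named group.
    @Test(groups = {"preparation"}, description = "Preparation: create a new hotel.")
    public void TC_01() { /* create the hotel ... */ }
}

// DProfile.java
import org.testng.annotations.Test;

public class DProfile {
    // Depend on the whole group instead of on a method in another class.
    @Test(dependsOnGroups = {"preparation"})
    public void TC_01() { /* profile checks ... */ }
}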

GoogleTest: Trying to get an abstract base class with tests and then use derived classes to define multiple test scenarios

In an attempt to do BDD-style testing of some code, I have a set of tests that I want to be performed for multiple scenarios. I have done this many times in C# with NUnit & NSubstitute, but I am struggling to achieve the desired result for C++ code with GoogleTest.
The concept of what I want to do, which does not even compile due to the pure virtual method in BaseTest, is:
class BaseTest : public ::testing::Test {
protected:
    int expected = 0;
    int actual = 0;
    virtual void SetUp() { printf("BaseTest SetUp()\r\n"); }
    virtual void TearDown() { printf("BaseTest TearDown()\r\n"); }
    virtual void PureVirtual() = 0;
};

TEST_F(BaseTest, BaseTest1)
{
    printf("BaseTest BaseTest1\r\n");
    ASSERT_EQ(expected, actual);
}

class ScenarioOne : public BaseTest {
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioOne SetUp()\r\n");
        actual = 20;
        expected = 20;
    }
    virtual void PureVirtual() {}
};

class ScenarioTwo : public BaseTest {
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioTwo SetUp()\r\n");
        actual = 98;
        expected = 98;
    }
    virtual void PureVirtual() {}
};
The above code is greatly simplified: the BaseTest class would have 30+ tests defined, and the scenario classes would have extensive and complicated input data to exercise the code being tested; the expected results would be sizeable and non-trivial. Hence the idea of defining the input data and expected results in a derived class's SetUp() method and stimulating the code under test with that input data. The tests in the base class would then check the various actual results against the expected results and pass/fail as appropriate.
I have considered trying to use parameterized tests, but due to the complex nature of the input data and expected results this looks difficult; plus, for each new test scenario, I believe it would mean modifying each of the tests to provide the input data and expected results as an additional parameter.
As I said earlier, I can do this sort of thing easily in C# but sadly I am working on a C++ project at this time. Is what I'm trying to do possible with GoogleTest?
OK - I've just thought of a potential solution.
Put all the tests in a header file like this:
// Tests.h - Tests to be performed for all test scenarios
TEST_F(SCENARIO_NAME, test1)
{
    ASSERT_EQ(expected, actual);
}
The BaseTest class would just have basic SetUp()/TearDown() methods, member variables to hold the expected and actual results, plus any helper functions for the derived scenario classes. It would contain no tests, so it could be abstract if wanted; a sketch follows.
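A minimal sketch of that stripped-down base class, assuming the same members as the original attempt:
class BaseTest : public ::testing::Test
{
protected:
    int expected = 0;
    int actual = 0;

    virtual void SetUp() { /* common setup for all scenarios */ }
    virtual void TearDown() { /* common teardown */ }

    // Helper functions for the derived scenario classes go here.
    // Deliberately no TEST_F in this class.
};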
Then for each scenario:
class ScenarioOne : public BaseTest
{
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioOne SetUp()\r\n");
        actual = 20;
        expected = 20;
    }
};

#define SCENARIO_NAME ScenarioOne
#include "Tests.h"
The resultant effect is a set of tests defined once which can then be applied to multiple test scenarios.
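For a second scenario in the same source file, the macro just needs redefining; a sketch, assuming Tests.h deliberately has no include guard:
class ScenarioTwo : public BaseTest
{
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioTwo SetUp()\r\n");
        actual = 98;
        expected = 98;
    }
};

// Redefine the fixture macro and pull the shared tests in again.
// TEST_F generates a distinct test name per fixture, so the two
// expansions of Tests.h do not collide.
#undef SCENARIO_NAME
#define SCENARIO_NAME ScenarioTwo
#include "Tests.h"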
It does seem like a bit of a cheat, so I'm interested to hear if anyone has a better way of doing it.

JMockit: @Mocked and MockUp combination in the same test

What I have to do:
I have to test my Spring MVC code with JMockit. I need to do two things:
1. Redefine the MyService.doService method
2. Check how many times the redefined MyService.doService method is called
What the problem is:
To cope with the first item, I should use MockUp; to cope with the second item, I should use @Mocked MyService. As I understand it, these two approaches override each other.
My questions:
How do I override the MyService.doService method and simultaneously check how many times it was invoked?
Is it possible to avoid mixing behaviour-based and state-based testing approaches in my case?
My code:
@WebAppConfiguration
@ContextConfiguration(locations = "classpath:ctx/persistenceContextTest.xml")
@RunWith(SpringJUnit4ClassRunner.class)
public class MyControllerTest extends AbstractContextControllerTests {

    private MockMvc mockMvc;

    @Autowired
    protected WebApplicationContext wac;

    @Mocked
    private MyServiceImpl myServiceMock;

    @BeforeClass
    public static void beforeClass() {
        new MockUp<MyServiceImpl>() {
            @SuppressWarnings("unused")
            @Mock
            public List<Object> doService() {
                return null;
            }
        };
    }

    @Before
    public void setUp() throws Exception {
        this.mockMvc = webAppContextSetup(this.wac).build();
    }

    @Test
    public void sendRedirect() throws Exception {
        mockMvc.perform(get("/doService.html"))
                .andExpect(model().attribute("positions", null));
        new Verifications() {
            {
                myServiceMock.doService();
                times = 1;
            }
        };
    }
}
I don't know what gave you the impression that you "should use" a MockUp for one thing while using @Mocked for another in the same test.
In fact, you can use either one of these two APIs, since they are both very capable. Normally, though, only one or the other is used in a given test (or test class), not both.
To verify how many invocations occurred to a given mocked method, you can use the invocations/minInvocations/maxInvocations attributes of the @Mock annotation when using a MockUp, or the times/minTimes/maxTimes fields when using @Mocked. Choose whichever one best satisfies your needs and testing style. For example tests, check out the JMockit documentation.
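To make that concrete, a minimal sketch of the MockUp-only variant, which both redefines doService and verifies its call count in one step (MyServiceImpl and doService come from the question; the rest is illustrative and assumes the JMockit 1.x API):
import java.util.List;

import mockit.Mock;
import mockit.MockUp;

public class MyControllerTest {

    @org.junit.Test
    public void sendRedirect() throws Exception {
        new MockUp<MyServiceImpl>() {
            // invocations = 1 makes JMockit fail the test unless
            // doService() is called exactly once.
            @Mock(invocations = 1)
            public List<Object> doService() {
                return null;
            }
        };
        // ... exercise the controller through MockMvc as in the question ...
    }
}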

Unit testing different class hierarchies

What would be the best approach to make unit tests that consider different class hierarchies, like:
I have a base class Car and another base class Animal.
Car has the derived classes Volkswagen and Ford.
Animal has the derived classes Dog and Cat.
How would you develop tests that decide at run-time what kind of object you are going to use?
What is the best approach to implementing these kinds of tests without code replication, considering that they will be applied to millions of objects from different hierarchies?
This was an interview question asked of a friend of mine.
Problem as I see it: avoid repeating common tests to validate N derivations of a common base type.
Create an abstract test fixture. Here you write the tests against the base type in an abstract base class (search term: 'abstract test fixture') with an abstract method GetTestSubject(). Derivations of this type override the method to return an instance of the type to be tested. So you'd need to write N subtypes, each with a single overridden method, but your tests would be written once.
Some unit testing frameworks, like NUnit, support 'parameterized tests' (search term), where you implement a method/property that returns all the objects the tests need to run against. The framework then runs one/all tests against each such object at run time. This way you don't need to write N derivations, just one method; see the JUnit sketch below.
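A minimal sketch of that second idea using JUnit 4's Parameterized runner; the Car, Volkswagen, and Ford classes anticipate the example in the next answer and are otherwise assumptions:
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class CarGoTest {

    // Each element of this collection becomes one run of every @Test method.
    @Parameters
    public static Collection<Object[]> cars() {
        return Arrays.asList(new Object[][] {
                { new Volkswagen() },
                { new Ford() }
        });
    }

    private final Car car;

    public CarGoTest(Car car) {
        this.car = car;
    }

    @Test
    public void testVroom() {
        car.go();
        assertThat(car.getEngineNoise(), is("vroom"));
    }
}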
Here is one approach that I've used before (well, a variant of this).
Let's assume that you have some sort of common method (go) on Car that you want to test for all classes, and some specific method (breakDown) that has different behavior in the subclass, thus:
public class Car {
    protected String engineNoise = null;

    public void go() {
        engineNoise = "vroom";
    }

    public void breakDown() {
        engineNoise = null;
    }

    public String getEngineNoise() {
        return engineNoise;
    }
}

public class Volkswagen extends Car {
    public void breakDown() {
        throw new UnsupportedOperationException();
    }
}
Then you could define a test as follows:
public abstract class CarTest<T extends Car> {
    T car;

    @Before
    public void setUp() {
        car = createCar();
    }

    @Test
    public void testVroom() {
        car.go();
        assertThat(car.getEngineNoise(), is("vroom"));
    }

    @Test
    public void testBreakDown() {
        car.breakDown();
        assertThat(car.getEngineNoise(), is(nullValue()));
    }

    protected abstract T createCar();
}
Now, since Volkswagen needs to do something different in the testBreakDown method -- and may possibly have other methods that need testing -- you could use the following VolkswagenTest, which overrides it.
public class VolkswagenTest extends CarTest<Volkswagen> {

    // Overrides the base test: for a Volkswagen, breakDown() should throw.
    @Test(expected = UnsupportedOperationException.class)
    public void testBreakDown() {
        car.breakDown();
    }

    @Override
    protected Volkswagen createCar() {
        return new Volkswagen();
    }
}
Hope that helps!
Actually, a unit test is a method-level test: when you want to write a unit test, you should think about the functionality of the method you want to write and test, and then create the class(es) and method(s) for testing it. If you keep this approach in mind while designing and writing your code, the result may be class hierarchies, single classes, or any other design.
But when you have to use an existing design like the one mentioned above, the best practice is to depend on interfaces or base classes for dependency objects, because that way you can mock or stub those classes easily.
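As a small illustration of that last point, a sketch where the Engine interface and all names are made up for this example:
// Depending on an interface lets a test substitute a stub or mock.
interface Engine {
    String start();
}

class Car {
    private final Engine engine;

    Car(Engine engine) { // the dependency is injected ...
        this.engine = engine;
    }

    String go() {
        return engine.start();
    }
}

class CarTest {
    @org.junit.Test
    public void goStartsTheEngine() {
        Engine stub = () -> "vroom"; // ... so a simple lambda can stand in for it
        org.junit.Assert.assertEquals("vroom", new Car(stub).go());
    }
}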