GoogleTest: Trying to get an abstract base class with tests and then use derived classes to define multiple test scenarios

In an attempt to do BDD-style testing of some code, I have a set of tests which I want to be performed for multiple scenarios. I have done this many times in C# with NUnit & NSubstitute, but I am struggling to achieve the desired result for C++ code with GoogleTest.
The concept of what I want to do - which does not even compile, due to the pure virtual method in BaseTest - is:
class BaseTest : public ::testing::Test {
protected:
    int expected = 0;
    int actual = 0;

    virtual void SetUp() { printf("BaseTest SetUp()\r\n"); }
    virtual void TearDown() { printf("BaseTest TearDown()\r\n"); }
    virtual void PureVirtual() = 0;
};

TEST_F(BaseTest, BaseTest1)
{
    printf("BaseTest BaseTest1\r\n");
    ASSERT_EQ(expected, actual);
}

class ScenarioOne : public BaseTest {
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioOne SetUp()\r\n");
        actual = 20;
        expected = 20;
    }
    virtual void PureVirtual() {}
};

class ScenarioTwo : public BaseTest {
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioTwo SetUp()\r\n");
        actual = 98;
        expected = 98;
    }
    virtual void PureVirtual() {}
};
The above code is greatly simplified: the BaseTest class would have 30+ tests defined, the Scenario classes would have extensive and complicated input data to exercise the code being tested, and the expected results would be sizeable and non-trivial. Hence the idea of defining the input data and expected results in a derived class's SetUp() method and stimulating the code under test with that input data. The tests in the base class would then compare the various actual results against the expected results and pass/fail as appropriate.
I have considered trying to use parameterized tests, but due to the complex nature of the input data and expected results this looks difficult; plus, for each new test scenario, I believe it would mean modifying each of the tests to provide the input data and expected results as an additional parameter.
As I said earlier, I can do this sort of thing easily in C# but sadly I am working on a C++ project at this time. Is what I'm trying to do possible with GoogleTest?

OK - I've just thought of a potential solution.
Put all the tests in a header file like this:
// Tests.h - Tests to be performed for all test scenarios
TEST_F(SCENARIO_NAME, test1)
{
    ASSERT_EQ(expected, actual);
}
The BaseTest class would just have basic SetUp()/TearDown() methods, member variables to hold the expected and actual results, plus any helper functions for the derived scenario classes - but no tests, so it could be abstract if wanted.
Then for each scenario:
class ScenarioOne : public BaseTest
{
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioOne SetUp()\r\n");
        actual = 20;
        expected = 20;
    }
};

#define SCENARIO_NAME ScenarioOne
#include "Tests.h"
The resultant effect is a set of tests defined once which can then be applied to multiple test scenarios. (For this to work, Tests.h must not have an include guard, and SCENARIO_NAME must be #undef'd before being redefined for each subsequent scenario.)
It does seem like a bit of a cheat, so I'm interested to hear if anyone has a better way of doing it.

Related

How to programmatically register extensions in JUnit 5

Say, a test needs a parameter that is only known when the tests are about to run.
@ExtendWith(MyParameterExtension.class)
public class Test {
    protected final MyParameter p;

    public Test(MyParameter p) { this.p = p; }

    @Test
    public void test() { assertSuccess(TestedCode.doComplexThing(p)); }
}
Only before the tests are executed can the specific contents of the MyParameter instance be determined. So I can have a resolver extension that simply supplies that parameter value where needed:
class MyParameterExtension implements ParameterResolver {
    private final MyParameter myParameter;

    public MyParameterExtension(MyParameter p) {
        myParameter = p;
    }

    @Override
    public boolean supportsParameter(ParameterContext parameterContext, ExtensionContext extensionContext) {
        return (parameterContext.getParameter().getType() == MyParameter.class);
    }

    @Override
    public MyParameter resolveParameter(ParameterContext parameterContext, ExtensionContext extensionContext) {
        return myParameter;
    }
}
I run the tests by starting JUnit 5 from my own code. That's when I can determine what the corresponding parameter values are. Let's say these parameters drive the behavior of the tests, and a user can specify (e.g., over a CLI) the values that a run should use.
How do I register the extension with the test run, as I'm about to commence it?
void launchSuite(List<DiscoverySelector> selectors, Object something) {
    // The input to this are all the necessary selectors.
    LauncherDiscoveryRequest ldr = LauncherDiscoveryRequestBuilder.request()
            .selectors(selectors).build();
    Launcher launcher = LauncherFactory.create();
    TestPlan plan = launcher.discover(ldr);

    MyParameter myParameter = new MyParameter(something);
    MyParameterExtension ext = new MyParameterExtension(myParameter);
    // $TODO: how do I register my extension with the test run
    // before starting it?
    launcher.execute(plan);
}
Auto-registering extensions doesn't help me (how would that process know the value of MyParameter?).
Using @RegisterExtension in the test code doesn't help me (a static block in the test code won't know the proper input for constructing instances of MyParameter).
Looking at the mechanics of launching the test, I don't see anything that lets me register those extensions in advance.
I considered using a ThreadLocal field in an extension registered statically, but AFAIU this won't (reliably) work because JUnit may create its own threads, at least in certain cases.
I considered sticking the value of MyParameter in the "extension context", but I don't see a way to grab hold of that before the test execution starts either. The root context is created in JupiterEngineDescriptor, which is, if nothing else, all internal API.
The obvious solution is to stick the parameter in a static field somewhere, but that would preclude me from running tests with different parameters in parallel, unless I resort to loading tests into isolated class loaders, which sounds too cumbersome for something that I believe should be simpler. After all, all of the contexts of a test run are otherwise fully isolated.
What I'm ultimately trying to do, in the end, is to make something like this possible:
// ...
new Thread(()->launchSuite(selectors, "assume Earth gravity")).start();
new Thread(()->launchSuite(selectors, "assume Mars gravity")).start();
So what are the reasonable ways to wire something like this together?
Let's start with the one thing that does not work: Using the launcher API.
The launcher API is a platform feature, whereas extensions are Jupiter-related. That's why there is no mechanism to register an extension in the API.
What should work, though, is @RegisterExtension - although you claim it would not. As the documentation shows, it is not restricted to static fields. Therefore, whatever you do here:
MyParameter myParameter = new MyParameter(something);
MyParameterExtension ext = new MyParameterExtension(myParameter);
could be done in a static method to instantiate an extension during runtime:
public class Test {
    private static MyParameterExtension createExtension() {
        MyParameter myParameter = new MyParameter(something);
        return new MyParameterExtension(myParameter);
    }

    // note: JUnit requires @RegisterExtension fields to be non-private
    @RegisterExtension
    MyParameterExtension my = createExtension();

    @Test
    public void test(MyParameter p) {
        assertSuccess(TestedCode.doComplexThing(p));
    }
}
If that doesn't work in your case, some information is missing from your problem statement IMO.
Update
If your extension creation code requires parameters that can only be determined at launch time, you have the option of adding configuration parameters to the discovery request:
LauncherDiscoveryRequest ldr = LauncherDiscoveryRequestBuilder.request()
        .configurationParameter("selectors", "assume Earth gravity")
        .selectors(selectors).build();
This parameter can then be retrieved within the extension:
class MyParameterExtension implements ParameterResolver {
    ...

    @Override
    public MyParameter resolveParameter(ParameterContext parameterContext, ExtensionContext extensionContext) {
        var selectors = extensionContext.getConfigurationParameter("selectors").orElse("");
        return new MyParameter(selectors);
    }
}
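Tying this back to the question's launchSuite and the parallel-launch goal, here is a minimal sketch (my own wiring, with "gravity" as a hypothetical configuration key) of passing a different value per launch:
void launchSuite(List<DiscoverySelector> selectors, String gravity) {
    // each launch carries its own configuration parameter, so two runs
    // started on different threads don't interfere with each other
    LauncherDiscoveryRequest ldr = LauncherDiscoveryRequestBuilder.request()
            .configurationParameter("gravity", gravity)
            .selectors(selectors)
            .build();
    LauncherFactory.create().execute(ldr);
}

// usage, as in the question:
new Thread(() -> launchSuite(selectors, "assume Earth gravity")).start();
new Thread(() -> launchSuite(selectors, "assume Mars gravity")).start();
The extension would then read the value via extensionContext.getConfigurationParameter("gravity").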

Running DropwizardAppRule before each test in a class using JUnit

I have a test class that has several tests. At the moment I have this to start up the server, wipe the database etc:
@ClassRule
public static final DropwizardAppRule<ServiceConfig> RULE =
        new DropwizardAppRule<ServiceConfig>(ServiceApp.class, ResourceHelpers.resourceFilePath("config.yml"));
All my tests work with this individually. But when I run them all together, some fail since other tests modify data. I tried doing the following, but I'm getting null pointers when calling RULE.getPort():
@ClassRule
public static DropwizardAppRule<ServiceConfig> RULE;

@Before
public void beforeClass() {
    RULE = new DropwizardAppRule<ServiceConfig>(ServiceApp.class, ResourceHelpers.resourceFilePath("config.yml"));
}
I would have expected this to work but it doesn't seem to set the values of RULE properly. Any ideas?
Hi,
I don't know how to handle the DB "from within" DropwizardAppRule, so I may not really answer your question... I'm actually having another issue myself with DropwizardAppRule not being properly set up and torn down between tests. (So if you've made progress going this way, I'd like your insights.)
Anyway, I think you need to handle your DB outside DropwizardAppRule and pass it into the Rule. We resolved DB clearing by relying on custom and external TestRules:
public class CockpitApplicationRule implements TestRule {

    public static class App extends CockpitApplication<CockpitConfiguration> {
        // only needed because of generics
    }

    public final DropwizardAppRule<CockpitConfiguration> dw;
    public final EmbeddedDatabaseRule db;

    public CockpitApplicationRule(String config, ConfigOverride... configOverrides) {
        this.db = EmbeddedDatabaseRule.builder()
                .initializedByPlugin(LiquibaseInitializer.builder().withChangelogResource("migrations.xml").build())
                .build();
        this.dw = new DropwizardAppRule<>(App.class, ResourceHelpers.resourceFilePath(config),
                ConfigOverride.config("database.url", () -> this.db.getConnectionJdbcUrl()));
    }

    @Override
    @Nullable
    public Statement apply(@Nullable Statement base, @Nullable Description description) {
        assert base != null;
        assert description != null;
        return RulesHelper.chain(base, description, dw, RulesHelper.dropDbAfter(db), db);
    }

    public DSLContext db() {
        return DSL.using(db.getConnectionJdbcUrl());
    }
}
Basically we override TestRule.apply(...) to chain custom Statements. There's our RulesHelper if you want to take a look. That way the DB is cleanly handled by the Rules, and we can fill our test DB in test classes using @Before setup methods.
org.zapodot.junit.db.EmbeddedDatabaseRule is an external dependency that allows us to rather easily instantiate a DB for our tests.
The RulesHelper.dropDbAfter does the actual cleaning:
public static TestRule dropDbAfter(EmbeddedDatabaseRule db) {
    return after(() -> DSL.using(db.getConnectionJdbcUrl()).execute("DROP ALL OBJECTS"));
}
You should be able to set up and clean the DB from @Before and @After methods without fully using TestRules, though I'm not sure it's really easier in the end.
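For illustration, a minimal sketch of that @Before/@After alternative (my own; the positions table is hypothetical), reusing the EmbeddedDatabaseRule and jOOQ pieces from above:
public class SomeResourceTest {
    // the embedded DB still starts once per class; only its contents are reset per test
    @ClassRule
    public static final EmbeddedDatabaseRule DB = EmbeddedDatabaseRule.builder().build();

    @Before
    public void fillDb() {
        // load whatever fixtures this test class needs
        DSL.using(DB.getConnectionJdbcUrl()).execute("INSERT INTO positions VALUES (1, 'x')");
    }

    @After
    public void cleanDb() {
        // wipe rows (not the schema) so the next test starts clean
        DSL.using(DB.getConnectionJdbcUrl()).execute("DELETE FROM positions");
    }
}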
Hope this helped!

How to disable static initializer?

Assuming my system under test looks like this:
public class SysUnderTest {
    public int foo() {
        Trouble trouble1 = new Trouble();
        Trouble trouble2 = new Trouble();
        return trouble1.water(1) + trouble2.water(2);
    }
}
The test will look something like:
public class DummyTest {
    @Tested SysUnderTest sut;
    @Mocked Trouble trouble;

    @Test
    public void testTrouble() {
        new Expectations() {{
            trouble.water(anyInt); returns(10, 20);
        }};

        assertThat("mocked result", sut.foo(), is(30));

        new FullVerificationsInOrder() {{
            Trouble t1 = new Trouble();
            Trouble t2 = new Trouble();
            t1.water(1);
            t2.water(2);
        }};
    }
}
However, Trouble is actually a 3rd-party library class that I have no control over, and it does static initialization which will fail in the testing environment:
public class Trouble {
    static {
        troubleInitialize();
    }

    public int water(int i) {
        return 0;
    }

    private static void troubleInitialize() {
        throw new RuntimeException("Trouble");
    }
}
I know I can use MockUp<Trouble> to get rid of the static initializer, but I have no idea how to make use of it in this case, as I want (in my realistic case) to be able to distinguish the two new instances (created in SysUnderTest) and verify their invocations. I have tried different ways, but all failed for various reasons:
1. Adding a new MockUp<Trouble>(){ @Mock void $clinit(){} }; in @Before/@BeforeClass, and keeping @Mocked Trouble trouble;. This seems not to work because the mockup happens after the DummyTest class is loaded, which loads the (unmodified) Trouble class, which throws an exception during static initialization.
2. Adding the new MockUp in a TestSuite and calling DummyTest from the suite; similar problem as 1.
3. Simply putting the behavior of returning 10, 20 in the fake class and removing the usage of Expectations/Verifications, but then I have no way to verify which instance is called with what parameter.
Is there a better way to solve my problem? Actually, I would want to keep using Expectations/Verifications; all I want is some way to disable the static initializer during unit tests.
Use stubOutClassInitialization to replace the mocked class's static initializer with an empty method when using @Mocked:
@Mocked(stubOutClassInitialization = true) Trouble trouble;
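In the test above, that just means changing the mocked field declaration; the Expectations and FullVerificationsInOrder blocks can stay as they are. A sketch, assuming a JMockit version in which this attribute is available:
public class DummyTest {
    @Tested SysUnderTest sut;

    // Trouble's static initializer is replaced with an empty method,
    // so loading the class no longer throws in the test environment
    @Mocked(stubOutClassInitialization = true) Trouble trouble;

    // ... testTrouble() unchanged from the question ...
}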

JMockit: @Mocked and MockUp combination in the same test

What I have to do:
I have to test my Spring MVC application with JMockit. I need to do two things:
Redefine MyService.doService method
Check how many times redefined MyService.doService method is called
What the problem is:
To cope with the first item, I should use MockUp; to cope with the second, I should use @Mocked MyService. As I understand it, these two approaches override each other.
My questions:
How can I override the MyService.doService method and simultaneously check how many times it was invoked?
Is it possible to avoid mixing behaviour- and state-based testing approaches in my case?
My code:
@WebAppConfiguration
@ContextConfiguration(locations = "classpath:ctx/persistenceContextTest.xml")
@RunWith(SpringJUnit4ClassRunner.class)
public class MyControllerTest extends AbstractContextControllerTests {

    private MockMvc mockMvc;

    @Autowired
    protected WebApplicationContext wac;

    @Mocked
    private MyServiceImpl myServiceMock;

    @BeforeClass
    public static void beforeClass() {
        new MockUp<MyServiceImpl>() {
            @SuppressWarnings("unused")
            @Mock
            public List<Object> doService() {
                return null;
            }
        };
    }

    @Before
    public void setUp() throws Exception {
        this.mockMvc = webAppContextSetup(this.wac).build();
    }

    @Test
    public void sendRedirect() throws Exception {
        mockMvc.perform(get("/doService.html"))
                .andExpect(model().attribute("positions", null));

        new Verifications() {
            {
                myServiceMock.doService();
                times = 1;
            }
        };
    }
}
I don't know what gave you the impression that you "should use" MockUp for one thing while using @Mocked for another in the same test.
In fact, you can use either one of these two APIs, since they are both very capable. Normally, though, only one or the other is used in a given test (or test class), not both.
To verify how many invocations occurred on a given mocked method, you can use the invocations/minInvocations/maxInvocations attributes of the @Mock annotation when using a MockUp, or the times/minTimes/maxTimes fields when using @Mocked. Choose whichever one best satisfies your needs and testing style. For example tests, check out the JMockit documentation.
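For instance, a minimal sketch (my own, based on the attributes named above) of the MockUp-based counting style applied to MyServiceImpl:
new MockUp<MyServiceImpl>() {
    // the test fails unless doService() is invoked exactly once
    @Mock(invocations = 1)
    public List<Object> doService() {
        return null;
    }
};
With @Mocked, the equivalent is the times = 1 assignment already shown in the question's Verifications block.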

Unit testing different class hierarchies

What would be the best approach to writing unit tests that cover different class hierarchies, like the following:
I have a base class Car and another base class Animal.
Car has the derived classes VolksWagen and Ford.
Animal has the derived classes Dog and Cat.
How would you develop tests that decide at run-time what kind of object you are going to use?
What is the best approach to implementing these kinds of tests without code duplication, considering that these tests will be applied to millions of objects from different hierarchies?
This was an interview question asked of a friend of mine.
The problem as I see it: avoid repeating common tests to validate N derivations of a common base type.
Create an abstract test fixture. Here you write the tests against the base type in an abstract base class (search term: 'abstract test fixture') with an abstract method GetTestSubject(). Derivations of this type override the method to return an instance of the type to be tested. So you'd need to write N subtypes, each with a single overridden method, but your tests would be written once.
Some unit testing frameworks like NUnit support 'parameterized tests' (search term), where you implement a method/property that returns all the objects the tests need to run against. The framework then runs one/all tests against each such object at run time. This way you don't need to write N derivations - just one method (see the sketch after this paragraph).
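For the second approach, here is a minimal JUnit 4 sketch (my own; NUnit is the framework actually named above), reusing the Car and Volkswagen types from the answer below:
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class CarGoTest {
    @Parameters
    public static Collection<Object[]> cars() {
        // every test in this class runs once per object returned here
        return Arrays.asList(new Object[][] { { new Car() }, { new Volkswagen() } });
    }

    private final Car car;

    public CarGoTest(Car car) {
        this.car = car;
    }

    @Test
    public void testVroom() {
        car.go();
        assertEquals("vroom", car.getEngineNoise());
    }
}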
Here is one approach that I've used before (well, a variant of this).
Let's assume that you have some sort of common method (go) on Car that you want to test for all subclasses, and some specific method (breakDown) that has different behavior in a subclass, thus:
public class Car {
    protected String engineNoise = null;

    public void go() {
        engineNoise = "vroom";
    }

    public void breakDown() {
        engineNoise = null;
    }

    public String getEngineNoise() {
        return engineNoise;
    }
}

public class Volkswagen extends Car {
    public void breakDown() {
        throw new UnsupportedOperationException();
    }
}
Then you could define a test as follows:
public abstract class CarTest<T extends Car> {
    T car;

    @Before
    public void setUp() {
        car = createCar();
    }

    @Test
    public void testVroom() {
        car.go();
        assertThat( car.getEngineNoise(), is( "vroom" ) );
    }

    @Test
    public void testBreakDown() {
        car.breakDown();
        // is( null ) is ambiguous to the compiler, so use nullValue()
        assertThat( car.getEngineNoise(), is( nullValue() ) );
    }

    protected abstract T createCar();
}
Now, since Volkswagen needs to do something different in the testBreakDown method - and may possibly have other methods that need testing - you could use the following VolkswagenTest. (Note that the method name must match the base class's testBreakDown exactly, so that it overrides the base test instead of running alongside it.)
public class VolkswagenTest extends CarTest<Volkswagen> {
    @Override
    @Test(expected = UnsupportedOperationException.class)
    public void testBreakDown() {
        car.breakDown();
    }

    protected Volkswagen createCar() {
        return new Volkswagen();
    }
}
Hope that helps!
Actually, "unit test" refers to testing a single method: when you want to write a unit test, you should think about the functionality of the method you want to write and test, and then create the class(es) and method(s) for testing that. If you take this approach while designing and writing your code, you may end up with hierarchies of classes, a single class, or any other kind of design.
But when you have to work with an existing design like the one mentioned above, the best practice is to use interfaces or base classes for dependency objects, because that way you can mock or stub those classes easily (see the sketch below).
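To illustrate that last point, a small sketch (my own; the Engine interface and the EngineCar names are hypothetical) of putting a dependency behind an interface so a test can substitute a stub:
// the car depends on the Engine abstraction, not on a concrete engine
interface Engine {
    String start();
}

class EngineCar {
    private final Engine engine;
    EngineCar(Engine engine) { this.engine = engine; }
    String go() { return engine.start(); }
}

public class EngineCarTest {
    @org.junit.Test
    public void goDelegatesToEngine() {
        // Engine has a single method, so a lambda serves as a stub
        EngineCar car = new EngineCar(() -> "vroom");
        org.junit.Assert.assertEquals("vroom", car.go());
    }
}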