Spock extension's start method invoked multiple times - testing

I have a bunch of functional tests based on Spock and Geb. I want to perform some actions before and after the execution of these tests, so I created a global extension and added the required functionality to its start() and stop() methods. But the problem is that the start/stop methods are invoked before/after each Spock spec, even though the Spock documentation (http://spockframework.org/spock/docs/1.1/all_in_one.html#_global_extensions) states:
start() This is called once at the very start of the Spock execution
stop() This is called once at the very end of the Spock execution
Am I doing something wrong, or is the Spock documentation incorrect about the behaviour of these methods?
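For reference, a global extension of the kind described above looks roughly like this (a minimal Java sketch; the class name is made up). It is registered by listing the class in a META-INF/services/org.spockframework.runtime.extension.IGlobalExtension file:
import org.spockframework.runtime.extension.AbstractGlobalExtension;

// Hypothetical extension that boots and tears down shared test infrastructure
public class FunctionalTestEnvironmentExtension extends AbstractGlobalExtension {

    @Override
    public void start() {
        // expected to run once before the whole Spock run
    }

    @Override
    public void stop() {
        // expected to run once after the whole Spock run
    }
}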

@MantasG Spock implements a JUnit Runner and does not control how it is executed. Global extensions are managed in a RunContext, which is kept in a ThreadLocal. If Surefire uses multiple threads to execute tests, this will create multiple instances of RunContext, each with its own list of global extensions. If you are using an EmbeddedSpecRunner, this would also create a new isolated context.
This context will stay around until the thread dies. It would be more accurate to remove the context once the test run has finished, but the JUnit Runner SPI doesn't provide an adequate hook. That said, since most environments fork a new JVM for each test run, this shouldn't be much of a problem in practice.
Depending on what you want to do, there are other options:
1. You can use a JUnit RunListener and its testRunStarted/testRunFinished hooks. Note that you need to register it via Surefire.
2. If you really want to run something only once, you could use Failsafe instead of Surefire and use the pre-integration-test and post-integration-test phases.
3. You could hack something together using a static field as a counter for the start/stop calls: perform your start action when the counter is 0, and perform your stop action once it drops back to 0. Of course, you'll need to make this thread safe (see the sketch after this list).
Note that Surefire also supports forking multiple JVMs, which will also affect options 1 and 3.
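For option 3, a minimal sketch could look like this (the class name is made up, and it assumes all specs run in the same JVM):
import java.util.concurrent.atomic.AtomicInteger;

import org.spockframework.runtime.extension.AbstractGlobalExtension;

// Hypothetical extension: guards the expensive start/stop actions with a shared counter
public class OncePerJvmExtension extends AbstractGlobalExtension {

    private static final AtomicInteger ACTIVE_CONTEXTS = new AtomicInteger();

    @Override
    public void start() {
        if (ACTIVE_CONTEXTS.getAndIncrement() == 0) {
            // perform the real start action exactly once per JVM
        }
    }

    @Override
    public void stop() {
        if (ACTIVE_CONTEXTS.decrementAndGet() == 0) {
            // perform the real stop action once the last context has finished
        }
    }
}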

I believe Spock is invoked at the start of every test spec in your suite, so start and stop are run with every one of those executions.
I think you might want to take a look at the fixture methods (setupSpec/cleanupSpec) described in the same doc you linked in the question:
http://spockframework.org/spock/docs/1.1/all_in_one.html#_specification

Related

Using DynamicNode and need a lifecycle hook to run after all tests have completed

I'm using DynamicNode very successfully in a framework that dynamically generates tests and executes them.
Now I have a need to execute some code after all DynamicNode collections have executed. This can mean that I have a single JUnit5 class with multiple methods that return Iterable<DynamicNode>, but I want to run something only after all the test methods have completed.
Is there a way to do this automatically?
EDIT: ideally I would like my framework to inject the code to be executed automatically, without the user needing to add an @AfterAll annotation on a method and write some extra code.
Each method that is annotated with @TestFactory takes part in the default lifecycle. That means in your case an @AfterAll annotated method should do the trick.
@AfterAll
Denotes that the annotated method should be executed after all @Test, @RepeatedTest, @ParameterizedTest, and @TestFactory methods in the current class; analogous to JUnit 4's @AfterClass. Such methods are inherited (unless they are hidden or overridden) and must be static (unless the "per-class" test instance lifecycle is used).
Copied from https://junit.org/junit5/docs/current/user-guide/#writing-tests-annotations
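Put together, a minimal sketch of a dynamic-test class with such an @AfterAll hook could look like this (class and method names are made up):
import java.util.Arrays;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.DynamicNode;
import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;

class GeneratedTests {

    @TestFactory
    Iterable<DynamicNode> firstBatch() {
        return Arrays.asList(DynamicTest.dynamicTest("first", () -> {}));
    }

    @TestFactory
    Iterable<DynamicNode> secondBatch() {
        return Arrays.asList(DynamicTest.dynamicTest("second", () -> {}));
    }

    @AfterAll
    static void afterAllDynamicTests() {
        // runs once, after every dynamic test from both factories has executed
    }
}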

Run database once per Spek suite

Some tests require a running database, for instance via the Testcontainers library. It obviously takes time to boot it up.
Is there a way to do this only once per entire Spek suite, which spans multiple files? The docs don't say anything about this.
Does anyone know why this has not been implemented?
This answer is not Spek-specific, but Testcontainers objects expose a simple start() and stop() method, meaning that you don't have to rely on the test framework to control your container lifecycle if you don't want to. You can create a container in a static object that is separate from your test classes, and then access it across all tests if you like.
Please see an example here (Java example snippet below):
import org.testcontainers.containers.GenericContainer;

// Class name is just an example; started once per JVM and shared by every test
public abstract class SharedContainers {
    public static final GenericContainer REDIS =
            new GenericContainer("redis:3-alpine").withExposedPorts(6379);

    static { REDIS.start(); }
}
I would imagine an equivalent in Kotlin should be quite easy as an object (or similar).
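Any test class can then reference the shared instance directly, for example (a hypothetical usage sketch, based on the SharedContainers class above):
// Hypothetical usage from any test; the container is already running at this point
String host = SharedContainers.REDIS.getContainerIpAddress();
int port = SharedContainers.REDIS.getMappedPort(6379);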

gtest - why does one test affect behavior of other?

Currently I have a gtest suite with a test fixture class that has some member variables and functions.
I have a simple test, as well as more complex tests later on. If I comment out the complex tests, my simple test runs perfectly fine. However, when I include the other tests (even though I'm using gtest_filter to run only the first test), I start getting segfaults. I know it's impossible to debug without posting my code, but I wanted to understand at a high level how this could occur. My understanding is that TEST_F constructs/destructs a new object every time it is run, so how can the mere existence of one test affect another? Especially if I'm filtering, shouldn't the behavior be exactly the same?
TEST_F does construct and destruct a fresh instance of the test fixture class for every test: the constructor runs, then SetUp(), then the test body, then TearDown(), then the destructor. Only the static SetUpTestCase()/TearDownTestCase() hooks are called once per fixture class.
So the fixture object itself is not shared between tests. If merely compiling in the other tests changes behaviour even though gtest_filter excludes them, the usual suspects are global/static state that those tests (or their static initializers) touch, or undefined behaviour such as memory corruption that only surfaces when the binary changes.
But because you did not provide an MCVE, we cannot say more.

How to intercept JUnit5 method annotated with @Disabled?

I would like to write a JUnit 5 extension where I have to take some action when a test method annotated with @Disabled is found. Unfortunately, beforeTestExecution() is not called for such methods. Does anybody have an idea how to intercept such @Disabled test methods?
Thanks!
As described in the User Guide, you can disable the built-in ExecutionCondition that handles @Disabled by default by setting junit.jupiter.conditions.deactivate to org.junit.*DisabledCondition (see Configuration Parameters on how to set it). This will cause your tests to be executed.
Next, you need to implement your own ExecutionCondition extension, check for @Disabled, take your action, and return ConditionEvaluationResult.disabled("...").
In order to avoid having to register your extension on each test class, you can activate Automatic Extension Registration and register your extension globally.
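A minimal sketch of such a condition could look like this (the class name is made up; it assumes the built-in DisabledCondition has been deactivated as described above). It can be applied to all tests by enabling junit.jupiter.extensions.autodetection.enabled and listing the class in META-INF/services/org.junit.jupiter.api.extension.Extension:
import java.util.Optional;

import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.extension.ConditionEvaluationResult;
import org.junit.jupiter.api.extension.ExecutionCondition;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.platform.commons.support.AnnotationSupport;

// Hypothetical condition that reacts to @Disabled before disabling the element itself
public class InterceptingDisabledCondition implements ExecutionCondition {

    @Override
    public ConditionEvaluationResult evaluateExecutionCondition(ExtensionContext context) {
        Optional<Disabled> disabled =
                AnnotationSupport.findAnnotation(context.getElement(), Disabled.class);
        if (disabled.isPresent()) {
            // take your custom action here, e.g. record or report the disabled test
            System.out.println("Found @Disabled on " + context.getDisplayName());
            return ConditionEvaluationResult.disabled(disabled.get().value());
        }
        return ConditionEvaluationResult.enabled("@Disabled is not present");
    }
}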
Depending on what you want to achieve, it may be easier to register your own TestExecutionListener (see Plugging in Your Own Test Execution Listeners) and implement executionSkipped().
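For that alternative, a minimal sketch could look like this (the class name is made up); it is picked up when listed in a META-INF/services/org.junit.platform.launcher.TestExecutionListener file:
import org.junit.platform.launcher.TestExecutionListener;
import org.junit.platform.launcher.TestIdentifier;

// Hypothetical listener that reacts whenever a test or container is skipped,
// which is what happens for @Disabled methods by default
public class DisabledReportingListener implements TestExecutionListener {

    @Override
    public void executionSkipped(TestIdentifier testIdentifier, String reason) {
        System.out.println("Skipped " + testIdentifier.getDisplayName() + ": " + reason);
    }
}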
The extension point used here is https://github.com/junit-team/junit5/blob/master/junit-jupiter-api/src/main/java/org/junit/jupiter/api/extension/ExecutionCondition.java
It might be interesting to find out how several implementors compose, i.e. if the 2nd one will be called at all and under what circumstances. You’ll probably have to dive into Jupiter code or do some experiments.

How to use TestNg in Selenium WebDriver?

How do I use TestNG with Selenium WebDriver? Can someone explain what it is used for?
I am a new learner of Selenium WebDriver.
TestNG can be described as follows:
1. TestNG is a testing framework designed to simplify a broad range of testing needs, from unit testing (testing a class in isolation from the others) to integration testing (testing entire systems made of several classes, several packages and even several external frameworks, such as application servers).
2. For the official TestNG documentation, please click here.
Before you can use TestNG with Selenium you have to install it first. Assuming that you are working with Eclipse (any version):
1. There are various ways to install TestNG: either follow this or this, or simply go to Help/Eclipse Marketplace, type TestNG under Find, and click Install.
Now, here is how to use TestNG in Eclipse with Selenium:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

public class SampleTest {

    private WebDriver driver;

    @BeforeTest
    public void setUp() {
        // preconditions for the sample test, e.g. start a browser
        // (Firefox is just an example) and open a specific URL
        driver = new FirefoxDriver();
        driver.get("https://www.example.com");
    }

    @Test
    public void sampleTest() {
        // code for the main test case goes here
    }

    @AfterTest
    public void tearDown() {
        // things to do after the test has run, e.g. close the browser
        driver.quit();
    }
}
Some information about the above code:
TestNG has various annotations; for more info on annotations, go to the link above.
@BeforeSuite: The annotated method will be run before all tests in this suite have run.
@AfterSuite: The annotated method will be run after all tests in this suite have run.
@BeforeTest: The annotated method will be run before any test method belonging to the classes inside the <test> tag is run.
@AfterTest: The annotated method will be run after all the test methods belonging to the classes inside the <test> tag have run.
@BeforeGroups: The list of groups that this configuration method will run before. This method is guaranteed to run shortly before the first test method that belongs to any of these groups is invoked.
@AfterGroups: The list of groups that this configuration method will run after. This method is guaranteed to run shortly after the last test method that belongs to any of these groups is invoked.
@BeforeClass: The annotated method will be run before the first test method in the current class is invoked.
@AfterClass: The annotated method will be run after all the test methods in the current class have been run.
@BeforeMethod: The annotated method will be run before each test method.
@AfterMethod: The annotated method will be run after each test method.
One of the primary uses of Selenium is to test UI functionality, and as a testing framework TestNG has many techniques for running and reporting tests, so it can be leveraged for UI testing with Selenium. One tool that uses this effectively is SeLion (https://github.com/paypal/selion).