TestNG "could not instantiate class" happens randomly - Selenium

Is there any way to get more debugging information? It doesn't always happen, but it almost always happens if I immediately rerun the tests. I know it's not a classpath issue because the tests do run most of the time.

If you created a class with TestNG annotations, don't use or look up any page elements before the first annotated method, i.e. not in field initializers or the constructor. TestNG first has to instantiate the class and only then invokes the annotated methods, so any code that fails while the object is being constructed shows up as an instantiation error.
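For illustration, a hypothetical sketch of the failing pattern and the fix (the page URL, locator, and driver setup are made up):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class LoginTest {
    private WebDriver driver;

    // Problematic: a field initializer like this runs while TestNG is
    // instantiating the class, before any @BeforeMethod/@Test, and the
    // resulting exception surfaces as "could not instantiate class".
    // private WebElement loginButton = driver.findElement(By.id("login"));

    private WebElement loginButton;

    @BeforeMethod
    public void setUp() {
        // Safe: do driver and element setup inside annotated methods instead.
        driver = new FirefoxDriver();
        driver.get("https://example.com/login");              // made-up URL
        loginButton = driver.findElement(By.id("login"));      // made-up locator
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }

    @Test
    public void canClickLogin() {
        loginButton.click();
    }
}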
Hope this helps.

Related

gtest - why does one test affect the behavior of another?

Currently I have a gtest suite with a test fixture class that has some member variables and functions.
I have a simple test, as well as more complex tests later on. If I comment out the complex tests, my simple test runs perfectly fine. However, when I include the other tests (even though I'm using gtest_filter to only run the first test), I start getting segfaults. I know it's impossible to debug without posting my code, but I guess I wanted to know more at a high level how this could occur. My understanding is that TEST_F constructs/destructs a new object every time it is run, so how could it be possible that the existence of a test affects another? Especially if I'm filtering, shouldn't the behavior be exactly the same?
Actually, TEST_F does construct and destruct a fresh instance of the test fixture class for each test: the fixture object is created before each test, SetUp() is called, the test body runs, TearDown() is called, and the fixture object is destroyed again.
So the fixture's own members should not carry state from one test to the next; any coupling between tests would have to come from static or global data, or from external resources.
But because you did not provide an MCVE, we cannot narrow it down further.

How to intercept a JUnit 5 method annotated with @Disabled?

I would like to write a JUnit 5 extension where I have to take some action when a test method annotated with @Disabled is found. Unfortunately, beforeTestExecution() is not called for such methods. Does anybody have an idea how to intercept such @Disabled test methods?
Thanks!
As described in the User Guide, you can disable the built-in ExecutionCondition that handles @Disabled by default by setting junit.jupiter.conditions.deactivate to org.junit.*DisabledCondition (see Configuration Parameters on how to set it). This will cause your tests to be executed.
Next, you need to implement your own ExecutionCondition extension, check for @Disabled, take your action and return ConditionEvaluationResult.disabled("...").
In order to avoid having to register your extension on each test class, you can activate Automatic Extension Registration and register your extension globally.
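A minimal sketch of such a condition (the class name and the logging action are made up; it assumes the built-in DisabledCondition has been deactivated as described above):

import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.extension.ConditionEvaluationResult;
import org.junit.jupiter.api.extension.ExecutionCondition;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.platform.commons.support.AnnotationSupport;

public class ReportingDisabledCondition implements ExecutionCondition {

    @Override
    public ConditionEvaluationResult evaluateExecutionCondition(ExtensionContext context) {
        return context.getElement()
                // look for @Disabled on the test method or class
                .flatMap(element -> AnnotationSupport.findAnnotation(element, Disabled.class))
                .map(disabled -> {
                    // take your custom action here, e.g. record the skipped test somewhere
                    System.out.println("Found @Disabled on " + context.getDisplayName());
                    return ConditionEvaluationResult.disabled(disabled.value());
                })
                .orElseGet(() -> ConditionEvaluationResult.enabled("@Disabled is not present"));
    }
}

With junit.jupiter.extensions.autodetection.enabled=true, the class can then be registered globally by listing it in META-INF/services/org.junit.jupiter.api.extension.Extension.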
Depending on what you want to achieve, it may be easier to register your own TestExecutionListener (see Plugging in Your Own Test Execution Listeners) and implement executionSkipped().
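A sketch of that listener variant (the class name is made up; it would be registered via the ServiceLoader mechanism by listing the class in META-INF/services/org.junit.platform.launcher.TestExecutionListener):

import org.junit.platform.launcher.TestExecutionListener;
import org.junit.platform.launcher.TestIdentifier;

public class SkippedTestListener implements TestExecutionListener {

    @Override
    public void executionSkipped(TestIdentifier testIdentifier, String reason) {
        // called for containers and tests that are skipped, e.g. via @Disabled
        if (testIdentifier.isTest()) {
            System.out.println("Skipped " + testIdentifier.getDisplayName() + ": " + reason);
        }
    }
}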
The extension point used here is https://github.com/junit-team/junit5/blob/master/junit-jupiter-api/src/main/java/org/junit/jupiter/api/extension/ExecutionCondition.java
It might be interesting to find out how several implementors compose, i.e. if the 2nd one will be called at all and under what circumstances. You’ll probably have to dive into Jupiter code or do some experiments.

Spock extension's start method invoked multiple times

I have a bunch of functional tests based on Spock and Geb. I want to perform some actions before and after the execution of these tests, so I created a global extension and added the required functionality to its start() and stop() methods. The problem is that the start/stop methods are invoked before/after each Spock spec, even though the Spock documentation (http://spockframework.org/spock/docs/1.1/all_in_one.html#_global_extensions) states:
start() This is called once at the very start of the Spock execution
stop() This is called once at the very end of the Spock execution
Am I doing something wrong, or is the Spock documentation incorrect about the behaviour of these methods?
@MantasG Spock implements a JUnit Runner and does not control how it is executed. Global extensions are managed in a RunContext, which is kept in a ThreadLocal. If Surefire uses multiple threads to execute tests, this will create multiple instances of RunContext, each with its own list of global extensions. If you are using an EmbeddedSpecRunner, that would also create a new isolated context.
This context will stay around until the thread dies. It would be more accurate to remove the context once the test run has finished, but the JUnit Runner SPI doesn't provide an adequate hook. That said, since most environments fork a new JVM for each test run, this shouldn't be much of a problem in practice.
Depending on what you want to do there are other ways:
1. You can use a JUnit RunListener and its testRunStarted/testRunFinished hooks. Note that you need to register it via Surefire.
2. If you really want to run things only once, you could use Failsafe instead of Surefire and put your actions in the pre-integration-test and post-integration-test phases.
3. You could hack something together using a static counter for the start/stop calls: perform your start action when the counter is 0 and your stop action once it drops back to 0. Of course you'll need to make this thread-safe; see the sketch after this list.
Note that Surefire also supports forking multiple JVMs, which also affects options 1 and 3.
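A rough sketch of option 3, assuming a global extension registered in META-INF/services/org.spockframework.runtime.extension.IGlobalExtension (the class name and the one-time actions are made up):

import java.util.concurrent.atomic.AtomicInteger;
import org.spockframework.runtime.extension.IGlobalExtension;
import org.spockframework.runtime.model.SpecInfo;

public class OneTimeSetupExtension implements IGlobalExtension {

    // shared across all RunContext instances created by concurrent test threads
    private static final AtomicInteger ACTIVE = new AtomicInteger();

    @Override
    public void start() {
        if (ACTIVE.getAndIncrement() == 0) {
            // perform the real one-time start action here
        }
    }

    @Override
    public void visitSpec(SpecInfo spec) {
        // no per-spec behaviour needed for this sketch
    }

    @Override
    public void stop() {
        if (ACTIVE.decrementAndGet() == 0) {
            // perform the real one-time stop action here
        }
    }
}

One caveat: if the contexts never overlap (one spec's stop() runs before the next start()), the counter falls back to 0 in between and the actions repeat, so you may additionally want a "has started" flag for the start side.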
I believe Spock is invoked at the start of every spec in your test suite, so start() and stop() run for each of those executions.
I think you might want to take a look at the Fixture Methods found in the same doc you linked in the question:
http://spockframework.org/spock/docs/1.1/all_in_one.html#_specification

SpecFlow test doesn't call repository methods

I wrote a SpecFlow test (it just adds some items to repositories and calculates values afterwards). It used to work without problems, but now it doesn't.
I mocked my repositories and filled them with values.
I stepped through it with the debugger and found that the repository methods are never called; the debugger simply ignores and skips them. This is not a joke, and I'm sure I put the breakpoint in the right method.
It was a surprise to me: a test with a mocked repository never calls the real repository method, so you can't debug the behaviour inside that method. The mock repository just returns the configured value. If you can't get the value you expect, it means the data was added to the mock repository incorrectly.
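The question is about SpecFlow/.NET, but the same behaviour shows up with any mocking library; a minimal Java/Mockito sketch of the point being made (the repository interface and values are invented):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class MockedRepositoryDemo {

    // invented repository interface, standing in for the real one
    public interface ItemRepository {
        int totalValue();
    }

    public static void main(String[] args) {
        ItemRepository repository = mock(ItemRepository.class);

        // The real totalValue() implementation is never executed; the mock
        // just returns whatever it was told to return, so a breakpoint inside
        // the real method is never hit.
        when(repository.totalValue()).thenReturn(42);

        System.out.println(repository.totalValue()); // prints 42
    }
}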

TestNG: find which test classes will run before any of them are run

I am working with TestNG, where I run an external test framework, receive the result data and assert on it. To run the external test framework I need to set up a specification of which tests should be run. To generate this specification I need to know which tests are selected in the TestNG .xml file.
The only way I could think of doing this is to parse the file manually. But I am hoping for a better solution than this.
Thanks for any answers!
//Flipbed
Edit:
My colleague found solutions to the problem.
1. In @Factory and @DataProvider annotated methods it is possible to add a parameter of type ITestContext. Through that parameter one can call getAllTestMethods().
2. Create a new class that implements IMethodInterceptor and override its intercept method. The method takes a List of all the methods that will be run by TestNG.
If someone has any other suggestions feel free to add.
//Flipbed
The solution we used was number 2 in my edit. We implemented IMethodInterceptor and used the method list as well as the ITestContext both to see which tests will run and to modify that list.
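A minimal sketch of that interceptor (the class name is made up; it is registered either in the listeners section of testng.xml or with the @Listeners annotation):

import java.util.List;
import org.testng.IMethodInstance;
import org.testng.IMethodInterceptor;
import org.testng.ITestContext;

public class TestPlanInterceptor implements IMethodInterceptor {

    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
        // called before any test runs, with the full list of methods TestNG plans to execute
        for (IMethodInstance instance : methods) {
            String testClass = instance.getMethod().getRealClass().getName();
            System.out.println("Will run: " + testClass + "." + instance.getMethod().getMethodName());
        }
        // returning the list unchanged keeps the plan as-is; it could also be filtered or reordered here
        return methods;
    }
}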