JUnit 5: call method with parameter but receive error: No ParameterResolver registered for parameter - junit5

I need to perform a JUnit 5 test where method 'test1' can receive a String parameter.
I need to call this method again from a new @Test case called 'call_test1WithParameter', passing the parameter.
Is it possible to do that in JUnit 5?
Moreover, when I run method "test1", I receive this error:
org.junit.jupiter.api.extension.ParameterResolutionException:
No ParameterResolver registered for parameter [java.lang.String arg1] in method [void com.netsgroup.igfs.cg.test.IntegrationTest.MyTest.test1(org.junit.jupiter.api.TestInfo,java.lang.String)].
Is there a way to perform this test case as I indicated?
Thanks in advance for your support on this.
Below is my test class:
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInfo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class MyTest {

    static final Logger log = LoggerFactory.getLogger(MyTest.class);

    @Test
    @Order(1)
    void test1(TestInfo testInfo, String myOrderId) {
        log.info("Start test case: " + testInfo.getDisplayName());
        if (myOrderId != null)
            log.info("MyOrderId is: " + myOrderId);
    }

    @Test
    @Order(2)
    void call_test1WithParameter(TestInfo testInfo) {
        log.info("Start test case: " + testInfo.getDisplayName());
        test1(testInfo, "OrderId_123456");
    }
}

This is because you specified extra arguments in your test method, and you need to configure a resolver to tell JUnit how to supply values for those parameters.
Arguments that are provided by JUnit itself, such as TestInfo, TestReporter or RepetitionInfo, are resolved automatically without any extra configuration.
But for other arguments, such as the orderId in your case, you have to change your test case to a @ParameterizedTest so that you can use @ValueSource / @EnumSource / @MethodSource / @CsvSource or @ArgumentsSource etc. to define the actual values for the orderId (see this for details).
So changing your test case to the following should solve your problem:
@ParameterizedTest
@ValueSource(strings = {"order1", "order2", "order3"})
@Order(1)
void test1(String myOrderId, TestInfo testInfo) {
    log.info("Start test case: " + testInfo.getDisplayName());
    if (myOrderId != null)
        log.info("MyOrderId is: " + myOrderId);
}
One thing to pay attention to is that the arguments resolved by argument sources (e.g. @ValueSource) must come first in the argument list (see this).
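For example, combining a source-provided argument with a resolver-provided one could be sketched as follows (the order IDs are invented for illustration, and junit-jupiter-params is assumed to be on the classpath):

```java
import org.junit.jupiter.api.TestInfo;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class OrderTest {

    // Source-provided arguments (myOrderId) must come first;
    // resolver-provided arguments (TestInfo) come last.
    @ParameterizedTest
    @ValueSource(strings = {"OrderId_123456", "OrderId_654321"})
    void test1(String myOrderId, TestInfo testInfo) {
        // TestInfo is resolved by JUnit at runtime (guarded here for a plain call)
        if (testInfo != null) {
            System.out.println("Start test case: " + testInfo.getDisplayName());
        }
        System.out.println("MyOrderId is: " + myOrderId);
    }
}
```

Each value in the @ValueSource produces one invocation of test1, so the method no longer needs to be called manually from a second @Test.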

Related

How to Take Screenshot when TestNG Assert fails?

String actualValue = d.findElement(By.xpath("//*[@id=\"wrapper\"]/main/div[2]/div/div[1]/div/div[1]/div[2]/div/table/tbody/tr[1]/td[1]/a")).getText();
Assert.assertEquals(actualValue, "jumlga");
captureScreen(d, "Fail");
The assert should not be put before your capture-screen call, because a failed assertion immediately aborts the test method, so your code
captureScreen(d, "Fail");
will never be reached.
This is how I usually do it:
boolean result = false;
try {
    // do stuff here
    result = true;
} catch (Exception ex) {
    // handle the error and capture the screenshot
    captureScreen(d, "Fail");
}
// then assert
Assert.assertEquals(result, true);
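Another option, if you'd rather not wrap each test in try/catch, is TestNG's SoftAssert, which records failures instead of throwing immediately, so the lines after the assertion stay reachable; a rough sketch (the screenshot call is commented out because d and captureScreen come from the question's own code):

```java
import org.testng.asserts.SoftAssert;

public class SoftAssertExample {

    public void verifyValue(String actual) {
        SoftAssert softly = new SoftAssert();
        // A failed soft assertion is recorded, not thrown,
        // so the following lines still execute
        softly.assertEquals(actual, "jumlga");
        // captureScreen(d, "Fail");  // screenshot logic would still run here
        softly.assertAll();           // throws at the end if anything failed
    }
}
```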
1.
A good solution is to use a reporting framework like allure-reports.
Read here: allure-reports
2.
We don't want our tests to be cluttered with a try/catch in every test, so we will use Listeners, which use an annotation system to "listen" to our tests and act accordingly.
Example:
public class listeners extends commonOps implements ITestListener {

    @Override
    public void onTestFailure(ITestResult iTestResult) {
        System.out.println("------------------ Starting Test: " + iTestResult.getName() + " Failed ------------------");
        if (platform.equalsIgnoreCase("web"))
            saveScreenshot();
    }
}
Please note I only used the relevant method to your question and I suggest you read here:
TestNG Listeners
Now we want allure-reports' built-in attachment mechanism to take a screenshot every time a test fails, so we add this method inside our listeners class.
Example:
@Attachment(value = "Page Screen-Shot", type = "image/png")
public byte[] saveScreenshot() {
    return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
}
Test example
@Listeners(listeners.class)
public class myTest extends commonOps {

    @Test(description = "Test01: Add numbers and verify")
    @Description("Test Description: Using Allure reports annotations")
    public void test01_myFirstTest() {
        Assert.assertEquals(result, true);
    }
}
Note that at the beginning of the class we use the @Listeners(listeners.class) annotation, which allows our listener to listen to our test; mind that (listeners.class) can be whatever class you named your listener.
The @Description is related to allure-reports and, as the code snippet suggests, you can add additional info about the test.
Finally, our Assert.assertEquals(result, true) will take a screenshot if the assertion fails, because we hooked our listeners class to the test.

testNG priorities not followed

In the testNG.xml file, I have 10+ test classes (within a test-suite tag) for regression testing. I have then ordered the automated tests of several test classes in a particular sequence by using priority=xxx in the @Test annotation. The priority values within a particular class are sequential, but each test class has a different range. For example:
testClass1 : values are from 1-10
testClass2 : values are from 11-23
testClass3 : values are from 31-38
...
lastTestClass : values are from 10201-10215
The purpose of this is to have a particular sequence in which the 10+ test-classes are executed. There is one test-class that I need to be executed towards the end of the test execution - so, the priorities in that class range from 10201-10215. However, this particular test-class gets tested right after the 1st class with priorities from 1-10.
Instead of using priority, I would recommend using dependencies. They will run your tests in a strict order, never running the dependent before the test it depends on, even if you are running in parallel.
I understand you have different ranges in different classes, so in dependsOnMethods you would have to specify the fully qualified name of the test you are referencing:
@Test(description = "Values are from 1-10")
public void values_1_10() {
    someTest();
}

@Test(description = "Values are from 21-23",
      dependsOnMethods = { "com.project.test.RangeToTen.values_1_10" })
public void values_21_23() {
    someTest();
}
If you have more than one test in each range then you can use dependsOnGroups:
@Test(enabled = true,
      groups = { "group_1_10" },
      description = "Values are from 1-10")
public void values_1_10_A() {
    someTest();
}

@Test(enabled = true,
      groups = { "group_1_10" },
      description = "Values are from 1-10")
public void values_1_10_B() {
    someTest();
}

@Test(enabled = true,
      description = "Values are from 21-23",
      dependsOnGroups = { "group_1_10" })
public void values_21_23_A() {
    someTest();
}

@Test(enabled = true,
      description = "Values are from 21-23",
      dependsOnGroups = { "group_1_10" })
public void values_21_23_B() {
    someTest();
}
You can also do the same with more options from the testng.xml:
https://testng.org/doc/documentation-main.html#dependencies-in-xml
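For reference, a group dependency expressed in testng.xml looks roughly like this (suite, test, and class names here are illustrative):

```xml
<suite name="Regression">
  <test name="Ordered ranges">
    <groups>
      <dependencies>
        <!-- group_21_23 runs only after group_1_10 has passed -->
        <group name="group_21_23" depends-on="group_1_10"/>
      </dependencies>
    </groups>
    <classes>
      <class name="com.project.test.RangeToTen"/>
    </classes>
  </test>
</suite>
```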
Another option you have is to use preserve-order:
https://www.seleniumeasy.com/testng-tutorials/preserve-order-in-testng
But as Anton mentions, that could bring you trouble if you ever want to run in parallel, so I recommend using dependencies.
Designing your tests to run in a specific order is a bad practice. You might want to run tests in parallel in the future, and depending on execution order will stop you from doing that.
Consider using TestNG listeners instead:
It looks like you are trying to implement some kind of tear-down process after the tests.
If this is the case, you can implement ITestListener and use its onFinish method to run some code after all of your tests have executed.
Also, this TestNG annotation might work for your case:
org.testng.annotations.AfterSuite
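A minimal sketch of that listener-based tear-down (the class name and message are mine; this assumes TestNG 7+, where ITestListener's methods have default implementations, so only onFinish needs overriding):

```java
import org.testng.ITestContext;
import org.testng.ITestListener;

public class SuiteTearDownListener implements ITestListener {

    @Override
    public void onFinish(ITestContext context) {
        // Runs once after all tests in the <test> tag have finished
        if (context != null) {
            System.out.println("Tear-down after: " + context.getName());
        }
    }
}
```

The listener is registered either with @Listeners(SuiteTearDownListener.class) on a test class or via a listener entry in testng.xml.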

Run specific Spock Tests without setup?

In the documentation, Spock mentions these functions in the section 'Fixture Methods':
def setup() {} // run before every feature method
def cleanup() {} // run after every feature method
def setupSpec() {} // run before the first feature method
def cleanupSpec() {} // run after the last feature method
I am using the setup function to initialize new objects for all tests. However, some tests are supposed to test what happens when there are no other objects yet.
Because of the setup, these tests now fail.
Is there any way to suspend the setup for certain functions? An annotation perhaps?
Or do I have to create a separate "setup" function and call it in each test? The majority of tests use it!
You can always override the value inside your test method, only for that specific method. Take a look at the following example:
import spock.lang.Shared
import spock.lang.Specification

class SpockDemoSpec extends Specification {

    String a
    String b

    @Shared
    String c

    def setup() {
        println "This method is executed before each specification"
        a = "A value"
        b = "B value"
    }

    def setupSpec() {
        println "This method is executed only one time before all other specifications"
        c = "C value"
    }

    def "A empty"() {
        setup:
        a = ""

        when:
        def result = doSomething(a)

        then:
        result == ""
    }

    def "A"() {
        when:
        def result = doSomething(a)

        then:
        result == "A VALUE"
    }

    def "A NullPointerException"() {
        setup:
        a = null

        when:
        def result = doSomething(a)

        then:
        thrown NullPointerException
    }

    def "C"() {
        when:
        def result = doSomething(c)

        then:
        result == 'C VALUE'
    }

    private String doSomething(String str) {
        return str.toUpperCase()
    }
}
In this example, a and b are set up before each test, and c is set up only once before any specification is executed (that's why it requires the @Shared annotation to keep its value between executions).
When we run this test we will see following output in the console:
This method is executed only one time before all other specifications
This method is executed before each specification
This method is executed before each specification
This method is executed before each specification
This method is executed before each specification
The general rule of thumb is that you shouldn't change c's value after it is set up in the setupSpec method, mostly because you don't know in which order the tests are going to be executed. (In case you need to guarantee the order, you can annotate your class with the @Stepwise annotation; Spock will then run all cases in the order they are defined.) In the case of a and b, you can override them at the single-method level; they will get reinitialized before the next test case is executed. Hope it helps.

Hive UDF exception

I have a simple hive UDF:
package com.matthewrathbone.example;

import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

@Description(
    name = "SimpleUDFExample",
    value = "returns 'hello x', where x is whatever you give it (STRING)",
    extended = "SELECT simpleudfexample('world') from foo limit 1;"
)
class SimpleUDFExample extends UDF {
    public Text evaluate(Text input) {
        if (input == null) return null;
        return new Text("Hello " + input.toString());
    }
}
When I execute it using a select query:
select helloUdf(method) from tests3atable limit 10;
method is the name of column in tests3atable table.
I am getting the below exception:
FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments 'method': Unable to instantiate UDF implementation class com.matthewrathbone.example.SimpleUDFExample: java.lang.IllegalAccessException: Class org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge can not access a member of class com.matthewrathbone.example.SimpleUDFExample with modifiers ""
Declare the class as public and it should work.
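That is, the declaration should read public class SimpleUDFExample extends UDF. The evaluate logic itself is fine; here it is sketched with plain String in place of Hadoop's Text so the behavior can be checked standalone:

```java
// Standalone sketch of the UDF's evaluate logic; the real class must also be
// declared public so GenericUDFBridge can instantiate it via reflection.
public class SimpleUDFExample {
    public String evaluate(String input) {
        if (input == null) return null;
        return "Hello " + input;
    }
}
```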
I also had the same issue. It turned out Eclipse was not refreshing the program I had modified, so please make sure the modifications you make in your code are reflected in the jar.

Grails integration test failing with MissingMethodException

I'm attempting to test a typical controller flow for user login. There are extended relations, as with most login systems, and the Grails documentation is completely useless: it doesn't contain a single feature-complete example that is relevant to typical real-world usage.
My test looks like this:
@TestFor(UserController)
class UserControllerTests extends GroovyTestCase {

    void testLogin() {
        params.login = [email: "test1@example.com", password: "123"]
        controller.login()
        assert "/user/main" == response.redirectUrl
    }
}
The controller does:
def login() {
    if (!params.login) {
        return
    }
    println("Email " + params.login.email)
    Person p = Person.findByEmail(params?.login?.email)
    ...
}
which fails with:
groovy.lang.MissingMethodException: No signature of method: immigration.Person.methodMissing() is applicable for argument types: () values: []
The correct data is shown in the println, but the method fails to be called.
The test suite cannot use mocks overall because there are database triggers that get called, the result of which will need to be tested.
The bizarre thing is that if I call the method directly in the test, it works, but calling it in the controller doesn't.
For an update: I regenerated the test directly from the grails command, and it added the @Test annotation:
@TestFor(UserController)
class UserControllerTests extends GroovyTestCase {

    @Test
    void testLogin() {
        params.login = [email: "test1@example.com", password: "123"]
        Person.findByEmail(params.login.email)
        controller.login()
    }
}
This works if I run it with
grails test-app -integration UserController
though the result isn't populated correctly: the response is empty, flash.message is null even though it should have a value, redirectedUrl is null, the content body is empty, and so is the view.
If I remove the @TestFor annotation, it doesn't work even in the slightest: it fails, telling me that 'params' doesn't exist.
In another test, I have two methods. The first method runs and finds Person.findAllByEmail(); then the second method runs, can't find Person.findAllByEmail, and crashes with a similar method-missing error.
In another weird update: it looks like the response object is sending back a redirect, but to the application baseUrl, not to the user controller at all.
Integration tests shouldn't use @TestFor. You need to create an instance of the controller in your test and set params on that:
class UserControllerTests extends GroovyTestCase {

    void testLogin() {
        def controller = new UserController()
        controller.params.login = [email: 'test1@example.com', password: '123']
        controller.login()
        assert "/user/main" == controller.response.redirectedUrl
    }
}
Details are in the user guide.
The TestFor annotation is used only in unit tests, since it mocks the structure. In integration tests you have access to the full Grails environment, so there's no need for these mocks. Just remove the annotation and it should work:
class UserControllerTests extends GroovyTestCase {
...
}