Comparing result of one JUnit @Test with another @Test in same class - selenium

Is it possible to compare the result / output of one JUnit test with another test in the same class?
Below is an outline of my test class:
public class CompareResult {
    @Before
    public void setUp() {
        // open driver
    }
    @After
    public void tearDown() {
        // quit driver
    }
    @Test
    public void testFirstSite() {
        // connect to 1st website, enter data and calculate value
        // store the value in variable A
    }
    @Test
    public void testSecondSite() {
        // connect to 2nd website, enter data and calculate value
        // store the value in variable B
    }
    @Test
    public void testCompare() {
        // compare A and B
    }
}
When I display the values of variables A and B in the 3rd @Test, they are NULL. Can we not use a variable from one @Test in another @Test in JUnit? Please advise, I am new to JUnit.

Why do they need to be two tests? If you are comparing the values, you really have one test with multiple helper methods and possibly multiple asserts. And if there aren't any asserts in helper1 and helper2, this becomes even more apparent: a test without an assert is just testing that it doesn't blow up!
private String helper1() {
    // connect to 1st website, enter data and calculate value
    // return the value (A)
}
private String helper2() {
    // connect to 2nd website, enter data and calculate value
    // return the value (B)
}
@Test
public void actualTest() {
    // compare A and B with an assertion, e.g.
    // assertEquals(helper1(), helper2());
}
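Stripped of the Selenium specifics, the single-test shape looks like this; computeFromFirstSite and computeFromSecondSite are hypothetical stand-ins for the two page interactions:

```java
public class CompareResultSketch {
    // hypothetical stand-ins for "connect to website, enter data, calculate value"
    static String computeFromFirstSite()  { return "42"; }
    static String computeFromSecondSite() { return "42"; }

    // the whole scenario is a single test: fetch both values, then assert once
    public static void main(String[] args) {
        String a = computeFromFirstSite();
        String b = computeFromSecondSite();
        if (!a.equals(b)) {
            throw new AssertionError("expected " + a + " but was " + b);
        }
        System.out.println("values match: " + a);
    }
}
```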

As I understand it, you are storing your values in local variables. Declare private fields a and b first, and then use them to store your data.
public class CompareResult {
    // JUnit creates a new instance of the test class for every @Test,
    // so the fields must be static to survive from one test to the next
    private static String a = null;
    private static String b = null;
    @Before
    public void setup() {
        // open driver
    }
...
By the way, your tests should be independent; passing values from one test to another is not a good way to implement them. Also, I haven't worked with JUnit a lot, so I don't know how the execution order of your tests is determined. You would have to define some test dependency or ordering, and I repeat it once again: this is not correct for tests.
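The NULL values have a concrete cause: JUnit 4 creates a fresh instance of the test class for every @Test method, so a non-static field set in one test is gone by the time the next test runs. A minimal plain-Java sketch of that lifecycle, with the JUnit runner simulated by plain instantiation (the class and field names are illustrative):

```java
// stand-in for the test class; JUnit instantiates it once per test method
class SharedState {
    String a;                 // instance field: lost between test methods
    static String shared;     // static field: survives across instances

    void firstTest() { a = "42"; shared = "42"; }
    String readA()   { return a; }
}

public class LifecycleSketch {
    public static void main(String[] args) {
        new SharedState().firstTest();            // "test 1" runs on instance #1
        SharedState next = new SharedState();     // "test 3" runs on instance #2
        System.out.println(next.readA());         // null - instance state is gone
        System.out.println(SharedState.shared);   // 42 - static state survives
    }
}
```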

Related

I need help writing a JUnit 5 test for a method

I am very new to Java, and I have been tasked with creating JUnit 5 tests for already-written code. To start, I need to write a test for the method below, and I am unsure how to approach it.
public static Double getFormattedDoubleValue(Number value) {
    return getFormattedDoubleValue(value, -1);
}
I tried the below test, and it passes, but I feel like I am testing the wrong thing here.
@Test
public void testDoubleString() {
    Double num = 41.1212121212;
    String expected = "41.12";
    String actual = String.format("%.2f", num);
    assertEquals(expected, actual, "Should return 41.12");
}
Writing tests is simpler than it is sometimes made out to be: all it is is calling your code from a test class instead of a business-logic class, and making sure that you get the right output for a given input.
Here is an excellent article that will take you from the very beginning of the process: Baeldung: JUnit 5
Possible Sample Test
As I'm not quite certain what the expectations of your method are, I am just going to pretend that it should take a Double and return that number minus one:
@Test
void getFormattedDoubleValue_Test() {
    Double expected = 5.0;
    Double actual = getFormattedDoubleValue(expected + 1);
    assertEquals(expected, actual);
}
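For the actual method under test, the key change is to call getFormattedDoubleValue itself rather than re-deriving the expected value with String.format. Here is a self-contained sketch; the stand-in implementation below assumes the method rounds to two decimal places (as the asker's %.2f comparison implies), which may not match the real overload:

```java
public class FormattedDoubleSketch {
    // hypothetical stand-in: the real getFormattedDoubleValue(value, -1) is not shown
    static Double getFormattedDoubleValue(Number value) {
        return Math.round(value.doubleValue() * 100.0) / 100.0;
    }

    public static void main(String[] args) {
        // test style: call the method under test, compare against a fixed expectation
        Double actual = getFormattedDoubleValue(41.1212121212);
        System.out.println(actual); // 41.12
    }
}
```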

Abort/ignore parameterized test in JUnit 5

I have some parameterized tests
@ParameterizedTest
@CsvFileSource(resources = "testData.csv", numLinesToSkip = 1)
public void testExample(String parameter, String anotherParameter) {
    // testing here
}
In case one execution fails, I want to ignore all following executions.
AFAIK there is no built-in mechanism to do this. The following does work, but is a bit hackish:
@TestInstance(Lifecycle.PER_CLASS)
class Test {
    boolean skipRemaining = false;

    @ParameterizedTest
    @CsvFileSource(resources = "testData.csv", numLinesToSkip = 1)
    void test(String parameter, String anotherParameter) {
        Assumptions.assumeFalse(skipRemaining);
        try {
            // testing here
        } catch (AssertionError e) {
            skipRemaining = true;
            throw e;
        }
    }
}
In contrast to a failed assertion, which marks a test as failed, a failed assumption results in the test being aborted. In addition, the lifecycle is switched from per-method to per-class:
When using this mode, a new test instance will be created once per test class. Thus, if your test methods rely on state stored in instance variables, you may need to reset that state in @BeforeEach or @AfterEach methods.
Depending on how often you need that feature, I would rather go with a custom extension.
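The control flow of that workaround can be seen without any JUnit machinery; here the parameterized runs are a plain loop and `param == 2` is an arbitrary simulated failure:

```java
import java.util.ArrayList;
import java.util.List;

public class FailFastSketch {
    // runs each "parameter"; after the first failure, the rest are aborted
    static List<String> run(int[] params) {
        boolean skipRemaining = false;
        List<String> results = new ArrayList<>();
        for (int param : params) {
            if (skipRemaining) {              // the assumeFalse(skipRemaining) step
                results.add(param + ": aborted");
                continue;
            }
            try {
                if (param == 2) {             // simulated assertion failure
                    throw new AssertionError("failed on " + param);
                }
                results.add(param + ": passed");
            } catch (AssertionError e) {
                skipRemaining = true;         // all later parameters get aborted
                results.add(param + ": failed");
            }
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(run(new int[] {1, 2, 3, 4}));
        // [1: passed, 2: failed, 3: aborted, 4: aborted]
    }
}
```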

TestNG priorities not followed

In the testng.xml file, I have 10+ test classes (within a test-suite tag) for regression testing. I have ordered the automated tests of the test classes in a particular sequence by using priority=xxx in the @Test annotations. The priority values within a particular class are sequential, but each test class has a different range. For example:
testClass1: values are from 1-10
testClass2: values are from 11-23
testClass3: values are from 31-38
...
lastTestClass: values are from 10201-10215
The purpose of this is to have a particular sequence in which the 10+ test-classes are executed. There is one test-class that I need to be executed towards the end of the test execution - so, the priorities in that class range from 10201-10215. However, this particular test-class gets tested right after the 1st class with priorities from 1-10.
Instead of using priority, I would recommend using dependencies. They will run your tests in a strict order, never running a dependent test before the test it depends on, even if you are running in parallel.
I understand you have different ranges in different classes, so in dependsOnMethods you would have to specify the fully qualified name of the test you are referencing:
@Test(description = "Values are from 1-10")
public void values_1_10() {
    someTest();
}

@Test(description = "Values are from 21-23",
      dependsOnMethods = { "com.project.test.RangeToTen.values_1_10" })
public void values_21_23() {
    someTest();
}
If you have more than one test in each range then you can use dependsOnGroups:
@Test(groups = { "group_1_10" },
      description = "Values are from 1-10")
public void values_1_10_A() {
    someTest();
}

@Test(groups = { "group_1_10" },
      description = "Values are from 1-10")
public void values_1_10_B() {
    someTest();
}

@Test(description = "Values are from 21-23",
      dependsOnGroups = { "group_1_10" })
public void values_21_23_A() {
    someTest();
}

@Test(description = "Values are from 21-23",
      dependsOnGroups = { "group_1_10" })
public void values_21_23_B() {
    someTest();
}
You can also do the same with more options from the testng.xml:
https://testng.org/doc/documentation-main.html#dependencies-in-xml
Another option you have is to use the "preserve order":
https://www.seleniumeasy.com/testng-tutorials/preserve-order-in-testng
But as Anton mentions, that could cause trouble if you ever want to run in parallel, so I recommend using dependencies.
Designing your tests to run in a specific order is bad practice. You might want to run tests in parallel in the future, and depending on execution order will stop you from doing that.
Consider using TestNG listeners instead:
It looks like you are trying to implement some kind of tearDown process after the tests.
If that is the case, you can implement ITestListener and use its onFinish method to run code after all of your tests have executed.
Also, this TestNG annotation might work for your case:
org.testng.annotations.AfterSuite

How to execute one test case in parallel at the test-data level

I am opening multiple browser instances, one per data set, but all the input data is entered in only one instance/session instead of each data set going to its own instance. I am using Selenium and TestNG.
@DataProvider(name = "URLprovider", parallel = true)
private Object[][] getURLs() {
    return new Object[][] {
        {"First data"},
        {"Second data"},
        {"3rd data"}
    };
}

@Test(dataProvider = "URLprovider", threadPoolSize = 3)
public void testFun(String url) {
    BaseDriver baseReference = BaseDriver.getBaseDriverInstance();
    System.out.println("Test class " + url + "=" + Thread.currentThread().getId());
    driver = baseReference.initBrowser();
    driver.get("http://stackoverflow.com/");
    driver.findElement(By.xpath("//*[@id='search']/div/input")).sendKeys(url);
}
So here I am opening three browser instances in parallel (as there are 3 sets of data in the @DataProvider) and entering a value in the text box. While executing the code, 3 instances are opened, but the test data is entered into only one instance; my expectation is that each data set is entered into its own instance.
The problem lies in your test code.
The testFun() code you shared suggests that you are using the same WebDriver instance across all of your @Test iterations.
You haven't shown us what BaseDriver baseReference = BaseDriver.getBaseDriverInstance(); looks like, but going by your issue I am assuming that it returns the same WebDriver instance every time.
That explains why all of your test methods appear to be sharing one WebDriver instance.
To fix this issue, you would need to do one of the following:
1. Move your WebDriver instantiation logic inside your test method, i.e. testFun(), or
2. Create a @BeforeMethod configuration method that is responsible for creating a browser instance and storing it in a ThreadLocal<RemoteWebDriver>; your test method testFun() then gets the current thread's WebDriver via driver.get() (here driver is of type ThreadLocal<RemoteWebDriver>). Don't forget to declare driver as a static variable.
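The ThreadLocal pattern itself can be sketched without Selenium; Session below is a hypothetical stand-in for RemoteWebDriver, and the point is that driver.get() hands each thread its own instance:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadLocalSketch {
    // stand-in for RemoteWebDriver: each thread must own a separate instance
    static class Session { }

    // static, as recommended above: every test method running on a given
    // thread sees that thread's own Session
    static final ThreadLocal<Session> driver = ThreadLocal.withInitial(Session::new);

    public static void main(String[] args) throws InterruptedException {
        Set<Session> distinct = ConcurrentHashMap.newKeySet();
        Runnable test = () -> distinct.add(driver.get()); // per-thread instance
        Thread a = new Thread(test), b = new Thread(test), c = new Thread(test);
        a.start(); b.start(); c.start();
        a.join(); b.join(); c.join();
        System.out.println(distinct.size()); // 3 - one Session per thread
    }
}
```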

In MSTest how to check if last test passed (in TestCleanup)

I'm creating web tests in Selenium using MSTest and want to take a screenshot every time a test fails, but not every time a test passes.
What I wanted to do is put a screenshot function inside the [TestCleanup] method and run it if the test failed but not if it passed. But how do I figure out whether the last test passed?
Currently I set a bool to false in [TestInitialize] and to true at the end of the test if it runs through.
But I don't think that's a very good solution.
So basically I'm looking for a way to detect in [TestCleanup] whether the last test passed or failed.
Solution
if (TestContext.CurrentTestOutcome != UnitTestOutcome.Passed)
{
    // some code
}
The answer by @MartinMussmann is correct, but incomplete. To access the TestContext object, you need to declare it as a property in your test class:
[TestClass]
public class BaseTest
{
    public TestContext TestContext { get; set; }

    [TestCleanup]
    public void TestCleanup()
    {
        if (TestContext.CurrentTestOutcome != UnitTestOutcome.Passed)
        {
            // some code
        }
    }
}
This is also mentioned in the following post.