TestNG priorities not followed - Selenium

In the testng.xml file, I have 10+ test classes (within a test-suite tag) for regression testing. I have ordered the automated tests of several test classes in a particular sequence by using priority=xxx in the @Test annotation. The priority values within a particular class are sequential, but each test class has a different range. For example:
testClass1 : values are from 1-10
testClass2 : values are from 11-23
testClass3 : values are from 31-38
.
.
.
lastTestClass : values are from 10201-10215
The purpose of this is to enforce a particular sequence in which the 10+ test classes are executed. There is one test class that I need to be executed towards the end of the run, so the priorities in that class range from 10201-10215. However, this particular test class gets executed right after the first class with priorities 1-10.

Instead of using priority, I would recommend using dependencies. They will run your tests in a strict order, never running a test before the tests it depends on, even if you are running in parallel.
I understand you have different ranges in different classes, so in dependsOnMethods you would have to specify the fully qualified name of the test method you are referencing:
@Test(description = "Values are from 1-10")
public void values_1_10() {
    someTest();
}

@Test(description = "Values are from 21-23",
      dependsOnMethods = { "com.project.test.RangeToTen.values_1_10" })
public void values_21_23() {
    someTest();
}
If you have more than one test in each range, you can put the tests of a range in a group and use dependsOnGroups:
@Test(enabled = true,
      groups = { "group_1_10" },
      description = "Values are from 1-10")
public void values_1_10_A() {
    someTest();
}

@Test(enabled = true,
      groups = { "group_1_10" },
      description = "Values are from 1-10")
public void values_1_10_B() {
    someTest();
}

@Test(enabled = true,
      description = "Values are from 21-23",
      dependsOnGroups = { "group_1_10" })
public void values_21_23_A() {
    someTest();
}

@Test(enabled = true,
      description = "Values are from 21-23",
      dependsOnGroups = { "group_1_10" })
public void values_21_23_B() {
    someTest();
}
You can also do the same with more options from the testng.xml:
https://testng.org/doc/documentation-main.html#dependencies-in-xml
Another option is to use "preserve-order":
https://www.seleniumeasy.com/testng-tutorials/preserve-order-in-testng
But as Anton mentions, that could cause trouble if you ever want to run in parallel, so I recommend using dependencies.

Designing your tests to run in a specific order is a bad practice. You might want to run tests in parallel in the future, and having dependencies on order will stop you from doing that.
Consider using TestNG listeners instead:
It looks like you are trying to implement some kind of tearDown process after tests.
If this is the case, you can implement ITestListener and use its onFinish method to run code after all of your tests have executed.
Also, this TestNG annotation might work for your case:
org.testng.annotations.AfterSuite
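For illustration, here is a minimal sketch of both options, assuming the clean-up work lives in a hypothetical cleanUpEnvironment() helper (class and method names below are placeholders, not from the original question):
import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestResult;
import org.testng.annotations.AfterSuite;

public class SuiteTearDownListener implements ITestListener {

    @Override
    public void onFinish(ITestContext context) {
        // Runs once after all tests in the <test> tag have finished.
        cleanUpEnvironment();
    }

    // Empty implementations required by the interface (older TestNG versions have no default methods).
    @Override public void onStart(ITestContext context) { }
    @Override public void onTestStart(ITestResult result) { }
    @Override public void onTestSuccess(ITestResult result) { }
    @Override public void onTestFailure(ITestResult result) { }
    @Override public void onTestSkipped(ITestResult result) { }
    @Override public void onTestFailedButWithinSuccessPercentage(ITestResult result) { }

    private void cleanUpEnvironment() {
        // Hypothetical placeholder for the actual tear-down logic.
    }
}

// Alternative: an @AfterSuite method runs once after the whole suite.
class SuiteTearDown {
    @AfterSuite
    public void tearDownSuite() {
        // Hypothetical placeholder for suite-level tear-down.
    }
}
The listener can be registered with @Listeners on a test class or with a <listeners> element in testng.xml.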


How to Take Screenshot when TestNG Assert fails?

String Actualvalue = d.findElement(By.xpath("//*[@id=\"wrapper\"]/main/div[2]/div/div[1]/div/div[1]/div[2]/div/table/tbody/tr[1]/td[1]/a")).getText();
Assert.assertEquals(Actualvalue, "jumlga");
captureScreen(d, "Fail");
The assert should not be placed before your captureScreen call, because a failed assertion throws an error and aborts the test method immediately, so your code
captureScreen(d, "Fail");
will never be reached.
This is how I usually do it:
boolean result = false;
try {
    // do stuff here
    result = true;
} catch (Exception ex) {
    // code to handle the error and capture a screenshot
    captureScreen(d, "Fail");
}
// then use the assert
Assert.assertEquals(result, true);
1. A good solution is to use a reporting framework like allure-reports.
Read here: allure-reports
2. We don't want our tests to become cluttered with a try/catch in every test, so we use Listeners, which use an annotation system to "listen" to our tests and act accordingly.
Example:
public class listeners extends commonOps implements ITestListener {

    @Override
    public void onTestFailure(ITestResult iTestResult) {
        System.out.println("------------------ Starting Test: " + iTestResult.getName() + " Failed ------------------");
        if (platform.equalsIgnoreCase("web"))
            saveScreenshot();
    }
}
Please note I only included the method relevant to your question, and I suggest you read here:
TestNG Listeners
Now we want to take a screenshot, using allure-reports' built-in attachment mechanism, every time a test fails, so we add this method inside our listeners class.
Example:
@Attachment(value = "Page Screen-Shot", type = "image/png")
public byte[] saveScreenshot() {
    return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
}
Test example
@Listeners(listeners.class)
public class myTest extends commonOps {

    @Test(description = "Test01: Add numbers and verify")
    @Description("Test Description: Using Allure reports annotations")
    public void test01_myFirstTest() {
        Assert.assertEquals(result, true);
    }
}
Note that at the beginning of the class we use the @Listeners(listeners.class) annotation, which allows our listener to listen to our tests; mind that (listeners.class) can be whatever class you named your listeners.
The @Description annotation is related to allure-reports and, as the code snippet suggests, lets you add additional information about the test.
Finally, our Assert.assertEquals(result, true) will take a screenshot if the assertion fails, because we enabled our listeners class for it.

Maintain context of Selenium WebDriver while running parallel tests in NUnit?

Using: C#, NUnit 3.9, Selenium WebDriver 3.11.0, Chrome WebDriver 2.35.0
How do I maintain the context of my WebDriver while running parallel tests in NUnit?
When I run my tests with the ParallelScope.All attribute, my tests reuse the driver and fail.
The Test property in my tests does not persist across [SetUp] - [Test] - [TearDown] unless the Test is given a higher scope.
Test.cs
public class Test {
    public IWebDriver Driver;
    //public Pages pages;
    //anything else I need in a test

    public Test() {
        Driver = new ChromeDriver();
    }

    //helper functions and reusable functions
}
SimpleTest.cs
[TestFixture]
[Parallelizable(ParallelScope.All)]
class MyTests {
    Test Test;

    [SetUp]
    public void Setup()
    {
        Test = new Test();
    }

    [Test]
    public void Test_001() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_002() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_003() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_004() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [TearDown]
    public void TearDown()
    {
        string outcome = TestContext.CurrentContext.Result.Outcome.ToString();
        TestContext.Out.WriteLine("#RESULT: " + outcome);
        if (outcome.ToLower().Contains("fail"))
        {
            //Do something like take a screenshot which requires the WebDriver
        }
        Test.Driver.Quit();
        Test.Driver.Dispose();
    }
}
The docs state: "SetUpAttribute is now used exclusively for per-test setup."
Setting the Test property in [SetUp] does not seem to work.
If this is a race condition caused by re-using the Test field, how do I arrange my fixtures so the Driver is unique to each test?
One solution is to create the driver inside each [Test]. But then I cannot utilize the TearDown method, which is a necessity to keep my tests organized and cleaned up.
I've read quite a few posts/websites, but nothing solves the problem. [Parallelizable(ParallelScope.Self)] seems to be the only real solution, and that slows down the tests.
Thank you in advance!
The ParallelizableAttribute makes a promise to NUnit that it's safe to run certain tests in parallel, but it doesn't do anything to actually make it safe. That's up to you, the programmer.
Your tests (test methods) have shared state, i.e. the field Test. Not only that, but each test changes the shared state, because the SetUp method is called for each test. That means your tests may not safely be run in parallel, so you shouldn't tell NUnit to run them that way.
You have two ways to go... either use a lesser degree of parallelism or make the tests safe to run in parallel.
Using a lesser degree of parallelism is the easiest. Try using ParallelScope.Fixtures on the assembly or ParallelScope.Self (the default) on each fixture. If you have a large number of independent fixtures, this may give you as good a throughput as you will get doing something more complicated.
Alternatively, to run tests in parallel, each test must have a separate driver. You will have to create it and dispose of it in the test method itself.
In the future, NUnit may add a feature that will make this easier, by isolating each test method in a separate object. But with the current software, the above is the best you can do.

Programmatically execute Gatling tests

I want to use something like Cucumber JVM to drive performance tests written for Gatling.
Ideally the Cucumber features would somehow build a scenario dynamically - probably reusing predefined chain objects similar to the method described in the "Advanced Tutorial", e.g.
val scn = scenario("Scenario Name").exec(Search.search("foo"), Browse.browse, Edit.edit("foo", "bar"))
I've looked at how the Maven plugin executes the scripts, and I've also seen mention of using an App trait, but I can't find any documentation for the latter, and it strikes me that somebody else must have wanted to do this before...
Can anybody point (a Gatling noob) in the direction of some documentation or example code of how to achieve this?
EDIT 20150515
So to explain a little more:
I have created a trait which is intended to build up a sequence of, I think, ChainBuilders that are triggered by Cucumber steps:
trait GatlingDsl extends ScalaDsl with EN {

  private val gatlingActions = new ArrayBuffer[GatlingBehaviour]

  def withGatling(action: GatlingBehaviour): Unit = {
    gatlingActions += action
  }
}
A GatlingBehaviour would look something like:
object Google {

  class Home extends GatlingBehaviour {
    def execute: ChainBuilder =
      exec(http("Google Home")
        .get("/")
      )
  }

  class Search extends GatlingBehaviour {...}

  class FindResult extends GatlingBehaviour {...}
}
And inside the StepDef class:
class GoogleStepDefinitions extends GatlingDsl {

  Given("""^the Google search page is displayed$""") { () =>
    println("Loading www.google.com")
    withGatling(Home())
  }

  When("""^I search for the term "(.*)"$""") { (searchTerm: String) =>
    println("Searching for '" + searchTerm + "'...")
    withGatling(Search(searchTerm))
  }

  Then("""^"(.*)" appears in the search results$""") { (expectedResult: String) =>
    println("Found " + expectedResult)
    withGatling(FindResult(expectedResult))
  }
}
The idea being that I can then execute the whole sequence of actions via something like:
val scn = scenario(cucumberScenario).exec(gatlingActions)
setUp(scn.inject(atOnceUsers(1)).protocols(httpConf))
and then check the reports, or catch an exception if the test fails, e.g. a response time that is too long.
It seems that no matter how I use the 'exec' method, it tries to execute there and then, rather than waiting for the scenario.
Also, I don't know if this is the best approach to take; we'd like to build some reusable blocks for our Gatling tests that can be composed via Cucumber's Given/When/Then style. Is there a better or already existing approach?
Sadly, it's not currently feasible to have Gatling directly start a Simulation instance.
Not that it's not technically feasible, but you're just the first person to try to do this.
Currently, Gatling is usually in charge of compiling and can only be passed the name of the class to load, not an instance itself.
You can maybe start by forking io.gatling.app.Gatling and io.gatling.core.runner.Runner, and then provide a PR to support this new behavior. The former is the main entry point, and the latter is the one that can instantiate and run the simulation.
I recently ran into a similar situation and did not want to fork Gatling. While this solved my immediate problem, it only partially solves what you are trying to do, but hopefully someone else will find it useful.
There is an alternative. Gatling is written in Java and Scala, so you can call Gatling.main directly and pass it the arguments you need to run the simulation you want. The problem is that the main method explicitly calls System.exit, so you also have to use a custom security manager to prevent it from actually exiting.
You need to know two things:
1. the class (with the full package) of the Simulation you want to run, for example com.package.your.Simulation1
2. the path where the binaries are compiled.
The code to run a Simulation:
protected void fire(String gatlingGun, String binaries) {
    SecurityManager sm = System.getSecurityManager();
    System.setSecurityManager(new GatlingSecurityManager());
    String[] args = {"--simulation", gatlingGun,
                     "--results-folder", "gatling-results",
                     "--binaries-folder", binaries};
    try {
        io.gatling.app.Gatling.main(args);
    } catch (SecurityException se) {
        LOG.debug("gatling test finished.");
    }
    System.setSecurityManager(sm);
}
The simple security manager I used:
import java.security.Permission;

public class GatlingSecurityManager extends SecurityManager {

    @Override
    public void checkExit(int status) {
        throw new SecurityException("Tried to exit.");
    }

    @Override
    public void checkPermission(Permission perm) {
        return;
    }
}
The problem is then getting the information you want out of the simulation after it has been run.

Is it possible to skip a scenario with Cucumber-JVM at run-time

I want to add a tag @skiponchrome to a scenario; this should skip the scenario when running a Selenium test with the Chrome browser. The reason to do this is that some scenarios work in some environments and not in others. This might not even be specific to browser testing and could apply in other situations, for example OS platforms.
Example hook:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
// Skip scenario code here
}
}
I know it is possible to pass ~@skiponchrome in the Cucumber tag options to exclude the tag, but I would like to skip it at run-time. This way I don't have to think about which scenarios to skip in advance when I start a test run in a certain environment.
I would like to create a hook that catches the tag and skips the scenario without reporting a fail/error. Is this possible?
I realized that this is a late update to an already answered question, but I want to add one more option directly supported by cucumber-jvm:
@Before // (the Cucumber @Before, not JUnit's)
public void setup() {
    Assume.assumeTrue(weAreInPreProductionEnvironment);
}
"and the scenario will be marked as ignored (but the test will pass) if weAreInPreProductionEnvironment is false."
You will need to add
import org.junit.Assume;
The major difference from the accepted answer is that JUnit assumption failures behave just like pending.
Important: because of a bug fix, you will need cucumber-jvm release 1.2.5, which as of this writing is the latest. For example, the above will generate a failure instead of a pending in cucumber-java8-1.2.3.jar.
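For completeness, here is a sketch adapting the Assume approach to the original @skiponchrome case, assuming cucumber-jvm 1.2.5 hooks and that the current browser name comes from your own configuration (read from a system property here purely as an example):
import cucumber.api.java.Before;
import org.junit.Assume;

public class SkipOnChromeHooks {

    @Before("@skiponchrome")
    public void skipOnChrome() {
        String currentBrowser = System.getProperty("browser", "firefox"); // assumed source of the browser name
        // Marks the scenario as ignored (not failed) when it would run on Chrome.
        Assume.assumeFalse("chrome".equalsIgnoreCase(currentBrowser));
    }
}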
I really prefer to be explicit about which tests are being run, by having separate run configurations defined for each environment. I also like to keep the number of tags I use to a minimum, to keep the number of configurations manageable.
I don't think it's possible to achieve what you want with tags alone. You would need to write a custom JUnit test runner to use in place of @RunWith(Cucumber.class). Take a look at the Cucumber implementation to see how things work. You would need to alter the RuntimeOptions created by the RuntimeOptionsFactory to include/exclude tags depending on the browser or other runtime condition.
Alternatively, you could consider writing a small script which invokes your test suite, building up a list of tags to include/exclude dynamically, depending on the environment you're running in. I would consider this to be a more maintainable, cleaner solution.
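As a rough sketch of that idea, assuming the cucumber-jvm 1.2.x CLI (cucumber.api.cli.Main) and its old-style tag expressions; the glue package and feature path below are placeholders:
import java.util.ArrayList;
import java.util.List;

import cucumber.api.cli.Main;

public class DynamicTagRunner {

    public static void main(String[] args) throws Throwable {
        List<String> cliArgs = new ArrayList<String>();
        cliArgs.add("--glue");
        cliArgs.add("com.example.steps");      // placeholder glue package

        // Build the exclude list from the runtime environment.
        String browser = System.getProperty("browser", "firefox");
        if ("chrome".equalsIgnoreCase(browser)) {
            cliArgs.add("--tags");
            cliArgs.add("~@skiponchrome");     // skip Chrome-incompatible scenarios
        }

        cliArgs.add("classpath:features");     // placeholder feature path

        // Main.run returns the exit status instead of calling System.exit.
        byte status = Main.run(cliArgs.toArray(new String[0]),
                Thread.currentThread().getContextClassLoader());
        if (status != 0) {
            throw new RuntimeException("Cucumber run failed with status " + status);
        }
    }
}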
It's actually really easy. If you dig through the Cucumber-JVM and JUnit 4 source code, you'll find that JUnit makes skipping during runtime very easy (just undocumented).
Take a look at the following source code from JUnit 4's ParentRunner, which Cucumber-JVM's FeatureRunner (used by Cucumber, the default Cucumber runner) extends:
@Override
public void run(final RunNotifier notifier) {
    EachTestNotifier testNotifier = new EachTestNotifier(notifier,
            getDescription());
    try {
        Statement statement = classBlock(notifier);
        statement.evaluate();
    } catch (AssumptionViolatedException e) {
        testNotifier.fireTestIgnored();
    } catch (StoppedByUserException e) {
        throw e;
    } catch (Throwable e) {
        testNotifier.addFailure(e);
    }
}
This is how JUnit decides what result to show. If the test is successful it will show a pass, but it's possible to @Ignore in JUnit, so what happens in that case? Well, when an AssumptionViolatedException is thrown, the runner (the Cucumber FeatureRunner in this case) catches it and fires a test-ignored notification instead of a failure.
So your example becomes:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
throw new AssumptionViolatedException("Not supported on Chrome")
}
}
If you've used vanilla JUnit 4 before, you'll remember that @Ignore takes an optional message that is displayed when a test is ignored by the runner. AssumptionViolatedException carries the message, so you should see it in your test output after a test is skipped this way, without having to write your own custom runner.
I too had the same challenge, where I needed to skip a scenario based on a flag which I obtain from the application dynamically at run-time, telling me whether the feature to be tested is enabled on the application or not.
So this is how I wrote my logic in the step-definitions file, where we have the glue code for each step.
I have used a unique tag '@Feature-01AXX' to mark the scenarios that need to be run only when that feature (code) is available on the application.
So for every scenario, the tag '@Feature-01AXX' is checked first; if it is present, the availability of the feature is checked, and only then will the scenario be picked for running. Otherwise it will simply be skipped, and JUnit will not mark it as a failure; instead it will be marked as passed. So the final result, if these tests did not run because the feature is unavailable, will be pass, which is what we want.
@Before
public void before(final Scenario scenario) throws Exception {
    /*
     * my other pre-setup tasks for each scenario.
     */
    // get all the scenario tags from the scenario head.
    final ArrayList<String> scenarioTags = new ArrayList<>();
    scenarioTags.addAll(scenario.getSourceTagNames());
    // check if the feature is enabled on the appliance, so that the tests can be run.
    if (checkForSkipScenario(scenarioTags)) {
        throw new AssumptionViolatedException("The feature 'Feature-01AXX' is not enabled on this appliance, so skipping");
    }
}

private boolean checkForSkipScenario(final ArrayList<String> scenarioTags) {
    // I use a tag "@Feature-01AXX" on the scenarios which need to be run when the feature is enabled on the appliance/application
    if (scenarioTags.contains("@Feature-01AXX") && !isTheFeatureEnabled()) { // if the feature is not enabled, then we need to skip the scenario.
        return true;
    }
    return false;
}

private boolean isTheFeatureEnabled() {
    /*
     * my logic to check if the feature is available/enabled on the application.
     * in my case it's a REST API call; I parse the JSON and check if the feature is enabled.
     * if it is enabled return 'true', else return 'false'
     */
}
I've implemented a customized JUnit runner as below. The idea is to add tag filters at runtime.
Say a scenario needs a new user; we tag it "@requires_new_user". Then if we run our tests in an environment (say a production environment which does not allow you to register new users easily), we will detect that we cannot get a new user, and "not @requires_new_user" will be added to the Cucumber options to skip the scenario.
This is the cleanest solution I can imagine right now.
public class WebuiCucumberRunner extends ParentRunner<FeatureRunner> {

    private final JUnitReporter jUnitReporter;
    private final List<FeatureRunner> children = new ArrayList<FeatureRunner>();
    private final Runtime runtime;
    private final Formatter formatter;

    /**
     * Constructor called by JUnit.
     *
     * @param clazz the class with the @RunWith annotation.
     * @throws java.io.IOException if there is a problem
     * @throws org.junit.runners.model.InitializationError if there is another problem
     */
    public WebuiCucumberRunner(Class clazz) throws InitializationError, IOException {
        super(clazz);
        ClassLoader classLoader = clazz.getClassLoader();
        Assertions.assertNoCucumberAnnotatedMethods(clazz);

        RuntimeOptionsFactory runtimeOptionsFactory = new RuntimeOptionsFactory(clazz);
        RuntimeOptions runtimeOptions = runtimeOptionsFactory.create();
        addTagFiltersAsPerTestRuntimeEnvironment(runtimeOptions);

        ResourceLoader resourceLoader = new MultiLoader(classLoader);
        runtime = createRuntime(resourceLoader, classLoader, runtimeOptions);
        formatter = runtimeOptions.formatter(classLoader);
        final JUnitOptions junitOptions = new JUnitOptions(runtimeOptions.getJunitOptions());
        final List<CucumberFeature> cucumberFeatures = runtimeOptions.cucumberFeatures(resourceLoader, runtime.getEventBus());
        jUnitReporter = new JUnitReporter(runtime.getEventBus(), runtimeOptions.isStrict(), junitOptions);
        addChildren(cucumberFeatures);
    }

    private void addTagFiltersAsPerTestRuntimeEnvironment(RuntimeOptions runtimeOptions)
    {
        String channel = Configuration.TENANT_NAME.getValue().toLowerCase();
        runtimeOptions.getTagFilters().add("@" + channel);
        if (!TestEnvironment.getEnvironment().isNewUserAvailable()) {
            runtimeOptions.getTagFilters().add("not @requires_new_user");
        }
    }
    ...
}
Or you can extend the official Cucumber JUnit test runner cucumber.api.junit.Cucumber and override this method:
/**
 * Create the Runtime. Can be overridden to customize the runtime or backend.
 *
 * @param resourceLoader used to load resources
 * @param classLoader used to load classes
 * @param runtimeOptions configuration
 * @return a new runtime
 * @throws InitializationError if a JUnit error occurred
 * @throws IOException if a class or resource could not be loaded
 * @deprecated Neither the runtime nor the backend or any of the classes involved in their construction are part of
 * the public API. As such they should not be exposed. The recommended way to observe the cucumber process is to
 * listen to events by using a plugin. For example the JSONFormatter.
 */
@Deprecated
protected Runtime createRuntime(ResourceLoader resourceLoader, ClassLoader classLoader,
                                RuntimeOptions runtimeOptions) throws InitializationError, IOException {
    ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
    return new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
}
You can manipulate runtimeOptions here as you wish. But the method is marked as deprecated, so use it with caution.
If you're using Maven, you could use a browser-specific profile and then set the appropriate ~@ exclude tags there.
Unless you're asking how to run this from the command line, in which case you tag the scenario with @skiponchrome and then, when you run Cucumber, set the cucumber options to tags = {"~@skiponchrome"}.
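For reference, a minimal runner sketch for that tag-exclusion approach, assuming cucumber-jvm 1.2.x; the feature path and glue package below are placeholders:
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "classpath:features",   // placeholder
        glue = "com.example.steps",        // placeholder
        tags = {"~@skiponchrome"}          // exclude the Chrome-incompatible scenarios
)
public class NonChromeRunnerIT {
}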
If you simply wish to skip a scenario temporarily (for example, while writing the scenarios), you can comment it out (Ctrl+/ in Eclipse or IntelliJ).

Comparing result of one JUnit @Test with another @Test in same class

Is it possible to compare the result / output of one JUnit test with another test in the same class?
Below is the outline of my test class:
public class CompareResult {

    @Before
    {
        open driver
    }

    @After
    {
        quit driver
    }

    @Test
    {
        Connect to 1st website
        Enter data and calculate value
        Store the value in Variable A
    }

    @Test
    {
        Connect to 2nd website
        Enter data and calculate value
        Store the value in Variable B
    }

    @Test
    {
        Compare A and B
    }
}
When I display the values of variables A and B in the 3rd @Test, they are null. Can we not use a variable from one @Test in another @Test in JUnit? Please advise, I am new to JUnit.
Why do they need to be two tests? If you are comparing the values, you really have one test with multiple helper methods and possibly multiple asserts. And if there aren't any asserts in helper1 and helper2, this becomes even more apparent: a test without an assert is just testing that it doesn't blow up!
private void helper1() {
    // Connect to 1st website
    // Enter data and calculate value
    // Store the value in variable A
}

private void helper2() {
    // Connect to 2nd website
    // Enter data and calculate value
    // Store the value in variable B
}

@Test
public void actualTest() {
    helper1();
    helper2();
    // Compare A and B with an assertion
}
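Here is a self-contained sketch of that single-test shape; the two helpers stand in for the Selenium interactions, and the hard-coded values are placeholders for whatever the two websites actually calculate:
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CompareResultTest {

    private String valueFromFirstSite() {
        // In the real test: connect to the 1st website, enter data, read the calculated value.
        return "42"; // placeholder value
    }

    private String valueFromSecondSite() {
        // In the real test: connect to the 2nd website, enter data, read the calculated value.
        return "42"; // placeholder value
    }

    @Test
    public void bothSitesCalculateTheSameValue() {
        String a = valueFromFirstSite();
        String b = valueFromSecondSite();
        assertEquals("Both sites should calculate the same value", a, b);
    }
}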
You store your values in local variables, as I understood. Declare fields A and B first and then use them to store your data. Note that JUnit creates a new instance of the test class for every @Test method, so for a value to survive from one test to the next the fields must be static.
public class CompareResult {

    private static String a = null;
    private static String b = null;

    @Before
    public void setup() {
        // open driver
    }
    ...
By the way, your tests should be independent, and passing values from one test to another is not a good way to implement them. Also, I haven't worked with JUnit a lot, so I don't know how the execution order of your tests is determined; you would have to define some test dependency or ordering, and I repeat it once again: this is not the right approach for tests.