How to handle dependent scenarios in Cucumber 4 parallel execution with TestNG - Selenium

As per Cucumber 4 with TestNG:
When using TestNG in parallel mode, scenarios can be executed in
separate threads irrespective of which feature file they belong to.
Different rows in a scenario outline can also be executed in separate
threads. The two scenarios in the feature1.feature file will be executed
by two threads in two browsers. The single scenario in feature2.feature
will be executed by another thread in a separate browser.
Now suppose I have scenarios like the ones below in feature1:
1st scenario: Create a user with some details.
2nd scenario: Edit a user with some details.
Now, with TestNG, if both scenarios are invoked at the same time, my 2nd scenario will certainly fail because the user has not been created yet.
Do I just switch to JUnit, as per:
When using JUnit in parallel mode, all scenarios in a feature file
will be executed in the same thread. The two scenarios in the
feature1.feature file will be executed in one browser. The single
scenario in feature2.feature will be executed by another thread in a
separate browser.
The function below just has the parameter to run it in parallel.
@Override
@DataProvider(parallel = true)
public Object[][] scenarios() {
    return super.scenarios();
}
So my main question is how to configure my tests to run in parallel in a systematic way, i.e. execute in parallel per feature file, or use some tag that can mark a scenario as dependent on another, like TestNG's @Test(dependsOnMethods = { "testTwo" }).
Kindly suggest any Cucumber configuration setting or strategy that can be used for this.
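For what it's worth, switching to JUnit as quoted above would just mean a runner class along these lines; this is only a sketch, the class name, feature path and glue package are assumptions, and with JUnit the parallelism itself is typically driven by the build tool (e.g. the Surefire plugin), so all scenarios of one feature file stay on the same thread:

import org.junit.runner.RunWith;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// Minimal JUnit runner sketch for Cucumber 4; in later 4.x versions the
// Cucumber classes also live under io.cucumber.junit.
@RunWith(Cucumber.class)
@CucumberOptions(features = "src/test/resources/features", glue = "steps")
public class RunCucumberJUnitTest {
}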

Related

Running only selected tests with dynamic input [duplicate]

I have tried a few approaches to solve my problem but with no success (I do need to improve my Java :)), so I am hoping that I am missing something or that someone can point me in the right direction.
I have multiple microservices that I need to test. I should be able to test all of them at once or only the ones I want. Each service has its own DB and different feature files. Note that these services may not all be up and running.
I can run the tests by manually setting the config for each service. Ideally, I would like to pass a variable with the service name on the command line and the tests should start.
In the current setup I use callSingle to run DBInit.feature, which runs SQL scripts to populate my DB. I have also set global variables that are used in the feature files. This works fine.
Problems start when I add more feature files that test a service that is not running, and when I have to use callSingle for a specific service to populate its DB.
The first idea was to use different envs, but I could need 5 envs to be executed in a single run and with one report. Then I was thinking of implementing a runner for each service, but I am not sure whether these runners would run in parallel, and I am not sure how I could populate the DBs in that case.
Is it possible to use a custom variable that is passed to the main test class?
public class DemoTestSelected {

    @BeforeClass
    public static void beforeClass() throws Exception {
        TestBase.beforeClass();
    }

    @Test
    public void testSelected() {
        List<String> tags = Arrays.asList("~@ignore");
        List<String> features = Arrays.asList("classpath:demo/cats");
        String karateOutputPath = "target/surefire-reports";
        Results results = Runner.path(features)
                .tags(tags)
                .outputCucumberJson(true)
                .reportDir(karateOutputPath)
                .parallel(5);
        DemoTestParallel.generateReport(karateOutputPath);
        assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
    }
}
For example, could the tags and features be set in config?
I re-read your question a few times and gave up trying to understand it. But I'll lay down a couple of principles:
You should use tags to decide which features to run or not run. Try to fit everything you need into this model and don't complicate things.
For more control, you can set some "system property" on the command line, and before you use the Runner you can write some Java logic along the lines of: "if karate.env (or some other system property) is foo, then select tags one, two and three", etc. (see the sketch below).
Yes, the Karate 1.0 series can technically run multiple Runner instances in parallel, but that is left to you and we don't have an example; it would require you to manage threads or a Java Executor manually.
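A rough sketch of that second point, assuming a JUnit 4 test and an invented system property named service (the tag names and classpath are placeholders as well):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class SelectedServiceTest {

    @Test
    public void testSelectedService() {
        // e.g. -Dservice=cats additionally requires the @cats tag; "service" is an assumed property name
        String service = System.getProperty("service", "all");
        List<String> tags = new ArrayList<>(Arrays.asList("~@ignore"));
        if (!"all".equals(service)) {
            tags.add("@" + service);
        }
        Results results = Runner.path("classpath:demo")
                .tags(tags)
                .outputCucumberJson(true)
                .reportDir("target/surefire-reports")
                .parallel(5);
        assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
    }
}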

How to restrict the test data method call to the respective test method when using the TestCaseSource attribute in NUnit

I am using NUnit for a Selenium C# project in which I have many test methods. To get data (from Excel) I am using a public static method that returns IEnumerable<TestCaseData>, which I reference at the test method level as a TestCaseSource. I am facing a challenge now: when I start executing a single test method, it invokes all of the static data methods in the project.
Code looks like this:
public static IEnumerable<TestCaseData> BasicSearch()
{
    BaseEntity.TestDataPath = PMTestConstants.PMTestDataFolder + ConfigurationManager.AppSettings.Get("Environment").ToString() + PMTestConstants.PMTestDataBook;
    return ExcelTestDataHelper.ReadFromExcel(BaseEntity.TestDataPath, ExcelQueryCreator.GetCommand(PMTestConstants.QueryCommand, PMTestConstants.PMPolicySheet, "999580"));
}

[Test, TestCaseSource("BasicSearch"), Category("Smoke")]
public void SampleCase(Dictionary<string, string> data)
{
    // do something
}
Can someone help me restrict my data source method to its respective test method?
Your TestCaseSource is not actually called by the test method when you run it, but as part of test discovery. While it's possible to select a single test to execute, it's not possible to discover tests selectively. NUnit must examine the assembly and find all the tests before it's possible to run any of them.
To make matters worse, if you are running under Visual Studio, the discovery process takes place multiple times: first before the tests are initially displayed, and then again each time the tests are run. This is made necessary by the architecture of the VS Test Window, which runs separate processes for the initial discovery and the execution of the tests.
That makes it particularly important to minimize the amount of work done during test discovery, especially when running under Visual Studio. Ideally, you should structure the code so that only the variable parameters are recorded during discovery; the actual data access should take place at execution time. This can be done in a OneTimeSetUp method, a SetUp method or at the start of the test itself.
Finally, I'd say that your instinct is correct: it should be possible to set up a TestCaseSource that only runs if the test you selected is about to be executed. Unfortunately, that's a feature NUnit doesn't yet have.

How to run a Karate Feature file after a Gatling simulation completes

I have two Karate Feature files
One for creating a user (CreateUser.feature)
One for getting a count of users created (GetUserCount.feature)
I have additionally one Gatling Scala file
This calls CreateUser.feature with rampUser(100) over (5 seconds)
This works perfectly. What I'd like to know is how I can call GetUserCount.feature after Gatling finishes its simulation. It must be called only once, to get the final created-user count. What are my options and how can I implement them?
The best option is to use the Java API to run a single feature from the Gatling simulation.
Runner.runFeature("classpath:mock/feeder.feature", null, false)
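For instance, a minimal sketch of wrapping that call so it can be fired exactly once after the load test (e.g. from the simulation's after hook); the feature path and the userCount variable name are assumptions:

import java.util.Map;

import com.intuit.karate.Runner;

public class FinalUserCount {

    // Run GetUserCount.feature a single time once the Gatling run has finished;
    // runFeature returns the variables defined by the feature, and the third
    // argument controls whether karate-config.js is evaluated first.
    public static void print() {
        Map<String, Object> vars = Runner.runFeature(
                "classpath:mock/GetUserCount.feature", null, true);
        System.out.println("final user count: " + vars.get("userCount"));
    }
}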

How can I parameterize my Selenium tests to run through multiple scenarios using Sauce Labs

I have a Selenium automation framework which uses JUnit to run tests locally on a browser of my choice. I currently use JUnitParams to parameterize some of my tests, e.g.:
@RunWith(JUnitParamsRunner.class)
public class loginPage extends BaseTestClass {

    @Test
    @FileParameters(value = "src/test/resources/Test data/login.csv", mapper = CsvWithHeaderMapper.class)
    public void login(String username, String pwd) throws Exception {
    }
}
I have tests for logging into a website, and I use JUnitParams with a CSV file to run through multiple different login scenarios. I am now looking to start using Sauce Labs to run my tests across multiple browser/OS combinations simultaneously. My question is: how do I achieve both the Sauce Labs parallel tests and the parameterized tests at the same time? I have seen examples for Sauce Labs like the following:
https://github.com/saucelabs-sample-test-frameworks/Java-Junit-Selenium
But the issue I will run into is that I cannot use multiple different runners; I need to use a single runner, as the JUnit @RunWith annotation requires. Is there an easy way to combine both the ConcurrentParameterized.class runner used in the Sauce Labs example and the JUnitParamsRunner.class I am currently utilising for local execution?
EDIT:
I found the following, which confirms I cannot use two separate runners and appears to suggest that merging two runners would be very difficult. Instead, I'm guessing I will have to change the way the Sauce Labs integration is handled. https://github.com/Pragmatists/junitparams-spring-integration-example
I would suggest taking a look at SauceryJ. It integrates Jenkins, the Sauce OnDemand plugin, and your testing code with SauceLabs.
Example class here.
Full disclosure: I wrote and maintain SauceryJ.

Error running multiple tests in SpecFlow/Selenium

I have an existing project that uses SpecFlow and SpecRun to run some tests against Sauce Labs. I have a BeforeScenario hook that creates a RemoteWebDriver and an AfterScenario hook that closes it down.
I've now moved this into another project (copied the files over, just changed the namespace) and the first test runs fine, but then I get the following error:
An exception of type 'OpenQA.Selenium.WebDriverException' occurred in WebDriver.dll but was not handled in user code
Additional information: Unexpected error. The command you just sent (POST element) has no session ID.
This is generally caused by testing frameworks trying to run commands after the conclusion of a test.
For example, you may be trying to capture a screenshot or retrieve server logs
after selenium.stop() or driver.quit() was called in a tearDown method.
Please make sure this process happens before the session is ended.
I've compared the projects and they're using the same version of SpecFlow and the same .NET version. I can't see any difference between the two projects.
In my steps I have the following line:
public static IWebDriver driver = (IWebDriver)ScenarioContext.Current["driver"];
which I think is the issue: instead of getting a new instance from the ScenarioContext, it's using the previous test's version, which has now been disposed.
But I can't see why this works in the other project.
I am using the SpecFlow example on GitHub here
UPDATE
Looks like I've found the issue. In the Default.srprofile the testThreadCount was 1, whereas the value in the working solution was 10. I've now updated this to match and it works.
I'm not sure what this value should be, though. I assume it shouldn't be the same as the number of tests, but then how do I get around my original issue of the shared driver context?
TestThreadCount specifies the number of threads used by SpecFlow+ Runner (aka SpecRun) to execute the tests.
Each of the threads is isolated. The default is AppDomain isolation, so every thread runs in a separate AppDomain.
In the SauceLabs example there are 7 scenarios and the runner is configured to use 10 threads. This means every scenario is executed in a different thread with its own AppDomain. As no thread executes a second scenario, you do not get this error in the example.
With only one thread, your thread executes more than one scenario, and you get this issue.
The easiest fix would be to remove the static keyword from the field. You get a new instance of the binding class for every scenario, so you do not need to keep the driver in a static field.
For a better example how to use Selenium with SpecFlow & SpecFlow+ have a look here: https://github.com/techtalk/SpecFlow.Plus.Examples/tree/master/SeleniumWebTest
You have to adjust the WebDriver class to use SauceLabs via RemoteWebDriver.