Karate API framework - test dependency between suites

In my regression suite I have 600+ test cases, all tagged #RegressionTest. This is how I am running them:
_start = LocalDateTime.now();
// see karate-config.js for env options
System.setProperty("karate.env", "test");
_logger.info("karate.env = " + System.getProperty("karate.env"));
Results results = Runner.path("classpath:functional/Commercial/")
        .tags("#RegressionTest")
        .reportDir(reportDir)
        .parallel(5);
generateReport(results.getReportDir());
assertEquals(0, results.getFailCount(), results.getErrorMessages());
I am thinking that I can create one test and tag it #smokeTest. I want to run that test first and, only if it passes, run the entire regression suite. How can I achieve this? I am using JUnit 5 and the Karate Runner.

I think the easiest thing to do is run the smoke test from JUnit itself and, if it fails, throw an exception or skip running the remaining tests.
In other words, use the Runner twice.
Beyond that, consider this not supported directly in Karate, but code contributions are welcome.
Also refer to the answers to this question: How to rerun failed features in karate?
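A minimal sketch of that two-pass approach, reusing the Runner setup from the question (the #smokeTest tag and the smoke feature itself are assumptions; adjust paths and thread counts to your project):

```java
// Sketch only: run the smoke test first, then gate the regression run on it.
Results smoke = Runner.path("classpath:functional/Commercial/")
        .tags("#smokeTest")
        .reportDir(reportDir)
        .parallel(1);
// Fail fast if the smoke test did not pass
assertEquals(0, smoke.getFailCount(), smoke.getErrorMessages());

// Only reached when the smoke run passed
Results regression = Runner.path("classpath:functional/Commercial/")
        .tags("#RegressionTest")
        .reportDir(reportDir)
        .parallel(5);
generateReport(regression.getReportDir());
assertEquals(0, regression.getFailCount(), regression.getErrorMessages());
```

With JUnit 5 you could also replace the first assertion with Assumptions.assumeTrue(smoke.getFailCount() == 0), so the regression run is reported as skipped rather than failed when the smoke test breaks.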

Related

Karate: Scenario fails if contains __arg and run in 'stand-alone' mode

I have run into a problem: when I run a Scenario that uses the built-in __arg variable 'stand-alone' (not 'called'), the test fails with an error. (I do not #ignore the called feature, because I want to use it in both 'called' and 'stand-alone' modes.)
evaluation (js) failed: __arg, javax.script.ScriptException: ReferenceError: "__arg" is not defined in <eval> at line number 1
stack trace: jdk.nashorn.api.scripting.NashornScriptEngine.throwAsScriptException(NashornScriptEngine.java:470)
Following two simple features should be enough to reproduce.
called-standalone.feature:
Feature: Called + Stand-alone Scenario
Scenario: Should not fail on __arg when run as stand-alone
* def a = __arg
* print a
caller.feature:
Feature: Caller
Scenario: call without args
When def res = call read('called-standalone.feature')
Then match res.a == null
Scenario: call with args
When def res = call read('called-standalone.feature') {some: 42}
Then match res.a == {some: 42}
Putting these two features into the skeleton project and running mvn test will show the error.
I'm expecting this should work as the docs say that "‘called’ Karate scripts ... can behave like ‘normal’ Karate tests in ‘stand-alone’ mode".
‘called’ Karate scripts don’t need to use any special keywords to ‘return’ data and can behave like ‘normal’ Karate tests in ‘stand-alone’ mode if needed
All Karate variables have to be "defined" at run time; this is a rule that cannot be relaxed.
So you should re-design your scripts. The good news is that you can use karate.get() to provide a default value:
* def a = karate.get('__arg', null)
That should answer your question.
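Applied to the feature above, the re-designed called-standalone.feature would look like this (same scenario, only the def line changes):

```
Feature: Called + Stand-alone Scenario

Scenario: Should not fail on __arg when run as stand-alone
    * def a = karate.get('__arg', null)
    * print a
```

When run stand-alone, a defaults to null; when called with arguments, a picks up the caller's payload as before.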

Running features in parallel by tags in Karate [duplicate]

This question already has an answer here: Tag logic for Parallel Run (1 answer). Closed 2 years ago.
I have an end-to-end test suite with features marked with the #e2e tag. The features belong to different modules, namely WNG, DTC, and FFD, each with its own tag (#e2eWNG, #e2eDTC, #e2eFFD) in addition to #e2e. Each module can run independently, and I wish to parallelise the test execution; for example, features tagged #e2eWNG could run on one thread, #e2eDTC on another, and so on.
Right now I just run all tests with the #e2e tag, and the execution is sequential.
I had a look at https://github.com/intuit/karate/blob/master/karate-demo/src/test/java/demo/DemoTestParallel.java as an example but I could not figure out how to separate the threads by tags.
I tried doing this based on a solution mentioned here: Is it possible to generate Cucumber HTML Reports with Karate's JUnit5 fluent API? This is what I did in my test runner class:
Results DTC = Runner.path("classpath:").tags("#e2eDTC").reportDir("target/cucumber-html-reports").parallel(1);
Results WNG = Runner.path("classpath:").tags("#e2eWNG").reportDir("target/cucumber-html-reports").parallel(1);
Results FFD = Runner.path("classpath:").tags("#e2eFFD").reportDir("target/cucumber-html-reports").parallel(1);
assertTrue(DTC.getErrorMessages(), DTC.getFailCount() == 0);
assertTrue(WNG.getErrorMessages(), WNG.getFailCount() == 0);
assertTrue(FFD.getErrorMessages(), FFD.getFailCount() == 0);
generateReport(DTC.getReportDir());
generateReport(WNG.getReportDir());
generateReport(FFD.getReportDir());
But I understand this is again sequential. I just want to know if there is a way to parallelise the execution, separated by tags. I may be missing something, but any suggestions would be really helpful.
run all tests with tags #e2e and this is sequential.
If you are using the parallel-runner with threads > 1, this should not happen.
In my opinion there is no need for you to run each tag on a different thread. Just use one parallel runner with all three tags; this is what most teams do. If you have some tests that really don't mix with other tests in the suite (which is an anti-pattern you should fix), look at the #parallel=false tag: https://github.com/intuit/karate#parallelfalse
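As a sketch, the three separate runners above could collapse into one, assuming Karate follows the Cucumber convention where a comma inside one tag string means OR:

```java
// One parallel runner: the comma-separated tag selector matches any of the
// three module tags, and parallel(3) lets features from different modules
// run on different threads at the same time.
Results results = Runner.path("classpath:")
        .tags("#e2eDTC,#e2eWNG,#e2eFFD")
        .reportDir("target/cucumber-html-reports")
        .parallel(3);
generateReport(results.getReportDir());
assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
```

This also avoids three runners writing into the same report directory one after another.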

Protractor suites are not properly getting executed

I have multiple specs, so I created a suite for different specs.
Let's take the below scenario.
this is my suite structure in the conf file.
suites: {
    forms: ['specs/requestE.js'],
    search: ['specs/findaSpec.js'],
    offers: ['specs/offersPrograms.js', 'specs/destinationsSpec.js'],
    headerfooterlinks: ['specs/footerlinksSpec.js', 'specs/headerMenuSpec.js']
},
When I run each spec individually it works correctly and generates the test results, but when I run the whole suite only the first spec is executed; the others are skipped and the run ends with a timeout error.
Do you have any test cases in the first spec that use fit('', function(){}) instead of it('', function(){})?
If so, Jasmine will execute only the focused specs and ignore the rest.

How to get disabled test cases count in jenkins result?

Suppose I have 10 test cases in a test suite, of which 2 are disabled. I want those two test cases reflected in the test result of the Jenkins job, e.g. passed = 7, failed = 1, and disabled/not run = 2.
By default, TestNG generates a report for your test suite; refer to the index.html file under the test-output folder. If you click the "Ignored Methods" link, it shows all the ignored test cases, their class names, and a count of ignored methods.
All test cases annotated with @Test(enabled = false) appear under the "Ignored Methods" link.
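For reference, a minimal TestNG class showing the annotation (the class and method names here are made up):

```java
import org.testng.annotations.Test;

public class SampleSuite {

    @Test
    public void activeTest() {
        // counted as passed or failed as usual
    }

    @Test(enabled = false)
    public void disabledTest() {
        // never runs; listed under "Ignored Methods" in test-output/index.html
    }
}
```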
If your tests generate JUnit XML reports, you can use the JUnit plugin to parse them after the build (as a post-build action). Then open your build and click "Test Result"; you should see a breakdown of the execution, including passed, failed, and skipped tests.
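In a declarative Jenkinsfile, that post-build action is a one-liner (a sketch; the report path below assumes Maven's Surefire defaults and may differ in your build):

```
post {
    always {
        // 'junit' is the JUnit plugin's pipeline step; skipped tests in the
        // XML show up in the "Test Result" breakdown
        junit '**/target/surefire-reports/*.xml'
    }
}
```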

Asynchronous execution of test cases in JS test driver

We have CI build servers where different JS TestCases are to be executed in parallel (asynchronously) on the same instance of the JsTestDriver server. The test cases are independent of each other and should not interfere with each other. Is this possible using JsTestDriver?
Currently, I have written two test cases, just to check if this is possible:
Test Case 1:
GreeterTest = TestCase("GreeterTest");
GreeterTest.prototype.testGreet = function() {
    var greeter = new myapp.Greeter();
    alert("just hang in there");
    assertEquals("Hello World!", greeter.greet("World"));
};
Test Case2:
GreeterTest = TestCase("GreeterTest");
GreeterTest.prototype.testGreet = function() {
    var greeter = new myapp.Greeter();
    assertEquals("Hello World!", greeter.greet("World"));
};
I have added the alert statement in test case 1 to make sure that it hangs there.
I have started the JS test driver server with the following command:
java -jar JsTestDriver-1.3.5.jar --port 9877 --browser <path to browser exe>
I am starting the execution of both the test cases as follows:
java -jar JsTestDriver-1.3.5.jar --tests all --server http://localhost:9877
Test Case 1 executes and hangs at the alert statement. Test Case 2 fails with a BrowserPanicException. The conf file is correct, since the second test case passes when executed by itself.
Is there any configuration changes required to make the second test case pass, while the first test case is still executing?
This issue is caused by the fact that the "app" (the JsTestDriver slave that runs the tests) cannot really run them in parallel: it is written in JavaScript, which is single-threaded.
The implementation presumably loops over all tests and runs them one by one, so when an alert pops up, the entire "app" is blocked and cannot even report back to the JsTestDriver server, resulting in a timeout that manifests as a BrowserPanicException.
Writing async tests won't help, since the entire "app" is stuck.