Can I run karate in parallel using standalone JAR? - karate

I am trying to run a simple test against a number of cases. I am using VS Code on Windows with Karate extension and standalone karate.jar.
Here is my feature:
Feature: settings support paths

Background:
  * def some_ids = read('some_ids.json')

Scenario Outline: migrated settings are OK
  Given url 'https://someapi.myorg.net/settings/'
  And path id, 'Settings/Blah'
  When method get
  Then status 200
  And match response.settings !contains { DefaultCounty: '#number' }
  Examples:
    | some_ids |
The JSON is something like:
[
  { "id": "0023a832-c1f3-464e-9de7-ce2cd0e24413" },
  // ... 300 more lines of ids
  { "id": "fff5a55e-e3a1-43d8-81ef-b590f388fe90" }
]
It all works well until the number of cases reaches around 300, at which point it seems to freeze towards the end of execution and never prints the summary in the console.
With lower numbers it works just fine, and the summary always indicates threads: 1, which the elapsed time also supports given that the API responds in ~1 second.
My question is: setting the freezing aside, can I run these tests in parallel using the standalone JAR?
The docs say Karate can run 'examples' in parallel, but I did not find any specific instructions for the standalone JAR.
I am not using Java as my main platform and have no experience with the Java ecosystem to speak of, so the ability to use Karate as a standalone JAR is a big win for me.

Yes, just add a -T option: https://github.com/intuit/karate/tree/master/karate-netty#parallel-execution
java -jar karate.jar -T 5 src/features
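If you also want to control where the report and logs end up, the standalone JAR should accept an output-directory option alongside -T (the -o flag listed among the karate-netty options; the path below is just a placeholder):
java -jar karate.jar -T 5 -o target/karate-reports src/features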

Related

How to detect when Robot Framework test case failed during a particular Test Setup keyword

The problem
I am currently running Robot Framework test cases on target hardware (HW). During Test Setup, I flash the software under test (firmware) onto the target device. For several reasons, flashing the firmware sometimes fails, which causes the whole test case to fail. This is currently preventing me from getting any further meaningful results.
What I want to achieve
I want to detect when the test case fails in a particular Robot keyword during Test Setup. If a test case fails during a particular Test Setup keyword, I will retry/rerun the test case with a different target HW (I have my own Python script that executes the Robot runner with a given target device).
The main problem is that I don't know how to detect when a test case failed in a Test Setup keyword.
What I tried so far
My first guess was that this could be achieved by parsing the output.xml file, but I couldn't extract this information from there.
I have already considered retrying the flashing operation in Test Setup, but this won't work for me. The test case needs to be restarted from scratch (on a different target HW).
Lastly, using "Run Keyword And Ignore Error" is not a solution either, since Test Setup must run successfully for the test case to continue.
The best solution I found so far: how to list all keywords used by a robotframework test suite
from robot.api import ExecutionResult, ResultVisitor

result = ExecutionResult('output.xml')

class MyVisitor(ResultVisitor):
    def visit_keyword(self, kw):
        print("Keyword Name: " + str(kw.name))
        print("Keyword Status: " + str(kw.status))

result.visit(MyVisitor())
Output:
Keyword Name: MyKeywords.My Test Setup
Keyword Status: FAIL
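Building on that visitor, here is a minimal sketch of how it could be narrowed down to failed setup keywords, assuming the result model exposes a type attribute on keyword results ('SETUP' in Robot Framework 4+, 'setup' in 3.x); the class and variable names are just placeholders:
from robot.api import ExecutionResult, ResultVisitor

class SetupFailureVisitor(ResultVisitor):
    """Collects the names of setup keywords that failed."""
    def __init__(self):
        self.failed_setups = []

    def visit_keyword(self, kw):
        # kw.type is 'SETUP' in RF 4+, 'setup' in RF 3.x
        if str(kw.type).upper() == 'SETUP' and kw.status == 'FAIL':
            self.failed_setups.append(kw.name)

result = ExecutionResult('output.xml')
visitor = SetupFailureVisitor()
result.visit(visitor)
if visitor.failed_setups:
    print("Setup failed in: " + ", ".join(visitor.failed_setups))
The wrapping Python script could then re-run the test case on a different target device whenever that list is non-empty.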

Possible issue with Karate testParallel runner

I'll apologize up-front for not being able to post actual code that exhibits this possible issue as it is confidential, but I wanted to see if anyone else might have observed the same issue. I looked in the project for any open/closed issues that might be like this but did not notice any.
I noticed that when I use the Karate testParallel runner (which we have been using for a while now), every GET, POST, and DELETE request issued gets called twice, as observed in the Karate logs.
It doesn't matter if the request is being directly called in a scenario or indirectly from another feature file via call/callonce.
When I do not use the Karate testParallel runner only a single request is made.
I noticed this when performing a POST to create a data source in our application. When I went to the application's UI to verify the new data source was created, I saw 2 of them. This led me down the path of researching further what might be happening.
To rule out the possibility that our API was doubling up on the data source creation, a data source was created via a totally different internal tool, and only 1 data source got created. This led me back to Karate to see what might be causing the double creation, which is how I observed the issue.
Bottom-line is that I think the parallel runner is causing requests to occur twice.
Using Karate v0.9.3
When using the parallel test runner, multiple POSTs get executed. The code below submits a POST to the Post Test Server V2, and you can see that 2 posts are submitted.
Note that the test runner is NOT using the @RunWith(Karate.class) annotation and uses the junit:4.12 transitive dependency from karate-junit4:0.9.3.
Here is a Minimal, Complete and Verifiable example that demonstrates the issue:
Feature file:
Feature: Demonstrates multiple POST requests

Scenario: Demonstrates multiple POST requests using parallel runner
  * def REQUEST = { type: 'test-type', name: 'test-name' }
  Given url 'https://ptsv2.com/t/paowv-1563551220/post'
  And request REQUEST
  When method POST
  Then status 200
Parallel Test Runner file:
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class ApiTest {

    @Test
    public void testParallel() {
        Results results = Runner.parallel(getClass(), 5, "target/surefire-reports");
        assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
    }
}
After running this feature using the parallel runner, go to https://ptsv2.com/t/paowv-1563551220/post and observe the multiple POSTs.
Comment out the @Test JUnit annotation in the parallel runner, re-run the feature, and notice that only 1 POST is made, as expected.
When I originally posted this question I was definitely using a JUnit 4 parallel execution class without the @RunWith(Karate.class) annotation. This was in conjunction with the com.intuit.karate:karate-junit4 dependency, and I was definitely getting multiple POST requests sent.
In revisiting this issue, I recently updated my dependency to com.intuit.karate:karate-junit5, switched to a JUnit 5 parallel execution class (again, without the @RunWith(Karate.class) annotation), and I'm happy to report that I'm no longer seeing multiple POST requests.
You are most likely using the @RunWith(Karate.class) annotation when you are not supposed to. This is mentioned in the docs. Fortunately this confusion will go away when everyone switches to JUnit 5.

How to run a Karate Feature file after a Gatling simulation completes

I have two Karate Feature files
One for creating a user (CreateUser.feature)
One for getting a count of users created (GetUserCount.feature)
I have additionally one Gatling Scala file
This calls CreateUser.feature with rampUser(100) over (5 seconds)
This works perfectly. What I'd like to know is how I can call GetUserCount.feature after Gatling finishes its simulation. It must be called only once, to get the final count of created users. What are my options and how can I implement them?
The best option is to use the Java API to run a single feature from within the Gatling simulation.
Runner.runFeature("classpath:mock/feeder.feature", null, false)
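For example, Gatling's after hook runs exactly once when the simulation finishes, so the count feature can be triggered from there. Below is a minimal sketch assuming the karate-gatling Scala DSL; the feature paths, class name and injection profile are placeholders to adapt to your project and Gatling version:
import com.intuit.karate.Runner
import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._
import scala.concurrent.duration._

class UserSimulation extends Simulation {

  val protocol = karateProtocol()

  // ramp up the user-creation feature as before
  val createUsers = scenario("create users").exec(karateFeature("classpath:CreateUser.feature"))

  setUp(
    createUsers.inject(rampUsers(100) during (5 seconds)).protocols(protocol)
  )

  // runs once, after the whole simulation has completed
  after {
    Runner.runFeature("classpath:GetUserCount.feature", null, false)
  }
}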

Garbled test result output from meteortesting:mocha

The recommended testing framework for Meteor 1.7 seems to be meteortesting:mocha.
With Meteor 1.7.0.3 I created a default app (meteor create my-app), which has the following tests (in test/main.js)
import assert from "assert";

describe("my-app", function () {
  it("package.json has correct name", async function () {
    const { name } = await import("../package.json");
    assert.strictEqual(name, "noteit");
  });

  if (Meteor.isClient) {
    it("client is not server", function () {
      assert.strictEqual(Meteor.isServer, false);
    });
  }

  if (Meteor.isServer) {
    it("server is not client", function () {
      assert.strictEqual(Meteor.isClient, false);
    });
  }
});
I ran
meteor add meteortesting:mocha
meteor test --driver-package meteortesting:mocha
and with meteortesting:mocha#2.4.5_6 I got this in the console:
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)? ----- RUNNING SERVER TESTS -----
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)?
I20180728-12:06:37.730(2)?
I20180728-12:06:37.731(2)?
I20180728-12:06:37.737(2)? the server
✓ fails a test.753(2)?
I20180728-12:06:37.755(2)?
I20180728-12:06:37.756(2)?
I20180728-12:06:37.756(2)? 1 passing (26ms)
I20180728-12:06:37.756(2)?
I20180728-12:06:37.757(2)? Load the app in a browser to run client tests, or set the TEST_BROWSER_DRIVER environment variable. See https://github.com/meteortesting/meteor-mocha/blob/master/README.md#run-app-tests
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
Actually, it was repeated three times. Not pretty. And I wasn't expecting a passing test to crash my app.
Also, in the browser I got this (screenshot not included here).
I was expecting something more like the nice output shown in the Meteor testing guide.
As with most things in the Node.js world, there are a multitude of forks of almost everything, and meteortesting:mocha is no exception.
cultofcoders:mocha seems to be a few commits ahead of practicalmeteor:mocha, which was at one point the recommended testing framework for Meteor.
If you run
meteor add cultofcoders:mocha
meteor test --driver-package cultofcoders:mocha
you'll get the nice output.
As a curiosity, I found that the version of cultofcoders:mocha I got (meteor list | grep mocha) was 2.4.6, a version that the GitHub repo does not have...
The screenshot you refer to was made using practicalmeteor:mocha, but meteortesting:mocha is not (as the other answer claims) a fork of it; it is a separately developed package aiming at the same goal, which is running tests in Meteor.
The usage of the two packages is very different, and practicalmeteor:mocha might look a bit trickier to set up; this list only applies to its version 1.0.1 and might change later.
But I have to admit that the documentation needs a refresh... Anyway, here are some helpful tips, which I'll include in the documentation soon.
If you just want to get started, run this:
meteor add meteortesting:mocha
npm i --save-dev puppeteer@^1.5.0
TEST_BROWSER_DRIVER=puppeteer meteor test --driver-package meteortesting:mocha --raw-logs --once
Do you want to exit after the tests are completed or re-run them after file-change?
Usually, Meteor will restart your application when it exits (a normal exit or a crash), which includes the test-runner.
If you want to run the tests in CI, or you just want to run them once, add --once to the meteor command; otherwise set TEST_WATCH=1 before running this script. If you don't set the env variable and don't pass --once, Meteor will print these lines and restart the tests once they're finished:
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
As of now I haven't found a way to check whether the --once flag is set, which would make the env variable unnecessary. The flexibility to choose between CI and continuous testing is very useful here.
Maybe you're currently working on a feature and want to run the tests as you work. If you have set TEST_WATCH=1 and are not using --once, Meteor will restart the tests whenever it registers that a file has changed. You can even limit the test collection using MOCHA_GREP, as in the example below.
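A small illustrative command, assuming MOCHA_GREP works as described in the meteortesting:mocha README (the pattern "client" is just an example):
TEST_WATCH=1 MOCHA_GREP="client" meteor test --driver-package meteortesting:mocha --raw-logs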
Where and how do you want to see the results?
You currently have to choose between seeing all the test results on the command line, or seeing the server tests on the command line and the client tests in the browser. Currently practicalmeteor:mocha does not support showing the results of both the server and client tests in the browser, as your screenshot shows.
Please take a look at the package documentation for further details.
You should disable the Meteor timestamp to make the output look better.
Tests might look quite garbled because of the timestamp added to every line. To avoid this, add --raw-logs to your command.
I hope this answers most of your questions. I know that the documentation needs some improvements, and I would welcome it if someone took the time to put it into a more logical order for people who "just want to get started".

Issue with the karate parallel runner

I wanted to see if anyone else might have observed the same issue. I looked in the project for any open/closed issues that might be like this but did not notice any.
I noticed that when I use the Karate parallel runner (which we have been using for a while now), every GET, POST, and DELETE request gets called 2x, as observed in the Karate logs in the console.
When I do not use the Karate Parallel runner only a single request is made.
I noticed this when performing a POST to create a data source in our application. When I went to the application's UI to verify the new data source was created, I saw 2 of them. This led me down the path of researching further what might be happening.
Using Karate v0.9.5 with JUnit 5
A minimal example:
https://drive.google.com/file/d/1UWnNtxGO7gr-_Z80MLJbFkaAmuaVGlAD/view?usp=sharing
Steps to run the code:
Extract ZIP
cd GenericModel
mvn clean test -Dtest=UsersRunner
Check the console logs; the API scenario gets executed 2x.
Note: it works fine for me with Karate v0.9.4 and JUnit 5.
You mixed up the parallel runner and the JUnit runner and ended up having both in one test method. Please read the documentation: https://github.com/intuit/karate#junit-5-parallel-execution
Note that you use the normal @Test annotation, not the @Karate.Test one.
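For reference, a minimal sketch of what a JUnit 5 parallel runner typically looks like with Karate 0.9.5 (classpath:users is a placeholder for your feature folder, and the class name is hypothetical):
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class UsersParallelRunner {

    @Test  // plain JUnit 5 @Test, not @Karate.Test
    void testParallel() {
        Results results = Runner.path("classpath:users").parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}
The @Karate.Test style is meant for running individual features from the IDE, so keep it in a separate class from the parallel runner you use for CI.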