ExecutionHook with parallel runner [duplicate] - karate

This question already has answers here:
Dynamic scenario freezes when called using afterFeature hook
(2 answers)
Closed 1 year ago.
I am using the parallel runner to run one of my feature files. It has 8 scenarios as of now. I wanted to integrate a third-party reporting plugin (Extent Reports) to build out the reports. I planned to use the ExecutionHook interface to try and achieve this. Below are the issues I faced and haven't found a solution for, even after looking at the documentation.
My issues
I am creating a new test in the afterFeature method. This gives me two handles, Feature and ExecutionContext. However, since the tests run in parallel, the reporting steps get mixed up with each other. How do I handle this? Is there any out-of-the-box method I can use?
To counter the above, I decided to build the whole report at the end in the overridden afterAll method, but there I am missing the execution context data, so I can't use context.getRequestBuilder() to get the URLs and paths.
Any help would be great.

Please focus on the 1.0 release: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
Reasons:
it gives you a way to build the whole report at the end, and the Results object lets you iterate over all ScenarioResult instances
ExecutionHook has been changed to RuntimeHook, see example
yes, since tests can run in parallel, it is up to you to synchronize, as the framework has to be high-performance; but building reports at the end using the Results object is recommended instead of using the RuntimeHook
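A sketch of that recommended approach (Karate 1.0 API; method names such as getScenarioResults() are from memory, so verify them against your version before relying on this):

```java
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import com.intuit.karate.core.ScenarioResult;

public class ExtentReportRunner {
    public static void main(String[] args) {
        // run everything in parallel first
        Results results = Runner.path("classpath:features").parallel(8);
        // then build the third-party report in one place, single-threaded
        results.getScenarioResults().forEach((ScenarioResult sr) -> {
            // feed scenario name / pass-fail status into the Extent report here
            System.out.println(sr.getScenario().getName()
                    + " failed: " + sr.isFailed());
        });
    }
}
```

Because the report is assembled only after parallel() returns, there is no concurrency left to synchronize.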

Related

Karate - Starting Mock Server with multiple feature files [duplicate]

This question already has an answer here:
Karate Standalone as Mock Server with multiple Feature Files
(1 answer)
Closed 1 year ago.
My feature files are structured per module: each module has common, mock and test feature files.
For example, category-common.feature, category-mock.feature and category-test.feature contain the common definitions, mock API definitions and tests, respectively, for the category APIs.
We are using the java -jar karate.jar -m <feature_file> command to run the mock server.
This approach works well when we are testing the APIs module-wise. The question is: how can we deploy all the mocks together on a single port?
As per this answer, it is not possible to do it. If not, what are some other approaches we can follow?
Someone contributed a PR to add this after the 1.0 release, so you should read this thread: https://github.com/intuit/karate/issues/1566
And you should be able to test and provide feedback on 1.1.0.RC2
Of course if you can contribute code, nothing like it :)
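If the multi-file support from that PR works the way the issue describes, the invocation might look something like this (the repeated -m flag syntax is an assumption on my part; check the 1.1.0.RC2 docs before using it):

```shell
# serve several mock feature files from one mock server on one port
java -jar karate.jar -m category-mock.feature -m product-mock.feature -p 8080
```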

FluentAssertions without exceptions? [duplicate]

This question already has an answer here:
Customize failure handling in FluentAssertions
(1 answer)
Closed 2 years ago.
This seems like a long shot...
I am building a test harness for manual testing (for my QA Team). It runs in a console application and can output some level of smart data, but nothing so automatic as a fully automated test (not my rules).
I would love to use FluentAssertions to generate the text to show, but I don't want to throw an exception.
Is there a way to have FluentAssertions just output a string with its fluent message? (Without throwing an exception.)
NOTE: I am aware of a possible workaround: (Try/Catch statements around an AssertionScope around my fluent assertion checks). But I am hoping to keep the extra code to a minimum so as to not confuse the non-programmer QA person that has to use the test harness.
You could replace the Services.ThrowException property with custom behavior or you could use AssertionScope's Discard method.
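A minimal sketch of the AssertionScope approach (as I recall, Discard() returns the failure messages collected in the scope and prevents the scope from throwing when it is disposed):

```csharp
using System;
using FluentAssertions;
using FluentAssertions.Execution;

static string[] CaptureFailures(Action assertions)
{
    using (var scope = new AssertionScope())
    {
        assertions();
        // hand back the failure text instead of letting Dispose() throw
        return scope.Discard();
    }
}

// usage: show the text to the QA person instead of crashing the harness
var failures = CaptureFailures(() => "actual".Should().Be("expected"));
foreach (var f in failures) Console.WriteLine(f);
```

This keeps the try/catch noise out of the test harness code your QA team sees.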

Karate Execution getting stuck in the report generation step

I am executing my Karate suite from TeamCity. I started facing an issue when I had to add some CSV data files with 1700 rows and around 10 columns.
I got an out-of-memory error during local execution. I added argLine params and increased the heap size to 6G, and locally I managed to solve the error.
When I moved this to the continuous-integration environment, even with the argLine params and a 6G heap size, it gets stuck. Interestingly, even if I exclude the tests that use these large files via tags, it still gets stuck.
I am using the parallel executor with 2 threads (I also tried with 1 thread). I also use Cucumber reports.
From my analysis, what I understand is that Karate completes the test execution, but gets stuck just before generating the report JSON and the Cucumber reports.
I have tried removing those huge CSV files and putting the data directly in Examples inside my feature file. It still gets stuck.
I have managed to fix this locally, but it seems to be a potential issue. Any suggestions?
The total number of tests I am running is 4500.
I am no expert on this, but I would break your tests down into several runner classes (you could start with 2 runners instead of just 1) and have each class call only a portion of your .feature files. Splitting your test cases across multiple classes like this might relieve the memory problem.
For example:
https://github.com/intuit/karate/blob/master/karate-demo/src/test/java/demo/greeting/GreetingRunner.java
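Two runner classes, each scoped to a subset of the features, might look like this (JUnit 4 style, matching the linked demo; the package paths are placeholders for your own):

```java
import com.intuit.karate.KarateOptions;
import com.intuit.karate.junit4.Karate;
import org.junit.runner.RunWith;

@RunWith(Karate.class)
@KarateOptions(features = "classpath:demo/part1") // first half of the features
public class Part1Runner {
}
```

with a second Part2Runner pointing at classpath:demo/part2, so each JVM test class holds only part of the results in memory at once.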

Karate -- Parallel execution Failing [duplicate]

This question already has an answer here:
Karate Cucumber reports in Junit 4 without parallel execution
(1 answer)
Closed 2 years ago.
I have observed that when I run my tests (feature files) in a Maven build with Runner.parallel(getClass(), 1); everything works fine, but when I increase the number of threads, e.g. Runner.parallel(getClass(), 5);, it starts failing because all the scenarios within a feature file are executed in parallel.
The scenarios depend on each other, and they fail because a scenario that needs to run last is executed first.
Please suggest an option that runs all feature files in parallel but does not run the scenarios within a feature file in parallel.
https://github.com/intuit/karate#parallelfalse
If you use the @parallel=false tag on each feature whose scenarios cannot be run in parallel, it'll work. But scenarios should be able to run in any order and not depend on each other; maybe what you call scenarios shouldn't be split in the first place?
More information about script structure : https://github.com/intuit/karate#script-structure
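For reference, the tag goes on the Feature line of each file whose scenarios must stay sequential:

```gherkin
@parallel=false
Feature: order-dependent scenarios
  these run one after the other even when the Runner uses multiple threads

Scenario: runs first
  * print 'step one'

Scenario: runs second
  * print 'step two'
```

Features without the tag still run in parallel with each other, so overall throughput is mostly preserved.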

Running a test in golang that only works for some versions [duplicate]

This question already has answers here:
How do I skip a tests file if it is run on systems with go 1.4 and below?
(2 answers)
Closed 6 years ago.
I have a library here (https://github.com/turtlemonvh/altscanner) that includes a test comparing functionality of a custom scanner to bufio.Scanner. In particular, I am comparing my approach to the Buffer method which wasn't added until go1.6.
My actual code works with versions of go back to 1.4, but I wanted to include this test (and I'd like to add a benchmark as well) that uses the Buffer function of the bufio.Scanner object.
How can I include these tests that use features of go1.6+ while still allowing code to run for go1.4 and 1.5?
I imagine the answer is using a build flag to trigger the execution of these tests only if explicitly requested (and I do have access to the go version in my CI pipeline via a travis environment variable). I could also abuse the short flag here.
Is there a cleaner approach?
A few minutes after posting this I remembered build constraints. Go has a built-in constraint that handles this exact case, i.e. "the version of go must be >= X".
Moving that test into a separate file and adding // +build go1.6 at the top (followed by a blank line before the package clause) fixed it.