Can I customize error reports in Gatling? [duplicate] - karate

This question already has an answer here:
How can I print/get the errors on gatling report? [duplicate]
(1 answer)
Closed 1 year ago.
Using Karate-Gatling, the default reports I get only log errors as
path/to/feature/myFeature.feature:19 method get
If I read the feature file, I can find the offending line and determine the cause of the error. However, I would like to have this information in the report itself (so my colleagues and I don't have to go fishing through the features).
Is there a way to add a custom error log into the Gatling report? Possibly replace the location (path/to/feature/myFeature.feature:19 method get) with a custom message?

No. You can consider contributing code. There may be an opportunity to emit the Karate HTML reports only on errors. But right now, for performance reasons, all the reporting is disabled when in "gatling mode".

Related

Is there any way to get all tags available in all feature files of projects in karate? [duplicate]

This question already has an answer here:
Karate summary reports not showing all tested features after upgrade to 1.0.0
(1 answer)
Closed 1 year ago.
We want to avoid Report Portal's empty launches when the passed tag is not present in any feature file. So is there any predefined method, or any way in Karate, to check whether the passed tag is available or not before the test cases execute?
No, there is not. Feel free to contribute code.

How to capture screenshot in case of error in a called feature? [duplicate]

This question already has an answer here:
Attaching screenshots to json report
(1 answer)
Closed 1 year ago.
I am trying to capture a screenshot after any step failure in a called feature. I tried using
configure afterScenario = function(){ if (karate.info.errorMessage) driver.screenshot() }
for that but realized that the hooks don't work for the called feature files. Is there any other way to achieve this?
Can you first upgrade to 1.0.1 and confirm? Karate should automatically take a screenshot and add it to the report on an error.
Else please go through this thread for ideas: https://github.com/intuit/karate/issues/1465
It may require you to dig into Karate internals and contribute code, so if you are not prepared to do that, you can opt for alternative solutions.

ExecutionHook with parallel runner [duplicate]

This question already has answers here:
Dynamic scenario freezes when called using afterFeature hook
(2 answers)
Closed 1 year ago.
I am using the parallel runner to run one of my feature files. It currently has 8 scenarios. I wanted to integrate a third-party reporting plugin (Extent Reports) to build the reports, and I planned to use the ExecutionHook interface to try and achieve this. Below are the issues I faced and have not found answers to, even after looking at the documentation.
My issues
I am creating a new test in the afterFeature method. This gives me two handles, Feature and ExecutionContext. However, since the tests are running in parallel, the reporting steps get mixed up with each other. How do I handle this? Is there any out-of-the-box method I can use?
To counter the above, I decided to build the whole report at the end in the overridden afterAll method, but there I am missing the execution context data, so I can't use context.getRequestBuilder() to get the URLs and paths.
Any help would be great.
Please focus on the 1.0 release: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
Reasons:
it gives you a way to build the whole report at the end, and the Results object can iterate over all ScenarioResult instances
ExecutionHook has been changed to RuntimeHook, see example
yes, since tests can run in parallel, it is up to you to synchronize, as the framework has to be high-performance; but building reports at the end using the Results object is recommended instead of using the RuntimeHook
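The "synchronize yourself, then build the report at the end" advice can be sketched in plain Java. This is illustrative only; the ScenarioOutcome type and all names here are hypothetical stand-ins, not Karate's API (Karate's real Results/ScenarioResult objects carry similar per-scenario data):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for a scenario result; Karate's real
// ScenarioResult carries similar data (name, pass/fail, duration).
record ScenarioOutcome(String name, boolean passed) {}

public class EndOfRunReport {
    public static void main(String[] args) throws Exception {
        // Thread-safe queue: parallel scenarios append without extra locking.
        ConcurrentLinkedQueue<ScenarioOutcome> outcomes = new ConcurrentLinkedQueue<>();

        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 1; i <= 8; i++) {
            final int n = i;
            // Simulate 8 scenarios finishing on different threads.
            pool.submit(() -> outcomes.add(new ScenarioOutcome("scenario-" + n, n % 2 != 0)));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // Build the whole report once, after all parallel work is done,
        // instead of writing it incrementally from inside the hooks.
        long passed = outcomes.stream().filter(ScenarioOutcome::passed).count();
        System.out.println("total=" + outcomes.size() + " passed=" + passed);
    }
}
```

The key design choice is that the hooks (or the simulated tasks above) only append to a thread-safe collection; all formatting happens once, single-threaded, at the end.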

FluentAssertions without exceptions? [duplicate]

This question already has an answer here:
Customize failure handling in FluentAssertions
(1 answer)
Closed 2 years ago.
This seems like a long shot...
I am building a test harness for manual testing (for my QA Team). It runs in a console application and can output some level of smart data, but nothing so automatic as a fully automated test (not my rules).
I would love to use FluentAssertions to generate the text to show, but I don't want to throw an exception.
Is there a way to have FluentAssertions just output a string with its fluent message? (Without throwing an exception.)
NOTE: I am aware of a possible workaround: (Try/Catch statements around an AssertionScope around my fluent assertion checks). But I am hoping to keep the extra code to a minimum so as to not confuse the non-programmer QA person that has to use the test harness.
You could replace the Services.ThrowException property with custom behavior, or you could use AssertionScope's Discard method.
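FluentAssertions is a .NET library, but the underlying idea (record failure messages instead of throwing, roughly what AssertionScope's Discard gives you) can be sketched language-neutrally. None of the names below come from FluentAssertions; this is a hypothetical collector:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical "discarding scope": checks record failure messages
// instead of throwing, so a console harness can simply print them.
class SoftChecks {
    private final List<String> failures = new ArrayList<>();

    void check(boolean condition, String messageIfFailed) {
        if (!condition) {
            failures.add(messageIfFailed); // record, don't throw
        }
    }

    List<String> discard() {
        // Hand back the collected messages and reset, analogous in
        // spirit to AssertionScope.Discard() in FluentAssertions.
        List<String> copy = new ArrayList<>(failures);
        failures.clear();
        return copy;
    }
}

public class Harness {
    public static void main(String[] args) {
        SoftChecks checks = new SoftChecks();
        checks.check(2 + 2 == 4, "expected 2 + 2 to be 4");
        checks.check("abc".length() == 5, "expected length of abc to be 5");
        List<String> messages = checks.discard();
        messages.forEach(System.out::println); // prints only the failed check
    }
}
```

This keeps the harness code the QA person sees down to plain check(...) calls, with no try/catch noise.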

How do they get this snapshot? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Server Generated web screenshots?
Whenever I use goo.gl to shorten URLs, I see a screenshot of the URL's page!
It's not a usual screenshot: there are no UI elements, only the page itself!
My question: How do they do it? What is the general logic behind generating this screenshot (raster image)?
@valentinas no, I'm not after an implementation. I'd like to get a general idea of how it's done (and also without using the canvas element). – DrStrangeLove
General idea: you take a browser rendering engine (WebKit is pretty good; here is one of the implementations: http://phantomjs.org/) and, instead of outputting the result to the UI, you take it and dump it to a file.
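The "render off-screen, then dump to a file" idea can be shown with plain Java 2D. This renders a trivial drawing rather than a web page; a real screenshot service would swap in a browser engine (WebKit/PhantomJS) as the renderer, but the pipeline shape is the same:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class OffscreenRender {
    public static void main(String[] args) throws Exception {
        int w = 200, h = 100;
        // Render target is an in-memory image, not a window: no UI involved.
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, w, h);
        g.setColor(Color.BLACK);
        g.drawString("rendered off-screen", 10, 50);
        g.dispose();
        // Dump the rendered result straight to a raster file.
        ImageIO.write(img, "png", new File("snapshot.png"));
        System.out.println("wrote snapshot.png " + img.getWidth() + "x" + img.getHeight());
    }
}
```

A headless browser does exactly this, except the drawing step is its HTML/CSS layout and paint pass instead of a few Graphics2D calls.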