This question already has an answer here: Karate: Is there a way to disable log when using retry?
Is there any way to enable or disable the Thymeleaf output in the logs? I couldn't find any config flag for this, so is there any other way to switch it on and off?
I don't understand what you mean, but my guess is that you are seeing the very verbose log output that sometimes happens when the logging system is not configured properly, or when you are using some unusual combination of dependencies or building a JAR on your own (which we don't support; you have to figure that out on your own).
Maybe this thread gives you some ideas: https://github.com/intuit/karate/issues/1694
EDIT: quoting the comment by @italktothewind - it seems that <logger name="karate.org.thymeleaf" level="OFF"/> can do the trick. This may or may not work depending on which version of the Karate Maven dependency you use.
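For context, here is a minimal logback-test.xml sketch showing where that logger entry would go. This assumes Logback is your logging backend (which is what the Karate docs recommend); the appender and pattern are illustrative:

    <configuration>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <encoder>
                <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
            </encoder>
        </appender>
        <!-- silences the (shaded) Thymeleaf logger used for the HTML reports -->
        <logger name="karate.org.thymeleaf" level="OFF"/>
        <root level="INFO">
            <appender-ref ref="STDOUT"/>
        </root>
    </configuration>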
Or if you insist that this is some issue with Karate - please follow this process so that we can fix it: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
This question already has answers here: Is there an option for us to customize and group the test scenarios in the statics section of the Karate Gatling Report?
I have downloaded the gatling-highcharts bundle zip, placed the Karate feature file and the karate-gatling simulation file in the user-files/simulation folder under the bundle, and set the classpath of the Karate feature file folder in gatling.sh and gatling.bat. When I trigger the execution, no reports are generated. Can anyone help me fix this?
Please assume that what you are asking for is not supported. Read the karate-gatling documentation to see what is supported: https://github.com/intuit/karate/tree/master/karate-gatling
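For reference, the supported approach is a Scala Simulation class run through the Gatling Maven (or Gradle) plugin, not the standalone bundle. A minimal sketch along the lines of the karate-gatling README follows; the feature path, URL pattern and injection profile are illustrative, and the exact DSL depends on your Gatling version:

    import com.intuit.karate.gatling.PreDef._
    import io.gatling.core.Predef._
    import scala.concurrent.duration._

    class CatsSimulation extends Simulation {
      // pattern -> Nil means use the default pauses for this URL pattern
      val protocol = karateProtocol("/cats/{id}" -> Nil)
      // wrap a regular Karate feature file as a Gatling scenario
      val cats = scenario("cats").exec(karateFeature("classpath:mock/cats.feature"))
      setUp(cats.inject(rampUsers(10) during (5 seconds)).protocols(protocol))
    }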
This question already has answers here: How to run Karate and Gatling with Gradle build system
In the Gatling report I don't get to see the failed scenarios. So can we also have the Cucumber report when we run the Gatling test, so that the failed steps can be known?
No, this would add unnecessary overhead to the test execution - so this is not (and perhaps will never be) supported by Karate.
Do note that the failures (and line numbers) will appear in the Gatling report: https://twitter.com/ptrthomas/status/986463717465391104
I'm trying to test a Kafka stream in JMeter using the Pepper-Box config, but each time I try adding Java request parameters it goes back to the default parameters without saving the ones I have added. I have tried the recommendation here of adding the underscore (so _ssl.enabled), but the params still disappear. Any recommendations? Using JMeter 5.3 and Pepper-Box 1.0.
I believe you need to add your SSL properties to the PepperBoxKafkaSampler directly; there are pre-populated placeholders which you can change, and those changes persist.
The same behaviour applies to the Java Request Defaults element.
It might be the case that your installation got corrupted somehow, or that there is a conflict with another JMeter plugin; check the jmeter.log file for any suspicious entries.
In the meantime you may find the Apache Kafka - How to Load Test with JMeter article useful.
I had the same issue. I got around it by cloning the pepper-box repository (https://github.com/GSLabDev/pepper-box) and making changes to the PepperBoxKafkaSampler.java file, updating the setupTest() method with the required props. You can also add the parameters using the .addArgument() method (already used in PepperBoxKafkaSampler.java) to make them available in JMeter; see the sketch below.
Rebuild the repo with mvn clean install and replace the old pepper-box jar in jmeter/lib/ext with your newly built jar.
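To illustrate the kind of edit described, here is a rough standalone sketch using JMeter's AbstractJavaSamplerClient pattern (which pepper-box's sampler follows). The class name, property names and paths are examples, not pepper-box's actual fields:

    import java.util.Properties;

    import org.apache.jmeter.config.Arguments;
    import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
    import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
    import org.apache.jmeter.samplers.SampleResult;

    public class SslAwareKafkaSampler extends AbstractJavaSamplerClient {

        @Override
        public Arguments getDefaultParameters() {
            Arguments args = new Arguments();
            // addArgument() is what makes a field show up (and persist) in the GUI
            args.addArgument("ssl.truststore.location", "/path/to/truststore.jks");
            args.addArgument("ssl.truststore.password", "changeit");
            return args;
        }

        @Override
        public void setupTest(JavaSamplerContext context) {
            // copy the GUI values into the producer properties
            Properties props = new Properties();
            props.put("security.protocol", "SSL");
            props.put("ssl.truststore.location", context.getParameter("ssl.truststore.location"));
            props.put("ssl.truststore.password", context.getParameter("ssl.truststore.password"));
            // ...create the Kafka producer with props, as pepper-box's setupTest() does...
        }

        @Override
        public SampleResult runTest(JavaSamplerContext context) {
            // pepper-box's runTest() sends the templated message here
            return new SampleResult();
        }
    }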
This question already has an answer here: Unable to use read('classpath:') when running tests with standalone karate.jar
I'd like to run a single .feature file, the one I'm trying to debug, instead of the full set of over 100 tests we have... is it possible?
I've tried adding the "classpath" to the Karate options, as I saw in other answers, but it still runs everything even if the path doesn't exist:
    $ mvn clean test \
        -Dtest=ParallelTest \
        -DargLine="-Dkarate.options='--tags ~@ignore classpath:relative/path/to/my/new.feature'" \
        -Denv='dev'
This is what the IDE support is for; we have Visual Studio Code (recommended), IntelliJ and Eclipse: https://github.com/intuit/karate/wiki/IDE-Support
If you are interested in the details, there is a "CLI way" to do this (0.9.5 onwards): https://github.com/intuit/karate/wiki/Debug-Server#maven - just pass the feature (classpath:...) as the argument instead of the -d flag.
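The command on that wiki page looks something like the following; treat the exact goals and flags as an assumption to verify against the wiki, and the feature path here is an example:

    mvn clean test-compile exec:java \
        -Dexec.mainClass=com.intuit.karate.cli.Main \
        -Dexec.classpathScope=test \
        -Dexec.args=classpath:relative/path/to/my/new.feature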
And for Maven / JUnit users, most teams write a "Runner" class, and you can add a test method that runs a single feature; please refer to the docs: https://github.com/intuit/karate#junit-5
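For example, with JUnit 5 and Karate 0.9.5 (the class name and feature path here are illustrative):

    import com.intuit.karate.junit5.Karate;

    class MyNewFeatureTest {

        // runs only the one feature being debugged
        @Karate.Test
        Karate testNewFeature() {
            return Karate.run("classpath:relative/path/to/my/new.feature");
        }
    }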
I'm searching for the Nightwatch equivalent of Mocha's watch mode:

    mocha test.spec.js -w

Opening Nightwatch manually on every change burns a lot of time during test writing. Any ideas on this? I've checked the Nightwatch docs and Google, and nothing came up.
From what I've seen, there isn't a --watch mode or command for Nightwatch; this answer only confirms that such a mode doesn't seem to exist.
Here are the sources that led me to this conclusion, including possible leads for other solutions:
I found an issue in the Nightwatch repo asking about this in 2016: https://github.com/nightwatchjs/nightwatch/issues/1061. The answer then was:
'No, but you can use something like the grunt-contrib-watch module.'
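To flesh that suggestion out, here is a minimal Gruntfile sketch. grunt-contrib-watch comes from the issue above; pairing it with grunt-shell, and the file paths, are my own assumptions:

    // Gruntfile.js
    module.exports = function (grunt) {
      grunt.initConfig({
        shell: {
          nightwatch: {
            command: 'nightwatch tests/test.spec.js'
          }
        },
        watch: {
          tests: {
            files: ['tests/**/*.js'],
            // re-run the nightwatch command whenever a test file changes
            tasks: ['shell:nightwatch']
          }
        }
      });

      grunt.loadNpmTasks('grunt-contrib-watch');
      grunt.loadNpmTasks('grunt-shell');
      grunt.registerTask('default', ['watch']);
    };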
I also found a Stack Overflow question from 2018 about how Nightwatch, and other Selenium-based tools, always return exit code 1: Nightwatch.js always returns exit code 1. That question was about integrating Nightwatch.js tests into a Jenkins job, and a comment there offers a possible solution, so maybe it's possible to write a wrapper for Nightwatch this way:
This happens with other selenium based libraries as well. I think I ran into this issue with protractor in the past. The way we tackled it was by parsing the output instead of using the exit code and looked for any failures. If none were found then we returned 0 explicitly... you can check for failed tests using browser.currentTest.results.failed and then figure out a way to pass or fail your build based on that.
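As a very rough sketch of that idea, here is a Node wrapper that parses the output instead of trusting the exit code. The "failed" marker and the test path are assumptions; adjust them to your reporter's actual output:

    // run-nightwatch.js
    const { spawnSync } = require('child_process');

    const result = spawnSync('nightwatch', ['tests/test.spec.js'], { encoding: 'utf8' });
    const output = (result.stdout || '') + (result.stderr || '');
    console.log(output);

    // look for failures in the output and set the exit code explicitly
    const failed = /failed/i.test(output);
    process.exit(failed ? 1 : 0);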