Karate - Starting Mock Server with multiple feature files [duplicate]

This question already has an answer here: Karate Standalone as Mock Server with multiple Feature Files
My feature files are structured like this.
As you can see, each module has common, mock and test feature files.
For example, category-common.feature, category-mock.feature and category-test.feature contain the common definitions, the mock API definitions and the tests, respectively, for the category APIs.
We are using the java -jar karate.jar -m <feature_file> command to run the mock server.
This approach works well when we are testing the APIs module by module. The question is: how can we deploy all the mocks together on a single port?
As per this answer, it is not possible. If that is the case, what other approaches can we follow?

Someone contributed a PR to add this after the 1.0 release, so you should read this thread: https://github.com/intuit/karate/issues/1566
And you should be able to test and provide feedback on 1.1.0.RC2.
Of course, if you can contribute code, nothing like it :)
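For anyone who lands here later, here is a rough sketch of how starting several mock features on one port could look via the Java API. This is only an illustration: it assumes the multi-feature support discussed in issue #1566 is present in your version, and the featurePaths builder method below is a hypothetical name, so verify the actual API and the standalone CLI options against the 1.1.0.RC2 docs.

import com.intuit.karate.core.MockServer;

public class AllMocksRunner {
    public static void main(String[] args) {
        // Hypothetical: a builder call that accepts several mock features,
        // per the change discussed in intuit/karate#1566. Check the real
        // method name in your Karate version before relying on this.
        MockServer server = MockServer
                .featurePaths(
                        "classpath:category/category-mock.feature",
                        "classpath:product/product-mock.feature")
                .http(8080)   // one port for all the mocks
                .build();
        server.waitSync();    // keep the JVM alive so the mocks stay up
    }
}

The standalone JAR would presumably accept more than one feature for -m (or a comma-separated list) in the same release, but again, confirm that against the release notes.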

Is there any way to get all tags available in all feature files of projects in karate? [duplicate]

This question already has an answer here: Karate summary reports not showing all tested features after upgrade to 1.0.0
We want to avoid Report Portal's empty launches when the passed tag is not present in any feature file. So is there any predefined method, or any other way in Karate, to check whether the passed tag is available before executing the test cases?
No, there is not. Feel free to contribute code.
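One workaround, until something like this exists in Karate itself, is to scan the feature files yourself before triggering the run and skip the launch when the tag never appears. Below is a rough, hypothetical sketch in plain Java (no Karate API involved); the paths and the tag are placeholders.

import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

public class TagPreCheck {

    // Returns true if any *.feature file under rootDir mentions the tag,
    // e.g. tagContainedIn("@smoke", Paths.get("src/test/java")).
    static boolean tagContainedIn(String tag, Path rootDir) throws IOException {
        try (Stream<Path> files = Files.walk(rootDir)) {
            return files
                    .filter(p -> p.toString().endsWith(".feature"))
                    .anyMatch(p -> {
                        try (Stream<String> lines = Files.lines(p)) {
                            // crude check: any line that contains the tag text
                            return lines.anyMatch(line -> line.contains(tag));
                        } catch (IOException e) {
                            return false;
                        }
                    });
        }
    }

    public static void main(String[] args) throws IOException {
        if (!tagContainedIn("@smoke", Paths.get("src/test/java"))) {
            System.out.println("Tag not found in any feature file, skipping run");
            return; // avoid the empty Report Portal launch
        }
        // ... kick off the Karate runner here as usual ...
    }
}

A check like this is crude (it would also match the tag inside comments), but it is usually enough to prevent the empty launch.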

ExecutionHook with parallel runner [duplicate]

This question already has answers here: Dynamic scenario freezes when called using afterFeature hook
I am using the parallel runner to run one of my feature files. It has 8 scenarios as of now. I want to integrate a third-party reporting plugin (Extent Reports) to build the reports. I planned to use the ExecutionHook interface to try and achieve this. Below are the issues I faced and have not been able to resolve, even after looking at the documentation.
My issues
I am creating a new test in the afterFeature method. This gives me two handles, Feature and ExecutionContext. However, since the tests are running in parallel, the reporting steps get mixed up with each other. How do I handle this? Is there an out-of-the-box method I can use?
To counter the above, I decided to build the whole report at the end in the overridden afterAll method, but there I am missing the execution context data, so I cannot use context.getRequestBuilder() to get the URLs and paths.
Any help would be great.
Please focus on the 1.0 release: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
Reasons:
It gives you a way to build the whole report at the end, and the Results object can iterate over all ScenarioResult instances (see the sketch below).
ExecutionHook has been changed to RuntimeHook, see example.
Yes, since tests can run in parallel, it is up to you to synchronize, as the framework has to be high-performance. But building reports at the end using the Results object is recommended instead of using the RuntimeHook.
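To make that concrete, here is a minimal sketch of the "build the report at the end" approach using the Karate 1.0 Runner and Results API. The feature path and thread count are placeholders, and the exact getters on ScenarioResult may differ slightly between versions, so check them against the 1.0 Javadoc before wiring in Extent Reports.

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class ExtentReportRunner {
    public static void main(String[] args) {
        // run everything in parallel first ...
        Results results = Runner.path("classpath:features")
                .outputCucumberJson(true)
                .parallel(8);

        // ... then build the whole report in one place, after the run,
        // so there is nothing to synchronize across threads
        results.getScenarioResults().forEach(sr -> {
            // feed these into Extent Reports (or any other reporter)
            System.out.println(sr.getScenario().getName()
                    + " -> " + (sr.isFailed() ? "FAILED" : "passed"));
        });
        System.out.println("total failed: " + results.getFailCount());
    }
}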

QA Tool / Framework to run kdb scripts

I am new to "KDB data testing" and I would like to build test-scenario-based scripts using the q programming language. Each test scenario is associated with a .q file. Is there any existing framework or tool I can use to run these .q files and generate a report for them?
Please let me know if you have any relevant information on this.
qStudio from Timestored also supports unit testing, though the output is not graphical. See here: qUnit
Simon Garland's k4unit may be similar to what you're looking for:
https://github.com/simongarland/k4unit
Alternatively, Daniel Nugent has another unit testing framework:
https://github.com/nugend/qspec
Kx Systems also maintains the following list of Git repos, which is generally the first place to look if you're trying to find something along these lines:
https://kxsystems.github.io/

Running a test in golang that only works for some versions [duplicate]

This question already has answers here: How do I skip a tests file if it is run on systems with go 1.4 and below?
I have a library here (https://github.com/turtlemonvh/altscanner) that includes a test comparing the functionality of a custom scanner to bufio.Scanner. In particular, I am comparing my approach to the Buffer method, which wasn't added until go1.6.
My actual code works with versions of Go back to 1.4, but I wanted to include this test (and I'd like to add a benchmark as well) that uses the Buffer method of the bufio.Scanner object.
How can I include these tests that use features of go1.6+ while still allowing the code to run on go1.4 and 1.5?
I imagine the answer is using a build flag to trigger the execution of these tests only if explicitly requested (and I do have access to the Go version in my CI pipeline via a Travis environment variable). I could also abuse the short flag here.
Is there a cleaner approach?
A few minutes after posting this I remembered build constraints. Go has a built-in constraint that handles this exact case, i.e. "the version of Go must be >= X".
Moving that test into a separate file and adding // +build go1.6 at the top fixed it.

Best way to test command line tools? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
We have a large collection of command-line utilities that we write ourselves and use frequently. At the moment, testing them is very cumbersome and, consequently, we don't do as much testing as we ought to.
I am wondering if anyone can suggest good techniques or tools for doing a good job of this kind of thing.
This is UNIX.
I recommend structuring your command line tool's code so that the command line utility is a client to a library of functions and/or classes.
Rather than simply using std::cout to print output, have the library's functions take an ostream reference that defaults to std::cout. When you are testing, provide a std::stringstream to collect the output.
Finally, simply compare your utility's output with expected results using your favorite unit testing framework.
(I apologize for the C++ specific example... I'm sure there are ways to do similar things in other languages too).
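As one illustration of "similar things in other languages", here is a rough Java sketch of the same pattern: main() is a thin wrapper over a library method that writes to whatever stream it is given. The tool and its behaviour are made up for the example.

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class WordCountTool {

    // Library entry point: testable, writes to any PrintStream
    static void run(String[] args, PrintStream out) {
        int words = 0;
        for (String arg : args) {
            words += arg.trim().isEmpty() ? 0 : arg.trim().split("\\s+").length;
        }
        out.println("words: " + words);
    }

    // Thin CLI wrapper: the only place that hard-codes System.out
    public static void main(String[] args) {
        run(args, System.out);
    }

    // In a test, capture the output and compare against the expectation
    static void selfTest() {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        run(new String[] {"hello world"}, new PrintStream(buffer));
        if (!buffer.toString().trim().equals("words: 2")) {
            throw new AssertionError("unexpected output: " + buffer);
        }
    }
}

In a real project the selfTest method would of course live in a proper unit test (JUnit or similar) rather than next to the tool.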
You can write tests that resemble an interactive shell session using Cram. It has a flexible test specification format that lets you match output using Perl regexes or shell-like wildcards. Cram will replay the commands from the test, compare the output to the reference, and report any differences.
Aruba is a Cucumber extension for testing command line applications written in any programming language.
To use it, you will need Ruby to run the tests, but the purpose of Aruba is to provide a library of pre-defined step definitions, so you won't need to write any Ruby code to get a workable test suite. (Though at some point you will probably want to write a bit of Ruby to add a few custom steps.)
You can see a sophisticated example of a command line tool tested with aruba here: jingweno/gh
You should be able to call them from a shell script (a batch file on MS operating systems), redirect the output to a file, and then scan the file programmatically to ensure it has the correct output. I'm not aware of a testing framework that automates this for you, but it should be fairly straightforward to set up yourself.
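If you would rather drive the real binaries than a library, that capture-and-compare loop can also be automated in code; below is a hedged Java sketch using ProcessBuilder, where the command and the expected output are placeholders.

import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class CliOutputCheck {
    public static void main(String[] args) throws IOException, InterruptedException {
        // placeholder command; substitute your own utility and arguments
        Process process = new ProcessBuilder("echo", "hello")
                .redirectErrorStream(true)   // merge stderr into stdout
                .start();
        String output = new String(process.getInputStream().readAllBytes(),
                StandardCharsets.UTF_8);
        int exitCode = process.waitFor();

        // compare against an inline expectation, or a stored "golden" file
        String expected = "hello\n";
        if (exitCode != 0 || !output.equals(expected)) {
            throw new AssertionError("exit=" + exitCode + ", output=" + output);
        }
        System.out.println("OK");
    }
}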
Bats (Bash Automated Testing System) by Sam Stephenson. It is tiny, written purely in shell, and has a nice set of features.
The previously suggested Aruba looks interesting, but in some cases it might be quite overkill in terms of dependencies (Ruby, Cucumber).
I did a little bit of this (a loooong time ago, hehe) using Expect to check that what happened was what I, umm, expected.
I have developed a tool "Exactly"
https://github.com/emilkarlen/exactly
It executes the thing under test in a temporary sandbox directory.
The README contains a number of examples.
A test of a hypothetical program "classify-files-by-moving-to-appropriate-dir" can look like this:
[setup]
dir input
dir output/good
dir output/bad
file input/a.txt = <<EOF
GOOD contents
EOF
file input/b.txt = <<EOF
bad contents
EOF
[act]
classify-files-by-moving-to-appropriate-dir GOOD input/ output/
[assert]
dir-contents input empty
exists output/good/a.txt : type file
dir-contents output/good num-files == 1
exists output/bad/b.txt : type file
dir-contents output/bad num-files == 1
You can do this from a batch file or Windows Script Host.
But I would suggest using a task scheduler like the one at http://www.splinterware.com/products/wincron.htm or other free/professional software.
There you can easily copy/paste the command-line parameters you want to vary when you need to test your software, say, a few hundred times.
You could use Perl with the Test::More library, which provides a great framework for testing CLIs.
Though primarily designed for unit testing, you could extend it to test user workflows.
Some of the methods:
# Various ways to say "ok"
ok($got eq $expected, $test_name);
is ($got, $expected, $test_name);
isnt($got, $expected, $test_name);
# Rather than print STDERR "# here's what went wrong\n"
diag("here's what went wrong");
like ($got, qr/expected/, $test_name);
unlike($got, qr/expected/, $test_name);
cmp_ok($got, '==', $expected, $test_name);