Any way to generate a test coverage report in Karate?

I'm using the Karate testing framework to validate some APIs, and I would like to know if there is any way to generate a test coverage report by taking a predefined list of expected scenarios and validating it against the scenarios that actually exist in the Karate feature files.
Imagine that you agree with your client to run 50 scenarios, but in reality you have only developed 20 scenarios in your feature files (more than one feature file, stored in different folders).
I wonder if there is any (easy) way to:
list ALL the scenarios developed in ALL the available feature files
match them against an external (CSV, Excel, JSON...) list of scenarios (the ones agreed with the client) so that a coverage % can be calculated

Here's a bare-bones implementation of a coverage report based on comparing karate.log to an OpenAPI/Swagger JSON spec:
https://github.com/ericdriggs/karate-test-utils#karate-coverage-report
Endpoint coverage is a useful metric which can be auto-generated from an auto-generated spec. It also lets you exclude paths which aren't in scope for coverage, e.g. actuator or ping.
I will publish a jar soonish.
Open an issue if you'd like any enhancements.
It is MIT licensed, so feel free to repurpose it.
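For the scenario-list half of the question, a rough sketch in plain Java is below. It is not the linked project's implementation; the ScenarioCoverage class, the agreed-scenarios.csv file, and the src/test/java root are all invented for the example. It walks the test tree for *.feature files, pulls out the Scenario names, and compares them to a one-name-per-line list agreed with the client.

    import java.io.*;
    import java.nio.file.*;
    import java.util.*;
    import java.util.stream.*;

    public class ScenarioCoverage {

        public static void main(String[] args) throws IOException {
            // collect every scenario name from every *.feature file under the test tree
            Set<String> implemented;
            try (Stream<Path> paths = Files.walk(Paths.get("src/test/java"))) {
                implemented = paths
                        .filter(p -> p.toString().endsWith(".feature"))
                        .flatMap(ScenarioCoverage::scenarioNames)
                        .collect(Collectors.toSet());
            }

            // the list agreed with the client, one scenario name per line
            Set<String> agreed = new HashSet<>(Files.readAllLines(Paths.get("agreed-scenarios.csv")));

            Set<String> covered = new HashSet<>(agreed);
            covered.retainAll(implemented);

            System.out.printf("Coverage: %d of %d agreed scenarios (%.1f%%)%n",
                    covered.size(), agreed.size(), 100.0 * covered.size() / agreed.size());
            agreed.stream()
                    .filter(s -> !implemented.contains(s))
                    .forEach(s -> System.out.println("MISSING: " + s));
        }

        // extract the text after "Scenario:" / "Scenario Outline:" in one feature file
        private static Stream<String> scenarioNames(Path feature) {
            try {
                return Files.readAllLines(feature).stream()
                        .map(String::trim)
                        .filter(l -> l.startsWith("Scenario:") || l.startsWith("Scenario Outline:"))
                        .map(l -> l.substring(l.indexOf(':') + 1).trim());
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

The exact-string matching is deliberately naive; in practice you would probably normalize whitespace and case before comparing the two sets.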

Use Karate DSL for API automation testing with a data-driven approach (want to avoid multiple scenarios if the steps are the same) [duplicate]

I need to create data-driven unit tests for different APIs in the Karate framework. The various elements to be passed in the JSON payload should be taken as input from an Excel file.
A few points:
I recommend you look at Karate's built-in data-table capabilities; they are far more readable, integrate into your test script, and don't require you to depend on other software. Refer to these examples: call-table.feature and dynamic-params.feature.
Next, I would recommend using JSON instead of an Excel or CSV file, since it is natively supported by Karate: call-json-array.feature.
Finally, if you really wanted to, you can call any Java code, and if you return data in Map / List form, it will be ready for Karate to use. This example shows how to read a database via JDBC: dogs.feature. So although this is not built into Karate, just write a simple utility to read a CSV or Excel file and you can do pretty much anything Java can do.
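For illustration, here is a bare-bones sketch of such a utility (the CsvReader class name and file layout are invented for the example; it assumes a header row and does not handle quoted fields):

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.*;

    public class CsvReader {

        // returns one Map per data row, keyed by the header row,
        // which Karate can consume directly as a JSON-like array
        public static List<Map<String, String>> read(String path) throws IOException {
            List<String> lines = Files.readAllLines(Paths.get(path));
            String[] headers = lines.get(0).split(",");
            List<Map<String, String>> rows = new ArrayList<>();
            for (String line : lines.subList(1, lines.size())) {
                String[] values = line.split(",");
                Map<String, String> row = new LinkedHashMap<>();
                for (int i = 0; i < headers.length; i++) {
                    row.put(headers[i].trim(), values[i].trim());
                }
                rows.add(row);
            }
            return rows;
        }
    }

From a feature file you could then call it along the lines of * def rows = Java.type('CsvReader').read('data.csv') and drive your scenario from the result.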
EDIT: Karate now supports CSV files, which can even be used for data-driven testing: https://github.com/intuit/karate#csv-files

Can we run multiple feature files in the same package using Karate-gatling

I read in the documentation that we can run multiple feature files by adding new lines with different classpaths in the simulation class. Is there a way we can run multiple feature files belonging to the same package, just like we run them in FeatureRunner files?
No, I personally think that would introduce maintainability issues. We will consider PRs though, if anyone wants to contribute this.
If you really want this behavior, you should be able to write a small bit of Java code that scans a folder, loops over the feature files, and builds the Gatling "scenarios".
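For reference, here is a rough sketch of the folder-scanning part in plain Java (the FeatureScanner class and the src/test/java source root are invented for the example); it yields the classpath: strings that you would then loop over to build one Gatling scenario per feature:

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class FeatureScanner {

        // lists every *.feature under a package folder as a "classpath:" entry,
        // e.g. classpath:mock/cats.feature
        public static List<String> featurePaths(String packageDir) throws IOException {
            Path root = Paths.get("src/test/java"); // assumed source root
            try (Stream<Path> paths = Files.walk(root.resolve(packageDir))) {
                return paths
                        .filter(p -> p.toString().endsWith(".feature"))
                        .map(p -> "classpath:" + root.relativize(p).toString().replace('\\', '/'))
                        .collect(Collectors.toList());
            }
        }
    }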

Folder-specific cucumber-reporting without parallel run?

Was wondering if I could set up cucumber-reporting for specific folders?
For example, in https://github.com/intuit/karate#naming-conventions, can I set up the third-party cucumber-reporting in CatsRunner.java, without parallel execution? Please advise or direct me on how to set it up.
Rationale: it's easier to read and helps me in debugging.
You are always recommended to use the third-party Cucumber Reporting library when you want HTML reports. And you can create as many Java classes similar to DemoTestParallel as you like, in different packages. The first argument to CucumberRunner.parallel() needs to be a Java class - and by default, the same package and sub-folders will be scanned for *.feature files.
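For illustration, a runner along these lines should work (the class and report-directory names are invented for the example, and the API shown is from the CucumberRunner era of Karate that this answer refers to; it was renamed in later versions):

    import com.intuit.karate.cucumber.CucumberRunner;
    import com.intuit.karate.cucumber.KarateStats;
    import org.junit.Test;

    import static org.junit.Assert.assertTrue;

    public class CatsTestParallel {

        @Test
        public void testCats() {
            // scans this class's package (and sub-folders) for *.feature files;
            // a thread count of 1 keeps execution sequential while still
            // writing the JSON that the cucumber-reporting library consumes
            KarateStats stats = CucumberRunner.parallel(getClass(), 1, "target/cats-reports");
            assertTrue("scenarios failed", stats.getFailCount() == 0);
        }
    }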
BUT I think your question is simply how to easily debug a single feature in dev-mode. Easy, just use the @RunWith(Karate.class) JUnit runner: video | documentation

What is the meaning of "artifact" in software testing?

While raising a defect during software testing, I came across the word "artifact". What does it actually mean? Can anybody explain? I'm really frustrated from searching for its actual meaning on Google.
It usually means something like "a file created during testing". For example, the log file is an artifact. If your tests create temporary files, they are artifacts. If your test downloads images, those are artifacts.
Artifacts can mean other things besides files (which is why we say "artifact" rather than "file"). For example, an artifact could be a row added to a database.
Depending on the context, it could also mean files you need in order to perform the test (e.g. "for this test you need the following artifacts...").
In short, an artifact is something created or used by the test suite.
The artifacts produced for, let's say, a given release are all the different "units of product" that are available. An example makes this easier to understand: imagine you have to test a single product, but this product comes to you in 2 versions (an .msi file for Windows and a .dmg for Mac) plus 3 upgrade scripts for the different databases you could use as a backend; then you have 5 artifacts in your hands that you should test.
Artifacts are also the documents used to carry out different activities, for example the SRS, FS, test plan, test cycle plan, and test cases of different types.
They are used for different purposes. Other than the above-mentioned artifacts, there are design documents, ERDs, DFDs, etc.
These documents are needed to execute different tasks during different stages of the SDLC.

Is documentation readable by non-programmers possible with Spock?

FitNesse has a wiki-style documentation feature. It provides both the code and the docs for these specification tests.
Is there a way in Spock (with a plugin? / out of the box?) to generate any similar type of documentation to show to the project managers / stakeholders, who cannot be expected to read the (Groovy) source code of the Spock specifications?
Well, I have used the strings that describe each Spock block in your tests to generate HTML reports.
Please visit my project and let me know if that helps:
https://github.com/renatoathaydes/spock-reports
You can download the jar from the reports directory and then just add it to your classpath. Run your tests, and "miraculously" you will have reports generated in the directory build/spock-reports!
You can even provide your own CSS stylesheets if you want to customize the reports, as explained in the README.
Here's a blog post I wrote about writing this Spock extension.
UPDATE: spock-reports has been available on Maven Central for a while now, as well as on JCenter.
Spock allows you to add descriptions to blocks, e.g.:
    when: "5 dollars are withdrawn from the account"
    account.withdraw(5)

    then: "3 dollars remain"
    account.balance == 3
While we don't use this information yet, it's easy to access from an extension (see link below). What's left to do is to turn this into a nice report.
https://github.com/spockframework/spock-uberconf-2011/blob/master/src/test/groovy/extension/custom/ReportExtension.groovy
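To illustrate, a bare-bones global extension along these lines could dump the block texts as plain text. The class name is invented; it assumes the single visitSpec method that IGlobalExtension exposed at the time of this answer (later Spock versions add start/stop lifecycle methods), and the extension must be registered in a META-INF/services/org.spockframework.runtime.extension.IGlobalExtension file.

    import org.spockframework.runtime.extension.IGlobalExtension;
    import org.spockframework.runtime.model.BlockInfo;
    import org.spockframework.runtime.model.FeatureInfo;
    import org.spockframework.runtime.model.SpecInfo;

    public class PlainTextReportExtension implements IGlobalExtension {

        // called once per spec; walks every feature and prints
        // each block's kind (given/when/then/...) and description
        @Override
        public void visitSpec(SpecInfo spec) {
            System.out.println(spec.getName());
            for (FeatureInfo feature : spec.getFeatures()) {
                System.out.println("  " + feature.getName());
                for (BlockInfo block : feature.getBlocks()) {
                    for (String text : block.getTexts()) {
                        System.out.println("    " + block.getKind().name().toLowerCase() + " " + text);
                    }
                }
            }
        }
    }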
There are some great answers here already, but if you want to keep your BDD definitions free from any plumbing whatsoever, you can take a look at pease, which will let you use the Gherkin specification language with Spock.