FitNesse has a wiki-style documentation feature: it provides both the code and the docs for these specification tests.
Is there a way in Spock (with a plugin, or out of the box) to generate any similar kind of documentation to show to the project managers / stakeholders, who cannot be expected to read the (Groovy) source code of the Spock specifications?
Well, I have used the Strings that describe each Spock block in your tests to generate HTML reports.
Please visit my project and let me know if that helps:
https://github.com/renatoathaydes/spock-reports
You can download the jar from the reports directory and then just add it to your classpath. Run your tests, and "miraculously" you will have reports generated in the directory build/spock-reports!
You can even provide your own CSS stylesheets if you want to customize the reports, as explained in the README.
Here's a blogpost I wrote about writing this Spock extension.
UPDATE
spock-reports has been available on Maven Central for a while now, as well as JCenter.
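For reference, adding it to a Gradle build might look something like the snippet below; the exact coordinates and version are assumptions on my part, so check Maven Central for the current release.

// build.gradle - a minimal sketch; group/artifact/version are assumptions, check Maven Central
dependencies {
    testImplementation 'com.athaydes:spock-reports:2.5.1-groovy-3.0' // alongside your existing spock-core dependency
}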
Spock allows you to add descriptions to blocks, e.g.:
when: "5 dollars are withdrawn from the account"
account.withdraw(5)
then: "3 dollars remain"
account.balance == 3
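In context, a complete feature method with such block descriptions might look like the following sketch (the Account class and the surrounding spec are made up for illustration):

import spock.lang.Specification

class AccountSpec extends Specification {

    def "withdrawing money reduces the balance"() {
        given: "an account holding 8 dollars"
        def account = new Account(balance: 8) // Account is a hypothetical class

        when: "5 dollars are withdrawn from the account"
        account.withdraw(5)

        then: "3 dollars remain"
        account.balance == 3
    }
}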
While we don't use this information yet, it's easy to access from an extension (see link below). What's left to do is to turn this into a nice report.
https://github.com/spockframework/spock-uberconf-2011/blob/master/src/test/groovy/extension/custom/ReportExtension.groovy
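To give an idea of what the extension side looks like, here is a minimal sketch of a global extension that just prints the block texts. It assumes a Spock version that ships AbstractGlobalExtension and the model classes shown (SpecInfo, FeatureInfo, BlockInfo), and the class still has to be registered as a global extension (e.g. via META-INF/services/org.spockframework.runtime.extension.IGlobalExtension):

import org.spockframework.runtime.extension.AbstractGlobalExtension
import org.spockframework.runtime.model.SpecInfo

// Minimal sketch: dump the block descriptions of every feature method.
// A real report generator would collect these and render HTML instead of printing.
class BlockTextDumper extends AbstractGlobalExtension {
    @Override
    void visitSpec(SpecInfo spec) {
        spec.features.each { feature ->
            println feature.name
            feature.blocks.each { block ->
                println "  ${block.kind}: ${block.texts.join(' and ')}"
            }
        }
    }
}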
There are some great answers here already, but if you want to keep your BDD definitions free from any plumbing whatsoever, you can take a look at pease, which lets you use the Gherkin specification language with Spock.
Related
I'm using the Karate testing framework to validate some APIs, and I would like to know if there is any way to generate a test coverage report by taking a predefined list of expected scenarios and validating it against the scenarios that actually exist within the Karate feature files.
Imagine that you agree to run 50 scenarios with your client, but in reality you have only developed 20 scenarios within your feature files (spread across more than one file in different folders).
I wonder if there is any (easy) way to:
list ALL the scenarios developed in ALL the available feature files
match them against an external (CSV, Excel, JSON...) list of scenarios (the ones agreed with the client), so that a coverage percentage can be calculated (a rough sketch of this idea follows below)
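This is not Karate-specific, but a rough Groovy sketch of the idea; the file locations, the agreed-list format and the scenario-name matching rule are all assumptions:

// Rough sketch: collect scenario names from *.feature files and compare them
// against an agreed list kept in a plain-text file (one scenario name per line).
def featureDir = new File('src/test/java')                      // assumed location of the feature files
def agreed = new File('agreed-scenarios.txt').readLines()*.trim().findAll()

def implemented = []
featureDir.eachFileRecurse { f ->
    if (f.name.endsWith('.feature')) {
        f.eachLine { line ->
            def m = line.trim() =~ /^Scenario(?: Outline)?:\s*(.+)$/
            if (m) implemented << m[0][1].trim()
        }
    }
}

def covered = agreed.intersect(implemented)
def pct = agreed ? 100 * covered.size() / agreed.size() : 0
println "Implemented scenarios: ${implemented.size()}"
println "Agreed scenarios covered: ${covered.size()}/${agreed.size()} (${pct}%)"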
Here's a bare-bones implementation of a coverage report based on comparing karate.log to an OpenAPI/Swagger JSON spec.
https://github.com/ericdriggs/karate-test-utils#karate-coverage-report
Endpoint coverage is a useful metric which can be derived automatically from an auto-generated spec. It also lets you exclude paths which aren't in scope for coverage, e.g. actuator or ping.
I will publish a jar soonish.
Open an issue if you'd like any enhancements.
It's MIT licensed, so feel free to repurpose it.
I was wondering if I could set up cucumber-reporting for specific folders.
For example, following https://github.com/intuit/karate#naming-conventions, could I set up the third-party Cucumber reporting in CatsRunner.java, without parallel execution? Please advise or direct me on how to set it up.
Rationale: it's easier to read and it helps me in debugging.
The recommendation is to always use the third-party Cucumber Reporting library when you want HTML reports. You can create as many Java classes similar to DemoTestParallel as you need, in different packages. The first argument to CucumberRunner.parallel() needs to be a Java class, and by default the same package and its sub-folders will be scanned for *.feature files.
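As a rough sketch of such a runner (written in Groovy here to match the rest of this post, while the usual demo code is Java; the exact Karate and cucumber-reporting package names, signatures and output paths are assumptions worth checking against the Karate README):

import com.intuit.karate.cucumber.CucumberRunner
import com.intuit.karate.cucumber.KarateStats
import net.masterthought.cucumber.Configuration
import net.masterthought.cucumber.ReportBuilder
import org.junit.Test

class DemoTestParallel {

    @Test
    void testParallel() {
        // scans this class's package and sub-folders for *.feature files, using 5 threads
        KarateStats stats = CucumberRunner.parallel(getClass(), 5, 'target/surefire-reports')

        // feed the Cucumber JSON output into the third-party cucumber-reporting builder
        def jsonFiles = new File('target/surefire-reports').listFiles().findAll { it.name.endsWith('.json') }
        def jsonPaths = jsonFiles*.absolutePath
        new ReportBuilder(jsonPaths, new Configuration(new File('target'), 'demo')).generateReports()

        assert stats.failCount == 0 : 'there are scenario failures'
    }
}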
BUT I think your question is simply how to easily debug a single feature in dev-mode. Easy, just use the @RunWith(Karate.class) JUnit runner: video | documentation
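For the dev-mode case, such a runner can be as small as the following sketch (the class, package and import names are assumptions; Karate picks up *.feature files in the same package and its sub-folders by default):

import com.intuit.karate.junit4.Karate
import org.junit.runner.RunWith

// Dev-mode runner: no parallelism, no extra reporting; Karate scans this
// class's package and sub-folders for *.feature files by default.
@RunWith(Karate)
class CatsRunner {
}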
Does Doxygen have the capability to generate test plans from a number of test cases?
This would be much like the Atlassian Jira plug-in called "Zephyr".
Doxygen provides the \test command, which starts a paragraph to describe a test case and collects it into an additional "Test List" page. This, in combination with the ability to define custom commands and to create PDF files via LaTeX, should help to create test plans with Doxygen. I have already used this to document my unit tests.
To execute something at the TestRun, Feature, Scenario or Step level, I understand we can use hooks. What I would like to find out is how these can be written in the feature file when I am writing the spec.
Based on my understanding, I can use Background to write something common that is to be run within the feature before all the scenarios. However, it is advised that we should not have a long list in the Background section. Also, if I have something which is common to the test run or to multiple features, where I could use a tag to group them, is there any syntax I can use to write this?
The hook implementations cannot be expressed in Gherkin in the feature files; they must be implemented in the step implementation (binding) files:
Tag the scenarios and/or features with a tag @foo, and in some class decorated with the [Binding] attribute, annotate a method with a hook attribute such as [BeforeScenario("foo")] or [BeforeFeature("foo")]. (Test-run-level hooks like [BeforeTestRun] run once globally and are not scoped by tags.)
More information and available hooks can be found in the specflow wiki.
If you're worried about having a long list of steps in the Background, maybe the steps are too verbose and you can consider joining them into a single step. If you need to do something for all tests in the test run, maybe it's not important to mention it in the feature anyway, so it can go in a step implementation file as described above.
I'd love to ask how the guys developing Dojo create the documentation.
From the nightly builds you can get the uncompressed js files with all the comments, and I'm sure there is some kind of documenting script that will generate HTML or XML out of them.
I guess they use jsdoc, as it can be found in their util folder, but I have no idea how to use it. jsdoc-toolkit uses different /** commenting */ notation than the original Dojo files.
Thanks for all your help
It's all done with a custom PHP parser and Drupal. If you look in util/docscripts/README and util/jsdoc/INSTALL you can get all the gory details about how to generate the docs.
It's different from jsdoc-toolkit or JSDoc (as you've discovered).
FWIW, I'm using jsdoc-toolkit as it's much easier to generate static HTML and there's lots of documentation about the tags on the google code page.
Also, just to be clear, I don't develop dojo itself. I just use it a lot at work.
There are two parts to the "dojo jsdoc" process. There is a parser, written in PHP, which generates XML and/or JSON for the entirety of the listed namespaces (defined in util/docscripts/modules, so you can add your own namespaces; there are basic usage instructions atop the file "generate.php"), and a Drupal part called "jsdoc" which installs as a Drupal module/plugin/whatever.
The Drupal aspect of it is just Dojo's basic view of this data. A well-crafted XSLT or something to iterate over the json and produce html would work just the same, though neither of these are provided by default (would love a contribution!). I shy away from the Drupal bit myself, though it has been running on api.dojotoolkit.org for some time now.
The doc parser is exposed so that you may use its inspection capabilities to write your own custom output as well. I use it to generate the Komodo .cix code completion in a [rather sloppy] PHP file util/docscripts/makeCix.php, which dumps information as found into an XML doc crafted to match the spec there. This could be modified to generate any kind of output you chose with a little finagling.
The doc syntax is all defined on the style guideline page:
http://dojotoolkit.org/reference-guide/developer/styleguide.html