Folder-specific cucumber-reporting without parallel run? - karate

Was wondering if I could set up cucumber-reporting for specific folders.
For example, in https://github.com/intuit/karate#naming-conventions, can I set up the third-party cucumber reporting in CatsRunner.java without parallel execution? Please advise or direct me on how to set it up.
Rationale: it's easier to read and helps me in debugging.

Using the 3rd-party Cucumber Reporting library is always recommended when you want HTML reports. And you can create as many Java classes similar to DemoTestParallel as you like, in different packages. The first argument to CucumberRunner.parallel() needs to be a Java class - and by default, the same package and its sub-folders will be scanned for *.feature files.
BUT I think your question is simply how to easily debug a single feature in dev-mode. Easy, just use the @RunWith(Karate.class) JUnit runner: video | documentation
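For reference, such a dev-mode runner is a small annotated class, following the naming conventions in the Karate README; the package and feature path below are illustrative only:

```java
import com.intuit.karate.junit4.Karate;
import cucumber.api.CucumberOptions;
import org.junit.runner.RunWith;

// Dev-mode runner: executes only the feature(s) selected below,
// without parallel execution, so a single feature is easy to debug.
@RunWith(Karate.class)
@CucumberOptions(features = "classpath:animals/cats/cats.feature") // illustrative path
public class CatsRunner {
}
```

Without @CucumberOptions, the runner scans its own package (and sub-folders) for *.feature files.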

Related

how to define order for execution of feature files in karate framework

I have a few feature files in specific subfolders, and I want to execute those feature files in a defined order.
So how can we run the feature files in a specific order?
Thank you in advance!
You can create one feature and then make calls to the other features. But this means you will lose the biggest benefit of Karate which is that you can run tests in parallel.
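That call-based approach can be sketched as a single orchestrator feature using Karate's call and read keywords; the feature paths below are hypothetical:

```gherkin
Feature: run dependent features in a fixed order

Scenario: ordered run
    # each call runs a feature to completion before the next one starts
    * call read('classpath:demo/first.feature')
    * call read('classpath:demo/second.feature')
```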
Please assume that any needs beyond this are not supported (or will never be supported) by Karate. Also read this: https://stackoverflow.com/a/46080568/143475

Can we run multiple feature files in the same package using Karate-gatling

I read in the documentation that we can run multiple feature files by adding new lines with different classpaths in the simulation class. Is there a way to run multiple feature files belonging to the same package, just like we run them in FeatureRunner files?
No, I personally think that will introduce maintainability issues. We will consider PRs though, if anyone wants to contribute this.
If you really want this behavior, you should be able to write a small bit of Java code that scans a folder, loops over the feature files, and builds the Gatling "scenarios".
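The folder-scanning part of that Java code might look like the sketch below (plain JDK only; wiring each path into a karate-gatling scenario is left out, and the src/test/resources root is just an example):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FeatureScanner {

    // Recursively collect all *.feature files under the given folder,
    // sorted so the resulting order is stable across runs.
    public static List<String> findFeatures(Path root) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            return paths
                .filter(p -> p.toString().endsWith(".feature"))
                .map(Path::toString)
                .sorted()
                .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Paths.get("src/test/resources"); // example root folder
        if (Files.isDirectory(root)) {
            // each path could then be fed into a Gatling scenario definition
            findFeatures(root).forEach(System.out::println);
        }
    }
}
```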

How to specify which binary IntelliJ uses when running tests with the JUnit plugin

I use IntelliJ to run JUnit tests.
I would like to specify the name/path of the command it uses to execute them. Specifically, rather than the specified JDK's bin/java, I'd like to use a custom command (e.g. my_java).
My particular reason is that I'd like my_java to be a small script that launches "java" at a lower priority. If there is an alternate approach, that would be just as useful.
I reached out to JetBrains and asked them directly. According to them, specifying a custom binary is "not possible". They did suggest I look at writing a custom external tool.
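The wrapper script itself can be tiny; here is one possible sketch (the name my_java and the nice value 10 are just examples):

```shell
#!/bin/sh
# my_java: hypothetical wrapper that launches the JVM at a lower
# CPU priority (nice value 10), passing all arguments through.
# With no arguments it just prints a usage line.
if [ "$#" -eq 0 ]; then
  echo "usage: my_java <java-args>"
else
  exec nice -n 10 java "$@"
fi
```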

Mule best practice?

I would like to build a component that other developers can plugin in to MuleStudio and use to process files. It will expose a variety of methods which process the incoming file and return a new file. I want to make sure I'm going in the right direction with my implementation of this, and would appreciate any advice about best practices.
From my reading, it seems that I should use Mule Devkit to create a Module. This module can contain a variety of Processor methods. I then package with the maven command, and it can be installed as a plugin.
Some specific questions:
-Should I use Processors or Transformers? Is there any difference in this case?
-Should I create multiple modules, each with one Processor/Transformer, or one module with all the Processors/Transformers?
-I would like the file to be supplied generically (from an email, HTTP, the local file system, etc...). What should the parameters and return type of my Processors be? Can I use InputStream as a parameter and OutputStream as my return type, and then expect users to use the proper Endpoints/transformers to provide the InputStream? Or should I supply a variety of methods that take a variety of parameters, and perform the conversion myself?
Looking at your requirements, I would suggest going with the MuleSoft Connector DevKit, which provides many useful features out of the box and is easy to build and install.
You can give it a try and see if it meets your business needs:
https://docs.mulesoft.com/anypoint-connector-devkit/v/3.7/
Creating Anypoint Connector
https://docs.mulesoft.com/anypoint-connector-devkit/v/3.7/creating-an-anypoint-connector-project
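On the stream question above: one possible shape is a method that reads the incoming stream and hands back a new one, leaving it to the surrounding endpoints/transformers to supply the input. The sketch below shows that shape in plain Java, without the DevKit annotations; the upper-casing step is only a placeholder for real file processing:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class FileProcessor {

    // Stream-in, stream-out shape that a processor method could use.
    // Reads the whole input, transforms it, and returns a fresh stream.
    public static InputStream process(InputStream content) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int read;
        while ((read = content.read(chunk)) != -1) {
            buffer.write(chunk, 0, read);
        }
        String transformed = buffer.toString(StandardCharsets.UTF_8.name()).toUpperCase();
        return new ByteArrayInputStream(transformed.getBytes(StandardCharsets.UTF_8));
    }
}
```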

Is documentation readable by non-programmers possible with Spock?

FitNesse has a wiki-style documentation feature. It provides both the code and the docs for these specification tests.
Is there a way in Spock (with a plugin? / out of the box?) to generate any similar type of documentation to show off to the project managers / stakeholders, who cannot be expected to read the (Groovy) source code of the Spock specifications?
Well, I have used the Strings that describe each Spock block in your tests to generate HTML reports.
Please visit my project and let me know if that helps:
https://github.com/renatoathaydes/spock-reports
You can download the jar from the reports directory and then just add it to your classpath. Run your tests, and "miraculously" you will have reports generated in the directory build/spock-reports!
You can even provide your own CSS stylesheets if you want to customize the reports, as explained in the README.
Here's a blogpost I wrote about writing this Spock extension.
UPDATE
spock-reports has been available on Maven Central for a while now, as well as JCenter.
Spock allows you to add descriptions to blocks, e.g.:
when: "5 dollars are withdrawn from the account"
account.withdraw(5)
then: "3 dollars remain"
account.balance == 3
While we don't use this information yet, it's easy to access from an extension (see link below). What's left to do is to turn this into a nice report.
https://github.com/spockframework/spock-uberconf-2011/blob/master/src/test/groovy/extension/custom/ReportExtension.groovy
There are some great answers here already, but if you want to keep your BDD definitions free from any plumbing whatsoever, you can take a look at pease, which will let you use the Gherkin specification language with Spock.