I need to create data-driven unit tests for different APIs in the Karate framework. The various elements to be passed in the JSON payload should be taken as input from an Excel file.
A few points:
I recommend you look at Karate's built-in data-table capabilities: they are far more readable, integrate into your test script, and you won't need to depend on other software. Refer to these examples: call-table.feature and dynamic-params.feature
Next, I would recommend using JSON instead of an Excel or CSV file, since it is natively supported by Karate: call-json-array.feature
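As a sketch of the JSON-array approach (the feature and file names here are hypothetical), a driving feature reads an array and Karate calls the child feature once per element:

```gherkin
Feature: create users from a JSON array

Scenario: drive the child feature with data
  # each element of the array becomes the argument of one call
  * def users = read('users.json')
  * def result = call read('create-user.feature') users
```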
Finally, if you really wanted to, you can call any Java code, and if you return data as a Map / List, it will be ready for Karate to use. This example shows how to read a database via JDBC: dogs.feature. So although this is not built into Karate, just write a simple utility to read a CSV or Excel file and you can do pretty much anything Java can do.
EDIT: Karate now supports CSV files, which can even be used for data-driven testing: https://github.com/intuit/karate#csv-files
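For the Java-interop route, a minimal sketch of such a utility might look like the following. The class and method names are illustrative, and the parsing is deliberately naive (no quoting support); for real CSV you would use a proper parser.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal CSV reader for Karate's Java interop: returns a List of Maps,
// one Map per data row, keyed by the header row.
public class CsvReader {

    public static List<Map<String, Object>> parse(String csv) {
        String[] lines = csv.split("\\r?\\n");
        String[] headers = lines[0].split(",");
        List<Map<String, Object>> rows = new ArrayList<>();
        for (int i = 1; i < lines.length; i++) {
            if (lines[i].isEmpty()) continue;
            String[] cells = lines[i].split(",", -1);
            Map<String, Object> row = new LinkedHashMap<>();
            for (int j = 0; j < headers.length; j++) {
                row.put(headers[j].trim(), j < cells.length ? cells[j].trim() : "");
            }
            rows.add(row);
        }
        return rows;
    }
}
```

Because the method returns a List of Maps, Karate can consume the result directly, e.g. via `Java.type('CsvReader')` from a feature file.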
I have a feature called "create experiment" in feature file 1, and I pass the data needed to create the experiment in the Background section of feature file 1.
I want to access the data passed in feature file 1 from feature file 2 as a precondition: before executing feature file 2, I want to make sure the experiment exists with the data given in feature file 1.
I am using a common class to collect all the data used in feature file 1 instead of storing it in the step definitions, following the example given in this link: https://docs.specflow.org/projects/specflow/en/latest/Bindings/Context-Injection.html
The problem is that when I run feature file 2 alone, it does not get the data, since the data is only filled in during execution of feature file 1.
How can I read the data of feature file 1 without executing feature file 1?
Depending on the state of another test is generally not how BDD tests, or any unit/integration tests, are meant to work, since they are supposed to be self-contained. Understandably, there is no support in SpecFlow for doing this.
The easiest solution I can think of is to copy the steps into your second feature file, which then calls the same code again. You don't have to copy the step implementations, because all steps are global in SpecFlow by default.
Another solution would be to add a Background to the second feature file where you simply reuse the steps of the first feature file.
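A sketch of that second approach (the step text and data table are hypothetical; they would match whatever steps feature file 1 actually defines):

```gherkin
Feature: verify the experiment

Background:
  # re-runs the same global step bindings that feature file 1 uses,
  # so this feature creates its own experiment and stays self-contained
  Given an experiment named "exp-1" exists with the following data
    | field | value |
    | name  | exp-1 |

Scenario: the experiment is present
  When I look up the experiment "exp-1"
  Then its data should match the values given in the background
```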
Pretty simple question here: If I read a .csv file for example, how can I know at runtime what columns that file has?
I want to convert that .csv file to JSON, but I don't know how I could set the fields for the JSON Output step dynamically so that it includes all the columns of that file. Can you help me expand my knowledge?
Thanks in advance
This is definitely a good use case for metadata injection; the step is called ETL Metadata Injection. You'll need to get the fields dynamically, probably using a scripting step (Java, JavaScript, and Python scripting steps are available, as well as R if you're an Enterprise customer). I don't think there is a built-in step that will dynamically discover the fields at runtime.
Once you have the fields, you can use the metadata injection step to inject the field names into the CSV Input or Text File Input step, as well as the JSON Output step.
Here is the official help documentation on the ETL Metadata Injection step: https://help.pentaho.com/Documentation/8.1/Products/Data_Integration/Transformation_Step_Reference/ETL_Metadata_Injection
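The field-discovery part is just reading the header row of the file. The body of a Pentaho scripting step would differ, but as a plain-Java sketch (class and method names are made up for illustration):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Discover a CSV file's columns at runtime by reading only its header row.
// The resulting field names are what you would feed into ETL Metadata
// Injection to configure the CSV Input and JSON Output steps dynamically.
public class FieldDiscovery {

    public static List<String> headerFields(BufferedReader reader) throws IOException {
        String header = reader.readLine();        // first line = column names
        if (header == null) {
            throw new IOException("empty file");
        }
        return Arrays.asList(header.split(","));  // naive split; no quoting
    }
}
```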
Cucumber Gherkin: Is there a way to have your Gherkin scenarios written and managed in Excel sheets instead of .feature files in IntelliJ or Eclipse, like in SpecFlow+Excel (screenshot given as link below)? I am using Cucumber-JVM with Selenium for my automation framework.
Excel based Scenarios
PS: Are there any pros or cons to using Excel sheets as your feature files?
No, Gherkin is the language understood by Cucumber.
If you want to introduce Excel in the equation, you probably want to use some other tool. Or implement your own functionality that reads Excel and does something interesting based on the content.
You could write a compiler that takes .csv files and translates them into feature files.
Here's a quick thing I whipped up in JS that does just that.
First it removes blank lines, then it searches for keywords with a missing colon (it might happen somewhere down the road) and adds those in, and then it checks for an examples table. As per your photo, that was easy to do: a table row is just any line whose first character is a comma (after the blank lines have been removed, since keywords always sit in the first column). Finally, it removes the rest of the commas.
The only difference, I believe, is that mine includes the Feature: and Scenario:/Scenario Outline: lines that are needed to create a valid scenario in Cucumber.
So yes, it is possible. You'll just have to compile the CSVs into feature files first.
function constructFeature(csvData) {
  const blankLineRegex = /^,+$/gm,
    keywordsWithCommasRegex = /(Feature|Scenario Outline|Scenario|Ability|Business Need|Examples|Background),/gm,
    examplesTableRegex = /^,.*$/gm;

  // drop lines that are nothing but commas (blank spreadsheet rows)
  let data = csvData.toString().replace(blankLineRegex, "");

  // add the colon that Gherkin keywords need, e.g. "Feature," -> "Feature:,"
  (data.match(keywordsWithCommasRegex) || []).forEach((match) => {
    data = data.replace(match, match.replace(",", ":,"));
  });

  // rebuild examples-table rows: ",4,2,2" -> "|4|2|2|"
  (data.match(examplesTableRegex) || []).forEach((match) => {
    data = data.replace(match, match.replace(/,/g, "|") + "|");
  });

  // remaining commas were just cell separators; collapse them to spaces
  data = data.replace(/,/g, " ").replace(/ +/g, " ");
  console.log(data);
  return data;
}
let featureToBuild = `Feature,I should be able to eat apples,,
,,,
Scenario Outline,I eat my apples,,
Given,I have ,<start>,apples
When,I eat,<eat>,apples
Then,I should have,<end>,apples
,,,
Examples,,,
,start,eat,end
,4,2,2
,3,2,1
,4,3,1`
constructFeature(featureToBuild);
Just take that output and shove it into an aptly named feature file.
The downside of using Excel is that you'll be using Excel.
The upsides of keeping feature files as part of your project are:
* they are under version control (assuming you use git or similar)
* your IDE, with a Cucumber plugin, can help you:
- For instance, IntelliJ (which has a free Community edition) will highlight any steps that have not been implemented yet, or suggest steps that have been implemented
- You can also generate step definitions from your feature file
* when running tests you will see if and where your feature file violates Gherkin syntax
TL;DR: Having the feature file with your code base will make it easier to write and implement scenarios!
Also, as @Thomas Sundberg said, Cucumber cannot read Excel.
Cucumber can only process the feature files, so you will have to copy your scenarios from Excel to feature files at some point.
So what you are asking is not possible with Cucumber.
If you want to read test cases from Excel, you'll have to build your own data-driven tests. You could, for instance, use Apache POI.
There's a solution out there that keeps scenarios and data entirely in Excel. I have tried it in conjunction with Cucumber, though not with any test application. You could try it and see if you want to use it:
Acceptance Tests Excel Addin (for Gherkin)
Note, however, that while an Excel workbook (or multiple workbooks) can be source-controlled, it is hard to maintain revisions.
On the other hand, Excel gives you flexibility in managing data. You must also take care not to duplicate your test information and data in multiple places. So check this out; it may be helpful.
Hi all, I want to create a keyword-driven framework. I want Java code that reads keywords from a CSV file and maps them to functions in my framework. Many examples are given, but they are based on Excel, not CSV.
Can anyone help me out with this?
You may find it useful to take a look at the following GitHub repository.
https://github.com/qasquare/KeywordDrivern.TestNg
It implements the same idea you mentioned. However, the repository uses Apache POI to read XLSX files; you can easily change the methods to read a CSV file instead.
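The CSV variant can be sketched as follows. The keyword names and the actions map are made up for illustration; a real framework would register its own functions (e.g. Selenium calls) instead of returning strings:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Keyword-driven core: each CSV line is "keyword,argument"; keywords are
// mapped to framework functions, so the test flow lives in the CSV file.
public class KeywordRunner {

    private final Map<String, Function<String, String>> actions = new HashMap<>();

    public KeywordRunner() {
        // hypothetical actions -- a real framework would drive a browser etc.
        actions.put("openUrl", arg -> "opened " + arg);
        actions.put("click", arg -> "clicked " + arg);
    }

    public List<String> run(String csv) {
        List<String> log = new ArrayList<>();
        for (String line : csv.split("\\r?\\n")) {
            if (line.isBlank()) continue;
            String[] parts = line.split(",", 2);   // keyword, argument
            Function<String, String> action = actions.get(parts[0].trim());
            if (action == null) {
                throw new IllegalArgumentException("unknown keyword: " + parts[0]);
            }
            log.add(action.apply(parts.length > 1 ? parts[1].trim() : ""));
        }
        return log;
    }
}
```

The lookup-map approach keeps dispatch explicit; reflection (`Method.invoke`) would work too, at the cost of compile-time safety.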