Handle Empty Scenario Outline in Karate - karate

I have a feature file which fetches some records from the DB. These records are then passed on to the Scenario Outline's Examples section.
The problem is that when no records are found, the Scenario Outline does not get executed, which is a problem since we do not get to know whether that feature file was executed or not. I tried passing '0' into the Scenario Outline's Examples section, but then it gives me an error that the result is neither a list nor a function. So is there an alternate way to handle this?

Then maybe you should not use a dynamic Scenario Outline. Use the call option, explained here: https://github.com/karatelabs/karate#data-driven-features
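For illustration, a minimal sketch of that call approach (the feature and variable names below are hypothetical): a feature called with a list runs once per item, and simply zero times when the list is empty, while the calling Scenario itself always executes and shows up in the report.

Feature: process DB records without a dynamic Scenario Outline

Scenario: fetch rows and call a feature per row
  # 'fetch-records.feature' is assumed (hypothetically) to return a JSON array named 'rows'
  * def fetched = call read('fetch-records.feature')
  * def rows = fetched.rows
  # calling with a list runs 'process-record.feature' once per item;
  # an empty list means zero calls, but this Scenario still runs and is reported
  * def results = call read('process-record.feature') rows
  * print 'rows processed:', rows.length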

Related

Scope of the variable is getting lost when trying to use scenario outline

I'm calling a Scenario from a Scenario Outline, where both are in the same feature file. From this Scenario I'm calling another Scenario which is in a second feature file. I want to return a parameter from the Scenario of the second feature file to the Scenario of the first feature file. But somehow the scope of the variable is getting lost. Can anyone help me with this?
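For what it's worth, a minimal sketch of how a value normally travels back from a called feature in Karate (file and variable names are hypothetical, and the Feature: headers are trimmed): everything def-ed in a called feature becomes a property of the object returned by call, so a value from the second feature file has to be re-def-ed at each level to reach the original caller.

# caller.feature - the outermost caller
Scenario: caller
  * def inner = call read('first-called.feature')
  # every variable def-ed in the called feature is a key on 'inner'
  * def token = inner.token
  * print 'got back:', token

# first-called.feature - calls the second feature file
Scenario: middle
  * def nested = call read('second-called.feature')
  # re-def the value here so it is visible to the original caller
  * def token = nested.token

# second-called.feature - defines the value to send back
Scenario: innermost
  * def token = 'abc123'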

How to create a dynamic URL by a previous scenario response in Karate DSL [duplicate]


Avoid Cucumber Duplicated Steps

How do you avoid duplicated Cucumber steps while multiple team members are working on different feature files in parallel?
Sometimes we find similar steps with different context that are 90%-100% the same. The problem occurs when the tests are complex and we need to write new steps for a new feature that we didn't have before.
Any good tips that could help solve this issue?
Are there good tools that can manage and search steps to avoid duplicated statements?
Thanks,
Everyone writes scenarios in a different way.
One way you could start to avoid duplication is to document your steps and what they do, so that you and your team can look back on them to see if there is an existing step that they can use.
For steps where the functionality is almost the same, you could merge the step definitions: Given I have logged in vs When I log in - Cucumber Expression: "I( have) log(ged) in", Regex: /I(?:| have) log(?:|ged) in/ as an example.
For your example, we would need to know what you're trying to achieve in your step definition. For instance, you could be filling out a form where you have to select an option in a drop-down dependent on the user that is logged in:
Pseudo-code:
// Some stuff here to get to the dropdown
if (user.name === "Bob Ross") {
    form.dropdown.select.option(2);
} else if (user.name === "Ellen Ripley") {
    form.dropdown.select.option(3);
} else {
    form.dropdown.select.option(1);
}
// Some other stuff here to complete this step
Basically, it depends on what you're attempting to do with your semi-duplicate steps.
Can you check what journey you are completing (with a previously run step and a variable, a URL, a user that is logged in, etc.) and do something differently? Perhaps write a helper function for it and keep the step definition as just a block of ifs to choose the right one.
Can you merge steps that hold the same functionality?
If you can do either of these, then you should be good to go.
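As a sketch of the merged-definition idea above (JavaScript with cucumber-js; the step text is the Cucumber Expression shown earlier, and the loginAs helper is hypothetical):

const { Given } = require('@cucumber/cucumber');

// one definition matches both "Given I have logged in" and "When I log in"
Given('I( have) log(ged) in', async function () {
  // 'loginAs' is a hypothetical helper on the custom World
  await this.loginAs(this.currentUser);
});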

Karate API Testing - Reusing variables in different scenarios in the same feature file

Does Karate support a feature where you can define a variable in a scenario and reuse it in other scenarios in the same feature file? I tried doing the same but get an error. What's the best way to reuse variables within the same feature file?
Scenario: Get the request Id
* url baseUrl
Given path 'eam'
When method get
Then status 200
And def reqId = response.teams[0].resourceRequestId
Scenario: Use the above generated Id
* url baseUrl
* print 'From the previous Scenario: ' + reqId
Error:
Caused by: javax.script.ScriptException: ReferenceError: "reqId" is not defined in <eval> at line number 1
Use a Background: section. Here is an example.
EDIT: if the variable is in the Background:, it will be re-initialized for every Scenario, which is standard testing-framework "set up" behavior. You can use hooks such as callonce if you want the initialization to happen only once.
If you are trying to modify a variable in one Scenario and expect it to still have that modified value when the next Scenario starts, you have misunderstood the concept of a Scenario. Just combine your steps into one Scenario, because think about it: that is the "flow" you are trying to test.
Each Scenario should be able to run stand-alone. In the future, Scenario-s could even be executed in random order or in parallel.
Another way to explain this: if you comment out one Scenario, the other ones should continue to work.
Please don't think of the Scenario as a way to "document" the important parts of your test. You can always use comments (e.g. # foo bar). Some teams assume that each HTTP "end point" should live in a separate Scenario - but this is absolutely not recommended. Look at the Hello World example itself, it deliberately shows 2 calls, a POST and a GET!
You can easily re-use code using call so you should not be worrying about whether code-duplication will be an issue.
Also - it is fine to have some code duplication, if it makes the flow easier to read. See this answer for details - and also read this article by Google.
EDIT: if you would like to read another answer that answers a similar question: https://stackoverflow.com/a/59433600/143475
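To make the Background: / callonce suggestion concrete, here is a minimal sketch (the called feature name and the paths are hypothetical): the callonce runs a single time for the whole feature file, and every Scenario then picks up reqId from the cached result.

Feature: reuse an id fetched once

Background:
  * url baseUrl
  # callonce executes 'get-request-id.feature' only once per feature file,
  # then re-uses the cached result for every Scenario
  * def setup = callonce read('get-request-id.feature')
  * def reqId = setup.reqId

Scenario: use the id in one call
  Given path 'teams', reqId
  When method get
  Then status 200

Scenario: use the same id in another call
  Given path 'requests', reqId
  When method get
  Then status 200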

JMeter Tests and Non-Static GET/POST Parameters

What's the best strategy to use when writing JMeter tests against a web application where the values of certain query-string and POST variables are going to change for each run?
Quick, common example:
1. You go to a web page
2. Enter some information into a form
3. Click Save
4. Behind the scenes, a new record is entered in the database
5. You want to edit the record you just entered, so you go to another web page. Behind the scenes it's passing the page a parameter with the database ID of the row you just created
When you're running step 5 of the above test, the page parameter/Database ID is going to change each time.
The workflow/strategy I'm currently using is:
1. Record a test using the above actions
2. Make a note of each place where a query-string variable may change from run to run
3. Use an XPath or Regular Expression Extractor to pull the value out of a response and into a JMeter variable
4. Replace all appropriate instances of the hard-coded parameter with the above variable.
This works and can be automated to an extent. However, it can get tedious, is error prone, and fragile. Is there a better/commonly accepted way of handling this situation? (Or is this why most people just use JMeter to play back logs? (-;)
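(For illustration, the kind of setup steps 3 and 4 of the workflow describe, with hypothetical field values: a Regular Expression Extractor stores the ID in a JMeter variable, and later requests reference it with ${...} syntax.)

Regular Expression Extractor (added as a child of the "Save" request)
  Reference Name:      recordId
  Regular Expression:  id=(\d+)
  Template:            $1$
  Match No.:           1

HTTP Request (the edit page)
  Path: /record/edit?id=${recordId}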
Sounds to me like you're on the right track. The best that can be achieved with JMeter is to extract page variables with a Regular Expression or XPath post-processor. However, you're absolutely correct that this is not a scalable solution and it becomes increasingly tricky to maintain or grow.
If you've reached this point then you may want to consider a tool which is more specialised for this sort of problem. Have a look at a web testing tool such as Watir; it will automatically handle changing POST parameters. You would still need to extract parameters if you need to do a database update, but using Watir allows for better code reuse, making the problem less painful.
We have had great success in testing similar scenarios with JMeter by storing parameters in JMeter variables within a JDBC assertion. We then do our HTTP GET/POST and use a BSF Assertion with JavaScript to do complex validation of the response. Hope it helps.
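As a rough sketch of that pattern (the variable name expectedId and the message text are hypothetical, and the script assumes the standard SampleResult / vars / AssertionResult bindings available to a BSF or JSR223 Assertion):

// JavaScript inside a JMeter BSF/JSR223 Assertion attached to the HTTP sampler
var body = SampleResult.getResponseDataAsString();
var expectedId = vars.get("expectedId"); // stored earlier, e.g. by the JDBC step

// fail the sample when the freshly created record's id is missing from the response
if (body.indexOf(expectedId) < 0) {
    AssertionResult.setFailure(true);
    AssertionResult.setFailureMessage("record id " + expectedId + " not found in response");
}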