Can we fetch data from an Excel file in Karate? If yes, can we use the fetched data in the Examples of a Scenario Outline?

Examples:
|sku_code |property_code |sale_price |override_source|persistent_override | stay_date|
|'48' | '0001661' | 2000 |'DASHBOARD' | 'true' | 2 |
I currently have this data hardcoded; I want it to be fetched from an Excel sheet instead.

Yes, you can do this with a CSV file using a dynamic Scenario Outline in Karate.
Example from the Karate demo:
Scenario Outline: cat name: <name>
Given url demoBaseUrl
And path 'cats'
And request { name: '<name>', age: <age> }
When method post
Then status 200
And match response == { id: '#number', name: '<name>' }
Examples:
| read('kittens.csv') |
Links:
Dynamic csv demo
Dynamic scenario outline doc
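
For reference, the kittens.csv in the demo above is just a plain CSV file whose header row matches the placeholder names used in the Scenario Outline; the values below are illustrative, not from the actual demo file:

name,age
Bob,2
Wild,1
Nyan,3

Each row becomes one iteration of the Scenario Outline, with the columns available as variables by header name.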


I want to run a Scenario Outline in a loop with one more variable made configurable

I have a use case in which I make the server name configurable using a Scenario Outline for a GET call. But I also want to make another variable, like an ID, configurable, and have each ID run against all the server names listed in the Scenario Outline. How can we achieve that?
Example
Scenario Outline: Test one get call
Given url 'https://' + server + 'v1/share/12345/profit'
When method get
Then status 200
Examples:
|server|
|server1|
|server2|
|server3|
|server4|
In the above example I made the server name configurable using the Scenario Outline, but I also want to make the number in the URL configurable and run it for all servers. How will I achieve that?
Just use another variable.
Examples:
| server | id |
| foo | 1 |
| foo | 2 |
| bar | 1 |
| bar | 2 |
And if you want to dynamically generate data using a function, all that is possible. Refer: https://github.com/karatelabs/karate#json-function-data-source
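
As a sketch of that function-based approach (the server and id names come from the question above, but the values and the function name here are illustrative): a plain JavaScript function can build the full server-by-id matrix as a list of maps, which a dynamic Scenario Outline can then consume as Examples rows.

```javascript
// Illustrative sketch: generate Examples rows as an array of maps,
// one map per Scenario Outline iteration (server and id per row).
function makeRows() {
  var servers = ['server1', 'server2', 'server3', 'server4'];
  var ids = [12345, 67890];
  var rows = [];
  for (var i = 0; i < servers.length; i++) {
    for (var j = 0; j < ids.length; j++) {
      rows.push({ server: servers[i], id: ids[j] });
    }
  }
  return rows;
}
```

In Karate you would define such a function (or read it from a file) and reference the resulting list in the Examples: section of a dynamic Scenario Outline, as described in the link above.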

How to dynamically set a value in JSON read from a file in Karate

I want to dynamically set values for some elements in a JSON payload (read from a file) using the data-driven features of the Karate framework. Here are more details:
request.json -> { wheels : <wheel>, color: '<color>' }
Feature: Read json input from file and iterate over data table values
Background:
* url ''
* def reqJson = read('request.json')
* print reqJson
Scenario Outline: Test file read
# I want to avoid writing below set statements for each element in request
#* set reqJson.wheels = <wheel>
#* set reqJson.color = '<color>'
Given path ''
And request reqJson
When method POST
Then status 200
And match response contains {mode: '<result>'}
Examples:
| wheel | color | result |
| 4 | red | car |
| 2 | any | bicycle |
I am developing an automation framework using Karate. My intention is to save a sample request in a JSON file for a given API, and then during execution have the element values replaced with the ones given in the table above. I don't want to write a set statement for each element either (see the commented lines above).
P.S.: I tried calling another feature file using the table approach. However, I want to keep one feature file per API, so I want to know whether the above approach is possible.
I think you have missed embedded expressions, which are simpler than the set keyword in many cases, especially when reading from files.
For example:
request.json -> { wheels : '#(wheels)', color: '#(color)' }
And then this would work:
* def wheels = 4
* def color = 'blue'
* def reqJson = read('request.json')
* match reqJson == { wheels: 4, color: 'blue' }
If you go through the demo examples you will get plenty of other ideas. For example:
* table rows
| wheels | color | result |
| 4 | 'blue' | 'car' |
| 2 | 'red' | 'bike' |
* call read('make-request.feature') rows
And where make-request.feature is:
Given path ''
And request { wheels: '#(wheels)', color: '#(color)' }
When method POST
Then status 200
And match response contains { mode: '#(result)' }

Can we pass an Excel or .csv file as a table input in a Karate feature file?

In Karate, we are parameterizing with the below values. Is there any option to pass the table as an external file in Karate?
And table tablename
| name | age | id |
| abc | 02 | 01 |
| def | 03 | 02 |
And def values = { "name": '#(name)', "age": '#(age)', "id": '#(id)' }
Expecting below in karate framework.
And table <tablefile.xls>
And def values = { "name": '#(name)', "age": '#(age)', "id": '#(id)' }
There are multiple ways, the most recommended is to use JSON for maintaining test data.
Please take a look at this answer for details: https://stackoverflow.com/a/49031155/143475
EDIT: Since OP is insisting on Excel, please refer to this other answer where this is explained in detail: https://stackoverflow.com/a/47954946/143475
If I were you I would NOT use Excel; at least use CSV. In my opinion, table or set is far easier to maintain than Excel, and you can keep the data as part of the test feature file itself.
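
As a sketch of the JSON-data approach recommended above (the file name, feature name, and fields here are illustrative): keep the rows in a JSON array file and read it in the feature.

users.json -> [
  { "name": "abc", "age": 2, "id": 1 },
  { "name": "def", "age": 3, "id": 2 }
]

Then, assuming a create-user.feature that refers to name, age, and id via embedded expressions, each row is passed as one invocation:

* def users = read('users.json')
* call read('create-user.feature') users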

Use a random variable in Behat tests to produce unique usernames

Right now I am creating users using something like the following
Given users:
| name | status | roles |
| kyle | 1 | authenticated user |
| cartman | 1 | admin |
Is there a way to add random strings to these names?
If I didn't misunderstand, you can do this instead.
Gherkin
Scenario: Create random users
Given I create "3" users
FeatureContext
private $str = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
private $status = [0, 1];
private $roles = ['authenticated user', 'admin', 'superman'];

/**
 * @Given /^I create "([^"]*)" users$/
 */
public function createDummyUsers($count)
{
    for ($i = 0; $i < $count; $i++) {
        $name = substr(str_shuffle($this->str), 0, 8);
        $status = $this->status[array_rand($this->status, 1)];
        $role = $this->roles[array_rand($this->roles, 1)];
        echo "You've just created $name - $status - $role" . PHP_EOL;
    }
}
Prints
You've just created mqBWAQJK - 1 - superman
You've just created WYuAZSco - 0 - admin
You've just created HCNWvVth - 1 - admin
You've just created EmLkVRpO - 1 - superman
You've just created pxWcsuPl - 1 - authenticated user
You've just created mLYrlKdz - 0 - superman
The RandomContext functionality from drupal/drupal-extension allows for usage like this:
Given I fill in "E-mail address" with "<?username>@example.org"
or
Given users:
| name | email | status | roles |
| <?standard> | <?standard>@example.org | 1 | authenticated |
| <?admin> | <?admin>@example.org | 1 | admin |
Each token (e.g. <?username>, <?firstname>) used in a feature will be replaced with a random string value for that feature execution. This is implemented with Behat's @Transform functionality, meaning that your tokens are substituted before execution of that step - so it works anywhere you need random input as part of your feature.
You can reference the same token later in your feature, e.g. to verify that the random value input earlier has been returned correctly; the previously generated value will be recalled. So the first and second usages of <?admin> in the example above will both be replaced by the same generated value.
If you are using drupal/drupal-extension then this can be enabled by adding Drupal\DrupalExtension\Context\RandomContext to the enabled contexts in your behat.yml.
If you aren't using Drupal, then the source linked above will demonstrate how you could implement the same for your own usage.
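
The token mechanism can be illustrated with a short sketch (this is not Behat itself; the function name and random-suffix scheme are made up): replace each <?token> with a random value, caching it so repeated uses of the same token resolve to the same value.

```javascript
// Illustrative sketch of the token substitution described above:
// each <?token> gets a random value, cached so repeats match.
function substituteTokens(text, cache) {
  return text.replace(/<\?(\w+)>/g, function (match, name) {
    if (!cache[name]) {
      // a random suffix stands in for Behat's generated value
      cache[name] = name + '_' + Math.random().toString(36).slice(2, 10);
    }
    return cache[name];
  });
}
```

Passing the same cache across a feature's steps is what makes the second <?admin> come back identical to the first.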
I've created a solution that you can try:
https://github.com/JordiGiros/MinkFieldRandomizer
MinkFieldRandomizer is a random (but sensible) information generator for filling browser form fields in Behat Mink Selenium tests. It gives you the option to run your tests in a more realistic way by changing the information used to fill in the forms on every run.
You can easily add it to your project via Composer.
One example:
Then Fills in form fields with provided table
| "f_outbound_accommodation_name" | "{RandomName(10)}" |
| "f_outbound_accommodation_phone_number" | "{RandomPhone(9)}" |
| "f_outbound_accommodation_address_1" | "{RandomText(10)}" |
I hope you try it!
And you're welcome to add new functionality, fork it, or do whatever you want.
Cheers

Solr - how to "group by" and "limit"?

Say I indexed the following from my database:
======================================
| Id | Code | Description |
======================================
| 1 | A1 | Hello world |
| 2 | A1 | Hello world 123 |
| 3 | A1 | World hello hi |
| 4 | B1 | Quick fox jumped |
| 5 | B1 | Lazy dog |
...
Further, say the user searches for "hello", which should return records 1, 2, and 3. Is there a way to make Solr "group by" the Code field and apply a limit (say, 10 records)? I'm somewhat looking for a SQL counterpart of GROUP BY and LIMIT.
Also, when it does this "group by", I want it to choose the most relevant document and use that document's Description field as part of the return.
Of course, I could just have Solr return everything to my application and I can manipulate the results to do the GROUP BY and LIMIT. I'd rather not do this if possible.
Have a look at field collapsing, available in Solr 4.0. Sorting groups on relevance: group.sort=score desc.
http://XXX.XXX.XXX.XXX:8080/solr/autocomplete/select?q=displayterm:new&wt=json&indent=true&q.op=and&fl=displayterm&group=true&group.field=displayterm&rows=3&start=0
Note:
start -> offset of the first group in the response.
rows -> number of groups to return per page.
Paging example:
step 1: &start=0&rows=3
step 2: &start=3&rows=3
step 3: &start=6&rows=3
etc.
Response:
{
"responseHeader":{
"status":0,
"QTime":1,
"params":{
"fl":"displayterm",
"indent":"true",
"start":"0",
"q":"displayterm:new",
"q.op":"and",
"group.field":"displayterm",
"group":"true",
"wt":"json",
"rows":"3"}},
"grouped":{
"displayterm":{
"matches":231,
"groups":[{
"groupValue":null,
"doclist":{"numFound":220,"start":0,"docs":[
{
"displayterm":"Professional News"}]
}},
{
"groupValue":"general",
"doclist":{"numFound":1,"start":0,"docs":[
{
"displayterm":"General News"}]
}},
{
"groupValue":"delhi",
"doclist":{"numFound":2,"start":0,"docs":[
{
"displayterm":"New Delhi"}]
}}]}}}
Add the following fields to your query:
'group':'true',
'group.field':'source',
'group.main':'true',
'group.limit':10,
The simplest way to achieve what you want is to use Solr grouping capabilities also called Field Collapsing. You would have to add the following parameters to your query:
group=true - that would turn on the grouping module
group.field=Code - that would tell Solr on which field the grouping should be done
rows=10 - that would tell Solr to limit the number of unique groups to 10 at max
If you would like to page through the groups, use the rows and start parameters. To control the results inside the groups themselves, use group.limit and group.offset.
Hopefully that helps :)
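
Putting the parameters above together (the base URL and the Code field are placeholders, and the helper itself is just a sketch, not part of any Solr client): a grouped query URL for the original example might be assembled like this.

```javascript
// Sketch: build a Solr select URL with the grouping parameters
// discussed above. Base URL and the 'Code' field are placeholders.
function buildGroupQuery(base, q) {
  var params = {
    q: q,                       // e.g. 'Description:hello'
    group: 'true',              // turn on the grouping module
    'group.field': 'Code',      // group on the Code field
    'group.limit': 1,           // keep only the top document per group
    'group.sort': 'score desc', // most relevant document first
    rows: 10,                   // at most 10 groups
    wt: 'json'
  };
  return base + '/select?' + Object.keys(params).map(function (k) {
    return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
  }).join('&');
}
```

With group.limit=1 and group.sort=score desc, each group carries only its most relevant document, whose Description field can then be used in the result, as the question asked.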