How to execute a Cucumber Scenario / Feature multiple times? - cucumber-jvm

I would like to know if it is possible to execute a scenario / feature multiple times using cucumber-jvm. Thanks.

You can use several approaches:
On the operating system level: run the command several times or write an appropriate shell script.
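For example, a minimal shell sketch, assuming a Maven project where mvn test runs the Cucumber suite:
for i in 1 2 3; do
  mvn test
done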
On the Cucumber level. E.g. you have the following feature file:
Feature: My great feature
Scenario: My scenario
Given My given step one
When My when step two
Then My then step three
You can force cucumber-jvm to run it several times, e.g. 3 times, by transforming the "Scenario" into a "Scenario Outline" and using "Examples":
Feature: My great feature
Scenario Outline: My scenario
Given My given step <number>
When My when step two
Then My then step three
Examples: to run this scenario several times
| number |
| one |
| one |
| one |
It looks somewhat artificial, but it works.
I use this approach to gather statistics for complex tests that depend on a lot of conditions.

Related

Databricks - automatic parallelism and Spark SQL

I have a general question about Databricks cells and auto-parallelism with Spark SQL. I have a summary table that has a number of fields, most of which have complex logic behind them.
If I put blocks (%sql) of individual field logic in individual cells, will the scheduler automatically attempt to allocate the cells to different nodes on the cluster to improve performance (depending on how many nodes my cluster has)? Alternatively, are there PySpark functions I can use to organise the parallel running myself? I can't find much about this elsewhere...
I am using Databricks Runtime 10.4 LTS (Spark 3.2.1, Scala 2.12).
Many thanks
Richard
If you write Python ("PySpark") code over multiple cells, there is something called "lazy evaluation", meaning the actual work only happens at the last possible moment (for example when data is written or displayed). So before you run, say, a display(df), no actual work is done on the cluster. So technically, the code of multiple code cells is parallelized efficiently here.
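A minimal PySpark sketch of that behaviour (the table and column names are hypothetical; spark and display() are the Databricks notebook built-ins):
# cell 1 -- lazy: this only builds a query plan, nothing runs on the cluster
df = spark.table("my_table").filter("amount > 0")
# cell 2 -- still lazy: just extends the plan
df = df.groupBy("country").count()
# cell 3 -- display() is an action, so only now does Spark do the actual work
display(df)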
However, in Databricks Spark SQL a single cell is executed to completion before the next one is started. If you want to run those concurrently you can take a look at running multiple notebooks at the same time (or multiple parameterized instances of the same notebook) with dbutils.notebook.run(). Then the cluster will automatically split the resources evenly between those queries running at the same time.
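A minimal sketch of that pattern (the notebook paths and the argument are hypothetical); since dbutils.notebook.run() blocks until the child notebook finishes, a thread pool is one way to start several runs at once:
from concurrent.futures import ThreadPoolExecutor

# child notebooks to run concurrently (paths are hypothetical)
paths = ["./compute_field_a", "./compute_field_b", "./compute_field_c"]

def run_notebook(path):
    # dbutils.notebook.run(path, timeout_seconds, arguments); dbutils is
    # available implicitly inside a Databricks notebook
    return dbutils.notebook.run(path, 3600, {"env": "staging"})

with ThreadPoolExecutor(max_workers=len(paths)) as pool:
    results = list(pool.map(run_notebook, paths))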
You can try running the SQL statements using spark.sql() and assigning the outputs to different dataframes. In the last step, you could execute an operation (for example, a join) that brings everything into one dataframe. Lazy evaluation should then evaluate all dataframes (i.e. your SQL queries) in parallel.
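A minimal sketch of that idea (table and column names are hypothetical):
# each spark.sql() call is lazy: it returns a DataFrame without executing anything
df_a = spark.sql("SELECT id, complex_logic_a AS field_a FROM source_table")
df_b = spark.sql("SELECT id, complex_logic_b AS field_b FROM source_table")

# the join is also lazy; only the write below triggers execution,
# so Spark gets to plan both queries as a single job
summary = df_a.join(df_b, "id")
summary.write.mode("overwrite").saveAsTable("summary_table")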

How to loop a GET method multiple times as per values fetched from the database in Karate Framework [duplicate]

There is a requirement that every time I run my automation, I first need to get a list of data from the DB (as it is very dynamic), put it in the Examples section of a Scenario Outline, and use it when running the feature file.
I have used a Scenario Outline as the same scenario needs to be run for multiple data sets.
So can you suggest if there is a way to do this?
Yes, you can try the dynamic scenario outline introduced in 0.9.X: https://github.com/intuit/karate#dynamic-scenario-outline
Examples:
| getDataFromDb() |
Note that there is an open bug for logs and a large number of rows: https://github.com/intuit/karate/issues/660
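A minimal feature-file sketch of the dynamic approach; the Java helper com.example.DbUtils and the URL are hypothetical, and getRows() is assumed to return a list of maps such as [{ "id": 1 }, { "id": 2 }]:
Feature: dynamic scenario outline fed from the DB

Background:
# the function behind the Examples expression is defined up front
* def getDataFromDb = function(){ return Java.type('com.example.DbUtils').getRows() }

Scenario Outline: one service call per DB row
# each key of the current row (here: id) is available as a variable
Given url 'https://myhost/api/items'
And param id = id
When method get
Then status 200

Examples:
| getDataFromDb() |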
Otherwise, the normal approach of looping over a second feature, which you already know, will work: https://github.com/intuit/karate#data-driven-tests

Running Karate tests on different environments and against specific URLs

I'm doing the research for my QA project and I'm wondering if Karate is able to handle certain use cases. Basically, I need to run tests on different environments (local, staging, production). From what I understood from the documentation, this is not a problem thanks to karate-config.js and karate-config-<env>.js.
The problem starts with the execution itself. Each environment has different URLs for 3 different countries, so there are actually 9 URLs in total. Moreover, because of the development process, certain features are not deployed at the same time for all countries. So I want to be able to run tests against:
1 - staging for one country (one URL)
2 - staging for all countries (the same request with 3 URLs; I guess I can use parallel execution)
The JSON structure is the same for all environments and countries, and I want to execute one request with different configurations. I was thinking about TDD, but I'm not sure if I can skip some rows of the Scenario Outline table when executing tests for only one country. Is it possible? Or is there any other way? Any advice appreciated.
You can "tagify" Scenario Outline rows. See the docs: https://github.com/intuit/karate#tags-and-examples
Scenario Outline: examples partitioned by tag
* def vals = karate.tagValues
* match vals.region[0] == '<expected>'

@region=US
Examples:
| expected |
| US |

@region=GB
Examples:
| expected |
| GB |
Karate can handle pretty much any data-driven challenge you have, once you understand how JSON, JSON manipulation, and data-driven testing work. Here are some answers that will give you further ideas to consider:
https://stackoverflow.com/a/61685169/143475
https://stackoverflow.com/a/59162760/143475
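For your case 1 (one country), you can then select the matching tag at runtime; a sketch, assuming a standard Maven + JUnit setup where Karate picks up the karate.options system property:
mvn test -Dkarate.options="--tags @region=US"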

How can I specify an order in Mink/Behat tests?

I'm using Mink/Behat to test my website.
I need certain features to be executed before others.
Is there a way to specify an order of execution?
For example:
There are 3 features: Administration, Login, Buy
I need to execute them in this order: Login, Administration, Buy
Thank you!
Behat executes files in alphabetical order. You could prefix them with a number to force the order.
However...
Scenarios are meant to be independent and the execution order shouldn't matter.
Behat now allows you to specify an order using one of the existing Orderer implementations:
vendor/bin/behat --order=[random|reverse|null]
So for random:
vendor/bin/behat --order=random
I believe you could also write your own. If you wish to control the order in terms of batches, e.g. a set X of scenarios needs to run before a set Y, this can be achieved with tags alone, running the two suites in sequence.
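A minimal sketch of that batch idea, assuming the two sets of scenarios are tagged @setup and @main:
vendor/bin/behat --tags=@setup && vendor/bin/behat --tags=@main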