Choosing spec feature file in wdio at runtime - webdriver-io

I want to run different feature files and decide which ones at runtime, i.e. via command-line arguments.
Every time, I have to uncomment the file I want and then run the test.
I tried working with Cucumber tags but could not get it to work.
specs: [
// 'features/subscription/create.feature'
// './features/payment/create.feature'
],
Is there any simple way to do this?

There are two ways as far as I know:
1. Define suites with the required feature files and pass the suite name as a parameter to the WDIO test.
Detailed explanation on suites: https://webdriver.io/docs/organizingsuites.html
NOTE: If you are starting the test with npm test, use npm test -- --suite login to pick a suite (this is not mentioned in the docs).
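For illustration, a minimal suites setup in wdio.conf.js might look like the following (the suite names and feature paths are placeholders, not taken from the question):

exports.config = {
    // default specs, used when no suite is selected
    specs: ['./features/**/*.feature'],
    // named groups of feature files
    suites: {
        subscription: ['./features/subscription/create.feature'],
        payment: ['./features/payment/create.feature'],
    },
    // ...other options
};

You could then run a single group with wdio wdio.conf.js --suite payment, or npm test -- --suite payment as noted above.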
2. You can pass the features in directly through the command line, like below:
In your wdio.conf.js file, add the lines below above exports.config and use the resulting array as the specs value.
var features = process.env.FEATURE || './features/**/*.feature';
var featureArray = features.split(',');
exports.config = {
    // ...
    specs: featureArray,
    // ...other options skipped
};
Now, while triggering the test, use a command like the one below:
FEATURE='./features/test.feature,./features/test1.feature' npm test
So when execution begins, features receives the string, which we split into an array and pass to specs.
Hope this helps.

Related

AUTOTOOLS: use my own testsuite instead of the default one

I have written a Python test suite for my project. I have added this variable in Makefile.am:
TESTS = ./launcher.sh
launcher.sh contains: tests/testsuite.py
When I do ./launcher.sh, my testsuite is correctly executed.
However, when I do make check, I get the following output:
PASS: launcher.sh
============================================================================
Testsuite summary for spider 1.0
============================================================================
# TOTAL: 1
# PASS: 1
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
How can I hide the default output and use the output of my testsuite?
The Automake manual contains a whole chapter on testing, which would be helpful for understanding the context of Automake's test suite support. Moreover, it is important to understand that part of the bargain you enter into by using Automake to generate makefiles for you is to accept some limitations on the form and behavior of the resulting build system.
How can I hide the default output and use the output of my testsuite?
To the best of my knowledge, you cannot hide the default output of make check, but you can cause the output of your test program to be emitted to make's standard output instead of redirected to a file. The easiest way to do this would be to enable the serial test harness by turning on Automake's serial-tests option. That would ordinarily be expressed via the argument to the AM_INIT_AUTOMAKE macro in your configure.ac:
AM_INIT_AUTOMAKE([serial-tests])
Note also that it should not be necessary to wrap your tests/testsuite.py in a shell script. Just make sure it is executable (which it sounds like you have already done), and name it directly, relative path included, in the value of the TESTS variable.
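For example, in Makefile.am that would look like the following (assuming the script lives at tests/testsuite.py relative to the Makefile and is executable):

TESTS = tests/testsuite.py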

karate api testing - How to read tag names from command line to feature file

My feature file:
Feature: validating tag name reading from maven command line

Background:
  Given url baseURL
  When param validation = I want to read tagname here
  Then method get
  Then status 200

#com_status #all #I want to read tagname here
Scenario Outline: Testing tag input scenarios
  * print I want to read tagname here
Command - mvn clean test -Dtest=Runner -DargLine="-Dkarate.env=dev" -Dcucumber.options="--tags #com_status"
Below is the command to execute a tag from the command line in Karate API automation testing:
mvn test -DargLine="-Dkarate.env=e2e" "-Dkarate.options=--tags #user_management_get_vender_types"
You cannot. Tags are designed to be passed on the command line to filter scenarios to run - and cannot be retrieved within a test. You can retrieve the tag of a Scenario though: https://github.com/intuit/karate#karate-tags
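For completeness, reading a scenario's own tags from within a test might look like this (a sketch, assuming a Karate version where karate.tags is available):

@com_status
Scenario: read own tags
* print karate.tags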
You can try using karate.properties or something similar to retrieve what was passed on the command-line: https://github.com/intuit/karate#dynamic-port-numbers
Command:
mvn clean test -DcustomName=foo
Feature:
* def customName = karate.properties['customName']
Feel free to contribute this feature if you think it is important.
EDIT: from 1.1.0 onwards there is a new feature called "Environment Tags" that may solve what is being asked for here: https://github.com/intuit/karate#environment-tags
Also see: https://stackoverflow.com/a/50693388/143475
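As a rough sketch of how environment tags work (assuming Karate 1.1.0 or later; the tag and scenario below are illustrative), a scenario tagged with an env tag only runs for the matching karate.env:

@env=dev
Scenario: dev-only check
* print 'this runs only when karate.env is dev'

mvn test "-Dkarate.env=dev"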

How to run single cucumber scenario by name

I'm asking for help on how to run a feature file scenario just by name. I've been trying for a while without success. I know it can be done by tags or by line number, but I wonder if we can run a cucumber test by name, more or less with this nomenclature.
Given a file named "features/test.feature" with:
Feature:
  Scenario: My first scenario
    Given this step is blah blah blah

  Scenario: My second scenario
    Given this step too blah blah
I want to run a scenario by name from the console or with Gradle, maybe something like this:
cucumber features/test.feature::My second scenario
Or maybe with Gradle:
./gradlew cucumber::My second scenario
You didn't describe how you start cucumber so I can't help you with that.
When used from the CLI, Cucumber accepts --name REGEXP. This will only run scenarios whose names match REGEXP.
The @CucumberOptions annotation accepts name = "REGEXP".
Cucumber < v6.0.0 looks at the environment. For Maven you can add -Dcucumber.options="--name REGEXP". I don't know the equivalent for Gradle. Take note that the escape characters may be shell/build-system dependent.
Cucumber v6.0.0 and above also looks at the environment. For Maven you can add -Dcucumber.filter.name="REGEXP".
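For illustration, the CLI and Maven variants might look like this (the scenario name used as the regular expression is just an example):

cucumber --name "My second scenario"
mvn test -Dcucumber.options="--name 'My second scenario'"    # Cucumber < 6.0.0
mvn test -Dcucumber.filter.name="My second scenario"         # Cucumber >= 6.0.0

And a JUnit runner using the annotation (the class name is a placeholder):

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(name = "My second scenario")
public class RunCucumberTest {
}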
See:
https://cucumber.io/docs/reference/jvm#running
https://github.com/cucumber/cucumber-jvm/tree/main/core
From Cucumber 6.x, you can run a scenario with the CLI commands below:
# specify a scenario by its line number
$ cucumber-js features/my_feature.feature:3
# specify a scenario by its name matching a regular expression
$ cucumber-js --name "topic 1"
But these are time-consuming and repetitive. You can save a lot of time by using a dedicated VSCode extension called Cucumber-Quick. This extension allows you to run a scenario or feature just by right-clicking on it. It can save you from all the hassle.
You would call the scenario by its line number.
So, assuming that the second scenario starts on line 5 in your feature file, you could run:
cucumber features/test.feature:5

How to use Bamboo plan variables in an inline script task?

When defining a Bamboo plan variable, the page says this:
For task configuration fields, use the syntax ${bamboo.myvariablename}. For inline scripts, variables are exposed as shell environment variables which can be accessed using the syntax $BAMBOO_MY_VARIABLE_NAME (Linux/Mac OS X) or %BAMBOO_MY_VARIABLE_NAME% (Windows).
However, that doesn't work in my Linux inline script. For example, I have the following defined as a plan variable:
name: my_plan_var
value: some_string
My inline script is simply...
PLAN_VAR=$BAMBOO_MY_PLAN_VAR
echo "Plan var: $PLAN_VAR"
and I just get a blank string.
I've tried this
PLAN_VAR=${bamboo.my_plan_var}
But I get
${bamboo.my_plan_var}: bad substitution
on the log viewer window.
Any pointers?
I tried the following and it works:
On the plan, I set my_plan_var to "it works" (w/o quotes)
In the inline script (don't forget the first line):
#!/bin/sh
PLAN_VAR=$bamboo_my_plan_var
echo "testing: $PLAN_VAR"
And I got the expected result:
testing: it works
I also wanted to create a Bamboo variable, and the only way I've found to share it between scripts is with inject-variables, like the following:
Add the following to your bamboo-spec.yaml after the script that creates the variable:
Build:
  tasks:
    - script: create-bamboo-var.sh
    - inject-variables:
        file: bamboo-specs/vars.yaml
        scope: RESULT
        # namespace: plan
    - script: echo ${bamboo.inject.GIT_VERSION} # just for testing
Note: Namespace defaults to inject.
In create-bamboo-var.sh create the file bamboo-specs/vars.yaml:
#!/bin/bash
versionStr=$(git describe --tags --always --dirty --abbrev=4)
echo "GIT_VERSION: ${versionStr}" > ./bamboo-specs/vars.yaml
Or for multiple lines you can use:
SW_NUMBER_DIGITS=${1} # Passed as first parameter to build script
cat <<EOT > ./bamboo-specs/vars.yaml
GIT_VERSION: ${versionStr}
SW_NUMBER_APP: ${SW_NUMBER_DIGITS}
EOT
Scope can be local or result. Local means the variable is only available in the current job; result means it can be used in subsequent stages of this plan and in releases that are created from the result.
Namespace is just used to avoid naming collisions with other variables.
With the above you can use that variable in later scripts with ${bamboo.inject.GIT_VERSION}. The last script task is just to see that it is working in other scripts. You can also see the variables in the web app as build meta data.
I'm using the above script before the build (in my case compiling C code) takes place, so I can also create a version.h file that can be used by the source code.
This is still a bit cumbersome, but I'm happy with it and I hope it helps others configure Bamboo. The Bamboo documentation could be better. (Still a lot of trial and error.)

Bamboo with tSQLt - Failed to parse test result file

First of all I should point out I'm new to Atlassian's Bamboo and continuous integration in general. This is the first project where I've used either.
I've created a raft of unit tests using the tSQLt framework. I've also configured Bamboo to:
Get a fresh copy of the repository from BitBucket
Drop & re-create the build DB
Use Red-Gate SQL Compare to deploy the DB objects from source to the build DB
Run the tSQLt tests
Output the results of the tests in XML format to a file called TestResults.xml
I've checked and can confirm that the TestResults.xml file is created.
In Bamboo I then added a JUnit Parser task to consume the contents of this TestResults.xml file. However when that task runs it returns this error:
Failed to parse test result file
At first I thought it might have meant that Bamboo could not find the file. I changed the task that created the results file to output a file called TestResults2.xml. When I did that the JUnit Parser returned this error:
Failing task since test cases were expected but none were found.
So I'm assuming that the first error message means Bamboo is finding the file, it just can't parse the file.
I have no idea where to start working out what exactly is the problem. Has anyone got any ideas?
I had a similar problem, but it turned out to be odd behaviour in Bamboo, which needs the files' timestamps to be touched before it gains visibility of the JUnit file.
In a Windows environment you just need to add a Script task before the JUnit task:
powershell (ls *.xml).LastWriteTime = Get-Date
Reference
https://jira.atlassian.com/browse/BAM-12768
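On a Linux agent, the equivalent would presumably be a Script task that refreshes the timestamps before the JUnit Parser runs, for example:

touch *.xml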
I have had several cases of this and was able to fix it by removing single quotes and greater-than/less-than characters from the test names inside the *.rb file.
Example:
test "make sure 'go_to_world' is removed from header and length < 23"
Change it to remove the single quotes and the < symbol:
test "make sure go_to_world is removed from header and length less than 23"
Contractions are very common ("won't", "don't", "shouldn't"), as are possessives ("the vessel's data") and < or > characters.
I think there is a bug in the parser that just doesn't escape those characters in a test title appropriately.