XRAY and WebdriverIO integration issue - partial results imported into XRAY

I have integrated Xray with WebdriverIO and Mocha using the guides below:
https://docs.getxray.app/display/XRAYCLOUD/Testing+web+applications+using+Mocha+and+WebdriverIO#tab-API
https://docs.getxray.app/display/XRAY/Import+Execution+Results+-+REST#ImportExecutionResultsREST-JUnitXMLresults
WDIO Config for JUnit reporter:
reporters: ['spec',
    ['junit', {
        outputDir: './',
        outputFileFormat: function(options) {
            return `results.xml`
        }
    }]
],
Curl to import results.xml into XRAY:
curl -H "Content-Type: multipart/form-data" -u "UserName":"PASSWORD" -F "file=@results.xml" "${URL}/rest/raven/1.0/import/execution/junit?projectKey=${projectKey}&testPlanKey=${testPlanKey}"
Commands to run test suite(s):
Run single suite: npm run --suite mysuite1
Run multiple suites: npm run --suite mysuite1 --suite mysuite2
When a single suite is executed, results.xml is created and successfully imported into Xray. But when multiple suites are executed as above, results.xml contains the test results of the last suite only, and thus only the test results for the last suite are imported into Xray.
As the Xray import API needs a projectKey and testPlanKey, a result file should be created for each suite, and the import API should be invoked for each result file with the right file name, project and plan.
What could help is a way to amend the result file name so that it can be associated with the test plan, e.g. result_mysuite1.xml.
Please let me know if more information is needed.
Thanks in advance,
Mahima.

The example we have available on the page you referred to is in fact for a simple use case with one test. If you want to have multiple result files, please include a dynamic id in the name so that it will create multiple files (we will include this suggestion in the tutorial in the future):
[
    'junit',
    {
        outputDir: './',
        outputFileFormat(options) {
            return `results-${options.cid}.xml`;
        },
    },
],
Then you can push those results to Xray.

The JUnit reporter creates one JUnit XML file per runner, as per the documentation.
What you can do is configure WDIO (wdio.conf.js) to generate a distinct file based on an id that is available.
reporters: ['spec',
    ['junit', {
        outputDir: './',
        outputFileFormat: function(options) {
            return `results-${options.cid}.xml`
        }
    }]
],
If you have 2 workers, you'll have 2 files, like results-0-0.xml and results-0-1.xml.
Then you can either upload them one by one, or you may merge them using a utility such as junit-merge.
To upload them one by one using some shell script, you could do something like:
for n in `ls results-*.xml`; do curl -H "Content-Type: multipart/form-data" -u "UserName":"PASSWORD" -F "file=@$n" "$BASE_URL/rest/raven/2.0/import/execution/junit?projectKey=$projectKey&testPlanKey=$testPlanKey"; done
If you prefer to merge the files and upload them in a single shot (my preferred approach), you would do something like:
npm install junit-merge
node_modules/junit-merge/bin/junit-merge -o junit.xml results-0-0.xml results-0-1.xml
# or if you have them in a directory, junit-merge -o junit.xml -d output_dir
curl -H "Content-Type: multipart/form-data" -u "UserName":"PASSWORD" -F "file=@junit.xml" "$BASE_URL/rest/raven/2.0/import/execution/junit?projectKey=$projectKey&testPlanKey=$testPlanKey"
Note: another option could be forcing WDIO to use just one runner; however, it seems that a runner is created per spec, at least from what I could assess.


Activate test coverage for python code on Gitlab

I'm trying to create a .gitlab-ci.yml step to activate GitLab's test coverage with pytest + pytest-cov.
Current unsuccessful snippet
I've tried:
.only-default: &only-default
  only:
    - merge_requests

stages:
  - test

test-py:
  stage: test
  image: "python:3.8"
  script:
    - pip install -r requirements.txt
    - python -m pytest -vvv src --cov-report xml --cov=src
  artifacts:
    reports:
      cobertura: coverage.xml
Among other packages used for my project, the requirements.txt file contains pytest and pytest-cov.
The associated pipeline outputted:
Uploading artifacts...
coverage.xml: found 1 matching files and directories
Uploading artifacts as "cobertura" to coordinator... ok id=858390324 responseStatus=201 Created token=6uBetoBX
But I'm unable to see the new feature in my MR.
Does anyone have a working solution to activate the option?
Reference page
https://docs.gitlab.com/ee/user/project/merge_requests/test_coverage_visualization.html
The solution to this question can be found in this issue in the GitLab repo:
https://gitlab.com/gitlab-org/gitlab/-/issues/285086
The documentation states that coverage.py is needed to convert the report to use full relative paths. The information isn’t displayed without the conversion.
So in your example, instead of:
- python -m pytest -vvv src --cov-report xml --cov=src
Do:
- python -m pytest -vvv src --cov=src
- coverage xml -o coverage.xml
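Putting it together, the test-py job from the question would then look something like this (a sketch; the image, paths and requirements file are the ones from the question):
test-py:
  stage: test
  image: "python:3.8"
  script:
    - pip install -r requirements.txt
    # Run the tests and collect coverage data (no XML report yet).
    - python -m pytest -vvv src --cov=src
    # Convert the collected data to Cobertura XML with relative paths.
    - coverage xml -o coverage.xml
  artifacts:
    reports:
      cobertura: coverage.xml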

Testcafe report generation - open source engine

I am planning to convert my TestCafe console output to a txt/html file with a custom file name. I am using the lines of script below, but the output file is not generated and there are no errors. I have installed the required reporter packages. Please guide me.
$ testcafe chrome tests/core/sample.test.js --reporter html:/file.html
$ testcafe chrome tests/core/sample.test.js --reporter list:/test.txt
Thanks
Ramesh D
console output to txt
Isn't this just easier?
$ testcafe chrome tests/core/sample.test.js > test_output.txt 2>&1
I mean, you can use reporters, or you can build your own reporter and spend hours on that; but if you really just need a file with the console output, this should be enough.
console output to html
I have a habit of using config files rather than command line options, so I'll show it on config files:
.testcaferc.json
{
  "reporter": [
    {
      "name": "html",
      "output": "Results/report.html"
    }
  ]
}
First, of course, you need to install the appropriate npm package:
$ npm install --save-dev testcafe-reporter-html
If you insist on a command-line option, the TestCafe docs should guide you. I believe this would be the right command:
$ testcafe chrome tests/core/sample.test.js -r html:Results/report.html

extends in Gitlab-ci pipeline

I'm trying to include a file in which I declare some repetitive jobs, using extends.
I always get this error: did not find expected key while parsing a block.
this is the template file
.deploy_dev:
stage: deploy
  image: nexus
  script:
    - ssh -i ~/.ssh/id_rsa -o "StrictHostKeyChecking=no" sellerbot@sb-dev -p 10290 'sudo systemctl restart mail.service'
  only:
    - dev
this is the main file
include:
  - project: 'sellerbot/gitlab-ci'
    ref: master
    file: 'deploy.yml'

deploy_dev:
  extends: .deploy_dev
Can anyone help me, please?
It looks like just stage: deploy has to be indented. In this case it's a good idea to use the GitLab CI Lint tool to check whether the CI pipeline code is valid, or just a YAML validator. When I checked the section from the template file in a YAML linter, I got:
(<unknown>): mapping values are not allowed in this context at line 3 column 8
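For reference, a corrected version of the template, with stage: deploy indented to the same level as the other job keys, would be (a sketch based on the snippet in the question):
.deploy_dev:
  stage: deploy
  image: nexus
  script:
    - ssh -i ~/.ssh/id_rsa -o "StrictHostKeyChecking=no" sellerbot@sb-dev -p 10290 'sudo systemctl restart mail.service'
  only:
    - dev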

Does package.json support compound variables?

A project that respects the semver directory structure for build artefacts is beginning soon, and package.json or .npmrc would seem to be the right place to define this metadata. This is some preliminary code that demonstrates how the goal is intended to be achieved:
{
    "name": "com.vendor.product"
  , "version": "0.0.0"
  , "directories": {
      "build": "./out"
    }
  , "main": "./${npm_directories_build}/${npm_package_name}/${npm_package_version}/${npm_package_name}.js"
  , "exports": "${npm_package_main}"
  , "scripts": {
      "echo": "echo \"${npm_package_exports}\""
    }
}
I expected
npm run echo
to print the compound variable result to standard output,
./out/com.vendor.product/0.0.0/com.vendor.product.js
but instead, it prints the literal text
${npm_package_exports}
I attempted to use array variables in .npmrc
outpath[]=./out
outpath[]=/${npm_package_name}
outpath[]=/${npm_package_version}
But
...
{
  "echo": "echo \"${npm_config_outpath}\""
}
Simply prints an empty newline
It was expected that package.json supports compound variables, but this assumption is now in question. I have checked documentation, but either I am missing something or such is not defined. Long hand repetition of the same data is to be avoided (e.g. multiple references to package variables in order to make a single path). It is intended for package name and version to always dictate the location of the build files in a reliable and predictable manner.
If compound variables are not supported, could you clarify how .npmrc array variables actually work? Failing that, could you recommend an alternative method to achieve the same ends? Many thanks!
Searched documentation:
https://docs.npmjs.com/misc/config
https://docs.npmjs.com/files/npmrc
https://docs.npmjs.com/configuring-npm/npmrc.html
https://docs.npmjs.com/files/package.json#config
http://doc.codingdict.com/npm-ref/misc/config.html#config-settings
https://github.com/npm/ini
Short Answer:
"Does package.json support compound variables?"
Unfortunately no, not in the way you are wanting to use them. It only has package.json vars, which can be used in npm scripts only. For example, on *nix, defining the echo script as:
"echo": "echo $npm_package_version"
or on Windows defining it as:
"echo": "echo %npm_package_version%"
will print the version e.g. 0.0.0.
Note: cross-env provides a solution for a single syntax that works cross-platform.
You certainly cannot use parameter substitution ${...} in package.json fields outside of the scripts section.
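One workaround is to compose the compound value inside the script itself: npm exports package fields as environment variables to the script's shell, so the expansion happens at run time. A minimal sketch in plain shell, with hypothetical stand-in values for the variables npm would export:

```shell
# Stand-ins for the environment variables npm exports to scripts;
# inside a real npm script these would already be set.
npm_package_name="com.vendor.product"
npm_package_version="0.0.0"
build_dir="./out"   # hypothetical, mirroring directories.build

# The shell assembles the compound path at run time, which is exactly
# what a top-level package.json field cannot do.
main_path="${build_dir}/${npm_package_name}/${npm_package_version}/${npm_package_name}.js"
echo "$main_path"
```

In package.json this would correspond to something like "echo": "echo ./out/${npm_package_name}/${npm_package_version}/${npm_package_name}.js", keeping the repetition confined to the scripts section where expansion works.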
Additional info:
Regarding your subsequent comment:
How array values defined in .npmrc can be used in package.json
AFAIK, you cannot. For example, let's say we:
Save this contrived .npmrc in the root of the project directory.
.npmrc
quux[]="one"
quux[]="two"
quux[]="three"
foo=foobar
Then cd to the project directory and run the following command to print all environment variables:
npm run env
As you can see, the npm_config_foo=foobar environment variable has been added by npm. However, for the quux array, no npm_config_quux=[ ... ] environment variable is added.
So, in npm scripts using package.json vars the following does work:
"echo": "echo $npm_config_foo"
However the following (for referencing the array) does not, simply because the variable does not exist:
"echo": "echo $npm_config_quux"
The ini node.js package:
Maybe consider investigating the ini node.js package that npm utilizes for parsing .npmrc files. For example:
If you run the following command to install the package in your project:
npm i -D ini
Then define the npm echo script as per this:
"scripts": {
  "echo": "node -e \"var fs = require('fs'), ini = require('ini'); var config = ini.parse(fs.readFileSync('./.npmrc', 'utf-8')); console.log(config.quux)\""
}
Note it uses the Node.js command-line option -e to evaluate the JavaScript code. It essentially executes the following:
var fs = require('fs'),
ini = require('ini');
var config = ini.parse(fs.readFileSync('./.npmrc', 'utf-8'));
console.log(config.quux);
Then given the contrived .npmrc file that I mentioned previously when running:
npm run echo
it will print:
[ 'one', 'two', 'three' ]

How to handle an asynchronous request/response as part of a Gitlab CI/CD test

I am looking to migrate from Jenkins to GitLab CI/CD. We currently use the BlazeMeter plugin for Jenkins to run GUI Functional tests on Blazemeter as part of a Jenkins job.
Unfortunately BlazeMeter doesn't have a plugin for GitLab but they do have a simple JSON API to start tests.
Because the tests can be long-running the Blazemeter API is asynchronous. One cUrl endpoint is used to start a test and another is used to poll and get the results (passing an ID returned in the first call).
What is the best way to handle this asynchronous process as part of a GitLab CI Pipeline job and what is the sample gitlab yaml?
GitLab has a webhook/pipeline trigger feature that you can invoke from wherever you want. BlazeMeter also has notifications via webhooks. Combining these two will solve your problem without having one long-running job that waits for test completion.
test-trigger:
  stage: test
  script:
    - # curl command to invoke test
  except:
    - triggers

test-completion:
  stage: test
  script:
    - # reporting script
  only:
    - triggers
The following resources will help you get started.
https://docs.gitlab.com/ee/ci/triggers/
https://guide.blazemeter.com/hc/en-us/articles/360001859058-Notifications-Overview-Notifications-Overview#webhook
https://blog.runscope.com/posts/how-to-send-runscope-webhook-notifications-to-google-hangouts-chat-with-eventn
The general solution is to use a shell/cmd script to manage loops in gitlab-ci.
build:
  stage: runBlazeMeter
  script:
    - |
      echo "START test run"
      echo "use curl to initiate test"
      COUNT=0
      while
        COUNT=$((COUNT + 1))
        echo "use curl to query test completion"
        RES=$(curl --silent https://jsonplaceholder.typicode.com/users/${COUNT} | wc -c)
        [ $RES -gt 3 ]
      do :; done
      echo "END test run"
You can use two stages to achieve this, using artifacts to copy the ID from one stage to the next:
stages:
  - start
  - test

start-test:
  stage: start
  artifacts:
    untracked: true
  script:
    - curl http://run/the/test > testid.json

test:
  stage: test
  dependencies:
    - start-test
  script:
    - |
      TESTID=$(cat testid.json)
      while
        sleep 1000
        RES=$(curl https://test/status/${TESTID} | grep -c "COMPLETE")
        [ $RES -eq 0 ]
      do :; done
      echo "TEST COMPLETE"