TestCafe report generation

I am planning to convert my TestCafe console output to a txt/html file with a custom file name. I am using the script lines below, but the output file is not generated and there are no errors. I have installed the required reporter packages. Please guide me.
$ testcafe chrome tests/core/sample.test.js --reporter html:/file.html
$ testcafe chrome tests/core/sample.test.js --reporter list:/test.txt

console output to txt
Isn't this just easier?
$ testcafe chrome tests/core/sample.test.js > test_output.txt 2>&1
You can use reporters, or even build your own reporter and spend hours on that, but if you really just need a file out of the console output, this redirect should be enough.
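If you do ever want the custom-reporter route, the handler object itself is small. Here is a minimal sketch of its shape, based on the reporter method names in the TestCafe plugin docs; the packaging details (a testcafe-reporter-<name> npm module) are omitted and the output format here is made up:

module.exports = function () {
    return {
        reportTaskStart (startTime, userAgents, testCount) {
            // this.write()/this.newline() are helpers TestCafe binds onto the reporter
            this.write(`Running ${testCount} tests`).newline();
        },
        reportFixtureStart (name) {
            this.write(`Fixture: ${name}`).newline();
        },
        reportTestDone (name, testRunInfo) {
            const status = testRunInfo.errs.length ? 'FAIL' : 'PASS';
            this.write(`${status} ${name}`).newline();
        },
        reportTaskDone (endTime, passed) {
            this.write(`Done: ${passed} passed`).newline();
        }
    };
};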
console output to html
I have a habit of using config files rather than command-line options, so I'll show it with a config file:
.testcaferc.json
{
  "reporter": [
    {
      "name": "html",
      "output": "Results/report.html"
    }
  ]
}
First, of course, you need to install the appropriate npm package:
$ npm install --save-dev testcafe-reporter-html
One likely culprit in your original commands, by the way: the leading slash in html:/file.html points at the filesystem root, so the report either fails to be written or lands somewhere you don't expect; use a relative path instead. If you insist on a command-line option, the TestCafe docs should guide you. I believe this would be the right command:
$ testcafe chrome tests/core/sample.test.js -r html:Results/report.html
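And if you want both the txt and the html report from a single run, the TestCafe CLI also accepts several reporters separated by commas; something like this should work, though I haven't tested this exact combination:

$ testcafe chrome tests/core/sample.test.js -r list:test.txt,html:Results/report.html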

Related

Xray and WebdriverIO integration issue - partial results imported into Xray

I have integrated Xray with WebdriverIO and Mocha using the guides below:
https://docs.getxray.app/display/XRAYCLOUD/Testing+web+applications+using+Mocha+and+WebdriverIO#tab-API
https://docs.getxray.app/display/XRAY/Import+Execution+Results+-+REST#ImportExecutionResultsREST-JUnitXMLresults
WDIO Config for JUnit reporter:
reporters: ['spec',
  ['junit', {
    outputDir: './',
    outputFileFormat: function (options) {
      return `results.xml`
    }
  }]
],
curl to import results.xml into Xray:
curl -H "Content-Type: multipart/form-data" -u "UserName":"PASSWORD" -F "file=@results.xml" "${URL}/rest/raven/1.0/import/execution/junit?projectKey=${projectKey}&testPlanKey=${testPlanKey}"
Commands to run test suite(s):
Run single suite: npm run --suite mysuite1
Run multiple suites: npm run --suite mysuite1 --suite mysuite2
When a single suite is executed, results.xml is created and successfully imported into Xray. But when multiple suites are executed as above, results.xml holds the test results of the last suite only, and thus only the last suite's results are imported into Xray.
As the Xray import API needs a project key and test plan key, a result file should be created for each suite, and the import API should be invoked for each result file with the right file name, project and plan.
What could help is a way to amend the result file name so it can be associated with the test plan, e.g. result_mysuite1.xml.
Please let me know if more information is needed.
The example we have available on the page you referred to is in fact for a simple use case with one test. If you want multiple result files, please include a dynamic id in the name so that multiple files are created (we will include this suggestion in the tutorial in the future):
[
  'junit',
  {
    outputDir: './',
    outputFileFormat(options) {
      return `results-${options.cid}.xml`;
    },
  },
],
Then you can push those results to Xray.
The JUnit reporter creates one JUnit XML file per runner, as per the documentation.
What you can do is configure WDIO (wdio.conf.js) to generate a distinct file based on an id that is available:
reporters: ['spec',
  ['junit', {
    outputDir: './',
    outputFileFormat: function (options) {
      return `results-${options.cid}.xml`
    }
  }]
],
If you have 2 workers, you'll have 2 files like results-0-0.xml, results-0-1.xml.
Then you can either upload them one by one, or merge them first using a utility such as junit-merge.
To upload them one by one with some shell script, you could do something like:
for n in `ls results-*.xml`; do curl -H "Content-Type: multipart/form-data" -u "UserName":"PASSWORD" -F "file=@$n" "$BASE_URL/rest/raven/2.0/import/execution/junit?projectKey=$projectKey&testPlanKey=$testPlanKey"; done
If you prefer to merge the files and upload them in a single shot (my preferred approach), you would do something like:
npm install junit-merge
node_modules/junit-merge/bin/junit-merge -o junit.xml results-0-0.xml results-0-1.xml
# or if you have them in a directory, junit-merge -o junit.xml -d output_dir
curl -H "Content-Type: multipart/form-data" -u "UserName":"PASSWORD" -F "file=@junit.xml" "$BASE_URL/rest/raven/2.0/import/execution/junit?projectKey=$projectKey&testPlanKey=$testPlanKey"
Note: another option could be forcing WDIO to use just one runner; however, it seems that a runner is created per spec, at least from what I could assess.
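Depending on your WebdriverIO version, there may be a way to do exactly that: the WDIO docs describe grouping spec files by nesting them in an inner array inside specs, which runs the whole group in a single worker and should therefore produce a single JUnit file. A sketch, with an illustrative glob:

// wdio.conf.js
exports.config = {
  // the inner array groups these specs into one runner,
  // so the JUnit reporter emits a single results file for the group
  specs: [
    ['./tests/specs/**/*.spec.js']
  ],
  // ...rest of your config
};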

How to run an npm postinstall script only on macOS

How can a postinstall script be restricted to run only on macOS?
I have a shell script inside my React Native library, and it needs to be run when npm install has completed.
This works great with postinstall but the problem is that Windows can't execute the shell script.
"scripts": {
"postinstall": "./iospatch.sh"
},
I need a way to limit that, to only run on macOS.
I tried with this library, but it didn't work for my case:
https://www.npmjs.com/package/cross-os
For cross-platform support, consider redefining your npm script as follows. This ensures that the shell script (.sh) is run via the postinstall npm script only when installing your package on macOS.
"scripts": {
"postinstall": "node -e \"process.platform === 'darwin' && require('child_process').spawn('sh', ['./iospatch.sh'], { stdio: 'inherit'})\""
}
Explanation:
The node -e \"...\" part utilizes the Node.js command line option -e to evaluate the inline JavaScript as follows:
process.platform === 'darwin' utilizes the process.platform property to identify the operating system platform. If it equals darwin then it's macOS.
The part on the right-hand side of the && operator,
require('child_process').spawn('sh', ['./iospatch.sh'], { stdio: 'inherit'})
is executed only if the expression on the left-hand side of the && operator is true, i.e. it only runs if the platform is macOS.
That part of the code uses the child_process.spawn() method to invoke your .sh file. The stdio option is set to inherit to configure the pipes for stdin, stdout and stderr in the child process.
Also note the command passed to child_process.spawn() is sh and the argument is the file path to the shell script, i.e. ['./iospatch.sh']. We do this to avoid having to set file permissions on macOS so that it can execute iospatch.sh.
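If the inline one-liner becomes hard to read, a variation (the file name and layout are my own, not from the original answer) is to move the check into a small Node script and point postinstall at it, e.g. "postinstall": "node scripts/postinstall.js":

// scripts/postinstall.js (hypothetical path)
const { spawnSync } = require('child_process');

// Run the patch script only on macOS; on every other platform the hook is a no-op.
if (process.platform === 'darwin') {
  const result = spawnSync('sh', ['./iospatch.sh'], { stdio: 'inherit' });
  // Propagate the script's exit code so a failed patch fails the install.
  process.exit(result.status === null ? 1 : result.status);
}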

How can I run an npm script with a file path in Windows?

I am working on a project with a windows machine, and I have a few npm scripts like this:
"start" : "./foo/bar"
When I try to run npm run start I get this error:
.\foo\bar is not recognized as an internal or external command,
operable program or batch file.
I noticed the forward slash has been flipped to a backslash for Windows, but also, if I run this command on its own, the bash terminal will interpret the slashes as escapes and return:
bash: .foobar: command not found
The file runs ok in the terminal if I use ./foo/bar or .\\foo\\bar but not if I use these in the npm script.
What can I do to get this working in Windows? Furthermore, is there a way to write it so it's compatible across Win/Mac/Linux?
It works when you first do a cd with normal slashes (npm/Node.js seems to resolve this depending on the OS); then you only have to specify the file:
"scripts": {
"not-working": "scripts/another-folder/foo.cmd",
"working": "cd scripts/another-folder && foo.cmd"
},
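As for the Win/Mac/Linux part of the question: if what you're running can be expressed as a Node script, invoking it through node avoids the path-separator problem entirely, since Node accepts forward slashes on Windows. A sketch, where bar.js is a stand-in for your actual script:

"scripts": {
  "start": "node ./foo/bar.js"
}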

How to access log files created by GCP Cloud Build steps?

My Cloud Build fails with a timeout on npm test, and no useful information is sent to stdout. A complete log can be found in a file, but I couldn't find a way to SSH into the Cloud Build environment.
Already have image: node
> project-client#v3.0.14 test
> jest
ts-jest[versions] (WARN) Version 4.1.0-beta of typescript installed has not been tested with ts-jest. If you're experiencing issues, consider using a supported version (>=3.8.0 <5.0.0-0). Please do not report issues in ts-jest if you are using unsupported versions.
npm ERR! path /workspace/v3/client
npm ERR! command failed
npm ERR! signal SIGTERM
npm ERR! command sh -c jest
npm ERR! A complete log of this run can be found in:
npm ERR! /builder/home/.npm/_logs/2020-11-09T07_56_23_573Z-debug.log
Since I have no problem running the tests locally, I'd like to see the content of that 2020-11-09T07_56_23_573Z-debug.log file to hopefully get a hint at what might be wrong.
Is there a way to retrieve the file content?
SSH into the Cloud Build environment?
Get npm to print the complete log to stdout?
Some way to save the log file artifact to Cloud Storage?
I had a similar issue with error management on GitLab CI, and my workaround is inspired by that.
The trick is to embed your command in something that exits with return code 0. Here is an example:
- name: node
  entrypoint: "bash"
  args:
    - "-c"
    - |
      RETURN_CODE=$$(npm test > log.stdout 2>log.stderr; echo $${?})
      cat log.stdout
      cat log.stderr
      if [ $${RETURN_CODE} -gt 0 ]; then
        # Do what you want in case of error, like a cat of the files in the _logs dir,
        # then break the build:
        exit 1
      else
        # Do what you want in case of success; nothing needed to continue to the next step
        :
      fi
Some explanations:
echo $${?}: the double $ tells Cloud Build not to treat it as a substitution variable, but to leave it alone so the shell interprets it. $? gives you the exit code of the previous command.
Then you test the exit code; if it's > 0, you can perform actions. At the end, I recommend breaking the build so you don't continue with erroneous sources.
You can parse the log.stderr file to get useful info from it (the path of the npm log file, for example).
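To get at the npm debug log from the original question, one option (an untested sketch; the directory comes from the error output quoted above) is to cat it inside the error branch before exiting:

# inside the error branch of the build step above
cat /builder/home/.npm/_logs/*-debug.log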

How to setup environments for Cypress.io

I am taking a swing at setting up a test suite for my company's web app. We use four environments at the moment (Production, Regression, Staging, Development). I have environment variables set up in my cypress.json file, but I would like to be able to switch my environment, for example from Regression to Development, and force Cypress to change the baseUrl to the new environment as well as point to a different cypress.json file that has Development variables. The documentation around environments on cypress.io is a little confusing to me, and I'm not sure where to start.
I have Cypress running in different environments using package.json scripts. You can pass env vars before the cypress command. It would look something like:
"scripts": {
"cypress:open:dev": "CYPRESS_BASE_URL=http://localhost:3000 cypress open",
"cypress:open:prod": "CYPRESS_BASE_URL=http://mycompanydomain.com cypress open",
"cypress:run:dev": "CYPRESS_BASE_URL=http://localhost:3000 cypress run",
"cypress:run:prod": "CYPRESS_BASE_URL=http://mycompanydomain.com cypress run",
}
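One caveat: the VAR=value prefix is POSIX shell syntax and won't work in Windows cmd. If your team is on mixed platforms, a package such as cross-env (my suggestion, not part of the original setup) handles that:

"scripts": {
  "cypress:open:dev": "cross-env CYPRESS_BASE_URL=http://localhost:3000 cypress open"
}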
If you want to make 4 separate cypress.json files instead, you could name them according to environment, and have the npm script for each environment copy the corresponding file over the main cypress.json before running the tests.
Files:
./cypress.dev.json
./cypress.prod.json
./cypress.staging.json
./cypress.regression.json
npm scripts:
"scripts": {
"cypress:run:dev": "cp ./cypress.dev.json ./cypress.json; cypress run;"
}
Update:
I wrote this while Cypress was still in beta. Using the config-file flag seems like a cleaner option:
https://docs.cypress.io/guides/guides/command-line.html#cypress-run
npm scripts:
"scripts": {
"cypress:run:dev": "cypress run -c cypress.dev.json;"
}
You can pass the config file to be used with the --config-file param:
Syntax:
cypress open --config-file <config-file-name>
If you have different environment files then it should be as:
"scripts": {
"cypress:open:prod": "cypress open --config-file production-config.json",
"cypress:open:stag": "cypress open --config-file staging-config.json",
},
In the commands above, we tell Cypress to use the production-config.json file for the prod environment and, similarly, staging-config.json for the staging environment.
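For completeness, each of those files is just an ordinary Cypress configuration. A minimal staging-config.json might look like this (the URL and env variable are placeholders, not from the original answer):

{
  "baseUrl": "https://staging.mycompanydomain.com",
  "env": {
    "apiUrl": "https://staging-api.mycompanydomain.com"
  }
}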