After the test execution, is there any way to get a report of how many tests passed and how many failed in Playwright, like in Robot Framework?
The new built-in Playwright Test Runner has a number of reporter options. They are documented here:
https://playwright.dev/docs/test-reporters
There are currently three modes for terminal output, ranging from very verbose to very terse: list, line, and dot.
There are a further two modes intended for output to file: json and junit. The former is self-explanatory; the latter produces JUnit-style XML output.
Finally, there is the option to generate an HTML report.
The modes can be combined to control the terminal output and file output simultaneously.
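For example, combining a terminal reporter with a file reporter in the config could look like this sketch (the output path is a placeholder):
playwright.config.js:
// A minimal sketch; the output path is an assumption.
module.exports = {
    reporter: [
        ['list'],                                        // verbose terminal output
        ['junit', { outputFile: 'results/junit.xml' }],  // JUnit XML for CI dashboards
    ],
};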
You can configure many different reporters; which ones really depends on what you need. "Like in robot framework" is quite broad, since you can use different reporters with RF as well.
If you use Playwright with mocha, you can configure even more reporters. The setup below uses mocha-multi-reporters to combine them:
.mocharc.json:
{
    "reporter": "mocha-multi-reporters",
    "reporter-options": [
        "configFile=reporter-options.json"
    ]
}
reporter-options.json:
{
    "reporterEnabled": "mocha-simple-html-reporter, spec, mocha-junit-reporter",
    "mochaSimpleHtmlReporterReporterOptions": {
        "output": "./Results/report.html"
    },
    "mochaJunitReporterReporterOptions": {
        "mochaFile": "./Results/report-junit.xml"
    }
}
Obviously you have to install dependencies:
package.json:
{
    "devDependencies": {
        "mocha": "~8.2.1",
        "mocha-junit-reporter": "~2.0.0",
        "mocha-multi-reporters": "~1.5.1",
        "mocha-simple-html-reporter": "~1.1.0",
        "playwright": "~1.10.0"
    }
}
After this setup, HTML and JUnit reports will be available after test runs. The JUnit report should be enough to be parsed in a pipeline and displayed on some dashboard.
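Since mocha reads .mocharc.json from the working directory automatically, a plain run should then produce both reports under ./Results (the tests path is a placeholder):
$ npx mocha tests/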
Related
I'm trying to check which browser we're running tests on, and then skip a test/fixture based on the result (as mentioned in this TestCafe Issue).
import { t } from 'testcafe';

fixture `test`
    .page('https://testcafe.devexpress.com');

if (t.browser.name.includes('Chrome')) {
    test('is Chrome?', async () => {
        console.log(t.browser.name);
        await t.expect(t.browser.name.includes('Chrome')).ok();
    });
} else {
    test.skip('is Chrome?');
}
Results in...
ERROR Cannot prepare tests due to an error.
Cannot implicitly resolve the test run in the context of which the test controller action should be executed. Use test function's 't' argument instead.
Is there any way I can use the test object (t) outside of the test?
I don't have a solution to exactly your question, but I think it's better to do it slightly differently: the outcome will be the same, but the means of achieving it will differ a bit. Let me explain.
Wrapping test cases in if statements is, in my opinion, not a good idea. It clutters test files: you no longer see only test or fixture on the left side, but also if statements that make you stop when reading. That adds complexity when you just want to scan a test file quickly from top to bottom.
A solution could be to introduce metadata on your test cases (it works with fixtures as well):
test
.meta({
author: 'pavelsaman',
creationDate: '16/12/2020',
browser: 'chrome'
})
('Test for Chrome', async t => {
// test steps
});
Then you can execute only tests for Chrome like so:
$ testcafe --test-meta browser=chrome chrome
That's very much the same as what you wanted to achieve with the condition, but the code is a bit more readable.
In case you want to execute tests for both chrome and firefox, you can run more than one command:
$ testcafe --test-meta browser=chrome chrome
$ testcafe --test-meta browser=firefox firefox
or:
$ testcafe --test-meta browser=chrome chrome && testcafe --test-meta browser=firefox firefox
If your tests are in a pipeline, it would probably be done in two steps.
The better solution, as mentioned in one of the comments on this question, is to use the runner object to run your tests instead of the command line. Instead of passing the browser(s) as a CLI argument, you would pass it as an optional argument to a top-level script.
You would then read the browser variable from either the script parameter or the .testcaferc.json file.
You would need to tag all tests/fixtures with the browser(s) they apply to using meta data.
You then use the Runner.filter method to add a delegate that returns true if the browser in the metadata is equal to the browser variable in the top-level script:
var runner = testcafe.createRunner();
var browser = process.env.npm_package_config_browser || require("./.testcaferc.json").browser;

runner.filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) => {
    return fixtureMeta.browser === browser || testMeta.browser === browser;
});
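For context, here is a minimal sketch of the surrounding bootstrap (the host, ports, and tests/ path are assumptions):
const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost', 1337, 1338);
    const browser = process.env.npm_package_config_browser || require('./.testcaferc.json').browser;

    const failedCount = await testcafe
        .createRunner()
        .src('tests/')                  // placeholder test directory
        .filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) =>
            fixtureMeta.browser === browser || testMeta.browser === browser)
        .browsers(browser)
        .run();                         // resolves to the number of failed tests

    await testcafe.close();
})();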
I am using karma-html-reporter to generate the report, which works fine.
But when I execute the test cases with karma-parallel, I observe that it generates the report for only one instance, not the other.
Is there a way to generate a report for both instances?
Currently I am running 2 instances of Chrome.
What do I have to do to get an integrated report of both instances?
I have tried karma-multibrowser-reporter (link), but it removes the karma-parallel feature.
Report generation happens with the configuration below:
htmlReporter: {
outputDir: 'path/results'
},
karma-parallel has the option aggregatedReporterTest. If you add html to the regex it only uses one reporter for all browsers:
parallelOptions: {
aggregatedReporterTest: /coverage|istanbul|html|junit/i
},
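Pieced together, the relevant part of karma.conf.js might look like this (a sketch; the frameworks list and executor count are assumptions):
karma.conf.js:
module.exports = function (config) {
    config.set({
        frameworks: ['parallel', 'jasmine'],   // 'parallel' must come first
        parallelOptions: {
            executors: 2,                      // matches the two Chrome instances
            aggregatedReporterTest: /coverage|istanbul|html|junit/i
        },
        reporters: ['html'],
        htmlReporter: {
            outputDir: 'path/results'
        }
    });
};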
Using TestCafe grep patterns would partially solve our problem of using tags, but it would still display those tags in the spec report.
Is there a way to include tags in the test/fixture names and use grep patterns, but keep those tags from being displayed in the execution report?
import { Selector } from 'testcafe';
fixture `Getting Started`
.page `http://devexpress.github.io/testcafe/example`;
test('My first test --tags {smoke, regression}', async t => {
// Test code
});
test('My Second test --tags {smoke}', async t => {
// Test code
});
test('My first test --tags {regression}', async t => {
// Test code
});
testcafe chrome test.js -F "smoke"
The above snippet triggers only the smoke tests for me, but the report displays the test names along with those tags.
Is there an alternative way to deal with tags, or a solution that does not display the tags in the test execution report?
It appears that in a recent release (v0.23.1) of TestCafe you can now filter by metadata via the command line.
You can now run only those tests or fixtures whose metadata contains a specific set of values. Use the --test-meta and --fixture-meta flags to specify these values.
testcafe chrome my-tests --test-meta device=mobile,env=production
or
testcafe chrome my-tests --fixture-meta subsystem=payments,type=regression
Read more at https://devexpress.github.io/testcafe/blog/testcafe-v0-23-1-released.html
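Applied to the question's example, the tags would move out of the test names and into metadata, so the report shows clean names. A sketch (the metadata keys and values are assumptions):
fixture `Getting Started`
    .page `http://devexpress.github.io/testcafe/example`;

test.meta({ smoke: 'true', regression: 'true' })('My first test', async t => {
    // Test code
});

test.meta({ smoke: 'true' })('My second test', async t => {
    // Test code
});
Running testcafe chrome test.js --test-meta smoke=true then executes only the smoke tests, with no tags cluttering the report.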
I think the best solution in this case is to use test/fixture metadata. Please refer to the following article: http://devexpress.github.io/testcafe/documentation/test-api/test-code-structure.html#specifying-testing-metadata
For now, you can't filter by metadata, but this feature is in a pull request: https://github.com/DevExpress/testcafe/pull/2841. Once this PR is merged, you will be able to add any metadata to tests and filter by that metadata on the command line.
I have a Cucumber feature file with a bunch of Scenarios to execute Citrus integration tests (Citrus-Cucumber integration). I can run all Scenarios at once in IntelliJ or through Maven.
This works perfectly, but is it possible to run a single Scenario in IntelliJ or through Maven?
In IntelliJ I found a Cucumber plugin that gives me this option, but after fixing lots of NoClassDefFound errors with dependency tricks, it fails because it does not respect the Citrus project files such as citrus-application.properties.
Answer to myself: Yes, I can.
According to the Cucumber docs, it uses tags to (what a surprise) tag scenarios. Tags begin with the @ symbol, and one can set any number of tags on a Feature or Scenario.
Example:
@Test123
@Important
Scenario: My awesome Scenario
  Given ...
  When ...
  Then ...

@Test456
Scenario: My other Scenario
  Given ...
  When ...
  Then ...
With these tags in place, I can "select" scenarios to execute based on tags. For example, to execute all important Scenarios I use the tag @Important.
cucumber --tags @Important
Since I execute my tests through a Citrus class, I have to pass the tags to execute in the CucumberOptions of my test class.
@RunWith(Cucumber.class)
@CucumberOptions(
    strict = true,
    glue = { "com.consol.citrus.cucumber.step.runner.core" },
    plugin = { "com.consol.citrus.cucumber.CitrusReporter" },
    tags = { "@Important" }
)
public class RunCucumberTests {
}
It is not super convenient to edit the test class metadata every time, but it works.
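As a possible shortcut (an assumption on my part; the exact property name depends on your Cucumber-JVM version), the tags can usually be overridden from the Maven command line instead of editing the annotation:
$ mvn test -Dcucumber.options="--tags @Important"     # Cucumber-JVM 4.x and earlier
$ mvn test -Dcucumber.filter.tags="@Important"        # Cucumber-JVM 5.x and later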
This may be a long shot, but I'm seeing the weirdest thing. I'm using the setValue and addValue functions from WebdriverIO, and whenever my string contains the number 3, it is stripped out and not entered into the input boxes. I am able to type 3 into these inputs manually, so I have no idea what is going on. 3 is the only character I've seen this happen with.
Any ideas?
Update: This is only occurring in Chrome.
Update 2: Sorry for the lack of detail. Here is additional info: I'm using the wdio test runner. This issue does not occur in Safari or Firefox, only in Chrome.
browser.setValue(usernameInput, "t3st") will input "tst" into the usernameInput element, as will browser.addValue(usernameInput, "t3st"). Any string containing a 3 will be entered into the element, but all 3s will be missing from the string.
package.json dependencies:
"dependencies": {
"babel-preset-es2015": "~6.24.0",
"babel-register": "~6.26.0",
"chai": "~4.1.2",
"chromedriver": "^2.33.2",
"wdio-cucumber-framework": "~1.0.2",
"wdio-phantomjs-service": "~0.2.2",
"wdio-selenium-standalone-service": "~0.0.9",
"wdio-spec-reporter": "~0.1.2",
"webdriverio": "4.7.1"
},
"devDependencies": {
"babel-jest": "~21.2.0",
"babel-polyfill": "~6.26.0",
"eslint": "~4.9.0",
"eslint-config-airbnb-base": "~12.1.0",
"eslint-plugin-import": "~2.8.0",
"forever": "~0.15.3",
"http-server": "~0.10.0",
"jest": "~21.2.0"
}
Well, I had a look, but didn't manage to reproduce it. I tried both of the below examples using different variants of chromedriver and wdio-selenium-standalone-service. All worked just fine.
My guess is that:
maybe the input you're trying to fill in has some JavaScript logic behind it (form validation) which might be truncating digits;
or maybe you have some old software (outdated packages) from your package.json dependencies which you previously installed globally (npm install -g <packageName>) and forgot about.
WebdriverIO (v4.8.0):
> browser.setValue('*[connectqa-mya="first-name"]',"t3st t3st t3st 1234test")
{ state: 'pending' }
> [13:27:12] COMMAND POST "/wd/hub/session/29096eb4bd851d6e3a49ad740c3c1ead/elements"
[13:27:12] DATA {"using":"css selector","value":"*[connectqa-mya=\"first-name\"]"}
[13:27:12] RESULT [{"ELEMENT":"0.8157706669622329-6"}]
[13:27:12] COMMAND POST "/wd/hub/session/29096eb4bd851d6e3a49ad740c3c1ead/element/0.8157706669622329-6/clear"
[13:27:12] DATA {}
[13:27:12] COMMAND POST "/wd/hub/session/29096eb4bd851d6e3a49ad740c3c1ead/element/0.8157706669622329-6/value"
[13:27:12] DATA {"value":["t","3","s","t"," ","t","3","s","t"," ","(13 more items)"],"text":"t3st t3st t3st 1234test"}
WebdriverIO (v4.7.1):
> browser.setValue('*[connectqa-mya="first-name"]',"t3st t3st test1234 ##$%^&*")
{ state: 'pending' }
> [13:38:25] COMMAND POST "/wd/hub/session/3b621c3d7a774872cf3a37d1bec17014/elements"
[13:38:25] DATA {"using":"css selector","value":"*[connectqa-mya=\"first-name\"]"}
[13:38:25] RESULT [{"ELEMENT":"0.42949459661053613-6"}]
[13:38:25] COMMAND POST "/wd/hub/session/3b621c3d7a774872cf3a37d1bec17014/element/0.42949459661053613-6/clear"
[13:38:25] DATA {}
[13:38:25] RESULT undefined
[13:38:25] COMMAND POST "/wd/hub/session/3b621c3d7a774872cf3a37d1bec17014/element/0.42949459661053613-6/value"
[13:38:25] DATA {"value":["t","3","s","t"," ","t","3","s","t"," ","(16 more items)"]}
As next steps in the debugging process, I would:
try to replicate the project with the same dependencies in a different folder/repo and see if it works;
for the above approach I would start with the latest versions of the packages you're using (e.g.: WebdriverIO wasn't up to date);
try to use .execute("$('<selector>').val('t3st t3st test12345');") and see if using JavaScript/jQuery would yield different results (if so, it would narrow down the problem: not form validation, but probably chromedriver); see the sketch below.
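For reference, here is that last step in plain-DOM form, in case jQuery isn't available on the page (wdio v4 sync mode; the selector is a placeholder):
// Set the value via the DOM, bypassing simulated keystrokes entirely.
browser.execute(function (value) {
    document.querySelector('input[name="username"]').value = value;
}, 't3st t3st test12345');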
Let me know how it went, or if it helped. Cheers!