Old step definition is retained in WDIO Allure reports - webdriver-io

I ran into a situation with the WebdriverIO Allure reporter. When a feature file and its step definitions are defined for the first time and run, the Allure report is generated as expected.
If an existing step is then modified in both the feature and step definition files, the Allure report shows the old step as well as the newly modified step, which is misleading.
Here is an example of the scenario.
Feature file: test.feature
Feature: Cucumber proof of concept

  Background:
    Given I navigate to Google

  Scenario: First Scenario
    When I search for "Formula 1"

  Scenario: Second Scenario
    When I search for another result "Grand Prix"
Step def file: test.js
let {defineSupportCode} = require('cucumber');

defineSupportCode(function({Given, When, Then}) {
    Given(/^I navigate to Google$/, () => {
        browser.url('http://www.google.com');
    });

    When(/^I search for "([^"]*)"$/, (text) => {
        browser.setValue('#lst-ib', text);
        browser.pause(5000);
    });

    When(/^I search for another result "([^"]*)"$/, (text) => {
        browser.setValue('#lst-ib', text);
        browser.pause(5000);
    });
});
The Allure report is as expected.
Later, if I modify the step of the Second Scenario to be:

Scenario: Second Scenario
  When I search for another new result "Grand Prix"

and regenerate the Allure report, both the old and the modified step are shown.
I know that Allure 2 supports history, but this is quite confusing, and even the step order in the Second Scenario is messed up.
The only way I was able to fix this was by deleting the allure-results folder whenever a step is modified, but I can't do that since I need the trend in Jenkins. Is there a way to work around this issue?
Platform:
Windows 10
package.json -
"webdriverio": "^4.12.0",
"wdio-cucumber-framework": "^1.1.1",
"wdio-allure-reporter": "^0.6.2"

Please try the --clean option when generating the Allure report:
allure generate --clean
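For instance, in a Jenkins build step it might look like this (the directory names are assumptions; adjust them to your project layout):

```shell
# --clean deletes the previously generated report directory before
# writing the new one, so stale steps are not carried over, while the
# allure-results history needed for the Jenkins trend stays in place
allure generate allure-results --clean -o allure-report
```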

Related

How to get html report when the test-cases run in parallel?

I am using karma-html-reporter to generate the report, which works fine.
But when I execute the test cases with karma-parallel, I observe that it generates the report for only one instance, not for the other.
Is there a way to generate a report for both instances?
Currently I am running 2 instances of Chrome.
What do I have to do to get an integrated report of both instances?
I have tried karma-multibrowser-reporter, but it removes the karma-parallel feature.
Report generation happens with the configuration below:
htmlReporter: {
outputDir: 'path/results'
},
karma-parallel has the option aggregatedReporterTest. If you add html to the regex, it uses only one reporter for all browsers:
parallelOptions: {
aggregatedReporterTest: /coverage|istanbul|html|junit/i
},

Using tags (smoke, regression) with TestCafe

Using TestCafe grep patterns would partially solve our problem of using tags, but the tags would still be displayed in the spec report.
Is there a way to include tags in the test/fixture names and use grep patterns, but keep those tags from being displayed in the execution report?
import { Selector } from 'testcafe';

fixture `Getting Started`
    .page `http://devexpress.github.io/testcafe/example`;

test('My first test --tags {smoke, regression}', async t => {
    // Test code
});

test('My Second test --tags {smoke}', async t => {
    // Test code
});

test('My first test --tags {regression}', async t => {
    // Test code
});
testcafe chrome test.js -F "smoke"
The above snippet triggers only the smoke tests for me, but the report displays the test names along with those tags.
Is there an alternative way to deal with tags, or a solution that does not display the tags in the test execution report?
It appears that in a recent release (v0.23.1) of TestCafe you can now filter by metadata via the command line.
You can now run only those tests or fixtures whose metadata contains a specific set of values. Use the --test-meta and --fixture-meta flags to specify these values.
testcafe chrome my-tests --test-meta device=mobile,env=production
or
testcafe chrome my-tests --fixture-meta subsystem=payments,type=regression
Read more at https://devexpress.github.io/testcafe/blog/testcafe-v0-23-1-released.html
I think the best solution in this case is to use test/fixture metadata. Please refer to the following article: http://devexpress.github.io/testcafe/documentation/test-api/test-code-structure.html#specifying-testing-metadata
For now, you can't filter by metadata, but this feature is in the pull request: https://github.com/DevExpress/testcafe/pull/2841. So, after this PR is merged, you will be able to add any metadata to tests and filter by it on the command line.
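To illustrate the semantics of that metadata filtering: a test is selected only when its metadata contains every key=value pair given on the command line. A small plain-Node sketch of the selection rule (test names and metadata here are made up, not TestCafe's actual implementation):

```javascript
// A test is selected only when its metadata object contains every
// requested key/value pair.
function matchesMeta(testMeta, requested) {
  return Object.entries(requested).every(([key, value]) => testMeta[key] === value);
}

const tests = [
  { name: 'checkout works', meta: { type: 'smoke', device: 'mobile' } },
  { name: 'refund flow', meta: { type: 'regression' } }
];

// Roughly what: testcafe chrome my-tests --test-meta type=smoke,device=mobile
// would do when picking tests to run
const selected = tests.filter(t => matchesMeta(t.meta, { type: 'smoke', device: 'mobile' }));
console.log(selected.map(t => t.name)); // → [ 'checkout works' ]
```

Because the filter lives in metadata rather than in the test name, the report shows only the clean test name.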

How to take a screenshot of a particular page using Cucumber and Java 8

I was trying to take a screenshot of a particular output screen for all the tests. The URL of the page differs for each test depending on the environment (QA, DEV) and also on the reference number created.
For example: "https://xyz-QA-abc.com/ABCDEF/123456"
Here QA can change, and 123456 is different for each test. I am doing my work in Cucumber using Java 8. I am not using Selenium WebDriver. I tried the code below in hooks, but it is not working; it shows errors on browser, attach, Buffer, and base64png. Could someone help me with better code?
if (scenario.isFailed()) {
    return browser.takeScreenshot()
        .then((base64png) => {
            scenario.attach(new Buffer(base64png, 'base64'), 'image/png');
        });
}
Try this:
byte[] screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
scenario.embed(screenshot, "image/png");

How to get disabled test cases count in jenkins result?

Suppose I have 10 test cases in a test suite, of which 2 test cases are disabled. I want those two test cases reflected in the test result of the Jenkins job, e.g. pass = 7, fail = 1, and disabled/not run = 2.
By default, TestNG generates a report for your test suite; you can refer to the index.html file under the test-output folder. If you click the "Ignored Methods" link, it shows all the ignored test cases, their class names, and the count of ignored methods.
All test cases annotated with @Test(enabled = false) will show up under the "Ignored Methods" link.
If your test generates JUnit XML reports, you can use the JUnit plugin to parse these reports after the build (as a post-build action). Then, you can go into your build and click 'Test Result'. You should see a breakdown of how the execution went (including passed, failed, and skipped tests).
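For reference, the JUnit XML format that the plugin parses marks skipped tests with a <skipped/> element, and Jenkins derives the skipped count from those. A minimal illustrative report file (class and test names are made up) might look like:

```xml
<testsuite name="MySuite" tests="4" failures="1" skipped="2">
  <testcase classname="com.example.LoginTest" name="validLogin"/>
  <testcase classname="com.example.LoginTest" name="invalidLogin">
    <failure message="expected error banner was not shown"/>
  </testcase>
  <testcase classname="com.example.LoginTest" name="ssoLogin">
    <skipped/>
  </testcase>
  <testcase classname="com.example.LoginTest" name="guestLogin">
    <skipped/>
  </testcase>
</testsuite>
```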

Running a GEB test using Intellij

Being a beginner in Geb testing, I am trying to run a simple program in IntelliJ. Could you please help me run this test in IntelliJ? My question is: what selections should I make on the Edit Configurations page? This example is from The Book of Geb.
import geb.Browser

Browser.drive {
    go "http://google.com/ncr"

    // make sure we actually got to the page
    assert title == "Google"

    // enter wikipedia into the search field
    $("input", name: "q").value("wikipedia")

    // wait for the change to results page to happen
    // (google updates the page dynamically without a new request)
    waitFor { title.endsWith("Google Search") }

    // is the first link to wikipedia?
    def firstLink = $("li.g", 0).find("a.l")
    assert firstLink.text() == "Wikipedia"

    // click the link
    firstLink.click()

    // wait for Google's javascript to redirect to Wikipedia
    waitFor { title == "Wikipedia" }
}
If you are running this in IntelliJ, you should be able to run it as a JUnit test (Ctrl+F10). Make sure the code is inside a class and in a method.
For ease of syntax, it would be good to use Spock as your BDD framework (include the library in your project; if using Maven, follow the guide on the site, but update to Spock 0.7-groovy-2.0 and Geb 0.9.0-RC-1 for the latest libraries).
If you want to switch from straight JUnit to Spock (keep in mind you should use JUnit as a silent library), then your test case looks like this:
def "show off the awesomeness of google"() {
    given:
    go "http://google.com/ncr"

    expect: "make sure we actually got to the page"
    title == "Google"

    when: "enter wikipedia into the search field"
    $("input", name: "q").value("wikipedia")

    then: "wait for the change to the results page to happen (google updates the page dynamically without a new request)"
    waitFor { title.endsWith("Google Search") }

    // is the first link to wikipedia?
    def firstLink = $("li.g", 0).find("a.l")

    and:
    firstLink.text() == "Wikipedia"

    when: "click the link"
    firstLink.click()

    then: "wait for Google's javascript to redirect to Wikipedia"
    waitFor { title == "Wikipedia" }
}
Just remember: Ctrl + F10 (the best key shortcut for a test in IntelliJ!)
The above is close, but no cigar, so to speak.
If you want to run a bog-standard Geb test from WITHIN IntelliJ, I tried 2 things:
1. I added my geckodriver.exe to test/resources under my Spock/Geb tests. On its own, this failed.
2. In the given: part of my Spock/Geb test I set the driver path explicitly, which succeeded:

given:
System.setProperty("webdriver.gecko.driver", "C:\\repo\\geb-example-gradle\\src\\test\\resources" + "\\geckodriver.exe");
Now the usual deal with answers is that someone writes something, you try it, and then it fails.
So, if it did not work for you, use the reference Geb/Spock project on GitHub and import it into IntelliJ (remember: New Project, then find the build.gradle script, and IntelliJ will import it nicely). It also kicks off a build, so don't freak out:
https://github.com/geb/geb-example-gradle
Download the driver:
https://github.com/mozilla/geckodriver/releases
and move it to the test/resources folder of the reference project you just imported, under test/groovy.
Now add the above given: clause to the GebishOrgSpec Spock/Geb test.
The test then runs nicely from WITHIN IntelliJ, with the browser opening and the test executing.
LOVELY JOBBLY :=)
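As an aside, Geb also lets you configure the driver once in a GebConfig.groovy on the test classpath instead of setting the system property in every given: block. A sketch (Firefox/geckodriver assumed; this is not from the original answer):

```groovy
// GebConfig.groovy – placed under src/test/resources so it is on the
// test classpath; Geb picks it up automatically
import org.openqa.selenium.firefox.FirefoxDriver

// point Selenium at the bundled geckodriver before any driver is created
System.setProperty("webdriver.gecko.driver",
        new File("src/test/resources/geckodriver.exe").absolutePath)

// Geb calls this closure to create the driver for each run
driver = { new FirefoxDriver() }
```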