I'm trying to generate screenshots in my Allure reports after each step in my test, or at least a single screenshot at the end of the test. I have referred to the WebdriverIO docs, and it seems I should use the afterStep hook together with the takeScreenshot method. I have tried that in my config file, but no screenshot is taken.
Here is my afterStep function:
afterStep: function (test, scenario, { error, duration, passed }) {
    if (!error) {
        browser.takeScreenshot()
    }
}
The closest I have come to my desired result is by using this
afterTest: function (test, scenario, { error, duration, passed }) {
    if (!error) {
        browser.saveScreenshot('test.png')
    }
}
This takes a screenshot at the end of the test and stores it in my root directory; the image, however, is not displayed in the Allure report.
How do I attach screenshots to be shown on the Allure Report?
After scrutinizing my code, I realized that I was doing one thing wrong here.
I had another afterTest hook in my config file which was being called instead of the hook meant to capture the screenshot. To solve this, I added a browser.takeScreenshot() call to that original afterTest hook, and this solved my problem. The screenshot is attached at the end of the Allure report.
browser.saveScreenshot saves the screenshot to a local folder, while browser.takeScreenshot is picked up by the Allure reporter and attached to the report.
My full afterTest hook
afterTest: async function (test, context, { error, result, duration, passed, retries }) {
    if (passed) {
        await browser.takeScreenshot();
        await browser.executeScript('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status": "passed", "reason": "Assertions passed"}}', []);
    } else {
        await browser.executeScript('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status": "failed", "reason": "At least 1 assertion failed"}}', []);
    }
},
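Note that takeScreenshot only ends up in the report if the Allure reporter itself is active in the config. A minimal sketch of the relevant reporters entry, assuming @wdio/allure-reporter is installed (option names as in its documentation):

// wdio.conf.ts (sketch) – enable the Allure reporter so WebDriver screenshots are attached
reporters: [
    ['allure', {
        outputDir: 'allure-results',
        // keep this false (the default) so takeScreenshot results are attached to the report
        disableWebdriverScreenshotsReporting: false,
    }],
],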
In my case I had only one afterTest hook in my wdio.conf.ts file.
I'm using TypeScript and had to add async and await; after this it was solved and the screenshot is attached to the Allure report:
afterTest: async function (test, context, { error, result, duration, passed, retries }) {
    if (error) {
        await browser.takeScreenshot()
    }
},
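If the reporter does not pick up the screenshot automatically, another option is to attach it explicitly through the Allure reporter API. A rough sketch, assuming @wdio/allure-reporter is installed:

// at the top of wdio.conf.ts
import allureReporter from '@wdio/allure-reporter';

// inside the config object
afterTest: async function (test, context, { error, result, duration, passed, retries }) {
    if (error) {
        const screenshot = await browser.takeScreenshot(); // base64-encoded PNG
        allureReporter.addAttachment('Screenshot on failure', Buffer.from(screenshot, 'base64'), 'image/png');
    }
},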
I'm having trouble figuring out how to drive tests with data fetched from a request. I've read the documentation here: https://testcafe.io/documentation/402804/recipes/best-practices/create-data-driven-tests, and all examples use static JSON file data available at compile time.
I can't fetch the data in the fixture.before hook, because it would only be available inside the test context, but I need the data outside of the test context for iteration, so that the test sits inside a for loop.
I've tried this solution: https://github.com/DevExpress/testcafe/issues/1948, but it fails with "ERROR No tests found in the specified source files. Ensure the sources contain the 'fixture' and 'test' directives.", even when I use the disable-test-syntax-validation flag and the .run({ disableTestSyntaxValidation: true }) option.
I am looking for suggestions and workarounds so that I can await some data, then run my tests. Even if TestCafe doesn't explicitly support something like this, I figure there must be some workaround... Thanks in advance.
Edit:
file-a.ts
export function tSteps(...args) {
    // some setup
    const testcase = args[args.length - 1];
    const testCtx = test(name, async t => {
        ...
    });
    return testCtx;
}
----
file-b.ts
export const parameterizedTest = <T>(..., testcase: (scenario: T) => TestFn) => {
    // some setup...
    // I have also tried awaiting the rows data here, which does not work
    // because tests are not discoverable at compile time
    ...
    const scenarios: T[] = rows.map(row => {
        ...
    });
    scenarios.forEach((scenario, idx) => {
        return testcase(scenario).meta({
            some metadata
        });
    });
};
----
tests.ts
fixture(...).before(async () => {
    // can't get the data I need here because it needs to be available outside of the fixture context
})

parameterizedTest<MyInterface>(some params, (scenario: MyInterface) => {
    return tSteps('my test',
        async f => {
            // some setup
            // test code goes here which uses scenario.attributex, scenario.attributey, etc.
        }
    ).meta(...);
});
In v1.0.0 and later, TestCafe does not validate test syntax. Please specify the TestCafe version that you use when you see the validation error.
Unfortunately, we cannot use pseudo-code to reproduce the issue you encountered. Please share some code that we could run to see the problematic behavior.
Generally speaking, TestCafe allows you to fetch data asynchronously and then spawn tests based on the received values. For instance, the following code works fine for me with TestCafe 1.18.3:
import { fixture, test } from 'testcafe';
import fetch from './node-fetch-mock';

(async () => {
    const testData = await fetch();

    testData
        .map(n => () => {
            fixture `Fixture ${n}`
                .page `https://google.com`;

            test(`Test ${n}`, async t => {
                await t.expect(true).ok();
            });
        })
        .map(async test => { await test(); });
})();
node-fetch-mock.js
export default async function fetch() {
    return [1, 2, 3, 4, 5];
}
The only caveat is that I have to import fixture and test explicitly because I call them from callbacks.
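Applied to the structure from the question, the same idea might look roughly like this; fetchRows, the data-source module, and the page URL are placeholders rather than part of the original code:

import { fixture, test } from 'testcafe';
import { fetchRows } from './data-source'; // hypothetical async data source

(async () => {
    // resolve the data first, then declare the fixture and the tests
    const scenarios = await fetchRows();

    fixture `Parameterized scenarios`
        .page `https://example.com`;

    scenarios.forEach((scenario, idx) => {
        test(`scenario ${idx + 1}`, async t => {
            // test code that uses scenario.attributex, scenario.attributey, etc.
            await t.expect(Boolean(scenario)).ok();
        });
    });
})();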
Could you please provide us with any test code snippet that demonstrates the problem? We need to correctly understand the cause of the problem and reproduce it on our side.
I want to create a custom command in a TypeScript WebdriverIO project, but no matter what I do, the command always ends up with the error:
TypeError: browser.waitAndClick is not a function.
Basically I wanted to add the same function mentioned in the WebdriverIO docs. I am adding it from beforeAll() in my specs.
import { DEFAULT_TIMEOUT } from "../constants";

class CustomCommand {
    private static alreadyAdded = false;

    static addCommands() {
        if (!this.alreadyAdded) {
            browser.addCommand('waitAndClick', (el: WebdriverIO.Element) => {
                el.waitForDisplayed({ timeout: DEFAULT_TIMEOUT });
                el.click();
            }, true);
            browser.addCommand('waitAndSetValue', (el: WebdriverIO.Element, text: string) => {
                el.waitForDisplayed({ timeout: DEFAULT_TIMEOUT });
                el.setValue(text);
            }, true);
            this.alreadyAdded = true;
        }
    }
}

export default CustomCommand;
And I am calling this addCommands() function from beforeAll() of a spec. But no luck!
One nice person from the Slack channel helped me find the exact reason. I had overlooked something in the docs: "If you register a custom command to the browser scope, the command won't be accessible for elements. Likewise, if you register a command to the element scope, it won't be accessible in the browser scope." That turned out to be the reason, and it is resolved now.
Passing false as the third parameter to addCommand() fixed it.
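For reference, a minimal sketch of the corrected registration, assuming the command is meant to be called on the browser scope as browser.waitAndClick(element), with the same element argument and timeout constant as in the question:

// register on the browser scope (third argument omitted or false)
browser.addCommand('waitAndClick', async (el: WebdriverIO.Element) => {
    await el.waitForDisplayed({ timeout: DEFAULT_TIMEOUT });
    await el.click();
});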
Welcome to stack-overflow!
Please note that there is no 'beforeAll' hook in WebdriverIO, as per the docs.
It should work if you call this in the before hook.
Based on the WebdriverIO docs: https://webdriver.io/docs/api/browser/addCommand/
Note: don't forget to wrap it inside the before hook, as in the example below:
before: async function (capabilities, specs) {
    browser.addCommand('waitAndClick', async function (selector) {
        try {
            await $(selector).waitForExist();
            await $(selector).click();
        } catch (error) {
            throw new Error(`Could not click on selector: ${selector}`);
        }
    });
},
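In a TypeScript project the new command also needs a type declaration before browser.waitAndClick compiles; the exact augmentation form depends on the WebdriverIO version, so treat this as a sketch. The file names and the spec below are illustrative, not from the question:

// wdio.d.ts (illustrative) – let TypeScript know about the custom command
declare namespace WebdriverIO {
    interface Browser {
        waitAndClick: (selector: string) => Promise<void>;
    }
}

// some.spec.ts (illustrative usage)
describe('login page', () => {
    it('clicks the submit button once it exists', async () => {
        await browser.url('/login');
        await browser.waitAndClick('#submit');
    });
});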
I'm trying to connect to SoundCloud using CasperJS. What is interesting is that once you have signed in and rerun the login feature later, the previous login is still active. Before going any further, here is the code:
casper.thenOpen('https://soundcloud.com/', function() {
    casper.click('.header__login');
    var popup = /soundcloud\.com\/connect/;
    casper.waitForPopup(popup, function() {
        casper.withPopup(popup, function() {
            var selectors = {
                '#username': username,
                '#password': password
            };
            casper.fillSelectors('form.log-in', selectors, false);
            casper.click('#authorize');
        });
    });
});
If you run this code at least twice, you should see the following error appear:
CasperError: Cannot dispatch mousedown event on nonexistent selector: .header__login
If you analyse the logs, you will see that the second time you were redirected to https://soundcloud.com/stream, meaning you were already logged in.
I did some research on clearing the environment between tests, but it seems that the following lines don't solve the problem.
phantom.clearCookies()
casper.clear()
localStorage.clear()
sessionStorage.clear()
Technically, I'm really interested in understanding what is happening here. Maybe SoundCloud also stores some state server-side; in that case I would have to log out before logging in. But my question is: how can I completely isolate and clear everything between tests? Does someone know how to make the environment signed out between each test?
To clear the server-side session, calling phantom.clearCookies() did the trick for me. This cleared my session between test files.
Example here:
casper.test.begin("Test", {
test: function(test) {
casper.start(
"http://example.com",
function() {
... //Some testing here
}
);
casper.run(function() {
test.done();
});
},
tearDown: function(test) {
phantom.clearCookies();
}
});
If you're still having issues, check the way you are executing your tests.
Where did you call casper.clear()?
I think you have to call it immediately after you have opened a page, like:
casper.start('http://www.google.fr/', function() {
    this.clear(); // javascript execution in this page has been stopped
    // rest of code
});
From the doc: Clears the current page execution environment context. Useful to avoid having previously loaded DOM contents being still active.
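If cookies alone are not enough, the storage calls from the question also have to run inside the page context; a rough sketch of a per-test cleanup combining the three steps, assuming it runs after a page has been opened:

casper.then(function() {
    // stop scripts loaded by the previous page
    this.clear();
    // clear web storage inside the page context
    this.evaluate(function() {
        localStorage.clear();
        sessionStorage.clear();
    });
    // drop the PhantomJS cookie jar (ends the server-side session)
    phantom.clearCookies();
});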
I'm using Karma with QUnit (after following this tutorial) to test my Ember application. It's mostly going well; however, I've run into a problem that doesn't make sense.
Given the 2 following tests:
test('can get to products', function() {
    visit('/products/')
        .then(function() {
            ok(find('*'));
        });
});

test('can get to catalogues', function() {
    visit('/products/catalogues')
        .then(function() {
            ok(find('*'));
        });
});
The first will run fine. The test runner gets to /products and finds something.
However, the second test returns an error in the console:
Error: Assertion Failed: You have turned on testing mode, which disabled the run-loop's autorun. You will need to wrap any code with asynchronous side-effects in an Ember.run
I turned on transition logs, and the test runner is visiting products.catalogues.index before throwing the error.
Any ideas? Or is it simply a bug in Ember's testing tools?
Both are valid routes defined inside the router...
The last part of the error holds the key to fixing this problem. You have to make sure that any code that makes async calls is wrapped in Ember.run. This includes things as simple as the create and set methods.
If you have something like
App.ProductsRoute = Ember.Route.extend({
    model: function() {
        return [
            Ember.Object.create({title: "product1"}),
            Ember.Object.create({title: "product2"})
        ];
    }
});
refactor it to
App.ProductsRoute = Ember.Route.extend({
    model: function() {
        return [
            Ember.run(Ember.Object, "create", {title: "product1"}),
            Ember.run(Ember.Object, "create", {title: "product2"})
        ];
    }
});
or
App.ProductsRoute = Ember.Route.extend({
    model: function() {
        return Ember.run(function() {
            return [
                Ember.Object.create({title: "product1"}),
                Ember.Object.create({title: "product2"})
            ];
        });
    }
});
If you posted your /products code it would be easier to give a more specific answer.
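If either route actually loads its model asynchronously (the question doesn't show that code, so this is only an assumption), the same rule applies when resolving the promise; a sketch using jQuery AJAX with the resolution wrapped in Ember.run, where the route name and URL are illustrative:

App.CataloguesRoute = Ember.Route.extend({
    model: function() {
        return new Ember.RSVP.Promise(function(resolve, reject) {
            Ember.$.getJSON('/api/catalogues').then(function(data) {
                // resolve inside the run loop so testing mode doesn't complain
                Ember.run(null, resolve, data);
            }, function(xhr) {
                Ember.run(null, reject, xhr);
            });
        });
    }
});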
I'm trying to create e2e tests with Karma and Jasmine using Yeoman. In my karma-e2e.conf.js I add Jasmine:
files = [
    JASMINE,
    JASMINE_ADAPTER,
    ANGULAR_SCENARIO,
    ANGULAR_SCENARIO_ADAPTER,
    'test/e2e/**/*.js'
];
I need async testing, so I need to use runs, waits, waitsFor (https://github.com/pivotal/jasmine/wiki/Asynchronous-specs).
But if I try to use it:
it('test', function () {
    runs(function () {
        ...
    });
});
The scenario test runner returns this:
TypeError: Cannot call method 'runs' of null
at runs (http://localhost:8080/adapter/lib/jasmine.js:562:32)
at Object.<anonymous> (http://localhost:8080/base/test/e2e/eduUser.js:42:3)
at Object.angular.scenario.SpecRunner.run (http://localhost:8080/adapter/lib/angular-scenario.js:27057:15)
at Object.run (http://localhost:8080/adapter/lib/angular-scenario.js:10169:18)
I don't know where the problem is. Can you help me please?
Angular e2e tests with Karma don't and can't use the JASMINE adapter. Instead you have the ANGULAR_SCENARIO_ADAPTER, which has a similar feel to writing Jasmine tests.
All commands in the adapter's API are asynchronous anyway. For example, element('#nav-items').count() doesn't return a number; it returns a Future object. Future objects are placed in a queue and executed asynchronously as the runner progresses. To quote the API docs:
expect(future).{matcher}:
[...] All API statements return a future object, which get a value assigned after they are executed.
If you need to run your own asynchronous test code, you can extend the adapter's DSL; this is easier than it might sound. The idea is that you return your own Future which can be evaluated by a matcher such as toBe(). There are some examples on how to do this in the e2e-tests.js Gist from Vojta. Just remember to call done(null, myReturnValue); when your test code is successful (myReturnValue is the value evaluated by your matcher), or done('Your own error message'); if you want the test to fail.
UPDATE: In response to the question below. To simulate a login, first add a function called login to the DSL:
angular.scenario.dsl('login', function() {
    return function(selector) {
        // @param {DOMWindow} appWindow The window object of the iframe (the application)
        // @param {jQuery} $document jQuery-wrapped document of the application
        // @param {function(error, value)} done Callback that should be called when done
        //   (will basically call the next item in the queue)
        return this.addFutureAction('Logging in', function(appWindow, $document, done) {
            // You can do normal jQuery/jqLite stuff here on $document; just call done() when your asynchronous tasks have completed
            // Create some kind of listener to handle when your login is complete
            $document.one('loginComplete', function(e) {
                done(null, true);
            }).one('loginError', function(e) {
                done('Login error', false);
            });
            // Simulate the button click
            var loginButton = $document.find(selector || 'button.login');
            loginButton.click();
        });
    };
});
And then call:
beforeEach(function() {
    expect(login('button.login')).toBeTruthy();
});