Is it possible to do data-driven tests in TestCafe Studio?

Is it possible to feed data into a TestCafe Studio test so the same test can be run with different sets of data, without having to record a new test each time?
Thanks.

Yes, it is possible.
[1, 2, 3].forEach(data => {
    test.only('Data Driven Example', async t => {
        console.log(data);
    });
});
This becomes really useful when you keep the data in a separate file, like so:
import { Selector } from 'testcafe';

// LogIn, PageMsg and ProfileForm are page-object modules from the test project
const testData = require(`../Resources/${process.env.TESTCAFE_ENV}/logIn.json`);

testData.credentials.forEach(credentials => {
    test('Log Into User Account', async t => {
        await LogIn.logIn(credentials.username, credentials.password);
        await t
            .expect(PageMsg.pageMsg.innerText).eql(PageMsg.successfulLogIn)
            .expect(Selector(ProfileForm.inputObj.name.input).value).eql(credentials.name);
    });
});
All this is mentioned in the official documentation.
EDIT: This is a valid approach, but the question is focused on TestCafe Studio and how to do this there. Having said that, my answer doesn't really cover that.

Related

Testcafe data driven testing - how to drive tests with data fetched from API

I'm having trouble figuring out how to drive tests with data fetched from a request. I've read the documentation here: https://testcafe.io/documentation/402804/recipes/best-practices/create-data-driven-tests, and all the examples use static JSON file data available at compile time.
I can't fetch the data in the fixture.before hook, because it would only be available inside the test context, but I need the data outside the test context for iteration, so that the test call sits inside a for loop.
I've tried this solution: https://github.com/DevExpress/testcafe/issues/1948, but it fails with "testcafe ERROR No tests found in the specified source files. Ensure the sources contain the 'fixture' and 'test' directives.", even when I use the disable-test-syntax-validation flag and the .run({ disableTestSyntaxValidation: true }) option.
I am looking for suggestions and workarounds so that I can await some data, then run my tests. Even if Testcafe doesn't explicitly support something like this, I figure there must be some workaround... Thanks in advance.
Edit:
file-a.ts
export function tSteps(...args) {
    // some setup
    const testcase = args[args.length - 1];
    const testCtx = test(name, async t => {
        ...
    });
    return testCtx;
}
----
file-b.ts
export const parameterizedTest = <T>(..., testcase: (scenario: T) => TestFn) => {
    // some setup...
    // I have also tried awaiting rows data here, which does not work
    // because tests are not discoverable at compile time
    ...
    const scenarios: T[] = rows.map(row => {
        ...
    });
    scenarios.forEach((scenario, idx) => {
        return testcase(scenario).meta({
            some metadata
        });
    });
};
----
tests.ts
fixture(...).before(async () => {
    // can't get the data I need here because it needs to be available outside of the fixture context
})

parameterizedTest<MyInterface>(some params, (scenario: MyInterface) => {
    return tSteps('my test',
        async f => {
            // some setup
            // test code goes here which uses scenario.attributex, scenario.attributey, etc.
        }
    ).meta(...);
});
In v1.0.0 and later, TestCafe does not validate test syntax. Please specify the TestCafe version that you use when you see the validation error.
Unfortunately, we cannot use pseudo-code to reproduce the issue you encountered. Please share some code that we could run to see the problematic behavior.
Generally speaking, TestCafe allows you to fetch data asynchronously and then spawn tests based on the received values. For instance, the following code works fine for me with TestCafe 1.18.3:
import { fixture, test } from 'testcafe';
import fetch from './node-fetch-mock';

(async () => {
    const testData = await fetch();

    testData
        .map(n => () => {
            fixture `Fixture ${n}`
                .page `https://google.com`;

            test(`Test ${n}`, async t => {
                await t.expect(true).ok();
            });
        })
        .map(async test => { await test(); });
})();
node-fetch-mock.js
export default async function fetch() {
    return [1, 2, 3, 4, 5];
}
The only caveat is that I have to import fixture and test explicitly because I call them from callbacks.
Could you please provide us with any test code snippet that demonstrates the problem? We need to correctly understand the cause of the problem and reproduce it on our side.

cypress cy.request 401 unauthorized [duplicate]

I want to save/persist/preserve a cookie or localStorage token that is set by a cy.request(), so that I don't have to use a custom command to log in on every test. This should work for tokens like JWTs (JSON Web Tokens) that are stored in the client's localStorage.
To update this thread: there is already a better solution available for preserving cookies (by @bkucera), but there is now also a workaround to save and restore localStorage between tests (in case it's needed). I recently faced this issue and found that this solution works.
The solution uses helper commands and consumes them inside the tests.
Inside - cypress/support/<some_command>.js
let LOCAL_STORAGE_MEMORY = {};

Cypress.Commands.add("saveLocalStorage", () => {
    Object.keys(localStorage).forEach(key => {
        LOCAL_STORAGE_MEMORY[key] = localStorage[key];
    });
});

Cypress.Commands.add("restoreLocalStorage", () => {
    Object.keys(LOCAL_STORAGE_MEMORY).forEach(key => {
        localStorage.setItem(key, LOCAL_STORAGE_MEMORY[key]);
    });
});
Then, in the tests:
beforeEach(() => {
    cy.restoreLocalStorage();
});

afterEach(() => {
    cy.saveLocalStorage();
});
Reference: https://github.com/cypress-io/cypress/issues/461#issuecomment-392070888
From the Cypress docs
For persisting cookies: By default, Cypress automatically clears all cookies before each test to prevent state from building up.
You can configure specific cookies to be preserved across tests using the Cypress.Cookies API:
// now any cookie with the name 'session_id' will
// not be cleared before each test runs
Cypress.Cookies.defaults({
    preserve: "session_id"
})
NOTE: Before Cypress v5.0 the configuration key is "whitelist", not "preserve".
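On those older Cypress versions, the same call would use the old key from the note above (a minimal sketch; only the key name differs):
// Cypress < 5.0: the configuration key is "whitelist" instead of "preserve"
Cypress.Cookies.defaults({
    whitelist: "session_id"
})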
For persisting localStorage: It's not built in at the moment, but you can achieve it manually right now because the method that clears localStorage is publicly exposed as Cypress.LocalStorage.clear.
You can back up this method and override it based on the keys passed in.
const clear = Cypress.LocalStorage.clear

Cypress.LocalStorage.clear = function (keys, ls, rs) {
    // do something with the keys here
    if (keys) {
        return clear.apply(this, arguments)
    }
}
You can add your own login command to Cypress, and use the cypress-localstorage-commands package to persist localStorage between tests.
In support/commands:
import "cypress-localstorage-commands";
Cypress.Commands.add('loginAs', (UserEmail, UserPwd) => {
cy.request({
method: 'POST',
url: "/loginWithToken",
body: {
user: {
email: UserEmail,
password: UserPwd,
}
}
})
.its('body')
.then((body) => {
cy.setLocalStorage("accessToken", body.accessToken);
cy.setLocalStorage("refreshToken", body.refreshToken);
});
});
Inside your tests:
describe("when user FOO is logged in", ()=> {
before(() => {
cy.loginAs("foo#foo.com", "fooPassword");
cy.saveLocalStorage();
});
beforeEach(() => {
cy.visit("/your-private-page");
cy.restoreLocalStorage();
});
it('should exist accessToken in localStorage', () => {
cy.getLocalStorage("accessToken").should("exist");
});
it('should exist refreshToken in localStorage', () => {
cy.getLocalStorage("refreshToken").should("exist");
});
});
Here is the solution that worked for me:
// Override Cypress's localStorage clearing with a no-op so values survive between tests
Cypress.LocalStorage.clear = function (keys, ls, rs) {
    return;
};

before(() => {
    // LocalStorage.clear() and Login() are this answer's own helpers
    LocalStorage.clear();
    Login();
});
Control of cookie clearing is supported by Cypress: https://docs.cypress.io/api/cypress-api/cookies.html
I'm not sure about localStorage, but for cookies, I ended up doing the following to preserve all cookies between tests.
beforeEach(function () {
    cy.getCookies().then(cookies => {
        const namesOfCookies = cookies.map(c => c.name)
        Cypress.Cookies.preserveOnce(...namesOfCookies)
    })
})
According to the documentation, Cypress.Cookies.defaults will maintain the changes for every test run after that. In my opinion, this is not ideal as this increases test suite coupling.
I added a more robust response in this Cypress issue: https://github.com/cypress-io/cypress/issues/959#issuecomment-828077512
I know this is an old question but wanted to share my solution either way in case someone needs it.
For keeping a Google token cookie, there is a library called
cypress-social-logins. It seems to have other OAuth providers as a milestone.
It's recommended by the Cypress team and can be found on the Cypress plugins page.
https://github.com/lirantal/cypress-social-logins
This Cypress library makes it possible to perform third-party logins
(think oauth) for services such as GitHub, Google or Facebook.
It does so by delegating the login process to a puppeteer flow that
performs the login and returns the cookies for the application under
test so they can be set by the calling Cypress flow for the duration
of the test.
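As a rough sketch of how such a plugin is typically wired up (the task name, export path and options object are assumptions on my part, so verify them against the repository's README before relying on them):
// cypress/plugins/index.js (assumed wiring; check the cypress-social-logins README)
const { GoogleSocialLogin } = require('cypress-social-logins').plugins;

module.exports = (on, config) => {
    on('task', {
        GoogleSocialLogin: GoogleSocialLogin
    });
};

// In a test: run the puppeteer-driven login as a task, then apply the returned cookies.
// socialLoginOptions holds the credentials and selectors described in the README.
cy.task('GoogleSocialLogin', socialLoginOptions).then(({ cookies }) => {
    cookies.forEach(cookie => {
        cy.setCookie(cookie.name, cookie.value);
    });
});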
I can see suggestions to use whitelist, but it does not seem to work during cypress run.
I tried the methods below in before() and beforeEach() respectively:
Cypress.Cookies.defaults({
    whitelist: "token"
})
and
Cypress.Cookies.preserveOnce('token');
Neither seemed to work, although both methods work fine with cypress open, i.e. GUI mode. Any idea where I'm coming up short?
2023 update, for Cypress v12 or later:
Since Cypress version 12 you can use the new cy.session().
It caches and restores cookies, localStorage, and sessionStorage (i.e. session data) in order to recreate a consistent browser context between tests.
Here's how to use it:
// Caching session when logging in via page visit
cy.session(name, () => {
    cy.visit('/login')
    cy.get('[data-test=name]').type(name)
    cy.get('[data-test=password]').type('s3cr3t')
    cy.get('form').contains('Log In').click()
    cy.url().should('contain', '/login-successful')
})
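To reuse this across specs, a common pattern is to wrap cy.session() in a custom command (a minimal sketch; the command name and the selectors are just carried over from the snippet above):
// cypress/support/commands.js
Cypress.Commands.add('login', (name) => {
    cy.session(name, () => {
        cy.visit('/login')
        cy.get('[data-test=name]').type(name)
        cy.get('[data-test=password]').type('s3cr3t')
        cy.get('form').contains('Log In').click()
        cy.url().should('contain', '/login-successful')
    })
})

// Then in each spec the session is created once and restored afterwards
beforeEach(() => {
    cy.login('someUser')
})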

How can I log into my web app, then read through the records of my data.json file using TestCafe

I've googled and I can find how to loop through my data file. Apparently you run a test for each record of data.
I would like to have my single test log in, then cycle through each 'record' or item of the data file. The data is a series of searches in our app. So the test would log in, assert that it is logged in, and then run those searches...
test('searches', async t => {
    await t
        // Log in...
        .typeText('input[id="login-name"]', 'aguy')
        .typeText('input[id="login-password"]', 'bbb')
        .click('button[id="signin-button"]')
        .expect(Selector('span[id="logged-in-user"]').innerText).contains('Hal')
        // At this point the app is ready to run through the searches doing this...
        // forEach item in my data...
        .typeText('input[id="simplecriteria"]', data.criteria)
        .click('button[class="search-button"]')
        .expect(Selector('div[class="mat-paginator-range-label"]').innerText).contains(data.srchResult)
});
TestCafe has test hooks, and I recommend using them, even though they are not that useful in your case because TestCafe deletes cookies between tests. So if you log in once and then write your test like so:
const testData = require('../Resources/testData.json');

let executed = false;

fixture `Searches`
    .page(baseUrl)
    .beforeEach(async t => {
        if (!executed) {
            // run this only once before all tests
            executed = true;

            // log in
            await t
                .typeText('input[id="login-name"]', 'aguy')
                .typeText('input[id="login-password"]', 'bbb')
                .click('button[id="signin-button"]')
                .expect(Selector('span[id="logged-in-user"]').innerText).contains('Hal');
        }
    });

testData.forEach((data) => {
    test('Searches', async t => {
        await t
            .typeText('input[id="simplecriteria"]', data.criteria)
            .click('button[class="search-button"]')
            .expect(Selector('div[class="mat-paginator-range-label"]').innerText).contains(data.srchResult);
    });
});
then you'll most likely be logged out after the first test.
However, I'd still use the beforeEach hook, but put the loop inside the test:
const testData = require('../Resources/testData.json');

fixture `Searches`
    .page(baseUrl)
    .beforeEach(async t => {
        await t
            // Log in...
            .typeText('input[id="login-name"]', 'aguy')
            .typeText('input[id="login-password"]', 'bbb')
            .click('button[id="signin-button"]')
            .expect(Selector('span[id="logged-in-user"]').innerText).contains('Hal');
    });

test('Searches', async t => {
    // use a for...of loop (not forEach) so each await is actually awaited inside the async test
    for (const data of testData) {
        await t
            .typeText('input[id="simplecriteria"]', data.criteria)
            .click('button[class="search-button"]')
            .expect(Selector('div[class="mat-paginator-range-label"]').innerText).contains(data.srchResult);
    }
});
There's an obvious disadvantage:
many different searches are run as one test, so if one search fails, the whole "Searches" test case will be marked as failed
Another solution might be to find out what it means to be logged in. If it's about adding some cookie, you might log in once and then only set up the cookie before your tests, as in the sketch below. However, in many modern systems such "log-in cookies" have the httpOnly flag, so you can't really set them from JavaScript.
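A minimal sketch of that idea, assuming the app accepts a plain (non-httpOnly) cookie named authToken; the cookie name and token value are hypothetical:
import { ClientFunction } from 'testcafe';

// Sets a document cookie in the browser; this only works for cookies without the httpOnly flag
const setAuthCookie = ClientFunction(value => {
    document.cookie = `authToken=${value}; path=/`;
});

fixture `Searches (pre-authenticated)`
    .page(baseUrl)
    .beforeEach(async t => {
        // the token would come from a single real login performed elsewhere
        await setAuthCookie('token-captured-after-one-real-login');
        await t.eval(() => location.reload());
    });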

Jest + puppeteer best architecture practices

I just entered the world of testing with puppeteer and jest, and I was wondering what the best practice was in terms of folder architecture and logic.
I've never done testing before and I think I'm getting a little lost in the different principles and concepts and how it all fits together.
I learned to write my tests based on the page-object model, so I have classes for each of my pages, but also for each of my modules (or components). For example, in my application, the header or the login modal are components.
Then I have a test file per page or per component.
(for example the landingPage.tests.js file, which uses the model of the LandingPage class in the LandingPage.js file)
Here is a concrete example:
I have different login cases and I'd like to test them all. For example, I want to test connecting with a "normal" user, for which the process is simply login then password. Then I need to test with a user who has activated 2FA, or with a user from a company that uses SSO.
I first thought about putting my different tests in authentication.tests.js, in different describe blocks, thinking it would open a new tab each time, but it doesn't... I use puppeteer in incognito mode to make sure each tab is an isolated session.
So my questions are:
Where is the best place to do these test suites?
Am I supposed to have test files that "describe" the pages (for example, the button must be present, such text must be here, etc.) and also have "scenario-type" test files (a set of actions contextual to a user, like my different login cases)?
Here is authentication.tests.js, in which I would like to tests all my different ways of logging in :
import HeaderComponent from "../../../pages/components/HeaderComponent";
import AuthenticationComponent from "../../../pages/components/AuthenticationComponent";
import LandingPage from "../../../pages/landing/LandingPage";
import {
    JEST_TIMEOUT,
    CREDENTIALS
} from "../../../config";

describe('Component:Authentication', () => {
    let headerComponent;
    let authenticationComponent;
    let landingPage;

    beforeAll(async () => {
        jest.setTimeout(JEST_TIMEOUT);
        headerComponent = new HeaderComponent();
        authenticationComponent = new AuthenticationComponent();
        landingPage = new LandingPage();
    });

    describe('Normal login', () => {
        it('should click on login and open modal', async () => {
            await landingPage.visit();
            await headerComponent.isVisible();
            await headerComponent.clickOnLogin();
            await authenticationComponent.isVisible();
        });

        it('should type a normal user email address and validate', async () => {
            await authenticationComponent.typeUsername(CREDENTIALS.normal.username);
            await authenticationComponent.clickNext();
        });

        it('should type the correct password and validate', async () => {
            await authenticationComponent.typePassword(CREDENTIALS.normal.password);
            await authenticationComponent.clickNext();
        });

        it('should be logged in', async () => {
            await waitForText(page, 'body', 'Success !');
        });
    });

    describe('SSO login', () => {
        // todo ...
    });
});
Thank you and sorry if it sounds confusing, like I said I'm trying to figure out how it all fits together.
Regarding the folder structure, Jest will find files according to the testMatch config, basically anything called *.spec.js or *.test.js. It looks like you know that already.
What that means is the folder structure is completely up to you. Some people like to have the tests for components in the same folders as the components themselves. Personally I prefer to have all the tests in one folder as it makes the project look cleaner.
The other benefit of having all the tests in one folder is that you can then start to distinguish between the types of tests. Component tests check that pure components render and operate as expected. You don't need Puppeteer for this, use snapshots if you're in a React app. Puppeteer is good for integration tests that navigate through so-called 'happy paths', login, signup, add to cart etc., using a headless Chromium browser.
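To illustrate that split, a pure-component check can be a plain Jest snapshot test with no browser involved (a minimal sketch assuming a React app with react-test-renderer installed; the Header component and its prop are hypothetical):
import renderer from 'react-test-renderer';
import Header from '../src/components/Header';

describe('Header (component test, no Puppeteer)', () => {
    it('renders as expected', () => {
        // Render to a plain JS tree and compare it against the stored snapshot
        const tree = renderer.create(<Header loggedIn={false} />).toJSON();
        expect(tree).toMatchSnapshot();
    });
});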
To answer the specific problem you have been having with Jest / Puppeteer on a new page for each test:
const puppeteer = require('puppeteer')

// keep a reference to the browser
let browser
// keep a reference to the page
let page

// open puppeteer before all tests
beforeAll(async () => {
    browser = await puppeteer.launch()
})

// close puppeteer after all tests
afterAll(async () => {
    await browser.close()
})

// Get a new page for each test so that we start fresh.
beforeEach(async () => {
    page = await browser.newPage()
})

// Remember to close pages after each test.
afterEach(async () => {
    await page.close()
})

describe('Counter', () => {
    // "it" blocks go here.
})
Hope that helps a bit.

TestCafe 'dynamic' tests cases

I created a few e2e sanity tests for my current project using TestCafe. These tests are standard TestCafe tests:
fixture(`Basic checkout flow`)
test('Main Flow', async (t) => {
});
I would like to execute this test for multiple site locales and multiple channels, i.e. I need this test to run for nl_nl, nl_be, en_gb, ... and also for channels like b2c, b2b, ...
The easiest way is to create a loop in the test itself to loop over the locales and channels, but I want to run these tests concurrently.
I tried to create a function to dynamically generate these tests, but TestCafe can't seem to detect the tests then.
dynamicTest('Main Flow', async (t) => {
});

function dynamicTest(testName, testFn) {
    const channels = ['b2c'];
    channels.forEach((channel) => {
        test(`[Channel: ${channel}] ${testName}`, testFn);
    });
}
Is there a better way of doing this? The only solution I see is running the test script multiple times from Jenkins to have concurrency.
More detailed code:
import HomePage from '../../page/HomePage/HomePage';
import EnvUtil from '../../util/EnvUtil';

const wrapper = (config, testFn) => {
    config.locales.forEach(async locale =>
        config.channels.forEach(async channel => {
            const tstConfig = {
                locale,
                channel
            };
            tstConfig.env = EnvUtil.parse(tstConfig, config.args.env);
            testConfig.foo = await EnvUtil.get() // If I remove this line it works!
            testFn(config, locale, channel)
        })
    );
};
};
fixture(`[Feature] Feature 1`)
    .beforeEach(async t => {
        t.ctx.pages = {
            home: new HomePage(),
            ... more pages here
        };
    });

wrapper(global.config, (testConfig, locale, channel) => {
    test
        .before(async (t) => {
            t.ctx.config = testConfig;
        })
        .page(`foo.bar.com`)
        (`[Feature] [Locale: ${locale.key}] [Channel: ${channel.key}] Feature 1`, async (t) => {
            await t.ctx.pages.home.header.search(t, '3301');
            .. more test code here
        });
});
If I run it like this I get a "test is undefined" error. Is there something wrong in the way I'm wrapping "test"?
As of TestCafe version 0.23.1, you can run tests imported from external libraries or generated dynamically, even if the test file you provide does not contain any tests.
You can learn more here: Run Dynamically Loaded Tests
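Based on that recipe and the earlier answer in this thread, a minimal sketch looks like this (the fixture name, page URL and channel list are illustrative). The key point is importing fixture and test explicitly from the testcafe module, which is what lets tests be registered from inside callbacks and may avoid the "test is undefined" error:
import { fixture, test } from 'testcafe';

const channels = ['b2c', 'b2b'];

fixture `[Feature] Feature 1`
    .page `https://foo.bar.com`;

channels.forEach(channel => {
    test(`[Channel: ${channel}] Feature 1`, async t => {
        // channel-specific test code goes here
        await t.expect(true).ok();
    });
});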