Using another test as test constructor - testing

I wonder how I can reuse one of my test cases (test files). Assume the tests are split so that each JS file contains exactly one test case. It would be very useful if I could run one of my test cases (test files) inside a fixture, so that before a test starts it can prepare my environment by running another test.
Suppose my fixture looks like this:
import loginPage from '../../../pages/login-page';

fixture `Regression - UI`
    .page(DOMAIN)
    .beforeEach(async t => {
        await t.maximizeWindow();
        await loginPage.login(t, EMAIL, PASSWORD);
    });
What I want to do is execute another test after the login runs, for example this "Adding a new creative" test:
test(`Adding a new creative`, async t => {
    await leftNavigation.clickCampaignSection(t);
    await leftNavigation.clickAllCampaigns(t);
});
So the fixture would look like the code below, and EXECUTE.ANOTHER.TEST would be executed:
import loginPage from '../../../pages/login-page';
import EXECUTE.ANOTHER.TEST from '../another_test_file.js';

fixture `Regression - UI`
    .page(DOMAIN)
    .beforeEach(async t => {
        await t.maximizeWindow();
        await loginPage.login(t, EMAIL, PASSWORD);
        await EXECUTE.ANOTHER.TEST;
    });

You cannot execute a test inside another test or in fixture hooks (beforeEach, afterEach). Instead, create a separate method (for example, leftNavigation.addNewCreative) with the appropriate logic and call it wherever it's needed.
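For illustration, here is a minimal sketch of what that suggestion could look like, based on the question's code (the page-model path, selectors, and method bodies are assumptions):

// pages/left-navigation.js (hypothetical page model)
import { Selector } from 'testcafe';

class LeftNavigation {
    async clickCampaignSection(t) {
        await t.click(Selector('#campaign-section'));   // placeholder selector
    }

    async clickAllCampaigns(t) {
        await t.click(Selector('#all-campaigns'));      // placeholder selector
    }

    // Shared setup steps, reusable from hooks and tests instead of "executing another test"
    async addNewCreative(t) {
        await this.clickCampaignSection(t);
        await this.clickAllCampaigns(t);
    }
}

export default new LeftNavigation();

The fixture then calls the method rather than importing a test:

import loginPage from '../../../pages/login-page';
import leftNavigation from '../../../pages/left-navigation';

fixture `Regression - UI`
    .page(DOMAIN)
    .beforeEach(async t => {
        await t.maximizeWindow();
        await loginPage.login(t, EMAIL, PASSWORD);
        await leftNavigation.addNewCreative(t);         // shared setup, not a separate test
    });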

Related

Testcafe data driven testing - how to drive tests with data fetched from API

I'm having trouble figuring out how to drive tests with data fetched from a request. I've read the documentation here: https://testcafe.io/documentation/402804/recipes/best-practices/create-data-driven-tests, and all of the examples use static JSON file data available at compile time.
I can't fetch the data in a fixture.before hook, because data fetched there is only available inside the test context, while I need it outside the test context so I can iterate over it and generate a test per item (i.e. put the test inside a for loop).
I've tried this solution: https://github.com/DevExpress/testcafe/issues/1948, however it fails with testcafe ERROR No tests found in the specified source files. Ensure the sources contain the 'fixture' and 'test' directives., even when I use the disable-test-syntax-validation flag and the .run({ disableTestSyntaxValidation: true }) option.
I am looking for suggestions and workarounds so that I can await some data, then run my tests. Even if Testcafe doesn't explicitly support something like this, I figure there must be some workaround... Thanks in advance.
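For reference, the programmatic run mentioned above would look roughly like this (a sketch only; the source path and browser are placeholders, and disableTestSyntaxValidation is simply the option the question refers to):

const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost', 1337, 1338);

    try {
        const runner = testcafe.createRunner();
        const failedCount = await runner
            .src(['tests/'])                                 // placeholder source path
            .browsers(['chrome'])
            .run({ disableTestSyntaxValidation: true });     // option referenced in the question

        console.log('Failed tests:', failedCount);
    }
    finally {
        await testcafe.close();
    }
})();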
Edit:
file-a.ts
export function tSteps(...args) {
    // some setup
    const name = args[0];                   // test name
    const testcase = args[args.length - 1]; // test body
    const testCtx = test(name, async t => {
        ...
    });
    return testCtx;
}
----
file-b.ts
export const parameterizedTest = <T>(..., testcase: (scenario: T) => TestFn) => {
    // some setup...
    // I have also tried awaiting rows data here, which does not work
    // because tests are not discoverable at compile time
    ...
    const scenarios: T[] = rows.map(row => {
        ...
    });
    scenarios.forEach((scenario, idx) => {
        return testcase(scenario).meta({
            some metadata
        });
    });
};
----
tests.ts
fixture(...).before(async () => {
    // can't get the data I need here because it needs to be available outside of the fixture context
})

parameterizedTest<MyInterface>(some params, (scenario: MyInterface) => {
    return tSteps('my test',
        async f => {
            // some setup
            // test code goes here which uses scenario.attributex, scenario.attributey, etc.
        }
    ).meta(...);
});
In v1.0.0 and later, TestCafe does not validate test syntax. Please specify the TestCafe version that you use when you see the validation error.
Unfortunately, we cannot use pseudo-code to reproduce the issue you encountered. Please share some code that we could run to see the problematic behavior.
Generally speaking, TestCafe allows you to fetch data asynchronously and then spawn tests based on the received values. For instance, the following code works fine for me with TestCafe 1.18.3:
import { fixture, test } from 'testcafe';
import fetch from './node-fetch-mock';

(async () => {
    const testData = await fetch();

    testData
        .map(n => () => {
            fixture `Fixture ${n}`
                .page `https://google.com`;

            test(`Test ${n}`, async t => {
                await t.expect(true).ok();
            });
        })
        .map(async test => { await test(); });
})();
node-fetch-mock.js
export default async function fetch() {
    return [1, 2, 3, 4, 5];
}
The only caveat is that I have to import fixture and test explicitly because I call them from callbacks.
Could you please provide us with any test code snippet that demonstrates the problem? We need to correctly understand the cause of the problem and reproduce it on our side.

Testcafe Running Multiple Fixtures or Tests consecutively from within the same "file"

I've been working with TestCafe and have had some issues working with multiple "Fixtures" or "Tests" within the same "file".
For example, TESTFILE.js:
import { Selector, Role } from 'testcafe';
import Games from '../Pages/Games.js';
import { partnerAdmin } from '../Data/Roles.js';
const config = require('../Data/config.json');

fixture `Test Fixture #1`
    .page `${config.envs.dev.baseUrl}`;

test('Testing Test #1', async t => {
    await t
        .useRole(partnerAdmin);
    await t
        .expect(Selector(Games.gamesList).exists).ok()
        .expect(Selector(Games.topRowGame).exists).ok()
        .click(Games.gameActionsBtn);
});

fixture `Test Fixture #2`
    .page `${config.envs.dev.baseUrl}`;

test('Testing Test #2', async t => {
    await t
        .useRole(partnerAdmin);
    await t
        // DO OTHER STUFF
});
When I try this the 1st fixture will run perfectly... HOWEVER, the 2nd Fixture will stop at the login page. It's as if the 'useRole' function is failing the second time it's getting called.
I'm using Roles as described in the documentation.
For example, Roles.js:
import { Selector, Role, t } from 'testcafe';
import Login from '../Pages/Login.js';
const config = require('../Data/config.json');

const partnerAdmin = Role(`${config.envs.dev.baseUrl}`, async t => {
    await Login.login(config.users.partnerAutoAdmin.name, config.users.partnerAutoAdmin.pass);
});

export { partnerAdmin };
It doesn't matter the order I place the "Fixtures" or "Tests" in. I can literally swap the 2nd fixture to before the 1st and it still does the same thing. The first runs, but the second just straight up fails as if it can't remember what it just did when it hits the login page the second time.
Does anyone have any idea what I might be missing here? I'm using Roles and page models that seem to be working normally. Unfortunately, the first test runs but the second fails to continue, even though it just duplicates the same login/Role setup.

How can I log into my web app, then read through the records of my data.json file using TestCafe

I've googled and I can find how to loop through my data file; apparently you run a test for each record of data.
Instead, I would like to have a single test log in and then cycle through each record (item) of the data file. The data is a series of searches in our app, so the test would log in, assert that it's logged in, and then run those searches...
test('searches', async t => {
    await t
        // Log in...
        .typeText('input[id="login-name"]', 'aguy')
        .typeText('input[id="login-password"]', 'bbb')
        .click('button[id="signin-button"]')
        .expect(Selector('span[id="logged-in-user"]').innerText).contains('Hal')
        // At this point the app is ready to run through the searches doing this...
        // forEach item in my data...
        .typeText('input[id="simplecriteria"]', data.criteria)
        .click('button[class="search-button"]')
        .expect(Selector('div[class="mat-paginator-range-label"]').innerText).contains(data.srchResult)
});
TestCafe has test hooks and I recommend using them. Note, however, that logging in only once will not work, because TestCafe deletes cookies between tests. If you log in once and then write your tests like this:
const testData = require('../Resources/testData.json');

let executed = false;

fixture `Searches`
    .page(baseUrl)
    .beforeEach(async t => {
        if (!executed) {
            // run this only once before all tests
            executed = true;

            // log in
            await t
                .typeText('input[id="login-name"]', 'aguy')
                .typeText('input[id="login-password"]', 'bbb')
                .click('button[id="signin-button"]')
                .expect(Selector('span[id="logged-in-user"]').innerText).contains('Hal');
        }
    });

testData.forEach((data) => {
    test('Searches', async t => {
        await t
            .typeText('input[id="simplecriteria"]', data.criteria)
            .click('button[class="search-button"]')
            .expect(Selector('div[class="mat-paginator-range-label"]').innerText).contains(data.srchResult);
    });
});
then you'll most likely be logged out after the first test.
However, I'd still use the beforeEach hook, but put the loop inside the test:
const testData = require('../Resources/testData.json');

fixture `Searches`
    .page(baseUrl)
    .beforeEach(async t => {
        await t
            // Log in...
            .typeText('input[id="login-name"]', 'aguy')
            .typeText('input[id="login-password"]', 'bbb')
            .click('button[id="signin-button"]')
            .expect(Selector('span[id="logged-in-user"]').innerText).contains('Hal');
    });

test('Searches', async t => {
    // use for...of instead of forEach so that each await actually pauses the test
    for (const data of testData) {
        await t
            .typeText('input[id="simplecriteria"]', data.criteria)
            .click('button[class="search-button"]')
            .expect(Selector('div[class="mat-paginator-range-label"]').innerText).contains(data.srchResult);
    }
});
There's an obvious disadvantage:
many different searches are combined into one test, so if one search fails, the whole "Searches" test case is marked as failed
Another solution might be to find out what it means to be logged in. If it's about adding some cookie, you could log in once and then only set up that cookie before your tests. However, in many modern systems such "log-in cookies" have the httpOnly flag, so you can't really set them from JavaScript.
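For illustration, a rough sketch of that cookie approach (the cookie name, value, and reload step are assumptions, and it only works when the cookie is not httpOnly):

import { ClientFunction } from 'testcafe';

// Write a session cookie into the page; only possible for cookies without the httpOnly flag.
const setSessionCookie = ClientFunction((name, value) => {
    document.cookie = `${name}=${value}; path=/`;
});

let sessionValue; // assumed to be obtained once, e.g. by logging in via an API call in fixture.before

fixture `Searches`
    .page(baseUrl)
    .beforeEach(async t => {
        await setSessionCookie('session_id', sessionValue); // placeholder cookie name
        await t.eval(() => location.reload());              // reload so the app picks up the cookie
    });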

Jest + Selenium: how do I do an operation before and after all tests inside a describe block is run?

I am trying to create a simple automated test to detect if the added element contains the text it is supposed to have. The test is run using node.js with jest command. I am using Selenium to automate the UI process and Jest to validate the UI's content.
I want to do the following.
Create variables that are accessible in all tests in the describe block before running any of the tests
Close the Selenium-driven browser after all tests in the describe block are run
So far, I have this code.
const { Builder, By, until } = require('selenium-webdriver')

const url = 'http://127.0.0.1:3000'

describe('addUser', async() => {
    afterAll(async() => {
        await driver.quit()
    }, 15000)

    test('valid name and age should add a new element', async() => {
        const driver = await new Builder().forBrowser('firefox').build()
        await driver.get(url)

        const nameField = await driver.wait(until.elementLocated(By.id('name')), 10000)
        const ageField = await driver.wait(until.elementLocated(By.id('age')), 10000)
        const btnAddUser = await driver.wait(until.elementLocated(By.id('btnAddUser')), 10000)

        await nameField.click()
        await nameField.sendKeys('Adam')
        await ageField.click()
        await ageField.sendKeys('39')
        await btnAddUser.click()

        const userItem = await driver.wait(until.elementLocated(By.css('.user-item')), 10000)
        const userItemText = await userItem.getText()

        expect(userItemText).toBe('Adam (39 years old)')
    }, 10000)
})
The problems I am facing are the following.
I have to declare the driver, ask the driver to open a new page, and find all the necessary elements every time I run a test. If possible, I would like to do these initialization steps inside a beforeAll function (from Jest) and store the variables somehow. Then I could use driver, nameField, ageField, etc. in every test without having to declare them again. How would I do this while keeping the code clean?
I want to close the Selenium-driven browser after all tests inside the addUser describe block are run. So I added driver.quit() inside afterAll (from Jest) to close the browser. Unfortunately, this doesn't work; the browser doesn't close itself. How can I close the Selenium-operated browser after each describe block?
The test is working great, but how can I solve the two problems above?
The driver variable is declared in test scope and is unavailable in afterAll. Even if it were declared in describe scope, teardown would be performed only for the last driver, because there can be multiple tests but afterAll is called only once, after the last one.
Variables that need a teardown can be either redefined for each test:
let driver;

beforeEach(async () => {
    driver = ...
});

afterEach(async () => {
    await driver.quit()
});
Or reused for all tests:
let driver;

beforeAll(async () => {
    driver = ...
});

afterAll(async () => {
    await driver.quit()
});
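Applied to the question's code, the reusable variant could look roughly like this (a sketch that keeps one shared driver for the whole describe block; element lookups stay inside the test because the page is reloaded before each one):

const { Builder, By, until } = require('selenium-webdriver')

const url = 'http://127.0.0.1:3000'

describe('addUser', () => {
    let driver

    beforeAll(async () => {
        driver = await new Builder().forBrowser('firefox').build()
    }, 15000)

    afterAll(async () => {
        await driver.quit()
    }, 15000)

    beforeEach(async () => {
        await driver.get(url)   // start every test from a fresh page load
    }, 10000)

    test('valid name and age should add a new element', async () => {
        const nameField = await driver.wait(until.elementLocated(By.id('name')), 10000)
        const ageField = await driver.wait(until.elementLocated(By.id('age')), 10000)
        const btnAddUser = await driver.wait(until.elementLocated(By.id('btnAddUser')), 10000)

        await nameField.sendKeys('Adam')
        await ageField.sendKeys('39')
        await btnAddUser.click()

        const userItem = await driver.wait(until.elementLocated(By.css('.user-item')), 10000)
        expect(await userItem.getText()).toBe('Adam (39 years old)')
    }, 10000)
})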

How to run multiple tests under same fixture with same browser session

I am trying to test multiple features in one test.js file, with each feature implemented as a test. All these tests can be run only after logging in to the portal. TestCafe closes the browser after the first test, which makes the subsequent tests fail. I used a Role and .beforeEach so that the tests can log in to the portal and proceed as usual, but is there an easy way to just continue all the tests in the same browser session without closing it after each test?
I used the Role feature and .beforeEach, which looks like a workaround to me. Is there any other way to run all tests one after another without closing the browser session? This would save us the login operation for each test and the instability it might cause.
import { Selector, ClientFunction, Role } from 'testcafe';
import loginpage from '../../features/blah/login/page-model';

const appPortalUser2 = Role('https://test.com', async t => {
    await loginpage.signInToPortal();
    await loginpage.login('test-userm', 'Password123');
}, { preserveUrl: true });

fixture `portal - settings`
    .page `https://test.com/apps`
    .beforeEach(async t => {
        await t
            .useRole(appPortalUser2);
    });

test('settings', async t => {
    //test1
    await loginpage.foo1();
});

test('backup', async t => {
    //test2
    await loginpage.foo2();
});
Actual behavior: after test1, the browser session ends and the login page appears, which fails test2 unless .beforeEach is used.
Expected: the browser session should continue into test2 after test1 without .beforeEach, i.e. an option for tests to continue without creating a new browser session each time.
At the moment, there is no such option in the public API.
The idea is that one test should not affect another test in any way. If all tests ran in one browser session, every test would be influenced by the preceding ones, since they could leave the page in an invalid state.