How to make TestCafe wait until a fixture is done executing before moving to the next fixture when using concurrency?

I want to run TestCafe tests concurrently, but only against one test file at a time.
In other words, I want all the tests of a specific fixture to finish executing before the tests from the next fixture start executing.
How do I do it?

You can do this using the TestCafe programming interface.
Please see the following example:
const createTestCafe = require('testcafe');

let testcafe = null;
let runner = null;

createTestCafe('localhost', 1337, 1338)
    .then(tc => {
        testcafe = tc;
        runner = tc.createRunner()
            .browsers('chrome')
            .concurrency(3);
    })
    .then(() => {
        return runner.src('fixture1.js').run();
    })
    .then(() => {
        return runner.src('fixture2.js').run();
    })
    .then(() => {
        testcafe.close();
    });
However, please note that the tests are executed as two sequential runs here. That means that your browsers will be opened twice too, and you will also get two separate reports.
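If you need a single pass/fail status at the end, a minimal sketch (assuming the same ports, browser, and fixture file names as above) is to sum the failed-test counts that each run() call resolves with. A fresh runner is created per fixture so the source lists of the two runs stay independent:

const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost', 1337, 1338);
    let totalFailed = 0;

    // Run one fixture file at a time; run() resolves with the number of failed tests.
    for (const fixtureFile of ['fixture1.js', 'fixture2.js']) {
        totalFailed += await testcafe
            .createRunner()
            .browsers('chrome')
            .concurrency(3)
            .src(fixtureFile)
            .run();
    }

    await testcafe.close();
    process.exit(totalFailed ? 1 : 0);
})();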

Related

Jest + Selenium: how do I do an operation before and after all tests inside a describe block are run?

I am trying to create a simple automated test that detects whether the added element contains the text it is supposed to have. The test is run with Node.js using the jest command. I am using Selenium to automate the UI process and Jest to validate the UI's content.
I want to do the following.
Create variables that are accessible in all tests in the describe block before running any of the tests
Close the Selenium-driven browser after all tests in the describe block are run
So far, I have this code.
const { Builder, By, until } = require('selenium-webdriver')

const url = 'http://127.0.0.1:3000'

describe('addUser', async () => {
    afterAll(async () => {
        await driver.quit()
    }, 15000)

    test('valid name and age should add a new element', async () => {
        const driver = await new Builder().forBrowser('firefox').build()
        await driver.get(url)

        const nameField = await driver.wait(until.elementLocated(By.id('name')), 10000)
        const ageField = await driver.wait(until.elementLocated(By.id('age')), 10000)
        const btnAddUser = await driver.wait(until.elementLocated(By.id('btnAddUser')), 10000)

        await nameField.click()
        await nameField.sendKeys('Adam')
        await ageField.click()
        await ageField.sendKeys('39')
        await btnAddUser.click()

        const userItem = await driver.wait(until.elementLocated(By.css('.user-item')), 10000)
        const userItemText = await userItem.getText()

        expect(userItemText).toBe('Adam (39 years old)')
    }, 10000)
})
The problems I am facing are the following.
I have to declare the driver, ask the driver to open a new page, and find all the necessary elements every time I run a test. If possible, I would like to do these initialization steps inside a beforeAll function (provided by Jest) and store the variables somehow. Then I could use driver, nameField, ageField, etc. in every test without having to declare them again. How would I do this while keeping the code clean?
I want to close the Selenium-driven browser after all tests inside the addUser describe block are run. So, I added driver.quit() inside afterAll (Jest) to close the browser. Unfortunately, this doesn't work; the browser doesn't close itself. How can I close the Selenium-operated browser after each describe block?
The test is working great, but how can I solve the two problems above?
The driver variable is declared in test scope, so it is unavailable in afterAll. Even if it were declared in describe scope, teardown would be performed only for the last driver, because there can be multiple tests but afterAll is called only once, after the last one.
Variables that need teardown can either be redefined for each test:
let driver;

beforeEach(async () => {
    driver = ...
});

afterEach(async () => {
    await driver.quit()
});

Or reused for all tests:

let driver;

beforeAll(async () => {
    driver = ...
});

afterAll(async () => {
    await driver.quit()
});

How do I run tests on multiple browsers in LambdaTest?

I have a LambdaTest and TestCafe setup; the LambdaTest account has a single parallel run.
As far as I understand, TestCafe doesn't support queuing of tests.
So my question is: how do I run tests on different browser/OS combinations on LambdaTest, one after the other, without queuing?
Thanks in advance.
You can create a separate runner for each browser and run them in series. You can find an example in the following thread on GitHub:
https://github.com/DevExpress/testcafe/issues/2495#issuecomment-421090352
As Dmitry stated, you can create a separate runner for each browser and run them in series.
Here is example code that runs tests over the LambdaTest Selenium Grid through a custom TestCafe runner.
const createTestCafe = require('testcafe');

let testcafe = null;

// Each inner array is a set of browsers passed to one runner; the sets run in series.
const browsers = [
    ['lambdatest:Chrome#74.0:Windows 10', 'lambdatest:Chrome#75.0:Windows 10'],
    ['lambdatest:Chrome#76.0:Windows 8', 'lambdatest:Chrome#77.0:Windows 8'],
];

const runTest = async browser => {
    console.log('starting tests');

    await createTestCafe('localhost', 1337, 1338)
        .then(tc => {
            testcafe = tc;
            const runner = testcafe.createRunner();

            return runner
                .src(['web-tests/*.ts'])
                .browsers(browser)
                .run();
        })
        .then(async failedCount => {
            console.log('Tests failed: ' + failedCount);
            await testcafe.close();
        });
};

const runAllBrowsers = async () => {
    for (const browser of browsers) {
        await runTest(browser);
    }
};

runAllBrowsers();
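A slight variation on the example above (just a sketch, reusing the browsers array from it) is to create the TestCafe server once and spin up a fresh runner per browser set, so the proxy does not have to be restarted between runs:

const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost', 1337, 1338);

    try {
        for (const browser of browsers) {
            // A new runner per browser set; run() resolves with the failed test count.
            const failedCount = await testcafe
                .createRunner()
                .src(['web-tests/*.ts'])
                .browsers(browser)
                .run();

            console.log('Tests failed: ' + failedCount);
        }
    }
    finally {
        await testcafe.close();
    }
})();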
For more information, refer to the LambdaTest & TestCafe GitHub repository. Cheers!

TestCafe RequestLogger - How to implement for every request in the test framework

We are trying to track down a network issue in our company that causes a Browser Disconnect general error. I want to use the RequestLogger timestamp to help us pinpoint when this intermittent issue occurs, along with any additional request/response information available at that time.
In the RequestLogger documentation, .requestHooks(logger) is attached at the individual test level, and console.log(logRecord.X.X) is then used to log the record at that specific time.
But how can I have continuous logging throughout my whole test framework without calling console.log(logRecord.X.X) on every line?
Is it somehow possible to have the RequestLogger continuously running via my test-runner function?
if (nodeConfig.util.getEnv('NODE_ENV') == "jenkins-ci") {
    // @ts-ignore
    // createTestCafe("localhost", port1, port2).then(tc => {
    createTestCafe().then(tc => {
        this.testcafe = tc;
        this.runner = this.testcafe.createRunner();

        return this.runner
            .src(testPath)
            .filter(filterSettings)
            .browsers(environment.browserToLaunch)
            .concurrency(environment.concurrencyAmount)
            .reporter(reporterSettings)
            .run(runSettingsCi);
    })
    .then(failedCount => {
        console.log('Location ' + testPath + ' tests failed: ' + failedCount);
        this.testcafe.close();
        process.exit(0);
    })
    .catch((err) => {
        console.log('Location ' + testPath + ' General Error');
        console.log(err);
        this.testcafe.close();
        process.exit(1);
    });
}
TestCafe doesn't allow attaching request hooks via the test runner class. However, you can attach a hook to each fixture, and RequestLogger will collect information about all requests made during its tests.
For example:
import { Selector, RequestLogger } from 'testcafe';

const logger = RequestLogger();

fixture `Log all requests`
    .page `devexpress.github.io/testcafe`
    .requestHooks(logger)
    .afterEach(() => console.log(logger.requests));

test('Test 1', async t => {
    await t
        .click(Selector('span').withText('Docs'))
        .click(Selector('a').withText('Using TestCafe'))
        .click(Selector('a').withText('Test API'));
});

test('Test 2', async t => {
    await t
        .click(Selector('span').withText('Docs'))
        .click(Selector('a').withText('Continuous Integration'))
        .click(Selector('a').withText('How It Works'));
});
Previously, TestCafe allowed you to attach request hooks to one test or fixture at a time. In the new TestCafe v1.19.0, you can also define global request hooks in a JavaScript configuration file .testcaferc.js to attach them to all fixtures and tests within a test run. You can learn more here: Global Request Hooks.
Please note that you can use the configFile option in CLI and program API to specify the path to a config file.
For the scenario described in your question, you can use the following example:
const { RequestLogger } = require('testcafe');

const logger = RequestLogger();

module.exports = {
    hooks: {
        request: logger,
    },
};
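If you need more detail than the default log records provide (the question specifically asks about timestamps and request/response data), you can also construct the logger with options and print the fields you care about in a single afterEach hook, instead of spreading console.log calls through the tests. A sketch based on the fixture-level example above:

import { RequestLogger } from 'testcafe';

// Log every request (the /./ filter matches any URL) and include headers in the records.
const logger = RequestLogger(/./, {
    logRequestHeaders:  true,
    logResponseHeaders: true
});

fixture `Log all requests with details`
    .page `devexpress.github.io/testcafe`
    .requestHooks(logger)
    .afterEach(() => {
        // Each log record exposes request.timestamp, request.method, request.url, response.statusCode, etc.
        for (const record of logger.requests)
            console.log(record.request.timestamp, record.request.method, record.request.url, record.response.statusCode);
    });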

How can I customize the test runner in the TestCafe framework?

I have my own test runner for my tests.
The problem is: how can I set the browser resolution in the test runner? I don't want to add this to my tests.
Thanks for the help.
runner
    .src(testFiles)
    .browsers('chrome')
    .reporter('html', stream)
    .run()
    .then(failedCount => {
        console.log(failedCount);
        testcafe.close();
    });
You can manage the resolution of your browser via command-line parameters. Since Chrome supports the --window-size parameter, you can pass it to the .browsers method of your runner.
Please see the following example:
const createTestCafe = require('testcafe');

let testcafe = null;

createTestCafe('localhost', 1337, 1338)
    .then(tc => {
        testcafe = tc;
        const runner = testcafe.createRunner();

        return runner
            .src('my-tests.js')
            .browsers(['chrome --window-size=1000,500', 'chrome --window-size=500,200'])
            .run();
    })
    .then(failedCount => {
        console.log('Tests failed: ' + failedCount);
        testcafe.close();
    });
Here I run tests in two Chrome instances, but with different window sizes.
Please also refer to the following article to see different ways of using the .browsers method: https://devexpress.github.io/testcafe/documentation/using-testcafe/programming-interface/runner.html#browsers

TestCafe 'dynamic' test cases

I created a few e2e sanity tests for my current project using TestCafe. These tests are standard TestCafe tests:
fixture(`Basic checkout flow`)

test('Main Flow', async (t) => {
});
I would like to execute this test for multiple site locales and multiple channels, i.e. I need this test to run for nl_nl, nl_be, en_gb, ... and also for channels like b2c, b2b, ...
The easiest way would be to create a loop in the test itself to loop over the locales and channels, but I want to run these tests concurrently.
I tried to create a function to dynamically generate these tests, but TestCafe can't seem to detect the tests then.
dynamicTest('Main Flow', async (t) => {
});

function dynamicTest(testName, testFn) {
    const channels = ['b2c']

    channels.forEach((channel) => {
        test(`[Channel: ${channel}] ${testName}`, testFn);
    });
}
Is there a better way of doing this? The only solution I see is running the test script multiple times from Jenkins to have concurrency.
more detailed code:
import HomePage from '../../page/HomePage/HomePage';
import EnvUtil from '../../util/EnvUtil';

const wrapper = (config, testFn) => {
    config.locales.forEach(async locale =>
        config.channels.forEach(async channel => {
            const tstConfig = {
                locale,
                channel
            };

            tstConfig.env = EnvUtil.parse(tstConfig, config.args.env);
            testConfig.foo = await EnvUtil.get() // If I remove this line it works!

            testFn(config, locale, channel)
        })
    );
};
};
fixture(`[Feature] Feature 1`)
    .beforeEach(async t => {
        t.ctx.pages = {
            home: new HomePage(),
            // ... more pages here
        };
    });

wrapper(global.config, (testConfig, locale, channel) => {
    test
        .before(async (t) => {
            t.ctx.config = testConfig;
        })
        .page(`foo.bar.com`)
        (`[Feature] [Locale: ${locale.key}] [Channel: ${channel.key}] Feature 1`, async (t) => {
            await t.ctx.pages.home.header.search(t, '3301');
            // .. more test code here
        });
});
If I run it like this I get a "test is undefined" error. Is there something wrong in the way I'm wrapping "test"?
Starting with TestCafe version 0.23.1, you can run tests imported from external libraries or generated dynamically, even if the test file you provide does not contain any tests.
You can learn more here: Run Dynamically Loaded Tests
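As a rough sketch of how the wrapper from the question could register tests dynamically, assuming the same EnvUtil helper and config shape. The key idea is that test() is called synchronously while TestCafe loads the file (which matches the observation in the question that removing the await line makes it work), and the async work is moved into the before hook:

import EnvUtil from '../../util/EnvUtil';

// Hypothetical wrapper: registers one test per locale/channel combination.
const wrapper = (config, testFn) => {
    config.locales.forEach(locale =>
        config.channels.forEach(channel => {
            const testConfig = { locale, channel };

            test
                .before(async t => {
                    // Async setup runs here, after the test has already been registered.
                    testConfig.env = EnvUtil.parse(testConfig, config.args.env);
                    testConfig.foo = await EnvUtil.get();
                    t.ctx.config = testConfig;
                })
                (`[Feature] [Locale: ${locale.key}] [Channel: ${channel.key}] Feature 1`, async t => {
                    await testFn(t, testConfig, locale, channel);
                });
        })
    );
};

wrapper(global.config, async (t, testConfig, locale, channel) => {
    await t.ctx.pages.home.header.search(t, '3301');
    // ... more test code here
});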