TestCafe RequestLogger - How to implement for every request in the test framework

We are trying to track down a network issue in our company that causes a "Browser Disconnect" general error. I want to use the RequestLogger's timestamps to highlight when this intermittent issue occurs, along with any request/response information captured at that time.
In the RequestLogger documentation, .requestHooks(logger) is attached at the individual test level, and console.log(logRecord.X.X) is then used to print the record at that specific point.
But how can I get continuous logging throughout my whole test framework without calling console.log(logRecord.X.X) on every line?
Is it somehow possible to have the RequestLogger continuously running via my test-runner function?
if (nodeConfig.util.getEnv('NODE_ENV') == "jenkins-ci") {
    // @ts-ignore
    // createTestCafe("localhost", port1, port2).then(tc => {
    createTestCafe().then(tc => {
        this.testcafe = tc;
        this.runner = this.testcafe.createRunner();

        return this.runner
            .src(testPath)
            .filter(filterSettings)
            .browsers(environment.browserToLaunch)
            .concurrency(environment.concurrencyAmount)
            .reporter(reporterSettings)
            .run(runSettingsCi);
    })
    .then(failedCount => {
        console.log('Location ' + testPath + ' tests failed: ' + failedCount);
        this.testcafe.close();
        process.exit(0);
    })
    .catch((err) => {
        console.log('Location ' + testPath + ' General Error');
        console.log(err);
        this.testcafe.close();
        process.exit(1);
    });
}

TestCafe doesn't allow attaching request hooks via the test runner class. However, you can attach a hook to each fixture; RequestLogger will then collect information about all requests.
For example:
import { Selector, RequestLogger } from 'testcafe';

const logger = RequestLogger();

fixture `Log all requests`
    .page `devexpress.github.io/testcafe`
    .requestHooks(logger)
    .afterEach(() => console.log(logger.requests));

test('Test 1', async t => {
    await t
        .click(Selector('span').withText('Docs'))
        .click(Selector('a').withText('Using TestCafe'))
        .click(Selector('a').withText('Test API'));
});

test('Test 2', async t => {
    await t
        .click(Selector('span').withText('Docs'))
        .click(Selector('a').withText('Continuous Integration'))
        .click(Selector('a').withText('How It Works'));
});
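Since the original goal was to correlate disconnects with timestamps, the afterEach hook can print per-request timestamps instead of dumping the whole array. A minimal sketch (the record fields below follow the shape of TestCafe's logged request object; double-check them against your version's documentation):

import { RequestLogger } from 'testcafe';

const logger = RequestLogger();

fixture `Log requests with timestamps`
    .page `devexpress.github.io/testcafe`
    .requestHooks(logger)
    .afterEach(() => {
        logger.requests.forEach(record => {
            // record.request.timestamp holds when the request was sent
            console.log(new Date(record.request.timestamp).toISOString(),
                record.request.method, record.request.url);
        });
    });

test('Timestamped request log', async t => {
    await t.expect(true).ok();
});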

Previously, TestCafe only allowed you to attach request hooks to one test or fixture at a time. Since TestCafe v1.19.0, you can also define global request hooks in a JavaScript configuration file (.testcaferc.js) to attach them to all fixtures and tests within a test run. You can learn more here: Global Request Hooks.
Please note that you can use the configFile option in the CLI and the program API to specify the path to a config file.
For the initial usage scenario, you can use the following example:
const { RequestLogger } = require('testcafe');

const logger = RequestLogger();

module.exports = {
    hooks: {
        request: logger,
    },
};
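As a rough sketch of how this could tie into the test-runner function from the question (the object form of createTestCafe with a configFile key is an assumption here; older versions only accept hostname and port arguments, so check your version's API reference):

const createTestCafe = require('testcafe');

// Point TestCafe at the config file that declares the global request hook.
createTestCafe({ configFile: './.testcaferc.js' })
    .then(async tc => {
        const failedCount = await tc
            .createRunner()
            .src('tests/')       // hypothetical test path
            .browsers('chrome')
            .run();

        console.log('tests failed: ' + failedCount);
        await tc.close();
    });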


How do I mock server-side API calls in a Nextjs app?

I'm trying to figure out how to mock calls to the Auth0 authentication backend when testing a Next.js app with React Testing Library. I'm using @auth0/nextjs-auth0 to handle authentication. My intention is to use MSW to provide mocks for all API calls.
I followed the next.js/examples/with-msw example in the Next.js docs to set up mocks for both client and server API calls. All API calls generated by the @auth0/nextjs-auth0 package (/api/auth/login, /api/auth/callback, /api/auth/logout and /api/auth/me) received mock responses.
A mock response for /api/auth/me is shown below
import { rest } from 'msw';

export const handlers = [
    // /api/auth/me
    rest.get(/.*\/api\/auth\/me$/, (req, res, ctx) => {
        return res(
            ctx.status(200),
            ctx.json({
                user: { name: 'test', email: 'email@domain.com' },
            }),
        );
    }),
];
The example setup works fine when I run the app in my browser, but when I run my tests the mocks are not picked up.
An example test block looks like this:
import React from 'react';
import { render, screen } from '@testing-library/react';
import Home from 'pages/index';
import App from 'pages/_app';

describe('Home', () => {
    it('should render the loading screen', async () => {
        render(<App Component={Home} />);
        const loader = screen.getByTestId('loading-screen');
        expect(loader).toBeInTheDocument();
    });
});
I render the page inside the App component, like this: <App Component={Home} />, so that I have access to the various contexts wrapping the pages.
I have spent about 2 days on this, trying out various configurations, and I still don't know what I might be doing wrong. Any and every help is appreciated.
This is probably resolved already for the author, but since I ran into the same issue and could not find useful documentation, this is how I solved it for end-to-end tests:
Overriding/configuring the API host.
The plan is to have the test runner start Next.js as a custom server and have it respond to both the Next.js routes and the API routes.
A requirement for this to work is being able to specify the backend (host) the API is calling (via environment variables). However, access to environment variables in Next.js is limited; I made this work using the publicRuntimeConfig setting in next.config.mjs. Within that file you can use runtime environment variables, which then bind to the publicRuntimeConfig section of the configuration object.
/** @type {import('next').NextConfig} */
const nextConfig = {
    // ...
    publicRuntimeConfig: {
        API_BASE_URL: process.env.API_BASE_URL,
        API_BASE_PATH: process.env.API_BASE_PATH,
    },
    // ...
};

export default nextConfig;
Everywhere I reference the API, I use the publicRuntimeConfig to obtain these values, which gives me control over what exactly the (backend) code is calling.
Being able to control the hostname of the API at runtime lets me change it to the local machine's host and then intercept the call and respond with a fixture.
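For illustration, such a reference could look like the following (getConfig comes from next/config; the apiUrl helper name is hypothetical):

import getConfig from 'next/config';

const { publicRuntimeConfig } = getConfig();

// Hypothetical helper: build API URLs from the runtime-configurable base.
export function apiUrl(path) {
    return `${publicRuntimeConfig.API_BASE_URL}${publicRuntimeConfig.API_BASE_PATH}${path}`;
}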
Configuring Playwright as the test runner.
My e2e test stack is based on Playwright, which has a playwright.config.ts file:
import type { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
    globalSetup: './playwright.setup.js',
    testMatch: /.*\.e2e\.ts/,
};

export default config;
This points to another file, playwright.setup.js, which configures the actual tests and the backend API mocks:
import { createServer } from 'http';
import { parse } from 'url';
import next from 'next';
import EndpointFixture from "./fixtures/endpoint.json";

// Config
const dev = process.env.NODE_ENV !== 'production';
const baseUrl = process?.env?.API_BASE_URL || 'localhost:3000';

// Context
const hostname = String(baseUrl.split(/:(?=\d)/)[0]).replace(/.+:\/\//, '');
const port = baseUrl.split(/:(?=\d)/)[1];
const app = next({ dev, hostname, port });
const handle = app.getRequestHandler();

// Setup
export default async function playwrightSetup() {
    const server = createServer(async (request, response) => {
        // Mock for a specific endpoint, responds with a fixture.
        if (request.url.includes(`path/to/api/endpoint/${EndpointFixture[0].slug}`)) {
            response.write(JSON.stringify(EndpointFixture[0]));
            response.end();
            return;
        }
        // Fallback for the API, notifies about a missing mock.
        else if (request.url.includes('path/to/api/')) {
            console.log('(Backend) mock not implemented', request.url);
            return;
        }
        // Regular Next.js behaviour.
        const parsedUrl = parse(request.url, true);
        await handle(request, response, parsedUrl);
    });

    // Start listening on the configured port; log any server errors.
    server.on('error', (error) => console.error(error));
    server.listen(port);

    // Inject the hostname and port into the application's publicRuntimeConfig.
    process.env.API_BASE_URL = `http://${hostname}:${port}`;
    await app.prepare();
}
Using this kind of setup, the test runner starts a server that responds to both the routes defined by/in Next.js and the intentionally mocked (backend) routes, allowing you to specify a fixture to respond with.
Final notes
Using publicRuntimeConfig in combination with a custom Next.js server gives you a relatively large amount of control over the calls being made on the backend. However, it does not necessarily intercept calls from the frontend, so the existing frontend mocks might still be necessary.

Testcafe data driven testing - how to drive tests with data fetched from API

I'm having trouble figuring out how to drive tests with data fetched from a request. I've read the documentation here: https://testcafe.io/documentation/402804/recipes/best-practices/create-data-driven-tests, and all the examples use static JSON file data available at compile time.
I can't fetch the data in a fixture.before hook, because it will only be available inside the test context, but I need the data outside the test context for iteration, so that the test sits inside a for loop.
I've tried this solution: https://github.com/DevExpress/testcafe/issues/1948, however it fails with testcafe ERROR No tests found in the specified source files. Ensure the sources contain the 'fixture' and 'test' directives., even when I use the disable-test-syntax-validation flag and the .run({ disableTestSyntaxValidation: true }) option.
I am looking for suggestions and workarounds so that I can await some data, then run my tests. Even if TestCafe doesn't explicitly support something like this, I figure there must be some workaround... Thanks in advance.
Edit:
file-a.ts
export function tSteps(...args) {
    // some setup
    const testcase = args[args.length - 1];

    const testCtx = test(name, async t => {
        ...
    });

    return testCtx;
}
----
file-b.ts
export const parameterizedTest = <T>(..., testcase: (scenario: T) => TestFn) => {
    // some setup...
    // I have also tried awaiting the rows data here, which does not work
    // because tests are not discoverable at compile time
    ...

    const scenarios: T[] = rows.map(row => {
        ...
    });

    scenarios.forEach((scenario, idx) => {
        return testcase(scenario).meta({
            // some metadata
        });
    });
};
----
tests.ts
fixture(...).before(async () => {
    // can't get the data I need here because it needs to be available
    // outside of the fixture context
});

parameterizedTest<MyInterface>(some params, (scenario: MyInterface) => {
    return tSteps('my test',
        async f => {
            // some setup
            // test code goes here, using scenario.attributex, scenario.attributey, etc.
        }
    ).meta(...);
});
In v1.0.0 and later, TestCafe does not validate test syntax. Please specify the TestCafe version that you use when you see the validation error.
Unfortunately, we cannot use pseudo-code to reproduce the issue you encountered. Please share some code that we could run to see the problematic behavior.
Generally speaking, TestCafe allows you to fetch data asynchronously and then spawn tests based on the received values. For instance, the following code works fine for me with TestCafe 1.18.3:
import { fixture, test } from 'testcafe';
import fetch from './node-fetch-mock';

(async () => {
    const testData = await fetch();

    testData
        .map(n => () => {
            fixture `Fixture ${n}`
                .page `https://google.com`;

            test(`Test ${n}`, async t => {
                await t.expect(true).ok();
            });
        })
        .map(async test => { await test(); });
})();
node-fetch-mock.js
export default async function fetch() {
    return [1, 2, 3, 4, 5];
}
The only caveat is that I have to import fixture and test explicitly because I call them from callbacks.
Could you please provide us with any test code snippet that demonstrates the problem? We need to correctly understand the cause of the problem and reproduce it on our side.

How can I log into my web app, then read through the records of my data.json file using TestCafe

I've googled and I can find how to loop through my data file. Apparently you run a test for each record of data.
I would like to have my single test log in, then cycle through each 'record' or item of the data file. The data is a series of searches in our app. So the test would log in, assert that it's logged in, and then run those searches...
test('searches', async t => {
    await t
        // Log in...
        .typeText('input[id="login-name"]', 'aguy')
        .typeText('input[id="login-password"]', 'bbb')
        .click('button[id="signin-button"]')
        .expect(Selector('span[id="logged-in-user"]').innerText).contains('Hal')
        // At this point the app is ready to run through the searches doing this...
        // forEach item in my data...
        .typeText('input[id="simplecriteria"]', data.criteria)
        .click('button[class="search-button"]')
        .expect(Selector('div[class="mat-paginator-range-label"]').innerText).contains(data.srchResult);
});
TestCafe has test hooks, and I recommend using them, even though they are not that useful in your case: TestCafe deletes cookies between tests, so if you log in once and then write your test like so:
const testData = require('../Resources/testData.json');

let executed = false;

fixture `Searches`
    .page(baseUrl)
    .beforeEach(async t => {
        if (!executed) {
            // run this only once before all tests
            executed = true;

            // log in
            await t
                .typeText('input[id="login-name"]', 'aguy')
                .typeText('input[id="login-password"]', 'bbb')
                .click('button[id="signin-button"]')
                .expect(Selector('span[id="logged-in-user"]').innerText).contains('Hal');
        }
    });

testData.forEach((data) => {
    test('Searches', async t => {
        await t
            .typeText('input[id="simplecriteria"]', data.criteria)
            .click('button[class="search-button"]')
            .expect(Selector('div[class="mat-paginator-range-label"]').innerText).contains(data.srchResult);
    });
});
then you'll most likely be logged out after the first test.
However, I'd still use the beforeEach hook, but put the loop inside the test:
const testData = require('../Resources/testData.json');

fixture `Searches`
    .page(baseUrl)
    .beforeEach(async t => {
        await t
            // Log in...
            .typeText('input[id="login-name"]', 'aguy')
            .typeText('input[id="login-password"]', 'bbb')
            .click('button[id="signin-button"]')
            .expect(Selector('span[id="logged-in-user"]').innerText).contains('Hal');
    });

test('Searches', async t => {
    // Use for...of rather than forEach so each search can be awaited in turn.
    for (const data of testData) {
        await t
            .typeText('input[id="simplecriteria"]', data.criteria)
            .click('button[class="search-button"]')
            .expect(Selector('div[class="mat-paginator-range-label"]').innerText).contains(data.srchResult);
    }
});
There's an obvious disadvantage: many different searches run as one test, so if one search fails, the whole "Searches" test case will be marked as failed.
Another solution might be to find out what it means to be logged in. If it comes down to a cookie, you could log in once and then only set up the cookie before your tests. However, in many modern systems such "log-in cookies" have the httpOnly flag, so you can't really set them from JavaScript.
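If the session cookie is not httpOnly, a rough sketch of that approach could look like this (baseUrl is as in the snippets above; the cookie name and the token source are hypothetical placeholders):

import { ClientFunction } from 'testcafe';

// Writes a cookie from the browser side; only works for non-httpOnly cookies.
const setSessionCookie = ClientFunction((name, value) => {
    document.cookie = `${name}=${value}; path=/`;
});

fixture `Searches (pre-authenticated)`
    .page(baseUrl)
    .beforeEach(async t => {
        // 'session' and SESSION_TOKEN stand in for your app's real cookie.
        await setSessionCookie('session', process.env.SESSION_TOKEN);
    });

This keeps each search as its own test without paying the login cost every time.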

TestCafe 'dynamic' tests cases

I created a few e2e sanity tests for my current project using TestCafe. These tests are standard TestCafe tests:
fixture(`Basic checkout flow`);

test('Main Flow', async (t) => {
});
I would like to execute this test for multiple site locales and multiple channels, i.e. I need this test to run for nl_nl, nl_be, en_gb, ... and also for channels like b2c, b2b, ...
The easiest way is to create a loop in the test itself to loop over the locales and channels, but I want to run these tests concurrently.
I tried to create a function to dynamically generate these tests, but TestCafe can't seem to detect the tests then.
dynamicTest('Main Flow', async (t) => {
});

function dynamicTest(testName, testFn) {
    const channels = ['b2c'];

    channels.forEach((channel) => {
        test(`[Channel] ${channel}] ${testName}`, testFn);
    });
}
Is there a better way of doing this? The only solution I see is running the test script multiple times from Jenkins to have concurrency.
more detailed code:
import HomePage from '../../page/HomePage/HomePage';
import EnvUtil from '../../util/EnvUtil';

const wrapper = (config, testFn) => {
    config.locales.forEach(async locale =>
        config.channels.forEach(async channel => {
            const tstConfig = {
                locale,
                channel
            };

            tstConfig.env = EnvUtil.parse(tstConfig, config.args.env);
            tstConfig.foo = await EnvUtil.get(); // If I remove this line it works!

            testFn(tstConfig, locale, channel);
        })
    );
};

fixture(`[Feature] Feature 1`)
    .beforeEach(async t => {
        t.ctx.pages = {
            home: new HomePage(),
            // ... more pages here
        };
    });

wrapper(global.config, (testConfig, locale, channel) => {
    test
        .before(async (t) => {
            t.ctx.config = testConfig;
        })
        .page(`foo.bar.com`)
        (`[Feature] [Locale: ${locale.key}] [Channel: ${channel.key}] Feature 1`, async (t) => {
            await t.ctx.pages.home.header.search(t, '3301');
            // ... more test code here
        });
});
If I run it like this I get a "test is undefined" error. Is there something wrong in the way I'm wrapping "test"?
Starting with TestCafe version 0.23.1, you can run tests imported from external libraries or generated dynamically, even if the test file you provide does not contain any tests.
You can learn more here: Run Dynamically Loaded Tests
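Building on that, a hedged sketch of one workaround for the snippet above: keep test registration synchronous and move the awaited call into the test's before hook, so TestCafe can discover the tests at load time (EnvUtil, global.config, and the test body come from the question; the rest is illustrative):

const wrapper = (config, testFn) => {
    config.locales.forEach(locale =>
        config.channels.forEach(channel => {
            const tstConfig = { locale, channel };
            tstConfig.env = EnvUtil.parse(tstConfig, config.args.env);

            // No await here: tests are registered synchronously.
            testFn(tstConfig, locale, channel);
        })
    );
};

wrapper(global.config, (testConfig, locale, channel) => {
    test
        .before(async (t) => {
            // The awaited call happens inside the hook, not at registration
            // time, so test discovery is unaffected.
            testConfig.foo = await EnvUtil.get();
            t.ctx.config = testConfig;
        })
        .page(`foo.bar.com`)
        (`[Feature] [Locale: ${locale.key}] [Channel: ${channel.key}] Feature 1`, async (t) => {
            await t.ctx.pages.home.header.search(t, '3301');
        });
});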

How to test promises in Mongo(ose)/Express app?

I'm using promises to wrap asynchronous (Mongo) DB operations at the end of an (ExpressJS) route.
I want to figure out how to test the following code.
userService
userService.findOne = function (id) {
    var deferred = q.defer();

    User.findOne({ "_id": id })
        .exec(function (error, user) {
            if (error) {
                deferred.reject(error);
            } else {
                deferred.resolve(user);
            }
        });

    return deferred.promise;
};
userRoute
var user = function (req, res) {
    var userId = req.params.id
        , userService = req.load("userService"); // custom middleware that enables me to inject mocks

    return userService.findOne(userId)
        .then(function (user) {
            console.log("called then");
            res.json({
                msg: "foo"
            });
        }).catch(function (error) {
            console.log("called catch");
            res.json({
                error: error
            });
        }).done();
};
Here's an attempt to test the above with mocha
userTest
it("when resolved", function (done) {
var jsonSpy = sinon.spy(httpMock.res, "json")
, httpMock = require("/path/to/mock/http/object")
, serviceMock = require("/path/to/mock/service"),
, deferred = q.defer()
, findStub = sinon.stub(serviceMock, "findOne")
.returns(deferred.promise)
, loadStub = sinon.stub(httpMock.req, "load")
.returns(serviceMock),
retPromise;
// trigger route
routes.user(httpMock.req, httpMock.res);
// force promise to resolve?
deferred.resolve();
expect(jsonSpy.called).to.be.true; // fails
// chai as promised
retPromise = findStub.returnValues[0];
expect(retPromise).to.be.fulfilled; // passes
});
The http mock is just an empty object with no-ops where ExpressJS would normally start rendering things; I've added some logging inside those no-ops to get an idea of how this hangs together.
This isn't really working out. I want to verify how the whole thing is integrated, to establish some sort of regression suite, but I've effectively mocked it to smithereens and I'm just testing my mocks (not entirely successfully at that).
I'm also noticing that the console logs inside my http mocks triggered by then and catch are firing twice, but the jsonSpy that is invoked inside the actual code (verified by logging out the sinon spy within the userRoute code) is not called in the test.
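For context, such a mock might look roughly like this (a hypothetical shape inferred from the description above, not the asker's actual file):

// /path/to/mock/http/object — no-op stand-ins for the Express req/res
module.exports = {
    req: {
        params: { id: "abc123" },   // placeholder id
        load: function () {}        // replaced by a sinon stub in the test
    },
    res: {
        json: function () { console.log("res.json called"); }
    }
};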
Has anyone got some advice on integration testing strategies for express apps backed by Mongo?
It looks to me like you're not giving your promise an opportunity to fire before you check whether the result has been called. You need to wait asynchronously for userService.findOne()'s promise chain to complete before jsonSpy.called will be set. Try this instead:
// start of code as normal
q.when(
    routes.user(httpMock.req, httpMock.res),
    function () { expect(jsonSpy.called).to.be.true; }
);
deferred.resolve();
// rest of code as normal
That should chain off the routes.user() promise and pass as expected.
One word of caution: I'm not familiar with your framework, so I don't know if it will wait patiently for all async events to go off. If it's giving you problems calling back into your defer chain, you may want to try nodeunit instead, which handles async tests very well (IMO).
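For reference, a sketch of a mocha-native alternative (an assumption, not part of the answer above): if the route handler drops its trailing .done() and returns its promise chain, mocha will wait on a promise returned from the test, so no explicit done callback is needed:

it("when resolved", function () {
    // stubs and mocks set up as in the question ...
    var pending = routes.user(httpMock.req, httpMock.res); // requires removing .done() in the route

    deferred.resolve();

    // mocha treats a returned promise as an asynchronous test
    return pending.then(function () {
        expect(jsonSpy.called).to.be.true;
    });
});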