TestCafe: Failed to complete a request to url - testing

Summary
We have smoke tests that run against our web application shortly after deployment. Sometimes the login page takes a while to load for the first time.
Error
- Error in Role initializer -
Failed to complete a request to "https://myurl.com/account/login/" within the timeout period. The problem may be related to local machine's network or firewall settings, server outage, or network problems that make the server inaccessible.
Possible Solutions
I'm hoping that adding a setPageLoadTimeout call in my Roles will solve this issue; however, I can't confirm until Tuesday.
Can anyone confirm whether setPageLoadTimeout is the way to go? If not, is there another solution available?
Example Solution
import { Role } from 'testcafe';
import { config, pageWait } from './config/config';
import { loginPage } from '../pages';

const defaultPageTimeout = 5000;

export const orgAdminRole: Role = Role(config.baseUrl, async t => {
    await t
        .setPageLoadTimeout(pageWait.extraLongPoll)
        .typeText(loginPage.userNameInput, config.orgAdminUser)
        .typeText(loginPage.passwordInput, config.orgAdminPass)
        .click(loginPage.loginButton)
        .setPageLoadTimeout(defaultPageTimeout);
}, { preserveUrl: true });

export const userRole: Role = Role(config.baseUrl, async t => {
    await t
        .setPageLoadTimeout(pageWait.extraLongPoll)
        .typeText(loginPage.userNameInput, config.user)
        .typeText(loginPage.passwordInput, config.userPass)
        .click(loginPage.loginButton)
        .setPageLoadTimeout(defaultPageTimeout);
}, { preserveUrl: true });

UPD: Use the following API to configure your timeouts:
--page-load-timeout-ms
--ajax-request-timeout-ms
--page-request-timeout-ms
Test.timeouts Method
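For illustration, a hedged sketch of the per-test form (the timeout values are placeholders and the fixture/test names are made up):

fixture `Login`
    .page `https://myurl.com/account/login/`;

test
    .timeouts({
        pageLoadTimeout:    10000, // ms to wait for the window.load event
        pageRequestTimeout: 60000, // ms to wait for the server to answer a page request
        ajaxRequestTimeout: 60000  // ms to wait for fetch/XHR requests to complete
    })
    ('login page eventually responds', async t => {
        // actions and assertions
    });

The CLI flags listed above set the same values for a whole run, e.g. testcafe chrome tests/ --page-request-timeout-ms 60000.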
Old answer:
This issue is caused by the request timeouts, so using setPageLoadTimeout is not a solution in your test case.
As a workaround, I suggest you change the request timeouts:
import { Selector } from 'testcafe';
// Import DestinationRequest from the testcafe-hammerhead module. Please specify your own environment path.
import { DestinationRequest } from '../../../../../../node_modules/testcafe-hammerhead/lib/request-pipeline/destination-request';

fixture `Fixture`
    .page `https://example.com`;

test('test', async t => {
    // Set timeouts
    DestinationRequest.XHR_TIMEOUT = 10 * 60 * 1000; // XHR requests timeout
    DestinationRequest.TIMEOUT = 10 * 60 * 1000;     // other requests timeout

    // Actions and assertions

    // Restore default timeouts
    DestinationRequest.XHR_TIMEOUT = 2 * 60 * 1000;
    DestinationRequest.TIMEOUT = 25 * 1000;
});
We will consider the implementation of public options to set the timeouts in the context of the following issue: https://github.com/DevExpress/testcafe/issues/2940.

As an addition to the previous answer: I think TestCafe has changed how DestinationRequest is exported, so I'm now using import * as DestinationRequest ... instead of the named import.
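A rough sketch of what that adjusted import could look like (heavily hedged: the exact export shape depends on the installed testcafe-hammerhead version, so the member access below is an assumption):

// Namespace import instead of the named import used above (path depends on your project layout):
import * as DestinationRequestModule from '../../../../../../node_modules/testcafe-hammerhead/lib/request-pipeline/destination-request';

// Depending on the version, the class may sit on .DestinationRequest or .default:
const DestinationRequest = DestinationRequestModule.DestinationRequest ?? DestinationRequestModule.default;

DestinationRequest.XHR_TIMEOUT = 10 * 60 * 1000;
DestinationRequest.TIMEOUT = 10 * 60 * 1000;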

Related

Test execution is different between GitHub Actions and local

I'm testing my Express.js API by importing the root app instance and exercising it through chai-http. The tests are run via Mocha.
As the title says, when I run my tests via GitHub Actions the results are different from when they are run locally. On GitHub Actions, I get an automatic 503 error from Express on every test request (or any type of request), while locally all tests run fine and pass. When the 503 is received, the entire test runner also hangs and doesn't exit until the MongoDB driver times out with an error.
Initially the test timed out, so I added --timeout 15000 to bypass that, hopefully temporarily.
This is the general setup and imports done before the tests
import {server} from "../bld/server.js";
import {join} from "path";
import chai from "chai";
import chaiHttp from "chai-http";
import chaiAjv from "chai-json-schema-ajv";

chai.use(chaiHttp);
chai.use(chaiAjv);

const api = !!process.env.TEST_MANUAL
    ? (process.env.API_ADDRESS ?? "http://localhost") + ":" + (process.env.API_PORT ?? "8000")
    : server;

console.log((typeof api == "string")
    ? `Using manually hosted server: "${api}"`
    : "Using imported server instance");

const request = chai.request;
const expect = chai.expect;
And the first test looks like this
describe("verification flow", () => {
const actionUrl = "/" + join("action", "verify", "0");
var response;
describe("phase 0: getting verification code", () => {
it("should return a verification code", async () => {
const res = await request(api).post(actionUrl);
expect(res).to.have.status(201);
expect(res).to.have.json;
expect(res.body).to.have.property("verificationCode");
response = res.body;
});
// ....
});
// ...
});
There are no errors on imports, no errors on attaching the server to chai, and from what I can log no issues with file pathing either. At this point I'm lost as to what could be causing the issue and don't know where to look for the root of the problem.
More specific information below:
Dependencies: https://pastebin.com/tu3n9FkZ
Github Actions: https://pastebin.com/HEk5adWQ
No artifacts (dependencies, builds) have been cached according to logs
Thank you for your time and let me know if you need any more information

How do I mock server-side API calls in a Next.js app?

I'm trying to figure out how to mock calls to the Auth0 authentication backend when testing a Next.js app with React Testing Library. I'm using auth0/nextjs-auth0 to handle authentication. My intention is to use MSW to provide mocks for all API calls.
I followed this example in the Next.js docs, next.js/examples/with-msw, to set up mocks for both client and server API calls. All API calls generated by the auth0/nextjs-auth0 package (/api/auth/login, /api/auth/callback, /api/auth/logout and /api/auth/me) received mock responses.
A mock response for /api/auth/me is shown below
import { rest } from 'msw';

export const handlers = [
    // /api/auth/me
    rest.get(/.*\/api\/auth\/me$/, (req, res, ctx) => {
        return res(
            ctx.status(200),
            ctx.json({
                user: { name: 'test', email: 'email@domain.com' },
            }),
        );
    }),
];
The example setup works fine when I run the app in my browser. But when I run my test the mocks are not getting picked up.
An example test block looks like this
import React from 'react';
import { render, screen } from '@testing-library/react';
import Home from 'pages/index';
import App from 'pages/_app';

describe('Home', () => {
    it('should render the loading screen', async () => {
        render(<App Component={Home} />);
        const loader = screen.getByTestId('loading-screen');
        expect(loader).toBeInTheDocument();
    });
});
I render the page inside the App component like this <App Component={Home} /> so that I will have access to the various contexts wrapping the pages.
I have spent about 2 days on this trying out various configurations and I still don't know what I might be doing wrong. Any and every help is appreciated.
This is probably resolved already for the author, but since I ran into the same issue and could not find useful documentation, this is how I solved it for end-to-end tests:
Overriding/configuring the API host.
The plan is to have the test runner start Next.js as a custom server and then have it respond to both the Next.js routes and the API routes.
A requirement for this to work is being able to specify the backend (host) the API is calling, via environment variables. However, access to environment variables in Next.js is limited, so I made this work using the publicRuntimeConfig setting in next.config.mjs. Within that file you can use runtime environment variables, which then bind to the publicRuntimeConfig section of the configuration object.
/** @type {import('next').NextConfig} */
const nextConfig = {
    (...)
    publicRuntimeConfig: {
        API_BASE_URL: process.env.API_BASE_URL,
        API_BASE_PATH: process.env.API_BASE_PATH,
    },
    (...)
};

export default nextConfig;
Everywhere I reference the API, I use the publicRuntimeConfig to obtain these values, which gives me control over what exactly the backend is calling.
Being able to control the hostname of the API at runtime lets me point it at the local machine's host and then intercept the call and respond to it with a fixture.
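For reference, a minimal sketch of how those values can be read at runtime via Next.js' next/config helper (the API_BASE_PATH fallback is an assumption):

import getConfig from 'next/config';

// publicRuntimeConfig reflects the values bound in next.config.mjs at runtime.
const { publicRuntimeConfig } = getConfig();
const apiBase = `${publicRuntimeConfig.API_BASE_URL}${publicRuntimeConfig.API_BASE_PATH ?? ''}`;

// apiBase is then used wherever the (backend) API is called.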
Configuring Playwright as the test runner.
My e2e test stack is based on Playwright, which has a playwright.config.ts file:
import type { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
    globalSetup: './playwright.setup.js',
    testMatch: /.*\.e2e\.ts/,
};

export default config;
This calls another file playwright.setup.js which configures the actual tests and backend API mocks:
import {createServer} from 'http';
import {parse} from 'url';
import next from 'next';

import EndpointFixture from "./fixtures/endpoint.json";

// Config
const dev = process.env.NODE_ENV !== 'production';
const baseUrl = process?.env?.API_BASE_URL || 'localhost:3000';

// Context
const hostname = String(baseUrl.split(/:(?=\d)/)[0]).replace(/.+:\/\//, '');
const port = baseUrl.split(/:(?=\d)/)[1];
const app = next({dev, hostname, port});
const handle = app.getRequestHandler();

// Setup
export default async function playwrightSetup() {
    const server = createServer(async (request, response) => {
        // Mock for a specific endpoint, responds with a fixture.
        if (request.url.includes(`path/to/api/endpoint/${EndpointFixture[0].slug}`)) {
            response.write(JSON.stringify(EndpointFixture[0]));
            response.end();
            return;
        }
        // Fallback for the API, notifies about a missing mock.
        else if (request.url.includes('path/to/api/')) {
            console.log('(Backend) mock not implemented', request.url);
            return;
        }

        // Regular Next.js behaviour.
        const parsedUrl = parse(request.url, true);
        await handle(request, response, parsedUrl);
    });

    // Start listening on the configured port.
    server.listen(port, (error) => {
        console.error(error);
    });

    // Inject the hostname and port into the application's publicRuntimeConfig.
    process.env.API_BASE_URL = `http://${hostname}:${port}`;
    await app.prepare();
}
Using this kind of setup, the test runner starts a server which responds both to the routes defined by Next.js and to the intentionally mocked (backend) routes, allowing you to specify a fixture to respond with.
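As a usage illustration, a test file picked up by the testMatch pattern above could look roughly like this (the page path and assertion are hypothetical, and localhost:3000 is just the default baseUrl from the setup):

// example.e2e.ts
import { test, expect } from '@playwright/test';
import EndpointFixture from './fixtures/endpoint.json';

test('renders data served by the mocked backend', async ({ page }) => {
    // The custom server from playwright.setup.js answers both the Next.js page
    // route and the mocked API route, which responds with the fixture.
    await page.goto(`http://localhost:3000/some/page/${EndpointFixture[0].slug}`);
    await expect(page.getByText(String(EndpointFixture[0].slug))).toBeVisible();
});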
Final notes
Using the publicRuntimeConfig in combination with a custom Next.js server gives you a relatively large amount of control over the calls that are being made on the backend. However, it does not necessarily intercept calls from the frontend, so the existing frontend mocks might still be necessary.

WebSQL threw an error [Error: Error code 1: no such table: document-store]

We are using react-native-sqlite-2 in our application with RxDB and started getting this error somewhat sporadically. It seems to happen after we remove the database. I was surprised to see this was a WebSQL error, since we are using React Native and WebSQL is deprecated. I don't have great ways to debug this, but my hunch is that we have some code that still tries to access the now-dead database.
This is the code we use to set up our database:
import SQLiteAdapterFactory from 'pouchdb-adapter-react-native-sqlite'
import SQLite from 'react-native-sqlite-2'
import { addRxPlugin, createRxDatabase } from 'rxdb'
import { RxDBReplicationGraphQLPlugin } from 'rxdb/plugins/replication-graphql'

import type { DatabaseType } from '../generated'

/**
 * SQLITE SETUP
 */
const SQLiteAdapter = SQLiteAdapterFactory(SQLite)

addRxPlugin(SQLiteAdapter)
addRxPlugin(require('pouchdb-adapter-http'))

/**
 * Other plugins
 */
addRxPlugin(RxDBReplicationGraphQLPlugin)

export const getRxDB = async () => {
    return await createRxDatabase<DatabaseType>({
        name: 'gatherdatabase',
        adapter: 'react-native-sqlite', // the name of your adapter
        multiInstance: false,
    })
}
The issue happens after we log out and attempt to log back in. When we log out, we call removeRxDatabase. Has anyone run into this kind of issue before, or does anyone know of ways to debug it?
For posterity, the issue was that we had a reference to the database in our state management library (Zustand) that was being held onto past logout. When we tried to log in again, our getOrCreateDatabase function didn't make a new database, but the old reference wasn't valid since we had run database.remove() in RxDB. We ended up clearing the Zustand reference and calling database.remove() in one place.
export const useRxDB = create<UseRxDBType>((set, get) => ({
    database: undefined,
    /**
     * we set the database to ready in the LocalDocument store once local docs are loaded into the store
     */
    databaseIsReady: false,
    getOrCreateDatabase: async () => {
        let database = get().database

        if (!database) {
            database = await createRxDatabase()
            if (!Rx.isRxDatabase(database)) throw new Error(`database isnt a valid RxDB database.`)

            set({ database })
        }

        return database
    },
    resetDatabase: async () => {
        const database = get().database
        if (database) {
            await database.remove()
            set({ database: undefined })
        }
    },
}))
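A possible usage sketch (the logout handler below is hypothetical, not part of the original store):

// Hypothetical logout flow: clear the RxDB instance held in Zustand so the
// next login creates a fresh database instead of reusing the removed one.
export const logout = async () => {
    await useRxDB.getState().resetDatabase()
    // ...clear auth/session state here...
}

// On the next login, getOrCreateDatabase() sees `database === undefined`
// and creates a new, valid RxDB instance.
const database = await useRxDB.getState().getOrCreateDatabase()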

TestCafe RequestLogger - How to implement for every request in the test framework

We are trying to track down a network issue in our company which causes a Browser Disconnect General Error. I want to use the RequestLogger timestamp to help us pinpoint when this intermittent issue occurs, along with any additional request/response information at that time.
In the RequestLogger documentation, .requestHooks(logger) is attached at the individual test level, and console.log(logRecord.X.X) is then used to log the record at that specific point.
But how can I have a continuous logging throughout my whole test framework without using console.log(logRecord.X.X) on every line?
Is it somehow possible to have the RequestLogger continuously running via my test-runner function?
if (nodeConfig.util.getEnv('NODE_ENV') == "jenkins-ci") {
    // @ts-ignore
    // createTestCafe("localhost", port1, port2).then(tc => {
    createTestCafe().then(tc => {
        this.testcafe = tc;
        this.runner = this.testcafe.createRunner();

        return this.runner
            .src(testPath)
            .filter(filterSettings)
            .browsers(environment.browserToLaunch)
            .concurrency(environment.concurrencyAmount)
            .reporter(reporterSettings)
            .run(runSettingsCi);
    })
    .then(failedCount => {
        console.log('Location ' + testPath + ' tests failed: ' + failedCount);
        this.testcafe.close();
        process.exit(0);
    })
    .catch((err) => {
        console.log('Location ' + testPath + ' General Error');
        console.log(err);
        this.testcafe.close();
        process.exit(1);
    });
}
TestCafe doesn't allow attaching request hooks with the test runner class. At the same time, you can attach it to each fixture. RequestLogger will collect information about all requests.
For example:
import { Selector, RequestLogger } from 'testcafe';

const logger = RequestLogger();

fixture `Log all requests`
    .page `devexpress.github.io/testcafe`
    .requestHooks(logger)
    .afterEach(() => console.log(logger.requests));

test('Test 1', async t => {
    await t
        .click(Selector('span').withText('Docs'))
        .click(Selector('a').withText('Using TestCafe'))
        .click(Selector('a').withText('Test API'));
});

test('Test 2', async t => {
    await t
        .click(Selector('span').withText('Docs'))
        .click(Selector('a').withText('Continuous Integration'))
        .click(Selector('a').withText('How It Works'));
});
Previously, TestCafe allowed you to attach request hooks to one test or fixture at a time. In the new TestCafe v1.19.0, you can also define global request hooks in a JavaScript configuration file .testcaferc.js to attach them to all fixtures and tests within a test run. You can learn more here: Global Request Hooks.
Please note that you can use the configFile option in CLI and program API to specify the path to a config file.
For the initial usage scenario, you can use the following example:
const { RequestLogger } = require('testcafe');

const logger = RequestLogger();

module.exports = {
    hooks: {
        request: logger,
    },
};

Let Puppeteer wait for globalSetup to finish

I use jest-puppeteer to end-to-end test a web application. All tests run in parallel with async functions. Now I find that the first test already runs before globalSetup has finished and the data preparation is done (initializing client settings, etc.).
I've tried adding a timeout after the request, but that isn't working because now all requests have a timeout.
import puppeteer from "puppeteer";
import { getUrlByPath, post } from "../helper";

module.exports = async function globalSetup(globalConfig) {
    await setupPuppeteer(globalConfig);

    puppeteer.launch({args: ["--no-sandbox", "--disable-setuid-sandbox"]}).then(async browser => {
        const page = await browser.newPage();
        await post(
            page,
            getUrlByPath("somePath"),
            "prepare_data_for_testing",
        );
        await browser.close();
    });
};
The above code runs the global config; after that it starts preparing the data for the testing environment.
Is there a way to make the test suites run only AFTER this script's post request returns with HTTP 200 OK?
I had to place await before puppeteer.launch and add require("expect-puppeteer"); at the top.
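Based on that, a hedged sketch of the adjusted globalSetup (the jest-environment-puppeteer import for setupPuppeteer is an assumption; the rest mirrors the question's code):

require("expect-puppeteer");
import puppeteer from "puppeteer";
import { getUrlByPath, post } from "../helper";
// Assumption: setupPuppeteer comes from jest-environment-puppeteer, as in the jest-puppeteer global setup docs.
import { setup as setupPuppeteer } from "jest-environment-puppeteer";

module.exports = async function globalSetup(globalConfig) {
    await setupPuppeteer(globalConfig);

    // Awaiting the whole launch/post/close chain makes Jest wait until the
    // data preparation request has completed before any test suite starts.
    const browser = await puppeteer.launch({ args: ["--no-sandbox", "--disable-setuid-sandbox"] });
    const page = await browser.newPage();
    await post(page, getUrlByPath("somePath"), "prepare_data_for_testing");
    await browser.close();
};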