Test execution is different between GitHub Actions and local - express

I'm testing my Express.js API by importing the root app instance and using it through chai-http. The tests are run via Mocha.
As the title says, when I run my tests via GitHub Actions the results differ from when they run locally. On GitHub Actions, Express immediately responds with a 503 error to every test request (in fact to any type of request), while locally all tests run fine and pass. When the 503 is received, the entire test runner also hangs and doesn't exit until the MongoDB driver times out with an error.
Initially the tests timed out, so I added --timeout 15000 to work around it, hopefully temporarily.
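For reference, the same timeout can also live in a Mocha config file instead of being passed on every invocation; a minimal sketch, assuming Mocha 6+ config file support:
// .mocharc.cjs (sketch) - equivalent to passing --timeout 15000 on the CLI
module.exports = {
  timeout: 15000,
};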
This is the general setup and the imports done before the tests:
import { server } from "../bld/server.js";
import { join } from "path";
import chai from "chai";
import chaiHttp from "chai-http";
import chaiAjv from "chai-json-schema-ajv";

chai.use(chaiHttp);
chai.use(chaiAjv);

const api = !!process.env.TEST_MANUAL
  ? (process.env.API_ADDRESS ?? "http://localhost") + ":" + (process.env.API_PORT ?? "8000")
  : server;

console.log((typeof api == "string")
  ? `Using manually hosted server: "${api}"`
  : "Using imported server instance");

const request = chai.request;
const expect = chai.expect;
And the first test looks like this:
describe("verification flow", () => {
const actionUrl = "/" + join("action", "verify", "0");
var response;
describe("phase 0: getting verification code", () => {
it("should return a verification code", async () => {
const res = await request(api).post(actionUrl);
expect(res).to.have.status(201);
expect(res).to.have.json;
expect(res.body).to.have.property("verificationCode");
response = res.body;
});
// ....
});
// ...
});
There are no errors on imports, no errors on attaching the server to chai, and from what I can log no issues with file pathing either. At this point I'm lost as to what could be causing the issue and don't know where to look for the root of the problem.
More specific information below:
Dependencies: https://pastebin.com/tu3n9FkZ
Github Actions: https://pastebin.com/HEk5adWQ
No artifacts (dependencies, builds) have been cached, according to the logs.
Thank you for your time, and let me know if you need any more information.

Related

How do I mock server-side API calls in a Nextjs app?

I'm trying to figure out how to mock calls to the auth0 authentication backend when testing a Next.js app with React Testing Library. I'm using @auth0/nextjs-auth0 to handle authentication. My intention is to use MSW to provide mocks for all API calls.
I followed this example in the Next.js docs (next.js/examples/with-msw) to set up mocks for both client and server API calls. All API calls generated by the @auth0/nextjs-auth0 package (/api/auth/login, /api/auth/callback, /api/auth/logout and /api/auth/me) received mock responses.
A mock response for /api/auth/me is shown below:
import { rest } from 'msw';

export const handlers = [
  // /api/auth/me
  rest.get(/.*\/api\/auth\/me$/, (req, res, ctx) => {
    return res(
      ctx.status(200),
      ctx.json({
        user: { name: 'test', email: 'email@domain.com' },
      }),
    );
  }),
];
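For context, the server-side part of the with-msw example wires these handlers into Jest through msw/node, roughly like this (a sketch; file names are taken from that example and may differ in my setup):
// jest.setup.js (sketch) - attaches request interception for every test file
import { setupServer } from 'msw/node';
import { handlers } from './mocks/handlers';

const server = setupServer(...handlers);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());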
The example setup works fine when I run the app in my browser. But when I run my test the mocks are not getting picked up.
An example test block looks like this:
import React from 'react';
import { render, screen } from '@testing-library/react';
import Home from 'pages/index';
import App from 'pages/_app';

describe('Home', () => {
  it('should render the loading screen', async () => {
    render(<App Component={Home} />);
    const loader = screen.getByTestId('loading-screen');
    expect(loader).toBeInTheDocument();
  });
});
I render the page inside the App component like this <App Component={Home} /> so that I will have access to the various contexts wrapping the pages.
I have spent about 2 days on this, trying out various configurations, and I still don't know what I might be doing wrong. Any and all help is appreciated.
This is probably resolved already for the author, but since I ran into the same issue and could not find useful documentation, this is how I solved it for end-to-end tests:
Overriding/configuring the API host.
The plan is to have the test runner start Next.js as a custom server and then have it respond to both the Next.js routes and the API routes.
A requirement for this to work is being able to specify the backend (host) the API is calling, via environment variables. However, access to environment variables in Next.js is limited, so I made this work using the publicRuntimeConfig setting in next.config.mjs. Within that file you can use runtime environment variables, which then bind to the publicRuntimeConfig section of the configuration object.
/** @type {import('next').NextConfig} */
const nextConfig = {
  (...)
  publicRuntimeConfig: {
    API_BASE_URL: process.env.API_BASE_URL,
    API_BASE_PATH: process.env.API_BASE_PATH,
  },
  (...)
};

export default nextConfig;
Everywhere I reference the API, I use the publicRuntimeConfig to obtain these values, which gives me control over exactly what the (backend) code is calling.
Being able to control the hostname of the API at runtime allows me to change it to the local machine's host and then intercept the call and respond to it with a fixture.
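For illustration, this is roughly how the values are read back anywhere in the code via Next.js' getConfig(); the fetch helper and the path below are just placeholders:
// api.js (sketch) - resolve the API base URL from publicRuntimeConfig
import getConfig from 'next/config';

const { publicRuntimeConfig } = getConfig();
const apiBase = `${publicRuntimeConfig.API_BASE_URL}${publicRuntimeConfig.API_BASE_PATH || ''}`;

// Placeholder helper: every backend call goes through the configurable base URL,
// so the test runner can point it at the mock server instead of the real backend.
export async function fetchEndpoint(slug) {
  const response = await fetch(`${apiBase}/path/to/api/endpoint/${slug}`);
  return response.json();
}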
Configuring Playwright as the test runner.
My e2e test stack is based on Playwright, which has a playwright.config.ts file:
import type { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  globalSetup: './playwright.setup.js',
  testMatch: /.*\.e2e\.ts/,
};

export default config;
This points to another file, playwright.setup.js, which sets up the actual test server and the backend API mocks:
import { createServer } from 'http';
import { parse } from 'url';
import next from 'next';
import EndpointFixture from "./fixtures/endpoint.json";

// Config
const dev = process.env.NODE_ENV !== 'production';
const baseUrl = process?.env?.API_BASE_URL || 'localhost:3000';

// Context
const hostname = String(baseUrl.split(/:(?=\d)/)[0]).replace(/.+:\/\//, '');
const port = baseUrl.split(/:(?=\d)/)[1];
const app = next({ dev, hostname, port });
const handle = app.getRequestHandler();

// Setup
export default async function playwrightSetup() {
  const server = createServer(async (request, response) => {
    // Mock for a specific endpoint, responds with a fixture.
    if (request.url.includes(`path/to/api/endpoint/${EndpointFixture[0].slug}`)) {
      response.write(JSON.stringify(EndpointFixture[0]));
      response.end();
      return;
    }
    // Fallback for the API, notifies about a missing mock.
    else if (request.url.includes('path/to/api/')) {
      console.log('(Backend) mock not implemented', request.url);
      return;
    }

    // Regular Next.js behaviour.
    const parsedUrl = parse(request.url, true);
    await handle(request, response, parsedUrl);
  });

  // Start listening on the configured port and log any server errors.
  server.on('error', (error) => console.error(error));
  server.listen(port);

  // Inject the hostname and port into the application's publicRuntimeConfig.
  process.env.API_BASE_URL = `http://${hostname}:${port}`;
  await app.prepare();
}
Using this kind of setup, the test runner starts a server which responds to both the routes defined in Next.js and the intentionally mocked (backend) routes, allowing you to specify a fixture to respond with.
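For illustration, a test picked up by the testMatch pattern above can then exercise a page as usual; the page path and assertion here are hypothetical:
// home.e2e.ts (sketch) - matched by the /.*\.e2e\.ts/ pattern above
import { test, expect } from '@playwright/test';

test('renders data served by the mocked backend endpoint', async ({ page }) => {
  // playwright.setup.js has already started the custom Next.js server,
  // which answers the mocked API route with the endpoint fixture.
  await page.goto('http://localhost:3000/some-page'); // hypothetical page path
  await expect(page.locator('h1')).toBeVisible();     // hypothetical assertion
});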
Final notes
Using the publicRuntimeConfig in combination with a custom Next.js server gives you a relatively large amount of control over the calls being made on the backend. However, it does not necessarily intercept calls from the frontend, so the existing frontend mocks might still be necessary.

Supertest and jest: cannot close express connection

I am using jest and supertest to raise an instance of express and run tests on it.
I am facing a problem with a busy port that I still cannot solve.
In my test I do the following:
import supertest from 'supertest';
const agent = supertest(app);
Then I make requests with the agent and everything works fine.
Until I run another test.
In app.js I have:
var app = express();

app.post('/auth/changePassword', VerifyToken, auth.changePassword);

app.listen(4001, function () {
  console.log('Server is running');
});
So the first spec runs perfectly, but the second tries to listen on a port that is already in use.
I really do not know how to close the connection here.
I tried app.close() but got an error that there is no such method. It is clear that I have to assign
server = app.listen(4001, function () {
  console.log('Server is running');
});

server.close();
But I do not know how I can do that.
I also tried to preset the agent in jest.setup and assign it to a global variable:
import app from "../../server/app";
import supertest from 'supertest';
const agent = supertest(app);
global.agent = agent;
But the situation is the same: the first test passes, the second tries to raise express on the same port.
Supertest is able to start and stop your app for you - you should never need to explicitly kill the express server during the tests (even when running in parallel).
The problem is that in app.js your server is actually started - this means that when the app tests are run, your server is started every time app.js is read (or once per test case).
You can remedy this by splitting the server start logic into a separate file, so that importing app.js doesn't start the server (it just returns an express app instance). The pattern I normally use for this is:
// app.js
import express from 'express'

const app = express();

app.get("/example", (req, res) => {
  return res.status(200).json({ data: "running" })
})

export default app;

// server.js
import app from "./app"

app.listen(3000, () => console.log("listening on 3000"));

// app.spec.js
import app from "./app"
import request from "supertest"

it("should be running", async () => {
  const result = await request(app).get("/example");
  expect(result.body).toEqual({ data: "running" });
});

Mocking API calls with Detox and Nock

I'm trying to mock API calls from Detox tests and nothing seems to be working. Nock in theory would do exactly what I want, but when I run my tests with nock debugging enabled it isn't seeing any of the requests made from my app. I'm using axios to make requests and I've tried setting the adapter on my axios instance to the http adapter.
Any suggestions on how to get nock working with Detox or if there is another mocking library you have had success with is appreciated, thanks!
What I ended up doing was leveraging the mocking approach described in the Detox docs (a separate *.e2e.js file hitting a different endpoint during tests). You define these special files that only run during e2e; I've set mine up to hit localhost:9091, and I then run an Express server on that port serving the endpoints I need.
Maybe an ugly way to do it, would love suggestions!
My mock file:
// src/helpers/axios.e2e.js
import axios from 'axios';

const instance = axios.create({
  baseURL: `http://localhost:9091/api`,
});

export default instance;
Here's how I am running an express server during tests:
// e2e/mytest.e2e.js
const express = require('express');

let server;

beforeAll(async () => {
  const app = express();

  app.post('/api/users/register/', (req, res) => {
    res.status(400)
    res.send({ "email": ["Test error: Email is required!"] })
  })

  await new Promise(function (resolve) {
    server = app.listen(9091, "127.0.0.1", function () {
      console.log(`Running server on '${JSON.stringify(server.address())}'...`);
      resolve();
    });
  });
})

afterAll(() => {
  server.close()
})
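For completeness, the ".e2e.js files only run during e2e" behaviour comes from overriding Metro's source extensions, as described in the Detox mocking guide; a sketch, assuming RN_SRC_EXT=e2e.js is set when starting the bundler for Detox runs:
// metro.config.js (sketch) - prefer *.e2e.js files when RN_SRC_EXT is set
const defaultSourceExts = require('metro-config/src/defaults/defaults').sourceExts;

module.exports = {
  resolver: {
    sourceExts: process.env.RN_SRC_EXT
      ? process.env.RN_SRC_EXT.split(',').concat(defaultSourceExts) // e.g. RN_SRC_EXT=e2e.js
      : defaultSourceExts,
  },
};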

Let Puppeteer wait for globalSetup to finish

I use jest-puppeteer to end-to-end test a web application. All tests run in parallel with async functions. Now I find that the first test already runs before globalSetup has finished and the data preparation is done (initializing client settings etc.).
I've tried adding a timeout after the request, but that isn't working because now all requests have a timeout.
import puppeteer from "puppeteer";
import { getUrlByPath, post } from "../helper";

module.exports = async function globalSetup(globalConfig) {
  await setupPuppeteer(globalConfig);

  puppeteer.launch({ args: ["--no-sandbox", "--disable-setuid-sandbox"] }).then(async browser => {
    const page = await browser.newPage();
    await post(
      page,
      getUrlByPath("somePath"),
      "prepare_data_for_testing",
    );
    await browser.close();
  });
};
The code above runs the Puppeteer setup; after that it starts preparing the data for the testing environment.
Is there a way to make the test suites run AFTER this script's POST request has returned HTTP 200: OK?
I had to place await before puppeteer.launch and add require("expect-puppeteer"); at the top.
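A sketch of what the adjusted globalSetup looks like with those two changes (setupPuppeteer, getUrlByPath and post are the same helpers from the question):
import puppeteer from "puppeteer";
import { getUrlByPath, post } from "../helper";
require("expect-puppeteer"); // added at the top, per the fix

module.exports = async function globalSetup(globalConfig) {
  await setupPuppeteer(globalConfig);

  // Awaiting the launch chain means globalSetup only resolves - and the test
  // suites only start - after the data preparation request has completed.
  await puppeteer.launch({ args: ["--no-sandbox", "--disable-setuid-sandbox"] }).then(async browser => {
    const page = await browser.newPage();
    await post(page, getUrlByPath("somePath"), "prepare_data_for_testing");
    await browser.close();
  });
};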

Prevent supertest from running until express server has started

I have a Node / Express.js app that was generated using the Yeoman full-stack generator. I have swapped out mongo / mongoose for Cloudant DB (which is just a paid-for version of CouchDB). I have written a wrapper for the Cloudant node.js library which handles cookie auth with my instance via an init() method wrapped in a promise. I have refactored my application to not start the express server until the connection to the db has been established, as per the snippet below taken from my app.js:
myDb.init(config).then(function (db) {
  logger.write(1001, '', 'Connection to Cloudant Established');

  // Start server
  server.listen(config.port, config.ip, function () {
    logger.write(1001, "", 'Express server listening on ' + config.port + ', in ' + app.get('env') + ' mode');
  });
});
On my express routes I have introduced a new middleware which attaches the db object to the request for use across the middleware chain, as shown below. It gets the db connection object and then sets the two collections to use.
exports.beforeAll = function (req, res, next) {
  req.my = {};

  // Adding my-db
  req.my.db = {};
  req.my.db.connection = myDb.getDbConnection();
  req.my.db.orders = req.my.db.connection.use(dbOrders);
  req.my.db.dbRefData = req.my.db.connection.use(dbRefData);

  next();
};
This mechanism works when I manually drive my APIs through Postman, as the express server won't start until after the promise from the db connection has resolved. However, when running my automated tests, the first few tests now always fail because the application has not finished initialising with the db before Jasmine starts to run my tests against the APIs. I can see in my logs the requests coming through and myDb.getDbConnection(); in the middleware returning undefined. I am using supertest and node-jasmine to run my tests. For example:
'use strict';

var app = require('../../app');
var request = require('supertest');

describe('GET /api/content', function () {
  it('should respond with JSON object', function (done) {
    request(app)
      .get('/api/content')
      .expect(200)
      .expect('Content-Type', /json/)
      .end(function (err, res) {
        if (err) return done(err);
        expect(res.body).toEqual(jasmine.any(Object));
        done();
      });
  });
});
So, my question is: how can I prevent supertest from making the requests until the server.listen() step has been completed as a result of the myDb.init() call resolving? Or perhaps there is some kind of jasmine beforeAll that I can use to stop it from running the describes until the promise has resolved?
You could make your app expose an EventEmitter which emits a "ready" event when it has completed its initialisation.
Then your test code, in a before clause, can wait until the "ready" event arrives from the app before proceeding with the tests.
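A minimal sketch of that pattern, assuming app.js can expose the emitter alongside the express app (the ready property name here is made up):
// app.js (sketch) - emit "ready" once the db connection and server are up
var EventEmitter = require('events').EventEmitter;
var ready = new EventEmitter();

myDb.init(config).then(function (db) {
  server.listen(config.port, config.ip, function () {
    ready.isReady = true;   // flag for listeners that attach after the event fired
    ready.emit('ready');
  });
});

module.exports = app;
module.exports.ready = ready;

// content.spec.js (sketch) - hold the suite until the app reports ready
var app = require('../../app');

describe('GET /api/content', function () {
  beforeAll(function (done) {
    if (app.ready.isReady) return done();
    app.ready.once('ready', done);
  });

  // ...the existing supertest specs go here...
});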