I have a pretty standard Express 4 server.
When I run Mocha, it doesn't exit when the following middleware is used:
app.use("/v1/healthcheck", routes.healthcheck)
The healthcheck route is defined as follows:
"use strict"
import express, { Request, Response, Router } from "express"
// import bearer from "../models/bearer"
const router: Router = express.Router()
// GET /v1/healthcheck
router.get("", async (req: Request, res: Response) => {
try {
// await bearer.findOne()
res.empty()
} catch (error) {
res.internalServerError(error)
}
})
export default router
For reference, res.empty is a custom response helper:
res.empty = () => {
  res.sendStatus(204)
}
If I comment out that middleware, the tests fail (404) but Mocha exits.
Does anyone know what is going on here?
Edit
Using why-is-node-running (as suggested here), it apparently has something to do with the underlying MongoDB DNS handle (DNSCHANNEL).
# DNSCHANNEL
/Users/sunknudsen/server/node_modules/mongodb/lib/core/sdam/srv_polling.js:5 - const dns = require('dns');
/Users/sunknudsen/server/node_modules/mongodb/lib/core/sdam/topology.js:22 - const SrvPoller = require('./srv_polling').SrvPoller;
/Users/sunknudsen/server/node_modules/mongodb/lib/core/index.js:36 - Topology: require('./sdam/topology').Topology,
/Users/sunknudsen/server/node_modules/mongodb/index.js:4 - const core = require('./lib/core');
/Users/sunknudsen/server/node_modules/mongoose/lib/drivers/node-mongodb-native/binary.js:8 - const Binary = require('mongodb').Binary;
/Users/sunknudsen/server/node_modules/mongoose/lib/drivers/node-mongodb-native/index.js:7 - exports.Binary = require('./binary');
/Users/sunknudsen/server/node_modules/mongoose/lib/index.js:15 - require('./driver').set(require('./drivers/node-mongodb-native'));
/Users/sunknudsen/server/node_modules/mongoose/index.js:9 - module.exports = require('./lib/');
/Users/sunknudsen/server/build/server.js:7 - const mongoose_1 = __importDefault(require("mongoose"));
/Users/sunknudsen/server/build/test/index.js:11 - const server_1 = __importDefault(require("../server"));
/Users/sunknudsen/server/node_modules/mocha/lib/esm-utils.js:20 - return require(file);
/Users/sunknudsen/server/node_modules/mocha/lib/esm-utils.js:33 - const result = await exports.requireOrImport(path.resolve(file));
/Users/sunknudsen/server/node_modules/mocha/lib/mocha.js:421 - return esmUtils.loadFilesAsync(
/Users/sunknudsen/server/node_modules/mocha/lib/cli/run-helpers.js:156 - await mocha.loadFilesAsync();
/Users/sunknudsen/server/node_modules/mocha/lib/cli/run-helpers.js:225 - return run(mocha, options, fileCollectParams);
/Users/sunknudsen/server/node_modules/mocha/lib/cli/run.js:366 - await runMocha(mocha, argv);
/Users/sunknudsen/server/node_modules/yargs/lib/command.js:241 - ? innerArgv.then(argv => commandHandler.handler(argv))
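In other words, the open DNSCHANNEL handle in the trace above comes from the MongoDB driver's SRV polling, which runs for as long as the Mongoose connection stays open, so nothing forces the Node process to exit. Below is a minimal sketch of one way to release it; the file name and hook are illustrative and assume the tests can import the shared mongoose instance:

// test/teardown.ts (hypothetical) — Mocha root hook
import mongoose from "mongoose"

after(async () => {
  // close the Mongoose connection so the driver stops its SRV/DNS polling
  // and the process can exit once the tests finish
  await mongoose.disconnect()
})

Alternatively, running Mocha with the --exit flag ends the process, but that only masks the open handle rather than closing it.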
All my imported interceptors work fine in Axios, but the getResponseTime interceptor doesn't.
NON-WORKING EXAMPLE
Assume the case I want to manage: importing interceptors into Axios. I'd like to define them in a separate file, logTimeInterceptor.js:
export const responseTimeInterceptor = (response) => {
  // to avoid overwriting if another interceptor
  // already defined the same object (meta)
  response.meta = response.meta || {}
  response.meta.requestStartedAt = new Date().getTime()
  return response
}

export const logResponseTimeInterceptor = ((response) => {
    // Logs response time - only for successful requests
    console.log(
      `📡 API | Execution time for: ${response.config.url} - ${
        new Date().getTime() - response.config.meta.requestStartedAt
      } ms`
    )
    return response
  },
  // Support for failed requests
  // Handle 4xx & 5xx responses
  (response) => {
    console.error(
      `📡 API | Execution time for: ${response.config.url} - ${
        new Date().getTime() - response.config.meta.requestStartedAt
      } ms`
    )
    throw response
  }
)
Then I import and use them in httpClient.js as follows:
import axios from 'axios'
import { ApiSettings } from '@/api/config'
import {
  responseTimeInterceptor,
  logResponseTimeInterceptor
} from '@/api/interceptors/logTimeInterceptor'

// ============== A P I  C L I E N T =============
const httpClient = axios.create(ApiSettings)

// ============== A P I  I N T E R C E P T O R S =============
httpClient.interceptors.request.use(authInterceptor)
httpClient.interceptors.response.use(
  responseTimeInterceptor,
  logResponseTimeInterceptor,
)

export default httpClient
But those interceptors don't work when imported together, and I suspect they have to stay in the httpClient file. Please advise. It only breaks with the logResponseTime interceptor; the others work fine.
UPDATE: Following @IVO GELOV's suggestion I tried with one argument, but still with no success; I also moved responseTimeInterceptor to the request phase. The refactored code looks as follows:
// httpClient.js
httpClient.interceptors.request.use(responseTimeInterceptor)
httpClient.interceptors.response.use(logResponseTimeInterceptor)
httpClient.interceptors.response.use(logResponseErrorTimeInterceptor)
responseTimeInterceptor.js
export const responseTimeInterceptor = (response) => {
  // to avoid overwriting if another interceptor
  // already defined the same object (meta)
  response.meta = response.meta || {}
  response.meta.requestStartedAt = new Date().getTime()
  return response
}

export const logResponseTimeInterceptor = (response) => {
  // Logs response time - only for successful requests
  console.log(
    `📡 API | Execution time for: ${response.config.url} - ${
      new Date().getTime() - response.config.meta.requestStartedAt
    } ms`
  )
  return response
}

export const logResponseErrorTimeInterceptor = (response) => {
  // Support for failed requests
  // Handle 4xx & 5xx responses
  console.error(
    `📡 API | Execution time for: ${response.config.url} - ${
      new Date().getTime() - response.config.meta.requestStartedAt
    } ms`
  )
  throw response
}
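For reference, a minimal sketch of how this pattern is usually wired (illustrative names, not the code above): the timestamp is stamped on the request config by a request interceptor, and the success and error loggers are registered together as the two arguments of response.use, so the error branch actually receives rejected requests:

// logTimeInterceptor.js (sketch)
export const requestTimeInterceptor = (config) => {
  // record the start time on the request config
  config.meta = config.meta || {}
  config.meta.requestStartedAt = new Date().getTime()
  return config
}

export const logResponseTimeInterceptor = (response) => {
  console.log(`📡 API | Execution time for: ${response.config.url} - ${new Date().getTime() - response.config.meta.requestStartedAt} ms`)
  return response
}

export const logResponseErrorTimeInterceptor = (error) => {
  // Axios passes an Error object here; the original request config is on error.config
  if (error.config && error.config.meta) {
    console.error(`📡 API | Execution time for: ${error.config.url} - ${new Date().getTime() - error.config.meta.requestStartedAt} ms`)
  }
  return Promise.reject(error)
}

// httpClient.js (sketch)
httpClient.interceptors.request.use(requestTimeInterceptor)
httpClient.interceptors.response.use(logResponseTimeInterceptor, logResponseErrorTimeInterceptor)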
Using Vite's dev server, if I try to access a non-existent URL (e.g. localhost:3000/nonexistent/index.html), I would expect to receive a 404 error. Instead I receive a 200 status code, along with the contents of localhost:3000/index.html.
How can I configure Vite so that it returns a 404 in this situation?
(This question, "Serve a 404 page with app created with Vue-CLI", is very similar but relates to the Webpack-based Vue CLI rather than Vite.)
Vite 3
Vite 3.x introduced appType, which can be used to enable/disable the history fallback. Setting it to 'mpa' disables the history fallback while keeping the index.html transform and the 404 handler enabled. The naming is somewhat misleading, as it implies the mode is only for MPAs, but on the contrary, you can use this mode for SPAs:
import { defineConfig } from 'vite'

export default defineConfig({
  appType: 'mpa', // disable history fallback
})
Note the history fallback normally rewrites / to /index.html, so you'd have to insert your own middleware to do that if you want to keep that behavior:
import { defineConfig } from 'vite'

const rewriteSlashToIndexHtml = () => {
  return {
    name: 'rewrite-slash-to-index-html',
    apply: 'serve',
    enforce: 'post',
    configureServer(server) {
      // rewrite / as index.html
      server.middlewares.use('/', (req, _, next) => {
        if (req.url === '/') {
          req.url = '/index.html'
        }
        next()
      })
    },
  }
}

export default defineConfig({
  appType: 'mpa', // disable history fallback
  plugins: [
    rewriteSlashToIndexHtml(),
  ],
})
Vite 2
Vite 2.x does not support disabling the history API fallback out of the box.
As a workaround, you can add a Vite plugin that removes Vite's history API fallback middleware (based on @ChrisCalo's answer):
// vite.config.js
import { defineConfig } from 'vite'

const removeViteSpaFallbackMiddleware = (middlewares) => {
  const { stack } = middlewares
  const index = stack.findIndex(({ handle }) => handle.name === 'viteSpaFallbackMiddleware')
  if (index > -1) {
    stack.splice(index, 1)
  } else {
    throw Error('viteSpaFallbackMiddleware() not found in server middleware')
  }
}

const removeHistoryFallback = () => {
  return {
    name: 'remove-history-fallback',
    apply: 'serve',
    enforce: 'post',
    configureServer(server) {
      // rewrite / as index.html
      server.middlewares.use('/', (req, _, next) => {
        if (req.url === '/') {
          req.url = '/index.html'
        }
        next()
      })
      return () => removeViteSpaFallbackMiddleware(server.middlewares)
    },
  }
}

export default defineConfig({
  plugins: [
    removeHistoryFallback(),
  ],
})
One disadvantage of this plugin is it relies on Vite's own internal naming of the history fallback middleware, which makes this workaround brittle.
You could modify the fallback middleware to change the default behavior, or anything else you want. Here is an example: https://github.com/legend-chen/vite-404-redirect-plugin
Here's an approach that doesn't try to check what's on disk (which yielded incorrect behavior for me).
Instead, this approach:
removes Vite's SPA fallback middleware
uses Vite's built-in HTML transformation and returns /dir/index.html (if it exists) for /dir or /dir/ requests
returns a 404 for everything else
// express not necessary, but its API does simplify things
const express = require("express");
const { join } = require("path");
const { readFile } = require("fs/promises");

// ADJUST THIS FOR YOUR PROJECT
const PROJECT_ROOT = join(__dirname, "..");

function removeHistoryFallback() {
  return {
    name: "remove-history-fallback",
    configureServer(server) {
      // returned function runs AFTER Vite's middleware is built
      return function () {
        removeViteSpaFallbackMiddleware(server.middlewares);
        server.middlewares.use(transformHtmlMiddleware(server));
        server.middlewares.use(notFoundMiddleware());
      };
    },
  };
}

function removeViteSpaFallbackMiddleware(middlewares) {
  const { stack } = middlewares;
  const index = stack.findIndex(function (layer) {
    const { handle: fn } = layer;
    return fn.name === "viteSpaFallbackMiddleware";
  });
  if (index > -1) {
    stack.splice(index, 1);
  } else {
    throw Error("viteSpaFallbackMiddleware() not found in server middleware");
  }
}

function transformHtmlMiddleware(server) {
  const middleware = express();
  middleware.use(async (req, res, next) => {
    try {
      const rawHtml = await getIndexHtml(req.path);
      const transformedHtml = await server.transformIndexHtml(
        req.url, rawHtml, req.originalUrl
      );
      res.set(server.config.server.headers);
      res.send(transformedHtml);
    } catch (error) {
      return next(error);
    }
  });

  // named function for easier debugging
  return function customViteHtmlTransformMiddleware(req, res, next) {
    middleware(req, res, next);
  };
}

async function getIndexHtml(path) {
  const indexPath = join(PROJECT_ROOT, path, "index.html");
  return readFile(indexPath, "utf-8");
}

function notFoundMiddleware() {
  const middleware = express();
  middleware.use((req, res) => {
    const { method, path } = req;
    res.status(404);
    res.type("html");
    res.send(`<pre>Cannot ${method} ${path}</pre>`);
  });

  return function customNotFoundMiddleware(req, res, next) {
    middleware(req, res, next);
  };
}

module.exports = {
  removeHistoryFallback,
};
What's funny is that Vite seems to take the stance that:
it's a dev and build tool only, it's not to be used in production
built files are meant to be served statically, therefore, it doesn't come with a production server
However, for static file servers:
some configurations of static file servers will return index files when a directory is requested
they generally don't fallback to serving index.html when a file is not found and instead return a 404 in those situations
Therefore, it doesn't make much sense that Vite's dev server has this fallback behavior when it's targeting production environments that don't have it. It would be nice if there were a "correct" way to just turn off the history fallback while keeping the rest of the serving behavior (HTML transformation, etc.).
I'm building a NestJS server that aims to serve a mobile application. Besides serving the client, it will have several back-office functionalities.
We are using Swagger and we do want to be able to access the Swagger docs of our back-office endpoints. However, we do not want to expose all of our endpoints publicly.
Assuming that having all endpoints public is a bad option, one solution we are thinking of is letting our server listen on two ports and then only exposing one port to the public. We have created a small sample repo that serves a client module and a back-office module on two different ports.
The main.ts looks like the following:
import { NestFactory } from '@nestjs/core';
import { ClientModule } from './modules/client/client.module';
import * as express from 'express';
import * as http from 'http';
import { ExpressAdapter } from '@nestjs/platform-express';
import { BackOfficeModule } from './modules/backoffice/backoffice.module';
import { SwaggerModule, DocumentBuilder } from '@nestjs/swagger';

async function bootstrap() {
  const clientServer = express();
  const clientApp = await NestFactory.create(
    ClientModule,
    new ExpressAdapter(clientServer),
  );

  const clientOptions = new DocumentBuilder()
    .setTitle('ClientServer')
    .setDescription('The client server API description')
    .setVersion('1.0')
    .addTag('client')
    .build();
  const clientDocument = SwaggerModule.createDocument(clientApp, clientOptions);
  SwaggerModule.setup('api', clientApp, clientDocument);

  await clientApp.init();

  const backOfficeServer = express();
  const backOfficeApp = await NestFactory.create(
    BackOfficeModule,
    new ExpressAdapter(backOfficeServer),
  );

  const backOfficeOptions = new DocumentBuilder()
    .setTitle('BackOffice')
    .setDescription('The back office API description')
    .setVersion('1.0')
    .addTag('backOffice')
    .build();
  const backOfficeDocument = SwaggerModule.createDocument(backOfficeApp, backOfficeOptions);
  SwaggerModule.setup('api', backOfficeApp, backOfficeDocument);

  await backOfficeApp.init();

  http.createServer(clientServer).listen(3000); // The public port (Load balancer will route traffic to this port)
  http.createServer(backOfficeServer).listen(4000); // The private port (Will be accessed through a bastion host or similar)
}
bootstrap();
Another option would be to create a bigger separation of the codebase and infrastructure, but as this is a very early stage we feel that is unnecessary.
Our question to the Nest community is thus: has anyone done this? If so, what are your experiences? What are the drawbacks of separating our backend code like this?
Disclaimer: this solution is for the Express + REST combination.
Routing
Even though NestJS can't separate controllers based on port, it can separate them based on host. Using that, you can add a reverse proxy in front of your application that modifies the host header based on the port. Or you can do that in an Express middleware, to make things even simpler. This is what I did:
async function bootstrap() {
  const publicPort = 3000
  const privatePort = 4000

  const server = express()
  server.use((req, res, next) => {
    // act as a proper reverse proxy and set X-Forwarded-Host header if it hasn't been set
    req.headers['x-forwarded-host'] ??= req.headers.host
    switch (req.socket.localPort) {
      case publicPort:
        req.headers.host = 'public'
        break
      case privatePort:
        req.headers.host = 'private'
        break
      default:
        // this shouldn't be possible
        res.sendStatus(500)
        return
    }
    next()
  })

  const app = await NestFactory.create(AppModule, new ExpressAdapter(server))

  http.createServer(server).listen(publicPort)
  http.createServer(server).listen(privatePort)
}
Controllers:
@Controller({ path: 'cats', host: 'public' })
export class CatsController {...}

@Controller({ path: 'internal', host: 'private' })
export class InternalController {...}
Alternatively, you can simplify by creating your own PublicController and PrivateController decorators:
// decorator for public controllers, also sets guard
export const PublicController = (path?: string): ClassDecorator => {
  return applyDecorators(Controller({ path, host: 'public' }), UseGuards(JwtAuthGuard))
}

// decorator for private controllers
export const PrivateController = (path?: string): ClassDecorator => {
  return applyDecorators(Controller({ path, host: 'private' }))
}

@PublicController('cats')
export class CatsController {...}

@PrivateController('internal')
export class InternalController {...}
Swagger
For Swagger, SwaggerModule.createDocument has an "include" option, which accepts a list of modules to include in the Swagger docs. With a bit of effort we can also turn the Swagger-serving part into an Express Router, so both the private and public Swagger can be served on the same path for the different ports:
async function bootstrap() {
  const publicPort = 3000
  const privatePort = 4000

  const server = express()
  server.use((req, res, next) => {
    // act as a proper reverse proxy and set X-Forwarded-Host header if it hasn't been set
    req.headers['x-forwarded-host'] ??= req.headers.host
    switch (req.socket.localPort) {
      case publicPort:
        req.headers.host = 'public'
        break
      case privatePort:
        req.headers.host = 'private'
        break
      default:
        // this shouldn't be possible
        res.sendStatus(500)
        return
    }
    next()
  })

  const app = await NestFactory.create(AppModule, new ExpressAdapter(server))

  // setup swagger
  let publicSwaggerRouter = await createSwaggerRouter(app, [CatsModule])
  let privateSwaggerRouter = await createSwaggerRouter(app, [InternalModule])

  server.use('/api', (req: Request, res: Response, next: NextFunction) => {
    switch (req.headers.host) {
      case 'public':
        publicSwaggerRouter(req, res, next)
        return
      case 'private':
        privateSwaggerRouter(req, res, next)
        return
      default:
        // this shouldn't be possible
        res.sendStatus(500)
        return
    }
  })

  http.createServer(server).listen(publicPort)
  http.createServer(server).listen(privatePort)
}

async function createSwaggerRouter(app: INestApplication, modules: Function[]): Promise<Router> {
  const swaggerConfig = new DocumentBuilder().setTitle('MyApp').setVersion('1.0').build()
  const document = SwaggerModule.createDocument(app, swaggerConfig, { include: modules })

  const swaggerUi = loadPackage('swagger-ui-express', 'SwaggerModule', () => require('swagger-ui-express'))
  const swaggerHtml = swaggerUi.generateHTML(document)

  const router = Router()
    .use(swaggerUi.serveFiles(document))
    .get('/', (req: Request, res: Response, next: NextFunction) => {
      res.send(swaggerHtml)
    })

  return router
}
That's OK, but if you want to run two servers on one host, I would recommend creating two files such as main-client.ts and main-back-office.ts and running them in different processes, because then failures in one server would not affect the other.
Also, if you are not running this in Docker, I would suggest tools like forever, pm2, supervisor, or my own very small library, workers-cluster.
If you run it in Docker and don't want a big refactoring, I would recommend creating a single Dockerfile with different CMD or ENTRYPOINT commands.
The NestJS docs cover how to let one server serve multiple ports:
https://docs.nestjs.com/faq/multiple-servers#multiple-simultaneous-servers
The following recipe shows how to instantiate a Nest application that listens on multiple ports (for example, on a non-HTTPS port and an HTTPS port) simultaneously.
const httpsOptions = {
  key: fs.readFileSync('./secrets/private-key.pem'),
  cert: fs.readFileSync('./secrets/public-certificate.pem'),
};

const server = express();
const app = await NestFactory.create(
  ApplicationModule,
  new ExpressAdapter(server),
);
await app.init();

http.createServer(server).listen(3000);
https.createServer(httpsOptions, server).listen(443);
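For completeness, the recipe above assumes the usual imports at the top of main.ts; a sketch, with the ApplicationModule path being an assumption:

// main.ts — imports assumed by the recipe above (sketch)
import * as express from 'express';
import * as fs from 'fs';
import * as http from 'http';
import * as https from 'https';
import { NestFactory } from '@nestjs/core';
import { ExpressAdapter } from '@nestjs/platform-express';
import { ApplicationModule } from './app.module'; // hypothetical module path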
I am trying to use express-validator to validate the req.body before sending a post request to insert data to postgres.
I have a route file, controller file and I want to carryout validation in a file called validate.js. Meanwhile, I have installed express-validator and in my server.js I have imported it. Other resources I come across seem to implement the validation in the function that contains the logic for inserting the data.
//server.js
....
import expressValidator from 'express-validator';
...
app.use(bodyParser.urlencoded({ extended: false }));
app.use(expressValidator);
//route.js
import express from 'express';
import usersController from './controller';
const router = express.Router();
router.post('/createuser', usersController.createUser);
//controller.js
createUser(req, res) {
  // ...
  const { firstName, lastName, email, password } = req.body;
  // code to insert user details into the database
}
//validator.js
import { check } from 'express-validator/check';
module.exports = [check('email').isEmail()];
I expect to implement the validation in a file called validator.js to, say, validate the email before inserting into the database.
I have the same approach, except for one thing: we shouldn't handle validation errors in our controller. If any error occurs at the validation layer, it should be thrown back from there; we shouldn't allow control flow to enter the controller layer. Below is a code example:
useRoute.js
const route = express.Router();
const {
  validateUser,
} = require('../middlewares/validators/userValidator');

route.route('/').post(validateUser, createUser);
route.route('/:id').put(validateUser, updateUser);

module.exports = route;
userValidator.js
const {check, validationResult} = require('express-validator');

exports.validateUser = [
  check('name')
    .trim()
    .escape()
    .not()
    .isEmpty()
    .withMessage('User name can not be empty!')
    .bail()
    .isLength({min: 3})
    .withMessage('Minimum 3 characters required!')
    .bail(),
  check('email')
    .trim()
    .normalizeEmail()
    .not()
    .isEmpty()
    .withMessage('Invalid email address!')
    .bail(),
  (req, res, next) => {
    const errors = validationResult(req);
    if (!errors.isEmpty())
      return res.status(422).json({errors: errors.array()});
    next();
  },
];
controller.js
/**
 * @desc - create new User
 * @method - POST
 */
exports.createUser = async (req, res) => {
  // do your stuff here. (No need to check any validation error here)
}
Here is the way I use express-validator. I have a file validator.js with validation logic for many routes. For example:
validator.js
const { check } = require('express-validator/check');

exports.createUser = [check('email').isEmail()];
exports.anotherRoute = [/* check data */];
exports.doSomethingElse = [/* check data */];
Now in your route file you require the validator.js file, const validator = require("./validator"); (or wherever your file is located), and use the validation logic you want as middleware. For example:
route.js
//
router.post('/createuser', validator.createUser, usersController.createUser);
Last, inside your controller you have to check for possible errors created during validation, after requiring validationResult.
controller.js
const { validationResult } = require('express-validator/check');

exports.createUser = (req, res) => {
  const errors = validationResult(req);
  if (!errors.isEmpty()) {
    return res.status(422).json({ errors: errors.array() });
  }
  // do stuff here.
}
Also, you don't have to use app.use(expressValidator); in your server.js file
I was running into a few problems with async functions; here is my humble solution, hope it helps someone:
Route Definitions
const router = require('express').Router();
const userValidator = require('./Validators/UserValidator');
const userController = require('./Controllers/UserController');
router.post('/users', userValidator.add, userController.add);
Validator
const { check, validationResult } = require('express-validator');

const generateValidators = () => [
  check('first_name')...,
  check('last_name')...,
  check('email')...,
  check('password')...
]

const reporter = (req, res, next) => {
  const errors = validationResult(req);
  if (!errors.isEmpty()) {
    const errorMessages = errors.array().map(error => error.msg);
    return res.status(400).json({
      errors: errorMessages
    });
  }
  next();
}

module.exports = {
  add: [
    generateValidators(),
    reporter
  ]
};
Just adding a few changes to Shivam's answer for the email check:
const {check, validationResult} = require('express-validator');

exports.validateUser = [
  check('name')
    .trim()
    .escape()
    .not()
    .isEmpty()
    .withMessage('User name can not be empty!')
    .bail()
    .isLength({min: 3})
    .withMessage('Minimum 3 characters required!')
    .bail(),
  check('email')
    .trim()
    .not()
    .isEmpty()
    .withMessage("Email name can not be empty!")
    .bail()
    .isEmail()
    .withMessage("Invalid email address!")
    .bail(),
  (req, res, next) => {
    const errors = validationResult(req);
    if (!errors.isEmpty())
      return res.status(422).json({errors: errors.array()});
    next();
  },
];
Really hoping someone can help me.
What I am trying to do: call a REST API for JSON and resolve an Angular 2 promise.
ServerAPI running Node.js/ExpressJS/Lodash
Server.js file:
var express = require('express');
var app = express();
var bodyParser = require("body-parser");
var data = require('./data.json');
var _ = require('lodash');
var cors = require('cors');

app.use(bodyParser.urlencoded({ extended: false }));
app.use(cors());

app.get('/GetData', function (req, resp) {
  if (req.query.search != null) {
    var result = _.find(data, function (o) {
      return o.value === req.query.search.toLowerCase().trim()
    });
    return resp.send(result)
  }
});

app.listen(1337, function () {
  console.log('Listening at Port 1337');
});
Ran and tested http://localhost:1337/GetData?search=colorado and it returns a valid JSON object.
ClientAPI
Service file calling HTTP request:
import {Injectable} from "@angular/core";
import {Http} from "@angular/http";
import {Config} from "../config";
import {SearchResult} from "../models/search-result.model";
import {MockSearchData} from "../mock/mock-search-results";
import {Observable} from 'rxjs/Rx';
import 'rxjs/add/operator/map';

@Injectable()
export class ApiDataService {
  constructor(private http: Http) {
  }

  public performSearchRequest(searchTerm: string, queryType: string): Promise<SearchResult[]> {
    return new Promise<SearchResult[]>((resolve, reject) => {
      let url = Config.apiBaseUrl + Config.searchApi;
      url += "?search=" + searchTerm;
      console.log("Your query to be: " + url);
      if (searchTerm != "") {
        if (queryType == 'mock') {
          resolve(MockSearchData);
        } else if (queryType == 'api') {
          let data = [];
          this.http.get(url)
            .map(resp => resp.json())
            .subscribe(getData => data = getData);
          resolve(data);
        } else {
          reject("No query type found.");
        }
      } else {
        reject("Please enter a search term.");
      }
    });
  }
}
Resolving the mock data, which is a local JSON file within the ClientAPI, works perfectly. I need to get the branch for the 'api' query type to work.
The Angular app starts with no issue and runs the http.get without error. I checked the network tab in the dev tools and can see that an HTTP request was made and its response is the valid JSON object I want resolved, i.e. there is data being returned. Yet the table I am resolving this into is blank.
WHAT AM I DOING WRONG!
The issue occurs here:
this.http.get(url)
.map(resp => resp.json())
.subscribe(getData => data = getData);
resolve(data);
You subscribe to the observable, but it isn’t “resolved” yet when you call resolve directly afterwards. That means you’re really just calling resolve([]).
Instead, do something like this:
this.http.get()./*...*/.subscribe(result => resolve(result));
You also might want to look into the toPromise method on Observables as well as the way to construct a resolved promise directly.
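For illustration, a sketch of the same call using toPromise instead of a hand-built Promise (assuming the same Http/Config setup as the service above; with the old Angular Http API, toPromise comes from the RxJS 5 patch import shown below):

import 'rxjs/add/operator/toPromise';

// inside ApiDataService (sketch)
public performSearchRequest(searchTerm: string): Promise<SearchResult[]> {
  const url = Config.apiBaseUrl + Config.searchApi + "?search=" + searchTerm;
  // the promise resolves once the HTTP response arrives and is mapped to JSON
  return this.http.get(url)
    .map(resp => resp.json() as SearchResult[])
    .toPromise();
}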