Frequent timeout with app using Serverless Framework (AWS Lambda/Gateway), Express, Mongoose/MongoDB Atlas - express

Trigger warning: beginner question.
I built an API using Express and Mongoose with a MongoDB Atlas database.
Most of the time it works normally, but I often get timeout errors. This seems to happen randomly and affects all routes. Precisely, I get:
`502 Internal server error via POSTMAN`
and in the Serverless Dashboard, I get:
invocation
time invoked: 1 day ago, Mar 08 at 1:38pm
fatal error: Function execution duration going to exceeded configured timeout limit.
cold start
duration: 48.9 s
memory used: n/a
request
endpoint: /{proxy+}
method: POST
status: 502
message: Internal server error
latency: 27 ms
and in the span & log view: (screenshot omitted)
I used this tutorial to wrap my Express app and deploy it with the Serverless Framework: https://dev.to/adnanrahic/a-crash-course-on-serverless-apis-with-express-and-mongodb-193k
serverless.yml file:
service: serviceName
app: appName
org: orgName

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${env:NODE_ENV}
  region: eu-central-1
  environment:
    NODE_ENV: ${env:NODE_ENV}
    DB: ${env:DB}

functions:
  app:
    handler: server.run
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true

plugins:
  - serverless-offline # used to test locally
  - serverless-dotenv-plugin
server.js file:
const sls = require('serverless-http')
const app = require('./app')
module.exports.run = sls(app)
app.js file:
const express = require('express')
const cors = require('cors')
const helmet = require('helmet')
const bodyParser = require('body-parser')
const newRoutes = require('./routes/file')

const app = express()

app.use(bodyParser.json())
app.use(helmet())
app.options('*', cors())
app.use(cors({ allowedHeaders: 'Content-Type, Authorization' }))

app.use('/new-route', newRoutes)

// Error handler
app.use((error, req, res, next) => {
  console.log(error)
  const status = error.status || 500
  const message = error.message
  res.status(status).json({
    status: status,
    message: message
  })
})

// Handles the database connection:
require('./db')

module.exports = app
and finally the db.js file:
const mongoose = require('mongoose')

mongoose
  .connect(process.env.DB, {
    useNewUrlParser: true,
    useUnifiedTopology: true
  })
  .then(() => {
    console.log('connected')
  })
  .catch(err => console.log(err))
From what I have read, this is related to cold starts in Lambda and to the way API Gateway handles timeouts. I have read the Mongoose documentation on Lambda (https://mongoosejs.com/docs/lambda.html) as well as other tutorials, but I don't know exactly how I should adapt it to my situation.
Thank you for your help.

Under your provider, add a timeout. The maximum timeout for a Lambda function is 900 seconds; set it according to your execution time, for example 30 seconds, and see what happens:
provider:
  timeout: 30
The error is clearly saying that the execution exceeded the timeout. Since you have not configured a timeout, the default of 3 seconds was being used. Hopefully this will solve the issue.
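For reference, this is how it could look merged into the serverless.yml from the question (just a sketch; 30 seconds is an illustrative value, and API Gateway itself caps requests at roughly 29 seconds regardless of the Lambda timeout):
provider:
  name: aws
  runtime: nodejs12.x
  stage: ${env:NODE_ENV}
  region: eu-central-1
  timeout: 30 # default timeout applied to every function in this service
  environment:
    NODE_ENV: ${env:NODE_ENV}
    DB: ${env:DB}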

The issue is likely due to your open database connection. While this connection is open, any calls to the callback won't be returned to the client and your function will time out.
You need to set context.callbackWaitsForEmptyEventLoop to false.
Here is the explanation from the docs:
callbackWaitsForEmptyEventLoop – Set to false to send the response right away when the callback executes, instead of waiting for the Node.js event loop to be empty. If this is false, any outstanding events continue to run during the next invocation.
With serverless-http you can set this option quite easily within your server.js file:
const sls = require('serverless-http')
const app = require('./app')
module.exports.run = sls(app, { callbackWaitsForEmptyEventLoop: false })
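Beyond that, the Mongoose Lambda docs linked in the question recommend reusing the database connection across invocations rather than connecting once at module load and ignoring the resulting promise. A rough sketch of how db.js could be adapted (the connectToDatabase name and the 5-second server selection timeout are illustrative, not from the question):
const mongoose = require('mongoose')

let conn = null // cached connection promise, reused by warm invocations

module.exports = async function connectToDatabase() {
  if (conn == null) {
    // Connect only on the first (cold) invocation; warm invocations reuse it.
    conn = mongoose.connect(process.env.DB, {
      useNewUrlParser: true,
      useUnifiedTopology: true,
      serverSelectionTimeoutMS: 5000 // fail fast instead of hanging until the Lambda timeout
    })
  }
  await conn
  return mongoose
}
Route handlers would then need to await connectToDatabase() before querying, rather than relying on the require('./db') call in app.js.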

Related

Nuxt.js on Google App Engine Standard Environment - 500 errors

I am serving a Nuxt.js SSR application from Google App Engine, on an F2 instance. Everything deploys OK and works well, but sometimes I hit a weird 500 server error. I think the problem is that GAE shuts down my instance because it is rarely used. When the instance is created again, my website fails to load (it times out), resulting in a 500 error. I am using serverMiddleware with Express.
/server/middleware.js
const express = require('express')
const app = express()

// this fetches a CMS token and creates a CMS object used in the request below
const CMS = require('../cms')

app.use(express.json())
app.use(express.urlencoded({ extended: true }))

// get a page from the CMS
app.post('/page', async (req, res, next) => {
  try {
    let result = await CMS.getPage(req.body.route)
    res.status(200).json(result)
  } catch (error) {
    console.log('Error', error)
  }
})
app.yaml
runtime: nodejs12
instance_class: F2

handlers:
  - url: /_nuxt
    static_dir: .nuxt/dist/client
    secure: always
  - url: /(.*\.(gif|png|jpg|ico|txt))$
    static_files: static/\1
    upload: static/.*\.(gif|png|jpg|ico|txt)$
    secure: always
  - url: /.*
    secure: always
    redirect_http_response_code: 301
    script: auto

env_variables:
  HOST: "0.0.0.0"
  NODE_ENV: "production"

automatic_scaling:
  min_idle_instances: 1
  max_idle_instances: 1
  min_instances: 1
  max_instances: 1
  target_cpu_utilization: 0.9
  target_throughput_utilization: 0.9
  max_concurrent_requests: 50
And here is the relevant log entry:
0: {
  logMessage: "Request was aborted after waiting too long to attempt to service your request."
  severity: "ERROR"
  time: "2022-01-14T08:55:22.057499Z"
}
Am I doing something wrong here? I would not mind keeping the costs down and scaling back to 0 instances when they are not needed, but the website would still need to be served as fast as possible.
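One thing that might be worth trying (an assumption on my part, not something from the question) is App Engine's warmup requests, so a freshly started instance can initialize, for example fetch the CMS token, before it receives real traffic. Warmup has to be enabled in app.yaml:
inbound_services:
  - warmup
The Express middleware would then also need to answer GET /_ah/warmup with a 200 (something like app.get('/_ah/warmup', (req, res) => res.status(200).send('ok'))). Keeping min_instances at 1, as already configured, avoids scaling to zero at the cost of an always-on instance.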

"Execution failed" when setting up API Gateway and Fargate with AWS CDK

I am trying to set up AWS API Gateway to access a Fargate container in a private VPC, as described here. For this I am using AWS CDK as shown below. But when I curl the endpoint after a successful cdk deploy, I get "Internal Server Error" as a response, and I can't find any additional information. For some reason API Gateway can't reach the container.
So when I curl the endpoint like this:
curl -i https://xxx.execute-api.eu-central-1.amazonaws.com/prod/MyResource
... I get the following log output in CloudWatch:
Extended Request Id: NpuEPFWHliAFm_w=
Verifying Usage Plan for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21. API Key: API Stage: ...
API Key authorized because method 'ANY /MyResource/{proxy+}' does not require API Key. Request will not contribute to throttle or quota limits
Usage Plan check succeeded for API Key and API Stage ...
Starting execution for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21
HTTP Method: GET, Resource Path: /MyResource/test
Execution failed due to configuration error: There was an internal error while executing your request
CDK Code
First I create a network load balanced Fargate service:
private setupService(): NetworkLoadBalancedFargateService {
  const vpc = new Vpc(this, 'MyVpc');
  const cluster = new Cluster(this, 'MyCluster', {
    vpc: vpc,
  });
  cluster.connections.allowFromAnyIpv4(Port.tcp(5050));

  const taskDefinition = new FargateTaskDefinition(this, 'MyTaskDefinition');
  const container = taskDefinition.addContainer('MyContainer', {
    image: ContainerImage.fromRegistry('vad1mo/hello-world-rest'),
  });
  container.addPortMappings({
    containerPort: 5050,
    hostPort: 5050,
  });

  const service = new NetworkLoadBalancedFargateService(this, 'MyFargateServie', {
    cluster,
    taskDefinition,
    assignPublicIp: true,
  });
  service.service.connections.allowFromAnyIpv4(Port.tcp(5050));
  return service;
}
Next I create the VpcLink and the API Gateway:
private setupApiGw(service: NetworkLoadBalancedFargateService) {
  const api = new RestApi(this, `MyApi`, {
    restApiName: `MyApi`,
    deployOptions: {
      loggingLevel: MethodLoggingLevel.INFO,
    },
  });

  // set up the API resource which forwards to the container
  const resource = api.root.addResource('MyResource');
  resource.addProxy({
    anyMethod: true,
    defaultIntegration: new HttpIntegration('http://localhost.com:5050', {
      httpMethod: 'ANY',
      options: {
        connectionType: ConnectionType.VPC_LINK,
        vpcLink: new VpcLink(this, 'MyVpcLink', {
          targets: [service.loadBalancer],
          vpcLinkName: 'MyVpcLink',
        }),
      },
      proxy: true,
    }),
    defaultMethodOptions: {
      authorizationType: AuthorizationType.NONE,
    },
  });
  resource.addMethod('ANY');
  this.addCorsOptions(resource);
}
Does anyone have a clue what is wrong with this config?
After hours of trying I finally figured out that the security groups do not seem to be updated correctly when setting up the VpcLink with CDK. Broadening the allowed connections with
service.service.connections.allowFromAnyIpv4(Port.allTraffic())
solved it. I still need to figure out which minimal rule set should be used instead of allTraffic().
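If it helps, here is one guess at a narrower rule (an assumption, not something I have verified): a Network Load Balancer preserves source addresses and has no security group of its own, so the task's security group has to allow the container port from the addresses the NLB forwards traffic from, for example the VPC CIDR for health checks and VPC Link traffic. In CDK that could look roughly like this, assuming the vpc created in setupService is in scope:
service.service.connections.allowFrom(
  Peer.ipv4(vpc.vpcCidrBlock), // only traffic originating inside the VPC
  Port.tcp(5050),              // and only on the container port
);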
Additionally, I replaced localhost in the HttpIntegration with the DNS name of the load balancer, like this:
resource.addMethod('ANY', new HttpIntegration(
  'http://' + service.loadBalancer.loadBalancerDnsName,
  {
    httpMethod: 'ANY',
    options: {
      connectionType: ConnectionType.VPC_LINK,
      vpcLink: new VpcLink(this, 'MyVpcLink', {
        targets: [service.loadBalancer],
        vpcLinkName: 'MyVpcLink',
      })
    },
  }
))

Axios get request working locally but timing out on serverless

I'm trying to scrape some websites, and for some reason it works locally (localhost) with Express, but not when I've deployed it to Lambda. I tried the serverless-http, aws-serverless-express and serverless-express plugins, and also tried switching between axios and superagent.
Routes work fine, and after hours of investigating I've narrowed the problem down to the fetch/axios bit. When I don't add a timeout to axios/superagent/etc., the app just keeps running and times out at 15 or 30 seconds (whichever is set), and I get a 50* error.
service: scrape

provider:
  name: aws
  runtime: nodejs10.x
  stage: dev
  region: us-east-2
  memorySize: 128
  timeout: 15

plugins:
  - serverless-plugin-typescript
  - serverless-express

functions:
  app:
    handler: src/server.handler
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
protected async fetchHtml(uri: string): Promise<CheerioStatic | null> {
  const htmlElement = await Axios.get(uri, { timeout: 5000 });
  if (htmlElement.status === 200) {
    const $ = Cheerio.load(htmlElement && htmlElement.data || '');
    $('script').remove();
    return $;
  }
  return null;
}
As far as I know, the default timeout of axios is indefinite. Remember that API Gateway has a hard limit of 29 seconds.
I had the same issue recently; sometimes the timeouts are due to cold starts, so I basically had to add retry logic for the API call in my frontend React application.
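A minimal sketch of what that client-side retry could look like (the attempt count, delays and the getWithRetry name are arbitrary; libraries such as axios-retry do the same thing with less code):
async function getWithRetry(url, attempts = 3, delayMs = 1000) {
  for (let i = 0; i < attempts; i++) {
    try {
      // per-request timeout so a cold start doesn't hang the client forever
      return await Axios.get(url, { timeout: 10000 });
    } catch (err) {
      if (i === attempts - 1) throw err; // out of retries, surface the error
      await new Promise(resolve => setTimeout(resolve, delayMs * (i + 1)));
    }
  }
}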

Calling express-session backed by connect-redis inside websockets/ws

Okay so here goes. I am trying to use express-session backed by connect-redis together with websockets/ws. The vast majority of example code I have seen uses Express to serve the client a websocket script of some type and then configures the server app to use the express-session middleware to work with session data, with websockets further down the process chain.
My client-side app, on the other hand, immediately initiates an Opening Handshake/Upgrade request, which bypasses Express and heads straight into websockets/ws. For this reason I am not able to work with session data through the commonly used app.use(session(...)) middleware setup.
The code below uses a setup that allows me to call express-session inside websockets/ws and to put the connection request through express-session so I can get at the session data. This works, as shown by the output below. What does not work, however, is backing express-session with connect-redis inside this setup. To check, I call a Redis client directly to query my Redis box after express-session has run; no keys are returned.
I have three questions:
1. Is what I am doing below the 'correct' way of integrating express-session with websockets/ws for the client setup described?
2. If not what should I be doing?
3. If this is a good way, is it possible to get connect-redis working as session store?
Your insights are very welcome.
Server-side code:
const express = require('express');
const Session = require('express-session');
const http = require('http');
const ws = require('ws');
const util = require('util');
const redis = require('redis');

const client = redis.createClient(6379, '10.0.130.10', { no_ready_check: true });
const RedisStore = require('connect-redis')(Session);
const SessionStore = new RedisStore({ host: '10.0.130.10', port: '6379', ttl: 60, logErrors: true });
//const SessionStore = new Session.MemoryStore();

var session = Session({
  store: SessionStore,
  cookie: { secure: true, maxAge: 3600, httpOnly: true },
  resave: false,
  saveUninitialized: true,
  secret: '12345'
});

// Define Express and WS servers
const app = express();
const server = http.createServer(app);
const wss = new ws.Server({ server });

// WS websocket
wss.on('connection', function connection(ws, req) {
  session(req, {}, function() {
    let sessionId = req.sessionID;
    console.log('SessionID: ' + sessionId);
    let sessionCookie = req.session.cookie;
    console.log('SessionCookie: ' + JSON.stringify(sessionCookie));
  });
  client.keys('*', function(err, reply) {
    // reply is null when the key is missing
    console.log('Redis reponse: ' + reply);
  });
});

server.listen(10031, function listening() {
  console.log('Listening on: ' + server.address().port);
});
Server-side console.log output (note the Redis response):
Listening on: 10031
SessionID: Oi8AdsoZTAm3hRLmxPfGo43Kmmj_Yd6F
SessionCookie: {"originalMaxAge":3599,"expires":"2017-05-29T17:45:54.467Z","secure":true,"httpOnly":true,"path":"/"}
Redis reponse:
Reference:
express-session, connect-redis and einaros/ws
Reference:
ExpressJS & Websocket & session sharing
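One detail that may matter here (my own observation, not something stated in the question): express-session normally persists the session to the store when the HTTP response ends, and in this setup there is no real response object, so nothing ever triggers a save. Under that assumption, the session would have to be saved explicitly inside the callback, roughly like this:
wss.on('connection', function connection(ws, req) {
  session(req, {}, function() {
    console.log('SessionID: ' + req.sessionID);
    // No response will end here, so persist the session to Redis explicitly.
    req.session.save(function(err) {
      if (err) console.log('Session save error: ' + err);
      // The key should then show up in Redis, e.g. "sess:<sessionID>" with connect-redis' default prefix.
    });
  });
});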

Socket.IO Authentication

I am trying to use Socket.IO in Node.js, and am trying to allow the server to give an identity to each of the Socket.IO clients. As the socket code is outside the scope of the http server code, it doesn't have easy access to the request information sent, so I'm assuming it will need to be sent up during the connection. What is the best way to
1) get the information to the server about who is connecting via Socket.IO
2) authenticate who they say they are (I'm currently using Express, if that makes things any easier)
Use connect-redis and have redis as your session store for all authenticated users. Make sure on authentication you send the key (normally req.sessionID) to the client. Have the client store this key in a cookie.
On socket connect (or anytime later) fetch this key from the cookie and send it back to the server. Fetch the session information in redis using this key. (GET key)
Eg:
Server side (with Redis as session store):
req.session.regenerate...
res.send({ rediskey: req.sessionID });
Client side:
//store the key in a cookie
SetCookie('rediskey', <%= rediskey %>); //http://msdn.microsoft.com/en-us/library/ms533693(v=vs.85).aspx
//then when the socket is connected, fetch the rediskey from document.cookie and send it back to the server
var socket = new io.Socket();
socket.on('connect', function() {
  var rediskey = GetCookie('rediskey'); //http://msdn.microsoft.com/en-us/library/ms533693(v=vs.85).aspx
  socket.send({ rediskey: rediskey });
});
Server side:
//in io.on('connection')
io.on('connection', function(client) {
  client.on('message', function(message) {
    if (message.rediskey) {
      //fetch session info from redis
      redisclient.get(message.rediskey, function(e, c) {
        client.user_logged_in = c.username;
      });
    }
  });
});
I also liked the way pusherapp does private channels:
A unique socket id is generated and sent to the browser by Pusher. This is sent to your application (1) via an AJAX request which authorizes the user to access the channel against your existing authentication system. If successful, your application returns an authorization string to the browser, signed with your Pusher secret. This is sent to Pusher over the WebSocket, which completes the authorization (2) if the authorization string matches.
socket.io also has a unique socket_id for every socket:
socket.on('connect', function() {
  console.log(socket.transport.sessionid);
});
They used signed authorization strings to authorize users.
I haven't yet mirrored this in socket.io, but I think it could be a pretty interesting concept.
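As a rough illustration of that concept (purely a sketch, not from the original answer; the function names and the shared secret are made up): the authorization endpoint could sign the socket id together with the channel name using an HMAC, and the socket server would recompute and compare the signature before letting the client join.
const crypto = require('crypto');

const SECRET = 'shared-secret'; // assumption: same secret on the auth endpoint and the socket server

// Auth endpoint side: sign "socketId:channel" for an authenticated user.
function signChannelAccess(socketId, channel) {
  return crypto.createHmac('sha256', SECRET)
    .update(socketId + ':' + channel)
    .digest('hex');
}

// Socket server side: recompute and compare before joining the channel.
function verifyChannelAccess(socketId, channel, signature) {
  const expected = signChannelAccess(socketId, channel);
  if (signature.length !== expected.length) return false;
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}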
I know this is a bit old, but for future readers: in addition to the approach of parsing the cookie and retrieving the session from storage (e.g. passport.socketio), you might also consider a token-based approach.
In this example I use JSON Web Tokens, which are pretty standard. You have to give the token to the client page; imagine an authentication endpoint that returns a JWT:
var jwt = require('jsonwebtoken');
// other requires
app.post('/login', function (req, res) {
  // TODO: validate the actual user
  var profile = {
    first_name: 'John',
    last_name: 'Doe',
    email: 'john@doe.com',
    id: 123
  };
  // we are sending the profile in the token
  var token = jwt.sign(profile, jwtSecret, { expiresInMinutes: 60 * 5 });
  res.json({ token: token });
});
Now, your socket.io server can be configured as follows:
var socketioJwt = require('socketio-jwt');
var sio = socketIo.listen(server);

sio.set('authorization', socketioJwt.authorize({
  secret: jwtSecret,
  handshake: true
}));

sio.sockets.on('connection', function (socket) {
  console.log(socket.handshake.decoded_token.email, 'has joined');
  //socket.on('event');
});
The socketio-jwt middleware expects the token in a query string, so from the client you only have to attach it when connecting:
var socket = io.connect('', {
  query: 'token=' + token
});
I wrote a more detailed explanation about this method and cookies here.
Here is my attempt to have the following working:
express: 4.14
socket.io: 1.5
passport (using sessions): 0.3
redis: 2.6 (a really fast store for handling sessions; you can use others like MongoDB too, but I encourage you to use Redis for session data and MongoDB for other persistent data like users)
Since you might want to add some API requests as well, we'll also use the http package so that both HTTP and WebSocket work on the same port.
server.js
The following extract only includes everything you need to set the previous technologies up. You can see the complete server.js version which I used in one of my projects here.
import http from 'http';
import express from 'express';
import passport from 'passport';
import Session from 'express-session';
import { createClient as createRedisClient } from 'redis';
import connectRedis from 'connect-redis';
import Socketio from 'socket.io';

// Your own socket handler file, it's optional. Explained below.
import socketConnectionHandler from './sockets';

// Express app and the HTTP server that Express and socket.io will share.
const app = express();
const server = http.createServer(app);

// Configuration of your Redis session store.
const redisClient = createRedisClient();
const RedisStore = connectRedis(Session);
const dbSession = new RedisStore({
  client: redisClient,
  host: 'localhost',
  port: 27017,
  prefix: 'stackoverflow_',
  disableTTL: true
});

// Let's configure Express to use our Redis storage to handle
// sessions as well. You'll probably want Express to handle your
// sessions and share the same storage as your socket.io
// does (i.e. for handling AJAX logins).
const session = Session({
  resave: true,
  saveUninitialized: true,
  key: 'SID', // this will be used for the session cookie identifier
  secret: 'secret key',
  store: dbSession
});
app.use(session);

// Let's initialize passport by using its middlewares, which do
// everything pretty much automatically (you have to configure the
// login / register strategies on your own though, see reference 1).
app.use(passport.initialize());
app.use(passport.session());

// Socket.IO
const io = Socketio(server);
io.use((socket, next) => {
  session(socket.handshake, {}, next);
});
io.on('connection', socketConnectionHandler);
// socket.io is ready; socketConnectionHandler is just the name that we
// gave to our own socket.io handler file (explained just after this).

// Start the server. This will start both socket.io and our optional
// AJAX API on the given port.
const port = 3000; // Move this onto an environment variable,
                   // it'll look more professional.
server.listen(port);
console.info(`🌐 API listening on port ${port}`);
console.info(`🗲 Socket listening on port ${port}`);
sockets/index.js
Our socketConnectionHandler; I just don't like putting everything inside server.js (even though you perfectly could), especially since this file can end up containing quite a lot of code pretty quickly.
export default function connectionHandler(socket) {
  const userId = socket.handshake.session.passport &&
    socket.handshake.session.passport.user;
  // If the user is not logged in, you might find ^this^
  // socket.handshake.session.passport variable undefined.

  // Give the user a warm welcome.
  console.info(`⚡︎ New connection: ${userId}`);
  socket.emit('Grettings', `Grettings ${userId}`);

  // Handle disconnection.
  socket.on('disconnect', () => {
    if (process.env.NODE_ENV !== 'production') {
      console.info(`⚡︎ Disconnection: ${userId}`);
    }
  });
}
Extra material (client):
Just a very basic version of what the JavaScript socket.io client could be:
import io from 'socket.io-client';

const socketPath = '/socket.io'; // <- default path, but you could configure
                                 // your server to something like /api/socket.io
const socket = io.connect('localhost:3000', { path: socketPath });

socket.on('connect', () => {
  console.info('Connected');
  socket.on('Grettings', (data) => {
    console.info(`Server gretting: ${data}`);
  });
});

socket.on('connect_error', (error) => {
  console.error(`Connection error: ${error}`);
});
References:
I just couldn't reference inside the code, so I moved it here.
1: How to set up your Passport strategies: https://scotch.io/tutorials/easy-node-authentication-setup-and-local#handling-signupregistration
This article (http://simplapi.wordpress.com/2012/04/13/php-and-node-js-session-share-redi/) shows how to:
store the sessions of the HTTP server in Redis (using Predis)
get these sessions from Redis in Node.js by the session id sent in a cookie
Using this code you are able to get them in socket.io, too.
var io = require('socket.io').listen(8081);
var cookie = require('cookie');
var redis = require('redis'), client = redis.createClient();

io.sockets.on('connection', function (socket) {
  var cookies = cookie.parse(socket.handshake.headers['cookie']);
  console.log(cookies.PHPSESSID);
  client.get('sessions/' + cookies.PHPSESSID, function(err, reply) {
    console.log(JSON.parse(reply));
  });
});
Use session and Redis between client and server.
Server side:
io.use(function(socket, next) {
  // get the session id here from the cookie header
  console.log(socket.handshake.headers.cookie); // and match it against the session data in Redis
  next();
});
This should do it:
//server side
io.sockets.on('connection', function (con) {
  console.log(con.id)
})
//client side
var io = io.connect('http://...')
console.log(io.sessionid)