I have a basic Serverless Express app in a Lambda, with one route set to async: true. I want to trigger this route asynchronously from a different application and expect it to run in the background, without having to wait for the response.
My full serverless.yml:
service: service-name
useDotenv: true

custom:
  serverless-offline:
    useChildProcesses: true
  webpack:
    webpackConfig: ./webpack.config.js
    packager: "yarn"
    includeModules:
      forceExclude:
        - aws-sdk
  prune:
    automatic: true
    includeLayers: true
    number: 3
  envStage:
    staging: staging
  domainPrefix:
    staging: service.staging
  customDomain:
    domainName: ${self:custom.domainPrefix.${opt:stage}}.mydomain.com
    basePath: ""
    stage: ${self:custom.envStage.${opt:stage}}
    createRoute53Record: true

plugins:
  - serverless-domain-manager
  - serverless-webpack
  - serverless-prune-plugin
  - serverless-offline
provider:
  lambdaHashingVersion: "20201221"
  name: aws
  runtime: nodejs14.x
  region: us-east-1
  apiGateway:
    minimumCompressionSize: 1024
  iamRoleStatements:
    - Effect: Allow
      Action: ssm:Get*
      Resource: "arn:aws:ssm:*:*:parameter/myparams/*"
    - Effect: Allow
      Action: kms:Decrypt
      Resource: "*"

functions:
  express:
    handler: src/index.middyHandler
    events:
      - http:
          path: /
          method: options
      - http:
          path: /{any+} # Catch all routes
          method: options
      - http:
          path: foo/{any+}
          method: get
      - http:
          path: foo/{any+}
          method: post
          async: true
Note: The role that deploys this app has permission to read and write CloudWatch Logs, and I can see logs from the synchronous invocations, but not from async invocations.
My index.middyHandler
import serverless from "serverless-http";
import express from "express";
import helmet from "helmet";
import bodyParser from "body-parser";
import cookieParser from "cookie-parser";
import middy from "@middy/core";
import ssm from "@middy/ssm";
import doNotWaitForEmptyEventLoop from "@middy/do-not-wait-for-empty-event-loop";
import cors from "cors";
import fooRoutes from "./routes/foo";

const app = express();

app.use(
  cors({
    methods: "GET,HEAD,OPTIONS,POST",
    preflightContinue: false,
    credentials: true,
    origin: true,
    optionsSuccessStatus: 204,
  })
);
app.use(helmet({ contentSecurityPolicy: false, crossOriginEmbedderPolicy: false }));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(cookieParser());

app.get("/ping", (req, res) => {
  res.send("Pong!");
});

// Register routes
app.use("/foo", fooRoutes);

const handler = serverless(app);

export const middyHandler = middy(handler)
  .use(
    doNotWaitForEmptyEventLoop({
      runOnError: true,
      runOnAfter: true,
      runOnBefore: true,
    })
  )
  .use(
    ssm({
      setToEnv: true,
      fetchData: {
        MY_KEYS: "ssm/path",
      },
    })
  );
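With setToEnv: true, the value fetched from the SSM path lands on process.env under the fetchData key, so downstream route code can read it as process.env.MY_KEYS. A sketch of the effect (the value is simulated here, not fetched):

```javascript
// Simulates what @middy/ssm with setToEnv: true does: after the middleware
// runs, each fetchData key holds the fetched parameter value on process.env.
// The value below is made up; in reality it comes from the 'ssm/path' parameter.
process.env.MY_KEYS = 'value-from-ssm';

const keys = process.env.MY_KEYS;
console.log(keys); // 'value-from-ssm'
```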
When I call this method, it correctly returns a 200 response immediately, but the actual code never runs: I have a DB insert in there, and it doesn't happen. In API Gateway I can see the X-Amz-Invocation-Type header is correctly being passed as Event type.
It is not a proxy integration, as required for async invocation.
What am I missing here? The route controller is a test and the code is very simple:
testAsync: async (req, res) => {
  console.log("In Test Async"); // Does not display in CloudWatch
  try {
    const { value } = req.body;
    const resp = await updateTest(value); // This just inserts an entry in the DB with value
    return res.send(resp);
  } catch (err) {
    return res.status(500).send(err);
  }
},
Is there any other setting I'm missing here? I'm not an AWS expert, so any help would be highly appreciated. Thanks!
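One way to narrow this down is to bypass API Gateway and invoke the Lambda asynchronously with the SDK's Event invocation type, which confirms whether the handler itself runs in the background. A sketch with aws-sdk v2; the function name and payload are placeholders, not taken from the app above:

```javascript
// Sketch: direct async (fire-and-forget) invocation of the function.
// Function name and payload are placeholders for illustration.
// const AWS = require('aws-sdk');
// const lambda = new AWS.Lambda({ region: 'us-east-1' });

const params = {
  FunctionName: 'service-name-staging-express', // placeholder
  InvocationType: 'Event', // async: Lambda replies with StatusCode 202 immediately
  Payload: JSON.stringify({ path: '/foo/test', httpMethod: 'POST' }),
};

// lambda.invoke(params).promise(); // resolves with { StatusCode: 202 }
console.log(params.InvocationType);
```

If the handler's logs appear in CloudWatch when invoked this way, the function is fine and the problem sits in the API Gateway integration.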
I'm using the Serverless Framework to try and test EventBridge.
The documentation is a little sparse, but for my test I would like to have two Lambda functions created: the first publishes an event, the second consumes it.
Here is my YAML:
service: events
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: '20201221'

functions:
  vehicle:
    handler: handler.vehicle
    events:
      - httpApi:
          path: /vehicle
          method: '*'
  bundle:
    handler: handler.bundle
    events:
      - httpApi:
          path: /bundle
          method: '*'
      - eventBridge:
          eventBus: vehicle-bus
          pattern:
            source:
              - aos.vehicle.upload
            detail-type:
              - VehicleUpload
and my handler.js
"use strict";
const AWS = require('aws-sdk');

module.exports.vehicle = async (event) => {
  const eventBridge = new AWS.EventBridge({ region: 'us-east-1' });
  const vrm = 'WR17MMN';
  return eventBridge.putEvents({
    Entries: [
      {
        EventBusName: 'vehicle-bus', // must match the eventBus name in serverless.yml
        Source: 'aos.vehicle.upload',
        DetailType: 'VehicleUpload',
        Detail: `{ "Registration": "${vrm}" }`,
      },
    ],
  }).promise();
};

module.exports.bundle = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: "BUNDLE",
        input: event,
        aos: "First test OK",
      },
      null,
      2
    ),
  };
};
(I realise I can't just return that from the Lambda, but it also needs to be an endpoint. If I make the function body of bundle empty I still get a server error.)
What am I missing?
So you need this minimal setup:
org: myOrg
app: my-events
service: event-bridge-serverless

provider:
  name: aws
  runtime: nodejs10.x
  region: eu-west-1
  lambdaHashingVersion: 20201221
  environment:
    DYNAMODB_TABLE: ${self:service}-dev
  eventBridge:
    useCloudFormation: true
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "events:PutEvents"
      Resource: "*"

functions:
  asset:
    handler: handler.asset
    events:
      - eventBridge:
          eventBus: my-events
          pattern:
            source:
              - my.event
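A matching publisher then has to send entries whose EventBusName and Source line up with the pattern above. A minimal sketch of such an entry; the bus name and source come from the YAML, while the DetailType and payload are made up for illustration:

```javascript
// Sketch of a putEvents entry matching the eventBridge pattern above.
const entry = {
  EventBusName: 'my-events',
  Source: 'my.event', // must be listed under pattern.source
  DetailType: 'AssetUploaded', // assumed
  Detail: JSON.stringify({ id: 42 }), // assumed
};

// Simplified check mirroring how the pattern filters on source:
const pattern = { source: ['my.event'] };
console.log(pattern.source.includes(entry.Source)); // true
```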
I have quite a big app built with Serverless, and now we are trying out serverless-stack (SST). I am trying to reference a user pool created by SST in a serverless.yml function. Is it possible? Below are the steps I've tried.
I have created a user pool:
import * as cdk from '@aws-cdk/core'
import * as cognito from '@aws-cdk/aws-cognito'
import * as sst from '@serverless-stack/resources'

export default class UserServiceStack extends sst.Stack {
  constructor(scope: cdk.Construct, id: string, props: sst.StackProps = {}) {
    super(scope, id, props)

    const userPool = new cognito.UserPool(this, 'userPool', {
      signInAliases: {
        email: true,
        phone: true,
      },
      autoVerify: {
        email: true,
        phone: true,
      },
      passwordPolicy: {
        minLength: 8,
        requireDigits: false,
        requireLowercase: false,
        requireSymbols: false,
        requireUppercase: false,
      },
      signInCaseSensitive: false,
      selfSignUpEnabled: true,
    })

    new cdk.CfnOutput(this, 'UserPoolId', {
      value: userPool.userPoolId,
    })

    const userPoolClient = new cognito.UserPoolClient(this, 'userPoolClient', {
      userPool,
      authFlows: {
        adminUserPassword: true,
        userPassword: true,
      },
    })

    new cdk.CfnOutput(this, 'UserPoolClientId', {
      value: userPoolClient.userPoolClientId,
    })
  }
}
and want to update my post confirmation trigger defined in serverless.yml
...
  createUser:
    handler: createUser.default
    events:
      - cognitoUserPool:
          pool: !ImportValue '${self:custom.sstApp}...' # what to put here?
          trigger: PostConfirmation
          existing: true
Figured it out.
First: how to use CDK output variables in serverless.yml. Export them into a file:
AWS_PROFILE=<profile-name> npx sst deploy --outputs-file ./exports.json
and in serverless.yml you can reference it like so
...
  createUser:
    handler: createUser.default
    events:
      - cognitoUserPool:
          pool: ${file(../../infrastructure/exports.json):${self:custom.sstApp}-UserServiceStack.userPoolName}
          trigger: PostConfirmation
          existing: true
Second: serverless is set up such that you have to pass userPoolName, not userPoolId, so I had to generate a user pool name and output it:
import * as uuid from 'uuid'
...
const userPoolName = uuid.v4()
const userPool = new cognito.UserPool(this, 'userPool', {
  userPoolName,
  ...
})
...
// eslint-disable-next-line no-new
new cdk.CfnOutput(this, 'userPoolName', {
  value: userPoolName,
})
Third: to avoid an AccessDeniedException when Cognito invokes the Lambda as a trigger, you need to add the following to your resources:
- Resources:
    OnCognitoSignupPermission:
      Type: 'AWS::Lambda::Permission'
      Properties:
        Action: "lambda:InvokeFunction"
        FunctionName:
          Fn::GetAtt: ["CreateUserLambdaFunction", "Arn"] # the capitalized name of your Lambda with "LambdaFunction" appended
        Principal: "cognito-idp.amazonaws.com"
        SourceArn: ${file(../../infrastructure/exports.json):${self:custom.sstApp}-UserServiceStack.userPoolArn}
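For reference, the outputs file written by sst deploy is plain JSON keyed by "&lt;app&gt;-&lt;stack&gt;", and the ${file(...)} variables above simply drill into it. A sketch of the equivalent lookup; all names and values below are made up:

```javascript
// Assumed shape of exports.json produced by `sst deploy --outputs-file`;
// keys and values here are illustrative only.
const outputs = {
  'my-app-UserServiceStack': {
    userPoolName: '1b671a64-40d5-491e-99b0-da01ff1f3341',
    userPoolArn: 'arn:aws:cognito-idp:us-east-1:111111111111:userpool/us-east-1_EXAMPLE',
  },
};

const sstApp = 'my-app';
// Equivalent of ${file(./exports.json):${self:custom.sstApp}-UserServiceStack.userPoolName}
const poolName = outputs[`${sstApp}-UserServiceStack`].userPoolName;
console.log(poolName);
```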
I want to send a POST request to an external API with axios in a Nuxt project where I use the Nuxt auth module.
When a user is authenticated, axios seems to automatically add an Authorization header (which is fine and often required for calls to my backend API). However, when doing calls to an external API, the header might not be accepted and can cause the call to fail.
Is there any way to specify for which URLs the auth header should be added or excluded?
Here are the configurations of the auth and axios modules in my nuxt.config:
// Axios module configuration
axios: {
  baseURL: '//localhost:5000',
},

// Auth module configuration
auth: {
  strategies: {
    local: {
      endpoints: {
        login: { url: '/auth/login', method: 'post', propertyName: 'token' },
        logout: { url: '/auth/logout', method: 'delete' },
        user: { url: '/auth/user', method: 'get', propertyName: 'user' },
      },
    },
  },
}
Some more background:
In my particular use case I want to upload a file to an Amazon S3 bucket, so I create a presigned upload request and then upload the file directly into the bucket. This works perfectly fine as long as the user is not authenticated.
const { data } = await this.$axios.get('/store/upload-request', {
  params: { type: imageFile.type },
})

const { url, fields } = data
const formData = new FormData()
for (const [field, value] of Object.entries(fields)) {
  formData.append(field, value)
}
formData.append('file', imageFile)

await this.$axios.post(url, formData)
I tried to unset the Auth header via the request config:
const config = {
  transformRequest: (data, headers) => {
    delete headers.common.Authorization
  },
}

await this.$axios.post(url, formData, config)
This seems to prevent all formData-related headers from being added. Also, setting any header in the config via the headers property or in the transformRequest function does not work, which again causes the call to the external API to fail (the request is sent without any of these specific headers).
As I'm working with the Nuxt axios module, I'm not sure how to add an interceptor to the axios instance as described here or here.
Any help or hints on where to find help is very much appreciated :)
Try the following
Solution 1, create a new axios instance in your plugins folder:
export default function ({ $axios }, inject) {
  // Create a custom axios instance
  const api = $axios.create({
    headers: {
      // headers you need
    },
  })

  // Inject to context as $api
  inject('api', api)
}
Declare this plugin in nuxt.config.js, then you can send your request:
this.$api.$put(...)
Solution 2, declare axios as a plugin in plugins/axios.js and set the headers according to the request URL:
export default function ({ $axios, redirect, app }) {
  const apiS3BaseUrl = // Your s3 base url here

  $axios.onRequest((config) => {
    if (config.url.includes(apiS3BaseUrl)) {
      setToken(false)
      // Or delete $axios.defaults.headers.common['Authorization']
    } else {
      // Your current axios config here
    }
  })
}
Declare this plugin in nuxt.config.js
Personally I use the first solution; it doesn't matter if someday the S3 URL changes.
Here is the doc
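For completeness, both solutions assume the plugin file is registered in nuxt.config.js. A minimal sketch of that registration; the file names are assumptions, so match them to your plugins/ folder:

```javascript
// nuxt.config.js (fragment): registering the plugin(s) from the answer above.
export default {
  plugins: [
    '~/plugins/api.js',   // solution 1: custom $api instance
    '~/plugins/axios.js', // solution 2: onRequest interceptor
  ],
}
```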
You can pass the below configuration to nuxt-auth. Beware, those plugins are not related to the root configuration, but related to the nuxt-auth package.
nuxt.config.js
auth: {
  redirect: {
    login: '/login',
    home: '/',
    logout: '/login',
    callback: false,
  },
  strategies: {
    ...
  },
  plugins: ['~/plugins/config-file-for-nuxt-auth.js'],
},
Then, create a plugin file that will serve as the configuration for @nuxt/auth (you need to have @nuxt/axios installed, of course).
PS: in this file, exampleBaseUrlForAxios is used as an example to set the base URL for the axios calls while using @nuxt/auth.
config-file-for-nuxt-auth.js
export default ({ $axios, $config: { exampleBaseUrlForAxios } }) => {
  $axios.defaults.baseURL = exampleBaseUrlForAxios
  // I guess that any usual axios configuration can be done here
}
This is the recommended way of doing things, as explained in this article. Basically, you can pass runtime variables to your project this way. Here we pass an EXAMPLE_BASE_URL_FOR_AXIOS variable (located in .env) and rename it to a name we wish to use in our project.
nuxt.config.js
export default {
  publicRuntimeConfig: {
    exampleBaseUrlForAxios: process.env.EXAMPLE_BASE_URL_FOR_AXIOS,
  },
}
I have a small project with separate front end and back end, with development and production environments, so I want to set up a proxy for the API calls. The vue/cli version is 4.6.5.
File structure:
src
  axios
    api.js
    request.js
  components
    home
      LastBlogs.vue
.env.development
.env.production
package.json
vue.config.js
.env.development:
NODE_ENV = 'development'
VUE_APP_BASE_API = '/dev-api'
VUE_APP_API_ADDRESS= 'http://localhost:8080/blog/'
.env.production:
NODE_ENV = 'production'
# base api
VUE_APP_BASE_API = '/api'
# api publicPath
VUE_APP_API_ADDRESS= 'http://localhost:8080/blog'
vue.config.js:
'use strict'
var path = require('path')

module.exports = {
  configureWebpack: {
    devtool: 'source-map'
  },
  assetsDir: 'static',
  devServer: {
    contentBase: path.join(__dirname, 'dist'),
    compress: true,
    port: 8001,
    proxy: {
      [process.env.VUE_APP_BASE_API]: {
        target: process.env.VUE_APP_API_ADDRESS, // API address
        changeOrigin: true,
        ws: true,
        pathRewrite: {
          ['^' + process.env.VUE_APP_BASE_API]: '/api',
        }
      }
    }
  }
}
request.js:
import axios from 'axios'
import qs from 'qs'
// import {app} from '../main.js'

console.log(process.env)

/****** Create an axios instance ******/
const request = axios.create({
  baseURL: process.env.VUE_APP_API_ADDRESS,
  timeout: 5000
})

// some code of interceptors

export default request;
api.js:
import request from './request.js'

var api = process.env.VUE_APP_BASE_API // '/api'
export function getLastBlogs() {
  return request({
    url: api + '/blog/lastBlogs',
    method: 'get'
  })
}
I call the API in a Vue file like this:
<script>
import { getLastBlogs } from '@/axios/api.js'

export default {
  name: 'LastBlogs',
  data() {
    return {
      blogs: ["AAAA", "BBBB"]
    }
  },
  created: async function () {
    let res = await getLastBlogs();
    this.blogs = res.data
  }
}
</script>
I get a 404 in the terminal:
error: xhr.js:160 GET http://localhost:8080/blog/dev-api/blog/lastBlogs 404
The back-end API itself is fine: when I put http://localhost:8080/blog/api/blog/lastBlogs in the browser, I get this:
{"code":"0","msg":"操作成功","data":[{"id":1,"blogUser":1,"blogTitle":"test1","blogDescription":"for test","blogContent":"ABABABABAB","blogCreated":"2020-09-20T10:44:01","blogStatus":0},{"id":2,"blogUser":1,"blogTitle":"test2","blogDescription":"for test","blogContent":"BABABABABA","blogCreated":"2020-08-20T10:44:01","blogStatus":0}]}
What should I do? Thanks.
So you are configuring the Vue CLI dev server (running on port 8001) to proxy all requests for /dev-api to http://localhost:8080/blog (the server), but at the same time configuring Axios to use baseURL: process.env.VUE_APP_API_ADDRESS, which means Axios sends all requests directly to your server instead of through the proxy.
Just remove baseURL: process.env.VUE_APP_API_ADDRESS from the Axios config.
I also believe your pathRewrite option in the proxy config is incorrect:
You dispatch a request to /dev-api/blog/lastBlogs
The request goes to the Vue dev server (localhost:8001)
The proxy translates /dev-api into http://localhost:8080/blog/dev-api, i.e. http://localhost:8080/blog/dev-api/blog/lastBlogs
pathRewrite is applied to the whole path part of the URL, /blog/dev-api/blog/lastBlogs, so the RegEx ^/dev-api will not match
Try changing pathRewrite to [process.env.VUE_APP_BASE_API]: '/api'
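The mismatch described above can be reproduced in plain JavaScript. Here is a sketch of the rewrite logic as the answer describes it, assuming VUE_APP_BASE_API is '/dev-api' in development:

```javascript
// Why the anchored '^/dev-api' rule never rewrites anything in this setup:
// the path being rewritten already carries the target's '/blog' prefix.
const base = '/dev-api'; // assumed value of VUE_APP_BASE_API in development
const path = '/blog/dev-api/blog/lastBlogs';

const anchored = new RegExp('^' + base); // original rule: ^/dev-api
console.log(anchored.test(path)); // false, so no rewrite ever happens

const unanchored = new RegExp(base); // suggested rule: /dev-api
console.log(path.replace(unanchored, '/api')); // '/blog/api/blog/lastBlogs'
```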
Anyone have any ideas why I'm getting "Access Denied" when trying to put an object into S3 inside a Lambda function? I have the serverless AWS user with AdministratorAccess and allow access to the S3 resource inside serverless.yml:
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:PutObject
    Resource: "arn:aws:s3:::*"
Edit - here are the files
serverless.yml
service: testtest
app: testtest
org: workx

provider:
  name: aws
  runtime: nodejs12.x
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
      Resource: "arn:aws:s3:::*/*"

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: users/create
          method: get
handler.js
'use strict';
const AWS = require('aws-sdk');

// get reference to S3 client
const S3 = new AWS.S3();

// Upload the content to S3 and allow download
async function uploadToS3(content) {
  console.log('going to upload to s3!');
  const Bucket = 'mtest-exports';
  const key = 'testtest.csv';

  try {
    const destparams = {
      Bucket,
      Key: key,
      Body: content,
      ContentType: "text/csv",
    };

    console.log('going to put object', destparams);
    const putResult = await S3.putObject(destparams).promise();
    return putResult;
  } catch (error) {
    console.log(error);
    throw error;
  }
}

module.exports.hello = async (event) => {
  const result = await uploadToS3('hello world');
  return {
    statusCode: 200,
    body: JSON.stringify(result),
  };
};
I was using the TypeScript plugin @serverless/typescript. I used it to create a Lambda function that resizes images uploaded to S3 and does some content moderation.
Here is the content of the serverless.ts file:
import type { AWS } from '@serverless/typescript';

import resizeImageLambda from '@functions/resizeImageLambda';

const serverlessConfiguration: AWS = {
  service: 'myservice-image-resize',
  frameworkVersion: '3',
  plugins: ['serverless-esbuild'],
  provider: {
    name: 'aws',
    stage: 'dev',
    region: 'us-east-1',
    profile: 'myProjectProfile', // reference to your local AWS profile created by the serverless config command
    // architecture: 'arm64', // to support Lambda w/ Graviton
    iam: {
      role: {
        statements: [
          {
            Effect: 'Allow',
            Action: [
              's3:GetObject',
              's3:PutObject',
              's3:PutObjectAcl',
              's3:ListBucket',
              'rekognition:DetectModerationLabels'
            ],
            Resource: [
              'arn:aws:s3:::myBucket/*',
              'arn:aws:s3:::myBucket',
              'arn:aws:s3:::/*',
              '*'
            ]
          },
          {
            Effect: 'Allow',
            Action: [
              's3:ListBucket',
              'rekognition:DetectModerationLabels'
            ],
            Resource: ['arn:aws:s3:::myBucket']
          }
        ]
      }
    },
    // architecture: 'arm64',
    runtime: 'nodejs16.x',
    environment: {
      AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
      NODE_OPTIONS: '--enable-source-maps --stack-trace-limit=1000',
      SOURCE_BUCKET_NAME:
        '${self:custom.myEnvironment.SOURCE_BUCKET_NAME.${self:custom.myStage}}',
      DESTINATION_BUCKET_NAME:
        '${self:custom.myEnvironment.DESTINATION_BUCKET_NAME.${self:custom.myStage}}'
    }
  },
  // import the function via paths
  functions: { resizeImageLambda },
  package: { individually: true },
  custom: {
    esbuild: {
      bundle: true,
      minify: false,
      sourcemap: true,
      exclude: ['aws-sdk'],
      target: 'node16',
      define: { 'require.resolve': undefined },
      platform: 'node',
      concurrency: 10,
      external: ['sharp'],
      packagerOptions: {
        scripts:
          'rm -rf node_modules/sharp && SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install --arch=x64 --platform=linux --libc=glibc sharp'
      }
    },
    myEnvironment: {
      SOURCE_BUCKET_NAME: {
        dev: 'myBucket',
        prod: 'myBucket-prod'
      },
      DESTINATION_BUCKET_NAME: {
        dev: 'myBucket',
        prod: 'myBucketProd'
      }
    },
    myStage: '${opt:stage, self:provider.stage}'
  }
};

module.exports = serverlessConfiguration;
resizeImageLambda.ts
/* eslint-disable no-template-curly-in-string */
// import { Config } from './config';

export const handlerPath = (context: string) =>
  `${context.split(process.cwd())[1].substring(1).replace(/\\/g, '/')}`;

export default {
  handler: `${handlerPath(__dirname)}/handler.main`,
  events: [
    {
      s3: {
        bucket: '${self:custom.myEnvironment.SOURCE_BUCKET_NAME.${self:custom.myStage}}',
        event: 's3:ObjectCreated:*',
        existing: true,
        forceDeploy: true // for existing buckets
      }
    }
  ],
  timeout: 15 * 60, // 15 min
  memorySize: 2048
};
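handlerPath just converts the compiled file's absolute directory into a path relative to the project root (and normalizes Windows backslashes). A quick check of what it computes; the directory below is simulated:

```javascript
// What handlerPath does: strip the project root from an absolute directory.
const handlerPath = (context) =>
  `${context.split(process.cwd())[1].substring(1).replace(/\\/g, '/')}`;

// Simulated compiled-function directory under the project root:
const fakeDir = `${process.cwd()}/src/functions/resizeImageLambda`;
console.log(handlerPath(fakeDir)); // 'src/functions/resizeImageLambda'
```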
I remember there were a few issues when I wanted to connect it to existing buckets (created outside the Serverless Framework), such as the IAM policy not being re-created / updated properly (see the forceDeploy and existing parameters in the function.events[0].s3 properties in the resizeImageLambda.ts file).
Turns out I was an idiot: I had the custom config in the wrong place, which ruined the serverless.yml file!