Reference a Cognito user pool created by SST (Serverless Stack) in serverless.yml - serverless-framework

I have quite a big app built with Serverless, and now we are trying out Serverless Stack (SST). I am trying to reference a user pool created by SST in a serverless.yml function. Is that possible? Below are the steps I've tried:
I have created a user pool:
import * as cdk from '@aws-cdk/core'
import * as cognito from '@aws-cdk/aws-cognito'
import * as sst from '@serverless-stack/resources'

export default class UserServiceStack extends sst.Stack {
  constructor(scope: cdk.Construct, id: string, props: sst.StackProps = {}) {
    super(scope, id, props)

    const userPool = new cognito.UserPool(this, 'userPool', {
      signInAliases: {
        email: true,
        phone: true,
      },
      autoVerify: {
        email: true,
        phone: true,
      },
      passwordPolicy: {
        minLength: 8,
        requireDigits: false,
        requireLowercase: false,
        requireSymbols: false,
        requireUppercase: false,
      },
      signInCaseSensitive: false,
      selfSignUpEnabled: true,
    })

    new cdk.CfnOutput(this, 'UserPoolId', {
      value: userPool.userPoolId,
    })

    const userPoolClient = new cognito.UserPoolClient(this, 'userPoolClient', {
      userPool,
      authFlows: {
        adminUserPassword: true,
        userPassword: true,
      },
    })

    new cdk.CfnOutput(this, 'UserPoolClientId', {
      value: userPoolClient.userPoolClientId,
    })
  }
}
and I want to update my post-confirmation trigger defined in serverless.yml:
...
createUser:
  handler: createUser.default
  events:
    - cognitoUserPool:
        pool: !ImportValue '${self:custom.sstApp}...' # what to put here?
        trigger: PostConfirmation
        existing: true

Figured it out.
First: how to use CDK output variables in serverless.yml.
Export them into a file:
AWS_PROFILE=<profile-name> npx sst deploy --outputs-file ./exports.json
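This writes the stack outputs to a JSON file keyed by stack name. Assuming a dev stage and an app named my-sst-app (hypothetical names), exports.json would look roughly like:
{
  "dev-my-sst-app-UserServiceStack": {
    "UserPoolId": "eu-west-1_XXXXXXXXX",
    "UserPoolClientId": "xxxxxxxxxxxxxxxxxxxxxxxxxx"
  }
}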
and in serverless.yml you can reference them like so:
...
createUser:
  handler: createUser.default
  events:
    - cognitoUserPool:
        pool: ${file(../../infrastructure/exports.json):${self:custom.sstApp}-UserServiceStack.userPoolName}
        trigger: PostConfirmation
        existing: true
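(This reference assumes custom.sstApp is defined in serverless.yml to match the deployed SST app's stage-prefixed name; a hypothetical definition:)
custom:
  stage: ${opt:stage, self:provider.stage}
  sstApp: ${self:custom.stage}-my-sst-app # hypothetical app name; must match your SST app and stage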
Second: Serverless is set up such that you have to pass the user pool name, not the user pool ID, so I had to generate a user pool name and output it:
import * as uuid from 'uuid'
...
const userPoolName = uuid.v4()
const userPool = new cognito.UserPool(this, 'userPool', {
  userPoolName,
  ...
})
...
// eslint-disable-next-line no-new
new cdk.CfnOutput(this, 'userPoolName', {
  value: userPoolName,
})
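The SourceArn reference in the third step below assumes the pool ARN is exported as well; a matching output would be:
// also export the pool ARN, referenced as SourceArn in serverless.yml
new cdk.CfnOutput(this, 'userPoolArn', {
  value: userPool.userPoolArn,
})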
Third: to avoid an AccessDeniedException when Cognito invokes the Lambda trigger, you need to add the following to your resources:
- Resources:
    OnCognitoSignupPermission:
      Type: 'AWS::Lambda::Permission'
      Properties:
        Action: "lambda:InvokeFunction"
        FunctionName:
          Fn::GetAtt: ["CreateUserLambdaFunction", "Arn"] # the name must be the uppercased name of your lambda with "LambdaFunction" appended
        Principal: "cognito-idp.amazonaws.com"
        SourceArn: ${file(../../infrastructure/exports.json):${self:custom.sstApp}-UserServiceStack.userPoolArn}

Related

Problem connecting Sequelize on Express with PostgreSQL

I have a little problem connecting Sequelize in Express:
TypeError: sequelize.sync is not a function
I have my database set up in Docker:
database:
  image: postgres
  container_name: database
  restart: unless-stopped
  environment:
    POSTGRES_PASSWORD: secret
    POSTGRES_DB: express_database
  volumes:
    - ./postgres-data:/var/lib/postgresql/data
  ports:
    - "5432:5432"
And this is my Sequelize definition:
import Sequelize from 'sequelize'

const sequelize = new Sequelize(
  'express_database',
  'postgres',
  'secret',
  {
    host: 'database',
    dialect: 'postgres'
  }
);

export default { sequelize }
And in index.js
import sequelize from './Util/database.js'

(async () => {
  try {
    await sequelize.sync({
      force: false
    })
  } catch (e) {
    console.log(e)
  }
})()
And my user model
import Sequelize from 'sequelize';
import db from '../Util/database';
const User = db.define('users', {
id: {
type: Sequelize.INTEGER,
autoIncrement: true,
allowNull: false,
primaryKey: true
},
first_name: {
type: Sequelize.STRING,
allowNull: false
}
});
export default { User }
I struggled all day and searched for information, but I can't work out what the problem might be. Do you know how I can solve this?
Is it a problem that I wrote the code with ES6?
export default { sequelize } exports an object with only one key, named 'sequelize', containing what you actually want.
Replace it with export default sequelize. Otherwise, you'd be forced to call sequelize.sequelize.sync() when you import it ^^
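For reference, a corrected Util/database.js under that fix (same connection settings as above) would be:
// Util/database.js: export the Sequelize instance itself, not an object wrapping it
import Sequelize from 'sequelize';

const sequelize = new Sequelize('express_database', 'postgres', 'secret', {
  host: 'database',
  dialect: 'postgres',
});

export default sequelize;
With this, await sequelize.sync(...) in index.js and db.define(...) in the model file both operate on the actual instance (note the model's own export default { User } has the same wrapping quirk for its consumers).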

How can I set up two AWS Lambda functions, with one firing an event on EventBridge and the other reacting to it?

I'm using the Serverless Framework to try and test EventBridge.
The documentation is a little sparse, but for my test I would like to have two Lambda functions created: the first one publishes an event, the second consumes it.
Here is my YAML:
service: events
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: '20201221'

functions:
  vehicle:
    handler: handler.vehicle
    events:
      - httpApi:
          path: /vehicle
          method: '*'
  bundle:
    handler: handler.bundle
    events:
      - httpApi:
          path: /bundle
          method: '*'
      - eventBridge:
          eventBus: vehicle-bus
          pattern:
            source:
              - aos.vehicle.upload
            detail-type:
              - VehicleUpload
and my handler.js
"use strict";
const AWS = require('aws-sdk');
module.exports.vehicle = async (event) => {
const eventBridge = new AWS.EventBridge({ region: 'us-east-1' });
const vrm = 'WR17MMN'
return eventBridge.putEvents({
Entries: [
{
EventBusName: 'veihcle-bus',
Source: 'aos.vehicle.upload',
DetailType: 'VehicleUpload',
Detail: `{ "Registration": "${vrm}" }`,
},
]
}).promise()
};
module.exports.bundle = async (event) => {
return {
statusCode: 200,
body: JSON.stringify(
{
message: "BUNDLE",
input: event,
aos: "First test OK",
},
null,
2
),
};
};
(I realise I can't just return that from the Lambda, but it also needs to be an endpoint. If I make the function body of bundle empty, I still get a server error.)
What am I missing?
So you need this minimal setup:
org: myOrg
app: my-events
service: event-bridge-serverless

provider:
  name: aws
  runtime: nodejs10.x
  region: eu-west-1
  lambdaHashingVersion: 20201221
  environment:
    DYNAMODB_TABLE: ${self:service}-dev
  eventBridge:
    useCloudFormation: true
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "events:PutEvents"
      Resource: "*"

functions:
  asset:
    handler: handler.asset
    events:
      - eventBridge:
          eventBus: my-events
          pattern:
            source:
              - my.event
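For completeness, a publisher matching this setup might look like the following sketch (hypothetical handler; the bus name and source must match the eventBridge config above):
'use strict';
const AWS = require('aws-sdk');

// publish a test event to the my-events bus with source my.event
module.exports.publish = async () => {
  const eventBridge = new AWS.EventBridge();
  return eventBridge.putEvents({
    Entries: [
      {
        EventBusName: 'my-events', // must match the eventBus configured above
        Source: 'my.event',        // must match the pattern source
        DetailType: 'Test',
        Detail: JSON.stringify({ hello: 'world' }),
      },
    ],
  }).promise();
};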

Keystone.js 6 access denied adminMeta

I want to seed data onConnect, but I get access denied when using this query:
{
  keystone: keystone {
    adminMeta {
      lists {
        key
        description
        label
        singular
        plural
        path
        fields {
          path
        }
      }
    }
  }
}
I get this error even though I am using sudo, context.sudo().graphql.raw:
[
  Error: Access denied
      at /Users/sidalitemkit/work/web/yet/wirxe/wirxe-app/node_modules/@keystone-next/admin-ui/system/dist/admin-ui.cjs.dev.js:552:19
      at processTicksAndRejections (node:internal/process/task_queues:94:5)
      at async Promise.all (index 0)
      at async Promise.all (index 0) {
    locations: [ [Object] ],
    path: [ 'keystone', 'adminMeta' ]
  }
]
Here is my config:
export default auth.withAuth(
  config({
    db: {
      adapter: 'prisma_postgresql',
      url:
        'postgres://admin:aj093bf7l6jdx5hm@wirxe-app-database-do-user-9126376-0.b.db.ondigitalocean.com:25061/wirxepool?schema=public&pgbouncer=true&sslmode=require',
      onConnect: initialiseData,
    },
    ui: {
      isAccessAllowed: (context) => !!context.session?.data,
    },
    lists,
    session: withItemData(
      statelessSessions({
        maxAge: sessionMaxAge,
        secret: sessionSecret,
      }),
      { User: 'email' },
    ),
  }),
);
I figured out that when I do:
isAccessAllowed: (context) => true
it works.
Any advice here?
context.sudo() disables access control, so there could be some issue with your query. isAccessAllowed: (context) => true relates to the Admin UI, not to the backend implementation of GraphQL. This could be a bug; please open one in the repo, and they should be able to fix it quickly.
I do not see a sample initialiseData to try myself. Also, the GraphQL API is designed such that if you try to access a non-existent item, it may give you an access-denied error even though there is no access control (all access set to true).
There is also another API which makes creating the initial items easier. You should use the new list API, available as context.sudo().lists.<ListName>.createOne or createMany, like this:
const user = await context.sudo().lists.User.createOne({
  data: {
    name: 'Alice',
    posts: { create: [{ title: 'My first post' }] },
  },
  query: 'id name posts { id title }',
});
or
const users = await context.sudo().lists.User.createMany({
  data: [
    {
      data: {
        name: 'Alice',
        posts: { create: [{ title: 'Alices first post' }] },
      },
    },
    {
      data: {
        name: 'Bob',
        posts: { create: [{ title: 'Bobs first post' }] },
      },
    },
  ],
  query: 'id name posts { id title }',
});
For more details, see the List Items API and Database Items API in their preview documentation.
You can find a working example in the keystonejs repository (the blog example).
You have to await and pass context to the initialiseData() method; the onConnect hook already provides this context for you.
Also, you can look for an argument like '--seed-data' so it is only run once, and run the code as:
keystone --seed-data
export default auth.withAuth(
  config({
    db: {
      adapter: 'prisma_postgresql',
      url:
        'postgres://admin:aj093bf7l6jdx5hm@wirxe-app-database-do-user-9126376-0.b.db.ondigitalocean.com:25061/wirxepool?schema=public&pgbouncer=true&sslmode=require',
      async onConnect(context) {
        if (process.argv.includes('--seed-data')) {
          await initialiseData(context);
        }
      },
    },
    ui: {
      isAccessAllowed: (context) => !!context.session?.data,
    },
    lists,
    session: withItemData(
      statelessSessions({
        maxAge: sessionMaxAge,
        secret: sessionSecret,
      }),
      { User: 'email' },
    ),
  }),
);

Serverless framework lambda function access denied to S3

Anyone have any ideas why I'm getting "Access Denied" when trying to put an object into S3 inside a Lambda function? I have the serverless AWS user with AdministratorAccess, and I allow access to the S3 resource inside serverless.yml:
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:PutObject
    Resource: "arn:aws:s3:::*"
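(Note: s3:PutObject is an object-level action, so the policy resource needs an object ARN with a /* path rather than a bare bucket ARN; scoped to the bucket used in the handler below, that would be, for example:)
Resource: "arn:aws:s3:::mtest-exports/*" # object-level actions need an object path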
Edit - here are the files
serverless.yml
service: testtest
app: testtest
org: workx

provider:
  name: aws
  runtime: nodejs12.x
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
      Resource: "arn:aws:s3:::*/*"

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: users/create
          method: get
handler.js
'use strict';
const AWS = require('aws-sdk');

// get reference to S3 client
const S3 = new AWS.S3();

// Upload the content to S3 and allow download
async function uploadToS3(content) {
  console.log('going to upload to s3!');
  const Bucket = 'mtest-exports';
  const key = 'testtest.csv';
  try {
    const destparams = {
      Bucket,
      Key: key,
      Body: content,
      ContentType: "text/csv",
    };
    console.log('going to put object', destparams);
    const putResult = await S3.putObject(destparams).promise();
    return putResult;
  } catch (error) {
    console.log(error);
    throw error;
  }
}

module.exports.hello = async event => {
  const result = await uploadToS3('hello world');
  return {
    statusCode: 200,
    body: JSON.stringify(result),
  };
};
I was using the TypeScript plugin, @serverless/typescript. I used it to create a Lambda function that resizes images uploaded to S3 and does some content moderation.
Here is the content of the serverless.ts file:
import type { AWS } from '@serverless/typescript';

import resizeImageLambda from '@functions/resizeImageLambda';

const serverlessConfiguration: AWS = {
  service: 'myservice-image-resize',
  frameworkVersion: '3',
  plugins: ['serverless-esbuild'],
  provider: {
    name: 'aws',
    stage: 'dev',
    region: 'us-east-1',
    profile: 'myProjectProfile', // reference to your local AWS profile created by serverless config command
    // architecture: 'arm64', // to support Lambda w/ graviton
    iam: {
      role: {
        statements: [
          {
            Effect: 'Allow',
            Action: [
              's3:GetObject',
              's3:PutObject',
              's3:PutObjectAcl',
              's3:ListBucket',
              'rekognition:DetectModerationLabels'
            ],
            Resource: [
              'arn:aws:s3:::myBucket/*',
              'arn:aws:s3:::myBucket',
              'arn:aws:s3:::/*',
              '*'
            ]
          },
          {
            Effect: 'Allow',
            Action: [
              's3:ListBucket',
              'rekognition:DetectModerationLabels'
            ],
            Resource: ['arn:aws:s3:::myBucket']
          }
        ]
      }
    },
    // architecture: 'arm64',
    runtime: 'nodejs16.x',
    environment: {
      AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
      NODE_OPTIONS: '--enable-source-maps --stack-trace-limit=1000',
      SOURCE_BUCKET_NAME:
        '${self:custom.myEnvironment.SOURCE_BUCKET_NAME.${self:custom.myStage}}',
      DESTINATION_BUCKET_NAME:
        '${self:custom.myEnvironment.DESTINATION_BUCKET_NAME.${self:custom.myStage}}'
    }
  },
  // import the function via paths
  functions: { resizeImageLambda },
  package: { individually: true },
  custom: {
    esbuild: {
      bundle: true,
      minify: false,
      sourcemap: true,
      exclude: ['aws-sdk'],
      target: 'node16',
      define: { 'require.resolve': undefined },
      platform: 'node',
      concurrency: 10,
      external: ['sharp'],
      packagerOptions: {
        scripts:
          'rm -rf node_modules/sharp && SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install --arch=x64 --platform=linux --libc=glibc sharp'
      }
    },
    myEnvironment: {
      SOURCE_BUCKET_NAME: {
        dev: 'myBucket',
        prod: 'myBucket-prod'
      },
      DESTINATION_BUCKET_NAME: {
        dev: 'myBucket',
        prod: 'myBucketProd'
      }
    },
    myStage: '${opt:stage, self:provider.stage}'
  }
};

module.exports = serverlessConfiguration;
resizeImageLambda.ts
/* eslint-disable no-template-curly-in-string */
// import { Config } from './config';

export const handlerPath = (context: string) =>
  `${context.split(process.cwd())[1].substring(1).replace(/\\/g, '/')}`;

export default {
  handler: `${handlerPath(__dirname)}/handler.main`,
  events: [
    {
      s3: {
        bucket: '${self:custom.myEnvironment.SOURCE_BUCKET_NAME.${self:custom.myStage}}',
        event: 's3:ObjectCreated:*',
        existing: true,
        forceDeploy: true // for existing buckets
      }
    }
  ],
  timeout: 15 * 60, // 15 min
  memorySize: 2048
};
I remember there were a few issues when I wanted to connect it to existing buckets (created outside the Serverless Framework), such as the IAM policy not being re-created / updated properly (see the forceDeploy and existing parameters in the function.events[0].s3 properties in the resizeImageLambda.ts file).
Turns out I was an idiot: I had the custom config in the wrong place, which ruined the serverless.yml file!

How to document a REST API using AWS CDK

I'm creating a REST API using AWS CDK version 1.22, and I would like to document my API using CDK as well, but I do not see any documentation generated for my API after deployment.
I've dived into the AWS docs, CDK examples, and the CDK reference, but I could not find concrete examples that help me understand how to do it.
Here is my code:
const app = new App();
const api = new APIStack(app, 'APIStack', { env }); // basic api gateway

// API Resources
const resourceProps: APIResourceProps = {
  gateway: api.gateway,
}

// dummy endpoint with some HTTP methods
const siteResource = new APISiteStack(app, 'APISiteStack', {
  env,
  ...resourceProps
});

const siteResourceDocs = new APISiteDocs(app, 'APISiteDocs', {
  env,
  ...resourceProps,
});

// APISiteDocs is defined as follows:
class APISiteDocs extends Stack {
  constructor(scope: Construct, id: string, props: APIResourceProps) {
    super(scope, id, props);

    new CfnDocumentationVersion(this, 'apiDocsVersion', {
      restApiId: props.gateway.restApiId,
      documentationVersion: config.app.name(`API-${config.gateway.api.version}`),
      description: 'Spare-It API Documentation',
    });

    new CfnDocumentationPart(this, 'siteDocs', {
      restApiId: props.gateway.restApiId,
      location: {
        type: 'RESOURCE',
        method: '*',
        path: APISiteStack.apiBasePath,
        statusCode: '405',
      },
      properties: `
        {
          "status": "error",
          "code": 405,
          "message": "Method Not Allowed"
        }
      `,
    });
  }
}
Any help/hint is appreciated, Thanks.
I have tested with CDK 1.31, and it is possible to use the CDK's default deployment option and also add a documentation version to the stage. I have used deployOptions.documentationVersion in the REST API definition to set the version identifier of the API documentation:
import * as cdk from '@aws-cdk/core';
import * as apigateway from "@aws-cdk/aws-apigateway";
import { CfnDocumentationPart, CfnDocumentationVersion } from "@aws-cdk/aws-apigateway";

export class CdkSftpStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const documentVersion = "v1";

    // create the API
    const api = new apigateway.RestApi(this, 'books-api', {
      deploy: true,
      deployOptions: {
        documentationVersion: documentVersion
      }
    });

    // create GET method on /books resource
    const books = api.root.addResource('books');
    books.addMethod('GET');

    // create documentation for GET method
    new CfnDocumentationPart(this, 'doc-part1', {
      location: {
        type: 'METHOD',
        method: 'GET',
        path: books.path
      },
      properties: JSON.stringify({
        "status": "successful",
        "code": 200,
        "message": "Get method was successful"
      }),
      restApiId: api.restApiId
    });

    new CfnDocumentationVersion(this, 'docVersion1', {
      documentationVersion: documentVersion,
      restApiId: api.restApiId,
      description: 'this is a test of documentation'
    });
  }
}
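Once deployed, you can check that the documentation parts were published, for example with the AWS CLI (substituting your REST API id):
aws apigateway get-documentation-parts --rest-api-id <api-id>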
From what I can gather, if you use the CDK's default deployment options, which create the stage and deployment on your behalf, it won't be possible to attach a documentation version to the stage.
Instead, the solution is to set the REST API's options object to deploy: false and define the stage and deployment manually.
stack.ts code
import * as cdk from '@aws-cdk/core';
import * as apigateway from '@aws-cdk/aws-apigateway';
import { Stage, Deployment, CfnDocumentationPart, CfnDocumentationVersion, CfnDeployment } from '@aws-cdk/aws-apigateway';

export class StackoverflowHowToDocumentRestApiUsingAwsCdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // create the API; don't rely on CFN's automatic deployment, because we need to
    // make our own deployment to set the documentation we create
    const api = new apigateway.RestApi(this, 'books-api', {
      deploy: false
    });

    // create GET method on /books resource
    const books = api.root.addResource('books');
    books.addMethod('GET');

    // create documentation for GET method
    const docpart = new CfnDocumentationPart(this, 'doc-part1', {
      location: {
        type: 'METHOD',
        method: 'GET',
        path: books.path
      },
      properties: JSON.stringify({
        "status": "successful",
        "code": 200,
        "message": "Get method was successful"
      }),
      restApiId: api.restApiId
    });

    const doc = new CfnDocumentationVersion(this, 'docVersion1', {
      documentationVersion: 'version1',
      restApiId: api.restApiId,
      description: 'this is a test of documentation'
    });

    // not sure if this is necessary but it made sense to me
    doc.addDependsOn(docpart);

    const deployment = api.latestDeployment ? api.latestDeployment : new Deployment(this, 'newDeployment', {
      api: api,
      description: 'new deployment, API Gateway did not make one'
    });

    // create stage of api with documentation version
    const stage = new Stage(this, 'books-api-stage1', {
      deployment: deployment,
      documentationVersion: doc.documentationVersion,
      stageName: 'somethingOtherThanProd'
    });
  }
}
OUTPUT: (screenshot of the generated documentation omitted)
Created a feature request for this option here.
I had the exact same problem. The CfnDocumentationVersion call has to occur after you create all of your CfnDocumentationPart resources. Using your code as an example, it should look something like this:
class APISiteDocs extends Stack {
  constructor(scope: Construct, id: string, props: APIResourceProps) {
    super(scope, id, props);

    new CfnDocumentationPart(this, 'siteDocs', {
      restApiId: props.gateway.restApiId,
      location: {
        type: 'RESOURCE',
        method: '*',
        path: APISiteStack.apiBasePath,
        statusCode: '405',
      },
      properties: JSON.stringify({
        "status": "error",
        "code": 405,
        "message": "Method Not Allowed"
      }),
    });

    new CfnDocumentationVersion(this, 'apiDocsVersion', {
      restApiId: props.gateway.restApiId,
      documentationVersion: config.app.name(`API-${config.gateway.api.version}`),
      description: 'Spare-It API Documentation',
    });
  }
}