Serverless framework lambda function access denied to S3

Does anyone have any ideas why I'm getting "Access Denied" when trying to put an object into S3 inside a Lambda function? I have the serverless AWS user set up with AdministratorAccess and allow access to the S3 resource inside serverless.yml:
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:PutObject
    Resource: "arn:aws:s3:::*"
Edit - here are the files
serverless.yml
service: testtest
app: testtest
org: workx

provider:
  name: aws
  runtime: nodejs12.x
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
      Resource: "arn:aws:s3:::*/*"

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: users/create
          method: get
handler.js
'use strict';

const AWS = require('aws-sdk');

// get reference to S3 client
const S3 = new AWS.S3();

// Upload the content to S3 and allow download
async function uploadToS3(content) {
  console.log('going to upload to s3!');
  const Bucket = 'mtest-exports';
  const key = 'testtest.csv';
  try {
    const destparams = {
      Bucket,
      Key: key,
      Body: content,
      ContentType: "text/csv",
    };
    console.log('going to put object', destparams);
    const putResult = await S3.putObject(destparams).promise();
    return putResult;
  } catch (error) {
    console.log(error);
    throw error;
  }
}

module.exports.hello = async event => {
  const result = await uploadToS3('hello world');
  return {
    statusCode: 200,
    body: JSON.stringify(result),
  };
};
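One quick way to tell whether the denial comes from the function's role or from something else is to invoke the handler locally with your own credentials. A minimal sketch, assuming the file above is saved as handler.js next to it:

// local-test.js: a minimal harness to run the handler outside Lambda
// (assumes handler.js above and local AWS credentials with S3 access)
const { hello } = require('./handler');

hello({})
  .then((result) => console.log('put succeeded:', result))
  .catch((error) => console.error('put failed:', error.code, error.message));

If this succeeds locally but fails when deployed, the problem is in the function's IAM role rather than in the code.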

I was using the TypeScript plugin (@serverless/typescript). I used it to create a Lambda function that resizes images uploaded to S3 and does some content moderation.
Here is the content of the serverless.ts file:
import type { AWS } from '@serverless/typescript';

import resizeImageLambda from '@functions/resizeImageLambda';

const serverlessConfiguration: AWS = {
  service: 'myservice-image-resize',
  frameworkVersion: '3',
  plugins: ['serverless-esbuild'],
  provider: {
    name: 'aws',
    stage: 'dev',
    region: 'us-east-1',
    profile: 'myProjectProfile', // reference to your local AWS profile created by the serverless config command
    // architecture: 'arm64', // to support Lambda w/ Graviton
    iam: {
      role: {
        statements: [
          {
            Effect: 'Allow',
            Action: [
              's3:GetObject',
              's3:PutObject',
              's3:PutObjectAcl',
              's3:ListBucket',
              'rekognition:DetectModerationLabels'
            ],
            Resource: [
              'arn:aws:s3:::myBucket/*',
              'arn:aws:s3:::myBucket',
              'arn:aws:s3:::/*',
              '*'
            ]
          },
          {
            Effect: 'Allow',
            Action: [
              's3:ListBucket',
              'rekognition:DetectModerationLabels'
            ],
            Resource: ['arn:aws:s3:::myBucket']
          }
        ]
      }
    },
    // architecture: 'arm64',
    runtime: 'nodejs16.x',
    environment: {
      AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
      NODE_OPTIONS: '--enable-source-maps --stack-trace-limit=1000',
      SOURCE_BUCKET_NAME:
        '${self:custom.myEnvironment.SOURCE_BUCKET_NAME.${self:custom.myStage}}',
      DESTINATION_BUCKET_NAME:
        '${self:custom.myEnvironment.DESTINATION_BUCKET_NAME.${self:custom.myStage}}'
    }
  },
  // import the function via paths
  functions: { resizeImageLambda },
  package: { individually: true },
  custom: {
    esbuild: {
      bundle: true,
      minify: false,
      sourcemap: true,
      exclude: ['aws-sdk'],
      target: 'node16',
      define: { 'require.resolve': undefined },
      platform: 'node',
      concurrency: 10,
      external: ['sharp'],
      packagerOptions: {
        scripts:
          'rm -rf node_modules/sharp && SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install --arch=x64 --platform=linux --libc=glibc sharp'
      }
    },
    myEnvironment: {
      SOURCE_BUCKET_NAME: {
        dev: 'myBucket',
        prod: 'myBucket-prod'
      },
      DESTINATION_BUCKET_NAME: {
        dev: 'myBucket',
        prod: 'myBucketProd'
      }
    },
    myStage: '${opt:stage, self:provider.stage}'
  }
};

module.exports = serverlessConfiguration;
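Note that the '@functions/resizeImageLambda' import relies on a TypeScript path alias. In the standard aws-nodejs-typescript template that alias lives in tsconfig.paths.json, roughly like this (check your own template version):

{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@functions/*": ["src/functions/*"]
    }
  }
}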
resizeImageLambda.ts
/* eslint-disable no-template-curly-in-string */
// import { Config } from './config';

export const handlerPath = (context: string) =>
  `${context.split(process.cwd())[1].substring(1).replace(/\\/g, '/')}`;

export default {
  handler: `${handlerPath(__dirname)}/handler.main`,
  events: [
    {
      s3: {
        bucket: '${self:custom.myEnvironment.SOURCE_BUCKET_NAME.${self:custom.myStage}}',
        event: 's3:ObjectCreated:*',
        existing: true,
        forceDeploy: true // for existing buckets
      }
    }
  ],
  timeout: 15 * 60, // 15 min
  memorySize: 2048
};
I remember there were a few issues when I wanted to connect it to existing buckets (created outside the Serverless Framework), such as the IAM policy not being re-created / updated properly (see the forceDeploy and existing parameters in the functions.events[0].s3 properties in the resizeImageLambda.ts file).
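The post doesn't include the handler itself; for context, here is a hypothetical sketch of what handler.main might look like under the permissions above (the resize width, moderation threshold, and content type are assumptions, not from the original):

// handler.js: hypothetical sketch of the resize + moderation flow
const AWS = require('aws-sdk');
const sharp = require('sharp');

const s3 = new AWS.S3();
const rekognition = new AWS.Rekognition();

module.exports.main = async (event) => {
  for (const record of event.Records) {
    const Bucket = record.s3.bucket.name;
    const Key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    // fetch the original image from the source bucket
    const { Body } = await s3.getObject({ Bucket, Key }).promise();

    // content moderation first; 80 is an assumed confidence threshold
    const moderation = await rekognition
      .detectModerationLabels({ Image: { Bytes: Body }, MinConfidence: 80 })
      .promise();
    if (moderation.ModerationLabels.length > 0) {
      console.log('skipping flagged image', Key, moderation.ModerationLabels);
      continue;
    }

    // resize to an assumed width and write to the destination bucket
    const resized = await sharp(Body).resize({ width: 1024 }).toBuffer();
    await s3.putObject({
      Bucket: process.env.DESTINATION_BUCKET_NAME,
      Key,
      Body: resized,
      ContentType: 'image/jpeg', // assumed; derive from the key in real code
    }).promise();
  }
};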

Turns out I was an idiot and had the custom config in the wrong place, which ruined the serverless.yml file!

Related

Etherscan has no support for network sepolia with chain id 11155111

I am trying to verify my deployed contract from Truffle and getting the error "Etherscan has no support for network sepolia with chain id 11155111". I am working with Etherscan and I deployed my contract on the Sepolia testnet.
How can I solve this problem?
My truffle-config.js
const HDWalletProvider = require("@truffle/hdwallet-provider");
const apikeys = require("./chains/apikeys");
const keys = require("./keys.json");

module.exports = {
  plugins: ["truffle-plugin-verify"],
  api_keys: {
    etherscan: "myApiEtherScan"
  },
  contracts_build_directory: "./public/contracts",
  networks: {
    development: {
      host: "127.0.0.1",
      port: 7545,
      network_id: "*",
    },
    sepolia: {
      provider: () =>
        new HDWalletProvider(
          keys.PRIVATE_KEY,
          keys.INFURA_SEPOLIA_URL,
        ),
      network_id: 11155111,
      gas: 5221975,
      gasPrice: 20000000000,
      confirmations: 3,
      timeoutBlocks: 200,
      skipDryRun: true
    }
  },
  compilers: {
    solc: {
      version: "0.8.16",
      settings: {
        optimizer: {
          enabled: true, // Default: false
          runs: 1000 // Default: 200
        }
      }
    }
  },
};
The truffle-plugin-verify plugin doesn't have Sepolia chain support.
I added the Sepolia API URL and Etherscan explorer URL to its constants and it works:
const API_URLS = {
  ...
  11155111: 'https://api-sepolia.etherscan.io/api',
  ...
}

const EXPLORER_URLS = {
  ...
  11155111: 'https://sepolia.etherscan.io/address',
  ...
}
https://github.com/tafonina/truffle-plugin-verify
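With those constants patched in (see the fork above), verification runs through the plugin's usual command, e.g. truffle run verify MyContract --network sepolia, where MyContract stands for whatever your contract is called.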

Strapi media library image url broken by Digital Ocean

I am trying to upload an image to DigitalOcean Spaces. The upload to DO works, but after the callback Strapi generates a wrong URL, for example: https://https//jobsflow/d0e989a489bdc380c55e5846076d07f8.png?updated_at=2022-06-08T17:00:32.934Z. Note the doubled scheme, https://https//.
(//jobsflow is my storage location.)
Here is my config/plugins.js code:
module.exports = {
  upload: {
    config: {
      provider: "strapi-provider-upload-dos",
      providerOptions: {
        key: process.env.DO_SPACE_ACCESS_KEY,
        secret: process.env.DO_SPACE_SECRET_KEY,
        endpoint: process.env.DO_SPACE_ENDPOINT,
        space: process.env.DO_SPACE_BUCKET,
        directory: process.env.DO_SPACE_DIRECTORY,
        cdn: process.env.DO_SPACE_CDN,
      },
    },
  },
};
And here is my config/middleware.js:
module.exports = [
  "strapi::errors",
  {
    name: "strapi::security",
    config: {
      contentSecurityPolicy: {
        useDefaults: true,
        directives: {
          "connect-src": ["'self'", "https:"],
          "img-src": [
            "'self'",
            "data:",
            "blob:",
            "*.digitaloceanspaces.com"
          ],
          "media-src": ["'self'", "data:", "blob:"],
          upgradeInsecureRequests: null,
        },
      },
    },
  },
  "strapi::cors",
  "strapi::poweredBy",
  "strapi::logger",
  "strapi::query",
  "strapi::body",
  "strapi::favicon",
  "strapi::public",
];
Please help me if you have any idea!
Are you using a custom upload provider for this? Why not use the official @strapi/provider-upload-aws-s3 plugin? DigitalOcean Spaces exposes an S3-compatible API, so the official provider works with it:
// path config/plugins.js
...
upload: {
  config: {
    provider: 'aws-s3',
    providerOptions: {
      accessKeyId: env('DO_ACCESS_KEY_ID'),
      secretAccessKey: env('DO_ACCESS_SECRET'),
      region: env('DO_REGION'),
      endpoint: env('DO_ENDPOINT'),
      params: {
        Bucket: env('DO_BUCKET'),
      }
    },
  },
},
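(For DO Spaces, the endpoint is typically the region host, e.g. fra1.digitaloceanspaces.com; the exact env var names above are up to you.)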
Another nice trick to change the URL of an image and point it to your CDN is adding this:
// src/index.js
module.exports = {
  async bootstrap({ strapi }) {
    strapi.db.lifecycles.subscribe({
      models: ['plugin::upload.file'],
      // use the CDN url instead of the space origin
      async beforeCreate(data) {
        // __ORIGINAL_URL__ and __CDN_URL__ are placeholders for your own values
        data.params.data.url = data.params.data.url.replace(__ORIGINAL_URL__, __CDN_URL__)
        // you can even do more here, like setting policies for the object you're uploading
      },
    });
  },
};
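For example, with env vars of your own naming, the replacement could read data.params.data.url.replace(process.env.SPACE_ORIGIN_URL, process.env.CDN_URL).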

How can I set up 2 AWS lambda functions, with one firing an event on eventBridge and the other reacting to it?

I'm using the Serverless Framework to try and test EventBridge.
The documentation is a little sparse, but for my test I would like to have two Lambda functions created: the first one publishes an event, the second consumes it.
Here is my YAML:
service: events
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: '20201221'

functions:
  vehicle:
    handler: handler.vehicle
    events:
      - httpApi:
          path: /vehicle
          method: '*'
  bundle:
    handler: handler.bundle
    events:
      - httpApi:
          path: /bundle
          method: '*'
      - eventBridge:
          eventBus: vehicle-bus
          pattern:
            source:
              - aos.vehicle.upload
            detail-type:
              - VehicleUpload
and my handler.js
"use strict";
const AWS = require('aws-sdk');
module.exports.vehicle = async (event) => {
const eventBridge = new AWS.EventBridge({ region: 'us-east-1' });
const vrm = 'WR17MMN'
return eventBridge.putEvents({
Entries: [
{
EventBusName: 'veihcle-bus',
Source: 'aos.vehicle.upload',
DetailType: 'VehicleUpload',
Detail: `{ "Registration": "${vrm}" }`,
},
]
}).promise()
};
module.exports.bundle = async (event) => {
return {
statusCode: 200,
body: JSON.stringify(
{
message: "BUNDLE",
input: event,
aos: "First test OK",
},
null,
2
),
};
};
(I realise I can't just return that from the Lambda, but it also needs to be an endpoint. If I make the function body of bundle empty, I still get a server error.)
What am I missing?
So you need this minimal setup:
org: myOrg
app: my-events
service: event-bridge-serverless

provider:
  name: aws
  runtime: nodejs10.x
  region: eu-west-1
  lambdaHashingVersion: 20201221
  environment:
    DYNAMODB_TABLE: ${self:service}-dev
  eventBridge:
    useCloudFormation: true
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "events:PutEvents"
      Resource: "*"

functions:
  asset:
    handler: handler.asset
    events:
      - eventBridge:
          eventBus: my-events
          pattern:
            source:
              - my.event
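For completeness, a minimal sketch of the consuming handler (hypothetical; the answer doesn't include one). EventBridge invokes the function with the matched event itself, so the published entry's fields arrive as event.source, event['detail-type'], and event.detail:

// handler.js: sketch of the consumer side
module.exports.asset = async (event) => {
  console.log('source:', event.source);
  console.log('detail-type:', event['detail-type']);
  // Detail is delivered already parsed as an object
  console.log('detail:', JSON.stringify(event.detail));
};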

reference cognito user pool created by sst (serverless-stack) in serverless.yml

I have quite a big app built with Serverless, and now we are trying out serverless-stack (SST). I am trying to reference a user pool created by SST in a serverless.yml function. Is it possible? Below are the steps I've tried.
I have created a user pool:
import * as cdk from '@aws-cdk/core'
import * as cognito from '@aws-cdk/aws-cognito'
import * as sst from '@serverless-stack/resources'

export default class UserServiceStack extends sst.Stack {
  constructor(scope: cdk.Construct, id: string, props: sst.StackProps = {}) {
    super(scope, id, props)

    const userPool = new cognito.UserPool(this, 'userPool', {
      signInAliases: {
        email: true,
        phone: true,
      },
      autoVerify: {
        email: true,
        phone: true,
      },
      passwordPolicy: {
        minLength: 8,
        requireDigits: false,
        requireLowercase: false,
        requireSymbols: false,
        requireUppercase: false,
      },
      signInCaseSensitive: false,
      selfSignUpEnabled: true,
    })

    new cdk.CfnOutput(this, 'UserPoolId', {
      value: userPool.userPoolId,
    })

    const userPoolClient = new cognito.UserPoolClient(this, 'userPoolClient', {
      userPool,
      authFlows: {
        adminUserPassword: true,
        userPassword: true,
      },
    })

    new cdk.CfnOutput(this, 'UserPoolClientId', {
      value: userPoolClient.userPoolClientId,
    })
  }
}
and I want to update my post-confirmation trigger defined in serverless.yml:
...
createUser:
  handler: createUser.default
  events:
    - cognitoUserPool:
        pool: !ImportValue '${self:custom.sstApp}...' # what to put here?
        trigger: PostConfirmation
        existing: true
Figured it out.
First, how to use CDK output variables in serverless.yml: export them into a file
AWS_PROFILE=<profile-name> npx sst deploy --outputs-file ./exports.json
and in serverless.yml you can reference them like so:
...
createUser:
  handler: createUser.default
  events:
    - cognitoUserPool:
        pool: ${file(../../infrastructure/exports.json):${self:custom.sstApp}-UserServiceStack.userPoolName}
        trigger: PostConfirmation
        existing: true
Second, serverless is set up such that you have to pass the userPoolName, not the userPoolId. So I had to generate a user pool name and output it:
import * as uuid from 'uuid'
...
const userPoolName = uuid.v4()
const userPool = new cognito.UserPool(this, 'userPool', {
  userPoolName,
  ...
})
...
// eslint-disable-next-line no-new
new cdk.CfnOutput(this, 'userPoolName', {
  value: userPoolName,
})
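For reference, the deploy command above writes every CfnOutput into exports.json keyed by stack name, roughly in this shape (illustrative values; the key depends on your app and stage names):

{
  "my-sst-app-UserServiceStack": {
    "UserPoolId": "us-east-1_xxxxxxxxx",
    "UserPoolClientId": "xxxxxxxxxxxxxxxxxxxxxxxxxx",
    "userPoolName": "d3b1e1f0-0000-0000-0000-000000000000",
    "userPoolArn": "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_xxxxxxxxx"
  }
}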
Third, to avoid an AccessDeniedException when Cognito invokes the Lambda as a trigger, you need to add the following to your resources:
- Resources:
    OnCognitoSignupPermission:
      Type: 'AWS::Lambda::Permission'
      Properties:
        Action: "lambda:InvokeFunction"
        FunctionName:
          Fn::GetAtt: [ "CreateUserLambdaFunction", "Arn" ] # the name must be the uppercased name of your lambda + "LambdaFunction" at the end
        Principal: "cognito-idp.amazonaws.com"
        SourceArn: ${file(../../infrastructure/exports.json):${self:custom.sstApp}-UserServiceStack.userPoolArn}
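The SourceArn reference assumes the stack also exports the pool ARN, e.g. with one more output in UserServiceStack:

// assumed additional output backing the userPoolArn reference above
new cdk.CfnOutput(this, 'userPoolArn', {
  value: userPool.userPoolArn,
})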

Deploy express api with serverless-webpack. Sequelize import error

I am trying to deploy my project to AWS Lambda with serverless-webpack, but I get an error when importing a model into Sequelize.
Here is the error log from AWS:
module initialization error: ReferenceError
at t.default (/var/task/app.js:1:8411)
at Sequelize.import(/var/task/node_modules/sequelize/lib/sequelize.js:398:32)
My serverless.yml file:
service: backend-aquatru

plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-dotenv-plugin

custom:
  webpackIncludeModules:
    forceInclude:
      - pg
      - pg-hstore
  webpackConfig: 'webpack.config.js'
  includeModules: true
  packager: 'npm'

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-west-1

functions:
  app:
    handler: app.handler
    events:
      - http: 'ANY /'
      - http: 'ANY {proxy+}'
webpack.config.js
const path = require('path')
const nodeExternals = require('webpack-node-externals')
const Dotenv = require('dotenv-webpack')

module.exports = {
  entry: './app.js',
  target: 'node',
  externals: [nodeExternals()],
  output: {
    libraryTarget: 'commonjs',
    path: path.resolve(__dirname, '.webpack'),
    filename: 'app.js', // this should match the first part of the function handler in serverless.yml
  },
  plugins: [
    new Dotenv(),
  ],
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        include: __dirname,
        loaders: ['babel-loader'],
      },
    ],
  },
}
And my Sequelize model import:
import { sequelize as dbConfig } from '../config/vars'
import Sequelize from 'sequelize';

const db = {
  Sequelize,
  sequelize: new Sequelize(dbConfig.makeUri(), {
    operatorsAliases: Sequelize.Op,
    dialect: dbConfig.dialect
  }),
};

// require each model from every endpoint with sequelize
db.User = db.sequelize.import('User', require('../api/models/user.model'));

// run associations from every model here
Object.keys(db).forEach((modelName) => {
  if ('associate' in db[modelName]) {
    db[modelName].associate(db);
  }
});

export const User = db.User;
export default db;
I saw a serverless-webpack issue about importing model files with require, and modified my code that way, but it doesn't help. Here is a link to the discussion: [enter link description here][issue]
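For anyone comparing notes: the second argument to sequelize.import has to be a model factory function, so the required file needs to export one directly. A sketch of the shape that works with the call above (hypothetical file; note that if the model uses an ES export default, require() returns { default: fn } rather than the function itself, which would match the t.default frame in the stack trace):

// ../api/models/user.model.js: sketch of a CommonJS model factory
module.exports = (sequelize, DataTypes) => {
  const User = sequelize.define('User', {
    email: { type: DataTypes.STRING, allowNull: false },
  });

  // picked up by the Object.keys(db) association loop above
  User.associate = (db) => {
    // e.g. User.hasMany(db.Order)
  };

  return User;
};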