Serverless Framework: Request must be smaller than 69905067 bytes for the UpdateFunctionCode operation - serverless-framework

I am packaging and uploading a zipped file like this:
frameworkVersion: "=1.27.3"

service: recipes

provider:
  name: aws
  endpointType: REGIONAL
  runtime: python3.6
  stage: dev
  region: eu-central-1
  memorySize: 512
  deploymentBucket:
    name: dfki-meta
  versionFunctions: false
  stackTags:
    Project: DFKIAPP
  # Allows updates to all resources except deleting/replacing EC2 instances
  stackPolicy:
    - Effect: Allow
      Principal: "*"
      Action: "Update:*"
      Resource: "*"
    - Effect: Deny
      Principal: "*"
      Action:
        - "Update:Replace"
        - "Update:Delete"
      Resource: "*"
      Condition:
        StringEquals:
          ResourceType:
            - AWS::EC2::Instance
  # Access to RDS and S3 Bucket
  iamRoleStatements:
    - Effect: "Allow"
      Action: "s3:ListBucket"
      Resource: "*"

package:
  individually: true

functions:
  # get_recipes:
  #   handler: handler.get_recipes
  #   module: recipes_crud
  #   package:
  #     individually: true
  #   timeout: 30
  #   events:
  #     - http:
  #         path: recipes
  #         method: get
  #         request:
  #           parameters:
  #             querystring:
  #               persona: true
  get_recommendation:
    handler: handler.get_recommendation
    module: recipes_ml
    package:
      artifact: zipped_dir.zip
    timeout: 30
    events:
      - http:
          path: recipes/{id}
          method: get
          request:
            parameters:
              paths:
                id: true
              querystring:
                schaerfe_def: true
                saettig_def: true
                erfahrung_def: true
                schaerfe_wunsch: true
                saettig_wunsch: true
                erfahrung_wunsch: true
                gericht_wunsch: true
                stimmung_wunsch: true
I can not understand this error. Isn't 52.18 MB under 69905067 bytes?
(node:50928) ExperimentalWarning: The fs.promises API is experimental
Serverless: Packaging function: get_recommendation...
Serverless: Uploading function: get_recommendation (52.18 MB)...
Serverless Error ---------------------------------------
Request must be smaller than 69905067 bytes for the UpdateFunctionCode operation
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: darwin
Node Version: 10.1.0
Serverless Version: 1.27.3

According to the docs, the package size should be lower than 50 MB:
https://docs.aws.amazon.com/lambda/latest/dg/limits.html
From this blog post:
The 20 MB addition presumably is there to account for request overhead involved with the AWS API (e.g. base64 encoding of the zip file data). So far the 50 MB limit holds true-ish. But, we’re not defeated yet.
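The arithmetic backs that theory up: 69905067 bytes is almost exactly 50 MiB x 4/3 (52428800 x 4/3 = 69905067), i.e. the limit appears to be the size of a 50 MB payload after base64 encoding. A 52.18 MB zip encodes to roughly 52.18 x 4/3 = 69.6 MiB, about 73 million bytes, which is over the limit even though the raw artifact is under it.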

This seems to be an issue only when uploading an individual Lambda function with serverless. If you don't pass the --function parameter and instead deploy the full stack, it works fine!
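A sketch of the two invocations (function name taken from the config above):

sls deploy function --function get_recommendation   # fails: sends the zip in an UpdateFunctionCode request
sls deploy                                          # works: stages the artifact in the deployment bucket first

The single-function path pushes the zip directly to the Lambda API, where the base64-inflated request-size limit above applies, while a full deploy uploads the artifact to S3 and lets CloudFormation point the function at the S3 object, avoiding the direct-upload limit.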

Related

"X-Cache: Miss from cloudfront" as a result of a call to AWS API Gateway

When I send a GET request to the AWS API Gateway URL "https://blablabla.execute-api.us-east-1.amazonaws.com/dev/crs/blablabla.png" or the custom domain URL "devblablabla.bla.com" via browser or Postman, I receive a 200 response with the "X-Cache: Miss from cloudfront" header:
GET request to AWS APIGateway
Do you have any idea how I can rewrite the serverless.yml file so that I receive a 200 response with an "X-Cache: Hit" header?
This is the configuration that I deploy:
# serverless.yml
service: s3-blablabla-service

provider:
  name: aws
  stage: dev
  region: us-east-1
  environment:
    SERVICE_NAME: ${self:service}
  apiGateway:
    binaryMediaTypes: "*/*"

plugins:
  - serverless-apigateway-service-proxy
  - serverless-domain-manager
  - serverless-finch

custom:
  c3launchBucketName: "blabla-pl-${self:provider.stage}"
  c3scormBucketName: "blabla-crs-${self:provider.stage}"
  domainName: "${self:provider.stage}blablabla.bla.com" # Change this to your domain.
  basePath: "" # This will be prefixed to all routes
  apiGatewayServiceProxies:
    - s3:
        path: /pl/{myKey+} # use path param
        method: get
        action: GetObject
        bucket:
          # ${self:custom.c3launchBucketName}
          Ref: S3Bucket
        key:
          pathParam: myKey
        requestParameters:
          "integration.request.header.cache-control": "'public, max-age=31536000, immutable'"
    - s3:
        path: /crs/{myKey+} # use path param
        method: get
        action: GetObject
        bucket:
          # ${self:custom.c3scormBucketName}
          Ref: S3ScormBucket
        key:
          pathParam: myKey
        requestParameters:
          "integration.request.header.cache-control": "'public, max-age=31536000, immutable'"
  customDomain:
    domainName: ${self:custom.domainName}
    basePath: ${self:custom.basePath}
    stage: ${self:provider.stage}
    createRoute53Record: true
    autoDomain: true
  client:
    bucketName: ${self:custom.c3launchBucketName}

resources:
  Resources:
    S3Bucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: ${self:custom.c3launchBucketName}
    S3ScormBucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: ${self:custom.c3scormBucketName}
After the deployment I receive this result:
endpoints:
  GET - https://blablabla.execute-api.us-east-1.amazonaws.com/dev/pl/{myKey+}
  GET - https://blablabla.execute-api.us-east-1.amazonaws.com/dev/crs/{myKey+}
Service deployed to stack s3-blablabla-service-dev

Serverless Domain Manager:
  Domain Name: devblablabla.bla.com
  Target Domain: abrakadabra.cloudfront.net
  Hosted Zone Id: BARBARBAR

Serverless: TypeError: Cannot read property 'stage' of undefined

frameworkVersion: '2'

plugins:
  - serverless-step-functions
  - serverless-python-requirements
  - serverless-parameters
  - serverless-pseudo-parameters

provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'}
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221

package:
  exclude:
    - node_modules/**
    - venv/**

# Lambda functions
functions:
  generateAlert:
    handler: handler.generateAlert
  generateData:
    handler: handler.generateDataHandler
    timeout: 600
  approveDenied:
    handler: handler.approveDenied
    timeout: 600

stepFunctions:
  stateMachines:
    "claims-etl-and-insight-generation-${self:provider.stage}":
      loggingConfig:
        level: ALL
        includeExecutionData: true
        destinations:
          - Fn::GetAtt: ["ETLStepFunctionLogGroup", Arn]
      name: "claims-etl-and-insight-generation-${self:provider.stage}"
      definition:
        Comment: "${self:provider.stage} ETL Workflow"
        StartAt: RawQualityJob
        States:
          # Raw Data Quality Check Job Start
          RawQualityJob:
            Type: Task
            Resource: arn:aws:states:::glue:startJobRun.sync
            Parameters:
              JobName: "data_quality_v2_${self:provider.stage}"
              Arguments:
                "--workflow-name": "${self:provider.stage}-Workflow"
                "--dataset_id.$": "$.datasetId"
                "--client_id.$": "$.clientId"
            Next: DataQualityChoice
            Retry:
              - ErrorEquals: [States.ALL]
                MaxAttempts: 2
                IntervalSeconds: 10
                BackoffRate: 5
            Catch:
              - ErrorEquals: [States.ALL]
                Next: GenerateErrorAlertDataQuality
          # End Raw Data Quality Check Job
          DataQualityChoice:
            Type: Task
            Resource:
              Fn::GetAtt: [approveDenied, Arn]
            Next: Is Approved ?
          Is Approved ?:
            Type: Choice
            Choices:
              - Variable: "$.quality_status"
                StringEquals: "Denied"
                Next: FailState
            Default: HeaderLineJob
          FailState:
            Type: Fail
            Cause: "Denied status"
          # Header Line Job Start
          HeaderLineJob:
            Type: Parallel
            Branches:
              - StartAt: HeaderLineIngestion
                States:
                  HeaderLineIngestion:
                    Type: Task
                    Resource: arn:aws:states:::glue:startJobRun.sync
                    Parameters:
                      JobName: headers_lines_etl_rs_v2
                      Arguments:
                        "--workflow-name.$": "$.Arguments.--workflow-name"
                        "--dataset_id.$": "$.Arguments.--dataset_id"
                        "--client_id.$": "$.Arguments.--client_id"
                    End: True
                    Retry:
                      - ErrorEquals: [States.ALL]
                        MaxAttempts: 2
                        IntervalSeconds: 10
                        BackoffRate: 5
                    Catch:
                      - ErrorEquals: [States.ALL]
                        Next: GenerateErrorAlertHeaderLine
            End: True
          # Header Line Job End
          GenerateErrorAlertDataQuality:
            Type: Task
            Resource:
              Fn::GetAtt: [generateAlert, Arn]
            End: true

resources:
  Resources:
    # Cloudwatch Log
    "ETLStepFunctionLogGroup":
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: "ETLStepFunctionLogGroup_${self:provider.stage}"
This is what my serverless.yml file looks like.
When I run the command:
sls deploy --stage staging
it shows:
Type Error ----------------------------------------------
TypeError: Cannot read property 'stage' of undefined
at Variables.getValueFromOptions (/snapshot/serverless/lib/classes/Variables.js:648:37)
at Variables.getValueFromSource (/snapshot/serverless/lib/classes/Variables.js:579:17)
at /snapshot/serverless/lib/classes/Variables.js:539:12
Your Environment Information ---------------------------
Operating System: linux
Node Version: 14.4.0
Framework Version: 2.30.3 (standalone)
Plugin Version: 4.5.1
SDK Version: 4.2.0
Components Version: 3.7.4
How can I fix this? I have tried different versions of serverless. The error is in the yamlParser file, which is provided by serverless-step-functions. Above is my serverless config file.
It looks like a $ sign is missing from your provider -> stage?
provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'} # $ sign is missing?
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221
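As background on the variable syntax (standard Serverless behaviour, not specific to this project): ${opt:stage, 'dev'} looks up the --stage CLI option and falls back to 'dev' when the option is absent, so sls deploy --stage staging should resolve it to staging. If the $ really is missing, the value is no longer a variable reference at all, and downstream plugins that expect a resolved stage can fail in exactly this way.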

How to eliminate serverless framework error Template format error

Testing and learning the serverless framework. I'm trying to deploy a simple/basic state machine with two simple lambda functions.
The serverless definition is as follows:
frameworkVersion: '2'
app: state-machine
org: macdrorepo
service: state-machine

plugins:
  - serverless-python-requirements
  - serverless-iam-roles-per-function
  - serverless-step-functions
  - serverless-pseudo-parameters

custom:
  pythonRequirements:
    dockerizePip: non-linux
    slim: true
    zip: true

provider:
  name: aws
  runtime: python3.8
  region: eu-central-1
  stage: ${opt:stage, 'testing'}
  timeout: 30

package:
  individually: true
  exclude:
    - node_modules/**
    - .git/**
    - .venv/**

functions:
  processpurchase:
    module: state-machine
    memorySize: 128
    stages:
      - testing
      - dev
    handler: ProcessPurchase.process_purchase
  processrefund:
    module: state-machine
    memorySize: 128
    stages:
      - testing
      - dev
    handler: ProcessRefund.process_refund

stepFunctions:
  validate: true
  stateMachines:
    TransactionChoiceMachine:
      name: ChoiceMachineTest-${self:provider.stage}
      dependsOn: CustomIamRole
      definition:
        Comment: "Purchase refund choice"
        StartAt: ProcessTransaction
        States:
          ProcessTransaction:
            Type: Choice
            Choices:
              - Variable: "$.TransactionType"
                StringEquals: PURCHASE
                Next: PurchaseState
              - Variable: "$.TransactionType"
                StringEquals: REFUND
                Next: RefundState
          PurchaseState:
            Type: Task
            Resource:
              Fn::GetAtt: [processpurchase, Arn]
            End: true
          RefundState:
            Type: Task
            Resource:
              Fn::GetAtt: [processrefund, Arn]
            End: true
During deploy, sls says my state machine definition is OK: State machine "TransactionChoiceMachine" definition is valid.
My environment information:
Your Environment Information ---------------------------
Operating System: linux
Node Version: 12.20.0
Framework Version: 2.14.0
Plugin Version: 4.1.2
SDK Version: 2.3.2
Components Version: 3.4.3
Setting SLS_DEBUG=* is not helping me much, as I unfortunately do not know JS.
After the serverless deploy command, I'm getting this error:
Error: The CloudFormation template is invalid: Template format error: Unresolved resource dependencies [CustomIamRole] in the Resources block of the template
It looks like you are referencing something called CustomIamRole in your state machine creation, but I cannot see it being created anywhere in the YAML file. Either create the role and use it in the state machine, or remove the dependsOn part.
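A minimal sketch of the first option (the service principal and inline policy here are assumptions about what CustomIamRole was meant to be; adjust to your needs):

resources:
  Resources:
    CustomIamRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service: states.amazonaws.com  # let Step Functions assume the role
              Action: sts:AssumeRole
        Policies:
          - PolicyName: allow-lambda-invoke    # hypothetical inline policy for the two task lambdas
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: Allow
                  Action: lambda:InvokeFunction
                  Resource: "*"

With the resource defined, dependsOn: CustomIamRole has something to resolve against; serverless-step-functions also accepts a role property on a state machine if you want it to actually use the role rather than merely depend on it.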

Deploying cube.js using serverless framework results in an error

I am trying to deploy a cube.js project using the serverless framework on AWS, and when I access the endpoint produced by serverless, it results in the following error in the browser:
Cannot GET /
Here is my serverless.yml file:
service: cloud-analytics
provider:
  name: aws
  stage: production
  runtime: nodejs8.10
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "sns:*"
        - "athena:*"
        - "s3:*"
        - "glue:*"
      Resource:
        - "*"
  vpc:
    securityGroupIds:
      - sg-xxxxxxxxx # Your DB and Redis security groups here
    subnetIds:
      - subnet-xxxxxxxxx
  environment:
    CUBEJS_AWS_KEY: ${opt:awsKey}
    CUBEJS_AWS_SECRET: ${opt:awsSecret}
    CUBEJS_AWS_REGION: us-east-1
    CUBEJS_AWS_S3_OUTPUT_LOCATION: ${opt:location}
    REDIS_URL: ${opt:redis_url_with_port}
    CUBEJS_DB_TYPE: athena
    CUBEJS_API_SECRET: XXXXXX
    CUBEJS_APP: "${self:service.name}-${self:provider.stage}"
    NODE_ENV: ${self:provider.stage}
    AWS_ACCOUNT_ID:
      Fn::Join:
        - ""
        - - Ref: "AWS::AccountId"
functions:
  cubejs:
    handler: cube.api
    timeout: 30
    events:
      - http:
          path: /
          method: GET
      - http:
          path: /{proxy+}
          method: ANY
  cubejsProcess:
    handler: cube.process
    timeout: 630
    events:
      - sns: "${self:service.name}-${self:provider.stage}-process"
plugins:
  - serverless-express
I have followed the steps in this blog to set up NAT: https://medium.com/@philippholly/aws-lambda-enable-outgoing-internet-access-within-vpc-8dd250e11e12
The cube.js file is as follows, with the server core options:
const AWSHandlers = require('@cubejs-backend/serverless-aws');
const AthenaDriver = require('@cubejs-backend/athena-driver');

module.exports = new AWSHandlers({
  externalDbType: 'athena',
  externalDriverFactory: () => new AthenaDriver({
    accessKeyId: process.env.CUBEJS_AWS_KEY,
    secretAccessKey: process.env.CUBEJS_AWS_SECRET,
    region: process.env.CUBEJS_AWS_REGION,
    S3OutputLocation: process.env.CUBEJS_AWS_S3_OUTPUT_LOCATION
  })
});
When I hit the endpoint
https://xxxxx.execute-api.us-east-1.amazonaws.com/production/
produced by the serverless API Gateway, I get the error
Cannot GET /
On CloudWatch I see the cubejs lambda being invoked, with logs for the start and end request IDs. I don't see any logs for the cubejsProcess lambda.
Where/how can I debug this to see where the issue is?
By default, in production mode Cube.js disables the dev server capability, which is why you don't see any Playground working at the / path: https://cube.dev/docs/deployment#production-mode. Please use the REST API to test your deployment: https://cube.dev/docs/rest-api.
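A sketch of such a test request (the measure name Orders.count is a placeholder for whatever your schema defines, and the token must be a JWT signed with your CUBEJS_API_SECRET):

curl -G \
  -H "Authorization: <JWT signed with CUBEJS_API_SECRET>" \
  --data-urlencode 'query={"measures":["Orders.count"]}' \
  https://xxxxx.execute-api.us-east-1.amazonaws.com/production/cubejs-api/v1/load

A 200 response with data, instead of Cannot GET /, would confirm the deployment itself is healthy.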

Cognito permission to lambda function using serverless framework

I tried giving my lambda function permission to access Cognito, and also to invoke another lambda function, using the following code in my serverless.yml file.
The code:
# NOTE: update this with your service name
service: xxxx

# Use the serverless-webpack plugin to transpile ES6
plugins:
  - serverless-webpack
  - serverless-offline

# serverless-webpack configuration
# Enable auto-packing of external modules
custom:
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-south-1
  environment:
    MYSQLHOST: 'xxxx'
    MYSQLPORT: 'xxxx'
    MYSQLUSER: 'xxxx'
    MYSQLPASS: 'xxxx'
    MYSQLDATABASE: 'xxxx'
    USERPOOLID: 'xxxx'
    USERPOOLREGION: 'xxxx'
  # To load environment variables externally
  # rename env.example to env.yml and uncomment
  # the following line. Also, make sure to not
  # commit your env.yml.
  #
  #environment: ${file(env.yml):${self:provider.stage}}
  Version: "2012-10-17"
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        -cognito-identity:*
        -cognito-sync:*
        -cognito-idp:*
        -lambda:*
      Resource:
        -"*"

functions:
  # Defines an HTTP API endpoint that calls the main function in create.js
  # - path: url path is /notes
  # - method: POST request
  # - cors: enabled CORS (Cross-Origin Resource Sharing) for browser cross
  #   domain api call
  # - authorizer: authenticate using the AWS IAM role
  createUser:
    handler: createUser.main
    events:
      - http:
          path: users/create
          method: post
          cors: true
          authorizer: aws_iam
  getUsers:
    handler: getUsers.main
    events:
      - http:
          path: getUsers
          method: get
          cors: true
          authorizer: aws_iam
When I added the permissions for DynamoDB, those got added to my lambda role, but the Cognito permissions aren't getting attached to the role.
The Serverless Framework handles the creation of the role on its own, based on the yml file.
Once the role gets created, I can add the policy through the AWS console.
But the framework doesn't create the permissions even after I specify them.