Create S3 Bucket and upload code in Serverless Framework

serverless.yml
provider:
  name: aws
  runtime: nodejs16.x
  deploymentBucket:
    name: myS3Bucket
resources:
  Resources:
    MyLambdaBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self.provider.deploymentBucket.name}
I would like to specify the bucket name and, at the same time, create the bucket first if it doesn't exist.
But I got an error:
Could not locate deployment bucket: "myS3Bucket". Error: The specified bucket does not exist
How can I create the bucket if it doesn't exist, and deploy to it?

I think you have an error in your syntax. Try this:
provider:
  name: aws
  runtime: nodejs16.x
custom:
  deploymentBucket: 'myS3Bucket'
resources:
  Resources:
    MyLambdaBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.deploymentBucket}
Note the difference with the quotes; I'm also not using name here, and I use a custom key to keep all the custom variables together.
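If the goal is for the deployment bucket itself (the one Serverless uploads the packaged code to) to be created automatically when it doesn't exist, one option is the third-party serverless-deployment-bucket plugin. A minimal sketch, assuming that plugin is installed and using a placeholder bucket name:
service: my-service
plugins:
  - serverless-deployment-bucket   # creates the deployment bucket on deploy if it is missing
provider:
  name: aws
  runtime: nodejs16.x
  deploymentBucket:
    name: my-unique-deployment-bucket   # placeholder; S3 bucket names must be globally unique
With this in place, sls deploy should no longer fail with "Could not locate deployment bucket", because the bucket is created before the upload step.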

Related

serverless remove never works because bucket I never created does not exist

I have a Lambda S3 trigger and a corresponding S3 bucket in my serverless.yaml, which works perfectly when I deploy it via serverless deploy.
However, when I want to remove everything with serverless remove, I always get the same error (even without changing anything in the AWS console):
An error occurred: DataImportCustomS31 - Received response status [FAILED] from custom resource. Message returned: The specified bucket does not exist
Which is strange, because I never specified a bucket with that name in my serverless.yaml. I assume this somehow comes from the existing: true property of my S3 trigger, but I can't fully explain it, nor do I know how to fix it.
This is my serverless.yaml:
service: myTestService
provider:
  name: aws
  runtime: nodejs12.x
  region: eu-central-1
  profile: myprofile
  stage: dev
  stackTags:
    owner: itsme
custom:
  testDataImport:
    bucketName: some-test-bucket-zxzcq234
# functions
functions:
  dataImport:
    handler: src/functions/dataImport.handler
    events:
      - s3:
          bucket: ${self:custom.testDataImport.bucketName}
          existing: true
          event: s3:ObjectCreated:*
          rules:
            - suffix: .xlsx
    tags:
      owner: itsme
# Serverless plugins
plugins:
  - serverless-plugin-typescript
  - serverless-offline
# Resources your functions use
resources:
  Resources:
    TestDataBucket:
      Type: AWS::S3::Bucket
      Properties:
        AccessControl: Private
        BucketEncryption:
          ServerSideEncryptionConfiguration:
            - ServerSideEncryptionByDefault:
                SSEAlgorithm: AES256
        BucketName: ${self:custom.testDataImport.bucketName}
        VersioningConfiguration:
          Status: Enabled

Serverless Framework - Create a Lambda and S3 and upload a file in S3. Then, extract to DynamoDB with Lambda

It is my first time using the Serverless Framework, and my mission is to create a Lambda, an S3 bucket and a DynamoDB table with Serverless, and then invoke the Lambda to transfer data from S3 to DynamoDB.
I am trying to get the name generated by Serverless for my S3 bucket so I can use it in my Lambda, but I have had no luck with that.
This is what my serverless.yml looks like:
service: fetch-file-and-store-in-s3
frameworkVersion: ">=1.1.0"
custom:
  bucket:
    Ref: Outputs.AttachmentsBucketName
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:PutObjectAcl
      Resource: "arn:aws:s3:::${self:custom.bucket.Ref}/*"
functions:
  save:
    handler: handler.save
    environment:
      BUCKET: ${self:custom.bucket.Ref}
resources:
  # S3
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          -
            AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000
  # Print out the name of the bucket that is created
  Outputs:
    AttachmentsBucketName:
      Value:
        Ref: AttachmentsBucket
And here is the part where it creates the S3 bucket:
Resources:
  # S3
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
          - AllowedHeaders:
              - '*'
          - AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
          - MaxAge: 3000
# Print out the name of the bucket that is created
Outputs:
  AttachmentsBucketName:
    Value:
      Ref: AttachmentsBucket
And this is the error I am currently getting:
λ sls deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service fetch-file-and-store-in-s3.zip file to S3 (7.32 MB)...
Serverless: Validating template...
Error --------------------------------------------------
Error: The CloudFormation template is invalid: Invalid template property or properties [AttachmentsBucket, Type, Properties]
You have some issues with indentation:
resources:
  Resources:
    # S3
    AttachmentsBucket:
      Type: AWS::S3::Bucket
      Properties:
        # Set the CORS policy
        CorsConfiguration:
          CorsRules:
            - AllowedOrigins:
                - '*'
            - AllowedHeaders:
                - '*'
            - AllowedMethods:
                - GET
                - PUT
                - POST
                - DELETE
                - HEAD
            - MaxAge: 3000
  # Print out the name of the bucket that is created
  Outputs:
    AttachmentsBucketName:
      Value:
        Ref: AttachmentsBucket
Indentation is important in the serverless.yml file. In this case AttachmentsBucket is a resource, so it should be a sub-section under Resources, indented one level, and Type and Properties should then be indented one level from the resource name AttachmentsBucket, while they are indented two levels in the sample provided. CloudFormation will not be able to process this particular resource, since it cannot identify a resource with a proper name and properties.
See the updated sample:
Resources:
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
          - AllowedHeaders:
              - '*'
          - AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
          - MaxAge: 3000
# Print out the name of the bucket that is created
Outputs:
  AttachmentsBucketName:
    Value: !Ref AttachmentsBucket
You can validate CloudFormation templates with the AWS CLI, using aws cloudformation validate-template.
But your question is about getting the Lambda and DynamoDB load to work, while your description asks about the deployment part. Can you update your question and tags?
I was able to figure out a solution. As I am very new and this was my first project, I wasn't very familiar with the terms in the beginning. What I did was to name my bucket here:
Resources:
  Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: ${self:custom.bucket} # Getting the bucket name I defined under custom in serverless.yml
  # Make the bucket publicly accessible
  MyBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref Bucket
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal: '*' # public access to the bucket files
            Action: s3:GetObject
            Resource: 'arn:aws:s3:::${self:custom.bucket}/*'
Then, to upload a file as part of the deploy, I found a plugin called serverless-s3bucket-sync.
I added the custom attribute with the location of my files under folder:
custom:
  bucket: mybucketuniquename # unique global name it will create for the bucket
  s3-sync:
    - folder: images
      bucket: ${self:custom.bucket}
And added the IAM role statements:
iamRoleStatements:
  # S3 permissions
  - Effect: Allow
    Action:
      - s3:*
    Resource: "arn:aws:s3:::${self:custom.bucket}"

serverless error: "bucket already exists" while deploying to GitLab

I am a newbie to the serverless stack. Following is my serverless.yml file. On deploying this in GitLab I get the error:
Serverless Error ---------------------------------------
An error occurred: S3XPOLLBucket - bucket already exists.
The serverless.yml file is:
service: sa-s3-resources
plugins:
  - serverless-s3-sync
  - serverless-s3-remover
custom:
  basePath: sa-s3-resources
  environment: ${env:ENV}
provider:
  name: aws
  stage: ${env:STAGE}
  region: ${env:AWS_DEFAULT_REGION}
  environment:
    STAGE: ${self:provider.stage}
resources:
  Resources:
    S3XPOLLBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: gs-sa-xpoll-file-${self:custom.environment}-${self:provider.stage}
    S3JNLBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: gs-sa-jnl-file-${self:custom.environment}-${self:provider.stage}
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, its name cannot be used by another AWS account in any AWS Region until the bucket is deleted.
That means you have to choose a unique name that has not already been chosen by someone else (or even by you, in a different stack or stage) globally.
More details:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html
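One way to reduce collisions, sketched below, is to bake an account-specific value into the name; this assumes Serverless Framework v3, where the ${aws:accountId} variable resolves to the current account ID:
resources:
  Resources:
    S3XPOLLBucket:
      Type: AWS::S3::Bucket
      Properties:
        # the account ID suffix keeps the name unique when several accounts deploy this template
        BucketName: gs-sa-xpoll-file-${self:custom.environment}-${self:provider.stage}-${aws:accountId}
Alternatively, omitting BucketName entirely lets CloudFormation generate a name that is guaranteed not to collide.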

How to resolve "specified origin access identity does not exist or is not valid"

I have a problem with these lines in my serverless.yml file.
I am using the Serverless plugin serverless-single-page-app-plugin.
# CustomOriginConfig:
#   HTTPPort: 80
#   HTTPSPort: 443
#   OriginProtocolPolicy: https-only
## In case you want to restrict the bucket access use S3OriginConfig and remove CustomOriginConfig
S3OriginConfig:
  OriginAccessIdentity: origin-access-identity/cloudfront/E127EXAMPLE51Z
I want to use S3OriginConfig and disable direct access through the S3 bucket. I can do this manually, but I want to get the effect as in the picture below:
You might have solved it already, since you asked your question a long time back, but this might help if you didn't. I faced the same issue, and after some research through the AWS documentation I learned how to use the required attributes. The points below should be considered regarding your question.
As your origin is an Amazon S3 bucket, you should use S3OriginConfig in the Distribution.
If a new OAI is required, you have to create a CloudFrontOriginAccessIdentity resource and reference the OAI and its S3CanonicalUserId attribute from the CloudFront Distribution and the S3 BucketPolicy resources, respectively.
Please find the snippet below in response to your question.
WebAppDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Origins:
        - DomainName: 'passport-front.s3.amazonaws.com'
          Id: 'WebApp'
          S3OriginConfig:
            OriginAccessIdentity: !Join ['', ['origin-access-identity/cloudfront/', !Ref CloudFrontOAI]]
CloudFrontOAI:
  Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
  Properties:
    CloudFrontOriginAccessIdentityConfig:
      Comment: 'access-identity-passport-front.s3.amazonaws.com'
WebAppBucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: "Retain"
  Properties:
    AccessControl: PublicRead
    BucketName: "passport-front"
WebAppBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref WebAppBucket
    PolicyDocument:
      Statement:
        - Action: s3:GetObject
          Effect: Allow
          Principal:
            CanonicalUser: !GetAtt CloudFrontOAI.S3CanonicalUserId
          Resource: !Join ['', ['arn:aws:s3:::', !Ref 'WebAppBucket', /*]]
References: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-cloudfront.html

Creating a lambda function as an event handler for an S3 bucket

I'm trying to do something pretty simple. I want to create a lambda function, an S3 bucket, and make the lambda function the event handler for the S3 bucket, using the serverless framework. Here's my definition file:
service: test-project
provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  runtime: nodejs8.10
  endpointType: REGIONAL
  role: arn:aws:iam::xxxxx:role/lambda_role
functions:
  MyEventHandler:
    name: fn
    handler: src/fn.handler
    events:
      - s3: container
resources:
  Resources:
    S3BucketContainer:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: the-container-bucket
But when I run:
$ sls deploy --region us-east-1 --stage dev
I get:
Serverless: Operation failed!
Serverless Error ---------------------------------------
An error occurred: S3BucketContainer - Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: xxxxx; S3 Extended Request ID: xxxxx).
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: linux
Node Version: 8.10.0
Serverless Version: 1.34.1
Does anyone know what's wrong?
The error message is horrible, but right.
The bucket is being created with configuration to send notifications to your Lambda. At this point in the deployment, the permission that allows the bucket to invoke the Lambda does not exist yet, so the bucket creation fails.
If you hadn't specified a custom bucket resource (to change the bucket name), Serverless would have added the dependency automatically.
That all said, you're not the first to hit this, and the docs have been updated to reflect this issue.
Add this additional resource and apparently (see below) it should work:
resources:
  Resources:
    MyEventHandlerLambdaPermissionContainerS3:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          "Fn::GetAtt":
            - MyEventHandlerLambda
            - Arn
        Principal: "s3.amazonaws.com"
        Action: "lambda:InvokeFunction"
        SourceAccount:
          Ref: AWS::AccountId
        SourceArn: "arn:aws:s3:::the-container-bucket"
I say apparently because I resolved this differently (see here), using DependsOn to control the order in CloudFormation.
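For completeness, a minimal sketch of that DependsOn approach, assuming the permission resource follows Serverless' generated naming (MyEventHandlerLambdaPermissionContainerS3 here; the actual logical ID may differ in your stack):
resources:
  Resources:
    S3BucketContainer:
      Type: AWS::S3::Bucket
      # create the bucket only after the permission that lets S3 invoke the function exists
      DependsOn:
        - MyEventHandlerLambdaPermissionContainerS3
      Properties:
        BucketName: the-container-bucket
Either way, the point is to make sure the Lambda permission exists before CloudFormation creates the bucket with its notification configuration.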