serverless error: "bucket already exist" while deploying to Gitlab - amazon-s3

I am a newbie to the serverless stack. Following is my serverless.yml file. On deploying it in GitLab I get this error:
Serverless Error ---------------------------------------
An error occurred: S3XPOLLBucket - bucket already exists.
The serverless.yml file is:
service: sa-s3-resources
plugins:
  - serverless-s3-sync
  - serverless-s3-remover
custom:
  basePath: sa-s3-resources
  environment: ${env:ENV}
provider:
  name: aws
  stage: ${env:STAGE}
  region: ${env:AWS_DEFAULT_REGION}
  environment:
    STAGE: ${self:provider.stage}
resources:
  Resources:
    S3XPOLLBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: gs-sa-xpoll-file-${self:custom.environment}-${self:provider.stage}
    S3JNLBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: gs-sa-jnl-file-${self:custom.environment}-${self:provider.stage}

An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, the name of that bucket cannot be used by another AWS account in any AWS Region until the bucket is deleted.
That means you have to choose a unique name that has not already been chosen by someone else (or even by you, in a different development stack) anywhere in the world.
More details: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html
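A common way to avoid such collisions is to fold something account-specific into the bucket name. Here is a minimal sketch, assuming Serverless Framework v3 or later, which provides the built-in ${aws:accountId} variable (older versions needed the serverless-pseudo-parameters plugin for a similar effect):

resources:
  Resources:
    S3XPOLLBucket:
      Type: AWS::S3::Bucket
      Properties:
        # ${aws:accountId} resolves to the deploying account's ID at
        # deploy time, so the name stays unique per account and stage
        BucketName: gs-sa-xpoll-file-${aws:accountId}-${self:custom.environment}-${self:provider.stage}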

Related

Create S3 Bucket and upload code in Serverless Framework

serverless.yml
provider:
  name: aws
  runtime: nodejs16.x
  deploymentBucket:
    name: myS3Bucket
resources:
  Resources:
    MyLambdaBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self.provider.deploymentBucket.name}
I would like to specify the bucket name and, at the same time, if the bucket doesn't exist, I'd like to create it first.
But I got this error:
Could not locate deployment bucket: "myS3Bucket". Error: The specified bucket does not exist
How can I create bucket if it doesn't exist, and deploy to the bucket?
I think you have an error in your syntax. Try this:
provider:
  name: aws
  runtime: nodejs16.x
custom:
  deploymentBucket: 'myS3Bucket'
resources:
  Resources:
    MyLambdaBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.deploymentBucket}
Note the quotes and the ${self:...} variable syntax (a colon after self, not a dot). I'm also not using name here, and I use a custom key to keep all the custom variables together.
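That still leaves the chicken-and-egg problem from the question: the deployment bucket has to exist before serverless deploy can upload anything into it. One way around it, sketched here under the assumption that the community serverless-deployment-bucket plugin is installed (check its README for the options your version supports), is to let the plugin create the bucket before packaging:

plugins:
  - serverless-deployment-bucket
provider:
  name: aws
  runtime: nodejs16.x
  deploymentBucket:
    # created by the plugin on first deploy if it doesn't exist yet
    name: myS3Bucket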

serverless remove never works because bucket I never created does not exist

I have a Lambda S3 trigger and a corresponding S3 bucket in my serverless.yaml, which works perfectly when I deploy it via serverless deploy.
However, when I want to remove everything with serverless remove, I always get the same error (even without changing anything in the AWS console):
An error occurred: DataImportCustomS31 - Received response status [FAILED] from custom resource. Message returned: The specified bucket does not exist
Which is strange, because I never specified a bucket with that name in my serverless.yml. I assume this somehow comes from the existing: true property of my S3 trigger, but I can't fully explain it, nor do I know how to fix it.
This is my serverless.yaml:
service: myTestService
provider:
  name: aws
  runtime: nodejs12.x
  region: eu-central-1
  profile: myprofile
  stage: dev
  stackTags:
    owner: itsme
custom:
  testDataImport:
    bucketName: some-test-bucket-zxzcq234
# functions
functions:
  dataImport:
    handler: src/functions/dataImport.handler
    events:
      - s3:
          bucket: ${self:custom.testDataImport.bucketName}
          existing: true
          event: s3:ObjectCreated:*
          rules:
            - suffix: .xlsx
    tags:
      owner: itsme
# Serverless plugins
plugins:
  - serverless-plugin-typescript
  - serverless-offline
# Resources your functions use
resources:
  Resources:
    TestDataBucket:
      Type: AWS::S3::Bucket
      Properties:
        AccessControl: Private
        BucketEncryption:
          ServerSideEncryptionConfiguration:
            - ServerSideEncryptionByDefault:
                SSEAlgorithm: AES256
        BucketName: ${self:custom.testDataImport.bucketName}
        VersioningConfiguration:
          Status: Enabled
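For what it's worth, the existing: true flag makes the framework attach the notification through a generated custom resource (the DataImportCustomS31 in the error), so a plausible reading is: on removal, CloudFormation deletes TestDataBucket first, and the custom resource then fails while cleaning up notifications on the already-deleted bucket. A minimal sketch of one rearrangement, under the assumption that the event definition may own the bucket, is to drop the flag and the duplicate resources entry:

functions:
  dataImport:
    handler: src/functions/dataImport.handler
    events:
      - s3:
          bucket: ${self:custom.testDataImport.bucketName}
          event: s3:ObjectCreated:*
          rules:
            - suffix: .xlsx
          # no "existing: true": the framework creates the bucket itself
          # and can delete it cleanly on "serverless remove"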

Serverless Framework - Create a Lambda and S3 and upload a file in S3. Then, extract to DynamoDB with Lambda

It is my first time using the Serverless Framework, and my mission is to create a Lambda, an S3 bucket and a DynamoDB table with serverless, then invoke the Lambda to transfer data from S3 to DynamoDB.
I am trying to get the name serverless generates for my S3 bucket so I can use it in my Lambda, but I have had no luck with that.
This is what my serverless.yml looks like:
service: fetch-file-and-store-in-s3
frameworkVersion: ">=1.1.0"
custom:
  bucket:
    Ref: Outputs.AttachmentsBucketName
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:PutObjectAcl
      Resource: "arn:aws:s3:::${self:custom.bucket.Ref}/*"
functions:
  save:
    handler: handler.save
    environment:
      BUCKET: ${self:custom.bucket.Ref}
resources:
  # S3
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          -
            AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000
  # Print out the name of the bucket that is created
  Outputs:
    AttachmentsBucketName:
      Value:
        Ref: AttachmentsBucket
and here is the part where the S3 bucket is created:
Resources:
  # S3
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
          - AllowedHeaders:
              - '*'
          - AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
          - MaxAge: 3000
  # Print out the name of the bucket that is created
  Outputs:
    AttachmentsBucketName:
      Value:
        Ref: AttachmentsBucket
and this is the error I am currently getting:
λ sls deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service fetch-file-and-store-in-s3.zip file to S3 (7.32 MB)...
Serverless: Validating template...
Error --------------------------------------------------
Error: The CloudFormation template is invalid: Invalid template property or properties [AttachmentsBucket, Type, Properties]
You have some issues with indentation:
resources:
  Resources:
    # S3
    AttachmentsBucket:
      Type: AWS::S3::Bucket
      Properties:
        # Set the CORS policy
        CorsConfiguration:
          CorsRules:
            # all CORS keys belong to a single rule entry
            - AllowedOrigins:
                - '*'
              AllowedHeaders:
                - '*'
              AllowedMethods:
                - GET
                - PUT
                - POST
                - DELETE
                - HEAD
              MaxAge: 3000
  # Print out the name of the bucket that is created
  Outputs:
    AttachmentsBucketName:
      Value:
        Ref: AttachmentsBucket
Indentation matters in a serverless.yml file. In this case, AttachmentsBucket is a resource, so it must be a sub-section nested one level under Resources, and then Type and Properties must be nested one level under the resource name AttachmentsBucket, whereas the sample provided has the wrong nesting. CloudFormation cannot process this resource because it cannot identify a resource with a proper name and properties. Note also that each CorsRules entry must be one complete rule object, so the CORS keys should sit in a single list item.
See the updated sample:
Resources:
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000
# Print out the name of the bucket that is created
Outputs:
  AttachmentsBucketName:
    Value: !Ref AttachmentsBucket
You can validate CloudFormation templates with the AWS CLI.
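For example (the template file name here is just a placeholder for whatever your packaged template is called):

aws cloudformation validate-template \
  --template-body file://my-template.yml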
But your question is about making the Lambda and DynamoDB load work, while in your description you are asking about the deployment part. Can you update your question and tags?
I was able to figure out a solution. As I am very new and this was my first project, I wasn't very familiar with the terms in the beginning. What I did was to name my bucket here:
Resources:
  Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: ${self:custom.bucket} # getting the name of the bucket I defined under custom in serverless.yml
  # Make the bucket publicly accessible
  MyBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref Bucket
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal: '*' # public access to the bucket files
            Action: s3:GetObject
            Resource: 'arn:aws:s3:::${self:custom.bucket}/*'
Then, to upload a file as part of the deploy, I found a plugin called serverless-s3bucket-sync and added the bucket name under the custom attribute, along with the folder containing my files:
custom:
  bucket: mybucketuniquename # unique global name it will create for the bucket
  s3-sync:
    - folder: images
      bucket: ${self:custom.bucket}
And added the IamRole:
iamRoleStatements:
  # S3 permissions
  - Effect: Allow
    Action:
      - s3:*
    Resource: "arn:aws:s3:::${self:custom.bucket}"

How to specify S3 bucket region SAM template

I'm learning the AWS Serverless Application Model. I'm trying the following template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  MyLambdaFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Runtime: nodejs8.10
      Handler: index.handler
      CodeUri:
        Bucket: artifacts-for-lambda
        Key: my-lambda-package.zip
      Events:
        MySchedule:
          Type: Schedule
          Properties:
            Schedule: rate(1 minute)
        MyS3Upload:
          Type: S3
          Properties:
            Bucket: !Ref MyS3Bucket
            Events: s3:ObjectCreated:*
  MyS3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: upload-something-here
This is how I'm running it:
aws cloudformation deploy \
  --capabilities CAPABILITY_NAMED_IAM \
  --template-file sam-template.yaml \
  --stack-name my-serverless-app
This is the error I'm receiving:
Error occurred while GetObject. S3 Error Code: PermanentRedirect. S3 Error Message: The bucket is in this region: us-east-1. Please use this region to retry the request (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException
us-east-2 is my default region per my AWS config file.
If us-east-2 is my default region why am I getting this error message saying The bucket is in this region: us-east-1? How do I specify a region for my S3 bucket in my serverless script?
Tom,
I used SAM in one of the projects I was working on. You can use it like this:
sam package --template-file template.yml \
  --output-template-file packaged.yml \
  --s3-bucket developing-sam-applications \
  --region YOUR_REGION
Moreover, you can deploy using this command with the region specified:
sam deploy --template-file packaged.yml \
  --stack-name developing-sam-applications \
  --capabilities CAPABILITY_IAM \
  --region YOUR_REGION
Note: Make sure the bucket and the function are in the same region. If you want to deploy to a different region, you'll need a bucket in that region.
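The --region flag is a global AWS CLI option, so the asker's original aws cloudformation deploy command accepts it as well; pointing it at the artifact bucket's region (us-east-1, per the error message) avoids the redirect:

aws cloudformation deploy \
  --capabilities CAPABILITY_NAMED_IAM \
  --template-file sam-template.yaml \
  --stack-name my-serverless-app \
  --region us-east-1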

Creating a lambda function as an event handler for an S3 bucket

I'm trying to do something pretty simple. I want to create a lambda function, an S3 bucket, and make the lambda function the event handler for the S3 bucket, using the serverless framework. Here's my definition file:
service: test-project
provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  runtime: nodejs8.10
  endpointType: REGIONAL
  role: arn:aws:iam::xxxxx:role/lambda_role
functions:
  MyEventHandler:
    name: fn
    handler: src/fn.handler
    events:
      - s3: container
resources:
  Resources:
    S3BucketContainer:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: the-container-bucket
But when I run:
$ sls deploy --region us-east-1 --stage dev
I get:
Serverless: Operation failed!
Serverless Error ---------------------------------------
An error occurred: S3BucketContainer - Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: xxxxx; S3 Extended Request ID: xxxxx).
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: linux
Node Version: 8.10.0
Serverless Version: 1.34.1
Does anyone know what's wrong?
The error message is horrible, but right.
The bucket is being created with configuration to send notifications to your Lambda, but at this point in the deployment the Lambda hasn't yet given the bucket permission to invoke it, so the bucket creation fails.
If you hadn't specified a custom bucket resource (to change the bucket name), serverless would have added the dependency automatically.
That all said, you're not the first to hit this, and the docs have been updated to reflect the issue.
Add this additional resource and apparently (see below) it should work:
resources:
  Resources:
    MyEventHandlerLambdaPermissionContainerS3:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          "Fn::GetAtt":
            - MyEventHandlerLambda
            - Arn
        Principal: "s3.amazonaws.com"
        Action: "lambda:InvokeFunction"
        SourceAccount:
          Ref: AWS::AccountId
        SourceArn: "arn:aws:s3:::the-container-bucket"
I say apparently because I resolved this differently (see here), using DependsOn to control the order in CloudFormation.
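For reference, a minimal sketch of that DependsOn variant (the logical IDs are illustrative and must match the ones your stack actually generates):

resources:
  Resources:
    S3BucketContainer:
      Type: AWS::S3::Bucket
      # don't create the bucket until the permission exists, so S3 can
      # validate the notification configuration at creation time
      DependsOn:
        - MyEventHandlerLambdaPermissionContainerS3
      Properties:
        BucketName: the-container-bucket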