YAML code not well formed for S3 Bucket Encryption - amazon-s3

Can someone tell me what I am doing wrong here? I am trying to require the S3 bucket to use the KMS key that I am referencing, but the YAML format is incorrect. Can you advise? Thanks.
    TrailBucket:
      Condition: InternalBucket
      Type: 'AWS::S3::Bucket'
      Properties: {
        BucketEncryption:
          ServerSideEncryptionConfiguration: *********
            - ServerSideEncryptionByDefault:
                SSEAlgorithm: !Ref Key
      }
    TrailBucketPolicy:
      Condition: InternalBucket
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        Bucket: !Ref TrailBucket
        PolicyDocument:
If I remove the brackets {}, then I get the following error message in CloudFormation:
The XML you provided was not well-formed or did not validate against our published schema (Service: Amazon S3; Status Code: 400; Error Code: MalformedXML; Request ID:

Try removing the brackets and asterisks.
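Beyond the formatting, note that SSEAlgorithm expects the literal value aws:kms; the key you are referencing belongs in KMSMasterKeyID instead. Cleaned up, and assuming Key is an AWS::KMS::Key resource in the same template, the resource would look something like this:

    TrailBucket:
      Condition: InternalBucket
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketEncryption:
          ServerSideEncryptionConfiguration:
            - ServerSideEncryptionByDefault:
                # aws:kms enables SSE-KMS; !Ref on an AWS::KMS::Key returns its key ID
                SSEAlgorithm: 'aws:kms'
                KMSMasterKeyID: !Ref Key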

Related

Serverless Framework - S3 upload lambda trigger

I'd like to trigger different lambdas on the same bucket according to the folder where the file is uploaded. Basically, when the user uploads a file to "user/some_id/bills" I want to trigger lambda 1; when the user uploads a file to "user/some_id/docs" I want to trigger lambda 2.
I tried the configuration below, but it did not work:
    insertUploadBill:
      handler: resources/insertUploadBill.main
      events:
        - s3:
            bucket: ${self:custom.settings.BUCKET}
            event: s3:ObjectCreated:*
            rules:
              - prefix: user/*/bills/

    insertUploadDocs:
      handler: resources/insertUploadDoc.main
      events:
        - s3:
            bucket: ${self:custom.settings.BUCKET}
            event: s3:ObjectCreated:*
            rules:
              - prefix: user/*/docs/
If you look at the docs
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#notification-how-to-filtering
you'll see that wildcard characters cannot be used in the rules (prefix/suffix).
So either you can change the S3 object key to match something like this:

    user/images/[user-id]

Or you can make a separate lambda that is invoked on all the s3:ObjectCreated:* events and use it to match the key and invoke your current lambdas, resources/insertUploadBill.main and resources/insertUploadDoc.main, as sketched below.
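A minimal sketch of such a dispatcher, assuming the two handlers are deployed as functions named insertUploadBill and insertUploadDoc (adjust to your actual deployed function names):

    // Dispatcher: receives every s3:ObjectCreated:* event and re-invokes
    // the matching handler based on the folder segment of the object key.
    const aws = require('aws-sdk');
    const lambda = new aws.Lambda();

    exports.handler = async (event) => {
      for (const record of event.Records) {
        const key = record.s3.object.key;
        const target = /^user\/[^/]+\/bills\//.test(key) ? 'insertUploadBill'
                     : /^user\/[^/]+\/docs\//.test(key) ? 'insertUploadDoc'
                     : null;
        if (!target) continue; // key matches neither folder; ignore it
        await lambda.invoke({
          FunctionName: target,
          InvocationType: 'Event', // asynchronous, fire-and-forget
          Payload: JSON.stringify({ Records: [record] }),
        }).promise();
      }
    };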

How can I delete an existing S3 event notification?

When I try to delete an event notification from S3, I get the following message:
Unable to validate the following destination configurations. Not authorized to invoke function [arn:aws:lambda:eu-west-1:FOOBAR:function:FOOBAR]. (arn:aws:lambda:eu-west-1:FOOBAR:function:FOOBAR, null)
Nobody in my organization seems to be able to delete that - not even admins.
When I try to set the same S3 event notification in AWS Lambda as a trigger via the web interface, I get
Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type. (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: FOOBAR; S3 Extended Request ID: FOOBAR/FOOBAR/FOOBAR)
How can I delete that existing event notification? How can I further investigate the problem?
I was having the same problem tonight and did the following:
1) Issue the command:
    aws s3api put-bucket-notification-configuration --bucket=mybucket --notification-configuration="{}"
2) In the console, delete the troublesome event.
Assuming you have better permissions from the CLI:
    aws s3api put-bucket-notification-configuration --bucket=mybucket --notification-configuration='{"LambdaFunctionConfigurations": []}'
Retrieve all the notification configurations of a specific bucket:

    aws s3api get-bucket-notification-configuration --bucket=mybucket > notification.sh
The notification.sh file will look like the following:
    {
        "LambdaFunctionConfigurations": [
            {
                "Id": ...,
                "LambdaFunctionArn": ...,
                "Events": [...],
                "Filter": {...}
            },
            { ... }
        ]
    }
Remove the unwanted notification object from notification.sh and modify the file like the following:
    #!/bin/zsh
    aws s3api put-bucket-notification-configuration --bucket=mybucket --notification-configuration='{
        "LambdaFunctionConfigurations": [
            {
                "Id": ...,
                "LambdaFunctionArn": ...,
                "Events": [...],
                "Filter": {...}
            },
            { ... }
        ]
    }'
Run the shell script:

    source notification.sh
There is no 's3api delete-notification-configuration' in the AWS CLI. Only 's3api put-bucket-notification-configuration' is present, and it overrides any previously existing events on the S3 bucket. So, if you wish to delete only a specific event, you need to handle that programmatically. Something like this:
Step 1. Do a 's3api get-bucket-notification-configuration' and save the s3-notification.json file.
Step 2. Edit this file down to the required s3-notification.json using your code.
Step 3. Finally, do 's3api put-bucket-notification-configuration' (aws s3api put-bucket-notification-configuration --bucket my-bucket --notification-configuration file://s3-notification.json).
I had worked on this logic in the AWS CLI; it requires a jq command to merge the JSON output.
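A sketch of that approach, where mybucket and the Id value unwanted-id are placeholders for your bucket and the notification you want to drop:

    # Fetch the current configuration, filter out the unwanted entry by Id,
    # and push the filtered document back to the bucket.
    aws s3api get-bucket-notification-configuration --bucket mybucket \
      | jq 'del(.LambdaFunctionConfigurations[] | select(.Id == "unwanted-id"))' \
      > s3-notification.json
    aws s3api put-bucket-notification-configuration \
      --bucket mybucket --notification-configuration file://s3-notification.json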
I tried the above, but it didn't work for me. Instead, I uploaded a lambda with the same function name but without events, then went to the function in the dashboard and added a trigger with the same prefix and suffix. When applying the changes the dashboard shows an error, but if you go back to the lambda function you can see the trigger is now linked to it; after that you can remove the lambda or the events.

Way to access S3 Bucket's name from cloudformation/boto3?

My team wants to create an S3 bucket in a cloudformation template without assigning it a bucket name (to let cloudformation name it itself).
When putting a file into the S3 bucket from my lambda function, is there a way for me to get the S3 bucket's name without having to manually look at the AWS console and check what name was created?
Using the Serverless Application Model (SAM), you can include environment variables with the function properties:
    AWSTemplateFormatVersion: '2010-09-09'
    Description: "Demo"
    Transform: 'AWS::Serverless-2016-10-31'
    Resources:
      MyLambdaFunction:
        Type: 'AWS::Serverless::Function'
        Properties:
          Runtime: nodejs10.x
          Handler: index.handler
          CodeUri: ./src
          Policies:
            - Version: '2012-10-17'
              Statement:
                - Action:
                    - s3:PutObject
                  Effect: 'Allow'
                  Resource: !Sub ${MyS3Bucket.Arn}/*
          Environment:
            Variables:
              BUCKET_NAME: !Ref MyS3Bucket
      MyS3Bucket:
        Type: 'AWS::S3::Bucket'
Then your function can access the environment variable as process.env.BUCKET_NAME in Node.js. In Python I think you'd use os.environ['BUCKET_NAME'].
    const aws = require('aws-sdk');
    const s3 = new aws.S3();

    exports.handler = async (event) => {
      const params = {
        Body: 'The Body',
        Bucket: process.env.BUCKET_NAME,
        Key: 'abc123',
      };
      return s3.putObject(params).promise();
    };
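A rough boto3 equivalent for a Python handler (a sketch along the same lines, not part of the original answer):

    import os
    import boto3

    s3 = boto3.client('s3')

    def handler(event, context):
        # The bucket name arrives via the environment variable set in the template
        return s3.put_object(
            Body=b'The Body',
            Bucket=os.environ['BUCKET_NAME'],
            Key='abc123',
        )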
I would assume this also works for CloudFormation templates that aren't using the SAM transform.
You can use Fn::GetAtt to get the values from your newly created S3 bucket. You can check it here: S3 Ref & Get Attribute Documentation.
The problem is how to pass the value to the lambda function.
Here are the steps that might work:
1. Use the Get Attribute function above to get the S3 bucket name that CloudFormation created (see the sketch below).
2. Insert the value into a file; you can use UserData, or Metadata if you are using cloud-init already.
3. Store the file in an existing S3 bucket (or any other storage that the lambda can access); you could use the CloudFormation template bucket that is always created when you launch a CloudFormation template (usually named cf-template...).
4. Add code to your lambda to access S3 and get the file. Now you have the name of the S3 bucket that your CloudFormation stack created and can use it in the lambda.
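For step 1, a minimal sketch, assuming the bucket resource is named MyS3Bucket as above (Ref on an AWS::S3::Bucket returns the bucket name, while Fn::GetAtt returns other attributes such as the ARN):

    Outputs:
      BucketName:
        Value: !Ref MyS3Bucket          # Ref returns the bucket name
      BucketArn:
        Value: !GetAtt MyS3Bucket.Arn   # GetAtt returns attributes such as the ARN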
Hope this helps.

IdentityPool Creation with CloudFormation

I'm attempting to follow along with the tutorial at http://serverless-stack.com/chapters/create-a-cognito-identity-pool.html for identity pool creation, and to document the creation with CloudFormation so that I can easily undo everything when I am done. However, I am having trouble finding any examples that show how to do this effectively using the template syntax. What I currently have is the following:
    ScratchUserPool:
      Type: AWS::Cognito::UserPool
      Properties:
        UserPoolName: notes-user-pool
    ScratchUserPoolClient:
      Type: AWS::Cognito::UserPoolClient
      Properties:
        ClientName: notes-client
        ExplicitAuthFlows: [ADMIN_NO_SRP_AUTH]
        UserPoolId:
          Ref: ScratchUserPool
    ScratchIdentityPool:
      Type: AWS::Cognito::IdentityPool
      Properties:
        IdentityPoolName: ScratchIdentityPool
        AllowUnauthenticatedIdentities: false
        CognitoIdentityProviders:
          - ClientId:
              Ref: ScratchUserPoolClient
            ProviderName:
              Ref: ScratchUserPool
The deployment step is failing when it attempts to create the ScratchIdentityPool. I get an error stating that:
An error occurred while provisioning your stack: ScratchIdentityPool -
Invalid Cognito Identity Provider (Service: AmazonCognitoIdentity;
Status Code: 400; Error Code: InvalidParameterException; Request ID:
bc058020-663b-11e7-9f2a-XXXXXXXXXX)
Am I not referencing the Client or Provider name correctly?
Almost immediately after I posted my question, I was able to answer it. The problem with my identity pool was that I needed to reference the provider name in the following way:
    ScratchIdentityPool:
      Type: AWS::Cognito::IdentityPool
      Properties:
        IdentityPoolName: ScratchIdentityPool
        AllowUnauthenticatedIdentities: false
        CognitoIdentityProviders:
          - ClientId:
              Ref: ScratchUserPoolClient
            ProviderName:
              Fn::GetAtt: [ScratchUserPool, ProviderName]
I needed to use the intrinsic function Fn::GetAtt in order for this to work.

Backand's API with S3: Upload to region other than US Standard

I would like to use AWS S3 to store my app's user's files securely.
I am based in the EU (UK), so my bucket's region is EU (Ireland). Based on the Noterious example in the Backand docs, and the snippet provided by the Backand dashboard, this is my custom File Upload action:
    function backandCallback(userInput, dbRow, parameters, userProfile) {
        var data = {
            "key": "<my AWS key ID>",
            "secret": "<my secret key>",
            "filename": parameters.filename,
            "filedata": parameters.filedata,
            "region": "Ireland",
            "bucket": "<my bucket name>"
        };
        var response = $http({
            method: "PUT",
            url: CONSTS.apiUrl + "/1/file/s3",
            data: data,
            headers: { "Authorization": userProfile.token }
        });
        return response;
    }
When testing the action in the Backand dashboard, I get this error: 417 The remote server returned an error: (500) Internal Server Error.: An error occurred, please try again or contact the administrator. Error details: Maximum number of retry attempts reached : 3.
With an American bucket and region: "US Standard", it works without error. So, similarly to this answer, I think this is because the AWS endpoint isn't correctly set up.
I have tried region: "EU", region: "Ireland", region: "eu-west-1" and similar combinations.
So - Is there any way to configure Backand to use AWS endpoints other than US Standard? (I'd have thought that would have been the whole point of setting the region.)
We have checked this issue, and apparently there is a difference in the AWS security method between the east coast (N. Virginia) and newer regions like Ireland.
A fix is scheduled for one of the next releases, and I will update here when it is resolved.