Setting up an S3 event notification to SQS for an existing bucket with the CDK tries to create an unknown Lambda function

I am trying to set up an S3 event notification for an existing S3 bucket using the AWS CDK.
Below is the code:
bucket = s3.Bucket.from_bucket_name(self, "S3Bucket", f"some-{stack_settings.aws_account_id}")
bucket.add_event_notification(
    s3.EventType.OBJECT_CREATED,
    s3n.SqsDestination(queue),
    s3.NotificationKeyFilter(prefix="uploads/"),
)
The stack creation fails and I see the error below in the CloudFormation console:
User: arn:aws:sts::<account>:assumed-role/some-cicd/i-8989898989xyz
is not authorized to perform: lambda:InvokeFunction on resource:
arn:aws:lambda:us-east-1:<account_number>:function:<some name>-a-BucketNotificationsHandl-b2kDmawsGjpL
because no identity-based policy allows the lambda:InvokeFunction action (Service: AWSLambda;
Status Code: 403; Error Code: AccessDeniedException; Request ID: c2d91744-416c-454d-a510-ff4cce061b80;
Proxy: null)
I am not sure what this Lambda is; I am not trying to create any such Lambda in my CDK app.
Does anyone know what is going on here and whether there is anything wrong with my code?

The ability to add notifications to an existing bucket is implemented with a custom resource: a Lambda function that uses the AWS SDK to modify the bucket's notification settings.
CloudFormation invokes this Lambda when creating the custom resource (and also on update and delete). So the deploy role needs permission to invoke it, which is what the AccessDeniedException above is complaining about.
If you would like details, see the relevant GitHub issue and the commit that added the feature.
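Conceptually, the generated handler boils down to one S3 API call. A minimal sketch with boto3 (function names are hypothetical; the real CDK handler also merges the new configuration with any existing notifications and handles update/delete):

```python
def build_notification_config(queue_arn: str, prefix: str) -> dict:
    """Build the NotificationConfiguration the handler sends to S3."""
    return {
        "QueueConfigurations": [
            {
                "QueueArn": queue_arn,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": prefix}]}
                },
            }
        ]
    }

def apply_notification(bucket_name: str, queue_arn: str, prefix: str) -> None:
    """Roughly what the custom-resource Lambda does on Create."""
    import boto3  # ships with the Lambda Python runtime

    boto3.client("s3").put_bucket_notification_configuration(
        Bucket=bucket_name,
        NotificationConfiguration=build_notification_config(queue_arn, prefix),
    )
```

Since the handler is a normal Lambda function, whatever role performs the deployment must be allowed to call lambda:InvokeFunction on it.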


Amplify S3 trigger for Storage

I have created an Amplify React app with Storage where I save my PDF files, and I created a trigger from the AWS web interface. But when I upload a file, my app doesn't trigger the Lambda function.
I upload every file into the public folder of my Storage, and when I look at my Storage properties the event is there.
When I invoke the function manually I see the event in CloudWatch, but when I put a document into my S3 bucket, nothing happens. Where am I going wrong?
This is my trigger and this is my Lambda function code.
Thanks for the help.
I am trying to retrieve a PDF file when it is uploaded to an S3 bucket.
You created the trigger "from the AWS interface". I don't believe Amplify Studio supports that yet, and you should never make changes to Amplify-generated resources via the AWS Console web interface.
You should probably undo whatever you set up, and then run amplify push from within your project to make sure it still deploys.
If your S3 bucket (Storage) is managed by Amplify, and your Lambda (Function) is managed by Amplify, then you can easily generate a trigger that will activate the Lambda when changes occur in the S3 bucket.
From Amplify trigger documentation, you add the trigger from the S3 bucket:
amplify update storage
Then
? Do you want to add a Lambda Trigger for your S3 Bucket? Yes
? Select from the following options
❯ Choose an existing function from the project
Create a new function
After those steps, deploy your changes:
amplify push
Then go ahead and drop a test file into your S3 bucket.
Your Lambda will receive all S3 events for the bucket; your code probably only wants to process the s3:ObjectCreated events.
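For example, a minimal handler sketch (assuming a Python Lambda runtime; in the notification records the eventName field looks like ObjectCreated:Put, so a prefix check is enough to skip deletes and other events):

```python
def handler(event, context):
    """Collect the keys of newly created objects, ignoring all other events."""
    created = []
    for record in event.get("Records", []):
        if record.get("eventName", "").startswith("ObjectCreated"):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            created.append(f"s3://{bucket}/{key}")
    return created
```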

I recently started getting the following error on the release change action in the AWS CodePipeline console. I am also attaching a screenshot.

Invalid action configuration
The input artifact, codepipeline-ap-south-1-359590581532/deliv_Module/BuildArtif/saPzkcn, does not exist or you do not have permissions to access it: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: A2WSY5359TJMCZ4V; S3 Extended Request ID: cnf3XMLT+B+p90oJZHDuJfM5nsCyD1JLVFjgqqaGATx2KRuHmxM//tXJIz0FnSJQPFZfMAVDt0o=; Proxy: null)
Check that the CodePipeline role has access to the S3 bucket that contains the artifact, as well as the necessary KMS permissions, and that the bucket and the pipeline are in the same region.
Also check that the artifact actually exists in the S3 bucket.
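For the permission side, the pipeline's role needs roughly the following on the artifact bucket. A sketch (the bucket name and KMS key ARN are placeholders, and the helper name is hypothetical):

```python
import json

def pipeline_artifact_policy(bucket: str, kms_key_arn: str) -> str:
    """Identity-based policy for the CodePipeline role: read the artifact
    bucket and decrypt with its KMS key. Names are placeholders."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:GetObjectVersion",
                    "s3:GetBucketLocation",
                ],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            },
            {
                "Effect": "Allow",
                "Action": ["kms:Decrypt", "kms:DescribeKey"],
                "Resource": [kms_key_arn],
            },
        ],
    }
    return json.dumps(policy, indent=2)
```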

CloudFormation script stuck in UPDATE_ROLLBACK_FAILED, AWSLambdaInternal needs GetObject permissions. How to resolve?

I have a CloudFormation stack stuck in a state of UPDATE_ROLLBACK_FAILED.
The error message I'm seeing, associated with the creation of a Lambda, is:
Your access has been denied by S3, please make sure your request credentials
have permission to GetObject for <codepipeline-bucket-name>/<file-name>]. S3
Error Code: AccessDenied. S3 Error Message: Access Denied (Service:
AWSLambdaInternal; Status Code: 403; Error Code: AccessDeniedException;
Request ID: <request-id>)
I have double-checked that the IAM role associated with the stack has the correct S3 permissions, but I don't think it's CloudFormation that's throwing the permission error; I think it's the AWSLambdaInternal service.
I have a dozen other stacks that use the same IAM role and I've never had this problem. I even tried making the specific S3 object public to see whether that was the problem, but I can only assume that AWSLambdaInternal does not have S3 GetObject permission. I even tried adding sts:AssumeRole permissions for lambda.amazonaws.com to the IAM role the CloudFormation script uses, but that didn't change anything.
The stack was working fine until I decided to move the Lambda function it creates into a VPC.
The status UPDATE_ROLLBACK_FAILED actually means that not only did the update fail, but the rollback failed as well, which should normally never happen.
Check whether you have changed any CloudFormation-managed resources manually; if you are sure you didn't, report the problem to AWS support.
The Lambda function you moved into a VPC: does it by any chance back a custom CloudFormation resource for you? When a rollback fails for me it is usually my own doing, because I have broken a custom resource's Lambda function and it fails. When in doubt I usually make the delete action for the custom resource a no-op, or make it always report success even if it fails (and then delete the underlying resource manually).
If the Lambda function is indeed responsible for managing a custom resource, then you should 1) move it back where it was, 2) create a new Lambda function that you know will be accessible and will work, and 3) only then switch the custom resource over to the new function.
In CloudFormation's stack events listing, look at which resource update failed. The individual error is not as important as the resource it was triggered for.
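The "no-op delete" trick for custom resources can be sketched like this (a hypothetical handler; a real one must also POST its result to the pre-signed ResponseURL that CloudFormation passes in the event, e.g. via the cfn-response module):

```python
def handler(event, context):
    """Custom-resource handler whose Delete always succeeds, so a broken
    resource can never wedge a stack rollback or deletion."""
    request_type = event["RequestType"]  # "Create", "Update", or "Delete"
    if request_type == "Delete":
        # No-op: report success; clean up the underlying resource manually.
        return {"Status": "SUCCESS"}
    try:
        # ... real Create/Update work goes here ...
        return {"Status": "SUCCESS"}
    except Exception as exc:
        return {"Status": "FAILED", "Reason": str(exc)}
```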

S3 access denied when trying to run aws cli

Using the AWS CLI, I'm trying to run
aws cloudformation create-stack --stack-name FullstackLambda --template-url https://s3-us-west-2.amazonaws.com/awsappsync/resources/lambda/LambdaCFTemplate.yam --capabilities CAPABILITY_NAMED_IAM --region us-west-2
but I get the error
An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
I have already set my credential with
aws configure
PS I got the create-stack command from the AppSync docs (https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers.html)
It looks like you accidentally dropped the letter l at the end of the template file name:
LambdaCFTemplate.yam -> LambdaCFTemplate.yaml
First, make sure the S3 URL is correct, although since this is a 403 I doubt that's the case.
Your error could result from a few different scenarios:
1. If both the APIs and the IAM user are MFA-protected, you have to generate temporary credentials using aws sts get-session-token and use those.
2. Use a role to give CloudFormation read access to the template object in S3. First create an IAM role with read access to S3, then create a parameter like the one below and reference it in the resource's IamInstanceProfile property block:
"InstanceProfile":{
"Description":"Instance Profile Name",
"Type":"String",
"Default":"iam-test-role"
}

AWS Lambda working with S3

I want to create a Python Lambda function that takes uploaded S3 images and creates a thumbnail version of them.
I am having permission problems where I cannot get access to my bucket. I understand that I need to create a bucket policy, but I don't understand how to make a policy that works for a Lambda performing the thumbnail process.
It sounds like you want to do the following:
Fire the Lambda whenever something is uploaded to your bucket
Read a file from the bucket
Write a (thumbnail) file back to the bucket
You'll need 3 different permissions to do that:
The S3 service will need permission to invoke your lambda function (this is done for you when you add an S3 event source via the AWS Lambda console).
The lambda execution role (the one selected on the Configuration tab of the Lambda Console) will need read/write access to call S3. You can generate a policy for this on the policy generator by selecting IAM Policy from the drop down and then selecting the S3 permissions you need.
For added security, you can set a bucket policy on S3 to only allow the lambda function to access it. You can generate this from the policy generator as well by selecting S3 policy. You would then enter lambda.amazonaws.com as the Principal.
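Putting the read-then-write flow together, a thumbnailing handler might look like the sketch below (assumptions: a Python Lambda runtime, Pillow bundled with the deployment package, and an arbitrary thumbnails/ output prefix to keep the function from re-triggering on its own writes):

```python
import io
import os

THUMB_SIZE = (128, 128)  # arbitrary choice for this sketch

def thumbnail_key(key: str) -> str:
    """Derive the destination key, e.g. uploads/cat.jpg -> thumbnails/cat.jpg."""
    return "thumbnails/" + os.path.basename(key)

def handler(event, context):
    # boto3 ships with the Lambda Python runtime; Pillow must be bundled
    # with the deployment package. Imported here so the sketch can be read
    # (and the helper tested) without either installed.
    import boto3
    from PIL import Image

    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if key.startswith("thumbnails/"):
            continue  # don't re-trigger on our own output
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(body))
        fmt = image.format or "PNG"
        image.thumbnail(THUMB_SIZE)  # resizes in place, preserving aspect ratio
        out = io.BytesIO()
        image.save(out, format=fmt)
        s3.put_object(Bucket=bucket, Key=thumbnail_key(key), Body=out.getvalue())
```

The skip on the thumbnails/ prefix matters: without it, each thumbnail written back to the bucket would fire the ObjectCreated event again and loop forever.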