AWS Lambda working with S3

I want to create a Python Lambda function that takes uploaded S3 images and creates thumbnail versions of them.
I have permission problems where I cannot get access to my bucket. I understand that I need to create a bucket policy, but I don't understand how to make a policy that works for a Lambda request performing the thumbnail process.

It sounds like you want to do the following:
Fire the Lambda whenever something is uploaded to your bucket
Read a file from the bucket
Write a (thumbnail) file back to the bucket
You'll need 3 different permissions to do that:
The S3 service will need permission to invoke your lambda function (this is done for you when you add an S3 event source via the AWS Lambda console).
The Lambda execution role (the one selected on the Configuration tab of the Lambda console) will need read/write access to S3. You can generate a policy for this with the policy generator by selecting IAM Policy from the drop-down and then selecting the S3 permissions you need.
For added security, you can set a bucket policy on S3 to only allow the lambda function to access it. You can generate this from the policy generator as well by selecting S3 policy. You would then enter lambda.amazonaws.com as the Principal.
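Putting the read and write steps together, here is a minimal handler sketch. It assumes Pillow is packaged with the function (or provided as a layer) and writes thumbnails under a thumbnails/ prefix; both are assumptions, not part of the question. Be careful that writing back into the same bucket can re-trigger the function unless the event source is filtered to a different prefix.

import io
import urllib.parse

import boto3
from PIL import Image  # assumption: Pillow is bundled with the function or in a layer

s3 = boto3.client('s3')

def handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Object keys arrive URL-encoded in S3 event notifications
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        original = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
        image = Image.open(io.BytesIO(original))
        image.thumbnail((128, 128))  # resizes in place, preserving aspect ratio
        buffer = io.BytesIO()
        image.save(buffer, format=image.format or 'PNG')
        s3.put_object(Bucket=bucket, Key=f'thumbnails/{key}', Body=buffer.getvalue())

The execution role behind this sketch only needs s3:GetObject and s3:PutObject on the bucket's objects, which is exactly what you would pick in the policy generator.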

Related

Amplify S3 trigger for Storage

I have created an Amplify React app with Storage where I save my PDF files. I created a trigger from the AWS interface, but when I upload a file my app doesn't trigger the Lambda function.
I upload all files into the public folder of my Storage, and if I go to my Storage properties the event is there. When I try the function manually I see the event in CloudWatch, but when I insert a document into my S3 bucket I don't. Where is the problem? Where am I going wrong?
This is my trigger and this is my Lambda function code.
Thanks for the help.
I am trying to retrieve the PDF file when it is loaded into an S3 bucket.
You created the trigger "from the aws interface". I don't believe "Amplify Studio" supports that yet, and you should never make changes to Amplify-generated resources via the "AWS Console" web interface.
You should probably undo whatever you set up, and then do an amplify push from within your project to make sure it still deploys.
If your S3 bucket (Storage) is managed by Amplify, and your Lambda (Function) is managed by Amplify, then you can easily generate a trigger that will activate the Lambda when changes occur in the S3 bucket.
From Amplify trigger documentation, you add the trigger from the S3 bucket:
amplify update storage
Then
? Do you want to add a Lambda Trigger for your S3 Bucket? Yes
? Select from the following options
❯ Choose an existing function from the project
Create a new function
After those steps, deploy your changes:
amplify push
Then go ahead and drop a test file into your S3 bucket.
Your Lambda will receive all S3 events for the bucket; your code probably only wants to process the s3:ObjectCreated events, as in the sketch below.
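As a sketch of that filtering (the handler body and the print are placeholders for whatever processing your function does):

import urllib.parse

import boto3

s3 = boto3.client('s3')

def handler(event, context):
    for record in event.get('Records', []):
        # eventName looks like "ObjectCreated:Put"; skip deletions and other events
        if not record['eventName'].startswith('ObjectCreated'):
            continue
        bucket = record['s3']['bucket']['name']
        # Object keys arrive URL-encoded in S3 event notifications
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        pdf = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
        print(f'Received {len(pdf)} bytes from s3://{bucket}/{key}')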

S3 objects deny access - These objects came from another account's AWS CodeBuild project

I'm updating accountB's S3 bucket from accountA's CodeBuild project.
The problem is that access is denied to all the objects that come from accountA's CodeBuild.
My purpose is to use this S3 bucket for static hosting.
I set up all the requirements for static hosting, and it works fine when I upload a simple index.html manually.
But the individual objects from accountA's CodeBuild project show the error attached below.
ex) index.html properties & permission
I checked the Disable artifact encryption option in the artifact settings of the CodeBuild project,
and also, in the override params,
encryptionDisabled: true
This CodeBuild project works fine when I save the output to an S3 bucket in the same account
(the S3 static hosting site in accountA works well),
but I'm getting the access issue with accountB's S3 bucket.
Before trying to touch the KMS policy, I want to know if I missed some configuration in CodeBuild.
Please advise me on what I have to do or what I missed...
Thanks.
(+)
I just found a similar question and answer, with help from petrch (thanks!), and am trying to apply it...
CodeBuild upload build artifact to S3 with ACL
Upload the objects with the bucket-owner-full-control canned ACL, otherwise the objects will still be "owned" by the source account.
See:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html
It says:
Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. When a request is received against a resource, Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access permissions.
When you create a bucket or an object, Amazon S3 creates a default ACL that grants the resource owner full control over the resource. This is shown in the following sample bucket ACL (the default object ACL has the same structure)
So the object has the ACL of the source bucket. It's not very obvious, but you can provide an ACL during the PutObject action from the source account, so it can still be just one call.
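As a sketch, that single call with boto3 would look like this (the bucket name and file path are placeholders, not from the question):

import boto3

s3 = boto3.client('s3')

# Upload from the source account (accountA), handing full control to the bucket owner (accountB)
with open('build/index.html', 'rb') as body:
    s3.put_object(
        Bucket='accountb-static-site-bucket',  # placeholder for accountB's bucket
        Key='index.html',
        Body=body,
        ContentType='text/html',
        ACL='bucket-owner-full-control',
    )

If the upload happens in the buildspec via the CLI instead, the equivalent flag is aws s3 cp --acl bucket-owner-full-control.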

S3 Access Denied with boto for private bucket as root user

I am trying to access a private S3 bucket that I've created in the console with boto3. However, when I try any action, e.g. listing the bucket contents, I get:
import boto3

boto3.setup_default_session()
s3Client = boto3.client('s3')
blist = s3Client.list_objects(Bucket=bucketName)['Contents']
ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
I am using my default profile (no need for IAM roles). The Access Control List in the browser states that the bucket owner has list/read/write permissions. The canonical ID listed as the bucket owner is the same as the canonical ID I get when I go to 'Your Security Credentials'.
In short, it feels like the account permissions are OK, but boto is not logging in with the right profile. In addition, running similar commands from the command line, e.g.
aws s3api list-buckets
also gives Access Denied. I have no problem running these commands at work, where I have a work log-in and IAM roles. It's just when running them with my personal 'default' profile.
Any suggestions?
It appears that your credentials have not been stored in a configuration file.
You can run this AWS CLI command:
aws configure
It will then prompt you for an Access Key and Secret Key, and will store them in the ~/.aws/credentials file. That file is automatically used by the AWS CLI and boto3.
It is a good idea to confirm that it works via the AWS CLI first, then you will know that it should work for boto3 also.
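A quick way to confirm which identity boto3 is actually picking up is sts:GetCallerIdentity, which requires no IAM permissions; a minimal sketch:

import boto3

# Pass profile_name='...' here to force a specific profile from ~/.aws/credentials
session = boto3.Session()
identity = session.client('sts').get_caller_identity()
print(identity['Account'], identity['Arn'])

If the printed account or ARN is not what you expect, the wrong credentials are being used (for example, from environment variables, which take precedence over the credentials file).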
I would highly recommend that you create IAM credentials and use them instead of root credentials. It is quite dangerous if the root credentials are compromised. A good practice is to create an IAM User for each specific application, then limit the permissions granted to that application. This avoids situations where a programming error (or a security compromise) could lead to unwanted behaviour (e.g. resources being used or data being deleted).

Allow API users to run AWS Lambda using execution role from Cognito identity pool

I'm using AWS amplify to create an app, where users can upload images using either private or public file access levels, as described in the documentation. Besides this, I've implemented a lambda function which upon request through API gateway modifies an image and returns a link to the modified image.
What I want is that a given user should be able to call the API and modify only their own images, but not those of other users; i.e. allow the AWS Lambda function to use the execution role from the Cognito user. If I allow the Lambda function to access all data in the S3 bucket then it works fine, but I don't want users to be able to access other users' images.
I've been at it for a while now, trying different things to no avail.
Now I've integrated the API with the user pool as described here:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-enable-cognito-user-pool.html
And then I've tried to follow this guide:
https://aws.amazon.com/premiumsupport/knowledge-center/cognito-user-pool-group/
This does not work, since "cognito:roles" is not present in the event variable of the lambda_handler (presumably because there are no user pool groups?).
What would the right way be to go about this in an AWS Amplify app?
Use API Gateway request mapping and check permissions in Lambda itself:
Use API Gateway request mapping to pass context.identity.cognitoIdentityId to Lambda. Note that it must be a Lambda integration with a mapping template (not a proxy integration). Another limitation is that the API request should be a POST; GET is also possible if you map cognitoIdentityId to the query string.
Lambda has access to all files in S3.
Implement the access control check in Lambda itself: the Lambda can read all the permissions of the file in S3 and then see whether the owner is the Cognito user, as sketched below.
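A minimal sketch of that check, assuming the mapping template passes cognitoIdentityId and key in the request body and that files follow Amplify's private/{identity_id}/... key convention; the event field names and the bucket name here are assumptions, not from the question:

import boto3

s3 = boto3.client('s3')
BUCKET = 'my-amplify-storage-bucket'  # placeholder

def handler(event, context):
    # Assumed to be injected by the API Gateway mapping template, e.g.
    # { "cognitoIdentityId": "$context.identity.cognitoIdentityId", "key": "..." }
    identity_id = event['cognitoIdentityId']
    key = event['key']

    # Amplify stores private-level files under private/{identity_id}/...
    if not key.startswith(f'private/{identity_id}/'):
        return {'error': 'Forbidden: caller does not own this object'}

    obj = s3.get_object(Bucket=BUCKET, Key=key)
    return {'size': obj['ContentLength']}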

How do I create a s3 bucket, IAM user with full access to S3 and how do I pass the users credentials to my application?

I am using the Amazon CloudFormation template https://s3.amazonaws.com/cloudformation-templates-us-east-1/PHPHelloWorld.template to set up my application. I need to create an S3 bucket and an IAM user with full access to S3. My PHP application would need the credentials of the created user to upload files to S3.
How do I create an S3 bucket and an IAM user with full access to S3, and how do I pass the user's credentials to my application?
Also, I have to install the Amazon PHP SDK and some other software; what entries do I need to add to the UserData section of PHPHelloWorld.template?
Thank you
The example template list contains a template for giving an IAM user full access.
Somewhat counter-intuitively, you don't set any permission properties on the S3 bucket itself; the grant goes either in an S3 bucket policy or on the IAM user.
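In a template this is expressed with AWS::S3::Bucket, AWS::IAM::User, and AWS::IAM::AccessKey resources. As a sketch of the same setup outside CloudFormation, using boto3 (all names are placeholders):

import json
import boto3

iam = boto3.client('iam')
s3 = boto3.client('s3')

# A bucket with no special permission properties of its own
# (in regions other than us-east-1, add a CreateBucketConfiguration)
s3.create_bucket(Bucket='my-php-app-uploads')  # placeholder name

# An IAM user whose inline policy grants full access to S3
iam.create_user(UserName='my-php-app-user')
iam.put_user_policy(
    UserName='my-php-app-user',
    PolicyName='s3-full-access',
    PolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [{'Effect': 'Allow', 'Action': 's3:*', 'Resource': '*'}],
    }),
)

# Credentials the PHP application would use (e.g. passed in via UserData)
key = iam.create_access_key(UserName='my-php-app-user')['AccessKey']
print(key['AccessKeyId'], key['SecretAccessKey'])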