The account whose S3 bucket I want to invoke my Lambda function from has given me access to assume roles. If I attach that role to my Lambda function's execution role, will this let the bucket invoke my Lambda function? I don't think this will work, because whenever I have invoked Lambdas from S3 in the past, I needed to add the notification configuration to my S3 bucket. Am I able to do this on a bucket that I do not own?
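For reference, this is roughly the bucket-level call I mean (the bucket name and function ARN below are placeholders):

import boto3

# Sketch only: the notification configuration has to be set on the bucket itself,
# with credentials that are allowed to modify it (normally the bucket owner's).
# The Lambda's resource policy must also allow S3 to invoke the function.
s3 = boto3.client('s3')
s3.put_bucket_notification_configuration(
    Bucket='their-bucket',  # placeholder: the other account's bucket
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {
                'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:111122223333:function:my-function',  # placeholder
                'Events': ['s3:ObjectCreated:*'],
            }
        ]
    },
)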
I am looking to begin testing today and wanted some insight.
Thank you.
I want to use the data transfer API from GCP to copy data from S3 into my GCS bucket. The S3 bucket is controlled by our client, and we have zero control over it.
We learned that to use this API, the AWS IAM identity has to have these permissions:
s3:ListBucket
s3:GetObject
s3:GetBucketLocation
https://cloud.google.com/st...
When I asked them, they said the permissions are granted at the prefix level, not the bucket level, since the bucket holds data for many clients, not just us. They do not want to grant any permission that might give us access to the whole bucket's data; we should be limited to our prefix.
I want to know whether asking for this permission (s3:GetBucketLocation) at the prefix level would give us access to ALL the data present in the bucket, or whether it just allows the transfer API to locate the data.
I did check the AWS documentation, and the closest answer was about the GetBucketLocation API, which stated:
" Returns the Region the bucket resides in. You set the bucket's Region using the LocationConstraint request parameter in a CreateBucket request. For more information, see CreateBucket."
So it does seem it only returns the region of the bucket, BUT there is no documentation to be found that is specific to the permission itself.
The Google documentation does say this API is needed only to find the region; however, we need to make sure it does not open a way for us to read all the data in the bucket, if that makes sense.
Please let me know if you have any knowledge on this.
Firstly, I don't think you need to get the bucket's region programmatically. If you're interested in a specific bucket, perhaps the client could just tell you its region?
The action-level detail is in the Service Authorization Reference (SAR) for S3, which just says:
Grants permission to return the Region that an Amazon S3 bucket resides in
So there's nothing about object-level (i.e. data) access granted by it.
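If you want to double-check it yourself, here's a minimal boto3 sketch (the bucket name is a placeholder) showing that the call returns nothing but the region:

import boto3

# GetBucketLocation returns only the bucket's region (LocationConstraint);
# it does not return or grant access to any object data.
s3 = boto3.client('s3')
resp = s3.get_bucket_location(Bucket='their-shared-bucket')  # placeholder bucket
print(resp.get('LocationConstraint'))  # e.g. 'eu-west-1' (None means us-east-1)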
I've just realized that if I allow the browser to upload a file to my S3 bucket (using a session token from my server), an attacker who knows the object keys can use those temporary permissions to overwrite existing files (and either replace them with malicious or empty content).
Some say the solution would be to use object versioning, but I'm wondering if a lambda function can intercept that PutObject request, check if the key already exists in the bucket, and if so, deny the operation.
The short answer is no.
This is because S3 is eventually consistent. Even if you did something clever like attempting a getObject to see if the file exists, you may well get a false negative under the heavy quick-fire load you're expecting from an attacker.
If you want to ensure that a signed URL can be used once and only once, then you'll have to replace the signed-URL functionality with your own. An example would be to use API Gateway + Lambda + DynamoDB. In this case you would create an 'upload token' which you save to DynamoDB and return to the user. When the user then uploads a file using the token, it is removed from DynamoDB (which can be made immediately consistent).
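A minimal sketch of the token-consumption step in that Lambda, assuming a hypothetical upload_tokens table keyed on token; the conditional delete means a token can only be accepted once, even under concurrent requests:

import boto3
from botocore.exceptions import ClientError

# Hypothetical table: 'upload_tokens' with partition key 'token'.
dynamodb = boto3.client('dynamodb')

def consume_upload_token(token):
    try:
        # The delete only succeeds if the token is still present.
        dynamodb.delete_item(
            TableName='upload_tokens',
            Key={'token': {'S': token}},
            ConditionExpression='attribute_exists(#t)',
            ExpressionAttributeNames={'#t': 'token'},
        )
        return True  # token was valid and is now used up
    except ClientError as e:
        if e.response['Error']['Code'] == 'ConditionalCheckFailedException':
            return False  # token unknown or already used
        raise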
We have a requirement to create an item in a DynamoDB table when an S3 bucket is created, using AWS Lambda. Any help would be appreciated.
If you are creating the S3 bucket from a Lambda, then the same Lambda can add an item to DynamoDB. Your Lambda's role should have permission to add items to DynamoDB.
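A minimal sketch of that, assuming a hypothetical 'buckets' table keyed on bucket_name:

import boto3

# The execution role needs s3:CreateBucket and dynamodb:PutItem. Outside
# us-east-1 you would also pass CreateBucketConfiguration to create_bucket.
s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')

def handler(event, context):
    bucket_name = event['bucket_name']  # hypothetical input shape
    s3.create_bucket(Bucket=bucket_name)
    dynamodb.Table('buckets').put_item(Item={'bucket_name': bucket_name})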
To achieve this, another option is to use Cloud Custodian with Lambda.
Cloud Custodian gives you a way to automatically add a username tag, enable encryption, etc. when an S3 bucket is created. With the help of this you can add your own custom logic, and instead of adding a username tag you can use boto3 to write the bucket's details to DynamoDB.
As per amazon: "Enabling notifications is a bucket-level operation; that is, you store notification configuration information in the notification subresource associated with a bucket"
The previous statement quotes the AWS S3 documentation, Configuring Amazon S3 Event Notifications. It implies that the bucket has to exist, and that it is on a specific bucket that you create and configure the notification event. Basically: you can't produce a notification event on bucket creation.
As an alternate solution, I would suggest a scheduled process that monitors the existing list of S3 buckets and writes a record to DynamoDB when a new bucket is created (i.e. a new bucket shows up in the list). The following, Creating, Listing, and Deleting Amazon S3 Buckets, illustrates some examples of what can be done using the Java SDK, but the same can be found in other languages as well (see: List Buckets).
Note: You can look at the following tutorial, Schedule AWS Lambda Functions Using CloudWatch Events, as one possible way to run a Lambda on a schedule/interval.
Accessing the Bucket object's properties can give you extra information like the bucket creation date, bucket owner, and more (details of the Bucket class can be found here).
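A minimal Python sketch of that polling approach (the 'seen_buckets' table and its key are hypothetical); the conditional put means each bucket is only written to DynamoDB once:

import boto3
from botocore.exceptions import ClientError

# Run this handler on a CloudWatch Events / EventBridge schedule.
s3 = boto3.client('s3')
table = boto3.resource('dynamodb').Table('seen_buckets')

def handler(event, context):
    for bucket in s3.list_buckets()['Buckets']:
        try:
            table.put_item(
                Item={
                    'bucket_name': bucket['Name'],
                    'created': bucket['CreationDate'].isoformat(),
                },
                ConditionExpression='attribute_not_exists(bucket_name)',
            )
        except ClientError as e:
            if e.response['Error']['Code'] != 'ConditionalCheckFailedException':
                raise  # already-recorded buckets are simply skipped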
Can someone guide me on how to set up an event notification for an object-level permission change? Currently, notifications are available for read, write, delete, etc.
But I am looking to set up an email trigger if someone changes the access permissions on an S3 object inside a bucket.
There are two ways to deal with this kind of concern:
Proactive: write IAM policies that prevent users from putting objects with public access
Reactive: use CloudWatch Events to detect issues and respond to them (see blog post)
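For the reactive option, here's a sketch of wiring it up with boto3: the rule matches CloudTrail records for PutObjectAcl and forwards them to an SNS topic with an email subscription. The bucket name and topic ARN are placeholders, and this assumes CloudTrail object-level (data event) logging is enabled for the bucket and that the topic's policy allows CloudWatch Events to publish to it:

import json
import boto3

events = boto3.client('events')
sns_topic_arn = 'arn:aws:sns:us-east-1:123456789012:s3-acl-change-alerts'  # placeholder

# Match CloudTrail records for object ACL changes on a specific bucket.
pattern = {
    'source': ['aws.s3'],
    'detail-type': ['AWS API Call via CloudTrail'],
    'detail': {
        'eventName': ['PutObjectAcl'],
        'requestParameters': {'bucketName': ['my-bucket']},  # placeholder bucket
    },
}

events.put_rule(Name='s3-object-acl-changes', EventPattern=json.dumps(pattern))
events.put_targets(
    Rule='s3-object-acl-changes',
    Targets=[{'Id': 'notify-email', 'Arn': sns_topic_arn}],
)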
I am using boto to interact with S3 buckets, and in some cases I just want to be able to list a publicly-readable bucket without passing my credentials. If I pass credentials, boto actually does not let me list the bucket even though it is publicly visible. Is it possible to connect to S3 and list a bucket without passing credentials?
The docs don't mention it, but after digging into the code I discovered a hidden kwarg that solves my problem:
import boto
conn = boto.connect_s3(anon=True)
Then you can call conn.get_bucket() on any bucket that is publicly readable.
I find that this works without creds if you are just reading the head of the bucket:
import boto3

# options.bucket is the bucket name (e.g. from parsed command-line options)
s3 = boto3.resource('s3')
bucket = s3.Bucket(options.bucket)
s3.meta.client.head_bucket(Bucket=bucket.name)
I use it when I need to ping my bucket for readability.
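If you're on boto3 rather than boto, the closest equivalent to anon=True that I know of is an unsigned client; a minimal sketch (the bucket name is a placeholder, and the bucket must allow public listing):

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# An unsigned client sends no credentials at all, like boto's anon=True.
s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
for obj in s3.list_objects_v2(Bucket='some-public-bucket').get('Contents', []):
    print(obj['Key'])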