Create an AWS Lambda function for buckets added to S3 - amazon-s3

We have a requirement to create an item in a DynamoDB table when a bucket is created, using AWS Lambda. Any help would be appreciated.

If you are creating the S3 bucket from a Lambda function, that same Lambda can add an item to DynamoDB. The Lambda's execution role must have permission to put items into the DynamoDB table.
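A minimal sketch of that idea, assuming the bucket name arrives in the invocation payload and a hypothetical table named Buckets with partition key BucketName:

import boto3

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Buckets')  # hypothetical table, partition key 'BucketName'

def handler(event, context):
    bucket_name = event['bucket_name']  # assumed to be passed in the payload
    # Create the bucket, then record it in DynamoDB.
    # (In regions other than us-east-1 you also need a CreateBucketConfiguration.)
    s3.create_bucket(Bucket=bucket_name)
    table.put_item(Item={'BucketName': bucket_name})
    return {'status': 'created', 'bucket': bucket_name}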

To achieve this, you can use Cloud Custodian with Lambda.
Cloud Custodian provides a way to automatically add a username tag, enable encryption, etc. when an S3 bucket is created. Using that same hook you can add your own custom logic: instead of adding a username tag, use boto3 to write the bucket name to DynamoDB.

As per Amazon: "Enabling notifications is a bucket-level operation; that is, you store notification configuration information in the notification subresource associated with a bucket."
The previous statement quotes the AWS S3 documentation, Configuring Amazon S3 Event Notifications. It implies that the bucket has to exist, and that it is on a specific, already-created bucket that you configure the notification event. In short: you can't produce a notification event on a bucket creation.
I would suggest an alternative solution: a scheduled process that monitors the existing list of S3 buckets and writes a record to DynamoDB when a new bucket is created (a new bucket shows up in the list). Creating, Listing, and Deleting Amazon S3 Buckets illustrates some examples of what can be done using the Java SDK, but the same can be done in other languages as well (see: List Buckets); a sketch follows below.
Note: You can look at the tutorial Schedule AWS Lambda Functions Using CloudWatch Events as one of the possible ways to run a Lambda on a schedule/interval.
Accessing the Bucket object's properties can give you extra information such as the bucket creation date, bucket owner, and more (details of the Bucket class can be found here).
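A minimal Python sketch of that polling approach, assuming the same hypothetical Buckets table as above:

import boto3

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Buckets')  # hypothetical table keyed on 'BucketName'

def handler(event, context):
    # Runs on a CloudWatch Events schedule; records any bucket not yet seen.
    for bucket in s3.list_buckets()['Buckets']:
        try:
            table.put_item(
                Item={
                    'BucketName': bucket['Name'],
                    'CreationDate': bucket['CreationDate'].isoformat(),
                },
                # Only write if we have not recorded this bucket before,
                # so repeated runs are idempotent.
                ConditionExpression='attribute_not_exists(BucketName)',
            )
        except table.meta.client.exceptions.ConditionalCheckFailedException:
            pass  # already recorded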

Related

S3 Bucket From a Different Account Invoking My Lambda Function

The account whose S3 bucket I want to invoke my Lambda function has given me access to assume a role. If I attach that role to the Lambda function's execution role, will this invoke my Lambda function? I don't think this will work, because whenever I have invoked Lambdas from S3 in the past, I needed to add the notification configuration to my S3 bucket. Am I able to do this on a bucket that I do not own?
I am looking to begin testing today and wanted some insight.
Thank you.
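For reference, the notification setup the question refers to looks roughly like this; it has to be run by a principal with s3:PutBucketNotification on the bucket (names below are hypothetical):

import boto3

s3 = boto3.client('s3')
# Wire the bucket's object-created events to a Lambda function.
s3.put_bucket_notification_configuration(
    Bucket='their-bucket',  # hypothetical bucket name
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:111122223333:function:my-fn',
            'Events': ['s3:ObjectCreated:*'],
        }]
    },
)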

How to create an aws_iam_role for extra section of S3 connection

I am trying to set up an IAM role that can be used when creating an S3 connection in my Airflow. It looks like I can add the following JSON blob to the Extra section of the S3 connection page:
{
"aws_iam_role": "aws_iam_role_name",
"region_name": "us-west-2"
}
But I am not sure how to create this aws_iam_role. Ideally I would like to give read/write access to a particular S3 bucket. Do I need to create an IAM user first somehow? I am lost.
I am open to better/easier ways to achieve the same.
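For context, the kind of policy document such a role would carry, scoped to read/write access on a single bucket, could look like this (bucket name is hypothetical):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-airflow-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-airflow-bucket/*"
    }
  ]
}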

s3:GetBucketLocation IAM permission for data transfer API on GCP

I want to use GCP's data transfer API to copy data from S3 to my GCS bucket. The S3 bucket is controlled by our client, and we have zero control over it.
We learned that to use this API, the AWS IAM identity has to have these permissions:
s3:ListBucket
s3:GetObject
s3:GetBucketLocation
https://cloud.google.com/st...
When I asked them, they said the permissions are granted at the prefix level and not the bucket level, since the bucket has data for many clients and not just us. They do not want to grant any permission that might give us access to the whole bucket's data; we should be limited to our prefix.
I want to know: will asking for this permission (s3:GetBucketLocation) at the prefix level give us access to ALL the data present in the bucket? Or does it just allow the transfer API to locate the data?
I did check the AWS documentation, and the closest answer was about the GetBucketLocation API, which states:
"Returns the Region the bucket resides in. You set the bucket's Region using the LocationConstraint request parameter in a CreateBucket request. For more information, see CreateBucket."
So it does seem it only returns the region of the bucket, BUT there is no documentation to be found specific to the permission itself.
The Google documentation does say that this API only needs the region; however, we need to make sure it does not open a way for us to read all the data in the bucket, if that makes sense.
Please let me know if you have any knowledge on this.
Firstly, I don't think you need to get the bucket's region programmatically. If you're interested in a specific bucket, perhaps the client could just tell you its region?
The action-level detail is in the Service Authorization Reference (SAR) for S3, which just says:
Grants permission to return the Region that an Amazon S3 bucket resides in
So there's nothing about object-level (i.e. data) access granted by it.
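To illustrate what the call actually returns, a quick sketch (bucket name is hypothetical):

import boto3

s3 = boto3.client('s3')
# GetBucketLocation returns only the bucket's region, never object data.
resp = s3.get_bucket_location(Bucket='client-bucket')
print(resp['LocationConstraint'])  # e.g. 'us-west-2'; None means us-east-1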

Unauthenticated bucket listing possible with boto?

I am using boto to interact with S3 buckets, and in some cases I just want to be able to list a publicly-readable bucket without passing my credentials. If I pass credentials, boto actually does not let me list the bucket even though it is publicly visible. Is it possible to connect to S3 and list a bucket without passing credentials?
The docs don't mention it, but after digging into the code I discovered a hidden kwarg that solves my problem:
conn = boto.connect_s3(anon=True)
Then you can call conn.get_bucket() on any bucket that is publicly readable.
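If you're on boto3 rather than the original boto, the equivalent appears to be an unsigned client (bucket name is hypothetical, and the listing assumes the bucket is non-empty):

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Send requests without signing them, i.e. anonymously.
s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
for obj in s3.list_objects_v2(Bucket='a-public-bucket')['Contents']:
    print(obj['Key'])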
I find that this works without credentials if you are just reading the head of the bucket:
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket(options.bucket)  # options.bucket holds the target bucket name
s3.meta.client.head_bucket(Bucket=bucket.name)
I use it when I need to ping my bucket for readability.

How to restrict Amazon S3 API access?

Is there a way to create a different identity (access key / secret key) to access Amazon S3 buckets via the REST API, where I can restrict access (read-only, for example)?
The recommended way is to use IAM to create a new user, then apply a policy to that user.
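A minimal sketch of that with boto3 (the user and policy names are hypothetical; the inline policy below grants read-only S3 access):

import boto3, json

iam = boto3.client('iam')

iam.create_user(UserName='s3-readonly')
iam.put_user_policy(
    UserName='s3-readonly',
    PolicyName='s3-read-only',
    PolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['s3:GetObject', 's3:ListBucket'],
            'Resource': '*',
        }],
    }),
)
# The new user's key pair is what you hand out instead of your own keys.
keys = iam.create_access_key(UserName='s3-readonly')['AccessKey']
print(keys['AccessKeyId'], keys['SecretAccessKey'])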
Yes, you can. The S3 API documentation describes the Authentication and Access Control services available to you. You can set up a bucket so that another Amazon S3 account can read but not modify items in the bucket.
Check out the details at http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingAuthAccess.html (follow the link to "Using Query String Authentication"). This is a subdocument to the one Greg posted, and describes how to generate access URLs on the fly.
This uses a hashed form of the private key and allows expiration, so you can give brief access to files in a bucket without allowing unfettered access to the rest of the S3 store.
Constructing the REST URL is quite difficult; it took me about 3 hours of coding to get it right, but it is a very powerful access technique.
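These days the SDKs do the URL construction for you; in boto3, a presigned URL (the modern form of query-string authentication) is one call (bucket and key names are hypothetical):

import boto3

s3 = boto3.client('s3')
# Presigned URL for a single object, valid for one hour.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'report.pdf'},
    ExpiresIn=3600,
)
print(url)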