Sharing Lambda layers across different regions or accounts - amazon-s3

I have a Lambda layer packaged as "function.zip" in both CodeCommit and an S3 bucket. CloudFormation works fine without any issue, since I'm using the "function.zip" saved in S3 and passing its S3 URI as the ContentUri for "AWS::Serverless::LayerVersion". Because I'm using an S3 URI, "function.zip" is available only in one region (Ohio).
I want "function.zip" to be available in every region of the same account. Is it possible to access "function.zip" from all regions via CodeCommit, by pointing the "AWS::Serverless::LayerVersion" ContentUri in the CloudFormation template at CodeCommit rather than an S3 URI? If you could share a sample template and explain how you access the file from CodeCommit, that would be helpful.
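As far as I can tell, the ContentUri of "AWS::Serverless::LayerVersion" accepts only a local path or an S3 URI, not a CodeCommit repository, and the referenced bucket must be in the same region as the stack. A minimal sketch of a common workaround, copying the artifact into one bucket per target region so each regional stack can use a same-region S3 URI (bucket names and regions below are placeholders):

import boto3

SOURCE_BUCKET = "my-layers-us-east-2"  # assumption: the existing Ohio bucket
KEY = "function.zip"
TARGET_REGIONS = ["us-east-1", "eu-west-1"]  # placeholder target regions

for region in TARGET_REGIONS:
    # a client in the destination region performs the cross-region copy
    s3 = boto3.client("s3", region_name=region)
    s3.copy_object(
        Bucket=f"my-layers-{region}",  # assumption: one bucket per region
        Key=KEY,
        CopySource={"Bucket": SOURCE_BUCKET, "Key": KEY},
    )

Each regional stack can then point its ContentUri at that region's copy of "function.zip".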

Related

How can I add a CORS rule to an existing S3 bucket with CDK v2 JavaScript?

I can see there is a method called addCorsRule on the Bucket type for adding a CORS rule to a newly created S3 bucket.
I am trying to achieve the same thing for an existing S3 bucket that was not created by me and lives in another AWS account.
The return type of all the existing-bucket lookup functions, e.g. fromBucketAttributes, is IBucket, which does not provide a method for adding a new CORS rule.
Is there a workaround to achieve this?
Thank you very much.
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3.Bucket.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3.IBucket.html
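CDK can only mutate resources it manages, so a bucket imported with fromBucketAttributes stays read-only from the stack's point of view. One workaround sketch, assuming credentials for a principal in the bucket-owning account that holds s3:PutBucketCORS (bucket name and origin below are placeholders):

import boto3

s3 = boto3.client("s3")  # assumption: credentials for the owning account
s3.put_bucket_cors(
    Bucket="existing-bucket-name",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedMethods": ["GET", "PUT"],
            "AllowedOrigins": ["https://example.com"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)

Inside CDK proper, the equivalent escape hatch would be a custom resource that makes the same API call during deployment.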

What is the S3 hostedZoneId input used for?

Digging through our Pulumi code, I see that it sets the hostedZoneId on the S3 bucket it creates and I don't understand what it's used for.
The bucket holds internal content (Pulumi state files) and is not set as a static website.
The Pulumi docs for the AWS S3 Bucket hostedZoneId only state:
The Route 53 Hosted Zone ID for this bucket's region.
with a link to what appears to be an irrelevant page (looks like a copy-paste error since that link is mentioned earlier on the page).
S3 API docs don't mention the field either.
Terraform's S3 bucket docs, which Pulumi builds on heavily and which are also a good reference for the S3 API in general, expose this as an output, but not as an input attribute.
Does anyone know what this attribute is used for?
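For what it's worth, the value is normally consumed as an output rather than set as an input: a Route 53 alias record pointing at an S3 website endpoint needs the hosted zone ID of the S3 service in the bucket's region. A sketch in Pulumi Python, assuming the classic aws.s3.Bucket resource (zone ID and domain are placeholders):

import pulumi_aws as aws

bucket = aws.s3.Bucket(
    "site",
    website=aws.s3.BucketWebsiteArgs(index_document="index.html"),
)

record = aws.route53.Record(
    "alias",
    zone_id="Z123EXAMPLE",   # placeholder: your own hosted zone
    name="www.example.com",  # placeholder domain
    type="A",
    aliases=[aws.route53.RecordAliasArgs(
        name=bucket.website_domain,
        zone_id=bucket.hosted_zone_id,  # the bucket region's S3 zone ID
        evaluate_target_health=False,
    )],
)

For a bucket that only holds state files and is not a static website, the attribute is effectively unused.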

Attach Existing Cloud Resource (ex: S3 Bucket) to a Pulumi Project

Firstly, I love Pulumi.
We're trying to propose Pulumi as a solution for a distributed architecture, and it is going swimmingly. The uncertainty I have right now is whether it's possible to attach an existing cloud resource to the Pulumi configuration.
An S3 bucket with media already exists. What I'm wondering is whether it is possible to define that S3 bucket in our Pulumi config, or does Pulumi have to be the creator of a cloud resource before it can manage it?
This is possible with the get function of a resource. For an S3 bucket named "tpsReports-4f64efc" and a Lambda function "zipTpsReports-19d51dc", it would look like this:
const aws = require("@pulumi/aws");

const tpsReports = aws.s3.Bucket.get("tpsReports", "tpsReports-4f64efc");
const zipFunc = aws.lambda.Function.get("zipTpsReports", "zipTpsReports-19d51dc");
When you run your Pulumi program, the status of these resources will say read instead of create or update.
If you want to go one step further and adopt an existing resource to be fully managed by Pulumi, this blog post documents the entire process.

Can you tag individual S3 objects in AWS?

It appears as though I can only use tags at the bucket level in S3. That seems to make sense in a way, because you would likely only do billing at that kind of macro level. However, I can see a few use cases for tagging so that different folks get billed for different objects in the same bucket.
Can you tag individual S3 objects?
Object tagging is a new feature, announced in December 2016. From the announcement:
With S3 Object Tagging you can manage and control access for Amazon S3 objects. S3 Object Tags are key-value pairs applied to S3 objects which can be created, updated or deleted at any time during the lifetime of the object. With these, you’ll have the ability to create Identity and Access Management (IAM) policies, setup S3 Lifecycle policies, and customize storage metrics. These object-level tags can then manage transitions between storage classes and expire objects in the background.
See also: S3 » Objects » Object Tagging
At the moment, it doesn't look like you can search by tags, or that object tagging affects billing.
It's not "tagging" for the purpose of AWS-side billing, but you can use object metadata to store whatever data you'd like for an object.
http://docs.amazonwebservices.com/AmazonS3/latest/dev/UsingMetadata.html
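A minimal sketch of writing and reading user-defined metadata with boto3 (bucket, key, and metadata values are placeholders):

import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-bucket",
    Key="reports/q1.txt",
    Body=b"example body",
    Metadata={"owner": "team-a"},  # surfaced as the x-amz-meta-owner header
)
meta = s3.head_object(Bucket="my-bucket", Key="reports/q1.txt")["Metadata"]

Note that user metadata is fixed at write time; changing it later means copying the object over itself with new metadata.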
Now we can add tags to each object. Using the AWS s3api CLI:
aws s3api put-object-tagging --bucket bucket_name --key key_name --tagging 'TagSet=[{Key=type,Value=text1}]'
We can also add tags to objects using the Python API. The following snippet adds tags to all objects in a bucket; pass a single object key instead if you want to tag just one object.
import boto3

session = boto3.Session()  # uses your configured AWS credentials
bucket_name = 'bucketName'
bucket = session.resource('s3').Bucket(bucket_name)

s3 = session.client('s3')
tagging = {'TagSet': [{'Key': 'CONF', 'Value': 'No'}]}
for obj in bucket.objects.all():
    s3.put_object_tagging(
        Bucket=bucket_name,
        Key=obj.key,
        Tagging=tagging,
    )
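To verify the result, the tags can be read back with get_object_tagging (placeholder key below):

response = s3.get_object_tagging(Bucket=bucket_name, Key='some/object/key')
print(response['TagSet'])  # e.g. [{'Key': 'CONF', 'Value': 'No'}]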
According to the documentation, you can only tag buckets:
Cost allocation tagging allows you to label S3 buckets so you can more easily track their cost against projects or other criteria.
This is consistent with what you can see in both the management console and the SDK documentation.
Of course, you could use folder/object metadata to do finer-grained "tagging" on your own, but I think you will find a better solution.
S3 object tags are a feature released on 29 November, 2016 (see the release announcement). Tags can be added to buckets and to individual objects. Object tags are an exciting feature, as you can keep business taxonomy data in them and even control access with them.
Tags can be added from the browser using the new S3 console. To add a tag, assuming you are on the new console: select the item --> More --> Add tag.
To view tags, click on the object in the new console and view its properties.
The AWS S3 CLI does not currently support the tag feature, but the S3 APIs provide a way to add and read tags on an object (see the put-object-tagging and get-object-tagging API references).
I don't think you can tag individual items in S3 the same way you can generally tag resources.
However you can add metadata to items in S3 to identify them. You could then report on items with different types by either:
- Paging through items in the bucket (obviously rather slow) and collating any information you want about them
- Having an external metadata store in a database of your choice, which you could then use to run reports, for example how many items of each type, total size, etc. Of course, anything you want to report on has to be added to the database first
I would definitely be interested in any better solutions though!
Yes, you can tag objects...but not for cost allocation:
Perhaps it is important to draw a distinction between cost allocation tags, and labelling objects with tags. To quote the Amazon documentation: "Cost allocation tags can only be used to label buckets. For information about tags used for labeling objects, see Object Tagging"
Labels, i.e. tags on an object in a bucket, are much like metadata key-value pairs defined by users themselves.

Amazon S3. Maximum object size

Is it possible to set a maximum file (object) size using a bucket policy?
I found here a similar question, but there is no size limitation in its examples.
No, you can't do this with a bucket policy. Check the Element Descriptions page of the S3 documentation for an exhaustive list of the things you can do in a bucket policy.
However, you can specify a content-length-range restriction within a Browser Uploads policy document. This feature is commonly used for giving untrusted users write access to specific keys within an S3 bucket you control (e.g. user-facing media uploads), and it provides the appropriate tools for limiting the location, size, and data types that can be uploaded without needing to expose your S3 credentials.
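A minimal sketch of producing such a policy with boto3's generate_presigned_post, where the content-length-range condition caps the upload size (bucket, key prefix, and limit are placeholders):

import boto3

s3 = boto3.client("s3")
post = s3.generate_presigned_post(
    Bucket="my-upload-bucket",
    Key="uploads/${filename}",  # ${filename} is filled in by the browser form
    Conditions=[["content-length-range", 0, 10 * 1024 * 1024]],  # 0 to 10 MiB
    ExpiresIn=3600,  # policy valid for one hour
)
# post["url"] and post["fields"] go into the HTML upload form; S3 rejects
# any POST whose body size falls outside the declared range.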