What is the S3 hostedZoneId input used for? - amazon-s3

Digging through our Pulumi code, I see that it sets hostedZoneId on the S3 bucket it creates, and I don't understand what it's used for.
The bucket holds internal content (Pulumi state files) and is not set as a static website.
The Pulumi docs for the AWS S3 Bucket hostedZoneId only state:
The Route 53 Hosted Zone ID for this bucket's region.
with a link to what appears to be an irrelevant page (it looks like a copy-paste error, since that link is mentioned earlier on the page).
The S3 API docs don't mention the field either.
Terraform's S3 bucket docs, which Pulumi often builds on and which are also a good reference for the S3 API in general, expose this as an output attribute, but not as an input.
Does anyone know what this attribute is used for?
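
For context: the usual consumer of this value, at least as an output, is a Route 53 alias record pointing at an S3 website endpoint. Below is a minimal sketch with hypothetical names; it doesn't explain why Pulumi accepts it as an input, but it shows where the value normally goes:

import * as aws from "@pulumi/aws";

// Hypothetical website bucket; for S3 website hosting the bucket name
// must match the record name (simplified here).
const bucket = new aws.s3.Bucket("site", {
    website: { indexDocument: "index.html" },
});

const alias = new aws.route53.Record("siteAlias", {
    zoneId: "Z...your-zone-id...",      // your own hosted zone (placeholder)
    name: "www.example.com",
    type: "A",
    aliases: [{
        name: bucket.websiteDomain,     // the region's S3 website endpoint
        zoneId: bucket.hostedZoneId,    // the attribute in question
        evaluateTargetHealth: false,
    }],
});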

Related

How to query Google Cloud logs by scope/storage via the API or Java libs

I have defined log sinks to various storage buckets.
In the GCP Logs Explorer (https://console.cloud.google.com/logs/query) I can set the query scope by project or by specified storage buckets.
How can I achieve the same feature (scoping by specified storages) using the GCP Logging API and/or the Google Java libraries?
Any links to docs?
You can achieve the same feature using the GCP Logging API by using the resourceNames[] query parameter. Note that BUCKET_ID here refers to a log bucket ID, not a storage bucket ID. Refer to this documentation for more information.
In resource-oriented APIs, resources are named entities, and resource names are their identifiers. Each resource must have its own unique resource name. The resource name is made up of the ID of the resource itself, the IDs of any parent resources, and its API service name.
gRPC APIs should use scheme-less URIs for resource names. They generally follow the REST URL conventions and behave much like network file paths. They can be easily mapped to REST URLs: see the Standard Methods section for details.
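
If it helps, here is a sketch of the raw REST call (TypeScript for illustration; the project ID, bucket ID, and token handling are placeholders -- the Java client's ListLogEntriesRequest carries the same resourceNames field):

// Query entries scoped to one log bucket's _AllLogs view via entries:list.
// Assumes an OAuth2 access token, e.g. from `gcloud auth print-access-token`.
const accessToken = process.env.GCP_ACCESS_TOKEN ?? "";

async function queryLogBucket(): Promise<void> {
    const res = await fetch("https://logging.googleapis.com/v2/entries:list", {
        method: "POST",
        headers: {
            Authorization: `Bearer ${accessToken}`,
            "Content-Type": "application/json",
        },
        body: JSON.stringify({
            resourceNames: [
                "projects/my-project/locations/global/buckets/my-log-bucket/views/_AllLogs",
            ],
            filter: 'severity >= "ERROR"',
            pageSize: 50,
        }),
    });
    const { entries } = await res.json();
    console.log(entries);
}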

Sharing Lambda layers across different Regions or different accounts

I have a Lambda layer named "function.zip" in CodeCommit and in an S3 bucket. CloudFormation works fine without any issue, since I'm using the function.zip saved in S3 and using its S3 URI as the content URI in the CFT for "AWS::Serverless::LayerVersion". Since I am using the S3 URI, "function.zip" is available only in one Region (Ohio).
I want this function.zip to be available in all the Regions in the same account. Would it be possible to access "function.zip" from all the Regions using CodeCommit, by setting the "AWS::Serverless::LayerVersion" content URI in CloudFormation to CodeCommit rather than an S3 URI? If you could share a sample template and explain how you access it by saving it in CodeCommit, that would be helpful.
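
For what it's worth, layer versions (and the buckets behind their content URIs) are Regional, so one common workaround -- a sketch with hypothetical bucket names, not a confirmed CodeCommit-based solution -- is to copy the artifact into a bucket in each target Region and have each Region's stack reference its local copy:

import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

// Copy function.zip from the Ohio bucket into a bucket in each target
// Region (bucket names here are hypothetical).
const sourceBucket = "my-layers-us-east-2";
const key = "function.zip";
const targetRegions = ["us-east-1", "us-west-2", "eu-west-1"];

async function replicateLayer(): Promise<void> {
    for (const region of targetRegions) {
        const s3 = new S3Client({ region });
        await s3.send(new CopyObjectCommand({
            Bucket: `my-layers-${region}`,  // destination bucket in that Region
            Key: key,
            CopySource: `${sourceBucket}/${key}`,
        }));
    }
}

replicateLayer().catch(console.error);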

Attach Existing Cloud Resource (ex: S3 Bucket) to a Pulumi Project

Firstly, I love Pulumi.
We're trying to propose Pulumi as a solution for a distributed architecture, and it is going swimmingly. The uncertainty I have right now is whether it's possible to attach an existing cloud resource to the Pulumi configuration.
An S3 bucket with media already exists; what I'm wondering is whether it's possible to define that S3 bucket in our Pulumi config, or does Pulumi have to be the creator of a cloud resource before it can manage it?
This is possible with a resource's get function. For an S3 bucket named "tpsReports-4f64efc" and a Lambda function named "zipTpsReports-19d51dc", it would look like this:
import * as aws from "@pulumi/aws";
const tpsReports = aws.s3.Bucket.get("tpsReports", "tpsReports-4f64efc"); // look up the existing bucket by its ID
const zipFunc = aws.lambda.Function.get("zipTpsReports", "zipTpsReports-19d51dc"); // look up the existing function
When you run your Pulumi program, the status of these resources will say read instead of create or update.
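Continuing the snippet above, the looked-up resources behave like any other Pulumi resources, so their outputs can be consumed or exported (a small illustrative follow-on):

// Outputs of looked-up resources work like any other resource outputs.
export const tpsReportsArn = tpsReports.arn;        // ARN of the existing bucket
export const zipFuncInvokeArn = zipFunc.invokeArn;  // invoke ARN of the existing function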
If you want to go one step further and adopt an existing resource to be fully managed by Pulumi, this blog post documents the entire process.

AWS S3 bucket image URL

I am using AWS S3 in a Ruby on Rails project to store images for my models. Everything is working fine. I was just wondering whether it is okay/normal that, if someone right-clicks an image, it shows the following URL:
https://mybucketname.s3.amazonaws.com/uploads/photo/picture/100/batman.jpg
Is this a hacking risk, letting people see your bucket name? I guess I was expecting to see a bunch of randomized letters or something. /Noob
Yes, it's normal.
It's not a security risk unless your bucket permissions allow unauthenticated actions -- such as anonymous users uploading or deleting objects (obviously, a malicious user would need the bucket name to overwrite your files) -- or the bucket name itself reveals information you don't want disclosed.
If it makes you feel better, you can always associate a CloudFront distribution with your bucket -- a CloudFront distribution has a default hostname like d1a2b3c4dexample.cloudfront.net, which you can use in your links, or you can associate a vanity hostname with the CloudFront distribution, like assets.example.com, neither of which will reveal the bucket name.
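
In Pulumi terms (to match the other snippets on this page), a minimal sketch of such a distribution might look like the following; the bucket lookup and names are placeholders, and the objects are assumed to be publicly readable:

import * as aws from "@pulumi/aws";

// Front the existing bucket with CloudFront so links never expose its name.
const bucket = aws.s3.Bucket.get("media", "mybucketname");

const cdn = new aws.cloudfront.Distribution("cdn", {
    enabled: true,
    origins: [{
        originId: "s3Origin",
        domainName: bucket.bucketRegionalDomainName,
    }],
    defaultCacheBehavior: {
        targetOriginId: "s3Origin",
        viewerProtocolPolicy: "redirect-to-https",
        allowedMethods: ["GET", "HEAD"],
        cachedMethods: ["GET", "HEAD"],
        forwardedValues: {
            queryString: false,
            cookies: { forward: "none" },
        },
    },
    restrictions: {
        geoRestriction: { restrictionType: "none" },
    },
    viewerCertificate: { cloudfrontDefaultCertificate: true },
});

// Use this hostname in your links instead of the bucket URL.
export const cdnDomain = cdn.domainName;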
But your bucket name itself is not considered sensitive information. It is common practice to link directly to objects in buckets, and such links necessarily include the bucket name.

Amazon S3: maximum object size

Is it possible to set a maximum file (object) size using a bucket policy?
I found a similar question here, but there is no size limitation in the examples.
No, you can't do this with a bucket policy. Check the Element Descriptions page of the S3 documentation for an exhaustive list of the things you can do in a bucket policy.
However, you can specify a content-length-range restriction within a Browser Uploads policy document. This feature is commonly used for giving untrusted users write access to specific keys within an S3 bucket you control (e.g. user-facing media uploads), and it provides the appropriate tools for limiting the location, size, and data types that can be uploaded without needing to expose your S3 credentials.
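
For example, the AWS SDK can generate such a policy document server-side with createPresignedPost; a sketch (bucket name, key prefix, and limits are placeholders):

import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

// Generate a browser-upload form policy that rejects objects over 10 MB.
const client = new S3Client({ region: "us-east-1" });

async function makeUploadForm(): Promise<void> {
    const { url, fields } = await createPresignedPost(client, {
        Bucket: "my-uploads-bucket",
        Key: 'uploads/${filename}',     // S3 POST placeholder, not JS interpolation
        Conditions: [
            ["content-length-range", 0, 10 * 1024 * 1024],  // 0 bytes to 10 MB
        ],
        Expires: 600,                   // policy valid for 10 minutes
    });
    // `url` and `fields` go into the <form> the browser submits.
    console.log(url, fields);
}

makeUploadForm().catch(console.error);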