How can I add a CORS rule to an existing S3 bucket with CDK v2 (JavaScript)? - amazon-s3

I can see there is an addCorsRule method on the Bucket construct for adding a CORS rule to a newly created S3 bucket.
I am trying to achieve something similar for an existing S3 bucket that was not created by me and lives in another AWS account.
The return type of all the functions for looking up existing buckets, e.g. fromBucketAttributes, is IBucket, which does not provide a method for adding a new CORS rule.
Is there a workaround to achieve this?
Thank you very much.
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3.Bucket.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3.IBucket.html
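
Since an IBucket reference is read-only from CDK's point of view (and the bucket lives in another account, outside this stack's control), one possible workaround is to apply the CORS rule outside CDK via the S3 PutBucketCors API, using credentials valid in the owning account. A minimal sketch of the configuration in the dict shape boto3 expects; the bucket name, origin, and methods here are illustrative assumptions, not values from the question:

```python
# CORS configuration in the dict shape expected by boto3's put_bucket_cors.
# Origin, methods, and bucket name below are hypothetical placeholders.
cors_config = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://app.example.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

# With credentials valid in the bucket's own account, this would be applied with:
#   import boto3
#   boto3.client("s3").put_bucket_cors(
#       Bucket="the-existing-bucket", CORSConfiguration=cors_config)
```

Note that this replaces the bucket's entire CORS configuration, so any existing rules would need to be fetched with get_bucket_cors and merged first.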

Related

What is the S3 hostedZoneId input used for?

Digging through our Pulumi code, I see that it sets the hostedZoneId on the S3 bucket it creates and I don't understand what it's used for.
The bucket holds internal content (Pulumi state files) and is not set as a static website.
The Pulumi docs in AWS S3 Bucket hostedZoneId only state:
The Route 53 Hosted Zone ID for this bucket's region.
with a link to what appears to be an irrelevant page (looks like a copy-paste error since that link is mentioned earlier on the page).
S3 API docs don't mention the field either.
Terraform's S3 bucket docs, which Pulumi is largely built on and which are also a good general reference for the S3 API, expose this as an output, but not as an input attribute.
Does anyone know what this attribute is used for?

Sharing lambda layers across different region or different account

I have a Lambda layer named "function.zip" in CodeCommit and in an S3 bucket. CloudFormation works fine without any issue, since I'm using the function.zip saved in S3 and passing its S3 URI as the content URI in the CFT for "AWS::Serverless::LayerVersion". Since I am using the S3 URI, "function.zip" is available only in one region (Ohio).
I want this function.zip to be available in every region of the same account. Would it be possible to access "function.zip" from all regions via CodeCommit, by setting the "AWS::Serverless::LayerVersion" content URI in CloudFormation to CodeCommit rather than an S3 URI? If you could share a sample template and explain how you access the file by keeping it in CodeCommit, that would be helpful.
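
As far as I know, CodeCommit cannot serve as a LayerVersion content URI (only an S3 location works there), so the usual pattern is to copy function.zip into one bucket per target region and point each regional stack at its local copy. A rough sketch of planning that copy step; the "my-layers-<region>" bucket naming convention and the region list are hypothetical:

```python
def regional_copy_plan(source_bucket, key, regions):
    """Build one (region, destination bucket, copy source) task per target region.
    The 'my-layers-<region>' naming scheme is an assumed convention."""
    return [
        (region, f"my-layers-{region}", {"Bucket": source_bucket, "Key": key})
        for region in regions
    ]

plan = regional_copy_plan("my-layers-us-east-2", "function.zip",
                          ["us-east-1", "eu-west-1"])

# Each task would then be executed with boto3, e.g.:
#   import boto3
#   for region, dest_bucket, copy_source in plan:
#       boto3.client("s3", region_name=region).copy_object(
#           Bucket=dest_bucket, Key="function.zip", CopySource=copy_source)
```

Each regional CloudFormation/SAM template can then reference its own region's bucket in the LayerVersion content URI.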

Attach Existing Cloud Resource (ex: S3 Bucket) to a Pulumi Project

Firstly, I love Pulumi.
We're trying to propose Pulumi as a solution for a distributed architecture, and it is going swimmingly. The uncertainty I have right now is whether it's possible to attach an existing cloud resource to the Pulumi configuration.
An S3 bucket with media already exists; what I'm wondering is whether it is possible to define that S3 bucket in our Pulumi config, or does Pulumi have to be the creator of a cloud resource before it can be managed by Pulumi?
This is possible with a resource's get function. In the case of an S3 bucket named "tpsReports-4f64efc" and a Lambda function "zipTpsReports-19d51dc", it would look like this:
const aws = require("@pulumi/aws");
const tpsReports = aws.s3.Bucket.get("tpsReports", "tpsReports-4f64efc");
const zipFunc = aws.lambda.Function.get("zipTpsReports", "zipTpsReports-19d51dc");
When you run your Pulumi program, the status of these resources will say read instead of create or update.
If you want to go one step further and adopt an existing resource to be fully managed by Pulumi, this blog post documents the entire process.

Container storage location with specified provisioning code not available

On creating a bucket using the S3 API, I get the following error:
Container storage location with specified provisioning code not available (Service: Amazon S3; Status Code: 400; Error Code: InvalidLocationConstraint; Request ID: f377cc84-2e76-490b-8161-4407a4b8d9d7), S3 Extended Request ID: null
However, I can create a bucket using the service portal on SoftLayer. Programmatically I can get the latest list of buckets and even delete them, but creation throws the above error.
A recent update has introduced unintended behavior around bucket creation, and we're working to fix it. The system is expecting a Location Constraint of us-standard. Provide the following XML block in the body of the request:
<CreateBucketConfiguration>
<LocationConstraint>us-standard</LocationConstraint>
</CreateBucketConfiguration>
If using an SDK, you'd follow the conventions of the particular library you are using. For example, with boto3, creating a new bucket might look like this:
import boto3
s3 = boto3.client('s3')
bucket = s3.create_bucket(Bucket='my-bucket',
    CreateBucketConfiguration={'LocationConstraint': 'us-standard'})
In Java, it would look like:
s3.createBucket(bucketName, "us-standard");
Here's a link to the docs (we're steadily improving them).
Let me know if that doesn't help, or if you are using another tool or SDK. :)

Support for "Expired Object Delete Marker" with CloudFormation

When creating an S3 bucket with versioning turned on, how do you use CloudFormation to enable the lifecycle option that deletes "object delete markers" when there are no "non-current" object versions remaining?
See Example 8 in the Examples of Lifecycle Configuration documentation that uses ExpiredObjectDeleteMarker:
<LifecycleConfiguration>
<Rule>
...
<Expiration>
<ExpiredObjectDeleteMarker>true</ExpiredObjectDeleteMarker>
</Expiration>
<NoncurrentVersionExpiration>
<NoncurrentDays>30</NoncurrentDays>
</NoncurrentVersionExpiration>
</Rule>
</LifecycleConfiguration>
By setting the ExpiredObjectDeleteMarker element to true in the Expiration action, you direct Amazon S3 to remove expired object delete markers. Amazon S3 will remove an expired object delete marker no sooner than 48 hours after the object expired.
This is achievable via the UI; however, I cannot find any reference to this support in CloudFormation: Amazon S3 Lifecycle Rule
At the time of writing, the CloudFormation syntax doesn't have this option. Instead of reusing the original S3 LifecycleConfiguration names, they renamed the properties for CloudFormation purposes and left out this particular one (and a couple of others).
A much better place to ask this question would be CloudFormation forums, where people working at AWS can actually notice and fix the problem by implementing the missing rule.
Example of another missing rule (AbortIncompleteMultipartUpload) that was asked about in the forums: https://forums.aws.amazon.com/thread.jspa?messageID=746212
As a workaround, one possible solution is to use CloudFormation Custom Resources which can be implemented using a Lambda function. The process is described at http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html
AWS prioritizes the CloudFormation roadmap based on votes on github issues. Vote for adding ExpiredObjectDeleteMarker support here
I've posted my solution as a GitHub gist here.
Basically, this CloudFormation template creates a Lambda function written in Python using Boto and passes it the name of the bucket on which to set/unset ExpiredObjectDeleteMarker. The rest of the bucket lifecycle is NOT controlled by the function.
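
As a rough sketch of what such a custom-resource Lambda does at its core, the rule from Example 8 quoted above translates into the following dict shape for boto3's put_bucket_lifecycle_configuration; the rule ID and empty prefix are illustrative choices, not part of the original gist:

```python
# Lifecycle rule mirroring the Example 8 XML above, in the dict shape
# expected by boto3's put_bucket_lifecycle_configuration.
# The rule ID and whole-bucket prefix are hypothetical.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expired-delete-markers",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Expiration": {"ExpiredObjectDeleteMarker": True},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }
    ]
}

# Inside the custom-resource handler this would be applied with:
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket=bucket_name, LifecycleConfiguration=lifecycle_config)
```

Note that S3 rejects a rule combining ExpiredObjectDeleteMarker with a Days or Date expiration in the same Expiration element, so the marker cleanup must stay in its own rule or be paired only with noncurrent-version actions, as here.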