Generate AWS CloudFormation Template from Pre-Existing S3 bucket

I have an S3 bucket that already has all the correct policies, lifecycles, etc. that I like.
I am converting this pre-existing setup into Terraform infrastructure as code because we are going to be deploying in multiple regions.
I cannot seem to figure out how to export a CloudFormation template of a pre-existing S3 bucket. I can figure it out for generic services, but not for a specific S3 bucket.
Can this be done?
E.g., I would like to do something like what is described here, but for an S3 bucket.
Otherwise, I can try my best to look at the GUI and see if I'm capturing all the little details, but I'd rather not.
(I am not asking about the contents of the bucket, of course, just the configuration of the bucket. I.e., how would I recreate the same bucket, but using Terraform -- or CloudFormation -- instead?)
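For reference, rather than reading every setting off the console, the existing bucket's configuration can be dumped programmatically and transcribed into Terraform or CloudFormation by hand. A minimal sketch with boto3, assuming a placeholder bucket name:

```python
# Hedged sketch: dump an existing bucket's configuration facets with boto3 so they
# can be transcribed into Terraform/CloudFormation. "my-existing-bucket" is a placeholder.
import json
import boto3
from botocore.exceptions import ClientError

BUCKET = "my-existing-bucket"  # placeholder name
s3 = boto3.client("s3")

# Each call returns one facet of the bucket's configuration; some raise ClientError
# if that facet was never configured, so those are reported and skipped.
calls = {
    "policy": lambda: s3.get_bucket_policy(Bucket=BUCKET),
    "lifecycle": lambda: s3.get_bucket_lifecycle_configuration(Bucket=BUCKET),
    "versioning": lambda: s3.get_bucket_versioning(Bucket=BUCKET),
    "encryption": lambda: s3.get_bucket_encryption(Bucket=BUCKET),
    "public_access_block": lambda: s3.get_public_access_block(Bucket=BUCKET),
    "tags": lambda: s3.get_bucket_tagging(Bucket=BUCKET),
}

for name, call in calls.items():
    try:
        resp = call()
        resp.pop("ResponseMetadata", None)
        print(f"--- {name} ---")
        print(json.dumps(resp, indent=2, default=str))
    except ClientError as err:
        print(f"--- {name}: not configured ({err.response['Error']['Code']}) ---")
```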

Related

Automating S3 bucket creation to store tf.state file prior to terraform execution

Is there a solution that generates an S3 bucket for storing the tf.state file prior to Terraform execution?
Meaning, I don't want to create an S3 bucket in the console and specify it in backend.tf; I want the S3 bucket to be created automatically and the tf.state file stored in this automatically created bucket.
Does anyone have any experience with this? Is this even possible?
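For context, one hedged sketch of such a bootstrap step, assuming placeholder bucket and region names: a small script run before terraform init that creates the state bucket and enables versioning, so backend.tf can point at a bucket that already exists.

```python
# Hedged sketch: bootstrap a Terraform state bucket before "terraform init".
# Bucket name and region are placeholders.
import boto3
from botocore.exceptions import ClientError

REGION = "eu-west-1"                  # placeholder
BUCKET = "my-terraform-state-bucket"  # placeholder; must be globally unique

s3 = boto3.client("s3", region_name=REGION)

try:
    # Outside us-east-1, create_bucket needs an explicit LocationConstraint.
    s3.create_bucket(
        Bucket=BUCKET,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )
except ClientError as err:
    if err.response["Error"]["Code"] != "BucketAlreadyOwnedByYou":
        raise  # any other error is a real failure

# Versioning protects the state file against accidental overwrites/deletes.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)
print(f"State bucket {BUCKET} ready; reference it in backend.tf and run terraform init.")
```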

How to dynamically change the "S3 object key" in AWS CodePipeline when source is S3 bucket

I am trying to use an S3 bucket as the source for CodePipeline. Each time we trigger the Jenkins pipeline, we want to save the source with a version like "1.0.1" or "1.0.2" in the S3 bucket. But since the "S3 object key" is not dynamic, we can't build an artifact based on the version numbers generated dynamically by Jenkins. Is there a way to make the "S3 object key" dynamic and take its value from the Jenkins pipeline when CodePipeline is triggered?
Not possible natively, but you can do it by writing your own Lambda function. It requires Lambda because of a CodePipeline restriction: you have to specify a fixed object key name when setting up the pipeline.
So, let's say you have 2 pipelines, CircleCI (CCI) and CodePipeline (CP). CCI generates some files and pushes them to your S3 bucket (S3-A). Now, you want CP to pick up the latest zip file as its source. But since the latest zip file will have a different name each time (1.0.1 or 1.0.2), you can't do that dynamically.
So, on that S3 bucket (S3-A), you can enable an S3 event notification trigger pointing at your custom Lambda function. Whenever a new object is uploaded to S3-A, your Lambda function is triggered; it fetches the newly uploaded object from S3-A, zips/unzips it as needed, and pushes it to another S3 bucket (S3-B) under a fixed name like file.zip, which is what you configure as the source for CP. As soon as a new file.zip lands in S3-B, your CP is triggered automatically.
PS: You'll have to write your own Lambda function that performs all of the operations above: zipping/unzipping the newly uploaded object from S3-A, uploading it to S3-B, and so on.
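A minimal sketch of such a Lambda, assuming placeholder bucket/key names and skipping the zip/unzip step for brevity: it is triggered by the S3 event on S3-A and copies the new object into S3-B under the fixed key CodePipeline expects.

```python
# Hedged sketch of the Lambda described above. Bucket and key names are placeholders;
# the zip/unzip handling mentioned in the answer is omitted for brevity.
import urllib.parse
import boto3

s3 = boto3.client("s3")

DEST_BUCKET = "s3-b-pipeline-source"  # placeholder for S3-B
DEST_KEY = "file.zip"                 # fixed key CodePipeline is configured with

def handler(event, context):
    # The S3 event notification carries the bucket and key of each uploaded object.
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        src_key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Copy the versioned upload (e.g. 1.0.1.zip) to the fixed key in S3-B,
        # which in turn triggers CodePipeline.
        s3.copy_object(
            Bucket=DEST_BUCKET,
            Key=DEST_KEY,
            CopySource={"Bucket": src_bucket, "Key": src_key},
        )
    return {"copied": len(event["Records"]), "destination": f"s3://{DEST_BUCKET}/{DEST_KEY}"}
```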

How to stop AWS ElasticBeanstalk from creating an S3 Bucket or inserting into it?

It created an S3 bucket. If I delete it, it just creates a new one. How can I set it to not create a bucket, or stop it from writing to that bucket?
You cannot prevent AWS Elastic Beanstalk from creating an S3 bucket, because it stores your application and settings as a bundle in that bucket and uses it to execute deployments. That bucket is required for as long as you run/deploy your application with AWS EB. Please be wary of deleting these buckets, as doing so may cause your deployments/applications to crash. You may, however, remove older objects that are no longer in use.
Take a look at this link for detailed information on how EB uses S3 buckets for deployments: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.S3.html

Upload multiple files to AWS S3 bucket without overwriting existing objects

I am very new to AWS technology.
I want to add some files to an existing S3 bucket without overwriting existing objects. I am using Spring Boot for my project.
Can anyone please suggest how to add/upload multiple files without overwriting existing objects?
Amazon S3 supports object versioning at the bucket level: if you upload the same key again, S3 keeps every version of the object in the bucket rather than overwriting it.
Versioning can be enabled using the AWS Console or the CLI. You may want to refer to this link for more info.
You probably already found an answer to this, but if you're using the CDK or the CLI you can specify a destinationKeyPrefix. If you want multiple folders in an S3 bucket, which was my case, the folder name will be your destinationKeyPrefix.
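The same key-prefix idea can also be hand-rolled from any SDK (including the AWS SDK for Java that a Spring Boot app would use). As a minimal sketch in Python, with placeholder bucket and file names, each run uploads under its own prefix so existing objects are never overwritten:

```python
# Hedged sketch: upload several local files under a per-run key prefix so existing
# objects are not overwritten. Bucket name and file list are placeholders.
import time
from pathlib import Path
import boto3

BUCKET = "my-upload-bucket"                     # placeholder
PREFIX = f"uploads/{int(time.time())}"          # unique prefix per run
FILES = [Path("report.pdf"), Path("data.csv")]  # placeholder local files

s3 = boto3.client("s3")

for path in FILES:
    key = f"{PREFIX}/{path.name}"
    s3.upload_file(str(path), BUCKET, key)      # new key each run, so nothing is replaced
    print(f"uploaded {path} -> s3://{BUCKET}/{key}")
```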

Is it possible to keep files in glacier after deletion from s3?

Is it possible to move or copy files from S3 to Glacier (or, if that's not possible, to another cheaper storage class) even though the original S3 files will be deleted? I'm looking for a robust solution for server backups: WHM > S3 > Glacier. I've trialled multiple lifecycle rules, and I can see several questions have been asked around this here, but I can't seem to get the settings right.
WHM sends backups to S3 fine for me. It works by essentially creating a mirror of the on-server backups in S3. My problem is that, because of the way the WHM/S3 integration works, when the on-server backups are deleted at the end of the month, so are the backups in the S3 bucket.
What I'd like to achieve is that, before the files are deleted from S3, they're kept for a specified period, say 6 months. I've tried rules to archive them to Glacier without success, and I think this is because the original files are deleted and so are the Glacier copies?
Is what I'm trying to achieve possible?
Thanks.
There are actually two ways to use Amazon Glacier:
As an Amazon S3 storage class (as you describe), or
By interacting with Amazon Glacier directly
Amazon Glacier has its own API that you can use to upload/download objects to/from a Glacier vault (the equivalent of an S3 bucket). In fact, when you use Amazon S3 to move data into Glacier, S3 is simply calling the standard Glacier API to send the data to Glacier. The difference is that S3 is managing the vault for you, so you won't see the objects listed in your Glacier console.
So, what you might choose to do is:
Create your WHM backups
Send them directly to Glacier
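As a minimal sketch of that direct route, assuming a placeholder vault name and a local backup archive, boto3's Glacier client can create the vault and upload the archive:

```python
# Hedged sketch: send a WHM backup archive straight to a Glacier vault with boto3,
# bypassing the S3 mirror entirely. Vault and file names are placeholders.
import boto3

glacier = boto3.client("glacier")

VAULT = "whm-backups"                     # placeholder vault name
BACKUP_FILE = "backup-2024-01-31.tar.gz"  # placeholder local backup archive

# Create the vault if it doesn't exist (the call is idempotent).
glacier.create_vault(vaultName=VAULT)

with open(BACKUP_FILE, "rb") as body:
    resp = glacier.upload_archive(
        vaultName=VAULT,
        archiveDescription=BACKUP_FILE,
        body=body,
    )

# Keep the archiveId somewhere durable; it's needed to retrieve or delete the archive later.
print(resp["archiveId"])
```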
Versioning
An alternative approach is to use Amazon S3 Versioning. This means that objects deleted from Amazon S3 are not actually deleted. Rather, a delete marker hides the object, but the object is still accessible.
You could then define a lifecycle policy to delete non-current versions (including deleted objects) after a period of time.
See (old article): Amazon S3 Lifecycle Management for Versioned Objects | AWS News Blog
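A minimal sketch of that versioning-based setup, assuming a placeholder bucket name: versioning is enabled on the backup bucket, and a lifecycle rule moves non-current versions (i.e. the backups WHM has "deleted") to Glacier and only expires them after roughly six months.

```python
# Hedged sketch of the versioning-based approach described above.
# Bucket name and day counts are placeholders/assumptions.
import boto3

BUCKET = "whm-backup-bucket"  # placeholder
s3 = boto3.client("s3")

# Versioning must be on so that "deleted" backups survive as non-current versions.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "keep-deleted-backups-6-months",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                # Move non-current (deleted/overwritten) versions to Glacier quickly...
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 1, "StorageClass": "GLACIER"}
                ],
                # ...and only delete them roughly 6 months later.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 180},
            }
        ]
    },
)
```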