How to stop AWS Elastic Beanstalk from creating an S3 bucket or inserting into it?

Elastic Beanstalk created an S3 bucket. If I delete it, it just creates a new one. How can I set it not to create a bucket, or stop it from writing to it?

You cannot prevent AWS Elastic Beanstalk from creating an S3 bucket: it stores your application and settings as a bundle in that bucket and uses it to execute deployments. The bucket is required for as long as you run or deploy your application with Elastic Beanstalk. Be wary of deleting these buckets, as this may cause your deployments/applications to crash. You may, however, remove older objects that are no longer in use.
Take a look at this link for detailed information on how Elastic Beanstalk uses S3 buckets for deployments: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.S3.html
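If your goal is just to keep that bucket tidy, here is a minimal sketch of removing old application versions together with their source bundles, assuming boto3 and hypothetical application/version names:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# List versions of a (hypothetical) application and delete the stale ones.
# DeleteSourceBundle=True also removes the bundle object from the EB bucket.
versions = eb.describe_application_versions(ApplicationName="my-app")
for v in versions["ApplicationVersions"]:
    if v["VersionLabel"] not in ("v-current", "v-previous"):  # keep what you need
        eb.delete_application_version(
            ApplicationName="my-app",
            VersionLabel=v["VersionLabel"],
            DeleteSourceBundle=True,
        )
```

Deleting through the Elastic Beanstalk API, rather than deleting the S3 objects directly, keeps the application's version list consistent with the bucket contents.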

Related

Is it possible to sync an Azure repo with MWAA (Amazon Managed Workflows for Apache Airflow)?

I have set up a private MWAA instance in AWS. It has set up a bucket that stores DAGs in S3.
I've created a private repository in Azure DevOps and have set up a role that can access this bucket.
With Azure Pipelines, is it possible to sync the entire repository to control the DAGs created/modified in that S3 bucket?
I've seen it's possible to create artefacts and push them to the S3 bucket, but what if a DAG is deleted? It will still persist in the S3 bucket and still be available in MWAA.
Any guidance will be appreciated.
If you just want to sync the entire repository to the S3 bucket, you can use the Amazon S3 Upload task in your Azure pipeline.
I'm not sure if that will fully address your problem, though.
If there is any misunderstanding, please feel free to add comments related to your issue.
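The built-in upload task copies files but does not remove objects that no longer exist in the repository. Here is a minimal sketch of a mirror-style sync you could run as a script step instead, assuming boto3 and hypothetical bucket/prefix names, so that deleted DAGs are removed from S3 as well:

```python
import boto3
from pathlib import Path

BUCKET = "my-mwaa-bucket"  # hypothetical
PREFIX = "dags/"           # MWAA DAG folder in the bucket
LOCAL = Path("dags")       # checked-out repo folder

s3 = boto3.client("s3")

# Upload every local DAG file under the prefix, remembering its key.
local_keys = set()
for path in LOCAL.rglob("*.py"):
    key = PREFIX + path.relative_to(LOCAL).as_posix()
    local_keys.add(key)
    s3.upload_file(str(path), BUCKET, key)

# Delete S3 objects that no longer exist in the repository.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        if obj["Key"] not in local_keys:
            s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
```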

Generate AWS CloudFormation Template from Pre-Existing S3 bucket

I have an S3 bucket that already has all the correct policies, lifecycles, etc. that I like.
I am converting this pre-existing setup into Terraform Infrastructure as Code because we are going to be deploying in multiple regions.
I cannot seem to figure out how to export a CloudFormation template of a pre-existing S3 bucket. I can figure it out for generic services, but not for a specific S3 bucket.
Can this be done?
E.g., I would like to do something like what is described here, but for an S3 bucket.
Otherwise, I can try my best to look at the GUI and see if I'm capturing all the little details, but I'd rather not.
(I am not asking about the contents of the bucket, of course, just the configuration of the bucket. I.e., how would I recreate the same bucket using Terraform -- or CloudFormation -- instead?)
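One way to capture the existing configuration without reading it off the console is to dump it via the API. A minimal sketch, assuming boto3 and a hypothetical bucket name; each call raises a ClientError if that feature isn't configured:

```python
import json
import boto3
from botocore.exceptions import ClientError

BUCKET = "my-existing-bucket"  # hypothetical
s3 = boto3.client("s3")

# Dump the parts of the bucket configuration you'd translate into Terraform.
calls = {
    "versioning": lambda: s3.get_bucket_versioning(Bucket=BUCKET),
    "lifecycle": lambda: s3.get_bucket_lifecycle_configuration(Bucket=BUCKET),
    "policy": lambda: s3.get_bucket_policy(Bucket=BUCKET),
    "encryption": lambda: s3.get_bucket_encryption(Bucket=BUCKET),
    "tags": lambda: s3.get_bucket_tagging(Bucket=BUCKET),
    "public_access": lambda: s3.get_public_access_block(Bucket=BUCKET),
}
for name, call in calls.items():
    try:
        result = call()
        result.pop("ResponseMetadata", None)
        print(name, json.dumps(result, default=str, indent=2))
    except ClientError as e:
        print(name, "not configured:", e.response["Error"]["Code"])
```

From there, `terraform import aws_s3_bucket.example my-existing-bucket` followed by `terraform plan` can confirm that a hand-written resource matches the real bucket.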

Is it possible to keep files in Glacier after deletion from S3?

Is it possible to move or copy files from S3 to Glacier (or, failing that, another cheaper storage class) even though the original S3 files will be deleted? I'm looking for a robust solution for server backups going WHM > S3 > Glacier. I've trialled multiple lifecycle rules, and I can see several questions have been asked around this here, but I can't seem to get the settings right.
WHM sends backups to S3 fine for me. It works by essentially creating a mirror of the on-server backups in S3. My problem is that, the way the WHM/S3 integration works, when the on-server backups are deleted at the end of the month, so are the backups in the S3 bucket.
What I'd like is for files to be kept for a specified period, say 6 months, even after they're deleted from S3. I've tried rules to archive them to Glacier without success; I think this is because when the original files are deleted, the Glacier copies are deleted too.
Is what I'm trying to achieve possible?
Thanks.
There are actually two ways to use Amazon Glacier:
As an Amazon S3 storage class (as you describe), or
By interacting with Amazon Glacier directly
Amazon Glacier has its own API that you can use to upload/download objects to/from a Glacier vault (the equivalent of an S3 bucket). In fact, when you use Amazon S3 to move data into Glacier, S3 simply calls the standard Glacier API to send the data to Glacier. The difference is that S3 manages the vault for you, so you won't see the objects listed in your Glacier console.
So, what you might choose to do is:
Create your WHM backups
Send them directly to Glacier (a sketch of this follows below)
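A minimal sketch of that direct-to-Glacier route, assuming boto3 and a hypothetical vault name; Glacier assigns each upload an archive ID that you must record to retrieve the backup later:

```python
import boto3

glacier = boto3.client("glacier")
VAULT = "whm-backups"  # hypothetical vault name

# Create the vault (idempotent if it already exists) and upload one backup.
glacier.create_vault(vaultName=VAULT)
with open("/backup/monthly/account.tar.gz", "rb") as f:
    response = glacier.upload_archive(
        vaultName=VAULT,
        archiveDescription="WHM monthly backup",
        body=f,
    )

# Keep this ID somewhere safe: Glacier archives are retrieved by ID, not name.
print(response["archiveId"])
```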
Versioning
An alternative approach is to use Amazon S3 Versioning. With versioning enabled, objects deleted from Amazon S3 are not actually deleted; instead, a delete marker hides the object, but the object is still accessible.
You could then define a lifecycle policy to delete non-current versions (including deleted objects) after a period of time.
See (old article): Amazon S3 Lifecycle Management for Versioned Objects | AWS News Blog
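A minimal sketch of the versioning approach, assuming boto3 and a hypothetical bucket name: non-current versions (which include deleted backups) are archived to Glacier and then expired after roughly six months:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "whm-backup-bucket"  # hypothetical

# Keep deleted/overwritten objects around as non-current versions.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Archive non-current versions to Glacier, then expire them after ~6 months.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "retain-deleted-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
                ],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 180},
            }
        ]
    },
)
```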

S3 Replication Between Regions

I'm looking for a way to replicate between S3 buckets across regions.
The purpose is that if a file is accidentally deleted because of a bug in my application, I can restore it from the other bucket.
Is there any way to do this without uploading the file twice (meaning, not in the application layer)?
Set versioning on your S3 bucket. Once enabled, it keeps every version of each file you upload or update in the bucket, and you can restore any version from the version listing. See Amazon S3 Object Lifecycle Management.
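A minimal sketch of the restore step, assuming boto3 and hypothetical bucket/key names: removing an object's delete marker makes the previous version current again.

```python
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-app-bucket", "data/report.csv"  # hypothetical

# Find the delete marker left behind by the accidental delete...
versions = s3.list_object_versions(Bucket=BUCKET, Prefix=KEY)
for marker in versions.get("DeleteMarkers", []):
    if marker["Key"] == KEY and marker["IsLatest"]:
        # ...and remove it, which "undeletes" the object.
        s3.delete_object(Bucket=BUCKET, Key=KEY, VersionId=marker["VersionId"])
```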

How to filter or clean up an S3 bucket cluttered by log files?

I use S3 and Amazon CloudFront to serve images.
When I go to the Amazon S3 interface, it's hard to find the folder where I put my images, because I need to scroll for 10 minutes past all the buckets it creates every 15 minutes/hour. There are literally thousands.
Is this normal?
Did I get something wrong in the settings of S3, or of the CloudFront distribution I connected to this S3 folder?
What should I do to delete them? It seems I can only delete them one by one.
(Snapshot omitted: the bucket listing continues like this for thousands of log files.)
Those are not buckets, but are actually log files generated by S3 because you enabled logging for your bucket and configured it to save the logs in the same bucket.
If you want to keep logging enabled but make it easier to work with the logs, just use a prefix in the logging configuration or set up logging to use a different bucket.
If you don't need the logs, just disable logging.
See http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html for more details.
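A minimal sketch of both options, assuming boto3 and a hypothetical bucket name: either redirect future logs under a prefix (or to a separate bucket), or turn server access logging off entirely.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-images-bucket"  # hypothetical

# Option 1: keep logging, but write logs under a prefix (or a dedicated
# bucket) so they no longer drown out your images.
s3.put_bucket_logging(
    Bucket=BUCKET,
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": BUCKET,  # or a separate logs bucket
            "TargetPrefix": "logs/",
        }
    },
)

# Option 2: disable server access logging entirely.
s3.put_bucket_logging(Bucket=BUCKET, BucketLoggingStatus={})
```

The existing log objects can then be removed in bulk, e.g. with `aws s3 rm s3://my-images-bucket/ --recursive --exclude "images/*"` (paths hypothetical), rather than one by one in the console.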