I am trying to write an AWS Config rule in CloudFormation: a rule to make sure S3 buckets are encrypted, ALB access logging is enabled, and EBS volumes are encrypted.
Do I need a separate Scope and Source for each of the checks mentioned above, or can I code them all together? Meaning, is something along the lines of the sketch below OK?
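For concreteness, here is a minimal sketch of the one-ConfigRule-per-check version, using what I believe are the relevant AWS managed rules (the rule names, stack name, and identifiers are my own placeholders); the question is whether it has to stay as three separate resources like this or can be collapsed into one:

cat > config-rules.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  S3BucketEncryptionRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-sse-enabled
      Scope:
        ComplianceResourceTypes:
          - AWS::S3::Bucket
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED
  AlbAccessLoggingRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: alb-logging-enabled
      Scope:
        ComplianceResourceTypes:
          - AWS::ElasticLoadBalancingV2::LoadBalancer
      Source:
        Owner: AWS
        SourceIdentifier: ELB_LOGGING_ENABLED
  EbsEncryptionRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: encrypted-volumes
      Scope:
        ComplianceResourceTypes:
          - AWS::EC2::Volume
      Source:
        Owner: AWS
        SourceIdentifier: ENCRYPTED_VOLUMES
EOF
aws cloudformation deploy --template-file config-rules.yaml --stack-name config-rules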
I have an S3 bucket that already has all the correct policies, lifecycles, etc. that I like.
I am converting this pre-existing setup into Terraform infrastructure as code because we are going to be deploying in multiple regions.
I cannot seem to figure out how to export a CloudFormation template of a pre-existing S3 bucket. I can figure it out for generic services, but not for a specific S3 bucket.
Can this be done?
Eg, I would like to do something like what is described here but for an S3 bucket.
Otherwise, I can try my best to look at the GUI and see if I'm capturing all the little details, but I'd rather not.
(I am not asking about the contents of the bucket, of course, just its configuration, i.e. how I would recreate the same bucket using Terraform -- or CloudFormation -- instead.)
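To make the question concrete, this is the sort of thing I am trying to avoid doing by hand for every setting: reading each piece of configuration back via the CLI and re-writing it as a resource, then importing the real bucket into state. A rough sketch, with a placeholder bucket name (my-existing-bucket) and resource name:

# Dump the individual pieces of the existing bucket's configuration.
aws s3api get-bucket-policy --bucket my-existing-bucket
aws s3api get-bucket-lifecycle-configuration --bucket my-existing-bucket
aws s3api get-bucket-versioning --bucket my-existing-bucket
aws s3api get-bucket-encryption --bucket my-existing-bucket
aws s3api get-bucket-acl --bucket my-existing-bucket
aws s3api get-bucket-tagging --bucket my-existing-bucket

# After hand-writing a matching aws_s3_bucket resource, adopt the real bucket into Terraform state.
terraform import aws_s3_bucket.mybucket my-existing-bucket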
Elastic Beanstalk created an S3 bucket. If I delete it, it just creates a new one. How can I configure it not to create a bucket, or revoke its write permissions to the bucket?
You cannot prevent AWS Elastic Beanstalk from creating an S3 bucket: it stores your application and settings as a bundle in that bucket and uses it to execute deployments. The bucket is required for as long as you run or deploy your application with Elastic Beanstalk. Be wary of deleting these buckets, as this may cause your deployments/applications to break. You may, however, remove older objects that are no longer in use.
Take a look at this link for detailed information on how Elastic Beanstalk uses S3 buckets for deployments: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.S3.html
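If the goal is just to trim old deployment artifacts, it is arguably safer to delete old application versions through Elastic Beanstalk itself rather than deleting objects out of the bucket directly, so its bookkeeping stays consistent. A sketch, with placeholder application and version names:

# List the application versions Elastic Beanstalk knows about.
aws elasticbeanstalk describe-application-versions --application-name my-app
# Delete an old version along with its source bundle in the S3 bucket.
aws elasticbeanstalk delete-application-version \
    --application-name my-app \
    --version-label old-version-label \
    --delete-source-bundle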
Our AWS statement came in and we noticed we are being charged twice for the number of requests.
First charge is for Asia Pacific (Tokyo) (ap-northeast-1) and this is straightforward because it's where our bucket is located. But there's another charge against US East (N. Virginia) (us-east-1) with a similar number of requests.
Long story short, it appears this is happening because we're using the aws s3 command and we haven't specified a region either via the --region option or any of the fallback methods.
Typing aws configure list shows region: Value=<not set> Type=None Location=None.
And yet our aws s3 commands succeed, albeit with this seemingly hidden charge. The presumption is that our requests first go to us-east-1, and since there is no bucket there with the name we specified, they are redirected to ap-northeast-1, where they ultimately succeed while being counted (and charged) twice.
The EC2 instance where the aws command is run is itself in ap-northeast-1, if that counts for anything.
So the question is: is the presumption above a reasonable account of what's happening (i.e. is this expected behaviour)? It seems a bit insidious to me, but is there a proper rationale for it?
What you are seeing is correct. The aws s3 command needs to know the region in order to access the S3 bucket.
Since no region has been provided, it makes its request to us-east-1, which is effectively the default; see the AWS S3 region chart, which shows that us-east-1 does not require a location constraint.
If S3 receives a request for a bucket that is not in that region, it returns a PermanentRedirect response with the correct region for the bucket. The AWS CLI handles this transparently and repeats the request against the correct endpoint, which includes the region.
The easiest way to see this in action is to run commands in debug mode:
aws s3 ls ap-northeast-1-bucket --debug
The output will include:
DEBUG - Response body:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to access
must be addressed using the specified endpoint. Please send all future requests to
this endpoint.</Message>
<Endpoint>ap-northeast-1-bucket.s3.ap-northeast-1.amazonaws.com</Endpoint>
<Bucket>ap-northeast-1-bucket</Bucket>
<RequestId>3C4FED2EFFF915E9</RequestId><HostId>...</HostId></Error>
The AWS CLI does not assume the region is the same as that of the calling EC2 instance; this is a long-running source of confusion and a long-standing feature request.
Additional note: not all AWS services will auto-discover the region in this way; they will fail if the region is not set. S3 works because it uses a global namespace, which inherently requires some form of discovery mechanism.
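If you want to avoid the extra us-east-1 round trip (and the doubled request count), pin the region by any of the usual methods, for example:

# Any one of these works; the region shown is the bucket's region.
aws configure set region ap-northeast-1
export AWS_DEFAULT_REGION=ap-northeast-1
aws s3 ls s3://ap-northeast-1-bucket --region ap-northeast-1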
We are facing an error while trying to load a huge zip file from an S3 bucket into Redshift, both from an EC2 instance and from Aginity. What is the real issue here?
As far as we have checked, this could be because of the VPC NACL rules, but we are not sure.
Error:
ERROR: Connection timed out after 50000 milliseconds
I also got this error when Enhanced VPC Routing was enabled; check the routing from your Redshift cluster to S3.
There are several ways to let the Redshift cluster reach S3; see the link below:
https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-routing.html
I solved this error by setting up a NAT gateway for the private subnet used by my Redshift cluster.
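In case it helps, roughly what that looked like from the CLI; the subnet, Elastic IP allocation, and route table IDs below are placeholders, not my real ones:

# Create a NAT gateway in a public subnet (using an allocated Elastic IP).
aws ec2 create-nat-gateway --subnet-id subnet-0publicaaa --allocation-id eipalloc-0bbbb
# Send outbound traffic from the Redshift cluster's (private) route table through it.
aws ec2 create-route --route-table-id rtb-0redshiftccc \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0dddd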
I think you are correct; it might be because of bucket access rules or the secret/access keys.
Here are some pointers to debug it further if the above doesn't work:
Create a small zip file and try again, to rule out a size issue (though I don't think that is the likely cause).
Split the zip file into multiple smaller files and create a manifest file for the load rather than loading a single file (a sketch of a manifest follows).
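A manifest is just a small JSON file listing the parts, uploaded alongside them and referenced by the COPY command's MANIFEST option. A sketch with placeholder bucket and key names:

# Write the manifest listing the split files.
cat > load.manifest <<'EOF'
{
  "entries": [
    {"url": "s3://my-bucket/load/part_01.gz", "mandatory": true},
    {"url": "s3://my-bucket/load/part_02.gz", "mandatory": true}
  ]
}
EOF
# Upload it next to the data files; point COPY at this object with the MANIFEST option.
aws s3 cp load.manifest s3://my-bucket/load/load.manifest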
I hope you will find this useful.
You should create an IAM role that authorizes Amazon Redshift to access other AWS services such as S3 on your behalf. You must associate that role with your Amazon Redshift cluster before you can use it to load or unload data.
Check the link below for setting up the IAM role:
https://docs.aws.amazon.com/redshift/latest/mgmt/copy-unload-iam-role.html
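Once the role exists, attaching it to the cluster is a single call; the cluster identifier and role ARN below are placeholders:

# Associate the IAM role with the Redshift cluster so COPY/UNLOAD can assume it.
aws redshift modify-cluster-iam-roles \
    --cluster-identifier my-cluster \
    --add-iam-roles arn:aws:iam::123456789012:role/myRedshiftCopyRole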
I got this error when the Redshift cluster had Enhanced VPC Routing enabled but no route to S3 in the route table. Adding the S3 gateway endpoint fixed the issue. Link to docs.
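For reference, creating the gateway endpoint looks roughly like this; the VPC ID, route table ID, and region are placeholders:

# Add an S3 gateway endpoint to the route table used by the Redshift cluster's subnet.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0aaaa \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0bbbb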
After making some changes to an AWS-hosted static website, I deleted an AWS S3 bucket through the AWS console. However, the bucket is now orphaned: although it is not listed in the AWS console, I can still reach what is left of it through the CLI and through the URI.
When I try to recreate a www bucket with the same name, the AWS console returns the following error:
Bucket already exists
The bucket with issues has a www prefix, so now I have two different versions (www and non-www) of the same website.
The problem URI is:
www.michaelrieder.com and www.michaelrieder.com.s3-eu-west-1.amazonaws.com
I made many failed attempts to delete the bucket using the aws s3 CLI utility. I tried aws s3 rb --force, aws s3 rm, and any other command I remotely thought might work.
I need to delete and recreate the bucket with exactly the same name so that the www website redirection works correctly, as AWS enforces static website naming conventions strictly.
When I execute the aws s3 CLI command, for example:
aws s3 rb s3://www.michaelrieder.com --force --debug
A typical CLI error message is:
An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I thought it might be a cache-related issue and that the bucket would flush itself after a period of time, but the issue has persisted for over 48 hours.
It seems to be a permissions issue, but I cannot find a way to change the phantom bucket’s permissions, or any method of deleting the bucket or even its individual objects, since I do not have access to the bucket via the AWS console or the aws s3 CLI.
Appreciate any ideas. Please help.