Can one source predefined AWS bucket policies like Lego blocks?

Suppose that we have a requirement that all bucket policies should reject storage requests that don't include encryption information. A clean way would be to define this once as a template of sorts, then import that template into specific bucket policies when needed.
I can't seem to find anything that can do this, either in the AWS access policy language or in Terraform. I would like to do this in Terraform if possible, but any advice would be appreciated.
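The policy language itself has no include mechanism, but the Terraform AWS provider can merge policy documents. A minimal sketch using `source_policy_documents` to pull a shared "deny unencrypted uploads" fragment into a bucket-specific policy (the bucket name, role ARN, and statement contents here are assumptions, not anything from the question):

```hcl
# Reusable "deny unencrypted uploads" fragment, defined once (e.g. in a module).
data "aws_iam_policy_document" "require_encryption" {
  statement {
    sid     = "DenyUnencryptedUploads"
    effect  = "Deny"
    actions = ["s3:PutObject"]

    principals {
      type        = "*"
      identifiers = ["*"]
    }

    # Hypothetical bucket name; substitute your own.
    resources = ["arn:aws:s3:::my-example-bucket/*"]

    # Deny any PutObject that carries no server-side encryption header.
    condition {
      test     = "Null"
      variable = "s3:x-amz-server-side-encryption"
      values   = ["true"]
    }
  }
}

# Bucket-specific policy that "imports" the shared fragment and adds its own statements.
data "aws_iam_policy_document" "bucket_policy" {
  source_policy_documents = [data.aws_iam_policy_document.require_encryption.json]

  statement {
    sid       = "AllowReadForReportingRole"
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::my-example-bucket/*"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::123456789012:role/reporting"] # hypothetical role
    }
  }
}

resource "aws_s3_bucket_policy" "example" {
  bucket = "my-example-bucket"
  policy = data.aws_iam_policy_document.bucket_policy.json
}
```

The shared fragment can live in one place and be referenced from every bucket's policy, which is roughly the Lego-block composition the question asks for.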

Related

How do I add origin access control to S3 object within sceptre template?

I can use the AWS console to follow directions and set up an S3 object behind CloudFront using an "origin access control." There's some trickiness in the ordering of creating and updating the distribution and the bucket policy. I've been unable to figure out how to configure all of this in a YAML file for automated deployment using sceptre. I have looked around. Any help appreciated.
You need to create a few CloudFormation resources: an S3 bucket, a bucket policy, an origin access identity (or origin access control), and a CloudFront distribution.
This blog post will help you.
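Since sceptre deploys plain CloudFormation templates, one way this might look with the newer AWS::CloudFront::OriginAccessControl resource (logical names and the cache policy choice are assumptions). Declaring everything in one template also resolves the ordering trickiness the question mentions, because the bucket policy references the distribution and CloudFormation sequences them accordingly:

```yaml
Resources:
  ContentBucket:
    Type: AWS::S3::Bucket

  ContentOAC:
    Type: AWS::CloudFront::OriginAccessControl
    Properties:
      OriginAccessControlConfig:
        Name: content-bucket-oac            # hypothetical name
        OriginAccessControlOriginType: s3
        SigningBehavior: always
        SigningProtocol: sigv4

  ContentDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        DefaultCacheBehavior:
          TargetOriginId: s3-origin
          ViewerProtocolPolicy: redirect-to-https
          CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6  # AWS managed CachingOptimized policy
        Origins:
          - Id: s3-origin
            DomainName: !GetAtt ContentBucket.RegionalDomainName
            OriginAccessControlId: !GetAtt ContentOAC.Id
            S3OriginConfig:
              OriginAccessIdentity: ""      # left empty when using OAC instead of OAI

  # The bucket policy references the distribution ARN, so CloudFormation
  # creates the distribution first; no manual two-step update is needed.
  ContentBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref ContentBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: cloudfront.amazonaws.com
            Action: s3:GetObject
            Resource: !Sub "${ContentBucket.Arn}/*"
            Condition:
              StringEquals:
                AWS:SourceArn: !Sub "arn:aws:cloudfront::${AWS::AccountId}:distribution/${ContentDistribution}"
```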

How is security done at folder level within a single bucket in aws s3?

I am very new to AWS S3. Recently, we had a requirement to use an AWS S3 bucket for storing big files. I wanted to know: how is security handled at the folder level within a single bucket in S3? Does S3 take care of that? If yes, by what means? I understand that they do encryption and decryption of data, but that does not suffice. We are a service provider, where multiple tenants would be using the same bucket. How can a folder within an AWS bucket be isolated and secured? For one bucket there will be a single access key, but what about a folder in a bucket?
You should use a bucket policy to restrict or allow user access to the folder. You can do this using the S3 console, or you can assign an IAM role to the user. Please take a look at the link below for more details:
https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
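Following the pattern in that blog post, here is a minimal Terraform sketch of a per-tenant IAM policy (the bucket name, tenant prefix, and policy name are assumptions). Each tenant gets a policy scoped to its own key prefix, which acts as the "folder":

```hcl
# Per-tenant isolation inside one shared bucket: the policy lets a tenant
# list and read/write only keys under its own prefix ("folder").
data "aws_iam_policy_document" "tenant_a" {
  statement {
    sid       = "ListOwnPrefixOnly"
    effect    = "Allow"
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::shared-tenant-bucket"]  # hypothetical bucket

    condition {
      test     = "StringLike"
      variable = "s3:prefix"
      values   = ["tenant-a/*"]
    }
  }

  statement {
    sid       = "ReadWriteOwnPrefix"
    effect    = "Allow"
    actions   = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
    resources = ["arn:aws:s3:::shared-tenant-bucket/tenant-a/*"]
  }
}

# Attach this to the tenant's IAM user or role rather than sharing one
# access key for the whole bucket.
resource "aws_iam_policy" "tenant_a" {
  name   = "tenant-a-s3-access"  # hypothetical name
  policy = data.aws_iam_policy_document.tenant_a.json
}
```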

Received S3 bucket security notification email for my AWS account?

Recently I got an email related to my AWS S3 buckets' ACLs, and the email says:
We’re writing to remind you that one or more of your Amazon S3 bucket access control lists (ACLs) or bucket policies are currently configured to allow read or write access from any user on the Internet. The list of buckets with this configuration is below.
By default, S3 bucket ACLs or policies allow only the account owner to read or write contents from the bucket; however, these ACLs or bucket policies can be configured to permit world access. While there are reasons to configure buckets with world access, including public websites or publicly downloadable content, recently, there have been public disclosures of S3 bucket contents that were inadvertently configured to allow world read or write access but were not intended to be publicly available.
We encourage you to promptly review your S3 buckets and their contents to ensure that you are not inadvertently making objects available to users that you don’t intend. Bucket ACLs and policies can be reviewed in the AWS Management Console (http://console.aws.amazon.com ), or using the AWS CLI tools. ACLs permitting access to either “All Users” or “Any Authenticated AWS User” (which includes any AWS account) are effectively granting world access to the related content.
So, my question is what should I do to overcome this?
As the first answer says, yes, these mails are reminders. What you should do is:
Spot the S3 buckets that need to be private.
Check their bucket ACLs, paying attention to public access and listing permissions.
After that, check the bucket policy. Remember that ACLs and bucket policies are evaluated together, and access is granted if either allows it; even if the ACL is restrictive, an Allow in the bucket policy would make every object public.
For best practices, please check this link (page 28 of 74):
https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
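If you manage these buckets with Terraform, a minimal bucket-level sketch for the ones that must stay private (the bucket name is an assumption):

```hcl
# With all four flags enabled, public ACLs are ignored and any attempt to
# attach a public bucket policy is rejected outright.
resource "aws_s3_bucket_public_access_block" "private_bucket" {
  bucket = "my-private-bucket"  # hypothetical bucket name

  block_public_acls       = true
  ignore_public_acls      = true
  block_public_policy     = true
  restrict_public_buckets = true
}
```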
This is a courtesy notice, letting you know that content in Amazon S3 is public. If this is how you want your S3 bucket(s) configured, then there is no need to take action.
If this is not how you wish your buckets to be configured, then you should remove those permissions. (See plenty of online information on how to do this.)
I suspect that many people just blindly copy instructions from various online tutorials and might not realise the impact of their configurations. This email is just letting AWS customers know about their current configuration.

Prevent view of individual file in AWS S3 bucket

I'm currently looking to host an app with an Angular frontend in an AWS S3 bucket, connecting to a PHP backend running on AWS Elastic Beanstalk. I've got it set up and it's working nicely.
However, using S3 to create a static website, anyone can view your code, including the various Angular JS files. This is mostly fine, but I want to create either a file or folder to keep sensitive information in that cannot be viewed by anyone, but can be included/required by all other files. Essentially I want a key that I can attach to all calls to the backend to make sure only authorised requests get through.
I've experimented with various permissions, but all files always seem to be viewable, presumably because the static website hosting bucket policy makes everything public.
Any suggestions appreciated!
Cheers.
The whole idea of static website hosting on S3 is that the content is public; for example, when your app/web is under maintenance, you redirect users to the S3 static page notifying them that maintenance is ongoing.
I am not sure what all you have tried when you refer to "experimented with various permissions"; however, have you tried setting up a bucket policy, or setting up the bucket as a CloudFront origin with signed URLs? This might be a bit tricky considering you want these sensitive files to be included by other files, but the way to hide them will, in my opinion, be either some sort of bucket policy or some sort of signed URL.
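To illustrate the bucket policy route, a minimal Terraform sketch (the bucket name and prefix are assumptions): the website policy allows public reads of everything, while an explicit Deny carves out a private/ prefix.

```hcl
# The website policy allows public reads of everything, then an explicit
# Deny carves out the private/ prefix; in S3 policy evaluation an explicit
# Deny always beats an Allow.
data "aws_iam_policy_document" "site" {
  statement {
    sid       = "PublicReadForWebsite"
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::my-angular-site/*"]  # hypothetical bucket

    principals {
      type        = "*"
      identifiers = ["*"]
    }
  }

  statement {
    sid       = "DenyReadOfPrivatePrefix"
    effect    = "Deny"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::my-angular-site/private/*"]

    principals {
      type        = "*"
      identifiers = ["*"]
    }
  }
}

resource "aws_s3_bucket_policy" "site" {
  bucket = "my-angular-site"
  policy = data.aws_iam_policy_document.site.json
}
```

Note that the denied files then become unreachable to the site's own client-side code as well, which is exactly the trickiness mentioned above; a key that must stay secret from browsers really belongs on the Elastic Beanstalk backend, not in the bucket.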

Create my own error page for Amazon S3

I was wondering if it's possible to create my own error pages for my S3 buckets. I've got CloudFront enabled and I am using my own CNAME to map the S3 bucket to a subdomain of my website. This helps me create tidy links that reference my domain name.
When someone tries to access a file that has perhaps been deleted or the link isn't quite correct, they get the XML S3 error page which is ugly and not very helpful to the user.
Is there a way to override these error pages so I can display a helpful HTML page instead?
If you configure your bucket as a 'website', you can create custom error pages.
For more details see the Amazon announcement of this feature and the AWS developer guide.
There are however some caveats with this approach, a major one being that your objects need to be publicly available.
It also works with CloudFront, but the same public access limitations apply. See https://forums.aws.amazon.com/ann.jspa?annID=921.
If you want, you can try these out right away by configuring your Amazon S3 bucket as a website and making the new Amazon S3 website endpoint a custom origin for your CloudFront distribution. A few notes when you do this. First, you must set your custom origin protocol policy to “http-only.” Second, you’ll need to use a tool that supports CloudFront’s custom origin feature – the AWS Management Console does not at this point offer this feature. Finally, note that when you use Amazon S3’s static website feature, all the content in your S3 bucket must be publicly accessible, so you cannot use CloudFront’s private content feature with that bucket. If you would like to use private content with S3, you need to use the S3 REST endpoint (e.g., s3.amazonaws.com).
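For reference, the "configure your bucket as a website" step could look like this in Terraform (a sketch; the bucket and file names are assumptions):

```hcl
# Website hosting with a custom error page: S3 returns error.html instead
# of the default XML error document when a request fails.
resource "aws_s3_bucket_website_configuration" "site" {
  bucket = "my-site-bucket"  # hypothetical bucket name

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"  # the error page must itself be publicly readable
  }
}
```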