Received S3 bucket security notification email for my AWS account?

I recently received an email about the ACLs on my AWS S3 buckets,
and the email says:
We’re writing to remind you that one or more of your Amazon S3 bucket access control lists (ACLs) or bucket policies are currently configured to allow read or write access from any user on the Internet. The list of buckets with this configuration is below.
By default, S3 bucket ACLs or policies allow only the account owner to read or write contents from the bucket; however, these ACLs or bucket policies can be configured to permit world access. While there are reasons to configure buckets with world access, including public websites or publicly downloadable content, recently, there have been public disclosures of S3 bucket contents that were inadvertently configured to allow world read or write access but were not intended to be publicly available.
We encourage you to promptly review your S3 buckets and their contents to ensure that you are not inadvertently making objects available to users that you don’t intend. Bucket ACLs and policies can be reviewed in the AWS Management Console (http://console.aws.amazon.com ), or using the AWS CLI tools. ACLs permitting access to either “All Users” or “Any Authenticated AWS User” (which includes any AWS account) are effectively granting world access to the related content.
So, my question is: what should I do about this?

As the first answer says, yes, these emails are reminders. What you should do is:
Identify the S3 buckets that need to be private.
Check their bucket ACLs, paying particular attention to public access and listing permissions.
Then check the bucket policy. Remember that ACLs and bucket policies are evaluated together, and an ACL cannot deny access: if the bucket policy allows public access, every object will be public no matter how restrictive the ACL is.
For best practices, see the AWS Security Best Practices whitepaper:
https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
[Page 28 of 74]
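The ACL review in the steps above can be automated. Below is a minimal Python sketch; the `find_public_grants` helper and the sample data are illustrative (not part of any AWS SDK), but the two group URIs are the ones S3 actually uses to represent "All Users" and "Any Authenticated AWS User" in ACL grants, as returned by the GetBucketAcl API:

```python
# Group URIs S3 uses to represent "everyone" and "any AWS account" in ACL grants
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTH_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def find_public_grants(grants):
    """Return (grantee URI, permission) pairs that grant world access.

    `grants` follows the shape of the Grants list returned by the S3
    GetBucketAcl API: each grant has a Grantee and a Permission.
    """
    public = []
    for grant in grants:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in (ALL_USERS, AUTH_USERS):
            public.append((grantee["URI"], grant["Permission"]))
    return public

# Sample data mimicking a GetBucketAcl response for a world-readable bucket
sample_grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"}, "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "READ"},
]

print(find_public_grants(sample_grants))  # only the AllUsers READ grant is flagged
```

Any bucket for which this returns a non-empty list is one the reminder email is talking about.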

This is a courtesy notice, letting you know that content in Amazon S3 is public. If this is how you want your S3 bucket(s) configured, then there is no need to take action.
If this is not how you wish your buckets to be configured, then you should remove those permissions. (See plenty of online information on how to do this.)
I suspect that many people just blindly copy instructions from various online tutorials and might not realise the impact of their configurations. This email is just letting AWS customers know about their current configuration.

Related

How to securely configure s3 for website access

I want to setup an s3 bucket securely but provide public access to website assets such as images, pdfs, documents, etc. There doesn't seem to be an easy way to do this.
I have tried setting up a new bucket which has Block Public Access enabled. I assume this is the best way to secure the bucket but can't enable viewing/downloading of files in this bucket.
I expect to be able to view/download website files from a browser but always get an Access Denied error.
All content in Amazon S3 buckets is private by default.
If you wish to provide public access to content, this can be done in several ways:
At the Bucket level by providing a Bucket Policy: This is ideal for providing access to a whole bucket, or a portion of a bucket.
At the Object level by using an Access Control List (ACL): This allows fine-grained control on an object-by-object basis.
Selectively, by creating a pre-signed URL: This allows your application to determine whether a particular application user should be permitted access.
All three methods allow an object in Amazon S3 to be accessed via a URL. This is totally separate from making API calls to Amazon S3 using AWS credentials, which would allow control at the user level.
Based on your description, it would appear that a Bucket Policy would best meet your needs, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicPermission",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}
This is saying: Allow anyone to get an object from my-bucket
Note that the policy specifies which calls are permitted, so it can allow upload, download, delete, etc. In the above example, it is only allowing GetObject, which means objects can be accessed/downloaded but not uploaded, deleted, etc.
The /* in the Resource allows further control by specifying a path within the bucket, so it would be possible to grant access only to a portion of the bucket.
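The path idea can be sketched in code. This is a small illustrative helper (the function name, bucket name, and prefix are examples, not an AWS API) that builds the policy above but restricts it to a given prefix within the bucket:

```python
import json

def public_read_policy(bucket, prefix=""):
    """Build a bucket policy that allows anyone to GetObject under `prefix`."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicPermission",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            # An empty prefix yields /*, i.e. the whole bucket
            "Resource": [f"arn:aws:s3:::{bucket}/{prefix}*"],
        }],
    }

# Only objects under public/ in my-bucket become readable
policy = public_read_policy("my-bucket", "public/")
print(json.dumps(policy, indent=2))
```

The resulting JSON can be pasted into the Bucket Policy editor in the S3 console.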
When using a Bucket Policy, it is also necessary to deactivate Block Public Access settings to allow the Bucket Policy to be used. This is an extra layer of protection that ensures buckets are not accidentally made publicly accessible.
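Block Public Access is actually four separate boolean settings; to let a bucket policy grant public reads, the two policy-related flags must be off. The JSON below shows the shape accepted by the S3 PutPublicAccessBlock API (the particular values are one possible configuration for this use case, not a general recommendation):

```json
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": false,
  "RestrictPublicBuckets": false
}
```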
If, on the other hand, your actual goal is to keep content private but selectively make it available to application users, then you could use a pre-signed URL. An example is a photo website where people are permitted to view their private photos, but the photos are not publicly accessible.
This would be handled by having users authenticate to the application. Then, when they wish to access a photo, the application would determine whether they are permitted to see the photo. If so, the application would generate a pre-signed URL that grants temporary access to an object. Once the expiry time is passed, the link will no longer work.
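For reference, a pre-signed URL is just a normal object URL with Signature Version 4 authentication parameters appended. The sketch below implements that query-string signing with only the standard library so the mechanism is visible; in a real application you would call an AWS SDK's presign helper instead, and the bucket, key, and credential values here are placeholders:

```python
import hashlib
import hmac
from datetime import datetime, timezone
from urllib.parse import quote

def presign_get(bucket, key, access_key, secret_key,
                region="us-east-1", expires=3600, now=None):
    """Build a pre-signed GET URL using SigV4 query-string signing."""
    now = now or datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"

    # Query parameters must be sorted by name, with values URL-encoded
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(f"{k}={quote(v, safe='')}" for k, v in sorted(params.items()))

    canonical_request = "\n".join([
        "GET",
        "/" + quote(key),
        query,
        f"host:{host}\n",    # canonical headers end with their own newline
        "host",              # signed headers
        "UNSIGNED-PAYLOAD",  # payload hash placeholder for pre-signed URLs
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    # Derive the signing key: HMAC chain over date, region, service, terminator
    def sign(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()

    k = sign(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = sign(k, part)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()

    return f"https://{host}/{quote(key)}?{query}&X-Amz-Signature={signature}"
```

Anyone holding the resulting URL can fetch the object until `X-Amz-Expires` seconds have passed, after which S3 rejects the request.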

Users Unable to Access S3 Image

Originally I had:
a bucket in Singapore, and then I copied this bucket to another region using the AWS CLI.
But the problem is that the images in the new bucket are not accessible via the web.
Any thoughts?
P.S.: I had never set any policy on either bucket before.
By default, all content in an Amazon S3 bucket is private.
You can grant access to Amazon S3 objects in several ways:
Object-level ACLs: You can make individual files public by ticking the Read permission in the S3 console. This applies only to the specific file.
Bucket Policy: This is applied to the bucket, which assigns permissions to the whole bucket or paths within the bucket. For example, make all objects public. (See Example bucket policies)
IAM Policy: You can create a policy and apply it to a specific IAM User or IAM Group. The policy can grant access to specific buckets or paths within buckets, similar to the Bucket Policy.
Pre-Signed URLs: These can be generated by applications to grant time-limited access to objects stored in Amazon S3.
So, if you think that your users should be able to access the files in your bucket, make sure you have granted access via one of the above methods.
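To illustrate the IAM Policy option from the list above, here is a policy that could be attached to an IAM User or Group to grant read access to one path (the bucket name and prefix are examples). Note that, unlike the bucket policy shown elsewhere, an identity-based IAM policy has no Principal element, because it applies to whoever it is attached to:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-bucket/reports/*"
    }
  ]
}
```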

Block S3 access to IPs from Syria, Iran, Sudan etc

Do you know any bucket policy that would allow me to block access to S3 files from specific countries?
Although you can block a specific IP range using an S3 bucket policy, it is impractical to list all the IP ranges assigned to a specific country in the policy, as AWS imposes a 20 KB limit on policy size.
Bucket Policies are Limited to 20 Kilobytes in Size—If you have a large number of objects and users, your bucket policy could reach the 20K size limit.
(from http://docs.aws.amazon.com/AmazonS3/latest/dev/WhenToUseACLvsBucketPolicy.html)
A better solution to your problem would be to configure your S3 bucket to use signed requests (Query String Authentication) as described here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationQueryStringAuth
and have a separate web service that checks the client's IP using a geolocation service and only issues these signed requests to clients that match your criteria.
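The web-service gate can be sketched as follows. The country lookup here is a stub (a real service would query a geolocation database such as MaxMind GeoIP), and the function name is illustrative; `BLOCKED` lists ISO codes for the countries in the question:

```python
BLOCKED = {"SY", "IR", "SD"}  # ISO 3166 codes: Syria, Iran, Sudan

# Stub lookup table; a real service would query a geolocation database
GEO_DB = {
    "203.0.113.7": "SY",
    "198.51.100.2": "US",
}

def may_issue_signed_url(client_ip):
    """Return True if a signed S3 request may be issued to this client."""
    country = GEO_DB.get(client_ip)  # IPs we cannot locate are treated as blocked
    return country is not None and country not in BLOCKED
```

The application only generates a signed request when this check passes; clients never receive direct, unauthenticated access to the objects.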
Amazon CloudFront has since added a Geo Restriction feature. Take a look at http://aws.amazon.com/about-aws/whats-new/2013/12/18/amazon-cloudfront-adds-geo-restriction-feature/

Create my own error page for Amazon S3

I was wondering if it's possible to create my own error pages for my S3 buckets. I've got CloudFront enabled and I am using my own CNAME to assign the S3 to a subdomain for my website. This helps me create tidy links that reference my domain name.
When someone tries to access a file that has perhaps been deleted or the link isn't quite correct, they get the XML S3 error page which is ugly and not very helpful to the user.
Is there a way to override these error pages so I can display a helpful HTML page instead?
If you configure your bucket as a 'website', you can create custom error pages.
For more details see the Amazon announcement of this feature and the AWS developer guide.
There are, however, some caveats with this approach, a major one being that your objects need to be publicly available.
It also works with CloudFront, but the same public-access limitation applies. See https://forums.aws.amazon.com/ann.jspa?annID=921.
If you want, you can try these out right away by configuring your Amazon S3 bucket as a website and making the new Amazon S3 website endpoint a custom origin for your CloudFront distribution. A few notes when you do this. First, you must set your custom origin protocol policy to "http-only." Second, you'll need to use a tool that supports CloudFront's custom origin feature – the AWS Management Console does not at this point offer this feature. Finally, note that when you use Amazon S3's static website feature, all the content in your S3 bucket must be publicly accessible, so you cannot use CloudFront's private content feature with that bucket. If you would like to use private content with S3, you need to use the S3 REST endpoint (e.g., s3.amazonaws.com).
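For reference, the website configuration that enables a custom error page has the following shape; this is what the S3 PutBucketWebsite API (and the console's "Static website hosting" settings) accepts, with the document names here being examples:

```json
{
  "IndexDocument": {"Suffix": "index.html"},
  "ErrorDocument": {"Key": "error.html"}
}
```

With this in place, requests for missing or forbidden objects return your `error.html` instead of the raw XML error.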

What are Best practices for delivering user-generated content sharing service globally from Amazon S3?

What are the issues for using Amazon S3 to store user-uploaded photos and video and delivering these to users around the world. One user's uploads may be viewed by users in any location. Is this the use-case for using Amazon CloudFront?
We really want a Global S3 bucket - why oh why has amazon set up regions!!
cheers
You already have the answer. That's exactly what CloudFront is for.
It's pretty trivial to 'link' CloudFront to your bucket, which means your content is then served from multiple edge locations around the world.
Like S3, you can have public or private distributions, and you can now use the new Identity and Access Management (IAM) service to protect your content too.