How to securely configure S3 for website access

I want to set up an S3 bucket securely but provide public access to website assets such as images, PDFs, documents, etc. There doesn't seem to be an easy way to do this.
I have tried setting up a new bucket which has Block Public Access enabled. I assume this is the best way to secure the bucket but can't enable viewing/downloading of files in this bucket.
I expect to be able to view/download website files from a browser but always get an Access Denied error.

All content in Amazon S3 buckets is private by default.
If you wish to provide public access to content, this can be done in several ways:
At the Bucket level by providing a Bucket Policy: This is ideal for providing access to a whole bucket, or a portion of a bucket.
At the Object level by using an Access Control List (ACL): This allows fine-grained control on an object-by-object basis.
Selectively, by creating a pre-signed URL: This allows your application to determine whether a particular application user should be permitted access.
All three methods allow an object in Amazon S3 to be accessed via a URL. This is totally separate from making API calls to Amazon S3 using AWS credentials, which would allow control at the user level.
Based on your description, it would appear that a Bucket Policy would best meet your needs, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicPermission",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}
This is saying: Allow anyone to get an object from my-bucket
Note that the policy specifies which calls are permitted, so it can allow upload, download, delete, etc. In the above example, it is only allowing GetObject, which means objects can be accessed/downloaded but not uploaded, deleted, etc.
The /* in the Resource allows further control by specifying a path within the bucket, so it would be possible to grant access only to a portion of the bucket.
When using a Bucket Policy, it is also necessary to deactivate Block Public Access settings to allow the Bucket Policy to be used. This is an extra layer of protection that ensures buckets are not accidentally made publicly accessible.
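As a minimal sketch (not an official recipe), the same policy could be applied programmatically with boto3; the bucket name my-bucket is the placeholder from the example above, and only the two policy-related Block Public Access settings are relaxed here:

import json
import boto3

s3 = boto3.client("s3")

# Relax only the settings that would block a public bucket policy.
s3.put_public_access_block(
    Bucket="my-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": False,      # must be off to attach a public bucket policy
        "RestrictPublicBuckets": False,  # must be off for the policy to take effect
    },
)

# Attach the public-read policy shown above.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicPermission",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::my-bucket/*"],
    }],
}
s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))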
If, on the other hand, your actual goal is to keep content private but selectively make it available to application users, then you could use a pre-signed URL. An example is a photo website where people are permitted to view their private photos, but the photos are not publicly accessible.
This would be handled by having users authenticate to the application. Then, when they wish to access a photo, the application would determine whether they are permitted to see the photo. If so, the application would generate a pre-signed URL that grants temporary access to an object. Once the expiry time has passed, the link will no longer work.
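A minimal sketch of that flow with boto3, assuming the application has already authenticated the user and decided they may see the photo (bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

# Generate a temporary link to a private object; it stops working after ExpiresIn seconds.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-private-photos", "Key": "users/123/photo.jpg"},
    ExpiresIn=3600,  # one hour
)
# Return `url` to the authenticated user, e.g. as the src of an <img> tag.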

Related

Google Cloud Storage: Alternative to signed URLs for folders

Our application data storage is backed by Google Cloud Storage (and S3 and Azure Blob Storage). We need to give access to this storage to arbitrary outside tools (upload from local disk using CLI tools, unload from analytical databases like Redshift, Snowflake and others). The specific use case is that users need to upload multiple big files (you can think of it much like m3u8 playlists for streaming video: an m3u8 playlist plus thousands of small video files). The tools and users MAY not be affiliated with Google in any way (they may not have a Google account). We also absolutely need the data transfer to go directly to the storage, outside of our servers.
In S3 we use federation tokens to give access to a part of the S3 bucket.
So the model scenario on AWS S3 is:
the customer requests a data upload via our API
we give the customer S3 credentials that are scoped to s3://customer/project/uploadId, allowing upload of new files (a rough sketch of this flow follows the list)
client uses any tool to upload the data
client uploads s3://customer/project/uploadId/file.manifest, s3://customer/project/uploadId/file.00001, s3://customer/project/uploadId/file.00002, ...
other data in the bucket (be it another uploadId or project) is safe because the given credentials are scoped
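A rough sketch of that credential hand-out with boto3; the bucket name, prefix and session name are placeholders for the customer/project/uploadId layout above:

import json
import boto3

sts = boto3.client("sts")

# Policy that only allows uploading new objects under one uploadId prefix.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": ["arn:aws:s3:::customer-bucket/project/uploadId/*"],
    }],
}

creds = sts.get_federation_token(
    Name="upload-session",            # placeholder session name
    Policy=json.dumps(scoped_policy),
    DurationSeconds=3600,
)["Credentials"]

# creds contains AccessKeyId, SecretAccessKey and SessionToken, usable by any
# S3-capable tool, but only for the scoped prefix above.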
In ABS we use STS token for the same purpose.
GCS does not seem to have anything similar, except for Signed URLs. Signed URLs have a problem, though: they refer to a single file. That would either require us to know in advance how many files will be uploaded (we don't), or the client would need to request a signed URL for each file separately (a strain on our API, and also slow).
ACLs seemed to be a solution, but they are only tied to Google-related identities, and those can't be created on demand and quickly. Service accounts are also an option, but their creation is slow and they are generally discouraged for this use case, IIUC.
Is there a way to create short-lived credentials that are limited to a subset of the GCS bucket?
The ideal scenario would be for the service account we use in the app to generate a short-lived token that only has access to a subset of the bucket, but nothing like that seems to exist.
Unfortunately, no. For retrieving objects, signed URLs need to be for exact objects. You'd need to generate one per object.
Using the * wildcard specifies the subdirectory you are targeting and identifies all objects under it. For example, if you are trying to access objects in Folder1 in your bucket, you would use gs://Bucket/Folder1/*. However, a command such as gsutil signurl -d 120s key.json gs://bucketname/folderName/** will create a signed URL for each of the files under that prefix, not a single URL for the entire folder/subdirectory.
Reason: since subdirectories are just an illusion of folders in a bucket and are actually object names that contain a '/', every file in a subdirectory gets its own signed URL. There is no way to create a single signed URL for a specific subdirectory and make its files temporarily available.
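In practice that means signing every object under the prefix individually, for example with the google-cloud-storage Python client (bucket name, prefix and key file are placeholders):

from datetime import timedelta
from google.cloud import storage

client = storage.Client.from_service_account_json("key.json")

# One signed URL per object under the "folderName/" prefix; there is no
# single URL covering the whole prefix.
urls = {}
for blob in client.list_blobs("bucketname", prefix="folderName/"):
    urls[blob.name] = blob.generate_signed_url(
        version="v4",
        expiration=timedelta(seconds=120),
        method="GET",
    )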
There is an ongoing feature request for this: https://issuetracker.google.com/112042863. Please raise your concern there and watch for further updates.
For now, one way to accomplish this would be to write a small App Engine app that users download from instead of going directly to GCS. It would check authentication according to whatever mechanism you're using and then, if the check passes, generate a signed URL for that resource and redirect the user.
Reference: https://stackoverflow.com/a/40428142/15803365
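A very rough sketch of that redirect approach, here using Flask; the bucket name, route and is_authorized check are placeholders for your own setup, and the runtime credentials are assumed to be able to sign URLs (e.g. a service account key):

from datetime import timedelta
from flask import Flask, abort, redirect, request
from google.cloud import storage

app = Flask(__name__)
client = storage.Client()

def is_authorized(req):
    # Replace with whatever authentication mechanism you actually use.
    return "X-Api-Key" in req.headers

@app.route("/files/<path:object_name>")
def download(object_name):
    if not is_authorized(request):
        abort(403)
    # Sign a short-lived URL for the requested object and redirect to it.
    blob = client.bucket("my-bucket").blob(object_name)
    url = blob.generate_signed_url(
        version="v4", expiration=timedelta(minutes=10), method="GET"
    )
    return redirect(url)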

S3-backed CloudFront and signed URLs

Originally I set up an S3 bucket "bucket.mydomain.com" and used a CNAME in my DNS so I could pull files from there as if it was a subdomain. This worked for http with:
bucket.mydomain.com/image.jpg
or with https like:
s3.amazonaws.com/bucket.mydomain.com/image.jpg
Some files in this bucket were public access but some were "authenticated read" so that I would have to generate a signed URL with expiration in order for them to be read/downloaded.
I wanted to be able to use https without the amazon name in the URL, so I set up a CloudFront distribution with the S3 bucket as the origin. Now I can use https like:
bucket.mydomain.com/image.jpg
The problem I have now is that it seems either all my files in the bucket have to be public read, or they all have to be authenticated read.
How can I force signed URLs to be used for some files, but have other files be public read?
it seems either all my files in the bucket have to be public read, or they all have to be authenticated read
That is -- sort of -- correct, at least in a simple configuration.
CloudFront has a feature called an Origin Access Identity (OAI) that allows it to authenticate requests that it sends to your bucket.
CloudFront also supports controlling viewer access to your resources using CloudFront signed URLs (and signed cookies).
But these two features are independent of each other.
If an OAI is configured, it always sends authentication information to the bucket, regardless of whether the object is private or public.
Similarly, if you enable Restrict Viewer Access for a cache behavior, CloudFront will always require viewer requests to be signed, regardless of whether the object is private or public (in the bucket), because CloudFront doesn't know.
There are a couple of options.
If your content is separated logically by path, the solution is simple: create multiple Cache Behaviors, with Path Patterns to match, like /public/* or /private/* and configure them with individual, appropriate Restrict Viewer Access settings. Whether the object is public in the bucket doesn't matter, since CloudFront will pass-through requests for (e.g.) /public/* without requiring a signed URL if that Cache Behavior does not "Restrict Viewer Access." You can create 25 unique Cache Behavior Path Patterns by default.
If that is not a solution, you could create two CloudFront distributions. One would be without an OAI and without Restrict Viewer Access enabled. This distribution can only fetch public objects. The second distribution would have an OAI and would require signed URLs. You would use this for private objects (it would work for public objects, too -- but they would still need signed URLs). There would be no price difference here, but you might have cross-origin issues to contend with.
Or, you could modify your application to sign all URLs for otherwise public content when HTML is being rendered (or API responses, or whatever the context is for your links).
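A hedged sketch of that application-side signing with botocore's CloudFrontSigner; the key pair ID, private key file and domain are placeholders:

from datetime import datetime, timedelta

import rsa  # pip install rsa (or use the cryptography package instead)
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # Sign with the private key that matches your CloudFront public key.
    with open("private_key.pem", "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner("K_EXAMPLE_KEY_ID", rsa_signer)  # placeholder key ID

signed_url = signer.generate_presigned_url(
    "https://bucket.mydomain.com/image.jpg",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
# Embed signed_url in the rendered HTML / API response instead of the bare URL.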
Or, depending on the architecture of your platform, there are probably other more complex approaches that might make sense, depending on the mix of public and private content and your willingness to add some intelligence at the edge with Lambda@Edge triggers, which can do things like inspect/modify requests in flight, consult external logic and data sources (e.g. look up a session cookie in DynamoDB), intercept errors, and generate redirects.
Michael's description is good. Amazon has also stated (link below) "Signature Version 2 is being deprecated, and the final support for Signature Version 2 will end on June 24, 2019."
https://docs.aws.amazon.com/AmazonS3/latest/dev/auth-request-sig-v2.html

Received S3 bucket security notification email for my AWS account?

Recently I got an email related to my AWS S3 bucket ACLs,
and the email says:
We’re writing to remind you that one or more of your Amazon S3 bucket access control lists (ACLs) or bucket policies are currently configured to allow read or write access from any user on the Internet. The list of buckets with this configuration is below.
By default, S3 bucket ACLs or policies allow only the account owner to read or write contents from the bucket; however, these ACLs or bucket policies can be configured to permit world access. While there are reasons to configure buckets with world access, including public websites or publicly downloadable content, recently, there have been public disclosures of S3 bucket contents that were inadvertently configured to allow world read or write access but were not intended to be publicly available.
We encourage you to promptly review your S3 buckets and their contents to ensure that you are not inadvertently making objects available to users that you don’t intend. Bucket ACLs and policies can be reviewed in the AWS Management Console (http://console.aws.amazon.com ), or using the AWS CLI tools. ACLs permitting access to either “All Users” or “Any Authenticated AWS User” (which includes any AWS account) are effectively granting world access to the related content.
So, my question is what should I do to overcome this?
As the first answer says, yes, these mails are like reminders. What you should do is:
Spot the S3 buckets that need to be private
Check their bucket ACLs. Look at the Public Access & Listing permissions
After that, check the bucket policy. Remember that bucket policies take precedence over ACLs (for example, the ACL may be restrictive, but if the policy allows public access, every object would be public). A short review sketch follows this list.
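A short boto3 sketch of such a review (purely illustrative; it just reports which buckets have public ACL grants or a bucket policy attached):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # ACL grants to the AllUsers / AuthenticatedUsers groups mean world access.
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        g for g in acl["Grants"]
        if g["Grantee"].get("URI", "").endswith(("AllUsers", "AuthenticatedUsers"))
    ]

    # A bucket policy may also grant public access, even if the ACL does not.
    try:
        has_policy = bool(s3.get_bucket_policy(Bucket=name)["Policy"])
    except ClientError:
        has_policy = False  # no bucket policy attached

    print(f"{name}: public ACL grants={len(public_grants)}, bucket policy={has_policy}")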
For best practices, please check this link (page 28 of 74):
https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
This is a courtesy notice, letting you know that content in Amazon S3 is public. If this is how you want your S3 bucket(s) configured, then there is no need to take action.
If this is not how you wish your buckets to be configured, then you should remove those permissions. (See plenty of online information on how to do this.)
I suspect that many people just blindly copy instructions from various online tutorials and might not realise the impact of their configurations. This email is just letting AWS customers know about their current configuration.

Prevent view of individual file in AWS S3 bucket

I'm currently looking to host an app with the Angular frontend in an AWS S3 bucket connecting to a PHP backend using AWS Elastic Beanstalk. I've got it set up and it's working nicely.
However, using S3 to create a static website, anyone can view your code, including the various Angular JS files. This is mostly fine, but I want to create either a file or a folder to keep sensitive information in that cannot be viewed by anyone, yet can be included/required by all other files. Essentially I want a key that I can attach to all calls to the backend to make sure only authorised requests get through.
I've experimented with various permissions, but all files still seem to be viewable by anyone, presumably because the static website hosting bucket policy ensures everything is public.
Any suggestions appreciated!
Cheers.
The whole idea of static website hosting on S3 is that the content is public; for example, during maintenance of your app/web you redirect users to the S3 static page notifying them that maintenance is ongoing.
I am not sure what all you have tried when you refer to "experimented with various permissions"; however, have you tried setting up a bucket policy, or perhaps setting up the bucket as a CloudFront origin and using signed URLs? This might be a bit tricky considering you want these sensitive files to be called by other files. But the way to hide those sensitive files will, in my opinion, be either some sort of bucket policy or some sort of signed URL restriction.
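As a hedged sketch of the bucket-policy route (the bucket name and the private/ prefix are hypothetical): an explicit Deny always overrides an Allow, so objects under that prefix cannot be fetched by anyone over the website endpoint while the rest of the site stays public.

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # website content stays publicly readable
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-site-bucket/*",
        },
        {   # explicit Deny wins, so nothing under private/ can be fetched
            # (note: this also blocks your own API reads of that prefix
            # until the statement is removed)
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-site-bucket/private/*",
        },
    ],
}
s3.put_bucket_policy(Bucket="my-site-bucket", Policy=json.dumps(policy))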

Users Unable to Access S3 Image

Originally I had:
a bucket (Singapore), and then I copied this bucket to another region using the AWS CLI.
But the problem is that the resulting images in the new bucket are not accessible via the web.
Any thoughts?
P.S.: I have never set any policy on either bucket.
By default, all content in an Amazon S3 bucket is private.
You can grant access to Amazon S3 objects in several ways:
Object-level ACLs: You can make individual files public by ticking the Read permission in the S3 console. This applies only to the specific file.
Bucket Policy: This is applied to the bucket, which assigns permissions to the whole bucket or paths within the bucket. For example, make all objects public. (See Example bucket policies)
IAM Policy: You can create a policy and apply it to a specific IAM User or IAM Group. The policy can grant access to specific buckets or paths within buckets, similar to the Bucket Policy.
Pre-Signed URLs: These can be generated by applications to grant time-limited access to objects stored in Amazon S3.
So, if you think that your users should be able to access the files in your bucket, make sure you have granted access via one of the above methods (a small example of the object-ACL option is sketched below).
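For instance, the first option (an object-level ACL) could look like this with boto3; the bucket and key names are placeholders, and it only works if the destination bucket's Object Ownership / Block Public Access settings still permit ACLs:

import boto3

s3 = boto3.client("s3")

# Make one copied image publicly readable in the destination bucket.
s3.put_object_acl(
    Bucket="my-new-bucket",
    Key="images/photo.jpg",
    ACL="public-read",
)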