S3 permissions for hosting public images for a web app

I've started using S3 to host images connected to Rails models. The images are not uploaded by users, so I just use the aws-sdk gem to store them in S3 buckets.
So far I've succeeded in storing the images, but I'm confused about the permissions. Maybe I'm wrong, but most of the documents discussing S3 permissions seem outdated, and I can't find what they refer to.
What I want to do is pretty basic. I just want to host images, and the images themselves are public, so anyone should be able to view them. However, I don't want anyone to be able to browse my bucket and see everything else hosted there. So basically it's just a normal web app that hosts images on S3. How and where can I change the permission settings so it works that way? Currently access is granted only to me, and the images are not viewable by typing the URL into a browser.

For people who are looking for more specific information on how to write a policy that allows anyone to access an S3 bucket, this may be helpful:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
      ]
    }
  ]
}
This and other examples can be found at the link below:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-use-case-2
If you are still having issues, check your Block Public Access settings.
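If you would rather check those settings programmatically, here is a minimal sketch using the aws-sdk-s3 (v3) gem; the bucket name and region are placeholders, and all four flags must be false before a public bucket policy can take effect:

require 'aws-sdk-s3'

client = Aws::S3::Client.new(region: 'us-east-1')

# Fetch the Block Public Access configuration for the bucket
# (this call raises an error if the bucket has no such configuration)
resp = client.get_public_access_block(bucket: 'DOC-EXAMPLE-BUCKET')
config = resp.public_access_block_configuration

puts "BlockPublicAcls:       #{config.block_public_acls}"
puts "IgnorePublicAcls:      #{config.ignore_public_acls}"
puts "BlockPublicPolicy:     #{config.block_public_policy}"
puts "RestrictPublicBuckets: #{config.restrict_public_buckets}"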

Answering my own question:
It seems it's not as simple as marking something "public" and expecting everything to work. Basically, you need to write a policy and apply it to the bucket if you want its contents to be viewable by the public.
Here's the page where you can build your policy: http://awspolicygen.s3.amazonaws.com/policygen.html
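If you prefer to apply the generated policy from Ruby rather than pasting it into the console, here's a minimal sketch using the aws-sdk-s3 (v3) gem (bucket name and region are placeholders):

require 'aws-sdk-s3'
require 'json'

client = Aws::S3::Client.new(region: 'us-east-1')

# Public-read policy for every object in the bucket
policy = {
  'Version' => '2012-10-17',
  'Statement' => [{
    'Sid' => 'PublicRead',
    'Effect' => 'Allow',
    'Principal' => '*',
    'Action' => 's3:GetObject',
    'Resource' => 'arn:aws:s3:::DOC-EXAMPLE-BUCKET/*'
  }]
}

# Attach the policy to the bucket (the policy must be a JSON string)
client.put_bucket_policy(bucket: 'DOC-EXAMPLE-BUCKET', policy: JSON.generate(policy))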

Take a look at the docs, more specifically at the S3Object write method (Class: AWS::S3::S3Object), which allows you to set a number of options for the uploaded file.
When uploading to your S3 bucket you have to set the proper :acl permission, because the default is :private and no public access is granted. Here's a modified snippet I grabbed from GitHub:
# get an instance of the S3 interface using the default configuration
s3 = AWS::S3.new
# create a bucket
bucket = s3.buckets.create('example')
# upload a file with a public-read ACL (the default, :private, grants no public access)
file_name = 'image.png'
basename = File.basename(file_name)
obj = bucket.objects[basename]
obj.write(:file => file_name, :acl => :public_read)
# grab the public URL of the uploaded object
image_public_url = obj.public_url
...
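Note that the snippet above uses the old v1 AWS::S3 interface. If you are on the current aws-sdk-s3 (v3) gem instead, a rough equivalent of the same upload looks like this (bucket name, file name, and region are placeholders):

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(region: 'us-east-1')

# Upload the file with a public-read ACL so browsers can fetch it
obj = s3.bucket('example').object(File.basename('image.png'))
obj.upload_file('image.png', acl: 'public-read')

# Public https URL of the uploaded object
image_public_url = obj.public_url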

Related

Opening pdf files from S3 bucket using a URL link

I wonder if anyone can help. I'm currently working on a project where we have a load of PDF documents in an S3 bucket. What I would like to be able to do is open the link to each PDF automatically; however, the link to each document is:
s3://squad-dataops-analysis-restricted-cdp-us-east-1/FAA_CDs/data/manual_23092022_test_all_pred/out/14032022/ocr_pdfs/14032022_0558.pdf
but this doesn't open straight into a PDF reader, because the link uses the s3:// scheme rather than https://.
Clicking the Object URL link in the object's properties in the S3 console doesn't open the file either.
Is there something I need to change in the S3 bucket policy to get this to work? Any help or guidance would be greatly appreciated.
It seems like you want a public S3 bucket (if fully public is not what you want, see the link towards the end). In order to make all objects public you must:
Navigate to the 'Permissions' tab of your bucket in the S3 console
Disable/uncheck 'Block public access (bucket settings)'
Add a statement to your bucket policy that allows s3:GetObject requests
Your bucket policy should look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    }
  ]
}
After you do this, you should see the 'Publicly accessible' flag on your bucket, and you'll be able to access objects with the Object URL.
This article goes into more depth on object access.
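Once public reads are allowed, the s3:// URI from the question maps directly to an https Object URL. A hedged sketch of that conversion with the aws-sdk-s3 gem (the region is an assumption):

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(region: 'us-east-1')

bucket = 'squad-dataops-analysis-restricted-cdp-us-east-1'
key = 'FAA_CDs/data/manual_23092022_test_all_pred/out/14032022/ocr_pdfs/14032022_0558.pdf'

# s3://BUCKET/KEY becomes https://BUCKET.s3.REGION.amazonaws.com/KEY
puts s3.bucket(bucket).object(key).public_url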

Only allow S3 files on my site

I want to allow access to my S3 files only from my own site. Users should be able to download files from my site only, not from any other site. Please tell me how I can do that.
I tried a bucket policy and CORS configuration, but it doesn't work.
You could restrict the origin using CORS headers (https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html), but this would only block cross-site resources that are directly displayed or processed, not files to be downloaded.
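For reference, a minimal sketch of setting such a CORS rule with the aws-sdk-s3 gem (bucket name, origin, and region are placeholders):

require 'aws-sdk-s3'

client = Aws::S3::Client.new(region: 'us-east-1')

# Allow GET requests only from the example origin
client.put_bucket_cors(
  bucket: 'examplebucket',
  cors_configuration: {
    cors_rules: [{
      allowed_methods: ['GET'],
      allowed_origins: ['https://www.example.com'],
      allowed_headers: ['*'],
      max_age_seconds: 3000
    }]
  }
)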
As a solution, make the S3 bucket private (not public) and have your site generate signed, expiring (temporary) URLs for the S3 resources:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
There are multiple blogs and examples; just search for them.
You could code your Bucket Policy to Restrict Access to a Specific HTTP Referrer:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    }
  ]
}
However, this can be faked so it is not a guaranteed method of controlling access.
If you really wish to control who can access your content, your application should verify the identity of the user (eg username/password) and then generate time-limited Amazon S3 Pre-Signed URLs. This is the way you would normally serve private content, such as personal photos and PDF files.
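For example, a minimal sketch that generates such a time-limited URL with the aws-sdk-s3 gem (bucket name, object key, region, and expiry are placeholders):

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(region: 'us-east-1')

# The URL stops working after 15 minutes (900 seconds)
obj = s3.bucket('examplebucket').object('private/photo.jpg')
puts obj.presigned_url(:get, expires_in: 900)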

How to make gatsbyjs sites in Amazon S3 bucket using Amazon Lambda function?

I want to make an AWS Lambda function that will create new Gatsby.js sites in an Amazon S3 bucket. I am familiar with AWS Lambda, but if anybody knows the steps for this, please let me know.
You basically just create the site in Lambda and then upload the files to the S3 bucket.
The bucket needs to have all the uploaded files set as public, so the following bucket policy needs to be attached to the bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-gatsby-files/*"
    }
  ]
}
A Gatsby site is a Single-Page Application, so all the requests need to be directed through index.html. To enable this, select Static Website Hosting in the bucket Properties. Select Enable website hosting and set index.html to both Index Document and Error Document.
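If you want the Lambda to configure that website hosting programmatically rather than through the console, a sketch with the aws-sdk-s3 gem might look like this (shown in Ruby for consistency with the rest of this page; the region is an assumption):

require 'aws-sdk-s3'

client = Aws::S3::Client.new(region: 'us-east-1')

# Route every request, including errors, through index.html
client.put_bucket_website(
  bucket: 'my-gatsby-files',
  website_configuration: {
    index_document: { suffix: 'index.html' },
    error_document: { key: 'index.html' }
  }
)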

How to setup private files with S3, CloudFront and Carrierwave

Currently I have an S3 Bucket with an access policy that only allows access from a CloudFront Origin Access Identity:
{
  "Version": "2012-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "Grant a CloudFront Origin Identity access to support private content",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
This should prevent any access to the S3 asset which is not through the CloudFront distribution.
However when I input the S3 url into my web browser I can still access the file.
I am using Carrierwave and Fog to upload files to S3 and have config.fog_public = true so I believe what is happening is that Fog is putting a public access setting on the uploaded object.
I tried changing the setting to config.fog_public = false, but that then started returning signed URLs that ignored my asset_host setting (so it provided the signed S3 URL rather than the unsigned CloudFront URL).
I presume I need to keep config.fog_public = true and have an S3 bucket policy that overrides the public access setting that Fog is putting on my objects.
Can anyone advise if that is correct or is there a better approach?
TL;DR: You should set fog_public to false to ensure the files are private regardless of your other settings. Then, instead of calling url, whose behavior differs based on the fog_public setting, call public_url, which should always return asset_host-style URLs.
Details:
config.fog_public = true does two things:
set public readable when the files are created
return public-style urls for reading
whereas config.fog_public = false does two different things:
set public readable to false when files are created
return private/signed-style urls for reading
Unfortunately, you want half of each approach, which the url method within CarrierWave doesn't directly support.
I think you will probably want to leave config.fog_public = false in order to ensure the files are private as you desire, but you may have to get the urls differently in order to have them show up with the seemingly public urls you desire. Normally the usage of carrierwave suggests you should use url to get what you want, and this has branching logic based on the fog_public setting. You can skip over this logic and just get the asset_host url though, by just calling public_url instead.
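As a rough sketch of the difference, with a hypothetical avatar uploader and CDN host:

# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_public = false                           # objects stay private in S3
  config.asset_host = 'https://dxxxxxxxx.cloudfront.net' # hypothetical CloudFront host
end

user.avatar.url              # signed S3 URL, because fog_public is false
user.avatar.file.public_url  # asset_host-style CloudFront URL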
All that said, out of my own curiosity, if the files will be publicly available via CDN anyway, why would they need to be private via S3?
Here you can see how to configure per model (one public, another private): https://stackoverflow.com/a/18800662/5945650

One Amazon S3 Bucket with different Policy Permissions based on File Type

I have an Amazon S3 Bucket that currently takes mp3 and image files. I want all the mp3 to be private and only accessible via signed URLs but the images should be public and accessible without the signed URLs.
Is this even possible? I know I could just set up another S3 bucket to handle this, but I am really trying to avoid doing that. Any help on this would be greatly appreciated.
Yes it is possible:
"Resource": [
"arn:aws:s3:::yourBucketName/*.jpg"
]
OR
"Resource": [
"arn:aws:s3:::yourBucketName/*.mp3"
]
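Putting that together, a complete bucket policy that makes only the images public might look like the following (the bucket name is a placeholder). Objects that don't match the *.jpg pattern, such as the mp3 files, keep the default private access and stay reachable only through signed URLs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadImagesOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::yourBucketName/*.jpg"
    }
  ]
}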
More discussion here:
Amazon AWS S3 IAM Policy based on namespace or tag