AWS S3: 403 when listing objects, but not when creating objects - amazon-s3

I have an AWS IAM policy with two statements, both referring to the same specific path in an S3 bucket. Users should only be able to list/manage files inside that path.
The managing actions work fine (creating/uploading, deleting), but LISTING files (the first statement) returns a 403 for that specific path, anything inside it, and of course anything outside it.
The EKS service is an extremely simple, barebones Quarkus app; it only has the S3 dependency and the essentials. No other service has access using that policy.
First post, so please forgive me if I'm missing any information or breaking question format. I searched around, but none of the related solutions worked for me, so my JSON must contain something that AWS reads differently than I expect.
Thank you in advance.
NOTE: the policy below does not include the statement that works, only the part that doesn't.
{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::mybucket/folderA/subfolder/*"
      ],
      "Sid": "ListObjectsInBucket"
    }
  ],
  "Version": "2012-10-17"
}
I used this documentation as basis (except the console access part): https://docs.amazonaws.cn/en_us/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket-console.html
SOLUTION AFTER CORRECT ANSWER
Using the answer given, I added a Condition to the statement that specifies the path as an s3:prefix, while the Resource now refers only to the bucket itself. The policy below works like a charm now:
{
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucket"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": "folderA/subfolder/*"
        }
      }
    }
  ],
  "Version": "2012-10-17"
}
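To see why the bucket ARN matters here: a list request is issued against the bucket URL, with the path passed only as a query parameter. A small illustrative sketch (bucket name hypothetical) that builds the request URL which s3:ListBucket and s3:prefix govern:

```python
from urllib.parse import urlencode

# Listing "folderA/subfolder/" is a GET on the *bucket* URL with a prefix
# query parameter -- no object key appears in the request path at all.
# That is why s3:ListBucket must target arn:aws:s3:::mybucket, with the
# path expressed as an s3:prefix condition rather than in the Resource.
bucket_url = "https://mybucket.s3.amazonaws.com/"
query = urlencode({"list-type": "2", "prefix": "folderA/subfolder/"})
print(bucket_url + "?" + query)
```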

You are not allowing access to the bucket itself, but that is required to be able to list the bucket's contents.
folderA/subfolder/ etc. is actually part of an object key. The GUI may show it as a folder, but in reality everything is flat: objects live directly in the bucket as key/value pairs, the value being the content.
If you look carefully at the link you posted, you will see that there is a permission on the bucket itself, above the one for the objects:
{
  "Sid": "ListObjectsInBucket",
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": ["arn:aws:s3:::bucket-name"]
}
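To make the flat-keyspace point concrete, here is a toy sketch (a plain dict standing in for a bucket; key names are illustrative):

```python
# A bucket is a flat key/value store; "folders" exist only in the key names.
bucket = {
    "folderA/subfolder/a.txt": b"...",
    "folderA/subfolder/b.txt": b"...",
    "folderB/c.txt": b"...",
}

def list_objects(bucket, prefix):
    """Listing is one operation over the whole bucket, filtered by prefix --
    which is why s3:ListBucket targets the bucket ARN, with the path
    expressed as an s3:prefix condition rather than in the Resource."""
    return sorted(k for k in bucket if k.startswith(prefix))

print(list_objects(bucket, "folderA/subfolder/"))
```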

Related

AWS S3 Permissions: Locking down view to a domain

I'm attempting to lock down viewing of S3 resources - really just images - to my web application's domain. For instance, if someone goes to my site - let's say example.com - and there's a src reference to the image, I want it to be viewable. But if someone were to right-click and open the image directly in a new tab, they shouldn't be able to see it.
There's tons on the web out there, but I just can't seem to find the correct combination of permissions. Most tutorials also don't talk about the "Block Public Access" settings, and I'm not sure how those fit in.
Here's the policy I'm attempting:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow get requests originating from example.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://www.example.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Do not allow requests from anywhere else.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "https://www.example.com/*"
          ]
        }
      }
    }
  ]
}
This doesn't seem to do anything. If "Block Public Access" is on, both are blocked. If it's off, both are shown. That is, even though I have an explicit Deny statement above, going straight to the image in that bucket in the browser works fine.
I can also edit CORS, but I'd still wonder why the Deny statement here wouldn't take care of that by itself. Finally, after applying the policy, I lose a lot of abilities myself, such as setting CORS, even when using the root user account. I could probably do things in a different order to make it work, but I'd like to still be able to manage my permissions after submitting the policy.
Thanks.
Step 1: "Block all public access" must be disabled for the bucket policy to take effect.
Step 2: Organize all your website images into one folder, e.g. "images".
Step 3: Set up a bucket policy as below. It has two statements. Statement 1 denies access to the images folder for all requests except those originating from your domain. Statement 2 allows everything else. Since an explicit Deny overrides an Allow, statement 1 takes precedence and blocks images requested from outside your domain.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Deny get requests not originating from www.example.com and example.com.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:Get*",
      "Resource": "arn:aws:s3:::your-bucket/images/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://www.example.com/"
          ]
        }
      }
    },
    {
      "Sid": "Allow get requests",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:Get*",
      "Resource": "arn:aws:s3:::your-bucket/*"
    }
  ]
}
Step 4: You have to change your front-end code a little: wherever you use image tags, add "referrerpolicy" set to "origin". If you don't set this attribute, the Referer header won't be forwarded to S3, the rule evaluation will fail, and a 403 will occur.
Example: <img src="images/pic_trulli.jpg" alt="Trulli" width="500" height="333" referrerpolicy="origin">
This solution is tested and working. If you also need CORS, you can enable CORS on the S3 bucket as well, but this policy alone is enough for the access restriction.
(Screenshots omitted: the image loads when requested via the domain, and returns 403 when requested via the raw S3 URL.)
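For intuition, the StringNotLike check in the Deny statement can be sketched as follows (a simplification of the real condition evaluation, using shell-style wildcards much as IAM does):

```python
from fnmatch import fnmatch

ALLOWED_REFERERS = ["http://www.example.com/*", "http://www.example.com/"]

def request_denied(referer):
    """Models the StringNotLike deny: the request is denied unless the
    Referer header matches one of the allowed patterns. A missing header
    (referrerpolicy not set, or a direct browser visit) matches nothing,
    so it is denied too -- which is why Step 4 matters."""
    referer = referer or ""
    return not any(fnmatch(referer, pat) for pat in ALLOWED_REFERERS)
```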

Bucket policies allow upload certain file types

I have built a MinIO server to store files with a custom policy set via mc. Here is my policy file:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Resource": [
        "arn:aws:s3:::my_bucket_name/*.jpg"
      ],
      "Sid": "Statement1"
    },
    {
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Resource": [
        "arn:aws:s3:::my_bucket_name/*"
      ],
      "Sid": "Statement2"
    }
  ]
}
The s3:GetObject statement is working, but the s3:PutObject statement is not: I can still upload anything via the MinIO browser. So what do I have to do to allow uploading only .jpg images?
Have you seen Allow only certain file types to be uploaded to my Amazon S3 bucket? It details a slightly more comprehensive method than the above: allowing a given extension to grant users permission, and explicitly disallowing anything that isn't that extension, to strip users with * permissions of the ability to upload anything else.
Without knowing what "not working" means, it's hard to debug. Can you provide the commands you're using and any errors? Off the top of my head, here are the debugging steps I would try:
Is your file really .jpg, not .JPG or .jpeg or .JPEG?
Does changing "arn:aws:s3:::my_bucket_name/*.jpg" to "arn:aws:s3:::my_bucket_name/*" work? If not, what about changing it to "*"?
On another note: I'd be skeptical of file-type restrictions, because file types are only superficially enforced. Anyone who can rename a file can also upload it: .jpg is just an extension and guarantees nothing about the actual content. At best, enforcing the extension is a happy-path guard rail for users; you can't rely on it to protect your system from anything.
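To illustrate, content checks belong server-side; extension checks prove nothing. A minimal sketch of inspecting the JPEG magic number (illustrative only, not a substitute for full validation):

```python
def looks_like_jpeg(data: bytes) -> bool:
    # Every JPEG file begins with the SOI marker FF D8 FF; a renamed
    # executable will fail this check even if its name ends in .jpg.
    return data[:3] == b"\xff\xd8\xff"
```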

Amazon S3: Grant anonymous access from IP (via bucket policy)

I have an Amazon S3 bucket and would like to make it available to scripts on a certain machine, without the need to deploy login credentials. So my plan was to allow anonymous access only from the IP of that machine. I'm quite new to the Amazon cloud, and bucket policies look like the way to go. I added the following policy to my bucket:
{
  "Version": "2008-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::name_of_my_bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "my_ip_1/24",
            "my_ip_2/24"
          ]
        }
      }
    }
  ]
}
But anonymous access still does not work. For testing, I granted access to "Everyone" in the S3 management console. That works fine, but is obviously not what I want. ;-) Any hint what I'm doing wrong and how to get this working?
My use case is some data processing using EC2 and S3, so access control by IP would be much simpler than fiddling around with user accounts. If there's a simpler solution, I'm open to suggestions.
But anonymous access still does not work.
Which operation exactly still does not work? Do you by chance just try to list the objects in the bucket?
Quite often a use case implicitly involves Amazon S3 API calls addressing different resource types besides the Resource explicitly targeted by the policy. Specifically, you'll need to be aware of the difference between Operations on the Service (e.g. ListAllMyBuckets), Operations on Buckets (e.g. ListBucket), and Operations on Objects (e.g. GetObject).
In particular, the Resource specification of your policy currently addresses only the objects within the bucket (arn:aws:s3:::name_of_my_bucket/*), which implies that you cannot list objects in the bucket (you should still be able to put/get/delete objects, though). In order to also allow listing of the objects in the bucket via ListBucket, you would need to amend your policy as follows:
{
  "Version": "2008-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      // ... your existing statement for objects here ...
    },
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::name_of_my_bucket",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "my_ip_1/24",
            "my_ip_2/24"
          ]
        }
      }
    }
  ]
}

amazon s3 video files accessible only from my domain/server?

Now, I know that I cannot stop someone from downloading my videos and sharing them; however, I would prefer that people not copy and paste links directly to my bucket. So, is there a way to make my bucket accessible only from my server/domain making the request?
If it helps, I'm using jwplayer which loads from a xml playlist that has all the links. This playlist definitely can be opened and viewed from anywhere and is where I expect the easy copy and paste comes from.
I don't want to mask the urls because that means my bucket is readable to everyone. There is probably some chance that someone will find the url of my bucket and the name of the files and connect everything together...
This is possible using Bucket Policies, which allow you to define access rights for Amazon S3 resources - there are a couple of Example Cases for Amazon S3 Bucket Policies illustrating the functionality, and amongst these you'll find an example for Restricting Access to Specific IP Addresses as well:
This statement grants permissions to any user to perform any S3 action
on objects in the specified bucket. However, the request must
originate from the range of IP addresses specified in the condition.
Depending on the specifics of your use case, a bucket policy for this might look like so:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "192.168.143.0/24"
        },
        "NotIpAddress": {
          "aws:SourceIp": "192.168.143.188/32"
        }
      }
    }
  ]
}
As shown, the aws:SourceIp value for the IpAddress and NotIpAddress conditions is expressed in CIDR notation, enabling flexibility in composing the desired scope.
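The matching these two conditions perform can be sketched with Python's stdlib ipaddress module (an illustration of the semantics, not how AWS implements it):

```python
import ipaddress

# Mirrors the policy above: allow the /24 range, carve out one host.
ALLOWED = ipaddress.ip_network("192.168.143.0/24")
EXCLUDED = ipaddress.ip_network("192.168.143.188/32")

def request_allowed(source_ip):
    """Both conditions must hold for the statement to match:
    IpAddress (inside the allowed range) and NotIpAddress
    (outside the excluded range)."""
    addr = ipaddress.ip_address(source_ip)
    return addr in ALLOWED and addr not in EXCLUDED
```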
Finally, you might want to check out the recommended AWS Policy Generator, select type S3 Bucket Policy and explore the available Actions and Conditions to compose more targeted policies for your use case eventually - the documentation for Conditions explains this in detail.
The IP address approach helps if your server is going to access your bucket. But JWPlayer runs client-side, so the request goes directly from JWPlayer (the browser) to the S3 bucket URL, not via your server. In this case, a Referer bucket policy will help:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::yourbucketname/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://yoursitename.com/*",
            "http://*.yoursitename.com/*"
          ]
        }
      }
    }
  ]
}
Now S3 will allow the request only if it comes from your site.
You can keep your bucket protected, which it is by default (meaning only you have access to the objects in it). Then you can request files from Amazon S3 from your website and give the URL a time limit during which the user can see it.
// Set the expiry so that users can see the file for 1 minute; then it is protected again.
$url = $s3->get_object_url('your-bucket', 'path/to/file', '1 minute');
This automatically gives you a URL with signature parameters attached that is accessible for only 1 minute. You can use that as your source within your website, and it cannot be copied and pasted into the browser after that minute.
You can read more about this in the AWS SDK for PHP documentation.
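The general mechanism behind such expiring links can be sketched with stdlib HMAC (a simplified illustration; real S3 presigned URLs use AWS Signature Version 4 and are generated by the SDK, not hand-rolled like this):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical signing key, never sent to clients

def make_expiring_url(path, ttl_seconds, now=None):
    """Embed an expiry timestamp and an HMAC over (path, expiry) so the
    URL can't be altered or reused after it expires."""
    expires = (now or int(time.time())) + ttl_seconds
    sig = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"https://example-bucket.s3.amazonaws.com/{path}?Expires={expires}&Signature={sig}"

def check_url(path, expires, sig, now=None):
    """Server-side check: signature must match and the deadline must not have passed."""
    expected = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and (now or int(time.time())) < expires
```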
Restricting Access to a Specific HTTP Referrer
Suppose you have a website with domain name (www.example.com or example.com) with links to photos and videos stored in your Amazon S3 bucket, examplebucket. By default, all the Amazon S3 resources are private, so only the AWS account that created the resources can access them. To allow read access to these objects from your website, you can add a bucket policy that allows s3:GetObject permission with a condition, using the aws:referer key, that the get request must originate from specific webpages. The following policy specifies the StringLike condition with the aws:Referer condition key.
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
For everyone who is stumbling upon this now, please take note that Amazon has changed the JSON format for the bucket policies and now requires each allowed / denied IP or domain to be listed separately. See below for an example.
Either way, I strongly recommend using the AWS Policy Generator to make sure your formatting is correct.
AWS S3 Bucket Policy - Allow Access only from multiple IPs
{
  "Id": "Policy1618636210012",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1618635877058",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketname/folder/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "333.444.555.666"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "Stmt1618636151833",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketname/folder/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "222.333.444.555"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "Stmt1618636203591",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketname/folder/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "111.222.333.444"
        }
      },
      "Principal": "*"
    }
  ]
}

How to remove "delete" permission on Amazon S3

In the Amazon S3 console I only see a permission option for "upload/delete". Is there a way to allow uploading but not deleting?
The permissions you are seeing in the AWS Management Console directly are based on the initial and comparatively simple Access Control Lists (ACL) available for S3, which essentially differentiated READ and WRITE permissions, see Specifying a Permission:
READ - Allows grantee to list the objects in the bucket
WRITE - Allows grantee to create, overwrite, and delete any object in the
bucket
These limitations have been addressed by adding Bucket Policies (permissions applied on the bucket level) and IAM Policies (permissions applied on the user level), and all three can be used together as well (which can become rather complex, as addressed below), see Access Control for the entire picture.
Your use case probably asks for a respective bucket policy, which you can add directly from the S3 console as well. Clicking on Add bucket policy opens the Bucket Policy Editor, which features links to a couple of samples as well as the highly recommended AWS Policy Generator, which allows you to assemble a policy addressing your use case.
For an otherwise locked down bucket, the simplest form might look like so (please ensure to adjust Principal and Resource to your needs):
{
  "Statement": [
    {
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::<bucket_name>/<key_name>",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
Depending on your use case, you can easily compose pretty complex policies by combining various Allow and Deny actions, etc. This can obviously yield inadvertent permissions as well, so proper testing is key as usual; accordingly, please take note of the implications of Using ACLs and Bucket Policies Together and of IAM and Bucket Policies Together.
Finally, you might want to have a look at my answer to Problems specifying a single bucket in a simple AWS user policy as well, which addresses another commonly encountered pitfall with policies.
You can attach a no-delete policy to your S3 bucket. For example, if you don't want an IAM user to perform any delete operation on any bucket or object, you can set something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1480692207000",
      "Effect": "Deny",
      "Action": [
        "s3:DeleteBucket",
        "s3:DeleteBucketPolicy",
        "s3:DeleteBucketWebsite",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
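The reason a standalone Deny like this is safe to layer on top of broader Allow statements is IAM's evaluation order: an explicit Deny always wins. A toy evaluator showing just that precedence rule (greatly simplified; real evaluation also matches resources, principals, and conditions):

```python
def evaluate(statements, action):
    """Simplified IAM evaluation: explicit Deny beats any Allow,
    and the default is an implicit deny."""
    decision = "Deny"  # implicit deny if nothing matches
    for stmt in statements:
        if action in stmt["Action"]:
            if stmt["Effect"] == "Deny":
                return "Deny"  # explicit deny: short-circuit
            decision = "Allow"
    return decision

policy = [
    {"Effect": "Allow", "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"]},
    {"Effect": "Deny",  "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"]},
]

print(evaluate(policy, "s3:PutObject"))     # Allow
print(evaluate(policy, "s3:DeleteObject"))  # Deny: the explicit deny wins
```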
Also, you can check your policy with the policy simulator at https://policysim.aws.amazon.com to verify that your setup is what you expected.
Hope this helps!
This worked perfectly, thanks to Pung Worathiti Manosroi. I combined his policy as below:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:GetObjectAcl",
        "s3:PutObjectAcl",
        "s3:ListBucket",
        "s3:GetBucketAcl",
        "s3:PutBucketAcl",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::mybucketname/*",
      "Condition": {}
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*",
      "Condition": {}
    },
    {
      "Effect": "Deny",
      "Action": [
        "s3:DeleteBucket",
        "s3:DeleteBucketPolicy",
        "s3:DeleteBucketWebsite",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": "arn:aws:s3:::mybucketname/*",
      "Condition": {}
    }
  ]
}
Yes, s3:DeleteObject is an option:
http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
However, there is no differentiation between overwriting an existing object (which effectively allows deleting it) and creating a new object.