Invalid ARN error while creating S3 Bucket Policy using Policy generator - amazon-s3

I'm trying to create an Amazon S3 bucket policy using the Policy Generator.
Though this is very basic, I'm not sure why I'm getting "Resource field is not valid. You must enter a valid ARN." for any ARN, e.g. "arn:aws:s3:::s3-demo-bucket-2022".
I have tried multiple S3 buckets and AWS accounts, all giving the same problem.
Any help/suggestions?

As in your case, I just tried using the AWS bucket policy generator (located here) to build a simple S3 bucket policy, and it did not recognize the AWS-generated ARN I entered for my bucket. I tried several times with no luck, so it appears that, at the moment, there is a bug in AWS's system that causes the policy generator not to recognize valid ARNs for S3 buckets.
You may have to build your own bucket policy using AWS examples and enter it under "Bucket policy" (within the "Permissions" tab) of your S3 bucket. For instance, if you want your bucket to host a publicly accessible static website (static website hosting must also be enabled by ticking the appropriate box for your bucket in the AWS console), you might enter this JSON policy, which worked in my case:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Principal": "*",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": ["arn:aws:s3:::yourbucketname/*"]
    }
  ]
}
If you go to edit the current policy (which might not yet exist), AWS will pre-populate most of this for you. Don't forget to add "/*" to the end of your ARN (as I did here) if you want to grant access to the objects IN the bucket, as opposed to the bucket itself.
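If you prefer to apply the policy programmatically rather than pasting it into the console, here is a minimal boto3 sketch; the bucket name is just the placeholder from the example above, so treat it as an assumption:

import json

import boto3

# Placeholder bucket name -- replace with your own.
BUCKET = "yourbucketname"

# Public-read policy for objects in the bucket (same as the JSON above).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Principal": "*",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{BUCKET}/*"],
        }
    ],
}

s3 = boto3.client("s3")
# PutBucketPolicy expects the policy document as a JSON string.
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))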
Other JSON bucket policy examples are provided here by AWS:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-use-case-3
I have reported the bug in the policy generator website to AWS via my AWS console. I recommend you do the same so they will notice the problem and hopefully fix it.
Edit 1: I noticed you can bypass the apparent bug in the AWS Policy Generator by entering an asterisk ("*") where you would normally enter a specific S3 bucket ARN (the asterisk means 'any bucket'). This will enable you to finish building your policy, which you can edit near the end, inserting your specific bucket ARN in the place of the asterisk next to "Resource." So the editable policy will look something like this before you add your ARN (within double quotes and brackets as shown above):
{
  "Id": "Policy1656274053828",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1656274051729",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Principal": "*"
    }
  ]
}
Just copy and paste the JSON policy into the place where you need it.

Related

Prevent access of S3 bucket for admin via console

I have some highly confidential data that I want to store in an S3 bucket.
I want to create policies (bucket or IAM, whatever is required) in such a way that no one (not even an admin) can read the contents of the files in that bucket from the AWS console.
But I will have a program running on my host that needs to put and get data from that S3 bucket.
Also, I will be using S3 server-side encryption, but I can't use client-side encryption.
You are looking for something like this:
{
  "Id": "bucketPolicy",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::111111111111:user/USERNAME",
          "arn:aws:iam::111111111111:role/ROLENAME"
        ]
      },
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
For test purposes, make sure you replace arn:aws:iam::111111111111:user/USERNAME with your own user ARN, so that if you lock everybody else out you can still perform actions on the bucket.
arn:aws:iam::111111111111:role/ROLENAME should be replaced with the ARN of the role attached to your EC2 instance (I am assuming that is what you mean by host).
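For the program running on the host itself, a rough boto3 sketch of the put/get side, with server-side encryption as the question requires, could look like the following; the bucket and key names are placeholders, and the EC2 instance is assumed to have the ROLENAME role attached so no explicit credentials are needed:

import boto3

BUCKET = "examplebucket"        # placeholder, same name as in the policy above
KEY = "confidential/data.bin"   # hypothetical object key

# On EC2, boto3 picks up credentials from the attached instance role automatically.
s3 = boto3.client("s3")

# Upload with SSE-S3 (AES256) server-side encryption.
s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=b"secret payload",
    ServerSideEncryption="AES256",
)

# Read it back; decryption is transparent for a principal allowed by the policy.
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
print(len(body), "bytes read")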

Amazon S3 Server Side Encryption Bucket Policy problems

I am using a bucket policy that denies any non-SSL communication and any unencrypted object uploads.
{
  "Id": "Policy1361300844915",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnSecureCommunications",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": false
        }
      },
      "Principal": {
        "AWS": "*"
      }
    },
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Action": "s3:PutObject",
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      },
      "Principal": {
        "AWS": "*"
      }
    }
  ]
}
This policy works for applications that support SSL and SSE settings but only for the objects being uploaded.
I ran into these issues:
CloudBerry Explorer and S3 Browser failed during folder and file RENAME operations in the bucket with that bucket policy. After I applied only the SSL requirement in the bucket policy, those browsers successfully completed file/folder renaming.
CloudBerry Explorer was able to RENAME objects with the full SSL/SSE bucket policy only after I enabled, in Options, Amazon S3 Copy/Move through the local computer (which is slower and costs money).
All copy/move operations inside Amazon S3 failed because of that restrictive policy.
That means we cannot control copy/move processes that do not originate from the application manipulating local objects. At least the above-mentioned CloudBerry option proved that.
But I might be wrong, and that is why I am posting this question.
In my case, with that bucket policy enabled, the S3 Management Console becomes useless: users cannot create folders or delete them; all they can do is upload files.
Is there something wrong with my bucket policy? I do not know which mechanisms Amazon S3 uses for manipulating objects.
Does Amazon S3 treat external requests (API/HTTP headers) and internal requests differently?
Is it possible to apply this policy only to uploads and not to internal Amazon S3 GET/PUT operations? I have tried an HTTP referer condition with the bucket URL, to no avail.
The bucket policy with SSL/SSE requirements is mandatory for my implementation.
Any ideas would be appreciated.
Thank you in advance.
IMHO there is no way to automatically tell Amazon S3 to turn on SSE for every PUT request.
So, what I would investigate is the following:
write a script that lists your bucket
for each object, get the metadata
if SSE is not enabled, use the PUT COPY API (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html) to add SSE:
"(...) When copying an object, you can preserve most of the metadata (default) or specify new metadata (...)"
if the PUT (copy) operation succeeded, use the DELETE Object API to delete the original object
Then run that script on an hourly or daily basis, depending on your business requirements.
You can use the S3 API in Python (http://boto.readthedocs.org/en/latest/ref/s3.html) to make the script easier to write.
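A minimal sketch of such a script using boto3 (rather than the older boto library linked above) might look like this; the bucket name is a placeholder, and note that copying an object onto itself with new encryption settings replaces it in place, so the separate DELETE step is not needed in this variant:

import boto3

BUCKET = "my-bucket"  # placeholder bucket name

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# 1. List the bucket.
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # 2. Get the object's metadata.
        head = s3.head_object(Bucket=BUCKET, Key=key)
        # 3. If SSE is not enabled, re-copy the object onto itself with SSE-S3.
        if "ServerSideEncryption" not in head:
            s3.copy_object(
                Bucket=BUCKET,
                Key=key,
                CopySource={"Bucket": BUCKET, "Key": key},
                ServerSideEncryption="AES256",
                MetadataDirective="COPY",  # preserve the existing metadata
            )
            print(f"re-encrypted {key}")

(Objects larger than 5 GB would need a multipart copy instead of a single copy_object call.)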
If this "change-after-write" solution is not valid for you business-wise, you can work at a different level:
use a proxy between your API clients and the S3 API (like a reverse proxy on your site), and configure it to add the SSE HTTP header to every PUT/POST request.
Developers must go through the proxy and must not be authorised to issue requests against the S3 API endpoints directly.
write a wrapper library that adds the SSE metadata automatically, and oblige developers to use your library on top of the SDK (a minimal sketch of such a wrapper follows below).
The latter two are a matter of discipline in the organisation, as they are not easy to enforce at a technical level.
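For illustration only, a tiny sketch of that wrapper idea in boto3; the function name and module layout are my own assumptions, not an established library:

import boto3

_s3 = boto3.client("s3")

def put_object_encrypted(bucket, key, body, **kwargs):
    """Wrapper around put_object that always forces SSE-S3 (AES256)."""
    # Drop any attempt to override or remove the encryption header.
    kwargs.pop("ServerSideEncryption", None)
    return _s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=body,
        ServerSideEncryption="AES256",
        **kwargs,
    )

# Usage: developers call this instead of boto3's put_object directly.
# put_object_encrypted("my-bucket", "report.csv", b"col1,col2\n")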
Seb

amazon s3 invalid principal in bucket policy

I'm trying to create a new bucket policy in the Amazon S3 console and get the error
Invalid principal in policy - "AWS" : "my_username"
The username I'm using in principal is my default bucket grantee.
My policy
{
  "Id": "Policy14343243265",
  "Statement": [
    {
      "Sid": "SSdgfgf432432432435",
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetObjectVersionAcl",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:PutObjectVersionAcl"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my_bucket/*",
      "Principal": {
        "AWS": [
          "my_username"
        ]
      }
    }
  ]
}
I don't understand why I'm getting the error. What am I doing wrong?
As the error message says, your principal is incorrect. Check the S3 documentation on specifying Principals for how to fix it. As seen in the example policies, it needs to be something like arn:aws:iam::111122223333:root.
I was also getting the same error in the S3 Bucket policy generator. It turned out that one of the existing policies had a principal that had been deleted. The problem was not with the policy that was being added.
In this instance, to spot the policy that is bad you can look for a principal that does not have an account or a role in the ARN.
So, instead of looking like this:
"Principal": {
"AWS": "arn:aws:iam::123456789101:role/MyCoolRole"
}
It will look something like this:
"Principal": {
"AWS": "ABCDEFGHIJKLMNOP"
}
So instead of a proper ARN it will be an alphanumeric key like ABCDEFGHIJKLMNOP. In this case you will want to identify why the bad principal is there and most likely modify or delete it. Hopefully this will help someone; it was hard to track down for me and I didn't find any documentation that describes this behaviour.
Better solution:
Create an IAM policy that gives access to the bucket
Assign it to a group
Put user into that group
Instead of saying "This bucket is allowed to be touched by this user", you can define "These are the people that can touch this".
It sounds silly right now, but wait till you add 42 more buckets and 60 users to the mix. Having a central spot to manage all resource access will save the day.
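As an illustration of that group-based approach, here is a hedged boto3 sketch; the group name, user name and the inline policy are all placeholders of my own:

import json

import boto3

iam = boto3.client("iam")

BUCKET = "my_bucket"        # placeholder bucket name
GROUP = "s3-my-bucket-rw"   # hypothetical group name
USER = "some-user"          # hypothetical existing IAM user

# 1. An IAM policy that gives access to the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

# 2. Assign it to a group.
iam.create_group(GroupName=GROUP)
iam.put_group_policy(
    GroupName=GROUP,
    PolicyName="s3-access",
    PolicyDocument=json.dumps(policy),
)

# 3. Put the user into that group.
iam.add_user_to_group(GroupName=GROUP, UserName=USER)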
The value for Principal should be the user ARN, which you can find in the Summary section after clicking on your username in IAM.
This is so that the specific user can be bound to the S3 bucket policy.
In my case it is arn:aws:iam::332490955950:user/sample, where sample is the username.
I was getting the same error message when I tried creating the bucket, bucket policy and principal (IAM user) inside the same CloudFormation stack. Although I could see that CloudFormation completed the IAM user creation before even starting the bucket policy creation, the stack deployment failed. Adding DependsOn: MyIamUser to the BucketPolicy resource fixed it for me.
Why am I getting the error "Invalid principal in policy" when I try to update my Amazon S3 bucket policy?
Issue
I'm trying to add or edit the bucket policy of my Amazon Simple Storage Service (Amazon S3) bucket using the web console, the AWS CLI or Terraform. However, I'm getting the error message "Error: Invalid principal in policy." How can I fix this?
Resolution
You receive "Error: Invalid principal in policy" when the value of a Principal in your bucket policy is invalid. To fix this error, review the Principal elements in your bucket policy. Check that they're using one of these supported values:
The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) user or role
Note: To find the ARN of an IAM user, run the aws iam get-user command. To find the ARN of an IAM role, run the aws iam get-role command, or look it up in the IAM section of the web console. (A small boto3 equivalent is shown after this list.)
An AWS account ID
The string "*" to represent all users
Additionally, review the Principal elements in the policy and check that they're formatted correctly. If the Principal is one user, the element must be in this format:
"Principal": {
"AWS": "arn:aws:iam::AWS-account-ID:user/user-name1"
}
If the Principal is more than one user but not all users, the element must be in this format:
"Principal": {
"AWS": [
"arn:aws:iam::AWS-account-ID:user/user-name1",
"arn:aws:iam::AWS-account-ID:user/user-name2"
]
}
If the Principal is all users, the element must be in this format:
{
  "Principal": "*"
}
If you find invalid Principal values, you must correct them so that you can save changes to your bucket policy.
Extra points!
AWS Policy Generator
Bucket Policy Examples
Ref-link: https://aws.amazon.com/premiumsupport/knowledge-center/s3-invalid-principal-in-policy-error/
I was facing the same issue when I created a bash script to initialise my Terraform S3 backend. After a few hours I decided just to put a sleep 5 after the user creation, and that solved it; you can see it at line 27 of my script.
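The underlying cause is IAM propagation delay. If you are scripting the same thing in Python rather than bash, a waiter is a slightly more robust stand-in for that sleep; this is only an illustrative sketch with a hypothetical user name, not the poster's script:

import boto3

iam = boto3.client("iam")

USER = "terraform-backend-user"  # hypothetical user name

iam.create_user(UserName=USER)
# Wait until the new user is actually visible before referencing it in a policy.
iam.get_waiter("user_exists").wait(UserName=USER)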
If you are getting the error Invalid principal in policy in S3 bucket policies, the following three checks are the way to resolve it.
1 Your bucket policy uses supported values for a Principal element
The Amazon Resource Name (ARN) of an IAM user or role
An AWS account ID
The string "*" to represent all users
2 The Principal value is formatted correctly
If the Principal is one user
"Principal": {
"AWS": "arn:aws:iam::111111111111:user/user-name1"
}
If the Principal is more than one user but not all users
"Principal": {
"AWS": [
"arn:aws:iam::111111111111:user/user-name1",
"arn:aws:iam::111111111111:user/user-name2"
]
}
If the Principal is all users
{
  "Principal": "*"
}
3 The IAM user or role wasn't deleted
If your bucket policy uses IAM users or roles as Principals, then confirm that those IAM identities weren't deleted. When you edit and then try to save a bucket policy with a deleted IAM ARN, you get the "Invalid principal in policy" error.
Read more here.
FYI: If you are trying to give access to a bucket for a region that is not enabled, you will get the same error.
From AWS Docs: If your S3 bucket is in an AWS Region that isn't enabled by default, confirm that the IAM principal's account has the AWS Region enabled. For more information, see Managing AWS Regions.
If you are trying to give Account_X_ID access to my_bucket as shown below, you need to enable the region of my_bucket in Account_X_ID.
"Principal": {
"AWS": [
"arn:aws:iam::<Account_X_ID>:root"
]
}
"Resource": "arn:aws:s3:::my_bucket/*",
Hope this helps someone.

How can I make a S3 bucket public (the amazon example policy doesn't work)?

Amazon provides an example for Granting Permission to an Anonymous User as follows (see Example Cases for Amazon S3 Bucket Policies):
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
Within my policy I've changed "bucket" in "arn:aws:s3:::bucket/*" to "my-bucket".
However, once I try to access an image within a folder of that bucket, I get the following Access denied error:
This XML file does not appear to have any style information associated
with it. The document tree is shown below.
(If I explicitly change the properties of that image to public and then reload its URL, the image loads perfectly.)
What am I doing wrong?
Update #1: Apparently it has something to do with a third-party site that I've given access to. Although it has all of the same permissions as the main user (me), and its objects are in the same folder with the exact same permissions, it still won't let me make them publicly viewable. No idea why.
Update #2: Bucket policies do not apply to objects "owned" by others, even though they are within your bucket, see my answer for details.
Update
As per GoodGets' comment, the real issue has been that bucket policies do not apply to objects "owned" by someone else, even though they are in your bucket; see GoodGets' own answer for details (+1).
Is this a new bucket/object setup or are you trying to add a bucket policy to a pre-existing setup?
In the latter case you might have stumbled over a related pitfall due to the interaction between the (by now) three different S3 access control mechanisms available, which can be rather confusing indeed. This is addressed e.g. in Using ACLs and Bucket Policies Together:
When you have ACLs and bucket policies assigned to buckets, Amazon S3
evaluates the existing Amazon S3 ACLs as well as the bucket policy
when determining an account’s access permissions to an Amazon S3
resource. If an account has access to resources that an ACL or policy
specifies, they are able to access the requested resource.
While this sounds easy enough, unintentional interference may result from the subtly different defaults between ACLs and policies:
With existing Amazon S3 ACLs, a grant always provides access to a
bucket or object. When using policies, a deny always overrides a
grant. [emphasis mine]
This explains why adding an ACL grant always guarantees access; however, this does not apply to adding a policy grant, because an explicit policy deny provided elsewhere in your setup would still be enforced, as further illustrated in e.g. IAM and Bucket Policies Together and Evaluation Logic.
Consequently, I recommend starting with a fresh bucket/object setup to test the desired configuration before applying it to a production scenario (which might still interfere, of course, but identifying/debugging the difference will be easier in that case).
Good luck!
Bucket policies do not apply to files with other owners. So although I've given write access to a third party, the ownership remains theirs, and my bucket policy will not apply to those objects.
I wasted hours on this; the root cause was stupid, the solutions mentioned here didn't help (I tried them all), and the AWS S3 permissions docs didn't emphasize this point.
If you have the Requester Pays setting ON, you cannot enable anonymous access (either by bucket policy or the 'Everyone' ACL). You can certainly write the policies and ACLs, apply them, and even use the console to explicitly set a file to public, but an unsigned URL will still get a 403 Access Denied 100% of the time on that file, until you uncheck the Requester Pays setting for the entire bucket in the console (Properties tab when the bucket is selected) or, I assume, via some REST API call.
I unchecked Requester Pays and now anonymous access is working, with referrer restrictions, etc. In fairness, the AWS console does tell us:
While Requester Pays is enabled, anonymous access to this bucket is disabled.
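If you want to check or flip that setting outside the console, here is a hedged boto3 sketch; the bucket name is a placeholder:

import boto3

BUCKET = "my-bucket"  # placeholder bucket name

s3 = boto3.client("s3")

# See whether Requester Pays is currently on.
payer = s3.get_bucket_request_payment(Bucket=BUCKET)["Payer"]
print("Payer:", payer)  # "Requester" means anonymous access is blocked

# Turn it off so the bucket owner pays and anonymous access can work again.
if payer == "Requester":
    s3.put_bucket_request_payment(
        Bucket=BUCKET,
        RequestPaymentConfiguration={"Payer": "BucketOwner"},
    )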
The issue is with your Action; it should be in array format.
Try this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}
Pass your Bucket name in 'Resource'
If you're having this problem with Zencoder uploads, checkout this page: https://app.zencoder.com/docs/api/encoding/s3-settings/public
The following policy will make the entire bucket public:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}
If you want a specific folder under that bucket to be public using bucket policies, then you have to explicitly make that folder/prefix public and then apply the bucket policy as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/images/*"]
    }
  ]
}
The above policy will allow public read access to all of the objects under images/, but you will not be able to access the other objects in the bucket.
I know it is an old question but I would like to add information that may still be relevant today.
I believe this bucket is meant to serve a static site. Because of this, you must use the website endpoint URL for your rules to take effect, which means adding the "s3-website" part to the URL as shown below. Otherwise, S3 treats the request as if the bucket were just an object repository.
Example:
URL with the problem described above (REST endpoint):
https://name-your-bucket.sa-east-1.amazonaws.com/home
URL without the problem (website endpoint):
http://name-your-bucket.s3-website-sa-east-1.amazonaws.com/home
Hope this helps :)
This works.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}

How to set a bucket's ACL on S3?

I tried a couple of things: S3Browse, the RightAws Ruby gem and other tools. All allow granting access on an individual key basis, but I wasn't able to set the ACL on buckets. Actually, I do set the ACL on the bucket and no errors are returned, but when I refresh or check in another tool, the bucket's ACL is reset to owner-only.
I want to give read and write access to FlixCloud for an application I'm developing. They need the access to write the output files.
I was struggling with the ACL vs. Bucket Policy and found the following useful.
ACL
The ACL defines the permissions attached to a single file in your bucket. The bucket policy is a script that describes the permissions for any folder or file in the bucket. Use bucket policies to restrict hotlinking, grant or deny access to specific files or all files, restrict IP addresses, etc.
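If the GUI tools keep resetting the bucket ACL, it may be worth applying the grants from code as well; below is a rough boto3 sketch of granting another account read/write on the bucket. The grantee email is a placeholder, email grantees are only supported in some regions, and the canonical-ID form is noted as an alternative:

import boto3

BUCKET = "my_bucket_name"  # placeholder bucket name

s3 = boto3.client("s3")

# Grant read and write on the bucket to another AWS account by email address.
# Alternatively use the account's canonical user ID: GrantWrite='id="..."'.
s3.put_bucket_acl(
    Bucket=BUCKET,
    GrantRead='emailAddress="grantee@example.com"',
    GrantWrite='emailAddress="grantee@example.com"',
)

# Read the ACL back to confirm the grants stuck.
for grant in s3.get_bucket_acl(Bucket=BUCKET)["Grants"]:
    print(grant["Permission"], grant["Grantee"])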
Edit the S3 Bucket Policy
Log into Amazon Web Services, click through to S3 and click on the bucket name in the left column. View the bucket Properties panel at the bottom of the page. Click the button in the lower right corner that says "Edit bucket policy". This brings up a lightbox into which you can paste the policy script. If the script fails validation, it will not save.
Sample policy that enables read access for everyone (useful if the bucket is being used as a content delivery network):
{
  "Version": "2008-10-17",
  "Id": "",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my_bucket_name/*"
    }
  ]
}
Sample policy to prevent unauthorized hotlinking (third-party sites linking to your files) while still allowing anybody to download them:
{
  "Version": "2008-10-17",
  "Id": "preventHotLinking",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your.bucket.name/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://yourwebsitename.com/*",
            "http://www.yourwebsitename.com/*"
          ]
        }
      }
    }
  ]
}
Generate a Policy
http://awspolicygen.s3.amazonaws.com/policygen.html
Sample Bucket Policies
http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?AccessPolicyLanguage_UseCases_s3_a.html
Yup, just checked it again after 10 minutes. The ACL remains as configured. I guess this is something at your end then. Try a different account/workstation.
I have just double-checked that for you - S3fm was able to change the ACL successfully. I used their email s3#flixcloud.com as the user id. You can see the user in the list afterwards as flixclouds3.