I would like to write an S3 bucket policy that restricts public access to all items in the bucket and only allows downloads through the AWS REST interface, where the Key and Shared Secret are passed. Any examples or help in writing such a policy would be greatly appreciated.
How about this?
{
  "Version": "2012-10-17",
  "Id": "Policy1365979145718",
  "Statement": [
    {
      "Sid": "Stmt1365979068994",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::mybucket",
        "arn:aws:s3:::mybucket/*"
      ],
      "Principal": {
        "CanonicalUser": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
      }
    }
  ]
}
Make a user for just this purpose (give out that user's key) and replace the CanonicalUser ID with that user's canonical ID. Note that s3:GetObject applies to objects, so the object ARN (mybucket/*) is needed alongside the bucket ARN for s3:ListBucket. Of course, you'll always have full access using the AWS account's root key.
Amazon has a Policy Generator if you want to use it.
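If you'd rather script this than click through the console, the policy document can be assembled programmatically and then applied with your SDK's put-bucket-policy call. A minimal sketch in Python using only the standard library; the bucket name and canonical user ID are placeholders:

```python
import json

def make_download_policy(bucket, canonical_user_id):
    """Build a bucket policy granting one canonical user read access.

    Note: s3:ListBucket applies to the bucket ARN itself, while
    s3:GetObject applies to the objects, so both ARNs are listed.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowSignedDownloads",
                "Effect": "Allow",
                "Principal": {"CanonicalUser": canonical_user_id},
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::" + bucket,
                    "arn:aws:s3:::" + bucket + "/*",
                ],
            }
        ],
    }

# "CANONICAL-USER-ID" is a placeholder for the user's canonical ID.
print(json.dumps(make_download_policy("mybucket", "CANONICAL-USER-ID"), indent=2))
```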
Related
I know how to create a user through the AWS Console in IAM, but I wonder where or how I should set the permissions for that user so that they can only:
upload/delete files to a specific folder in a specific S3 bucket
I have this bucket:
So I wonder whether I have to set up the permissions in that interface, or directly on the user in the IAM service.
I'm creating a Group there with this policy:
but for "Write" and "Read" there are a lot of actions listed; which ones do I need just to write/read files in a specific bucket?
Edit: Currently I have this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::static.XXXXXX.com/images/carousel/*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}
I wonder if that is enough to:
log in to the AWS Console
go to S3 and delete/read objects in the folder of the bucket that I want
You can attach a custom policy to that user (Doc).
There you can choose the service, the actions you want to allow, and the resources that are allowed.
You can either use a resource based policy that is attached with S3 or an identity based policy attached to an IAM User, Group or Role.
Identity-based policies and resource-based policies
You can attach the identity policy below to the user to allow uploading/deleting files in a specific folder of a specific S3 bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::SAMPLE-BUCKET-NAME/foldername/*"
    }
  ]
}
For more details, refer to Grant Access to User-Specific Folders in an Amazon S3 Bucket.
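The key point in the linked article is that folder-scoped access usually takes two statements: listing is a bucket-level action narrowed by an `s3:prefix` condition, while object actions target the folder's object ARNs. A sketch in Python (bucket and folder names are placeholders):

```python
import json

def folder_policy(bucket, folder):
    """Sketch of an identity policy limiting a user to one folder.

    Listing is a bucket-level action, so it targets the bucket ARN
    and is narrowed with an s3:prefix condition; object actions
    target the folder's object ARNs. Names are placeholders.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListFolder",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::" + bucket,
                "Condition": {"StringLike": {"s3:prefix": [folder + "/*"]}},
            },
            {
                "Sid": "ReadWriteFolder",
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
                "Resource": "arn:aws:s3:::" + bucket + "/" + folder + "/*",
            },
        ],
    }

print(json.dumps(folder_policy("SAMPLE-BUCKET-NAME", "foldername"), indent=2))
```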
I'm trying to define a policy for a specific user.
I have several buckets in S3, but I want to give the user access to only some of them.
I created the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3:::examplebucket"]
    }
  ]
}
When I try to add a list of resources like this:
"Resource":["arn:aws:s3:::examplebucket1","arn:aws:s3:::examplebucket2"]
I get "access denied".
The only option that works for me (I get the bucket list) is:
"Resource": ["arn:aws:s3:::*"]
What's the problem?
Some Amazon S3 API calls operate at the Bucket-level, while some operate at the Object-level. Therefore, you will need a policy like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::test"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::test/*"]
    }
  ]
}
See: AWS Security Blog - Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
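The split above can be captured in a small helper: bucket-level actions get the bare bucket ARN, object-level actions get `bucket/*`. A sketch (the set of bucket-level actions here is illustrative, not exhaustive):

```python
# Bucket-level vs object-level: some S3 actions need the bare bucket
# ARN, the rest need bucket/*. The set below is illustrative, not
# exhaustive.
BUCKET_LEVEL = {"s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketVersions"}

def resource_for(action, bucket):
    """Return the resource ARN an action should be granted against."""
    if action in BUCKET_LEVEL:
        return "arn:aws:s3:::" + bucket
    return "arn:aws:s3:::" + bucket + "/*"

print(resource_for("s3:ListBucket", "test"))  # arn:aws:s3:::test
print(resource_for("s3:GetObject", "test"))   # arn:aws:s3:::test/*
```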
I found that it's an AWS limitation.
There is no option to get a filtered list of buckets.
Once you give permission to ListAllMyBuckets like this:
{
  "Sid": "AllowUserToSeeBucketListInTheConsole",
  "Action": ["s3:GetBucketLocation", "s3:ListAllMyBuckets"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::*"]
}
you get the list of all buckets (including buckets you don't have permission to access).
More info could be found here: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
A few workarounds can be found here: Is there an S3 policy for limiting access to only see/access one bucket?
It seems like I should be able to make a rule to allow access from my EC2 instance's Elastic IP. Here is the code I have:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::big18v1/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "12.123.12.123"
        }
      }
    }
  ]
}
but... it doesn't work. I get "access denied".
Any thoughts? I've read over and over about creating IAM roles and things like that, but I don't really want to manipulate the bucket files... I just want to use the bucket like a server and get image files from it.
Am I thinking about this right? How should I let only my EC2 instance have access to my S3 bucket?
You should create an IAM role with access to the bucket, then use an instance profile to make the role's credentials available to the instance:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
I have set up an unauthenticated role under an Amazon Cognito identity pool. My goal is that guest users of my mobile app would be able to upload debugging logs (small text files) to my S3 bucket so I can troubleshoot issues. I noticed I would get "Access Denied" from S3 if I didn't modify my S3 bucket permissions. If I allow "Everyone" to have the "Upload/Delete" privilege, the file upload succeeds. My concern is that someone would then be able to upload large files to my bucket and cause a security issue. What is the recommended configuration for my need above? I am a newbie to S3 and Cognito.
I am using the Amazon AWS SDK for iOS, but I suppose this question is platform-neutral.
Edit:
My policy is as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:GetUser",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:DeleteObject",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3:::import-to-ec2-*", "arn:aws:s3:::<my bucket name>/*"]
    }
  ]
}
You don't need to modify the S3 bucket permission, but rather the IAM role associated with your identity pool. Try the following:
Visit the IAM console.
Find the role associated with your identity pool.
Attach a policy similar to the following to your role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::MYBUCKET/*"]
    }
  ]
}
Replace MYBUCKET with your bucket name.
Access your bucket as normal from your application using the iOS SDK and Cognito.
You may want to consider limiting permissions further, including ${cognito-identity.amazonaws.com:sub} to partition your users, but the above policy will get you started.
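For reference, a sketch of what that per-identity partitioning could look like: IAM substitutes the `${cognito-identity.amazonaws.com:sub}` policy variable at request time, so each Cognito identity can only write under its own prefix. The `uploads/` prefix and bucket name below are assumptions for illustration:

```python
import json

BUCKET = "MYBUCKET"  # placeholder: replace with your bucket name

# Hypothetical per-user policy: IAM substitutes the policy variable
# ${cognito-identity.amazonaws.com:sub} at evaluation time, so each
# Cognito identity can only write under its own prefix. The
# "uploads/" prefix is an assumption for illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::" + BUCKET
                + "/uploads/${cognito-identity.amazonaws.com:sub}/*"
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```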
The answer above is incomplete as of 2015: you need to authorize BOTH the role AND the bucket policy in S3 to allow that role to write to the bucket. Use s3:PutObject in both cases. The console has wizards for both.
As @einarc said (cannot comment yet), to make it work I had to edit both the role and the bucket policy. This is good enough for testing:
Bucket Policy:
{
  "Id": "Policy1500742753994",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1500742752148",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::admin1.user1",
      "Principal": "*"
    }
  ]
}
Authenticated role's policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
Now, I know that I cannot stop someone from downloading my videos and sharing them; however, I would prefer that people not copy and paste links directly to my bucket. So, is there a way to make my bucket accessible only when my server/domain makes the request?
If it helps, I'm using JW Player, which loads from an XML playlist that has all the links. This playlist can definitely be opened and viewed from anywhere, and is where I expect the easy copy-and-paste comes from.
I don't want to mask the URLs, because that means my bucket is readable to everyone. There is probably some chance that someone will find the URL of my bucket and the names of the files and connect everything together...
This is possible by using Bucket Policies, which allow you to define access rights for Amazon S3 resources. There are a couple of Example Cases for Amazon S3 Bucket Policies illustrating the functionality, and among these you'll find an example for Restricting Access to Specific IP Addresses as well:
This statement grants permissions to any user to perform any S3 action
on objects in the specified bucket. However, the request must
originate from the range of IP addresses specified in the condition.
Depending on the specifics of your use case, a bucket policy for this might look like so:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "192.168.143.0/24"
        },
        "NotIpAddress": {
          "aws:SourceIp": "192.168.143.188/32"
        }
      }
    }
  ]
}
As shown, the aws:SourceIp value for the IpAddress and NotIpAddress condition operators is expressed in CIDR notation, giving you flexibility in composing the desired scope.
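The allow-with-exception pattern above can be reasoned about with Python's `ipaddress` module: an address passes if it falls inside the allowed /24 but not inside the excluded /32:

```python
import ipaddress

# Mirrors the policy above: allowed if inside the /24 range and not
# inside the excluded /32.
ALLOWED = ipaddress.ip_network("192.168.143.0/24")
EXCLUDED = ipaddress.ip_network("192.168.143.188/32")

def ip_permitted(ip):
    addr = ipaddress.ip_address(ip)
    return addr in ALLOWED and addr not in EXCLUDED

print(ip_permitted("192.168.143.10"))   # True: in the /24, not excluded
print(ip_permitted("192.168.143.188"))  # False: explicitly carved out
print(ip_permitted("10.0.0.1"))         # False: outside the range
```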
Finally, you might want to check out the recommended AWS Policy Generator: select the type S3 Bucket Policy and explore the available Actions and Conditions to compose more targeted policies for your use case; the documentation for Conditions explains this in detail.
The IP address approach will help if your server is going to access your bucket. But JW Player runs on the client side, so the request goes directly from JW Player (the browser) to the S3 bucket URL, not via your server. In this case a "referrer bucket policy" will help you:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::yourbucketname/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://yoursitename.com/*",
            "http://*.yoursitename.com/*"
          ]
        }
      }
    }
  ]
}
So now S3 will only allow requests that come from your site.
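The `StringNotLike` operator matches with `*` and `?` wildcards, similar to shell globbing. As a rough stand-in for IAM's matcher (an approximation, not the exact algorithm), `fnmatch` lets you reason about which Referer headers the Deny above would let through:

```python
from fnmatch import fnmatchcase

# StringNotLike uses * and ? wildcards, similar to shell globbing.
# fnmatchcase is only a rough stand-in for IAM's matcher, used here
# to reason about which Referer headers the Deny would let through.
PATTERNS = ["http://yoursitename.com/*", "http://*.yoursitename.com/*"]

def referer_allowed(referer):
    return any(fnmatchcase(referer, p) for p in PATTERNS)

print(referer_allowed("http://yoursitename.com/videos/1"))      # True
print(referer_allowed("http://www.yoursitename.com/playlist"))  # True
print(referer_allowed("http://evil.example.com/hotlink"))       # False
```

Keep in mind that the Referer header is client-supplied and easily spoofed, so treat this as a deterrent against casual hotlinking rather than real security.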
You can keep your bucket protected, which is the default (meaning only you have access to objects in it). Then you can request files from Amazon S3 from your website and give each URL a time limit during which the user can see it.
// Set the expiry so users can see the file for 1 minute; then it is protected again.
$response = $s3->get_object_url(YOUR_S3_BUCKET, 'PATH/TO/FILE', '1 minutes');
This automatically gives you a URL with parameters attached that is only accessible for 1 minute. You can use that as the source within your website, and after that minute it can no longer be copied and pasted into the browser.
You can read more about this at the Amazon SDK for PHP
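The idea behind these pre-signed URLs is a signature over the resource plus an expiry timestamp. A generic illustration in Python; this is NOT the real AWS signing algorithm (names, secret, and message layout here are made up), so in practice always use the SDK's presigning helpers:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"demo-secret"  # hypothetical key, not a real AWS secret

def signed_url(base_url, path, ttl_seconds, now=None):
    """Generic illustration of an expiring signed URL.

    Mirrors the idea behind S3 pre-signed URLs (a signature over the
    resource plus an expiry timestamp) but is NOT the real AWS
    signing algorithm; use the SDK's presigning helpers in practice.
    """
    expires = int(now if now is not None else time.time()) + ttl_seconds
    msg = f"GET\n{path}\n{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{base_url}{path}?" + urlencode({"Expires": expires, "Signature": sig})

def is_valid(path, expires, signature, now=None):
    """Check the signature and reject links past their expiry."""
    if (now if now is not None else time.time()) > int(expires):
        return False  # link has expired
    msg = f"GET\n{path}\n{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

url = signed_url("https://example-bucket.s3.amazonaws.com", "/video.mp4", 60, now=1000)
print(url)
```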
Restricting Access to a Specific HTTP Referrer
Suppose you have a website with domain name (www.example.com or example.com) with links to photos and videos stored in your Amazon S3 bucket, examplebucket. By default, all the Amazon S3 resources are private, so only the AWS account that created the resources can access them. To allow read access to these objects from your website, you can add a bucket policy that allows s3:GetObject permission with a condition, using the aws:referer key, that the get request must originate from specific webpages. The following policy specifies the StringLike condition with the aws:Referer condition key.
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
For everyone who is stumbling upon this now, please note that Amazon has changed the JSON format for bucket policies and now requires each allowed/denied IP or domain to be listed separately. See below for an example.
Either way, I strongly recommend using the AWS Policy Generator to make sure your formatting is correct.
AWS S3 Bucket Policy - Allow Access only from multiple IPs
{
  "Id": "Policy1618636210012",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1618635877058",
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketname/folder/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "333.444.555.666"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "Stmt1618636151833",
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketname/folder/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "222.333.444.555"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "Stmt1618636203591",
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketname/folder/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "111.222.333.444"
        }
      },
      "Principal": "*"
    }
  ]
}