IAM configuration to access jgit on S3

I am trying to create IAM permissions so jgit can access a directory in one of my buckets.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<mybucket>/<mydir>/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::<mybucket>/<mydir>"]
    }
  ]
}
Unfortunately it throws an error, and I am not sure what other Allow actions are needed for this to work. (I'm a little new to IAM.)
Caused by: java.io.IOException: Reading of '<mydir>/packed-refs' failed: 403 Forbidden
at org.eclipse.jgit.transport.AmazonS3.error(AmazonS3.java:519)
at org.eclipse.jgit.transport.AmazonS3.get(AmazonS3.java:289)
at org.eclipse.jgit.transport.TransportAmazonS3$DatabaseS3.open(TransportAmazonS3.java:284)
at org.eclipse.jgit.transport.WalkRemoteObjectDatabase.openReader(WalkRemoteObjectDatabase.java:365)
at org.eclipse.jgit.transport.WalkRemoteObjectDatabase.readPackedRefs(WalkRemoteObjectDatabase.java:423)
... 13 more
Caused by: java.io.IOException:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>...</RequestId><HostId>...</HostId></Error>
at org.eclipse.jgit.transport.AmazonS3.error(AmazonS3.java:538)
... 17 more
The 403 Forbidden is obviously the error, but I'm not sure what needs to be added to the IAM policy. Any ideas?
[I should have added, too, that I tried this out in the IAM policy simulator and it appeared to work there.]

The "403" error may simply mean that the key <mydir>/packed-refs doesn't exist. According to https://forums.aws.amazon.com/thread.jspa?threadID=56531:
Amazon S3 will return an AccessDenied error when a nonexistent key is requested and the requester is not allowed to list the contents of the bucket.
If you're pushing for the first time, that folder might not exist, and I'm guessing you need ListBucket privileges on the bucket to get the proper NoSuchKey response. Note that s3:ListBucket is evaluated against the bucket ARN itself, not an object path, so try changing that first statement to:
{
  "Effect": "Allow",
  "Action": ["s3:ListBucket"],
  "Resource": ["arn:aws:s3:::<mybucket>"]
}
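A quick way to see the 403-versus-404 behavior from the command line, assuming the AWS CLI is configured with the same credentials jgit is using (bucket and key names here are placeholders):

aws s3api head-object --bucket <mybucket> --key <mydir>/packed-refs
# With s3:ListBucket granted on the bucket, a missing key fails with 404 Not Found.
# Without it, the same request fails with 403 Forbidden, because S3 will not
# reveal whether a key exists to callers who are not allowed to list the bucket.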
I also noticed that jgit push s3 refs/heads/master worked when jgit push s3 master did not.
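For context, jgit's S3 transport reads its credentials from a properties file and addresses the bucket through an amazon-s3:// remote URL. A minimal sketch, assuming the default .jgit file name and placeholder values:

# ~/.jgit -- properties file read by org.eclipse.jgit.transport.AmazonS3
accesskey: <your-access-key-id>
secretkey: <your-secret-access-key>

# Register the remote and push with the full refspec noted above
git remote add s3 amazon-s3://.jgit@<mybucket>/<mydir>
jgit push s3 refs/heads/master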
To future folk: if all you want to do is set up a git repo bucket with its own user, the following policy seems to be good enough:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/*"
      ]
    }
  ]
}
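Before pointing jgit at the bucket, you can sanity-check a policy like this by exercising the same actions with the AWS CLI under the new user's credentials (the profile name here is a placeholder):

aws s3 ls s3://<bucketname>/ --profile git-s3-user                        # s3:ListBucket
aws s3 cp ./probe.txt s3://<bucketname>/probe.txt --profile git-s3-user   # s3:PutObject
aws s3 cp s3://<bucketname>/probe.txt ./probe-copy.txt --profile git-s3-user  # s3:GetObject

If all three succeed, jgit should have everything it needs.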

Related

ECS Task Access Denied to S3

I have an IAM role set for my task with the following permissions, yet I get access denied trying to access the buckets.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket/Templates/*",
        "arn:aws:s3:::bucket/*",
        "arn:aws:s3:::anotherBucket/*"
      ]
    }
  ]
}
The container instance has a role with the standard AmazonEC2ContainerServiceforEC2Role policy.
I seem to be able to read and write to folders under bucket/, like bucket/00001, BUT I can't read from bucket/Templates.
I've redeployed the permissions and the tasks repeatedly (using Terraform), but nothing changes. I've added logging to the app to ensure it's using the correct bucket and path/keys.
I'm stumped. Anyone got a clue what I might have missed here?
Thanks
PS: It just occurred to me that the files I can't access were copied into the bucket by a script, using credentials other than the ones the task is using.
aws s3 cp ..\Api\somefiles\000000000001\ s3://bucket/000000000001 --recursive --profile p
aws s3 cp ..\Api\somefiles\Templates\000000000001\ s3://bucket/Templates/000000000001 --recursive --profile p
I was using --acl bucket-owner-full-control on the cp command, but I removed that to see if it would help - it didn't. Maybe I need something else?
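Since those files were uploaded under different credentials, one thing worth checking is who actually owns the objects: if the uploading profile belongs to a different AWS account, the bucket owner cannot read the objects unless they were written with bucket-owner-full-control. A hedged way to inspect and fix this with the AWS CLI (the object key is a placeholder):

# Show the owner and grants on one of the unreadable objects
aws s3api get-object-acl --bucket bucket --key Templates/000000000001/<somefile> --profile p

# Re-upload, granting the bucket owner full control this time
aws s3 cp ..\Api\somefiles\Templates\000000000001\ s3://bucket/Templates/000000000001 --recursive --acl bucket-owner-full-control --profile p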
It works now because you changed the Resource to match "*".
Try adding the bucket itself as a resource, along with the /* pattern:
"Version": "2012-10-17",
"Statement": [
{
"Sid": "sid1",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:ListBucket",
"s3:HeadBucket"
],
"Resource": "*"
},
{
"Sid": "sid2",
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::bucket",
"arn:aws:s3:::bucket/*",
"arn:aws:s3:::anotherBucket"
"arn:aws:s3:::anotherBucket/*",
]
}
]
Solved. Found an old sample from a previous employer :) I needed a permission for List* explicitly, separate from the other permissions. I also needed to define the Sids.
"Version": "2012-10-17",
"Statement": [
{
"Sid": "sid1",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:ListBucket",
"s3:HeadBucket"
],
"Resource": "*"
},
{
"Sid": "sid2",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
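To confirm which statement actually grants a given call, you can replay individual actions against the task role with the IAM policy simulator from the CLI; a sketch, with the role ARN and resource names as placeholders:

aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:role/my-task-role \
  --action-names s3:ListBucket s3:GetObject \
  --resource-arns arn:aws:s3:::bucket arn:aws:s3:::bucket/Templates/<somekey>

Each action comes back with an allowed or implicitDeny decision, which makes it much easier to spot the missing permission than redeploying and retrying.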

Google Cloud Storage transfer from Amazon S3 - Invalid access key

I'm trying to create a transfer from my S3 bucket to Google Cloud - it's basically the same problem as in this question, but none of the answers work for me. Whenever I try to make a transfer, I get the following error:
Invalid access key. Make sure the access key for your S3 bucket is correct, or set the bucket permissions to Grant Everyone.
I've tried the following policies, to no success:
First policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    }
  ]
}
Second policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
Third policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
      ]
    }
  ]
}
I've also made sure to grant the 'List' permission to 'Everyone'. Tried this on buckets in two different locations - Sao Paulo and Oregon. I'm starting to run out of ideas, hope you can help.
I know this question is over a year old, but I just encountered the same error when trying to do the transfer via the console. I worked around this by executing it via the gsutil command-line tool instead.
After installing and configuring the tool, simply run:
gsutil cp -r s3://sourcebucket gs://targetbucket
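For what it's worth, "configuring the tool" here includes giving gsutil your AWS credentials; it reads them from the boto configuration file. A minimal sketch, assuming the default ~/.boto location:

# ~/.boto
[Credentials]
aws_access_key_id = <your-aws-access-key-id>
aws_secret_access_key = <your-aws-secret-access-key>

With that in place, gsutil can read s3:// URLs and write gs:// URLs in the same invocation.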
Hope this is helpful!

What are appropriate S3 permissions for deploying to Elastic Beanstalk from CodeShip

What are the appropriate S3 permissions to deploy an Elastic Beanstalk app using CodeShip? When deploying a new version to a tomcat app I get these errors:
Service:Amazon S3, Message:You do not have permission to perform the 's3:ListBucket' action. Verify that your S3 policies and your ACLs allow you to perform these actions.
Service:Amazon S3, Message:You do not have permission to perform the 's3:GetObject' or 's3:ListBucket' action. Verify that your S3 policies and your ACLs allow you to perform these actions.
If I give the CodeShip user full access to S3 everything works, but this is not ideal. The current S3 permissions for my CodeShip user are:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:GetBucketPolicy"
      ],
      "Resource": [
        "arn:aws:s3:::codeshipbucket/*"
      ]
    }
  ]
}
The S3 "bucket" I have given CodeShip is actually a subfolder under codeshipbucket, if that matters.
What are appropriate permissions?
These are the S3 permissions we had to give the IAM user we use with Codeship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:CreateBucket",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetObjectAcl",
        "s3:GetBucketPolicy",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-[region]-[account-id]",
        "arn:aws:s3:::elasticbeanstalk-[region]-[account-id]/*"
      ]
    }
  ]
}
We executed eb deploy --debug and added the permissions one-by-one.
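The loop looked roughly like this (a sketch, assuming the EB CLI is installed and the environment is already initialized):

eb deploy --debug
# read the debug output, find the action behind the AccessDenied,
# add that action to the IAM policy, then repeat:
eb deploy --debug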
In our internal tests we've been able to deploy to Elastic Beanstalk with just the following S3 permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_S3_BUCKET_NAME/*"
      ]
    }
  ]
}
And this is what we currently recommend in our documentation available at https://codeship.com/documentation/continuous-deployment/deployment-to-elastic-beanstalk/#s3
That said, one of our awesome users published a very extensive guide on how to deploy to Elastic Beanstalk, which is available at http://nudaygames.squarespace.com/blog/2014/5/26/deploying-to-elastic-beanstalk-from-your-continuous-integration-system and recommends a broader set of S3 permissions.
Disclaimer: I work for Codeship, but you probably already guessed so from my answer.

Amazon S3 bucket permission for unauthenticated cognito role user

I have set up an unauthenticated role under an Amazon Cognito identity pool. My goal is that guest users of my mobile app can upload debugging logs (small text files) to my S3 bucket so I can troubleshoot issues. I noticed I get "Access Denied" from S3 unless I modify my S3 bucket permissions. If I allow "Everyone" the "Upload/Delete" privilege, the file upload succeeds. My concern is that someone could then upload large files to my bucket and cause a security issue. What is the recommended configuration for my needs above? I am a newbie to S3 and Cognito.
I am using Amazon AWS SDK for iOS but I suppose this question is platform neutral.
Edit:
My policy is as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:GetUser",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:DeleteObject",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::import-to-ec2-*",
        "arn:aws:s3:::<my bucket name>/*"
      ]
    }
  ]
}
You don't need to modify the S3 bucket permission, but rather the IAM role associated with your identity pool. Try the following:
Visit the IAM console.
Find the role associated with your identity pool.
Attach a policy similar to the following to your role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::MYBUCKET/*"]
    }
  ]
}
Replace MYBUCKET with your bucket name, then access your bucket as normal from your application using the iOS SDK and Cognito.
You may want to consider limiting permissions further, including ${cognito-identity.amazonaws.com:sub} to partition your users, but the above policy will get you started.
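A sketch of that tighter variant, using the Cognito identity ID as a key prefix so each guest can only write under their own folder (MYBUCKET is still a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::MYBUCKET/${cognito-identity.amazonaws.com:sub}/*"]
    }
  ]
}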
The answer above is incomplete as of 2015: you need to authorize BOTH the role AND the bucket policy in S3 to allow that role to write to the bucket. Use s3:PutObject in both cases. The console has wizards for both.
As #einarc said (I cannot comment yet), to make it work I had to edit both the role and the bucket policy. This is good enough for testing:
Bucket Policy:
{
  "Id": "Policy1500742753994",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1500742752148",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::admin1.user1",
      "Principal": "*"
    }
  ]
}
Authenticated role's policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
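Keep in mind that s3:* over arn:aws:s3:::* lets this role do anything to any bucket in the account. For anything beyond testing you would likely scope it down; a hedged example that limits the role to the one bucket used above:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::admin1.user1/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::admin1.user1"]
    }
  ]
}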

Amazon S3 Bucket and Folder Policy for IAM access?

Do you have a problem understanding S3 IAM policies and directives? Can't quite wrap your head around the AWS documentation? I did.
I had a situation where I had to lock several IAM users out of a particular folder, and out of every bucket except one, and most of the solutions and examples I found were about as clear as mud. After scouring the web and not finding what I was looking for, I came upon a resource
(http://blogs.aws.amazon.com/security/post/Tx1P2T3LFXXCNB5/Writing-IAM-policies-Grant-access-to-user-specific-folders-in-an-Amazon-S3-bucke) that was clear and actually helpful. It did need some modification, and the result is the policy you see below.
What it does is allow the user access to a particular folder within a bucket, while DENYING access to the other listed folders in the same bucket. Mind you, you will not be able to block them from seeing the folder names, nor from seeing that there are other buckets; that can't be helped. However, they won't have access to the buckets/folders you choose to deny.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToSeeBucketListInTheConsole",
      "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*"]
    },
    {
      "Sid": "AllowRootAndHomeListingOfCompanyBucket",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::yourbucketname"],
      "Condition": {"StringEquals": {"s3:prefix": ["", "yourfoldername/"], "s3:delimiter": ["/"]}}
    },
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::yourbucketname"],
      "Condition": {"StringLike": {"s3:prefix": ["yourfoldername/*"]}}
    },
    {
      "Sid": "AllowAllS3ActionsInUserFolder",
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::yourbucketname/yourfoldername/*"]
    },
    {
      "Sid": "Stmt1375581921000",
      "Action": ["s3:*"],
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::yourbucketname/anotherfolder1/*",
        "arn:aws:s3:::yourbucketname/anotherfolder2/*",
        "arn:aws:s3:::yourbucketname/anotherfolder3/*",
        "arn:aws:s3:::yourbucketname/anotherfolder4/*"
      ]
    }
  ]
}
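The s3:prefix and s3:delimiter conditions above map directly onto the parameters of the bucket-listing API call; this is roughly how the policy plays out from the AWS CLI (same placeholder names as in the policy):

# Allowed: matches the "yourfoldername/*" prefix condition
aws s3api list-objects-v2 --bucket yourbucketname --prefix yourfoldername/ --delimiter /

# Implicitly denied: no Allow statement covers this prefix
aws s3api list-objects-v2 --bucket yourbucketname --prefix anotherfolder1/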