I have a single S3/AWS account. I have several websites each which use their own bucket on S3 for reading/writing storage. I also host a lot of personal stuff (backups, etc) on other buckets on S3, which are not publicly accessible.
I would like these websites -- some of which may have other people accessing their source code and configuration properties, and therefore seeing the S3 keys -- not to have access to my private data!
It seems from reading Amazon's docs that I need to partition privileges, by Amazon USER per bucket, not by access key per bucket. But that's not going to work. It also seems like I only get 2 access keys. I need to have one access key which is the master key, and several others which have much more circumscribed permissions-- only for certain buckets.
Is there any way to do that, or to approximate that?
You can achieve your goal by using AWS Identity and Access Management (IAM):
AWS Identity and Access Management (IAM) enables you to securely
control access to AWS services and resources for your users. IAM
enables you to create and manage users in AWS, and it also enables you
to grant access to AWS resources for users managed outside of AWS in
your corporate directory. IAM offers greater security, flexibility,
and control when using AWS. [emphasis mine]
As emphasized, using IAM is strongly recommended for all things AWS anyway, i.e. ideally you should never use your main account credentials for anything but setting up IAM initially (as Judge Mental already mentioned, you can generate as many access keys as you want this way).
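For example, a minimal sketch with the AWS CLI (the user name website-a is just a placeholder):
aws iam create-user --user-name website-a
aws iam create-access-key --user-name website-a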
You can use IAM just fine via the AWS Management Console (i.e. in principle there is no need for 3rd-party tools to use all of the available functionality).
Generating the required policies can be a bit tricky at times, but the AWS Policy Generator is extremely helpful to get you started and explore what's available.
For the use case at hand you'll need an S3 Bucket Policy; see Using Bucket Policies in particular and Access Control for a general overview of the various available S3 access control mechanisms (which can interfere in subtle ways, see e.g. Using ACLs and Bucket Policies Together).
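As a rough sketch (account ID, user name and bucket name are placeholders; adjust the actions to what each site actually needs), a bucket policy granting one IAM user access to one website bucket could look like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowWebsiteUser",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::ACCOUNT_ID:user/website-a"},
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::website-a-bucket/*"
    }
  ]
}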
Good luck!
Yes. To access different login accounts with different permissions under the same AWS account, you can use AWS IAM. As a developer of Bucket Explorer, I suggest trying Bucket Explorer Team Edition if you are looking for a tool that provides a GUI with different logins and different access permissions. Read http://team20.bucketexplorer.com
Simply create a custom IAM group policy that limits access to a particular bucket
Such as...
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:ListAllMyBuckets"],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource":
[
"arn:aws:s3:::my.bucket",
"arn:aws:s3:::my.bucket/*"
]
}
]
}
The first action, s3:ListAllMyBuckets, allows the users to list all of the buckets. Without that, their S3 client will show nothing in the bucket listing when they log on.
The second action grants full S3 privileges to the user for the bucket named 'my.bucket'. That means they're free to create/list/delete objects and manage ACLs within the bucket.
Granting s3:* access is pretty lax. If you want tighter controls on the bucket just look up the relevant actions you want to grant and add them as a list.
"Action": "s3:Lists3:GetObject, s3:PutObject, s3:DeleteObject"
I suggest you create this policy as a group (e.g. my.bucket_User) so you can assign it to every user who needs access to this bucket without any unnecessary copypasta.
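For reference, the same setup from the AWS CLI might look roughly like this (group, policy file and user names are placeholders):
aws iam create-group --group-name my.bucket_User
aws iam put-group-policy --group-name my.bucket_User --policy-name my.bucket-access --policy-document file://my.bucket-policy.json
aws iam add-user-to-group --group-name my.bucket_User --user-name some-user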
We have a multi-account setup where we deployed an organizational-level CloudTrail through Control Tower in our root account.
Organizational-level CloudTrail allows us to deploy CloudTrail in each of our respective accounts and provides them the ability to send logs to CloudWatch in our Root account and to an S3 logging bucket in our central logging account.
Now I have AWS Athena set up in our logging account to try and run queries on the logs generated through our organizational-level CloudTrail deployment. So far, I have managed to create the Athena Table that is built on the mentioned logging bucket and I also created a destination bucket for the query results.
When I try to run a simple "preview table" query, I get the following error:
Permission denied on S3 path: s3://BUCKET_NAME/PREFIX/AWSLogs/LOGGING_ACCOUNT_NUMBER/CloudTrail/LOGS_DESTINATION
This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: f72e7dbf-929c-4096-bd29-b55c6c41f582
I figured that the error is caused by the logging bucket's policy lacking any statement allowing Athena access, but when I try to edit the bucket policy I get the following error:
Your bucket policy changes can’t be saved:
You either don’t have permissions to edit the bucket policy, or your bucket policy grants a level of public access that conflicts with your Block Public Access settings. To edit a bucket policy, you need s3:PutBucketPolicy permissions. To review which Block Public Access settings are turned on, view your account and bucket settings. Learn more about Identity and access management in Amazon S3
This is strange since the role I am using has full admin access to this account.
Please advise.
Thanks in advance!
I see this is a follow-up question to your previous one: S3 Permission denied when using Athena
Control Tower automatically deploys a guardrail which prohibits updating the aws-controltower bucket policy.
In your master account, go to AWS Organizations, then to your Security OU, and open the Policies tab. You should see two guardrail policies:
One of them will contain this policy:
{
"Condition": {
"ArnNotLike": {
"aws:PrincipalARN": "arn:aws:iam::*:role/AWSControlTowerExecution"
}
},
"Action": [
"s3:PutBucketPolicy",
"s3:DeleteBucketPolicy"
],
"Resource": [
"arn:aws:s3:::aws-controltower*"
],
"Effect": "Deny",
"Sid": "GRCTAUDITBUCKETPOLICYCHANGESPROHIBITED"
},
Add these principals below AWSControlTowerExecution:
arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AWSAdministratorAccess*
arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AdministratorAccess*
Your condition should look like this:
"Condition": {
"ArnNotLike": {
"aws:PrincipalArn": [
"arn:aws:iam::*:role/AWSControlTowerExecution",
"arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AWSAdministratorAccess*",
"arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AdministratorAccess*"
]
}
},
You should be able to update the bucket policy after this is applied.
I'm actually going to sneak in two questions here:
1) I'm trying to figure out if there is a way to let Cognito users manage access to their own folders. Let's say user Dave wants to share his protected file with user Anne. How would I go about doing this?
2) How can a group of users access the same restricted folders in a bucket? In my app users rarely work alone, and they upload files to the organization they belong to.
Below is the policy I've gotten so far, but it's not doing it for me. Is there a way to do what I want directly in S3, or do I have to do a Lambda/Dynamo/S3 setup?
Do I need a unique policy for every organization and user in my app to achieve this? Isn't that a tad overkill?
I will be grateful for any help I can get on this topic.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::BUCKETNAME/user/${cognito-identity.amazonaws.com:sub}",
"arn:aws:s3:::BUCKETNAME/user/${cognito-identity.amazonaws.com:sub}/*"
]
}
]
}
Your use-case is beyond what should be implemented via a Bucket Policy.
Trying to add exceptions and special cases to a bucket policy will make the rules complex to manage, and you will start hitting limits -- S3 supports bucket policies of up to 20 KB.
Once you get into specific rules about users, objects, paths, etc., then you should really be managing this through your central app, which then grants access to objects via pre-signed URLs that are generated on the fly as necessary.
Amazon S3 is a highly scalable, reliable storage system -- but don't expect it to have every feature you require. That's where your app code is required.
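To illustrate the pre-signed URL approach, here is a rough sketch using the AWS SDK for Java (the bucket name and object key are placeholders; your app decides per request whether the caller is allowed to see the object before handing out the URL):
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.net.URL;
import java.util.Date;

public class PresignExample {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // the generated URL is only valid for 15 minutes
        Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);
        URL url = s3.generatePresignedUrl("my_bucket", "user123/photo.jpg", expiration, HttpMethod.GET);
        System.out.println(url);
    }
}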
I'm creating an app that utilizes a feature similar to Instagram -- users can upload images and view others'. They also need to be able to delete only their own.
I plan to store these images in S3. What's the safest way to allow users to upload, download, and delete their own? My current plan is to authenticate users through my own system, then exchange that login token for AWS Cognito credentials, which can upload and download to/from my S3 bucket.
Deleting I think will be more difficult. I imagine I will have clients send a request to a server that processes it, makes sure the requested deletion is allowed for that client, and then sends the request to S3 using admin credentials.
Is this a feasible way of managing all this, and how best can I disallow users from uploading random things to my bucket? I want them only to be able to upload images associated with their account and with my app, but presumably with the Cognito credentials they could upload anything.
Thanks, and let me know if I wasn't clear on anything.
When using Amazon Cognito, your mobile application users will assume an Identity and Access Management (IAM) role, which gives them permissions to access AWS resources.
A role could, for example, grant access to an Amazon S3 bucket to allow them to upload and download pictures. You can then limit their access to the S3 bucket such that they can only perform actions on objects within their own directory.
Here is an example policy that grants users access to subdirectories based on their Cognito identity:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::mybucket"],
"Condition": {"StringLike": {"s3:prefix": ["${cognito-identity.amazonaws.com:sub}/*"]}}
},
{
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::mybucket/${cognito-identity.amazonaws.com:sub}/*"]
}
]
}
This way, you can allow the mobile app to directly interface with S3 for uploads, downloads and deletes, rather than having to request it through a back-end service. This will allow your app to scale without having to have as many servers (so it's cheaper, too!)
For more details, see:
Understanding Amazon Cognito Authentication Part 3: Roles and Policies
I've been trying all possible options but with no results. My Bucket Policy works well with aws:Referer, but it doesn't work at all with aws:SourceIp as the condition.
My server is hosted on EC2 and I am using the public IP in this format xxx.xxx.xxx.xxx/32 (Public_Ip/32) as the aws:SourceIp parameter.
Can anyone tell me what I am doing wrong?
Currently my Policy is the following
{
"Version": "2008-10-17",
"Id": "S3PolicyId1",
"Statement": [
{
"Sid": "IPDeny",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_bucket/*",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": "xx.xx.xxx.xxx/32"
}
}
}
]
}
I read all examples and case studies but it doesn't seem to allow access based on Source IP...
Thanks a lot!!!
While I won't disagree that IAM roles are better than IP addresses wherever possible, the accepted answer didn't actually achieve the original question's goal. I needed to do this (I need access from a machine that isn't EC2, and thus can't use an instance role).
Here is a policy that only allows a certain IP (or multiple IPs) to access a bucket's objects. This assumes that there is no other policy allowing access to the bucket (by default, buckets grant no public access).
This policy also does not allow listing; it only works if you know the full URL of the object you need. If you need more permissions, just add them to the Action bit.
{
"Id": "Policy123456789",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::mybucket/*",
"Condition" : {
"IpAddress" : {
"aws:SourceIp": [
"xx.xx.xx.xx/32"
]
}
}
}
]
}
From the discussion in the comments on the question, it looks like your situation can be rephrased as follows:
How can I give a specific EC2 instance full access to an S3 bucket, and deny access from every other source?
Usually, the best approach is to create an IAM Role and launch your EC2 instance associated with that IAM Role. As I'm going to explain, it is usually much better to use IAM Roles to define your access policies than it is to specify source IP addresses.
IAM Roles
IAM, or Identity and Access Management, is a service that can be used to create users, groups and roles, manage access policies associated with those three kinds of entities, manage credentials, and more.
Once you have your IAM role created, you are able to launch an EC2 instance "within" that role. In simple terms, it means that the EC2 instance will inherit the access policy you associated with that role. Originally you could not change the IAM Role associated with an instance after launching it, although AWS now lets you attach or replace the role on a running instance. You can also modify the Access Policy associated with an IAM Role whenever you want.
The IAM service is free, and you don't pay anything extra when you associate an EC2 instance with an IAM Role.
In your situation
In your situation, what you should do is create an IAM Role to use within EC2 and attach a policy that will give the permissions you need, i.e., that will "Allow" all the "s3:xxx" operations it will need to execute on that specific resource "arn:aws:s3:::my_bucket/*".
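The attached policy could be as simple as the following sketch (the bucket name is a placeholder; narrow s3:* down to the specific actions your instance actually needs):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my_bucket",
        "arn:aws:s3:::my_bucket/*"
      ]
    }
  ]
}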
Then you launch a new instance with this role (on the current AWS Management Console, on the EC2 Launch Instance wizard, you do this on the 3rd step, right after choosing the Instance Type).
Temporary Credentials
When you associate an IAM Role with an EC2 instance, the instance is able to obtain a set of temporary AWS credentials (let's focus on the results and benefits, and not exactly on how this process works). If you are using the AWS CLI or any of the AWS SDKs, then you simply don't specify any credential at all and the CLI or SDK will figure out it has to look for those temporary credentials somewhere inside the instance.
This way, you don't have to hard code credentials, or inject the credentials into the instance somehow. The instance and the CLI or SDK will manage this for you. As an added benefit, you get increased security: the credentials are temporary and rotated automatically.
In your situation
If you are using the AWS CLI, you would simply run the commands without specifying any credentials. You'll be allowed to run the APIs that you specified in the IAM Role Access Policy. For example, you would be able to upload a file to that bucket:
aws s3 cp my_file.txt s3://my_bucket/
If you are using an SDK, say the Java SDK, you would be able to interact with S3 by creating the client objects without specifying any credentials:
AmazonS3 s3 = new AmazonS3Client(); // no credentials on the constructor!
s3.putObject("my_bucket", ........);
I hope this helps you solve your problem. If you have any further related questions, leave a comment and I will try to address them on this answer.
I'm backing up files from several customers directly into an Amazon S3 bucket - each customer to a different folder. I'm using a simple .Net client running under a Windows task once a night. To allow writing to the bucket, my client requires both the AWS access key and the secret key (I created a new pair).
My problem is:
How do I make sure none of my customers could potentially use the pair to peek in the bucket and in a folder not his own? Can I create a "write only" access pair?
Am I approaching this the right way? Should this be solved through AWS access settings, or should I client-side encrypt files on the customer's machine (each customer with a different key) prior to uploading and avoid the above-mentioned cross-access?
I just created a write-only policy like this and it seems to be working:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::BUCKET_NAME/*"
]
}
]
}
I think creating a write-only drop like that is a much neater solution.
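With such write-only credentials the client can only ever add objects. A quick sanity check from the AWS CLI (bucket and key names are placeholders): the first command should succeed, while the second should fail with Access Denied because listing is not granted.
aws s3 cp nightly-backup.zip s3://BUCKET_NAME/customerA/nightly-backup.zip
aws s3 ls s3://BUCKET_NAME/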
Use IAM to create a separate user for each customer (not just an additional key pair), then give each user access to only their S3 folder. For instance, if the bucket is called everybodysbucket, and customer A's files all start with userA/ (and customer B's with userB/), then you can grant permission to everybodysbucket/userA/* to the user for customer A, and to everybodysbucket/userB/* for customer B.
That will prevent each user from seeing any resources not their own.
You can also control the specific S3 operations, not just the resources, that each user can access. So yes, you can grant write-only permission to the users if you want.
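For example, a write-only per-user policy along the lines suggested above might look roughly like this (bucket and prefix follow the everybodysbucket/userA example; customer B would get the same policy with the userB prefix):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::everybodysbucket/userA/*"
    }
  ]
}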
As a variation on the approach recommended in Charles' answer, you can also manage access control at the user level via SFTP (AWS Transfer Family). These users can all share the same global IAM policy:
SFTP does support user-specific home directories (akin to "chroot").
SFTP allows you to manage user access via service managed authentication or via your own authentication provider. I am unsure if service managed authentication has user limits.
If you wish to allow users to access uploaded files using a client, SFTP provides it very cleanly.
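If it's useful, this is the general shape of the shared session policy described in the AWS Transfer Family documentation; the ${transfer:...} policy variables are resolved per user, so one policy covers everyone (treat the exact variable names as something to verify against the current docs):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::${transfer:HomeBucket}",
      "Condition": {
        "StringLike": {"s3:prefix": ["${transfer:HomeFolder}/*", "${transfer:HomeFolder}"]}
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}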