Managing AWS S3 Access via Mobile App - amazon-s3

I'm creating an app with a feature similar to Instagram's -- users can upload images and view others'. They also need to be able to delete only their own.
I plan to store these images in S3. What's the safest way to allow users to upload, download, and delete their own? My current plan is to authenticate users through my own system, then exchange that login token for AWS Cognito credentials, which can upload and download to/from my S3 bucket.
Deleting, I think, will be more difficult. I imagine I will have clients send a request to a server that processes it, makes sure the requested deletion is allowed for that client, and then forwards the request to S3 using admin credentials.
Is this a feasible way of managing all this, and how best can I disallow users from uploading random things to my bucket? I want them only to be able to upload images associated with their account and with my app, but presumably with the Cognito credentials they could upload anything.
Thanks, and let me know if I wasn't clear on anything.

When using Amazon Cognito, your mobile application users will assume an Identity and Access Management (IAM) role, which gives them permissions to access AWS resources.
A role could, for example, grant access to an Amazon S3 bucket to allow them to upload and download pictures. You can then limit their access to the S3 bucket such that they can only perform actions on objects within their own directory.
Here is an example policy that grants access to subdirectories based on the user's Cognito identity:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket"],
      "Condition": {"StringLike": {"s3:prefix": ["${cognito-identity.amazonaws.com:sub}/*"]}}
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket/${cognito-identity.amazonaws.com:sub}/*"]
    }
  ]
}
This way, you can let the mobile app interface directly with S3 for uploads, downloads and deletes, rather than routing every request through a back-end service. This lets your app scale without needing as many servers (so it's cheaper, too!).
For more details, see:
Understanding Amazon Cognito Authentication Part 3: Roles and Policies
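To make the prefix rule concrete, here is a small pure-Python sketch (no AWS calls; the identity id and key names are made up) of the check that S3/IAM effectively performs when it substitutes ${cognito-identity.amazonaws.com:sub}:

```python
# Hypothetical helper mirroring the substitution S3/IAM performs for the
# policy variable ${cognito-identity.amazonaws.com:sub}.
def key_allowed(identity_id: str, requested_key: str) -> bool:
    """True if the object key falls under the caller's own identity prefix."""
    return requested_key.startswith(identity_id + "/")

def allowed_resource_arn(bucket: str, identity_id: str) -> str:
    """The Resource ARN the policy grants for one identity."""
    return "arn:aws:s3:::%s/%s/*" % (bucket, identity_id)

me = "us-east-1:1234abcd-12ab-34cd-56ef-123456789012"  # example identity id
own_key = me + "/photo1.jpg"             # allowed by the policy
foreign_key = "someone-else/photo1.jpg"  # denied by the policy
```

So even if a client tries to PUT an arbitrary key, anything outside its own identity prefix is rejected by S3 itself, not by your code.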

Related

Let Cognito users manage access to "own" S3 folders

I'm actually going to sneak in two questions here:
1)
I'm trying to figure out if there is a way to let Cognito users manage access to their own folders. Let's say user Dave wants to share his protected file with user Anne. How would I go about to do this?
2) How can a group of users access the same restricted folders in a bucket? In my app users rarely work alone, and they upload files to the organization they belong to.
Below is the policy I've gotten so far, but it's not doing it for me. Is there a way to do what I want directly in S3, or do I have to do a Lambda/Dynamo/S3 setup?
Do I need a unique policy for every organization and user in my app to achieve this? Isn't that a tad overkill?
I will be grateful for any help I can get on this topic.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKETNAME/user/${cognito-identity.amazonaws.com:sub}",
        "arn:aws:s3:::BUCKETNAME/user/${cognito-identity.amazonaws.com:sub}/*"
      ]
    }
  ]
}
Your use-case is beyond what should be implemented via a Bucket Policy.
Trying to add exceptions and special cases to a bucket policy makes the rules complex to manage, and you will start hitting limits -- S3 supports bucket policies of up to 20 KB.
Once you get into specific rules about users, objects, paths, etc then you should really be managing this through your central app, which then grants access to objects via Pre-Signed URLs that are generated on-the-fly as necessary.
Amazon S3 is a highly scalable, reliable storage system -- but don't expect it to have every feature you require. That's where your app code is required.

AWS S3 Bucket Policy Source IP not working

I've been trying all possible options but with no results. My bucket policy works well with aws:Referer, but it doesn't work at all with source IP as the condition.
My server is hosted on EC2 and I am using the public IP in the format xxx.xxx.xxx.xxx/32 (Public_IP/32) as the aws:SourceIp parameter.
Can anyone tell me what I am doing wrong?
Currently my Policy is the following
{
  "Version": "2008-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my_bucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "xx.xx.xxx.xxx/32"
        }
      }
    }
  ]
}
I read all examples and case studies but it doesn't seem to allow access based on Source IP...
Thanks a lot!!!
While I won't disagree that IAM policies are better than IP address restrictions wherever possible, the accepted answer didn't actually achieve the original question's goal. I needed to do this myself (I needed access from a machine that wasn't on EC2, and thus couldn't use an instance role).
Here is a policy that only allows certain IPs (one or many) to access a bucket's objects. It assumes that there is no other policy allowing access to the bucket (by default, buckets grant no public access).
This policy also does not allow listing; you can only retrieve an object if you know its full URL. If you need more permissions, just add them to the Action list.
{
  "Id": "Policy123456789",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "xx.xx.xx.xx/32"
          ]
        }
      }
    }
  ]
}
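aws:SourceIp does plain CIDR matching on the caller's public address, which you can sanity-check locally with Python's ipaddress module before touching the policy (the addresses here are examples):

```python
import ipaddress

def matches_source_ip(caller_ip: str, policy_cidr: str) -> bool:
    """Mimic the CIDR test that the aws:SourceIp condition performs."""
    return ipaddress.ip_address(caller_ip) in ipaddress.ip_network(policy_cidr)

# A /32 matches exactly one address.
print(matches_source_ip("203.0.113.10", "203.0.113.10/32"))  # True
print(matches_source_ip("10.0.0.5", "203.0.113.10/32"))      # False
```

If the traffic reaches S3 through a NAT or proxy, the address S3 sees is the NAT's public IP, not the machine's own -- a common reason a /32 rule appears not to work.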
From the discussion on the comments on the question, it looks like your situation can be rephrased as follows:
How can I give a specific EC2 instance full access to an S3 bucket, and deny access from every other source?
Usually, the best approach is to create an IAM Role and launch your EC2 instance associated with that IAM Role. As I'm going to explain, it is usually much better to use IAM Roles to define your access policies than it is to specify source IP addresses.
IAM Roles
IAM, or Identity and Access Management, is a service that can be used to create users, groups and roles, manage access policies associated with those three kinds of entities, manage credentials, and more.
Once you have your IAM role created, you are able to launch an EC2 instance "within" that role. In simple terms, it means that the EC2 instance will inherit the access policy you associated with that role. Note that, originally, you could not change the IAM Role associated with an instance after launch, although AWS has since added the ability to attach or replace a role on a running instance. You can also modify the Access Policy associated with an IAM Role whenever you want.
The IAM service is free, and you don't pay anything extra when you associate an EC2 instance with an IAM Role.
In your situation
In your situation, what you should do is create an IAM Role to use within EC2 and attach a policy that will give the permissions you need, i.e., that will "Allow" all the "s3:xxx" operations it will need to execute on that specific resource "arn:aws:s3:::my_bucket/*".
Then you launch a new instance with this role (on the current AWS Management Console, on the EC2 Launch Instance wizard, you do this on the 3rd step, right after choosing the Instance Type).
Temporary Credentials
When you associate an IAM Role with an EC2 instance, the instance is able to obtain a set of temporary AWS credentials (let's focus on the results and benefits, and not exactly on how this process works). If you are using the AWS CLI or any of the AWS SDKs, then you simply don't specify any credential at all and the CLI or SDK will figure out it has to look for those temporary credentials somewhere inside the instance.
This way, you don't have to hard code credentials, or inject the credentials into the instance somehow. The instance and the CLI or SDK will manage this for you. As an added benefit, you get increased security: the credentials are temporary and rotated automatically.
In your situation
If you are using the AWS CLI, you would simply run the commands without specifying any credentials. You'll be allowed to run the APIs that you specified in the IAM Role Access Policy. For example, you would be able to upload a file to that bucket:
aws s3 cp my_file.txt s3://my_bucket/
If you are using an SDK, say the Java SDK, you would be able to interact with S3 by creating the client objects without specifying any credentials:
AmazonS3 s3 = new AmazonS3Client(); // no credentials on the constructor!
s3.putObject("my_bucket", "my_file.txt", new File("my_file.txt")); // illustrative key and file
I hope this helps you solve your problem. If you have any further related questions, leave a comment and I will try to address them on this answer.

Amazon S3 Server Side Encryption Bucket Policy problems

I am using a bucket policy that denies any non-SSL communications and UnEncryptedObjectUploads.
{
  "Id": "Policy1361300844915",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnSecureCommunications",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": false
        }
      },
      "Principal": {
        "AWS": "*"
      }
    },
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Action": "s3:PutObject",
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      },
      "Principal": {
        "AWS": "*"
      }
    }
  ]
}
This policy works for applications that support SSL and SSE settings but only for the objects being uploaded.
I ran into these issues:
CloudBerry Explorer and S3 Browser both failed to RENAME folders and files in the bucket with that bucket policy applied. After I kept only the SSL requirement in the bucket policy, those clients completed file/folder renaming successfully.
CloudBerry Explorer was able to RENAME objects under the full SSL/SSE bucket policy only after I enabled, in Options, Amazon S3 Copy/Move through the local computer (which is slower and costs money).
All copy/move operations inside Amazon S3 failed due to that restrictive policy.
That means we cannot control a copy/move process that does not originate from the application manipulating local objects. At least the above-mentioned CloudBerry option proved that.
But I might be wrong, that is why I am posting this question.
In my case, with that bucket policy enabled, the S3 Management Console becomes useless. Users cannot create or delete folders; all they can do is upload files.
Is there something wrong with my bucket policy? I do not know which mechanisms Amazon S3 uses internally to manipulate objects.
Does Amazon S3 treat external requests (API/HTTP headers) and internal requests differently?
Is it possible to apply this policy only to uploads and not to internal Amazon S3 GET/PUT etc.? I have tried the HTTP Referer with the bucket URL to no avail.
The bucket policy with SSL/SSE requirements is a mandatory for my implementation.
Any ideas would be appreciated.
Thank you in advance.
IMHO there is no way to automatically tell Amazon S3 to turn on SSE for every PUT request.
So, what I would investigate is the following :
write a script that list your bucket
for each object, get the meta data
if SSE is not enabled, use the PUT COPY API (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html) to add SSE
"(...) When copying an object, you can preserve most of the metadata (default) or specify new metadata (...)"
If the PUT operation succeeded, use the DELETE object API to delete the original object
Then run that script on an hourly or daily basis, depending on your business requirements.
You can use the S3 API in Python (http://boto.readthedocs.org/en/latest/ref/s3.html) to make the script easier to write.
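A sketch of that script with boto3 (the modern successor to boto); the bucket name is a placeholder, and the decision logic is split into a pure function so it can be tested without touching AWS:

```python
def needs_reencryption(head_object_response: dict) -> bool:
    """True when the object's metadata shows no AES256 server-side encryption."""
    return head_object_response.get("ServerSideEncryption") != "AES256"

def reencrypt_bucket(bucket: str) -> None:
    """List every object and PUT-COPY it onto itself with SSE enabled."""
    import boto3  # imported here so the pure function above needs no AWS SDK

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if needs_reencryption(s3.head_object(Bucket=bucket, Key=key)):
                # Copy-in-place replaces the object, so no separate DELETE of
                # the original is needed (unlike the copy-then-delete steps
                # above, which apply when writing to a new key).
                s3.copy_object(
                    Bucket=bucket,
                    Key=key,
                    CopySource={"Bucket": bucket, "Key": key},
                    ServerSideEncryption="AES256",
                    MetadataDirective="COPY",
                )
```

Run reencrypt_bucket("my-bucket") from your hourly or daily job.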
If this "change-after-write" solution is not valid for your business, you can work at a different level:
use a proxy between your API clients and the S3 API (like a reverse proxy on your site), and configure it to add the SSE HTTP header to every PUT/POST request. Developers must go through the proxy and must not be authorized to issue requests against S3 API endpoints directly.
write a wrapper library that adds the SSE metadata automatically, and oblige developers to use your library on top of the SDK.
The latter two are a matter of discipline in the organisation, as it is not easy to enforce them at a technical level.
Seb

Amazon S3 Write Only access

I'm backing up files from several customers directly into an Amazon S3 bucket -- each customer to a different folder. I'm using a simple .NET client running as a Windows scheduled task once a night. To allow writing to the bucket, my client requires both the AWS access key and the secret key (I created a new pair).
My problem is:
How do I make sure none of my customers could potentially use the pair to peek in the bucket and in a folder not his own? Can I create a "write only" access pair?
Am I approaching this the right way? Should this be solved through AWS access settings, or should I client-side encrypt files on the customer's machine (each customer with a different key) prior to uploading, and avoid the above-mentioned cross-access?
I just created a write-only policy like this and it seems to be working:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME/*"
      ]
    }
  ]
}
I think creating a write-only drop folder like that is a much neater solution.
Use IAM to create a separate user for each customer (not just an additional key pair), then give each user access to only their S3 folder. For instance, if the bucket is called everybodysbucket, and customer A's files all start with userA/ (and customer B's with userB/), then you can grant permission to everybodysbucket/userA/* to the user for customer A, and to everybodysbucket/userB/* for customer B.
That will prevent each user from seeing any resources not their own.
You can also control the specific S3 operations each user can perform, not just the resources they can access. So yes, you can grant write-only permission to the users if you want.
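Since the per-user policies only differ by the folder prefix, a tiny generator keeps them consistent (bucket and folder names follow the example above; this only builds the JSON document -- attaching it to each IAM user is a separate step):

```python
import json

def folder_policy(bucket: str, prefix: str) -> dict:
    """Build an IAM policy granting object access only under one folder."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
                "Resource": ["arn:aws:s3:::%s/%s/*" % (bucket, prefix)],
            }
        ],
    }

policy_a = folder_policy("everybodysbucket", "userA")
print(json.dumps(policy_a, indent=2))
```

Drop s3:GetObject and s3:DeleteObject from the Action list to make it a true write-only drop box.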
As a variation on the approach recommended in Charles' answer, you can also manage access control at the user level via SFTP. These users can all share the same global IAM policy:
SFTP does support user-specific home directories (akin to "chroot").
SFTP allows you to manage user access via service managed authentication or via your own authentication provider. I am unsure if service managed authentication has user limits.
If you wish to allow users to access uploaded files using a client, SFTP provides it very cleanly.

Multiple access_keys for different privileges with same S3 account?

I have a single S3/AWS account. I have several websites each which use their own bucket on S3 for reading/writing storage. I also host a lot of personal stuff (backups, etc) on other buckets on S3, which are not publicly accessible.
I would like these websites -- some of which may have other people accessing their source code and configuration properties, and thus seeing the S3 keys -- to have no access to my private data!
It seems from reading Amazon's docs that I need to partition privileges by Amazon user per bucket, not by access key per bucket. But that's not going to work. It also seems like I only get two access keys. I need one master access key and several others with much more circumscribed permissions -- scoped to certain buckets only.
Is there any way to do that, or to approximate that?
You can achieve your goal by using AWS Identity and Access Management (IAM):
AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. IAM enables you to create and manage users in AWS, and it also enables you to grant access to AWS resources for users managed outside of AWS in your corporate directory. IAM offers greater security, flexibility, and control when using AWS. [emphasis mine]
As emphasized, using IAM is strongly recommended for all things AWS anyway, i.e. ideally you should never use your main account credentials for anything but setting up IAM initially (as mentioned by Judge Mental already, you can generate as many access keys as you want like so).
You can use IAM just fine via the AWS Management Console (i.e. there is no need for 3rd-party tools to use all available functionality in principle).
Generating the required policies can be a bit tricky in times, but the AWS Policy Generator is extremely helpful to get you started and explore what's available.
For the use case at hand you'll need a S3 Bucket Policy, see Using Bucket Policies in particular and Access Control for a general overview of the various available S3 access control mechanisms (which can interfere in subtle ways, see e.g. Using ACLs and Bucket Policies Together).
Good luck!
Yes, to create different logins with different permissions under the same AWS account, you can use AWS IAM. As a developer of Bucket Explorer, I suggest trying Bucket Explorer Team Edition if you are looking for a tool that provides a GUI with different logins and different access permissions. See http://team20.bucketexplorer.com
Simply create a custom IAM group policy that limits access to a particular bucket
Such as...
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my.bucket",
        "arn:aws:s3:::my.bucket/*"
      ]
    }
  ]
}
The first action, s3:ListAllMyBuckets, allows the users to list all of the buckets. Without it, their S3 client will show nothing in the bucket listing when the users log on.
The second action grants full S3 privileges to the user for the bucket named 'my.bucket'. That means they're free to create/list/delete bucket resources and user ACLs within the bucket.
Granting s3:* access is pretty lax. If you want tighter controls on the bucket just look up the relevant actions you want to grant and add them as a list.
"Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
I suggest you create this policy as a group (ex my.bucket_User) so you can assign it to every user who needs access to this bucket without any unnecessary copypasta.
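If it helps, tightening the wildcard can be done mechanically; this sketch rewrites the group policy above, replacing s3:* with an explicit list (the action list is just an example):

```python
import copy

# The lax group policy from above, with s3:* on the bucket.
lax_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:ListAllMyBuckets"],
         "Resource": "arn:aws:s3:::*"},
        {"Effect": "Allow", "Action": "s3:*",
         "Resource": ["arn:aws:s3:::my.bucket", "arn:aws:s3:::my.bucket/*"]},
    ],
}

def tighten(policy: dict, allowed_actions: list) -> dict:
    """Replace any wildcard s3:* grant with an explicit action list."""
    tightened = copy.deepcopy(policy)
    for stmt in tightened["Statement"]:
        if stmt["Action"] == "s3:*":
            stmt["Action"] = allowed_actions
    return tightened

group_policy = tighten(
    lax_policy,
    ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
)
```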