How to implement access control for individual objects in an S3 bucket? - amazon-s3

I have an S3 bucket which is by default private.
Now I want to implement access control on the objects of this bucket.
For example, if the bucket has three objects A, B, and C, then objects A and B could be public and object C could be private.
It should be possible to make a public object private, and vice versa, from the application. The private objects will be accessible only by selected application users, and the public objects will be accessible by everyone.
Is there any way to implement it?
So far I have looked into object tagging. But I am not entirely sure if it applies in my situation.

You are correct that objects in Amazon S3 are private by default.
You can then grant access via several methods:
Set the Access Control List (ACL) on an individual object to make it Public (or set it back to being Private)
Create a Bucket Policy to grant access to a whole bucket, or a path within a bucket
Add a policy to an IAM User to allow that user access to all/part of an Amazon S3 bucket
Create Pre-signed URLs to grant time-limited access to a private object
So, you can certainly make specific objects Public and then change them to Private by using ACLs.
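For illustration, here is a minimal sketch using the AWS SDK for Java (v1); the bucket name "my-bucket" and the object key "A" are placeholders for your own values:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CannedAccessControlList;

public class ObjectAclExample {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Make object A publicly readable via its ACL
        s3.setObjectAcl("my-bucket", "A", CannedAccessControlList.PublicRead);

        // Later, flip it back to private
        s3.setObjectAcl("my-bucket", "A", CannedAccessControlList.Private);
    }
}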
To make particular objects accessible to particular users, you should use Pre-Signed URLs. They work as follows:
Users authenticate to your application
When a user wishes to access a private object, the application is responsible for determining whether they are permitted to access it
If so, then the application generates a pre-signed URL. It only takes a few lines of code and does not require an API call to S3.
The application then provides the pre-signed URL just like any other link, such as putting it in an <a> tag, or in an <img> tag. The object will be accessible for a limited time duration just like any other resource on the Internet.
Once the duration has expired, the URL will no longer work (Access Denied)
This way, your application is totally in control of who can access which objects and authorized users can access the objects directly from Amazon S3. Also, the application users do not require AWS-specific logins; they would authenticate to your application.
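As a rough sketch of the URL-generation step with the AWS SDK for Java (v1) - the bucket name, object key, and expiry below are placeholders that your application would choose per request:

import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignExample {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Signing happens locally in the SDK; no request is sent to S3 here
        Date expiry = new Date(System.currentTimeMillis() + 10 * 60 * 1000); // 10 minutes
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-bucket", "C")
                        .withMethod(HttpMethod.GET)
                        .withExpiration(expiry);

        URL url = s3.generatePresignedUrl(request);
        System.out.println(url); // embed this in an <a> or <img> tag for the authorized user
    }
}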
See: Share an Object with Others - Amazon Simple Storage Service

Related

AWS S3 event notification on object permission change

Can someone guide me on how to set up an event notification for an object-level permission change? Currently, notifications are available for read, write, delete, etc.,
but I am looking to set up an email trigger if someone changes the access permissions on an S3 object inside a bucket.
There are two ways to deal with this kind of concern:
Proactive: write IAM policies that prevent users from putting objects with public access (see the policy sketch after this list)
Reactive: use CloudWatch Events to detect issues and respond to them (see blog post)
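For the proactive option, a sketch of the kind of IAM policy statement involved - the bucket name is a placeholder, and the s3:x-amz-acl condition only catches requests that explicitly ask for a public canned ACL:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPublicObjectAcls",
      "Effect": "Deny",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": ["public-read", "public-read-write", "authenticated-read"]
        }
      }
    }
  ]
}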

Amazon S3 - does root user have access to buckets?

I am testing S3 calls using DHC REST client in Chrome. In these tests, the Authorization is all based on my root user credentials.
I can do a GET with //mybucket.s3.amazonaws.com, and a list of the items in mybucket is returned.
If I add an item to retrieve (//mybucket.s3.amazonaws.com/myitem), I always get 403 Forbidden.
I thought that the root user had automatic access to the objects, but am I wrong about that?
I took screen prints of both tests, which I'll supply if needed.
After some further monkeying around, I found my answer. Yes, the AWS root user can access individual items. But the Authorization header string changes. When you retrieve an object, that object's key participates in the calculation of the auth string. Thus, the same string used to retrieve the bucket list does not work when retrieving an object.
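To make that concrete, here is an illustrative sketch of the legacy Signature Version 2 string-to-sign (current SDKs use Signature Version 4, but the principle - that the object key is part of what gets signed - is the same). The date, bucket, key, and secret key are placeholders:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SigV2Sketch {
    static String sign(String secretKey, String stringToSign) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        return Base64.getEncoder()
                     .encodeToString(hmac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String date = "Tue, 27 Mar 2007 19:36:42 +0000";

        // Listing the bucket: the canonicalized resource is just the bucket
        String listBucket = "GET\n\n\n" + date + "\n/mybucket/";

        // Getting an object: the object key is part of the canonicalized resource,
        // so the resulting signature (and Authorization header) is different
        String getObject = "GET\n\n\n" + date + "\n/mybucket/myitem";

        System.out.println(sign("YOUR_SECRET_KEY", listBucket));
        System.out.println(sign("YOUR_SECRET_KEY", getObject));
    }
}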

How to hide real URL with Google Cloud Storage?

Scenario: I place some files on Google Cloud Storage.
I want only paid users to be able to download these files. So my question is: how do I hide the real URL from paid users so that they cannot share it with unpaid users?
So, is there a way to hide the real file location? Single-use or time-restricted URLs, or anything else?
Maybe hiding the URL is possible with other providers - Microsoft Azure Storage or Amazon S3?
Amazon S3 provides query string authentication (usually referred to as pre-signed URLs) for this purpose, see Using Query String Authentication:
Query string authentication is useful for giving HTTP or browser access to resources that would normally require authentication. The signature in the query string secures the request. Query string authentication requests require an expiration date. [...]
All AWS Software Development Kits (SDKs) provide support for this, here is an example using the GetPreSignedUrlRequest Class from the AWS SDK for .NET, generating a pre-signed URL expiring 42 minutes from now:
// Uses the older fluent-style API of the AWS SDK for .NET
using (var s3Client = AWSClientFactory.CreateAmazonS3Client("AccessKey", "SecretKey"))
{
    // Build a pre-signed GET URL for the given bucket/key, valid for 42 minutes
    GetPreSignedUrlRequest request = new GetPreSignedUrlRequest()
        .WithBucketName("BucketName")
        .WithKey("Key")
        .WithProtocol(Protocol.HTTP)
        .WithExpires(DateTime.Now.AddMinutes(42));
    string url = s3Client.GetPreSignedURL(request);
}
Azure Storage has the concept of a Shared Access Signature. It's basically the URL for a BLOB (file) with parameters that limit access. I believe it's nearly identical to the Amazon S3 query string authentication mentioned in Steffen Opel's answer.
Microsoft provides a .NET library for handling Shared Access Signatures. They also provide the documentation you would need to roll your own library.
You can use Signed URLs in Google Cloud Storage to do this:
https://developers.google.com/storage/docs/accesscontrol#Signed-URLs
One way would be to create a Google Group containing only your paid users. Then, for the objects of interest, grant read permission to the group's email address (via the object's Access Control List). With that arrangement, only your paid members will be able to download these protected objects. If someone outside that group tries to access the URL, they'll get an access-denied error.
After you set this up, you'll be able to control who can access your objects by editing your group membership, without needing to mess with object ACLs.
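If you would rather script that than click through the console, something along these lines with gsutil should do it (the group address, bucket, and object name are placeholders):

gsutil acl ch -g paid-users@googlegroups.com:R gs://my-bucket/premium-file.zip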
Here's an alternative that truly hides the S3 URL. Instead of creating a query-string-authenticated URL with a limited validity period, this approach takes a user's request, authorizes the user, fetches the S3 data, and finally returns the data to the requestor.
The advantage of this approach is that the user has no way of knowing the S3 URL and cannot pass the URL along to anyone else, as is the case with a query-string-authenticated URL during its validity period. The disadvantages are: 1) there is an extra intermediary in the middle of the S3 "get", and 2) extra bandwidth charges may be incurred, depending on where the S3 data physically resides.
// Assumes the AWS SDK for Java (v1) and Spring's FileCopyUtils are available on the classpath
public void streamContent( User requestor, String contentFilename, OutputStream outputStream ) throws Exception {
    // is the requestor entitled to this content?
    Boolean isAuthorized = authorizeUser( requestor, contentFilename );
    if( isAuthorized ) {
        AWSCredentials myCredentials = new BasicAWSCredentials( s3accessKey, s3secretKey );
        AmazonS3 s3 = new AmazonS3Client( myCredentials );
        // Fetch the object server-side and stream its contents straight back to the requestor
        S3Object object = s3.getObject( s3bucketName, contentFilename );
        FileCopyUtils.copy( object.getObjectContent(), outputStream );
    }
}

S3 with IAM Policy

I've created a group with read-only access to S3 objects and then added a new user within that group.
I'm having trouble understanding what the url to the file will be. I have the link thus far as:
https://s3.amazonaws.com/my-bucket/
Do I need to pass in some sort of id and key to let people in that group get access? What are those params and where do I find the values?
One option is to make the bucket public and then share each object's URL, which you can find by checking the object's properties in the AWS Management Console. The catch is that anyone who knows the URL can access the files. A slightly more controlled variant is to use an ACL to grant read-only access: go to the object's permissions, add a new permission for "Everyone", and check only the "Open/Download" box.
If you want to limit access to only a few users, then you will have to use IAM policies. With an IAM user you get an access key ID and a secret access key. Now, you CANNOT append the secret access key to the end of your URL to give users access - the secret key is meant to be SECRET. What you can do instead is generate pre-signed URLs for your users in code. Here is a C# example:
// Older fluent-style API of the AWS SDK for .NET; bucket name and key are placeholders
private void GetWebUrl()
{
    var request = new GetPreSignedUrlRequest()
        .WithBucketName("your-bucket-name")
        .WithKey("your-object-key");
    request.WithExpires(DateTime.Now.Add(new TimeSpan(0, 0, 0, 50))); // how long the URL stays active (50 seconds here)
    var url = S3.GetPreSignedURL(request);
}
The catch is that the URL will only be active for the given time, though you can increase it.
Also, in the above function "S3" is an instance of the Amazon S3 client that I have created:
private static AmazonS3Client S3;

public static AmazonS3Client CreateS3Client()
{
    // Read the IAM user's keys from app settings rather than hard-coding them
    var appConfig = ConfigurationManager.AppSettings;
    var accessKeyId = appConfig["AWSAccessKey"];
    var secretAccessKeyId = appConfig["AWSSecretKey"];
    S3 = new AmazonS3Client(accessKeyId, secretAccessKeyId);
    return S3;
}
Each IAM user will have their own access key and secret key that they set up with whatever S3 tools they are using. Just give them the URL to sign in to the AWS console with their IAM account, along with their credentials, and they should be good to go. The bucket URL will be the same.
The URL to a file on S3 does not change.
Usually you would not use the form that you have but rather:
https://my-bucket.s3.amazonaws.com/path/to/file.png - where path/to/file.png is the key (see virtual-hosted-style bucket names).
The issue is that this URL will return an error (typically a 403 Access Denied) unless the person asking for it has the right credentials, or the bucket is publicly readable.
You can take a URL like that and sign it (a single call in most SDKs) with any credentials that have read-only access to the file, which results in a time-limited URL that anyone can use. When the URL is signed, some extra query-string arguments are added so that AWS can tell that an authorized party generated it.
OR
You can use the AWS API, and use the credentials that you have to directly download the file.
It depends on what you are doing.
For instance, to make a web page that links to files, you can create a bunch of time-limited URLs, one for each file. This page would be generated by some code on your server, after a user logs in, using IAM credentials that have read access to the files.
Or if you wanted to write a tool to manage an S3 repo, you would perhaps just download/upload, etc., directly using the API.
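For that second case, a minimal sketch with the AWS SDK for Java (v1) - the bucket, key, and local file name are placeholders, and the credentials come from the SDK's default provider chain:

import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;

public class DirectDownload {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // Download the object straight to a local file using the caller's own credentials
        s3.getObject(new GetObjectRequest("my-bucket", "path/to/file.png"),
                     new File("file.png"));
    }
}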

How to restrict Amazon S3 API access?

Is there a way to create a different identity to (access key / secret key) to access Amazon S3 buckets via the REST API where I can restrict access (read only for example)?
The recommended way is to use IAM to create a new user, then apply a policy to that user.
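As a sketch, a read-only policy attached to that user could look like this - the bucket name is a placeholder; note that s3:ListBucket applies to the bucket ARN while s3:GetObject applies to the objects inside it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}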
Yes, you can. The S3 API documentation describes the Authentication and Access Control services available to you. You can set up a bucket so that another Amazon S3 account can read but not modify items in the bucket.
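For that cross-account case, a bucket policy along these lines is the usual approach - the account ID and bucket name are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}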
Check out the details at http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingAuthAccess.html (follow the link to "Using Query String Authentication") - this is a sub-document of the one Greg posted, and it describes how to generate access URLs on the fly.
This uses a hashed form of the private key and allows expiration, so you can give brief access to files in a bucket without allowing unfettered access to the rest of the S3 store.
Constructing the REST URL is quite difficult; it took me about 3 hours of coding to get it right, but it is a very powerful access technique.
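For reference, here is roughly what that hand-rolled construction looks like under the legacy Signature Version 2 query-string scheme - the access key, secret key, bucket, and object key are placeholders, and in practice the SDKs now do this for you (with Signature Version 4):

import java.net.URLEncoder;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class QueryStringAuthSketch {
    public static void main(String[] args) throws Exception {
        String accessKey = "YOUR_ACCESS_KEY";
        String secretKey = "YOUR_SECRET_KEY";
        String bucket = "my-bucket";
        String key = "path/to/file.png";
        long expires = System.currentTimeMillis() / 1000 + 300; // URL valid for 5 minutes

        // SigV2 query-string auth signs: HTTP-Verb \n Content-MD5 \n Content-Type \n Expires \n CanonicalizedResource
        String stringToSign = "GET\n\n\n" + expires + "\n/" + bucket + "/" + key;

        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
        String signature = Base64.getEncoder()
                                 .encodeToString(hmac.doFinal(stringToSign.getBytes("UTF-8")));

        String url = "https://" + bucket + ".s3.amazonaws.com/" + key
                + "?AWSAccessKeyId=" + accessKey
                + "&Expires=" + expires
                + "&Signature=" + URLEncoder.encode(signature, "UTF-8");
        System.out.println(url);
    }
}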