S3 with IAM Policy - amazon-s3

I've created a group with read-only access to S3 objects and then added a new user within that group.
I'm having trouble understanding what the url to the file will be. I have the link thus far as:
https://s3.amazonaws.com/my-bucket/
Do I need to pass in some sort of ID and key to let people in that group get access? What are those params and where do I find the values?

To let people access your files, you either need to make the bucket public and then hand out the URLs of each object, which you can find by checking the properties of each object in the AWS Management Console. The catch is that anyone who knows the URL can access the files. To do this, use an ACL to grant read-only access to all users: go to permissions, add a new permission for "Everyone", and check only the "Open/Download" box.
If you want to limit access to only a few users, then you will have to use IAM policies. With an IAM user you get an access key ID and a secret access key. You CANNOT simply append the secret access key to the end of the URL to give users access; the secret access key is meant to stay SECRET. What you can do instead is provide presigned URLs to your users through code. Here is a C# example:
private string GetWebUrl(string bucketName, string objectKey)
{
    var request = new GetPreSignedUrlRequest()
        .WithBucketName(bucketName)
        .WithKey(objectKey);
    // Time you want the URL to be active for (50 seconds here)
    request.WithExpires(DateTime.Now.Add(new TimeSpan(0, 0, 0, 50)));
    return S3.GetPreSignedURL(request);
}
The catch is that the URL will only be active for the given time, though you can increase it.
Also, in the above function, "S3" is an instance of AmazonS3Client that I have created:
private static AmazonS3Client S3;
public static AmazonS3Client CreateS3Client()
{
    var appConfig = ConfigurationManager.AppSettings;
    var accessKeyId = appConfig["AWSAccessKey"];
    var secretAccessKeyId = appConfig["AWSSecretKey"];
    S3 = new AmazonS3Client(accessKeyId, secretAccessKeyId);
    return S3;
}

Each IAM user will have their own access key and secret key that they set up with whatever S3 tools they are using. Just give them the URL to log in to their IAM AWS account, and their credentials to log in, and they should be good to go. The bucket URL will be the same.

The URL to a file on S3 does not change.
Usually you would not use the form that you have but rather:
https://my-bucket.s3.amazonaws.com/path/to/file.png - where path/to/file.png is the key (see virtual-hosted-style bucket names).
The issue is that this URL will result in a 404 unless the person asking for it has the right credentials, or the bucket is publicly readable.
You can take a URL like that and sign it (using a single call in some SDKs) with any credentials that have read-only access to the file, which results in a time-limited URL that anyone can use. When the URL is signed, some extra query arguments are added, so that AWS can tell it was an authorized person that made up the URL.
OR
You can use the AWS API, and use the credentials that you have to directly download the file.
It depends on what you are doing.
For instance, to make a web page which has links to files, you can create a bunch of time limited URLs, one for each file. This page would be generated by some code on your server, upon a user logging in and having some sort of IAM credentials.
Or if you wanted to write a tool to manage an S3 repo, you would perhaps just download/upload, etc directly using the API.
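To make the signing step above concrete, here is a rough sketch of how the legacy signature-version-2 query-string scheme builds a time-limited URL, using only the Python standard library. This is illustrative only - the credentials, bucket, and key are made up, and in practice you would use the one-line helper your SDK provides:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def presign_get_url_v2(bucket, key, access_key, secret_key, expires_in=60):
    """Build a time-limited GET URL using the legacy SigV2 query-string scheme."""
    expires = int(time.time()) + expires_in
    # The signed string covers the HTTP method, the expiry, and the resource
    # path, so changing any of them invalidates the signature.
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}&Signature={signature}")
```

Anyone holding the resulting URL can fetch the object until the expiry passes, without needing credentials of their own.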

Related

Authorization for a pre-signed URL generation API?

Our application only has a front end (FE) and a back end (BE). Each user can have many documents, which are uploaded by the back end but need to be displayed on the FE. We are trying to use pre-signed URLs to let the FE download documents directly from S3.
The BE returns the document paths and exposes an API which can be used to generate a pre-signed URL for each document:
POST: /app/v1/generate-pre-signed-url/path-to-document
The issue is that the generate-pre-signed-url API above needs to verify whether the current user has access to the document, and it's really hard to map the user's permissions to a document path.
Any suggested design or pattern would be appreciated!
As per AWS Docs
When you create a presigned URL for your object, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (GET to download the object), and an expiration date and time.
So in summary, the object key alone would not suffice (as in your case).
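One common pattern for the permission-mapping problem is to keep an ownership or ACL record for each document path in your own datastore, and have the generate-pre-signed-url endpoint check it before signing anything. A minimal sketch in Python - the permission store, user IDs, paths, and bucket name are all hypothetical, and the final signing call would come from your SDK (e.g. boto3's generate_presigned_url):

```python
# Hypothetical permission store: document path -> set of user IDs allowed to read it.
PERMISSIONS = {
    "docs/42/report.pdf": {"user-1", "user-2"},
}

def generate_presigned_url(user_id, document_path):
    """Authorize the caller first; only sign if the check passes."""
    allowed = PERMISSIONS.get(document_path, set())
    if user_id not in allowed:
        return None  # the API endpoint would answer 403 here
    # In real code this is where you would call your SDK's signing helper,
    # e.g. boto3's s3_client.generate_presigned_url(...).
    return f"https://example-bucket.s3.amazonaws.com/{document_path}?Signature=..."
```

The key design point is that the permission check lives next to the signing step, so no URL is ever produced for a document the caller may not read.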

Copy between S3 buckets using signed URL with boto? [duplicate]

I'm using a service that puts the data I need on S3 and gives me a list of presigned URLs to download (http://&lt;bucket&gt;.s3.amazonaws.com/&lt;key&gt;?AWSAccessKeyID=...&amp;Signature=...&amp;Expires=...).
I want to copy those files into my S3 bucket without having to download them and upload again.
I'm using the Ruby SDK (but willing to try something else if it works..) and couldn't write anything like this.
I was able to initialize the S3 object with my credentials (access_key and secret) that grants me access to my bucket, but how do I pass the "source-side" access_key_id, signature and expires parameters?
To make the problem a bit simpler - I can't even do a GET request to the object using the presigned parameters. (not with regular HTTP, I want to do it through the SDK API).
I found a lot of examples of how to create a presigned URL, but nothing about how to authenticate using already-given parameters (I obviously don't have the secret key of my data provider).
Thanks!
You can't do this with a signed URL, but as has been mentioned, if you fetch and upload within EC2, in an appropriate region for the buckets in question, there's essentially no additional cost.
Also worth noting: the two buckets do not have to be in the same account, but the AWS key that you use to make the request has to have permission to put the target object and get the source object. Permissions can be granted across accounts, though in many cases that's unlikely to be granted.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
You actually can do a copy with a presigned URL. To do this, you need to create a presigned PUT request that also includes a header like x-amz-copy-source: /sourceBucket/sourceObject in order to specify where you are copying from. In addition, if you want the copied object to have new metadata, you will also need to add the header x-amz-metadata-directive: REPLACE. See the REST API documentation for more details.
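As a sketch of what the client side of such a copy looks like: the PUT request must carry the same x-amz-copy-source header that was included when the URL was signed, or S3 will reject the signature. The function below only builds the request object (stdlib Python; the URL, bucket, and key are placeholders) and does not send it:

```python
from urllib.request import Request

def build_copy_request(presigned_put_url, source_bucket, source_key,
                       replace_metadata=False):
    """Build (but do not send) the PUT request for a presigned server-side copy.

    The same x-amz-copy-source header must have been part of the signature
    when the presigned URL was generated.
    """
    headers = {"x-amz-copy-source": f"/{source_bucket}/{source_key}"}
    if replace_metadata:
        # Ask S3 to take metadata from the request instead of the source object.
        headers["x-amz-metadata-directive"] = "REPLACE"
    return Request(presigned_put_url, method="PUT", headers=headers)
```

Sending the request with urllib's urlopen (or any HTTP client) then performs the copy entirely server-side, with no download or re-upload.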

Amazon S3 - does root user have access to buckets?

I am testing S3 calls using DHC REST client in Chrome. In these tests, the Authorization is all based on my root user credentials.
I can do a GET on //mybucket.s3.amazonaws.com, and a list of the items in mybucket is returned.
If I add an item to retrieve (//mybucket.s3.amazonaws.com/myitem), I always get 403 Forbidden.
I thought that the root user had automatic access to the objects, but am I wrong about that?
I took screen prints of both tests, which I'll supply if needed.
After some further monkeying around, I found my answer. Yes, the AWS root user can access individual items, but the Authorization header string changes: when you retrieve an object, that object's key participates in the calculation of the auth string. Thus, the same string used to retrieve the bucket list does not work when retrieving an object.
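This is easy to see in the legacy signature-version-2 scheme, where the canonical resource path is part of the string that gets signed, so listing the bucket and fetching an object yield different signatures. A rough Python sketch with a dummy secret (the real scheme also covers headers and dates; see the S3 REST authentication docs):

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key, method, resource, date="Thu, 17 Nov 2005 18:49:58 GMT"):
    """Legacy SigV2: the canonical resource path is part of the signed string."""
    string_to_sign = f"{method}\n\n\n{date}\n{resource}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

bucket_list_sig = sign_v2("dummy-secret", "GET", "/mybucket/")        # bucket listing
object_get_sig = sign_v2("dummy-secret", "GET", "/mybucket/myitem")   # single object
# The two signatures differ because the resource path differs.
```

Reusing the bucket-listing signature for an object GET therefore fails, exactly as observed above.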

How to hide real URL with Google Cloud Storage?

Scenario: I place some files on Google web storage,
and I want only paid users to be able to download them. So my question is: how do I hide the file's URL from paid users to prevent them from sharing it with unpaid users?
So, is there a way to hide the real file location? Single-use or time-restricted URLs, or anything else?
Maybe hiding the URL is possible with other CDN providers, such as Microsoft Azure Storage or Amazon S3?
Amazon S3 provides query string authentication (usually referred to as pre-signed URLs) for this purpose, see Using Query String Authentication:
Query string authentication is useful for giving HTTP or browser
access to resources that would normally require authentication. The
signature in the query string secures the request. Query string
authentication requests require an expiration date. [...]
All AWS Software Development Kits (SDKs) provide support for this, here is an example using the GetPreSignedUrlRequest Class from the AWS SDK for .NET, generating a pre-signed URL expiring 42 minutes from now:
using (var s3Client = AWSClientFactory.CreateAmazonS3Client("AccessKey", "SecretKey"))
{
    GetPreSignedUrlRequest request = new GetPreSignedUrlRequest()
        .WithBucketName("BucketName")
        .WithKey("Key")
        .WithProtocol(Protocol.HTTP)
        .WithExpires(DateTime.Now.AddMinutes(42));
    string url = s3Client.GetPreSignedURL(request);
}
Azure Storage has the concept of a Shared Access Signature. It's basically the URL for a BLOB (file) with parameters that limit access. I believe it's nearly identical to the Amazon S3 query string authentication mentioned in Steffen Opel's answer.
Microsoft provides a .NET library for handling Shared Access Signatures. They also provide the documentation you would need to roll your own library.
You can use Signed URLs in Google Cloud Storage to do this:
https://developers.google.com/storage/docs/accesscontrol#Signed-URLs
One way would be to create a Google Group containing only your paid users. Then, for the object's of interest, grant read permission to the group's email address (via the object's Access Control List). With that arrangement, only your paid members will be able to download these projected objects. If someone outside that group tries to access the URL, they'll get an access denied error.
After you set this up, you'll be able to control who can access your objects by editing your group membership, without needing to mess with object ACLs.
Here's an alternative that truly hides the S3 URL. Instead of creating a query-string-authenticated URL with a limited validity, this approach takes a user's request, authorizes the user, fetches the S3 data, and finally returns the data to the requestor.
The advantage of this approach is that the user has no way of knowing the S3 URL and cannot pass the URL along to anyone else, as is the case with a query-string-authenticated URL during its validity period. The disadvantages are: 1) there is an extra intermediary in the middle of the S3 GET, and 2) extra bandwidth charges may be incurred, depending on where the S3 data physically resides.
public void streamContent( User requestor, String contentFilename, OutputStream outputStream ) throws Exception {
    // is the requestor entitled to this content?
    boolean isAuthorized = authorizeUser( requestor, contentFilename );
    if( isAuthorized ) {
        AWSCredentials myCredentials = new BasicAWSCredentials( s3accessKey, s3secretKey );
        AmazonS3 s3 = new AmazonS3Client( myCredentials );
        S3Object object = s3.getObject( s3bucketName, contentFilename );
        FileCopyUtils.copy( object.getObjectContent(), outputStream );
    }
}

How do you let only authorized users access content stored in Amazon's S3?

Once you store content in S3 and make it public, everyone has access to it. Is there a way to let only authorized users access the content stored in S3? For example, I have a site that lets people store their documents. The server stores these documents in S3, and I would like only the user who uploaded a document to have access to it.
I know I can copy the S3 contents to my server and let only authorized users have access, but this would make the server slow. I would like to be able to serve the contents directly to the client's browser from S3.
Thanks.
The link given in the above answer is no longer correct -- Amazon had its documentation reorganized. I think these are the correct pages to read:
http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?RESTAuthentication.html#RESTAuthenticationQueryStringAuth
http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
You want to read the section called 'Query String Request Authentication Alternative' in the REST authentication documentation linked above.
It explains how to create a time-limited, expiring link to an S3 object.
You would then have to write the code that manages the users (the "who owns which object" part of your question).
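The "who owns which object" bookkeeping can be as simple as recording the uploader's ID against each object key at upload time and checking it before generating the expiring link. A minimal illustrative sketch (the in-memory store, user IDs, and keys are all hypothetical):

```python
OWNERS = {}  # object key -> uploader's user ID (hypothetical in-memory store)

def record_upload(user_id, key):
    """Remember who uploaded each object."""
    OWNERS[key] = user_id

def may_download(user_id, key):
    """Only the original uploader may receive a signed link for the object."""
    return OWNERS.get(key) == user_id

record_upload("alice", "uploads/alice/cv.pdf")
```

In a real application the mapping would live in your database, and a passing may_download check would be followed by the SDK call that produces the time-limited URL.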