Amazon S3 - does root user have access to buckets?

I am testing S3 calls using DHC REST client in Chrome. In these tests, the Authorization is all based on my root user credentials.
I can do a GET with //mybucket.s3.amazonaws.com, and a list of the items in mybucket is returned.
If I add an item to retrieve (//mybucket.s3.amazonaws.com/myitem), I always get 403 Forbidden.
I thought that the root user had automatic access to the objects, but am I wrong about that?
I took screen prints of both tests, which I'll supply if needed.

After some further monkeying around, I found my answer. Yes, the AWS root user can access individual items. But the Authorization header string changes. When you retrieve an object, that object's key participates in the calculation of the auth string. Thus, the same string used to retrieve the bucket list does not work when retrieving an object.
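A minimal sketch of why the header changes, shown in Signature Version 2 terms for brevity (Version 4 folds the path into its canonical request the same way); the secret key and date values are placeholders:

// Sketch (SigV2): the string-to-sign ends with the resource path, so
// "/mybucket/" and "/mybucket/myitem" can never share an Authorization value.
// Requires: using System; using System.Security.Cryptography; using System.Text;
static string Sign(string secretKey, string verb, string date, string resource)
{
    string stringToSign = $"{verb}\n\n\n{date}\n{resource}";  // Content-MD5/Type left blank
    using var hmac = new HMACSHA1(Encoding.UTF8.GetBytes(secretKey));
    return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
}
// Sign(secret, "GET", date, "/mybucket/")       -> one signature
// Sign(secret, "GET", date, "/mybucket/myitem") -> a different one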

Related

Authorization for a pre-signed URL generation API?

Our application only has a front end (FE) and a back end (BE). Each user can have many documents, which are uploaded by the back end but need to be displayed on the FE. We are trying to use pre-signed URLs to let the FE download documents directly from S3.
The BE returns the document paths and exposes an API that generates a pre-signed URL for each document:
POST: /app/v1/generate-pre-signed-url/path-to-document
The issue is that this generate-pre-signed-url API needs to verify whether the current user has access to the document, and it's really hard to map user permissions to a document path.
Any suggested design or pattern would be appreciated!
As per AWS Docs
When you create a presigned URL for your object, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (GET to download the object), and an expiration date and time.
So, in summary, the object key alone would not suffice (as in your case).
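A rough sketch of such an endpoint with AWSSDK.S3, where UserOwnsDocument (e.g. a lookup table mapping user IDs to document paths), the bucket name, and s3Client are assumptions standing in for your own pieces:

// Sketch only; requires: using Amazon.S3; using Amazon.S3.Model;
public string GeneratePreSignedUrl(string userId, string documentPath)
{
    if (!UserOwnsDocument(userId, documentPath))   // hypothetical app-level permission check
        throw new UnauthorizedAccessException();

    var request = new GetPreSignedUrlRequest
    {
        BucketName = "app-documents",              // placeholder bucket
        Key = documentPath,                        // the object key
        Verb = HttpVerb.GET,                       // HTTP method (download)
        Expires = DateTime.UtcNow.AddMinutes(15)   // expiration date and time
    };
    return s3Client.GetPreSignedURL(request);      // signed with the BE's credentials
}

The signing itself happens locally with the BE's credentials; the hard part the question asks about is exactly the UserOwnsDocument mapping, which has to live in your own data model.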

Copy between S3 buckets using signed URL with boto? [duplicate]

I'm using a service that puts the data I need on S3 and gives me a list of presigned URLs to download (http://<bucket>.s3.amazonaws.com/<key>?AWSAccessKeyId=...&Signature=...&Expires=...).
I want to copy those files into my S3 bucket without having to download them and upload again.
I'm using the Ruby SDK (but willing to try something else if it works...), and I couldn't manage to write anything like this.
I was able to initialize the S3 object with my credentials (access_key and secret) that grants me access to my bucket, but how do I pass the "source-side" access_key_id, signature and expires parameters?
To make the problem a bit simpler - I can't even do a GET request to the object using the presigned parameters. (not with regular HTTP, I want to do it through the SDK API).
I found a lot of examples of how to create a presigned URL, but nothing about how to authenticate using already-given parameters (I obviously don't have the secret_key of my data provider).
Thanks!
You can't do this with a signed URL, but as has been mentioned, if you fetch and upload within EC2 in an appropriate region for the buckets in question, there's essentially no additional cost.
Also worth noting, the two buckets do not have to be in the same account, but the AWS key that you use to make the request has to have permission to put the target object and to get the source object. Permissions can be granted across accounts... though in many cases, that's unlikely to be granted.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
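For the cross-account case, the grant typically comes from a bucket policy on the source bucket along these lines (the account ID and bucket name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::source-bucket/*"
  }]
}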
You actually can do a copy with a presigned URL. To do this, you need to create a presigned PUT request that also includes a header like x-amz-copy-source: /sourceBucket/sourceObject in order to specify where you are copying from. In addition, if you want the copied object to have new metadata, you will also need to add the header x-amz-metadata-directive: REPLACE. See the REST API documentation for more details.
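A hedged sketch of that presigned copy in C# (assume it runs inside an async method; whether your SDK version signs extra x-amz-* headers into the URL is worth verifying, and all bucket/key names are placeholders):

// Requires: using Amazon.S3; using Amazon.S3.Model; using System.Net.Http;
var presign = new GetPreSignedUrlRequest
{
    BucketName = "target-bucket",
    Key = "target-object",
    Verb = HttpVerb.PUT,
    Expires = DateTime.UtcNow.AddMinutes(15)
};
presign.Headers["x-amz-copy-source"] = "/sourceBucket/sourceObject";
presign.Headers["x-amz-metadata-directive"] = "REPLACE";  // only if replacing metadata
string url = s3Client.GetPreSignedURL(presign);

// Whoever executes the copy must send the same headers, or S3's
// signature check fails.
var put = new HttpRequestMessage(HttpMethod.Put, url);
put.Headers.Add("x-amz-copy-source", "/sourceBucket/sourceObject");
put.Headers.Add("x-amz-metadata-directive", "REPLACE");
using var http = new HttpClient();
var response = await http.SendAsync(put);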

How to implement access control for individual object in S3 bucket?

I have an S3 bucket which is by default private.
Now I want to implement access control on the objects of this bucket.
For example, if bucket has three objects A, B, C then object A, B could be public and object C could be private.
It should be possible to make the public object private and vice-versa from the application. The private objects will be accessible by selected application users only and public objects will be accessible by everyone.
Is there any way to implement it?
So far I have looked into object tagging. But I am not entirely sure if it applies in my situation.
You are correct that objects in Amazon S3 are private by default.
You can then grant access via several methods:
Set the Access Control List (ACL) on an individual object to make it Public (or set it back to being Private)
Create a Bucket Policy to grant access to a whole bucket, or a path within a bucket
Add a policy to an IAM User to allow that user access to all/part of an Amazon S3 bucket
Create Pre-signed URLs to grant time-limited access to a private object
So, you can certainly make specific objects Public and then change them to Private by using ACLs.
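For example, a minimal sketch with the AWSSDK.S3 client, inside an async method (bucket and key are placeholders):

// Flip one object between Public and Private via its ACL.
// Requires: using Amazon.S3; using Amazon.S3.Model;
await s3Client.PutACLAsync(new PutACLRequest
{
    BucketName = "my-bucket",
    Key = "object-A",
    CannedACL = S3CannedACL.PublicRead   // or S3CannedACL.Private to make it private again
});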
To make particular objects accessible to particular users, you should use Pre-Signed URLs. They work as follows:
Users authenticate to your application
When a user wishes to access a private object, the application is responsible for determining whether they are permitted to access it
If so, then the application generates a pre-signed URL. It only takes a few lines of code and does not require an API call to S3.
The application then provides the pre-signed URL just like any other link, such as putting it in an <a> tag, or in an <img> tag. The object will be accessible for a limited time duration just like any other resource on the Internet.
Once the duration has expired, the URL will no longer work (Access Denied)
This way, your application is totally in control of who can access which objects and authorized users can access the objects directly from Amazon S3. Also, the application users do not require AWS-specific logins; they would authenticate to your application.
See: Share an Object with Others - Amazon Simple Storage Service

Temporary authentication via query string

My goal is to be able to generate a special URL that would allow someone to view a normally "protected" view temporarily. In fact, if they leave the page, any temporary authentication that was granted should be taken away.
Basically the problem is that I have content on my website that I NORMALLY want to be protected by requiring a login. However, I'd like to be able to give temporary access to a specific asset and not require a login.
Should I somehow use a URL with a query string that automatically authenticates the user? Or should I instead generate a separate page with that asset that does not require authentication at all?
edit: I forgot to mention that the generated link should be accessible to more than one person. In other words, it can't be limited by the number of times it is accessed, but rather by a time period, or until we manually force it to expire.
You can create a database table like tokens, where you store unique access tokens that are valid for only a single request. In your action, this token could be a URL parameter. If no token is present in the URL, or if the token is not found in the DB table, access is denied. If a token is found, you delete it from the DB and perform the action.
Now, whenever you want to give someone this kind of one-off access, you create such a token and store it in the DB. The token could be a random MD5 hash that you generate, e.g., through md5(mt_rand().mt_rand()). Then you can create a URL with that token as a parameter and hand it out to the user.
You can also enhance the system and add an expiration time to your tokens table. Then you'd only grant access if the expiration time is in the future.
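In C# terms (to match the other examples here), a minimal sketch of that scheme, where the Db helper and the tokens schema are placeholders for whatever data layer you use:

// Requires: using System; using System.Security.Cryptography;
string IssueToken(TimeSpan lifetime)
{
    string token = Convert.ToHexString(RandomNumberGenerator.GetBytes(32));  // any unguessable string works
    Db.Execute("INSERT INTO tokens (token, expires_at) VALUES (@t, @e)",
               token, DateTime.UtcNow + lifetime);
    return token;  // hand this out as a ?token=... URL parameter
}

bool ConsumeToken(string token)
{
    // Valid only if present and unexpired; deleting the row makes it one-off.
    // Per the question's edit, keep the row (check expires_at only) if the
    // same link must work for several people until it expires.
    int rows = Db.Execute(
        "DELETE FROM tokens WHERE token = @t AND expires_at > @now",
        token, DateTime.UtcNow);
    return rows == 1;
}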
vyce: "It should first be for a rendered view that also contains PDF files."
If you have PDF files (or any other files) accessible under your webroot, anyone can access them at any time. So even if you only serve the view to your user once, they could still get to the PDF file if they have kept its URL. The user can also share that URL with others.
This problem can be resolved by:
Storing the PDF file outside the document root (or in another location that is made inaccessible with .htaccess)
Once you have determined that your user is allowed a one-time peek at the PDF, you serve it as described here
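A sketch of that serving step in ASP.NET Core minimal-API terms (the answer itself is framework-agnostic; the file path is a placeholder outside the web root, and ConsumeToken is the check sketched above):

app.MapGet("/document", (string token) =>
    ConsumeToken(token)
        ? Results.PhysicalFile("/srv/private-files/report.pdf", "application/pdf")
        : Results.StatusCode(403));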

S3 with IAM Policy

I've created a group with read-only access to S3 objects and then added a new user within that group.
I'm having trouble understanding what the url to the file will be. I have the link thus far as:
https://s3.amazonaws.com/my-bucket/
Do I need to pass in some sort of ID and key to let people in that group get access? What are those params, and where do I find the values?
To let people access your files, you can make the bucket public and then use the URL of each object, which you can find by checking the properties of each object in the AWS Management Console. The catch is that anyone who knows this URL can access the files. To make it more secure, use an ACL to limit all users to read-only access: go to Permissions, add a new permission for "Everyone", and check the "Open/Download" box only.
If you want to limit access to only a few users, then you will have to use IAM policies. With an IAM user you will get an access key ID and a secret access key. Now, you CANNOT append the secret access key to the end of your URL to give access to users; the secret access key is meant to be "SECRET". But what you can do is provide pre-signed URLs to your users through code. Here is a C# example:
private string GetWebUrl()
{
    // Requires: using Amazon.S3.Model;
    var request = new GetPreSignedUrlRequest
    {
        BucketName = "your-bucket-name",          // placeholder
        Key = "your-object-key",                  // placeholder
        Expires = DateTime.UtcNow.AddSeconds(50)  // time you want the URL to be active for
    };
    return S3.GetPreSignedURL(request);
}
The catch is that the URL will only be active for the given time, though you can increase it.
Also, in the above function, "S3" is an instance of AmazonS3Client that I have created:
private static AmazonS3Client S3;

public static AmazonS3Client CreateS3Client()
{
    // Reads the IAM user's keys from app settings.
    // Requires: using Amazon.S3; using System.Configuration;
    var appConfig = ConfigurationManager.AppSettings;
    var accessKeyId = appConfig["AWSAccessKey"];
    var secretAccessKeyId = appConfig["AWSSecretKey"];
    S3 = new AmazonS3Client(accessKeyId, secretAccessKeyId);
    return S3;
}
Each IAM user will have their own cert and key that they set up with whatever S3 tools they are using. Just give them the URL to log in to their IAM AWS account, and their credentials to log in, and they should be good to go. The bucket URL will be the same.
The URL to a file on S3 does not change.
Usually you would not use the form that you have but rather:
https://my-bucket.s3.amazonaws.com/path/to/file.png - where path/to/file.png is the key. (see virtual hosted compatible bucket names).
The issue is that this URL will result in a 404 unless the person asking for it has the right credentials, or the bucket is publicly readable.
You can take a URL like that and sign it (a single call in most SDKs) with any credentials that have read-only access to the file, which will result in a time-limited URL that anyone can use. When the URL is signed, some extra arguments are added so that AWS can tell that an authorized person created it.
OR
You can use the AWS API, and use the credentials that you have to directly download the file.
It depends on what you are doing.
For instance, to make a web page which has links to files, you can create a bunch of time limited URLs, one for each file. This page would be generated by some code on your server, upon a user logging in and having some sort of IAM credentials.
Or if you wanted to write a tool to manage an S3 repo, you would perhaps just download/upload, etc., directly using the API.
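A sketch of that direct-API route with AWSSDK.S3, inside an async method (bucket, key, and file paths are placeholders):

// Download an object with your configured credentials; no signed URL needed.
// Requires: using Amazon.S3; using System.Threading;
using var client = new AmazonS3Client();   // picks up credentials from config/environment
using var response = await client.GetObjectAsync("my-bucket", "path/to/file.png");
await response.WriteResponseStreamToFileAsync("file.png", false, CancellationToken.None);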