Amazon S3 authentication model - amazon-s3

What is the proper way of delegating file access authentication from S3 to our own authentication service?
For example: a web site user (who has our session id in their headers) sends a request to S3 to get a file by URL. S3 sends a request to our authentication service asking whether a user with those headers can access that file, and if our auth service allows it, the file is downloaded.
There is a lot of information about pre-signed requests, but absolutely nothing about querying S3 with this kind of "hidden" authentication.

If a file has been made public on S3, then of course anyone can download it, using a direct link to the file.
If the file is not public, then there needs to be some type of authentication. There are really only two ways a non-public file can be obtained from S3: one is via a pre-signed URL, and the other is to be an AWS user who has access to S3. This is how it works when you yourself want to access an object on S3: you must provide your access key and a signature in the header of the GET request. You can grant other users access to S3 via Amazon IAM, which is more like the 'hidden' authentication you mentioned. Via the IAM route, there are different ways of providing access, including federated users. Visit this link to learn more:
http://docs.aws.amazon.com/AmazonS3/latest/dev/MakingAuthenticatedRequests.html
If you are simply trying to provide an authenticated user access to a file, the best and easiest way to do that is to create a pre-signed URL with an expiration time. The expiration time can be something short, like 10 minutes or even 1 minute, to prevent the user from passing the link to others.
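A minimal sketch of that approach with boto3 (the bucket name, key and function name are placeholders; your web app would call this only after your own auth service has approved the request):

```python
import boto3

s3 = boto3.client("s3")

def presigned_download_url(bucket, key, expires_in=600):
    # Returns a time-limited GET URL; anyone holding it can fetch the object
    # until it expires, so keep the expiry short.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_in,  # seconds
    )

# Example: only generate the URL once the session has been validated.
# url = presigned_download_url("my-private-bucket", "reports/invoice.pdf", 60)
```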

Related

Multiple users uploading into the same storage account via desktop app

Would love to hear your ideas.
In this project, multiple users (let's say 1000 users) will upload files into the same storage account (AWS S3, Azure Blob Storage or DigitalOcean Spaces) using a C# Windows desktop app.
The desktop app does have user authentication against a Web API.
Questions
Is it correct that each user should have his/her own bucket?
What is the best way to securely introduce the API key and bucket information into the desktop app so that files are uploaded to the correct bucket and storage account?
Think about the structure of your S3 bucket and how you would later identify each object a user uploaded.
I would create an initial key prefix for each user, under which that user uploads their files, e.g.
username1/object1
/object2
/objectx
username2/object1
username3/object1
usernamex/objectx
This gives you the possibility, if a user is deleted, to simply delete all objects under that username too. If you are using a generated key to identify the user, then you can also use the key ID instead of the username.
The most interesting question is how you will secure this, so that no user can see another user's objects. If you have an underlying API, then it's "easy": give the API access to the S3 bucket and secure the requests so that only those objects are listed whose username or key ID matches.
If you are using IAM users (or roles), then you would generate a policy for each base key (username1 or keyID) allowing only the specific actions.
If you set up something like that, please be really sure to harden your security, and also enable logging on this bucket so you can verify that user1 can't access objects from user2.
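One way to implement the per-prefix isolation described above is to hand the desktop app short-lived credentials scoped to that user's own prefix instead of baking a long-term key into the app. A minimal sketch with boto3; the bucket name, prefix layout and duration are assumptions, not part of the original setup:

```python
import json
import boto3

def scoped_credentials_for(username, bucket="uploads-bucket"):
    # Session policy: the temporary credentials may only touch objects
    # under this user's prefix (username1/..., username2/..., etc.).
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{username}/*",
        }],
    }
    sts = boto3.client("sts")
    resp = sts.get_federation_token(
        Name=username[:32],            # federation user name has a length limit
        Policy=json.dumps(policy),
        DurationSeconds=3600,          # credentials expire after one hour
    )
    # Hand AccessKeyId, SecretAccessKey and SessionToken to the desktop app.
    return resp["Credentials"]
```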

Obtain short lived access token from dropbox without redirect url (through .net code or javascript)

I am trying to implement a way to obtain a short-lived access token from Dropbox, then upload multiple files to a folder, and finally revoke the token.
The code that obtains the access token can be .NET or JavaScript. But what I am seeing in the Dropbox documentation is an OAuth flow with a redirect URL, and I don't want to redirect anywhere.
I also do not need to ask the end user to allow anything; what I need is to get a temporary access token and upload some files.
This scenario is doable in Amazon S3 by generating a short-lived policy for file upload.
Thanks for any help.
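For comparison, the S3 "short-lived policy for file upload" mentioned above can be sketched with boto3's pre-signed POST (the bucket and key are placeholders; the returned fields are what the uploader submits as a multipart form):

```python
import boto3

s3 = boto3.client("s3")

def short_lived_upload_policy(bucket, key, expires_in=300):
    # Returns {"url": ..., "fields": {...}}; the client POSTs the fields plus
    # the file to the URL, and the policy stops working after expires_in seconds.
    return s3.generate_presigned_post(
        Bucket=bucket,
        Key=key,
        ExpiresIn=expires_in,
    )

# post = short_lived_upload_policy("my-upload-bucket", "incoming/report.pdf")
```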

Backblaze B2 download with "presigned URL"

Situation: I run a Django app in the web, where logged-in users can also download .pdf files (non-public, with specific restrictions, depending on user rights). The most convenient way to do so (e.g. in S3) is to use a time-restricted, pre-signed URL because they open immediately in the browser, plus the app server does not have to handle additional traffic.
Problem: Backblaze B2 obviously does not offer an explicit method for creating pre-signed URLs to download non-public files directly in the browser.
Generating the api URL and the authorization token, and fetching the file from the object store happens at the app server level and the process is not exposed to the "ordinary" user.
But in the end, the API operation "b2_download_file_by_name" just uses a GET request, which means I can add the authorization token to the request's URL using "?Authorization=123xyz........". This way I get a presigned URL that works perfectly fine in the browser to allow access to a specific non-public file for a limited time. (Please note: B2 downloads can be restricted to files with specific prefixes [like s3 pseudo-folders], but if the specified "prefix" is long enough, I can make the auth token specific for one file.)
Question: As I wrote above, usually the authorization token is not exposed to the user. Now, if I make the URL visible, does this imply a security risk? In other words, could a user who possesses one or many tokens extract the general access key from the token, or is the token encrypted well enough to avoid this?
According to the documentation for the b2_download_file_by_name call you can use the download authorization in a URL in the way you describe.
An authorization token can be provided in the URL query string instead of being passed in the HTTP header. An account authorization token obtained from b2_authorize_account will allow access to all files in a private bucket. A download authorization token obtained from b2_get_download_authorization will allow access to files whose names begin with the filename prefix used to generate the download authorization token.
However, it seems that the expiry time set in the b2_get_download_authorization call is being ignored, so the resulting URL never expires, which of course is not secure. I have a support ticket in with B2 about this, so I am hoping for a solution.
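As a small illustration of the technique described in the question, assuming the app server has already obtained a download authorization token for a prefix covering the file and knows the account's download URL (all names below are placeholders):

```python
from urllib.parse import quote

def b2_presigned_url(download_url, bucket_name, file_name, download_auth_token):
    # b2_download_file_by_name is a plain GET, so the download authorization
    # token can travel in the query string instead of an HTTP header.
    return (
        f"{download_url}/file/{bucket_name}/{quote(file_name)}"
        f"?Authorization={download_auth_token}"
    )

# url = b2_presigned_url("https://f002.backblazeb2.com", "my-private-bucket",
#                        "reports/user42/invoice.pdf", token)
```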

AWS s3 image access through REST API

I am looking for the best practice to follow for AWS S3 access by third parties who do not have an account in S3.
In my case there is a REST interface which needs to provide links to images. These images reside on AWS S3. Based on the identity of the caller, is there a way we can give access to the user? I do not want to make the bucket public.
Say, if we get a call from user X (maybe we ask them to set a new header), we allow them access to the bucket.
As this is an enterprise API and we have partners using it, we want only some of the identified callers to have access to the images.
Any pointers will help a lot.
Signed S3 URLs: make the bucket private, accessible only to your API via an IAM role if the API is running on EC2, Lambda, etc.
Your API does the authentication and authorization, then provides the caller a signed S3 URL to download the image.
When you create a pre-signed URL for your object, you must provide your security credentials, specify a bucket name, an object key, the HTTP method (GET to download the object) and an expiration date and time. The pre-signed URLs are valid only for the specified duration.
Anyone who receives the pre-signed URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a pre-signed URL.
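A hedged sketch of how the API side can look, assuming a small Flask service; check_partner_access stands in for your own authorization logic, and the header name, bucket and route are placeholders:

```python
import boto3
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
s3 = boto3.client("s3")  # uses the instance/Lambda IAM role credentials

def check_partner_access(api_key, image_key):
    # Placeholder: look the caller up in your partner registry and decide
    # whether they may see this image.
    return api_key == "example-partner-key"

@app.route("/images/<path:image_key>")
def image_link(image_key):
    if not check_partner_access(request.headers.get("X-Api-Key", ""), image_key):
        abort(403)
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "partner-images-bucket", "Key": image_key},
        ExpiresIn=300,  # the link stops working after 5 minutes
    )
    return jsonify({"url": url})
```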

Setting different S3 read permissions based on uploader

I'm trying to arrive at a situation where
one class of users can upload files that are subsequently not publicly available
another class of users can upload files that are publicly available.
I think I need to use two IAM users:
the first, which has putObject permissions only and whose secret key I bake into the JavaScript (I use the AWS SDK putObject here, and bake in the first secret key)
the other, whose secret key I keep on the server, and which provides signatures for uploading to signed-in users of the right category (I ended up using a POST command with multipart form-data for this, as I could not understand how to do it with the SDK other than baking in the second secret key, which would be bad since files could then be both uploaded and downloaded)
But I'm struggling to set up bucket permissions that support some files being publicly available while others are not at all.
Is there a way, or do I need to use separate buckets?
Update
Based on the first comment, I tried adding "acl": "public-read" to my policy and POST form data fields. The signatures match correctly, but I now get a Forbidden response from AWS, which I don't get when this field is absent (but then the uploads are not publicly visible).
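For reference, a minimal sketch of the server-side signature for the public-upload path using boto3's pre-signed POST. This is an assumption about the setup described above, not a confirmed fix: the "acl" value has to appear both in the form fields and in the policy conditions, and the signing credentials need s3:PutObjectAcl in addition to s3:PutObject, or S3 responds with Forbidden.

```python
import boto3

s3 = boto3.client("s3")

def public_upload_post(bucket, key, expires_in=300):
    # The acl must be present in Fields (what the form sends) and in
    # Conditions (what the policy permits); otherwise S3 rejects the POST.
    return s3.generate_presigned_post(
        Bucket=bucket,
        Key=key,
        Fields={"acl": "public-read"},
        Conditions=[{"acl": "public-read"}],
        ExpiresIn=expires_in,
    )
```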