How to give access to non-public Amazon S3 bucket folders using a Parse-authenticated user

We are developing a mobile app using Parse as our BaaS solution, but using Amazon S3 for storage of our media files. All of our users upload media files into their own individual folders inside our app's bucket. As a user uploads media files, we update their records in Parse so it knows where to download the files. That's the easy part.
I've spent quite a bit of time researching the different policies for S3 buckets, and I am trying to get a grip on the proper way to ensure the security of the uploaded content. If you do all of your work with DynamoDB or SimpleDB, it's easy because you're essentially adjusting your ACLs with IAM accounts and whatnot. If you use Amazon Cognito, it's also easy because authentication happens through Google, Facebook, or Amazon accounts. In my case I am using Parse to authenticate users, and Parse cannot speak to Amazon directly.
My goal is that only the currently logged-in Parse user with ID #1234567 can access their own 1234567 folder and files (as well as any other users given permission by this person for collaboration). Here is a post similar to what I'm trying to accomplish: amazon S3 bucket policy - restricting access by referer BUT not restricting if urls are generated via query string authentication
...but how do I accomplish this with the current user's ID number?
An even better question is whether the post mentioned above is best practice, or whether I should instead be looking at creating an EC2 server to handle access to these files. Should I be looking at CloudFront to serve private content? Or is there another method that works better for what I am trying to accomplish? I am going in circles and my head is spinning.
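To make the goal concrete, here is roughly the flow I'm imagining: some backend endpoint validates the Parse session token and then hands back a short-lived pre-signed URL scoped to the caller's own folder. This is only a sketch (written against the current AWS SDK for JavaScript); the Parse keys, bucket name, and folder convention are placeholders:

```typescript
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// Placeholder configuration for this sketch.
const PARSE_APP_ID = "APP_ID";
const PARSE_REST_KEY = "REST_KEY";
const BUCKET = "my-app-media";

const s3 = new S3Client({ region: "us-east-1" });

// Given a Parse session token, verify who the caller is, then sign a
// short-lived download URL only for keys inside that user's own folder.
async function signDownloadUrl(sessionToken: string, key: string): Promise<string> {
  // Ask Parse which user this session token belongs to.
  const res = await fetch("https://api.parse.com/1/users/me", {
    headers: {
      "X-Parse-Application-Id": PARSE_APP_ID,
      "X-Parse-REST-API-Key": PARSE_REST_KEY,
      "X-Parse-Session-Token": sessionToken,
    },
  });
  if (!res.ok) throw new Error("invalid Parse session");
  const user = (await res.json()) as { objectId: string };

  // Enforce the folder convention: users may only touch <userId>/...
  if (!key.startsWith(`${user.objectId}/`)) throw new Error("forbidden");

  return getSignedUrl(s3, new GetObjectCommand({ Bucket: BUCKET, Key: key }), {
    expiresIn: 300, // the URL dies after 5 minutes
  });
}
```

The same shape would presumably work for uploads by signing a PutObjectCommand instead.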
Thanks to whoever can help straighten me out.

Well, since Parse is being shut down, I am migrating to another service. This question is no longer relevant.

Related

Cloud Storage customer access best practices

Let's say I have a use case where users can buy MP3 files inside an app. The objects are stored in GCP Cloud Storage. What is the best practice for delivering those objects only to the users who purchased them?
After researching the topic I came up with three solutions:
The client calls a REST service (e.g., one running inside App Engine). This service downloads the files from Cloud Storage and then sends them back to the client.
Instead of sending the files via the REST call, I could send the client a download URL (from Cloud Storage). This would be more cost-efficient; however, it sounds like a security concern to me, since anyone who simply monitors their network traffic could capture the URL.
Creating a (time-limited) signed URL to allow the user to download the file.
Obviously a permission check would have to happen first, e.g., against a database that records whether user X purchased MP3 Y.
This problem could also be applied to Azure Blob Storage or AWS S3...
In your use case, you have some constants:
You need a backend to authenticate the user (for example, authentication performed with Cloud Identity Platform, hosted on App Engine or Cloud Run).
You need to check the list of MP3s the user has bought (stored in Firestore, for example).
And then you need to allow the user to download the file. On this last point, I recommend you generate a signed URL. Download URLs exist only in the Firebase world (maybe your project is a Firebase project?), but they are the same thing as signed URLs. Finally, I don't recommend proposal #1. It will work, but in the case of a long download (because the network is poor), the connection will be interrupted after 60 seconds, and this will keep your App Engine instance up for nothing (and you will pay for it...).
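To sketch that last point, minting the signed URL with the Cloud Storage Node client looks roughly like this; the bucket name is a placeholder, and userOwnsTrack stands in for the real Firestore purchase lookup:

```typescript
import { Storage } from "@google-cloud/storage";

const storage = new Storage();

// Hypothetical ownership check: in practice this would be a Firestore
// query against the user's recorded purchases.
async function userOwnsTrack(userId: string, objectName: string): Promise<boolean> {
  /* ... Firestore lookup ... */
  return true;
}

// After the permission check passes, mint a signed URL that expires quickly.
async function getTrackUrl(userId: string, objectName: string): Promise<string> {
  if (!(await userOwnsTrack(userId, objectName))) throw new Error("not purchased");

  const [url] = await storage
    .bucket("my-mp3-bucket") // placeholder bucket name
    .file(objectName)
    .getSignedUrl({
      version: "v4",
      action: "read",
      expires: Date.now() + 15 * 60 * 1000, // valid for 15 minutes
    });
  return url;
}
```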

Access control via pre-signed URL

My media storage is OpenStack Object Storage (Swift) in the cloud (OVH).
Regarding the user-rights on the uploaded media:
Images [A] are viewable by all users, but only deletable by the user-owner/uploader.
Images [B] are very private: CRUD by the user-owner/uploader, and viewable by some other users.
I looked around for solutions and came across pre-signed (temporary) URLs; see also this article.
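For reference, minting one of these temporary URLs is just an HMAC over the method, expiry, and object path; here is a rough sketch (the host, object path, and key are placeholders I made up):

```typescript
import { createHmac } from "node:crypto";

// Sketch of OpenStack Swift's TempURL scheme. Assumes a Temp-URL-Key has
// already been set on the account or container.
const SWIFT_HOST = "https://storage.example.ovh";      // placeholder
const TEMP_URL_KEY = "my-secret-temp-url-key";         // placeholder
const OBJECT_PATH = "/v1/AUTH_account/photos/cat.jpg"; // placeholder

function tempUrl(method: "GET" | "PUT", seconds: number): string {
  const expires = Math.floor(Date.now() / 1000) + seconds;
  // Swift signs exactly these three fields, newline-separated.
  const hmacBody = `${method}\n${expires}\n${OBJECT_PATH}`;
  const sig = createHmac("sha1", TEMP_URL_KEY).update(hmacBody).digest("hex");
  return `${SWIFT_HOST}${OBJECT_PATH}?temp_url_sig=${sig}&temp_url_expires=${expires}`;
}

// e.g., a 10-minute read link for an image [B] shared with an approved viewer:
console.log(tempUrl("GET", 600));
```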
I was wondering whether this provides an acceptable security level. An alternative I could think of is authenticating all users via OpenStack's authentication module, Keystone. But maybe that's just completely stupid and/or overkill. I started to look in that direction as it might be similar to AWS S3's use of IAM policies.
My questions:
Is the pre-signed URL solution the way to go? And if not, why not?
What would processing images (creating thumbnails) look like? You grab the image from storage, process it, store it back, and delete the local versions, I suppose?

Uploading static files to Keystone.js

I'm evaluating potential content management systems to use for a project. Many of the users will need to upload static files and include links to them in their posts.
In the Admin UI I can only see the ability to upload an image in a post. Does anyone know if it is possible to upload files to Keystone through the Admin UI?
You could use their Amazon S3 storage adapter. Depending on which version of Keystone you're using (3 or 4), you'll have to do some different things. Either way, you need to create credentials for Amazon S3 and configure Keystone to work with them. From there, you can use Types.S3File to let part of your MongoDB model be a reference to an S3 object. See this page for more info on the S3File type in Keystone.
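As a rough sketch, the Keystone 3-style Types.S3File wiring looks something like this (the bucket and credentials are placeholders; on Keystone 4 you would reach for the separate keystone-storage-adapter-s3 package instead):

```typescript
import * as keystone from "keystone";
const Types = keystone.Field.Types;

// Tell Keystone which bucket and credentials to use for S3File fields.
keystone.set("s3 config", {
  bucket: "my-keystone-uploads",  // placeholder
  key: process.env.S3_KEY,        // AWS access key id
  secret: process.env.S3_SECRET,  // AWS secret key
});

// A model whose "file" field references an uploaded S3 object.
const Upload = new keystone.List("Upload");
Upload.add({
  name: { type: Types.Text, required: true, initial: true },
  file: { type: Types.S3File, s3path: "uploads" }, // stored under uploads/ in the bucket
});
Upload.register();
```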

Allowing read and write access to Google Drive files to unauthenticated clients

We have been working on a web service (http://www.genomespace.org/) that allows computational biology tools that implement our REST API to, among other things, read and write files stored on "The Cloud".
Our default file storage is a common Amazon S3 bucket, but we now allow users to mount their own private S3 bucket as well as files on Dropbox.
We are now trying to enable similar functionality for Google Drive and have run into some problems unique to Google Drive that we have not encountered with S3 or Dropbox.
The only way we have found to allow clients that are not Google-authenticated to read files unobtrusively is to make the files "Public". Our preference would be that once the user has authorized access to our application via OAuth2, the user's files could remain "Private" in Google Drive.
However, even though the user has already authorized our web service for offline access to their "Private" files, we have not found a way to generate a URL that a client authorized by our system can use to GET the file directly without also being logged into Google.
The closest we have come to this functionality has been to change the file permissions to "Anyone with Link", except that for files greater than 20MB Google insists on returning an intermediate web page warning that the file has not been scanned for viruses. In addition to having to mess with file permissions, this would break our existing clients. Only when the file is "Public" and we utilize URLs of the form https://googledrive.com/host/PARENT_FOLDER_ID/FILENAME can non-Google clients read the files without interference.
We have not found any way for clients that are not Google-authenticated to upload a file to Google Drive. Our API allows our authorized clients to PUT files directly to the backing file storage using URLs provided by our server. However, even if a folder is marked "Public", the client requires Google authentication credentials to save to Google Drive. We could deal with both of these issues with intermediate hops through our system (e.g., our web server would first download the file from Google Drive and then allow the client to GET it), but this would be woefully inefficient and, hopefully, unnecessary. These problems have been discussed multiple times before on Stack Overflow (e.g., here and here). We have read the responses very carefully but have not seen any recent discussion.
The Google folks direct their API users to post on Stack Overflow for support, so I am hoping for a fresh look from insiders.
The general answer is: don't make the Drive requests through the user's browser. Instead, do everything from your servers. You are the one holding the (refresh) tokens for your users, so you should make all requests, acting like a proxy between the user and Drive. The same goes for downloading: you download the file and return it to the user. As long as you use each user's own Drive token, there shouldn't be rate limit/quota issues.
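A sketch of that proxy, using the googleapis Node client (the client ID/secret and the refresh-token lookup are placeholders you would wire to your own storage):

```typescript
import { google } from "googleapis";
import type { Request, Response } from "express";

// Sketch of the proxy approach: the server holds the user's refresh token
// and streams the file bytes through to the client.
const oauth2 = new google.auth.OAuth2("CLIENT_ID", "CLIENT_SECRET"); // placeholders

async function downloadFromDrive(req: Request, res: Response): Promise<void> {
  // TODO: authorize the caller against *your* system first,
  // then load the refresh token you stored for the file's owner.
  oauth2.setCredentials({ refresh_token: lookupRefreshToken(req) });
  const drive = google.drive({ version: "v3", auth: oauth2 });

  // alt: "media" asks Drive for the file content rather than its metadata.
  const file = await drive.files.get(
    { fileId: req.params.fileId, alt: "media" },
    { responseType: "stream" },
  );
  file.data.pipe(res); // relay the bytes straight to the client
}

// Hypothetical helper: however you persisted the user's refresh token.
function lookupRefreshToken(req: Request): string {
  return "stored-refresh-token"; // placeholder
}
```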

Is it possible to create dynamic user permissions in S3 and How?

I have an iOS app and I would like my users to upload images to S3 directly.
I need UserX to be able to upload files to __MY_BUCKET__/UserX/* only, so that each of my users has their own folder and only they can modify the content in it.
Given that scenario, I need to create dynamic permissions for my S3 bucket.
Is that possible?
If it is... am I on the right path or not?
What I've done so far:
I am using this guide to create an Elastic Beanstalk app with a Token Vending Machine (TVM). Then I used this other guide to configure the TVM.
My issue is that neither guide shows an example of how to register a dynamic user (my app user), how to get the token from the TVM, or how to say "Hey TVM, this userID needs upload access to __BUCKET_/ThiUserID/*" from an iOS app.
So I guess what I wonder is how to fill those gaps, if what I am trying to achieve is possible at all.
I'm one of the maintainers of the AWS Mobile SDKs. The page you linked to includes projects for both iOS and Android that show how to integrate the customized TVM code in a mobile application. I suggest you look there, and if you need further clarification, please update your question with specific questions about the code.
You may also want to look at our web identity federation sample which is included with the SDKs. In combination with IAM policy variables, you can generate dynamic policies without the use of a Token Vending Machine.
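For a feel of what a TVM does under the hood, here is a sketch that vends temporary credentials locked to a single user's prefix via STS GetFederationToken; MY_BUCKET and the userId wiring are placeholders, and your backend must authenticate the user before calling this:

```typescript
import { STSClient, GetFederationTokenCommand } from "@aws-sdk/client-sts";

const sts = new STSClient({ region: "us-east-1" });

// Vend temporary credentials whose inline policy only allows access to
// the caller's own folder. The caller is assumed to be authenticated
// by your own backend before this runs.
async function vendCredentials(userId: string) {
  const policy = {
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Action: ["s3:PutObject", "s3:GetObject"],
        // MY_BUCKET is a placeholder; only keys under <userId>/ are allowed.
        Resource: `arn:aws:s3:::MY_BUCKET/${userId}/*`,
      },
    ],
  };

  const { Credentials } = await sts.send(
    new GetFederationTokenCommand({
      Name: userId.slice(0, 32),   // federated user name (max 32 chars)
      Policy: JSON.stringify(policy),
      DurationSeconds: 3600,       // credentials live for one hour
    }),
  );
  return Credentials; // AccessKeyId, SecretAccessKey, SessionToken, Expiration
}
```

The app would then use those temporary credentials with the S3 SDK; anything outside the user's own prefix is denied by the scoped policy.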