User permissions per directory in a static cloud-storage-based website - amazon-s3

My issue is that I want to host my website in a bucket on either Amazon S3 or Google Cloud Storage. In my website, each user has his/her own directory that they should have read/write permissions to, but I don't want each user to have read/write permissions to the entire website or to any of the other users' directories.
I can think of 3 possible solutions:
1.) I have a separate bucket for each user for which they have read/write permissions. I am trying to find a way to redirect the directories in the website bucket to the users' buckets. Would it work to just redirect the index.html page in each user's directory to their personal bucket?
2.) I have a single bucket for the entire website and I set an ACL entry for each user's directory to give them read/write permissions to only their directory (see the policy sketch after this question). The issue is that I don't think it is possible to set ACLs for directories, only for entire buckets or individual objects.
3.) I use separate git repos for each user's directory, then periodically (with a script) fetch each user's most recent changes from their git repository and update the source in the website bucket myself.
I really want to avoid option 3 if possible. So my question is: is there a way to give users read/write permissions to only their own directories within a website that is hosted on a cloud storage bucket?
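As an aside on option 2: S3 ACLs indeed cannot target directories, but an IAM policy can be scoped to a key prefix, which behaves like per-directory permissions. A minimal sketch with boto3, assuming one IAM user per site user; the bucket name, prefix, and user name are placeholders:

```python
import json
import boto3

# Hypothetical names -- substitute your own bucket and per-user prefix.
BUCKET = "example-website-bucket"
PREFIX = "users/alice/"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Let the user list only keys under their own prefix.
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            "Condition": {"StringLike": {"s3:prefix": [f"{PREFIX}*"]}},
        },
        {
            # Read/write/delete is limited to objects under that prefix.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/{PREFIX}*",
        },
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="alice",  # one IAM user per site user (assumption)
    PolicyName="s3-own-directory-only",
    PolicyDocument=json.dumps(policy),
)
```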

For scenario #1 on Google Cloud Storage, you can try out Google Cloud Load Balancer's support for Google Cloud Storage, currently in Alpha. You would have individual buckets per user with the right ACLs for that user, and then create URL map entries for each user pointing to their bucket.
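A rough sketch of creating such a URL map with the Compute API's Python client; the project, host, and backend-bucket names are placeholders, and the per-user backend buckets are assumed to already exist:

```python
from googleapiclient.discovery import build

PROJECT = "my-project"  # placeholder project id
API = "https://www.googleapis.com/compute/v1"

def backend(name: str) -> str:
    """Full resource URL for a backend bucket."""
    return f"{API}/projects/{PROJECT}/global/backendBuckets/{name}"

compute = build("compute", "v1")  # uses application-default credentials

# One path rule per user, each pointing at that user's backend bucket.
url_map = {
    "name": "per-user-site-map",
    "defaultService": backend("site-backend"),
    "hostRules": [{"hosts": ["www.example.com"], "pathMatcher": "users"}],
    "pathMatchers": [{
        "name": "users",
        "defaultService": backend("site-backend"),
        "pathRules": [
            {"paths": ["/users/alice/*"], "service": backend("alice-backend")},
            {"paths": ["/users/bob/*"], "service": backend("bob-backend")},
        ],
    }],
}

# NOTE: a complete load balancer also needs a target proxy and a
# forwarding rule; this only creates the URL map itself.
compute.urlMaps().insert(project=PROJECT, body=url_map).execute()
```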

Related

Amplify logged-in users cannot access S3 files using the Auth_Role

I have a small question regarding Cognito authentication and S3. I tried to connect external S3 and Cognito instances to Amplify (not managed by Amplify; I created them manually). In the system, we can upload files to the S3 bucket and view them. I use Amplify JS's Storage class to do all of this. The thing that got me confused is that when I give S3 permissions to the Cognito Auth_Role, my users cannot upload or view any files even though they are logged into the system. When I give permission to the Unauth_Role, it works. Any idea why?
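For reference, Amplify JS's Storage class by default stores objects under the public/, protected/, and private/ key prefixes, so a policy on the Auth_Role has to cover those paths. A minimal sketch with boto3 of what such a policy might look like; the bucket name and role name are placeholders:

```python
import json
import boto3

BUCKET = "my-app-uploads"  # placeholder bucket name
ROLE = "Auth_Role"         # the Cognito authenticated role from the question

# Amplify's Storage class writes under public/, protected/<identity-id>/,
# and private/<identity-id>/ prefixes, so the role policy must match them.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}/public/*",
                f"arn:aws:s3:::{BUCKET}/protected/${{cognito-identity.amazonaws.com:sub}}/*",
                f"arn:aws:s3:::{BUCKET}/private/${{cognito-identity.amazonaws.com:sub}}/*",
            ],
        }
    ],
}

boto3.client("iam").put_role_policy(
    RoleName=ROLE,
    PolicyName="amplify-storage-access",
    PolicyDocument=json.dumps(policy),
)
```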

Download files from a specific user's drive with google drive api

Is there a way to download files from a specific Google Drive by using the Google Drive API? Currently I can only read the drive of the logged-in Google user.
In order to access data owned by someone on Google Drive, you need their permission. You can't just access my files unless I let you. The most common method for this is OAuth2, but you can also use a service account.
Now, if I set a file to public, you would be able to read it using an API key, but I would have to give you the file ID.
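A minimal sketch of that public-file case with the google-api-python-client library; the API key and file ID are placeholders you would have to supply:

```python
import io

from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

# Works only for files the owner has made public; private files
# require OAuth2 credentials (or a service account) instead of a key.
drive = build("drive", "v3", developerKey="YOUR_API_KEY")  # placeholder key

request = drive.files().get_media(fileId="FILE_ID")        # placeholder file id
buffer = io.BytesIO()
downloader = MediaIoBaseDownload(buffer, request)

done = False
while not done:
    status, done = downloader.next_chunk()

with open("downloaded_file", "wb") as f:
    f.write(buffer.getvalue())
```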

How to Give Access to non-public Amazon S3 bucket folders using Parse authenticated user

We are developing a mobile app using Parse as our BaaS solution but using Amazon S3 for storage of our media files. All of our users upload media files into their own individual folders inside of our app's bucket. As the user uploads media files, we update their records in Parse so it knows where to download the files. That's the easy part.
I've spent quite a bit of time researching the different policies for S3 buckets and I am trying to get a grip on the proper way to ensure the security of the content uploaded. If you do all of your work with DynamoDB or SimpleDB then it's easy because you're essentially adjusting your ACLs with the IAM accounts and whatnot. If you use Amazon Cognito it's also easy because authentication happens through Google, Facebook or Amazon accounts. In my case I am using Parse to authenticate users which cannot speak to Amazon directly.
My goal is that only the currently logged in Parse user with ID #1234567 can access their own 1234567 folder and files (as well as any other user given permission by this person for collaboration). Here is a post similar to what I'm trying to accomplish: amazon S3 bucket policy - restricting access by referer BUT not restricting if urls are generated via query string authentication
...but how do I accomplish this with the current user's ID number?
An even better question is whether the post mentioned above describes best practice, or whether I should instead be looking at creating an EC2 server to handle access to these files. Should I be looking at CloudFront to serve private content? Or is there another method that works better for what I am trying to accomplish? I am going in circles and my head is spinning.
Thanks to whoever can help straighten me out.
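For context, the "query string authentication" in that linked post is what boto3 now exposes as pre-signed URLs: a server that trusts the Parse session can sign a short-lived URL for a single key under the user's folder. A rough sketch with placeholder names:

```python
import boto3

s3 = boto3.client("s3")

def signed_url_for(parse_user_id: str, filename: str) -> str:
    """Return a short-lived download URL scoped to one object in the
    user's folder. Call this only after verifying the Parse session."""
    return s3.generate_presigned_url(
        "get_object",
        Params={
            "Bucket": "my-app-media",              # placeholder bucket
            "Key": f"{parse_user_id}/{filename}",  # e.g. "1234567/photo.jpg"
        },
        ExpiresIn=300,  # URL is valid for five minutes
    )
```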
Well, since Parse is being shut down, I am migrating to another service. This question is no longer relevant.

Allowing read and write access to Google Drive files to unauthenticated clients

We have been working on a web service (http://www.genomespace.org/) that allows computational biology tools that implement our REST API to, among other things, read and write files stored on "The Cloud".
Our default file storage is a common Amazon S3 bucket, but we now allow users to mount their own private S3 bucket as well as files on Dropbox.
We are now trying to enable similar functionality for Google Drive and have run into some problems unique to Google Drive that we have not encountered with S3 or Dropbox.
The only way we have found to allow clients that are not Google-authenticated to read files unobtrusively is to make the files "Public". Our preference would be that, once the user has authorized access to our application via OAuth2, the user's files could remain "Private" in Google Drive.
However, even though the user has already authorized our web service for offline access to their "Private" files, we have not found a way to generate a URL that a client authorized by our system can use to GET the file directly without the client also being logged into Google.
The closest we have come to this functionality has been to change the file permissions to "Anyone with Link", except that for files greater than 20MB Google insists on returning an intermediate web page warning that the file has not been scanned for viruses. In addition to having to mess with file permissions, this would break our existing clients. Only when the file is "Public" and we utilize URLs of the form https://googledrive.com/host/PARENT_FOLDER_ID/FILENAME can non-Google clients read the files without interference.
We have not found any way for clients that are not Google-authenticated to upload a file to Google Drive. Our API allows our authorized clients to PUT files directly to the backing file storage using URLs provided by our server, but even if a folder is marked "Public", the client needs Google authentication credentials to save to Google Drive.
We could deal with both of these issues with intermediate hops through our system (e.g., our web server would first download the file from Google Drive and then allow the client to GET it), but this would be woefully inefficient and, hopefully, unnecessary. These problems have been discussed multiple times before on Stack Overflow (e.g., here and here); we have read the responses very carefully but have not seen any recent discussion.
The Google folks direct their API users to post on stackoverflow for support, so I am hoping for a fresh look from insiders.
The general answer is: don't make the Drive requests through the user's browser. Instead, do everything from your servers. You are the one holding the (refresh) tokens for your users, so you should make all requests yourself, acting as a proxy between the user and Drive. The same goes for downloading: you download the file and then return it to the user. As long as you use each user's own token, there shouldn't be rate limit/quota issues.
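A minimal sketch of that proxy pattern with the Python Drive client, assuming each user's OAuth2 refresh token has been stored server-side; the client ID and secret are placeholders:

```python
import io

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

def fetch_from_drive(user_refresh_token: str, file_id: str) -> bytes:
    """Server-side download on the user's behalf; the bytes are then
    returned to the (non-Google-authenticated) client by our server."""
    creds = Credentials(
        token=None,                        # access token refreshed on demand
        refresh_token=user_refresh_token,
        token_uri="https://oauth2.googleapis.com/token",
        client_id="YOUR_CLIENT_ID",        # placeholder OAuth2 client
        client_secret="YOUR_CLIENT_SECRET",
    )
    drive = build("drive", "v3", credentials=creds)

    buffer = io.BytesIO()
    downloader = MediaIoBaseDownload(buffer, drive.files().get_media(fileId=file_id))
    done = False
    while not done:
        _, done = downloader.next_chunk()
    return buffer.getvalue()
```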

How to link a video from an Amazon S3 account into my website

I have an Amazon S3 account. As I am expecting a large amount of traffic to my site, I want the videos to be placed in my S3 account and streamed to my website. How can I do that?
The simplest way is to upload it using the AWS Management Console for S3, use that to set its permissions to be publicly accessible, and then just access the usual S3 URL for it:
http://bucket-name.s3.amazonaws.com/key-name
Depending on exactly how much traffic you're getting, you can look into using Amazon's CloudFront content delivery network. That will speed things up for your users, especially if they span the globe.
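If you would rather script the upload than use the console, a small boto3 sketch; the file, bucket, and key names are placeholders, and the bucket is assumed to allow public ACLs:

```python
import boto3

s3 = boto3.client("s3")

# Upload the video and make it publicly readable, so it can be served
# straight from http://bucket-name.s3.amazonaws.com/key-name.
s3.upload_file(
    "intro.mp4",             # local file (placeholder)
    "bucket-name",           # your S3 bucket
    "videos/intro.mp4",      # the key-name part of the URL
    ExtraArgs={
        "ACL": "public-read",
        "ContentType": "video/mp4",  # so browsers stream instead of download
    },
)
```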