Edit S3 doc with Google Docs and store it back to S3

I am trying to write a program that would allow my users to edit their S3 docs with Google Docs and then the program would store it back to their S3 bucket.
Any ideas on how to start?
I know it's possible to simply open a document with Google Docs by supplying a URL.

While Google Docs is able to open a document from a URL, it cannot "save" the document back to Amazon S3.
Your users would need to save the document within Google Docs (on Google Drive), then your program would need to retrieve that document and save it into Amazon S3.
The problem is... how do you trigger your program to perform the export?
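If you do build that export step yourself, here is a minimal sketch of the retrieve-and-store half, assuming the Google Drive v3 Python client and boto3; the file ID, bucket, and key are placeholders:

```python
# Sketch: export a Google Doc via the Drive API and upload it to S3.
# Assumes Drive credentials and AWS credentials are already configured.
import io

import boto3
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

def export_doc_to_s3(creds, file_id, bucket, key):
    drive = build("drive", "v3", credentials=creds)
    # Export the Google Doc as .docx (Docs files have no native binary form).
    request = drive.files().export_media(
        fileId=file_id,
        mimeType="application/vnd.openxmlformats-officedocument"
                 ".wordprocessingml.document",
    )
    buf = io.BytesIO()
    downloader = MediaIoBaseDownload(buf, request)
    done = False
    while not done:
        _, done = downloader.next_chunk()
    buf.seek(0)
    # Stream the exported bytes straight into the user's S3 bucket.
    boto3.client("s3").upload_fileobj(buf, bucket, key)
```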
As an alternative, you could synchronize between Google Drive and Amazon S3. See:
Zapier
CloudHQ
GoodSync
...and probably many more!

Related

Best way to preview private S3 documents

Users can upload documents through my site. They are stored in a private AWS S3 bucket. If they want to preview those uploaded files from my site, I generate a pre-signed URL and stick it in an <embed> tag. I was wondering if this is the best way to do this?
It currently works most of the time, but a few users don't get the preview; instead their browser wants to download the file. Mostly on Windows machines.
Any insight or help would be greatly appreciated!
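For reference, here is a minimal sketch of the pre-signed URL generation described above, assuming boto3; the bucket and key are placeholders. Overriding the response Content-Type and Content-Disposition in the signature is one common way to encourage browsers to preview inline instead of downloading:

```python
# Sketch: sign a GET URL that forces an inline PDF response.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={
        "Bucket": "my-private-bucket",   # placeholder bucket name
        "Key": "uploads/report.pdf",     # placeholder key
        # Browsers that see the wrong Content-Type, or an "attachment"
        # disposition, tend to download instead of previewing.
        "ResponseContentType": "application/pdf",
        "ResponseContentDisposition": "inline",
    },
    ExpiresIn=300,  # URL valid for 5 minutes
)
# Embed the result, e.g. <embed src="{url}" type="application/pdf">
```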

Migrate videos from Vimeo to S3

I have a large quantity of videos on my Vimeo account that I would like to migrate to my AWS S3 account.
Rather than go through the time-consuming process of downloading from Vimeo to my local machine and then uploading from my local machine to S3, is there a way to do a direct transfer from Vimeo to S3?
If possible, I would want to create a script that iterates through each video via the Vimeo API, sets up the path to where it would go in S3, and then initiates a direct transfer. Any ideas or suggestions would be much appreciated!
If you have a PRO account or higher, you can use the API to get download links for videos on your account, including download links for the original source file. Those download links can then be used to import the files into S3. Note that the links provided via the Vimeo API are expiring HTTP 302 redirects to the video file resource, so make sure you take note of the expiration time also provided in the response.
Download links are returned with the rest of a video's metadata, so I suggest using the fields parameter to only return the metadata needed.
http://developer.vimeo.com/api/common-formats#json-filter
https://developer.vimeo.com/api/reference/videos#GET/users/{user_id}/videos
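Here is a rough sketch of that direct transfer, assuming the requests and boto3 libraries and using the /me/videos shortcut for your own account; the access token, bucket, and key scheme are placeholders, and only videos exposing a "source" download link are copied:

```python
# Sketch: list videos via the Vimeo API, pick the source download link,
# and stream each file into S3 without touching local disk.
import boto3
import requests

VIMEO_TOKEN = "..."  # PRO-tier access token (placeholder)
s3 = boto3.client("s3")

resp = requests.get(
    "https://api.vimeo.com/me/videos",
    headers={"Authorization": f"bearer {VIMEO_TOKEN}"},
    params={"fields": "uri,name,download"},  # only fetch what we need
)
resp.raise_for_status()

# First page only; a real script would follow the paging links in the response.
for video in resp.json()["data"]:
    # Prefer the original source file; these links expire, so use them promptly.
    source = next(
        (d for d in video.get("download", []) if d.get("quality") == "source"),
        None,
    )
    if source is None:
        continue
    video_id = video["uri"].rsplit("/", 1)[-1]
    with requests.get(source["link"], stream=True) as dl:
        dl.raise_for_status()
        # upload_fileobj streams the HTTP body to S3 in multipart chunks.
        s3.upload_fileobj(dl.raw, "my-video-bucket", f"vimeo/{video_id}.mp4")
```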

Download files from a specific user's drive with the Google Drive API

Is there a way to download files from a specific user's Google Drive using the Google Drive API? Currently I can only read the drive of the Google user who is logged in.
In order to access data owned by someone on Google Drive, you need their permission. You can't just access my files unless I let you. The most common method for this is OAuth2, but you can also use a service account.
Now, if I set a file to public, you would be able to read it using an API key, but I would have to give you the file ID.
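For the service-account route, here is a minimal sketch assuming the google-auth and Google API Python clients, and that the file's owner has shared the file with the service account's email address; the credentials path and file ID are placeholders:

```python
# Sketch: authenticate as a service account and download a shared file.
import io

from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder credentials path
    scopes=["https://www.googleapis.com/auth/drive.readonly"],
)
drive = build("drive", "v3", credentials=creds)

# get_media downloads the raw bytes of a binary file (use export_media
# for native Google Docs formats).
request = drive.files().get_media(fileId="FILE_ID")  # placeholder file ID
buf = io.BytesIO()
downloader = MediaIoBaseDownload(buf, request)
done = False
while not done:
    _, done = downloader.next_chunk()
# buf now holds the file's bytes.
```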

How to open documents with Office365 in my web application?

We have a cloud-based audit application. While performing an audit, a user typically uploads a lot of documents. Currently, in order to view the documents, the user has to download them. The business requirement is that clicking a document should open it directly in another browser tab using Office 365, just like Dropbox/OneDrive. The user should be able to view it, edit it, save it on the server (without downloading), and close it. How can we achieve that in our application?
Our web app is built using ReactJS, NodeJS & MongoDB. Whenever a user uploads a document, it gets saved in an AWS S3 bucket.
I went through the Microsoft Graph API and the OneDrive REST APIs. It looks like the only solution is to use the OneDrive APIs to save files in OneDrive instead of S3, which should then allow you to use the Office 365 apps. Is this the right solution? Am I missing anything?
Is there any other solution?
While the easiest solution is indeed to store the documents in OneDrive, there's also another way. You can enroll in Microsoft's Cloud Storage Partner Program and implement the WOPI protocol on your service. This would allow the Office Online viewers/editors to integrate with your service's data directly.
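To give a sense of the scope, here is a toy sketch of the two core WOPI callbacks (CheckFileInfo and GetFile/PutFile) that Office Online would make against your service. Flask and the in-memory store are illustrative only; a real WOPI host must also validate access tokens and proof keys, and would read and write S3 instead:

```python
# Sketch: minimal WOPI host endpoints that Office Online calls back into.
from flask import Flask, Response, jsonify, request

app = Flask(__name__)
FILES = {"doc1": b"hello"}  # placeholder store; yours would be S3

@app.get("/wopi/files/<file_id>")
def check_file_info(file_id):
    # CheckFileInfo: metadata Office Online needs before opening the file.
    data = FILES[file_id]
    return jsonify({
        "BaseFileName": f"{file_id}.docx",
        "Size": len(data),
        "UserCanWrite": True,
    })

@app.get("/wopi/files/<file_id>/contents")
def get_file(file_id):
    # GetFile: return the raw document bytes.
    return Response(FILES[file_id], mimetype="application/octet-stream")

@app.post("/wopi/files/<file_id>/contents")
def put_file(file_id):
    # PutFile: Office Online posts the edited document back here on save.
    FILES[file_id] = request.get_data()
    return ("", 200)
```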
You need to use both the AWS and Office 365 APIs to reach a working solution. Try the following steps (PS: I have not tried this, but I have saved edited documents from Office 365 to AWS):
1. Read the uploaded document from S3 using the AWS APIs and upload it to OneDrive using the Office/OneDrive APIs (see the sketch after these steps).
2. Edit the doc using the Office 365 apps.
3. Save the doc back to S3.
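A minimal sketch of that round trip (steps 1 and 3), assuming boto3, requests, and a Microsoft Graph bearer token obtained elsewhere; the bucket, key, and file names are placeholders, and the simple upload shown is only suitable for small files (larger ones need a Graph upload session):

```python
# Sketch: S3 -> OneDrive for editing, then OneDrive -> S3 after saving.
import boto3
import requests

s3 = boto3.client("s3")
GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "..."  # Graph access token (placeholder)

# Step 1: pull the document out of S3.
obj = s3.get_object(Bucket="audit-docs", Key="uploads/report.docx")
body = obj["Body"].read()

# Simple upload into OneDrive so Office 365 can open and edit it.
put = requests.put(
    f"{GRAPH}/me/drive/root:/report.docx:/content",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data=body,
)
put.raise_for_status()

# Step 3: after editing, fetch the file back and store it in S3 again.
get = requests.get(
    f"{GRAPH}/me/drive/root:/report.docx:/content",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
get.raise_for_status()
s3.put_object(Bucket="audit-docs", Key="uploads/report.docx", Body=get.content)
```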

How to Give Access to non-public Amazon S3 bucket folders using Parse authenticated user

We are developing a mobile app using Parse as our BaaS solution but using Amazon S3 for storage of our media files. All of our users upload media files into their own individual folders inside of our app's bucket. As a user uploads media files, we update their records in Parse so it knows where to download the files. That's the easy part.
I've spent quite a bit of time researching the different policies for S3 buckets and I am trying to get a grip on the proper way to ensure the security of the content uploaded. If you do all of your work with DynamoDB or SimpleDB then it's easy because you're essentially adjusting your ACLs with the IAM accounts and whatnot. If you use Amazon Cognito it's also easy because authentication happens through Google, Facebook or Amazon accounts. In my case I am using Parse to authenticate users which cannot speak to Amazon directly.
My goal is that only the currently logged-in Parse user with ID #1234567 can access their own 1234567 folder and files (as well as any other user given permission by this person for collaboration). Here is a post similar to what I'm trying to accomplish: amazon S3 bucket policy - restricting access by referer BUT not restricting if urls are generated via query string authentication
...but how do I accomplish this with the current user's ID number?
An even better question is whether the post mentioned above is best practice, or whether I should instead be looking at creating an EC2 server to handle access to these files. Should I be looking at CloudFront to serve private content? Or is there another method that works better for what I am trying to accomplish? I am going in circles and my head is spinning.
Thanks to whoever can help straighten me out.
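For what it's worth, the query-string-authentication approach from the linked post maps onto this setup as follows: your own backend verifies the Parse session, then signs URLs only for keys under that user's prefix. A minimal sketch, assuming boto3; the bucket name and helper are hypothetical:

```python
# Sketch: per-user pre-signed URLs, scoped to the user's own folder.
import boto3

s3 = boto3.client("s3")

def signed_url_for(parse_user_id: str, filename: str) -> str:
    # The prefix is the security model: after your server validates the
    # Parse session, it will only ever sign keys inside that user's folder.
    key = f"{parse_user_id}/{filename}"
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-app-media", "Key": key},  # placeholder bucket
        ExpiresIn=600,  # short-lived link
    )

# e.g. signed_url_for("1234567", "photo.jpg") after checking the session
```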
Well, since Parse is being shut down, I am migrating to another service. This question is no longer relevant.