Copy documents from Google Drive to Amazon S3 programmatically using Java - amazon-s3

I have downloaded files from Google Drive and saved them to my local system using the Google Drive API with Java. My aim is to make a copy of documents from Google Drive to Amazon S3.
I can achieve this by downloading the Google Drive documents into my local directory and uploading them to Amazon S3 using the S3 utility's public void uploadToBucket(int userId, String bucketName, String fileName, File fileData) method.
Is there a direct way to achieve this? That is, I want to remove one step: I don't want to download the documents to my local machine. Instead, I would like to pass the Google Drive document's download URL to an S3 method so that it saves the document into S3. Is that possible? Any suggestions? Sorry for the essay-style question.

You will need a server somewhere to run your code, as both Google Drive and Amazon S3 are closed services: you cannot add your own code to them.
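That said, you can cut out the intermediate file on disk: open the Drive download as an InputStream and hand that stream directly to S3. A minimal sketch, assuming the Google Drive v3 Java client and the AWS SDK for Java v1; the file ID and bucket name are placeholders, and Google-native Docs would need files().export() instead of a plain media download:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.File;

import java.io.InputStream;

public class DriveToS3 {

    // Streams a binary Drive file into an S3 bucket without writing it to local disk.
    public static void copy(Drive drive, AmazonS3 s3, String fileId, String bucketName) throws Exception {
        // Fetch metadata first so we know the object's name, size and MIME type.
        File gFile = drive.files().get(fileId).setFields("name, size, mimeType").execute();

        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(gFile.getSize());   // a known length lets the SDK stream instead of buffering
        metadata.setContentType(gFile.getMimeType());

        // Open the Drive download as a stream and pass it straight to putObject.
        try (InputStream in = drive.files().get(fileId).executeMediaAsInputStream()) {
            s3.putObject(bucketName, gFile.getName(), in, metadata);
        }
    }
}

The bytes still pass through your server, but only a buffer at a time, so large files don't need matching local storage.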

Related

Directly download from a link and upload file to GCS

Is there a way to download an MP4 file directly and store it in a Google Cloud Storage bucket? We have a use case where we get a file URL, download the file and upload it to a cloud bucket. However, since the file size can be more than 1 GB, it is not feasible to download it to local storage first and then upload it to the cloud bucket. We are specifically looking at Google Cloud Storage for uploading the files, and the solution should be specific to it.
We found some reference docs, but they do not look like a feasible solution, as they upload files from local storage rather than directly from a link.
https://googleapis.dev/ruby/google-cloud-storage/latest/Google/Cloud/Storage.html
https://www.mydatahack.com/uploading-and-downloading-files-in-s3-with-ruby/
Google Cloud Storage does not offer compute features. That means you cannot directly load an object into Cloud Storage from a URL. You must fetch the object and then upload it into Cloud Storage.
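The fetch-then-upload can still be done without local storage, though: stream the HTTP response into a Cloud Storage write channel chunk by chunk. A rough sketch, assuming the google-cloud-storage Java client; the URL, bucket and object name are placeholders:

import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.io.InputStream;
import java.net.URL;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;

public class UrlToGcs {
    public static void main(String[] args) throws Exception {
        Storage storage = StorageOptions.getDefaultInstance().getService();
        BlobInfo blob = BlobInfo.newBuilder("your-bucket", "videos/sample.mp4").build();

        // Copy the download stream straight into a GCS write channel,
        // one buffer at a time, so only a small chunk is ever held in memory.
        try (InputStream in = new URL("https://example.com/big-file.mp4").openStream();
             ReadableByteChannel source = Channels.newChannel(in);
             WritableByteChannel target = storage.writer(blob)) {
            ByteBuffer buffer = ByteBuffer.allocate(8 * 1024 * 1024); // 8 MB chunks
            while (source.read(buffer) >= 0 || buffer.position() > 0) {
                buffer.flip();
                target.write(buffer);
                buffer.compact();
            }
        }
    }
}

Because only one buffer is in memory at a time, a file larger than 1 GB is fine on a small VM, Cloud Run service or Cloud Function, as long as the transfer finishes within any execution time limit.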

How to create an interaction between Google Drive and AWS S3?

I'm trying to set up a connection between a Google Drive folder and an S3 bucket, but I'm not sure where to start.
I've already created a sort of "Frankenstein process", but it's easy to use only by me, and sharing it with my co-workers is a pain.
I have a script that generates a plain-text file and saves it into a Drive folder. To upload, I've installed Drive File Stream to sync it to my Mac; then all I did was create a Python 3 script, using the boto3 library, to upload the text file into different S3 buckets depending on the file name.
I was thinking I could create a Lambda to process the file into the S3 buckets, but I cannot work out how to create the connection between Drive and S3. I would appreciate it if someone could give me some advice on how to start with this.
Thanks
If you simply want to connect Google Drive and AWS S3, there is a service called Zapier which provides different types of integrations without writing a line of code:
https://zapier.com/apps/amazon-s3/integrations/google-drive
For more details you can check out the link above.
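If Zapier is too restrictive and you keep the scripted route, the piece described above (choosing the S3 bucket from the file name) is small enough to live in a Lambda or any other host. A rough sketch in Java, to match the rest of this page rather than the asker's Python/boto3 script, with made-up bucket names and prefixes:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import java.io.File;

public class RouteByFileName {

    // Picks a destination bucket from the file name prefix, then uploads.
    static String bucketFor(String fileName) {
        if (fileName.startsWith("sales_"))   return "reports-sales";    // hypothetical buckets
        if (fileName.startsWith("finance_")) return "reports-finance";
        return "reports-misc";
    }

    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        File report = new File("sales_2023-01.txt");                    // e.g. the file pulled from Drive
        s3.putObject(bucketFor(report.getName()), report.getName(), report);
    }
}

The Drive side still needs something to hand the file to this code, for example Drive File Stream as today, or a Drive API download as in the sketch near the top of this page.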

How can I export all content from Cloudinary to my local drive?

Is there any method to download all content in Cloudinary as one zip file, or to download all content using a plugin?
I have multiple folders and subfolders containing images in Cloudinary.
Bulk downloading images from your Cloudinary account can currently be done in the following ways:
Using the Admin API: listing all resources and extracting their URLs for download (see the sketch after this list).
Downloading as a zip file using the ZIP generation API.
Backing up to a private S3 bucket. Please see http://support.cloudinary.com/hc/en-us/articles/203744391-How-do-I-grant-Cloudinary-with-the-permissions-to-backup-on-my-private-S3-bucket-
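As an illustration of the first option, here is a rough sketch assuming the cloudinary-java SDK exposes the Admin API via cloudinary.api().resources(...) as the other SDKs do; the credentials string and the export folder are placeholders, and an account with more than 500 assets would need to follow the returned next_cursor:

import com.cloudinary.Cloudinary;
import com.cloudinary.utils.ObjectUtils;

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.Map;

public class CloudinaryBulkDownload {
    public static void main(String[] args) throws Exception {
        Cloudinary cloudinary = new Cloudinary("cloudinary://API_KEY:API_SECRET@CLOUD_NAME");

        // List uploaded resources through the Admin API (paginate via "next_cursor" for more than 500 assets).
        Map result = cloudinary.api().resources(ObjectUtils.asMap("type", "upload", "max_results", 500));

        for (Map resource : (List<Map>) result.get("resources")) {
            String url = (String) resource.get("secure_url");
            String publicId = (String) resource.get("public_id");

            // Mirror the account's folder structure under ./cloudinary-export.
            Path target = Paths.get("cloudinary-export", publicId + "." + resource.get("format"));
            Files.createDirectories(target.getParent());
            try (InputStream in = new URL(url).openStream()) {
                Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}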

Prevent Amazon S3 bucket forcing download

I'm using an Amazon S3 bucket to store user uploads for public access.
When you click a link to the resource, it seems to force a download even when the file could be viewed in the browser (i.e. JPGs, etc.).
As pointed out by TheZuck, the problem was that the content type wasn't being set at the point the file was uploaded.
I'm using an Amazon S3 PHP class (http://undesigned.org.za/2007/10/22/amazon-s3-php-class), so I simply had to add the content type (mime_content_type($file) in PHP) when calling the putObjectFile method:
$s3class->putObjectFile($file, S3BUCKET, $target_location, S3::ACL_PUBLIC_READ, NULL, mime_content_type($file));
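For anyone hitting the same problem from Java rather than PHP, the equivalent fix is to set the content type on the object metadata at upload time. A small sketch, assuming the AWS SDK for Java v1 and a placeholder bucket, key and file:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

import java.io.File;
import java.nio.file.Files;

public class UploadWithContentType {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        File file = new File("photo.jpg");

        ObjectMetadata metadata = new ObjectMetadata();
        // probeContentType may return null for unknown types; fall back to a generic default.
        String contentType = Files.probeContentType(file.toPath());
        metadata.setContentType(contentType != null ? contentType : "application/octet-stream");

        // With the right Content-Type, the browser renders the object instead of forcing a download.
        s3.putObject(new PutObjectRequest("my-bucket", "uploads/photo.jpg", file)
                .withMetadata(metadata)
                .withCannedAcl(CannedAccessControlList.PublicRead));
    }
}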

How to receive an uploaded file using node.js formidable library and save it to Amazon S3 using knox?

I would like to upload a form from a web page and directly save the file to S3 without first saving it to disk. This node.js app will be deployed to Heroku, where there is no local disk to save the file to.
The node-formidable library provides a great way to upload files and save them to disk. I am not sure how to stop formidable (or connect-form) from saving the file to disk first. The Knox library, on the other hand, provides a way to read a file from the disk and save it on Amazon S3.
1) Is there a way to hook into formidable's events (on Data) to send the stream to Knox's events, so that I can directly save the uploaded file in my Amazon S3 bucket?
2) Are there any libraries or code snippets that can allow me to directly take the uploaded file and save it to Amazon S3 using Node.js?
There is a similar question here but the answers there do not address NOT saving the file to disk.
It looks like there is no good way to do it. One reason might be that the node-formidable library saves the uploaded file to disk; I could not find any options to do otherwise. The knox library takes the file saved on disk and, using your Amazon S3 credentials, uploads it to Amazon.
Since I cannot save files locally on Heroku, I ended up using the Transloadit service. Though their authentication docs have a bit of a learning curve, I found the service useful.
For those who want to use Transloadit with Node.js, the following code sample may help (the Transloadit page had only Ruby and PHP examples):
// Compute the request signature: an HMAC-SHA1 of a string, keyed with your auth secret.
var crypto = require('crypto');

var signature = crypto
  .createHmac('sha1', 'auth secret')   // replace 'auth secret' with your Transloadit auth secret
  .update('some string')               // replace 'some string' with the string to sign
  .digest('hex');

console.log(signature);
This is Andy, the creator of AwsSum:
https://github.com/appsattic/node-awssum/
I just released v0.2.0 of this library. It uploads the files that were created by Express's bodyParser(), though as you say, this won't work on Heroku:
https://github.com/appsattic/connect-stream-s3
However, I shall be looking at adding the ability to stream from formidable directly to S3 in the next (v0.3.0) version. For the moment though, take a look and see if it can help. :)