Automatically back up Google Drive to a local server
Hello,
We use Google Workspace.
I would like to find a way to automatically back up our Google files to a local server via a cron job.
I know that backing up local files to Google Drive is possible via rclone.
Would it be possible to use rclone in the other direction, Google Drive -> local server?
Obviously Google offers a way to retrieve the data via https://admin.google.com/ac/customertakeout, but that does not correspond to what I want to do. Ideally, I would like to have automatic local backups in case of hacks, etc.
Otherwise, maybe this could be done with a Python script and the Google API, but I can't find anything in the Google documentation that explains this.
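To make it concrete, something like this is what I had in mind for the cron side, assuming rclone can indeed sync in that direction (the remote name, paths, and log location are placeholders I made up):

```python
#!/usr/bin/env python3
"""Sketch of the intended cron job: pull a copy of the Drive remote down to
the local server with rclone. "gdrive:" and the paths are placeholders; the
remote would first have to be set up with `rclone config`."""
import subprocess
from datetime import date

LOCAL_DIR = "/srv/backups/google-drive"                  # assumed local target
LOG_FILE = f"/var/log/rclone-gdrive-{date.today()}.log"  # assumed log location

# rclone copies in whichever direction the arguments are given,
# so remote -> local is just a matter of argument order.
subprocess.run(
    [
        "rclone", "sync",
        "gdrive:",        # source: the Google Drive remote
        LOCAL_DIR,        # destination: the local server
        "--log-file", LOG_FILE,
        "--log-level", "INFO",
    ],
    check=True,
)
```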
Related
I am working on customizing the Oracle Service Cloud customer portal, but OSvC provides only WebDAV to connect to it, so it is very time-consuming to edit files and then upload them to WebDAV, even for a single-word change.
I am looking for a solution to serve it locally, make the desired changes, and then upload the resulting code to WebDAV.
But after searching the file structure I cannot tell which framework it uses. I tried websites like https://builtwith.com/ and WhatRuns, but they are also not able to find anything useful.
After searching the file structure I did find some CodeIgniter files, but the layout is quite different from the standard CodeIgniter folder structure.
The short answer is no, you will not be able to run Customer Portal locally. While it is a fork of CodeIgniter from many years ago, there are server-side dependencies that will prevent you from running it in a local sandbox.
That said, it is possible to automate many of the manual tasks of interacting with WebDAV for change testing. If you edit locally, then you can use scripting hooks or even RPA robots to automate some of the manual file movement. Personally, I have a flow where I edit remotely in my test environment with an editor (like VSCode or Nova) that can connect to a remote server via WebDAV and edit files directly in the development area of a site. Then, when finished, I have a script that pulls down the latest version of all files and allows me to commit the changes to Git for SCM.
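As a rough illustration of that kind of scripting hook (not my actual tooling; the site URL, credentials, and paths below are placeholders), pushing locally edited files back up over WebDAV can be as simple as looping over them and issuing HTTP PUTs:

```python
"""Sketch: upload locally edited CP files to the development area over WebDAV.
The site URL, credentials, and local path are placeholders."""
import pathlib
import requests

DAV_BASE = "https://yoursite.custhelp.com/dav/cp/customer/development"
AUTH = ("osvc_user", "osvc_password")                 # assumed OSvC login
LOCAL_ROOT = pathlib.Path("cp/customer/development")  # assumed local checkout

for path in LOCAL_ROOT.rglob("*.php"):
    remote_url = f"{DAV_BASE}/{path.relative_to(LOCAL_ROOT).as_posix()}"
    # WebDAV accepts a plain HTTP PUT for file uploads.
    with path.open("rb") as fh:
        resp = requests.put(remote_url, data=fh, auth=AUTH)
    resp.raise_for_status()
    print(f"uploaded {path} -> {remote_url}")
```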
Another option is RPA. You can develop a robot that can be run to automate the manual tasks that you face in your workflow. Personally, I think that scripting is a better solution than RPA since you can automate all of the actions via scripting or a shell. But, it's another option to consider.
Another way of "live editing" the OSvC CP code is to connect to WebDAV via software that supports it, like Mountain Duck, which uploads your code to OSvC on save.
Or use the better solution, Windows Explorer, which supports connecting to WebDAV and treating it like a network drive: go to My Computer -> Computer -> Map Network Drive, enter https://yoursite.custhelp.com/dav/cp, click Next, and you'll be prompted to log in with your OSvC credentials.
I am a newbie to AWS, and one of the tasks I have is to figure out how to download MSIs and ISOs stored in S3 through a web browser. I read that I could use the CLI behind the scenes, so if a customer clicks on a download, the app would make a request to S3 using one of the commands and that would download the file, let's say through Google Chrome or IE (please correct me if I'm wrong about the usage of the CLI).
Now, if the download stops for some reason such as an internet failure, is there a way to resume the download? How do I get a download done through a client?
Thanks in advance for helping. Unfortunately the AWS links gave me very little information so seeking help here!
Files stored in Amazon S3 can be directly accessed via a web browser, just like clicking a link on any website.
If the files are marked as publicly-accessible, anyone with the link can download the file.
If you wish to limit access to the files, your application can generate a pre-signed URL that will work for a limited time period that you specify (eg 5 minutes). Users can use/click that link to download the file within that time period.
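For example, a minimal sketch of generating such a pre-signed URL with boto3, using made-up bucket and key names:

```python
"""Sketch: create a pre-signed download URL that expires after 5 minutes.
The bucket and key names are placeholders."""
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-installer-bucket", "Key": "installers/setup.msi"},
    ExpiresIn=300,  # the link stays valid for 5 minutes
)
print(url)  # hand this URL to the browser; it downloads like any normal link
```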
You can also download files using the AWS Command-Line Interface (CLI), which has copy and sync commands. This would, however, require installation of the CLI on the user's computer. It is a good fit if they regularly download files or if you wish to automate the download (eg every hour or daily).
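If you do go the CLI route and want to automate it, a scheduled wrapper along these lines would work (the bucket name and local directory are placeholders, and the CLI must already be installed and configured on the machine):

```python
"""Sketch: mirror an S3 prefix to a local folder on a schedule (cron or
Task Scheduler). Bucket name and local directory are placeholders."""
import subprocess

subprocess.run(
    ["aws", "s3", "sync", "s3://my-installer-bucket/installers", "./installers"],
    check=True,
)
```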
If you wish to explore AWS, sign-up for an account and make use of the Free Usage Tier, which lets you try some services for no charge.
We have a requirement to provide the capability to upload files up to 100 GB in size. The current flow is to put the file from the client location/local system onto the application server; the application server then pushes the file to a service account on the Google Drive server. I would like to know if there is a way to push the file from local system directly to service account in Google Drive. This would help us avoid having to store such big files on the application server. Please let me know.

Also would like to know if we can actually have Drive installed in our local system to point to a service account. This way these big files can be put into the Drive location and synced to the server in the background.
I would like to know if there is a way to push the file from local system directly to service account in Google Drive
The only way I know is for you to upload them. The Upload Files page in the Drive API documentation details this feature. In your case, you'll have to use uploadType=resumable due to the file size you'll upload.
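As a rough sketch, a resumable upload with the Drive API Python client and a service account might look like this (the key file, scope, chunk size, and file names are placeholders):

```python
"""Sketch: resumable upload of a large file to Drive as a service account.
The key file, scope, and file names are placeholders."""
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/drive.file"],
)
drive = build("drive", "v3", credentials=creds)

# resumable=True makes the client use uploadType=resumable and send the file
# in chunks, which is what you want for something in the 100 GB range.
media = MediaFileUpload("big-file.iso", resumable=True, chunksize=256 * 1024 * 1024)
request = drive.files().create(body={"name": "big-file.iso"}, media_body=media)

response = None
while response is None:
    status, response = request.next_chunk()
    if status:
        print(f"uploaded {int(status.progress() * 100)}%")
print("file id:", response.get("id"))
```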
Also would like to know if we can actually have Drive installed in our local system to point to a service account
Syncing a la Dropbox might be a bit tricky; I haven't read anything in the Drive documentation about such a feature. Syncing to desktop is usually just a .glink shortcut that will open up a browser.
I am looking for a way to transfer files from a server to Amazon S3 bucket, without first downloading the files to my computer. All of the files I plan to transfer can be accessed publicly (e.g. http://something.com/file.ext). Everything I tried only allows me to directly upload files from my Mac to S3.
P.S. Although I have access to windows, a Mac app that can do this would be great... or maybe a browser-based solution :)
You can check out this PHP class (and a Net Tuts tutorial on it); it works well, and I've been using it for a while now. It includes bucket creation, deletion, adding files, and more. You can easily add files remotely from another server, or from the same server you're running it on.
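If you would rather not use PHP, a rough equivalent in Python with requests and boto3 streams the public URL straight into the bucket without writing the file to local disk first (the URL, bucket, and key are placeholders); run it on whichever server is convenient:

```python
"""Sketch: copy a publicly accessible file into S3 without saving it locally.
The source URL, bucket, and key are placeholders."""
import boto3
import requests

SOURCE_URL = "http://something.com/file.ext"
BUCKET = "my-target-bucket"
KEY = "file.ext"

s3 = boto3.client("s3")

with requests.get(SOURCE_URL, stream=True) as resp:
    resp.raise_for_status()
    # resp.raw is a file-like object, so boto3 reads it in chunks and does a
    # multipart upload without ever buffering the whole file.
    s3.upload_fileobj(resp.raw, BUCKET, KEY)
```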
I am starting to use Jungle Disk to upload files to an Amazon S3 bucket which corresponds to a Cloudfront distribution. i.e. I can access it via an http:// URL and I am using Amazon as a CDN.
The problem I am facing is that Jungle Disk doesn't set 'read' permissions on the files so when I go to the corresponding URL in a browser I get an Amazon 'AccessDenied' error. If I use a tool like BucketExplorer to set the ACL then that URL now returns a 200.
I really, really like the simplicity of dragging files to a network drive. JungleDisk is the best program I've found to do this reliably without tripping over itself and getting confused. However, it doesn't seem to have an option to make the files readable.
I really don't want to have to go to a different tool (especially if I have to buy it) just to change the permissions, and that seems really slow anyway because these tools generally traverse the whole directory structure.
JungleDisk provides some kind of 'web access' - but this is a paid feature and I'm not sure if it will work or not.
S3 doesn't appear to propagate permissions down which is a real pain.
I'm considering writing a manual tool to traverse my tree and set everything to 'read' but I'd rather not do this if this is a problem someone else has already solved.
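For reference, the kind of manual tool I'm considering would be little more than a loop like this (sketched here with boto3; the bucket name is a placeholder):

```python
"""Sketch: walk every object in the bucket and mark it publicly readable.
The bucket name is a placeholder."""
import boto3

BUCKET = "my-cloudfront-bucket"
s3 = boto3.client("s3")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        # Grant anonymous read access so browser/CloudFront requests stop
        # returning AccessDenied.
        s3.put_object_acl(Bucket=BUCKET, Key=obj["Key"], ACL="public-read")
        print("made public:", obj["Key"])
```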
Disclaimer: I am the developer of this tool, but I think it may answer your question.
If you are on Windows, you can use the CloudBerry Explorer Amazon S3 client. It supports most of the Amazon S3 and CloudFront features, and it is freeware.
I use the Transmit Mac app to modify permissions on files I've already uploaded with JungleDisk. If you're looking for a more cross-platform solution, the S3Fox browser plugin for Firefox claims to be able to modify permissions on S3 files as well.
If you need a web-based tool, you can use S3fm, a free online Amazon S3 file manager.
It's a pure Ajax app that runs in your browser and doesn't require sharing your credentials with a 3rd party web site.
If you need a reliable cross-platform tool to handle permissions, you can have a look at CrossFTP Pro. It supports most of the Amazon S3 and CloudFront features as well.