How to load a CSV file automatically into Google Cloud Platform? - google-bigquery

I am new to GCP. I want to load a CSV file automatically into a Google Cloud Platform component such as BigQuery, Bigtable, etc. I do not want to do the manual work of loading a file every day; I want GCP to handle it automatically. Please suggest a scenario so I can load the file automatically.
Thanks in advance

Building on Pentium's solid answer, you also have the option of the following (serverless) conga line:
GCS -> Cloud Functions -> Dataflow (template) -> BigQuery
We use this pattern on a lot of our projects, and it works beautifully. It's event-driven, scalable to petabytes, fully automated, and zero-ops.
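To make that concrete, here is a rough Python sketch of the Cloud Functions hop, assuming the Google-provided GCS_Text_to_BigQuery template; the project, bucket, and table names are placeholders, and the template expects a few more parameters than shown (see its documentation):

```python
# main.py - hypothetical Cloud Function, deployed with a
# google.storage.object.finalize trigger on the landing bucket.
from googleapiclient.discovery import build

def gcs_to_dataflow(event, context):
    """Launch a Dataflow template job for the CSV that just landed."""
    project = 'my-project'  # placeholder
    input_file = 'gs://{}/{}'.format(event['bucket'], event['name'])

    dataflow = build('dataflow', 'v1b3')  # uses the function's default credentials
    dataflow.projects().templates().launch(
        projectId=project,
        gcsPath='gs://dataflow-templates/latest/GCS_Text_to_BigQuery',
        body={
            # Job names allow lowercase letters, digits, and dashes;
            # sanitize further for real file names.
            'jobName': 'csv-load-' + event['name'].replace('.', '-'),
            'parameters': {
                'inputFilePattern': input_file,
                'outputTable': project + ':my_dataset.my_table',  # placeholder
                'bigQueryLoadingTemporaryDirectory': 'gs://my-temp-bucket/tmp',
                # The template also needs a JSON schema file and a JS
                # transform (JSONPath, javascriptTextTransformGcsPath,
                # javascriptTextTransformFunctionName).
            },
        },
    ).execute()
```

With that wired up, every CSV dropped into the bucket kicks off a Dataflow job that lands the rows in BigQuery, with no servers to manage.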

You have the option to watch for Object Change Notifications in GCS.
So whenever you upload a file, you can have a webhook ping a URL.
Then you can set up either an App Engine application or a Cloud Function to do your import; all of this is serverless.
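If the files are plain CSV and need no transformation, the function can skip Dataflow entirely and issue a BigQuery load job itself. A minimal sketch, assuming the google-cloud-bigquery client library and placeholder dataset/table names:

```python
# Hypothetical Cloud Function with a GCS finalize trigger.
from google.cloud import bigquery

def load_csv(event, context):
    client = bigquery.Client()
    uri = 'gs://{}/{}'.format(event['bucket'], event['name'])
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # assumes a header row
        autodetect=True,      # let BigQuery infer the schema
    )
    job = client.load_table_from_uri(
        uri, 'my_dataset.my_table', job_config=job_config)  # placeholder table
    job.result()  # block until the load finishes; raises on error
```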

Related

Automatically back up Google Drive to a local server

Hello,
We use Google Workspace.
I would like to find a way to automatically back up our Google Drive files to a local server via a cron job.
I know doing a local backup to Google Drive is possible via rclone.
Would it be possible to use rclone in the other direction, Google Drive -> local server?
Obviously Google offers a way to retrieve the data via https://admin.google.com/ac/customertakeout but that does not correspond to what I want to do. Ideally, I would like to have automatic local backups in case of hacks, etc.
Otherwise, maybe it could be done with only a Python script and the Google API, but I can't find anything in the Google documentation that explains this.
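For reference, the Python route I am imagining would be roughly the following; it is an untested sketch pieced together from the Drive v3 client library, with the service-account file and local path as placeholders (Google-native documents would need files().export_media instead, and pagination is omitted):

```python
import io
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

creds = service_account.Credentials.from_service_account_file(
    'sa.json',  # placeholder service-account key
    scopes=['https://www.googleapis.com/auth/drive.readonly'])
drive = build('drive', 'v3', credentials=creds)

# List the files visible to these credentials, then stream each to disk.
files = drive.files().list(fields='files(id, name)').execute()['files']
for f in files:
    request = drive.files().get_media(fileId=f['id'])
    with io.FileIO('/backup/' + f['name'], 'wb') as out:  # placeholder path
        downloader = MediaIoBaseDownload(out, request)
        done = False
        while not done:
            _, done = downloader.next_chunk()
```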

AWS: download files from S3 in a web browser

I am a newbie to AWS, and one of my tasks is to figure out how to download MSIs and ISOs stored in S3 through a web browser. I read that I could use the CLI behind the scenes, so if a customer clicks a download, the app would make a request to S3 using one of the commands and the file would download through, let's say, Google Chrome or IE (please correct me if I'm wrong about the usage of the CLI).
Now if the download stops for some reason, such as an internet failure, is there a way to resume it? How do I get a download done through a client?
Thanks in advance for helping. Unfortunately the AWS links gave me very little information, so I'm seeking help here!
Files stored in Amazon S3 can be directly accessed via a web browser, just like clicking a link on any website.
If the files are marked as publicly-accessible, anyone with the link can download the file.
If you wish to limit access to the files, your application can generate a pre-signed URL that will work for a limited time period that you specify (eg 5 minutes). Users can use/click that link to download the file within that time period.
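For example, with the boto3 SDK a pre-signed URL is a single call (bucket and key names below are placeholders):

```python
import boto3

s3 = boto3.client('s3')
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'installers/product.msi'},
    ExpiresIn=300,  # seconds; the link stops working after 5 minutes
)
print(url)  # hand this URL to the browser; a normal GET downloads the file
```

Browsers handle resuming such downloads the same way they handle any HTTP download, since S3 supports Range requests.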
You can also download files using the AWS Command-Line Interface (CLI), which has copy and sync commands (aws s3 cp, aws s3 sync). This would, however, require installing the CLI on the user's computer. It is a good fit if users regularly download files or if you wish to automate the download (eg every hour, or daily).
If you wish to explore AWS, sign up for an account and make use of the Free Usage Tier, which lets you try some services for no charge.

AdWords Reports into BigQuery

I am trying to use an AdWords script to export AdWords reports into BigQuery.
I have a BigQuery project with the BigQuery API enabled: http://prntscr.com/g8peb5
And I use the correct project ID in the script: http://prntscr.com/g8peup
But when I try to run the script, I encounter an error:
"Access Not Configured. BigQuery API has not been used in project
333669768108 before or it is disabled. Enable it by visiting
console.developers.google
com/apis/api/bigquery.googleapis.com/overview?project=333669768108
then retry. If you enabled this API recently, wait a few minutes for
the action to propagate to our systems and retry. (line 135)"
The fact is that I do not have a project with that ID (333669768108), and the link provided does not work correctly.
Why might this problem occur?
Thanks in advance
I ran into a similar problem. I couldn't find a way to make an AdWords script that uses the BigQuery API run under an existing project in the Google Developer Console; somehow AdWords needs to create a separate project for each script that uses Advanced Google Services. I had a similar issue with the Gmail API in a script tied to a Google Sheets workbook.
I am not sure what you mean by the link from the log not working correctly. Do you get a message like this one when you open it?
The API "bigquery.googleapis.com" doesn't exist or you don't have permission to access it
If so, you can just click on 'Library' in the panel on the left-hand side and activate the BigQuery API from there. The next time you run the script, it should push the data to BigQuery without any problems (assuming the script is correct).

Is it possible to create a shortcut to a file that is in Google Drive?

I have asked this question hoping someone can confirm that it is possible to create a shortcut for a file that is in the cloud, with the shortcut created in the device's storage. The purpose of this: my application has a built-in function to upload a file to the cloud and then open it from a system application such as a player or gallery, without having to download anything, using only applications already installed on the device (nothing external). Thank you very much.
You may want to check Create a shortcut to a file. As mentioned,
To create a shortcut instead of a file stored in Drive, use the files.create method of the API and make sure you set the MIME type application/vnd.google-apps.drive-sdk. Do not upload any content when creating the file.
However, for Google Drive Android API, you may want to check Creating Files for more information.
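For the REST route, that boils down to a metadata-only files.create call. A minimal Python sketch, assuming the google-api-python-client library and service-account credentials (all names are placeholders):

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    'sa.json',  # placeholder credentials
    scopes=['https://www.googleapis.com/auth/drive'])
drive = build('drive', 'v3', credentials=creds)

# Per the docs quoted above: set the shortcut MIME type and do not
# attach any media content.
shortcut = drive.files().create(
    body={
        'name': 'My shortcut',
        'mimeType': 'application/vnd.google-apps.drive-sdk',
    },
    fields='id',
).execute()
print(shortcut['id'])
```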

Updating permissions on Amazon S3 files that were uploaded via JungleDisk

I am starting to use Jungle Disk to upload files to an Amazon S3 bucket which corresponds to a Cloudfront distribution. i.e. I can access it via an http:// URL and I am using Amazon as a CDN.
The problem I am facing is that Jungle Disk doesn't set 'read' permissions on the files so when I go to the corresponding URL in a browser I get an Amazon 'AccessDenied' error. If I use a tool like BucketExplorer to set the ACL then that URL now returns a 200.
I really, really like the simplicity of dragging files to a network drive. JungleDisk is the best program I've found to do this reliably without tripping over itself and getting confused. However, it doesn't seem to have an option to make the files readable.
I really don't want to have to go to a different tool (especially if I have to buy one) just to change the permissions, and that route seems really slow anyway because such tools generally traverse the whole directory structure.
JungleDisk provides some kind of 'web access' - but this is a paid feature and I'm not sure if it will work or not.
S3 doesn't appear to propagate permissions down, which is a real pain.
I'm considering writing a manual tool to traverse my tree and set everything to 'read' but I'd rather not do this if this is a problem someone else has already solved.
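For what it's worth, the tool I have in mind would be roughly the following untested boto3 sketch (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

# Walk every key in the bucket and mark it world-readable.
for page in paginator.paginate(Bucket='my-cdn-bucket'):
    for obj in page.get('Contents', []):
        s3.put_object_acl(
            Bucket='my-cdn-bucket',
            Key=obj['Key'],
            ACL='public-read',
        )
        print('made readable:', obj['Key'])
```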
Disclaimer: I am the developer of this tool, but I think it may answer your question.
If you are on Windows, you can use the CloudBerry Explorer Amazon S3 client. It supports most of the Amazon S3 and CloudFront features, and it is freeware.
I use the Transmit Mac app to modify permissions on files I've already uploaded with JungleDisk. If you're looking for a more cross-platform solution, the S3Fox browser plugin for Firefox claims to be able to modify permissions on S3 files as well.
If you need a web-based tool, you can use S3fm, a free online Amazon S3 file manager.
It's a pure Ajax app that runs in your browser and doesn't require sharing your credentials with a third-party web site.
If you need a reliable cross-platform tool to handle permissions, you can have a look at CrossFTP Pro. It supports most of the Amazon S3 and CloudFront features as well.