BigQuery - load a data source in Google BigQuery

I have a MySQL DB in AWS. Can I use that database as a data source in BigQuery?
At the moment I am exporting to CSV, uploading it to a Google Cloud Storage bucket, and loading it into BigQuery from there.
I would like to keep the data synchronised by pointing BigQuery at the data source itself rather than loading it every time.

You can create a permanent external table in BigQuery that is connected to Cloud Storage. BigQuery is then just the query interface while the data resides in GCS. The table can be connected to a single CSV file, and you are free to update or overwrite that file. I am not sure, however, whether you can link BigQuery to a directory full of CSV files, or even a tree of directories.
Anyway, have a look here: https://cloud.google.com/bigquery/external-data-cloud-storage
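If it helps, here is a minimal sketch of that setup with the google-cloud-bigquery Python client; the project, dataset, and bucket names are made up for illustration:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical names for illustration.
table_id = "my-project.my_dataset.mysql_export"

# Describe the external CSV source in Cloud Storage.
external_config = bigquery.ExternalConfig("CSV")
external_config.source_uris = ["gs://my-bucket/exports/mysql_dump.csv"]
external_config.options.skip_leading_rows = 1
external_config.autodetect = True

# Create a permanent external table; queries read the current file contents.
table = bigquery.Table(table_id)
table.external_data_configuration = external_config
client.create_table(table)
```

A wildcard URI such as gs://my-bucket/exports/*.csv should also let the table cover several files under a prefix, which may go some way toward the "directory of CSVs" case.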

Related

Loading a csv file on local system to redshift database

I want to load a CSV file from my local system into my Redshift database. What options can I use to upload it?
I am currently using DBeaver for the database connection and tried importing the data directly, but that doesn't seem to work.
You can't; Redshift can't see your local machine. Transfer the file to S3 and Redshift can read it from there with the COPY command, or write a lot of INSERT statements.
https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
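A rough sketch of that S3-then-COPY route in Python, assuming boto3 and psycopg2 are installed; the bucket, cluster, table, and IAM role names are all placeholders:

```python
import boto3
import psycopg2

# Stage the local CSV in S3 (placeholder bucket/key).
s3 = boto3.client("s3")
s3.upload_file("data.csv", "my-bucket", "staging/data.csv")

# Connect to Redshift and COPY the staged file into the target table.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="********",
)
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY my_schema.my_table
        FROM 's3://my-bucket/staging/data.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftCopyRole'
        CSV IGNOREHEADER 1;
    """)
```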

SSIS: sending source OLE DB data to S3 buckets as Parquet files

My source is SQL Server and I am using SSIS to export data to S3 buckets, but now my requirement is to send the files in Parquet file format.
Can you guys give some clues on how to achieve this?
Thanks,
Ven
For folks stumbling on this answer, Apache Parquet is a project that specifies a columnar file format employed by Hadoop and other Apache projects.
Unless you find a custom component or write some .NET code to do it, you're not going to be able to export data from SQL Server to a Parquet file. KingswaySoft's SSIS Big Data Components might offer one such custom component, but I've got no familiarity.
If you were exporting to Azure, you'd have two options:
Use the Flexible File Destination component (part of the Azure feature pack), which exports to a Parquet file hosted in Azure Blob or Data Lake Gen2 storage.
Leverage PolyBase, a SQL Server feature. It lets you export to a Parquet file via the external table feature. However, that file has to be hosted in a location mentioned here, and unfortunately S3 isn't an option.
If it were me, I'd move the data to S3 as a CSV file and then use Athena to convert the CSV to Parquet. There is a nifty article here that talks through the Athena piece:
https://www.cloudforecast.io/blog/Athena-to-transform-CSV-to-Parquet/
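As a loose sketch of that Athena step with boto3 (database, table, and bucket names are hypothetical), a CREATE TABLE AS SELECT statement reads the CSV-backed table and writes Parquet output to S3:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# CTAS: read the existing CSV-backed table and write Parquet files to S3.
ctas = """
    CREATE TABLE my_db.sales_parquet
    WITH (format = 'PARQUET',
          external_location = 's3://my-bucket/parquet/sales/')
    AS SELECT * FROM my_db.sales_csv
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "my_db"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)
```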
Net-net, you'll need to spend a little money, get creative, switch to Azure, or do the conversion in AWS.

Best approach for loading a text file (.txt) to a BigQuery table

Does anyone have any practical ideas about the best possible approach to upload a text file to a BigQuery table? I have a few zipped text files I need to download from a remote SFTP server and load into a BigQuery table. Should I download them to Google Cloud Storage and load them from there into BigQuery for faster speed? The text files are about 5 GB each and will grow further.
Thank you.
The first thing to consider, if you are loading files from your local data source, is that there are limitations for that, according to the documentation:
Loading data from a local data source is subject to the following limitations:
Wildcards and comma-separated lists are not supported when you load files from a local data source. Files must be loaded individually.
When using the classic BigQuery web UI, files loaded from a local data source must be 10 MB or less and must contain fewer than 16,000 rows.
Besides that, the link provided above contains instructions on how to upload your data with the Console or the CLI.
Nevertheless, using Cloud Storage you can take advantage of long-term storage pricing, and you are not charged for loading data into BigQuery, only for storing the data in Cloud Storage. You can read more about it here.
Finally, I would like you to consider the difference between external and native tables in BigQuery.
Native tables: tables backed by native BigQuery storage.
External tables: tables backed by storage external to BigQuery. For more information, see Querying External Data Sources.
In other words, with native tables you import the full data into BigQuery, so data analysis tends to be faster. External tables, meanwhile, do not store data in BigQuery; they instead reference the data from an external source.
The cost of storing in BigQuery is higher than in Cloud Storage, and querying external tables is slower than querying native tables, especially if the files are significantly large. On the other hand, since external tables are just pointers to files, you do not have to wait for the data to load.
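For the native-table route, here is a small sketch of a load job from Cloud Storage with the google-cloud-bigquery Python client; the bucket, file, and table names are invented for illustration:

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    field_delimiter="\t",   # assuming tab-delimited text files
    skip_leading_rows=1,
    autodetect=True,
)

# Load an (optionally gzip-compressed) text file staged in Cloud Storage
# into a native BigQuery table and wait for the job to finish.
load_job = client.load_table_from_uri(
    "gs://my-bucket/sftp/export_2021_01_01.txt.gz",
    "my-project.my_dataset.sftp_import",
    job_config=job_config,
)
load_job.result()
```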

Transferring Play console data to BigQuery using BigQuery Transfer

I am trying to transfer my play store console data to BigQuery using BigQuery transfer service. My play console data is stored in a GCP bucket, which has 3 folders (reviews, stats, acquisition).
While running my BQ transfer job, only the data from the last folder is getting moved to BigQuery.
Is there any way to migrate the data from all three folders to BigQuery?
A wildcard can be used in your load process, as shown in the following link.
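For example, here is a hedged sketch with the Python client, running one wildcard load per folder into its own table (bucket, project, and dataset names are made up):

```python
from google.cloud import bigquery

client = bigquery.Client()

# One load per Play Console report folder, each with its own wildcard URI.
folders = {
    "reviews": "gs://my-play-bucket/reviews/*.csv",
    "stats": "gs://my-play-bucket/stats/*.csv",
    "acquisition": "gs://my-play-bucket/acquisition/*.csv",
}

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

for name, uri in folders.items():
    client.load_table_from_uri(
        uri, f"my-project.play_console.{name}", job_config=job_config
    ).result()
```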

How to refresh a Google Drive data source - Google BigQuery

I have a question regarding refreshing a Google BigQuery table where the data source is Google Drive.
Imagine you have a CSV file on Google Drive and someone updates it for you every day.
1. The filename does not change.
2. The location URI stays the same.
How can I refresh my BigQuery table using this Google Drive file?
Could you please guide me or send me related links?
Thanks
From the BigQuery docs:
Loading data into BigQuery from Google Drive is not currently supported, but you can query data in Google Drive using an external table.
The link above provides instructions on how to create an external table that references your Drive-hosted data source. Since you want to query data from a Google Drive file that keeps being updated in Drive, this is the solution you are looking for (as opposed to downloading the CSV locally and loading it into BQ, in which case you would have to update it in BigQuery each time).
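A rough sketch of that external-table setup with the Python client, assuming the credentials carry a Google Drive scope; the file ID, project, and dataset names are placeholders:

```python
from google.cloud import bigquery

# The client's credentials must include a Drive scope for Drive-backed tables.
client = bigquery.Client()

external_config = bigquery.ExternalConfig("CSV")
external_config.source_uris = ["https://drive.google.com/open?id=YOUR_FILE_ID"]
external_config.options.skip_leading_rows = 1
external_config.autodetect = True

table = bigquery.Table("my-project.my_dataset.drive_csv")
table.external_data_configuration = external_config
client.create_table(table)

# Each query reads the file's current contents, so daily edits in Drive
# are picked up without any reload step.
rows = client.query("SELECT COUNT(*) FROM my_dataset.drive_csv").result()
```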