Public BigQuery dataset with a partitioned table - google-bigquery

I'm searching for an example of a BigQuery public dataset containing a partitioned table.
I searched https://cloud.google.com/bigquery/public-data without luck.

Found it. The google_analytics_sample dataset contains the date-sharded ga_sessions_* tables, which can be queried like a partitioned table via a wildcard.
https://www.ga4bigquery.com/how-to-query-multiple-tables-with-table-suffix-ua/
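To verify, the sharded tables in that public dataset can be queried with a wildcard plus a `_TABLE_SUFFIX` filter; the date range below is just an example (the dataset covers roughly Aug 2016 to Aug 2017):

```sql
-- Query one week of the public ga_sessions_* shards
SELECT
  date,
  COUNT(*) AS sessions
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE _TABLE_SUFFIX BETWEEN '20170101' AND '20170107'
GROUP BY date
ORDER BY date;
```

The `_TABLE_SUFFIX` predicate limits the scan to the matching shards, so only those days are billed.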

Related

How to insert/update a partitioned table in BigQuery

Problem statement:
I need to insert/update a few columns in a BigQuery table that is partitioned by date, so basically I need to make the necessary changes for each partition date (one per day).
(It's the sessions table that is created automatically by linking the GA view to BQ, so I haven't set up the partitioning manually; it's handled automatically by Google.)
query reference from google_docs
my query:
I also tried the below:
Can anyone help me here? Sorry, I'm a bit new to BQ.
You are trying to insert into a wildcard table, a meta-table that is actually composed of multiple tables. A wildcard table is read-only and cannot be inserted into.
As Hua said, ga_sessions_* is not a partitioned table; it represents many tables, each with a different suffix.
You probably want to do this then:
INSERT INTO `p.d.ga_sessions_20191125` (visitNumber, visitId)
SELECT 1, 1574
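The same logic applies to updates: target one concrete shard at a time and repeat per date. A minimal sketch, where the column touched and the predicate are illustrative only:

```sql
-- UPDATE one day's shard directly; loop over dates for more days.
-- visitNumber/visitId exist in the GA export schema; the values are placeholders.
UPDATE `p.d.ga_sessions_20191125`
SET visitNumber = visitNumber + 1
WHERE visitId = 1574;
```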

Writing to Hive with target table being driven from columns in data

I have a Dataset with an uber-schema, and the requirement is to write to different Hive tables depending on some column values. Basically, the combined column values determine the target Hive table. I thought about using groupBy, but that produces an aggregation, and using repartition doesn't guarantee that one partition maps to one Hive table. Any other options?
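If the set of target tables is known up front, one Hive-side option is a multi-table INSERT, which routes rows from a single scan of the source into several tables. Table and column names here are hypothetical:

```sql
-- One pass over the source; each branch sends matching rows to its own table.
FROM events_uber e
INSERT INTO TABLE events_clicks
  SELECT e.user_id, e.ts WHERE e.event_type = 'click'
INSERT INTO TABLE events_views
  SELECT e.user_id, e.ts WHERE e.event_type = 'view';
```

From Spark, an equivalent approach is to collect the distinct routing-key values and issue one filtered write per target table.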

BigQuery: Generating a new table from a list of tables with different schemas

I am working with large denormalized genomic data from different sources.
The data is already uploaded to Google BigQuery datasets and tables.
I would like to generate a new inclusive table that includes all the genomic samples from all the other tables, with a subset of the columns.
Some source table columns will be mapped to different column names in the new inclusive table.
Is this possible? If yes, what would be the right (and cheap?) way to do it?
Many thanks,
Eilalan
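This is possible with a UNION ALL that aliases each source's columns onto the shared target schema, materialized once with CREATE TABLE AS. All project, table, and column names below are hypothetical:

```sql
-- Map each source's columns onto one common schema and materialize the result.
CREATE TABLE `project.genomics.all_samples` AS
SELECT sample_id, gene AS gene_name, 'source_a' AS origin
FROM `project.genomics.source_a`
UNION ALL
SELECT subject_id AS sample_id, gene_symbol AS gene_name, 'source_b' AS origin
FROM `project.genomics.source_b`;
```

Listing only the needed columns (rather than SELECT *) keeps the bytes scanned, and therefore the cost, down; materializing once means later queries hit the combined table instead of re-scanning every source.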

Does storing a query result as a table in BigQuery retain the original table's schema structure?

My goal is to update all the rows of a Google BigQuery table, but to do so I have to recreate the table from the old data while adding a new column. So I run a SELECT query with all the fields plus some hashing and encoding/decoding functions, store the output as a new table with the same name as the old one, and drop the old table. My question: when I create the new table, will it retain the original schema structure, especially since the original has some nested structures?
When you run the job, make sure you do not flatten results, and the nesting of the schema will be retained. You can compare the schemas of the original and new tables in the web UI.
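In Standard SQL the rewrite can be done in one statement: SELECT * with REPLACE rewrites a single column while every other field, including nested and repeated ones, passes through unchanged. Table and column names are placeholders:

```sql
-- Recreate the table in place, hashing one STRING column;
-- nested/REPEATED fields survive SELECT * untouched.
CREATE OR REPLACE TABLE `project.dataset.events` AS
SELECT
  * REPLACE (TO_HEX(SHA256(email)) AS email)
FROM `project.dataset.events`;
```

The "do not flatten results" caveat in the answer above applies to Legacy SQL destination-table jobs; Standard SQL never flattens.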

Columns in Dynamic partitioning in Hive

In dynamic partitioning in Hive, suppose we want to partition on a column that sits in the middle of the table; we must create a new table and reorder the columns so that the partition column comes last.
Is it really fine to do this on a cluster where we have huge data?
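The reorder does not require rewriting data by hand: a single dynamic-partition INSERT ... SELECT does it in one pass, with the SELECT list simply placing the partition column last. It is one full scan-and-write of the data, which is usually acceptable as a one-off even at scale. Names below are illustrative:

```sql
-- Allow dynamic partitioning without a static partition prefix
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- New table: partition column is declared separately, not in the column list
CREATE TABLE events_part (user_id BIGINT, payload STRING)
PARTITIONED BY (event_date STRING);

-- One pass: the partition column goes last in the SELECT list
INSERT OVERWRITE TABLE events_part PARTITION (event_date)
SELECT user_id, payload, event_date
FROM events_raw;
```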