Append data from SELECT to existing table - google-bigquery

I'm trying to append data fetched from a SELECT to another existing table but I keep getting the following error:
Provided Schema does not match Table projectId:datasetId.existingTable
Here is my request body:
{'projectId': projectId,
 'configuration': {
   'query': {
     'query': query,
     'destinationTable': {
       'projectId': projectId,
       'datasetId': datasetId,
       'tableId': tableId
     },
     'writeDisposition': "WRITE_APPEND"
   }
 }
}
It seems like the writeDisposition option is not being evaluated.
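For reference, here is a minimal sketch (plain Python, helper name hypothetical) of how such a Job resource could be assembled before POSTing it to the jobs.insert endpoint. Note that writeDisposition lives under configuration.query, and the project is addressed in the request URL rather than as a top-level body key:

```python
def build_append_job_body(query, project_id, dataset_id, table_id):
    """Build a Job resource that appends query results to an existing table."""
    return {
        "configuration": {
            "query": {
                "query": query,
                "destinationTable": {
                    "projectId": project_id,
                    "datasetId": dataset_id,
                    "tableId": table_id,
                },
                # Appends rows instead of failing (WRITE_EMPTY) or
                # overwriting (WRITE_TRUNCATE).
                "writeDisposition": "WRITE_APPEND",
            }
        }
    }

# The body would then be POSTed to:
# https://www.googleapis.com/bigquery/v2/projects/<projectId>/jobs
body = build_append_job_body("SELECT 1", "my-project", "my_dataset", "my_table")
```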

In order for the append to work, the schema of the existing table must exactly match the schema of the query results you are appending. Can you verify that this is the case? One way to check would be to save the query results as a table and compare its schema with that of the table you are appending to.

OK, I think I got something here. It's a weird one:
the append can fail even when the schemas look identical, because the field mode must match too.
Here is the source table schema:
"schema": {
"fields": [
{
"name": "ID_CLIENT",
"type": "INTEGER",
"mode": "NULLABLE"
},
{
"name": "IDENTITE",
"type": "STRING",
"mode": "NULLABLE"
}
]
}
If I use the copy functionality from the browser interface (bigquery.cloud.google.com), I get the exact same schema, which is expected:
"schema": {
"fields": [
{
"name": "ID_CLIENT",
"type": "INTEGER",
"mode": "NULLABLE"
},
{
"name": "IDENTITE",
"type": "STRING",
"mode": "NULLABLE"
}
]
}
But then I cannot append the results of the following query to the copied table:
SELECT ID_CLIENT + 1 AS ID_CLIENT, RIGHT(IDENTITE,12) AS IDENTITE FROM datasetid.client
Although it appears to return the same schema, at least in the browser interface view, internally it returns the following schema:
"schema": {
"fields": [
{
"name": "ID_CLIENT",
"type": "INTEGER",
"mode": "REQUIRED"
},
{
"name": "IDENTITE",
"type": "STRING",
"mode": "NULLABLE"
}
]
}
which is not exactly the same schema (note the mode).
And weirder still, this select:
SELECT ID_CLIENT, IDENTITE FROM datasetid.client
returns this schema:
"schema": {
"fields": [
{
"name": "ID_CLIENT",
"type": "INTEGER",
"mode": "REQUIRED"
},
{
"name": "IDENTITE",
"type": "STRING",
"mode": "REQUIRED"
}
]
}
Conclusion:
Don't rely on table schema information from the browser interface; always use the Tables.get API.
Copy doesn't really work as expected...
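To make the conclusion concrete, here is a small sketch (hypothetical helper, plain Python) of comparing two schemas as returned by Tables.get, including the mode that the browser interface hides:

```python
def schemas_match(schema_a, schema_b):
    """Compare two Tables.get schemas field by field, including mode."""
    fields_a = schema_a["fields"]
    fields_b = schema_b["fields"]
    if len(fields_a) != len(fields_b):
        return False
    for a, b in zip(fields_a, fields_b):
        # mode defaults to NULLABLE when absent from the API response
        key_a = (a["name"], a["type"], a.get("mode", "NULLABLE"))
        key_b = (b["name"], b["type"], b.get("mode", "NULLABLE"))
        if key_a != key_b:
            return False
    return True

# The two schemas from above: identical in the browser view,
# but differing in mode when fetched through the API.
source = {"fields": [{"name": "ID_CLIENT", "type": "INTEGER", "mode": "NULLABLE"}]}
result = {"fields": [{"name": "ID_CLIENT", "type": "INTEGER", "mode": "REQUIRED"}]}
```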

I have successfully appended data to an existing table from a CSV file using the bq command-line tool. The only difference I see here is that the configuration uses
write_disposition instead of writeDisposition as shown in the original question.
What I did is add an append flag to the bq command-line utility (Python scripts) for load, and it worked like a charm.
I had to update bq.py as follows:
Added a new flag called --append for the load function.
In the _Load class, under RunWithArgs, checked whether append was set; if so, set 'write_disposition' = 'WRITE_APPEND'.
The changes to bq.py are as follows.
In the __init__ function of the _Load class, add the following:
flags.DEFINE_boolean(
    'append', False,
    'If true then data is appended to the existing table',
    flag_values=fv)
And in the RunWithArgs function of the _Load class, after the statement
if self.replace:
  opts['write_disposition'] = 'WRITE_TRUNCATE'
add the following:
if self.append:
  opts['write_disposition'] = 'WRITE_APPEND'
Now on the command line:
> bq.py load --append=true <mydataset>.<existingtable> <filename>.gz
will append the contents of a compressed (gzipped) CSV file to the existing table.

Related

How to create a bigquery external table with custom partitions

I would like to create a BigQuery external table for data stored in:
gs://mybucket/market/US/v1/part-001.avro
gs://mybucket/market/US/v2/part-001.avro
...
gs://mybucket/market/CA/v1/part-001.avro
gs://mybucket/market/CA/v2/part-001.avro
...
With a table definition file:
{
  "avroOptions": {
    "useAvroLogicalTypes": true
  },
  "hivePartitioningOptions": {
    "mode": "CUSTOM",
    "sourceUriPrefix": "gs://mybucket/market/{cc:STRING}/{version:STRING}"
  },
  "sourceFormat": "AVRO",
  "sourceUris": [
    "gs://mybucket/market/*/*/*.avro"
  ]
}
and a schema file:
[
  {
    "mode": "NULLABLE",
    "name": "id",
    "type": "STRING"
  },
  {
    "mode": "NULLABLE",
    "name": "name",
    "type": "STRING"
  },
  ...
The table is created with command:
bq mk --external_table_definition=tabledef.json ds.mytable schema.json
But when trying to query the table, the following error is reported:
Failed to expand table ds.mytable with file pattern gs://mybucket/market/*/*/*.avro: matched no files. File: gs://mybucket/market/*/*/*.avro
I also tried the variations below, but none of them worked:
gs://mybucket/market/*
gs://mybucket/market/**.avro
gs://mybucket/market/**/*.avro
Can anyone see what the problem might be?
Thanks

Load Avro file from GCS with nested record using customized column name

I was trying to load an Avro file with a nested record, where one of the records has a union schema. When loaded into BigQuery, each union element got a very long name like com_mycompany_data_nestedClassname_value. I'm wondering if there is a way to specify the name without having the full package name prefixed.
For example, take the following Avro schema:
{
  "type": "record",
  "name": "EventRecording",
  "namespace": "com.something.event",
  "fields": [
    {
      "name": "eventName",
      "type": "string"
    },
    {
      "name": "eventTime",
      "type": "long"
    },
    {
      "name": "userId",
      "type": "string"
    },
    {
      "name": "eventDetail",
      "type": [
        {
          "type": "record",
          "name": "Network",
          "namespace": "com.something.event",
          "fields": [
            {
              "name": "hostName",
              "type": "string"
            },
            {
              "name": "ipAddress",
              "type": "string"
            }
          ]
        },
        {
          "type": "record",
          "name": "DiskIO",
          "namespace": "com.something.event",
          "fields": [
            {
              "name": "path",
              "type": "string"
            },
            {
              "name": "bytesRead",
              "type": "long"
            }
          ]
        }
      ]
    }
  ]
}
Which brings up the question:
Is it possible to make a long field name like eventDetail.com_something_event_Network_value be something like eventDetail.Network?
Avro loading is not as flexible as it should be in BigQuery (a basic example is that it does not support loading a subset of the fields, i.e. a reader schema). Renaming columns is also not supported in BigQuery today (refer here). The only options are to recreate your table with the proper names (create a new table from your existing table) or to recreate the table from your previous source data.
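A sketch of the "recreate with proper names" option (helper and column names are hypothetical): generate a SELECT that aliases the long union-derived columns, then write its result to a new table via the usual destinationTable/writeDisposition settings:

```python
def build_rename_query(table, renames):
    """Build a SELECT that recreates a table with renamed columns.

    renames: dict mapping existing column name -> desired name,
    in the order the columns should appear.
    """
    cols = ", ".join(
        "{} AS {}".format(old, new) if old != new else old
        for old, new in renames.items()
    )
    return "SELECT {} FROM {}".format(cols, table)

# Hypothetical table and column names, modeled on the question.
query = build_rename_query(
    "mydataset.events",
    {"eventName": "eventName",
     "eventDetail.com_something_event_Network_value": "network"},
)
```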

Bigquery fails to load data from Google Cloud Storage

I have a json record which looks like
{"customer_id":"2349uslvn2q3","order_id":"9sufd23rdl40",
"line_item": [{"line":"1","sku":"10","amount":10},
{"line":"2","sku":"20","amount":20}]}
I am trying to load the record stated above into a table which has the following schema definition:
"fields": [
{
"mode": "NULLABLE",
"name": "customer_id",
"type": "STRING"
},
{
"mode": "NULLABLE",
"name": "order_id",
"type": "STRING"
},
{
"mode": "REPEATED",
"name": "line_item",
"type": "STRING"
}
]
I am getting the following error message:
JSON parsing error in row starting at position 0 at file:
gs://gcs_bucket/file0. JSON object specified for non-record field:
line_item
I want the line_item JSON string, which can have more than one row, stored as an array of JSON strings in the line_item column of the table.
Any suggestion?
The first thing is that your input JSON shouldn't contain "\n" characters inside a record; each record must be saved on a single line, like:
{"customer_id":"2349uslvn2q3","order_id":"9sufd23rdl40", "line_item": [{"line":"1","sku":"10","amount":10}, {"line":"2","sku":"20","amount":20}]}
Here is an example of how your JSON file should look:
{"customer_id":"2349uslvn2q3","order_id":"9sufd23rdl40", "line_item": [{"line":"1","sku":"10","amount":10}, {"line":"2","sku":"20","amount":20}]}
{"customer_id":"2","order_id":"2", "line_item": [{"line":"2","sku":"20","amount":20}, {"line":"2","sku":"20","amount":20}]}
{"customer_id":"3","order_id":"3", "line_item": [{"line":"3","sku":"30","amount":30}, {"line":"3","sku":"30","amount":30}]}
And also your schema is not correct. It should be:
[
  {
    "mode": "NULLABLE",
    "name": "customer_id",
    "type": "STRING"
  },
  {
    "mode": "NULLABLE",
    "name": "order_id",
    "type": "STRING"
  },
  {
    "mode": "REPEATED",
    "name": "line_item",
    "type": "RECORD",
    "fields": [
      {"name": "line", "type": "STRING"},
      {"name": "sku", "type": "STRING"},
      {"name": "amount", "type": "INTEGER"}
    ]
  }
]
For a better understanding of how schemas work, I've tried writing sort of a guide in this answer. Hopefully it can be of some value.
If your data is saved, for instance, in a file called gs://gcs_bucket/file0 and your schema in schema.json, then this command should work for you:
bq load --source_format=NEWLINE_DELIMITED_JSON dataset.table gs://gcs_bucket/file0 schema.json
(assuming you are using the CLI tool, as seems to be the case in your question).
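Before running bq load, it can save a round-trip to sanity-check the file locally: each line must parse as a standalone JSON object, and line_item must be an array of objects to satisfy the REPEATED RECORD field. A hypothetical helper:

```python
import json

def check_ndjson_line(line):
    """Return True if the line is a plausible record for the schema above."""
    rec = json.loads(line)
    items = rec.get("line_item", [])
    # A REPEATED RECORD field requires an array of objects.
    return isinstance(items, list) and all(isinstance(i, dict) for i in items)

good = ('{"customer_id":"2349uslvn2q3","order_id":"9sufd23rdl40",'
        '"line_item":[{"line":"1","sku":"10","amount":10}]}')
bad = '{"customer_id":"x","order_id":"y","line_item":"not an array"}'
```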

Schema to load JSON to Google BigQuery

Suppose I have the following JSON, which is the result of parsing urls parameters from a log file.
{
  "title": "History of Alphabet",
  "author": [
    {
      "name": "Larry"
    },
  ]
}
{
  "title": "History of ABC",
}
{
  "number_pages": "321",
  "year": "1999",
}
{
  "title": "History of XYZ",
  "author": [
    {
      "name": "Steve",
      "age": "63"
    },
    {
      "nickname": "Bill",
      "dob": "1955-03-29"
    }
  ]
}
All the top-level fields, "title", "author", "number_pages", "year", are optional, and so are the second-level fields, for example those inside "author".
How should I make a schema for this JSON when loading it to BQ?
A related question:
For example, suppose there is another similar table, but the data is from a different date, so it's possible to have a different schema. Is it possible to query across these two tables?
How should I make a schema for this JSON when loading it to BQ?
The following schema should work. You may want to change some of the types (e.g. maybe you want the dob field to be a TIMESTAMP instead of a STRING), but the general structure should be similar. Since types are NULLABLE by default, all of these fields should handle not being present for a given row.
[
  {
    "name": "title",
    "type": "STRING"
  },
  {
    "name": "author",
    "type": "RECORD",
    "mode": "REPEATED",
    "fields": [
      {
        "name": "name",
        "type": "STRING"
      },
      {
        "name": "age",
        "type": "STRING"
      },
      {
        "name": "nickname",
        "type": "STRING"
      },
      {
        "name": "dob",
        "type": "STRING"
      }
    ]
  },
  {
    "name": "number_pages",
    "type": "INTEGER"
  },
  {
    "name": "year",
    "type": "INTEGER"
  }
]
A related question: For example, suppose there is another similar table, but the data is from a different date, so it's possible to have a different schema. Is it possible to query across these two tables?
It should be possible to union two tables with differing schemas without too much difficulty.
Here's a quick example of how it works over public data (kind of a silly example, since the tables contain zero fields in common, but shows the concept):
SELECT * FROM
(SELECT * FROM publicdata:samples.natality),
(SELECT * FROM publicdata:samples.shakespeare)
LIMIT 100;
Note that you need the SELECT * around each table or the query will complain about the differing schemas.
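Note that the comma syntax above is legacy SQL, where a comma between tables means UNION ALL. In standard SQL (table and column names assumed for illustration), the equivalent is an explicit UNION ALL with matching column lists, padding missing columns with NULL:

```sql
SELECT title, author, NULL AS number_pages, NULL AS year FROM ds.table_a
UNION ALL
SELECT title, NULL, number_pages, year FROM ds.table_b
LIMIT 100;
```

Unlike the legacy comma syntax, standard SQL requires each branch to produce the same number of columns with compatible types, which is why the NULL placeholders are needed.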

How to create new table with nested schema entirely in BigQuery

I've got a nested table A in BigQuery with a schema as follows:
{
  "name": "page_event",
  "mode": "repeated",
  "type": "RECORD",
  "fields": [
    {
      "name": "id",
      "type": "STRING"
    }
  ]
}
I would like to enrich table A with data from another table and save the result as a new nested table. Let's say I would like to add a "description" field to table A (creating table B), so my schema would be as follows:
{
  "name": "page_event",
  "mode": "repeated",
  "type": "RECORD",
  "fields": [
    {
      "name": "id",
      "type": "STRING"
    },
    {
      "name": "description",
      "type": "STRING"
    }
  ]
}
How do I do this in BigQuery? It seems that there are no functions for creating nested structures in BigQuery SQL (except the NEST function, which produces a list, but that doesn't seem to work here, failing with an Unexpected error).
The only way of doing this I can think of is to:
use string concatenation functions to produce table B, with a single field called "json" whose content is the enriched data from A converted to a JSON string
export B to GCS as a set of files F
load F as table C
Is there an easier way to do it?
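In standard SQL (which postdates this question), the export/load round-trip is no longer needed: ARRAY(SELECT AS STRUCT ...) builds the enriched nested field directly, and the result can be saved as a new table. A sketch, with the lookup table and join key assumed:

```sql
SELECT
  a.* EXCEPT (page_event),
  ARRAY(
    SELECT AS STRUCT pe.id, d.description
    FROM UNNEST(a.page_event) AS pe
    LEFT JOIN ds.descriptions AS d ON d.id = pe.id
  ) AS page_event
FROM ds.table_a AS a;
```

The correlated UNNEST flattens each row's repeated page_event records, the join attaches the description, and ARRAY(SELECT AS STRUCT ...) reassembles them into a repeated RECORD with the extra field.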
To enrich the schema of an existing table, one can use the Tables patch API:
https://cloud.google.com/bigquery/docs/reference/v2/tables/patch
The request will look like the one below:
PATCH https://www.googleapis.com/bigquery/v2/projects/{project_id}/datasets/{dataset_id}/tables/{table_id}?key={YOUR_API_KEY}
{
  "schema": {
    "fields": [
      {
        "name": "page_event",
        "mode": "repeated",
        "type": "RECORD",
        "fields": [
          {
            "name": "id",
            "type": "STRING"
          },
          {
            "name": "description",
            "type": "STRING"
          }
        ]
      }
    ]
  }
}
Before Patch
After Patch
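A minimal sketch of assembling that patch body in Python (helper name hypothetical). Note that, as far as I know, schema patches may only add new fields or relax REQUIRED to NULLABLE, never remove or rename existing fields:

```python
import copy

def add_nested_field(schema, parent_name, name, field_type):
    """Return a copy of a Tables.get schema with one field appended
    under the RECORD field called parent_name."""
    new_schema = copy.deepcopy(schema)
    for field in new_schema["fields"]:
        if field["name"] == parent_name:
            field["fields"].append({"name": name, "type": field_type})
            break
    return new_schema

old = {"fields": [{"name": "page_event", "mode": "repeated", "type": "RECORD",
                   "fields": [{"name": "id", "type": "STRING"}]}]}
patched = {"schema": add_nested_field(old, "page_event", "description", "STRING")}
# Send json.dumps(patched) as the body of:
# PATCH https://www.googleapis.com/bigquery/v2/projects/{p}/datasets/{d}/tables/{t}
```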