I'm using Avro as the schema for a Google Pub/Sub topic that writes directly to BigQuery.
One of the fields can be null, so I've written my Avro schema like this:
{
"type": "record",
"name": "Avro",
"fields": [
{
"name": "id",
"type": "string"
},
{
"name": "status",
"type": "string"
},
{
"name": "createDate",
"type": "string"
},
{
"name": "purchaseDate",
"type": ["null", "string"]
}
]
}
However, for an input message to conform to this schema, it has to look like one of the examples below:
{
"id": "123",
"status": "not-purchased",
"createDate": "2023-01-17T04:49:16.966Z",
"purchaseDate": null
}
{
"id": "123",
"status": "purchased",
"createDate": "2023-01-17T04:49:16.966Z",
"purchaseDate": {
"string": "2023-01-17T04:49:16.966Z"
}
}
The input in the second example above is not in the format expected by the BigQuery subscription. I'm looking for something that looks like this instead:
{
"id": "123",
"status": "purchased",
"createDate": "2023-01-17T04:49:16.966Z",
"purchaseDate": "2023-01-17T04:49:16.966Z"
}
Is there something I did wrong with the Avro schema, or is this just how nullable fields work in Avro?
This is a known issue with Pub/Sub BigQuery subscriptions. You can follow the progress on a fix in the issue tracker. Once it's fixed, the example that uses the string keyword should work for inserting into BigQuery via a Pub/Sub subscription.
Nullability itself isn't the issue; ["null", "string"] is simply a union type, and the string key is required when encoding union values, as per the Avro JSON encoding specification.
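For context, the wrapped form is exactly what Avro's own JSON encoder emits, so the schema itself isn't wrong. Below is a minimal sketch using the Avro Java library; the schema is abbreviated to two of the fields from the question, and the values are illustrative:
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.Encoder;
import org.apache.avro.io.EncoderFactory;

public class AvroJsonEncodingDemo {
    public static void main(String[] args) throws Exception {
        // Abbreviated version of the schema from the question.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Avro\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"string\"},"
            + "{\"name\":\"purchaseDate\",\"type\":[\"null\",\"string\"]}]}");

        GenericRecord record = new GenericData.Record(schema);
        record.put("id", "123");
        record.put("purchaseDate", "2023-01-17T04:49:16.966Z");

        // Serialize with Avro's JSON encoder.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Encoder encoder = EncoderFactory.get().jsonEncoder(schema, out);
        new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
        encoder.flush();

        // Prints: {"id":"123","purchaseDate":{"string":"2023-01-17T04:49:16.966Z"}}
        System.out.println(out.toString("UTF-8"));
    }
}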
I was trying to load an Avro file with a nested record, where one of the records contained a union of schemas. When loaded to BigQuery, each union element got a very long name like com_mycompany_data_nestedClassname_value. Is there a way to specify the name without having the full package name prefixed?
For example, take the following Avro schema:
{
"type": "record",
"name": "EventRecording",
"namespace": "com.something.event",
"fields": [
{
"name": "eventName",
"type": "string"
},
{
"name": "eventTime",
"type": "long"
},
{
"name": "userId",
"type": "string"
},
{
"name": "eventDetail",
"type": [
{
"type": "record",
"name": "Network",
"namespace": "com.something.event",
"fields": [
{
"name": "hostName",
"type": "string"
},
{
"name": "ipAddress",
"type": "string"
}
]
},
{
"type": "record",
"name": "DiskIO",
"namespace": "com.something.event",
"fields": [
{
"name": "path",
"type": "string"
},
{
"name": "bytesRead",
"type": "long"
}
]
}
]
}
]
}
BigQuery came up with field names like eventDetail.com_something_event_Network_value. Is it possible to make that long field name something like eventDetail.Network instead?
Avro loading is not as flexible as it should be in BigQuery; a basic example is that it does not support loading only a subset of the fields (a reader schema). Renaming columns is also not supported in BigQuery today. The only option is to recreate the table with the proper names, i.e. create a new table from your existing table.
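If it helps, here is a rough sketch of that workaround using the BigQuery Java client library: run a standard SQL CREATE TABLE AS SELECT that aliases the auto-generated union field names to shorter ones. The dataset and table names are placeholders, and the long field paths are copied from the question, so they may not match your table exactly:
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;

public class RecreateWithFriendlyNames {
    public static void main(String[] args) throws Exception {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // Materialize a new table, renaming the generated union fields via aliases.
        String sql =
            "CREATE TABLE mydataset.events_renamed AS "
            + "SELECT eventName, eventTime, userId, "
            + "  STRUCT("
            + "    eventDetail.com_something_event_Network_value AS Network, "
            + "    eventDetail.com_something_event_DiskIO_value AS DiskIO"
            + "  ) AS eventDetail "
            + "FROM mydataset.events";

        bigquery.query(QueryJobConfiguration.of(sql));
    }
}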
I want to copy data from Azure Table Storage to Azure SQL Server using Azure Data Factory, but I get a strange error.
In my Azure Table Storage I have a column which contains multiple data types (this is how Table Storage works), e.g. DateTime and String.
In my Data Factory project I specified that the entire column is String, but Data Factory seems to infer the data type from the first cell it encounters during the extraction process.
In my Azure SQL Server database all columns are string.
Example
I have this table in Azure Table Storage: Flights
RowKey PartitionKey ArrivalTime
--------------------------------------------------
1332-2 2213dcsa-213 04/11/2017 04:53:21.707 PM - this cell is DateTime
1332-2 2213dcsa-214 DateTime.Null - this cell is String
If my table looked like the one below instead, the copy process would work, because the first row is a string and the entire column would be converted to string.
RowKey PartitionKey ArrivalTime
--------------------------------------------------
1332-2 2213dcsa-214 DateTime.Null - this cell is String
1332-2 2213dcsa-213 04/11/2017 04:53:21.707 PM - this cell is DateTime
Note: I am not allowed to change the data type in Azure Table Storage, move the rows or to add new ones.
Below are the input and output data sets from Azure Data Factory:
"datasets": [
{
"name": "InputDataset",
"properties": {
"structure": [
{
"name": "PartitionKey",
"type": "String"
},
{
"name": "RowKey",
"type": "String"
},
{
"name": "ArrivalTime",
"type": "String"
}
],
"published": false,
"type": "AzureTable",
"linkedServiceName": "Source-AzureTable",
"typeProperties": {
"tableName": "flights"
},
"availability": {
"frequency": "Day",
"interval": 1
},
"external": true,
"policy": {}
}
},
{
"name": "OutputDataset",
"properties": {
"structure": [
{
"name": "PartitionKey",
"type": "String"
},
{
"name": "RowKey",
"type": "String"
},
{
"name": "ArrivalTime",
"type": "String"
}
],
"published": false,
"type": "AzureSqlTable",
"linkedServiceName": "Destination-SQLAzure",
"typeProperties": {
"tableName": "[dbo].[flights]"
},
"availability": {
"frequency": "Day",
"interval": 1
},
"external": false,
"policy": {}
}
}
]
Does anyone know a solution to this issue?
I've just been playing around with this, and I think you have two options to deal with it.
Option 1
Simply remove the data type attribute from your input dataset. In the 'structure' block of the input JSON table dataset you don't have to specify the type attribute. Remove or comment it out.
For example:
{
"name": "InputDataset-ghm",
"properties": {
"structure": [
{
"name": "PartitionKey",
"type": "String"
},
{
"name": "RowKey",
"type": "String"
},
{
"name": "ArrivalTime"
/* "type": "String" --<<<<<< Optional! */
},
This should mean the data type is not validated on read.
Option 2
Use a custom activity upstream of the SQL DB table load to cleanse and transform the table data. This means breaking out the C# and will require a lot more dev time, but you may be able to reuse the cleansing code for other datasets.
Hope this helps.
Suppose I have the following JSON, which is the result of parsing URL parameters from a log file.
{
"title": "History of Alphabet",
"author": [
{
"name": "Larry"
}
]
}
{
"title": "History of ABC",
}
{
"number_pages": "321",
"year": "1999",
}
{
"title": "History of XYZ",
"author": [
{
"name": "Steve",
"age": "63"
},
{
"nickname": "Bill",
"dob": "1955-03-29"
}
]
}
All the top-level fields ("title", "author", "number_pages", "year") are optional, and so are the second-level fields, for example those inside "author".
How should I make a schema for this JSON when loading it to BQ?
A related question: suppose there is another similar table, but the data is from a different date, so it may have a different schema. Is it possible to query across these two tables?
How should I make a schema for this JSON when loading it to BQ?
The following schema should work. You may want to change some of the types (e.g. maybe you want the dob field to be a TIMESTAMP instead of a STRING), but the general structure should be similar. Since fields are NULLABLE by default, all of them can be absent for a given row; author is marked REPEATED because it appears as a JSON array in the input.
[
{
"name": "title",
"type": "STRING"
},
{
"name": "author",
"type": "RECORD",
"fields": [
{
"name": "name",
"type": "STRING"
},
{
"name": "age",
"type": "STRING"
},
{
"name": "nickname",
"type": "STRING"
},
{
"name": "dob",
"type": "STRING"
}
]
},
{
"name": "number_pages",
"type": "INTEGER"
},
{
"name": "year",
"type": "INTEGER"
}
]
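For reference, here is a rough sketch of loading newline-delimited JSON with that schema through the BigQuery Java client library; the GCS path, dataset, and table names are placeholders:
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LegacySQLTypeName;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.TableId;

public class LoadBooksJson {
    public static void main(String[] args) throws Exception {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // Same structure as the schema above; author is a repeated record.
        Schema schema = Schema.of(
            Field.of("title", LegacySQLTypeName.STRING),
            Field.newBuilder("author", LegacySQLTypeName.RECORD,
                    Field.of("name", LegacySQLTypeName.STRING),
                    Field.of("age", LegacySQLTypeName.STRING),
                    Field.of("nickname", LegacySQLTypeName.STRING),
                    Field.of("dob", LegacySQLTypeName.STRING))
                .setMode(Field.Mode.REPEATED)
                .build(),
            Field.of("number_pages", LegacySQLTypeName.INTEGER),
            Field.of("year", LegacySQLTypeName.INTEGER));

        // Load newline-delimited JSON from GCS into the table.
        LoadJobConfiguration config = LoadJobConfiguration.newBuilder(
                TableId.of("mydataset", "books"),
                "gs://my-bucket/books.json",
                FormatOptions.json())
            .setSchema(schema)
            .build();

        Job job = bigquery.create(JobInfo.of(config));
        job.waitFor();
    }
}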
A related question: for example, suppose there is another similar table, but the data is from a different date, so it may have a different schema. Is it possible to query across these two tables?
It should be possible to union two tables with differing schemas without too much difficulty.
Here's a quick example of how it works over public data (kind of a silly example, since the tables have zero fields in common, but it shows the concept):
SELECT * FROM
(SELECT * FROM publicdata:samples.natality),
(SELECT * FROM publicdata:samples.shakespeare)
LIMIT 100;
Note that you need the SELECT * around each table or the query will complain about the differing schemas.
I've got a nested table A in BigQuery with a schema as follows:
{
"name": "page_event",
"mode": "repeated",
"type": "RECORD",
"fields": [
{
"name": "id",
"type": "STRING"
}
]
}
I would like to enrich table A with data from another table and save the result as a new nested table. Let's say I would like to add a "description" field to table A (creating table B), so my schema would be as follows:
{
"name": "page_event",
"mode": "repeated",
"type": "RECORD",
"fields": [
{
"name": "id",
"type": "STRING"
},
{
"name": "description",
"type": "STRING"
}
]
}
How do I do this in BigQuery? It seems that there are no functions for creating nested structures in BigQuery SQL (except the NEST function, which produces a list, but it doesn't seem to work and fails with an "Unexpected error").
The only way of doing this I can think of is to:
use string concatenation functions to produce table B with a single field called "json", containing the enriched data from A converted to a JSON string
export B to GCS as a set of files F
load F as table C
Is there an easier way to do it?
To enrich the schema of an existing table, you can use the tables patch API:
https://cloud.google.com/bigquery/docs/reference/v2/tables/patch
The request will look like the one below:
PATCH https://www.googleapis.com/bigquery/v2/projects/{project_id}/datasets/{dataset_id}/tables/{table_id}?key={YOUR_API_KEY}
{
"schema": {
"fields": [
{
"name": "page_event",
"mode": "repeated",
"type": "RECORD",
"fields": [
{
"name": "id",
"type": "STRING"
},
{
"name": "description",
"type": "STRING"
}
]
}
]
}
}
(Screenshots: table schema before and after the patch.)
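The same schema update can also be done with the BigQuery Java client library, which performs the equivalent table update. A rough sketch, with placeholder dataset and table IDs:
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.LegacySQLTypeName;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.Table;
import com.google.cloud.bigquery.TableId;

public class AddDescriptionField {
    public static void main(String[] args) {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        Table table = bigquery.getTable(TableId.of("my_dataset", "my_table"));

        // New schema: the existing repeated RECORD plus the added description subfield.
        Schema newSchema = Schema.of(
            Field.newBuilder("page_event", LegacySQLTypeName.RECORD,
                    Field.of("id", LegacySQLTypeName.STRING),
                    Field.of("description", LegacySQLTypeName.STRING))
                .setMode(Field.Mode.REPEATED)
                .build());

        // Push the widened schema back to the table definition.
        table.toBuilder()
            .setDefinition(StandardTableDefinition.of(newSchema))
            .build()
            .update();
    }
}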
I have two types of Avro records that both extend avro.SpecificRecord. Is there a way to make one a subclass of the other in Java? One of them is PersonRecord, and the one I would like to be its subclass is EmployeeRecord. The reason I don't want to populate normal Java classes with the Avro data is that I am using Hadoop and would like to work with the Avro files directly if possible.
To clarify, it is the polymorphism that I am interested in. I would like to be able to pass an EmployeeRecord to a function that takes a PersonRecord as its argument.
Thanks!
I think I understand what you're trying to do ("subclass" an Avro record in the Avro Schema definition file) but I don't think it's possible.
Instead, a way to do this would be to have EmployeeRecord contain a PersonRecord member, followed by the employee-specific fields. For example:
{
"type": "record",
"name": "PersonRecord",
"namespace": "com.yourapp",
"fields": [
{
"name": "first",
"type": "string"
},
{ etc... }
]
}
{
"type": "record",
"name": "EmployeeRecord",
"namespace": "com.yourapp",
"fields": [
{
"name": "PersonInfo",
"type": "PersonRecord"
},
{
"name": "salary",
"type": "int"
},
{ etc... }
]
}
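With that composition in place, code written against PersonRecord can be reused for employees by passing the nested record. A rough sketch, assuming the classes generated from the schemas above (the getter and builder names are what the Avro compiler would typically produce, and the fields elided with "etc..." would also need to be set):
public class PayrollDemo {

    // Works on any PersonRecord, whether standalone or nested inside an employee.
    static String displayName(PersonRecord person) {
        return person.getFirst().toString();
    }

    public static void main(String[] args) {
        EmployeeRecord employee = EmployeeRecord.newBuilder()
            .setPersonInfo(PersonRecord.newBuilder().setFirst("Ada").build())
            .setSalary(100000)
            .build();

        // No inheritance needed: hand the nested PersonRecord to the function.
        System.out.println(displayName(employee.getPersonInfo()));
    }
}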