BigQuery external table over GCS path with partitions - google-bigquery

I have some data stored in a GCS bucket at the following path:
gcs://my-bucket/my_data/subfolder1/subfolder2/**.csv.gz
I intend to create an external table mapping to my_data and want the external table to be able to partition the data by the different levels of subfolders. Note that subfolder1 and subfolder2 don't have a Hive partition prefix, i.e., they are not in the format prefix=value.
If I were to write some pseudocode in Athena syntax, it would look something like this:
CREATE EXTERNAL TABLE `my_data`(
-- Column specs go here --
)
PARTITIONED BY (
`partition_0` string,
`partition_1` string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'gcs://my-bucket/my-data/'
TBLPROPERTIES (...)
As a result of the pseudocode, the table will consist of two partition columns in addition to the columns defined in the column spec:
partition_0
partition_1
Queries filtering on these two columns will then benefit from partition pruning.
Would anyone please advise whether this is possible in BigQuery? If yes, how should I go about it in SQL?
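As far as I can tell, BigQuery's external-table partitioning is built around Hive-style key=value directory names, so there is no direct equivalent of Athena's partition_0/partition_1 columns for bare subfolder names. If the objects could be laid out (or copied) as gs://my-bucket/my_data/partition_0=subfolder1/partition_1=subfolder2/file.csv.gz, a DDL along the lines below should work. This is only a sketch: the dataset name, the column list, and the compression option are placeholders.
CREATE EXTERNAL TABLE mydataset.my_data (
  -- hypothetical column spec; replace with the real columns
  col1 STRING,
  col2 INT64
)
WITH PARTITION COLUMNS (
  partition_0 STRING,
  partition_1 STRING
)
OPTIONS (
  format = 'CSV',
  compression = 'GZIP',  -- assuming gzipped CSV objects
  uris = ['gs://my-bucket/my_data/*'],
  hive_partition_uri_prefix = 'gs://my-bucket/my_data/'
);
If relocating the data into key=value folders isn't an option, the _FILE_NAME pseudo-column of the external table can at least expose the object path for filtering, but as far as I know that does not give real partition pruning.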

Related

Hive table with Avro Schema

I have created a Hive external table with an Avro schema (complex types) and partition columns. After adding the required partition files, a select query returns null values for all the columns except the partition columns. The Avro schema has array and struct types inside.
Here is the DDL,
CREATE EXTERNAL TABLE mytable PARTITIONED BY(date int, city string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED as INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/cloud/location'
TBLPROPERTIES ('avro.schema.url'='cloud location for schema file');
I tried giving the schema file directly and also tried a schema literal in the TBLPROPERTIES.
The select query returns null for all the columns.
Any suggestions for fixing this issue? Is there anything missing in this scenario?
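For reference, the schema-literal variant mentioned above would look roughly like the following. This is just a sketch: avro.schema.literal is the standard Hive AvroSerDe property, and the record name and fields here are made-up placeholders that would have to match the schema the data files were actually written with.
CREATE EXTERNAL TABLE mytable
PARTITIONED BY (date int, city string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/cloud/location'
TBLPROPERTIES ('avro.schema.literal'='{"type":"record","name":"mytable","fields":[{"name":"id","type":"long"},{"name":"tags","type":{"type":"array","items":"string"}}]}');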

Define nested items while creating table in Hive

I am trying to create an external Hive table using a CSV file as input.
This is what my data looks like:
xxx|2021-08-14 07:10:41.080|[{"sub1","90"},{"sub2","95"}]
I am creating the table using the SQL below:
CREATE EXTERNAL TABLE mydb.mytable (
Name string,
Last_upd_timestamp timestamp,
subjects array<struct<sub_code:string,sub_marks:string>>
)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES ('collection.delim'=',','field.delim'='|','serialization.format'='|')
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 'hdfs://nameservice1/myinputfile';
When I try the above, the table is created with the subjects column looking like:
[{"sub_code":"[{\"sub1\",\"90\"},{\"sub2\",\"95\"}]","sub_marks":null}]
Not sure what I am doing wrong in the above. I would highly appreciate it if someone could help me with how I can create the table with the expected output.
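For comparison, LazySimpleSerDe only splits text on per-level delimiters; it does not parse JSON-like values such as [{"sub1","90"}]. If I remember the delimiter nesting correctly (field delimiter, then the collection delimiter between array elements, then the map-key delimiter between struct members), data rewritten as plain delimited text could be mapped as sketched below. The ':' separator and the rewritten sample line are my own illustration, not something from the question.
-- sample line the table below could parse:
-- xxx|2021-08-14 07:10:41.080|sub1:90,sub2:95
CREATE EXTERNAL TABLE mydb.mytable_delimited (
Name string,
Last_upd_timestamp timestamp,
subjects array<struct<sub_code:string,sub_marks:string>>
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
COLLECTION ITEMS TERMINATED BY ','
MAP KEYS TERMINATED BY ':'
STORED AS TEXTFILE
LOCATION 'hdfs://nameservice1/myinputfile';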

Is defining a delimiter in a hive ORC Table useless?

When you create an ORC table in Hive, you are changing the file format to ORC. This means you can't look at a specific file outside of the ORC table.
Here's an example ORC create table statement:
CREATE TABLE IF NOT EXISTS table_orc_v1
(
col1 int,
col2 int
)
PARTITIONED BY (odate date)
CLUSTERED BY (col1) INTO 10 BUCKETS
STORED AS ORC TBLPROPERTIES('transactional'='true');
If I try to make this a CSV table (like you would on a non-ORC table), will it:
1) not affect table performance,
2) slow down performance as it converts things to a CSV file that you can never read,
3) give me some benefit that I'm not aware of, or
4) do something else?
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
If you are using any binary format (ORC, Avro, Parquet) to store your data, then ROW FORMAT DELIMITED FIELDS TERMINATED BY is simply ignored. You can use it in your table syntax and it might not give you any error, but the delimiters are not being used.
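To make that concrete, the two statements below should end up behaving the same way; the delimiter clause in the second one lands in the table metadata but ORC never consults it. A minimal sketch, not tied to any particular Hive version:
-- plain ORC table
CREATE TABLE t_orc_plain (col1 int, col2 int) STORED AS ORC;
-- same table with a pointless delimiter clause; ORC ignores it
CREATE TABLE t_orc_delim (col1 int, col2 int)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS ORC;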

data appears as null on redshift external table while working right on athena

So I'm trying to run the following simple query on Redshift Spectrum:
select * from company.vehicles where vehicle_id is not null
and it returns 0 rows (all of the rows in the table are null). However, when I run the same query on Athena it works fine and returns results. I tried MSCK REPAIR, but both Athena and Redshift are using the same metastore, so it shouldn't matter.
I also don't see any errors.
The format of the files is orc.
The create table query is:
CREATE EXTERNAL TABLE `vehicles`(
`vehicle_id` bigint,
`parent_id` bigint,
`client_id` bigint,
`assets_group` int,
`drivers_group` int)
PARTITIONED BY (
`dt` string,
`datacenter` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
's3://company-rt-data/metadata/out/vehicles/'
TBLPROPERTIES (
'CrawlerSchemaDeserializerVersion'='1.0',
'CrawlerSchemaSerializerVersion'='1.0',
'classification'='orc',
'compressionType'='none')
Any idea?
How did you create your external table?
For Spectrum, you have to explicitly set the parameters that define what should be treated as null.
Add the parameter 'serialization.null.format'='' in TABLE PROPERTIES so that all columns containing '' will be treated as NULL in your external table in Spectrum:
CREATE EXTERNAL TABLE external_schema.your_table_name(
)
row format delimited
fields terminated by ','
stored as textfile
LOCATION [filelocation]
TABLE PROPERTIES('numRows'='100', 'skip.header.line.count'='1','serialization.null.format'='');
Alternatively, you can set up the SERDEPROPERTIES while creating the external table so that NULL values are recognized automatically, as in the sketch below.
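A rough sketch of that variant for a delimited text layout (the column names, delimiter, and S3 location are placeholders):
CREATE EXTERNAL TABLE external_schema.your_table_name (
col1 varchar(100),
col2 bigint
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'field.delim' = ',',
'serialization.null.format' = ''
)
STORED AS TEXTFILE
LOCATION 's3://your-bucket/your-prefix/';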
Eventually it turned out to be a bug in Redshift. In order to fix it, we needed to run the following command:
ALTER TABLE table_name SET TABLE PROPERTIES ('orc.schema.resolution'='position');
I had a similar problem and found this solution.
In my case I had external tables that were created with Athena, pointing to an S3 bucket that contained heavily nested JSON data. To access them with Redshift I ran SET json_serialization_enable TO true; before my queries to make the nested JSON columns queryable. This led to some columns being NULL when the JSON exceeded a size limit; see here:
If the serialization overflows the maximum VARCHAR size of 65535, the cell is set to NULL.
To solve this issue I used Amazon Redshift Spectrum instead of serialization: https://docs.aws.amazon.com/redshift/latest/dg/tutorial-query-nested-data.html.
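For context, the Spectrum approach keeps the nesting in the external table definition and navigates it in SQL instead of serializing everything to VARCHAR. A sketch along the lines of the linked tutorial (the schema, table, and column names here are made up):
-- external table with nested columns (Parquet data assumed)
CREATE EXTERNAL TABLE spectrum.customers (
id bigint,
name struct<given:varchar(30), family:varchar(30)>,
phones array<varchar(20)>
)
STORED AS PARQUET
LOCATION 's3://your-bucket/customers/';
-- dot notation for struct fields, a FROM-clause alias to unnest the array
SELECT c.id, c.name.given, p AS phone
FROM spectrum.customers c, c.phones p;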

Parquet Files Generation with hive

I'm trying to generate some Parquet files with Hive. To accomplish this I loaded a regular Hive table from some .tbl files, through this command in Hive:
CREATE TABLE REGION (
R_REGIONKEY BIGINT,
R_NAME STRING,
R_COMMENT STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS TEXTFILE
location '/tmp/tpch-generate';
After this I just execute these 2 lines:
create table parquet_region LIKE region STORED AS PARQUET;
insert into parquet_region select * from region;
But when I check the output generated in HDFS, I don't find any .parquet file; instead I find file names like 0000_0 to 0000_21, and the sum of their sizes is much bigger than the original .tbl file.
What am I doing wrong?
The INSERT statement doesn't create files with an extension, but these are the Parquet files.
You can use DESCRIBE FORMATTED <table> to show table information.
hive> DESCRIBE FORMATTED <table_name>
Additional note: you can also create a new table from a source table using the query below:
CREATE TABLE new_test STORED AS PARQUET AS SELECT * FROM source_table;
It will create the new table in Parquet format and copy the structure as well as the data.