I am trying to create an external Hive table using a CSV file as input.
This is what my data looks like:
xxx|2021-08-14 07:10:41.080|[{"sub1","90"},{"sub2","95"}]
I am creating the table using the SQL below:
CREATE EXTERNAL TABLE mydb.mytable (
Name string,
Last_upd_timestamp timestamp,
subjects array<struct<sub_code:string,sub_marks:string>>)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES ('collection.delim'=',','field.delim'='|','serialization.format'='|')
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 'hdfs://nameservice1/myinputfile'
When I try the above, the table gets created, but the subjects column comes out like this:
[{"sub_code":"[{\"sub1\",\"90\"},{\"sub2\",\"95\"}]","sub_marks":null}]
I'm not sure what I am doing wrong here. I would highly appreciate it if someone could help me figure out how to create the table so it produces the expected output.
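In case it helps frame the problem: LazySimpleSerDe does not parse JSON-style text at all; it expects one plain delimiter per nesting level (field, then collection item, then struct field). A row reshaped like the line below could in principle map onto array<struct<sub_code:string,sub_marks:string>> if a third-level delimiter is also configured, e.g. 'mapkey.delim'=':'. This reshaped sample and the extra property are a sketch of the idea, not a confirmed fix:

xxx|2021-08-14 07:10:41.080|sub1:90,sub2:95

WITH SERDEPROPERTIES ('field.delim'='|','collection.delim'=',','mapkey.delim'=':','serialization.format'='|')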
Related
I have some data stored in GCS bucket in the following path:
gcs://my-bucket/my_data/subfolder1/subfolder2/**.csv.gz
I intend to create an external table mapping to my_data, and I want the external table to be able to partition the data by the different levels of subfolders. Note that subfolder1 and subfolder2 don't have a Hive partition prefix, i.e., they are not in the format prefix=value.
If I were to write some pseudo code in Athena syntax, it would look something like this:
CREATE EXTERNAL TABLE `my_data`(
-- Column specs go here
)
PARTITIONED BY (
`partition_0` string,
`partition_1` string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'gcs://my-bucket/my_data/'
TBLPROPERTIES (...)
As a result of the pseudo code, the table will consist of two partition columns in addition to the columns defined in the column spec:
partition_0
partition_1
Queries filtering on these two columns will then benefit from partition pruning.
Could anyone please advise whether this is possible in BigQuery? If so, how should I go about it in SQL?
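For what it's worth, BigQuery's externally partitioned tables have a CUSTOM hive-partitioning mode in which the partition keys are encoded positionally in the source URI prefix with a {name:type} template, which is meant for exactly this kind of layout without prefix=value folder names. A minimal, unverified sketch follows; the dataset name and column spec are placeholders, and note that BigQuery uses gs:// rather than gcs:// URIs:

CREATE EXTERNAL TABLE `mydataset.my_data` (
  -- Column specs go here
)
WITH PARTITION COLUMNS (
  partition_0 STRING,
  partition_1 STRING
)
OPTIONS (
  format = 'CSV',
  compression = 'GZIP',
  uris = ['gs://my-bucket/my_data/*.csv.gz'],
  -- CUSTOM mode: keys are read from the folder positions named in the template
  hive_partition_uri_prefix = 'gs://my-bucket/my_data/{partition_0:STRING}/{partition_1:STRING}',
  require_hive_partition_filter = false
);

Filters on partition_0 and partition_1 should then prune which objects are scanned.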
I have created a Hive external table with an Avro schema (complex types) and partition columns. After adding the required partition files, a select query returns null values for all columns except the partition columns. The Avro schema has array and struct types inside.
Here is the DDL:
CREATE EXTERNAL TABLE mytable PARTITIONED BY (date int, city string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/cloud/location'
TBLPROPERTIES ('avro.schema.url'='cloud location for schema file');
I tried pointing TBLPROPERTIES at the schema file directly, and I also tried a schema literal; the select query returns null for all the columns either way.
Any suggestions for fixing this issue? Is there anything missing in this scenario?
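For anyone debugging something similar, two quick checks are whether Hive actually resolved the schema from avro.schema.url and whether the partitions point at directories that really contain the Avro files. A sketch; the partition values and path are made up:

-- Shows the columns Hive derived from the Avro schema
DESCRIBE FORMATTED mytable;

-- Register a partition explicitly against its data directory (hypothetical values)
ALTER TABLE mytable ADD PARTITION (`date`=20200101, city='london')
LOCATION '/cloud/location/date=20200101/city=london';

SELECT * FROM mytable LIMIT 10;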
I was able to create an external Hive table with just one column containing Avro data stored in HBase, using the following query:
CREATE EXTERNAL TABLE test_hbase_avro
ROW FORMAT SERDE 'org.apache.hadoop.hive.hbase.HBaseSerDe'
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
"hbase.columns.mapping" = ":key,familyTest:columnTest",
"familyTest.columnTest.serialization.type" = "avro",
"familyTest.columnTest.avro.schema.url" = "hdfs://path/person.avsc")
TBLPROPERTIES (
"hbase.table.name" = "otherTest",
"hbase.mapred.output.outputtable" = "hbase_avro_table",
"hbase.struct.autogenerate"="true");
What I would like to do is create a table from the same Avro file plus other columns containing strings or integers, but I was not able to do that and didn't find any example. Can anyone help me? Thank you.
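I haven't verified this, but since every extra qualifier in hbase.columns.mapping becomes its own Hive column, one starting point might be to declare the mixed table explicitly instead of relying on hbase.struct.autogenerate. In this sketch the notes and score qualifiers are hypothetical, and the struct type would have to mirror the fields in person.avsc:

CREATE EXTERNAL TABLE test_hbase_avro_mixed (
  key string,
  persondata struct<name:string,age:int>,  -- assumed to mirror person.avsc
  notes string,
  score int)
ROW FORMAT SERDE 'org.apache.hadoop.hive.hbase.HBaseSerDe'
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key,familyTest:columnTest,familyTest:notes,familyTest:score",
  "familyTest.columnTest.serialization.type" = "avro",
  "familyTest.columnTest.avro.schema.url" = "hdfs://path/person.avsc")
TBLPROPERTIES (
  "hbase.table.name" = "otherTest",
  "hbase.mapred.output.outputtable" = "otherTest");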
I have created a partitioned external table in Hive that stores files in Parquet format. I have a timestamp column in that table; when I load data, it gives nulls in the timestamp column.
Create table query:
CREATE EXTERNAL TABLE abc(
timestamp1 timestamp,
tagname string,
value string,
quality bigint,
own string)
PARTITIONED BY (
etldate string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
'adl://refdatalakeprod.azuredatalakestore.net/iconic'
TBLPROPERTIES (
'PARQUET.COMPRESS'='SNAPPY');
Any suggestions, please?
Thanks in advance.
The premise of your question is wrong: it's not a timestamp type, it is a string type. I think you need to check your data.
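One way to verify that claim is to read the same files with the column declared as string and see whether raw values appear. A sketch; abc_check is a hypothetical throwaway table, and MSCK assumes the partition folders are named etldate=...:

CREATE EXTERNAL TABLE abc_check (
  timestamp1 string,
  tagname string,
  value string,
  quality bigint,
  own string)
PARTITIONED BY (etldate string)
STORED AS PARQUET
LOCATION 'adl://refdatalakeprod.azuredatalakestore.net/iconic';

MSCK REPAIR TABLE abc_check;

-- If raw values show up here, the files really store strings
SELECT timestamp1, CAST(timestamp1 AS timestamp) FROM abc_check LIMIT 10;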
I have an HBase table in the following format:
key : userId#country
column family: k
columns: date#visits, visits
How do I make a Hive table which looks like this:
userId, date, country, visits
I tried to fiddle around with the column mapping, and so far I have only managed to do this:
CREATE EXTERNAL TABLE hbase_table(key string, visits int)
ROW FORMAT DELIMITED
COLLECTION ITEMS TERMINATED BY '#'
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,k:visits")
TBLPROPERTIES ("hbase.table.name" = "kpi");
I have been working on this for hours and haven't made much progress. Can someone point me in the right direction?
I found out how to map an HBase key into a Hive struct; it's not exactly what I want, but it helps:
CREATE EXTERNAL TABLE hbase_table(key struct<id:string, country:string>, visits int)
ROW FORMAT DELIMITED
COLLECTION ITEMS TERMINATED BY '#'
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,k:visits")
TBLPROPERTIES ("hbase.table.name" = "kpi");
Is userId a column in your column family 'k'? If it is, then don't give ":key" inside the mapping. Try giving "k:userId" instead.