Hive - Create Table statement with 'select query' and 'fields terminated by' commands

I want to create a table in Hive using a select statement which takes a subset of the data from another table. I used the following query to do so:
create table sample_db.out_table as
select * from sample_db.in_table where country = 'Canada';
When I looked into the HDFS location of this table, there were no field separators.
But I need to create a table with filtered data from another table along with a field separator. For example, I am trying to do something like:
create table sample_db.out_table as
select * from sample_db.in_table where country = 'Canada'
ROW FORMAT SERDE
FIELDS TERMINATED BY '|';
This is not working though. I know the alternate way is to create a table structure with field names and the "FIELDS TERMINATED BY '|'" command and then load the data.
But is there any other way to combine the two into a single query that enables me to create a table with filtered data from another table and also with a field separator?

Put row format delimited ... in front of AS select.
Do it like this, adapting the query to your own tables:
hive> CREATE TABLE ttt row format delimited fields terminated by '|' AS select *, count(1) from t1 group by id, name;
Query ID = root_20180702153737_37802c0e-525a-4b00-b8ec-9fac4a6d895b
here is the result
[root@hadoop1 ~]# hadoop fs -cat /user/hive/warehouse/ttt/**
2|\N|1
3|\N|1
4|\N|1
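Applied to the tables from the question, the same pattern gives (a sketch reusing the question's names):
CREATE TABLE sample_db.out_table
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
AS SELECT * FROM sample_db.in_table WHERE country = 'Canada';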

As you can see in the documentation, when using the CTAS (Create Table As Select) statement, the ROW FORMAT clause (in fact, all the settings related to the new table) goes before the SELECT statement.

Related

How to create an Impala table with a complex data type, and how can I specify a delimiter for an array type column?

I am trying to create an Impala table with an array column type, and I have to use a custom delimiter for the array column.
I tried the query below, but it's throwing an error.
Create table array_demo( arra_col ARRAY<string>) row format delimited fields terminated by ','
collection items terminated by '|' stored as parquet
You should omit the ROW FORMAT clause and the subclauses specifying the terminators, and include a STORED AS clause (Parquet is the only format Impala supports with complex data).
The data files used to load the table have to be in Parquet format too.
If you don't have the data file in Parquet format, you can create the table in Hive,
then create a copy using CREATE TABLE … AS SELECT (a CTAS statement) with STORED AS PARQUET.
You can then query the table in Impala.
As an example:
-- Create table in Hive
CREATE TABLE array_demo( arra_col ARRAY<STRING>)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
COLLECTION ITEMS TERMINATED BY '|'
STORED AS TEXTFILE;
-- Copy the table in Parquet format (STORED AS goes before AS SELECT)
CREATE TABLE array_demo_impala
STORED AS PARQUET
AS SELECT *
FROM array_demo;
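Once the Parquet copy exists, Impala has to refresh its catalog before it can see the new table; array elements are then read through Impala's ITEM pseudo-column. A minimal sketch, run in impala-shell and reusing the names above:
-- Let Impala pick up the table created in Hive, then query the array elements
INVALIDATE METADATA array_demo_impala;
SELECT arr.item
FROM array_demo_impala t, t.arra_col arr;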

BigQuery - Append missing records from one table to another

I have two tables, todays_data and full_data, with the same schema (id STRING, name STRING, age STRING). Records in todays_data may or may not be present in full_data. I need to identify the new records (new id values) in todays_data and append them to full_data (id is the reference key). How can I achieve this using 1) a Web UI SQL statement and 2) the bq command?
Below is a query you should run with the full_data table as the Destination Table and with Append to table as the Write Preference:
SELECT id, name, age
FROM todays_data
WHERE NOT id IN (
SELECT id
FROM full_data
GROUP BY id
)
See Storing results in a permanent table for more on how to achieve this in the Web UI and on the command line.
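For the bq part, something along these lines should work (a sketch in legacy SQL; mydataset is a placeholder for your dataset name):
bq query --destination_table=mydataset.full_data \
    --append_table \
    "SELECT id, name, age
     FROM [mydataset.todays_data]
     WHERE NOT id IN (SELECT id FROM [mydataset.full_data] GROUP BY id)"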

BigQuery: Append to table from select with nested record ('Insert into tablename select')

Hi, is there a way to append a selected result with a nested column into an existing table?
BigQuery seems not to support 'insert into tablename select ...', so I tried it over the .NET API. It works fine, but if my select contains a nested record I get the following error (with or without the flatten-results flag):
'Field Products from table oxidation.2016_91 is not a leaf field. '
The table schema for this column in the destination table is the same.
It seems to work only if I write out the column name in the nested column, but I want the destination table structure to stay the same.
If the schemas are the same, the following should work: run the SELECT as a query job with the existing table as the destination table, the write preference set to append, Allow Large Results enabled, and Flatten Results disabled, so that the nested structure is preserved.
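A minimal sketch with the bq tool (the destination table name is a placeholder; the same options exist on the query job configuration in the .NET API):
bq query --destination_table=oxidation.existing_table \
    --append_table \
    --allow_large_results \
    --noflatten_results \
    "SELECT * FROM [oxidation.2016_91]"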

Dynamic partition cannot be the parent of a static partition

I'm trying to aggregate data from one table (whose data is re-calculated monthly) into another table (holding the same data, but for all time) in Hive. However, whenever I try to combine the data, I get the following error:
FAILED: SemanticException [Error 10094]: Line 3:74 Dynamic partition cannot be the parent of a static partition 'category'
The code I'm using to create the tables is below:
create table my_data_by_category (views int, submissions int)
partitioned by (category string)
row format delimited
fields terminated by ','
escaped by '\\'
location '${hiveconf:OUTPUT}/${hiveconf:DATE_DIR}/my_data_by_category';
create table if not exists my_data_lifetime_total_by_category
like my_data_by_category
row format delimited
fields terminated by ','
escaped by '\\'
stored as textfile
location '${hiveconf:OUTPUT}/lifetime-totals/my_data_by_category';
The code I'm using to populate the tables is below:
insert overwrite table my_data_by_category partition(category)
select mdcc.col1, mdcc2.col2, pcc.category
from my_data_col1_counts_by_category mdcc
left outer join my_data_col2_counts_by_category mdcc2 where mdcc.category = mdcc2.category
group by mdcc.category, mdcc.col1, mdcc2.col2;
insert overwrite table my_data_lifetime_total_by_category partition(category)
select mdltc.col1 + mdc.col1 as col1, mdltc.col2 + mdc.col2, mdc.category
from my_data_lifetime_total_by_category mdltc
full outer join my_data_by_category mdc on mdltc.category = mdc.category
where mdltc.col1 is not null and mdltc.col2 is not null;
The frustrating part is that I have this data partitioned on another column, and repeating this same process with that partition works without a problem. I've tried Googling the "Dynamic partition cannot be the parent of a static partition" error message, but I can't find any guidance on what causes it or how it can be fixed. I'm pretty sure that there's an issue with the way one or more of my tables is set up, but I can't see what. What's causing this error and what can I do to resolve it?
There is no partitioned by clause in this create statement. Because you are trying to insert into a non-partitioned table while using a partition spec in the insert statement, it fails.
create table if not exists my_data_lifetime_total_by_category
like my_data_by_category
row format delimited
fields terminated by ','
escaped by '\\'
stored as textfile
location '${hiveconf:OUTPUT}/lifetime-totals/my_data_by_category';
No, you don't need to add a partition clause.
You are doing group by mdcc.category in insert overwrite table my_data_by_category partition(category)..., but you are not using any UDAF.
Are you sure you can do this?
I think that if you change your second create statement to:
create table if not exists my_data_lifetime_total_by_category (views int, submissions int)
partitioned by (category string)
row format delimited
fields terminated by ','
escaped by '\\'
stored as textfile
location '${hiveconf:OUTPUT}/lifetime-totals/my_data_by_category';
you should then be free of errors
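Note that a fully dynamic insert such as partition(category) also requires dynamic partitioning to be enabled in nonstrict mode; these are the standard Hive settings for that (shown as a sketch):
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;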

How can I insert a key-value pair into a hive map?

Based on the following tutorial, Hive has a map type. However, there does not seem to be a documented way to insert a new key-value pair into a Hive map, via a SELECT with some UDF or built-in function. Is this possible?
As a clarification, suppose I have a table called foo with a single column, typed map, named column_containing_map.
Now I want to create a new table that also has one column, typed map, but I want each map (which is contained within a single column) to have an additional key-value pair.
A query might look like this:
CREATE TABLE IF NOT EXISTS bar AS
SELECT ADD_TO_MAP(column_containing_map, "NewKey", "NewValue")
FROM foo;
Then the table bar would contain the same maps as table foo except each map in bar would have an additional key-value pair.
Suppose you have a student table which contains students' marks in various subjects.
hive> desc student;
id string
name string
class string
marks map<string,string>
You can insert values directly into the table as below.
INSERT INTO TABLE student
SELECT STACK(1,
'100','Sekar','Mathematics',map("Mathematics","78")
)
FROM empinfo
LIMIT 1;
Here the 'empinfo' table can be any table in your database.
And the result is:
100 Sekar Mathematics {"Mathematics":"78"}
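STACK's first argument is the number of rows to generate, so several rows can be inserted at once; a sketch with a made-up second student:
INSERT INTO TABLE student
SELECT STACK(2,
'100','Sekar','Mathematics',map("Mathematics","78"),
'101','Arun','Science',map("Science","85")
)
FROM empinfo
LIMIT 2;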
For key-value pairs, you can insert with SQL like the following:
INSERT INTO TABLE student values( "id","name",'class',
map("key1","value1","key2","value2","key3","value3","key4","value4") )
Please pay attention to the sequence of the values in the map.
I think the combine function from Brickhouse will do what you need. Slightly modifying the query in your original question, it would look something like this:
SELECT
combine(column_containing_map, str_to_map("NewKey:NewValue"))
FROM
foo;
The limitation with this example is that str_to_map creates a MAP<STRING,STRING>. If your Hive map contains other primitive types for the keys or values, this won't work.
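In that case you could build the second map with map() instead of str_to_map, letting Hive infer the types (a sketch, still assuming Brickhouse's combine and a map with int values):
SELECT
combine(column_containing_map, map("NewKey", 42))
FROM
foo;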
I'm sorry, I didn't quite get this. What do you mean by 'with some UDF or built-in function'? If you wish to insert into a table which has a map field, it's similar to any other datatype. For example:
I have a table called complex1, created like this:
CREATE TABLE complex1(c1 array<string>, c2 map<int,string>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
COLLECTION ITEMS TERMINATED BY '-'
MAP KEYS TERMINATED BY ':'
LINES TERMINATED BY '\n';
I also have a file, called com.txt, which contains this:
Mohammad-Tariq,007:Bond
Now, I'll load this data into the table created above:
load data inpath '/inputs/com.txt' into table complex1;
So this table contains:
select * from complex1;
OK
["Mohammad","Tariq"] {7:"Bond"}
Time taken: 0.062 seconds
I have one more table, called complex2:
CREATE TABLE complex2(c1 map<int,string>);
Now, to select data from complex1 and insert it into complex2, I'll do this:
insert into table complex2 select c2 from complex1;
Scan the table to cross-check:
select * from complex2;
OK
{7:"Bond"}
Time taken: 0.062 seconds
HTH