Can I use PARTITIONED BY after the table has been created? - hive

create table t1 as select * from t2 where 1=2;
I am using the above code to create table t1 from table t2. Table t2 is partitioned on 3 columns, i.e. month, day, and year. Once table t1 is created, it is not partitioned on the columns mentioned above.
I have tried the below code but it is giving me errors. Help!
create table t1 as
select * from t2 PARTITIONED BY( YEAR STRING, MONTH STRING, DAY STRING);
[42000]: Error while compiling statement: FAILED: ParseException line 1:0 cannot recognize input near 'PARTITIONED' 'BY' '(' in table source

You just need to correct the syntax: PARTITIONED BY ... goes after CREATE TABLE.
create table t1 PARTITIONED BY(YEAR STRING,MONTH STRING,DAY STRING) as
select /*add other columns here*/,year,month,day
from t2;
It is recommended to list the columns explicitly instead of using * and to put the partition columns at the end of the SELECT.

The answer above is right; it is the solution for creating the partitions at table-creation time.
In case the table was already created without partitions, one way is to use INSERT OVERWRITE (into a table that does define the partitions):
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
INSERT OVERWRITE TABLE <table_name> PARTITION(<partition_name>)
SELECT <column_1,... column_n, partition_name> from <table_name>;
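Applied to the tables in the question, a minimal sketch of that approach could look like the following. It assumes you create a new partitioned table (t1_part is an illustrative name) and that col1, col2 stand in for the real non-partition columns:
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
-- t1_part inherits t2's schema, including the year/month/day partition columns
CREATE TABLE t1_part LIKE t2;
-- partition columns go last in the SELECT
INSERT OVERWRITE TABLE t1_part PARTITION (year, month, day)
SELECT col1, col2, year, month, day
FROM t1;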

Related

Is there a workaround to my attempted Hive insert

I copy the structure of schema2.card_master over to schema1.card_master using
hive> create table schema1.card_master like schema2.card_master;
That works, and it is partitioned on a field, as the original was. This new table has hundreds of fields, so they are inconvenient to list out, but I want all of them populated from the original table. Now I want to populate it using a JOIN:
hive> insert overwrite table schema1.card_master (select * from schema2.card_master ccm INNER JOIN schema1.accounts da on ccm.cm13 = da.cm13);
FAILED: SemanticException 1:23 Need to specify partition columns because the destination table is partitioned. Error encountered near token 'cmdl_card_master'
I checked the partition that was copied over, and it was a field mkt_cd that could take on 2 values, US or PR.
So I try
hive> insert overwrite table schema1.card_master PARTITION (mkt_cd='US') (select * from schema2.card_master ccm INNER JOIN schema1.accounts da on ccm.cm13 = da.cm13);
FAILED: SemanticException [Error 10044]: Line 1:23 Cannot insert into target table because column number/types are different ''US'': Table insclause-0 has 255 columns, but query has 257 columns.
hive>
What is going on here? Is there any workaround to load my data without having to explicitly mention all the fields in the SELECT statement for schema2.card_master?
select * selects columns from every table in the join. Use select ccm.* instead of select * to take columns from the ccm table only. Also remove the static partition specification ('US') and use dynamic partitioning instead, because ccm.* already contains the partition column, and when loading a static partition you should not have the partition column in the select.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
insert overwrite table schema1.card_master partition(mkt_cd) --dynamic partition
select ccm.* --use alias
from schema2.card_master ccm
INNER JOIN schema1.accounts da on ccm.cm13 = da.cm13
;
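As a quick sanity check after the load, you can list the partitions that the dynamic insert created (table names taken from the question; the expected values are an assumption based on the description of mkt_cd):
USE schema1;
SHOW PARTITIONS card_master;
-- should list mkt_cd=US and mkt_cd=PR, assuming both values occur in the joined data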

Create Temporary Table with Select and Values

I'm trying to create a temporary table in Hive as follows:
CREATE TEMPORARY TABLE mydb.tmp2
AS SELECT * FROM (VALUES (0, 'abc'))
AS T (id , mystr);
But that gives me the following error:
SemanticException [Error 10296]: Values clause with table constructor not yet supported
Is there another way to create a temporary table by explicitly and directly providing the values in the same command?
My ultimate goal is to run a MERGE command, and the temporary table would be inserted after the USING command. So something like this:
MERGE INTO mydb.mytbl
USING <temporary table>
...
Use subquery instead of temporary table:
MERGE INTO mydb.mytbl t
USING (SELECT 0 as id, 'abc' as mystr) tmp on tmp.id = t.id
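A fuller sketch of that idea, with the WHEN clauses filled in (the updated column and the inserted values are assumptions, since the question does not show the target table's schema):
MERGE INTO mydb.mytbl t
USING (SELECT 0 AS id, 'abc' AS mystr) tmp
ON tmp.id = t.id
WHEN MATCHED THEN UPDATE SET mystr = tmp.mystr
WHEN NOT MATCHED THEN INSERT VALUES (tmp.id, tmp.mystr);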
Hive does not support the VALUES table constructor yet. You can achieve this using the query below:
CREATE TEMPORARY TABLE mydb.tmp2
AS SELECT 0 as id, 'abc' as mystr;
For the merge, you can use the temporary table as below:
merge into target_table
using ( select * from mydb.tmp2) temp
on temp.id = target_table.id
when matched then update set ...
when not matched then insert values (...);

Need to add a constant value in a column while loading a hive table

I created a table named table1 in Hive and I need to insert data from table2 into table1. I used the statement below to get the output.
I also need to add a new column with some constant value -- colx = 'colval' -- along with the columns in table2, but I am not sure how to add it. Thanks!
INSERT INTO TABLE table1 select * FROM table2;
If you are willing to drop table1 and recreate it from scratch, you could do this:
-- I'm using Hive 0.13.0
DROP TABLE IF EXISTS table1;
CREATE TABLE table1 AS SELECT *, 'colval' AS colx FROM TABLE2;
If that is not an option for some reason, you can use INSERT OVERWRITE:
ALTER TABLE table1 ADD COLUMNS (colx STRING); -- Assuming you haven't created the column already
INSERT OVERWRITE TABLE table1 SELECT *, 'colval' FROM table2;

Create new temp table from old temp table using Create As Select

I am doing some testing and unable to create a new temp table from old temp table.
This is my code.
1st table
CREATE TABLE #Temp1 ( Col1 Money, Col2 Money );
This works fine.
2nd table
CREATE TABLE #Temp2
AS (Select Col1, Col2
From #Temp1)
This errors with
Incorrect syntax near '('.
I am following this link to learn, which has the following code
CREATE TABLE new_table
AS (SELECT * FROM old_table);
This is almost the same as mine, except mine are temp tables.
I tried using
CREATE TABLE #Temp2
AS (Select Col1, Col2
From tempdb..#Temp1)
to make sure it finds the path of the temp table
but it gives me
Database name 'tempdb' ignored, referencing object in tempdb.
Is there a different way to do it when both are temp tables ?
The CREATE TABLE ... AS syntax is not valid for SQL Server. That site doesn't say which RDBMS it is for, so maybe it is more generic and works on others. Here is the MSDN page for CREATE TABLE.
Creating tables on the fly can be done with the INTO clause of a SELECT statement.
If you want to copy the table (schema and data):
SELECT *
INTO #Temp2
FROM #Temp1
If you only want to create a similar table (schema only):
SELECT *
INTO #Temp2
FROM #Temp1
WHERE 1 = 0;

Hive insert query like SQL

I am new to Hive, and I want to know if there is any way to insert data into a Hive table like we do in SQL. I want to insert my data into Hive like
INSERT INTO tablename VALUES (value1,value2..)
I have read that you can load data from a file into a Hive table, or that you can import data from one table into a Hive table, but is there any way to append data as in SQL?
Some of the answers here are out of date as of Hive 0.14
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-InsertingvaluesintotablesfromSQL
It is now possible to insert using syntax such as:
CREATE TABLE students (name VARCHAR(64), age INT, gpa DECIMAL(3, 2));
INSERT INTO TABLE students
VALUES ('fred flintstone', 35, 1.28), ('barney rubble', 32, 2.32);
You can use the table generating function stack to insert literal values into a table.
First you need a dummy table which contains only one row. You can generate it with the help of LIMIT.
CREATE TABLE one AS
SELECT 1 AS one
FROM any_table_in_your_database
LIMIT 1;
Now you can create a new table with literal values like this:
CREATE TABLE my_table AS
SELECT stack(3
, "row1", 1
, "row2", 2
, "row3", 3
) AS (column1, column2)
FROM one
;
The first argument of stack is the number of rows you are generating.
You can also add values to an existing table:
INSERT INTO TABLE my_table
SELECT stack(2
, "row4", 1
, "row5", 2
) AS (column1, column2)
FROM one
;
A slightly better version of unique2's suggestion is below:
insert overwrite table target_table
select * from
(
select stack(
3,              -- generating a new table with 3 records
'John', 80,     -- record_1
'Bill', 61,     -- record_2
'Martha', 101   -- record_3
)
) s;
This does not require the hack of using an already existing table.
You can use the approach below. With this, you don't need to create a temp table or a txt/csv file for a subsequent select and load, respectively.
INSERT INTO TABLE tablename SELECT value1, value2 FROM tempTable_with_atleast_one_records LIMIT 1;
Here tempTable_with_atleast_one_records is any table with at least one record.
The problem with this approach is that if you have an INSERT statement which inserts multiple rows, like the one below,
INSERT INTO yourTable values (1 , 'value1') , (2 , 'value2') , (3 , 'value3') ;
then you need a separate Hive INSERT statement for each row. See below.
INSERT INTO TABLE yourTable SELECT 1 , 'value1' FROM tempTable_with_atleast_one_records LIMIT 1;
INSERT INTO TABLE yourTable SELECT 2 , 'value2' FROM tempTable_with_atleast_one_records LIMIT 1;
INSERT INTO TABLE yourTable SELECT 3 , 'value3' FROM tempTable_with_atleast_one_records LIMIT 1;
No. This INSERT INTO tablename VALUES (x,y,z) syntax is currently not supported in Hive.
You can definitely append data to an existing table, although it is not actually an append at the HDFS level. Whenever you do a LOAD or INSERT operation on an existing Hive table without the OVERWRITE clause, the new data is added without replacing the old data; a new file is created for this newly inserted data inside the directory corresponding to that table. For example:
I have a file named demo.txt which has 2 lines:
ABC
XYZ
Create a table and load this file into it
hive> create table demo(foo string);
hive> load data inpath '/demo.txt' into table demo;
Now, if I do a SELECT on this table, it'll give me:
hive> select * from demo;
OK
ABC
XYZ
Suppose I have one more file named demo2.txt which has:
PQR
And I do a LOAD again on this table without using overwrite,
hive> load data inpath '/demo2.txt' into table demo;
And if I do a SELECT again, it'll give me:
hive> select * from demo;
OK
ABC
XYZ
PQR
HTH
Ways to insert data into a Hive table:
For demonstration, I am using table1 and table2 as the table names.
create table table2 as select * from table1 where 1=1;
or
create table table2 as select * from table1;
insert overwrite table table2 select * from table1;
--it will insert data from one to another. Note: It will refresh the target.
insert into table table2 select * from table1;
--it will insert data from one to another. Note: It will append into the target.
load data local inpath 'local_path' overwrite into table table1;
--it will load data from local into the target table and also refresh the target table.
load data inpath 'hdfs_path' overwrite into table table1;
--it will load data from an hdfs location and also refresh the target table.
or
create table table2(
col1 string,
col2 string,
col3 string)
row format delimited fields terminated by ','
location 'hdfs_location';
load data local inpath 'local_path' into table table1;
--it will load data from local and also append into the target table.
load data inpath 'hdfs_path' into table table1;
--it will load data from hdfs location and also append into the target table.
insert into table2 values('aa','bb','cc');
--Let's say table2 has 3 columns only.
Multiple insertion into hive table
Yes, you can insert, but not in quite the same way as in SQL. In SQL we can insert row-level data directly, but here you insert by fields (columns) selected from another table.
While doing this, you have to make sure the target table and the query have the same data types and the same number of columns.
eg:
CREATE TABLE test(stu_name STRING,stu_id INT,stu_marks INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
INSERT OVERWRITE TABLE test SELECT lang_name, lang_id, lang_legacy_id FROM export_table;
To insert the entire data of table2 into table1, below is a query:
INSERT INTO TABLE table1 SELECT * FROM table2;
You can't do an INSERT INTO to insert a single record; it's not supported by Hive. You may place all the new records that you want to insert in a file and load that file into a temp table in Hive. Then, using an insert overwrite..select command, insert those rows into a new partition of your main Hive table. The constraint here is that your main table will have to be pre-partitioned. If you don't use a partition, your whole table will be replaced with these new records.
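A minimal sketch of that staging approach (the table names, file path, and partition column are all hypothetical):
-- load the new records from a file into a staging table
LOAD DATA LOCAL INPATH '/tmp/new_records.txt' INTO TABLE staging_table;
-- then write them into one partition of the pre-partitioned main table
INSERT OVERWRITE TABLE main_table PARTITION (load_date = '2020-01-01')
SELECT col1, col2
FROM staging_table;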
Enter the following command to insert data into the testlog table with some condition:
INSERT INTO TABLE testlog SELECT * FROM table1 WHERE some condition;
I think in such scenarios you should be using HBase, which facilitates this kind of insertion, but it does not provide any SQL-like query language. You need to use the Java API of HBase, like the put method, to do this kind of insertion. Moreover, HBase is a column-oriented NoSQL database.
You can still insert into a complex type in Hive - it works
(here id is int and colleagues is array<string>):
insert into emp (id, colleagues) select 11, array('Alex','Jian') from (select '1') t;
You can add values to specific columns as well; just specify the column names for which you would like to add the corresponding values:
Insert into Table (Col1, Col2, Col4, Col5, Col7) Values ('Val1','Val2','Val4','Val5','Val7');
Make sure the columns you skip don't have a NOT NULL constraint.
There are a few properties to set to make a Hive table support ACID and to insert values into it as in SQL.
Conditions to create an ACID table in Hive:
The table should be stored as an ORC file. Only the ORC format can support ACID properties for now.
The table must be bucketed.
Properties to set to create ACID table:
set hive.support.concurrency =true;
set hive.enforce.bucketing =true;
set hive.exec.dynamic.partition.mode = nonstrict;
set hive.compactor.initiator.on = true;
set hive.compactor.worker.threads= 1;
set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
set the property hive.in.test to true in hive-site.xml
After setting all these properties, the table should be created with the table property 'transactional'='true'. The table should be bucketed and stored as ORC:
CREATE TABLE table_name (col1 int, col2 string, col3 int)
CLUSTERED BY (col1) INTO 4 BUCKETS
STORED AS orc TBLPROPERTIES ('transactional'='true');
Now it's possible to insert values into the table with a SQL-like query:
INSERT INTO TABLE table_name VALUES (1,'a',100),(2,'b',200),(3,'c',300);
Yes, we can use an INSERT query in Hive.
hive> create table test (id int, name string);
INSERT...VALUES is available starting in Hive version 0.14.
hive> insert into table test values (1,'mytest');
This is going to work for the insert; we have to use the VALUES keyword.
Note: User cannot insert data into a complex datatype column (array, map, struct, union) using the INSERT INTO...VALUES clause.