Can we DELETE a whole table in Hive's latest version?

There is a table "student" created in Hive 0.14. I want to delete that table. Can I use the DELETE command for that?

DELETE operates on rows (and, as of Hive 0.14, only on ACID tables), not on tables. To remove the table itself, use:
DROP TABLE table_name_goes_here;
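Applied to the table from the question, a minimal sketch (assuming the table is still named student):

```sql
-- Removes the table definition and, for a managed table, its data as well;
-- IF EXISTS avoids an error if the table was already dropped.
DROP TABLE IF EXISTS student;
```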

Related

Turn a non-Kudu table into a Kudu table in Impala

I am having a problem with an Impala update statement. When I used the code below:
update john_estares_db.tempdbhue set QU=concat(account_id,"Q",quarter(mrs_change_date)," ",year(mrs_change_date));
it returns the error message:
AnalysisException: Impala does not support modifying a non-Kudu table: john_estares_db.tempdbhue
I would like to know whether I can change my non-Kudu table into a Kudu table, or whether there is an alternative to the update statement for non-Kudu tables in Impala. TIA
Apache Kudu is a data store (think of it as an alternative to HDFS/S3, but one that stores only structured data) which allows updates based on a primary key. It has good integration with Impala: a Kudu table in Impala is a way to query data stored in Kudu.
In short, if you do not already have Kudu installed and set up, you cannot create a Kudu table in Impala.
If you do have Kudu installed and set up, you still cannot simply convert an existing table into a Kudu table. You have to create a new Kudu table with a similar structure and one or more primary-key columns (Kudu requires a primary key for every table), then insert data into it from the old non-Kudu table using insert into .. select * from ....
What is the type of the table john_estares_db.tempdbhue?
For Hive and other non-Kudu table types, update and upsert are not supported.
You can check your table type with:
show create table john_estares_db.tempdbhue;
If you have Kudu installed, you can create a Kudu table and move your data into it; then your update code will work.
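The migration described above can be sketched as follows. This is a hedged example with hypothetical key and partitioning choices (account_id as primary key, four hash partitions); adapt it to the real schema, and note that Kudu and Impala's Kudu integration must already be set up:

```sql
-- Create a Kudu-backed copy of the table; Kudu requires a primary key.
CREATE TABLE john_estares_db.tempdbhue_kudu
  PRIMARY KEY (account_id)
  PARTITION BY HASH (account_id) PARTITIONS 4
  STORED AS KUDU
AS SELECT * FROM john_estares_db.tempdbhue;

-- UPDATE now works because the table is backed by Kudu.
-- (quarter() and year() return integers, so they are cast for concat.)
UPDATE john_estares_db.tempdbhue_kudu
SET qu = concat(account_id, 'Q',
                cast(quarter(mrs_change_date) AS STRING), ' ',
                cast(year(mrs_change_date) AS STRING));
```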

What is the use of PURGE in DROP statement of Hive?

Hi, please describe the difference between the two in Hive, with an example.
DROP TABLE [IF EXISTS] table_name [PURGE];
If you don't use PURGE, the table goes to a Trash directory, from where it can be recovered after being dropped. But if you do use PURGE, the table won't go to the Trash directory, so it can't be recovered.
Regards !!
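The two variants side by side (a minimal sketch; the table name is hypothetical):

```sql
-- Without PURGE: the table's data is moved to the HDFS Trash directory
-- (when Trash is enabled) and can still be restored from there.
DROP TABLE IF EXISTS sales_staging;

-- With PURGE: the data is deleted immediately, skipping Trash,
-- so it cannot be recovered.
DROP TABLE IF EXISTS sales_staging PURGE;
```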

How to add column to hive table which is already partitioned without using CASCADE?

The CASCADE option is only available in recent versions. What is the approach for adding a column to a Hive table that is already partitioned?
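The thread gives no answer, but one workaround commonly used on versions without CASCADE is to add the column at the table level and then repeat the change for each existing partition (a sketch with hypothetical table, column, and partition names):

```sql
-- Add the column to the table-level schema; new partitions pick it up.
ALTER TABLE logs ADD COLUMNS (client_ip STRING);

-- Propagate the change to each pre-existing partition explicitly,
-- which is what CASCADE would otherwise do in a single statement.
ALTER TABLE logs PARTITION (dt='2015-01-01') ADD COLUMNS (client_ip STRING);
ALTER TABLE logs PARTITION (dt='2015-01-02') ADD COLUMNS (client_ip STRING);
```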

List Hive table properties to create another table

I have a table already created in Hive. Is there a way to print the table schema to a terminal so I can pass it to a CREATE TABLE on another Hive server?
Have you tried the SHOW CREATE TABLE <tablename> command? I think it should give you the CREATE DDL you are looking for.
This link provides some background on when this was implemented.
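A sketch of the suggested workflow (the table name is hypothetical): run the command on the source server, then replay its output on the target.

```sql
-- On the source Hive server: prints the complete CREATE TABLE statement,
-- including SerDe, storage format, and location.
SHOW CREATE TABLE student;

-- Copy the emitted DDL and execute it on the other Hive server
-- (adjusting the LOCATION clause if paths differ between clusters).
```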

How to update partition metadata in Hive when partition data is manually deleted from HDFS

What is the way to automatically update the metadata of Hive partitioned tables?
If new partition data was added to HDFS (without executing an alter table add partition command), we can sync up the metadata by executing the command msck repair.
What is to be done if a lot of partitioned data was deleted from HDFS (without executing an alter table drop partition command)?
What is the way to sync up the Hive metadata?
EDIT: Starting with Hive 3.0.0, MSCK can discover new partitions or remove missing partitions (or both) using the following syntax:
MSCK [REPAIR] TABLE table_name [ADD/DROP/SYNC PARTITIONS]
This was implemented in HIVE-17824.
As correctly stated by HakkiBuyukcengiz, MSCK REPAIR doesn't remove partitions if the corresponding folder on HDFS was manually deleted; it only adds partitions if new folders are created.
Extract from the official documentation:
In other words, it will add any partitions that exist on HDFS but not in metastore to the metastore.
This is what I usually do in the presence of external tables if multiple partition folders have been manually deleted on HDFS and I want to quickly refresh the partitions:
1. Drop the table (DROP TABLE table_name); dropping an external table does not delete the underlying partition files.
2. Recreate the table (CREATE EXTERNAL TABLE table_name ...).
3. Repair it (MSCK REPAIR TABLE table_name).
Depending on the number of partitions this can take a long time. The other solution is to use ALTER TABLE DROP PARTITION (...) for each deleted partition folder but this can be tedious if multiple partitions were deleted.
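The per-partition alternative mentioned above can be sketched like this (table and partition values are hypothetical):

```sql
-- Remove each manually-deleted partition from the metastore;
-- IF EXISTS avoids an error if the metastore entry is already gone.
ALTER TABLE my_table DROP IF EXISTS PARTITION (dt='2018-06-01');
ALTER TABLE my_table DROP IF EXISTS PARTITION (dt='2018-06-02');
```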
Try using
MSCK REPAIR TABLE <tablename>;
Ensure the table is set to external, drop all partitions then run the table repair:
alter table mytable_name set TBLPROPERTIES('EXTERNAL'='TRUE')
alter table mytable_name drop if exists partition (`mypart_name` <> 'null');
msck repair table mytable_name;
If msck repair throws an error, then run hive from the terminal as:
hive --hiveconf hive.msck.path.validation=ignore
or set hive.msck.path.validation=ignore;