How can I recover partitions in an easy fashion? Here is the scenario:
Have 'n' partitions on existing external table 't'
Dropped table 't'
Recreated table 't' // Note: the same table, but excluding some columns
How can I recover the 'n' partitions that existed for table 't' in step #1?
I can manually alter the table to add the 'n' partitions by writing a script, but that's very tedious. Is there something built-in to recover these partitions?
When the partition directories still exist in HDFS, simply run this command:
MSCK REPAIR TABLE table_name;
It adds the partition definitions to the metastore based on what exists in the table directory.
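For the scenario above, a minimal sketch (the schema, partition column, and path are hypothetical):
-- Recreate the external table over the existing data
CREATE EXTERNAL TABLE t (id INT, name STRING)
PARTITIONED BY (dt STRING)
LOCATION '/data/t';
-- Re-register every partition directory found under /data/t
MSCK REPAIR TABLE t;
-- Verify the recovered partitions
SHOW PARTITIONS t;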
Metadata is not saved in the trash and is removed permanently; you will not be able to restore the metadata of dropped tables, partitions, etc. Reference: http://www.cloudera.com/documentation/archive/cdh/4-x/4-7-1/CDH4-Installation-Guide/cdh4ig_hive_trash.html
Impala external table partitions still show up in stats with row count 0 after deleting the data in HDFS, altering (e.g. ALTER TABLE table RECOVER PARTITIONS), refreshing (REFRESH table), and invalidating the metadata.
Dropping the partitions one by one works, but there are tens of partitions to remove, and doing it that way would be quite tedious.
Dropping and recreating the table would also be an option but that way all the statistics would be dropped together with the table.
Are there any other options in Impala to get this done?
Found a workaround through Hive.
By issuing MSCK REPAIR TABLE tablename SYNC PARTITIONS and then refreshing the table in Impala, the empty partitions disappear.
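For example (tablename is a placeholder):
-- In Hive: add new partitions and drop the ones whose HDFS folders are gone
MSCK REPAIR TABLE tablename SYNC PARTITIONS;
-- Then in impala-shell: pick up the metadata change
REFRESH tablename;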
We have a data corruption issue on our Hadoop cluster. We have a managed table in Hive which contains three years of data, partitioned by year.
The three queries below run fine without any issue:
select count(*) from tkt_hist where yr=2015;
select count(*) from tkt_hist where yr=2016;
select count(*) from tkt_hist where yr=2017;
However, when we try to group by year, the error below is shown:
Error while compiling statement: FAILED: SemanticException java.io.FileNotFoundException: File hdfs://ASIACELLHDP/apps/hive/warehouse/gprod1t_base.db/toll_tkt_hist_old/yr=2015/mn=01/dy=01 does not exist. [ERROR_STATUS]
Even select will not work when we specify a year other than 2015.
-- this works fine
Select * from tkt_hist where yr=2015 limit 10;
-- the query below throws the same error mentioned above
Select * from tkt_hist where yr=2016;
Try increasing the Java heap space (increase reducer memory if that doesn't work).
For example:
set mapreduce.map.java.opts=-Xmx15360m;
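If the mapper heap alone doesn't help, the reducer heap and container sizes can be raised the same way (the values below are illustrative, not tuned recommendations):
set mapreduce.reduce.java.opts=-Xmx15360m;
-- Container sizes; keep them above the corresponding -Xmx values
set mapreduce.map.memory.mb=18432;
set mapreduce.reduce.memory.mb=18432;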
You will have to drop the partitions manually, because msck repair table only adds partitions and doesn't remove existing ones.
You will have to iterate through the corrupt partitions list. For internal tables, you'll have to be specific, as dropping a partition deletes the underlying physical files.
ALTER TABLE tkt_hist DROP IF EXISTS PARTITION(yr=2015, mn='01', dy='01');
You will need to do this for each partition. You could put it in a bash script and execute it with hive -e or beeline -e, which accept a quoted query string; a sketch follows.
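A minimal bash sketch (the partition list file and its format are hypothetical):
#!/bin/bash
# corrupt_partitions.txt holds one "yr mn dy" triple per line, e.g. "2015 01 01"
while read yr mn dy; do
  hive -e "ALTER TABLE tkt_hist DROP IF EXISTS PARTITION(yr=${yr}, mn='${mn}', dy='${dy}');"
done < corrupt_partitions.txt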
If you are using an external table, then it's much easier to remove all partitions and then repair the table.
ALTER TABLE tkt_hist DROP IF EXISTS PARTITION(yr<>'', mn<>'', dy<>'');
Make sure to repair the table as the user owning the Hive DB as well as the HDFS path.
MSCK REPAIR TABLE tkt_hist;
This should add the partition folders currently present in the table path without adding the invalid partitions.
Note: if your user isn't the owner of the directory, ensure you have write permissions and do your work in the Hive CLI, as Beeline requires full ownership rights to work.
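A quick way to check ownership of the table path first (the warehouse path is illustrative):
hadoop fs -ls -d /apps/hive/warehouse/gprod1t_base.db/tkt_hist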
When creating Hive tables, can I point LOCATION to a place in HDFS where data is already present? Do I still need to load the data, or will it be available in Hive directly?
You can specify any location while creating the table and the data will be accessible. If the table is partitioned, then use ALTER TABLE ... ADD PARTITION, MSCK REPAIR TABLE table_name, or the Amazon version ALTER TABLE table_name RECOVER PARTITIONS; this will add any partitions that exist on HDFS but not in the metastore to the metastore. See the docs here: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RecoverPartitions(MSCKREPAIRTABLE)
If the table is not partitioned, you can simply specify the location containing the data when creating the table, or change the table location later using ALTER TABLE ... SET LOCATION.
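A minimal sketch of both cases (table names, columns, and paths are hypothetical):
-- Unpartitioned: point the table at existing data; no LOAD DATA needed
CREATE EXTERNAL TABLE logs (line STRING)
LOCATION '/data/logs';
-- Partitioned layout such as /data/events/dt=2020-01-01/: create, then register
CREATE EXTERNAL TABLE events (id INT)
PARTITIONED BY (dt STRING)
LOCATION '/data/events';
MSCK REPAIR TABLE events;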
When I use DROP TABLE IF EXISTS <Table Name> in Hive, it does not free up the disk space. The files are created as 0000_n.bz2 and they are still on disk.
I have two questions here:
1) Will these files keep on growing for each and every insert?
2) Is there any DROP equivalent to remove the files as well on the disk?
A couple of things you can do:
Check if the table is an external table; in that case you need to delete the files manually on HDFS, as dropping the table won't remove them:
hadoop fs -rm /HDFS_location/filename
Second, check that you are in the right database. You need to issue a use database command before dropping the tables; the database should be the same as the one in which the tables were created.
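For example (my_db and my_table are placeholders):
use my_db;
drop table if exists my_table;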
There are two types of tables in Hive.
Hive managed table: if you drop a Hive managed table, the data in HDFS is automatically deleted.
External table: if you drop an external table, Hive doesn't delete the underlying data.
I believe yours is an external table.
DROP TABLE IF EXISTS table_name PURGE;
This command removes the data files immediately, bypassing the trash folder, so they cannot be recovered after the table is dropped.
What is the way to automatically update the metadata of Hive partitioned tables?
If new partition data was added to HDFS (without executing an alter table add partition command), then we can sync up the metadata by executing the command msck repair.
What should be done if a lot of partition data was deleted from HDFS (without executing an alter table drop partition command)?
What is the way to sync up the Hive metadata?
EDIT: Starting with Hive 3.0.0, MSCK can discover new partitions or remove missing partitions (or both) using the following syntax:
MSCK [REPAIR] TABLE table_name [ADD/DROP/SYNC PARTITIONS]
This was implemented in HIVE-17824.
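For the deletion scenario described in the question, for example:
MSCK REPAIR TABLE table_name DROP PARTITIONS;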
As correctly stated by HakkiBuyukcengiz, MSCK REPAIR doesn't remove partitions if the corresponding folder on HDFS was manually deleted; it only adds partitions when new folders are created.
Extract from the official documentation:
In other words, it will add any partitions that exist on HDFS but not in metastore to the metastore.
This is what I usually do with external tables when multiple partition folders have been manually deleted on HDFS and I want to quickly refresh the partitions:
Drop the table (DROP TABLE table_name)
(dropping an external table does not delete the underlying partition files)
Recreate the table (CREATE EXTERNAL TABLE table_name ...)
Repair it (MSCK REPAIR TABLE table_name)
Depending on the number of partitions, this can take a long time. The other solution is to use ALTER TABLE ... DROP PARTITION (...) for each deleted partition folder, but this can be tedious when multiple partitions were deleted.
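The drop/recreate/repair sequence, as a minimal sketch (schema, partition column, and path are hypothetical):
-- Drops metadata only; the files of this external table remain on HDFS
DROP TABLE table_name;
-- Recreate the table over the same location
CREATE EXTERNAL TABLE table_name (id INT)
PARTITIONED BY (dt STRING)
LOCATION '/data/table_name';
-- Re-register only the partition folders that still exist
MSCK REPAIR TABLE table_name;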
Try using
MSCK REPAIR TABLE <tablename>;
Ensure the table is set to external, drop all partitions, then run the table repair:
alter table mytable_name set TBLPROPERTIES('EXTERNAL'='TRUE')
alter table mytable_name drop if exists partition (`mypart_name` <> 'null');
msck repair table mytable_name;
If msck repair throws an error, then run hive from the terminal as:
hive --hiveconf hive.msck.path.validation=ignore
or set hive.msck.path.validation=ignore;