How to Reload Partition Data into ORC - hive

Are there any best practices for loading data into a partitioned ORC table? Say I load 120 GB of data into an ORC table partitioned on 2 columns. If I want to reload the data for a particular partition, how do I perform the reload? How do I delete a partition: is it ALTER TABLE ... DROP PARTITION (partition value)? Even after dropping the partition, I still see the partition's ORC files in the Hive warehouse folder. How do I clean up unused partition files? And if I want to load only a single partition's worth of data into the deleted partition, how do I do it and what is the best way?
Can ORC with partitions plus buckets give better performance than ORC with partitions only, or ORC with no partitions?

Dropping a table or a partition only removes the metadata; for an external table the underlying data files are not deleted. For a managed table or partition you can use TRUNCATE to delete the data; for an external table the leftover files have to be removed from the warehouse directory in HDFS directly.
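A hedged sketch of the reload cycle the question asks about, assuming a partitioned ORC table sales_orc fed from a staging table sales_stage (the table names, partition columns and column list are placeholders):
-- Drop the partition's metadata; for an external table the ORC files stay in
-- the warehouse folder and have to be removed from HDFS separately.
ALTER TABLE sales_orc DROP IF EXISTS PARTITION (country='US', dt='2017-12-01');
-- Reload just that partition. INSERT OVERWRITE replaces whatever files the
-- partition currently holds, so for a managed table the DROP above is optional.
INSERT OVERWRITE TABLE sales_orc PARTITION (country='US', dt='2017-12-01')
SELECT col1, col2, col3
FROM sales_stage
WHERE country = 'US' AND dt = '2017-12-01';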

Related

hive: what happens to existing data when I alter snappy compression

Hive 3.x
Initially I create an ORC table with no data. I put a few million rows in and then alter the table to apply Snappy compression.
Now, when new data arrives, Snappy is applied, but what happens to the existing data?
I checked the data with orcfiledump and see no difference before and after altering the table. There is also no difference in data size.
Should I create a dummy table and insert my existing data back into it to apply Snappy?
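A hedged sketch of the steps the question describes, assuming a table named my_orc (a placeholder): the property change only affects files written after the ALTER, which is why orcfiledump shows no difference for the existing files, so data that is already stored has to be rewritten to pick up the new codec.
-- Only writes made after this point use Snappy; existing ORC files keep the
-- codec they were written with (ZLIB by default).
ALTER TABLE my_orc SET TBLPROPERTIES ('orc.compress'='SNAPPY');
-- One common way to recompress the existing data is to rewrite the table
-- (or each partition) with its own contents:
INSERT OVERWRITE TABLE my_orc SELECT * FROM my_orc;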

Redshift Spectrum: Automatically partition tables by date/folder

We currently generate a daily CSV export that we upload to an S3 bucket, into the following structure:
<report-name>
|-- reportDate-<date-stamp>
    |-- part0.csv.gz
    |-- part1.csv.gz
We want to be able to run reports partitioned by daily export.
According to this page, you can partition data in Redshift Spectrum by a key which is based on the source S3 folder where your Spectrum table sources its data. However, from the example, it looks like you need an ALTER statement for each partition:
alter table spectrum.sales_part
add partition(saledate='2008-01-01')
location 's3://bucket/tickit/spectrum/sales_partition/saledate=2008-01/';
alter table spectrum.sales_part
add partition(saledate='2008-02-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02/';
Is there any way to set the table up so that data is automatically partitioned by the folder it comes from, or do we need a daily job to ALTER the table to add that day's partition?
Solution 1:
A maximum of 20,000 partitions can be created per table, so you can run a one-time script that adds partitions (up to that 20k limit) for all future S3 partition folders.
For example, even if the folder s3://bucket/tickit/spectrum/sales_partition/saledate=2017-12/ doesn't exist yet, you can still add a partition for it:
alter table spectrum.sales_part
add partition(saledate='2017-12-01')
location 's3://bucket/tickit/spectrum/sales_partition/saledate=2017-12/';
Solution 2:
https://aws.amazon.com/blogs/big-data/data-lake-ingestion-automatically-partition-hive-external-tables-with-aws/
Another, more precise way to go about it:
Create a Lambda job that is triggered on the ObjectCreated notification from the S3 bucket, and run the SQL to add the partition:
ALTER TABLE tblname ADD IF NOT EXISTS PARTITION (partition clause) LOCATION 's3://mybucket/location';
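For instance, reusing the sales_part table from the example above, the statement the Lambda runs might look like:
ALTER TABLE spectrum.sales_part
ADD IF NOT EXISTS PARTITION (saledate='2008-01-01')
LOCATION 's3://bucket/tickit/spectrum/sales_partition/saledate=2008-01/';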

How to update a Hive table's data after copying ORC files into that table's folder with hdfs

After copying ORC files into a table's folder with an hdfs copy, how do I update that Hive table so the new data shows up when querying with Hive?
Best Regards.
If the table is not partitioned, then once the files are in HDFS in the folder specified by the LOCATION clause, the data should be available for querying.
If the table is partitioned, then you first need to run an ADD PARTITION statement.
As mentioned in the answer above by belostoky, if the table is not partitioned you can directly query your table and see the updated data.
But if your table is partitioned, you first need to add the partitions to the Hive table, which you can do with an ALTER TABLE statement as shown below:
ALTER TABLE table1
ADD PARTITION (dt='<date>')
LOCATION '<hdfs file path>';
Alternatively, if the partition directories already exist under the table's location, you can register all of them in the metastore at once by running
MSCK REPAIR TABLE table1;
instead of adding the partitions one by one.
Once that's done, you can query your data.
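Putting both answers together, a hedged end-to-end example (the table name, partition value and path are placeholders):
-- Register the directory the ORC files were copied into as a partition...
ALTER TABLE table1
ADD IF NOT EXISTS PARTITION (dt='2018-01-01')
LOCATION '/user/hive/warehouse/table1/dt=2018-01-01';
-- ...after which the copied data is visible to queries.
SELECT COUNT(*) FROM table1 WHERE dt = '2018-01-01';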

Understanding Partitioning in Hive

I am trying to learn Hive and, while reading The Hadoop Definitive Guide, I ran into some confusion.
As per the text, partitioning in Hive is done by creating sub-directories, one per value of the partitioning column. But since loading data in Hive simply means copying files, and no validation checks are done at load time (only at query time), does Hive check the data for partitioning? Or how does it determine which file should go to which directory?
Or how does it determine which file should go to which directory?
It doesn't; you have to set the value of the destination partition in the LOAD DATA command. When you perform a LOAD operation into a partitioned table, you have to specify the specific partition (the directory) into which you are going to load the data by means of the PARTITION argument. According to the documentation:
The target being loaded to can be a table or a partition. If the table is partitioned, then one must specify a specific partition of the table by specifying values for all of the partitioning columns.
For instance, in this example:
hive> LOAD DATA LOCAL INPATH './examples/files/kv2.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-15');
hive> LOAD DATA LOCAL INPATH './examples/files/kv3.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-08');
The two files will be stored in the invites/ds=2008-08-15 and invites/ds=2008-08-08 folders.

loading data to hive dynamic partitioned tables

I have created a Hive table with dynamic partitioning on a column. Is there a way to load the data directly from files using the "LOAD DATA" statement? Or do we have to depend on creating a non-partitioned intermediate table, loading the file data into it, and then inserting from that intermediate table into the partitioned table, as described in Hive loading in partitioned table?
No, the LOAD DATA command ONLY copies the files to the destination directory. It doesn't read the records of the input file, so it CANNOT do partitioning based on record values.
If your input data is already split into multiple files by partition, you could copy the files directly to the table location in HDFS, under partition directories that you create manually (or just point to their current location in the case of an EXTERNAL table), and use the following ALTER command to ADD the partition. This way you can skip the LOAD DATA statement altogether.
ALTER TABLE <table-name>
ADD PARTITION (<...>)
There is no other way: if we need to insert directly, we have to specify the partitions manually.
For dynamic partitioning we need a staging table and then insert from there, as sketched below.
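A hedged sketch of that staging-table route, assuming a flat staging table events_stage and a target table events partitioned by dt (the names and column list are placeholders):
-- Allow dynamic partition inserts; nonstrict mode lets every partition column
-- be derived from the data rather than specified statically.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
-- Load the raw files into the non-partitioned staging table first...
LOAD DATA INPATH '/data/events/' INTO TABLE events_stage;
-- ...then let Hive route each row to its partition based on the dt value,
-- which must come last in the SELECT list.
INSERT OVERWRITE TABLE events PARTITION (dt)
SELECT col1, col2, dt
FROM events_stage;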