How to create/copy data to partitions in hive manually - hive

I am working on a Hive solution in which I need to append some values to high-volume files. Instead of appending to the files directly, I am trying a map-reduce approach.
The approach is below
Table creation:
create external table demo_project_data(data string) PARTITIONED BY (business_date string, src_sys_file_nm string, prd_typ_cd string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n'
LOCATION '/user/hive/warehouse/demo/project/data';
hadoop fs -mkdir -p /user/hive/warehouse/demo/project/data/business_date='20180707'/src_sys_file_nm='a_b_c_20180707_1.dat.gz'/prd_typ_cd='abcd'
echo "ALTER TABLE demo_project_data ADD IF NOT EXISTS PARTITION(business_date='20180707',src_sys_file_nm='a ch_ach_fotp_20180707_1.dat.gz',prd_typ_cd='ach')
LOCATION '/user/hive/warehouse/demo/project/data/business_date='20180707'/src_sys_file_nm='a_b_c_20180707_1.dat.gz'/prd_typ_cd='abcd';"|hive
hadoop fs -cp /apps/tdi/data/a_b_c_20180707_1.dat.gz /user/hive/warehouse/demo/project/data/business_date='20180707'/src_sys_file_nm='a_b_c_20180707_1.dat.gz'/prd_typ_cd='abcd'
echo "INSERT OVERWRITE DIRECTORY '/user/20180707' select *,'~karthick~kb~demo' from demo_project_data where src_sys_file_nm='a_b_c_20180707_1.dat.gz' and business_date='20180707' and prd_typ_cd='abcd';"|hive
There is data in the file, but I don't see any results from the above query. The files are properly copied under the correct location.
What am I doing wrong? The query itself has no issues.
Also, I will be looping over multiple dates. I would like to know if this is the right way to do it.

You can use the command below to sync the partitions on HDFS with the Hive metastore so your query returns results:
MSCK REPAIR TABLE <tablename>;
For reference, see MSCK REPAIR TABLE:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RecoverPartitions(MSCKREPAIRTABLE)
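For the table in the question, a minimal invocation (mirroring the echo-to-hive style used above) would be:
echo "MSCK REPAIR TABLE demo_project_data;" | hive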

Related

Load data in hive table stored in s3 whose location is stored in another static s3 location

I need to load S3 data into a Hive table. The S3 location is dynamic and is stored in another, static S3 location.
The dynamic S3 location that I want to load into the Hive table has the path format
s3://s3BucketName/some-path/yyyy-MM-dd
and the static location contains data in the format
{"datasetDate": "datePublished", "s3Location": "s3://s3BucketName/some-path/yyyy-MM-dd"}
Is there a way to read this data in hive? I searched about this a lot but could not find anything.
You can read the JSON data from your static location file, parse the s3Location field, and pass it as a parameter to your ADD PARTITION clause.
One possible way to read the JSON is with Hive itself; you can use other means for the same purpose.
Example using Hive:
create table data_location(location_info string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION 's3://s3BucketName/some-static-location-path/';
Then get the location in a shell script and pass it as a parameter to the ADD PARTITION statement.
For example, suppose you have a table named target_table partitioned by datePublished. You can add partitions like this:
#!/bin/bash
data_location=$(hive -e "set hive.cli.print.header=false; select get_json_object(location_info,'$.s3Location') from data_location")
#get partition name
partition=$(basename ${data_location})
#Create partition in your target table:
hive -e "ALTER TABLE TARGET_TABLE ADD IF NOT EXISTS PARTITION (datePublished='${partition}') LOCATION '${data_location}'"
If you do not want a partitioned table, then you can use
ALTER TABLE ... SET LOCATION instead of adding a partition:
hive -e "ALTER TABLE TARGET_TABLE SET LOCATION '${data_location}'"
If only the last subfolder name (the yyyy-MM-dd date) is dynamic and the base directory is always the same, like s3://s3BucketName/some-path/, you can create the table once with location s3://s3BucketName/some-path/ and issue a RECOVER PARTITIONS statement. In this case you do not need to read the content of the file with the location specification at all. Just schedule RECOVER PARTITIONS to attach new partitions on a daily basis.
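A minimal sketch, assuming the table is named target_table, is partitioned by datePublished, and was created with LOCATION 's3://s3BucketName/some-path/'; note that both commands expect the partition folders to be named in key=value form (e.g. datePublished=2020-01-01):
# Amazon EMR Hive syntax
hive -e "ALTER TABLE target_table RECOVER PARTITIONS"
# stock Hive equivalent
hive -e "MSCK REPAIR TABLE target_table"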

External Table in Hive - Location

The below table returns no data while running a select statement
CREATE EXTERNAL TABLE foo (
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\073'
LINES TERMINATED BY '\n'
LOCATION '/user/data/CSV/2016/1/27/*/part-*';
I need my Hive table to point to a dynamic folder, so that when a MapReduce job puts a part file in a folder, Hive loads it into the table.
Is there any way the location be made dynamic like
/user/data/CSV/*/*/*/*/part-*
or just /user/data/CSV/* would do fine ?
(The same code works fine when the table is created as an internal table and loaded with the file path, hence there are no issues due to formatting.)
First of all, your table definition is missing columns. Second, an external table's location always points to a folder, not to particular files. Hive will consider all files in the folder to be data for the table.
If you have data that is generated, e.g. on a daily basis, by some external process, you should consider partitioning your table by date. Then you add a new partition to the table when the data becomes available, as sketched below.
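A minimal sketch of that idea (the column names, the partition column dt, and the per-day folder job1 are illustrative, assuming each day's part files land under a single folder):
create external table foo_by_day (
col1 string,
col2 string
)
partitioned by (dt string)
row format delimited
fields terminated by '\073'
lines terminated by '\n';

alter table foo_by_day add if not exists partition (dt='2016-01-27')
location '/user/data/CSV/2016/1/27/job1';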
Hive does not iterate through multiple folders.
Hence, for the above scenario,
I ran a command line that iterates through these multiple folders, cats (prints to the console) all the part files, and then puts the result into the desired location (the one Hive points to):
hadoop fs -cat /user/data/CSV/*/*/*/*/part-* | hadoop fs -put - <destination folder>
This line
LOCATION '/user/data/CSV/2016/1/27/*/part-*';
does not look correct; I don't think the table can be created from multiple locations. Have you tried just importing from a single location to confirm this?
It could also be that the delimiter you're using is not correct. If you are using a CSV file to import your data, try delimiting by ','.
You can use an ALTER TABLE statement to change the locations. In the example below, partitions are based on dates, and data is stored in time-dependent file locations. If I want to search many days, I have to add an ALTER TABLE statement for each location. This idea may extend to your situation quite well. You can create a script, using some other technology such as Python, to generate the statements below.
CREATE EXTERNAL TABLE foo (
)
PARTITIONED BY (`date` string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\073'
LINES TERMINATED BY '\n'
;
alter table foo add partition (`date`='20160201') location '/user/data/CSV/20160201/data';
alter table foo add partition (`date`='20160202') location '/user/data/CSV/20160202/data';
alter table foo add partition (`date`='20160203') location '/user/data/CSV/20160203/data';
alter table foo add partition (`date`='20160204') location '/user/data/CSV/20160204/data';
You can use as many ADD and DROP statements as you need to define your locations. Then your table can find data held in many locations in HDFS rather than having all your files in one location.
You may also be able to leverage a
CREATE TABLE ... LIKE
statement to create a schema like the one you have in another table, and then alter the table to point at the files you want.
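For example, a sketch under those assumptions (the table name foo_2016 and the paths are illustrative):
create external table foo_2016 like foo location '/user/data/CSV/2016/1/27';
-- or repoint an existing table later:
alter table foo_2016 set location '/user/data/CSV/2016/1/27';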
I know this isn't exactly what you want and is more of a workaround. Good luck!

hive 0.13 msck repair table only lists partitions not in metastore

I'm trying to use the Hive (0.13) MSCK REPAIR TABLE command to recover partitions, but it only lists the partitions not added to the metastore instead of adding them to the metastore as well.
Here's the output of the command:
partitions not in metastore externalexample:CreatedAt=26 04%3A50%3A56 UTC 2014/profileLocation="Chicago"
Here's how I'm creating the external table:
CREATE EXTERNAL TABLE IF NOT EXISTS ExternalExample(
tweetId BIGINT, username STRING,
txt STRING, CreatedAt STRING,
profileLocation STRING,
favc BIGINT,retweet STRING,retcount BIGINT,followerscount BIGINT)
COMMENT 'This is the Twitter streaming data'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
location '/user/hue/exttable/';
Am I missing something?
I had a similar issue with the MSCK REPAIR TABLE listing the partitions that were not in the metastore but not actually adding them (and no error message).
I tried manually adding the partition with the ALTER TABLE ADD PARTITION command, and this gave me an error message, leading me to the root cause which was that the HDFS folder containing the 'missing' partition had been set up with incorrect permissions.
Once the permissions issue was resolved, then the MSCK REPAIR TABLE command worked correctly.
If you encounter this issue, it may be worthwhile to try adding it manually with the ALTER TABLE ADD PARTITION command. It may produce a useful error message that would help you determine the root cause of the problem.
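For instance, a sketch for the table in the question (the partition values and path are illustrative, assuming the table is partitioned by CreatedAt and profileLocation):
ALTER TABLE ExternalExample ADD IF NOT EXISTS
PARTITION (CreatedAt='2014-04-26', profileLocation='Chicago')
LOCATION '/user/hue/exttable/CreatedAt=2014-04-26/profileLocation=Chicago';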
Please make sure that the names of the partitions defined in your table definition match the names of the partitions on HDFS.
For example, in your table creation example, I see that you haven't defined any partitions at all.
I think you want to do something like this (note the use of PARTITIONED BY):
create external table ExternalExample(
tweetId BIGINT, username STRING,
txt STRING, favc BIGINT,
retweet STRING, retcount BIGINT, followerscount BIGINT)
PARTITIONED BY (CreatedAt STRING, profileLocation STRING)
COMMENT 'This is the Twitter streaming data'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
location '/user/hue/exttable/';
Then on hdfs you should have the following folder structure:
/user/hue/exttable/CreatedAt=<someString>/profileLocation=<someString>/your-data-file
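For instance, a sketch of laying out one such partition folder by hand (the values are illustrative):
hadoop fs -mkdir -p '/user/hue/exttable/CreatedAt=2014-04-26/profileLocation=Chicago'
hadoop fs -put your-data-file '/user/hue/exttable/CreatedAt=2014-04-26/profileLocation=Chicago/'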
The partition names for MSCK REPAIR TABLE ExternalTable should be in lowercase; only then will it add them to the Hive metastore. I faced a similar issue in Hive 1.2.1, where there was no support for ALTER TABLE ExternalTable RECOVER PARTITION, but after spending some time debugging I found that the partition names must be lowercase, i.e. /some_external_path/mypartition=01 is valid and /some_external_path/myPartition=01 is invalid.
Change your profileLocation to profilelocation or profile_location, test it, and it should work.
My question is here: Not able to recover partitions through alter table in Hive 1.2
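One way to apply that on the HDFS side is to rename the partition folders so the keys are lowercase (the paths below are illustrative):
hadoop fs -mv '/user/hue/exttable/CreatedAt=2014-04-26' '/user/hue/exttable/createdat=2014-04-26'
hadoop fs -mv '/user/hue/exttable/createdat=2014-04-26/profileLocation=Chicago' '/user/hue/exttable/createdat=2014-04-26/profilelocation=Chicago'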
Hive stores a list of partitions for each table in its metastore. If, however, new partitions are directly added to HDFS (manually by hadoop fs -put command), the metastore will not be aware of these partitions.
You need to add a partition with
ALTER TABLE ExternalExample ADD PARTITION ...
for every partition,
or, in short, you can run
MSCK REPAIR TABLE ExternalExample;
It will add any partitions that exist on HDFS but not in the metastore to the metastore.
Ref https://issues.apache.org/jira/browse/HIVE-874
1) You need to specify partitions.
2) Partition names must be all lowercase letters. See this: https://singhanuvrat.com/hive-partition-column-name-camelcase-bad-idea-b89796d4e741#.16d7uqfot
You might not be running as the hive user:
sudo -u hive hive -e "set hive.msck.path.validation=ignore; msck repair table T1"
set hive.msck.path.validation=ignore; (this is for tables with a large number of partitions.)
You are just missing the PARTITIONED BY (CreatedAt STRING, profileLocation STRING).

Hive External table-CSV File- Header row

Below is the Hive table I have created:
CREATE EXTERNAL TABLE Activity (
column1 type,
column2 type
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/exttable/';
In my HDFS location /exttable, I have a lot of CSV files, and each CSV file also contains the header row. When I run select queries, the result contains the header row as well.
Is there any way in Hive to ignore the header row or first line?
You can now skip the header row in Hive 0.13.0:
tblproperties ("skip.header.line.count"="1");
If you are using Hive version 0.13.0 or higher, you can specify "skip.header.line.count"="1" in your table properties to remove the header.
For detailed information on the patch see: https://issues.apache.org/jira/browse/HIVE-5795
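For an existing table such as the Activity table above, a minimal sketch of applying it after the fact would be:
ALTER TABLE Activity SET TBLPROPERTIES ("skip.header.line.count"="1");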
Let's say you want to load a CSV file like the one below, located at /home/test/que.csv:
1,TAP (PORTUGAL),AIRLINE
2,ANSA INTERNATIONAL,AUTO RENTAL
3,CARLTON HOTELS,HOTEL-MOTEL
Now, we need to create a location in HDFS that holds this data.
hadoop fs -put /home/test/que.csv /user/mcc
The next step is to create a table. There are two types of table (managed and external) to choose from; pick the one that fits your use case.
Example for an external table:
create external table industry_
(
MCC string ,
MCC_Name string,
MCC_Group string
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/mcc/'
tblproperties ("skip.header.line.count"="1");
Note: when the table is accessed via Spark SQL, the header row of the CSV is still shown as a data row.
Tested on Spark version 2.4.
There is not. However, you can pre-process your files to skip the first row before loading them into HDFS:
tail -n +2 withfirstrow.csv > withoutfirstrow.csv
Alternatively, you can build it into a WHERE clause in Hive to ignore the first row, as in the sketch below.
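A minimal sketch of the WHERE-clause approach, assuming the header row's first field contains the literal text 'column1' (a hypothetical header value taken from the table definition above):
SELECT * FROM Activity WHERE column1 <> 'column1';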
If your Hive version doesn't support tblproperties ("skip.header.line.count"="1"), you can use the Unix command below to drop the first line (the column header) before putting the file in HDFS.
sed -n '2,$p' File_with_header.csv > File_with_No_header.csv
To remove the header from the CSV file in place, use:
sed -i 1d filename.csv

hive create table filename 000000_0?

I'm currently creating an external table like this:
CREATE EXTERNAL TABLE site_datatype (
....
yada yada
....
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
LOCATION '/user/accounting/summary/2011-12-15/site_datatype.result';
But when I run the INSERT OVERWRITE TABLE ... SELECT, instead of creating a file called "site_datatype.result" with the contents in it, it creates a directory "site_datatype.result" with a file called "000000_0" in it (correct contents, though).
Is this supposed to work this way? And if so, how can I work around this (inside Hive) to get it done the way I need?
Thanks,
Mario
Hive operates at the directory level, so multiple reducers can quickly dump results into HDFS. If it were to operate at the file level, it would have to send it to a single reducer to consolidate into a single file, adding an unnecessary bottleneck.
If you absolutely need data from a Hive table in a single file, you can set the number of reducers to 1, then query your data and push it to a new table or directory (via Insert Overwrite).
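A minimal sketch of that approach (the output directory below is illustrative; forcing one reducer only has an effect when the query actually has a reduce stage, e.g. a GROUP BY or ORDER BY):
-- force a single reducer for jobs that have a reduce stage
set mapred.reduce.tasks=1;
-- write the result to a directory; the output file will still be named 000000_0
INSERT OVERWRITE DIRECTORY '/user/accounting/summary/2011-12-15/single_output'
SELECT * FROM site_datatype;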
Another option would be to get the table from HDFS (hadoop fs -get hive/warehouse/sampletable/ .) and then 'cat' all of the files back together.
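For example, a sketch of the fetch-and-concatenate route (local file names are illustrative); hadoop fs -getmerge, where available, combines both steps:
hadoop fs -get /user/hive/warehouse/sampletable/ .
cat sampletable/* > sampletable.result
# or in a single step:
hadoop fs -getmerge /user/hive/warehouse/sampletable/ sampletable.result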
