How to load data into a partitioned table automatically - Hive

I created an external, partitioned table as below:
CREATE EXTERNAL TABLE IF NOT EXISTS dividends (
  ymd STRING,
  dividend FLOAT
)
PARTITIONED BY (exchange STRING, symbol STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
I want to load the data in such a way that for each unique partition value a new partition is formed automatically and the data goes into it. Is there any way to do this?
Sample data below
NASDAQ,AMTD,2006-01-25,6.0
NASDAQ,AHGP,2009-11-09,0.44
NASDAQ,AHGP,2009-08-10,0.428
NASDAQ,AHGP,2009-05-11,0.415
NASDAQ,AHGP,2009-02-10,0.403
NASDAQ,AHGP,2008-11-07,0.39
NASDAQ,AHGP,2008-08-08,0.353
NASDAQ,AHGP,2008-05-09,0.288
NASDAQ,AHGP,2008-02-08,0.288
NASDAQ,AHGP,2007-11-07,0.265
NASDAQ,AHGP,2007-08-08,0.265
NASDAQ,AHGP,2007-05-09,0.25
NASDAQ,AHGP,2007-02-07,0.25
NASDAQ,AHGP,2006-11-07,0.215
NASDAQ,AHGP,2006-08-09,0.215
NASDAQ,ALEX,2009-11-03,0.315
NASDAQ,ALEX,2009-08-04,0.315
NASDAQ,ALEX,2009-05-12,0.315
NASDAQ,ALEX,2009-02-11,0.315
NASDAQ,ALEX,2008-11-04,0.315
NASDAQ,AFCE,2005-06-06,12.0
NASDAQ,ASRVP,2009-12-28,0.528
NASDAQ,ASRVP,2009-09-25,0.528
NASDAQ,ASRVP,2009-06-25,0.528
NASDAQ,ASRVP,2009-03-26,0.528
NASDAQ,ASRVP,2008-12-26,0.528
NASDAQ,ASRVP,2008-09-25,0.528
NASDAQ,ASRVP,2008-06-25,0.528

I was searching for this as well. These were my steps: create a staging table, load the CSV file into it, then create the partitioned table and populate it with a dynamic-partition insert.
CREATE EXTERNAL TABLE stocks (
  exchange STRING,
  symbol STRING,
  ymd STRING,
  price_open FLOAT,
  price_high FLOAT,
  price_low FLOAT,
  price_close FLOAT,
  volume INT,
  price_adj_close FLOAT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/hduser/stocks';
CREATE EXTERNAL TABLE IF NOT EXISTS dividends_stage (
exchange STRING,
symbol STRING,
ymd STRING,
dividend FLOAT )
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/hduser/div_stage';
hadoop fs -mv /user/hduser/dividends.csv /user/hduser/div_stage
CREATE EXTERNAL TABLE IF NOT EXISTS dividends (
ymd STRING,
dividend FLOAT )
PARTITIONED BY (exchange STRING, symbol STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ;
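Before the insert, enable dynamic partitioning (nonstrict mode allows all partition columns to be dynamic; without it Hive rejects an insert that has no static partition column):
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;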
INSERT OVERWRITE TABLE dividends PARTITION (exchange, symbol)
SELECT ymd, dividend, exchange, symbol FROM dividends_stage;
To verify which partition directory each row landed in, query the virtual columns:
SELECT INPUT__FILE__NAME, BLOCK__OFFSET__INSIDE__FILE FROM dividends;
Hope this helps and it's not too late.

Related

How to create an external table on a Parquet file

I have a Parquet file on GCP storage, converted from simple JSON: {"id":1,"name":"John"}.
Could you help me write the correct script? Is it possible to do this without a schema?
create external table test (
id string,
name string
)
row format delimited
fields terminated by '\;'
stored as ?????
location '??????'
tblproperties ('skip.header.line.count'='1');
Hive, like SQL databases, requires a declared schema, so you cannot create a table using HQL without one (unlike NoSQL stores such as HBase, for example). I advise you to use a Hive version >= 0.14; it is easier:
CREATE TABLE table_name (
string1 string,
string2 string,
int1 int,
boolean1 boolean,
long1 bigint,
float1 float,
double1 double,
inner_record1 struct<int_in_inner_record1:int, string_in_inner_record1:string>,
enum1 string,
array1 array<string>,
map1 map<string,string>,
union1 uniontype<float, boolean, string>,
fixed1 binary,
null1 void,
unionnullint int,
bytes1 binary)
PARTITIONED BY (ds string);
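For the asker's Parquet file specifically, a minimal sketch (the gs:// path is a placeholder; LOCATION must point at the directory containing the file, not the file itself, and skip.header.line.count does not apply to Parquet):
create external table test (
  id string,
  name string
)
stored as parquet
location 'gs://your-bucket/path/to/parquet_dir/';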

"Error: List index out of range" in HIVE while displaying a recently created empty table

I am using VirtualBox to run Cloudera v5.4.2-0.
I followed the instructions from an online course to create an empty table using the following HiveQL:
create table sales_all_years (
  RowID smallint, OrderID int, OrderDate date, OrderMonthYear date,
  Quantity int, Quote float, DiscountPct float, Rate float, SaleAmount float,
  CustomerName string, CompanyName string, Sector string, Industry string,
  City string, ZipCode string, State string, Region string,
  ProjectCompleteDate date, DaystoComplete int, ProductKey string,
  ProductCategory string, ProductSubCategory string, Consultant string,
  Manager string, HourlyWage float, RowCount int, WageMargin float)
partitioned by (yr int) -- partitioning
row format serde 'com.bizo.hive.serde.csv.CSVSerde'
stored as textfile;
I experimented a little and realized that partitioning the table is causing the error, but I could not figure out the reason.
P.S. I am a little new to this. Sorry if the format of my question is a little off.
The problem might be with com.bizo.hive.serde.csv.CSVSerde. Try creating the table with a different SerDe, for instance:
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
"separatorChar" = ",",
"quoteChar" = "'",
"escapeChar" = "\\"
)
STORED AS TEXTFILE
or just as a textfile with the corresponding separator:
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
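Applied to the asker's DDL, a minimal sketch trimmed to a few columns for brevity (note that OpenCSVSerde reads every column back as string, so numeric fields may need casting in queries):
create table sales_all_years (
  RowID string,
  OrderID string,
  OrderDate string,
  SaleAmount string)
partitioned by (yr int)
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
stored as textfile;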

Amazon Athena returning "mismatched input 'partitioned' expecting {, 'with'}" error when creating partitions

I'd like to use this query to create a partitioned table in Amazon Athena:
CREATE TABLE IF NOT EXISTS
testing.partitioned_test(order_id bigint, name string, car string, country string)
PARTITIONED BY (year int)
ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe'
STORED AS 'PARQUET'
LOCATION 's3://testing-imcm-into/partitions'
Unfortunately I get the following error message:
line 3:2: mismatched input 'partitioned' expecting {, 'with'}
The quotes around 'PARQUET' seemed to be causing a problem. Also, in Athena a plain CREATE TABLE (without EXTERNAL) is only valid for CTAS, which is why the parser expects 'with'.
Try this:
CREATE EXTERNAL TABLE IF NOT EXISTS
partitioned_test (order_id bigint, name string, car string, country string)
PARTITIONED BY (year int)
STORED AS PARQUET
LOCATION 's3://testing-imcm-into/partitions/'
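Creating the table registers no partitions by itself. If the data under s3://testing-imcm-into/partitions/ is laid out as year=2019/ style prefixes, a single command picks them all up (otherwise add them individually with ALTER TABLE ... ADD PARTITION):
MSCK REPAIR TABLE partitioned_test;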

Hive: Partitioning by part of integer column

I want to create an external Hive table, partitioned by record type and date (year, month, day). One complication is that the date in my data files is a single integer value, yyyymmddhhmmss, instead of the required date format yyyy-mm-dd hh:mm:ss.
Can I specify 3 new partition columns based on just a single data value? Something like the example below (which doesn't work):
create external table cdrs (
record_id int,
record_detail tinyint,
datetime_start int
)
partitioned by (record_type int, createyear=datetime_start(0,3) int, createmonth=datetime_start(4,5) int, createday=datetime_start(6,7) int)
row format delimited
fields terminated by '|'
lines terminated by '\n'
stored as TEXTFILE
location 'hdfs://nameservice1/tmp/sbx_unleashed.db'
tblproperties ("skip.header.line.count"="1", "skip.footer.line.count"="1");
If you want to be able to use MSCK REPAIR TABLE to add the partitions for you based on the directory structure, you should use the following convention:
The nesting of the directories should match the order of the partition columns.
A directory name should be {partition column name}={value}.
If you intend to add the partitions manually, then the structure has no meaning; any set of values can be coupled with any directory, e.g. -
alter table cdrs
add if not exists partition (record_type='TYP123', createdate=date '2017-03-22')
location 'hdfs://nameservice1/tmp/sbx_unleashed.db/2017MAR22_OF_TYPE_123';
Assuming directory structure -
.../sbx_unleashed.db/record_type=.../createyear=.../createmonth=.../createday=.../
e.g.
.../sbx_unleashed.db/record_type=TYP123/createyear=2017/createmonth=03/createday=22/
create external table cdrs
(
record_id int
,record_detail tinyint
,datetime_start int
)
partitioned by (record_type int,createyear int, createmonth tinyint, createday tinyint)
row format delimited
fields terminated by '|'
lines terminated by '\n'
stored as TEXTFILE
location 'hdfs://nameservice1/tmp/sbx_unleashed.db'
tblproperties ("skip.header.line.count"="1", "skip.footer.line.count"="1")
;
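With the directories following that convention, one command registers every partition:
MSCK REPAIR TABLE cdrs;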
Assuming directory structure -
.../sbx_unleashed.db/record_type=.../createdate=.../
e.g.
.../sbx_unleashed.db/record_type=TYP123/createdate=2017-03-22/
create external table cdrs
(
record_id int
,record_detail tinyint
,datetime_start int
)
partitioned by (record_type int,createdate date)
row format delimited
fields terminated by '|'
lines terminated by '\n'
stored as TEXTFILE
location 'hdfs://nameservice1/tmp/sbx_unleashed.db'
tblproperties ("skip.header.line.count"="1", "skip.footer.line.count"="1")
;
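To actually populate those partition columns from the yyyymmddhhmmss value, one option is a dynamic-partition insert from an unpartitioned staging table. A sketch under assumptions: cdrs_stage is a hypothetical staging table holding the raw rows, and datetime_start is cast through string here because a full yyyymmddhhmmss value overflows a Hive int:
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

insert overwrite table cdrs partition (record_type, createyear, createmonth, createday)
select
  record_id,
  record_detail,
  datetime_start,
  record_type,
  cast(substr(cast(datetime_start as string), 1, 4) as int) as createyear,
  cast(substr(cast(datetime_start as string), 5, 2) as tinyint) as createmonth,
  cast(substr(cast(datetime_start as string), 7, 2) as tinyint) as createday
from cdrs_stage;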

Adding a comma-separated table to Hive

I have a very basic question: how can I add a very simple table to Hive? My table is saved in a text file (.txt) in HDFS. I tried to create an external table in Hive that points to this file, but when I run an SQL query (select * from table_name) I don't get any output.
Here is an example code:
create external table Data (
dummy INT,
account_number INT,
balance INT,
firstname STRING,
lastname STRING,
age INT,
gender CHAR(1),
address STRING,
employer STRING,
email STRING,
city STRING,
state CHAR(2)
)
LOCATION 'hdfs:///KibTEst/Data.txt';
KibTEst/Data.txt is the path of the text file in HDFS.
The rows in the table are separated by carriage returns, and the columns are separated by commas.
Thanks for your help!
You just need to create an external table pointing to the HDFS directory that holds your file, with the delimiter properties set as below:
create external table Data (
dummy INT,
account_number INT,
balance INT,
firstname STRING,
lastname STRING,
age INT,
gender CHAR(1),
address STRING,
employer STRING,
email STRING,
city STRING,
state CHAR(2)
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 'hdfs:///KibTEst'; -- the location must be the directory containing the file, not the file itself
Then just run the select query (since the file is already in HDFS, the external table fetches data from it directly once the location is specified in the create statement). Test with the select statement below:
SELECT * FROM Data;
create external table Data (
dummy INT,
account_number INT,
balance INT,
firstname STRING,
lastname STRING,
age INT,
gender CHAR(1),
address STRING,
employer STRING,
email STRING,
city STRING,
state CHAR(2)
)
row format delimited
FIELDS TERMINATED BY ','
stored as textfile
LOCATION 'Your hdfs location for external table';
If the data is in HDFS, then use:
LOAD DATA INPATH 'hdfs_file_or_directory_path' INTO TABLE tablename;
Then use select * from table_name
create external table Data (
dummy INT,
account_number INT,
balance INT,
firstname STRING,
lastname STRING,
age INT,
gender CHAR(1),
address STRING,
employer STRING,
email STRING,
city STRING,
state CHAR(2)
)
row format delimited
FIELDS TERMINATED BY ','
stored as textfile
LOCATION '/Data';
Then load the file into the table:
LOAD DATA INPATH '/KibTEst/Data.txt' INTO TABLE Data;
Then
select * from Data;
I hope the inputs below help answer the question asked by #mshabeen.
There are different ways to load data into a Hive table that is created as an external table.
While creating the Hive external table you can either use the LOCATION option and specify an HDFS, S3 (in case of AWS), or file location from which to load the data, OR you can use LOAD DATA INPATH to load data from HDFS, S3, or a file after creating the Hive table.
Alternatively, you can also use the ALTER TABLE command to load data into Hive partitions.
Below are some details:
Using LOCATION - used while creating the Hive table. In this case the data is already loaded and available in the Hive table.
Using LOAD DATA INPATH - this Hive command loads data from the specified location. The point to remember here is that the data gets MOVED from the input path to the Hive warehouse path.
Example -
LOAD DATA INPATH 'hdfs://cluster-ip/path/to/data/location/' INTO TABLE table_name;
Using ALTER TABLE - mostly used to add data from other locations into Hive partitions. In this case all partitions must already be defined and their values known. For dynamic partitions this command is not required.
Example -
ALTER TABLE table_name ADD PARTITION (date_col='2018-02-21') LOCATION 'hdfs://cluster-ip/path/to/location/';
The above code will map the partition to the specified data location (in this case HDFS). However, the data will NOT be moved to the Hive internal warehouse location.