I need to create an external Hive table to load data generated by another process. It needs to be partitioned by date, but the problem is that there is a random string in the path.
Example input paths:
/user/hadoop/output/FDQM9N4RCQ3V2ZYT/20170314/
/user/hadoop/output/FDPWMUVVBO2X74CA/20170315/
/user/hadoop/output/FDPGNC0ENA6QOF9G/20170316/
.........
.........
Here the 4th field in the path is dynamic (it cannot be guessed). Each of these directories will have multiple .gz files.
What location would I give while creating the table?
CREATE EXTERNAL TABLE user (
userId BIGINT,
type INT,
date String
)
PARTITIONED BY (date String)
LOCATION '/user/hadoop/output/';
Is this correct? If so, how do I partition it based on the date (the last field in the directory)?
Since you are not using the partition naming convention, you'll have to add each partition manually.
The table location does not matter, but for clarity leave it as it is now.
I would recommend using the date type for the partition, or at least the ISO format, YYYY-MM-DD.
I would not use date as a column name (nor int, string, etc.).
PARTITIONED BY (dt date)
alter table user add if not exists partition (dt=date '2017-03-14') location '/user/hadoop/output/FDQM9N4RCQ3V2ZYT/20170314';
alter table user add if not exists partition (dt=date '2017-03-15') location '/user/hadoop/output/FDPWMUVVBO2X74CA/20170315';
alter table user add if not exists partition (dt=date '2017-03-16') location '/user/hadoop/output/FDPGNC0ENA6QOF9G/20170316';
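Putting those recommendations together, a minimal sketch of the corrected DDL (keeping the question's table and column names, and replacing the date column with the dt partition) could be:
CREATE EXTERNAL TABLE user (
    userId BIGINT,
    type INT
)
PARTITIONED BY (dt date)
LOCATION '/user/hadoop/output/';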
I am trying to create a table in Athena based on a directory in S3 that looks something like this:
folders/
    id=1/
        folder1/
        folder2/
        folder3/
            dt=***/
            dt=***/
    id=2/
    ...
I want to partition by two columns. One is the id, and one is the dt.
So eventually I want my table to have an id column and, for each id, all of the dt values in its sub-folder folder3. Is there any solution for this that doesn't force me to have a path like this: ...\id=\dt=?
I tried simply declaring these two columns in the PARTITIONED BY clause, with the location set to the "folders" path, but then the table has no data.
I then tried using injection and setting a specific id in a WHERE clause when querying the table, but then the table contains data I don't need, and it seems the partitioning doesn't work as I expected.
Table DDL:
CREATE EXTERNAL TABLE IF NOT EXISTS `database`.`test_table` (
`col1` string,
`col2` string
) PARTITIONED BY (
id string,
dt string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = ',',
'field.delim' = ','
) LOCATION 's3://folders/'
Appreciate any help!
You can "manually" add the partitions using something like
alter table your_table add if not exists
partition (id=1, dt=0)
location 's3://folders/id=1/folder3/dt=0/'
partition (id=1, dt=1)
location 's3://folders/id=1/folder3/dt=1/'
...
You can programmatically add all your partitions on S3 this way: use the AWS CLI to list all the folders, loop over them, and add each one with a query like the above (see the docs).
An alternative is to use partition projection with custom storage locations, which has the benefit of faster queries and removes the need to manually add new partitions when new data arrives in S3 (see the partition projection docs, especially the section on custom S3 locations).
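As a rough sketch of that alternative (the 'injected' projection type for id and the dt range/format below are assumptions for illustration, since the real id values and dt format are not shown in the question), an Athena table using partition projection with a custom storage location could look something like:
-- sketch only: 'injected' means every query must filter on id; dt range/format are assumptions
CREATE EXTERNAL TABLE IF NOT EXISTS `database`.`test_table` (
  `col1` string,
  `col2` string
) PARTITIONED BY (
  id string,
  dt string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'field.delim' = ','
) LOCATION 's3://folders/'
TBLPROPERTIES (
  'projection.enabled' = 'true',
  'projection.id.type' = 'injected',
  'projection.dt.type' = 'date',
  'projection.dt.range' = '2020-01-01,NOW',
  'projection.dt.format' = 'yyyy-MM-dd',
  'storage.location.template' = 's3://folders/id=${id}/folder3/dt=${dt}/'
);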
We have one Hive table that is partitioned by date. It currently uses the SequenceFile format, and I want to convert it into a Parquet table.
Is it possible to have new partitions with the Parquet SerDe and older ones with the Sequence format, so that I don't need to backfill?
Create an empty external table with the default SerDe (LazySimpleSerDe) and the default storage format (TEXTFILE).
Add a partition.
Alter the partition to set its file format (or SerDe).
Hive LanguageManual DDL
CREATE EXTERNAL TABLE test(ip string, localTime string)
PARTITIONED BY (partition__hive__ STRING) location '/tmp/table/empty';
alter table test add partition (partition__hive__='p_0') location 'hdfs://hdfsTest/hive/table/test/2018/11/21/08';
alter table test partition (partition__hive__='p_0') SET FILEFORMAT parquet;
alter table test add partition (partition__hive__='p_1') location 'hdfs://hdfsTest/hive/table/test/2018/11/21/09';
alter table test partition (partition__hive__='p_1') SET SERDE 'org.apache.hive.hcatalog.data.JsonSerDe';
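Applied to the original question, the same pattern should let newer partitions use Parquet while the older SequenceFile partitions stay readable, because Hive stores the file format per partition. A hedged sketch (the table name, partition column, and location below are illustrative):
-- 'events', 'dt' and the location are illustrative assumptions
alter table events add partition (dt='2018-11-22') location 'hdfs://hdfsTest/hive/table/events/2018/11/22';
alter table events partition (dt='2018-11-22') SET FILEFORMAT parquet;
-- older partitions keep their SequenceFile format and remain queryable as before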
I have the following file on HDFS:
I create the structure of the external table in Hive:
CREATE EXTERNAL TABLE google_analytics(
`session` INT)
PARTITIONED BY (date_string string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION '/flumania/google_analytics';
ALTER TABLE google_analytics ADD PARTITION (date_string = '2016-09-06') LOCATION '/flumania/google_analytics';
After that, the table structure is created in Hive, but I cannot see any data.
Since it's an external table, data insertion should be done automatically, right?
Your file's columns should be in this order:
int,string
Here your file's contents are in this order:
string,int
Change your file to the following:
86,"2016-08-20"
78,"2016-08-21"
It should work.
Also, it is not recommended to use keywords (like date) as column names.
I think the problem was with the alter table command. The code below solved my problem:
CREATE EXTERNAL TABLE google_analytics(
`session` INT)
PARTITIONED BY (date_string string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION '/flumania/google_analytics/';
ALTER TABLE google_analytics ADD PARTITION (date_string = '2016-09-06');
After these two steps, if you have a date_string=2016-09-06 subfolder containing a CSV file that matches the structure of the table, the data will be picked up automatically and you can already use SELECT queries to see it.
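In other words, with the table location above, Hive expects a layout like this (the file name data.csv is illustrative):
/flumania/google_analytics/date_string=2016-09-06/data.csv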
Solved!
Let's imagine I store one file per day in a format:
/path/to/files/2016/07/31.csv
/path/to/files/2016/08/01.csv
/path/to/files/2016/08/02.csv
How can I read the files in a single Hive table for a given date range (for example from 2016-06-04 to 2016-08-03)?
Assuming all the files follow the same schema, I would suggest storing them with the following naming convention:
/path/to/files/dt=2016-07-31/data.csv
/path/to/files/dt=2016-08-01/data.csv
/path/to/files/dt=2016-08-02/data.csv
You could then create an external table partitioned by dt and pointing to the location /path/to/files/
CREATE EXTERNAL TABLE yourtable(id int, value int)
PARTITIONED BY (dt string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/path/to/files/'
If you have several partitions and don't want to write an alter table yourtable add partition ... query for each one, you can simply use the repair command, which automatically adds the partitions.
msck repair table yourtable
You can then simply select data within a date range by specifying the partition range
SELECT * FROM yourtable WHERE dt BETWEEN '2016-06-04' and '2016-08-03'
Without moving your file:
Design your table schema. In the Hive shell, create the table (partitioned by date).
Load the files into the table ("Loading files into tables" in the Hive manual), as sketched below.
Query with HiveQL (select * from table where dt between '2016-06-04' and '2016-08-03').
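A hedged sketch of that loading step, assuming it refers to LOAD DATA (the table name tableName is taken from the second option below; LOAD DATA INPATH has Hive place the file into the partition directory for you):
LOAD DATA INPATH '/path/to/files/2016/07/31.csv' INTO TABLE tableName PARTITION (dt='2016-07-31');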
Moving your file:
Design your table schema. In the Hive shell, create the table (partitioned by date).
Move /path/to/files/2016/07/31.csv under /dbname.db/tableName/dt=2016-07-31 (and likewise for the other days); you'll then have
/dbname.db/tableName/dt=2016-07-31/file1.csv
/dbname.db/tableName/dt=2016-08-01/file1.csv
/dbname.db/tableName/dt=2016-08-02/file1.csv
Load the partition with
alter table tableName add partition (dt='2016-07-31');
See Add partitions
In spark-shell, read a Hive table whose data is stored at:
/path/to/data/user_info/dt=2016-07-31/0000-0
1. Create the SQL statement
val sql = "CREATE EXTERNAL TABLE `user_info`( `userid` string, `name` string) PARTITIONED BY ( `dt` string) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' LOCATION 'hdfs://.../data/user_info'"
2. Run it
spark.sql(sql)
3. Load the data (add the partition)
val rlt = spark.sql("alter table user_info add partition (dt='2016-09-21')")
4. Now you can select data from the table
val df = spark.sql("select * from user_info")
Say I have a single file, fruitsbought.csv, that contains many records, each with a date field.
Is it possible to "partition" for better performance by creating the "fruits" table from that text file while also creating partitions, say by year and month, into which all the matching rows from fruitsbought.csv are placed?
Or do I have to, as a separate process, create a directory for each year and then put the appropriate .csv files, filtered down to that year, into the directory structure on HDFS before creating the table in impala-shell?
I heard that you can create an empty table, set up partitions, and then use INSERT statements that specify which partition each record goes into. In my current case, though, I already have a single fruitsbought.csv that contains every record I want, and I like that I can turn it into a table right away (even though it has no partitioning).
Do I have to develop a separate process to pre-split the one file into multiple files sorted under the right partitions? (The one file is very, very big.)
Create an external table using the fruitsbought.csv example (id is just an example; "....." stands for the rest of the columns in the table):
CREATE EXTERNAL TABLE fruitsboughexternal
(
id INT,
.....
mydate STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 'somelocationwithfruitsboughtfile/';
Create the target table, partitioned by year, month, and day:
CREATE TABLE fruitsbought(id INT, .....)
PARTITIONED BY (year INT, month INT, day INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
Import the data into the fruitsbought table; the partition columns have to come last in the SELECT (and of course mydate has to be in a format Impala understands, like 2014-06-20 06:05:25):
INSERT INTO fruitsbought PARTITION(year, month, day) SELECT id, ..., year(mydate), month(mydate), day(mydate) FROM fruitsboughexternal;