I am trying to create a table in Athena based on a directory in S3 that looks something like this:
folders/
    id=1/
        folder1/
        folder2/
        folder3/
            dt=***/
            dt=***/
    id=2/
    ...
I want to partition by two columns: one is the id, and the other is the dt.
So eventually I want my table to have an id column, and for each id, all of the dt's in its sub-folder folder3. Is there any solution for this that doesn't force me to have a path like .../id=.../dt=...?
I tried simply setting these two columns in the "partition by" section with the location set to the "folders" path, but then the table has no data.
I then tried using injection and setting a specific id in a WHERE clause when querying the table, but then the table contains data I don't need, and it seems the partitioning doesn't work as I expected.
Table DDL:
CREATE EXTERNAL TABLE IF NOT EXISTS `database`.`test_table` (
  `col1` string,
  `col2` string
) PARTITIONED BY (
  id string,
  dt string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = ',',
  'field.delim' = ','
) LOCATION 's3://folders/'
Appreciate any help!
You can "manually" add the partitions using something like
alter table your_table add if not exists
partition (id='1', dt='0')
location 's3://folders/id=1/folder3/dt=0/'
partition (id='1', dt='1')
location 's3://folders/id=1/folder3/dt=1/'
...
You can programmatically add all your partitions on S3 this way: use the AWS CLI to list all the folders, loop over them, and add each one with a query like the one above (see the docs).
An alternative is to use partition projection with custom storage locations, which has the benefit of faster queries and removes the need to manually add new partitions when new data arrives in S3 (see the partition projection docs, especially the section on custom S3 locations).
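As a sketch of what that could look like for the layout above (the projection ranges and the dt format below are assumptions and have to be adapted to your actual id values and date folders):
CREATE EXTERNAL TABLE IF NOT EXISTS `database`.`test_table` (
  `col1` string,
  `col2` string
) PARTITIONED BY (
  id string,
  dt string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = ',',
  'field.delim' = ','
) LOCATION 's3://folders/'
TBLPROPERTIES (
  -- enable partition projection for this table
  'projection.enabled' = 'true',
  -- assumed: ids are integers between 1 and 100
  'projection.id.type' = 'integer',
  'projection.id.range' = '1,100',
  -- assumed: dt folders are dates in yyyy-MM-dd format
  'projection.dt.type' = 'date',
  'projection.dt.format' = 'yyyy-MM-dd',
  'projection.dt.range' = '2020-01-01,NOW',
  -- the template maps the projected values onto the non-standard path layout
  'storage.location.template' = 's3://folders/id=${id}/folder3/dt=${dt}/'
)
With this in place Athena resolves the partition locations from the template at query time, so no ALTER TABLE ADD PARTITION calls are needed.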
I have created a partitioned table. I tried two approaches for my S3 bucket folders, as shown below, but with both I get no records found when I query with a WHERE clause on the partition columns.
My S3 bucket looks like the following. The part*.csv files are what I want to query in Athena. There are other folders at the same location, both alongside output and within output.
s3://bucket-rootname/ABC-CASE/report/f78dea49-2c3a-481b-a1eb-5169d2a97747/output/part-filename121231.csv
s3://bucket-rootname/XYZ-CASE/report/678d1234-2c3a-481b-a1eb-5169d2a97747/output/part-filename213123.csv
My table looks like the following.
Version 1:
CREATE EXTERNAL TABLE `mytable_trial1`(
`status` string,
`ref` string)
PARTITIONED BY (
`casename` string,
`id` string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION
's3://bucket-rootname/'
TBLPROPERTIES (
'has_encrypted_data'='false',
'skip.header.line.count'='1')
ALTER TABLE mytable_trial1 add partition (casename="ABC-CASE",id="f78dea49-2c3a-481b-a1eb-5169d2a97747") location "s3://bucket-rootname/casename=ABC-CASE/report/id=f78dea49-2c3a-481b-a1eb-5169d2a97747/output/";
select * from mytable_trial1 where casename='ABC-CASE' and report='report' and id='f78dea49-2c3a-481b-a1eb-5169d2a97747' and foldername='output';
Version 2:
CREATE EXTERNAL TABLE `mytable_trial1`(
`status` string,
`ref` string)
PARTITIONED BY (
`casename` string,
`report` string,
`id` string,
`foldername` string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION
's3://bucket-rootname/'
TBLPROPERTIES (
'has_encrypted_data'='false',
'skip.header.line.count'='1')
ALTER TABLE mytable_trial1 add partition (casename="ABC-CASE",report="report",id="f78dea49-2c3a-481b-a1eb-5169d2a97747",foldername="output") location "s3://bucket-rootname/casename=ABC-CASE/report=report/id=f78dea49-2c3a-481b-a1eb-5169d2a97747/foldername=output/";
select * from mytable_trial1 where casename='ABC-CASE' and id='f78dea49-2c3a-481b-a1eb-5169d2a97747'
SHOW PARTITIONS shows this partition, but no records are found with the WHERE clause.
I worked with AWS Support and we were able to narrow down the issue. Version 2 was the right one to use, since it has four partition columns matching my S3 path. Also, the ALTER TABLE command had an issue with the location: I had used a Hive-format location, which was incorrect since my actual S3 location is not in Hive format. Correcting the command to the following worked for me.
ALTER TABLE mytable_trial1 add partition (casename="ABC-CASE",report="report",id="f78dea49-2c3a-481b-a1eb-5169d2a97747",foldername="output") location "s3://bucket-rootname/ABC-CASE/report/f78dea49-2c3a-481b-a1eb-5169d2a97747/output/";
Preview table now shows my entries.
I have a table in AWS Athena which contains 2 records. Is there a SQL query with which a new column can be added to the table?
You can find more information about adding columns to a table in the Athena documentation.
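In short, adding a column is a single statement. A minimal sketch, where the table and column names are just placeholders:
-- appends a new string column after the existing columns
ALTER TABLE your_table ADD COLUMNS (new_col string)
Existing rows will simply show NULL for the new column, since the underlying files don't contain it.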
Or you can use CTAS (CREATE TABLE AS SELECT).
For example, you have a table with
CREATE EXTERNAL TABLE sample_test(
id string)
LOCATION
's3://bucket/path'
and you can create another table from sample_test with the query
CREATE TABLE new_test
AS
SELECT *, 'new' AS new_col FROM sample_test
You can use any valid query after AS.
This is mainly for future readers like me who were struggling to get this working for a Hive table with AVRO data without creating a new table, i.e. by updating the schema of the existing table. It works for CSV using ADD COLUMNS, but not for Hive + AVRO. For Hive + AVRO, to append columns at the end, before the partition columns, the solution is available at this link. However, there are a couple of things to note: you need to pass the full schema to the literal attribute, not just the changes; and (not sure why, but) we had to alter the Hive table for all three things in the same order: 1. add the columns using ADD COLUMNS, 2. set the table properties, and 3. set the SerDe properties. Hopefully it helps someone.
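A rough sketch of that order, assuming a hypothetical Avro-backed table my_avro_table and a hypothetical schema (the full schema, existing fields plus the appended one, goes into both literals):
-- 1. append the new column (it lands before the partition columns)
ALTER TABLE my_avro_table ADD COLUMNS (new_col string);
-- 2. set the full Avro schema, not just the change, as a table property
ALTER TABLE my_avro_table SET TBLPROPERTIES (
  'avro.schema.literal' = '{"type":"record","name":"my_record","fields":[{"name":"existing_col","type":"string"},{"name":"new_col","type":["null","string"],"default":null}]}'
);
-- 3. set the same full schema as a SerDe property
ALTER TABLE my_avro_table SET SERDEPROPERTIES (
  'avro.schema.literal' = '{"type":"record","name":"my_record","fields":[{"name":"existing_col","type":"string"},{"name":"new_col","type":["null","string"],"default":null}]}'
);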
I need to create an external Hive table to load data generated by another process. It needs to be partitioned by date, but the problem is that there is a random string in the path.
Example input paths :
/user/hadoop/output/FDQM9N4RCQ3V2ZYT/20170314/
/user/hadoop/output/FDPWMUVVBO2X74CA/20170315/
/user/hadoop/output/FDPGNC0ENA6QOF9G/20170316/
.........
.........
Here the 4th field in the directory is dynamic (it cannot be guessed). Each of these directories will have multiple .gz files.
What location would I give while creating the table?
CREATE EXTERNAL TABLE user (
userId BIGINT,
type INT,
date String
)
PARTITIONED BY (date String)
LOCATION '/user/hadoop/output/';
Is this correct? If so, how do I partition it based on the date (the last field in the directory)?
Since you are not using the Hive partition naming convention (key=value in the path), you'll have to add each partition manually.
The table location does not matter, but for clarity leave it as it is now.
I would recommend using the date type for the partition, or at least the ISO format, YYYY-MM-DD.
I would not use date as a column name (nor int, string, etc.), since these are type keywords.
PARTITIONED BY (dt date)
alter table user add if not exists partition (dt=date '2017-03-14') location '/user/hadoop/output/FDQM9N4RCQ3V2ZYT/20170314';
alter table user add if not exists partition (dt=date '2017-03-15') location '/user/hadoop/output/FDPWMUVVBO2X74CA/20170315';
alter table user add if not exists partition (dt=date '2017-03-16') location '/user/hadoop/output/FDPGNC0ENA6QOF9G/20170316';
My data are distributed over multiple directories and multiple tab-separated files within those directories. The general structure looks like this:
s3://bucket_name/directory/{year}{month}/{iso_2}/{year}{month}{day}_table.bcp.gz
where {year} is the 4-digit year, {month} is the 2-digit month, {day} is the 2-digit day and {iso_2} is the ISO2 country code.
How do I set this up as a table in Athena?
Athena uses Hive DDL, so you just need to run a normal Hive create statement:
CREATE EXTERNAL TABLE table_name(
col_1 string,
...
col_n string)
PARTITIONED BY (
year_month string,
iso_2 string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION 's3://bucket_name/directory/';
Then register these directories as new partitions of the table by running MSCK REPAIR TABLE table_name. If this fails to pick anything up (it only discovers directories that follow the Hive key=value naming convention, so it often comes up empty in Athena), you'll need to run the ADD PARTITION statements for your existing directories:
ALTER TABLE table_name ADD PARTITION
(year_month='201601', iso_2='US') LOCATION 's3://bucket_name/directory/201601/US/';
ALTER TABLE table_name ADD PARTITION
(year_month='201602', iso_2='US') LOCATION 's3://bucket_name/directory/201602/US/';
ALTER TABLE table_name ADD PARTITION
(year_month='201601', iso_2='GB') LOCATION 's3://bucket_name/directory/201601/GB/';
etc.
I'm trying to create a bucketed table in Hive using the following commands:
hive> create table emp( id int, name string, country string)
clustered by( country)
row format delimited
fields terminated by ','
stored as textfile ;
The command executes successfully: when I load data into this table, the load also succeeds and all the data is shown by select * from emp.
However, on HDFS it only creates one table directory containing a single file with all the data. That is, there is no folder for each specific country's records.
First of all, in the DDL statement you have to explicitly mention how many buckets you want.
create table emp( id int, name string, country string)
clustered by( country)
INTO 2 BUCKETS
row format delimited
fields terminated by ','
stored as textfile ;
In the above statement I have specified 2 buckets; similarly, you can specify any number you want.
Still, you are not done!
After that, while loading data into the table, you also have to give Hive the hint below.
set hive.enforce.bucketing = true;
That should do it.
After this you should see that the number of files created under the table directory is the same as the number of buckets specified in the DDL statement.
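For example, with the hint set, a load through a query could look like this (a sketch; emp_staging is a hypothetical unbucketed staging table holding the raw rows):
set hive.enforce.bucketing = true;
-- bucketing only takes effect when data is written through a query,
-- so insert from a staging table rather than using LOAD DATA
insert overwrite table emp
select id, name, country from emp_staging;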
Bucketing doesn't create HDFS folders; rather, if you want a separate folder to be created per country, then you should PARTITION.
Please go through Hive partitioning and bucketing in detail.
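As a sketch of that alternative (the table name emp_partitioned and the load from emp are just illustrations), a partitioned table writes each country into its own folder:
create table emp_partitioned (id int, name string)
partitioned by (country string)
row format delimited
fields terminated by ','
stored as textfile;
-- allow the partition value to come from the data itself
set hive.exec.dynamic.partition = true;
set hive.exec.dynamic.partition.mode = nonstrict;
-- each distinct country ends up in its own country=<value> subfolder on HDFS
insert overwrite table emp_partitioned partition (country)
select id, name, country from emp;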