Query Athena from s3 database - remove metadata/corrupted data - sql

I was following along with the tutorials for connecting Tableau to Amazon Athena and got hung up when running the query and trying to return the expected result. I downloaded student-db.csv from https://github.com/aws-samples/amazon-athena-tableau-integration and uploaded the csv to an S3 bucket that I created. I can create the database within Athena; however, when I create a table (either with the bulk add or directly from the query editor) and preview it with a query, the data comes back corrupted. It includes unexpected characters and punctuation, sometimes all the data is aggregated into a single column, and it also contains metadata such as "1 ?20220830_185102_00048_tnqre"0 2 ?hive" 3 Query Plan* 4 Query Plan2?varchar8 #H?P?". With Athena connected to Tableau I get the same issues when I preview the table that was created with Athena and stored in my bucket.
CREATE EXTERNAL TABLE IF NOT EXISTS student(
`school` string,
`country` string,
`gender` string,
`age` string,
`studytime` int,
`failures` int,
`preschool` string,
`higher` string,
`remotestudy` string,
`health` string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
's3://jj2-test-bucket/'
TBLPROPERTIES (
'has_encrypted_data'='false',
'skip.header.line.count'='1',
'transient_lastDdlTime'='1595149168');

SELECT * FROM "studentdb"."student" LIMIT 10;
[Screenshot: query preview]

The solution is to create a separate S3 bucket to house the query results. Athena writes its query result and metadata files into the query result location, so if that location is the same bucket the table's LOCATION points to, those files get read back as table data, which is exactly the kind of corruption shown above. Additionally, when connecting to Tableau you must set the S3 Staging Directory to the location of the query result bucket rather than to the S3 bucket that contains your raw data/CSV.
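A minimal sketch of what that layout can look like, assuming the CSV is kept under its own prefix and the query results go to a separate bucket (the data/ prefix and the jj2-athena-query-results bucket name below are placeholders):
-- Table data lives under a dedicated prefix that contains only student-db.csv.
CREATE EXTERNAL TABLE IF NOT EXISTS studentdb.student(
`school` string,
`country` string,
`gender` string,
`age` string,
`studytime` int,
`failures` int,
`preschool` string,
`higher` string,
`remotestudy` string,
`health` string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 's3://jj2-test-bucket/data/'
TBLPROPERTIES ('skip.header.line.count'='1');
-- In Athena's settings, set the query result location to a different bucket,
-- e.g. s3://jj2-athena-query-results/, and point Tableau's "S3 Staging Directory"
-- at that same results bucket.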

Related

Redshift Spectrum query returns 0 row from S3 file

I tried Redshift Spectrum. Both of the queries below completed successfully without any error message, but I can't get the right count from the uploaded file in S3; it just returns a row count of 0, even though the file has over 3 million records.
-- Create External Schema
CREATE EXTERNAL SCHEMA spectrum_schema FROM data catalog
database 'spectrum_db'
iam_role 'arn:aws:iam::XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
create external database if not exists;
-- Create External Table
create EXTERNAL TABLE spectrum_schema.principals(
tconst VARCHAR (20),
ordering BIGINT,
nconst VARCHAR (20),
category VARCHAR (500),
job VARCHAR (500),
characters VARCHAR(5000)
)
row format delimited
fields terminated by '\t'
stored as textfile
location 's3://xxxxx/xxxxx/';
I also tried the option 'stored as parquet'; the result was the same.
My IAM role has "s3:*", "athena:*", and "glue:*" permissions, and the Glue table was created successfully.
Just in case, I also confirmed that the same S3 file could be copied into a table in the Redshift cluster successfully, so I concluded the file/data has no issue by itself.
Is there something wrong with my procedure or query? Any advice would be appreciated.
Since your query is not scanning any data, the issue looks like the table definition does not match the actual data in S3. To figure this out, you can simply generate a table using an AWS Glue crawler.
Once that table is created, compare its properties in the Glue Data Catalog with the table you created manually via DDL. The differences will show you what is missing from your hand-written definition.
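One hedged way to do that comparison from Redshift itself (the crawler-generated table name 'principals_crawled' below is hypothetical):
-- Compare the two external table definitions side by side.
SELECT tablename, location, input_format, serialization_lib, serde_parameters
FROM svv_external_tables
WHERE schemaname = 'spectrum_schema'
AND tablename IN ('principals', 'principals_crawled');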

Amazon Athena partitioning query error "no viable alternative"

I'm trying to make a partitioned data table in Amazon Athena, so that I can analyze the contents of a bucket containing S3 access logs. I've followed the instructions almost exactly as they are written in the docs, just substituting my own info. However, I keep getting the error line 1:8: no viable alternative at input 'create external' (Service: AmazonAthena; Status Code: 400; Error Code: InvalidRequestException; Request ID: 847e3d9c-8d3c-4810-a98c-8527270f8dd8). Here's what I'm inputting:
CREATE EXTERNAL TABLE access_data (
`Date` DATE,
Time STRING,
Location STRING,
Bytes INT,
RequestIP STRING,
Host STRING,
Uri STRING,
Status INT,
Referrer STRING,
os STRING,
Browser STRING,
BrowserVersion STRING
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
WITH serdeproperties ( 'paths'='`Date`,Time, Uri' )
PARTITIONED BY (dt DATE) STORED AS parquet LOCATION 's3://[source bucket]/';
I've looked at other similar questions on here, but I don't have a hyphenated table name, trailing commas, unbalanced backticks, or missing parentheses, etc., so I really don't know what's wrong. Thanks to anyone who can help!
It appears that these two lines are conflicting with each other:
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe' WITH serdeproperties ...
and
STORED AS parquet
Removing one of them allows the table creation to proceed.
Parquet does not store data in JSON format.
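For example, a minimal sketch of the Parquet variant, assuming the files in the bucket really are Parquet (keep the JSON SerDe and drop STORED AS parquet instead if they are JSON):
CREATE EXTERNAL TABLE access_data (
`Date` DATE,
Time STRING,
Location STRING,
Bytes INT,
RequestIP STRING,
Host STRING,
Uri STRING,
Status INT,
Referrer STRING,
os STRING,
Browser STRING,
BrowserVersion STRING
)
PARTITIONED BY (dt DATE)
STORED AS PARQUET
LOCATION 's3://[source bucket]/';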

Hive partitioning for data on s3

Our data is stored using the s3://bucket/YYYY/MM/DD/HH layout, and we are using AWS Firehose to land Parquet data in those locations in near real time. I can query the data using AWS Athena just fine; however, we have a Hive query cluster that has trouble querying the data when partitioning is enabled.
This is what I am doing :
PARTITIONED BY (
`year` string,
`month` string,
`day` string,
`hour` string)
This doesn't seem to work when data on S3 is stored as s3://bucket/YYYY/MM/DD/HH,
however it does work for s3://bucket/year=YYYY/month=MM/day=DD/hour=HH.
Given the rigid bucket paths of Firehose, I cannot modify the S3 paths. So my question is: what's the right partitioning scheme in Hive DDL when you don't have an explicitly defined column name in your data path, like year= or month=?
Now you can specify a custom S3 prefix in Firehose: https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html
myPrefix/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/
If you can't get folder names that follow the Hive naming convention, you will need to map all the partitions manually, for example:
ALTER TABLE tableName ADD PARTITION (year='YYYY', month='MM', day='DD', hour='HH') LOCATION 's3://bucket/YYYY/MM/DD/HH/';
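As a hedged sketch of what that manual mapping can look like in practice (the dates and bucket name are placeholders), several partitions can be added in one statement:
ALTER TABLE tableName ADD IF NOT EXISTS
PARTITION (year='2020', month='01', day='01', hour='00') LOCATION 's3://bucket/2020/01/01/00/'
PARTITION (year='2020', month='01', day='01', hour='01') LOCATION 's3://bucket/2020/01/01/01/';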

Hive External Table with Azure Blob Storage

Is there a way to create a Hive external table using a SerDe with the location pointing to Azure Storage, organized in such a way that the data uses the fewest number of blobs? For example, if I insert 10,000 records, I would like it to create just 100 page blobs with 100 records each instead of maybe 10,000 blobs with 1 record each. I am deserializing from the blobs, so fewer blobs will require less time. What would be the most optimal format in Hive?
First, there is a way to create a Hive external table using a SerDe with the location pointing to Azure Blob Storage, but not directly; please see the section "Create Hive database and tables" and the HiveQL below.
create database if not exists <database name>;
CREATE EXTERNAL TABLE if not exists <database name>.<table name>
(
field1 string,
field2 int,
field3 float,
field4 double,
...,
fieldN string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '<field separator>' lines terminated by '<line separator>'
STORED AS TEXTFILE LOCATION '<storage location>' TBLPROPERTIES("skip.header.line.count"="1");
Pay particular attention to the explanation of <storage location> below.
<storage location>: the Azure storage location where the data of the Hive tables is saved. If you do not specify LOCATION, the database and the tables are stored in the hive/warehouse/ directory in the default container of the Hive cluster. If you want to specify the storage location, it has to be within the default container for the database and tables. The location has to be referred to as a location relative to the default container of the cluster, in the format 'wasb:///<directory 1>/' or 'wasb:///<directory 1>/<directory 2>/', etc. After the query is executed, the relative directories are created within the default container.
This means you can access an Azure Blob Storage location from Hive via the wasb protocol, which requires the hadoop-azure library that lets Hadoop access Azure Storage as a file system. If your Hive on Hadoop is not deployed on Azure, you need to refer to the official Hadoop document Hadoop Azure Support: Azure Blob Storage to configure it.
Which SerDe to use depends on the file format; for example, for the ORC file format, the HQL uses OrcSerde as below.
CREATE EXTERNAL TABLE IF NOT EXISTS <table name> (<column_name column_type>, ...)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS ORC
LOCATION '<orcfile path>';
As for your second question, the most optimal format in Hive is the ORC file format.
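As a rough sketch of how that can keep the blob count down (the student_orc/student_raw table names, the columns, and the merge settings below are assumptions, not part of the original answer): write into an ORC external table and let Hive merge small output files.
SET hive.merge.tezfiles=true;                 -- merge small files at the end of a Tez job
SET hive.merge.smallfiles.avgsize=134217728;  -- merge when average output file size is below ~128 MB
CREATE EXTERNAL TABLE IF NOT EXISTS student_orc (field1 string, field2 int)
STORED AS ORC
LOCATION 'wasb:///student_orc/';
INSERT OVERWRITE TABLE student_orc
SELECT field1, field2 FROM student_raw;       -- student_raw is a hypothetical source table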

How to query data from gz file of Amazon S3 using Qubole Hive query?

I need to get specific data from a gz file.
How do I write the SQL?
Can I just query the file as if it were a database table, like this?
Select * from gz_File_Name where key = 'keyname' limit 10
But it always comes back with an error.
You need to create a Hive external table over the file's location (folder) to be able to query it using Hive. Hive will recognize the gzip format. Like this:
create external table hive_schema.your_table (
col_one string,
col_two string
)
stored as textfile --specify your file type, or use serde
LOCATION
's3://your_s3_path_to_the_folder_where_the_file_is_located'
;
See the manual on Hive tables here: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTableCreate/Drop/TruncateTable
To be precise, S3 under the hood does not store folders; filenames containing slashes in S3 are presented by tools such as Hive as a folder structure. See here: https://stackoverflow.com/a/42877381/2700344
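Once the external table exists, the kind of query from the question works against it; a small usage sketch, using the placeholder column names above:
SELECT *
FROM hive_schema.your_table
WHERE col_one = 'keyname'
LIMIT 10;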