Query S3 Bucket With Amazon Athena and modify values

I have an S3 bucket with 500 CSV files that are identical except for the number values in each file.
How do I write a query that takes dividendsPaid, makes it positive for each file, and sends the results back to S3?

Amazon Athena is a query engine that can perform queries on objects stored in Amazon S3. It cannot modify files in an S3 bucket. If you want to modify those input files in-place, then you'll need to find another way to do it.
However, it is possible for Amazon Athena to create a new table with the output files stored in a different location. You could use the existing files as input and then store new files as output.
The basic steps are:
Create a table definition (DDL) for the existing data (I would recommend using an AWS Glue crawler to do this for you)
Use CREATE TABLE AS to select data from the table and write it to a different location in S3. The command can include an SQL SELECT statement to modify the data (changing the negatives).
See: Creating a table from query results (CTAS) - Amazon Athena
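As a rough sketch of the CREATE TABLE AS step (the finance database, dividends_raw input table, dividends_positive output table, bucket names, and the extra ticker column are all placeholder assumptions), the CTAS statement below flips the sign with ABS() and writes the result as new delimited text files under a different prefix, submitted here through the boto3 Athena client:

import boto3

athena = boto3.client("athena")

# CTAS sketch: read every CSV behind the crawled table, flip the sign of
# dividendsPaid, and write the result as new delimited text files under a
# different S3 prefix. All table, column and bucket names are placeholders;
# list your real columns in the SELECT.
ctas = """
CREATE TABLE finance.dividends_positive
WITH (
    format = 'TEXTFILE',
    field_delimiter = ',',
    external_location = 's3://my-output-bucket/dividends_positive/'
) AS
SELECT
    ticker,                                 -- keep the other columns as-is
    ABS(dividendsPaid) AS dividendsPaid     -- negatives become positive
FROM finance.dividends_raw
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "finance"},
    ResultConfiguration={"OutputLocation": "s3://my-output-bucket/athena-query-results/"},
)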

Related

DynamoDB data to S3 in Kinesis Firehose output format

Kinesis Data Firehose has a default format for adding files into separate partitions in an S3 bucket, which looks like: s3://bucket/prefix/yyyy/MM/dd/HH/file.extension
I have created event streams to dump data from DynamoDB to S3 using Firehose. There is a transformation Lambda in between which converts the DDB records into TSV (tab-separated) format.
All of this was added to an existing table which already contains a huge amount of data. I need to backfill the existing data from DynamoDB to the S3 bucket while maintaining parity with the existing Firehose output format.
Solution I tried:
Step 1: Export the table to S3 using the DDB export feature. Use a Glue crawler to create a Data Catalog table.
Step 2: Use Athena's CREATE TABLE AS SELECT query to imitate the transformation done by the intermediate Lambda and store that output in an S3 location.
Step 3: However, Athena CTAS applies a default compression that cannot be turned off. So I wrote a Glue job that reads from the previous table and writes to another S3 location. This job also takes care of adding the partitions based on year/month/day/hour, matching the Firehose format, and writes uncompressed tab-separated files to S3.
However, the problem is that Glue creates Hive-style partitions, which look like:
s3://bucket/prefix/year=2021/month=02/day=02/. I need to match the Firehose block-style S3 partitions instead.
I am looking for an approach to achieve this. I couldn't find a way to write block-style partitions using Glue. Another approach I have is to use the AWS CLI s3 mv command to move all this data into separate folders with the correct file names, which is neither clean nor optimised.
Leaving the solution I ended up implementing here in case it helps anyone.
I created a Lambda and added an S3 event trigger on this bucket. The Lambda did the job of moving each file from the Hive-style partitioned S3 folder to the correctly structured block-style S3 folder.
The Lambda used the copy and delete functions from the boto3 S3 client to implement this.
It worked like a charm even though I had more than 10^6 output files split across different partitions.
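Below is a minimal sketch of such a Lambda, assuming the Glue output keys look like prefix/year=YYYY/month=MM/day=DD/hour=HH/file and that the function is wired to the bucket's ObjectCreated events; the regex and names are illustrative, not the exact code used.

import re
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")

# Matches Hive-style keys such as prefix/year=2021/month=02/day=02/hour=05/part-0001.tsv
HIVE_KEY = re.compile(
    r"^(?P<prefix>.+)/year=(?P<y>\d{4})/month=(?P<m>\d{2})/day=(?P<d>\d{2})/hour=(?P<h>\d{2})/(?P<name>[^/]+)$"
)

def handler(event, context):
    # Triggered by S3 ObjectCreated events on the Glue job's output bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])  # event keys are URL-encoded
        match = HIVE_KEY.match(key)
        if not match:
            continue
        # Rebuild the key in the Firehose block style: prefix/yyyy/MM/dd/HH/file
        new_key = "{prefix}/{y}/{m}/{d}/{h}/{name}".format(**match.groupdict())
        # S3 has no rename, so copy to the new key and delete the original.
        s3.copy_object(Bucket=bucket, Key=new_key, CopySource={"Bucket": bucket, "Key": key})
        s3.delete_object(Bucket=bucket, Key=key)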

Update Athena Table from 2 external tables in Athena from s3

I am relatively new to Athena & S3.
I have an S3 bucket which contains 2 folders with CSV files in both. I have created 2 external tables in Athena, one for each folder.
I want to create another, final table in Athena which joins the two and automatically picks up more rows as more files are added to the S3 bucket. Could you please advise the best way to get the output needed?
I have tried "create table from query" in Athena, but the table remains static as I upload more files to S3 and doesn't update.
For this use-case I would suggest creating a view in Athena. You can read more on it here.
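For illustration, assuming the two crawled tables are called mydb.table_folder1 and mydb.table_folder2 and share an id column (all of these names are placeholders), the view could be created once as in the sketch below and will pick up any files added to either folder afterwards.

import boto3

athena = boto3.client("athena")

# A view is just a stored query, so every time it is queried Athena re-reads
# whatever files currently sit under the two tables' folders -- newly uploaded
# CSVs show up automatically. Database, table and column names are placeholders.
create_view = """
CREATE OR REPLACE VIEW mydb.combined AS
SELECT a.*, b.some_extra_column
FROM mydb.table_folder1 a
JOIN mydb.table_folder2 b ON a.id = b.id
"""

athena.start_query_execution(
    QueryString=create_view,
    QueryExecutionContext={"Database": "mydb"},
    ResultConfiguration={"OutputLocation": "s3://my-query-results-bucket/"},
)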

Which file format do I have to use which supports appending?

Currently we use the ORC file format to store the incoming traffic in S3 for fraud detection analysis.
We chose the ORC file format for the following reasons:
compression
and the ability to query the data using Athena
Problem:
ORC files are read-only, and we want to update the file contents constantly, every 20 minutes, which implies we
need to download the ORC files from S3,
read the file,
write to the end of the file,
and finally upload it back to S3.
This was not a problem at first, but the data grows significantly, by about 2 GB every day. It is a highly costly process to download ~10 GB of files, read them, append to them, and upload them again.
Question:
Is there any way to use another file format which also offers appends/inserts and can be queried by Athena?
From this article it says Avro is such a file format, but I am not sure:
can Athena be used to query it?
are there any other issues?
Note: my skill level with big data technologies is beginner.
If your table is not partitioned, you can simply copy (aws s3 cp) your new ORC files to the table's target S3 path and they will be available instantly for querying via Athena.
If your table is partitioned, you can copy the new files to the paths corresponding to your specific partitions. After copying new files to a partition, you need to add or update that partition in Athena's metastore.
For example, if your table is partitioned by date, then you need to run this query to ensure your partition gets added/updated:
alter table dataset.tablename add if not exists
partition (date = YYYYMMDD)
location 's3://your-bucket/path_to_table/date=YYYYMMDD/'
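If the copy and the partition update are scripted rather than done by hand, a rough boto3 sketch might look like the following; the bucket, database, table, and file names mirror the placeholders above, and the quotes around the partition value assume the date column is a string type.

import datetime

import boto3

s3 = boto3.client("s3")
athena = boto3.client("athena")

today = datetime.date.today().strftime("%Y%m%d")
partition_prefix = f"path_to_table/date={today}/"

# 1. Drop the new ORC file into the partition's S3 path (placeholder names).
s3.upload_file("traffic_batch.orc", "your-bucket", partition_prefix + "traffic_batch.orc")

# 2. Register the partition so it is visible to Athena queries.
#    Drop the quotes around the value if the partition column is an integer.
athena.start_query_execution(
    QueryString=(
        "ALTER TABLE dataset.tablename ADD IF NOT EXISTS "
        f"PARTITION (date = '{today}') "
        f"LOCATION 's3://your-bucket/{partition_prefix}'"
    ),
    QueryExecutionContext={"Database": "dataset"},
    ResultConfiguration={"OutputLocation": "s3://your-bucket/athena-query-results/"},
)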

AWS - How to extract CSV reports from a set of JSON files in S3

I have a RDS database with the following structure: CustomerId|Date|FileKey.
FileKey points to a JSON file in S3.
Now I want to create CSV reports with a customer filter, a date-range filter, and a columns definition (ColumnName + JsonPath), like this:
Name => data.person.name
OtherColumn1 => data.exampleList[0]
OtherColumn2 => data.exampleList[2]
I often need to add and remove columns from the columns definition.
I know I can run a SQL SELECT on RDS, get each S3 file (JSON), extract the data, and create my CSV file, but this is not a good solution because I would need to query my RDS instance and make millions of requests to S3 for every report request or every change to the columns definition.
Saving all the data in an RDS table instead of S3 is also not a good solution, because the JSON files contain a lot of data and the columns are not the same across customers.
Any ideas?

How to efficiently append new data to table in AWS Athena?

I have a table in Athena that is created from a CSV file stored in S3, and I am using Lambda to query it. But I have incoming data being processed by the Lambda function and want to append new rows to the existing table in Athena. How can I do this? I ask because I saw in the documentation that Athena prohibits some SQL statements like INSERT INTO and CREATE TABLE AS SELECT.
If you are adding new data, you can save the new data file into the same folder (prefix/key) that the table is reading from. Athena will read all the files in this folder; the format of the new file just needs to be the same as the existing one.
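As a sketch of that idea, assuming the table's LOCATION is s3://my-data-bucket/my_table/ and the Lambda already has the new rows in memory (bucket, prefix, and helper name are all placeholder assumptions), "appending" is simply writing one more CSV object with the same columns into that prefix:

import csv
import io
import uuid

import boto3

s3 = boto3.client("s3")

def append_rows(rows, bucket="my-data-bucket", prefix="my_table/"):
    # Athena treats every object under the table's prefix as part of the table,
    # so a new file with the same column layout is effectively an append.
    # Bucket and prefix are placeholders for wherever your table's LOCATION points.
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    s3.put_object(
        Bucket=bucket,
        Key=f"{prefix}batch-{uuid.uuid4()}.csv",  # unique name so nothing gets overwritten
        Body=buf.getvalue().encode("utf-8"),
    )

If the existing file has a header row that the table DDL skips (skip.header.line.count), the new files need to follow the same convention so the columns line up.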