I have a table in Athena created from a CSV file stored in S3, and I am using Lambda to query it. But I have incoming data being processed by the Lambda function and want to append a new row to the existing table in Athena. How can I do this? I saw in the documentation that Athena prohibits some SQL statements like INSERT INTO and CREATE TABLE AS SELECT.
If you are adding new data, you can save the new data file into the same folder (prefix) that the table reads from. Athena will read all files under this prefix; the new file just needs to be in the same format as the existing ones.
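For example, appending rows can be as simple as uploading another file with the same schema under the table's prefix. A minimal sketch using boto3 (the bucket, prefix, and file names here are hypothetical):

```python
def object_key(table_prefix: str, file_name: str) -> str:
    """Build the destination key so the new file lands under the table's prefix."""
    return f"{table_prefix.strip('/')}/{file_name}"

def append_csv_to_table(local_path: str, bucket: str, table_prefix: str, file_name: str) -> str:
    """Upload a CSV with the same columns/format as the existing files.

    Athena reads every object under the table's LOCATION prefix, so the
    new rows show up in query results without any DDL change.
    """
    import boto3  # imported here so the key helper stays usable without AWS deps
    key = object_key(table_prefix, file_name)
    boto3.client("s3").upload_file(local_path, bucket, key)
    return key
```

Usage would look like `append_csv_to_table("new_rows.csv", "my-bucket", "customers/", "new_rows.csv")`, where `customers/` is whatever prefix the table's DDL points at.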
Related
Scenario:
I upload a CSV file into an S3 bucket and now I would like to read this file as a table using Trino.
Is it possible to just read the table without a CREATE TABLE statement, maybe with a simple SELECT only? Or do I have to CREATE TABLE every time I want to read the CSV file?
I have an S3 bucket with 500 CSV files that are identical except for the number values in each file.
How do I write a query that grabs dividendsPaid, makes it positive for each file, and sends the result back to S3?
Amazon Athena is a query engine that can perform queries on objects stored in Amazon S3. It cannot modify files in an S3 bucket. If you want to modify those input files in-place, then you'll need to find another way to do it.
However, it is possible for Amazon Athena to create a new table with the output files stored in a different location. You could use the existing files as input and then store new files as output.
The basic steps are:
Create a table definition (DDL) for the existing data (I would recommend using an AWS Glue crawler to do this for you)
Use CREATE TABLE AS to select data from the table and write it to a different location in S3. The command can include an SQL SELECT statement to modify the data (changing the negatives).
See: Creating a table from query results (CTAS) - Amazon Athena
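The steps above can be sketched with boto3. The table, column, and bucket names below are assumptions for illustration (only dividendsPaid comes from the question):

```python
# CTAS statement: reads the crawled table, flips dividendsPaid positive
# with ABS(), and writes new output files to a different S3 prefix.
CTAS_QUERY = """
CREATE TABLE dividends_fixed
WITH (format = 'TEXTFILE', external_location = 's3://my-bucket/dividends-fixed/')
AS SELECT ABS(dividendsPaid) AS dividendsPaid
FROM dividends_raw
"""

def run_ctas(database: str, output_location: str) -> str:
    """Submit the CTAS query to Athena and return the query execution id."""
    import boto3  # deferred so the SQL above can be inspected without AWS deps
    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        QueryString=CTAS_QUERY,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )
    return resp["QueryExecutionId"]
```

Because Athena queries every file under the source table's prefix, one CTAS covers all 500 files in a single pass.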
We have an S3 bucket called Customers/.
Inside it we have multiple folders, and subfolders inside those.
And finally we have Parquet files of data.
Now I want to read any Parquet file (not a specific one) and load the data into Oracle.
For now my script works for one S3 path, where it reads one Parquet file, e.g. customer_info.parquet, and loads the data into an Oracle database table called customer.customer_info.
I need help generating a generic script that can read any Parquet file and load the data into the corresponding database table.
For example:
S3 location: s3/Customers/new_customrers/new_customer_info.parquet
Oracle database: Customer
Oracle table: new_customers

S3 location: s3/Customers/old_customrers/old_customer_info.parquet
Oracle database: Customer
Oracle table: old_customers

S3 location: s3/Customers/current_customrers/current_customer_info.parquet
Oracle database: Customer
Oracle table: current_customers
Is there any way to make this copy process generic? The database will be the same; only the Oracle tables will change according to the Parquet file.
My current script is a PySpark script that reads one S3 file's data into a Spark DataFrame and writes that DataFrame to one Oracle table.
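One way to generalize this is to derive the target table from the S3 key itself. A minimal sketch, assuming the Parquet file's parent folder names the Oracle table and that the JDBC settings (URL, schema name, properties) are hypothetical placeholders:

```python
def table_for_key(key: str) -> str:
    """Derive the Oracle table name from an S3 key, assuming the convention
    Customers/<table>/<file>.parquet -> <table>."""
    return key.strip("/").split("/")[-2]

def load_parquet_to_oracle(spark, bucket: str, key: str, jdbc_url: str, props: dict) -> None:
    """Read any Parquet object and append it to the matching Oracle table."""
    df = spark.read.parquet(f"s3a://{bucket}/{key}")
    df.write.jdbc(
        url=jdbc_url,                            # e.g. jdbc:oracle:thin:@host:1521/service
        table=f"CUSTOMER.{table_for_key(key)}",  # schema name CUSTOMER is an assumption
        mode="append",
        properties=props,                        # driver / user / password
    )
```

The same function then handles every path: listing the bucket (e.g. with boto3) and calling `load_parquet_to_oracle` per key replaces the one-path script.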
I have a table in Athena created from S3. I wanted to update the column values using the UPDATE statement. Is UPDATE not supported in Athena?
Is there any other way to update the table?
Thanks
Athena only supports external tables, which are tables created on top of data in S3. Since S3 objects are immutable, there is no concept of UPDATE in Athena. What you can do is create a new table using CTAS, or a view with the operation performed there, or perhaps use Python to read the data from S3, manipulate it, and overwrite it.
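As an illustration of the view approach, an "update" can be expressed as a SELECT that rewrites the column and is saved as a view. The table, columns, and CASE logic below are hypothetical:

```python
# A view that presents the same table with one column rewritten;
# the underlying S3 data is untouched. Submit this with boto3's
# athena start_query_execution, like any other Athena statement.
VIEW_QUERY = """
CREATE OR REPLACE VIEW customers_updated AS
SELECT id,
       CASE WHEN status = 'old' THEN 'legacy' ELSE status END AS status
FROM customers
"""
```

Queries then target `customers_updated` instead of the base table, which gives the effect of an UPDATE without modifying any S3 object.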
I have an RDS database with the following structure: CustomerId | Date | FileKey.
FileKey points to a JSON file in S3.
Now I want to create CSV reports with customer and date-range filters and a column definition (ColumnName + JsonPath), like this:
Name => data.person.name
OtherColumn1 => data.exampleList[0]
OtherColumn2 => data.exampleList[2]
I often need to add and remove columns from the column definition.
I know I can run a SQL SELECT on RDS, fetch each S3 file (JSON), extract the data, and create my CSV file, but this is not a good solution: I would need to query my RDS instance and make millions of S3 requests for every report request or every column-definition change.
Saving all the data in an RDS table instead of S3 is also not a good solution, because the JSON files contain a lot of data and the columns are not the same across customers.
Any idea?
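Whichever storage layout you settle on, evaluating a column definition like `data.exampleList[0]` against a parsed JSON document is cheap. A minimal sketch supporting only dot access and `[n]` indexing, matching the definitions above:

```python
import re

def resolve(doc, path: str):
    """Resolve a dotted path like 'data.exampleList[2]' against parsed JSON.

    Splits the path into name tokens and [n] index tokens, then walks
    the nested dicts/lists one step at a time.
    """
    for part in re.findall(r"[^.\[\]]+|\[\d+\]", path):
        if part.startswith("["):
            doc = doc[int(part[1:-1])]  # list index token like [2]
        else:
            doc = doc[part]             # dict key token like person
    return doc
```

A report row is then just `[resolve(doc, jsonpath) for _, jsonpath in column_definitions]`, so adding or removing a column is a config change rather than a code change.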