How to merge multiple parquet files in Glue - pandas

I have a Glue job that writes Parquet files to S3 every 6 seconds, and S3 has a folder for each hour. At the end of the hour I want to merge all the files in that hour's partition and put the result back in the same location. I don't want to use Athena tables because the job becomes slow. I am trying to do it with a Python Shell job, but so far I have not found a correct solution. Can someone help me with this?
The files are also Snappy-compressed.

Depending on how big your Parquet files are and what the target size is, here's an idea to do this without Glue:
Set up an hourly CloudWatch cron rule that invokes a Lambda function to look at the previous hour's directory.
Open each Parquet part and write them all into a new Parquet file.
Write the resulting Parquet file to the S3 key and remove the parts (a minimal sketch follows the notes below).
Note there are some limitations/considerations with this design:
Your Parquet files need to stay within your Lambda's memory limit. If you are aiming for parts of around 128 MB, that should be achievable.
The schemas of the separate Parquet files need to be identical for you to reliably merge them. If they are not, you need to inspect each file's metadata footer, which contains the schema, and make sure the merged file's metadata covers all the column chunks.
Because the S3 operations are not atomic, there may be a brief window in which the new merged Parquet object has been uploaded but the old parts haven't been removed yet. If you don't need to query the data within that window, this shouldn't be a problem.
If you require Glue specifically, you may be able to just invoke a Glue job from the Lambda instead of doing the merge yourself inside Lambda.
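For illustration, here is a minimal sketch of such a Lambda using boto3 and pyarrow (pyarrow would have to be packaged as a Lambda layer). The bucket, prefix, and key names are placeholders, it assumes all parts share the same schema and fit in memory, and it only handles the first page of the listing:

    import io
    import boto3
    import pyarrow as pa
    import pyarrow.parquet as pq

    s3 = boto3.client("s3")

    def merge_hour(bucket, prefix, merged_key):
        # List the part files written during the hour
        # (first 1000 keys only; use a paginator for more).
        resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
        keys = [o["Key"] for o in resp.get("Contents", []) if o["Key"].endswith(".parquet")]

        # Read every part into an Arrow table (the schemas must match).
        tables = [
            pq.read_table(io.BytesIO(s3.get_object(Bucket=bucket, Key=k)["Body"].read()))
            for k in keys
        ]
        merged = pa.concat_tables(tables)

        # Write one snappy-compressed file back to the same hour prefix.
        buf = io.BytesIO()
        pq.write_table(merged, buf, compression="snappy")
        s3.put_object(Bucket=bucket, Key=merged_key, Body=buf.getvalue())

        # Remove the original parts only after the merged object is uploaded.
        s3.delete_objects(
            Bucket=bucket,
            Delete={"Objects": [{"Key": k} for k in keys if k != merged_key]},
        )

    def lambda_handler(event, context):
        # Placeholder hour partition; derive it from the event or the clock in practice.
        merge_hour("my-bucket", "data/2023/05/01/13/", "data/2023/05/01/13/merged.parquet")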

Related

Unzip files from S3 before putting them into Snowflake

I have data available in an S3 bucket we don't own, with a zipped folder containing files for each date.
We are using Snowflake as our data warehouse. Snowflake accepts gzip'd files, but does not ingest zip'd folders.
Is there a way to directly ingest the files into Snowflake that will be more efficient than copying them all into our own S3 bucket and unzipping them there, then pointing e.g. Snowpipe to that bucket? The data is on the order of 10GB per day, so copying is very doable, but would introduce (potentially) unnecessary latency and cost. We also don't have access to their IAM policies, so can't do something like S3 Sync.
I would be happy to write something myself, or use a product/platform like Meltano or Airbyte, but I can't find a suitable solution.
How about using SnowSQL to load the data into Snowflake, using a Snowflake stage (a table, user, or named stage) to hold the files?
https://docs.snowflake.com/en/user-guide/data-load-local-file-system-create-stage.html
I had a similar use case. I use an event-based trigger that runs a Lambda function every time there is a new zipped file in my S3 folder. The Lambda function opens the zipped file, gzips each individual file inside it, and re-uploads them to a different S3 folder. Here's the full working code: https://betterprogramming.pub/unzip-and-gzip-incoming-s3-files-with-aws-lambda-f7bccf0099c9
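Not the article's exact code, but a rough sketch of the same idea, assuming an S3 put trigger and an in-memory unzip; the output prefix is a placeholder and large archives would need enough Lambda memory:

    import gzip
    import io
    import zipfile
    import boto3

    s3 = boto3.client("s3")
    OUTPUT_PREFIX = "unzipped-gz/"  # assumed destination prefix

    def lambda_handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            # Download the zipped object into memory.
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

            with zipfile.ZipFile(io.BytesIO(body)) as archive:
                for name in archive.namelist():
                    # Gzip each member so Snowflake/Snowpipe can ingest it directly.
                    gz_buf = io.BytesIO()
                    with gzip.GzipFile(fileobj=gz_buf, mode="wb") as gz:
                        gz.write(archive.read(name))
                    s3.put_object(
                        Bucket=bucket,
                        Key=f"{OUTPUT_PREFIX}{name}.gz",
                        Body=gz_buf.getvalue(),
                    )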

AWS Glue check file contents correctness

I have a project in AWS to insert data from some files, which will be in S3, into Redshift. The point is that the ETL has to be scheduled each day to find new files in S3 and then check whether those files are correct. However, this has to be done with custom code, as the files can have different formats depending on their kind, provider, etc.
I see that AWS Glue lets you schedule, crawl, and do the ETL. However, I'm lost as to how one can write their own code for the ETL and parse the files to check their correctness before finally issuing the COPY from S3 to Redshift. Do you know if that can be done, and how?
Another issue is that if the correctness check passes, the system should upload the data from S3 to a web service via some API. But if it fails, the file should instead be left on an FTP/flagged by email. Here again, do you know if that can be done with AWS Glue, and how?
many thanks!
You can write your own Glue/Spark code, upload it to S3, and create a Glue job that refers to that script/library. Anything you want to write in Python can be done in Glue; it's essentially a wrapper around Spark, which you drive from Python (PySpark). A rough sketch is below.
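As a rough illustration (not a definitive recipe), a Glue job script could look like this: plain Spark reads the files, custom Python checks decide whether they are correct, and only then is the data handed to Redshift through a pre-configured Glue connection. The connection name, paths, table, and expected columns are all placeholders:

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.dynamicframe import DynamicFrame
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME", "input_path"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    spark = glue_context.spark_session

    # Read the raw files with plain Spark so you control the parsing.
    df = spark.read.option("header", "true").csv(args["input_path"])

    # Custom correctness checks: anything you can express in Python/Spark.
    expected_cols = {"id", "provider", "amount"}  # hypothetical schema
    if not expected_cols.issubset(set(df.columns)):
        raise ValueError(f"Unexpected schema: {df.columns}")
    if df.filter(df["id"].isNull()).count() > 0:
        raise ValueError("Found rows with a null id, aborting load")

    # Only if the checks pass, hand the data to Redshift via the Glue connection.
    dyf = DynamicFrame.fromDF(df, glue_context, "validated")
    glue_context.write_dynamic_frame.from_jdbc_conf(
        frame=dyf,
        catalog_connection="my_redshift_connection",  # assumed Glue connection
        connection_options={"dbtable": "public.my_table", "database": "mydb"},
        redshift_tmp_dir="s3://my-temp-bucket/glue-tmp/",
    )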

Approach for large data set for reporting

I have 220 million raw files in AWS S3 which I am considering merging into a single file estimated at around 10 terabytes. The merged file would serve as a fact table, but in file format, for audit reporting purposes.
The raw files are source data from an application. If there are any new data changes in the application, the content of the files will change.
I would like to ask whether anybody has come across an end-to-end process for this use case?
s3--> ETL (file merging)--> s3 --> reporting (tableau)
I haven't personally tried it, but this is kind of what Athena is made for: skipping your ETL process and querying directly from the files. Is there a reason you are dumping all of this into a single file instead of keeping it dispersed? Rewriting a 10 TB file over and over again is very expensive and time-consuming... I'd personally at least investigate keeping the files 1:1 with the source files.
Create an S3 trigger that fires when a file is rewritten on S3.
Create a Lambda that creates your "audit ready" report files on S3.
Use AWS Athena to query those report files (see the sketch after this list).
Use the Tableau connector to Athena for your reports.
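For the Athena step, something along these lines (a sketch only; the database, table, and result location are placeholders) could run a report query from Python:

    import time
    import boto3

    athena = boto3.client("athena")

    resp = athena.start_query_execution(
        QueryString="SELECT provider, COUNT(*) AS files FROM audit_db.report_files GROUP BY provider",
        QueryExecutionContext={"Database": "audit_db"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    query_id = resp["QueryExecutionId"]

    # Poll until the query finishes, then read the first page of results.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]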

Getting data from S3 (client) to our S3 (company)

We have a requirement to get .csv files from a bucket which is a client location (they would provide the S3 bucket info and other required information). Every day we need to pull this data into our S3 bucket so we can process it further. Please suggest the best way/technology we can use to achieve this.
I am planning to do it with Python boto (or pandas or PySpark) or Spark; the reason being that once we get this data, it might be processed further.
You can try the S3 cross-account object copy using the S3 copy operation. This is more secure and is the suggested approach. Please go through the link below for more details; it also works for different buckets in the same account. After copying, you can trigger a Lambda function with custom code (Python) to do the processing of the .csv files. An illustrative boto3 sketch follows the link.
How to copy Amazon S3 objects from one AWS account to another by using the S3 COPY operation
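For illustration, the copy itself can be scripted with boto3 along these lines, assuming your role has been granted read access to the client bucket; all bucket names and prefixes are placeholders:

    import boto3

    s3 = boto3.resource("s3")

    def copy_daily_files(source_bucket, source_prefix, dest_bucket):
        for obj in s3.Bucket(source_bucket).objects.filter(Prefix=source_prefix):
            # Server-side copy: the data never leaves S3.
            s3.Object(dest_bucket, obj.key).copy({"Bucket": source_bucket, "Key": obj.key})

    copy_daily_files("client-bucket", "exports/2023-05-01/", "our-bucket")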
If your customer keeps the data in an S3 bucket to which your account has been granted access, then it should be possible to use the .csv files as a direct source of data for a Spark job. Use s3a://theirbucket/nightly/*.csv as the source, and save it to s3a://mybucket/somewhere, ideally in a format other than CSV (Parquet, ORC, ...). This lets you do some basic transformation of the format into one that is easier to work with (a minimal sketch is below).
If you just want the raw CSV files, that S3 copy operation is what you need, as it copies the data within S3 itself (6+ MiB/s if in the same S3 location) without needing any of your own VMs involved.
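A minimal PySpark sketch of the Spark approach, assuming s3a access to both buckets is already configured (the bucket names are placeholders from the answer above):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("client-csv-to-parquet").getOrCreate()

    # Read every nightly CSV straight from the client bucket.
    df = spark.read.option("header", "true").csv("s3a://theirbucket/nightly/*.csv")

    # Light transformation if needed, then land it in our bucket in a friendlier format.
    df.write.mode("overwrite").parquet("s3a://mybucket/somewhere/")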

Simple way to load new files only into Redshift from S3?

The documentation for the Redshift COPY command specifies two ways to choose files to load from S3, you either provide a base path and it loads all the files under that path, or you specify a manifest file with specific files to load.
However in our case, which I imagine is pretty common, the S3 bucket periodically receives new files with more recent data. We'd like to be able to load only the files that haven't already been loaded.
Given that there is a table stl_file_scan that logs all the files that have been loaded from S3, it would be nice to somehow exclude those that have successfully been loaded. This seems like a fairly obvious feature, but I can't find anything in the docs or online about how to do this.
Even the Redshift S3 loading template in AWS Data Pipeline appears to manage this scenario by loading all the data -- new and old -- to a staging table, and then comparing/upserting to the target table. This seems like an insane amount of overhead when we can tell up front from the filenames that a file has already been loaded.
I know we could probably move the files that have already been loaded out of the bucket; however, we can't do that, as this bucket is the final storage place for another process which is not our own.
The only alternative I can think of is to have some other process running that tracks files that have been successfully loaded to Redshift, periodically compares that to the S3 bucket to determine the differences, and then writes the manifest file somewhere before triggering the copy process. But what a pain! We'd need a separate EC2 instance to run the process, which would have its own management and operational overhead.
There must be a better way!
This is how I solved the problem,
S3 -- (Lambda Trigger on newly created Logs) -- Lambda -- Firehose -- Redshift
It works at any scale: with more load there are more calls to Lambda and more data going to Firehose, and everything is taken care of automatically.
If there are issues with the format of a file, you can configure a dead-letter queue; events will be sent there and you can reprocess them once you fix the Lambda. A hedged sketch of the Lambda piece follows.
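A sketch of the Lambda piece of that pipeline, assuming the delivery stream (the name is a placeholder) is configured with Redshift as its destination and the incoming S3 objects are newline-delimited records:

    import boto3

    s3 = boto3.client("s3")
    firehose = boto3.client("firehose")
    STREAM_NAME = "my-redshift-stream"  # assumed Firehose delivery stream

    def lambda_handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            lines = s3.get_object(Bucket=bucket, Key=key)["Body"].read().splitlines()

            # Firehose accepts at most 500 records per PutRecordBatch call.
            for i in range(0, len(lines), 500):
                batch = [{"Data": line + b"\n"} for line in lines[i:i + 500]]
                firehose.put_record_batch(DeliveryStreamName=STREAM_NAME, Records=batch)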
Here I would like to mention some steps covering the process of how to load data into Redshift:
Export the local RDBMS data to flat files (make sure you remove invalid characters and apply escape sequences during export).
Split the files into 10-15 MB each to get optimal performance during upload and the final data load.
Compress the files to *.gz format so you don't end up with a $1000 surprise bill :) .. in my case text files were compressed 10-20 times.
List all the file names in a manifest file so that when you issue the COPY command to Redshift they are treated as one unit of load.
Upload the manifest file to the Amazon S3 bucket.
Upload the local *.gz files to the Amazon S3 bucket.
Issue the Redshift COPY command with the appropriate options.
Schedule file archiving from on-premises and the S3 staging area on AWS.
Capture errors and set up restartability if something fails.
For an easy way to do it, you can follow this link.
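To make the manifest and COPY steps concrete, here is a sketch in Python using the Redshift Data API; the bucket, cluster, role ARN, table, and file names are placeholders, and psycopg2 or any SQL client would work just as well for issuing the COPY:

    import json
    import boto3

    s3 = boto3.client("s3")
    rsd = boto3.client("redshift-data")

    # List every .gz part in a manifest so the COPY treats them as one unit of load.
    entries = [
        {"url": "s3://my-bucket/load/part-001.csv.gz", "mandatory": True},
        {"url": "s3://my-bucket/load/part-002.csv.gz", "mandatory": True},
    ]
    s3.put_object(
        Bucket="my-bucket",
        Key="load/manifest.json",
        Body=json.dumps({"entries": entries}),
    )

    # Issue the COPY referencing the manifest (GZIP because the parts are *.gz).
    copy_sql = """
        COPY public.my_table
        FROM 's3://my-bucket/load/manifest.json'
        IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
        MANIFEST GZIP CSV;
    """
    rsd.execute_statement(
        ClusterIdentifier="my-cluster",
        Database="mydb",
        DbUser="etl_user",
        Sql=copy_sql,
    )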
In general, comparing loaded files to the files existing on S3 is a bad but possible practice. The common "industrial" practice is to use a message queue between the data producer and the data consumer that actually loads the data. Take a look at RabbitMQ vs Amazon SQS, etc.
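As a tiny illustration of that pattern with Amazon SQS (the queue URL and keys are placeholders): the producer enqueues each new file's key and the loader only COPYs what it receives, so no comparison against the bucket is ever needed:

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/files-to-load"  # placeholder

    # Producer side: announce a new file.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="s3://my-bucket/new/file-42.csv.gz")

    # Consumer side: receive keys, load them, then delete the messages.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        key = msg["Body"]
        # ... issue the Redshift COPY for `key` here ...
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])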