paginating an unlimited number of S3 objects with filters - amazon-s3

My S3 bucket contains an unlimited number of objects whose keys are the epoch timestamp at which they were created (e.g. "1503379525").
A Lambda function should concatenate the contents of all files within a specific time range (the last 15 minutes).
My solution is to use:
Pagination to get a list of objects
page_iterator.search("Contents[?Key > epoch-for-last-15-min][]")
I need to stay within Lambda's memory limit and make sure the paginator works correctly in a bucket with an unlimited number of files.
Is this a feasible solution?
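For illustration, here is a minimal boto3 sketch of the idea (the bucket name is a placeholder). Instead of filtering every page with the JMESPath expression, it passes a StartAfter key to list_objects_v2 so that S3 itself skips keys older than the cutoff: because the keys are fixed-width epoch strings, lexicographic order matches chronological order, and the paginator never has to walk the whole bucket.

import time
import boto3

BUCKET = "my-epoch-bucket"  # placeholder bucket name

s3 = boto3.client("s3")
# Epoch string for "15 minutes ago"; keys are assumed to be 10-digit epoch
# values, so string order equals numeric order.
cutoff = str(int(time.time()) - 15 * 60)

paginator = s3.get_paginator("list_objects_v2")
# StartAfter makes S3 begin listing after this (not necessarily existing)
# key, so older objects are never returned or paged through.
page_iterator = paginator.paginate(Bucket=BUCKET, StartAfter=cutoff)

parts = []
for page in page_iterator:
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        parts.append(body)

result = b"".join(parts)  # all contents held in memory: mind the Lambda memory limit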

How to efficiently filter a dataframe from an S3 bucket

I want to pull a specified number of days from an S3 bucket that is partitioned by year/month/day/hour. This bucket has new files added every day and will grow to be rather large. I want to do spark.read.parquet(<path>).filter(<condition>), however when I ran this it took significantly longer (1.5 hr) than specifying the paths explicitly (0.5 hr). I don't understand why it takes longer. Should I be adding a .partitionBy() when reading from the bucket, or is it because of the volume of data in the bucket that has to be filtered?
The problem you are facing is partition discovery. If you point Spark at the path where your Parquet files are, e.g. spark.read.parquet("s3://my_bucket/my_folder"), Spark will trigger a task called
Listing leaf files and directories for <number> paths
This is the partition discovery step. Why does that happen? When you only give Spark the path, it has no way of knowing where the partitions are or how many there are, so it has to list them.
In my case if I run a count like this:
spark.read.parquet("s3://my_bucket/my_folder/").filter('date === "2020-10-10").count()
It will trigger the listing, which takes about 19 seconds for around 1700 folders. Add the 7 seconds for the count itself and the total is 26 seconds.
To remove this overhead you should use a metastore. AWS provides a great solution with AWS Glue, which can be used just like the Hive Metastore in a Hadoop environment.
With Glue you can store the table metadata and all of its partitions. Instead of giving the Parquet path, you point to the table like this:
spark.table("my_db.my_table").filter('date === "2020-10-10").count()
For the same data with the same filter, the file listing step disappears and the whole count took only 9 seconds.
In your case, where you partition by year, month, day and hour, we are talking about 8760 folders per year.
I would recommend you take a look at this link and this link.
They show how you can use Glue as your Hive Metastore, which will help a lot with the speed of partition queries.
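For illustration, here is a rough PySpark sketch of the two reads being compared, assuming the cluster is configured to use the AWS Glue Data Catalog as its Hive metastore and that my_db.my_table is registered there (all names are placeholders).

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("partition-pruning-example")
    .enableHiveSupport()  # assumes Glue is configured as the Hive metastore
    .getOrCreate()
)

# Path-based read: Spark must first list every leaf directory under the
# prefix ("Listing leaf files and directories for N paths") before it can
# prune partitions.
slow_count = (
    spark.read.parquet("s3://my_bucket/my_folder/")
    .filter("date = '2020-10-10'")
    .count()
)

# Catalog-based read: partition locations come from the metastore, so the
# listing step disappears and only the matching partition is scanned.
fast_count = (
    spark.table("my_db.my_table")
    .filter("date = '2020-10-10'")
    .count()
)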

how to limit the size of the files exported from bigquery to gcs?

I used Python code to export data from BigQuery to GCS, and then gsutil to copy it to S3. But after exporting to GCS, I noticed that some files are more than 5 GB, which gsutil cannot deal with. So I want to know how to limit the file size.
Following up from the issue tracker, the correct way to approach this is:
Single URI ['gs://[YOUR_BUCKET]/file-name.json']
Use a single URI if you want BigQuery to export your data to a single
file. The maximum exported data with this method is 1 GB.
Please note that the 1 GB maximum applies to the amount of exported data, not to the size of the exported file.
Single wildcard URI ['gs://[YOUR_BUCKET]/file-name-*.json']
Use a single wildcard URI if you think your exported data set will be
larger than 1 GB. BigQuery shards your data into multiple files based
on the provided pattern. Exported file sizes may vary, and files won't
be equal in size.
So again, you need to use this method when your data size is above 1 GB. The resulting file sizes may vary and individual files may go beyond 1 GB, which is how the 5 GB and 160 MB pair you mentioned can happen with this method.
Multiple wildcard URIs
['gs://my-bucket/file-name-1-*.json',
'gs://my-bucket/file-name-2-*.json',
'gs://my-bucket/file-name-3-*.json']
Use multiple wildcard URIs if you want to partition the export output.
You would use this option if you're running a parallel processing job
with a service like Hadoop on Google Cloud Platform. Determine how
many workers are available to process the job, and create one URI per
worker. BigQuery treats each URI location as a partition, and uses
parallel processing to shard your data into multiple files in each
location.
The same applies here as well: exported file sizes may vary and can exceed 1 GB.
Try using a single wildcard URI.
See the documentation for Exporting data into one or more files:
Use a single wildcard URI if you think your exported data will be
larger than BigQuery's 1 GB per file maximum value. BigQuery shards
your data into multiple files based on the provided pattern. If you
use a wildcard in a URI component other than the file name, be sure
the path component does not exist before exporting your data.
Property definition:
['gs://[YOUR_BUCKET]/file-name-*.json']
Creates:
gs://my-bucket/file-name-000000000000.json
gs://my-bucket/file-name-000000000001.json
gs://my-bucket/file-name-000000000002.json ...
Property definition:
['gs://[YOUR_BUCKET]/path-component-*/file-name.json']
Creates:
gs://my-bucket/path-component-000000000000/file-name.json
gs://my-bucket/path-component-000000000001/file-name.json
gs://my-bucket/path-component-000000000002/file-name.json
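For illustration, a minimal sketch of such an export with the google-cloud-bigquery Python client, using a single wildcard URI so BigQuery shards the output into numbered files (project, dataset, table and bucket names are placeholders).

from google.cloud import bigquery

client = bigquery.Client()
table_ref = "my-project.my_dataset.my_table"  # placeholder table

job_config = bigquery.ExtractJobConfig(
    destination_format=bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON
)

# The wildcard tells BigQuery to shard the export into multiple files
# (file-name-000000000000.json, ...), which is required once the exported
# data exceeds the 1 GB single-file limit.
extract_job = client.extract_table(
    table_ref,
    "gs://my-bucket/file-name-*.json",
    job_config=job_config,
)
extract_job.result()  # block until the export job finishes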

How can we decide the total no. of buckets for a hive table

I am a bit new to Hadoop. As far as I know, buckets are a fixed number of partitions in a Hive table, and Hive uses a number of reducers equal to the total number of buckets defined while creating the table. So can anyone tell me how to calculate the total number of buckets in a Hive table? Is there any formula for calculating the total number of buckets?
Let's take a scenario where the table size is 2300 MB and the
HDFS block size is 128 MB.
Now, divide 2300/128 = 17.96.
Now, remember that the number of buckets should be a power of 2,
so we need to find n such that 2^n > 17.96, which gives
n = 5.
So I am going to use 2^5 = 32 buckets.
Hope it will help some of you.
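The same arithmetic as a tiny Python sketch (128 MB is just the block size from the example above):

import math

def bucket_count(table_size_mb: float, block_size_mb: float = 128) -> int:
    # Smallest power of two that covers table_size / block_size.
    ratio = table_size_mb / block_size_mb   # e.g. 2300 / 128 = 17.96...
    n = math.ceil(math.log2(ratio))         # smallest n with 2**n >= ratio
    return 2 ** n

print(bucket_count(2300))  # -> 32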
From the documentation
link
In general, the bucket number is determined by the expression
hash_function(bucketing_column) mod num_buckets. (There's a
'0x7FFFFFFF in there too, but that's not that important). The
hash_function depends on the type of the bucketing column. For an int,
it's easy, hash_int(i) == i. For example, if user_id were an int, and
there were 10 buckets, we would expect all user_id's that end in 0 to
be in bucket 1, all user_id's that end in a 1 to be in bucket 2, etc.
For other datatypes, it's a little tricky. In particular, the hash of
a BIGINT is not the same as the BIGINT. And the hash of a string or a
complex datatype will be some number that's derived from the value,
but not anything humanly-recognizable. For example, if user_id were a
STRING, then the user_id's in bucket 1 would probably not end in 0. In
general, distributing rows based on the hash will give you an even
distribution in the buckets.
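To make the quoted rule concrete, here is a small Python illustration of the hash_function(bucketing_column) mod num_buckets formula for an int column (an approximation for illustration, not Hive's actual Java code).

def bucket_for_int(user_id: int, num_buckets: int = 10) -> int:
    # For an int column, Hive's hash is the value itself, so the bucket
    # assignment reduces to (value & 0x7FFFFFFF) mod num_buckets.
    return (user_id & 0x7FFFFFFF) % num_buckets

# With 10 buckets, ids ending in 0 land in the first bucket, ids ending
# in 1 land in the second, and so on:
print([bucket_for_int(uid) for uid in (10, 21, 32, 43)])  # -> [0, 1, 2, 3]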
If you want to know how many buckets you should choose in your CLUSTER BY clause, I believe it is good to choose a number that results in buckets that are at or just below your HDFS block size.
This should help avoid having HDFS fill up with files that are mostly empty.
Also choose a number that is a power of two.
You can check your HDFS block size with:
hdfs getconf -confKey dfs.blocksize
The optimal bucket number is (B * HashTableSize of Table) / Total Memory of Node, where B = 1.01.

when unloading a table from amazon redshift to s3, how do I make it generate only one file

When I unload a table from Amazon Redshift to S3, it always splits the table into two parts no matter how small the table is. I have read the Redshift documentation on unloading, and it gives no answer other than that it sometimes splits the table (I've never seen it not do that). I have two questions:
Has anybody ever seen a case where only one file is created?
Is there a way to force redshift to unload into a single file?
Amazon recently added support for unloading to a single file by using PARALLEL OFF in the UNLOAD statement. Note that you can still end up with more than one file if the output is bigger than 6.2 GB.
By default, each slice creates one file (explanation below). There is a known workaround - adding a LIMIT to the outermost query will force the leader node to process whole response - thus it will create only one file.
SELECT * FROM (YOUR_QUERY) LIMIT 2147483647;
This only works as long as your inner query returns no more than 2,147,483,647 (2^31 - 1) records, the value used in the LIMIT above.
How are files created? http://docs.aws.amazon.com/redshift/latest/dg/t_Unloading_tables.html
Amazon Redshift splits the results of a select statement across a set of files, one or more files per node slice, to simplify parallel reloading of the data.
So now we know that at least one file per slice is created. But what is a slice? http://docs.aws.amazon.com/redshift/latest/dg/t_Distributing_data.html
The number of slices is equal to the number of processor cores on the node. For example, each XL compute node has two slices, and each 8XL compute node has 16 slices.
It seems that the minimum number of slices is 2, and it grows larger when more nodes or more powerful nodes are added.
As of May 6, 2014, UNLOAD queries support a new PARALLEL option. Passing PARALLEL OFF will output a single file if your data is less than 6.2 GB (larger results are split into 6.2 GB chunks).
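For illustration, a minimal sketch of issuing such an UNLOAD from Python with psycopg2; the connection details, bucket, and IAM role ARN are placeholders.

import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="mydb",
    user="myuser",
    password="...",
)

unload_sql = """
    UNLOAD ('SELECT * FROM my_table')
    TO 's3://my-bucket/my_table_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-unload-role'
    PARALLEL OFF
    ALLOW_OVERWRITE;
"""

with conn, conn.cursor() as cur:
    # PARALLEL OFF makes the leader node write the result serially, producing
    # a single file as long as the output stays under the 6.2 GB chunk size.
    cur.execute(unload_sql)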

Maximum number of CommonPrefixes and MaxKeys in S3 list objects

So the question is simple: what is the largest number of CommonPrefixes returned when you list S3 objects, and what is the largest MaxKeys? The default is 1000.
Amazon s3 api / bucket get
To answer my own question: the maximum number of CommonPrefixes and MaxKeys is 1000.
Caution: 1000 TOGETHER.
This means that you can have 0 Keys returned and a maximum of 1000 CommonPrefixes, or
990 Keys returned and a maximum of 10 CommonPrefixes.
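A small boto3 sketch that shows the shared budget in practice (bucket name and prefix are placeholders).

import boto3

s3 = boto3.client("s3")

resp = s3.list_objects_v2(
    Bucket="my-bucket",   # placeholder
    Prefix="logs/",
    Delimiter="/",
    MaxKeys=1000,         # the default and the per-response maximum
)

keys = resp.get("Contents", [])
prefixes = resp.get("CommonPrefixes", [])

# Keys and common prefixes share the same 1000-entry budget per page; if the
# listing was cut off, IsTruncated is true and NextContinuationToken can be
# passed back in to fetch the next page.
print(len(keys) + len(prefixes), resp["IsTruncated"])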