S3 partition (file size) for efficient Athena query - amazon-s3

I have a pipeline that loads daily records into S3. I then use an AWS Glue Crawler to create partitions to facilitate AWS Athena queries. However, one partition holds much more data than the others.
S3 folders/files are displayed as follows:
s3.ObjectSummary(bucket_name='bucket', key='database/table/2019/00/00/2019-00-00.parquet.gzip') 7.8 MB
s3.ObjectSummary(bucket_name='bucket', key='database/table/2019/01/11/2019-01-11.parquet.gzip') 29.8 KB
s3.ObjectSummary(bucket_name='bucket', key='database/table/2019/01/12/2019-01-12.parquet.gzip') 28.5 KB
s3.ObjectSummary(bucket_name='bucket', key='database/table/2019/01/13/2019-01-13.parquet.gzip') 29.0 KB
s3.ObjectSummary(bucket_name='bucket', key='database/table/2019/01/14/2019-01-14.parquet.gzip') 43.3 KB
s3.ObjectSummary(bucket_name='bucket', key='database/table/2019/01/15/2019-01-15.parquet.gzip') 139.9 KB
with the file size displayed at the end of each line. Note that 2019-00-00.parquet.gzip contains all records before 2019-01-11, which is why it is so large. I have read this and it says that "If your data is heavily skewed to one partition value, and most queries use that value, then the overhead may wipe out the initial benefit."
So I wonder whether I should split 2019-00-00.parquet.gzip into smaller parquet files with different partitions. For example,
key='database/table/2019/00/00/2019-00-01.parquet.gzip',
key='database/table/2019/00/00/2019-00-02.parquet.gzip',
key='database/table/2019/00/00/2019-00-03.parquet.gzip', ......
However, I suppose this partitioning is not so useful, as it does not reflect when the old records were actually stored. I am open to any workarounds. Thank you.

If the full size of your data is less than a couple of gigabytes in total, you don't need to partition your table at all. Partitioning small datasets hurts performance much more than it helps. Keep all the files in the same directory; deep directory structures in unpartitioned tables also hurt performance.
For small datasets you'll be better off without partitioning as long as there aren't too many files (try to keep it below a hundred). If for some reason you must have lots of small files you might get some benefit from partitioning, but benchmark it in that case.
When the size of the data is small, like in your case, the overhead of finding the files on S3, opening, and reading them will be higher than actually processing them.
If your data grows to hundreds of megabytes you can start thinking about partitioning, and aim for a partitioning scheme where partitions are around a hundred megabytes to a gigabyte in size. If there is a time component to your data, which there seems to be in your case, time is the best thing to partition on. Start by looking at using year as partition key, then month, and so on. Exactly how to partition your data depends on the query patterns, of course.
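If you do reach that point, here is a rough sketch of what a year/month-partitioned table could look like in Athena (the table name, columns, and location are placeholders, not taken from the question):

CREATE EXTERNAL TABLE my_table (
  id string,
  value double
)
PARTITIONED BY (year string, month string)
STORED AS PARQUET
LOCATION 's3://bucket/database/table/';

-- MSCK REPAIR TABLE only discovers Hive-style paths such as .../year=2019/month=01/;
-- otherwise add partitions explicitly with ALTER TABLE ... ADD PARTITION
MSCK REPAIR TABLE my_table;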

Related

BigQuery loading Parquet, 30x space in BQ

A BigQuery noob here.
I have a pretty simple but large table coming from ClickHouse and stored in a Parquet file to be loaded into BQ.
Size: 50 GB in Parquet, about 10B rows
Schema:
key: STRING (it was a UUID)
type: STRING (cardinality of 4, e.g. CategoryA, CategoryB, CategoryC)
value: FLOAT
Size in BigQuery: ~1.5TB
This is about a 30x increase.
Running a SELECT 1 FROM myTable WHERE type=CategoryA shows an expected billing of 500GB, which seems a rather large number given such a low cardinality.
It feels there are two paths:
making the query more efficient (how?)
or, even better, making BQ understand the data better and avoid the 30x size explosion.
Clustering and partitioning could come in handy for specific kinds of selection; however, it seems that the 30x problem still remains, and the moment you start running the wrong query it will just explode in cost.
Any idea?
Parquet is a compressed format, so the data is decompressed when it is loaded.
1.5 TB is not huge in the BQ world.
Neither is the 500 GB. The columns you touch in the WHERE clause are scanned as well.
What you need to do is reframe the problem into smaller data sets.
Leverage partitioning and clustering as well.
Never use SELECT *.
Use materialized views for specific use cases, and turn on BI Engine for optimized queries (see a guide here).
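As a rough sketch of the clustering suggestion (the dataset name is a placeholder; only the table and column names come from the question), rewriting the table clustered on the low-cardinality type column lets BigQuery prune blocks when you filter on it:

CREATE TABLE mydataset.myTable_clustered
CLUSTER BY type
AS SELECT key, type, value FROM mydataset.myTable;

-- the dry-run estimate may still show the full column size, but the bytes
-- actually billed drop to roughly the blocks containing type = 'CategoryA'
SELECT COUNT(*) FROM mydataset.myTable_clustered WHERE type = 'CategoryA';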

How to check Redshift COPY command performance from AWS S3?

I'm working on an application wherein I'll be loading data into Redshift.
I want to upload the files to S3 and use the COPY command to load the data into multiple tables.
For every such iteration, I need to load the data into around 20 tables.
I'm currently creating 20 CSV files per iteration, one for each table, and loading them into the 20 tables. For the next iteration, 20 new CSV files are created and loaded into Redshift.
With the current system, each CSV file contains at most 1,000 rows, so at most 20,000 rows per iteration across the 20 tables.
I wanted to improve the performance even more. I've gone through https://docs.aws.amazon.com/redshift/latest/dg/t_Loading-data-from-S3.html
At this point, I'm not sure how long it's going to take for one file to load into one Redshift table. Is it really worth splitting every file into multiple files and loading them in parallel?
Is there any source or calculator that gives approximate performance metrics for loading data into Redshift tables based on the number of columns and rows, so that I can decide whether to go ahead with splitting the files even before moving to Redshift?
You should also read through the recommendations in the Load Data - Best Practices guide: https://docs.aws.amazon.com/redshift/latest/dg/c_loading-data-best-practices.html
Regarding the number of files and loading data in parallel, the recommendations are:
Loading data from a single file forces Redshift to perform a serialized load, which is much slower than a parallel load.
Load data files should be split so that the files are about equal size, between 1 MB and 1 GB after compression. For optimum parallelism, the ideal size is between 1 MB and 125 MB after compression.
The number of files should be a multiple of the number of slices in your cluster.
That last point is significant for achieving maximum throughput - if your cluster has 8 slices then you want n*8 files, e.g. 16, 32, 64 ... this is so all slices are doing maximum work in parallel.
That said, 20,000 rows is such a small amount of data in Redshift terms that I'm not sure any further optimisation would make a significant difference to the speed of your process as it stands.
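For reference, a single COPY pointed at a common prefix loads all the matching files in parallel; here is a minimal sketch with placeholder schema, bucket, prefix, and role names:

COPY my_schema.my_table
FROM 's3://my-bucket/iteration-0001/table-a/'  -- prefix containing the split, gzipped files
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-load-role'
FORMAT AS CSV
GZIP;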

Is partitioning helpful in Amazon Athena if query doesn't filter based on partition?

I have a large amount of data, but there is no particular column I would like to filter based on (that is, my 'where clause' can be any column). In this scenario, does partitioning provide any benefit (maybe helps with read-parallelism?) when the queries end up scanning all the data?
If there is no column that all, or most, queries would filter on, then partitions will only hurt performance. Instead, aim for files around 100 MB, as few as possible, Parquet if possible, and put all files directly under the table's LOCATION.
The reason why partitions would hurt performance is that when Athena starts executing your query it will list all files, and the way it does it is as if S3 was a file system. It starts by listing the table's LOCATION, and if it finds anything that looks like a directory it will list it separately, and so on, recursively. If you have a deep directory structure this can end up taking a lot of time. You want to help Athena by having all your files in a flat structure, but also fewer than 1000 of them, because that's the page size for S3's list operation. With more than 1000 files you want to have directories so that Athena can parallelize the listing (but as few as possible still, because there's a limit to how many listings it will do in parallel).
You want to keep file sizes to around 100 MB because that's a good size that trades off how long it takes to process a file against the overhead of getting it from S3. The exact recommendation is 128 MB.
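If the table currently consists of many small files, one way to rewrite it into fewer, larger Parquet files is a CTAS query (names and location are placeholders; bucket_count is just one knob for controlling how many output files you get):

CREATE TABLE my_table_compacted
WITH (
  format = 'PARQUET',
  external_location = 's3://bucket/database/table_compacted/',
  bucketed_by = ARRAY['id'],
  bucket_count = 10  -- aim for roughly total size / 128 MB
) AS
SELECT * FROM my_table;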

Apache pig - Best Hive file formats

Could someone explain which Hive file formats will be efficient to use in a Pig script through HCatalog?
I would like to understand which Hive file formats will be efficient, since we currently have a Hive table partitioned by date, and the underlying files are sequence files.
Reading 80 days of data creates around 70,000 mappers, which is huge. I tried changing the map split size to 2 GB, but it did not reduce the number much.
So, instead of sequence files, I am looking for other options that will reduce the number of mappers. The size of the data per day is 9 GB.
Are there any suggestions or some inspiration?
Thank you.
As far as I know, ORC is the most suitable file format for Hive: it has a high compression ratio, works efficiently on large amounts of data, and is also faster to read. ORC is stored as columns and compressed, which leads to smaller disk reads. The columnar format is also ideal for vectorization optimizations in Hive.
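A sketch of what that could look like (table and column names are placeholders), keeping the existing date partitioning but storing the data as ORC:

CREATE TABLE my_table_orc (
  id string,
  value double
)
PARTITIONED BY (dt string)
STORED AS ORC
TBLPROPERTIES ('orc.compress' = 'ZLIB');

-- repopulate from the existing sequence-file table using dynamic partitions
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE my_table_orc PARTITION (dt)
SELECT id, value, dt FROM my_table_seq;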

Efficiently storing 7,300,000,000 rows

How would you tackle the following storage and retrieval problem?
Roughly 2,000,000 rows will be added each day (365 days/year) with the following information per row:
id (unique row identifier)
entity_id (takes on values between 1 and 2,000,000 inclusive)
date_id (incremented by one each day - will take on values between 1 and 3,650 (ten years: 10 * 365))
value_1 (takes on values between 1 and 1,000,000 inclusive)
value_2 (takes on values between 1 and 1,000,000 inclusive)
entity_id combined with date_id is unique. Hence, at most one row per entity and date can be added to the table. The database must be able to hold 10 years' worth of daily data (7,300,000,000 rows = 3,650 * 2,000,000).
What is described above is the write patterns. The read pattern is simple: all queries will be made on a specific entity_id. I.e. retrieve all rows describing entity_id = 12345.
Transactional support is not needed, but the storage solution must be open source. Ideally I'd like to use MySQL, but I'm open to suggestions.
Now - how would you tackle the described problem?
Update: I was asked to elaborate regarding the read and write patterns. Writes to the table will be done in one batch per day where the new 2M entries will be added in one go. Reads will be done continuously with one read every second.
"Now - how would you tackle the described problem?"
With simple flat files.
Here's why
"all queries will be made on a
specific entity_id. I.e. retrieve all
rows describing entity_id = 12345."
You have 2,000,000 entities. Partition based on entity number:
level1= entity/10000
level2= (entity/100)%100
level3= entity%100
Each file of data is then level1/level2/level3/batch_of_data.
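For example (an arbitrary entity number, just to illustrate): entity 1234567 gives level1 = 123, level2 = 45, level3 = 67, so its data lives under 123/45/67/batch_of_data.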
You can then read all of the files in a given part of the directory to return samples for processing.
If someone wants a relational database, then load files for a given entity_id into a database for their use.
Edit on day numbers.
The date_id/entity_id uniqueness rule is not something that has to be handled. It's (a) trivially imposed on the file names and (b) irrelevant for querying.
The date_id "rollover" doesn't mean anything -- there's no query, so there's no need to rename anything. The date_id should simply grow without bound from the epoch date. If you want to purge old data, then delete the old files.
Since no query relies on date_id, nothing ever needs to be done with it. It can be the file name for all that it matters.
To include the date_id in the result set, write it in the file with the other four attributes that are in each row of the file.
Edit on open/close
For writing, you have to leave the file(s) open. You do periodic flushes (or close/reopen) to assure that stuff really is going to disk.
You have two choices for the architecture of your writer.
Have a single "writer" process that consolidates the data from the various source(s). This is helpful if queries are relatively frequent. You pay for merging the data at write time.
Have several files open concurrently for writing. When querying, merge these files into a single result. This is helpful if queries are relatively rare. You pay for merging the data at query time.
Use partitioning. With your read pattern you'd want to partition by entity_id hash.
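A minimal sketch of that (column types are my assumptions based on the value ranges in the question; MySQL requires the partitioning column to be part of every unique key, hence the composite primary key and no surrogate id):

CREATE TABLE samples (
  entity_id INT UNSIGNED NOT NULL,
  date_id   SMALLINT UNSIGNED NOT NULL,
  value_1   INT UNSIGNED NOT NULL,
  value_2   INT UNSIGNED NOT NULL,
  PRIMARY KEY (entity_id, date_id)
) ENGINE=InnoDB
PARTITION BY HASH (entity_id)
PARTITIONS 64;  -- an arbitrary partition count; benchmark for your hardware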
You might want to look at these questions:
Large primary key: 1+ billion rows MySQL + InnoDB?
Large MySQL tables
Personally, I'd also think about calculating your row width to give you an idea of how big your table will be (as per the partitioning note in the first link).
HTH.,
S
Your application appears to have the same characteristics as mine. I wrote a MySQL custom storage engine to efficiently solve the problem. It is described here
Imagine your data is laid out on disk as an array of 2M fixed length entries (one per entity) each containing 3650 rows (one per day) of 20 bytes (the row for one entity per day).
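(With 1-based ids, the byte offset of any row in that layout is simply ((entity_id - 1) * 3650 + (date_id - 1)) * 20, so a row can be located without any index.)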
Your read pattern reads one entity. It is contiguous on disk, so it takes 1 seek (about 8 milliseconds) and reads 3650 x 20 bytes = about 80K at maybe 100 MB/sec ... so it is done in a fraction of a second, easily meeting your 1-query-per-second read pattern.
The update has to write 20 bytes in 2M different places on disk. In the simplest case this would take 2M seeks, each of which takes about 8 milliseconds, so it would take 2M * 8 ms = 4.5 hours. If you spread the data across 4 "raid0" disks it could take 1.125 hours.
However, the places are only 80K apart, which means there are 200 such places within a 16 MB block (a typical disk cache size), so it could operate at anything up to 200 times faster (about 1 minute). Reality is somewhere between the two.
My storage engine operates on that kind of philosophy, although it is a little more general purpose than a fixed length array.
You could code exactly what I have described. Putting the code into a MySQL pluggable storage engine means that you can use MySQL to query the data with various report generators etc.
By the way, you could eliminate the date and entity id from the stored row (because they are the array indexes), and maybe the unique id as well if you don't really need it, since (entity id, date) is unique, and store the 2 values as 3-byte ints. Then your stored row is 6 bytes, you have about 700 update locations per 16 MB, and therefore faster inserts and a smaller file.
Edit Compare to Flat Files
I notice that the comments generally favor flat files. Don't forget that directories are just indexes implemented by the file system, and they are generally optimized for relatively small numbers of relatively large items. Access to files is generally optimized so that it expects a relatively small number of files to be open, with a relatively high overhead for open and close, and for each file that is open. All of those "relatively" are relative to the typical use of a database.
Using file system names as an index for an entity id, which I take to be a non-sparse integer from 1 to 2 million, is counter-intuitive. In programming you would use an array, not a hash table, for example, and you are inevitably going to incur a great deal of overhead for an expensive access path that could simply be an array indexing operation.
Therefore if you use flat files, why not use just one flat file and index it?
Edit on performance
The performance of this application is going to be dominated by disk seek times. The calculations I did above determine the best you can do (although you can make INSERT quicker by slowing down SELECT - you can't make them both better). It doesn't matter whether you use a database, flat files, or one flat file, except that you can add seeks that you don't really need and slow it down further. For example, indexing (whether it's the file system index or a database index) causes extra I/Os compared to "an array look up", and these will slow you down.
Edit on benchmark measurements
I have a table that looks very much like yours (or almost exactly like one of your partitions). It was 64K entities not 2M (1/32 of yours), and 2788 'days'. The table was created in the same INSERT order that yours will be, and has the same index (entity_id,day). A SELECT on one entity takes 20.3 seconds to inspect the 2788 days, which is about 130 seeks per second as expected (on 8 millisec average seek time disks). The SELECT time is going to be proportional to the number of days, and not much dependent on the number of entities. (It will be faster on disks with faster seek times. I'm using a pair of SATA2s in RAID0 but that isn't making much difference).
If you re-order the table into entity order
ALTER TABLE x ORDER BY (ENTITY,DAY)
Then the same SELECT takes 198 millisecs (because it is reading the ordered entity data in a single disk access).
However the ALTER TABLE operation took 13.98 DAYS to complete (for 182M rows).
There are a few other things the measurements tell you:
1. Your index file is going to be as big as your data file. It is 3 GB for this sample table. That means (on my system) the whole index runs at disk speeds, not memory speeds.
2. Your INSERT rate will decline logarithmically. The INSERT into the data file is linear, but the insert of the key into the index is logarithmic. At 180M records I was getting 153 INSERTs per second, which is also very close to the seek rate. It shows that MySQL is updating a leaf index block for almost every INSERT (as you would expect, because it is indexed on entity but inserted in day order). So you are looking at 2M/153 secs = 3.6 hrs to do your daily insert of 2M rows. (Divided by whatever effect you can get by partitioning across systems or disks.)
I had a similar problem (although at a much bigger scale - about your yearly usage every day).
Using one big table got me screeching to a halt - you can pull a few months' worth, but I guess you'll eventually partition it.
Don't forget to index the table, or else you'll be messing with a tiny trickle of data every query; oh, and if you want to do mass queries, use flat files.
Your description of the read patterns is not sufficient. You'll need to describe what amounts of data will be retrieved, how often and how much deviation there will be in the queries.
This will allow you to consider doing compression on some of the columns.
Also consider archiving and partitioning.
If you want to handle huge data with millions of rows, it can be treated much like a time-series database, which logs the time and saves the data. Some ways to store such data are InfluxDB and MongoDB.