I'm currently storing ~3 million image files in a single directory on my server, which is causing serious performance issues. I'd like to move them to Amazon S3, and I'm wondering whether I'd need to use a hierarchical folder structure or whether I can store them all in a single folder on S3.
I get a large percentage of my traffic from Google Image Search and I don't want to hurt my SEO by changing the image paths, so a single folder on S3 would be ideal if there aren't any performance issues. I imagine LIST operations would be slow, but I'm okay with that.
S3 has no limit on the number of items stored in a bucket. In fact, using a 'directory' separator in key names is completely optional.
There is a practical use for a separator in your keys: as you correctly guessed, listing the keys is otherwise more tedious, because you'll have to page through many list results.
However, as the S3 documentation points out, you can use any character as a separator.
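For illustration, here is a minimal boto3 sketch of how a separator lets you list only the keys under a given prefix instead of paging through the entire bucket. The bucket name, prefix, and '/' delimiter are assumptions for the example.

```python
import boto3

# List only the keys under one "folder" by treating '/' as the delimiter.
# Bucket name and prefix are hypothetical.
s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="my-image-bucket",
                               Prefix="images/2013/",
                               Delimiter="/"):
    for obj in page.get("Contents", []):
        print(obj["Key"])
```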
How do Athena GET requests on S3 work? I had the impression that one S3 GET request = getting one single file from a bucket. But that doesn't seem to be the case since a single query that uses 4 files is costing me around 400 GET requests.
What's happening exactly?
If you run queries against files that are splittable and large enough, Athena will spin up workers that each read part of a file. This parallelization improves performance. Parquet files, for example, are splittable.
A 100x amplification sounds very high though. I don't know what size Athena aims for when it comes to splits, and I don't know the sizes for your files. There could also be other explanations for the additional GET operations, both inside of Athena and from other sources – how sure are you that these requests are from Athena?
One way you could investigate further is to turn on object-level logging in CloudTrail for the bucket. You should be able to see all the request parameters, such as which byte ranges are read. If you assume a role, pass a unique session name, and make only a single query with the credentials you get, you should be able to isolate all of the S3 operations Athena made for that query.
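As a rough sketch of that idea, the snippet below assumes a role with a unique session name and runs a single query with the resulting credentials, so the corresponding data events can be filtered out in CloudTrail. The role ARN, database, table, and output location are hypothetical placeholders.

```python
import uuid
import boto3

# Assume a role with a unique session name so this query's S3 activity
# can be isolated in CloudTrail data events.
session_name = f"athena-debug-{uuid.uuid4()}"
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/athena-debug",  # hypothetical
    RoleSessionName=session_name,
)["Credentials"]

athena = boto3.client(
    "athena",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM my_table",          # hypothetical
    QueryExecutionContext={"Database": "my_database"},     # hypothetical
    ResultConfiguration={"OutputLocation": "s3://my-query-results/"},
)

# Filter the bucket's CloudTrail data events by this session name to see
# every GET (including byte ranges) made for the query.
print("Session name to search for in CloudTrail:", session_name)
```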
My company has millions of files in an S3 bucket, and every so often I have to search for files whose keys/paths contain some text. This is an extremely slow process because I have to iterate through all files.
I can't use prefix because the text of interest is not always at the beginning. I see other posts (here and here) that say this is a known limitation in S3's API. These posts are from over 3 years ago, so my first question is: does this limitation still exist?
Assuming the answer is yes, my next question is: given that I anticipate arbitrary regex-like searches over millions of S3 files, are there established best practices for workarounds? I've seen some people say that you can store the key names in a relational database, Elasticsearch, or a flat file. Are any of these approaches more commonplace than others?
Also, out of curiosity, why doesn't S3 support such a basic use case, given that it is such an established core product of the overall AWS platform? I've noticed that GCS on Google Cloud has a similar limitation. Is it just really hard to do searches on key-name strings well at scale?
S3 is an object store, conceptually similar to a file system. I'd never try to build a database-like environment based on file names in a file system, nor would I in S3.
Nevertheless, if this is what you have, then I would start by running code to get all of the current file names into a database of some sort. DynamoDB cannot query by regular expression, but PostgreSQL, MySQL, Aurora, and Elasticsearch all can. So start by listing every file and putting the file name and S3 location into a database-like structure. Then create a Lambda that is notified of any changes (see this link for more info) and that does the appropriate thing with your backing store when a file is added or deleted.
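As a rough illustration of that Lambda, here is a minimal sketch that reacts to S3 object-created and object-removed notifications and hands the key off to your backing store. The add_key/remove_key helpers are hypothetical placeholders for whatever writes to PostgreSQL, Elasticsearch, etc.

```python
import urllib.parse

def add_key(bucket, key):
    pass  # placeholder: INSERT the key into your index

def remove_key(bucket, key):
    pass  # placeholder: DELETE the key from your index

def handler(event, context):
    # S3 notifications deliver one or more records per invocation.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Keys in S3 events are URL-encoded.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        if record["eventName"].startswith("ObjectCreated"):
            add_key(bucket, key)
        elif record["eventName"].startswith("ObjectRemoved"):
            remove_key(bucket, key)
```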
Depending on your needs, Elasticsearch is very flexible and possibly better suited to this kind of search, but a traditional relational database can be made to work too.
Lastly, you'll need an interface to the backing store to run queries. That will likely require some sort of server; it could be as simple as API Gateway in front of a Lambda, or something far more complex.
You might consider using Amazon S3 Inventory, which can provide a daily or weekly CSV file containing a list of all objects in the bucket.
You could then load this file into a database, or even write a script to parse it. Or possibly even just play with it in Excel.
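If you go the script route, a minimal sketch could look like the following, assuming a CSV-formatted inventory report downloaded locally as inventory.csv.gz with bucket and key as the first two columns (the exact column order depends on the fields you selected for the inventory). The search pattern is just an example.

```python
import csv
import gzip
import re

# Search an S3 Inventory report for keys matching a pattern.
pattern = re.compile(r"thumbnails/.*\.png$")  # hypothetical pattern

with gzip.open("inventory.csv.gz", "rt", newline="") as f:
    for row in csv.reader(f):
        bucket, key = row[0], row[1]
        if pattern.search(key):
            print(f"s3://{bucket}/{key}")
```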
I'm stuck with the following problem: I need to upload objects in small parts (512 KB), so I can't use multipart upload (because of its 5 MB minimum part size). Because of that, I have to put my parts in a "partitions" bucket and run a cron task to download the partitions and upload a single concatenated object into a "completed" bucket.
I would like to confirm, however, whether there is a more elegant way to do this than direct download and concatenation. The AWS CLI suggests one can copy objects as a whole, but I see no way to copy and concatenate several objects into one. Is there a way to do this using S3 itself?
Update: I am not guaranteed a 512 KB chunk size (in fact, it is 512 KB to 16 MB), but it is usually 512 KB, and this limit comes from the vendor of my IP cameras, so I can't really change it. I do know the resulting size beforehand: the camera tells me "I am going to upload 33 MB" in a separate call to my backend, but I have no control over the number of chunks or their size beyond the guaranteed bounds above.
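For reference, a minimal sketch of the download-and-concatenate cron task described above might look like this; the bucket names, key prefix, and the assumption that part keys sort lexicographically in upload order are all hypothetical.

```python
import boto3

s3 = boto3.client("s3")

def assemble(recording_prefix):
    # Download every chunk for one recording from the "partitions" bucket,
    # concatenate them, and upload the result to the "completed" bucket.
    parts = s3.list_objects_v2(Bucket="camera-partitions",
                               Prefix=recording_prefix)
    body = b""
    # Assumes part keys sort lexicographically in upload order.
    for obj in sorted(parts.get("Contents", []), key=lambda o: o["Key"]):
        body += s3.get_object(Bucket="camera-partitions",
                              Key=obj["Key"])["Body"].read()
    s3.put_object(Bucket="camera-completed",
                  Key=f"{recording_prefix}.mp4",
                  Body=body)
```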
I'm setting up my client with a system that allows users to upload a video or two. These videos will be stored on Amazon S3, which I've not used before. I'm unsure about buckets and what they represent. Should I have a single bucket for my application, a bucket per user, or a bucket per file?
If I were to just have the one bucket, presumably I'd have to have really long, illogical file names to prevent a file name clash.
There is no limit to the number of objects you can store in a bucket, so generally you would have a single bucket per application, or even share one across multiple applications. Bucket names have to be globally unique across S3, so it would certainly be impossible to manage a bucket per object. A bucket per user would also be difficult if you had more than a handful of users.
For more background on buckets, you can try reading Working with Amazon S3 Buckets.
Your application should generate unique keys for the objects you add to the bucket. Try to avoid ascending numeric ids, as these are considered inefficient. Simply reversing a numeric id can usually make an effective object key. See Amazon S3 Performance Tips & Tricks for a more detailed explanation.
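A minimal sketch of that suggestion, with a hypothetical "videos" prefix and .mp4 extension: reversing the ascending id means consecutive uploads no longer share a common leading key prefix.

```python
def object_key(video_id: int) -> str:
    # Reverse the decimal digits of the id to spread keys across prefixes.
    return f"videos/{str(video_id)[::-1]}.mp4"

print(object_key(123456))  # videos/654321.mp4
print(object_key(123457))  # videos/754321.mp4
```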
Well, I'm getting booted from my shared host and I'm switching over to a combination of a VPS from Linode and Amazon S3 to host a few million JPEGs.
My big worry is keeping some kind of sanity with all these images. Is there any hope of that? My understanding is that you're only allowed 100 "buckets", and that "buckets" are the only type of structure within S3.
Is putting a few million files in a bucket something you'd advise against?
You may notice that Bucket Restrictions and Limitations states:
There is no limit to the number of objects that can be stored in a bucket
My experience is that a very large number of objects in a single bucket will not affect the performance of getting a single object by its key (that is, get appears to be of constant complexity).
Having a very large number of objects also does not affect the speed of listing a given number of objects:
List performance is not substantially affected by the total number of keys in your bucket
However, I must warn you that most S3 management tools I've used (like S3Fox) will choke and die a horribly slow death when attempting to access a bucket with a very large number of objects. One tool that seems to cope well with very large numbers of objects is S3 Browser (they have a free version and a Pro version; I am not affiliated with them in any way).
Using "folders" or prefixes, does not change any of these points (get and listing a given number of objects are still constant, most tools still fall over themselves and hang).