I'm going to be using S3 to store user-uploaded photos. Obviously, I won't be serving the image files to user agents without resizing them down. However, a single size won't do, since some thumbnails need to be much smaller than the larger previews. So, I was thinking of making a standard set of dimensions scaling from a lowest of 16x16 to a highest of 1024x1024. Is this a good way to solve this problem? What if I need a new size later on? How would you solve this?
Pre-generating different sizes and storing them in S3 is a fine approach, especially if you know what sizes you need, are likely to use all of the sizes for all of the images, and don't have so many images and sizes that the storage cost is excessive.
Here's another approach I use when I don't want to pre-generate and store all the different sizes for every image, or when I don't know what sizes I will want to use in the future:
Store the original size in S3.
Run a web server that can generate any desired size from the original image on request.
Stick a CDN (CloudFront) in front of the web server.
Now, your web site or application can request a URL like /16x16/someimage.jpg from CloudFront. The first time this happens, CloudFront will get the resized image from your web server, but then CloudFront will cache the image and serve it for you, greatly reducing the amount of traffic that hits your web server.
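In case it helps to see the shape of this, here is a minimal sketch of such a resize-on-request origin server using Flask, Pillow and boto3 (none of which the approach above requires specifically); the bucket name and URL pattern are placeholders:

```python
# Hypothetical resize-on-request origin server: fetch the original from S3,
# scale it down to the size encoded in the URL, and return it. CloudFront
# caches the response, so each size is generated at most once per edge.
import io

import boto3
from flask import Flask, abort, send_file
from PIL import Image

app = Flask(__name__)
s3 = boto3.client("s3")
ORIGINALS_BUCKET = "my-photos-original"  # placeholder bucket name

@app.route("/<int:width>x<int:height>/<path:key>")
def resized(width, height, key):
    try:
        obj = s3.get_object(Bucket=ORIGINALS_BUCKET, Key=key)
    except s3.exceptions.NoSuchKey:
        abort(404)
    img = Image.open(io.BytesIO(obj["Body"].read())).convert("RGB")
    img.thumbnail((width, height))  # shrinks in place, preserving aspect ratio
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)
    buf.seek(0)
    return send_file(buf, mimetype="image/jpeg")
```

CloudFront would be configured with this server as its origin, so only the first request for each size/key pair ever reaches it.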
Here's a service that resizes images from arbitrary URLs, serving them through CloudFront: http://filter.to
This sounds like a good approach. Depending on your application, you should define a set of thumbnail sizes that you always generate, but also store the original user file in case your requirements change later. When you want to add a new thumbnail size, you can iterate over all the original files and generate the new thumbnails from them, as sketched below. This option gives you flexibility later on.
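For illustration, a backfill job for a newly added size might look roughly like this, assuming boto3 and Pillow; the bucket names, key layout and the 320x320 size are made up for the example:

```python
# Hypothetical backfill: walk every original in S3 and write the new size
# alongside the existing thumbnails. Bucket names and the new size are
# examples only.
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")
SOURCE_BUCKET = "my-photos-original"  # placeholder
THUMB_BUCKET = "my-photos-thumbs"     # placeholder
NEW_SIZE = (320, 320)                 # the size being added later

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SOURCE_BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        body = s3.get_object(Bucket=SOURCE_BUCKET, Key=key)["Body"].read()
        img = Image.open(io.BytesIO(body)).convert("RGB")
        img.thumbnail(NEW_SIZE)  # preserves aspect ratio
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=85)
        buf.seek(0)
        s3.put_object(
            Bucket=THUMB_BUCKET,
            Key=f"{NEW_SIZE[0]}x{NEW_SIZE[1]}/{key}",
            Body=buf,
            ContentType="image/jpeg",
        )
```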
Related
As per the title, I'm trying to upload an image from a mobile device to the server, and it takes time even though the image is compressed. I have applied base64 encoding but I don't see a significant change in the time. Can someone suggest a better approach in which less bandwidth is consumed? Thank you
I'm stuck on the following problem: I need to upload objects in small parts (512KB), so I cannot use multipart upload (because of the 5MB minimum part size). As a result, I have to put my parts in a "partitions" bucket and run a cron task to download the partitions and upload a single concatenated object into a "completed" bucket.
I would like to clarify, however, whether there is a more elegant way to do this than downloading and concatenating directly. The AWS CLI suggests one can copy objects as a whole, but I see no way to copy and concatenate several objects into one. Is there a way to do this using S3 itself?
UPD: I am not guaranteed a 512KB chunk size (in fact, it ranges from 512KB to 16MB), but it is usually 512KB, and the limit comes from the vendor of my IP cameras, so I cannot really change it. I do know the resulting size beforehand (the camera tells my backend "I am going to upload 33MB" in a separate call), but I have no control over the number of chunks or their size beyond the guaranteed boundaries above.
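For reference, the cron-task workflow described in the question could look roughly like the sketch below, using boto3; the bucket names and key layout are placeholders, and it assumes part keys sort in upload order (e.g. zero-padded sequence numbers). It does not answer whether S3 can concatenate server side; it only spells out the download-and-concatenate path:

```python
# Hypothetical version of the cron task: list the parts for one recording,
# pull them in order, and write the concatenated object to the completed
# bucket.
import io

import boto3

s3 = boto3.client("s3")
PARTS_BUCKET = "partitions"     # placeholder
COMPLETED_BUCKET = "completed"  # placeholder

def assemble(prefix, target_key):
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=PARTS_BUCKET, Prefix=prefix):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    keys.sort()  # relies on zero-padded part numbers in the key names

    assembled = io.BytesIO()
    for key in keys:
        part = s3.get_object(Bucket=PARTS_BUCKET, Key=key)
        assembled.write(part["Body"].read())
    assembled.seek(0)

    # A ~33MB object fits in memory comfortably; larger results would need
    # streaming or a multipart upload instead.
    s3.put_object(Bucket=COMPLETED_BUCKET, Key=target_key, Body=assembled)
```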
Suppose I have files in blob storage, and these files are constantly used by my web application hosted in Windows Azure.
Should I perform some sort of caching of these blobs, like downloading them to my app's local hard-drive?
Update: I was asked to provide a case to make it clear why I want to cache content, so here it goes: imagine I have an e-commerce website and my product images are all high-resolution. Sometimes, though, I would like to serve them as thumbnails (e.g. for product listings), and one possible solution for that is to use an HTTP handler to resize the images on demand. I know I could use output caching so that each image only needs to be resized once, but for the sake of this example, let's just assume I would process the image every time it was requested. I imagine it would be faster to have the contents cached locally. In this case, would it be better to cache them on the HD or to use local storage?
Thanks in advance!
Just to start answering your question: yes, accessing static content from role-specific local storage will be faster than accessing it from Azure blob storage, due to network latency, even when the compute instance and the blob are in the same data center.
One solution would be to download a set of blobs from Azure storage during a startup task (or a background task) into role-specific local storage and reference the static content from there. The real question, however, is why you want to cache the content from Azure blob storage: is it for faster access or for reliability? If the reason is to have static content available almost immediately, then caching it in local storage makes sense.
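As a rough illustration of that startup-task idea, the sketch below copies a container's blobs into a local cache folder. It uses the Python azure-storage-blob package rather than the .NET role APIs purely to show the flow; the connection string, container name and cache path are placeholders:

```python
# Hypothetical startup task: copy the blobs of one container into a local
# cache folder so the role serves them from disk afterwards.
import os

from azure.storage.blob import ContainerClient

CONNECTION_STRING = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # placeholder
LOCAL_CACHE = r"C:\Resources\image-cache"                          # placeholder path

container = ContainerClient.from_connection_string(
    CONNECTION_STRING, container_name="product-images"             # placeholder
)

os.makedirs(LOCAL_CACHE, exist_ok=True)
for blob in container.list_blobs():
    target = os.path.join(LOCAL_CACHE, blob.name.replace("/", "_"))
    with open(target, "wb") as f:
        f.write(container.download_blob(blob.name).readall())
```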
There are pros and cons to each approach, but if you can explain specifically why you want to do this, you may get a much more to-the-point response.
Why not use a local resource? It gives you a path to a folder on the HD, and you can get a lot of space. You can even keep it around between restarts.
Another option is Azure Cloud Drive. It's fast, and it would allow you to share the cache among instances (though only one instance can mount it for writing at a time).
Erick
I'm creating a photo-gallery site. I want each photo to have 3 or 4 instances at different sizes (including the original photo).
Is it better to resize a photo on the client side (using Flash or HTML5) and upload all the instances of the photo to the server separately? Or is it better to upload the photo to the server only once and resize it there using server resources (for example GD)?
What would be your suggestions?
It would also be interesting to know how big sites handle this. For example 500px.com (which creates 4 instances for each photo, and everything works fast enough) or Facebook.
There are several schools of thought on this topic; it really comes down to how many images you have and how likely it is that the images will be viewed more than once. It is most common for all of the image sizes to be created using a tool like Adobe Photoshop, GIMP, Sizzlepig or GD (locally or on a server, not necessarily the web server) and then for all the assets to be uploaded to the server.
Resizing before you host the image takes some of the strain off the end user's web browser and, more importantly, reduces the amount of bandwidth required to host the site (especially useful when you are running a large site and paying per GB transferred).
To answer your part about really big sites, some do image scaling ahead of time, others do it on the fly, but typically it's done server side.
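As a small illustration of the create-all-sizes-ahead-of-time workflow, here is a sketch of a local batch script using Pillow; the folder names and size list are assumptions for the example:

```python
# Hypothetical local batch script: create every target size for each original
# before anything is uploaded to the web server.
from pathlib import Path

from PIL import Image

ORIGINALS = Path("originals")                    # placeholder folder
OUTPUT = Path("resized")                         # placeholder folder
SIZES = [(1024, 1024), (480, 480), (160, 160)]   # example size set

for src in ORIGINALS.glob("*.jpg"):
    for width, height in SIZES:
        img = Image.open(src)
        img.thumbnail((width, height))           # keeps aspect ratio
        out_dir = OUTPUT / f"{width}x{height}"
        out_dir.mkdir(parents=True, exist_ok=True)
        img.save(out_dir / src.name, quality=85)
```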
I am working on an e-Shop project.
In my design, each product needs a picture in three sizes:
480 * 480
290 * 290
200 * 200
Which one is better?
Asking the e-Shop admin to upload a picture in each of the above sizes.
Asking him to upload a single 480 * 480 picture, then generating the other sizes via ASP.NET.
Requiring your site admin to upload three separate images simply pushes unnecessary work onto the admin, and this generally results in them not bothering to create and upload three separate images for each product, leaving you with an e-commerce site full of missing images.
You're far better off using the technology to make the administrator's life easier by requiring them to upload a single image and automating the creation of the smaller sizes. Automating manual tasks is the whole point of the IT industry, and not doing so where possible rather defeats the purpose of building these systems.
There's not really any issue with CPU usage, as you only need to generate the two smaller images once, at the point of upload, or not at all by using CSS to resize (though that may not be an optimal use of bandwidth). I'd go with creating the two smaller images either when the image is uploaded by the admin and storing them in a cache, or creating them on the fly the first time each is requested and then putting it into the cache.
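As a sketch of the generate-on-first-request-then-cache option, here is what that check might look like, shown with Pillow rather than ASP.NET just to keep the example short; the folders and size names are placeholders:

```python
# Hypothetical helper: return the cached thumbnail path, generating it on the
# first request only.
from pathlib import Path

from PIL import Image

UPLOADS = Path("uploads")        # where the admin's 480x480 originals live
CACHE = Path("thumb-cache")      # placeholder cache folder
SIZES = {"medium": (290, 290), "small": (200, 200)}

def thumbnail_path(filename, size_name):
    cached = CACHE / size_name / filename
    if cached.exists():
        return cached            # already generated on an earlier request
    cached.parent.mkdir(parents=True, exist_ok=True)
    img = Image.open(UPLOADS / filename)
    img.thumbnail(SIZES[size_name])
    img.save(cached, quality=85)
    return cached
```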
Uploading all three images will reduce the CPU overhead. You can then use the processing power to make your site more responsive.
In my opinion, preparing three sizes of the image manually is a time-consuming process because it must be repeated for every product, so generating them would be better.
On the other hand, just uploading the big one and displaying the smaller versions with CSS classes can also be useful (if the visitor will see all the images all the time anyway).