Say I have 100 images that are each 10KB in size. What are the benefits of putting all of those into a single spritesheet? I understand there are fewer HTTP requests, and therefore less load on the server, but I'm curious about the specifics. With modern pipelining, is it still worth the performance gains? How significant are the performance gains? Does it result in a faster load time for the client as well as less load on the server, or the same load time but less load on the server?
Are there any test cases anyone can point to that answers these questions?
Basically, what I'm asking is -- is it worth it?
Under HTTP/1.1 (which most sites are still using) there is a massive overhead to downloading many small resources compared to one big one. This is why spriting became popular as an optimisation technique. HTTP/2 mostly solves that, so there is less need for spriting (and in fact it's now considered an anti-pattern). I'm not sure what you mean by "modern pipelining", but that mostly means HTTP/2, as the pipelining in HTTP/1.1 isn't as fully featured and isn't used much.
How bad a performance hit is it over HTTP/1.1? Shockingly bad, actually: it can make load time 10 times slower on an example site I created. It doesn't really impact server or client load much, since the same amount of data needs to be sent either way, but it does massively impact load time.
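If you want to reproduce that kind of comparison yourself, a tiny timing harness in the browser is enough. This is a minimal sketch with placeholder URLs (swap in your own assets), run once against an HTTP/1.1 server and once against HTTP/2:

```typescript
// Minimal sketch: time many small fetches vs. one combined fetch.
// The URLs below are placeholders for your own assets.
async function timeDownloads(label: string, urls: string[]): Promise<void> {
  const start = performance.now();
  await Promise.all(urls.map((u) => fetch(u).then((r) => r.arrayBuffer())));
  console.log(`${label}: ${(performance.now() - start).toFixed(0)} ms`);
}

const smallImages = Array.from({ length: 100 }, (_, i) => `/img/icon-${i}.png`); // 100 x 10KB
await timeDownloads("100 small images", smallImages);
await timeDownloads("one spritesheet", ["/img/spritesheet.png"]);               // 1 x ~1MB
```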
That said, there are downsides to spriting images (and to concatenating text files, which is similar): you have to download the whole sprite even if you only use one image, updating it invalidates the old version in the cache, it requires a build step... etc.
Ultimately the best test is to try it, as it will differ from site to site. However, once HTTP/2 becomes ubiquitous, spriting will become a lot less common.
There is more discussion of this topic in this answer: Optimizing File Cacheing and HTTP2
As far as I know, HTTP/2 no longer uses separate TCP connections for every request, which is the main performance-booster of the protocol.
Does that mean it doesn't matter whether I use 10 XHRs with 10kB of content each or one XHR with 100kB and then split the parts client-side?
A precise answer would require a benchmark for your specific case.
In more general terms, from the client's point of view: if you can make the 10 XHRs at the same time (for example, in a tight loop), then those 10 requests will leave the client more or less at the same time, incur the latency between the client and the server, and be processed on the server (more or less in parallel, depending on the server architecture). The result could be similar to a single XHR, although I would expect the single request to be more efficient.
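As a concrete illustration of the two client-side strategies (the endpoints and the 10kB part size are made up for the example):

```typescript
// Strategy A: ten parallel requests, one per part.
const parts = await Promise.all(
  Array.from({ length: 10 }, (_, i) =>
    fetch(`/api/part/${i}`).then((r) => r.arrayBuffer())
  )
);

// Strategy B: one request for the whole payload, split client-side.
// Assumes the server concatenates the ten 10kB parts in a known order.
const whole = await fetch("/api/all-parts").then((r) => r.arrayBuffer());
const PART_SIZE = 10 * 1024;
const split = Array.from({ length: 10 }, (_, i) =>
  whole.slice(i * PART_SIZE, (i + 1) * PART_SIZE)
);
```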
From the server point of view, however, things may be different.
If you multiply what could have been done with a single request by 10, your server now sees a 10x increase in request rate.
Reading from the network, request parsing and request dispatching are all activities that are heavily optimized in servers, but they do have a cost, and a 10x increase in that cost may be noticeable.
And that 10x increase in request rate to the server may impact the database as well, the filesystem, etc. so there may be ripple effects that can only be noticed by actually performing the benchmark.
Other things you have to weigh are the amount of work needed on the server to aggregate things and on the client to split them, along with less measurable things like code clarity and maintainability, and so forth.
I would say that common pragmatic judgement applies here: if you can do the same work with one request, why make 10 requests? Do you have a more specific example?
If you are in doubt, measure.
I've been playing with ImageResizer for a bit now, and in trying to do something, I'm having trouble understanding the way to go about it.
Mainly, I would like to stick to the idea of using the pipeline, and not try to cheat it.
So... let's say I use ImageResizer in a pretty standard way, for something like:
giants_logo.jpg?w=280&h=100
The file is giants_logo.jpg, and the request being processed is for a resized version with 'w=280&h=100'.
In a clustered environment, this same request may end up being served by 3 different machines.
All 3 would end up doing the resize and then storing their cached version in a local folder on disk. I could leverage a shared drive or something, but that has its own limitations.
What I am looking to do is get the processed file and then copy it back up to the DB or S3, where the main images are served from.
My thought is... I might have to write something like DiskCache, but with completely different guts, using the DB or S3 as the backend instead of the file system.
I realize the point of caching is speed, and what I am suggesting negates that aspect... but maybe that's not the case if we layer things.
Anyway, what I am focused on is keeping track of the files generated, as well as avoiding processing on multiple servers.
Any thoughts on the route I should look at to accomplish this?
TL;DR: when DiskCache actually stops working well (usually between 1 and 20 million unique images), switch to a CDN (unless it's too expensive) or a reverse proxy (unless your data set is really too huge to be bound by mortal infrastructure).
For petabyte data sets on the cheap when performance isn't king, it's a good plan. But for most people, it's premature. Even users with upwards of 20TB (source images) still use DiskCache. Really. Terabyte drives are cheap.
Latency is the killer.
To make this work you would need a central Redis server. MSSQL won't cut it (at least not on a VM or commodity hardware, we've tried). Given a Redis server, you can track what is done and stored (and perhaps even what is in progress, to de-duplicate effort in real time, as DiskCache does).
If you can track it, you can reuse it, and you can delete it. Reuse will be slower, since you're doubling the network traffic, moving the result twice (though you also decrease source-image fetch traffic linearly with the number of servers in the cluster).
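A minimal sketch of what that Redis coordination could look like, assuming the node-redis client; the key scheme and the processImage/uploadToBlob helpers are hypothetical, and this is not how ImageResizer's actual DiskCache works:

```typescript
import { createClient } from "redis";

// Hypothetical helpers: produce the derivative and store it in blob storage.
declare function processImage(source: string, query: string): Promise<Uint8Array>;
declare function uploadToBlob(data: Uint8Array): Promise<string>;

const redis = createClient({ url: "redis://cache-coordinator:6379" });
await redis.connect();

async function getOrCreateDerivative(source: string, query: string): Promise<string> {
  const key = `derivative:${source}?${query}`;
  const existing = await redis.get(key);
  if (existing) return existing; // another server already did the work

  // NX means "set only if absent", so one server in the cluster wins the claim.
  const claimed = await redis.set(`${key}:lock`, "1", { NX: true, EX: 60 });
  if (!claimed) {
    await new Promise((r) => setTimeout(r, 200)); // crude wait-and-retry
    return getOrCreateDerivative(source, query);
  }

  const blobUrl = await uploadToBlob(await processImage(source, query));
  await redis.set(key, blobUrl);
  return blobUrl;
}
```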
If bandwidth saturation is your bottleneck (very common), this could make performance worse. In fact, unless your workload is write- and CPU-heavy, you'll likely see worse performance than with duplicated CPU effort under individual disk caches.
If you have the infrastructure to test it, put DiskCache on a SAN or shared drive; this will give you a solid estimate of the performance you can expect (assuming said drive and your blob storage system have comparable IO perf).
However, it's a fair amount of work, and you're essentially duplicating a subset of the functionality of a reverse proxy (but with worse performance, since every response has to be proxied through the unlucky cluster server instead of being spooled directly from disk).
CDNs and Reverse proxies to the rescue
Amazon CloudFront or Varnish can serve quite well as reverse proxies/caches for a web farm or cluster. Now, you'll have a bit less control over the 'garbage collection' process, but... also less code to maintain.
There's also ARR, but I've heard neither success nor failure stories about it.
But it sounds fun!
Send me a Github link and I'll help out.
I'd love to get a Redis-coordinated, cloud-agnostic poor-man's blob cache system out there. You bring the petabytes and infrastructure, I'll help you with the integration and troublesome bits. Efficient HTTP proxying is probably the hardest part; the rest is state management and basic threading.
You might want to have a look at a modified AzureReader2 plugin at https://github.com/orbyone/Sensible.ImageResizer.Plugins.AzureReader2
This implementation stores the transformed image back to the Azure blob container on the initial request, so subsequent requests are redirected to that copy.
This sort of question has been asked before (HTTP Requests vs File Size?), but I'm hoping for a better answer. In that linked question, the answerer seemed to do a pretty good job of answering it with the nifty formula of latency + transfer time, with an estimated latency of 80 ms and a transfer speed of 5 Mb/s. But it seems flawed in at least one respect. Don't multiple requests and transfers happen simultaneously in a normal browsing experience? That's what it looks like when I examine the Network tab in Chrome. Doesn't this mean that request latency isn't such a terrible thing?
Are there any other things to consider? Obviously latency and bandwidth will vary, but are 80 ms and 5 Mb/s a good rule of thumb? I thought of an analogy and I wonder if it is correct. Imagine a train station with only one track in and one track out (or maybe it is one for both). HTTP requests are like sending an engine out to get a bunch of cars at another station. They return pulling a long train of railway cars, which represents the requested file being downloaded. So you could send one engine out and have it bring back a massive load. Or you could send multiple engines out and they could each bring back smaller loads; of course, they would all have to wait their turn coming back into the station, and some engines couldn't be sent out until other ones had come in. Is this a flawed analogy?
I guess the big question then is how you can predict how much overlap there will be in HTTP requests, so that you can know, for example, whether it is generally worth it to have two big PNG files on your page, or instead a webp image plus the Webpjs js and swf files for incompatible browsers. That doubles the number of requests but more than halves the total file size (say a 200kB saving).
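For what it's worth, the "latency + transfer time" rule of thumb can be extended with a connection-pool limit to get a very rough model of that overlap. This sketch is an assumption-laden toy (it ignores TCP slow start, pipelining, HTTP/2, and so on), not a real predictor:

```typescript
// Rough estimator: each of N parallel connections is a "lane"; every request
// costs one round-trip of latency plus size/bandwidth of transfer time.
function estimateLoadMs(
  fileSizesKB: number[],
  latencyMs = 80,
  bandwidthMbps = 5,
  maxParallel = 6 // Chrome opens roughly 6 connections per host
): number {
  const lanes = new Array(maxParallel).fill(0); // per-connection busy time
  for (const sizeKB of fileSizesKB) {
    const transferMs = (sizeKB * 8) / bandwidthMbps; // kB -> kbit, at Mb/s
    const lane = lanes.indexOf(Math.min(...lanes));  // next free connection
    lanes[lane] += latencyMs + transferMs;
  }
  return Math.max(...lanes);
}

// Two big PNGs vs. one webp plus two support files (sizes are hypothetical):
console.log(estimateLoadMs([150, 150]));   // 320 ms
console.log(estimateLoadMs([60, 30, 10])); // 176 ms
```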
Your analogy is not bad in general terms. Obviously, if you want to be really precise in all aspects, there are things that are oversimplified or incorrect (but that happens with almost all analogies).
Your estimate of 80 ms and 5 Mb/s might sound logical, but even though most of us like theory, you should manage this kind of problem in another way.
In order to make good estimates, you should measure to get some data and analyze it. Every estimate depends on some context, and you should not ignore it.
Estimating latency and bandwidth is not the same for a 3G connection, an ADSL connection in Japan, or an ADSL connection in a less technologically developed country. Are clients accessing from the other end of the world, or from the same country? Like your good observation about simultaneous connections on the client, there are millions of possible questions to ask yourself, and very few good-quality answers without doing some measuring.
I know I'm not answering your question exactly, because I think it is unanswerable without many more details about the domain (plus constraints, and a huge etc.).
You seem to have some ideas about how to design your solution. My best advice is to implement each one of those and profile them. Make measurements, try to identify what your bottlenecks are, and see whether you have some control over them.
For some problems this kind of question might have an optimal solution, but the difference between optimal and suboptimal could be negligible in practice.
This is the kind of answer I'm looking for. I did some simplistic tests to get a feel for the speed of many small files vs one large file.
I created HTML pages that loaded a bunch of randomly sized images from placekitten.com. I loaded them in Chrome with the Network tab open.
Here are some results:
# of imgs | Total size (KB) | Time (ms)
----------|-----------------|--------------------------
1         | 465             | 4000, 550
1         | 307             | 3000, 800, 350, 550, 400
30        | 192             | 1200, 900, 800, 900
30        | 529             | 7000, 5000, 6500, 7500
So one major thing to note is that single files load much more quickly after they have been loaded once. (The comma-separated lists of times are page reloads.) I did a normal refresh and also Empty Cache and Hard Reload. Strangely, it didn't seem to make much difference which way I refreshed.
My connection had a latency (round-trip time) of around 120-130 ms, and my download speed varied between 4 and 8 Mbps. Chrome seemed to do about 6 requests at a time.
Looking at these few tests, it seems that, at least in this range of file sizes, it is clearly better to have fewer requests when the file sizes are equal; but if you could cut the total file size in half, even at the expense of 30 more HTTP requests, it would be worth it, at least for a fresh page load.
Any comments or better answers would be appreciated.
In Varnish, does the std.log subroutine have a performance impact I should be concerned with? For example, if I call it 3-4 times a request, will that have a cumulative effect when dealing with a large number of requests?
From what I can tell, std.log logs to shared memory by requesting a lock, writing the message, and releasing the lock. This should be pretty fast, but if it happens during every single request, wouldn't that affect concurrent requests?
Varnish uses a shared memory log (shm-log) for all logging. This works as a circular buffer and stores a small amount of log data - 80MB by default. It is fast.
Other tools are provided for analysing and generating output from the shm-log area. These tools are relatively slow since they must output data either to screen or disk, but they don't interfere with the performance of Varnish itself.
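To illustrate why a fixed-size circular buffer is so cheap to write to, here's a toy version in TypeScript; this is only a conceptual sketch, not Varnish's actual implementation (which is a lock-protected shared-memory segment written in C):

```typescript
// Toy ring log: a write is just an array store plus an index bump into
// pre-allocated storage, and the oldest entries are silently overwritten.
class RingLog {
  private entries: string[];
  private next = 0;

  constructor(private capacity: number) {
    this.entries = new Array(capacity);
  }

  log(message: string): void {
    this.entries[this.next] = message;           // overwrite the oldest slot
    this.next = (this.next + 1) % this.capacity;
  }

  // What a varnishlog-style tool would read: entries in write order.
  snapshot(): string[] {
    return [...this.entries.slice(this.next), ...this.entries.slice(0, this.next)]
      .filter((e) => e !== undefined);
  }
}
```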
I'd be surprised if adding an extra 3 or 4 log entries per request has any measurable performance impact at all, seeing as each request already generates far more than that (one for every request header, for example). I'd say you are far more likely to encounter performance problems with your backend(s).
I have been hoping to find out what different server setups equate to in theory for concurrent page requests, and the answer always seems to be soaked in voodoo and sorcery. What is the approximate maximum number of concurrent page requests for the following setups?
apache+php+mysql(1 server)
apache+php+mysql+caching (like memcached or similar (still one server))
apache+php+mysql+caching+dedicated Database Server (2 servers)
apache+php+mysql+caching+dedicatedDB+loadbalancing(multi webserver/single dbserver)
apache+php+mysql+caching+dedicatedDB+loadbalancing(multi webserver/multi dbserver)
+distributed (amazon cloud elastic) -- I know this one is "as much as you can afford" but it would be nice to know when to move to it.
I appreciate any constructive criticism. I am just trying to figure out when it's time to move from one implementation to the next, because each comes with its own implementation effort, either programming-wise or setup-wise.
In your question you talk about caching, and this is probably one of the most important factors in a web architecture regarding performance and capacity.
Memcache is useful, but actually, before that, you should be ensuring proper HTTP cache directives on your server responses. This does two things: it reduces the number of requests and speeds up server response times (if you have Apache configured correctly). This can be improved further by using an HTTP accelerator like Varnish and a CDN.
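As a sketch of the kind of directives meant here (shown with Node's built-in http module purely for illustration; the same header values apply whether Apache, nginx, or anything else emits them, and the paths are made up):

```typescript
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.url?.startsWith("/static/")) {
    // Fingerprinted assets: browsers and CDNs may cache these aggressively.
    res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
  } else {
    // Dynamic pages: allow shared caches to hold them only briefly.
    res.setHeader("Cache-Control", "public, max-age=0, s-maxage=60");
  }
  res.end("...");
}).listen(8080);
```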
Another factor to consider is whether your system is stateless. Stateless usually means that it doesn't store sessions on the server and reference them with every request. A good system architecture relies on state as little as possible; the less state, the more horizontally scalable a system is. Most people introduce state when confronted with personalisation, i.e. serving up different content for different users. In such cases you should first investigate using HTML5 session storage (i.e. storing the user data in JavaScript on the client, obviously over HTTPS) or, if the data set is smaller, secure JavaScript cookies. That way you can still serve up cached resources and then personalise with JavaScript on the client.
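A small sketch of that client-side personalisation idea; the key name, data shape, and element id are invented for the example:

```typescript
// The server sends one cacheable, anonymous page; user-specific bits are
// applied in the browser from session storage.
interface UserPrefs {
  displayName: string;
  theme: "light" | "dark";
}

function loadPrefs(): UserPrefs | null {
  const raw = sessionStorage.getItem("userPrefs"); // hypothetical key
  return raw ? (JSON.parse(raw) as UserPrefs) : null;
}

function personalise(): void {
  const prefs = loadPrefs();
  if (!prefs) return; // anonymous visitors see the cached default page as-is
  document.body.dataset.theme = prefs.theme;
  const greeting = document.querySelector("#greeting");
  if (greeting) greeting.textContent = `Welcome back, ${prefs.displayName}`;
}

personalise();
```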
Finally, your stack includes a database tier, another potential bottleneck for performance and capacity. If you are only reading data from the system, then again it should be quite easy to scale horizontally. If there are both reads and writes, it's typically better to separate the read-write data into one database and the read-only data into another. You can then use more relevant methods to scale each.
These setups do not spit out a single answer that you can then compare to each other. The answer will vary on way more factors than you have listed.
Even if they did spit out a single answer, it would be just one metric out of dozens. What makes this the most important metric?
Even worse, none of these alternatives is free. There is engineering effort and maintenance overhead in each of them, which cannot be analysed without understanding your organisation, your app, and your cost/revenue structures.
Options like AWS not only involve development effort but may "lock you in" to a solution so you also need to be aware of that.
I know this response is not complete, but I am pointing out that this question touches on a large complicated area that cannot be reduced to a single metric.
I suspect you are approaching this from exactly the wrong end. Do not go looking for technologies and then figure out how to use them. Instead profile your app (measure, measure, measure), figure out the actual problem you are having, and then solve that problem and that problem only.
If you understand the problem and you understand the technology options then you should have an answer.
If you have already done this and the problem is concurrent page requests then I apologise in advance, but I suspect not.