Web Server Caches - in-memory vs the OS - Apache

I'm not entirely sure whether this question would be better suited to Server Fault; however, since I'm a programmer and not a sysadmin, I'm asking from a programmer's perspective.
These days there are a HUGE number of options available for caching static web content. Things like Varnish or Squid are used throughout the industry.
However, I'm somewhat confused here. From a theoretical perspective, I don't see why caching static content should require any third-party software beyond the web server and the OS.
Dynamic content (such as the result of an expensive PHP calculation) could certainly benefit from a good caching system.
But with static content, what do we gain by caching resources in memory? Wouldn't the OS page cache already provide the same benefits as a dedicated caching system like Varnish or Squid? Or am I missing some of the benefits?
Varnish, in fact, stores data in virtual memory using mmap and lets the OS handle the paging. So how exactly is this different from just saving cached resources to disk and opening them with fread?
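For illustration, here is a minimal Python sketch of the two approaches being compared (the filename is made up); either way, the kernel's page cache is what actually backs the bytes:

    import mmap

    # Plain read: bytes are copied from the OS page cache into a userspace buffer.
    with open("cached_resource.html", "rb") as f:
        data = f.read()

    # mmap: the file's pages are mapped straight into our address space; the OS
    # pages them in on demand and may evict them under memory pressure, which is
    # essentially what Varnish's mmap-backed storage relies on.
    with open("cached_resource.html", "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        first_kb = mm[:1024]  # touching the mapping faults pages in as needed
        mm.close()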

You are correct. For static resources, the memory can just as well be put to use for the page cache instead of using Varnish.
Chaining caches (Varnish, page cache) that hold identical content while competing for the same resource (server memory) is silly.
If you also have some dynamic content, you may choose to serve everything from one cache for operational reasons. For example, it is simpler to collect access logs and statistics from a single software stack than from two. The same applies to things like staff training and security patching.

Related

How can we integrate two Rails applications deployed within an intranet?

Are RESTful services the only route for integrating an application with a Rails application (including other Rails applications), regardless of whether they are on the same network?
For integrating two applications, how heavy is a RESTful service compared to the RMI-based integration available in other technologies like Java EE?
Is there a way to integrate two Rails applications using a natively understood binary format, avoiding transformation to a different format (e.g., an HTTP request)?
The REST approach means simply that application A will make requests of application B (and potentially the other way around) using the HTTP protocol. The data sent can be in whatever format you like, although JSON is the default today (and XML was the default yesterday, and even ... SOAP -- gaq!).
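To make that concrete, here is a minimal sketch of application A calling application B over HTTP with a JSON payload (the URL and fields are made up):

    import json
    import urllib.request

    payload = json.dumps({"order_id": 42, "status": "shipped"}).encode("utf-8")
    req = urllib.request.Request(
        "http://app-b.internal/api/orders/42",  # hypothetical endpoint in app B
        data=payload,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read().decode("utf-8"))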
These days, the vast majority of external APIs are implemented this way -- Amazon, Google Maps, Yelp, etc, etc, etc. Why? Because the HTTP (or HTTPS) protocol is well understood and widely deployed. No special configuration is required and the same protocol that serves the application to regular people on web browsers works for other applications. Rails makes this brilliantly easy (if you go with the flow).
Java's RMI is a specific protocol (just as HTTP is). The advantage is that objects defined in A are available as instances in B (after a great deal of work in both). This really makes sense when you have a set of applications all designed up front to work together and whose main requirement is to be distributed across locations, servers, etc. RMI creates a tight binding between applications -- a change in one typically requires a change in the other. It's right for some kinds of applications.
But if you have, for example, two departments in a company who talk to each other, but don't want to be "bound at the hip", a REST interface provides a great deal of flexibility.
Your second question ("how heavy") is very difficult to answer. A company I worked for in 2001 had hundreds of servers all running an instance of a "worker" process -- they were all designed to queue their results to a "controller" process which would process the output and forward it to another set of servers designed to process and manage the data. In 2001, this was the right architecture because it was completely designed to work together -- persistent socket connections on a single subnet of our intranet running on a room full of servers. Now in 2012, that room full of servers is replaced by a few high-powered processors running a 64-bit OS and addressing massive amounts of memory -- it's a whole new world. A doubling of performance in 2001 could potentially save millions of dollars in hardware, operational support, space and so on. In 2012, the most expensive thing is good developers! So "heavy" is really kind of irrelevant in all but the most compute-intensive operations these days. An HTTP request is light and simple.
Final question: natively understood binary format. Sure, if needed. In the end, any binary format that is sent over the wire between two servers needs to be serialized and de-serialized as a stream, and this is work, both for programmers and for machines. JSON is a text format, but one natively understood by JavaScript (JavaScript Object Notation), and it has the distinct advantage of being human-readable. Given that most servers are set up to compress output automatically, whether something is text or binary becomes less relevant, at least as far as I/O and payload go. Of course you can come up with any mutually understood format and send it over HTTP, but again, this is something that mattered a decade ago and today is usually not an issue worth considering. Processors have been getting faster and faster, and memory cheaper (and bigger) -- so (as always) I/O (whether network or disk) is the typical bottleneck in modern applications.
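As a rough illustration, the serialize/deserialize "work" for JSON amounts to a pair of library calls, and the wire format stays human-readable (the record is made up):

    import json

    record = {"user": "alice", "scores": [98, 87], "active": True}

    wire = json.dumps(record)     # serialize: object -> human-readable text
    roundtrip = json.loads(wire)  # deserialize: text -> object
    assert roundtrip == record    # lossless for basic types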
If I were to re-design the application I mentioned from 2001, where hundreds of (today's) servers needed to communicate with (many) peer servers very specifically designed to interoperate, I might work to make sure that the serialize/deserialize process was as lightweight as possible (but only if it turned out to be a bottleneck). For me, being bound to any given platform or language is a non-starter -- the computing world is moving way too fast.
But in almost all realistic business applications today, keeping things simple, standard, and straightforward has both present and future benefits that make the need to worry obsessively about performance a thing of the past.
Hope this helps :-)

How important are website optimizations?

Currently I am running Apache and MySQL and I hear about people talking about GZipping content, something about ETags, using a CDN, adding expire headers, minifying text documents, combining script files, etc. I downloaded a Firefox add-on called YSlow and I noticed that many websites do not employ all of these tactics. I believe even Google has a D rating. So I ask, SO, how important are these optimizations?
It depends highly on your traffic and the resources at your disposal.
If you make the website for Joe's Pizza in the middle of nowhere, there is no real need to waste time optimising the site; it will likely get a handful of visits a day.
But Stack Overflow receives thousands of hits a minute (probably more), so they use a CDN, distant expiry headers, minification, etc.
Honestly, if people aren't complaining it's probably not a big deal. If people are complaining, start by looking at the database.
In my years of web development, most web application performance problems have stemmed from the DB (this doesn't mean that all performance problems come from the DB, but it's a good place to start). While I am fascinated by things like minified JS and CSS sprites, I suspect that these things do not make a difference in a "day in the life of your average web developer".
It's good that you consider these things, but unless you are working at an extremely high traffic site, it probably won't make a difference.
It all depends on your application.
Minifying, for example, might be great for an application that depends heavily on external .js files. There is no reason NOT to do this - it adds no runtime overhead and potentially saves quite a few bytes.
Compression is great for certain content types - terrible for others and involves a slight overhead while transporting pages.
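A quick sketch of why: repetitive text shrinks dramatically under gzip, while already-compressed content (images, archives) barely shrinks at all, so you pay the CPU overhead for almost no gain (the sample payloads are made up):

    import gzip
    import os

    html = b"<div class='item'>hello world</div>" * 500
    noise = os.urandom(len(html))  # stands in for an already-compressed JPEG/zip

    print(len(html), len(gzip.compress(html)))    # shrinks to a tiny fraction
    print(len(noise), len(gzip.compress(noise)))  # roughly the same size, or larger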
Whether to use a CDN comes down to your budget, your content type and how dynamic the content is. You obviously don't need Akamai backing up the average Drupal site.
etc, etc, etc

Caching architecture for Memcached/wcf/web/ravendb

I have an architecture question - related to my ravendb based setup.
I have the following:
RavenDB -> WCF service -> (web/iPhone/Android)
the web/iPhone/Android level actually has (at the moment - this is growing) connections to 7 WCF services
at the moment the 7 services talk to the same RavenDB instance - this is likely to be segmented in a future refactoring blitz, as they don't need to be on the same instance and there is minimal (if any) crossover in the model.
My question is this:
I am looking at using memcached - at which points (I have little experience setting this up) can I / should I use memcached?
between ravendb and wcf?
between wcf and (web/iphone/android)?
between all?
Am I likely to run into stale data issues? Is this taken care of, or am I oversimplifying things?
As many people will tell you: premature optimization is the root of all evil (and they are all quoting Donald Knuth, I think). So wait until you have performance issues before doing anything (you don't need to wait for the system to crash; wait till you see 90% utilization of your resources).
That being said, you should use memcached (or any kind of caching, for that matter) when you expect the cached data to be read before it is invalidated (the improvement factor will depend on many other things, like the cost of the operation and how frequently the data is accessed).
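As a sketch of what that looks like in practice, here is a minimal cache-aside pattern against memcached (shown with the pymemcache library; the key scheme, TTL, and loader function are hypothetical):

    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))

    def get_document(doc_id):
        key = f"doc:{doc_id}"
        cached = cache.get(key)
        if cached is not None:
            return cached                  # hit: the database is never touched
        value = load_from_ravendb(doc_id)  # miss: hypothetical expensive fetch
        cache.set(key, value, expire=60)   # keep for 60 seconds
        return value

The expire value also bounds your stale-data exposure: at worst you serve data that is 60 seconds old, so the TTL is something you tune per data type rather than something the cache "takes care of" for you.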
To answer your "where" questions: that really depends on where you will save the most resources, which is application-specific and can't be answered here.
As an additional pointer, RavenDB's REST interface uses ETags to support HTTP-based caching capabilities. If your HTTP client plays well with those mechanisms, you'll get some nice caching out of the box.
I am not sure how this plays with the WCF stack, though
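For what it's worth, the ETag handshake itself is plain HTTP, so a rough sketch looks the same from any client (the URL is made up; a 304 means the cached copy is still valid):

    import urllib.request
    from urllib.error import HTTPError

    url = "http://ravendb.internal:8080/databases/app/docs/orders-42"

    resp = urllib.request.urlopen(url)
    etag, body = resp.headers["ETag"], resp.read()  # remember both

    req = urllib.request.Request(url, headers={"If-None-Match": etag})
    try:
        body = urllib.request.urlopen(req).read()   # 200: document changed
    except HTTPError as e:
        if e.code != 304:                           # 304: reuse the cached body
            raise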

What research-operating-system features would you advocate including in Google Chrome Operating System

Imagine that a large player is undertaking the construction of a new operating system, where backward compatibility requirements are limited to:
Run existing applications written in (or compiled to) JavaScript which are presented in HTML5 and styled with CSS3
Plug and play support for printers, external storage, and optical drives
Degrade gracefully when disconnected from the internet
Sufficient process quotas to support safely permitting tasks to run in the background, including timers
What specific features from existing research operating systems (such as Plan 9) would you like to see enter the mainstream through this channel? Please limit your suggestions to things that have been implemented, and provide a link to the implementation (or at least search terms).
From the Plan 9 docs:
Plan 9 began in the late 1980’s as an attempt to have it both ways: to build a system that was centrally administered and cost-effective using cheap modern microcomputers as its computing elements.
Netbooks qualify as cheap modern microcomputers, and The Cloud qualifies as centrally administered. There is an opportunity to implement the features (in DDaviesBrackett's words) that we want netbooks to have other than by extending a 1970's time-sharing OS; the research operating systems may have proved the value of alternatives by example.
From the Plan 9 FAQ:
Subject: What are its key ideas?
Plan 9 exploits, as far as possible, three basic technical ideas: first, all the system objects present themselves as named files that are manipulated by read/write operations; second, all these files may exist either locally or remotely, and respond to a standard protocol; third, the file system name space - the set of objects visible to a program - is dynamically and individually adjustable for each of the programs running on a particular machine. The first two of these ideas were foreshadowed in Unix and to a lesser extent in other systems, while the third is new: it allows a new engineering solution to the problems of distributed computing and graphics. Plan 9's approach means that application programs don't need to know where they are running; where, and on what kind of machine, to run a Plan 9 program is an economic decision that doesn't affect the construction of the application itself.
Does that not appear to be an excellent fit for the netbook/Cloud domain?
What operating system features would I advocate for Chrome OS?
Here is my wish list, as a Plan 9/Inferno fan:
Resources (IP stack, graphics, etc.) as file systems.
Network transparent file system (i.e., 9P).
Private per-process namespaces.
Factotum-like auth system (i.e., no root user).
Pure UTF-8 everywhere.
Extremely lightweight processes.
Automatic snapshot and de-duplicating storage (a la Venti+Fossil).
And I guess many others, but this would be enough to make me quite happy.
This is not an 'OS feature' per se, but I would love to have a GUI with mouse-chording.
None.
I'd prefer for a new consumer OS, especially one targeted at Netbooks, to be very very good at doing the things that we already want OSes to be able to do rather than having time spent on features that are, by their nature, experimental.
(Of course, I'd be totally un-bothered by features I wasn't forced to use to develop on the platform; other people's toys are welcome as long as they don't make my job harder.)
I really think that Google might look to Plan 9 for inspiration, actually. Hearsay (the Internet) claims that several of those who initially developed UNIX and then later scrapped it for a better design (Plan 9) are employed by Google. Google is also hosting its own version of Inferno, but I am not sure whether this is any central part of their plan. Further "evidence" could be that the Plan 9 authorization system (p9auth) for Linux was published by a Google researcher. The third "evidence" would be that Google claims Chrome OS will have a novel security architecture.
The authorization system seems to me to be one of the GREATEST parts of Plan 9 that could be adopted right now (/net would also be nice, but there is no working code for that yet). The idea that a program needing root access gets only limited access to the parts determined by the authorization server is definitely a great step forward compared to the now-prevalent user/superuser/root division in Linux, where "man in the middle" attacks can (theoretically) be carried out by gaining full root access (as opposed to access limited by the authorization server) via a bug in a program granted root.

Website Hardware Scaling

So I was listening to the latest Stack Overflow podcast (episode 19), and Jeff and Joel talked a bit about scaling server hardware as a website grows. From what Joel was saying, the first few steps are pretty standard:
One server running both the webserver and the database (the current Stackoverflow setup)
One webserver and one database server
Two load-balanced webservers and one database server
They didn't talk much about what comes next though. Do you add more webservers? Another database server? Replicate this three-machine cluster in a different datacenter for redundancy? Where does a web startup go from here in the hardware department?
A reasonable setup supporting an "average" web application might evolve as follows:
Single combined application/database server
Separate database on a different machine
Second application server, with DNS round-robin (poor man's load balancing) or a software balancer such as Perlbal
Second, replicated database server (for read loads; requires some application logic changes so that eligible database reads go to a slave - see the sketch after this list)
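As an illustration of that last application-logic change, a minimal read/write-splitting sketch might look like this (sqlite3 files stand in for a real primary and replica so the example runs anywhere):

    import sqlite3

    primary = sqlite3.connect("primary.db")   # stands in for the master
    replica = sqlite3.connect("replica.db")   # stands in for a read-only slave

    def run_query(sql, params=(), can_be_stale=False):
        # Eligible reads (listings, search pages) go to the replica; writes,
        # and reads that must see the latest write, stay on the primary.
        conn = replica if can_be_stale else primary
        rows = conn.execute(sql, params).fetchall()
        conn.commit()
        return rows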
At this point, evaluating the current state of affairs would help to determine a better scaling path. For example, if read load is high and content doesn't change too often, it might be better to emphasise caching and introduce dedicated front-end caches, e.g. Squid, to avoid unneeded database reads, although you will need to consider how to maintain cache coherency, typically in the application.
On the other hand, if content changes reasonably often, then you will probably prefer a more spread-out solution; introduce a few more application servers and database slaves to help mitigate the effects, and use object caching, such as memcached, to avoid hitting the database for the less volatile content.
For most sites, this is probably enough, although if you do become a global phenomenon, then you'll probably want to start considering having hardware in regional data centres, and using tricks such as geographic load balancing to direct visitors to the closest "cluster". By that point, you'll probably be in a position to hire engineers who can really fine-tune things.
Probably the most valuable scaling advice I can think of would be to avoid worrying about it all far too soon; concentrate on developing a service people are going to want to use, and making the application reasonably robust. Some easy early optimisations are to make sure your database design is fairly solid, and that indexes are set up so you're not doing anything painfully crazy; also, make sure the application emits cache-control headers that direct browsers on how to cache the data. Doing this sort of work early on in the design can yield benefits later, especially when you don't have to rework the entire thing to deal with cache coherency issues.
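For example, emitting sensible cache-control headers is usually a one-liner per response; a minimal sketch (shown here with Flask; the routes and max-age values are made up):

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/assets/logo.png")
    def logo():
        resp = make_response(open("logo.png", "rb").read())
        resp.headers["Content-Type"] = "image/png"
        # Let browsers and intermediaries keep this for a day.
        resp.headers["Cache-Control"] = "public, max-age=86400"
        return resp

    @app.route("/account")
    def account():
        resp = make_response("per-user page")
        # Never let a shared cache serve one user's page to another.
        resp.headers["Cache-Control"] = "private, no-store"
        return resp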
The second most valuable piece of advice I want to put across is that you shouldn't assume what works for some other web site will work for you; check your logs, run some analysis on your traffic and profile your application - see where your bottlenecks are and resolve them.
Plenty of Fish architecture
Some interesting videos:
YouTube scalability
Interview with Dan Farino, System Architect at MySpace
Joel mentioned adding a second datacenter, with the same setup, and then assigning your users randomly to each. Changes to the data are logged and sent from one location to the other, so that both locations contain all the data.
The talk "Scalable Web Architectures: Common Patterns & Approaches" by Cal Henderson (Yahoo) at the Web 2.0 Expo was quite interesting. I thought there was a video, but I could not find it. Here are the slides:
http://www.slideshare.net/techdude/scalable-web-architectures-common-patterns-and-approaches
A likely next step would be a cluster of web servers (a web farm) and a clustered system of database servers (replication, Oracle RAC, etc.).
If you're interested in caching and are using .NET, look into the Caching Application Block in the Enterprise Library (and of course use it along with the other points above).