When is optimisation premature?

As Knuth said,
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
This is something which often comes up in Stack Overflow answers to questions like "which is the most efficient loop mechanism", "SQL optimisation techniques?" (and so on). The standard answer to these optimisation-tips questions is to profile your code first and see whether it is actually a problem; if it isn't, then the new technique is unneeded.
My question is, if a particular technique is different but not particularly obscure or obfuscated, can that really be considered a premature optimisation?
Here's a related article by Randall Hyde called The Fallacy of Premature Optimization.

Don Knuth started the literate programming movement because he believed that the most important function of computer code is to communicate the programmer's intent to a human reader. Any coding practice that makes your code harder to understand in the name of performance is a premature optimization.
Certain idioms that were introduced in the name of optimization have become so popular that everyone understands them and they have become expected, not premature. Examples include
Using pointer arithmetic instead of array notation in C, including the use of such idioms as
for (p = q; p < lim; p++)
Rebinding global variables to local variables in Lua, as in
local table, io, string, math = table, io, string, math
Beyond such idioms, take shortcuts at your peril.
All optimization is premature unless
A program is too slow (many people forget this part).
You have a measurement (profile or similar) showing that the optimization could improve things.
(It's also permissible to optimize for memory.)
Direct answer to question:
If your "different" technique makes the program harder to understand, then it's a premature optimization.
EDIT: In response to comments, using quicksort instead of a simpler algorithm like insertion sort is another example of an idiom that everyone understands and expects. (Although if you write your own sort routine instead of using the library sort routine, one hopes you have a very good reason.)

IMHO, 90% of your optimization should occur at the design stage, based on perceived current and, more importantly, future requirements. If you have to take out a profiler because your application doesn't scale to the required load, you have left it too late, and IMO you will waste a lot of time and effort while failing to correct the problem.
Typically the only optimizations that are worthwhile are those that gain you an order of magnitude performance improvement in terms of speed, or a multiplier in terms of storage or bandwidth. These types of optimizations typically relate to algorithm selection and storage strategy, and are extremely difficult to retrofit into existing code. They may go as deep as influencing the decision on the language in which you implement your system.
So my advice, optimize early, based on your requirements, not your code, and look to the possible extended life of your app.

If you haven't profiled, it's premature.

My question is, if a particular technique is different but not particularly obscure or obfuscated, can that really be considered a premature optimisation?
Um... So you have two techniques ready at hand, identical in cost (same effort to use, read, modify) and one is more efficient. No, using the more efficient one would not, in that case, be premature.
Interrupting your code-writing to look for alternatives to common programming constructs / library routines on the off-chance that there's a more efficient version hanging around somewhere even though for all you know the relative speed of what you're writing will never actually matter... That's premature.

Here's the problem I see with the whole concept of avoiding premature optimization.
There's a disconnect between saying it and doing it.
I've done lots of performance tuning, squeezing large factors out of otherwise well-designed code, seemingly done without premature optimization.
Here's an example.
In almost every case, the reason for the suboptimal performance is what I call galloping generality, which is the use of abstract multi-layer classes and thorough object-oriented design, where simple concepts would be less elegant but entirely sufficient.
And in the teaching material where these abstract design concepts are taught (such as notification-driven architecture, or information hiding, where simply setting a boolean property of an object can have an unbounded ripple effect of activity), what is the reason given? Efficiency.
So, was that premature optimization or not?

First, get the code working. Second, verify that the code is correct. Third, make it fast.
Any code change that is done before stage #3 is definitely premature. I am not entirely sure how to classify design choices made before that (like using well-suited data structures), but I prefer to veer towards using abstractions that are easy to program with rather than those that perform well, until I am at a stage where I can start profiling and have a correct (though frequently slow) reference implementation to compare results with.
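One concrete way to keep that reference implementation honest is to test the tuned version against it. A minimal sketch in Java, where referenceSort and optimizedSort are hypothetical stand-ins for the slow-but-correct and tuned routines:

import java.util.Arrays;
import java.util.Random;

public class ReferenceCheck {
    // Slow but obviously correct reference: delegate to the library sort.
    static int[] referenceSort(int[] in) {
        int[] out = in.clone();
        Arrays.sort(out);
        return out;
    }

    // Hypothetical "optimized" routine under development; here just a stand-in.
    static int[] optimizedSort(int[] in) {
        int[] out = in.clone();
        Arrays.sort(out); // replace with the tuned implementation later
        return out;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int trial = 0; trial < 1000; trial++) {
            int[] data = rnd.ints(1000, -100, 100).toArray();
            if (!Arrays.equals(referenceSort(data), optimizedSort(data))) {
                throw new AssertionError("optimized version diverged on trial " + trial);
            }
        }
        System.out.println("optimized version matches the reference");
    }
}

As long as the two keep agreeing on random inputs, stage #3 can proceed without silently breaking stage #2.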

From a database perspective, not to consider optimal design at the design stage is foolhardy at best. Databases do not refactor easily. Once a database is poorly designed (and a design that doesn't consider optimization is poorly designed, no matter how you might try to hide behind the nonsense of "premature optimization"), you are almost never able to recover from that, because the database is too fundamental to the operation of the whole system. It is far less costly to design correctly for the situation you expect than to wait until there are a million users and people are screaming because you used cursors throughout the application.
Other optimizations, such as writing sargable code and selecting what look to be the best possible indexes, only make sense to do at design time. There is a reason quick and dirty is called that: it never works well, so don't use quickness as a substitute for good code. Also, frankly, when you understand performance tuning in databases, you can write code that is likely to perform well in the same time or less than it takes to write code that doesn't perform well. Not taking the time to learn good-performing database design is developer laziness, not best practice.

What you seem to be talking about is optimization like using a hash-based lookup container vs an indexed one like an array when a lot of key lookups will be done. This is not premature optimization, but something you should decide in the design phase.
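For illustration, a minimal Java sketch of that design-phase choice (class and method names are invented): the same key lookup done by scanning an array versus by a hash-based container built once up front.

import java.util.HashMap;
import java.util.Map;

public class LookupChoice {
    // Design-phase choice A: linear scan, O(n) per lookup.
    static int indexOfKey(String[] keys, String wanted) {
        for (int i = 0; i < keys.length; i++) {
            if (keys[i].equals(wanted)) return i;
        }
        return -1;
    }

    // Design-phase choice B: hash-based index, O(1) expected per lookup.
    static Map<String, Integer> buildIndex(String[] keys) {
        Map<String, Integer> index = new HashMap<>();
        for (int i = 0; i < keys.length; i++) {
            index.put(keys[i], i);
        }
        return index;
    }

    public static void main(String[] args) {
        String[] keys = {"alpha", "beta", "gamma"};
        Map<String, Integer> index = buildIndex(keys);
        System.out.println(indexOfKey(keys, "gamma"));        // scan
        System.out.println(index.getOrDefault("gamma", -1));  // hash lookup
    }
}

With a handful of lookups the scan is fine; with many lookups over a large key set, the hash map is the structure to pick at design time.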
The kind of optimization the Knuth rule is about is minimizing the length of the most common codepaths: optimizing the code that runs most often, for example by rewriting it in assembly or by simplifying it and making it less general. Doing this is of no use until you are certain which parts of the code need that kind of optimization, and the optimization will (or at least could) make the code harder to understand or maintain; hence "premature optimization is the root of all evil".
Knuth also says it is always better, instead of optimizing, to change the algorithms your program uses, the approach it takes to a problem. For example, whereas a little tweaking might give you a 10% increase in speed, fundamentally changing the way your program works might make it 10x faster.
In reaction to a lot of the other comments posted on this question: algorithm selection != optimization

The point of the maxim is that, typically, optimization is convoluted and complex. And typically, you the architect/designer/programmer/maintainer need clear and concise code in order to understand what is going on.
If a particular optimization is clear and concise, feel free to experiment with it (but do go back and check whether that optimization is effective). The point is to keep the code clear and concise throughout the development process, until the benefits of performance outweigh the induced costs of writing and maintaining the optimizations.

Optimization can happen at different levels of granularity, from very high-level to very low-level:
Start with a good architecture, loose coupling, modularity, etc.
Choose the right data structures and algorithms for the problem.
Optimize for memory, trying to fit more code/data in the cache. The memory subsystem is 10 to 100 times slower than the CPU, and if your data gets paged to disk, it's 1000 to 10,000 times slower. Being cautious about memory consumption is more likely to provide major gains than optimizing individual instructions.
Within each function, make appropriate use of flow-control statements. (Move loop-invariant expressions outside of the loop body, put the most common value first in a switch/case, etc.; see the sketch after this list.)
Within each statement, use the most efficient expressions yielding the correct result. (Multiply vs. shift, etc)
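As a small illustration of the flow-control point above, a sketch in Java (the function names are made up) of hoisting a loop-invariant expression out of the loop body:

public class HoistInvariant {
    // Before: the invariant scale factor is recomputed on every iteration.
    static double sumScaledNaive(double[] values, double factor) {
        double sum = 0;
        for (int i = 0; i < values.length; i++) {
            sum += values[i] * Math.sqrt(factor); // Math.sqrt(factor) never changes
        }
        return sum;
    }

    // After: compute the invariant once, outside the loop.
    static double sumScaledHoisted(double[] values, double factor) {
        double scale = Math.sqrt(factor);
        double sum = 0;
        for (int i = 0; i < values.length; i++) {
            sum += values[i] * scale;
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] v = {1.0, 2.0, 3.0};
        System.out.println(sumScaledNaive(v, 4.0));   // 12.0
        System.out.println(sumScaledHoisted(v, 4.0)); // 12.0
    }
}

A good optimizer or JIT may perform this particular hoist on its own, which is part of why such tweaks belong after the higher-level work.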
Nit-picking about whether to use a divide expression or a shift expression isn't necessarily premature optimization. It's only premature if you do so without first optimizing the architecture, data structures, algorithms, memory footprint, and flow-control.
And of course, any optimization is premature if you don't define a goal performance threshold.
In most cases, either:
A) You can reach the goal performance threshold by performing high-level optimizations, so it's not necessary to fiddle with the expressions.
or
B) Even after performing all possible optimizations, you won't meet your goal performance threshold, and the low-level optimizations don't make enough difference in performance to justify the loss of readability.
In my experience, most optimization problems can be solved at either the architecture/design or data-structure/algorithm level. Optimizing for memory footprint is often (though not always) called for. But it's rarely necessary to optimize the flow control & expression logic. And in those cases where it actually is necessary, it's rarely sufficient.

I try to only optimise when a performance issue is confirmed.
My definition of premature optimisation is 'effort wasted on code that is not known to be a performance problem.' There is most definitely a time and place for optimisation. However, the trick is to spend the extra effort only where it counts to the performance of the application, and only where the performance benefit outweighs the added cost.
When writing code (or a DB query) I strive to write 'efficient' code (i.e. code that performs its intended function quickly and completely, with the simplest logic that is reasonable). Note that 'efficient' code is not necessarily the same as 'optimised' code. Optimisations often introduce additional complexity into code, which increases both the development and maintenance cost of that code.
My advice: Try to only pay the cost of optimisation when you can quantify the benefit.

When programming, a number of parameters are vital. Among these are:
Readability
Maintainability
Complexity
Robustness
Correctness
Performance
Development time
Optimisation (going for performance) often comes at the expense of other parameters, and must be balanced against the "loss" in these areas.
When you have the option of choosing well-known algorithms that perform well, the cost of "optimising" up-front is often acceptable.

Norman's answer is excellent. Somehow, you routinely do some "premature optimizations" which are, actually, best practices, because doing otherwise is known to be totally inefficient.
For example, to add to Norman's list:
Using StringBuilder concatenation in Java (or C#, etc.) instead of String + String in a loop (sketched after this list);
Avoiding loops in C like for (i = 0; i < strlen(str); i++), because strlen here is a function call that walks the whole string on every iteration;
It seems that in most JavaScript implementations it is faster, too, to do for (i = 0, l = str.length; i < l; i++), and it is still readable, so OK.
And so on. But such micro-optimizations should never come at the cost of readability of code.
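For the first point above, a minimal Java sketch (the loop bound is arbitrary) showing why the StringBuilder idiom is expected rather than premature:

public class ConcatSketch {
    // Roughly O(n^2): each += copies the whole string built so far.
    static String withPlus(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += i + ",";
        }
        return s;
    }

    // Roughly O(n): appends into a growable buffer.
    static String withBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(withPlus(5));    // 0,1,2,3,4,
        System.out.println(withBuilder(5)); // 0,1,2,3,4,
    }
}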

The need to use a profiler should be left for extreme cases. The engineers of the project should be aware of where performance bottlenecks are.
I think "premature optimisation" is incredibly subjective.
If I am writing some code and I know that I should be using a Hashtable then I will do that. I won't implement it in some flawed way and then wait for the bug report to arrive a month or a year later when somebody is having a problem with it.
Redesign is more costly than optimising a design in obvious ways from the start.
Obviously some small things will be missed the first time around but these are rarely key design decisions.
Therefore: NOT optimising a design is IMO a code smell in and of itself.

It's worth noting that Knuth's original quote came from a paper he wrote promoting the use of goto in carefully selected and measured areas as a way to eliminate hotspots. His quote was a caveat he added to justify his rationale for using goto in order to speed up those critical loops.
[...] again, this is a noticeable saving in the overall running speed, if, say, the average value of n is about 20, and if the search routine is performed about a million or so times in the program. Such loop optimizations [using gotos] are not difficult to learn and, as I have said, they are appropriate in just a small part of a program, yet they often yield substantial savings. [...]
And continues:
The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by pennywise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a oneshot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies [i.e., goto statements in this context].
Keep in mind how he used "optimized" in quotes (the software probably isn't actually efficient). Also note how he isn't just criticizing these "pennywise-and-pound-foolish" programmers, but also the people who react by suggesting you should always ignore small inefficiencies. Finally, to the frequently-quoted part:
There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say 97% of the time; premature optimization is the root of all evil.
... and then some more about the importance of profiling tools:
It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail. After working with such tools for seven years, I've become convinced that all compilers written from now on should be designed to provide all programmers with feedback indicating what parts of their programs are costing the most; indeed, this feedback should be supplied automatically unless it has been specifically turned off.
People have misused his quote all over the place, often suggesting that micro-optimizations are premature, when his entire paper was advocating micro-optimizations! One of the groups he was criticizing, the people who echo this "conventional wisdom" (as he put it) of always ignoring efficiencies in the small, often misuse the very quote that was originally directed, in part, against them and their habit of discouraging all forms of micro-optimization.
Yet it was a quote in favor of appropriately applied micro-optimizations, when used by an experienced hand holding a profiler. A modern analogue might be, "People shouldn't be taking blind stabs at optimizing their software, but custom memory allocators can make a huge difference when applied in key areas to improve locality of reference," or, "Handwritten SIMD code using an SoA rep is really hard to maintain and you shouldn't be using it all over the place, but it can churn through memory much faster when applied appropriately by an experienced, measurement-guided hand."
Any time you're trying to promote carefully-applied micro-optimizations as Knuth promoted above, it's good to throw in a disclaimer to discourage novices from getting too excited and blindly taking stabs at optimization, like rewriting their entire software to use goto. That's in part what he was doing. His quote was effectively a part of a big disclaimer, just like someone doing a motorcycle jump over a flaming fire pit might add a disclaimer that amateurs shouldn't try this at home while simultaneously criticizing those who try without proper knowledge and equipment and get hurt.
What he deemed "premature optimizations" were optimizations applied by people who effectively didn't know what they were doing: didn't know if the optimization was really needed, didn't measure with proper tools, maybe didn't understand the nature of their compiler or computer architecture, and most of all, were "pennywise-and-pound-foolish", meaning they overlooked the big opportunities to optimize (save millions of dollars) by trying to pinch pennies, and all while creating code they can no longer effectively debug and maintain.
If you don't fit in the "pennywise-and-pound-foolish" category, then you aren't prematurely optimizing by Knuth's standards, even if you're using a goto in order to speed up a critical loop (something which is unlikely to help much against today's optimizers, but if it did, and in a genuinely critical area, then you wouldn't be prematurely optimizing). If you're actually applying whatever you're doing in areas that are genuinely needed and they genuinely benefit from it, then you're doing just great in the eyes of Knuth.

Premature optimization to me means trying to improve the efficiency of your code before you have a working system, and before you have actually profiled it and know where the bottleneck is. Even after that, readability and maintainability should come before optimization in many cases.

I don't think that recognized best practices are premature optimizations. It's more about burning time on the what-ifs, the potential performance problems that depend on usage scenarios. A good example: if you burn a week trying to optimize reflection over an object before you have proof that it is a bottleneck, you are prematurely optimizing.

Unless you find that you need more performance out of your application, due to either a user or business need, there's little reason to worry about optimizing. Even then, don't do anything until you've profiled your code. Then attack the parts which take the most time.

The way I see it, if you optimize something without knowing how much performance you can gain in different scenarios, it IS a premature optimization. The goal of code should really be making it as easy as possible for a human to read.

As I posted on a similar question, the rules of optimisation are:
1) Don't optimise
2) (for experts only) Optimise later
When is optimisation premature? Usually.
The exception is perhaps in your design, or in well encapsulated code that is heavily used. In the past I've worked on some time critical code (an RSA implementation) where looking at the assembler that the compiler produced and removing a single unnecessary instruction in an inner loop gave a 30% speedup. But, the speedup from using more sophisticated algorithms was orders of magnitude more than that.
Another question to ask yourself when optimising is "am I doing the equivalent of optimising for a 300 baud modem here?". In other words, will Moore's law make your optimisation irrelevant before too long. Many problems of scaling can be solved just by throwing more hardware at the problem.
Last but not least, it's premature to optimise before the program is actually running too slowly. If it's a web application you're talking about, you can run it under load to see where the bottlenecks are - but the likelihood is that you will have the same scaling problems as most other sites, and the same solutions will apply.
edit: Incidentally, regarding the linked article, I would question many of the assumptions made. Firstly, it's not true that Moore's law stopped working in the 90s. Secondly, it's not obvious that users' time is more valuable than programmers' time. Most users are (to say the least) not frantically using every CPU cycle available anyhow; they are probably waiting for the network to do something. Plus there is an opportunity cost when programmers' time is diverted from implementing something else to shaving a few milliseconds off something that the program does while the user is on the phone. Anything longer than that isn't usually optimisation, it's bug fixing.

Related

Large Switch Statements And Efficiency

I'm taking a day off banging my head against memory management and opengl-es to try and boost efficiency. Part of the app we're working on accesses vast tables of data. These data were supplied to us in the form of switch statements, mostly.
In all, I have four switch statements, each with 2000+ cases, expected to grow. How poor is the access time going to be to these? Is it worth looking for lower-hanging optimisation fruit for now or is this a big no-no for Objective-C compilers?
Switch statements are in general quite fast, since they compile down to simple integer comparisons or, when the cases are dense, a jump table.
If you really want to micro-optimise, certain types of data can be stored in C-arrays for extremely fast lookup using pointer arithmetic. This is something that you should only ever look into if you really need the extra speed - pointer arithmetic involves a lot of potential bugs, many of which can be quite hard to debug.
The real question is: have you done any profiling? Shark is a very effective tool when it comes to time profiling of iOS apps - use it, and see how much execution time is being spent on your switch case code. If it's less than 5-10%, there's probably no point in even considering optimisation.
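If profiling does show the switch statements to be hot, one common alternative is to move the data out of control flow and into a lookup table. A sketch of the idea in Java for illustration only (the original question is Objective-C, and the names and values here are invented):

import java.util.HashMap;
import java.util.Map;

public class TableInsteadOfSwitch {
    // Before: data encoded as control flow.
    static double densitySwitch(int materialId) {
        switch (materialId) {
            case 0:  return 1.00;
            case 1:  return 2.70;
            case 2:  return 7.85;
            default: return Double.NaN;
        }
    }

    // After: data stored as data; constant-time lookup, easy to load from a file.
    static final Map<Integer, Double> DENSITY = new HashMap<>();
    static {
        DENSITY.put(0, 1.00);
        DENSITY.put(1, 2.70);
        DENSITY.put(2, 7.85);
    }

    static double densityTable(int materialId) {
        return DENSITY.getOrDefault(materialId, Double.NaN);
    }

    public static void main(String[] args) {
        System.out.println(densitySwitch(2)); // 7.85
        System.out.println(densityTable(2));  // 7.85
    }
}

Beyond any speed difference, 2000+ entries are usually easier to maintain, and to load from a file, as data than as cases.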

How important is optimization?

I got into a bit of a debate yesterday with my boss about the proper role of optimization when building software. Essentially, his position was that optimization needs to be a primary concern during the entire process of development.
My opinion is that you need to make the right algorithmic decisions during development, but you should never be counting cycles during development. In fact, I feel so strongly about this I had to walk away from the conversation. I've seen too many bad programming decisions in the name of "optimization", and too much bad code defended with the excuse "this way is faster".
What does the StackOverflow.com community think?
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
- Donald Knuth
I think the premature optimization quote is used by too many to avoid thinking about the hard stuff concerning how well the application will run. I guarantee the users want you to think about how to design it so it will run as fast as possible.
This is not to say you should be timing everything, but the design phase is the easiest place to optimize and not cost lots of time later.
There are often several ways to do anything, you should pick in the design phase the one which is most likely to perform the best (if it turns out to be one of the times when it isn't the best, then optimize later). This should trump the need to have easy to read code.
If you aren't considering performance in the design phase, you aren't going to have a well designed system. That doesn't mean it should be the only concern (although in a database I'd rate it as 3rd in importance, right after data integrity and security), but trying to fix a system where poorly performing techniques were used throughout, because the developers thought they were easier to understand, is a nightmare. Being a user of such a system, where you have to wait for minutes every time you want to move from one screen to another, is a nightmare for everyone who is stuck with the badly designed system (developers really should spend all day, every day, for at least a week using their own systems!). It costs less to design properly than to fix later, and considering performance is critical to designing properly.
I work somewhere where the original developers drank the kool-aid about premature optimization and did everything the way they thought was simplest (but which in almost every case was the wrong choice from a performance perspective). Now we are at 10 times the size we were three years ago, every screen on every website takes 30 seconds or so to load (or worse, times out), and we are losing customers because of it. But changing it will be too hard, because at the base they designed the database without considering how it would perform, and redesigning a database with many, many gigabytes of data into a new structure is far too time consuming and costly. If it had been designed to perform from the start it would be both easier to maintain and faster for the clients. We aren't talking about the need to performance tune the top 10 slowest queries here; we are talking about the fact that the overall structure requires a drastic change (one that would affect virtually every query against the system) to perform well.
Yes, don't do micro-optimization until you need to, but please do the macro stuff. Consider whether this is the best way before you commit to the path. Don't write cursors to hit tables with millions of records when a set-based statement will do. Don't try to have as few tables as possible because that seems to be a more elegant solution when the tables are storing disparate items (such as people, places, and vehicles), causing every single query to hit the same table and every delete to check all sorts of foreign key tables that will never have a record for that type of entity (it takes minutes to delete one record from the main table in our database; it's a real joy when something goes wrong in an import, usually bad data from a client, and we have to delete 200,000 records, let me tell you).
Optimization is almost tautologically a tradeoff: you gain runtime efficiency at the cost of other things (readability, maintainability, flexibility, compile times, etc.). As such, it's really never a good idea to do unless you know what the tradeoff is and why it's worthwhile.
Even worse, thinking about "how do I do X fast" can be very distracting. Too much energy in that direction can easily lead to you missing out on method Y, which is much better (and often faster; this is how "optimization" can make your code slower). Particularly if you do too much of this on a big project from the beginning, it represents a lot of momentum. If you can't afford to overcome that momentum, you can easily become locked into a bad design because you can't afford the time to restructure it. This way lie dragons.
What your boss may be thinking of is more of an issue of writing bad code via inappropriate representations and algorithms. It's not really the same thing as optimizing, but an approach where you pay no attention whatsoever to appropriate data structures etc. can result in a codebase that is slow everywhere, and (much like the above "lock in") requires heroic effort to fix.
In general though, premature optimization really honestly is a terrible idea. Particularly when you end up with a complex, finely tuned, well documented (because that's the only way you can understand it) piece of code you end up not using anyway. And that's not even getting into the issue of subtle bugs that are often introduced when "optimizing"
[edit: pshaw, of course a Knuth quote encapsulates this well. That's what I get for typing too much]
Engineer throughout, optimize at the end.
Since we're going with pithy, I'll say that optimization is as important as the impact of not doing it.
I think the "premature optimization is the root of all evil" line has to be understood literally: it does not say when optimization is premature, and it does not say you should optimize only at the end. Just not too early. Also, "use the right algorithm, O(n^2) vs O(N)" is a bit dangerous if taken literally, because for many problems the N is actually small, etc...
I think it depends a lot on the type of software you are doing: some software is such that every part is very independent and can be optimized separately. But that's not always the case. For many (most?) applications, speed just does not matter at all; the brute-force but obviously correct way is the best one. But for projects where speed matters, it often has to be taken into account early. Maybe that's another possible interpretation of Knuth's saying: many applications don't need to be optimized at all, so just know which ones do and plan ahead.
Optimization is a primary concern through development when you have a good reason to expect that performance will be unfixably bad if optimization is a secondary concern.
This depends a lot on what kind of code you're writing, but there are often better reasons to believe that your code will be unfixably difficult to use, or maintain, or full of bugs, or late, if all those things become secondary to tweaking performance.
Bad managers say, "all of those things are our primary concerns". Good managers work to find out which are dangers for this project.
Of course, good design does have to consider all these things, and the earlier you have a back-of-the-envelope estimate of any of them, the better. If all your manager is saying is that too much of your code will be dog-slow if you never think about how fast it will run, then he's right. I just wouldn't say that makes optimization your "primary" concern.
If the USP of your software is that it's faster than your competitors', then optimization is a primary concern. With experience, you can often predict what sorts of operations will be the bottlenecks, design those with optimization in mind right from the start, and more-or-less ignore optimization elsewhere. A lot of projects won't even need this: they'll be fast enough without much effort, provided you use sensible algorithms and don't do anything stupid. "Don't do anything stupid" is always a primary concern, with no need to mention performance in particular.
I think that code needs to be, first and foremost, readable and understandable. So, optimisations that are done, should not be at the expense of readability. However, optimisation is often a trade-off.
Whether or not you should optimise your code depends on your application domain. If you are working on an embedded processor with only 8 MB of memory, then optimisation is probably something that every team member needs to keep in mind when writing code - optimising for space vs speed.
However, premature optimisation is not useful unless your system has been clearly specced and understood. This is because most programmers do not make good optimisation decisions unless they can factor in the influence of the overall system, including processor architectural factors such as cache memory, hardware threads, pipelines, etc.
From 2 years of building highly optimized Java code (code that genuinely needed to be optimized that way), I would say that there is a time-spent rule that governs optimization:
optimizing on the spot: 5%-10% of your development time, because you have to do it countless times (every single time you have to amend your design)
optimizing just when you have had it working: 2% of your development time (you do it only once)
going back to it and optimizing when it's too slow: 30% of your development time, because you have to plunge yourself back into the system
So I would come to the conclusion that there is a right time and a right way to optimize: do it entity by entity (class by class, if you have classes that each have a single, well-defined job to do and can be tested and optimized), test well, make sure the logic is working, optimize just afterward, and forget about that entity's implementation details forever.
When developing, just keep it simple. IMHO, most performance problems are caused by over-engineering - making mountains out of molehills, often because of wanting "the right algorithm".
Periodically, stress test with a big data set, profiling or (my favorite technique) manual random sampling. You find a problem, you fix it. You find another, you fix it.
That way you avoid creating slugs (slowness bugs), and when they do arise, you kill them.
Added: If I can just elaborate on point 1. OO is seemingly the law of the land, and it certainly has good reasons behind it. Unfortunately it causes many young programmers to feel that programming is all about having lots of data structure, with layers upon layers of abstraction. Not that those are inherently bad, but combine that with the natural tendency to assume that the time something takes is roughly proportional to the number of characters you have to type to invoke it, add the fact that this tendency multiplies over the layers (and besides, the machines are really fast), and it's easy to create a perfect storm of cycle-waste.
Quote from a friend: "It's easier to make a working system efficient than to make an efficient system work".
I think it is important to use smart practices and patterns from the start, but get the system actually running for small test cases then do performance analysis. Frequently the areas of code that have poor performance aren't anticipated at the beginning, so get some real data and then optimize the bottlenecking 3% (or 20%, or whatever it is).
I think your boss is a lot more right than you are.
All too often the user experience is lost, only to be "rediscovered" at the last possible moment, when performance activities are prohibitively costly and inefficient. Or when it is discovered that the batch program that will process today's transactions requires forty hours to run.
Things such as database organization, and when and when not to do which SELECTs, are examples of design decisions that can make or break an application. Still, you run the risk of a single programmer deciding to do otherwise, misinterpreting, or simply not understanding what to do. Following up on performance during an entire project decreases the risk that such things will happen. It also allows design decisions to be changed when there is a need for that, without putting the entire project at risk.
"You need to make the right algorithmic decisions during development" is certainly true, yet how many mainstream programmers are able to do that? Browsing the net for information does not guarantee finding a high-quality solution. "Right" could be interpreted as meaning that it is OK to choose a poor algorithm because it is easier to understand and implement (= less development time, lower cost) than a more complicated one (= more development time, higher cost).
The pendulum of quantity vs quality is almost always on the quantity side because more code per hour or faster development time means money in the short term. The quality side means money in the long term.
EDIT
This article discusses performance and optimization quite thoroughly.
Performance preacher Rico Mariani sums it up in the short statement "never give up your performance accidentally."
Premature optimization is the root of all evil... There is a fine balance to strike, but I would say 95% of the time you need to optimize at the end; however, there are decisions you can make early on to help prevent issues. For example, assume we are talking about an e-commerce web site. You have a requirement to display the catalog. Now you can grab all 100,000 items and display 50 of them, or you can grab just 50 from the database. These types of decisions should be made up front.
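A hedged sketch of that up-front decision using plain JDBC; the table, columns, and connection URL are invented, and the LIMIT/OFFSET syntax assumes a database such as PostgreSQL or MySQL:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CatalogPage {
    // Fetch only the page being displayed, instead of the whole catalog.
    static void printPage(String jdbcUrl, int pageSize, int pageNumber) throws SQLException {
        String sql = "SELECT id, name FROM catalog_items ORDER BY name LIMIT ? OFFSET ?";
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, pageSize);
            ps.setInt(2, pageSize * pageNumber);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        // Example: first page of 50 items from a hypothetical shop database.
        printPage("jdbc:postgresql://localhost/shop", 50, 0);
    }
}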
Cycle counting should only be done when a problem has been identified.
Your boss is partly right: optimisation does need to be considered throughout the development lifecycle, but it is rarely the primary concern. Also, the term 'optimisation' is vague on its own; you optimise for something, which could be 'memory', 'speed', 'usability', 'maintainability' and so on.
However, the OP is right that counting cycles is pointless for many projects. For most PC applications the CPU is never the bottleneck. Also, IA32 implementations are not consistent: what works well on one microarchitecture performs poorly on another. Cycle counting should only ever be done when it will actually make a difference - usually in CPU-limited code or code with very specific timing needs.
Optimisation, of any kind, must always be driven by hard evidence. Never assume anything about the system or how the code is behaving. In an ideal world, application performance / constraints will be specified in the initial product design and tools to monitor the application's performance during development will be added early on in the development phase to guide the programmers as the product is made.

Basic operations CPU time cost

I was wondering how to optimize loops for systems with very limited resources. Let's say, if we have a basic for loop, like (written in JavaScript):
for(var i = someArr.length - 1; i > -1; i--)
{
someArr[i]
}
I honestly don't know, isn't != cheaper than > ?
I would be grateful for any resources covering computing cost in context of basic operators, like the aforementioned, >>, ~, !, and so on.
Performance on a modern CPU is far from trivial. Here are a couple of things that complicate it:
Computers are fast. Your CPU can execute upwards of 6 billion instructions per second. So even the slowest instruction can be executed millions of times per second, meaning that it only really matters if you use it very often.
Modern CPUs have hundreds of instructions in flight simultaneously. They are pipelined, meaning that while one instruction is being read, another is reading from registers, a third one is executing, and a fourth one is writing back to a register. Modern CPUs have 15-20 such stages. On top of this, they can execute 3-4 instructions at the same time on each of these stages. And they can reorder these instructions. If the multiplication unit is being used by another instruction, perhaps we can find an addition instruction to execute instead, for example. So even if you have some slow instructions mixed in, their cost can be hidden very well most of the time, by executing other instructions while waiting for the slow one to finish.
Memory is hundreds of times slower than the CPU. The instructions being executed don't really matter if their cost is dwarfed by retrieval of data from memory. And even this isn't reliable, because the CPU has its own onboard caches to attempt to hide this cost.
So the short answer is "don't try to outsmart the compiler". If you are able to choose between two equivalent expressions, the compiler is probably able to do the same, and will pick the most efficient one. The cost of an instruction varies, depending on all the above factors. Which other instructions are executing, what data is in the CPU's cache, which precise CPU model is the code running on, and so on. Code that is super efficient in one case may be very inefficient in other cases. The compiler will try to pick the most generally efficient instructions, and schedule them as well as possible. Unless you know more than the compiler about this, you're unlikely to be able to do a better job of it.
Don't try such microoptimizations unless you really know what you're doing. As the above shows, low-level performance is a ridiculously complex subject, and it's very easy to write "optimizations" that result in far slower code. Or which just sacrifice readability on something that makes no difference at all.
Further, most of your code simply doesn't have a measurable impact on performance.
People generally love quoting (or misquoting) Knuth on this subject:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil
People often interpret this as "don't bother trying to optimize your code". If you actually read the full quote, some much more interesting consequences should become clear:
Most of the time, we should forget about microoptimizations. Most code is executed so rarely that optimizations won't matter. Keeping in mind the number of instructions a CPU can execute per second, it is obvious that a block of code has to be executed very often for optimizations in it to have any effect. So about 97% of the time, your optimizations will be a waste of time.
But he also says that sometimes (3% of the time), your optimizations will matter. And obviously, looking for those 3% is a bit like looking for a needle in a haystack. If you just decide to "optimize your code" in general, you're going to waste your time on the first 97%. Instead, you need to first locate the 3% that actually need optimizing.
In other words, run your code through a profiler, and let it tell you which code takes up the most CPU time. Then you know where to optimize. And then your optimizations are no longer premature.
It is extraordinarily unlikely that such micro-optimizations will make a noticeable difference to your code in any but the most extreme (real time embedded systems?) circumstances. Your time would probably be better served worrying about making your code readable and maintainable.
When in doubt, always begin by asking Donald Knuth:
http://shreevatsa.wordpress.com/2008/05/16/premature-optimization-is-the-root-of-all-evil/
Or, for a slightly less high-brow take on micro-optimization:
http://www.codinghorror.com/blog/archives/000185.html
Most comparisons have the same cost, because the processor simply compares the operands in all respects and then takes a decision based on the flags generated by that comparison, so the particular comparison condition doesn't matter at all. But some architectures try to accelerate this process based on the value you are comparing against, like comparisons against 0.
As far as I know, bitwise operations are the cheapest operations, slightly faster than addition and subtraction. Multiplication and division operations are a little more expensive, and comparison is the highest-cost operation.
That's like asking for a fish, when I would rather teach you to fish.
There are simple ways to see for yourself how long things take. My favorite is to just copy the code 10 times, and then wrap it in a loop of 10^8 times. If I run it and look at my watch, the number of seconds it takes translates to nanoseconds.
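A minimal sketch of that crude technique, written in Java for illustration (the statement being timed is just a placeholder):

public class CrudeTimer {
    public static void main(String[] args) {
        long reps = 100_000_000L;   // 10^8 outer iterations
        long sink = 0;              // accumulate a result so the work isn't optimized away

        long start = System.nanoTime();
        for (long r = 0; r < reps; r++) {
            // the statement under test, copied 10 times (10^9 executions total)
            sink += r & 1; sink += r & 1; sink += r & 1; sink += r & 1; sink += r & 1;
            sink += r & 1; sink += r & 1; sink += r & 1; sink += r & 1; sink += r & 1;
        }
        double seconds = (System.nanoTime() - start) / 1e9;

        // With 10^9 executions, elapsed seconds read directly as nanoseconds per execution.
        System.out.println("sink=" + sink + ", about " + seconds + " ns per execution");
    }
}

JIT warm-up and dead-code elimination can distort such numbers, so a dedicated harness (e.g. JMH) is more trustworthy, but the watch-and-loop trick is often enough to separate nanoseconds from microseconds.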
Saying don't do premature optimization is a "don't be". If you want a "do be" you could try a proactive performance tuning technique like this.
BTW my favorite way of coding your loop is:
for (i = N; --i >= 0;){...}
Premature optimization can be dangerous. The best approach would be to write your application without worrying about that, and then find the slow points and optimize those. If you are really worried about this, use a lower-level language. An interpreted language like JavaScript will cost you some processing power when compared to a lower-level language like C.
In this particular case, > vs != is probably not a performance issue. HOWEVER, > is generally the safer choice, because it prevents cases where a modification to the code sends it running off into the weeds, stuck in an infinite loop.

Compiler optimizations: Where/how can I get a feel for what the payoff is for different optimizations?

In my independent study of various compiler books and web sites, I am learning about many different ways that a compiler can optimize the code that is being compiled, but I am having trouble figuring out how much of a benefit each optimization will tend to give.
How do most compiler writers go about deciding which optimizations to implement first? Or which optimizations are worth the effort or not worth the effort? I realize that this will vary between types of code and even individual programs, but I'm hoping that there is enough similarity between most programs to say, for instance, that one given technique will usually give you a better performance gain than another technique.
I found when implementing textbook compiler optimizations that some of them tended to reverse the improvements made by other optimizations. This entailed a lot of work trying to find the right balance between them.
So there really isn't a good answer to your question. Everything is a tradeoff. Many optimizations work well on one type of code, but are pessimizations for other types. It's like designing a house - if you make the kitchen bigger, the pantry gets smaller.
The real work in building an optimizer is trying out the various combinations, benchmarking the results, and, like a master chef, picking the right mix of ingredients.
Tongue in cheek:
Hubris
Benchmarks
Embarrassment
More seriously, it depends on your compiler's architecture and goals. Here's one person's experience...
Go for the "big payoffs":
native code generation
register allocation
instruction scheduling
Go for the remaining "low hanging fruit":
strength reduction
constant propagation
copy propagation
Keep benchmarking.
Look at the output; fix anything that looks stupid.
It is usually the case that combining optimizations, or even repeating optimization passes, is more effective than you might expect. The benefit is more than the sum of the parts.
You may find that introduction of one optimization may necessitate another. For example, SSA with Briggs-Chaitin register allocation really benefits from copy propagation.
Historically, there are "algorithmic" optimizations from which the code should benefit in most cases, like loop unrolling (and compiler writers should implement those "documented" and "tested" optimizations first).
Then there are types of optimizations that could benefit from the type of processor used (like using SIMD instructions on modern CPUs).
See Compiler Optimizations on Wikipedia for a reference.
Finally, various types of optimizations can be evaluated by profiling the code or by accurately timing repeated executions.
I'm not a compiler writer, but why not just incrementally optimize portions of your code, profiling all the while?
My optimization scheme usually goes:
1) make sure the program is working
2) find something to optimize
3) optimize it
4) compare the test results with what came out from 1; if they are different, then the optimization is actually a breaking change.
5) compare the timing difference
Incrementally, I'll get it faster.
I choose which portions to focus on by using a profiler. I'm not sure what extra information you'll garner by asking the compiler writers.
This really depends on what you are compiling. There was a reasonably good discussion about this on the LLVM mailing list recently; it is of course somewhat specific to the optimizers they have available. They use abbreviations for a lot of their optimization passes; if you're not familiar with any of the acronyms they are tossing around, you can look at their passes page for documentation. Ultimately you can spend years reading academic papers on this subject.
This is one of those topics where academic papers (ACM perhaps?) may be one of the better sources of up-to-date information. The best thing to do if you really want to know could be to create some code in unoptimized form and some in the form that the optimization would take (loops unrolled, etc) and actually figure out where the gains are likely to be using a compiler with optimizations turned off.
It is worth noting that in many cases, compiler writers will NOT spend much time, if any, on ensuring that their libraries are optimized. Benchmarks tend to de-emphasize or even ignore library differences, presumably because you can just use different libraries. For example, the permutation algorithms in GCC are asymptotically* less efficient than they could be when trying to permute complex data. This relates to incorrectly making deep copies during calls to swap functions. This will likely be corrected in most compilers with the introduction of rvalue references (part of the C++0x standard). Rewriting the STL to be much faster is surprisingly easy.
*This assumes the size of the class being permuted is variable. E.g. permuting a vector of vectors of ints would slow down if the vectors of ints were larger.
One that can give big speedups but is rarely done is to insert memory prefetch instructions. The trick is to figure out what memory the program will be wanting far enough in advance, never ask for the wrong memory and never overflow the D-cache.

How to avoid the dangers of optimisation when designing for the unknown?

A two parter:
1) Say you're designing a new type of application and you're in the process of coming up with new algorithms to express the concepts and content -- does it make sense to attempt to actively not consider optimisation techniques at that stage, even if in the back of your mind you fear it might end up as O(N!) over millions of elements?
2) If so, say to avoid limiting cool functionality which you might be able to optimise once the proof-of-concept is running -- how do you stop yourself from this programmers habit of a lifetime? I've been trying mental exercises, paper notes, but I grew up essentially counting clock cycles in assembler and I continually find myself vetoing potential solutions for being too wasteful before fully considering the functional value.
Edit: This is about designing something which hasn't been done before (the unknown), when you're not even sure if it can be done in theory, never mind with unlimited computing power at hand. So answers along the line of "of course you have to optimise before you have a prototype because it's an established computing principle," aren't particularly useful.
I say all the following not because I think you don't already know it, but to provide moral support while you suppress your inner critic :-)
The key is to retain sanity.
If you find yourself writing a Theta(N!) algorithm which is expected to scale, then you're crazy. You'll have to throw it away, so you might as well start now finding a better algorithm that you might actually use.
If you find yourself worrying about whether a bit of Pentium code, that executes precisely once per user keypress, will take 10 cycles or 10K cycles, then you're crazy. The CPU is 95% idle. Give it ten thousand measly cycles. Raise an enhancement ticket if you must, but step slowly away from the assembler.
One thing to decide is whether the project is "write a research prototype and then evolve it into a real product", or "write a research prototype". With obviously an expectation that if the research succeeds, there will be another related project down the line.
In the latter case (which from comments sounds like what you have), you can afford to write something that only works for N<=7 and even then causes brownouts from here to Cincinnati. That's still something you weren't sure you could do. Once you have a feel for the problem, then you'll have a better idea what the performance issues are.
What you're doing, is striking a balance between wasting time now (on considerations that your research proves irrelevant) with wasting time later (because you didn't consider something now that turns out to be important). The more risky your research is, the more you should be happy just to do something, and worry about what you've done later.
My big answer is Test Driven Development. By writing all your tests up front, you force yourself to only write enough code to implement the behavior you are looking for. If timing and clock cycles become a requirement, then you can write tests to cover that scenario and then refactor your code to meet those requirements.
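One way to make such a timing requirement part of the test suite is JUnit 5's assertTimeout; the 200 ms budget and the findNearest routine below are invented for illustration:

import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;
import org.junit.jupiter.api.Test;

class SearchPerformanceTest {

    // Hypothetical code under test: a linear scan stands in for the real search.
    static int findNearest(int[] values, int target) {
        int best = values[0];
        for (int v : values) {
            if (Math.abs(v - target) < Math.abs(best - target)) best = v;
        }
        return best;
    }

    @Test
    void findNearestStaysWithinBudget() {
        int[] values = new int[100_000];
        for (int i = 0; i < values.length; i++) values[i] = i * 3;

        // Fails if the behaviour is wrong OR if it blows the (invented) 200 ms budget.
        int result = assertTimeout(Duration.ofMillis(200), () -> findNearest(values, 1_000));
        org.junit.jupiter.api.Assertions.assertEquals(999, result);
    }
}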
Like security and usability, performance is something that has to be considered from the beginning of the project. As such, you should definitely be designing with good performance in mind.
The old Knuth line is "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." O(N!) to O(poly(N)) is not a "small efficiency"!
The best way to handle type 1 is to start with the simplest thing that could possibly work (O(N!) cannot possibly work unless you're not scaling past a couple dozen elements!) and encapsulate it from the rest of the application so you could rewrite it to a better approach assuming that there is going to be a performance issue.
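A sketch of that encapsulation in Java (the interface and class names are made up): callers depend only on the interface, so the simplest-possible algorithm can be swapped for a better one later without touching them.

import java.util.ArrayList;
import java.util.List;

// The seam: the rest of the application codes against this contract.
interface SimilarityFinder {
    List<String> findSimilar(String query, List<String> corpus);
}

// Simplest thing that could possibly work; fine for the proof of concept.
class BruteForceFinder implements SimilarityFinder {
    @Override
    public List<String> findSimilar(String query, List<String> corpus) {
        List<String> hits = new ArrayList<>();
        for (String candidate : corpus) {
            if (candidate.contains(query)) hits.add(candidate); // plain O(n) scan
        }
        return hits;
    }
}

public class Prototype {
    public static void main(String[] args) {
        // Later, an indexed implementation can replace BruteForceFinder here.
        SimilarityFinder finder = new BruteForceFinder();
        List<String> corpus = List.of("premature optimization", "root of all evil", "profile first");
        System.out.println(finder.findSimilar("of", corpus));
    }
}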
Optimization isn't exactly a danger; its good to think about speed to some extent when writing code, because it stops you from implementing slow and messy solutions when something simpler and faster would do. It also gives you a check in your mind on whether something is going to be practical or not.
The worst thing that can happen is you design a large program explicitly ignoring optimization, only to go back and find that your entire design is completely useless because it cannot be optimized without completely rewriting it. This never happens if you consider everything when writing it--and part of that "everything" is potential performance issues.
"Premature optimization is the root of all evil" is the root of all evil. I've seen projects crippled by overuse of this concept. At my company we have a software program that broadcasts transport streams from disk on the network. It was originally created for testing purposes (so we would just need a few streams at once), but it was always in the program's spec requirements that it work for larger numbers of streams so it could later be used for video on demand.
Because it was written completely ignoring speed, it was a mess; it had tons of memcpys despite the fact that they should never be necessary, its TS processing code was absurdly slow (it actually parsed every single TS packet multiple times), and so forth. It handled a mere 40 streams at a time instead of the thousands it was supposed to, and when it actually came time to use it for VOD, we had to go back and spend a huge amount of time cleaning it up and rewriting large parts of it.
"First, make it run. Then make it run fast."
or
"To finish first, first you have to finish."
Slow existing app is usually better than ultra-fast non-existing app.
First of all, people claim that finishing is the only thing that matters (or almost).
But if you finish a product that has O(N!) complexity in its main algorithm, as a rule of thumb you did not finish it! You have an incomplete and unacceptable product for 99% of the cases.
A reasonable performance is part of a working product. A perfect performance might not be. If you finish a text editor that needs 6 GB of memory to write a short note, then you have not finished a product at all; you have only a waste of time on your hands. You must always remember that it is not only delivering code that makes a product complete; it is making it capable of supplying the customer's/users' needs. If you fail at that, it matters nothing that you finished writing the code on schedule.
So all optimizations that avoid a resulting useless product are due to be considered and applied as soon as they do not compromise the rest of the design and implementation process.
"actively not consider optimisation" sounds really weird to me. Usually 80/20 rule works quite good. If you spend 80% of your time to optimize program for less than 20% of use cases, it might be better to not waste time unless those 20% of use-cases really matter.
As for perfectionism, there is nothing wrong with it unless it starts to slow you down and makes you miss time-frames. The art of computer programming is an act of balancing between the beauty and the functionality of your applications. To help yourself, consider learning time-management. When you learn how to split and measure your work, it becomes easy to decide whether to optimize it right now or to create a working version first.
I think it is quite reasonable to forget about O(N!) worst case for an algorithm. First you need to determine that a given process is possible at all. Keep in mind that Moore's law is still in effect, so even bad algorithms will take less time in 10 or 20 years!
First optimize for Design -- e.g. get it to work first :-) Then optimize for performance. This is the kind of tradeoff python programmers do inherently. By programming in a language that is typically slower at run-time, but is higher level (e.g. compared to C/C++) and thus faster to develop, python programmers are able to accomplish quite a bit. Then they focus on optimization.
One caveat: if the time it takes to finish is so long that you can't determine if your algorithm is right, then it is a very good time to worry about optimization earlier upstream. I've encountered this scenario only a few times, but it is good to be aware of it.
Following on from onebyone's answer there's a big difference between optimising the code and optimising the algorithm.
Yes, at this stage optimising the code is going to be of questionable benefit. You don't know where the real bottlenecks are, you don't know if there is going to be a speed problem in the first place.
But being mindful of scaling issues even at this stage of the development of your algorithm/data structures etc. is not only reasonable but I suspect essential. After all there's not going to be a lot of point continuing if your back-of-the-envelope analysis says that you won't be able to run your shiny new application once to completion before the heat death of the universe happens. ;-)
I like this question, so I'm giving an answer, even though others have already answered it.
When I was in grad school, in the MIT AI Lab, we faced this situation all the time, where we were trying to write programs to gain understanding into language, vision, learning, reasoning, etc.
My impression was that those who made progress were more interested in writing programs that would do something interesting than do something fast. In fact, time spent worrying about performance was basically subtracted from time spent conceiving interesting behavior.
Now I work on more prosaic stuff, but the same principle applies. If I get something working I can always make it work faster.
I would caution however that the way software engineering is now taught strongly encourages making mountains out of molehills. Rather than just getting it done, folks are taught to create a class hierarchy, with as many layers of abstraction as they can make, with services, interface specifications, plugins, and everything under the sun. They are not taught to use these things as sparingly as possible.
The result is monstrously overcomplicated software that is much harder to optimize because it is much more complicated to change.
I think the only way to avoid this is to get a lot of experience doing performance tuning and in that way come to recognize the design approaches that lead to this overcomplication. (Such as: an over-emphasis on classes and data structures.)
Here is an example of tuning an application that has been written in the way that is generally taught.
I will give a little story about something that happened to me, but not really an answer.
I am developing a project for a client where one part of it is processing very large scans (images) on the server. When I wrote it I was looking for functionality, but I thought of several ways to optimize the code so it would be faster and use less memory.
Now an issue has arisen. During demos to potential clients for this software and during beta testing, it fails on the demo unit (a self-contained laptop) due to too much memory being used. It also fails on the dev server with really large files.
So was it an optimization, or was it a known future bug? Do I fix it or optimize it now? Well, that is to be determined, as there are other priorities as well.
It just makes me wish I had spent the time to optimize the code earlier on.
Think about the operational scenarios (use cases).
Say that we're making a pizza-shop finder gizmo.
The user turns on the machine. It has to show him the nearest pizza shop in meaningful time. It turns out our users want to know fast: in under 15 seconds.
So now, any idea you have, you think: is this ever, realistically, going to run in less than 15 seconds, minus all the other time spent doing important stuff?
Or you're a trading system: accurate sums. Less than a millisecond per trade if you can, please. (They'd probably accept 10 ms.) So, again: you look at every idea from the relevant scenario's point of view.
Say it's a phone app: it has to start in under (how many seconds?).
Demonstrations to customers from laptops are ALWAYS a scenario. We've got to sell the product.
Maintenance, where some person upgrades the thing, is ALWAYS a scenario.
So now, as an example: all the hard, AI heavy, lisp-customized approaches are not suitable.
Or for different strokes, the XML server configuration file is not user friendly enough.
See how that helps.
If I'm concerned about the code's ability to handle data growth, before I get too far along I try to set up sample data sets in large chunk increments to test it with, like:
1000 records
10000 records
100000 records
1000000 records
and see where it breaks or becomes un-usable. Then you can decide based on real data if you need to optimize or re-design the core algorithms.
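A minimal sketch of that kind of harness in Java; the sort here is just a placeholder for the real core algorithm, and the sizes mirror the list above:

import java.util.Arrays;
import java.util.Random;

public class GrowthHarness {
    public static void main(String[] args) {
        Random rnd = new Random(1);
        int[] sizes = {1_000, 10_000, 100_000, 1_000_000};

        for (int n : sizes) {
            int[] records = rnd.ints(n).toArray();   // stand-in for generating n records

            long start = System.nanoTime();
            Arrays.sort(records);                    // stand-in for the real core algorithm
            long millis = (System.nanoTime() - start) / 1_000_000;

            System.out.println(n + " records: " + millis + " ms");
        }
        // If the timings grow much faster than the data, redesign the core rather than tune details.
    }
}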