As Knuth said,
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
This is something that often comes up in Stack Overflow answers to questions like "which is the most efficient loop mechanism?", "SQL optimisation techniques?" and so on. The standard answer to these optimisation-tips questions is to profile your code first and see whether it's actually a problem; if it's not, then your new technique is unneeded.
My question is, if a particular technique is different but not particularly obscure or obfuscated, can that really be considered a premature optimisation?
Here's a related article by Randall Hyde called The Fallacy of Premature Optimization.
What you seem to be talking about is optimization like using a hash-based lookup container vs an indexed one like an array when a lot of key lookups will be done. This is not premature optimization, but something you should decide in the design phase.
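To make that concrete, here is a rough sketch (the names are made up, not from the answer above) of the kind of design-time choice being described: if key lookups dominate, pick the hash-based container up front.

users_list = [("alice", 1), ("bob", 2), ("carol", 3)]   # indexed container: fine for iteration
users_by_name = dict(users_list)                        # hash-based container: fine for key lookups

def find_id_linear(name):
    # O(n): walks the list on every call
    for user_name, user_id in users_list:
        if user_name == name:
            return user_id
    return None

def find_id_hashed(name):
    # O(1) on average: the right structure when lookups dominate
    return users_by_name.get(name)

The point is that this is a decision you make in the design phase, not a tweak you retrofit after profiling.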
The kind of optimization the Knuth rule is about is minimizing the length of the most common codepaths: optimizing the code that runs most often, for example by rewriting it in assembly or simplifying it and making it less general. But doing this is useless until you are certain which parts of the code actually need it, and optimizing will (or at least could) make the code harder to understand and maintain; hence "premature optimization is the root of all evil".
Knuth also says that rather than optimizing, it is always better to change the algorithms your program uses, the approach it takes to the problem. For example, whereas a little tweaking might buy you a 10% speed increase, fundamentally changing the way your program works might make it 10x faster.
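As a toy illustration of that last point (my own example, not Knuth's): micro-tweaking the nested-loop version below buys you a few percent at best, while switching to a set changes the algorithm from O(n^2) to O(n) and can easily be orders of magnitude faster on large inputs.

def has_duplicates_quadratic(items):
    # Compares every pair: tweaking this loop still leaves it O(n^2)
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # A different approach entirely: one set-membership test per element, O(n)
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False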
In reaction to a lot of the other comments posted on this question: algorithm selection != optimization
When programming, a number of parameters are vital: among them readability, maintainability, robustness, correctness and, of course, performance.
Optimisation (going for performance) often comes at the expense of other parameters, and must be balanced against the "loss" in these areas.
When you have the option of choosing well-known algorithms that perform well, the cost of "optimising" up-front is often acceptable.
Norman's answer is excellent. In practice, you routinely do some "premature optimizations" that are actually best practices, because doing otherwise is known to be totally inefficient.
For example, to add to Norman's list:
Avoid writing

for (i = 0; i < strlen(str); i++)

because strlen is a function call that walks the whole string, and it is called on every iteration of the loop. Hoist it out instead:

for (i = 0, l = strlen(str); i < l; i++)

which is still readable, so that's OK. And so on. But such micro-optimizations should never come at the cost of code readability.
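A Python analogue of the same idea (my example, not from the answer above): building a string with join instead of repeated concatenation in a loop costs nothing in readability.

parts = ["alpha", "beta", "gamma"]

result = ""
for part in parts:
    result += part          # each += may copy the whole string built so far

result = "".join(parts)     # the idiomatic version: one pass, still perfectly readable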
From a database perspective, not to consider optimal design at the design stage is foolhardy at best. Databases do not refactor easily. Once they are poorly designed (and a design that doesn't consider optimization is poorly designed, no matter how you might try to hide behind the nonsense of "premature optimization"), it is almost never possible to recover, because the database is too fundamental to the operation of the whole system. It is far less costly to design correctly for the situation you expect than to wait until there are a million users and people are screaming because you used cursors throughout the application. Other optimizations, such as writing sargable code and selecting what look to be the best possible indexes, only make sense to do at design time.

There is a reason why quick and dirty is called that: it can never work well, so don't use quickness as a substitute for good code. Frankly, once you understand performance tuning in databases, you can write code that is likely to perform well in the same time or less than it takes to write code that performs badly. Not taking the time to learn what good-performing database design looks like is developer laziness, not best practice.
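As a rough illustration of the "sargable" point (my own sketch; the answer doesn't name a particular database), here is the difference between a predicate that wraps the indexed column in a function and one that leaves it bare, using the standard-library sqlite3 module:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, order_date TEXT)")
conn.execute("CREATE INDEX idx_orders_date ON orders(order_date)")
conn.executemany("INSERT INTO orders (order_date) VALUES (?)",
                 [("2023-03-%02d" % day,) for day in range(1, 29)])

# Non-sargable: wrapping the column in a function hides it from the index,
# so the engine has to look at every row.
non_sargable = "SELECT COUNT(*) FROM orders WHERE strftime('%Y', order_date) = '2023'"

# Sargable: a plain range on the indexed column lets the index do the work.
sargable = ("SELECT COUNT(*) FROM orders "
            "WHERE order_date >= '2023-01-01' AND order_date < '2024-01-01'")

for query in (non_sargable, sargable):
    # The sargable query plans as a SEARCH on the index; the other is a full SCAN.
    print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())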
I don't think that recognized best practices are premature optimizations. It's more about burning time on the "what ifs", the potential performance problems that depend on the usage scenarios. A good example: if you burn a week trying to optimize reflection over an object before you have proof that it's a bottleneck, you are prematurely optimizing.
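A minimal sketch of what "getting proof first" can look like in practice (the function name is a placeholder, not from the answer): run the suspect path under a profiler and see whether it even shows up near the top.

import cProfile
import pstats

def handle_request():
    # Placeholder for the code you suspect, e.g. the reflection-heavy path
    ...

cProfile.run("handle_request()", "profile.out")
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(10)   # is it actually near the top?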
Here's the problem I see with the whole concept of avoiding premature optimization.
There's a disconnect between saying it and doing it.
I've done lots of performance tuning, squeezing large factors out of otherwise well-designed code, seemingly done without premature optimization. Here's an example.
In almost every case, the reason for the suboptimal performance is what I call galloping generality, which is the use of abstract multi-layer classes and thorough object-oriented design, where simple concepts would be less elegant but entirely sufficient.
And in the teaching material where these abstract design concepts are taught, such as notification-driven architecture and information hiding, where simply setting a boolean property of an object can have an unbounded ripple effect of activities, what is the reason given? Efficiency.
So, was that premature optimization or not?
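To make the "ripple effect" concrete, here is a small sketch of my own (not code from the answer) of what a notification-driven boolean property can turn into:

class Widget:
    def __init__(self):
        self._listeners = []
        self._enabled = False

    def subscribe(self, callback):
        self._listeners.append(callback)

    @property
    def enabled(self):
        return self._enabled

    @enabled.setter
    def enabled(self, value):
        self._enabled = value
        # The "simple" boolean assignment now runs every subscriber, and each
        # subscriber may in turn update other observables, so the cost of
        # flipping one bit is unbounded by design.
        for callback in self._listeners:
            callback(value)

w = Widget()
w.subscribe(lambda value: print("relayout..."))
w.subscribe(lambda value: print("repaint..."))
w.enabled = True   # one assignment, an arbitrary amount of downstream work

The plain alternative, a field plus an explicit update call at the points that actually need it, is less general but much cheaper on the hot path.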