I see the term "premature optimization" used a lot, but I feel like most people use it out of laziness or ignorance. For instance, I was reading this article:
http://blogs.msdn.com/b/ricom/archive/2006/09/07/745085.aspx
where he talks about the decisions he makes when implementing the types his app needs.
If it were me raising these considerations for code we still needed to write, other programmers would think either:
- I am thinking way too far ahead when nothing has been written yet, and am thus prematurely optimizing.
- I am over-thinking insignificant details when no slowdowns or performance problems have been experienced.
or both, and they would suggest just implementing it and not worrying about these questions until they become a problem.
Which approach is preferable?
How do you draw the line between premature optimization and informed decision-making for a performance-critical application, before any implementation is done?
Note that optimization is not free (as in beer). So before optimizing anything, you should be sure it's worth it.
That Point3D type you linked to seems like the cornerstone of something, and the case for optimization was probably obvious.
Just like the creators of the .NET library didn't need any measurements before they started optimizing System.String. They would still have to measure as they went, though.
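The article weighs exactly that kind of implementation decision. One representative trade-off, sketched here under my own assumptions rather than taken from the article, is whether a small, heavily used type like Point3D should be a value type or a reference type:

```csharp
// Hypothetical sketch of the kind of choice at stake (not the article's code):
// should a small, heavily used type like Point3D be a struct or a class?

// As a struct (value type), a Point3D[] with a million elements is one
// contiguous block of 24-byte values: no per-object headers, no extra
// indirection, and nothing for the garbage collector to chase.
public struct Point3D
{
    public double X, Y, Z;

    public Point3D(double x, double y, double z)
    {
        X = x;
        Y = y;
        Z = z;
    }
}

// As a class (reference type), the same array would hold a million references,
// each pointing at a separate heap object with its own header, which costs
// memory, locality, and GC work.
public class Point3DRef
{
    public double X, Y, Z;

    public Point3DRef(double x, double y, double z)
    {
        X = x;
        Y = y;
        Z = z;
    }
}
```

Which choice is right depends on how many points get created and how they flow through the code, which is exactly the sort of up-front, informed decision being discussed.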
But most code does not play a significant role in the performance of the end product, and that means any effort spent optimizing it is wasted.
Besides all that, most 'premature optimizations' are untested/unmeasured hacks.
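As a hedged illustration of the difference between a hunch and a measurement (the example is mine, not the answer's): whether switching a membership test from a List to a HashSet is worth anything depends entirely on how large the collection is, and a few lines of Stopwatch timing settle that faster than arguing about it.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class LookupTiming
{
    static void Main()
    {
        // Try a small and a larger collection: the "obvious" optimization
        // (switching to a HashSet) only pays off once the collection grows.
        foreach (int size in new[] { 10, 10_000 })
        {
            var list = Enumerable.Range(0, size).ToList();
            var set = new HashSet<int>(list);
            const int lookups = 50_000;

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < lookups; i++)
                list.Contains(i % size);          // linear scan
            sw.Stop();
            Console.WriteLine($"size {size,6}: List.Contains    {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            for (int i = 0; i < lookups; i++)
                set.Contains(i % size);           // hash lookup
            sw.Stop();
            Console.WriteLine($"size {size,6}: HashSet.Contains {sw.ElapsedMilliseconds} ms");
        }
        // A real decision would use a proper benchmark harness, but even this
        // crude timing is more information than a hunch.
    }
}
```

For the small collection the difference is usually too small to measure this way, which is the point: without a number, the "optimization" is just an untested hack.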
Optimization is premature when you have less than 10 years of coding experience.
Optimization is tricky. Consider the following examples:
My bottom line here is simple. Optimization is a broad term. When people talk about premature optimization, they are not telling you to just do the first thing that comes to mind without considering the complete picture. They are saying you should:
It really all boils down to your experience. If you are an expert in image processing and someone asks you to do something you have done ten times before, you will probably push all your known optimizations in right from the beginning, and that would be ok. Premature optimization is when you try to optimize something without knowing whether it needs optimization to begin with. The reason is simple: it's risky, it wastes your time, and the result will be less maintainable. So unless you're experienced and have been down that road before, don't optimize if you don't know there's a problem.
Optimization is premature if:
- Your application isn't doing anything time-critical. (Which means, if you're writing a program that adds up 500 numbers in a file, the word "optimization" shouldn't even pop into your brain, since all it'll do is waste your time.)
- You're doing something time-critical in something other than assembly, and still worrying whether `i++; i++;` is faster than `i += 2`... if it's really that critical, you'd be working in assembly and not wasting time worrying about this. (Even then, this particular example most likely won't matter.)
- You have a hunch that one thing might be a bit faster than the other, but you need to look it up. For example, if something is bugging you about whether `StopWatch` is faster or `Environment.TickCount`, it's premature optimization, since if the difference was bigger, you'd probably be more sure and wouldn't need to look it up.
If you have a guess that something might be slow but you're not too sure, just put a `//NOTE: Performance?` comment, and if you later run into bottlenecks, check such places in your code (a short sketch of this follows below). I personally don't worry about optimizations that aren't too obvious; I just use a profiler later, if I need to. Another technique:
I just run my program, randomly break into it with the debugger, and see where it stopped -- wherever it stops is likely a bottleneck, and the more often it stops there, the worse the bottleneck. It works almost like magic. :)
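To tie the `//NOTE: Performance?` suggestion above to something concrete, here is a small hypothetical sketch (the method and the numbers are mine, not the answer's): flag the suspect spot with the comment, and only reach for a timing check or a profiler once the code actually shows up as slow.

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class ReportBuilder
{
    // Hypothetical example: string concatenation in a loop *might* be slow for
    // huge inputs, but until that actually shows up, a note is all it gets.
    public static string BuildCsv(string[] rows)
    {
        //NOTE: Performance? Concatenation in a loop could hurt for very large reports.
        var text = "";
        foreach (var row in rows)
            text += row + Environment.NewLine;
        return text;
    }

    static void Main()
    {
        var rows = Enumerable.Range(0, 5_000)
                             .Select(i => $"row {i},value {i * 2}")
                             .ToArray();

        // Later, if this code path shows up in a profiler (or under the
        // random-break trick above), a quick timing confirms or dismisses the note.
        var sw = Stopwatch.StartNew();
        var csv = BuildCsv(rows);
        sw.Stop();
        Console.WriteLine($"Built {csv.Length:N0} chars in {sw.ElapsedMilliseconds} ms");
    }
}
```

The comment costs nothing to write and is easy to grep for later, which is exactly why it beats optimizing on a guess.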