I just posted some micro-optimization tricks I have found effective, and I received the following question:
Khaja Minhajuddin: Great article and very good tips. However, do you think we really need to invest our time in micro-optimizations when throwing more memory and CPU cycles at the problem would have the same effect? In today's age of cheap computing power, are micro-optimizations still needed?
Khaja, my position on this is very clear. For any software that has human users (which is to say, for any software), it is the responsibility of programmers to strive for optimal performance. The reason is simple: nothing is more valuable than the time of a human user.
A concrete example: if micro-optimizations could make Visual Studio and its ecosystem run 25% faster across all use cases, it would certainly save me 15 minutes a day. In other words, I estimate that I spend one hour a day waiting for development tasks to complete (compilation, test runs, test coverage, startup, debug switching, static analysis…).
Another concrete example. I remember that back at PDC 2003, Microsoft bet on faster hardware for its upcoming Vista. Mr. Gates opened his speech with something like: in two years, hardware will be much faster… GPUs… CPU clocks… RAM access times… We all know the result: an OS that makes everything noticeably slower compared to its predecessor (startup time, file copying, program launch…). It seems Microsoft learned the lesson (the hard way), and for Windows 7 they now promise at least XP-level performance, and even better. But let's do some quick math:
- Number of Vista users: let's say 200M human users.
- Number of hours spent on a computer every day: let's say 3 hours.
We get: 600M human-hours per day.
- Let's say Vista is 15% slower than XP, which leads to a 2% time loss for users; over 3 hours, that is about three and a half minutes per day.
The result is 12M hours of user time spent waiting per day. In other words, each day almost 1,400 human-years (something like 19 entire lifetimes!) are wasted, just because some code's performance is not optimal! (OK, all this time might not be completely wasted; it is generally spent making coffee, going to the toilet, making small talk… [:)])
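To make the back-of-the-envelope figures above easy to check, here is a minimal sketch of the same arithmetic. The user count, daily hours, and 2% loss are the rough estimates from the text, not measured data:

```python
# Back-of-the-envelope: user time wasted per day by a 2% slowdown.
# All inputs are the rough estimates from the text, not measured data.
users = 200_000_000          # estimated number of Vista users
hours_per_day = 3            # hours each user spends on the computer daily
time_loss = 0.02             # fraction of that time lost waiting (2%)

wasted_minutes_per_user = hours_per_day * 60 * time_loss      # minutes/day/user
wasted_hours_per_day = users * hours_per_day * time_loss      # hours/day, all users
wasted_years_per_day = wasted_hours_per_day / (24 * 365)      # human-years/day

print(wasted_minutes_per_user)   # 3.6 (about three and a half minutes)
print(wasted_hours_per_day)      # 12000000.0 (12M hours)
print(round(wasted_years_per_day))  # 1370 (almost 1,400 human-years)
```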
Never think that fast CPUs are a reason to develop slower code, slower both than it used to be and than it could be. First, your users will hate you if you make them wait noticeably; second, CPUs don't get any faster nowadays (SSDs still do), cores are just being multiplied, which makes the paradigm completely different.
I underlined the word noticeably because, fortunately, with today's hardware code can do many tricky and complex things in a fraction of a second (a fraction of a second = not noticeable from the human user's point of view). Just imagine the amount of data processed per frame by any cheap CPU+GPU 3D console (at 50 frames per second!). I also underlined the word noticeably because human users care more about responsiveness than raw performance: if you have, say, 1,000 rows of a datagrid to compute, you can present the first 50 visible rows to the user and compute the invisible rows in the background, or even better, lazily compute the invisible rows.
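As a sketch of that last idea, lazy computation of off-screen rows can be as simple as memoizing the expensive per-row work and only invoking it for rows that actually scroll into view. The row count and `compute_row` workload here are hypothetical stand-ins for the datagrid example:

```python
from functools import lru_cache

# Hypothetical expensive per-row computation (the 1,000-row datagrid example).
@lru_cache(maxsize=None)
def compute_row(index):
    # Imagine aggregation, formatting, or a query here; memoized, so each
    # row is computed at most once, and only when it is first requested.
    return f"row {index}: computed"

def visible_rows(first, count):
    """Compute only the rows the user can currently see."""
    return [compute_row(i) for i in range(first, first + count)]

# At startup, only the 50 visible rows are computed, not all 1,000.
rows = visible_rows(0, 50)
print(len(rows))                          # 50
print(compute_row.cache_info().currsize)  # 50 rows computed so far

# When the user scrolls, newly visible rows are computed on demand;
# the overlap with already-computed rows is reused from the cache.
rows = visible_rows(40, 50)
print(compute_row.cache_info().currsize)  # 90 (rows 0..89)
```

The user perceives an instant display, while the remaining 950 rows never cost anything unless they are actually viewed.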