Do we need Micro-Optimization?

I just posted some micro-optimization tricks that I have found effective, and I got the following question:

Khaja Minhajuddin: Great article and very good tips. However, do you think we really need to invest our time in micro-optimizations when throwing more memory and CPU cycles at the problem would have the same effect? Do you think that in today’s age of cheap computing power, micro-optimizations are still needed?

Khaja, my position is very clear on this. For any software that has human users (or, better said, for any software), it is the responsibility of programmers to strive for optimal performance. The reason is simple: nothing is more valuable than the time of a human user.

A concrete example: if micro-optimizations could make Visual Studio and its ecosystem run 25% faster for all use cases, it would certainly save me 15 minutes a day. In other words, I estimate I wait one hour a day for development tasks to complete (compilation, test runs, test coverage, startup time, switching to debug, static analysis…).

Another concrete example: I remember that back at PDC 2003, Microsoft bet on faster hardware for its future Vista. Mr. Gates started his speech with something like: in two years, hardware will be much faster… GPU… CPU clock… RAM access time… We all know the result: an OS that makes everything noticeably slower compared to its predecessor (startup time, file copy, program startup…). It seems that Microsoft learned the lesson (the hard way), and now they promise at least XP-level performance for Windows 7, and even better. But let’s do some quick math:

  • number of Vista users: let’s say 200M human users

Multiply by

  • number of hours spent on a computer every day: let’s say 3 hours

We get: 600M human-hours per day

  • Let’s say Vista is 15% slower than XP, which leads to, say, a 2% overall time loss for its users (only part of a session is spent waiting on the machine); on 3 hours, that is about three and a half minutes per day.

The result is 12M hours of user time spent waiting per day. In other words, each day almost 1,400 human-years (something like 19 entire lives!) are wasted, just because some code’s performance is not optimal! (OK, all this time might not be completely wasted; it is generally spent making coffee, going to the toilet, making small talk… [:)])
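The arithmetic above can be checked in a few lines; the user base, daily hours, and 2% loss are the assumptions stated above, and the 72-year lifespan is an extra assumption of this sketch:

```python
# Back-of-the-envelope check of the Vista time-waste estimate.
USERS = 200_000_000      # assumed Vista user base
HOURS_PER_DAY = 3        # assumed daily computer use per user
TIME_LOSS = 0.02         # assumed 2% of time lost waiting
LIFESPAN_YEARS = 72      # assumed length of "an entire life"

minutes_lost_per_user = HOURS_PER_DAY * 60 * TIME_LOSS
wasted_hours_per_day = USERS * HOURS_PER_DAY * TIME_LOSS
wasted_years_per_day = wasted_hours_per_day / (24 * 365)
lifetimes_per_day = wasted_years_per_day / LIFESPAN_YEARS

print(f"{minutes_lost_per_user:.1f} min/user/day")     # 3.6 min
print(f"{wasted_hours_per_day / 1e6:.0f}M hours/day")  # 12M hours
print(f"{wasted_years_per_day:.0f} human-years/day")   # ~1370
print(f"{lifetimes_per_day:.0f} lifetimes/day")        # ~19
```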

Never think that fast CPUs are a reason to develop slower code, by which I mean both slower than it used to be and slower than it could be. First, your users will hate you if you make them wait noticeably; second, CPUs don’t get any faster nowadays (SSDs still do), they just get multiplied, which makes the paradigm completely different.

I underlined the word noticeably because, happily, with today’s hardware, code can do many tricky and complex things in a fraction of a second (and a fraction of a second is not noticeable from the human user’s point of view). Just imagine the amount of data processed per frame by any cheap CPU+GPU 3D console (at 50 frames per second!). I also underlined the word noticeably because human users care more about responsiveness than raw performance: if you have, say, 1000 rows of a datagrid to compute, you can still present the first 50 visible rows to the user and compute the invisible rows in the background, or even better, lazily compute the invisible rows on demand.
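The lazy-row idea can be sketched in a few lines; the names `compute_row` and `LazyGrid` are illustrative, not from any real grid API:

```python
# A minimal sketch of lazy row computation: compute only the rows the
# user can see, and defer the rest until they are actually requested.

def compute_row(i):
    # Stand-in for an expensive per-row computation.
    return i * i

class LazyGrid:
    def __init__(self, total_rows):
        self.total_rows = total_rows
        self._cache = {}  # rows computed so far

    def row(self, i):
        # Compute a row only on first access, then reuse the cached value.
        if i not in self._cache:
            self._cache[i] = compute_row(i)
        return self._cache[i]

grid = LazyGrid(1000)
visible = [grid.row(i) for i in range(50)]  # only 50 rows computed up front
print(len(grid._cache))                     # 50, not 1000
```

The same shape works whether the deferred rows are computed on a background thread or purely on demand; the key design choice is that the expensive work is keyed to what the user can actually see.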

  • Jack

    I agree, lots of ‘Micro’ means ‘BIG’

  • karl

    It’s like everything, you need to know HOW to optimize, and all the little details, so you can WISELY decide whether you need to or not.

    You need to understand O/R mappers and ActiveRecord so you can decide on the best strategy for a particular system. If you don’t, you’ll always use DataSets.

    Not knowing about detailed implementation and performance tradeoffs and simply saying “don’t micro-optimize” is a shortsighted strategy which will only ensure that someone else on your team gets the credit for speeding up a bottleneck.

    Not “getting” algebra is not acceptable for a mathematician, just as not “getting” pointers is not acceptable for a programmer. Too fundamental.

  • Patrick Smacchia

    Jabe, this multiplication is just a way to quantify the impact that a lack of optimization can have. If you have other ways, please suggest them.

    The more users your software has, the more responsibility you have (and hopefully the more money you have to employ talented programmers who will improve your code).

  • Jabe

    Humans don’t work the “15 minutes * 6.77 billion” way. You can’t just multiply a small number by a big one and tell people it’s worth it. It’s not. That way, I can make ANY number related to humans look big … and it’ll blow your mind.

  • Patrick Smacchia

    Joe, Krzysztof, don’t get me wrong: I explain that we need micro-optimization, but I never said that one shouldn’t also work on user experience, code size…

    There are dozens of non-functional requirements that we need to treat, each adequately, in order to develop good software. I just give the reasons that make me think that the 15 to 25% better performance one can get with micro-optimization is worth it.

  • Krzysztof Kozmic

    The problem with micro-optimizations is that they mostly appeal to novice programmers, who waste time optimizing loops and ifs so that they run faster, but end up with code that is impossible to read and, as a result, impossible to change.

    I oppose blind micro-optimizations because that time could be better spent optimizing the user experience: making common tasks easier to perform (with fewer clicks, menus, and typing) rather than making a rarely used function run 0.001s faster.

    So to sum up my point, micro-optimizations are good and needed where they are good and needed, but 99% of the time is better spent optimizing user experience, not execution time.

    Visual Studio runs slowly. Visual Studio + ReSharper runs even slower, but as ReSharper improves the user experience, I can’t imagine working without it, even though it makes certain operations slower, like opening a new file in the editor.

  • Joe Chung

    Faster isn’t the only way to improve software. Smaller is, too.

    If you optimize for speed at the expense of size, you risk not only nullifying your performance micro-optimizations but also making your code slower overall, because of CPU cache misses, page faults, thread context-switching overhead, increases in network payload size and round trips, etc.

    Doing nothing is optimal. It takes no time and consumes no space. If what you’re doing isn’t necessary, don’t do it.

  • lucas

    I would invest in a solid-state hard drive; it is supposed to speed up the OS and Visual Studio by a measurable amount. Perhaps you could use this as a reason to do an experiment: time your wasted time for a day or two, then install an SSD, time yourself again, and see the difference.

  • Twirrim

    I’d say that, if anything, micro-optimization is just as important for web applications.
    You can make broad guesses at the level of usage your server solution is going to face, but there is always unexpected stuff, like the Slashdot effect or the Mashable/Twitter effect. You just can’t predict whether your website is suddenly going to be hit hard. You can make life a lot easier for yourself and your wallet if you do your best to ensure the application is efficient and fast and uses caching where possible. Serve static content wherever you can (even if you have to re-generate it regularly).

    Web users are just as impatient as software users, if not more so. If they think your web shop (or whatever) is taking too long, and that can be measured in seconds, they’ll give up or go elsewhere.

  • Khaja Minhajuddin

    Wow, that was really fast; I really appreciate you taking the time to answer my question with a full-blown blog post 😀
    And I completely agree with what you are saying. For software like Windows, or any desktop application for that matter, optimization *really* matters (I’ve used Vista and I know the horrors). That said, I still wonder whether micro-optimization matters when you write web applications that attract moderate traffic.