More on Perceived Performance

In my last post, I talked about some of the key takeaways (for me) from reading the book Designing and Engineering Time: The Psychology of Time Perception in Software. Unfortunately, I had to run out before I got through all of the items on my list, so I'm going to finish that list up here.

  • If performance is a differentiating factor between a function of your product and a similar function in a competitor's product, you may not want to (or be able to) spend the time and money it would take to make your product objectively faster than theirs. So how much faster do you need to make your product in order to neutralize performance as a differentiating factor? According to the findings presented, researchers have determined that this bar is the geometric mean of the 2 performance measures. Users will associate a duration above this bar with the higher (slower) number and a duration below the bar with the lower (faster) number.
  • There are 2 important qualities to consider when collecting performance data – reliability and validity. Reliability, as the book explains it, is essentially repeatability (as a side note, when I look at our current measuring methods for MSDN and TechNet, reliability is one area where we need pretty dramatic improvement). Validity describes whether you're actually measuring what you're trying to measure. Since reliability and validity are independent of one another, a given method can fall into any of the 4 combinations of the 2.
  • Define your start and end events from the user's point of view – not from whatever events your tools happen to expose. Not only does this give you a better picture of how your users perceive your application, it also means that you won't have to change your measuring method when you start optimizing the responsiveness characteristics of your application.
  • When measuring, choose a precision appropriate to the responsiveness you're shooting for. Just as performance tuning shouldn't be an open-ended "let's just make everything as fast as possible" exercise, it's pointless to measure an Ajax call down to the millisecond, since most of these operations will fall into the 'continuous' (aka 'user flow') or even 'captive' interaction classes (for more on interaction classes, see the UI Timing Cheatsheet).
  • For assessing user tolerance, one technique is to have a user interact with several different prototypes of a given feature (in random order, even repeating some prototypes) and rate each experience along some scale of responsiveness. This can help identify the key factors that influence tolerance.
  • One technique that I thought was pretty cool for managing a user's perception of a lengthy process (think product install) is to order the subprocesses from longest duration to shortest. Research shows that users are more likely to watch the tail end of a process, so putting the shorter subprocesses at the end leaves users feeling as though the overall process was faster (recall how annoying it is when the last 5% of an operation takes 95% of the time – see what I mean?).
  • Another technique that I thought was equally cool is one that the author calls "End on Time". The basic premise is that rather than spend all of your energy trying to predict the end time with pinpoint accuracy, "Give approximate remaining time in time anchors while a process is running, but when the process is actually complete, add a few seconds and show the process counting down to and ending exactly at zero". This is a great perception-management technique in that it creates the illusion of precision and builds confidence with your users. Keep in mind that it applies to processes measured in minutes – adding 3 seconds to a 2-second process will likely not create the desired perception.
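The geometric-mean bar from the first bullet is easy to sanity-check with a couple of made-up durations (the numbers below are mine, not from the book):

```python
import math

# Hypothetical durations for the same feature: ours takes 4s, the
# competitor's takes 1s.
ours_s, theirs_s = 4.0, 1.0

# Per the research, the perceptual break-even point is the geometric
# mean of the two measures, not the arithmetic mean.
bar_s = math.sqrt(ours_s * theirs_s)
print(bar_s)  # 2.0
```

In other words, getting from 4 seconds down to just under 2 seconds may be enough for users to lump your feature in with the 1-second competitor, even though the arithmetic midpoint would have suggested 2.5 seconds.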
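The longest-to-shortest ordering is just a descending sort over the subprocess durations. A minimal sketch (the phase names and durations are hypothetical):

```python
# Hypothetical installer phases with estimated durations in seconds.
phases = [
    ("write shortcuts", 3),
    ("copy files", 95),
    ("register components", 20),
]

# Run longest-first so the short phases land at the tail end of the
# process, where users are most likely to be watching.
ordered = sorted(phases, key=lambda p: p[1], reverse=True)
for name, duration_s in ordered:
    print(name, duration_s)
```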
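And here's a rough sketch of how "End on Time" might look in code – the function names, estimate, and 3-second padding value are my own assumptions, not the author's:

```python
import time

def displayed_remaining(estimate_s, elapsed_s, pad_s=3):
    # While the real work runs, show a coarse time anchor, but never let
    # the display drop below the padding we will count down at the end.
    return max(estimate_s - elapsed_s, pad_s)

def run_with_end_on_time(steps, estimate_s, pad_s=3):
    start = time.monotonic()
    for step in steps:
        step()  # the real work
        elapsed = time.monotonic() - start
        print(f"about {round(displayed_remaining(estimate_s, elapsed, pad_s))}s remaining")
    # The work is actually done; now count the padding down to an exact
    # zero for the illusion of precision.
    for s in range(pad_s, -1, -1):
        print(f"{s}s remaining")
        if s:
            time.sleep(1)
```

The key design point is that the display is decoupled from the real completion event: the estimate only has to be roughly right, because the last few seconds are always a scripted countdown that lands exactly on zero.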

So that wraps up my key takeaways from the book. Keep in mind that there is a ton of great stuff in there – I just pulled out some of the things that resonated with me. I'll be trying to put some of this information to use over the next 6-9 months and will report back on how these techniques work in my own context. If you do the same, or if you're currently using similar techniques, do share.

About Howard Dierking

I like technology...a lot...
This entry was posted in performance.