Prioritizing by Technical Risk versus Business Value

When I first started in software development, the conventional wisdom was to identify architectural and technical risks early and to work on those parts of the system first in order to retire the risk.  I took this to heart after one of my projects almost bombed in phase 2 because of a technical assumption I had made about a particular technology in phase 1.*  By that time it was too costly to make the change that would have enabled my original architecture, so I was left scrambling and came up with a bubblegum-and-duct-tape hack that still causes me feelings of shame.

When I started reading about Extreme Programming, Kent Beck and Ron Jeffries, among others, talked about scheduling a project strictly by business priority.  As I recall, that contention spawned no small amount of debate at the time.  I always felt uneasy about scheduling by business value alone, without consideration for technical risk, but lately I've come around to the conclusion that ordering code construction tasks by business value really is the best way to rein in speculative development and maximize the return on investment from software projects.  Part of that belief rests on the theory that I can truly flatten the change curve through meticulously well-factored code and oodles of build and test automation.  I really do think that adding features isn't necessarily more expensive late in the project, assuming that you really do what I said in the previous sentence.

As always, something arose to turn my thinking on its head.  I'm consulting for a big shop for the first time in quite a while, and I'm coming face to face with organizational issues that just don't happen as often in smaller companies.  We're making an initial assumption that we'll persist data with a Serialized LOB pattern.  It's a very simple way to cut through a Gordian knot of complex data modeling and O/R mapping with a lot of polymorphic variation.  The problem I have today with that persistence strategy is that the strategy just might not provide good enough performance for a set of features from the secondary priority tier that might require us to search through the XML CLOBs in the database.
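To make the Serialized LOB idea concrete, here's a minimal sketch in Python (the actual project was .NET, and it serialized to XML CLOBs; this sketch uses JSON and SQLite purely for illustration, and every name in it is invented). The whole polymorphic object graph goes into a single text column, sidestepping the complex per-type relational mapping:

```python
import json
import sqlite3

# Sketch of the Serialized LOB pattern: persist the whole object graph
# as one serialized blob in a single column, instead of mapping every
# polymorphic variation onto relational tables. All names are illustrative.

class Case:
    def __init__(self, case_id, title, details):
        self.case_id = case_id
        self.title = title
        self.details = details  # arbitrary nested data; the "polymorphic" part

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (id TEXT PRIMARY KEY, body TEXT)")

def save(case):
    # The entire graph collapses into one LOB column.
    body = json.dumps({"title": case.title, "details": case.details})
    conn.execute("INSERT OR REPLACE INTO cases VALUES (?, ?)", (case.case_id, body))

def load(case_id):
    row = conn.execute("SELECT body FROM cases WHERE id = ?", (case_id,)).fetchone()
    data = json.loads(row[0])
    return Case(case_id, data["title"], data["details"])

save(Case("c1", "Billing dispute", {"kind": "dispute", "amount": 42}))
```

The catch described above is visible here: the database can fetch a LOB by key very quickly, but searching *inside* the serialized bodies means deserializing or text-scanning every row.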

From a strictly development and architectural perspective, I don't really need to care about the second set of requirements. I'm very certain that the Serialized LOB will be perfectly fine for the initial set of requirements, and it's a very quick, simple way to get those features delivered.  Because I'm a TDD veteran, I know to keep the rest of the application loosely coupled from the persistence mechanism.  At worst, if that persistence mechanism really isn't fast enough for the secondary features, I only have to throw away a little bit of work on the easy mechanism and move to a much more complex relational model.  Heck, it's perfectly possible we won't even do those secondary features anyway, so why sink a bunch of time and effort into getting ready for something that might never be built?
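That loose coupling is what makes the LOB strategy cheap to abandon. A hedged sketch of the shape, in Python rather than the project's .NET, with invented names: the application codes against a repository interface, so the Serialized LOB implementation can later be swapped for a relational one without touching domain code.

```python
import json
from abc import ABC, abstractmethod

# Sketch of decoupling the application from the persistence mechanism:
# callers depend only on CaseRepository, so a Serialized LOB first cut
# can be replaced by a relational implementation later. Names are
# illustrative, not from the original project.

class CaseRepository(ABC):
    @abstractmethod
    def save(self, case_id: str, case: dict) -> None: ...

    @abstractmethod
    def find(self, case_id: str) -> dict: ...

class SerializedLobRepository(CaseRepository):
    """Quick first cut: each case stored as one serialized blob."""

    def __init__(self):
        self._rows = {}  # stand-in for a table with a single CLOB column

    def save(self, case_id, case):
        self._rows[case_id] = json.dumps(case)

    def find(self, case_id):
        return json.loads(self._rows[case_id])

# A future RelationalCaseRepository(CaseRepository) could replace this
# class with no changes to the code consuming the interface.
repo: CaseRepository = SerializedLobRepository()
repo.save("c1", {"title": "Billing dispute"})
```

If the LOB approach turns out to be too slow, only the one concrete class gets thrown away; everything written against the interface survives.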

Ah, but here's the issue that caught me by surprise (but really shouldn't have): if we do need to retreat to a true relational database model, we're very likely to need far more help from some specific business and database modeling experts to work through the persistence.  Unsurprisingly, those external folks are very busy, and it will take quite a bit of lead time to get their attention.  In our case, it may be very worthwhile to play a purely technical story upfront to do some performance testing against the Serialized LOB strategy, just to know whether we really do need that help from the external folks.
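That purely technical story could be as small as a timed brute-force search over a pile of serialized LOBs. This is a hedged sketch, not the actual spike: the row count, the data shape, and the time budget are all invented for illustration.

```python
import json
import time

# Sketch of a performance spike: how long does it take to search N
# serialized LOBs by deserializing each one? The sizes and the "budget"
# threshold below are assumptions made up for this example.

rows = [json.dumps({"id": i, "status": "open" if i % 7 else "closed"})
        for i in range(50_000)]

start = time.perf_counter()
hits = [r for r in rows if json.loads(r)["status"] == "closed"]
elapsed = time.perf_counter() - start

budget = 2.0  # seconds; an assumed acceptance threshold for the spike
print(f"scanned {len(rows)} LOBs in {elapsed:.3f}s, found {len(hits)} hits")
# If elapsed blows the budget, that's the early signal to book the
# database modeling experts' lead time rather than discover it late.
```

The point of running it early isn't to optimize anything; it's to convert "might not be fast enough" into a yes-or-no answer while there's still time to get on the experts' calendars.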

This might just tie back to the idea of The Last Responsible Moment.  In my particular case, that moment looks like it's much earlier than I originally thought.  From Mary Poppendieck (via Jeff Atwood):

[…] delay commitment until the last responsible moment, that is, the moment at which failing to make a decision eliminates an important alternative.

Just stuff to think about today, and I thought I'd share.  I think I might start paying a little more attention to the Theory of Constraints and look a bit beyond the purely technical stuff.



* I had to throw out my original architectural assumption because we hadn't configured the Oracle database correctly upfront, and the fear of regression breakage, plus the overhead involved, prevented us from changing that configuration later.  With great sincerity, I believe that had that project been executed with the build and test practices my teams follow today, we could have simply made the configuration change, run the automated regression tests, and gone on our merry way.

I did learn a valuable lesson about making assumptions about a novel approach and unfamiliar technologies.  Not to mention that technical documentation from a vendor, in the absence of real feedback, isn't enough information 😉

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer.
  • jmiller


    I would generally do exactly the same thing, and that's actually what we'll do. The only point I wanted to make for this case is that the availability of other resources, an outside constraint, is potentially making me take on that technical infrastructure much, much earlier than I would if my team were fully self-contained.

    You make a great point about capping spikes.

    In purely technical terms, I could easily get the first-tier requirements out much faster, in a perfectly reliable state, without the fancier persistence.

    The only thing that I would add to your comment is that the spikes themselves get done just in time.

    Thanks for the comment,


  • Joe Ocampo

    Hi Jeremy,

    Regarding your issue:

    "The problem I have today with that persistence strategy is that the strategy just might not provide good enough performance for a set of features from the secondary priority tier that might require us to search through the XML CLOBs in the database."

    In our group, when we're uncertain how a particular technology will pan out, we usually invoke a spike. We have gone to great lengths to explain to our business the significance of spiking. We try to keep a spike to no more than 4 units (story points), and it has to be prioritized early in the release. We have also talked at great length about technical debt. Since the company I work for is in the financial sector, they quickly understand the penalties of incurring debt.

  • Luke Melia

    Great post. We maintain a risk list for each of our projects which we review with our customer at each sprint review. We rate each risk on likelihood and severity and discuss possible mitigation strategies. While the customer prioritizes stories on the basis of business value, she can also prioritize particular risk mitigations if she so chooses.

    I'm not proposing this as a solution to your particular problem, just as an interesting process that aids communication and transparency.