When I first started in software development, the conventional wisdom was to identify architectural and technical risks early and to build those parts of the system first in order to retire the risk. I took this to heart after one of my projects almost bombed in phase 2 because of a technical assumption I'd made about a particular technology in phase 1.* By that point it was too costly to make the change that would have enabled my original architecture, so I was left scrambling and came up with a hacked-up, bubblegum-and-duct-tape approach that still causes me feelings of shame.
When I started reading about Extreme Programming, Kent Beck and Ron Jeffries, among others, talked about scheduling a project strictly by business priority. As I recall, that contention spawned no small amount of debate at the time. I always felt uneasy about scheduling by business value alone, without consideration for technical risk, but lately I've come around to the conclusion that ordering code construction tasks by business value really is the best way to rein in speculative development and maximize the return on investment from software projects. Part of that belief rests on the theory that I can truly flatten the change curve through meticulously well-factored code and oodles of build and test automation. I really don't think the cost of adding features is necessarily more expensive late in the project, assuming that you really do what I said in the first sentence.
As always, something arose to turn my thinking on its head. I'm consulting for a big shop for the first time in quite a while, and I'm coming face to face with organizational issues that just don't happen as often in smaller companies. We're making an initial assumption that we'll persist data with a Serialized LOB pattern. It's a very simple way to cut through a Gordian knot of complex data modeling and O/R mapping with a lot of polymorphic variation. The problem I have today with that persistence strategy is that it just might not provide good enough performance for a set of features in the secondary priority tier that might require us to search through the XML CLOBs in the database.
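To make the pattern concrete, here's a minimal sketch of what Serialized LOB persistence looks like. The `Shipment` class, table name, and fields are all hypothetical, and it uses JSON and an in-memory SQLite table as stand-ins for the XML CLOBs and Oracle of the actual project: the point is just that an entire polymorphic object graph collapses into one serialized column, with no per-type mapping.

```python
import json
import sqlite3

# Hypothetical domain object; "details" holds arbitrary subtype-specific
# data that would otherwise demand complex O/R mapping.
class Shipment:
    def __init__(self, shipment_id, carrier, details):
        self.shipment_id = shipment_id
        self.carrier = carrier
        self.details = details

    def to_lob(self):
        # The whole object graph collapses into one serialized string.
        return json.dumps({"carrier": self.carrier, "details": self.details})

    @classmethod
    def from_lob(cls, shipment_id, lob):
        data = json.loads(lob)
        return cls(shipment_id, data["carrier"], data["details"])

# One table, one LOB column: no per-subtype schema needed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (id INTEGER PRIMARY KEY, doc TEXT)")

s = Shipment(1, "FedEx", {"weight_kg": 3.2, "hazmat": False})
conn.execute("INSERT INTO shipments (id, doc) VALUES (?, ?)", (1, s.to_lob()))

row = conn.execute("SELECT doc FROM shipments WHERE id = 1").fetchone()
restored = Shipment.from_lob(1, row[0])
print(restored.details["weight_kg"])  # prints 3.2
```

The appeal is obvious: one column, one serializer, done. The catch, as the rest of this post explains, is that the database can no longer see inside that column.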
From a strictly development and architectural perspective, I don't really need to care about the second set of requirements. I'm very certain that the Serialized LOB will be perfectly fine for the initial set of requirements, and it's a very quick, simple way to get those features delivered. Because I'm a TDD veteran, I know to keep the rest of the application loosely coupled from the persistence mechanism. At worst, if that persistence mechanism really isn't fast enough for the secondary features, I only have to throw away a little bit of work on the easy mechanism and move to a much more complex relational model. Heck, it's perfectly possible we won't even do those secondary features anyway, so why sink a bunch of time and effort getting ready for something that might never even be built?
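The "loosely coupled from the persistence mechanism" part above is the key to keeping the throwaway cost small. A minimal sketch of one way to do it, with hypothetical names throughout: the application codes against an abstract repository, and the Serialized LOB strategy lives behind a single implementation that can be swapped for a relational mapper later.

```python
import json
from abc import ABC, abstractmethod

# Hypothetical repository abstraction: the rest of the application depends
# only on this interface, never on how persistence actually happens.
class OrderRepository(ABC):
    @abstractmethod
    def save(self, order_id, order): ...

    @abstractmethod
    def load(self, order_id): ...

# Today's cheap implementation: Serialized LOB style, with a dict standing
# in for the single-column database table.
class SerializedLobOrderRepository(OrderRepository):
    def __init__(self):
        self._rows = {}

    def save(self, order_id, order):
        self._rows[order_id] = json.dumps(order)

    def load(self, order_id):
        return json.loads(self._rows[order_id])

# If the LOB strategy proves too slow, only a new OrderRepository
# implementation gets written; callers of the interface never change.
repo = SerializedLobOrderRepository()
repo.save(42, {"sku": "ABC", "qty": 3})
print(repo.load(42))  # prints {'sku': 'ABC', 'qty': 3}
```

With that seam in place, "throw away a little bit of work" really does mean one class, not the whole application.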
Ah, but here's the issue that caught me by surprise (but really shouldn't have): if we do need to retreat to a true relational database model, we're very likely to need far more help from specific business and database modeling experts to work through the persistence. Unsurprisingly, those external folks are very busy, and it'll take quite a bit of lead time to get their attention. In our case, it may be very worthwhile to play a purely technical story upfront to do some performance testing against the Serialized LOB strategy, just so we know whether we really do need that help from the external folks.
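That technical story doesn't need to be elaborate. A rough sketch of the kind of spike I mean, again with hypothetical data and an in-memory SQLite table standing in for the real database: time a query that has to deserialize every LOB to answer a search, because no index can see inside the serialized column.

```python
import json
import sqlite3
import time

# Load a pile of serialized rows, then time a "search inside the blob"
# query that must deserialize every row to filter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, doc TEXT)")
for i in range(10_000):
    conn.execute("INSERT INTO docs (doc) VALUES (?)",
                 (json.dumps({"status": "open" if i % 7 else "closed"}),))

start = time.perf_counter()
hits = [row[0] for row in conn.execute("SELECT id, doc FROM docs")
        if json.loads(row[1])["status"] == "closed"]
elapsed = time.perf_counter() - start
print(f"{len(hits)} hits in {elapsed:.3f}s")
```

Scale the row count up toward production volumes and the number either stays tolerable, which means the LOB strategy survives, or it doesn't, which means it's time to book those database modeling experts now rather than later.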
This might just tie back to the idea of The Last Responsible Moment. In my particular case, that moment looks like it's much earlier than I originally thought. From Mary Poppendieck (via Jeff Atwood):
[…] delay commitment until the last responsible moment, that is, the moment at which failing to make a decision eliminates an important alternative.
Just stuff to think about today, and I thought that I'd share. I think I might start paying a little more attention to the idea of the Theory of Constraints and look a bit beyond technical stuff.
* I had to throw out my original architectural assumption because we hadn't configured the Oracle database correctly upfront, and the fear of regression breaks, plus the overhead involved, prevented us from changing that configuration. With great sincerity, I think that had that project been executed with the build and test practices my teams follow today, we could have simply made the configuration change, run the automated regression tests, and gone on our merry way.
I did learn a valuable lesson about making assumptions about a novel approach and unfamiliar technologies. Not to mention that a vendor's technical documentation, in the absence of real-world feedback, isn't enough information 😉