We’ve all been there. Many of the worst, most unreliable pieces of code I’m familiar with began their lives as good-looking prototypes that were rushed into production. Not coincidentally, somewhere in the next couple of iterations we’ve got a user story in our backlog to analyze a troublesome mission-critical component that began as a prototype rushed into production.
One of my previous employers had such a terrible history of allowing prototype code into production that there was an explicit policy forbidding prototyping. Needless to say, the policy could not possibly be enforced, but project managers would get very nervous about any prototyping activities. They relaxed this stance later, just in time for the big CMM adoption. The new “disciplined” waterfall approach expressly forbade any coding before the design specification was complete and signed off. The problem is that it is very difficult to produce an accurate upfront design and estimate without doing at least some prototyping work to test out design ideas and unfamiliar technologies. Any estimate or design with unproven assumptions is nothing but a working theory. Knowing this, teams would use the “it’s just prototype code” loophole to start coding anyway. Only when the code seemed to be taking shape would they retrofit the design specification to match the prototype. They didn’t throw away the prototype code, though, and the trouble continued.
My personal feeling, from experience, is that large-scale prototypes are an inefficient use of time. Prototyping is a more accurate way to do big upfront design, but you’re potentially spending a lot more time upfront with absolutely zero production-quality code (client value) to show for your efforts. This is the exact situation that leads to sloppy prototype code being deployed to production: when the deadline starts to creep closer, the temptation mounts to just “clean up” the prototype and push it in. For that matter, I have an almost pathological distrust of the typical manager’s ability to understand the consequential differences between prototype code and production-quality code (or software quality in general).
If you don’t know what to do, Spike!
One of the points I liked from Len’s post was that he codes in one of two modes. From Len (emphasis mine), “The first is the controlled, mostly test driven, quality focused coding that I do when I know where I’m going and when I know that I want to use the code for something serious.” When he doesn’t know where he’s going, or whether something can work at all, he switches to a prototyping mode where the only goal is to find the way forward or to learn whether the destination is even reachable. In Extreme Programming terms, do a Spike.
A couple of my teammates are new to TDD, and they often struggle with writing that first unit test, or with writing code test-first in general. They often end up writing the code first so that they can figure out how to write the test. The tests that get retrofitted onto this code are often gnarly because the code isn’t easy to test. Even worse, retrofitted tests rarely provide the same level of coverage as unit tests written test-first. Writing the code and retrofitting unit tests isn’t an ideal strategy, but just staring blankly at the IDE hoping for sudden inspiration isn’t a winning strategy either. When that happens, switch gears and do something else.
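For anyone stuck on that first test, a minimal sketch of the test-first rhythm may help. Everything here is invented for illustration (`slug_for` is a hypothetical helper, not anything from the post): the tests are written first, fail, and then the implementation does only enough to make them pass.

```python
import unittest

def slug_for(title):
    # Minimal implementation, written only after the tests below
    # existed and failed. No more behavior than the tests demand.
    return title.strip().lower().replace(" ", "-")

class SlugTest(unittest.TestCase):
    # These tests came first; they drove the shape of slug_for.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slug_for("Spike It First"), "spike-it-first")

    def test_surrounding_whitespace_is_trimmed(self):
        self.assertEqual(slug_for("  Hello World "), "hello-world")

if __name__ == "__main__":
    unittest.main()
```

Because the test existed before the code, the function ends up trivially testable, which is exactly the property that retrofitted tests struggle to recover.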
I’m very comfortable with UML or CRC modeling to explore ideas, but that isn’t always the right or only answer to breaking the “coder’s block” either. That’s when it’s time to do a spike. Forget TDD and all the other “dot the ‘i’, cross the ‘t’” coding activities, go write some code off to the side, and just try to solve the coding problem. At a minimum, try to learn enough to know how to write the real code. Then, and this is crucial, throw the spike code away (or at least make sure it isn’t in the main trunk). Now you can switch back to TDD and rewrite the spike solution in a disciplined manner. Even if the spike is exactly the way the code is going to end up, rewrite it with TDD to make sure. One of the very real risks of spikes is that the code written as a result often ends up with less test coverage, because of the temptation to forgo unit tests on something you’ve already done once.
Spikes and Project/Iteration Management
Spikes should be relatively short activities with a specific goal, usually to answer a specific design question or to provide a more accurate estimate for a user story. The (organized) agile projects I’ve been on have always treated largish spikes as user stories that are prioritized and estimated along with the rest of the backlog. The difference is that the estimated time for a spike is really a cap, i.e., “we will spend no more than 8 ideal hours on the XYZ spike.”
One of the projects I was on a couple of years ago mandated that a different pair write the production version of a spike solution, just because one developer (me) was doing the lion’s share of the spikes and architectural refactorings. I’m still a little undecided about the wisdom and efficiency of that approach, but it definitely spreads the knowledge around.
It is often useful to keep the spike code around for later reference. Our preference now is to make a quick Subversion branch and do the spike there. That way the code doesn’t “accidentally” land in the main trunk or get lost later.
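A sketch of that branching workflow, assuming a throwaway local repository so the commands run end to end (the repository path and the “spike-xyz” branch name are invented; a real project would use its own repository URL):

```shell
#!/bin/sh
set -e
# Illustrative only: create a scratch repository for the demo.
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
svn mkdir -q -m "repository layout" \
    "file://$REPO/trunk" "file://$REPO/branches"

# In Subversion a branch is a cheap copy, so spike code stays out
# of trunk without being lost.
svn copy -q -m "branch for the XYZ spike" \
    "file://$REPO/trunk" "file://$REPO/branches/spike-xyz"

# Hack freely in a working copy of the spike branch.
svn checkout -q "file://$REPO/branches/spike-xyz" spike-xyz
svn ls "file://$REPO/branches"   # the spike branch, safely off trunk
```

When the spike has answered its question, the branch can simply be left behind (or deleted) while the real, test-driven version is written on trunk.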
Matt Stephens has a somewhat differing opinion on early prototyping versus the XP-style emergent design concept. I think he’s more than a little bit incoherent and inaccurate anytime he or Doug Rosenberg talks about XP or XP practices. I just thought I’d add a link with an opposing view (and I’m pretty sure that the picture of the bridge on his page is the famous 360 bridge in Austin about a mile from my office, so he gets some points).