"Code Complete" is a lie, "Done, done, done" is the truth

Before you flame me, I’m not talking about the canonical book by Steve McConnell.  What I mean is that the statement “we’re code complete on feature xyz” is a lie, or at least misleading, because you really aren’t complete with the feature.  Code complete doesn’t tell you anything definitive about the quality of the code or, most importantly, the readiness of the code for actual deployment.  Code complete just means the developers have reached a point where they’re ready to turn the code over for testing.  You say “code complete” to mark off a gate on the schedule or to claim earned credit for coding work on the project plan.  Using “code complete” to claim earned value is a bald-faced lie because it doesn’t translate into business value yet.  The code could have lots of bugs and issues yet to be uncovered by the testers, and if it hasn’t gone through user acceptance testing, it might not even be the right functionality.


It’s a nearly arbitrary status anyway.  Oftentimes a team will call something code complete just because the schedule says it’s supposed to be code complete on that date.  Just call it code complete to satisfy the schedule; all software has bugs anyway, right?  Code complete is one of many common evils on waterfall projects where team members have somewhat divergent goals.  Developers may be judged by hitting the code complete date, and the testers by the quality of the system, i.e. the lack of bugs.  These conflicting goals can easily have a negative impact on the project.


One of my favorite aspects of XP development has been the emphasis on creating working software instead of obsessing over intermediate progress gates and deliverables.  In direct contrast to “Code Complete,” XP teams use the phrase “Done, done, done” to describe a feature as complete.  “Done, done, done” means the feature is 100% ready to deploy to production. 


There’s quite a bit of variance from project to project, but the “story wall” statuses within an iteration that I learned on my first XP project were the following (there’s a quick sketch of this workflow after the list):



  1. Not started

  2. Development

  3. Business Analyst Review (the developers called it “BA Volleyball,” which I think was a process smell)

  4. Testing

  5. Customer Review

  6. Done, done, done
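To make the workflow concrete, here’s a minimal sketch of the story wall as a state model.  The story names and point values are hypothetical, not taken from any real project tool; the only thing that matters is that “done, done, done” is the one status that earns credit:

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    NOT_STARTED = "Not started"
    DEVELOPMENT = "Development"
    BA_REVIEW = "Business Analyst Review"
    TESTING = "Testing"
    CUSTOMER_REVIEW = "Customer Review"
    DONE_DONE_DONE = "Done, done, done"


@dataclass
class Story:
    title: str
    points: int
    status: Status = Status.NOT_STARTED


def delivered_points(stories):
    # Only stories that have crossed the line into "done, done, done"
    # count toward delivered business value.
    return sum(s.points for s in stories if s.status is Status.DONE_DONE_DONE)
```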

The other story wall columns besides “done, done, done” are just intermediate stages that help the team coordinate activities and handoffs between team members.  We use a Scrum-style burndown chart to track intermediate progress within an iteration, but it’s nothing more than an educated guess at status.  The burndown chart informs management on the state of the iteration and helps spot problems and changes in the iteration plan, but the authoritative progress meter is the number of stories crossing the line into the “done” column.

The goal of the XP team in any given iteration is to get every story in play into the “done, done, done” column before the end of the iteration.  No credit or progress is earned on an XP project until a user story has been coded, fully tested, and approved by the business customer.  It’s a draconian measure for a team, but it’s an accurate indication of how much business value an XP team has really delivered.  XP nirvana is being able to consistently push all iteration stories into the “done, done, done” column and produce a potentially deployable release at the end of every iteration.  It’s a lofty goal that most XP teams probably don’t meet, but that doesn’t mean we should stop trying to reach it.  Like so many agile teams, we struggle with leaky iterations because we aren’t fully driving stories to production-ready completion within an iteration, but that’s the topic for my next post…
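Continuing the hypothetical sketch above, here’s the difference between a burndown-style guess and the authoritative meter.  A story that’s fully coded but still in testing contributes exactly zero delivered value:

```python
# An iteration in flight: plenty of activity, but only one story
# has actually crossed the line.
stories = [
    Story("Login page", 3, Status.DONE_DONE_DONE),
    Story("Password reset", 5, Status.TESTING),
    Story("Audit report", 8, Status.DEVELOPMENT),
]

total = sum(s.points for s in stories)
# A burndown chart might suggest the team is well along because coding
# is finished on two of the three stories, but the only credit earned
# is for what's actually ready to deploy:
print(delivered_points(stories), "of", total, "points delivered")  # 3 of 16
```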


A friend of mine who’s still mired in waterfall land once challenged me on how an XP team can know its status without a detailed plan.  My response was that our status was measured by what new features were ready to deploy.  If you think about it, that’s a far more useful measurement than saying x number of features are “code complete” with unknown quality attributes.


A very important attribute of agile methods in general is the entire team sharing a common goal to produce working software.  My experience has been that developer/tester interaction is far smoother on agile projects (certainly not perfect, of course), in no small part because the developer’s purview moves from getting code into test to getting code to a potentially deployable state.  As an agile developer it’s in your best interest to do everything possible to make testing go smoothly: helping with test automation, build automation, unit testing, etc.  Testers are your allies now, not sparring partners.

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
  • Mark Turansky (http://blog.markturansky.com/archives/39)

    I agree; I’ve also written that code complete does not mean you’re done.

  • jpalermo (http://www.jeffreypalermo.com)

    I remember my days at an old employer (when I did waterfall development) when the schedule said “testing”, so we were in testing. Now this didn’t stop us from fixing bugs before they were reported. For instance, we needed to fix a bug that could be summarized as “product maintenance screen missing”. :)

    Misinformation flowed freely in meetings with the project managers and directors. The schedule was progressing, and that’s all that mattered. The testing phase was 4 weeks, but the testers didn’t actually start until halfway through because the system was unusable, i.e. not finished.

    Getting the testers involved right from the start of development is a much better way to go.