In my last post I talked a little bit about the difficulties my team is facing in driving the user stories in each iteration to “done, done, done.” Last Thursday we held a release retrospective on our just-concluded release. One of the keys to making these meetings succeed is to walk away with something actionable to improve the next release. An account manager at my former employer used to say that you should ask the question “If you could change one thing for the next release, what would it be?” On our previous release the obvious answers were developer/tester interaction and poor build and configuration management. We finally got a tester onsite for the team and made a large, ultimately successful investment of time in better build automation. This time the clear problems are the related issues of context switching and leaky iterations, i.e. not finishing all the user stories inside the iteration.
Context switching has historically killed my team’s productivity. Several times we’ve had to abandon half-finished work to focus on a new, higher priority. Besides the obvious problem that the human mind just isn’t efficient at heavy context switching, we weren’t able to immediately deploy the new priority code because the previous work had the system all torn up (yes, some judicious branching would have helped too). It’s also been more difficult to plan release dates when work keeps bleeding from one iteration to the next. As a tiny software vendor making solutions for very large customers, we really have to be able to accommodate turbulent changes in the release roadmap. Finishing off our biweekly iterations with production-quality code is a good way to ameliorate that roadmap turbulence.
The Story Wall
My favorite way to track user stories (I really don’t think the Scrum burndown chart gets the job done) is the XP story wall, which places the story cards in vertical columns denoting the state of each user story. On day 1 of the iteration the story wall will look something like the chart below, with all of the user stories in the “Not Started” column.
| Not Started | In Development | Testing | Awaiting Customer Signature | Done, done, done! |
|---|---|---|---|---|
At the halfway point of the iteration the story wall should look something like the chart below. If the iteration is going smoothly, you should be able to draw a roughly continuous line from the upper right-hand corner to the bottom left. Some stories should already be complete and others pushed into testing.
| Not Started | In Development | Testing | Awaiting Customer Signature | Done, done, done! |
|---|---|---|---|---|
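The original chart showed the cards themselves; as a rough sketch (the story numbers here are purely hypothetical), a healthy halfway-point wall might look like:

| Not Started | In Development | Testing | Awaiting Customer Signature | Done, done, done! |
|---|---|---|---|---|
| Story 8 | Story 6 | Story 4 | Story 3 | Story 1 |
| Story 9 | Story 7 | Story 5 | | Story 2 |

The diagonal frontier from the completed stories on the right down to the untouched stories on the left is that continuous line.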
So the next question is: how do you work an iteration to get this smoothly continuous flow of stories from left to right? One of the most important lessons newcomers to XP development need to learn is that you should not treat iterations as mini-waterfalls. My first “real” XP project and my last project both suffered from what Michael Feathers calls trailer-hitched QA. If you dump all of your stories onto testing near the end of the iteration, you will not complete the iteration. If the QA testing is an iteration behind, you can’t really take credit for finishing the work, and you bear a context-switching cost as developers switch between old and new code. The only way to successfully finish off iterations is to get code into testing as soon as possible, and the testing team needs to be ready to start testing as soon as they get the new code. That’s not enough by itself; you also have to ruthlessly eliminate inefficiency in the handoffs between disciplines.
Here’s how we’re planning to plug the leaky iteration problem:
- Prioritize the completion of stories in flight before putting new stories into play
- Continuous Integration – I really can’t say this forcefully enough: comprehensive, automated builds reduce project friction and make you go faster. What we call “NAnt-y” work is not overhead; it’s an investment in team efficiency. Throughout my career I’ve wasted countless hours doing manual code moves and troubleshooting botched testing installs. Nothing aggravates me more than defects that originate from a testing environment problem. It just makes developers look dumb. “We fixed that, but it just didn’t get into the build somehow.” Our CI infrastructure now extends to making testing deployments with accompanying environment tests to ensure the testing environment is valid before a tester spends any time on that build.
- On a related note, pay attention to the build number any time you move a story to testing. The testers need to know which build includes the new functionality to test. That matters because you’re potentially pushing code to the testers more often, and they need to know what they have. The testers need an easy way to query the system for the installed build number (some kind of “about” screen, for example). In turn, the testers need to record the build number on any bugs they report.
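The two points above combine naturally into an automated check. Here is a minimal sketch, with all names assumed for illustration: the deployed app stamps its build number somewhere queryable (the “about” screen), and an environment test compares that stamp to the build the CI server just produced before any tester touches it.

```python
# Sketch of a pre-testing environment check. Assumption: every deployment
# exposes its build number in a queryable payload (an "about" screen, a
# version file); the field name "buildNumber" is purely illustrative.
import json

def installed_build(about_json: str) -> str:
    """Extract the build number from the app's (assumed) about-screen payload."""
    return json.loads(about_json)["buildNumber"]

def environment_is_valid(about_json: str, expected_build: str) -> bool:
    """True only if the testing environment is running the expected build."""
    return installed_build(about_json) == expected_build

# In practice the payload would come from an HTTP call to the deployed app;
# it is inlined here so the sketch stays self-contained.
payload = json.dumps({"buildNumber": "1.2.0.347"})
assert environment_is_valid(payload, "1.2.0.347")
assert not environment_is_valid(payload, "1.2.0.346")
```

Wired into the CI deployment step, a failing check like this stops a tester from burning an afternoon on the wrong build.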
- Smaller User Stories – This is huge. My vote is that user stories be cut into the smallest units that make sense to test independently. Smaller stories generally lead to more accurate estimates and a smoother flow of stories into testing.
- Don’t carry non-trivial bugs between iterations. Zero tolerance.
- Acceptance Test Driven Development – We’ve been using FitNesse* for automating and expressing acceptance tests for some time, but only recently has it started to pay off. The next step we’re going to try on this release is to require that the FitNesse test tables for a user story be written before development starts. There are a couple of good reasons to try this:
- Unambiguous requirements – Creating a FitNesse test forces analysis questions out of hiding. Writing a FitNesse test means getting down to the nitty-gritty of exactly how the system is supposed to behave.
- Executable requirements – We can validate our code against the requirements. Better yet, if we and the testers have set the tests up early, we can code until the FitNesse tests pass without involving the testers. It’s a much, much faster feedback cycle than pushing code to a tester only to play defect-tracker volleyball. You can’t express everything in FitNesse, but the more you automate, the more time you free up for manual testing and exploratory testing of the things FitNesse can’t handle.
- Creating a shared understanding with the testers – I’ve suffered quite a bit in the past when the testers simply interpreted requirements differently than I did. Writing the FitNesse tests together upfront, along with the product manager and customer, can drastically reduce that problem. It also helps to create Eric Evans’s Ubiquitous Language on a project: the grammar of the fixture tables should closely reflect your Domain Model. Theoretically, I’m hoping it eliminates the intellectual lag between developers and testers when we hand them a new story to test. The testers should never be learning about a story for the first time when we hand it over.
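To give a flavor of what these test tables look like, here is the stock column-fixture example from the Fit documentation (the eg.Division fixture is the canonical sample, not something from our domain). Input columns come first; the `quotient()` column holds the expected result:

```
|eg.Division|
|numerator|denominator|quotient()|
|10       |2          |5         |
|12.6     |3          |4.2       |
|100      |4          |25        |
```

When the table runs, Fit colors each `quotient()` cell green or red depending on whether the value the fixture computes matches the expectation, which is exactly what makes the requirement executable.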
- Just a little bit of ceremony – Hold 5-10 minute meetings across disciplines when a story is started for the first time. When you move a story between columns on the story wall, make sure the rest of the team knows it. When testers find bugs and move stories back to development, they need to tell us verbally. Defect tracking tools are for auditing, not communication! Ideally, I like testers to show me the bug on their box as they’re entering it. I’ve found that little practice greatly reduces the amount of time it takes to turn around a bug fix. If for no other reason, do iteration retrospectives just to make the iteration boundaries feel real.
* My first reaction to Fit/FitNesse was “WTF?” My second experience with Fit was trying to write an ActionFixture to test a WinForms client. Pain. Late last year we bought several copies of Fit for Developing Software and suddenly it all made sense. Also grab the FitLibrary for .Net; it makes complicated tests and state management inside of FitNesse tests much smoother. The point is to give FitNesse a real try, because it can pay off in a big way.
Cory Foy has a nice getting started introduction to FitNesse.
James Shore has a series of articles on using Fit/FitNesse for requirements analysis here.
Architecture, design, and coding are the most intellectually demanding and important activities on a software project. That’s why we developers, the talented few, take this work. That said, there are a lot of other process activities (iteration management, testing, requirements, etc.) that can negatively impact our ability to create code. The project team’s process is far too important to be trusted solely to project managers and managers in general (and most certainly not to CMM enforcers, SEPG teams, and external “Quality Assurance” bureaucrats who have absolutely no stake in your project succeeding). We developers need to exert some amount of control over the team’s processes.