From Nat, emphasis mine:
The fundamental basis of TDD is that you drive all development from tests. That is, you don’t write anything without a failing test. That applies AT ALL SCALES OF THE SYSTEM. So, you don’t start working on a feature of the system without a failing end-to-end test that exercises that feature. Once that test is written you can start changing the system to make the test pass. When doing that, you don’t start working on classes in the system without failing unit tests that shows what the classes should do, and you don’t write classes that they need without failing unit tests, etc. etc.
I erred by muddying the bolded issue with my separation of Acceptance Testing and unit testing. I have no reservations whatsoever about using executable requirements, or even just prose-based acceptance tests, as my real detailed requirements. As a matter of fact, it’s what I prefer and recommend, even though, truth be told, I’m only partially successful at making it happen with my clients right now. I was separating the practice of Acceptance Test Driven Development from plain Test Driven Development because I see them as different practices with different goals. I think that’s what got me into trouble with the Double D post. My real purpose for treating them separately was that I think you’re better off doing both rather than one or the other.
From Scott’s Behavior-Driven Development – Not TDD++ post,
“…The growing sense that I have of BDD [Behavior Driven Development] is that it doesn’t conform to the same boundaries as TDD [Test Driven Development] and ATDD [Acceptance Test Driven Development]. In doing so, it makes the distinction between TDD and ATDD less clear, but it also makes the practices more fluent by drawing them into a longer continuum whose transition is smoothed by user story practices and aspects of domain-driven design.”
Now this is something that has been bugging me about BDD: whether or not BDD encompasses and subsumes both unit-level specifications and acceptance-level specifications. I’m perfectly willing to concede the advantages of creating a continuum of specifications from the customer-facing level down to the granular developer level. I think one of the last frontiers for Agile development is executable requirements, and it does seem like BDD has a better story for this than we’ve had before.
The reason I still have a hang-up about the distinction between driving a design from unit tests and driving the functionality of the system from executable requirements is my very fervent belief in the efficacy of Testing Small before Testing Big. To explain what I mean, and to show that this idea still has value with BDD, let’s look at this graphic of the old V-Model for testing. Now, let’s replace some of the terminology and forget about all of the boxes in the middle. All I really want to say is that, regardless of the new continuum model, we should still rigorously go through steps 3-4 instead of jumping from 2 to 5, because I think that’s more efficient in terms of effort, despite the fact that it looks like more work from the outside.
Requirements == “Story Card is selected for the iteration”
High Level Design == “Write customer facing specification”
Low Level Design == “Write granular specification to specify a small behavior of the code”
Unit Testing == “Make the granular specification pass, refactor if necessary, rinse and repeat step 3”
Integration Testing == “Now that the granular specifications/tests pass and I’m relatively confident in my code, let’s take the customer facing specification that we started with, put all of the code together, and work until the original specification passes”
System Testing == “Push the feature to the testers for exploratory testing and show the feature to the customer for approval”
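The steps above can be sketched in code. This is a minimal sketch only, built around a hypothetical “discounted order total” feature; the names (`Order`, `line_subtotal`, `discounted`) are invented for illustration and don’t come from any of the posts quoted here:

```python
# Step 2: the customer-facing specification, written first and initially failing.
def acceptance_spec_discounted_order_total():
    order = Order()
    order.add_item(price=100.0, quantity=2)
    order.apply_discount(percent=10)
    # The customer cares only about this outcome.
    assert order.total() == 180.0

# Step 3: granular specifications for the small behaviors inside the feature.
def unit_spec_line_item_subtotal():
    assert line_subtotal(price=100.0, quantity=2) == 200.0

def unit_spec_discount_applied_to_subtotal():
    assert discounted(200.0, percent=10) == 180.0

# Step 4: the small pieces, written only to make the granular specs pass.
def line_subtotal(price, quantity):
    return price * quantity

def discounted(subtotal, percent):
    return subtotal * (1 - percent / 100.0)

# Step 5: the pieces composed so the original customer-facing spec passes.
class Order:
    def __init__(self):
        self._items = []
        self._discount_percent = 0

    def add_item(self, price, quantity):
        self._items.append((price, quantity))

    def apply_discount(self, percent):
        self._discount_percent = percent

    def total(self):
        subtotal = sum(line_subtotal(p, q) for p, q in self._items)
        return discounted(subtotal, self._discount_percent)

# Run the granular specs first (Test Small), then the acceptance spec (Test Big).
unit_spec_line_item_subtotal()
unit_spec_discount_applied_to_subtotal()
acceptance_spec_discounted_order_total()
```

The point of the ordering is that by the time you wire the pieces together in step 5, each piece has already earned your confidence in step 4, so a failure in the acceptance spec points at the wiring rather than the pieces.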
– graphic shamelessly stolen from Raymond’s SDLC post.