TDD Misconceptions

Jeffrey’s post on Rocky Lhotka’s off-the-cuff remarks about TDD is spawning a good discussion and several rebuttals.  Several people have gone off on agile zealots as well.  I’m pretty sure I’m included in the zealot bag there.  We all like to think we’re one of the reasonable people and everyone else is the crazy zealot, but…


[EDIT 4/5/2006] Of course I also think the words “jihad” and “zealot” got tossed around far too casually and unfairly.  If you go read Jeffrey’s post, he didn’t say anybody was a bad developer for not doing TDD, just that Rocky made some erroneous or mistaken statements about TDD.



Rocky has posted a response of his own.



I’ve seen several common misconceptions about TDD, or just plain concerns and doubts, repeated in these discussions, and I’d like to respond to them.  I’ll especially pick on Sahil’s comment on Rocky’s post, just because he’s Sahil.


TDD can, and probably should, coexist with other design tools and methodologies.  Rocky and several other people mention that they prefer using Responsibility Driven Design, with or without CRC cards.  I wholeheartedly agree (though I’ve never managed to get into CRC cards), and I wrote an article last year on using RDD to make TDD easier.  Use anything that works for you (I still like to sketch out UML just before coding), as long as testability is always at the forefront of your design thinking; it will make TDD easier.  Nothing is harder for me than sitting in front of a blank screen trying to figure out how to write that first test.


The great thing about TDD as a design tool is that it really enforces loose coupling and high cohesion.  Forcing yourself to write a unit test first, or at least immediately afterwards, makes loose coupling a necessity if you want to retain the hair on your head.  One way to think about it is that testability is frequently an accurate “test” of your design: external dependencies are always difficult to test, and code that is hard to test usually points to an issue with coupling or cohesion.
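As a contrived sketch of what that looks like in practice (all of the class and interface names below are made up for illustration), writing the test first tends to push the external dependencies out behind interfaces that the test can stub:

    using NUnit.Framework;

    public class Invoice { }

    public interface IInvoiceRepository { void Save(Invoice invoice); }
    public interface INotifier { void SendReceipt(Invoice invoice); }

    // If Process() created a SQL repository and an SMTP notifier itself,
    // the only way to exercise it would be a live database and mail server.
    // Writing the test first pushes the collaborators out to the constructor.
    public class InvoiceProcessor
    {
        private readonly IInvoiceRepository _repository;
        private readonly INotifier _notifier;

        public InvoiceProcessor(IInvoiceRepository repository, INotifier notifier)
        {
            _repository = repository;
            _notifier = notifier;
        }

        public void Process(Invoice invoice)
        {
            _repository.Save(invoice);
            _notifier.SendReceipt(invoice);
        }
    }

    // Hand-rolled stand-ins; a mocking library would work just as well.
    public class StubRepository : IInvoiceRepository
    {
        public bool WasSaved;
        public void Save(Invoice invoice) { WasSaved = true; }
    }

    public class StubNotifier : INotifier
    {
        public bool ReceiptSent;
        public void SendReceipt(Invoice invoice) { ReceiptSent = true; }
    }

    [TestFixture]
    public class InvoiceProcessorTests
    {
        [Test]
        public void Process_saves_the_invoice_and_sends_a_receipt()
        {
            StubRepository repository = new StubRepository();
            StubNotifier notifier = new StubNotifier();

            new InvoiceProcessor(repository, notifier).Process(new Invoice());

            Assert.IsTrue(repository.WasSaved);
            Assert.IsTrue(notifier.ReceiptSent);
        }
    }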


This is close to a cliché by now and it’s already been said several times, but TDD is primarily a design tool.  The fact that the “design specification” ends up being a large body of executable unit tests is an undeniably huge benefit as well.


Occasionally you’ll see the claim that TDD == 0 debugging.  That’s obviously false, but effective use of TDD drives debugging time way down, and that’s still good.  In my experience, when the unit tests are granular the need for the debugger drops dramatically, and when I do have to fire up the debugger I run it through the unit tests themselves.  Cutting down the scope of any particular debugging session helps remarkably.  The caveat is that you really must be writing granular unit tests; heavy debugger usage is often a cue to rethink how you’re unit testing.
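Here’s a contrived illustration of the granularity point (hypothetical names and numbers): when the coarse test fails, any of three rules could be at fault and the debugger beckons, while a failing granular test names the broken rule all by itself.

    using NUnit.Framework;

    public static class OrderMath
    {
        public static double ApplyDiscount(double subtotal, double rate)
        {
            return subtotal * (1.0 - rate);
        }

        public static double ApplyTax(double amount, double rate)
        {
            return amount * (1.0 + rate);
        }

        public static double Total(double subtotal, double discount, double tax)
        {
            return ApplyTax(ApplyDiscount(subtotal, discount), tax);
        }
    }

    [TestFixture]
    public class OrderPricingTests
    {
        // Coarse: a failure here could be the discount, the tax, or the
        // composition of the two, so you end up stepping through all of it.
        [Test]
        public void Total_for_a_discounted_and_taxed_order()
        {
            Assert.AreEqual(97.20, OrderMath.Total(100.0, 0.10, 0.08), 0.001);
        }

        // Granular: each failure points straight at one rule.
        [Test]
        public void Discount_is_applied_to_the_subtotal()
        {
            Assert.AreEqual(90.0, OrderMath.ApplyDiscount(100.0, 0.10), 0.001);
        }

        [Test]
        public void Tax_is_applied_after_the_discount()
        {
            Assert.AreEqual(97.20, OrderMath.ApplyTax(90.0, 0.08), 0.001);
        }
    }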


TDD isn’t enough testing by itself.  Duh.  I don’t think many people are trying to claim TDD does away with the need for testers.  The fact that TDD isn’t a comprehensive set of tests doesn’t mean the TDD unit tests don’t add a lot of value.  We use a full Taxonomy of Tests, not just our TDD tests.  I’ve seen a very strong correlation between the thoroughness of our unit tests and the bug counts in different areas of the code.  Sahil mentions that TDD doesn’t directly address concerns like load testing, transactional rules, and concurrency.  Okay, then do some load testing and exploratory testing too.  You can test performance and transactions with TDD; it’s just harder.  What TDD excels at is getting the business functionality and mainline logic correct.  Get that out of the way and you should, at least in theory, be able to spend more time on other types of testing for non-functional qualities.


And the constant stereotype about developers being bad testers?  That’s just unfair.  It’s like saying developers don’t have good social skills… wait, never mind.


You must write a test first!  I probably write the test first about 80-90% of the time; there are always exceptions.  I certainly don’t spend any time writing unit tests for simple getters/setters, especially when they’re codegen’ed by ReSharper.  Just use your best judgment.  One rule of thumb I’ve read from Kent Beck is that you only need to test the code that could possibly break.


We’ll have to maintain all of those unit tests.  True, but in general the unit tests should make your code easier to maintain, and not just because of the automated regression safety net: unit tests can be a great living document for the system, and the ability to refactor a system safely is priceless.  There is a real need to write the tests well, though, to keep them from becoming a burden.  I’ve bumped into places where the unit tests were written in such a way that I was afraid to change the code for fear of breaking the tests.  It’s awfully important to treat your unit test code as a first-class part of the codebase and to keep the coupling between your tests and the implementation loose; that’s a very real complaint about excessive use of mock objects.  There’s more than a little bit of art to writing non-brittle unit tests.
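To make the brittleness point concrete, here’s a contrived sketch (made-up names, with hand-rolled test doubles rather than any particular mocking library).  The first test is welded to the exact conversation between the cart and the catalog, so an equally correct refactoring breaks it; the second asserts only on the observable outcome and survives.

    using NUnit.Framework;

    public interface ICatalog { double GetPrice(string sku); }

    public class RecordingCatalog : ICatalog
    {
        public string CallLog = "";
        public double GetPrice(string sku)
        {
            CallLog += "GetPrice(" + sku + ");";
            return 5.0;  // flat price, just for the sketch
        }
    }

    public class ShoppingCart
    {
        private readonly ICatalog _catalog;
        public double Total;

        public ShoppingCart(ICatalog catalog) { _catalog = catalog; }

        public void AddItem(string sku, int quantity)
        {
            // One of several implementations that produce the same total;
            // a version that cached the price lookup would be just as correct.
            for (int i = 0; i < quantity; i++)
            {
                Total += _catalog.GetPrice(sku);
            }
        }
    }

    [TestFixture]
    public class ShoppingCartTests
    {
        // Brittle: pinned to the exact sequence of calls.  Cache the price
        // lookup and this test fails even though the behavior is unchanged.
        [Test]
        public void Interaction_test_pins_down_the_implementation()
        {
            RecordingCatalog catalog = new RecordingCatalog();
            new ShoppingCart(catalog).AddItem("widget", 2);

            Assert.AreEqual("GetPrice(widget);GetPrice(widget);", catalog.CallLog);
        }

        // Robust: asserts on the outcome, not the conversation.
        [Test]
        public void State_based_test_survives_refactoring()
        {
            RecordingCatalog catalog = new RecordingCatalog();
            ShoppingCart cart = new ShoppingCart(catalog);
            cart.AddItem("widget", 2);

            Assert.AreEqual(10.0, cart.Total, 0.001);
        }
    }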


Just to head off the inevitable comment: there may come a day when our programming languages reach a high enough level of abstraction that the usefulness of TDD goes away because the code will perfectly express its intention.  That would be nice, but we’ll just have to wait and see.


I do feel a bit for Rocky on this one, and I know that I jumped the gun a little bit.  I think if he’d just said something like “I haven’t done much TDD yet, but here are the things I don’t buy about TDD,” this wouldn’t have been any big deal.


As far as being a zealot goes, yeah, I’m definitely going to tone down my blog.  I’ve never enjoyed working with runaway XP zealots, so I do get the irritation.

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
  • jmiller

    Gary,

    Environment tests are definitely forthcoming.

    Jeremy

  • jmiller

    I do think that the words “zealot” and “holy war” were unnecessarily bandied about though.

    You might take a bit of a look at FitNesse for testing set-based operations against a database instead of NUnit. I’m starting to come around to using FitNesse fixtures for integration tests.

    Basically,

    1.) One or more test tables to set up the initial data in the database
    2.) An ActionFixture or DoFixture to perform some sort of operation: fetch data, batch update, etc.
    3.) A RowFixture to declaratively check the results of step #2 (a rough sketch follows below)
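
    As a rough sketch of the fixture behind step #3 (the class and table names are made up, and I’m going from memory on the .NET Fit port’s method names, so treat this as illustrative only):

        using System;
        using fit;

        // The wiki page lists the rows we expect; the fixture returns the
        // rows actually found, and FitNesse diffs the two sets.
        //
        // |ActiveCustomers          |
        // |name        |order count |
        // |Acme        |3           |
        //
        public class ActiveCustomers : RowFixture
        {
            public override object[] Query()
            {
                // Would really query the database that steps 1 and 2 touched;
                // hard-coded here to keep the sketch self-contained.
                return new object[] { new CustomerRow("Acme", 3) };
            }

            public override Type GetTargetClass()
            {
                return typeof(CustomerRow);
            }
        }

        public class CustomerRow
        {
            public string Name;
            public int OrderCount;

            public CustomerRow(string name, int orderCount)
            {
                Name = name;
                OrderCount = orderCount;
            }
        }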

  • http://www.codebetter.com/blogs/jay.kimble jkimble

    I’m not going to say much more than “thanks,” Jeremy. I’ve felt that the zealotry needs to tone down some. I’ve seen so many posts where someone makes an off-the-cuff remark and gets flamed over TDD that it has soured me on all the Agile disciplines… It has worked as reverse evangelism…

    It’s nice to see someone come out and admit that “maybe we were a little harsh.” Thanks… maybe I’ll try TDD again (except that most of my projects are Data Driven from SQL Server which ends up having me trying to figure out how to test against the DB, and rereading a bazillion posts on how to do it… ugghhh!)

  • http://dotnetjunkies.com/weblog/johnwood johnwood

    Gary,
    “However, they have a very different purpose and are operating for a very different reason. ”
    Can you explain what you think those purposes are?

    “we need multiple views of the program to verify its intent”
    What if the source code *is* the intention? What’s the point of verifying it against itself?

  • http://archiveddeveloper.blogspot.com/ Gary Williams

    I would say that Unit Tests don’t encapsulate requirements. They give the intentions the programmer had for a particular block of code, which may, indirectly, say something about the requirements. However, they have a very different purpose and are operating for a very different reason.

    On another line, I don’t believe automated tests will *ever* go away, simply because we need multiple views of the program to verify its intent, regardless of how it’s written. The higher the acceptable risk, the less we need to test, but the tests will always need to be there.

    OT, Jeremy, can you write a post on the usefulness of environment tests? :)

  • jmiller

    FitNesse tests are a de facto DSL for testing. For complex scenarios we’ve been having a great deal of success with the new flow style fixtures like the DoFixture in the FitLibrary.

    I think anything would be less ambiguous than “the system shall” requirements. What the FitNesse style tests can do is make the business and tech folks do a really detailed analysis of a user story/feature/scenario before coding starts. You can always supplement the tables with explanatory text in the tests themselves.

  • http://dotnetjunkies.com/weblog/johnwood johnwood

    Incidentally, I haven’t actually used FitNesse myself on a project, but I know others who have and who rave about it. I like the collaborative and declarative aspects of designing the test suites (although I’m not sure the declarative aspect holds up in more complex scenarios). I’m sure it holds a lot of promise, if TDD is your thing. :)

  • http://dotnetjunkies.com/weblog/johnwood johnwood

    So you’re basically saying that tests are actually a *better*, or at least less ambiguous, representation of the requirements than a plain-English requirements document. Interesting. I’ll have to ponder that. It’s probably better to look at some real-world examples of a DSL vs. general-purpose language code with tests; even though there is no DSL out there to describe the requirements, we could probably come up with an imaginary one just for the purpose of comparison. When I get some time I’ll post more on the topic and maybe we can continue the discussion then.

  • jmiller

    “Acceptance tests are just unit tests implemented at a level that business users can understand”

    - that’s an awfully big distinction

    Having executable requirements that tell you in no uncertain terms that the system behaves correctly is awfully nice. Especially compared to the traditional “system shall do this fuzzy worded thing” requirements that leave too much room for interpretation.

    Have you looked at FitNesse (http://fitnesse.org) style tests? FitNesse enables the testers and developers to create succinct, readable tests in a mostly declarative fashion. I don’t know how you could be much clearer than a ColumnFixture-style test for business rules.
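
    As a contrived example (a made-up discount rule, not anything from a real project), a ColumnFixture pairs a wiki table of inputs and expected outputs with a small fixture class like this:

        using fit;

        // Wiki table: input columns on the left, and the trailing "?"
        // column holds the expected result for each row.
        //
        // |DiscountRule                       |
        // |order total|is preferred|discount?|
        // |100.00     |true        |10.00    |
        // |100.00     |false       |0.00     |
        //
        public class DiscountRule : ColumnFixture
        {
            public double OrderTotal;
            public bool IsPreferred;

            public double Discount()
            {
                // A real fixture would delegate to the production business
                // object; the rule is inlined here to keep the sketch small.
                return IsPreferred ? OrderTotal * 0.10 : 0.0;
            }
        }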

    At this point, I’m not aware of any way to create executable intentions, John. And when we do get to full-blown DSLs, I bet we’ll still have to TDD the creation of our homegrown DSLs, because there certainly won’t be a DSL for every possible problem domain. And you know well that the need for testing will never be eliminated.

    I just can’t see declarative-style DSLs completely replacing the need to drop back into OO or procedural coding.

  • http://dotnetjunkies.com/weblog/johnwood johnwood

    So what’s the difference between “desired functionality” and “requirements”?

    Whether you’re talking about acceptance tests or unit tests, they’re basically the same thing at different levels. Acceptance tests are just unit tests implemented at a level that business users can understand. It’s not that TDD isn’t testing the requirements; they’re just typically requirements for a lower-level piece of functionality.

    But it still comes down to the question of whether you communicate your requirements as tests or as intentions. A test describes the byproduct of the intention through a significantly more long-winded and roundabout approach. The intention, while difficult to articulate without the right semantics, is a far more accurate and succinct way to communicate the requirements.

    The value of TDD is inversely proportional to the clarity and intentional richness of the language. If we ever reach the holy grail of a language that can describe the intention one-to-one (which may never happen), then I don’t think there will be a place for TDD at all. Why question the requirements themselves?

  • jmiller

    For that matter, you might go check out Behavior Driven Development here –> http://behaviour-driven.org/

    I can’t see how it could possibly work with a static language like C#, but the Ruby implementation sounds pretty cool.

  • jmiller

    I was thinking of you when I wrote that, John. I think higher levels of abstraction will simplify TDD, not eliminate it altogether.

    For that matter, one of the coolest existing usages of a DSL I’ve seen is the built-in unit testing stuff in Ruby on Rails for simplifying TDD.

    As to your TDD posts from a couple of weeks back, I thought you were getting a couple of disparate things confused. TDD unit tests describe the desired functionality of a tiny, granular piece of code; that has very little to do with requirements. If you’re talking about automated acceptance tests, that’s a whole different ball of wax.

    To this quote from you -

    “To truly grasp the requirements, being told what is expected isn’t enough – you must have a thorough understanding of the problem itself. My daughter knows exactly what she wants because she wants it! The business expert knows exactly what he wants, because he understands the problem and the business system inside out.”

    TDD doesn’t address this at all and was never intended to. Automated *Acceptance* Tests à la NFit/FitNesse are our preferred mechanism for business-level requirements. Writing the input for the FitNesse tables really needs to be done by the business folks, hopefully together with us. Making as much of the requirements as possible into an unambiguous, executable definition *is* a great way to pin down your requirements: if your system accepts these inputs and produces exactly these outcomes, it is correct. If the team as a whole cannot write the acceptance tests, the story/requirements are not understood well enough to code yet.

    Even with better languages, I don’t see the value of executable requirements going away; they’ll just become easier to write.

    I’ve talked to some people that have seen Intentional Programming demos and walked away impressed, but it’s been cooking for a long, long time.

  • http://dotnetjunkies.com/weblog/johnwood johnwood

    As I’ve been explaining recently in my various blog posts, there’s inevitably a gap between the requirements and the end product, and you can either: a) ignore it and raise the system’s TCO through constant maintenance, b) encode the requirements in a language closer to the requirements themselves (DSLs or Intentional Programming, for example), or c) use TDD to verify that the code does indeed meet the requirements. TDD isn’t perfect, and I think (b) is the ideal answer (as you allude to above), but TDD isn’t useless at all: without the level of abstraction required to perfectly encode the requirements, it’s really the next best thing.

  • jmiller

    I just noticed that as well. I don’t know about you, but my grandmother just knows that I “work with computers.”

    Props to Brendan.

  • http://www.flux88.com Ben Scheirman

    I read your post, then scrolled down to view the comments. I didn’t see any, but I did see that advertisement at the bottom of the page:

    “Grandma would have been a lot happier if she understood TDD.” -lol

  • http://www.jeffreypalermo.com jpalermo

    TDD doesn’t guarantee the success of my software project? What??? I haven’t felt this way since I found out that using source control alone didn’t make my software faster.

    In case there is any doubt, I’m using a little sarcasm. :)