Avoid Testing Implementation Details, Test Behaviours

Every so often I return to Kent Beck’s Test-Driven Development. I honestly believe it to be one of the finest software development books ever written. What I love about the book is its simplicity. There is a sparseness to it that deceives, as though it were a lightweight exploration of the field. But continued reading as you progress with TDD reveals that it is fairly complete in its insights. Where later works have added pages and ideas, they have often lessened the technique, burdening it with complexity born of misunderstandings of its power. I urge any of you who have practiced TDD for a while to re-read it regularly.

For me one of the key ideas of TDD that is often overlooked is expressed in the cycle of Red, Green, Refactor. First we write a failing test, then we implement as quickly and dirtily as we can, and then we make it elegant. In this blog post I want to talk about some of the insights of the second and third steps that get missed. (Although you should remember the first step: tests don’t have tests, so you need to see them fail to prove them.)

A lot of folks stall when they come to implement because they try to implement well, thinking about good OO, SOLID patterns, etc. Stop! The goal is to get to green as soon as possible. Don’t try to build a decent solution at this point; just script out the quickest solution to the problem you can get to. Cut and paste if you can; copy algorithms from CodeProject or StackOverflow if you can.
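To make that concrete, here is a minimal sketch in C# with xUnit (the scenario, names, and numbers are mine, invented for illustration). The fastest route to green is often Beck’s “Fake It”: return the very constant the test expects.

```csharp
using Xunit;

public class PriceCalculatorTests
{
    [Fact]
    public void Orders_over_100_get_a_ten_percent_discount()
    {
        var calculator = new PriceCalculator();

        Assert.Equal(180m, calculator.Total(200m));
    }
}

public class PriceCalculator
{
    public decimal Total(decimal amount)
    {
        // Quickest thing that gets us to green: fake the answer with
        // the constant the test expects. We generalise in a later cycle.
        return 180m;
    }
}
```

Ugly? Certainly. But the bar is green within seconds, and the elegance comes in the step Beck describes next.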

From Test-Driven Development:

“Stop. Hold on. I can hear the aesthetically inclined among you sneering and spitting. Copy-and-paste reuse? The death of abstraction? The killer of clean design?

If you’re upset, take a cleansing breath. In through the nose … hold it 1, 2, 3 … out through the mouth. There. Remember, our cycle has different phases (they go by quickly, often in seconds, but they are phases):

1. Write a test.

2. Make it compile.

3. Run it to see that it fails.

4. Make it run.

5. Remove duplication.”

It’s that final step (which is our refactoring step) in which we create Clean Code. Not before, but after.

“Solving ‘clean code’ at the same time that you solve ‘that works’ can be too much to do at once. As soon as it is, go back to solving ‘that works,’ and then ‘clean code’ at leisure.”
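Continuing the hypothetical sketch from earlier: once a second test triangulates the rule, the refactor step replaces the fake with the real calculation while the green bar stands guard.

```csharp
public class PriceCalculator
{
    private const decimal DiscountThreshold = 100m;
    private const decimal DiscountRate = 0.10m;

    public decimal Total(decimal amount)
    {
        // The faked constant is gone: the duplication between the test's
        // expected value and the production code has been removed.
        return amount > DiscountThreshold
            ? amount * (1 - DiscountRate)
            : amount;
    }
}
```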

Now there is an important implication here that is often overlooked. We don’t have to write tests for the classes that we refactor out through that last step. They will be internal to our implementation, and are already covered by the first test. We do not need to add new tests for the next level – unless we feel we do not know how to navigate to the next step.
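As a hypothetical continuation of the sketch: suppose the refactoring extracts a policy class. It stays internal to the assembly, and no new test file appears; the existing behavioural test already exercises every path through it.

```csharp
// Extracted during refactoring. It is internal, so it never becomes part
// of our public surface, and it gets no tests of its own: the original
// PriceCalculatorTests still cover it completely.
internal class DiscountPolicy
{
    private const decimal Threshold = 100m;
    private const decimal Rate = 0.10m;

    public decimal Apply(decimal amount) =>
        amount > Threshold ? amount * (1 - Rate) : amount;
}

public class PriceCalculator
{
    private readonly DiscountPolicy _policy = new DiscountPolicy();

    public decimal Total(decimal amount) => _policy.Apply(amount);
}
```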

From Test-Driven Development again:

“Because we are using Pairs as keys, we have to implement equals() and hashCode(). I’m not going to write tests for these, because we are writing this code in the context of a refactoring. If we get to the payoff of the refactoring and all of the tests run, then we expect the code to have been exercised. If I were programming with someone who didn’t see exactly where we were going with this, or if the logic became the least bit complex, I would begin writing separate tests.”

Code developed in the context of refactoring does not require new tests! It is already covered, and safe refactoring techniques mean we should not be introducing speculative change, just cleaning up the rough implementation we used to get to green. At this point further tests help you steer, but they come at a cost.

The outcome of this understanding is that a test-case-per-class approach fails to capture the ethos of TDD. Adding a new class is not the trigger for writing tests. The trigger is implementing a requirement. So we should test outside-in (though I would recommend using ports and adapters and making the ‘outside’ the port), writing tests to cover the use cases (scenarios, examples, GWTs, etc.), and only writing tests to cover the implementation details of those as and when we need to better understand the refactoring of the simple implementation we start with.
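A sketch of what testing at the port can look like (the use case, handler, and names are my invention, not a prescription): the tests target the use case at its boundary, and everything behind it remains free to change.

```csharp
using System;
using Xunit;

// The port: the use-case boundary our tests target. Everything behind it
// (domain objects, helpers, persistence) is implementation detail.
public record RegisterMember(string Email);
public record MemberRegistered(string MembershipNumber);

public interface IRegisterMember
{
    MemberRegistered Handle(RegisterMember command);
}

// A deliberately thin handler; its internals can be refactored at will,
// because only the port's observable behaviour is pinned by tests.
public class RegisterMemberHandler : IRegisterMember
{
    public MemberRegistered Handle(RegisterMember command) =>
        new MemberRegistered(Guid.NewGuid().ToString("N"));
}

public class RegisterMemberTests
{
    [Fact]
    public void Registering_a_member_yields_a_membership_number()
    {
        IRegisterMember useCase = new RegisterMemberHandler();

        var result = useCase.Handle(new RegisterMember("ian@example.com"));

        Assert.False(string.IsNullOrEmpty(result.MembershipNumber));
    }
}
```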

A positive outcome from this will be that much of our implementation will be internal or private, and never exposed outside our assembly. That will lower the coupling of our solution and make it easier for us to effect change.

From Test-Driven Development again:

“In TDD we use refactoring in an interesting way. Usually, a refactoring cannot change the semantics of the program under any circumstances. In TDD, the circumstances we care about are the tests that are already passing.”

When we refactor we don’t want to break tests. If our tests know too much about our implementation, that will be difficult, because changes to our implementation will necessarily result in us re-writing tests – at which point we are not refactoring. We would say that we have over-specified through our tests. Instead of assisting change, our tests have now begun to hamper it. As someone who has made this mistake, I can vouch for the fact that it is possible to write too many unit tests against details. If we follow TDD as originally envisaged, though, the tests will naturally fall mainly on our public interface, not the implementation details, and we will find it far easier to meet the goal of changing the implementation details (safe change) and not the public interface (unsafe change).
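A contrived sketch of the difference (all names invented; the mocking library is Moq, assumed for illustration): the first test pins how the answer is produced and breaks under refactoring, while the second pins only what the caller observes.

```csharp
using Moq;
using Xunit;

public interface IDiscountRepository
{
    decimal RateFor(string tier);
}

public class InMemoryDiscounts : IDiscountRepository
{
    public decimal RateFor(string tier) => tier == "GOLD" ? 0.10m : 0m;
}

public class Pricer
{
    private readonly IDiscountRepository _discounts;

    public Pricer(IDiscountRepository discounts) => _discounts = discounts;

    public decimal Total(decimal amount, string tier) =>
        amount * (1 - _discounts.RateFor(tier));
}

public class PricerTests
{
    // Over-specified: verifies HOW the result is produced. Inline the
    // lookup, cache the rate, or rename the collaborator and this test
    // breaks even though the observable behaviour is unchanged.
    [Fact]
    public void Brittle_test_pins_the_implementation()
    {
        var repository = new Mock<IDiscountRepository>();
        repository.Setup(r => r.RateFor("GOLD")).Returns(0.10m);

        new Pricer(repository.Object).Total(200m, "GOLD");

        repository.Verify(r => r.RateFor("GOLD"), Times.Once());
    }

    // Behavioural: asserts WHAT the caller observes, so the internals
    // remain free to change underneath it.
    [Fact]
    public void Behavioural_test_pins_the_outcome()
    {
        var pricer = new Pricer(new InMemoryDiscounts());

        Assert.Equal(180m, pricer.Total(200m, "GOLD"));
    }
}
```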

Of course, this observation – that this idea has been overlooked by many TDD practitioners – formed the kernel of Dan North’s original BDD missive, and is why BDD focused on scenarios and outside-in for TDD.

About Ian Cooper

Ian Cooper has over 18 years of experience delivering Microsoft platform solutions in government, healthcare, and finance. During that time he has worked for the DTi, Reuters, Sungard, Misys and Beazley delivering everything from bespoke enterprise solutions to 'shrink-wrapped' products to thousands of customers. Ian is a passionate exponent of the benefits of OO and Agile. He is test-infected and contagious. When he is not writing C# code he is also the founder of the London .NET user group. http://www.dnug.org.uk

  • RichB

    I’ve been thinking about this some more. And while I agree you shouldn’t test every implementation detail, there’s huge value in testing complex implementation details.

    For example, suppose you implemented a feature using a home-grown file format. The format of the file is an implementation detail, but you sure want to have a good test suite for it. Alternatively, let’s say you implemented a feature by parsing URLs and you didn’t have a URL parser to hand, so you wrote one. Then you would need a Fixture for that too. Or let’s say you decided to code a feature by executing expressions with a simple expression-evaluator DSL.
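    A rough sketch of what I mean (all names invented): the parser gets its own fixture even though nothing outside the assembly ever sees it.

    ```csharp
    using Xunit;

    // An internal detail with intricate logic: worth its own fixture.
    internal record ParsedUrl(string Scheme, string Host, string Path);

    internal static class UrlParser
    {
        // Stand-in implementation for brevity; imagine the home-grown one.
        public static ParsedUrl Parse(string raw)
        {
            var uri = new System.Uri(raw);
            return new ParsedUrl(uri.Scheme, uri.Host, uri.AbsolutePath);
        }
    }

    public class UrlParserFixture
    {
        [Fact]
        public void Parses_scheme_host_and_path()
        {
            var url = UrlParser.Parse("https://example.com/a/b?q=1");

            Assert.Equal("https", url.Scheme);
            Assert.Equal("example.com", url.Host);
            Assert.Equal("/a/b", url.Path);
        }
    }
    ```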

    Reading this post makes me think your only experience is with glorified CRUD applications, but based on conversations I’ve had with you in the past, I know that’s not true.

  • RichB

    Up to a point. The danger is that if you refactor, then your test is no longer a unit test but an integration test. So it may be sensible to create unit tests for the new units.

    > Code developed in the context of refactoring does not require new tests! 
    Except when you are refactoring a legacy system with zero unit tests. In which case, you have to refactor to make the code unit testable!

  • http://matteo.vaccari.name/blog/ Matteo Vaccari

    Hi Erik, while I agree with everything that Ian says, having gone through all of the misunderstandings that he lists :-), there’s something more to add: when you’re TDDing you should always be on the lookout for opportunities to abstract. 

    In your average salary example: it might be too big a step to implement directly.  In this case you might want to use the Child Test pattern (also from Beck’s book).  For instance you might invent an Average object that takes the average of any collection of objects that know how to return a number.  You invent a “CanBeAveraged” interface that Employee could implement, and develop your Average object without the need or the burden of working with Employee objects.

    Once you have test-driven these building blocks you go back to the original test, and you let Employee implement CanBeAveraged.  A side effect is that the system now has less coupling than it would have if Average needed to know about Employees.
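    In code the sketch might look something like this (C#, with Employee kept minimal; the names are the ones invented above):

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    // The invented abstraction: anything that can yield a number to average.
    public interface CanBeAveraged
    {
        decimal Value { get; }
    }

    // Test-driven on its own, free of any knowledge of Employee.
    public class Average
    {
        public decimal Of(IEnumerable<CanBeAveraged> items) =>
            items.Average(i => i.Value);
    }

    // Back in the original test, Employee simply implements the interface.
    public class Employee : CanBeAveraged
    {
        public Employee(decimal salary) => Salary = salary;

        public decimal Salary { get; }

        decimal CanBeAveraged.Value => Salary;
    }
    ```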

    That is one way you could go about it.  Or maybe you could implement the averaging behaviour in Employee and then factor out the Average later.  If you feel unsure about the behaviour of the classes you extracted, you could put on a tester hat and write tests to cover their behaviour until you feel confident.  It’s a matter of judgment.  Kent says “if the logic became the least bit complex, I would begin writing separate tests” …

    Matteo

  • http://davesquared.net David Tchepak

    Really enjoyed this post. Thanks.

    I was wondering: if all our tests are outside-in and we don’t test the stuff we pull out, does that leave us driving almost entirely via acceptance tests? (I’m not saying that’s a bad thing; just trying to understand.)

    The problem I’ve faced when trying to follow this style of TDD is I get combinatorial explosions of test cases. If I follow a more mockist/GOOS approach I drive out abstractions as I go and end up testing the messages between dependencies, rather than lots of combinations of test cases. This has its own disadvantages, but I’ve had more success with that approach to date.

    Interested to get your thoughts on this, as I can see some good advantages to the more traditional TDD approach you’ve described.

    Cheers,
    David

  • http://twitter.com/colin_jack Colin Jack

    Good as always.

    This topic has always interested me because it’s a fundamental choice. You either do what you describe here and use refactoring as a key design tool, or start using unit testing to drive deeper and deeper into your design. Both approaches have their advantages.

  • Александр Сидоров

    >>A lot of folks stall when they come to implement because they try to implement well, thinking about good OO, SOLID patterns, etc. Stop! The goal is to get to green as soon as possible.
    No, the goal is to implement a task with appropriate quality.

    >>When we refactor we don’t want to break tests

    Great point. If you have a lot of mock expectations, your tests often produce false negatives during refactoring. That’s why I prefer classical TDD to the mockist approach (according to Fowler’s classification).

    The title is a bit strange. There are two types of tests: state and behaviour tests. But I guess by “Test Behaviours” you actually don’t mean that we should write behaviour tests.

  • http://www.daedtech.com Erik

    I enjoyed reading your post and thought you made good points about the public interface of your code and its behaviors being what you actually want to test.  One thing did raise an eyebrow, though: 

    “Adding a new class is not the trigger for writing tests. The trigger is implementing a requirement.”
    I take this to mean that if I had some public method that created a list of employee objects and calculated the average salary of those employees, you’re saying that extracting a factory for creating the employees and some sort of little class for calculating the average would mean I had no motivation to test my new “AverageCalculator” and “EmployeeFactory” classes – they’re already tested, insofar as the application needs them, by the tests for my original method.

    But aren’t these now integration tests?  My original tests no longer test the “unit” of my original class/method, but the result of my class/methods interactions with other classes.  I’ve increased the public surface area of my code by refactoring, but I haven’t increased the test suite.

    Wouldn’t this place a burden on “future me” or someone else if they then want to use my AverageCalculator and EmployeeFactory?  They’re responsible now for testing their classes and also their class’ path through my non-unit-tested classes.

    Thoughts?

  • http://www.caffeinatedair.com/ Matt

    Great points. I think that mastering TDD is challenging, but it is often made more difficult when you jump the gun in the whole red-green-refactor cycle. When you watch people who are proficient at TDD, you quickly notice that they start with writing the simplest thing possible to make the failing test pass. If you are already planning an elaborate OO design at this early stage, you have missed the point.

    Your post also addresses the question that many devs have about testing private methods. If you are testing a feature/requirement, you are by default testing the public surface of your codebase. The private methods are implementation details.

    That said, I still find scenarios where I want to test private methods ;)