Learning and craftsmanship

Roy has a pretty thoughtful post on the barrier to entry that Test-Driven Development presents to most developers. I hope I am not doing Roy a disservice by summarizing it as: we have made TDD unapproachable to many by making it more and more complex.

I would agree with many of the points that Roy makes about returning to the basics when approaching TDD. I still find that Kent Beck’s book captures the essence of TDD. It also contains wisdom that, when I return to it, makes me realize that much of what I assumed were new ideas was all there in Kent’s original book on the topic. I like the clarity of the Four-Phase Test and its cousin Arrange/Act/Assert. I agree that we should prefer testing the state of the class we are testing over its behavior.

But I do not think that another naming taxonomy for techniques to swap out depended-on components is going to do anything more than muddy the waters. Maybe you don’t like Gerard’s Test Doubles, but they do catalogue a lot of what is out there in the wild. The naming conventions behind test doubles are not the issue most people encounter. The problem they hit is that in reality there are hard-to-test areas, and people look for help in overcoming them.

After I learnt TDD I wrote, by my current standards, poor unit tests. I wrote tests that touched the database. I wrote tests that touched the file system and called across the network. I wrote tests that checked private methods. I wrote tests that shared fixture or had dependencies between them. I put mocks everywhere, replacing every depended-upon component. And I suffered with fragility, slowness, and unmaintainable code for my mistakes. But I needed to make those mistakes, because it was only by making them that I could understand the need for, and value of, a solution.

We live in a fairly lucky time for software craftsmanship. The wisdom of the masters of our trade is published in accessible formats for all of us to read. So when we need to learn at the feet of the masters we do not have to look for opportunities to work alongside them, we can find their wisdom in the form of patterns, principles, and practices published in books or on the web. I started my career in a period where this kind of knowledge was much less accessible, and the only way to discover a better solution was to find one yourself or work with someone who knew of a different approach. We live in fortunate times. But the ease with which we can find best practice often masks the fact that understanding the value of that advice usually requires us to have experienced the pain beforehand.

When I teach TDD one point I try to flag is that the first step is to start writing tests. Not to worry about how good those tests are, but to start writing tests. Once you are writing tests and experiencing the pain that comes from poor tests, you will appreciate the value of some of the best practices out there. Because then you understand them as solutions to problems, not just more complexity. But first, just start writing tests.

The difficulty with the software community is that experienced practitioners can overwhelm newbies by flooding them with information on ‘how to do it right’. The problem is that the most important step is not doing it right, but doing it at all. As a result newbies suffer from the kind of analysis paralysis that BDUF usually brings: should I be using a fake here, or should I be using a mock? Instead we need to embrace the TDD solution to analysis paralysis: make it work, then make it right. Get your code under test, then worry about refactoring your test code. Over time you will learn enough to write better test code the first time around. Your ability to see what Kent calls the ‘obvious implementation’ widens to include these new techniques.
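To make that concrete, here is a minimal sketch in Python (the class and test names are invented for illustration; our own code is C#, but the shape is the same): the first test does not need to be elegant, it just needs to exist.

```python
import unittest

# A hypothetical class under test, invented purely for this example.
class Basket:
    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

class BasketTests(unittest.TestCase):
    # "Make it work": one clumsy test is better than no test.
    # It can be refactored later, once the pain points show themselves.
    def test_total_sums_the_prices_added(self):
        basket = Basket()
        basket.add(10)
        basket.add(5)
        self.assertEqual(15, basket.total())
```

Nothing clever, and that is the point: once a test like this exists, refactoring it is a far smaller step than writing the first one ever was.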

We still need to do this today. On my current project we are still looking at the best way to test our controllers. Our existing test code works but is fragile, and we need to review where we check the controller’s action by checking state, and where we use mocks to check the interaction with the domain because the controller has no state that we can check. Right now, in some places, we use the state of the domain to check the controller’s action, and it is hurting us with test fragility because the test depends on too much. So we need to make it right, and in doing so we will increase our understanding of when to use mocks.
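For illustration only (the names are invented, and the sketch is in Python with unittest.mock rather than our C# codebase), the interaction-based style looks something like this: when the controller holds no state of its own, a mock lets the test verify the call into the domain rather than inspecting state.

```python
from unittest.mock import Mock

# A hypothetical controller, invented for this sketch. It keeps no
# state, so a state-based check has nothing to inspect; instead we
# verify its interaction with the domain service.
class OrderController:
    def __init__(self, order_service):
        self._order_service = order_service

    def submit(self, order_id):
        self._order_service.place_order(order_id)

def test_submit_places_order():
    service = Mock()
    controller = OrderController(service)
    controller.submit(42)
    # Interaction-based assertion: the controller delegated to the domain.
    service.place_order.assert_called_once_with(42)
```

The trade-off is the usual one: the mock couples the test to the conversation between controller and domain, so reserve it for cases where there really is no state to check.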

There is no shame in making a mistake, only in not learning from it.

Don’t be put off by patterns and practices and assume you have to master all of them before you can begin work. They are there to help you when you need a solution.

Make it work, then make it right.






About Ian Cooper

Ian Cooper has over 18 years of experience delivering Microsoft platform solutions in government, healthcare, and finance. During that time he has worked for the DTi, Reuters, Sungard, Misys and Beazley, delivering everything from bespoke enterprise solutions to 'shrink-wrapped' products for thousands of customers. Ian is a passionate exponent of the benefits of OO and Agile. He is test-infected and contagious. When he is not writing C# code he is the founder of the London .NET user group. http://www.dnug.org.uk
This entry was posted in Agile, Fakes, Mocks, Object-Orientation, Stubs, TDD, XP, xUnit.
  • http://www.bzlab.ru Alexander Byndyu

    “The problem is that the most important step is not doing it right, but doing it at all.”
    Absolutely agree with you. I am trying to improve TDD practices in our company with this slogan! =)

  • Alfino Batolo

    Interesting but for the sake of argument here is a completely different view: http://littletutorials.com/2008/09/29/thought-driven-development/

  • Ian Cooper

    @Bob, @Jeremy

    A lot of us started that way. We still touch the Db in our acceptance tests.

    The smell that drove me away from databases was not the poor performance but the lack of clarity in the tests and their fragility. The tipping point may be different for different people.

    When you test using a database you need to set up enough data to meet your referential integrity constraints, load the expected parts of the object graph, and so on. You also either have to tear it down or use something like the begin-transaction/rollback approach to discard your changes.
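    As a sketch of that begin-transaction/rollback idea, using Python's sqlite3 in-memory database purely for illustration (the table and names are invented for the example):

```python
import sqlite3

def test_insert_is_discarded_by_rollback():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER)")  # DDL autocommits
    # The INSERT implicitly opens a transaction; the same connection can
    # see the uncommitted row while the test makes its assertions.
    conn.execute("INSERT INTO orders VALUES (1)")
    count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert count == 1
    # Roll back instead of tearing down, so the next test starts clean.
    conn.rollback()
    count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert count == 0
```

    The rollback spares you hand-written teardown, but the setup cost that obscures the test is still there, which was the real smell for us.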

    The amount of setup/teardown code required was obscuring our tests. We spent more time writing that code than writing the tests themselves. That cost meant people began to resist writing a test for a new feature, because of the cost of writing the matching setup code.

    The tests became obscure. It became hard for us to determine what was under test. You knew that a method was under test, but not which parts of the pre-condition state were important for the post-condition state.

    The tests became interdependent. Someone would change one piece of data in the setup for their test and a dozen other tests would break, forcing us into a debugging exercise to understand what dependency we had broken.

    When you have to go into the debugger to fix tests, then it may be a smell that your tests are too complicated and need simplification.

    Indeed folks still dislike acceptance tests primarily because they still have to hit all these pain points.

    But the point I am trying to make was that it was these smells that pushed us into changing the dirty nappy. Folks new to TDD may well need to smell the s**t too before they change.

  • http://blogs.msdn.com/jmeier J.D. Meier

    > The problem is that the most important step is not doing it right, but doing it at all.
    I once heard the expression … anything worth doing right, is worth doing wrong.

    I originally thought it was a typo, but it was about taking baby steps, failing fast, and moving on.

    I’ve always preferred the name – test first development. I like the idea of starting with your tests for success. If you know your target, you have a better chance of hitting it.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @Bob Saggett,

    When you hit 500+ tests, the performance starts to become a huge issue. Right now we have about ~800 unit tests that run in 20 seconds and ~170 integration tests that either touch the database or test views in a browser that run in 90 seconds.

    Each test should run in milliseconds.

    Besides, it’s not just the run time, it’s also the mechanical work needed to set some test data up in code/memory versus putting the database in a known state.

  • http://saggettbob@gmail.com Bob Saggett

    Still playing with TDD. I like to read as much as possible, then discard the bits I don’t like. I particularly don’t like the not-touching-the-database bits. I like that I can test by writing to the database and prefer this to mocking those particular elements. The main argument against seems to be speed of testing. I haven’t found this to be a problem; just whack on a fast hard disk (solid state is ideal) and test away.

  • Ian Cooper

    @Daniel As Michael points out, keep it simple to begin with, or walk before you run.

    That said, acceptance testing is the key to testing the finished piece. The issue here is that unlike TDD there is a lot more variance among experts in the best approach to take.

    Still, we could broadly say that your story (or use case, or whatever your requirement format is) should have some customer-defined acceptance criteria. You need to write automated tests to prove those, not just to prove your units. It would be best to do that up front as well, so you have a clear roadmap of where you are going. You want to avoid UI tests here; they are expensive and brittle. Focus on driving the requirements by calling your code behind the UI layer. Tools for writing these kinds of tests range from xUnit through StoryTeller to Fitnesse.

    Today we are using Fitnesse to do this, because it gives us good communication with the stakeholders. But it’s not without its kinks. For a long time we were just writing them in xUnit. The trouble there was that they were based on developer interpretation and thus subject to ‘Chinese whispers’ on the requirement. However, they may be a good starting point.

    Separate these tests from your unit tests; they run slower. You will probably run them less often during the working day, so this should not be an issue for you.
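    One simple mechanism for that separation, sketched with Python's unittest (the class name and the environment variable are invented for this example; Fitnesse and the xUnit tools have their own equivalents):

```python
import os
import unittest

# Opt in to the slow suite explicitly, e.g. RUN_ACCEPTANCE=1 on the build box.
RUN_ACCEPTANCE = os.environ.get("RUN_ACCEPTANCE") == "1"

class AcceptanceTests(unittest.TestCase):
    @unittest.skipUnless(RUN_ACCEPTANCE, "acceptance tests run on demand only")
    def test_place_order_end_to_end(self):
        # This is where you would drive the code behind the UI layer.
        self.assertTrue(True)
```

    During the working day the runner reports the test as skipped; the nightly or pre-commit build sets the variable and runs the full suite.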

    But it is these acceptance tests where you go end-to-end.

    With legacy code it can be easier to write an acceptance test suite to preserve behavior first, then think about unit tests later.

  • http://blog.just3ws.com Michael Hall

    Hear, hear. Gold stars all around. I’ve noticed a movement in the ALT.NET community that looks more akin to “how many bleeding edge concepts and technologies can I cram into my code” than “how can I actually make my product better”. Keeping it simple really is about keeping it simple. Add concepts and techniques as they become apparent, not because some guy told you it’s the Golden Path™. Want to write better code? Then write tests. Oh, the tests are getting complicated and hairy? Then refactor to stubs and mocks. Even if you’re a mocking expert (npi), trying to shoehorn in those concepts too early can cause you to spend more time writing around the testing framework than the other way around. Concentrate on the problem the application is intended to solve, write tests to verify it works, and swap out pieces as necessary when appropriate. Not the other way around.
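    That “refactor to stubs” step might look like this minimal Python sketch (all names invented for the example): a hand-rolled stub stands in for the collaborator that made the test slow and hairy.

```python
# Invented for illustration: stands in for a service that would
# otherwise call across the network, returning canned answers.
class StubExchangeRates:
    def rate(self, currency):
        return {"USD": 1.0, "GBP": 2.0}[currency]

# The class under test depends on an abstraction it is handed,
# so the stub can be swapped in without touching production code.
class PriceCalculator:
    def __init__(self, rates):
        self._rates = rates

    def in_currency(self, amount, currency):
        return amount * self._rates.rate(currency)

def test_price_in_gbp():
    calc = PriceCalculator(StubExchangeRates())
    assert calc.in_currency(10, "GBP") == 20.0
```

    The stub arrives as a response to a concrete pain, not as an up-front article of faith, which is exactly the ordering argued for above.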

  • Daniel

    As somebody who’s slowly implementing TDD, I agree that the “barrier to entry” is difficult. For me, the biggest barrier is this “don’t touch the database, file system, or network” rule you mention. After much reading and esp. Rob Conery’s MVC videos, I understand it, and now get how to write testable code that follows this rule. But I know this will require big changes in our code and the way we develop. I also know that the most likely code to break is those very pieces.

    The phrase I keep hearing is “it’s good to test those things, but not in your _Unit_ tests”. Given that, I’d love to see some TDD proponents give examples of the proper time/place/methodology for covering these bits of code. I think something along those lines may clear up some of the confusion developers new to TDD may have…

  • Ian Cooper


    Agreed. I always think that the tragedy around TDD adoption is that the people who need the highest skill level, those bringing legacy codebases under control, are often the ones with the least experience of the techniques.

    Even so, legacy projects are definitely an area where make it work, make it right applies to TDD.

  • Mat Roberts

    The problem I had learning TDD is that it is very hard to do on an existing code base.

    A couple of years ago I started on a green field project and we did TDD from the start. This allowed me to develop my skills.

    Now I am working on a legacy project, and I can apply TDD now that I know how to do it (with the help of Feathers’ book).

    But I think the biggest barrier to entry is existing code.

  • Ian Cooper

    @James I’m not a figure of authority, but I’m happy to mull it over. A programming challenge is certainly a good idea that alt.net bloggers might be able to make use of generally.

  • Ian Cooper


    I suspect that is why one of the XP principles is courage.

    All projects experience change as developers learn more about the domain in which they are working. All projects have poor code that developers, with the experience of having written it once, would write very differently the next time around.

    Attempts to solve this through BDUF consistently fail.

    TDD helps with change, because it protects you against it. Poorly written TDD makes that protection more expensive, but all software projects need to factor in the expense of working with new techniques. What offsets that cost is the technical debt that accumulates without TDD.

    Because you have to pay ‘up front’ with TDD, there is a leap of faith in getting a customer to spend that money for the promise of future reward. So we usually pick smaller projects, with lower risk and visibility, to trial new techniques, so that we can prove their value and work out the kinks.

    Agile rarely happens in a revolution, but is usually an evolution as best practices prove their fitness for survival and replace the approaches of yesteryear.

    But sure, it’s tough to effect change, and I think it would be disingenuous to deny that.

  • James

    I’m a wannabe TDD developer who’s yet to take the plunge. I’ve read a fair bit but not actually implemented anything yet – slow starter! I’m currently feeling the pain of not having any unit tests in a project of mine that has been resurrected for an upgrade. I’m making changes left, right and centre with fingers crossed that I’m not breaking anything!

    An idea – I would love to see a weekly TDD exercise where some figure of authority sets a challenge with a small set of software requirements and then a week later their TDD implementation is revealed along with any common mistakes made by submitting participants.

    Not only would this help people like me learn the “right” ways it could also highlight beginner’s mistakes and why they are wrong. This would only ever allow for relatively simple scenarios but it would be a start. Besides, isn’t one of the ideas behind TDD to break requirements down into the smallest simplest steps?

    Just an idea that will unfortunately require someone else to take up and run with…

  • http://www.blogcoward.com jdn

    “When I teach TDD one point I try to flag is that the first step is to start writing tests. Not to worry about how good those tests are, but to start writing tests. Once you are writing tests and experiencing the pain that comes from poor tests, you will appreciate the value of some of the best practices out there.”

    This is a common theme that I’ve read among people who are experienced with TDD, and this is what scares businesses (among other things): the idea that we are going to introduce something that will cause problems in the beginning but pay off in the long run.

    I’ve seen enough cases where the problems are large enough that the practice gets dropped before the payoff comes, and businesses get leery about this.

    I’m not sure what the solution is.

  • http://davesquared.blogspot.com David

    I couldn’t agree more with this post. I’ve been guilty of this form of paralysis myself.

    The most valuable antidote to this that I’ve found is Michael Feathers’ book Working Effectively with Legacy Code. It showed me that if you have your code under test, no matter how ugly the code or the tests, you can work toward fixing it as you learn more.

  • http://devlicio.us/blogs/casey Casey

    One of the biggest problems I see with picking up TDD is that developers seem to find it very hard to simplify.

    The key to solving any problem is to break it down into its simplest components and then to deal with each of those individually, which unit testing and TDD essentially require as a prerequisite.

    Too many developers see a web page, and gape at the number of possible interactions there are, and therefore cannot see where on earth they would start with testing. They can incrementally chip away at that one page with lots of changes here and there to get it working right, but cannot conceive of it as a set of very small interacting pieces.

  • http://holytshirt.blogspot.com/ Toby Henderson

    I completely agree with you: just do it and the rest will come with time. Good craftsmanship takes time; there are no short cuts, only guides along the way.