SetUp and Teardown Methods in Test Classes

Jim Newkirk is blogging about the downside of setup and teardown methods in test classes, and why you shouldn’t use them.

Setup and teardown methods attract entropy faster than an outsourced programmer with his first patterns book.

Jim’s new framework, xUnit.NET, doesn’t have primitives for setup and teardown, although it sounds like there are mechanisms that could be used to accomplish the same kind of thing.

Roy feels that xUnit.NET isn’t quite there yet.  I think that Roy’s perspective will be reflected by lots of folks who have become used to NUnit (et al) over the past six years.

I think that Jim is on the right track, but I’m the kind of guy that feels that a test class’s greatest responsibility is to document behavior in the clearest possible way, even if that means sacrificing design qualities best reserved for functional code – like reuse.

Unit test patterns in .NET can learn a lot from RSpec, where setup and teardown methods, as well as test methods themselves, are kept small – tiny, actually – by breaking up multiple test classes into smaller, more focused modules.

An RSpec context (“Describe” block) doesn’t have a direct equivalent in .NET and NUnit, but the spirit of a test context can still be achieved.

A context is typically smaller than an xUnit test class.  Because it’s smaller, it avoids a significant contributor to test class entropy.

Here are a couple of practices I use in RSpec to keep entropy at bay that are also applicable to unit testing in .NET:

  • Don’t put anything in a setup method that doesn’t pertain to every test method in the test class.
  • If you have stuff in a setup method that doesn’t pertain to each test method, break up the test class into multiple test classes where the instance members initialized in the setup pertain to each test method in the new test classes.
  • These new, smaller test classes are test “contexts”.
  • Keep related contexts in the same file.
  • When you get four or five test methods in a single context, consider whether the test methods are related to the same concern.  If they’re not, break up the context.
  • Name contexts for the concern.
  • Name test methods to describe the experience of the concern under test, not the implementation of either the test or the functional code.
  • If you can’t see the setup method on the same screen and in the same glance as the test methods, break up the context.
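The practices above can be sketched in code. Here's a minimal illustration using Python's unittest (the ShoppingCart domain and all of the names are hypothetical examples of mine, not anything from Jim's or Roy's posts) of what small, focused test contexts look like, with each setup method containing only what every test in its context needs:

```python
import unittest


class ShoppingCart:
    """Hypothetical class under test."""

    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    @property
    def is_empty(self):
        return not self.items


class EmptyCartContext(unittest.TestCase):
    """Context: a newly created cart.
    setUp holds only what every test here needs."""

    def setUp(self):
        self.cart = ShoppingCart()

    def test_is_empty(self):
        self.assertTrue(self.cart.is_empty)

    def test_has_no_items(self):
        self.assertEqual(len(self.cart.items), 0)


class CartWithOneItemContext(unittest.TestCase):
    """Context: a cart holding a single item.  Rather than branching
    inside one big setup method, the differing fixture gets its own
    small class -- named for the concern, kept in the same file."""

    def setUp(self):
        self.cart = ShoppingCart()
        self.cart.add("apple")

    def test_is_not_empty(self):
        self.assertFalse(self.cart.is_empty)
```

Both contexts fit on one screen with their setup in the same glance as the tests, which is the point.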

The problem with setup methods has to do with solubility.

Loading setup methods up with initialization that doesn’t pertain to each test method, and having test classes that scroll on out of sight, saps solubility.

If you really want setup behavior in xUnit.NET, I bet you can figure out a way to make that happen.  However, Jim and Brad are giving you a chance to reconsider your own testing practices in light of their extensive experience, and in light of emerging beliefs about unit testing and test-driven practices as they continue to evolve based on the discoveries of some of the field’s most persistent explorers.

Posted in Agile, Design, Solubility, Test-Driven Development

SOA, BI, Events, and Push

Arnon talks about event-based BI:
http://www.infoq.com/articles/BI-and-SOA

This is exactly what I’ve been thinking about for our Rails / .NET integrated solution.  I thought that this architecture would be received as heresy.  I’m stoked to see that folks like Arnon are out in front (of my efforts at least) blazing a path.

Posted in Design

My Credo

I never believe that anything I do is correct until I subject it to the inquiry, “How is this wrong?”

I believe that everything I do is suspect.  This is my point of departure for every single line of code I write.  Over the past few years this inquiry has become instinctual.

Posted in Agile, Design

Write Specs to the Experience Rather than the Implementation

This post is a place-holder for a longer post with examples, but I’ve been sitting on this one for months, so I thought I’d just toss it out there for what it’s worth.

Unit testing practices often make no bones about writing test names that describe the implementation.  You see signs of this when programmer-speak shows up in test names.  In more egregious cases, words like “returns” and “calls” and “invokes” and “raises” and “throws” litter the test names.  In more subtle cases, the test names describe the code in the test method itself.

Behavior-driven test names, or specification names, should describe the experience of the user of the code, even if the immediate user is another piece of code.

For example, rather than using “…throws an exception” in a test name, you could say, “…is not allowed.”
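To make the contrast concrete, here's a hypothetical registration example of my own (sketched in Python's unittest), showing an implementation-flavored name next to an experience-flavored name for the same behavior:

```python
import unittest


class Registration:
    """Hypothetical code under test: the same email can't register twice."""

    def __init__(self):
        self.emails = set()

    def register(self, email):
        if email in self.emails:
            raise ValueError("duplicate email")
        self.emails.add(email)


class WhenRegisteringADuplicateEmail(unittest.TestCase):
    def setUp(self):
        self.registration = Registration()
        self.registration.register("a@example.com")

    # Implementation-speak: the name describes the mechanics of the code.
    def test_register_raises_value_error(self):
        with self.assertRaises(ValueError):
            self.registration.register("a@example.com")

    # Experience-speak: the name describes what the user of the code
    # experiences, and still reads sensibly if the failure mechanism
    # (exception type, return code) ever changes.
    def test_registering_the_same_email_twice_is_not_allowed(self):
        with self.assertRaises(ValueError):
            self.registration.register("a@example.com")
```

The two test bodies are identical; only the names differ, and only the second one survives a change in how the rule is enforced.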

You might think that this will lead to tests that are less discoverable, but it doesn’t work out that way in practice.  When I stick to good test code practices and patterns all around, I end up with code that is as discoverable, and often more discoverable, than if I tried to capture in the test name that which is already in the test code.

This means having a single concern per test.  This often equates to having one assert per test, but “one assert per test” is a poor choice of language: it doesn’t express the essence of the problem directly, and it depends on the programmer already knowing the underlying rule rather than learning it from how the rule is expressed by those who already follow it instinctively (a common problem in agile development).
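A small sketch of the distinction (again a hypothetical example of mine, in Python's unittest): separate concerns get separate tests, while multiple asserts within one test are fine as long as they all examine the same concern.

```python
import unittest


class Order:
    """Hypothetical class under test."""

    def __init__(self):
        self.lines = []
        self.total = 0

    def add_line(self, price, quantity):
        self.lines.append((price, quantity))
        self.total += price * quantity


class WhenAddingALineToAnOrder(unittest.TestCase):
    def setUp(self):
        self.order = Order()
        self.order.add_line(price=10, quantity=3)

    # One concern: the total reflects the added line.  A second assert
    # here would be fine only if it examined this same concern.
    def test_the_total_reflects_the_new_line(self):
        self.assertEqual(self.order.total, 30)

    # A separate concern gets a separate test,
    # not a second assert piled into the test above.
    def test_the_line_is_recorded(self):
        self.assertEqual(self.order.lines, [(10, 3)])
```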

It also means keeping setup code focused on a single concern and not using too many mocks.  And it means a whole lot of stuff that is probably too subtle to describe and easier to demonstrate in code.

Posted in Agile, Behavior-Driven Development

TDD is About Not Knowing

The greatest obstruction to grokking Test-Driven Development is the assertion that a developer makes to himself about how right his plans are for a solution.

The essence of TDD is a suspension of our belief that we know what we’re doing, and a submission to a structured process of exploration and elaboration of ideas.

No matter how certain I am that a conception of a design is right, I test the assumption against reality through TDD.  I inevitably find that my higher-level design ideas withstand TDD scrutiny, but the fine-grained, atomic design assumptions don’t pass muster with any great regularity.

Because of this, I rarely even try to conceive of low-level design ideas up front.  I know that by fleshing them out through a process of specification with code, I end up with a good result in good time, and the result is frequently better than what I had assumed, and often in surprising ways that open me up to new avenues of learning and understanding in programming and design.

Many good developers dispute TDD’s viability, and yet many excellent developers practice it diligently, and claim that they went from being good to excellent because of what they learned with TDD.

I think that the difference between a good developer and an excellent developer is the excellent developer’s willingness to not know, an openness to explore, and faith in skills that guide solutions to good ends even when the end is not known at the outset.

Posted in Agile, Design, Test-Driven Development