Can we redefine Unit Test?

It's just an opinion, but I'm thinking that the hardcore Feathers definition of a "Unit Test" needs to be restated, or at least not applied so literally.  I used to get a little annoyed with one of my former colleagues who was a bit anal about his definition of a unit test.  "Oh no, that test uses two classes, it's an integration test, so we should put it over here."  Let's worry more about the goal of writing a test and less about the mechanics of a test.

I typically like to set up one assembly for unit tests and another for integration tests.  The point of the separation isn't really unit versus integration so much as fast tests versus slow tests.  I don't really care that much if more than one concrete class is involved in a single "unit" test (with some provisos listed below).

The important thing about TDD/BDD is to use a test to specify the desired behavior of a piece of code.  I see people hurting themselves by putting too much emphasis on the letter of the "unit test" law and not enough on its spirit.  The goal isn't to have a disconnected "unit test" for each piece of code, it's to have a test that specifies the behavior of a piece of code.  Thinking of some specific examples, I hate to see people try to unit test data access code by mocking ADO.Net.  Another example that bugs me is seeing people obsess over trying to mock the file system.  By all means, do put a thin wrapper around that stuff to isolate the data access from the rest of the code, but drive the construction of code that only interacts with the database by interacting with the database.  If a class's only raison d'etre is to do data access, test it against the database!  Writing a "unit test" with mocked out ADO.Net calls doesn't add much value and sucks down time.
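
To be concrete about the thin wrapper idea, all I have in mind is something this small.  The IFileSystem name and its members here are purely illustrative, not an existing library:

    using System.IO;

    // A deliberately thin wrapper around the file system.  Code that *uses*
    // IFileSystem gets driven by fast unit tests against a stub implementation;
    // the wrapper itself is so thin that the only test worth writing for it is
    // an integration test against the real disk.
    public interface IFileSystem
    {
        bool FileExists(string path);
        string ReadAllText(string path);
        void WriteAllText(string path, string contents);
    }

    public class FileSystem : IFileSystem
    {
        public bool FileExists(string path)
        {
            return File.Exists(path);
        }

        public string ReadAllText(string path)
        {
            return File.ReadAllText(path);
        }

        public void WriteAllText(string path, string contents)
        {
            File.WriteAllText(path, contents);
        }
    }

The same shape applies to the ADO.Net case: isolate the data access behind a skinny interface, unit test everything that consumes it, and test the implementation itself against the real database.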

Here's my alternative version of the Feathers list:

A Unit Test is:

  • Fast
  • Short
  • Easy to set up, control, and verify, i.e. very few variables in the test equation
  • Specifies the behavior of a minute fraction of the code
  • Tells me exactly what is wrong.  "This, this code right here is wrong."  If a test just tells you that something is wrong, it's not a unit test, it's an integration test.  In other words, if I need a debugger to fix a broken test, it's most likely an integration test.

Having a little cluster of two or more concrete classes in a unit test doesn't bother me, as long as the unit test meets these requirements.  The real "categorization" of tests for builds and developer runs is strictly a Fast vs. Slow split.  By that definition, almost any test that calls outside a single AppDomain is a slow test.  Any test that requires some kind of environment setup, or involves an intensive calculation, gets shoved into the "slow" integration test assembly.
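
To make the split concrete, here is a rough sketch of the two kinds of tests.  The classes and test names are made up for illustration, and NUnit's Category attribute is shown only as an alternative if you'd rather tag tests than keep two physical assemblies:

    using NUnit.Framework;

    // Made-up class under test, just so the example is self-contained
    public class DiscountCalculator
    {
        public decimal Total(decimal subtotal, decimal discount)
        {
            return subtotal * (1 - discount);
        }
    }

    // Fast test: purely in-memory, trivial setup, and when it fails it points
    // straight at the broken code.  This one belongs in the unit test assembly.
    [TestFixture]
    public class DiscountCalculatorTester
    {
        [Test]
        public void Applies_the_discount_to_the_subtotal()
        {
            Assert.AreEqual(90m, new DiscountCalculator().Total(100m, 0.10m));
        }
    }

    // Slow test: it reaches outside the AppDomain and needs environment setup,
    // so it goes into the integration test assembly, or at least gets tagged so
    // the fast developer run can exclude it.
    [TestFixture, Category("Integration")]
    public class InvoiceRepositoryTester
    {
        [Test]
        public void Can_round_trip_an_invoice_through_the_database()
        {
            Assert.Ignore("Placeholder for a test that hits the real database");
        }
    }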

On a more general note, any automated test that is hard to diagnose when it fails isn't very valuable.
 

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
This entry was posted in Test Driven Development.
  • http://chrisdonnan.com chris donnan

    I also practice 2 packages of tests. Unit tests must be fast, and I try to shoot for as textbook as can be. The integration test package is very useful, particularly when you have services written by non-TDD-ers (shameful)! I try to put tests like this at the boundaries between teams; they almost always have great value and are used well throughout the project. I also think integration tests (using a test harness, aka NUnit) are helpful if you are coming onto a project that has no tests. Basic ‘smoke tests’ at a high level will help you make bolder changes/fixes as you go through the motions with the project.

    -Chris

  • Wayne Mack

    I’m in agreement with Jeremy. If one wants to adopt Test First Design, one cannot segregate the tests to be written on a per-class basis. In a project, I really don’t care whether someone chooses to call a particular test set a “unit test” or an “integration test.” I do care whether the tests are automated, whether the tests are completely and continuously run, and whether the tests are written prior to the associated code. As in any other complex task, these priorities have implicit conflicts.

    To facilitate refactoring, the tests need to minimize restrictions on the software design. Refactoring that simultaneously changes code and test structure is highly risky. This implies that the tests need to be written at a high level, atop a stack of evolving classes.

    Tests written at a high level, however, may require quite a bit of set up and tear down to extensively test some complex operations. It may be more effective to write tests that exercise these operations at a lower level and avoid the repeated set up. This way I can fully exercise the desired operation and keep the test case clear and understandable. I will still create a basic higher level test to ensure the lower level class is called, but I do not feel it is valuable to extensively test the lower class’s capabilities through the entire stack once they have been verified by a lower level test.

    In order to run the tests continuously, i.e., every several minutes, the tests must run quickly. Some tests, however, tend to bog the run down. To solve this, I usually end up creating two test suites, a full test suite and a fast test suite. When I notice a particular test that causes the status bar to freeze and jump, I remove that test from the fast suite and leave it only in the full suite. I can run the fast suite repeatedly and reserve the full suite for significant check-ins, approximately 2 to 4 times per day. The downside is that this approach requires duplicating the entry of new test sets in both the full and fast test suites. Just be anal about it and ensure that nothing gets omitted from the full test suite.

    The rationale behind test first design is ensuring that the code you are writing is functionally correct. Tests must drive the decision to write code in a particular class, call library functions, or call lower level classes. I do not care whether you call these different cases unit tests or integration tests; I do care whether they are included in the automated test suite that we run.

  • jmiller

    Liang,

    Sorry, I missed the second part of the question. YES, I say you definitely want the tests that exercise the NHibernate mappings or any data access in the CCNet builds. Separating the unit (fast) tests from the integration (slow) tests is strictly a matter of convenience. I’ll run the unit tests all the time for refactoring, and the integration tests usually just as part of the local or CCNet build.

  • jmiller

    Liang,

    We don’t do anything very fancy. For a brand new class/table/mapping, I follow this basic set of steps in an NUnit TestFixture:

    1.) Wipe the table data
    2.) Create an instance of the domain class and fill every property
    3.) Persist “instance1” with the NHibernate service class
    4.) Create a new instance of the NHibernate service class
    5.) Find “instance2” using the primary key from “instance1”
    6.) Verify that the object references are not the same
    7.) Compare each and every persisted property

    It gets a bit more complicated for testing relationships and cascading deletes, but it’s more or less the same strategy.
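
    In rough code form it looks something like the sketch below. Customer is a made-up entity, and I'm using a raw NHibernate ISession where we would actually use our NHibernate service class, so treat the names as illustrative only:

        using NHibernate;
        using NHibernate.Cfg;
        using NUnit.Framework;

        [TestFixture]
        public class CustomerPersistenceTester
        {
            private ISessionFactory _sessionFactory;

            [TestFixtureSetUp]
            public void SetUpFixture()
            {
                // Assumes the NHibernate configuration (connection string, mappings)
                // lives in the test project's config file
                _sessionFactory = new Configuration().Configure().BuildSessionFactory();
            }

            [Test]
            public void Persist_and_reload_a_customer()
            {
                // 1.) Wipe the table data
                using (ISession session = _sessionFactory.OpenSession())
                {
                    session.Delete("from Customer");
                    session.Flush();
                }

                // 2.) Create an instance of the domain class and fill every property
                Customer instance1 = new Customer();
                instance1.Name = "Acme";
                instance1.City = "Austin";

                // 3.) Persist "instance1"
                object id;
                using (ISession session = _sessionFactory.OpenSession())
                {
                    id = session.Save(instance1);
                    session.Flush();
                }

                // 4.) and 5.) Use a brand new session to find "instance2" by its primary key
                Customer instance2;
                using (ISession session = _sessionFactory.OpenSession())
                {
                    instance2 = session.Get<Customer>(id);
                }

                // 6.) Verify that the object references are not the same
                Assert.AreNotSame(instance1, instance2);

                // 7.) Compare each and every persisted property
                Assert.AreEqual(instance1.Name, instance2.Name);
                Assert.AreEqual(instance1.City, instance2.City);
            }
        }

        // Made-up domain class for the example
        public class Customer
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual string City { get; set; }
        }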

  • Liang

    “If a class’s only raison d’etre is to do data access, test it against the database! ” I got it, Thanks, Jeremy!!

  • Liang

    I still have a question about whether we really need to do “unit” tests on DAO objects. As Jeremy defined it, a “unit” test is a behavior test, so any test touching the database should be a “state” test. You can call it a regression test, or whatever. The thing is whether we need to include the “DAO” tests in the auto build or under CC.NET. I know Jeremy is using NHibernate. Could I ask how you test your HQL and how you test the mapping?

  • jmiller

    Eric,

    I don’t disagree with Michael Feathers at all; I just worry about people taking the literal interpretation a bit too far.

    And yes, I agree with you that 100% *unit* test coverage isn’t pragmatic in most cases. 100% test coverage might be doable — assuming you’re willing to pay for it.

  • Eric Nicholson

    I agree with everything you said 100%.

    The one thing that I would mention is that you’ve basically restated Feathers’ definition of a unit test…

    The Glossary in “Working Effectively with Legacy Code” says:
    unit test: A test that runs in less than 1/10th of a second and is small enough to help you localize problems when it fails.

    What you seem to be arguing with is the dogmatic idea of 100% unit test coverage. That’s a very expensive & naive proposition. One that you’ve clearly exposed. But Feathers never suggested that…

  • jmiller

    Carlton,

    I agree that DAO tests are integration tests, and I got a bit incoherent here. But I’d rather drive the behavior of a DAO against the database and not even write any disconnected tests.

  • Carlton

    I tend to agree with you that DAO classes should be tested against a database. That said, these types of tests tend to be brittle and require a lot of set-up or teardown (I should know; I’ve written a lot of them). They really are integration tests, not unit tests.

  • http://haacked.com/ Haacked

    You got my vote!