It's just an opinion, but I think the hardcore Feathers definition of a "Unit Test" needs to be restated, or at least not applied so literally. I used to get a little annoyed with a former colleague who was pedantic about his definition of a unit test: "Oh no, that test uses two classes, it's an integration test, so we should put it over here." Let's worry more about the goal of writing a test and less about the mechanics of the test.
I typically like to set up one assembly for unit tests and another for integration tests. The point of the separation isn't really unit versus integration so much as fast tests versus slow tests. I don't care that much if more than one concrete class is involved in a single "unit" test (with some provisos listed below).
The important thing about TDD/BDD is to use a test to specify the desired behavior of a piece of code. I see people hurting themselves by putting too much emphasis on the letter of the "unit test" law and not enough on its spirit. The goal isn't to have a disconnected "unit test" for each piece of code; it's to have a test that specifies the behavior of a piece of code.

To give some specific examples, I hate to see people try to unit test data access code by mocking ADO.Net. Another example that bugs me is people obsessing over mocking the file system. By all means, put a thin wrapper around that stuff to isolate the data access from the rest of the code, but drive the construction of code that only interacts with the database by interacting with the database. If a class's only raison d'être is data access, test it against the database! Writing a "unit test" with mocked-out ADO.Net calls doesn't add much value and sucks down time.
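To make the "thin wrapper" idea concrete, here's a minimal sketch in Python (the class and method names are my own invention, not from any real library): the wrapper's only job is touching the file system, the rest of the code depends on the wrapper, and the wrapper itself gets tested against a real, temporary directory rather than a mock.

```python
import tempfile
from pathlib import Path


class FileStore:
    """Thin wrapper whose only reason to exist is file access --
    so test it against the real file system, not a mock."""

    def __init__(self, root: Path):
        self.root = root

    def save(self, name: str, text: str) -> None:
        # Writes the text to a file under the root directory
        (self.root / name).write_text(text)

    def load(self, name: str) -> str:
        # Reads the file back as a string
        return (self.root / name).read_text()


# The test exercises the wrapper against an actual (temporary)
# directory instead of mocking the file system calls.
with tempfile.TemporaryDirectory() as tmp:
    store = FileStore(Path(tmp))
    store.save("greeting.txt", "hello")
    assert store.load("greeting.txt") == "hello"
```

The test stays fast enough for a developer run because it uses a throwaway temp directory, but it still proves the one thing the class is for.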
Here's my alternative version of the Feathers list:
A Unit Test:
- Is easy to set up, control, and verify, i.e., there are very few variables in the test equation
- Specifies the behavior of a minute fraction of the code
- Tells me exactly what is wrong: "This, this code right here, is wrong." If a test just tells you that something somewhere is wrong, it's not a unit test, it's an integration test. In other words, if I need a debugger to fix a broken test, it's most likely an integration test.
Having a little cluster of two or more concrete classes in a unit test doesn't bother me, as long as the test meets these requirements. The real categorization of tests for builds and developer runs is strictly a fast versus slow split. By that definition, almost any test that calls outside a single AppDomain is a slow test. Any test that requires some kind of environment setup, or involves an intensive calculation, gets shoved into the "slow" integration test assembly.
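The fast/slow split can be sketched without any framework at all. Here's a hypothetical Python illustration (the `slow` decorator and `run` helper are made up for this example, playing the role that separate assemblies play in .NET): slow tests get tagged, and the developer loop simply skips them while the full build runs everything.

```python
def slow(fn):
    # Tag a test as "slow" -- it leaves the process boundary
    # or needs environment setup, so it belongs in the slow suite.
    fn.is_slow = True
    return fn


def test_discount_math():
    # Fast: pure in-process logic, no environment needed
    assert 100 - 100 * 0.1 == 90.0


@slow
def test_repository_against_db():
    # Slow: would talk to a real database in a full build
    pass


def run(tests, include_slow=False):
    # Run the suite, skipping slow tests unless asked for them
    ran = []
    for t in tests:
        if getattr(t, "is_slow", False) and not include_slow:
            continue
        t()
        ran.append(t.__name__)
    return ran


# Developer loop: only the fast tests
fast_run = run([test_discount_math, test_repository_against_db])
```

In practice you'd use your test runner's own mechanism for this (categories in NUnit, markers in pytest); the point is that the split is by speed, not by how many classes the test touches.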
On a more general note, any automated test that is hard to diagnose when it fails isn't very valuable.