I can see more of these little posts about setting up a project coming up, so I made a new category called "Starting a new Project."
My team discussed how we were going to organize the NUnit TestFixture code. We talked through the normal options and pretty well decided to go down a very generic path.
The first decision was how to group unit tests into TestFixture classes. I brought up two alternatives:
- TestFixture class per concrete class (far and away the most common approach)
- TestFixture class per user story
I voted for grouping the tests by the class that they test. Another developer liked grouping by user story. We ended up adopting the TestFixture-class-per-concrete-class approach. I've done the grouping by story or feature before in StructureMap and felt it worked just fine, but it might just be a sign that I had a Shotgun Surgery code smell going on. For example, when I retrofitted "Setter Injection" into StructureMap I created a separate TestFixture class that tested the relevant changes across about a half-dozen classes. I kind of liked being able to write all the new unit tests in a single file rather than doing so much switching back and forth to write a single unit test in several different files. As I recall, it was almost purely additive coding rather than changing existing code.

The drawback with this approach is that the unit tests for a single class *are* scattered across story test fixtures. When I'm changing an existing class with new behavior, I want to be able to run all of the unit tests for just that class (a simple right click and "Run Tests" with TestDriven.Net) as I code, for more targeted testing feedback. I don't have a hard opinion about this issue, but I feel like the per-concrete-class approach is the right choice, by commonality if nothing else.
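To make the per-concrete-class convention concrete, here's a minimal sketch (the class and fixture names are hypothetical, and the "Tester" suffix is just one common naming convention): one TestFixture per production class, named predictably so you can jump to it with something like ReSharper's CTRL-N.

```csharp
using NUnit.Framework;

// Hypothetical production class under test.
public class OrderCalculator
{
    public decimal Total(decimal subtotal, decimal taxRate)
    {
        return subtotal * (1 + taxRate);
    }
}

// One TestFixture per concrete class, named by convention
// (class name + "Tester"), so all unit tests for OrderCalculator
// live in this single fixture and can be run in one shot.
[TestFixture]
public class OrderCalculatorTester
{
    [Test]
    public void Total_adds_tax_to_the_subtotal()
    {
        var calculator = new OrderCalculator();
        Assert.AreEqual(1.10m, calculator.Total(1.00m, 0.10m));
    }
}
```

With this layout, running every unit test for a class is just a right click on its fixture.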
The second decision is where the TestFixture classes themselves go. I know of a couple alternatives:
- Put the TestFixture classes inside the production assemblies. In this case the team will adopt a naming convention to put the TestFixture classes in child namespaces, often within each production namespace. When the team is making a production build the unit test code will generally be filtered out so that the unit test code does not ship. The obvious advantage is more resiliency to member visibility issues for teams that are aggressive about making types and methods "internal." Some people also like being able to always find the unit test code directly under the production code. Personally, I don't like this approach. Filtering out the test code is extra overhead to maintain in the build automation scripts. I think it makes the production code a bit harder to look at in Solution Explorer because there's so much more noise going on. I haven't really seen that many problems with member visibility by putting the test code into separate assemblies, even without the InternalsVisibleToAttribute trickery. You simply make anything that needs to be tested public and away you go. As far as being able to find the relevant TestFixture class for each class, as long as you follow a consistent naming convention you can quickly find the TestFixture with something like ReSharper's CTRL-N shortcut.
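For what it's worth, if a team does keep its tests in a separate assembly but still wants to test internal members, the InternalsVisibleToAttribute route looks like this (the assembly names here are hypothetical):

```csharp
// In the production assembly (e.g. in AssemblyInfo.cs).
// This exposes internal types and members to the named test
// assembly only, as an alternative to making everything public
// just so it can be tested.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyCompany.MyApp.UnitTests")]
```

The test assembly can then exercise internal classes directly while the production API stays locked down for everyone else.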
- Put the TestFixture code into a separate assembly for each production assembly. I don't really know why people like this pattern, unless it goes back to being able to find the TestFixture easier through only the physical file structure. I don't like this approach for a couple reasons. The first, and most immediate, is that for big changes I often want to run all of the unit tests at one time. Separating things out into multiple test assemblies just adds to the mouse clicks and build automation overhead. Yeah, you can always run all of the tests for a solution, but as I'll cover in the third bullet, that's not necessarily what you want to do on a regular basis. The second reason isn't quite as obvious, but it's a very real concern. At my previous job we found that the compile time for a big solution seems to be more dependent on the number of projects than on the total amount of code. At one point we inherited a code tree that had 55-60 projects and took upwards of 4 minutes to compile. We brought that compile time to under 30 seconds just by combining fine-grained assemblies into bigger assemblies based on deployment. Part of that effort was combining about 15 test assemblies into only 2-3 assemblies. A slow compile time is deadly for productivity because it slows down the feedback cycle of virtually all coding activities.
- Lastly, and this is my preferred approach, create two different assemblies for testing. One assembly holds the unit tests (fast tests) and the other has the integrated tests (slow tests). Generally, you parallel the namespace structure of the production assemblies. The benefits to this strategy are being able to run all of the unit tests at one time, faster compile times, and less overhead in maintaining the build scripts. An additional issue is that the two types of tests may have different configuration requirements. The integrated tests will definitely need configuration for external resources like database connections and web service URLs. The unit tests shouldn't need any of that stuff, but may require a completely stubbed-out version of the configuration files (I'm thinking about using an IoC tool here).
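One way the stubbed-out configuration idea could look, sketched under my own assumptions (the interface and class names are made up): hide configuration behind an interface, let the integrated tests use the real implementation that reads the config file, and let the unit tests use a stub that touches no external resources. An IoC tool would pick the right implementation per context.

```csharp
using System.Configuration;

// Hypothetical abstraction over configuration, so unit tests
// never have to read real config files.
public interface IApplicationSettings
{
    string ConnectionString { get; }
}

// Production / integrated-test implementation: reads the real
// config file, so it needs a valid <connectionStrings> entry.
public class AppConfigSettings : IApplicationSettings
{
    public string ConnectionString
    {
        get { return ConfigurationManager.ConnectionStrings["main"].ConnectionString; }
    }
}

// Stub for the unit test (fast test) assembly: no config file,
// no database, no external resources required.
public class StubSettings : IApplicationSettings
{
    public string ConnectionString
    {
        get { return "stubbed connection string"; }
    }
}
```

Classes that need configuration take an IApplicationSettings in their constructor, and the unit test assembly hands them a StubSettings while the integrated test assembly wires up AppConfigSettings.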
Anybody got a different approach? I'm all ears.