Week 1 Questions: How do you organize your NUnit test code?

I can see more of these little posts about setting up a project coming up, so I made a new category called "Starting a new Project."


My team discussed how we were going to organize the NUnit TestFixture code.  We talked through the normal options and pretty well decided to go down a very generic path.

The first decision was how to group unit tests into TestFixture classes.  I brought up two alternatives:

  • TestFixture class per concrete class (by far and away the most common approach)
  • TestFixture class per user story

I voted for grouping the tests by the class that they test.  Another developer liked grouping by user story.  We ended up adopting the TestFixture class per concrete class approach.  I've done the grouping by story or feature before in StructureMap and felt it worked just fine, but it might just be a sign that I had a Shotgun Surgery code smell going on.  For example, when I retrofitted "Setter Injection" into StructureMap I created a separate TestFixture class that tested the relevant changes across about a half-dozen classes.  I kind of liked being able to write all the new unit tests in a single file rather than doing so much switching back and forth to write a single unit test in several different files.  As I recall, it was almost purely additive coding rather than changing existing code.  The drawback with this approach is that the unit tests for a single class *are* scattered across story test fixtures.  When I'm changing an existing class with new behavior I want to be able to run all of the unit tests for just that class (a simple right click and "Run Tests" with TestDriven.Net) as I code for more targeted testing feedback.  I don't have a hard opinion about this issue, but I feel like the per concrete class approach is the choice by commonality if nothing else.
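As a rough sketch of the per concrete class convention, here is what a pairing might look like (the class names and the `Tester` naming convention here are illustrative, not from any particular project):

```csharp
using NUnit.Framework;

// A hypothetical production class under test.
public class OrderCalculator
{
    public decimal Total(decimal subtotal, decimal taxRate)
    {
        return subtotal * (1 + taxRate);
    }
}

// Convention: one TestFixture per concrete class, named <ClassName>Tester,
// so a tool like ReSharper's CTRL-N jumps straight from the class to its tests,
// and a right click / "Run Tests" exercises everything for just that class.
[TestFixture]
public class OrderCalculatorTester
{
    [Test]
    public void Total_applies_the_tax_rate()
    {
        var calculator = new OrderCalculator();
        Assert.AreEqual(110m, calculator.Total(100m, 0.10m));
    }
}
```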

The second decision is where do the TestFixture classes themselves go.  I know of a couple alternatives:

  • Put the TestFixture classes inside the production assemblies.  In this case the team will adopt a naming convention to put the TestFixture classes in child namespaces, often within each production namespace.  When the team is making a production build the unit test code will generally be filtered out so that the unit test code does not ship.  The obvious advantage is more resiliency to member visibility issues for teams that are aggressive about making types and methods "internal."  Some people also like being able to always find the unit test code directly under the production code.  Personally, I don't like this approach.  Filtering out the test code is extra overhead to maintain in the build automation scripts.  I think it makes the production code a bit harder to look at in Solution Explorer because there's so much more noise going on.  I haven't really seen that many problems with member visibility by putting the test code into separate assemblies, even without the InternalsVisibleTo attribute trickery.  You simply make anything that needs to be tested public and away you go.  As far as being able to find the relevant TestFixture class for each class, as long as you follow a consistent naming convention you can quickly find the TestFixture with something like ReSharper's CTRL-N shortcut.
  • Put the TestFixture code into a separate assembly for each production assembly.  I don't really know why people like this pattern, unless it goes back to being able to find the TestFixture easier through only the physical file structure.  I don't like this approach for a couple reasons.  The first, and most immediate, is that for big changes I often want to run all of the unit tests at one time.  Separating things out into multiple test assemblies just adds to the mouse clicks and build automation overhead.  Yeah, you can always run all of the tests for a solution, but as I'll cover in the third bullet, that's not necessarily what you want to do on a regular basis.  The second reason isn't quite as obvious, but it's a very real concern.  At my previous job we found that the compile time for a big solution seems to be more dependent upon the number of projects than the total amount of code.  At one point we inherited a code tree that had 55-60 projects and took upwards of 4 minutes to compile.  We brought that compile time to under 30 seconds just by combining fine grained assemblies into bigger assemblies based on deployment.  Part of that effort was combining about 15 test assemblies into only 2-3 assemblies.  A slow compile time is deadly for productivity because it slows down the feedback cycle of virtually all coding activities.
  • Lastly, and this is my preferred approach, create two different assemblies for testing.  One assembly holds the unit tests (fast tests) and the other has the integrated tests (slow tests).  Generally, you parallel the namespace structure of the production assemblies.  The benefits to this strategy are being able to run all of the unit tests at one time, faster compile times, and less overhead in maintaining the build scripts.  An additional issue is that the two types of tests may have different configuration requirements.  The integrated tests will definitely need configuration for external resources like database connections and web service urls.  The unit tests shouldn't need any of that stuff, but may require a completely stubbed out version of the configuration files (I'm thinking about using an IoC tool here).
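A minimal sketch of what that preferred layout can look like in code (the assembly and namespace names here are made up for illustration):

```csharp
using NUnit.Framework;

// Production assembly, e.g. MyApp.Core.dll
namespace MyApp.Core.Orders
{
    public class OrderProcessor
    {
        public bool CanProcess(decimal amount) { return amount > 0m; }
    }
}

// Unit test assembly (fast tests), e.g. MyApp.Testing.dll.
// It parallels the production namespace structure under its own root,
// so every unit test can be run at one time from a single assembly.
namespace MyApp.Testing.Orders
{
    using MyApp.Core.Orders;

    [TestFixture]
    public class OrderProcessorTester
    {
        [Test]
        public void Rejects_non_positive_amounts()
        {
            Assert.IsFalse(new OrderProcessor().CanProcess(0m));
        }
    }
}

// A second assembly, e.g. MyApp.IntegrationTesting.dll, would hold the
// slow tests that need real databases, web service urls, and the like.
```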

Anybody got a different approach?  I'm all ears.

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
This entry was posted in Continuous Integration, Starting a new Project, Test Driven Development.
  • Jeff

    After a bit of experimentation I’m considering placing tests inside the assemblies themselves, mainly to reduce the number of extra assemblies needed in a solution with over 30 projects already, and also because some of the libraries are reused within other solutions/products, so it makes it much easier for the tests to follow the assembly itself around without any extra effort. The only other way to do this would be to create a test assembly per assembly being tested, which in our case would mean 60+ projects in our solution. After my testing with Visual Studio and MSBuild, it appears that adding extra projects/assemblies to a solution is one of the main factors in decreasing build performance, so I see the option of putting tests within the assemblies as really the most straightforward approach that will scale.

  • http://jamespeckham.com James Peckham

    @Ranjan Sakalley
    we have 2 solutions with numerous assemblies with 2 test assemblies (1 per solution) and we’re doing NCover just fine using the XmlPublish task in CC.net. It merges it together for us automagically; I just add both ncover-results.xml files to the publish.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    Sure, but why is that important in your own internal code? Coding libraries for someone else isn’t the most common activity. Even then, is that really that important? The other side of the equation is getting screwed over as the consumer of a library because something valuable that you want to use independently is scoped as internal.

  • http://www.commongenius.com David Nelson

    Since when is scoping about paranoia? Scoping is about controlling what users can see, not merely for the purposes of protecting the internals, but also for protecting the user! By controlling what users can take dependencies on, you control the extensibility of your library; otherwise your ability to refactor is extremely limited. You also guide users to the natural entry points into your library that it was designed for them to use, rather than making them dig through everything you have exposed trying to find something that is relevant to their situation.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    I just do the InternalsVisibleTo trick. I very seldom worry about internal scoping in my systems. Outside of StructureMap’s externally facing API, I just make everything public.

    I’m in the school of thought that says paranoid scoping adds no value, so I don’t really care about having things public.
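    For reference, the trick Jeremy mentions is a one-line assembly-level attribute in the production assembly; the test assembly name below is just an example:

    ```csharp
    // In the production assembly (e.g. in AssemblyInfo.cs):
    // exposes internal types and members to the named test assembly.
    [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyApp.Testing")]
    ```

    If the production assembly is strong-named, the attribute argument must also include the test assembly's public key.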

  • http://www.commongenius.com David Nelson


    I would rather keep my tests in a separate assembly. However, in many cases I have classes that are internal to my assembly, but which have behavior that I would like to test. Of course, I could test its behavior through the classes which ARE exposed and which use the internal class, but the behavior of the internal class is significant enough that I would prefer to test it independently. This is in the spirit of using unit tests to ensure that what I am writing is doing what I think it should as I write it. I don’t see any other way to do that except to put the tests in the same assembly, or use InternalsVisibleTo. Based on your stated preference, can I assume that you never try to unit test internal types? In your opinion, should I trust that testing the externally exposed types is enough?

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    The people and projects that I’ve seen do this all did the production builds with the NAnt &lt;csc&gt; task. In that task you just set a filter that excludes files or folders matching a string pattern, i.e. don’t include folders with the word “Test” in their name.

    I’m sure MSBuild has something similar.
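    A hedged sketch of what that filter might look like in a NAnt buildfile (the paths, output name, and exclude pattern here are hypothetical; check the NAnt &lt;csc&gt; task documentation for your version):

    ```xml
    <!-- Production build target: compile everything except test code -->
    <csc target="library" output="bin/MyApp.Core.dll" debug="false">
      <sources>
        <include name="src/**/*.cs" />
        <!-- exclude any folder with "Test" in its name -->
        <exclude name="src/**/*Test*/**/*.cs" />
      </sources>
    </csc>
    ```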

  • http://no Qasim

    I need an example of NUnit testing with a complete description. Also, is it possible that if we change the parameters of a main function, then when we test through NUnit all the sub-functions’ parameters are changed?
    I need this.

  • http://dima.sadakov.com cDima

    In the first approach of putting the TestFixture classes inside the production assemblies, I love the idea of being able to always find the unit test code directly under the production code.

    But… How can I filter out the test code via the build automation scripts? If the two classes are in one file, then there needs to be a reference to the NUnit dll for the TestFixtures to be successfully compiled, yet I don’t want to ship out the NUnit dll with the production code.

  • Corey G

    I’ve always gone for 1 assembly with all test classes in that 1 assembly. All the unit tests are in 1 spot, easy to test, and separate from any production code.

    Assembly (Class Library): Porsche.Carrera.Targa.NUnit.dll

    class Porsche.Carrera.Targa.NUnit.TestHelper { //Had created this once when I had to test some crazy WebService created by the fellow before me with a lot of initialization }
    class Porsche.Carrera.Targa.NUnit.Acceleration {}
    class Porsche.Carrera.Targa.NUnit.Handling {}
    class Porsche.Carrera.Targa.NUnit.Braking {}

  • Andrew Macodnald

    I tend to have multiple projects for different tiers/architectural patterns (such as separated interface for remoting) and then just have one project for my nUnit tests.

    I tend to just have unit tests that drive the implementation and then rely on FitNesse for integration testing (as FitNesse is testing what the customer is supposed to experience – don’t duplicate test material).

    KISS – Keep it simple stupid 😉

    What do others think about my method of testing?

  • http://weblogs.asp.net/bsimser Bil Simser


    I organize my tests based on layers in my application. I have a unit test project for the UI, one for the domain, and another for the DAL. Inside each project I separate out test fixtures by folders (basically mimicking the hierarchy in my domain project(s)) and create a test fixture for each concrete class.

    The test assemblies have a .Test suffix to them so that I can ignore them in things like code coverage. NCover lets me ignore any namespaces or assembly names via wildcards, so it lets me focus on the code I want to cover.

    Works great.

  • http://www.kiwidude.com/blog/ kiwidude

    To merge NCover reports – just use NCoverExplorer.Console as part of your build script. There are NAnt and MSBuild tasks for it or else you can use the command line. You can also apply coverage exclusions at the same time to trim down the resulting report if you want.

  • http://www.lukemelia.com/devblog/ Luke Melia

    We create a separate assembly for tests with a .Test suffix in the assembly name and namespace, and use InternalsVisibleTo as you mentioned. We label unit tests (I like calling them microtests now) and integration tests by using MbUnit’s FixtureCategory attribute. We can then run whichever we want from the console, and we employed the TD.NET add-in architecture to make it run unit tests only unless explicitly requested to run an integration test. I documented this approach here.

  • http://codebetter.com/blogs/jeremy.miller jmiller

    That sounds like a fine compromise to me.

  • http://haacked.com/ Haacked

    Seems like that NCover issue would be a showstopper for two assemblies. Why not use two top-level namespaces in a single test assembly?

    namespace Test.Integration
    namespace Test.Unit

    That way you can run just the one namespace (MbUnit and TD.NET both allow running tests for a given namespace; I can’t remember if NUnit does).

  • http://codebetter.com/blogs/jeremy.miller jmiller


    I typically like to divide assemblies up by deployment. You might have an assembly for the client and another for the server side in the same solution.

  • http://flimflan.com/blog Joshua Flanagan

    I also prefer the 1 unit test assembly per production assembly. The assembly is the atomic unit of deployment. If you have 2 production assemblies, the assumption is there is some scenario where they are used independently. In that case, you want the unit tests to be able to follow the assembly wherever it goes (use in other projects, etc).
    Why do you have more than 1 assembly in your project, if your unit tests assume all the assemblies will always be used together? Why not just put all your production code in a single assembly and reduce your build times even more?

  • Shawn Hinsey

    I seem to remember that the key to getting a decent NCover report out of two assemblies is to merge the assemblies, not the report.

  • http://codebetter.com/blogs/ranjan.sakalley/ Ranjan Sakalley

    Haacked and Jeremy,

    from my experience, NCover reports don’t work right at all with two assemblies. There is no dependable merge mechanism for two NCover reports.

  • http://www.winterdom.com/weblog/ Tomas Restrepo


    I hear you, though I guess we’ll just agree to disagree :)

    Sure, build time is important, especially for large projects; though structure helps a lot (and splitting things into multiple solutions as well). I’ve handled large projects, using the structure I mentioned, without any problems, and without causing any hassle in maintaining the build. But, at least on VS.NET 2003, this pretty much meant abandoning the IDE’s build system completely. NAnt was pretty flexible in scenarios like this, and there are some pretty good ways you can split your buildfiles so that all is done pretty much automatically for you when you add a new project (and the corresponding unit test projects).

    So there are certainly ways around it :)

  • http://codebetter.com/blogs/jeremy.miller jmiller


    As for CI, I’ve been happily using CC.Net on all of my projects for the last 3-4 years. I’ve always had both the unit test and integration test assemblies running in both the developer and CC.Net build files though. I haven’t hit a project yet that’s gotten big enough where that became a problem. I have moved Fitnesse tests or tests that run through the UI into a staged build for the sake of faster CC builds.

    As for the NCover reports, I don’t know. I’ve either neglected it, or gotten someone else on the team to set it up. I’m pretty sure it can handle a single report with multiple test assemblies.


    Fair enough, but TestDriven.Net will also allow you to run tests by namespace (I can’t really think of anything useful that TestDriven.Net does not do). I usually find myself doing one of two things, either running all of the unit tests for a class, or all of the unit tests period when I’m doing a complex refactoring. My concern about longer build times and more stuff to track in the build file still stands, especially for 50+ project solutions (not that anybody who reads this blog would be crazy enough to do that).

  • http://www.winterdom.com/weblog/ Tomas Restrepo


    Personally, I prefer the approach you dislike, in keeping a test assembly per each production-code assembly. It makes it far easier to keep things organized and clean, and frankly, it makes it easier and faster for *me* to run the tests, contrary to your experience. This is because I indeed rarely need to run the entire set of tests, except when I do a full build (either through CI or by hand, using either NAnt or MSBuild).

    Now, I admit part of this has to do with my use of TestDriven.NET to run the tests, which makes it very easy and fast to run just the tests I like (a specific method, the entire class, the entire assembly), so this combination gives me all the granularity I need, particularly since I prefer the Test Fixture per Class style.

  • http://haacked.com/ Haacked

    I have a question about this approach. Do you use CruiseControl.NET or some other continuous integration software?

    If so, how do you integrate two separate unit test assemblies into one NCover code coverage report? I don’t need details, I’m just wondering if it’s easy or hard to do.

    I pretty much follow what you do for the most part. I’ll blog about it soon because it’s an interesting topic.