Non-trivial, real-world lessons on writing unit tests

I began writing tests around 8 years ago and I must say this revolutionized my way of developing code. In this blog post, I’d like to group and share several non-trivial lessons gained over all these years of practicing real-world testing.

Test Organization

There must be a one-to-one correspondence between tested classes and test fixture classes. Very simple classes, like pure immutable entities (i.e. simple constructors, simple getters, and readonly backing fields), can instead be tested indirectly, through the test fixture classes of the more complex classes that use them.

The one-to-one correspondence stated above makes the following operations easy and straightforward:

  • writing a new test suite for a newly created class,
  • writing new tests for an existing class,
  • refactoring tests when refactoring classes,
  • identifying which tests to run to exercise a given class.
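As an illustration, the correspondence could look like the following sketch (written with Python's unittest for brevity, though the post's context is NUnit/C#; PriceCalculator and its members are hypothetical names):

```python
import unittest

# Hypothetical tested class: a simple price calculator.
class PriceCalculator:
    def __init__(self, tax_rate):
        if not (0 <= tax_rate < 1):
            raise ValueError("tax_rate must be in [0, 1)")
        self._tax_rate = tax_rate

    def total(self, net_price):
        return net_price * (1 + self._tax_rate)

# One test fixture class per tested class: PriceCalculator -> PriceCalculatorTests.
class PriceCalculatorTests(unittest.TestCase):
    def test_total_applies_tax_rate(self):
        calc = PriceCalculator(tax_rate=0.2)
        self.assertAlmostEqual(calc.total(100.0), 120.0)

    def test_invalid_tax_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            PriceCalculator(tax_rate=1.5)
```

With this naming convention, finding the fixture for a class (and vice versa) is a pure mechanical lookup.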

Tests must be written in dedicated test assemblies, because product assemblies must not be weighed down by test code. Hence tests and tested code live in different VS projects. These projects can live in the same VS solution, or in different VS solutions.

Having the tested-code VS projects and the tests VS projects living in the same VS solution is useful to navigate easily between tested code and tests.

If tested code and tests live in different VS solutions, a global VS solution containing both the code and the tests VS projects is still needed. Indeed, such a global VS solution makes it possible to use refactoring tools on tested code and tests at the same time.

Test-First and specifications

I don’t abide by the test-first theory. Tests concerning a code portion should be written at the same time as the code portion itself. The reason I advocate doing so is that not all edge cases can be faced up-front, even by a domain expert or a professional tester. Edge cases are discovered only when thoroughly implementing an algorithm.

Test-first is still useful when tests can be written up-front by a domain expert or a client, with dedicated tooling such as FitNesse. In these conditions test-first is a great communication medium that can remove many pesky misunderstandings.

However, only the code itself is the complete specification. Tests can be seen as a specification scaffold that is removed at runtime. Tests can also be seen as a secondary, less-detailed specification, useful to ensure that the main specification, i.e. the code itself, respects a range of requirements materialized by test assertions.

Test and Code Contracts

Calls from tests to tested code shouldn’t violate the tested code’s contracts. When a contract is violated while running tests, it means that the tested code is buggy or (non-exclusively) that the test is buggy.

Assertions in tests are not much different from code contract assertions. The only real difference is that assertions in tests are not checked at production time. But both kinds of assertions are there for correctness, to buzz as soon as something goes wrong when running the code.
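The parallel between the two kinds of assertions can be sketched like this (a hypothetical Python example; in .NET the contract assertion would typically be a Debug.Assert or a Code Contracts precondition):

```python
def average(values):
    # Contract assertion: checked whenever the code runs, including during
    # tests (and stripped from optimized production builds via `python -O`).
    assert len(values) > 0, "contract violated: values must be non-empty"
    return sum(values) / len(values)

def test_average():
    # Test assertion: executed only when the test suite runs.
    assert average([2, 4, 6]) == 4

test_average()
```

Running the test exercises both assertions at once: the test assertion checks the result, while the contract assertion checks the input on the way in.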

Test Coverage

The number of tests is a completely meaningless metric. Dozens of [TestCase] can be written in minutes, while a tricky integration test can take days to write. When it comes to tests, the significant metrics are:

  • percentage of code covered by tests and
  • number of different assertions executed.
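A quick sketch of why the raw number of test cases is cheap to inflate (Python's unittest with subTest standing in for NUnit's [TestCase]; clamp is a hypothetical function):

```python
import unittest

def clamp(x, lo, hi):
    return max(lo, min(x, hi))

class ClampTests(unittest.TestCase):
    # One parameterized test, many cases: the case *count* balloons in
    # seconds, while coverage and assertions barely move.
    def test_clamp_cases(self):
        cases = [(5, 0, 10, 5), (-3, 0, 10, 0), (42, 0, 10, 10)]
        for x, lo, hi, expected in cases:
            with self.subTest(x=x):
                self.assertEqual(clamp(x, lo, hi), expected)
```

Three cases or three hundred, this remains essentially one test; counting cases says nothing about what is actually verified.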

Not all classes need to be 100% covered by tests, or even covered at all. Typically UI code is decoupled from logic, because UI code is hard to test properly despite the progress in this domain. Notice that UI code decoupling, which is a good design practice, is naturally enforced by tests.

When testing a class, covering it 100% is a necessary goal. 90% coverage is not enough, because empirically we observe that the 10% of non-tested code contains most of the unidentified bugs.

If covering 10% of the code of a class takes as much effort as covering the other 90%, it means that this 10% of code needs to be redesigned to be easily testable. Hence the 100% coverage practice is costly only when it forces the code to become well designed.

Having zero dead code is a natural consequence of having code 100% covered.

It is a good practice to tag a 100%-covered class with an arbitrary [FullCovered] attribute, because it documents for the future developers in charge of a potential refactoring that the class must remain 100% covered.
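Such a tag carries no runtime behavior; it is pure documentation. A minimal sketch of the idea in Python (in C# it would be an empty subclass of System.Attribute):

```python
# Hypothetical no-op marker, the moral equivalent of an arbitrary
# [FullCovered] C# attribute: it changes nothing at runtime and only
# documents that the class must stay 100% covered.
def full_covered(cls):
    cls.__full_covered__ = True
    return cls

@full_covered
class Invoice:
    def __init__(self, amount):
        self.amount = amount
```

Because the marker is inert, adopting it costs nothing, while tooling or a code review checklist can later grep for it.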

Code that is 100% covered by tests and contains all relevant contracts is very hard to break. This is the magic recipe for getting very close to bug-free code.

Personally I consider that the job is done only when all classes that can be covered by tests are indeed 100% covered by tests.

Test Execution

The concrete difference between unit tests and integration tests is that unit tests are in-process only: they don’t require touching a DB, the network, or the file system. The distinction is useful because unit tests are usually one or several orders of magnitude faster to run.

Tests must be run from the build machine to validate any code before it is committed.

Tests should be fast to run, a few minutes maximum, because tests must also be run often on the developer machine, to detect regressions as soon as possible.

Having a one-to-one correspondence between tested classes and test classes, plus 100% coverage, makes it easy to identify which tests to run for a particular piece of code.

Test and Refactoring

The fact that the last released version of the product is working fine in production should stress that bugs are more likely to appear in newly developed code and refactored code. Hence tests must be written in priority for newly developed code and refactored code.

When some refactored code is not properly tested, it is a good practice to write the tests that should have been written in the first place. However, this practice of testing refactored code is not mandatory, because it can be pretty time-consuming.

Code that is 100% covered by tests and contains all relevant contracts can be refactored at will. In these conditions, there is very little chance of missing a behavior regression.

Test and Design

Code that is hard to cover with tests is bug-prone and needs to be redesigned to be easily coverable by simple tests. This situation often results in creating one or several classes dedicated to handling the logic formerly implemented by the hard-to-cover code.

The previous point can be restated this way: writing a complete test suite naturally leads to good design, enforcing low coupling and high cohesion.

Singletons should be prohibited because they make the code hard to test. It is harder to recycle and re-initialize a singleton object for each test than it is to create a fresh object for each test.
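The extra ceremony a singleton forces on tests can be sketched as follows (hypothetical Python names; the same pain exists with a static Instance property in C#):

```python
# A classic singleton: state leaks across tests unless it is
# explicitly reset between them.
class ConfigSingleton:
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.settings = {}

    @classmethod
    def _reset_for_tests(cls):
        # Test-only hook the design forces on us.
        cls._instance = None

# The testable alternative: a plain object, created fresh per test,
# with no reset hook and no shared state.
class Config:
    def __init__(self):
        self.settings = {}
```

With the singleton, every test fixture must remember to call the reset hook; with the plain object, isolation comes for free from `Config()` in each test.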

Mocking should be used to transform integration tests into unit tests, by abstracting tests away from out-of-process concerns like DB access, network access, or file-system access. In these conditions mocking helps separate concerns in the code and hence increases its design value.

Mocking is also useful to isolate code completely uncoverable by tests, like calls to MessageBox.Show().
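A sketch of the idea (hypothetical Python names): in production the notifier would pop up a real dialog, the MessageBox.Show() of the post, while the test injects a recording fake, so the branch that calls it becomes coverable:

```python
# Fake notifier used by tests: records messages instead of showing a dialog.
class RecordingNotifier:
    def __init__(self):
        self.messages = []

    def show(self, message):
        self.messages.append(message)

# Tested logic: depends on the notifier abstraction, not on the UI itself.
def process_order(amount, notifier):
    if amount <= 0:
        notifier.show("Invalid amount")
        return False
    return True

# In a test, the otherwise uncoverable UI call is replaced by the fake:
fake = RecordingNotifier()
assert process_order(-5, fake) is False
assert fake.messages == ["Invalid amount"]
```

The error branch is now fully exercised in-process, and the test can even assert on what would have been shown to the user.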

Test Tooling

Personally I am happy with:

Comments
  • Vince Bullinger

     Just randomly saying stuff doesn’t make it so. Why do you believe in number one? The author put out good reasons as to why you’re wrong and you basically just said “Nuh uh!”

    He never said or even implied what you spoke against in number two. In fact, you two seem to be in agreement.

  • Tomek Kaczanowski

    Interesting, but:
    1) test first is the right way to go,
    2) believing that 100% coverage means high quality tests and high quality code is a fallacy

  • psmacchia

    You can first write tests, then code that is de facto testable! I don’t call this way of doing things test-first, since here I am talking about an iterative process where you write a few tests, then write the few corresponding lines of code and methods, and repeat until the features and classes are completely implemented and 100% covered by tests.

    With experience one can anticipate how to design the code to be testable. Personally I often write the code first, then the associated tests, then repeat.

    Calling this iterative process test-first or code-first is just a philosophical chicken-and-egg problem. Once again the key is experience, which makes the design of the code and of the tests two intertwined sides of a single problem.

  • Christia Bjerre

    Thanks for a short and useful test practice guide. Keeping in line with what you write here will help make better software faster and with higher confidence.

    The challenge, as usual, is: how do we make sure we don’t write code that is hard to test?

  • psmacchia

    Not yet but the video looks pretty awesome :)

  • Rodney Richardson

    If you require the same initialization/setup for different test fixtures, consider inheritance, with the setup on a base class. This still allows the freedom of one fixture per class, plus shared setup code.

  • garethevans1986

    Have you tried – Watch the video on the site.

  • psmacchia

    I would say that 100% coverage means pretty much 100% correctness, if (and only if) the tested code is stuffed with contract assertions. Indeed, in such conditions a statement covered by tests fulfills its unit action and all contextual data values are in well-defined ranges, checked by contract assertions. The touch count is not really relevant. For example, if I know that a piece of code works fine for all positive integers, I don’t need to test all 2 billion positive values to increase the touch count. Testing the 4 values 1, 10, 100,000, and int.MaxValue should be enough.

    To digress a bit, 100% coverage is not actually 100% coverage, because of the way .NET code coverage tools work. Language constructs such as the ternary operator, inner method calls, or LINQ queries are considered as one statement (i.e. one monolithic breakpoint) by code coverage tools. For example, if just the false expression of a ternary operator is covered, the true expression is seen as covered as well. Fortunately, NCover is smarter on that front and proposes the branch-coverage metric, which is more fine-grained than basic code coverage and can handle this limitation of monolithic breakpoints.

  • Steve Py

    One point I would add is that sometimes 100% coverage isn’t 100%. With code that may have several combinations (Generally a candidate for re-factoring and simplification, but we’re talking about real-world systems) you cannot merely trust a 100% coverage stat without considering the “touch count” for how many times each line/statement was touched by tests. 100% merely means each line was touched _at least once_. Complex methods with touch-counts of 1 should scream of possible bugs and it would be a dangerous assumption that they’re safe just because they read 100% coverage.

  • psmacchia

    Of course, if you estimate that you need more than one test fixture class, you are free to create several. In general, a class that needs many test fixture classes might violate the Single Responsibility Principle.

    Concerning the situation you described with multiple setups, it could be preferable to keep all tests in the same test fixture class, with each test responsible for calling the right init. But if you are talking about dozens of tests, having several test fixture classes makes sense, just to group tests appropriately.

  • Bill Sorensen

    “There must be a one-to-one correspondence between classes tested and test fixture classes.”

    How do you deal with cases where you need multiple fixtures? Tests 1, 2, and 3 against system-under-test class A need one complex setup, while tests 4, 5, and 6 against class A need another. This would seem to indicate that two test fixtures are appropriate, at least under NUnit.

  • Patrick Smacchia

    For a real-world complex algorithm, it is not humanly possible to anticipate all the edge cases that will need to be tested. The options are twofold:

    A) you write tests first, you code, you write more tests to cover edge cases discovered while coding, you code, you write more tests… This is what I am doing, and I don’t call it test-first. I call it “write code and tests at the same time”.

    B) you write tests first, you code, and then you don’t write tests to cover the edge cases discovered while coding. This leads to less than 100% coverage, and I’ve explained why that is a bad practice.

    For any sufficiently complex piece of code, there is no such thing as a case C) where you write tests first, you code, and everything works fine with 100% coverage.

    To summarize, as Jim Cooper wrote below, test-first is an iterative process. Tests necessarily need to be refined iteratively while the code is developed. I guess we are all doing it the same way, but I prefer to call it “write code and tests at the same time” rather than “test-first”.

    >Testability is not about writing tests but about designing software that is easy to test.

    100% agree. I stated this point through the remark (in bold): If to cover 10% of the code of a class it takes as much effort as covering the other 90%, it means that this 10% of code needs to be redesigned to be easily testable.

  • Anonymous

    This kind of test is called an integration test because it does not touch only your code base. If a test exercises an object’s dependency then it is an integration test too. By the way, you can write and run integration tests using NUnit or any other xUnit framework.

  • Dennis Kozora

    “Code that is hard to be covered by tests is bug-prone and need to be redesigned to be easily coverable by easy tests” … which is exactly why you should start test-first. You will avoid those mistakes. Test-first encourages you to design for testability (easy to test). It’s not about a “test”. A test is an automated specification that validates the “assumptions” we used for our design.

    Misko Hevery has talked extensively about “testability”. Testability is not about writing tests but about designing software that is easy to test.

  • psmacchia

    As explained, the problem I have with test-first is that not all edge cases can be faced up-front. While I agree that test-first is a good communication medium from domain expert to developer, it should be used only in such situations.

    Moreover, when I read the two statements “you don’t write ALL the tests first” and “It’s an iterative process”, it sounds like you switch back and forth from test-first to code development to test rewriting…, or in other words you are developing tests at the same time as you are developing the tested code.

    >Secondly, you don’t always have to decouple tests from the file system

    Sure, I just wrote that if your tests are accessing the file system, they must be flagged as integration tests.

  • Jim Cooper

    I agree with most of that, with a couple of exceptions.

    Test first (or TDD) is definitely a better way to write and design code than writing tests and code at the same time. However, you don’t write ALL the tests first – you do one at a time (and I usually check in whenever everything goes green). It’s an iterative process.

    However, you may need some framework code in place on a brand new project before you can write an integration test, depending on the project, tools, etc. But you should always use TDD for unit tests.

    Secondly, you don’t always have to decouple tests from the file system. Some people go to insane lengths to do that when you can get away with just using a test area. It depends on the situation.


    It’s an endless debate, but SQL queries are code, right? So why should we exclude them from our unit tests? Unit testing is about testing an isolated unit of your software; because we can’t mock (AFAIK) a SQL server, our SQL queries are a unit. Refactoring often includes refactoring the SQL queries, so it seems important to me to include them in your test suite (even if there are problems like deleting the test data).

  • Anonymous

    +1 for the Testdriven.Net addiction