Mocks and Tell Don’t Ask

One of our alumni, Karl, recently blogged a request for folks to stop using mocks. Once upon a time I too made clear that I had a significant distrust of mocks. I’ve mellowed on that position over time, so I thought I should explain why I have changed my opinion.

Perhaps it would be useful to give a summary of the discussion around this. Whilst I don’t think you need to choose between being a classicist or a mockist, I think Martin Fowler’s article Mocks Aren’t Stubs is still a good starting point for understanding the debate, because it talks about the different types of Test Double as well as outlining approaches to their usage. For my part, however, Martin does not highlight Meszaros’s key distinction between replacing indirect inputs and monitoring indirect outputs. Both mention the danger of Over-specified Software from mocks, and Meszaros cautions to Use the Front Door. On the other side, the proponents of mocks have always pointed to their admonition to Mock Roles Not Objects {PDF}. Growing Object Oriented Software, Guided by Tests (GooS) is probably the best expression of their technique today.

The key to understanding mocks, for me, was realizing that the motivation behind them, revealed in GooS, is to support the Tell Don’t Ask principle. Tim Mackinnon notes that:

In particular, I had noticed a tendency to add “getter” methods to our objects to facilitate testing. This felt wrong, since it could be seen as violating object-oriented principles, so I was interested in the thoughts of the other members. The conversation was quite lively—mainly centering on the tension between pragmatism in testing and pure object-oriented design.

For me, understanding this goal for mock objects was something of a revelation about how they are meant to be used. Previously I had heard arguments centered on a unit test’s need for isolation as the driver for using mocking frameworks, but now I can see mocks as tools that help us avoid asking an object about its internal state.

Within any test we tend to follow the same pattern: set up the pre-conditions for the test, exercise the code under test, and then check the post-conditions. It’s the latter step that causes the pain for developers working towards a Tell Don’t Ask principle: how do I confirm the post-conditions without asking objects for their state?
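
To make that pattern concrete, here is a sketch (the `Order` class and its members are my illustrative names, not anything from a specific codebase) of a classic state-based test. It is the assert step that forces the getter into the API:

```python
class Order:
    """A hypothetical domain object that grows a getter purely for testing."""

    def __init__(self):
        self._total = 0

    def add_line(self, price, quantity):
        self._total += price * quantity

    @property
    def total(self):
        # Exists mainly so tests can check post-conditions.
        return self._total


def test_adding_a_line_updates_the_total():
    order = Order()                      # arrange: set up pre-conditions
    order.add_line(price=5, quantity=3)  # act: exercise the code under test
    assert order.total == 15             # assert: ask the object for its state


test_adding_a_line_updates_the_total()
```

The test passes, but only because `total` lets the test ask the object about its internal state.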

This is a concern because objects should encapsulate state and expose behavior (whereas data structures have state but no behavior). Encapsulation helps avoid feature envy, where an object’s collaborator holds behavior that belongs on the object itself, making a series of calls to discover the object’s state instead of telling the object to carry out the behavior on that state. Feature envy leads to coupling and makes our software more rigid (difficult to change) and brittle (likely to break when we do change it).
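
As an illustration (all names here are mine, invented for the sketch), compare a collaborator with feature envy against a Tell Don’t Ask design, where the discount rule lives with the state it depends on:

```python
class ExposedAccount:
    """State laid bare so an envious collaborator can rummage through it."""

    def __init__(self, balance, loyalty_years):
        self.balance = balance
        self.loyalty_years = loyalty_years


class EnviousInvoicer:
    """Feature envy: interrogates the account's state and does the work itself."""

    def charge(self, account, amount):
        if account.loyalty_years > 5 and account.balance > 1000:
            amount *= 0.9  # discount logic coupled to another object's state
        account.balance -= amount
        return amount


class Account:
    """Tell Don't Ask: we tell the account to charge itself."""

    def __init__(self, balance, loyalty_years):
        self._balance = balance
        self._loyalty_years = loyalty_years

    def charge(self, amount):
        if self._loyalty_years > 5 and self._balance > 1000:
            amount *= 0.9  # the rule lives with the state it reads
        self._balance -= amount
        return amount
```

Both compute the same result, but in the second version the account’s state never leaks, so a change to the discount rule touches only one class.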

But this leads to the question: how do we confirm the post-conditions if we have no getters?

Mocks come into play when we decide that what we will confirm is the behavior of the class under test through its interaction with its collaborators, instead of checking state through getters. This is the meaning of behavior-based testing as opposed to state-based testing, although you might prefer to think of it as message-based testing, because what you really care about is the messages that you send to your collaborators.
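
Using Python’s `unittest.mock` (a rough stand-in for .NET frameworks like Moq; the `OrderProcessor` and dispatcher names are illustrative), the assertion moves from the object’s state to the message it sends:

```python
from unittest.mock import Mock


class OrderProcessor:
    """Tells its dispatcher to act; exposes no getters for its outcome."""

    def __init__(self, dispatcher):
        self._dispatcher = dispatcher

    def process(self, order_id):
        self._dispatcher.dispatch(order_id)


def test_processing_dispatches_the_order():
    dispatcher = Mock()
    OrderProcessor(dispatcher).process(order_id=42)
    # Behavior-based assertion: verify the message sent, not the state held.
    dispatcher.dispatch.assert_called_once_with(42)


test_processing_dispatches_the_order()
```

No getter was needed: the test confirms the post-condition by observing the interaction.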

Now when we talk about confirming behavior, recognize that in a Tell Don’t Ask style the objects we are interested in confirming our interaction with are the indirect outputs of the class under test, not the indirect inputs (collaborators that we get state from). In an ideal world, of course, we won’t have many indirect inputs (we want Tell Don’t Ask, remember), but most people will tend to be realistic about how far they can take a Tell Don’t Ask approach. Those indirect inputs may continue to be fakes, stubs, or even instances of collaborators created with TestDataBuilders or ObjectMothers. Of course, the presence of a lot of them in our test setup is a smell that we have not followed a Tell Don’t Ask approach.
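
A sketch of that split (again with hypothetical names): the price lookup is an indirect input, so it gets a stubbed canned answer; the ledger is an indirect output, so it is the mock we assert against:

```python
from unittest.mock import Mock


class BillingService:
    def __init__(self, price_lookup, ledger):
        self._price_lookup = price_lookup  # indirect input: we get state from it
        self._ledger = ledger              # indirect output: we tell it to act

    def bill(self, customer, sku, quantity):
        price = self._price_lookup.price_of(sku)
        self._ledger.record_charge(customer, price * quantity)


def test_billing_records_the_charge():
    price_lookup = Mock()
    price_lookup.price_of.return_value = 10  # stubbed indirect input
    ledger = Mock()                          # mocked indirect output

    BillingService(price_lookup, ledger).bill("alice", "sku-1", 3)

    # We assert only on the message to the indirect output.
    ledger.record_charge.assert_called_once_with("alice", 30)


test_billing_records_the_charge()
```

The test never verifies how `price_of` was called; specifying that too would couple the test to implementation detail it need not care about.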

It is also likely that many of our concerns about over-specified software with mocks come from the coupling caused by these indirect inputs rather than by indirect outputs. Indirect inputs are likely to reveal the most about the implementation of their collaborator. By contrast, where we tell an object to do something for us, we let it hide the implementation details from us. Of course, we need to beware of feature envy from calling a lot of setters instead of simply calling a method on the collaborator; this chattiness is again a feature envy design smell.

I have come to see that an over-specified test when using mocks is not a problem with mocks themselves, but with mocks surfacing the problem that the class under test is not telling collaborators to perform actions at a granular enough level, or is asking too many questions of its collaborators.

Finally, one question that often arises here is: if we are just handing off to other classes, how do we know that the state was eventually correct, i.e. that the algorithm calculated the right value, or that we stored the correct information? One catch-all option is to use a visitor, and pass it in so that the class under test can populate it with the state that you need to observe. Other options include observers that listen for changes. Of course, at some point you need to consume the data from the visitor or the event sent to the observer, but the trick here is to use a data structure, where it is acceptable to expose state (but not to have behavior).
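
One way to sketch the visitor option (the `Basket` and `TotalsReport` names are mine): the class under test is told to report into a plain data structure, and only that data structure exposes state:

```python
from dataclasses import dataclass, field


@dataclass
class TotalsReport:
    """A data structure: it may expose state, but it carries no behavior."""
    lines: list = field(default_factory=list)
    total: int = 0


class Basket:
    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def report_to(self, report):
        # Tell, don't ask: the basket populates the visitor itself,
        # so no getter for the internal price list is ever needed.
        report.lines = list(self._prices)
        report.total = sum(self._prices)


def test_basket_reports_its_totals():
    basket = Basket()
    basket.add(3)
    basket.add(4)
    report = TotalsReport()
    basket.report_to(report)
    assert report.total == 7


test_basket_reports_its_totals()
```

The basket itself stays closed; the test reads state only from the report object, where exposing it is acceptable.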

CQRS also has huge payoffs for a Tell Don’t Ask approach. Because your write side rarely needs to ‘get’ the state, you can defer the question of where we consume the data: the read side does it, in the simplest fashion by just creating data structures directly. Event Sourcing also has benefits here, because you can confirm the changes made to the object’s state by examining the events raised by the class under test instead of asking the object for its state.
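
A minimal Event Sourcing sketch of that last point (the aggregate and event names are invented for illustration): the test examines the events raised, never the balance itself:

```python
class BankAccount:
    """A toy event-sourced aggregate: every state change is recorded as an event."""

    def __init__(self):
        self._balance = 0
        self.uncommitted_events = []

    def deposit(self, amount):
        self._apply(("Deposited", amount))

    def withdraw(self, amount):
        self._apply(("Withdrew", amount))

    def _apply(self, event):
        kind, amount = event
        self._balance += amount if kind == "Deposited" else -amount
        self.uncommitted_events.append(event)


def test_changes_are_confirmed_via_events():
    account = BankAccount()
    account.deposit(100)
    account.withdraw(30)
    # Confirm behavior via the events raised, not by asking for the balance.
    assert account.uncommitted_events == [("Deposited", 100), ("Withdrew", 30)]


test_changes_are_confirmed_via_events()
```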

About Ian Cooper

Ian Cooper has over 18 years of experience delivering Microsoft platform solutions in government, healthcare, and finance. During that time he has worked for the DTi, Reuters, Sungard, Misys and Beazley, delivering everything from bespoke enterprise solutions to 'shrink-wrapped' products sold to thousands of customers. Ian is a passionate exponent of the benefits of OO and Agile. He is test-infected and contagious. When he is not writing C# code he is also the founder of the London .NET user group.
  • ikaruga

    Read Martin’s article and I’m definitely a classical TDD’er *by nature*. So this all seems unintuitive to me.

    “One catch-all option is to use a visitor, and pass it in so that the class-under-test can populate it with the state that you need to observe.”

    Not sure I follow. If the SUT uses collaborators to generate the state that you want to observe (and in your test, you mock all these collaborators), then how does this make sense? It seems that I still haven’t made that paradigm shift to behavior based testing :-)

  • Steve Py

    Not sure I follow exactly. Part of the problem with Karl’s original post was his definition of Mocks vs. Stubs. What he did was still use a mock, he just loosened the definition. For example with Moq, using It.IsAny rather than mocking around specific parameters. Unit tests should not get too bogged down with mocking exactly how a dependency is called, only whether it is called, or should not be called. (IMO, the key difference between a stub and a mock is that mocks provide this assertion.)

    Code under test should never need to expose additional state for unit tests to review, because unit tests should only be testing public state (return values) or publicly accessible behaviour. Determining whether behaviour has been triggered correctly is the role of mocks. For example, if on a call to save a domain object you have a validation service and a domain persistence service as dependencies, an example unit test scenario for behaviour would be to assert that if the Validation returns an invalid response, the domain persistence service *isn’t* called. Details of how the validation call is made are irrelevant; the only relevant thing is that it returned failure.


  • Oli Bye

    Thanks for picking out the point about adding getters simply for testing feeling wrong. Tim and I always felt this. Not being able to add getters to HttpServletRequest forced us to Mock it.

    On your last note:
    CQRS feels much like the decoupling that the original Taligent MVP pattern achieves. Indeed, I unit test our presenters simply by checking the commands they apply to their selections. It’s not important to test what the commands do to the model in presenter tests; that’s what we do when unit testing the commands.

  • Steve Freeman

    Thanks for saying this. We’ve spent so much time over the years having the wrong kind of disagreements over just these arguments. The answer is almost always to listen to the tests.