Having just got back from Alt.Net Seattle 2009, I wanted to post a few reflections. Mostly this is just a dump of things that grabbed my attention – it's not meant to be any kind of documentation of the conversations there. I would also like to thank all the organizers for their effort – I had a great time – and my fellow delegates for stimulating and challenging conversation.
DTOs, Messages, and the Presentation Layer
I hosted this session. Given parallels with a morning session, it evolved into a discussion of communication between the domain layer and the presentation layer. The question many of us had was whether the presentation layer could depend directly on the domain model, or should depend on a separate model that acted as an intermediary to the domain layer. Broadly there was agreement that the advantage of a dedicated model was reduced coupling between the presentation layer and the domain layer, because it keeps the presentation layer ignorant of the structure of the objects in the domain. Udi Dahan pointed out that many of us were using the term DTO incorrectly to describe the composition or projection of data from the domain model into the shape we needed for the view; that is actually the Presentation Model. The DTO pattern describes on-the-wire serialization of data between two physical tiers, not the separation between a ‘behind-the-glass’ view of the data and the domain model. That clarification helped the conversation realize that the principle we really needed to talk about was command-query separation (CQS): we query for the presentation model, composing or projecting as required, and we update the domain model through a command. I realized that my own conversations on this topic had lazily misapplied the DTO name, and the session should be a good kick-in-the-pants to use the terminology more carefully.
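The split we converged on can be sketched as follows. This is a minimal illustration (in TypeScript for brevity; every name here, Order, OrderSummaryView, and the two functions, is hypothetical): queries return a presentation model projected from the domain, while commands mutate the domain and return nothing.

```typescript
// The domain model: behaviour and invariants, no presentation concerns.
class Order {
  private lines: { sku: string; qty: number }[] = [];
  constructor(public readonly id: number) {}
  addLine(sku: string, qty: number): void {
    if (qty <= 0) throw new Error("quantity must be positive");
    this.lines.push({ sku, qty });
  }
  get lineCount(): number {
    return this.lines.length;
  }
}

// The presentation model: a flat projection shaped for one view.
interface OrderSummaryView {
  orderId: number;
  lineCount: number;
}

// Query side: project the domain into the shape the view needs; no mutation.
function queryOrderSummary(order: Order): OrderSummaryView {
  return { orderId: order.id, lineCount: order.lineCount };
}

// Command side: mutate the domain; return nothing to the caller.
function addLineCommand(order: Order, sku: string, qty: number): void {
  order.addLine(sku, qty);
}

const order = new Order(42);
addLineCommand(order, "SKU-1", 2);
const view = queryOrderSummary(order);
console.log(view.lineCount); // 1
```

The point of the separation is that the view binds only to OrderSummaryView, so the domain object's internal structure can change without rippling into the presentation layer.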
Can MS Help with DDD Tooling?
Udi challenged my thinking again in this session with the observation that in DDD the Aggregate Root is a role. The interesting part of this idea is that it allows us to treat aggregate roots (ARs) polymorphically, providing separate interfaces for each use of the AR – the Interface Segregation Principle. Some things fall out nicely once we do this. The first is the simplicity of interacting with the AR from the service layer now that we have reduced its surface area. But we can also make other decisions based on the fact that we now understand the context in which we are using the AR. As an example, orderRepository&lt;IEnterNewOrder&gt;.Get(id) now carries enough information to let the repository decide which portions of the object graph need eager loading and which can be lazy loaded. Another is that if we want to use behavior-based testing instead of state-based testing on our service layer (because it holds no state), then we can easily provide a test double for the interface to the aggregate.
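A rough sketch of the idea, with hypothetical names throughout (the role interfaces and RoleRepository are mine, not any real API, and I've used TypeScript rather than C#): each role interface exposes only the operations one use case needs, and the repository is told which role the caller wants – which is exactly the information that would let it choose a loading strategy.

```typescript
// Each role interface exposes only what one use case needs (ISP).
interface IEnterNewOrder {
  addLine(sku: string, qty: number): void;
}
interface IShipOrder {
  markShipped(): void;
}

// One concrete aggregate plays both roles.
class Order implements IEnterNewOrder, IShipOrder {
  private lines: string[] = [];
  private shipped = false;
  constructor(public readonly id: number) {}
  addLine(sku: string, qty: number): void {
    this.lines.push(`${sku} x${qty}`);
  }
  markShipped(): void {
    this.shipped = true;
  }
  get isShipped(): boolean {
    return this.shipped;
  }
}

// The repository knows which role the caller wants; a real implementation
// could use that context to decide eager vs. lazy loading of the graph.
class RoleRepository {
  private store = new Map<number, Order>();
  add(order: Order): void {
    this.store.set(order.id, order);
  }
  getForEntry(id: number): IEnterNewOrder {
    return this.store.get(id)!;
  }
  getForShipping(id: number): IShipOrder {
    return this.store.get(id)!;
  }
}

const repo = new RoleRepository();
const order = new Order(7);
repo.add(order);
repo.getForEntry(7).addLine("SKU-9", 3); // entry role: cannot ship from here
repo.getForShipping(7).markShipped();    // shipping role: cannot add lines
```

In TypeScript the role restriction is compile-time only (the language is structurally typed), but the shape of the idea – reduced surface area per use case, and context available to the repository – carries over.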
Is Persistence Ignorance Necessary
Ward Bell did a great job of facilitating the discussion, clearly setting out his understanding of PI and his concerns around it. There was broad consensus that Persistence Ignorance (PI) within the domain model brings benefits to the developer (low friction, well-separated concerns), but Ward’s question was whether this means paying a high price within the infrastructure layers where that PI is implemented. I believe that those of us who favor PI were able to get agreement from those who are still doubtful that, if it were as easy to implement persistence with PI as without it, they would support PI, because they could see the benefits. Obviously the NHibernate fans felt that the cost of supporting PI was well worth it. But it always seems easy when you know how. It is always worth PI believers remembering that they had to make the effort to learn the paradigms NHibernate uses to support PI (sessions, transient and persistent objects, etc.) before they could reap its benefits. So the feeling that PI takes effort is understandable, because there is a learning curve the first time you use an ORM that supports PI.
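For anyone new to the term, a persistence-ignorant domain class looks something like this (a hypothetical sketch, not NHibernate-specific, and in TypeScript for brevity): the entity has no persistence base class, no mapping attributes, and no ORM calls; storage sits behind an interface implemented out in infrastructure.

```typescript
// The domain entity knows nothing about how it is stored.
class Customer {
  constructor(public readonly id: number, private name: string) {}
  rename(newName: string): void {
    if (!newName.trim()) throw new Error("name cannot be empty");
    this.name = newName;
  }
  get displayName(): string {
    return this.name;
  }
}

// Persistence lives behind an interface the domain owns...
interface CustomerRepository {
  save(customer: Customer): void;
  byId(id: number): Customer | undefined;
}

// ...implemented in the infrastructure layer. Here an in-memory map stands in
// for the session / unit-of-work machinery an ORM like NHibernate supplies;
// this is where the cost of supporting PI is paid, not in the domain model.
class InMemoryCustomerRepository implements CustomerRepository {
  private rows = new Map<number, Customer>();
  save(c: Customer): void {
    this.rows.set(c.id, c);
  }
  byId(id: number): Customer | undefined {
    return this.rows.get(id);
  }
}

const repo = new InMemoryCustomerRepository();
repo.save(new Customer(1, "Acme"));
repo.byId(1)!.rename("Acme Ltd");
```

The debate in the session was essentially about how much effort that infrastructure half of the picture costs, not about the desirability of the clean half above.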
But the conversation ended positively, because many of the doubters seemed open to deeper exploration of NHibernate in order to understand more clearly why those using it do not find the cost of supporting PI an issue. Overall, a session that could have been very fractious ended up being very fruitful.
Why We Stopped Using the Auto-Mocking Container and What’s Next
Aaron Jensen hosted this one, sharing his experiences of the issues caused by over-use of mocking within tests. I was interested to see this topic emerge because I have posted about it before, from my own experience, and we discussed it at altnetconf in the UK, but I had not really seen much acknowledgement of it from the other side of the pond.
Its timing is also ironic because, although we prefer state-based testing in our domain layer, I am starting to prefer behavior-based testing in the service layer, precisely because services have no state to test and I don’t want to test the state of the Depended-On Components (DOCs) from my service layer. The conclusion, given that both absolutist approaches lead to issues, would seem to be that an appropriate combination of behavior- and state-based testing is needed, depending on the characteristics of the System Under Test (SUT).
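To make the distinction concrete, here is a hypothetical sketch (TypeScript, invented names) of behavior-based testing of a stateless service: since the service holds no state to assert on, we record and verify its interaction with a hand-rolled spy, standing in for a mocking framework.

```typescript
// The depended-on component, expressed as an interface the service needs.
interface Mailer {
  send(to: string, body: string): void;
}

// The service under test: no state of its own, pure orchestration.
class WelcomeService {
  constructor(private mailer: Mailer) {}
  welcome(email: string): void {
    this.mailer.send(email, "Welcome aboard!");
  }
}

// A hand-rolled spy that records calls so we can assert on behaviour.
class SpyMailer implements Mailer {
  calls: { to: string; body: string }[] = [];
  send(to: string, body: string): void {
    this.calls.push({ to, body });
  }
}

const spy = new SpyMailer();
new WelcomeService(spy).welcome("a@example.com");
// Behaviour-based assertion: the interaction happened, once, with these args.
console.log(spy.calls.length); // 1
```

There is no state on WelcomeService worth inspecting, so asserting on the recorded interaction is the natural fit here, whereas in the domain layer we would assert on the resulting state of the object instead.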
One issue with a state-based approach is that it is harder to see how it works with a top-down or outside-in approach to building your components, as opposed to an inside-out or bottom-up approach, because top-down development relies on stubbing out components. We tend to use CRC cards or whiteboard sessions to determine what to build. FitNesse tests work well with this style too, because we can define failing tests prior to providing an implementation. We then build bottom-up, assuming that the risk of some rework for un-needed objects or methods will be small. Aaron was investigating a best-of-both-worlds solution that would combine the design benefits of a top-down approach with the amenability of state-based testing to change. It will be interesting to see how that develops.