ALT.NET Seattle 2009, some impressions

Having just got back from ALT.NET Seattle 2009, I wanted to post a few reflections. Mostly this is just a dump of things that grabbed my attention – it's not meant to be any kind of documentation of the conversations there. I would also like to thank all the organizers for their effort – I had a great time – and my fellow delegates for stimulating and challenging conversation.

DTOs, Messages, and the Presentation Layer

I hosted this session. Given parallels with a morning session, it evolved into a discussion of communication between the domain layer and the presentation layer. The question many of us had was whether the consensus was that the presentation layer could depend directly on the domain model, or should depend on a separate model that acted as an intermediary to the domain layer. Broadly there was agreement that the advantage of a specific model was reducing the coupling between the presentation layer and the domain layer by keeping the presentation layer ignorant of the structure of the objects in the domain. Udi Dahan pointed out that many of us were using the term DTO incorrectly to describe the composition or projection of data from the domain model into a shape that we needed for the view; that is actually the Presentation Model. DTOs as a pattern represent on-the-wire serialization of data between two physical tiers, not a separation between a ‘behind-the-glass’ view of the data and the domain model. That clarification helped the conversation to realize that the principle we needed to talk about more was command-query separation (CQS). We query for the presentation model, composing or projecting as required, and we update the domain model through a command. I realized that my conversations on this one had involved lazy mis-application of the DTO name, and the session should be a good kick-in-the-pants to use terminology more carefully here.
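A minimal sketch of the split the session converged on might look like the following. All type names here are hypothetical illustrations, not anything agreed at the session:

```csharp
// Query side: a presentation model projected into the shape the view
// needs, keeping the view ignorant of the domain object structure.
public class OrderSummaryView
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderQueries
{
    // Composes or projects from the domain model, for display only.
    OrderSummaryView GetOrderSummary(int orderId);
}

// Command side: the presentation layer updates the domain by issuing a
// command rather than manipulating domain objects directly.
public class AddOrderLine
{
    public int OrderId { get; set; }
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public interface IOrderCommands
{
    void Handle(AddOrderLine command);
}
```

The point of the shape is that the read path and the write path cross the layer boundary through different contracts, which is what distinguishes this from a DTO used for on-the-wire serialization.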

Can MS Help with DDD Tooling?

Udi challenged my thinking again in this session with the observation that in DDD the Aggregate Root is a role. The interesting part of this idea is that it allows us to treat aggregate roots (ARs) polymorphically, so that we provide separate interfaces for each use of the AR – the Interface Segregation Principle. Some things fall out nicely once we do this. The first is the simplicity within the service layer of interacting with the AR now that we have reduced its surface area. But we can also make other decisions based on the fact that we now understand the context in which we are using the AR. As an example, orderRepository.Get&lt;IEnterNewOrder&gt;(id) now carries enough information to allow the repository to decide which portions of the object graph need eager loading and which need lazy loading. Another is that if we want to use behavior-based testing instead of state-based testing on our service layer (because it holds no state) then we can easily provide a test double for the interface to the aggregate.
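The idea can be sketched roughly as follows. The role interfaces and repository signature here are my own hypothetical reconstruction of Udi's point, not his code:

```csharp
// The Order aggregate plays different roles; each use case gets a
// narrow interface (Interface Segregation Principle).
public interface IEnterNewOrder
{
    void AddLine(int productId, int quantity);
}

public interface ICancelOrder
{
    void Cancel(string reason);
}

public class Order : IEnterNewOrder, ICancelOrder
{
    public void AddLine(int productId, int quantity) { /* domain logic */ }
    public void Cancel(string reason) { /* domain logic */ }
}

// Asking the repository for the aggregate in a given role tells it the
// usage context, so it can pick a fetch strategy per role.
public interface IOrderRepository
{
    TRole Get<TRole>(int id) where TRole : class;
}
```

With this shape, `orderRepository.Get<IEnterNewOrder>(id)` might eagerly load the order lines, while `Get<ICancelOrder>(id)` could fetch only the order header.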

Is Persistence Ignorance Necessary?

Ward Bell did a great job in facilitating the discussion, clearly setting out his understanding of PI and his concerns around it. There was a broad consensus that Persistence Ignorance (PI) within the domain model brought benefits to the developer (low friction, well-separated concerns), but Ward’s question was whether this meant paying a high price within the infrastructure layers where that PI was implemented. I believe that those of us who favor PI were able to get agreement from those who are still doubtful that if it were as easy to implement persistence with PI as without, then they would support PI, because they could see the benefits. Obviously the NHibernate fans felt that the cost of supporting PI was well worth the benefits. But it always seems easy when you know how. It’s worth the PI believers remembering that they had to make the effort to learn the paradigms that NHibernate uses to support PI (sessions, transient and persistent objects, etc.) before they could reap its benefits. So the feeling that PI takes effort is understandable, because there is a learning curve the first time you use an ORM that supports PI.

But the conversation ended positively, because many of the doubters seemed to be more open to deeper exploration of NHibernate in order to more clearly understand why those using NHibernate do not find the cost of supporting PI an issue. Overall a session that could have been very fractious ended up being very fruitful.
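For those unfamiliar with the paradigms mentioned above, a rough sketch of what PI looks like with NHibernate follows; the `Customer` class and `CustomerStore` wrapper are invented for illustration (the mapping itself would be supplied separately, e.g. in an XML mapping file):

```csharp
using NHibernate;

// Persistence-ignorant domain class: no base class, no attributes, no
// ORM calls. Members are virtual only so NHibernate can proxy them.
public class Customer
{
    public virtual int Id { get; protected set; }
    public virtual string Name { get; set; }
}

// The persistence concerns live in the infrastructure layer instead.
public class CustomerStore
{
    private readonly ISessionFactory sessionFactory; // configured elsewhere

    public CustomerStore(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    public object Add(string name)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            var customer = new Customer { Name = name }; // transient
            object id = session.Save(customer);          // now persistent
            tx.Commit();
            return id;
        }
    }
}
```

The learning curve the doubters felt is real: the transient-versus-persistent distinction and the session's unit of work are exactly the paradigms one has to absorb before the domain model's cleanliness feels free.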

Why We Stopped Using the Auto-Mocking Container and What’s Next

Aaron Jensen hosted this one. He shared his experiences of the issues with over-use of mocking within your tests. I was interested to see this one emerge because I have posted about it before, from my own experience, and in our discussions at altnetconf in the UK, but not really seen much acknowledgement of it from the other side of the pond.

Its timing is also ironic because, although we prefer state-based testing in our domain layer, I am starting to move toward preferring behavior-based testing in the service layer, precisely because services have no state to test, and I don’t want to test the state of the Depended-On Components (DOCs) from my Service Layer. The conclusion from the fact that both absolutist approaches lead to issues would seem to be that an appropriate combination of behavior- and state-based testing is needed, depending on the characteristics of the System Under Test (SUT).
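A sketch of what behavior-based testing of a stateless service might look like, using a hand-rolled test double; all the types here are hypothetical:

```csharp
// A role interface on the aggregate, and a repository that serves it.
public interface IShipOrder
{
    void Ship();
}

public interface IOrderRepository
{
    TRole Get<TRole>(int id) where TRole : class;
}

// The service holds no state of its own; it coordinates the DOCs.
public class ShippingService
{
    private readonly IOrderRepository repository;

    public ShippingService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public void ShipOrder(int orderId)
    {
        // Delegates to the aggregate in its shipping role.
        repository.Get<IShipOrder>(orderId).Ship();
    }
}

// A spy records the interaction so the test asserts on behavior
// (was Ship called?) rather than on resulting state.
public class ShipOrderSpy : IShipOrder
{
    public bool ShipWasCalled;
    public void Ship() { ShipWasCalled = true; }
}
```

Because the service's only observable effect is the call it makes, verifying the interaction is the natural test; the same style applied to a state-rich domain object would over-specify the implementation.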

One issue with a state-based approach is that it is harder to see how it works with a top-down or outside-in approach to building your components, as opposed to an inside-out or bottom-up approach, because the top-down approach relies on stubbing out components. We tend to use CRC cards or whiteboard sessions to determine what to build. FitNesse tests work well with this style too, because we can define failing tests prior to providing an implementation. We then build bottom-up, assuming the risk of some re-work for un-needed objects or methods will be small. Aaron was investigating a best-of-both-worlds solution that would allow him to combine the design benefits of a top-down approach with the amenability of state-based testing to change. It will be interesting to see how that develops.

About Ian Cooper

Ian Cooper has over 18 years of experience delivering Microsoft platform solutions in government, healthcare, and finance. During that time he has worked for the DTI, Reuters, SunGard, Misys and Beazley, delivering everything from bespoke enterprise solutions to 'shrink-wrapped' products to thousands of customers. Ian is a passionate exponent of the benefits of OO and Agile. He is test-infected and contagious. When he is not writing C# code he is also the founder of the London .NET user group. http://www.dnug.org.uk
This entry was posted in Agile, altnetconf, Events, Mocks, NHibernate, ORM.
  • Ian Cooper

    @Presentation model
    Automapper looks to have some value in helping create presentation models. Is that your question?

  • PresentationModel and Undo/Redo, Validation Rules…

    Hi, How would you implement these features when using presentation models? (When using a domain model…)
    Do you know of any framework, which might help?

  • Ian Cooper

    @Colin

    I’m just glad that the message around the overspecified software problem is getting out there, so that folks are aware of it when using behavior based testing and know how to be cautious.

    Last time I was accused of promoting hammer vs. saw when everyone should use all the toolkit, when I was trying to warn of the danger of hitting your own thumb when using a hammer.

So it’s good to know the overall discussion has become ‘safety advice when using hammers and saws’, which is where I think we both were originally.

  • http://colinjack.lostechies.com Colin Jack

    @Ian
Agreed, what irritates me is that they then present anyone who did read the article carefully as being a zealot, when the truth is the opposite. To my eye MF nailed the discussion, but somehow a lot of people missed what he was getting at, which seems odd because normally he articulates these subtle arguments so clearly.

Anyhow, VERY interested to read about how you find the whole role-interface-in-domain approach. I’ve read Udi’s articles on it quite carefully and I do see the sense, but so far I’ve not found a use. As you know it is a topic I think about a lot, so I’m glad you’re looking at it.

  • Ian Cooper

    @Colin

    Agreed, I don’t think the article was intended to propose a binary position but, I think that some folks interpreted it that way.

  • http://colinjack.lostechies.com Colin Jack

“As an example, orderRepository.Get&lt;IEnterNewOrder&gt;(id) now carries enough information to allow the repository to decide which portions of the object graph need eager loading and which need lazy loading.”

    I like that approach assuming that the aggregate rules are still met. I guess most of the time I don’t find it necessary to bring in role-based interfaces in the domain if the aggregates are small enough.

    “That said, I think that the binary positions are taken by many, erroneously.”

Totally, but then Martin Fowler’s article made clear it wasn’t either/or (neither ‘use mocking as much as we can’ nor ‘never use it’ was the point).

  • http://blogs.msdn.com/alexj/archive/2009/03/02/alt-net-seattle.aspx Alex James

    …I also enjoyed the session hosted by Ian Cooper on DTOs, Messages, and the Presentation Layer… see Ian’s write up for more detail on this…

  • Ian Cooper

    @Jeremy

    Sure, the binary positions are not ones to adopt. I confess, you had me bang to rights on this one. I owe you a victory beer or something :-)

    That said, I think that the binary positions are taken by many, erroneously.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    Sooo, just to jab at you. Can we finally admit that the Classicist vs. Mockist style of TDD argument is total bunk and you should choose an approach on a case by case basis?