DDDD: Master-Detail Question

I received an interesting question via email recently that I figured would be better answered via a blog post than a private response.

Greg,

<clip of intro and flattery>

So you are getting messages directly out of your repositories and sending command messages directly to them for persistence.

There’s one thing that hit me when thinking about your approach: you do not seem to have a UI with a master-detail connected to this “model”, as it sounds very transactional. Further, sending a bunch of messages just as data for a single view does not make sense. Do you also have to deal with such UI stuff, and how do you support it? I guess with another synced model?

This is a great question, as it really jumps into how things work architecturally. This becomes rather obvious in the talk I have been giving, since there are diagrams, but it has not been discussed in detail on this blog.

Command and Query Separation


The first thing that should be discussed here is that repositories are not responsible for handling “read” messages. Repositories have a single reading operation: loading one instance of an aggregate root based upon its unique identifier.

CustomerRepository::Get(Guid Id)

This is because our domain is only responsible for handling *Command* (write) operations. All other *Query* (read) operations are delegated to another “thin” bit of code. But this “thin” bit of code is important and deserves to be talked about more, because it gets into the second question discussed here.
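As a rough sketch (the interface name, the Save method, and the stub Customer class here are purely for illustration), the write-side repository might look something like this:

    using System;

    // Aggregate root stub; real behaviour elided.
    public class Customer
    {
        public Guid Id { get; set; }
    }

    public interface ICustomerRepository
    {
        // The single reading operation on the write side:
        // load one aggregate root by its unique identifier.
        Customer Get(Guid id);

        // Persisting changes also goes through the repository;
        // its exact shape varies and is shown only for context.
        void Save(Customer customer);
    }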

The UI Mapping

In most systems the creation of DTOs for viewing of data by the UI is handled by some code that interacts with repositories and then maps domain objects to the DTOs. This process is often quite inefficient, as the fetching paths for this kind of reading are different from those used when executing a transaction on a domain object. In DDDD this is not done; instead there are separate “providers” (“repositories” is a bad term since they only handle reads) in another layer that handle the mapping of DTOs for the UI. It is important to note that this is a very thin layer that basically reads from a data model and maps directly to the DTOs (there are no intermediate domain objects).
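To make that concrete, here is a rough sketch of such a provider (the names and the reporting table are made up for illustration): it reads a flat row from the data model and maps it straight onto the DTO the screen asked for, with no domain objects in between.

    using System;
    using System.Data;

    // The DTO is shaped by the screen, not by the domain model.
    public class CustomerSummaryDto
    {
        public Guid Id;
        public string Name;
        public string City;
    }

    public class CustomerReadProvider
    {
        private readonly IDbConnection connection;

        public CustomerReadProvider(IDbConnection connection)
        {
            this.connection = connection;
        }

        public CustomerSummaryDto GetSummary(Guid customerId)
        {
            using (IDbCommand command = connection.CreateCommand())
            {
                command.CommandText =
                    "SELECT Id, Name, City FROM CustomerSummaries WHERE Id = @id";

                IDbDataParameter idParameter = command.CreateParameter();
                idParameter.ParameterName = "@id";
                idParameter.Value = customerId;
                command.Parameters.Add(idParameter);

                using (IDataReader reader = command.ExecuteReader())
                {
                    if (!reader.Read()) return null;

                    // Map the row straight onto the DTO - no domain objects involved.
                    return new CustomerSummaryDto
                    {
                        Id = reader.GetGuid(0),
                        Name = reader.GetString(1),
                        City = reader.GetString(2)
                    };
                }
            }
        }
    }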

It is important that we pause for a moment and look at what these “DTOs” are. In an ideal world these DTOs map directly to how the UI wants to use the data. They will oftentimes have no correlation to how the domain model is laid out (it is quite possible that 3-4 aggregate roots make up a single DTO for the UI). I say this is in an “ideal” world because there is also a cost associated with the creation of screen-specific DTOs. What if I have 5 clients that show similar but slightly different information? Do I create 5 slightly different methods? Probably not, because the reason I want them to map up is to minimize the number of round trips I am making. The law of diminishing returns applies here…

For things like “searches” that return lists (generally the master in a master-detail), the DTOs also match up with the screen (summary information and an id). It is important to note that we are not searching the same model (if we are storing commands we don’t have a current snapshot). The model that we are searching will tend to be an eventually consistent, denormalized snapshot of the model, but it does not *have* to be eventually consistent (in a system of small scale we could require the write to go all the way through to the denormalization before returning to the caller (2PC), thus creating full consistency). It is important to note that the unique identifier of each item is included.
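As a rough, in-memory sketch of how that denormalized search model might be kept up to date (the names are illustrative, and a real system would more likely write to a table), a handler on the read side flattens what the write side publishes into one row per customer:

    using System;
    using System.Collections.Generic;

    // Published by the write side when a customer is created.
    public class CustomerCreated
    {
        public Guid Id;
        public string FirstName;
        public string LastName;
    }

    // One flat, denormalized row per customer - exactly what the master list needs.
    public class CustomerSearchRow
    {
        public Guid Id;
        public string FirstName;
        public string LastName;
    }

    public class CustomerSearchDenormalizer
    {
        private readonly Dictionary<Guid, CustomerSearchRow> rows =
            new Dictionary<Guid, CustomerSearchRow>();

        // Typically invoked asynchronously off a queue (eventual consistency);
        // invoking it before returning to the caller would give full consistency.
        public void Handle(CustomerCreated e)
        {
            rows[e.Id] = new CustomerSearchRow
            {
                Id = e.Id,
                FirstName = e.FirstName,
                LastName = e.LastName
            };
        }

        public IEnumerable<CustomerSearchRow> SearchByFirstName(string firstName)
        {
            foreach (CustomerSearchRow row in rows.Values)
                if (string.Equals(row.FirstName, firstName, StringComparison.OrdinalIgnoreCase))
                    yield return row;
        }
    }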

Knowing this, let’s take a look at a master-detail screen. I come to a screen where I search customers by their first name…

I type “greg”.

My UI sends a request up to my thin read layer; let’s call it a PerformCustomerSearchByFirstNameQuery with a first name of “Greg”.

My thin read layer returns a set of PerformCustomerSearchByFirstNameQueryResult DTOs to my UI (note that there are probably other things going on here, like paging).
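The messages themselves could be as simple as this (the field names are illustrative; the point is that only summary information and the id come back):

    using System;

    // The request message sent from the UI to the thin read layer.
    public class PerformCustomerSearchByFirstNameQuery
    {
        public string FirstName;
        public int PageNumber;   // paging information would likely ride along here
        public int PageSize;
    }

    // One result row per matching customer: summary information plus the id
    // that will later be used to drill into the detail.
    public class PerformCustomerSearchByFirstNameQueryResult
    {
        public Guid CustomerId;
        public string FullName;
        public string City;
    }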

Now when I want to drill in, I would in the UI take the id that I wanted and ask for the full DTO for that object (note that if the object has 7 tabs, I am probably getting all 7 tabs of data at once). Generally the rule here is to prefer big, chunky messages sent less often, but occasionally lazy loading from a client model can have value (just remember to measure!).
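A rough sketch of that chunky detail DTO (the fields, tab names, and provider interface are made up for illustration):

    using System;

    // One chunky request by id returns the data for every tab of the detail screen at once.
    public class CustomerEditDto
    {
        public Guid Id;

        // "General" tab
        public string Name;
        public string PhoneNumber;

        // "Billing" tab
        public string BillingAddress;
        public decimal CreditLimit;

        // ...the remaining tabs' data rides along in the same message.
    }

    public interface ICustomerDetailProvider
    {
        CustomerEditDto GetCustomerForEdit(Guid customerId);
    }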

While in the detail view, my UI would build commands representing the changes the user is making within the detail (bound to the unique aggregate root id of the item I am editing). When complete, the user would press Save and these commands would be dispatched to my “write-only” domain model.
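A couple of illustrative commands (the names, fields, and the bus.Send call are placeholders, not a prescribed API):

    using System;

    // Commands are bound to the aggregate root id of the item being edited.
    public class ChangeCustomerBillingAddress
    {
        public Guid CustomerId;
        public string NewAddress;
    }

    public class RaiseCustomerCreditLimit
    {
        public Guid CustomerId;
        public decimal NewLimit;
    }

    // On Save, the client dispatches the accumulated commands to the write-only
    // domain model, e.g. over a message bus (bus.Send is a placeholder here):
    //
    //   bus.Send(new ChangeCustomerBillingAddress { CustomerId = id, NewAddress = newAddress });
    //   bus.Send(new RaiseCustomerCreditLimit { CustomerId = id, NewLimit = newLimit });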

This pattern is called event sourcing and is not new, although I think it has been discovered hundreds of times :). Martin Fowler has a write-up on it here.

Begging the question…

Sometimes the complexity that we are trying to encapsulate in our domain is “reading” code. How do we encapsulate this logic and avoid duplicating it between the domain and the “reading” layer? An example of this might be a business rule and a search that both use a complex derived “score” for a customer. How can we prevent duplicating all of the logic in the two models…

next time on this blog…


9 Responses to DDDD: Master-Detail Question

  1. Alex says:

    I was wondering if there already was a followup to the last question in the post about “preventing duplication of logic”, or if one is still to come?
    How do you handle the case where you allow the user to change something in the detail (or the header, for that matter) and you need immediate feedback in the UI (the result of some kind of calculation based on business logic) without having sent any commands yet, as a way of showing the user the impact of his change before actually performing the action? This seems like another place where duplication of logic is possible?

  2. Tomas says:

    Hi Greg,

    I used a 2-model approach in one smaller project in the past. It was a custom ASP.NET extension of
    MS CRM. There were 2 reasons to do the model separation:
    1. Performance
    2. CRM has a rich query object API but lacks join functionality

    Even for a small (not so complex) project, the 2-model approach was a good decision.
    ———————————————–
    But I still have one question/problem. I create a hypothetical example:

    There’s a marketing department. There’s a campaign administrator who creates a marketing list by running a query against the list of all leads.
    There’s also a campaign supervisor who runs a query against all leads in the marketing list to assign them to individual marketing employees.
    There’s a large number of leads, let’s say up to a million. Neither the campaign admin nor the supervisor wants to manually choose from a matched list of leads to perform their goal; they don’t want to see such a list. They are interested only in how many items the query matched.
    They want to perform some update operations by executing a query. The domain model is too fat/slow to handle this effectively, but it’s also out of scope for the “read only” model.

    How would you solve this problem?

    Thank you for any suggestions/reply.

  3. Jonathan says:

    I would love to see a quick example of the mechanism (including DB schema) used to store each of the event sourcing commands.

  4. Sam says:

    I get stuck while thinking in the CQS way here. Let’s say I have a screen that shows the order detail info for a customer to review, with ItemName, Quantity, Price and OrderTotal fields. The user is allowed to change the Quantity field value. Since OrderTotal could be Price * Quantity * Tax, my question is when you think is the right time to update the OrderTotal field on the screen. The two options I can think of are:

    A: Update the screen right away, which means we need to duplicate the calculation logic in the write-only domain model and the frontend model.
    B: Update the screen after we click the Save button (which would fire all the commands to the server first; then, after the client gets the notification from the server saying the aggregate has been changed, the client sends a request to reload the whole thing). This is not that intuitive to the user, since he/she cannot see the OrderTotal without clicking the Save button.

    I am not quite happy with either. Could you help me out here?

  5. Greg says:

    Good question Nick. I will write more about this another time but the answer is yes.

    And to your next question … yes this can get complex if the reporting source is eventually consistent ;-)

    Cheers,

    Greg

  6. Nick says:

    Would you go to this read layer for non-UI related queries? E.g. I have a business rule that email addresses have to be unique within my system. So for example on CustomerService.Activate I want to search for the existence of a customer with the email address of the activating customer.

  7. Another great thought-provoking post! It leaves me with one question though: do you only have a Get(… id) method on your repositories, or also a Find(Specification spec)-like method?

    If not, then what to do if I want to (e.g.) calculate all orders for a given customer?

    Again, great post.

  8. Stuart C says:

    So if I understand you correctly, you do all reads, except those that load your persistent model’s aggregate root, from a separate “reading layer”?
    Is this a common pattern, and if so, where can I read about it?
    I do a lot of reading to return DTOs and have felt uncomfortable with this being in the repository, but I have been reluctant to move the read code elsewhere because, although it seems like a separation of concerns, it is a subtle distinction…

  9. James Bigler says:

    Thanks for this post Greg. We are investigating Domain-Driven Design at my company. I read Eric Evans’s book and Jimmy Nilsson’s book, then started a book study group for Eric Evans’s book with some of my coworkers. One of the problems we are having is deciding how to handle what we call our grids (I think our grids are the same thing as your master-detail pages). We didn’t want to make things overly complex if we didn’t have to. Your approach sounds exactly like what we are looking for. I am sure my boss’s first question will be how we reuse the business logic embedded in those queries, so I can’t wait to read your next post. Thanks again.
