I very commonly hear that, in people’s opinions, applying CQRS is more work than staying with the more classic CRUD-based systems they are used to. The main reason for this objection is that the read layer is separated and goes directly back to the data model rather than through the domain model; people envision this as introducing more work. Nothing could be further from the truth: using CQRS, even at the simplest level where you only have a single data model, is often, though not always, less work.
Let us consider for a moment the process of reading in a more classical architecture where we project our DTOs off of the domain model. We have our domain objects littered with getters and setters, and our code, perhaps using a tool like AutoMapper, will take these domain objects and project DTOs off of them. On the other side we use the same process in reverse to map the DTOs back to the domain objects and then save them. I will for the time being not even go into the fallacy of having a domain of property buckets, or the fact that one has created a ubiquitous language with no verbs … can you use that in a sentence for me? Let’s just focus on the read/write behaviors.
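As a sketch of that round trip (the `Customer` object is hypothetical, and the hand-written mapping functions stand in for what a tool like AutoMapper automates):

```python
from dataclasses import dataclass

# Hypothetical domain object: a bucket of getters/setters with no verbs.
@dataclass
class Customer:
    customer_id: int
    name: str
    credit_limit: float

# The DTO we project off of it for the client.
@dataclass
class CustomerDto:
    customer_id: int
    name: str
    credit_limit: float

def to_dto(customer: Customer) -> CustomerDto:
    # What a mapping tool automates: copy matching properties across.
    return CustomerDto(customer.customer_id, customer.name, customer.credit_limit)

def from_dto(dto: CustomerDto) -> Customer:
    # The same process in reverse on the write side, before saving.
    return Customer(dto.customer_id, dto.name, dto.credit_limit)
```

Nothing here is domain behavior; it is pure property copying in both directions.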
In our system we are simply projecting, which is not too bad if everything matches up 1:1, but does it? More often than not such systems have a DTO model that is screen based, since it gains us perceived performance at the client by lowering the number of round trips to and from the server (there are times when a canonical DTO model is preferred, but that can be another post; for the sake of discussion we will imagine that we are dealing with a screen-based DTO model). The unfortunate thing about screen models is that they are tied to how our screens show data, which may or may not have anything to do with the aggregate boundaries in our domain, as those boundaries tend to be drawn around consistency and invariants in order to perform operations. In other words, they don’t line up.
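To make the mismatch concrete, here is a minimal sketch (the `Order`/`Customer` aggregates and the `OrderSummaryDto` are hypothetical) of a screen DTO whose shape cuts across two aggregate boundaries:

```python
from dataclasses import dataclass

# Two aggregates whose boundaries were drawn for consistency, not for display.
@dataclass
class Order:
    order_id: int
    customer_id: int
    total: float

@dataclass
class Customer:
    customer_id: int
    name: str

# One screen row combining fields from both aggregates.
@dataclass
class OrderSummaryDto:
    order_id: int
    customer_name: str
    total: float

def build_summary(order: Order, customer: Customer) -> OrderSummaryDto:
    # Rendering a single row forces us to load (at least) two aggregates.
    return OrderSummaryDto(order.order_id, customer.name, order.total)
```

The DTO follows the screen; the aggregates follow the invariants. They do not line up.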
The problem here is that it is very difficult to use our domain model, which was created for processing transactions, to provide reading behaviors efficiently. Even after optimization, some general symptoms of this problem would be unneeded data being read, multiple round trips to the database to build a single DTO, or, my personal favorite, gratuitous use of prefetch paths with the ORM of choice. Getting prefetch paths set up appropriately (and, more importantly, maintaining them over time) is hard, and developers spend a lot of time dealing with the impedance mismatch between the domain model and the data model, especially as the two fall out of sync with each other over time due to features being added and so on.
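The multiple-round-trips symptom can be sketched like this (the repository and its in-memory "database" are hypothetical; a counter stands in for actual network trips):

```python
# Hypothetical repository where every method call is a database round trip.
class OrderRepository:
    def __init__(self, db):
        self.db = db
        self.round_trips = 0  # stands in for real network calls

    def get_order(self, order_id):
        self.round_trips += 1
        return self.db["orders"][order_id]

    def get_lines(self, order_id):
        self.round_trips += 1
        return [l for l in self.db["lines"] if l["order_id"] == order_id]

def build_order_dto(repo, order_id):
    order = repo.get_order(order_id)   # trip 1: the aggregate root
    lines = repo.get_lines(order_id)   # trip 2: its children
    # Without a carefully maintained prefetch path, every association we
    # walk on the object model costs another trip like these.
    return {"order_id": order["id"], "line_count": len(lines)}
```

Two trips for one DTO here; with deeper object graphs the count grows with every association walked.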
Let us compare this with the simplest version of CQRS: a separated read model hitting the same data source as the write model. Here the read layer talks directly to the data model. Query optimization becomes, well … query optimization. We are not spending our time on the impedance mismatch between the object model and the data model. We are instead just writing queries against the model and mapping the results to DTOs, which can be done using a tool like AutoMapper just as in the first case. We are writing basic queries and mapping them; we can join in new and unusual ways, deal with roll-ups, or even use an object model if we so choose! We can optimize queries easily (or, even better, let the DBAs optimize them).
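A minimal sketch of such a read layer, assuming a hypothetical orders/customers schema in an in-memory SQLite database: one query joins the tables and the result maps straight to the shape the screen wants.

```python
import sqlite3

# Hypothetical data model shared by the write and read sides.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)""")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")
conn.execute("INSERT INTO orders VALUES (10, 1, 99.0)")

def order_summaries(conn):
    # The read layer talks directly to the data model: join, shape, return.
    rows = conn.execute(
        """SELECT o.id, c.name, o.total
           FROM orders o JOIN customers c ON c.id = o.customer_id
           ORDER BY o.id"""
    ).fetchall()
    # Mapping to DTOs becomes a trivial projection of the query result.
    return [{"order_id": r[0], "customer": r[1], "total": r[2]} for r in rows]
```

There is no object graph to hydrate and no prefetch path to maintain; tuning this read is ordinary SQL tuning.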
A similar issue exists when we start wanting data in a different shape than the domain holds it. Some examples would be when the screen DTO wants counts of something, or some rolled-up view of the data. It becomes extremely difficult to handle many of these situations efficiently when going through an object model.
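For instance, a count done both ways against a hypothetical orders table in SQLite: the object-model route hydrates every row just to count, while the read model lets the database roll it up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("open",), ("open",), ("shipped",)])

# Object-model route: pull every order into memory just to count them.
all_orders = conn.execute("SELECT id, status FROM orders").fetchall()
open_count_via_objects = sum(1 for _, status in all_orders if status == "open")

# Read-model route: ask the database for the rolled-up answer directly.
open_count_via_query = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status = 'open'").fetchone()[0]
```

Both arrive at the same number, but the first route scales with the row count while the second ships one integer back.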
Many people have run into issues like this, but instead of dealing with them they do something far worse: they make their aggregate root boundaries match their screens (to which the boundaries may or may not have any correlation). Doing this completely breaks the concept of the aggregate root in the domain.
Some have brought up persistence ignorance on the read side, arguing that building off the domain model keeps the read side persistence ignorant. There are a few options here. The first is to question: what is the actual risk of needing to support a completely different backing data model (note: not a different database, but a completely different data model)? If it is low, how concerned should you really be? What would be the amount of work to change? Weighed against that probability, is the cost of losing this bet one worth insuring against? The second option is to use a criteria API over the top of your object model; this adds much complexity but can isolate you from such a change.
Said differently, with CQRS you are not doing more work by separating the read service from the write service. You are doing different work; it may be more, less, or the same. This is a common thread I run into with CQRS-style architectures: they are rarely more work, but you are doing different work, and the different work offers different architectural properties. More often than not people assume it’s more work because it’s different to them, and they project it as being more complex when in fact it is just, different.