CQRS recap, or “How to resuscitate”

I’m fighting a bit with my ego at the moment, which is telling me I need to provide at least four paragraphs of update on what I’ve been doing in the three years since I last posted. The fight is with my more practical side, which is saying, “Name three people that have noticed.” I’ll compromise with a bulleted list, because some of it does have a little bearing on the rest of this post:

  • I’m no longer with BookedIN although they are still going strong.
  • I’ve started recently with Clear Measure who has graciously relaxed their “no hillbilly” hiring policy. Guardedly.
  • For those interested in my extra-curriculars, I’m also blogging at http://kyle.baley.org where you’ll find a recent follow-up to an older post on Life in the Bahamas, which I keep getting emails about for some reason…

This past weekend, Clear Measure hosted a meetup/coding-thingy on CQRS with your host, Gabriel Schenker. Initially intended as an event by and for Clear Measurians, it was opened to the public as a means to garner feedback for future events. It was a 7-hour affair where Gabriel set out the task to perform, then left us to our own devices to build as much as we could while he provided guidance and answered questions.

The event itself ran as well as I expected, which, me being an optimistic sort, was awesome! And Gabriel, if you’re reading, I did manage to get to the beach today so don’t feel bad about taking that time away from me. I won’t go into logistics but wanted to get my thoughts on CQRS on the table for posterity.

By way of background, my knowledge of CQRS was, up until I started at Clear Measure, pretty vague, limited mostly to what I knew about the definition of each of the words in the acronym. Gabriel has, in meetings and recently on his blog, increased my awareness of it to the point where it made sense as an architectural pattern but was still abstract enough that if someone asked me to implement it in a project, I would have first consulted with a local voodoo doctor (i.e. speed dial #4).

The good part

So the major benefit I got from the event is how much CQRS was demystified. It really *is* just segregation of the responsibilities of the commands and the queries. Commands must logically be separated from queries to the point where they don’t even share the same domain model. Even the term “domain model” is misleading since the model for queries is just DTOs, and not even glorified ones at that.

Taking one example, we created ourselves a swanky new TaskService for saving a new task. It takes in a ScheduleTaskDto which contains the basics from the UI: a task name, a due date, some instructions, and a list of assignees. The TaskService uses that info to create a fully-formed Task domain object, setting not only the properties passed in but also maybe the CreateDate, the Status, and the ID. Then maybe it validates the thing, saves it to the repository, and notifies the assignees of the new task. All like a good, well-behaved domain object.
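As a sketch of what that command side looks like (in the spirit of what we built, not the actual event code), here it is in C#. The names ScheduleTaskDto, TaskService and Task are from the description above; the exact properties and the TaskStatus values are my guesses, and validation, saving, and notification are elided to comments:

```csharp
using System;
using System.Collections.Generic;

// Command-side sketch. ScheduleTaskDto, TaskService and Task are the names
// from the post; the property lists and TaskStatus values are assumptions.
public enum TaskStatus { Open, Completed }

public class ScheduleTaskDto
{
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
    public string Instructions { get; set; }
    public List<string> Assignees { get; set; } = new List<string>();
}

public class Task
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
    public string Instructions { get; set; }
    public List<string> Assignees { get; set; }
    public DateTime CreateDate { get; set; }
    public TaskStatus Status { get; set; }
}

public class TaskService
{
    public Task ScheduleTask(ScheduleTaskDto dto)
    {
        // The service, not the caller, supplies Id, CreateDate and Status.
        var task = new Task
        {
            Id = Guid.NewGuid(),
            Name = dto.Name,
            DueDate = dto.DueDate,
            Instructions = dto.Instructions,
            Assignees = dto.Assignees,
            CreateDate = DateTime.UtcNow,
            Status = TaskStatus.Open
        };
        // Here the real service would validate the task, save it to the
        // repository, and notify the assignees.
        return task;
    }
}
```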

Now we want a list of tasks to show on the dashboard. Here are two things we actively had to *stop* ourselves from doing:

  • Returning a list of Task objects
  • Putting the logic to retrieve the tasks in the TaskService

Instead, we created a DashboardTask DTO containing the TaskId, Name, DueDate, and Status: all the items needed to display a list of tasks and nothing else. We also created a view in the database that retrieves exactly that information. The code to retrieve it lives in a separate class that goes directly to the database, not through the TaskService.

Given more time, I can see how the command/query separation would play out more easily. For the commands, we might have used NHibernate, which gives us the lazy loading and relationship handling and property mappings and everything else it does well. For the queries, we’d probably stick with views and Dapper, which let us query exactly the information we want.
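A sketch of that query side with Dapper might look like the following. The DashboardTask fields come from the example above; the view name (TasksDashboardView), the Status type, and the connection handling are assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

// Query-side sketch: a dumb DTO plus a class that goes straight to the
// database. The view name and connection string handling are assumptions.
public class DashboardTask
{
    public Guid TaskId { get; set; }
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
    public string Status { get; set; }
}

public class DashboardTaskQuery
{
    private readonly string _connectionString;

    public DashboardTaskQuery(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IEnumerable<DashboardTask> Execute()
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            // The view returns exactly the columns the dashboard needs;
            // Dapper maps them onto the DTO by name.
            return connection.Query<DashboardTask>(
                "select TaskId, Name, DueDate, Status from TasksDashboardView");
        }
    }
}
```

Note that this class bypasses the domain entirely: no TaskService, no NHibernate session, just a view and a DTO.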

My sense is that we’d have a much bigger set of classes in the query model than in the command model (which would be a full-fledged domain), because the query model requires at least one class for almost every screen in the app. Dashboard listing of tasks for supervisors: SupervisorDashboardTask. List of tasks for a dropdown list: TaskListItem. Retrieving a task for printing on a report: OverdueTask. All separate and all very specific.
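In code that proliferation is less scary than it sounds: each read model is just a small, screen-shaped class. The class names below come from the examples above, while the properties are illustrative guesses:

```csharp
using System;

// One read model per screen, each carrying only what that screen shows.
// Class names are from the post; the properties are illustrative guesses.
public class SupervisorDashboardTask
{
    public Guid TaskId { get; set; }
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
    public string AssignedTo { get; set; }
}

// Just enough for a dropdown list.
public class TaskListItem
{
    public Guid TaskId { get; set; }
    public string Name { get; set; }
}

// Just enough for the printed report.
public class OverdueTask
{
    public Guid TaskId { get; set; }
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
    public int DaysOverdue { get; set; }
}
```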

Wrap up

My partner-in-crime for the day was Alper Sunar, who is hosting our day’s efforts, such as they are. The big hurdle I had to jump early on was stopping myself from going infrastructure-crazy. Early discussions touched on Bootstrap, RavenDB, IoC, and Angular, all of which would have kept me from my goal: learning CQRS.

I’ve forked the code with the intent of continuing the journey and perhaps looking into something like RavenDB. I have to admit, all the talk around the virtual water cooler about Elasticsearch has me thinking. And not just about finding new sister-wives…

Kyle the Returned

Posted in BookedIN, Clear Measure, CQRS | 2 Comments

DotNetFringe

This week I had the pleasure of attending the DotNetFringe conference. It was hosted in Portland, OR (a place I had never been), and it was outstanding. Thank you to all of the organizers for putting on such a great event.


I have been a long-time fan of OSS and I’m always looking to learn new ways of doing things. One of the things I have been thinking about lately is what the next big changes in software development will be, specifically for .NET. What is the next IoC container for .NET applications? After watching a talk on Reactive Extensions by Amy, I continue to be convinced that those who really understand functional programming and state management will be making the systems that I want to work on in the future.

I’ve been playing in Clojure for the last year or so and it has really changed the way I see software. Early this year I started porting all of my build scripts to FAKE (F#), and I can see how powerful F# is and the beauty it brings. I’m also watching with rapt attention the developments on the client side in terms of React.js and the various ClojureScript projects that take this model to the next level. These projects seem to take something that is normally quite messy and make it simpler. My ability to reason about my F# and Clojure projects is an order of magnitude better than with my OO projects from three years ago (in part because my current OO projects have a larger dose of the functional aspect to them).

I think that this all dovetails nicely with the discussion around different forms of state management on disk as well: looking at tools outside of the traditional RDBMS, at graph and document databases.

State brings with it a lot of difficulty; it’s one of the reasons why I like working on websites more than native applications. But with these new tools coming about, maybe I should give it a second look.

Thanks again to the entire team running DotNetFringe!

Posted in Uncategorized | 2 Comments

Using nHibernate EventListeners to validate and audit data

In an application you would like to have maximum control over interaction with the database. In the ideal situation there is one single point where all data can be checked and monitored before it is sent to, or read from, the database. In NHibernate, event listeners are an ideal way to do that: every entity read from or sent to the database, whether explicitly flushed or part of a graph, passes through them. NHibernate has a long list of events you can listen to. At first sight the documentation on picking the right listeners and how to implement them points to an article by Ayende. Alas, there are some severe issues with taking that direction. There is a better way.

The problem

Entities in our domain can have quite complex validation. My base DomainObject class has a string list of possible Issues and a Validate method. An object with an empty issue list is considered valid; what makes an object invalid is described in the issue list. Implementing this validation in the NHibernate OnPreUpdate event listener would seem a solid way to trap all validation errors.

Code Snippet
  public bool OnPreUpdate(PreUpdateEvent preUpdateEvent)
  {
      var domainObject = preUpdateEvent.Entity as DomainObject;
      if (domainObject == null)
          return false;
      domainObject.Validate();
      if (domainObject.Issues.Any())
          throw new InvalidDomainObjectException(domainObject);
      return false;
  }

 

Pretty straightforward. The Validate method performs the validation; in case this results in any issues an exception is thrown and the update is canceled. But there is a huge problem with this approach. As the validation code can, and will, do almost anything, there is a chance it will touch a lazy collection, resulting in an NHibernate exception: the session is in the middle of a flush, the lazy collection triggers a read, and you get the dreaded “Collection was not processed by flush” exception.

The way out

There are loads and loads of events your code can listen to. The OnFlushEntity event is fired before the OnPreUpdate event. It fires at the right moment, and the best part is that you can touch anything while inside it.

Code Snippet
  public void OnFlushEntity(FlushEntityEvent flushEntityEvent)
  {
      if (HasDirtyProperties(flushEntityEvent))
      {
          var domainObject = flushEntityEvent.Entity as DomainObject;
          if (domainObject != null)
              domainObject.Validate();
      }
  }

 

Here only the validation is performed; I leave the rejection of invalid entities to the OnPreUpdate event.

Code Snippet
  public bool OnPreUpdate(PreUpdateEvent preUpdateEvent)
  {
      var domainObject = preUpdateEvent.Entity as DomainObject;
      if (domainObject == null)
          return false;
      if (domainObject.Issues.Any())
          throw new InvalidDomainObjectException(domainObject);
      return false;
  }

 

Crucial in the OnFlushEntity event is the HasDirtyProperties method. This method was found here, in a GitHub contribution by Filip Kinsky; it is just about the only documentation on the event.

Code Snippet
  private bool HasDirtyProperties(FlushEntityEvent flushEntityEvent)
  {
      ISessionImplementor session = flushEntityEvent.Session;
      EntityEntry entry = flushEntityEvent.EntityEntry;
      var entity = flushEntityEvent.Entity;
      if (!entry.RequiresDirtyCheck(entity) || !entry.ExistsInDatabase || entry.LoadedState == null)
      {
          return false;
      }
      IEntityPersister persister = entry.Persister;

      object[] currentState = persister.GetPropertyValues(entity, session.EntityMode);
      object[] loadedState = entry.LoadedState;

      return persister.EntityMetamodel.Properties
          .Where((property, i) => !LazyPropertyInitializer.UnfetchedProperty.Equals(currentState[i]) && property.Type.IsDirty(loadedState[i], currentState[i], session))
          .Any();
  }

 

As stated, you can do almost anything in the OnFlushEntity event, including modifying data in entities. So this is an ideal place to set auditing properties, or even to add items to collections of the entity. All these modifications will be persisted to the database.

The original post by Ayende was about setting auditing properties, not about validation. Modifying an entity inside the OnPreUpdate event can be done, but it takes some fiddling. Having discovered OnFlushEntity, we moved not only the validation but also the auditing code there.

Setting event handlers

So far I have described the event handlers but have not yet shown how to hook them up. The event handlers are defined in interfaces, to be implemented in a class; the snippets above are all members of one class.

Code Snippet
  public class RechtenValidatieEnLogListener : IPreUpdateEventListener, IPreInsertEventListener, IPreDeleteEventListener, IPostLoadEventListener, IFlushEntityEventListener
  {
      // ……
  }

 

The listener class is used when creating the session factory.

Code Snippet
  private static ISessionFactory GetFactory()
  {
      var listener = new RechtenValidatieEnLogListener();
      return Fluently.Configure().
          Database(CurrentConfiguration).
          Mappings(m => m.FluentMappings.AddFromAssembly(Assembly.GetExecutingAssembly())).
          ExposeConfiguration(c => listener.Register(c)).
          CurrentSessionContext<HybridWebSessionContext>().
          BuildSessionFactory();
  }

 

Registering the handlers is done by the Register method, which uses the configuration:

Code Snippet
  public void Register(Configuration cfg)
  {
      cfg.EventListeners.FlushEntityEventListeners = new[] { this }
          .Concat(cfg.EventListeners.FlushEntityEventListeners)
          .ToArray();
      cfg.EventListeners.PreUpdateEventListeners = new[] { this }
          .Concat(cfg.EventListeners.PreUpdateEventListeners)
          .ToArray();
  }

 

Most examples of hooking in handlers use the cfg.SetListener method. The problem with that is that it knocks out any handlers already hooked up. For the OnPreUpdate event that’s no problem, but knocking out the default OnFlushEntity listener is fatal. With the code above, your custom listener is combined with any handlers already set.

Winding down

That’s all there is to it. I have left out the parts on validating reads, inserts, and deletes; they follow the same pattern and are up to you to implement. Event listeners are very powerful, but it is a pity the documentation is so sparse. All of this was found by scraping the web and a lot of trial and error. But now we have a very solid system for validation and auditing, without any limitations.

Posted in Uncategorized | 6 Comments

.NET Fringe, defining the future


There’s a long history in OSS communities of projects emerging from the community that start on the fringes. The people working on these projects are motivated by a desire for change, for doing things in a way that is not necessarily the norm. The fact that they exist is a sign of a healthy community.

As a very salient example, take jQuery. At one point it was a small JavaScript library that a few passionate developers worked on; it then grew to be the de facto library for developers everywhere. Thanks to a rich ecosystem around consumption of and contribution to Open Source, what was once fringe became the mainstream.

Until recently, this ecosystem has not really existed in the .NET world. However, times they are a-changin’! In the past 5 to 10 years there have been major positive changes. One really important one was the groundwork laid by ALT.NET, which made a very loud call for many of the changes we’re seeing. Another is the bold steps Microsoft has taken to level the playing field for OSS libraries and tools. Regardless, the important point is that the ecosystem I described is here in .NET and growing.

There are too many examples to name them all, but I’ll list a few recent projects that illustrate this: jQuery, NuGet, GitHub, JSON.NET, AutoMapper, Xamarin, NancyFX, and .NET vNext. This is just a sampling that does not do justice to the many, many other examples.

This change is important. This is just the beginning, but it’s a great beginning. A group of us think this is so important that we’re putting together an event focused on this topic: .NET Fringe.

We’re bringing together members of the .NET OSS community who have been working to define the future. They are going to share their work, their learnings, and their passion. And it’s happening in Portland, a place rich in OSS culture.

Be part of something amazing, come to .NET Fringe!

Posted in .net, oss | 10 Comments

A short note on build tasks

Surely I’m behind the curve here, but I’ve been thinking about the typical build process here at work. For a long time I’ve been operating off the classic model from back in my NAnt days, where it was all about Build -> Unit Test -> Integration Test. Maybe if you are feeling fancy you have some extra steps for running database migrations or other code analysis tools.

However, I’ve been working a lot with the concept of microservices in my side project: trying to figure out where they break down and where they shine, and just trying to figure out the practical issues with running them. There is so much going on in our domain right now that trying to keep up can feel like a tidal wave. So if you are new to some of these concepts, well – so am I. :) The following is a brain dump of ideas for managing all of this crazy.

Package

First off, I’ve started adding a few new steps to my build process, starting with a reminder of how important a ‘packaging’ step is. Rob and I built this into UppercuT, but it’s really coming back to me how important it is. I think it’s critical to realize that your default build output may not be packaged well enough. In Visual Studio (well, for me anyway) it is very common to just grab the ‘bin’ output and run. Because I need to package up my output better, I now run my build step, then move/copy all of the deployable content to something like a ‘build_output’ folder.

For those of you doing classic Visual Studio / C# / .NET development, I strongly recommend that you break out of your IDE for this. I would invite you to look at the power contained in your command-line tools; even CMD can be used to great effect. From there, look at PowerShell. For me it’s bash, but I really need to look at zsh.

Now that everything is in the ‘build_output’ folder, my next step has been to run various HTML/ASPX/CSS/JS minification programs (and the litany of lilliputian tools arrives) to compress and optimize my application. The step after that is to package all of this build_output content into a deployable unit: for my .NET apps this is a NuGet package via OctoPack, and for my side project it has been Debian packages.

Push

This leads me to my newest build task: push. Push takes all of my nice new build output and makes it available to the larger audience. Note that this step could be run by me or by my automated build tooling. What it does is simply take the assets and, for my .NET projects, throw them into Octo’s NuGet repo or, for my side project, use fpm/deb-s3 to generate my own Debian repository, so that I can then pull these assets down and deploy them in testing / staging / production and have a consistent experience.

So, nothing really earth-shattering here, but I wanted to share my thoughts. I also find these kinds of topics hard to find out there on the interwebs, so if you have any great articles on the topic please do share in the comments.

Posted in Uncategorized | 1 Comment