Using nHibernate EventListeners to validate and audit data

In an application you would like maximum control over the interaction with the database. Ideally there is one single point where all data can be checked and monitored before it is sent to, or read from, the database. In nHibernate, EventListeners are an ideal way to do that: every entity read from or sent to the db, whether explicitly flushed or part of a graph, passes through them. nHibernate has a long list of events you can listen to. At first sight the documentation on picking the right listeners, and how to implement them, points to an article by Ayende. Alas, taking that direction leads to some severe issues. There is a better way.

The problem

Entities in our domain can have quite complex validation. My base DomainObject class has a string list of possible Issues and a Validate method. An object with an empty issue list is considered valid; what makes an object invalid is described in the issue list. Implementing this validation in the nHibernate OnPreUpdate event listener would seem a solid way to trap all validation errors.
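For reference, a minimal sketch of such a base class; the member names are assumptions for this post, not the project's actual code.

public abstract class DomainObject
{
    // Describes what, if anything, makes the object invalid. An empty list means valid.
    public IList<string> Issues { get; private set; }

    protected DomainObject()
    {
        Issues = new List<string>();
    }

    // Derived entities fill the Issues list with a description of every violated rule.
    public abstract void Validate();
}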

Code Snippet
public bool OnPreUpdate(PreUpdateEvent preUpdateEvent)
{
    var domainObject = preUpdateEvent.Entity as DomainObject;
    if (domainObject == null)
        return false;
    domainObject.Validate();
    if (domainObject.Issues.Any())
        throw new InvalidDomainObjectException(domainObject);
    return false;
}


Pretty straightforward. The Validate method performs the validation; in case this results in any issues an exception is thrown and the update is canceled. But there is a huge problem with this approach. As the validation code can, and will, do almost anything, there is a chance it touches a lazy collection. Because a flush is in progress, loading that lazy collection triggers a read, which results in the dreaded “Collection was not processed in flush” exception.
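To make that failure mode concrete, here is the kind of validation that bites; the Orders collection and the business rule are made up for this example.

// Hypothetical rule on a Customer entity; Orders is a lazily loaded collection.
public override void Validate()
{
    Issues.Clear();
    // Enumerating Orders forces nHibernate to load the collection. Called from inside
    // OnPreUpdate this happens while the flush is running, and the flush blows up.
    if (!Orders.Any(o => o.IsActive))
        Issues.Add("A customer must have at least one active order");
}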

The way out

There are loads and loads of events your code can listen to. The OnFlushEntity event is fired before the OnPreUpdate event. It fires at the right moment, and the best part is that you can touch anything you like while inside it.

Code Snippet
public void OnFlushEntity(FlushEntityEvent flushEntityEvent)
{
    if (HasDirtyProperties(flushEntityEvent))
    {
        var domainObject = flushEntityEvent.Entity as DomainObject;
        if (domainObject != null)
            domainObject.Validate();
    }
}


Here only the validation is performed; the rejection of invalid entities stays in the OnPreUpdate event.

Code Snippet
public bool OnPreUpdate(PreUpdateEvent preUpdateEvent)
{
    var domainObject = preUpdateEvent.Entity as DomainObject;
    if (domainObject == null)
        return false;
    if (domainObject.Issues.Any())
        throw new InvalidDomainObjectException(domainObject);
    return false;
}


Crucial in the OnFlushEntity event is the HasDirtyProperties method. This method was found here, in a GitHub contribution by Filip Kinsky, just about the only documentation on the event.

Code Snippet
private bool HasDirtyProperties(FlushEntityEvent flushEntityEvent)
{
    ISessionImplementor session = flushEntityEvent.Session;
    EntityEntry entry = flushEntityEvent.EntityEntry;
    var entity = flushEntityEvent.Entity;
    if (!entry.RequiresDirtyCheck(entity) || !entry.ExistsInDatabase || entry.LoadedState == null)
    {
        return false;
    }
    IEntityPersister persister = entry.Persister;
    object[] currentState = persister.GetPropertyValues(entity, session.EntityMode);
    object[] loadedState = entry.LoadedState;
    return persister.EntityMetamodel.Properties
        .Where((property, i) => !LazyPropertyInitializer.UnfetchedProperty.Equals(currentState[i]) && property.Type.IsDirty(loadedState[i], currentState[i], session))
        .Any();
}


As stated, you can do almost anything in the OnFlushEntity event, including modifying data in entities. So this is an ideal place to set auditing properties, or even add items to collections of the entity. All these modifications will be persisted to the database.

The original post by Ayende was about setting auditing properties, not about validation. Modifying an entity inside the OnPreUpdate event can be done, but it takes some fiddling. Having discovered OnFlushEntity, we moved not only the validation but also the auditing code there.
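As an illustration, this is roughly what OnFlushEntity could look like once the auditing is folded in. The IAuditable interface and its properties are assumptions for the sketch, not our actual domain model.

public void OnFlushEntity(FlushEntityEvent flushEntityEvent)
{
    if (!HasDirtyProperties(flushEntityEvent))
        return;

    var domainObject = flushEntityEvent.Entity as DomainObject;
    if (domainObject != null)
        domainObject.Validate();

    // Hypothetical auditing interface; the real code uses whatever the domain model offers.
    var auditable = flushEntityEvent.Entity as IAuditable;
    if (auditable != null)
    {
        auditable.ModifiedOn = DateTime.Now;
        auditable.ModifiedBy = Thread.CurrentPrincipal.Identity.Name;
    }
}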

Setting event handlers

So far I have described the event handlers but have not yet shown how to hook them up. The event handlers are defined in interfaces, to be implemented in a class. The snippets above are all members of one class.

Code Snippet
public class RechtenValidatieEnLogListener : IPreUpdateEventListener, IPreInsertEventListener, IPreDeleteEventListener, IPostLoadEventListener, IFlushEntityEventListener
{
    // ……
}


The class is used when creating the session factory.

Code Snippet
private static ISessionFactory GetFactory()
{
    var listener = new RechtenValidatieEnLogListener();
    return Fluently.Configure().
        Database(CurrentConfiguration).
        Mappings(m => m.FluentMappings.AddFromAssembly(Assembly.GetExecutingAssembly())).
        ExposeConfiguration(c => listener.Register(c)).
        CurrentSessionContext<HybridWebSessionContext>().
        BuildSessionFactory();
}


Registering the handlers is done by the Register method, which uses the configuration.

Code Snippet
public void Register(Configuration cfg)
{
    cfg.EventListeners.FlushEntityEventListeners = new[] { this }
        .Concat(cfg.EventListeners.FlushEntityEventListeners)
        .ToArray();
    cfg.EventListeners.PreUpdateEventListeners = new[] { this }
        .Concat(cfg.EventListeners.PreUpdateEventListeners)
        .ToArray();
}


Most examples on hooking up handlers use the cfg.SetListener method. The problem with that is that it knocks out any handlers already hooked in. For the OnPreUpdate event that’s no problem, but knocking out the default OnFlushEntity listener is fatal. With the code above your custom listener is combined with any listeners already registered.
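For comparison, this is the registration most examples show; as far as I can tell it replaces the existing flush-entity listeners, including nHibernate's default one, which is exactly what you do not want here.

// Replaces all registered flush-entity listeners with just this one: the default listener is gone.
cfg.SetListener(ListenerType.FlushEntity, new RechtenValidatieEnLogListener());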

Winding down

That’s all there is to it. I have left out the parts on validating reads, inserts and deletes; they follow the same pattern, and are up to you to implement. EventListeners are very powerful, but it is a pity the documentation is so sparse. All of this was found by scraping the web and a lot of trial and error. But now we have a very solid system for validation and auditing, without any limitations.
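As a hint of how that looks, a sketch of the insert side following the same pattern as the update listener above (not the actual project code):

public bool OnPreInsert(PreInsertEvent preInsertEvent)
{
    var domainObject = preInsertEvent.Entity as DomainObject;
    if (domainObject == null)
        return false;              // returning false means: do not veto the insert
    if (domainObject.Issues.Any())
        throw new InvalidDomainObjectException(domainObject);
    return false;
}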


Software components as a toy

For kids Christmas has a lot to do with toys, looking forward to new ways to play. For me, as an older kid who still enjoys toying around, aka a software engineer, I want to reflect on the analogy between software components and certain toys.

Well organized software is built from components. Quite often these components have been compared to Lego blocks. Toys for kids aged 0 to 99, to quote their own advertorials. A Lego block is simple and has a clear interface to connect it to other blocks.


Compared to the simplicity of the interface, the number of ways to connect blocks together is big, leading to an endless number of possible structures.


(Helicopter story here)

Over the years the line of Lego products has adapted to popular demand. A lot of sets themed around a specific movie or game were thrown on the market. These sets contain more and more building blocks which are very different from the original simple blocks.


(Yes, Minecraft)

These blocks can do little more than be stacked on top of the rest, with no other interfaces to connect them to anything else.

Lego is not the only construction toy. In my youth Meccano was also very popular. Metal parts put together using nuts and bolts. We built cars, steam engines, cranes and the like.


Meccano suffered the same problem as the later Lego blocks. The basic parts were quite simple, but for building more complex things you needed an ever-increasing array of specialized parts.


Meccano was not the only brand offering metal construction kits. I had the luck to have Trix Express, handed down by my father.


Trix was fundamentally different. The Meccano strips have one row of holes; all Trix parts have three in parallel, organized in such a way that the holes form equilateral triangles.

Trix unité A

With two nuts and bolts you firmly join two strips at an angle of 0, 45 or 90 degrees. And thus you can create almost any complex construction from a very limited number of different parts.

Giant Blocksetter Crane

The only limit on whatever you were building was the amount of pocket money, or Santa Claus (Sinterklaas).

To finish off with a look into the new year: what do your software components look like, Meccano or Trix?

Happy constructing


Telerik, moving to Kendo

I’ve been pretty quiet lately. I spent all my time on our big app, working with the usual MVC, DDD, nHibernate and the like. Much to our satisfaction, so far we have managed to keep up with the ever changing landscape of Dutch mental healthcare. For our web pages we were using the Telerik MVC suite; in a previous post I already shared some of our experiences. Telerik is replacing the MVC suite with the new Kendo suite, which has several very appealing options. Besides that, support for the MVC suite is coming to an end. Time to move on.

Replacing all components in one go was not possible. Kendo has the same architecture as the MVC suite and a very similar syntax, but our app is just too large to change all at once. According to the documentation it’s possible to mix the two suites, even in the same view. To get that actually working took an effort beyond the FAQ. In this post I’ll dive into the details.


Both the MVC suite and Kendo require some style sheets. In the MVC suite you need a StyleSheetRegistrar component to render the css links. Kendo follows the standard way.

Registering the style sheets

Code Snippet
@Html.Telerik().StyleSheetRegistrar().DefaultGroup(group => group.Add("telerik.common.css").Add("telerik.eposoffice2010silver.min.css").Combined(true).Compress(true))
<link href="@Url.Content("~/Content/kendo.common.min.css")" rel="stylesheet" />
<link href="@Url.Content("~/Content/kendo.default.min.css")" rel="stylesheet" />
<link href="@Url.Content("~/Content/epos/epos.css")" rel="stylesheet" type="text/css" />

Completely straightforward.


When it comes to jQuery things get somewhat complicated. The MVC suite requires version <= 1.7, Kendo requires version >= 1.9, and these two are not compatible. Thank goodness the jquery-migrate script library patches the gaps, making it possible for the MVC suite to run on jQuery 1.9. The gotcha is that the MVC suite uses a ScriptRegistrar component to register the Telerik scripts. By default this component will also register (the wrong version of) jQuery again. This is prevented by the jQuery(false) method of the registrar.

Code Snippet
<script src="@Url.Content("~/Scripts/jquery-1.11.0.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery-migrate-1.2.1.min.js")"></script>
<script src="@Url.Content("~/Scripts/kendo.all.min.js" + "?v=" + version)"></script>
<script src="@Url.Content("~/Scripts/kendo.aspnetmvc.min.js" + "?v=" + version)"></script>
<script src="@Url.Content("~/Scripts/" + "?v=" + version)"></script>

The MVC suite scripts should be rendered at the end of the page.

Code Snippet
@(Html.Telerik().ScriptRegistrar().jQuery(false).Globalization(true).DefaultGroup(group => group.Combined(true).Compress(true)))

To sum things up, here is the full master layout.

Code Snippet
@using Telerik.Web.Mvc.UI
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>@ViewBag.Title</title>
    @Html.Telerik().StyleSheetRegistrar().DefaultGroup(group => group.Add("telerik.common.css").Add("telerik.eposoffice2010silver.min.css").Combined(true).Compress(true))
    <link href="@Url.Content("~/Content/kendo.common.min.css")" rel="stylesheet" />
    <link href="@Url.Content("~/Content/kendo.default.min.css")" rel="stylesheet" />
    <link href="@Url.Content("~/Content/epos/epos.css")" rel="stylesheet" type="text/css" />
    <script src="@Url.Content("~/Scripts/jquery-1.11.0.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery-migrate-1.2.1.min.js")"></script>
    <script src="@Url.Content("~/Scripts/kendo.all.min.js")"></script>
    <script src="@Url.Content("~/Scripts/kendo.aspnetmvc.min.js")"></script>
    <script src="@Url.Content("~/Scripts/")"></script>
    @* Scripts required for ajax forms *@
    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
    @* App specific scripts *@
    <script src="@Url.Content("~/Scripts/Epos.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/tiny_mce/tiny_mce.js")" type="text/javascript"></script>
</head>
<body>
    <script type="text/javascript">
        kendo.culture("nl-NL");
    </script>
    <div>
        <div id="maincontent">
            @RenderBody()
        </div>
        @(Html.Telerik().ScriptRegistrar().jQuery(false).Globalization(true).DefaultGroup(group => group.Combined(true).Compress(true)))
    </div>
</body>
</html>

In case your views use Ajax forms, rendered using Ajax.BeginForm, the ajax scripts are required. Imho these forms are a pita: the handling of events is quite entangling and, even worse, fields edited with a Kendo editor are not included in the post data. For most of our forms we do a straight post using extension methods. Alas, not all views have been adapted yet, so for now the scripts are still required. The good thing is that they still work in the new jQuery scenario and don’t seem to bite anything else.

In action

Using this layout both suites can be combined on one page. Here’s a Kendo ColorPicker used within an MVC TabStrip.



The way the localization of the components works differs. Kendo is very much script based; for instance the titles of the buttons of the ColorPicker above can only be set from script.

To standardize the look and feel of our components we build them using html extension methods, as described here. That also goes for the Telerik components, which have a lot of options that can be set. The Kendo suite has an Html extension library; the extension methods in there provide options to configure the component. Alas, this list of options is not complete: it is smaller than the list of configuration options available from script. For instance it is not possible to set the captions of the ColorPicker from the extension method.

All the html extension methods do is render some script. It is no great deal to bypass the Kendo extension methods and render the script yourself. The Kendo components all follow the same pattern.

Code Snippet
<input id="colorpicker" type="color" />
<script>
    $("#colorpicker").kendoColorPicker({
        buttons: false
    })
</script>

The main difference is that one component renders an input and the other a div.
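To illustrate that difference (the ids are only examples, not code from our app): value editors like the DatePicker attach to an input element, while container widgets like the Window attach to a div.

<input id="birthdate" />
<div id="details"></div>
<script>
    // Editors wrap an input; containers wrap a div.
    $("#birthdate").kendoDatePicker();
    $("#details").kendoWindow({ title: "Details", visible: false });
</script>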

To get at all the configuration possibilities of the script approach, you can create your own extension methods which render the script directly.

Code Snippet
internal static MvcHtmlString KendoColorPickerFor<TModel>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, string>> expression)
{
    /*
        <input id="colorpicker" type="color" />
        <script>
        $("#colorpicker").kendoColorPicker({
            messages: {
            apply: "Update",
            cancel: "Discard"
            }
        })
        </script>
    */
    var id = expression.NameBuilder(htmlHelper.ViewData.Model);
    var value = ModelMetadata.FromLambdaExpression(expression, htmlHelper.ViewData).Model;
    var sb = new StringBuilder();
    sb.AppendFormat("<input id='{0}' type='color' value='{1}'/>", id, value);
    sb.AppendLine("<script>");
    sb.AppendFormat("$('#{0}').kendoColorPicker({{ messages: {{ apply: 'OK', cancel: 'Annuleren' }} }})", id);
    sb.AppendLine("</script>");
    return MvcHtmlString.Create(sb.ToString());
}

The captions are set to ‘OK’ and ‘Annuleren’, hard coded for now.

The NameBuilder method is the one from one of my previous stories. It is used to generate a unique id.

Small gotchas

So far mixing Kendo and the classical suite works well. We have hit just two small gotchas.

  • Window stacking. In the MVC suite you can stack windows on top of each other, a popup in a popup. In the Kendo suite you can do that as well. The stacking is done using high z-indexes, and the classical suite uses a higher range than Kendo. As a result a Kendo popup window launched from a classic window appears behind its originator: you cannot stack Kendo and classical windows. Thank goodness changing a classical window into a Kendo window is no big deal.
  • Ajax forms. As mentioned, fields edited with Kendo components are excluded from the ajax post data. In our app we are switching to posting the data straight from script, using our postdata extension, as sketched below. Far more flexible, and no more need for a form.
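A rough sketch of that idea; the url, ids and the plain $.post call are only illustrations, the real code goes through our own postdata extension methods.

<script>
    // Read the value straight from the Kendo widget and post it ourselves,
    // instead of relying on an Ajax.BeginForm post that misses Kendo-edited fields.
    var color = $("#colorpicker").data("kendoColorPicker").value();
    $.post("/Instellingen/SaveColor", { id: 42, color: color }, function () {
        // handle the server response here
    });
</script>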

Winding down

Finding out how the classical suite and Kendo each require jQuery was somewhat of a hurdle. But now all works well and we can convert the rest of the code at our own pace, and please our users with a better look and feel. The way the Kendo window automatically adjusts to the size of its content is worth the migration by itself.


.Net framework versions and dll hell

With .net version numbers increasing and increasing I recently encountered something which reminded me of dll hell, which the .net framework promised to end. The nice part is that it already shows up at build time, not at run time where it would lead to disappointed customers. Framework versions can be mixed in one solution, up to the moment one assembly references another assembly built against a higher version. At that moment the VS build process starts to lose track.

The catch is that it’s not always clear you’re mixing framework versions in a solution. A newly added project will default to the newest framework version, which is not constant over a solution’s lifetime. The result can be quite strange: I managed to get away with a circular reference that went undetected because one project referenced a copy of the other built against an older framework version, and which was thus considered a different assembly. Up to the moment I checked the framework version of all projects.

The only good way to prevent this descent into hell is to have all projects in the solution on one framework version. In a big solution with a lot of projects that is quite tedious to check. I would love to be able to set the version at the solution level, or else to have a nice ‘refactoring’ tool do the work for me. R#, are you listening?
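In the meantime a quick-and-dirty scan of the project files already helps. A sketch of such a check (paths, names and output format are just an example, not a polished tool):

using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;

class CheckFrameworkVersions
{
    static void Main(string[] args)
    {
        var root = args.Length > 0 ? args[0] : ".";
        XNamespace ns = "http://schemas.microsoft.com/developer/msbuild/2003";

        // List the TargetFrameworkVersion of every csproj so mixed versions stand out.
        var versions =
            from file in Directory.EnumerateFiles(root, "*.csproj", SearchOption.AllDirectories)
            let doc = XDocument.Load(file)
            let version = doc.Descendants(ns + "TargetFrameworkVersion")
                             .Select(e => e.Value)
                             .FirstOrDefault() ?? "(unknown)"
            orderby version
            select new { Project = Path.GetFileNameWithoutExtension(file), Version = version };

        foreach (var v in versions)
            Console.WriteLine("{0,-8} {1}", v.Version, v.Project);
    }
}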



@model and beyond

In my previous post I had been fiddling with the html helper used in Razor views. Since then our custom html extensions have been doing great things for our project. To mention some:

  1. Standardizing the look and feel. It is far more consistent and maintainable to set attributes (including a css class) and event handlers in one centralized place.
  2. Simplifying the script. Often part of the logic the script will follow is already known server side. Instead of writing everything out in javascript, rendering just the intended statements leads to a leaner client. There is an example in my previous post on building post data.
  3. Decoupling the libraries used. At the moment we are using the Telerik MVC suite. In my previous post I described how our html helpers build standardized Telerik components for our views. In the not too far future we want to switch to the Telerik Kendo suite. Having wrapped the library dependency in our Html helpers will make this switch a lot easier to implement.

What has evolved is the way we work with the model. In MVC the implementation of the controller and the view is clear. When it comes to the implementation of the model there are almost as many styles as there are programmers. In general the model can bring any data to the view you can imagine. Not only does the source of the data vary, from plain sql to a C# property, the use of the model’s data varies as well. It can be a customer’s name from the database, or it can be the string representation of some html attribute needed for a fancy picker. Here data and presentation start to get mixed up. Our extensions needed information for the Html id. The original Html helper had a custom constructor to get that specific data from the model into the helper, which required creating our own html helper when starting the view and using that one instead of the standard @Html, as seen in the eposHtml in the previous story. It would be cleaner if our extension methods could be satisfied with the default html helper. It would also be cleaner to keep a better separation between ‘real’ data and presentation.

The model is available in every HtmlHelper extension method.

public static PostDataBuilder<TModel> PostData<TModel>(this HtmlHelper<TModel> htmlHelper)
{
    return new PostDataBuilder<TModel>(Id(htmlHelper.ViewData.Model));
}



It’s a property of the ViewData.

In our case we needed something to give the control a unique Id. The Id method builds that Id. Previously we passed the Id base in the constructor, which led to the custom helper. A far more elegant solution is a very basic IoC/DI pattern, as implemented by the Id method.

private static string Id(object model)
{
    var complex = model as IProvideCompositeId;
    if (complex != null)
        return complex.CompositeId;
    var simple = model as IProvideId;
    return simple == null ? "" : simple.Id < 0 ? String.Format("N{0}", Math.Abs(simple.Id)) : simple.Id.ToString();
}



The method first queries the model for the IProvideCompositeId interface; in case the model does not implement that, it is queried for the IProvideId interface. The result is a string which can be safely used in an Html id. (A negative number would lead to a ‘-’ in the string, which is not accepted in an Html id, hence the ‘N’ prefix.)

These interfaces are very straightforward.

public interface IProvideCompositeId
{
    string CompositeId { get; }
}

public interface IProvideId
{
    int Id { get; }
}



In case the model is going to be used in a view requiring unique ids, the model has to implement one of these interfaces.

public class FactuurDefinitie : IProvideCompositeId
{
    public readonly int IdTraject;
    public readonly int UZOVI;
    public readonly bool Verzekerd;

    public FactuurDefinitie(int idTraject, int uzovi, bool verzekerd)
    {
        // The usual stuff
    }

    public string CompositeId
    {
        get { return String.Format("{0}{1}{2}", IdTraject, UZOVI, Verzekerd); }
    }
}



Working this way:

  • We can use our custom html extensions with the default html helper
  • Specific data from the model is available inside our extensions
  • The model and the view do not get entangled

The code is no big deal. I know. But the model is something whose horizons are still not in sight.
