Why use the command processor pattern in the service layer

Using a Command Processor

When we think about a layered or hexagonal architecture, it is common to identify the need for a service layer. The service layer both provides a facade over our domain layer to applications – acting as an API definition – and contains the co-ordination and control logic for orchestrating how we respond to requests.

One option to implement this is the notion of a service as a class:


public class MyFatDomainService
{
    public void CreateMyThing(CreateMyThingCommand createMyThingCommand)
    {
        /*Stuff*/
    }

    public void UpdateMyThingForFoo(FooCommand fooHappened)
    {
        /*Other Stuff*/
    }

    public void UpdateMyThingForBar(BarCommand barHappened)
    {
        /*Other Stuff*/
    }

    /*Loads more of these*/
}

Another option is to use the command processor pattern. There are a few keys to understanding why you might choose a command processor to implement your service layer.

The Interface Segregation Principle states that clients should not be forced to depend on methods on an interface that they do not use. This is because we do not want to have to update a client when the interface changes to serve other clients in ways that the client itself does not care about. Operation script style domain service classes force consumers (for example MVC controllers) to become dependent on methods on the domain service class that they do not consume.

Now this can be avoided by having the domain service implement a number of interfaces, and hand to its clients interfaces that only cover the concerns they have. With application service layers this naturally tends towards one method per interface.


public interface ICreateMyThingDomainService
{
    void CreateMyThing(CreateMyThingCommand createMyThingCommand);
}

public interface IUpdateMyThingForFooDomainService
{
    void UpdateMyThingForFoo(FooCommand fooHappened);
}

public interface IUpdateMyThingForBarDomainService
{
    void UpdateMyThingForBar(BarCommand barHappened);
}

public class MyFatDomainService : ICreateMyThingDomainService,
    IUpdateMyThingForFooDomainService, IUpdateMyThingForBarDomainService
{
    public void CreateMyThing(CreateMyThingCommand createMyThingCommand)
    {
        /*Stuff*/
    }

    public void UpdateMyThingForFoo(FooCommand fooHappened)
    {
        /*Other Stuff*/
    }

    public void UpdateMyThingForBar(BarCommand barHappened)
    {
        /*Other Stuff*/
    }

    /*Loads more of these*/
}

Now the Single Responsibility Principle suggests that a class should have one and only one reason to change. All these separate interfaces begin to suggest that a separate class might be better for each interface, to avoid updating a class for concerns that it does not have.


public interface ICreateMyThingDomainService
{
    void CreateMyThing(CreateMyThingCommand createMyThingCommand);
}

public class CreateMyThingDomainService : ICreateMyThingDomainService
{
    public void CreateMyThing(CreateMyThingCommand createMyThingCommand)
    {
        /*Stuff*/
    }
}

public interface IUpdateMyThingForFooDomainService
{
    void UpdateMyThingForFoo(FooCommand fooHappened);
}

public class UpdateMyThingForFooDomainService : IUpdateMyThingForFooDomainService
{
    public void UpdateMyThingForFoo(FooCommand fooHappened)
    {
        /*Other Stuff*/
    }
}

public interface IUpdateMyThingForBarDomainService
{
    void UpdateMyThingForBar(BarCommand barHappened);
}

public class UpdateMyThingForBarDomainService : IUpdateMyThingForBarDomainService
{
    public void UpdateMyThingForBar(BarCommand barHappened)
    {
        /*Other Stuff*/
    }
}

Having split these individual classes out we might choose to avoid calling them directly, but instead decide to send a message to them. There are a number of reasons for this.

The first is that we decouple the caller from the service. This is useful where we might want to change what the service does – for example handle requests asynchronously, without modifying the caller.
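For example, a controller can depend only on the message-sending abstraction. This is a sketch of the caller's side, assuming the IProcessCommands interface that appears later in this post (the Send/Publish signatures here are illustrative):

```csharp
// Hypothetical dispatch interface; the names Send/Publish are assumptions
public interface IProcessCommands
{
    void Send<T>(T command);
    void Publish<T>(T @event);
}

public class CreateMyThingCommand
{
    public string Name { get; set; }
}

public class MyThingController
{
    private readonly IProcessCommands _commandProcessor;

    public MyThingController(IProcessCommands commandProcessor)
    {
        _commandProcessor = commandProcessor;
    }

    public void Create(string name)
    {
        // The controller neither knows nor cares which handler services this
        // command, so the handler can change (or become asynchronous) freely
        _commandProcessor.Send(new CreateMyThingCommand { Name = name });
    }
}
```

Because the controller depends on nothing but IProcessCommands, it is also trivial to put under test with a fake processor.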

The second is that we might want to handle orthogonal concerns, such as transactions or logging, orthogonally. Although we could use a technology like PostSharp to weave in aspects, a simpler approach is pipes and filters: create a pipeline with a series of steps, the end of which is our service. We then send a message, the framework instantiates a pipeline with those orthogonal concerns, and the message passes through the filter steps before arriving at the target handler.

In order to make this work the pipeline steps need to implement a common interface as this allows us to add Decorators that implement the same interface, forming a Chain of Responsibility.

 


public interface IHandleMessages<T>
{
    void Handle(T command);
}

public class CreateMyThingHandler : IHandleMessages<CreateMyThingCommand>
{
    public void Handle(CreateMyThingCommand createMyThingCommand)
    {
        /*Stuff*/
    }
}

public class UpdateMyThingForFooHandler : IHandleMessages<FooCommand>
{
    [Logging]
    public void Handle(FooCommand fooHappened)
    {
        /*Other Stuff*/
    }
}

public class Logger<T> : IHandleMessages<T>
{
    private readonly IHandleMessages<T> _next;

    public Logger(IHandleMessages<T> next)
    {
        _next = next;
    }

    public void Handle(T command)
    {
        /*Log the command, then pass it down the chain*/
        _next.Handle(command);
    }
}

The code is not shown here, but you may want to use attributes to document a handler and show the decorators in the chain of responsibility; you could use these to decide which decorators to put in the chain. However, it's worth noting that real implementations that assemble a chain of responsibility often use an Inversion of Control container to resolve the handler, and you will often find it easier to implement if you have an IDecorate interface. The pipeline-building code usually involves dealing with Type.MakeGenericType, and you might want to be aware of that if you want to use this approach. I might do a separate post on building a chain of responsibility at a future date.
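To give a flavour of the Type.MakeGenericType work, here is a minimal sketch of closing an open generic decorator over a command type at runtime. It assumes a generic IHandleMessages&lt;T&gt; interface and a Logger&lt;T&gt; decorator (redeclared here so the sketch is self-contained); a real pipeline builder would loop over a list of decorator types discovered from attributes or the container:

```csharp
using System;

// Redeclared for self-containment; matches the shape used in this post
public interface IHandleMessages<T>
{
    void Handle(T command);
}

public class FooCommand { }

public class FooHandler : IHandleMessages<FooCommand>
{
    public static bool Handled;
    public void Handle(FooCommand command) { Handled = true; }
}

public class Logger<T> : IHandleMessages<T>
{
    private readonly IHandleMessages<T> _next;
    public Logger(IHandleMessages<T> next) { _next = next; }

    public void Handle(T command)
    {
        Console.WriteLine("Handling " + typeof(T).Name);
        _next.Handle(command);
    }
}

public static class PipelineBuilder
{
    // Close the open generic decorator (e.g. Logger<>) over the command type,
    // then wrap the target handler in it via its constructor
    public static object Decorate(Type openDecorator, Type commandType, object handler)
    {
        var closed = openDecorator.MakeGenericType(commandType);
        return Activator.CreateInstance(closed, handler);
    }
}
```

Calling PipelineBuilder.Decorate(typeof(Logger&lt;&gt;), typeof(FooCommand), new FooHandler()) yields an IHandleMessages&lt;FooCommand&gt; that logs before delegating to the inner handler.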

Note that this also serves to reduce the number of interfaces that we must implement as the generic interface can stand in for most of them.

Third is the number of dependencies our service has. We might have an action that we need to execute as part of the response to a number of actions by the user. We don't want to repeat that code in every service. But we also don't want services calling too many other services (which in turn may call other services), because we will get an explosion of dependencies for our service and for the services it depends on. This makes it hard to get our service under test, or makes the tests unintelligible, and results in anti-patterns like auto-mocking. The need for auto-mocking may be seen as a design smell: you have too many dependencies, and the resolution might be to use a command processor.

We gain some dependency advantages from the split into separate handlers, because each handler will have fewer dependencies than a service. But in addition we can separate concerns in our handlers, such that we focus on updating a small part of our domain model or object graph in each handler (in DDD terms we focus on an aggregate).

So to further limit the number of dependencies we prefer to publish a message (an event) from the service that handles the initial request, handing off any work that is related to handling the command, but applies to a different aggregate (and should potentially be in a different transaction and consistency boundary). This allows the framework to pass the message to those services, removing our dependency on them.


public class CreateMyThingHandler : IHandleMessages<CreateMyThingCommand>
{
    private readonly IProcessCommands _commandProcessor;
    private readonly IMyThingRepository _myThingRepository;

    public CreateMyThingHandler(IProcessCommands commandProcessor,
        IMyThingRepository myThingRepository)
    {
        _commandProcessor = commandProcessor;
        _myThingRepository = myThingRepository;
    }

    public void Handle(CreateMyThingCommand createMyThingCommand)
    {
        /*Use Factory or Factory Method to create a my thing*/
        /*Save my thing to a repository*/

        _commandProcessor.Publish(
            new MyThingCreated
                {/* properties that other consumers may care about*/}
            );
    }
}

In addition, because it is responsible for loading the handlers, a command processor is also the point at which an Inversion of Control (IoC) container is injected into our request-handling pipeline, which removes any issues with lack of support for IoC containers on legacy platforms like WebForms.
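The essential move is that the processor, not the caller, resolves the handler. A toy sketch, using a dictionary as a stand-in for a real IoC container (all names here are illustrative):

```csharp
using System;
using System.Collections.Generic;

public interface IHandleMessages<T>
{
    void Handle(T command);
}

public class CommandProcessor
{
    // A real implementation would delegate resolution to an IoC container;
    // a dictionary keyed by command type is enough to show the shape
    private readonly Dictionary<Type, object> _registrations =
        new Dictionary<Type, object>();

    public void Register<T>(IHandleMessages<T> handler)
    {
        _registrations[typeof(T)] = handler;
    }

    public void Send<T>(T command)
    {
        // The processor looks up and invokes the handler; callers never see it
        var handler = (IHandleMessages<T>)_registrations[typeof(T)];
        handler.Handle(command);
    }
}
```

Registration happens once at composition root; every caller thereafter needs only Send.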

Finally switching to a command processor eases our path to handling some requests asynchronously.
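Because callers only ever see the dispatch method, the processor is free to queue work instead of executing it inline. A minimal sketch, assuming a hypothetical Post method and an in-memory queue (a real implementation would hand off to a background thread or a message broker):

```csharp
using System;
using System.Collections.Concurrent;

public class AsyncCommandProcessor
{
    private readonly ConcurrentQueue<object> _queue = new ConcurrentQueue<object>();

    // Post returns immediately; the command is handled later
    public void Post<T>(T command)
    {
        _queue.Enqueue(command);
    }

    // A worker loop (thread, timer, or message pump) drains the queue
    public bool TryHandleNext(Action<object> handle)
    {
        if (_queue.TryDequeue(out var command))
        {
            handle(command);
            return true;
        }
        return false;
    }
}
```

The caller's code is identical whether the command is handled synchronously or asynchronously; only the processor changes.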

Disadvantages of using a command processor

One common criticism of using a command processor is that it adds a level of indirection. To read the code you have to identify how commands map to handlers. Naming conventions can help ease the burden, especially if used with tools like R#.

About Ian Cooper

Ian Cooper has over 18 years of experience delivering Microsoft platform solutions in government, healthcare, and finance. During that time he has worked for the DTi, Reuters, Sungard, Misys and Beazley delivering everything from bespoke enterprise solutions to 'shrink-wrapped' products to thousands of customers. Ian is a passionate exponent of the benefits of OO and Agile. He is test-infected and contagious. When he is not writing C# code he is also the founder of the London .NET user group. http://www.dnug.org.uk
This entry was posted in CQRS, DDD, Events, Object-Orientation, SOLID.