Keeping JS Sane

I gave a talk at #DDD10 this Saturday about keeping JS sane, and afterwards I had some questions about the list of things I ran through – so, documented in all their glory, here are my current thoughts on development with a dynamic language like JS.

Javascript sucks and it doesn’t matter

Yes, you can add arrays to numbers to strings to whatever else you like and get bizarre and hilarious results. Yes, you can write code with more callbacks than Lara Croft on a dating website – but if you’re doing these things in production code then you deserve what is coming to you – and that’s before I even ask if you have tests which would exercise that code and raise the common errors anyway.

And yes, it’s a dynamic language and you don’t get all that compile-time goodness you’re used to, you don’t get all of the re-factoring support you’re used to in [insert your IDE here] and you don’t get all of the magical intellisense that saves you from having to think – but you do get all the other advantages inherent in a scripting system, which I’m not going to talk about because man, it’s a boring subject that has been talked to death by a million language nerds, and shouldn’t you just be writing Clojure anyway?

Tldr: Most of the craziness in JS won’t ever affect you, and if it does your tests will save you.

Use JSHint/JSLint

This is not just something that is a ‘nice to have’, this is not something that you read in a blog entry and go ‘oh, we should do that sometime’, this is genuinely something you should be setting up at the very start of a project and integrating with whatever editor it is you are using to give you and your team feedback as you pour your efforts into a project full of files.

Yeah – there are two of them and yeah, you need to pick one. If your team is really going to have an argument over it, just flip a coin and move on; the difference is pretty much that JSLint tends towards being a crockford-style-enforcer rather than just a sanity checker, and this distinction is irrelevant for most people.

In each of your projects, you can have a .jshintrc file dictating what your preferences are for that directory – and you can turn off a lot of the warnings if you have a case of the ‘I know better’ (or your code doesn’t use semi-colons). Either way, the instant feedback every time you save is incredibly useful in shortening the feedback loop between code-change and code-breakage.
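To make that concrete, here’s a minimal example of a .jshintrc – these are all real JSHint options, but which ones you enable is entirely down to you and your team:

{
  "curly": true,
  "eqeqeq": true,
  "undef": true,
  "unused": true,
  "browser": true,
  "asi": false,
  "globals": { "$": false }
}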

To get and run jshint on your project, the following steps are all you need.

- Install nodejs
- type ‘npm install -g jshint’
- type ‘jshint ./src/*’


By setting this up in your build process/editing process/etc, you are pretty much covering half of the ‘good things’ automatically, leaving you free to spend a bit more time focusing on the large rather than the small.

Write tests, no not those tests, the other tests

Yeah, testing is a pain – and even when we have a nice suite of tests for our backend .NET/Java/Whatevs, we often don’t bother writing any for our front-end, because it’s difficult to isolate logic for testing away from the browser, and testing code inside the browser means setting up godawful tooling and build scripts. It’s understandable that this high initial pain puts people off, which is why for the most part the trick is ‘not to bother with all of that’.

I’m a fan of headless browsers, of which there is again a choice of two main players (Zombie and Phantom), both can be driven from Javascript to test Javascript, and both can be set up with little effort as part of the build scripts. Let’s show you how easy this actually is with zombie.

- Install nodejs
- type ‘npm install zombie’
- Write code like the following in a JS file

var Browser = require('zombie')
  , client = new Browser()

client.visit('http://localhost', function() {
  client.fill('title', 'This is an awesome blog entry')
  client.click('submit')
})

- Run the file with ‘node myfile’

Combine this with a test runner like mocha (npm install -g mocha) and you have a way to test pretty much everything across your web application. Yeah – you won’t be able to test animations and transitions and whatever but you can at least make sure your functional code works effectively.
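To sketch what that combination looks like (the port and page content here are invented for the example), a mocha test driving Zombie might be:

var Browser = require('zombie')
require('should')

describe('the front page', function() {
  var browser = new Browser()

  before(function(done) {
    // visit the running app once before the assertions below
    browser.visit('http://localhost:3000/', done)
  })

  it('greets the visitor', function() {
    browser.text('h1').should.include('Welcome')
  })
})

Run it with ‘mocha myfile’ and you get pass/fail output without a real browser in sight.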

You can even do crazy things like grab handles to the real JS objects present within the application and assert on them, so for example:

var title = client.execute("$('#title')")
title.css('background-color').should.equal('#FFF')

Or

var app = client.execute('global_application_object');
app.currentView.should.equal('/ponies')

All of this assumes that you can easily bootstrap your application and that you have decent in-memory implementations of things like persistence (if you want to be able to keep your tests fast). In .NET this could mean using SQLite with NHibernate, for example – or if you’re starting from the beginning, maybe just writing everything to in-memory implementations and doing your real persistence work later in development (a sane way to work anyway, tbh).
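In node-land such an in-memory implementation can be almost embarrassingly small – a sketch, where the method names are made up and should match whatever contract your real persistence exposes:

module.exports = function createInMemoryStore() {
  var items = {}
  return {
    save: function(id, item, callback) {
      items[id] = item
      callback()
    },
    get: function(id, callback) {
      callback(null, items[id])
    }
  }
}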

Anyway, that’s up to you – I would advocate that for *most* logic in *most* applications that this sort of headless full-stack testing is a better use of time and energy than striving for isolated unit tests for most scenarios (You can always drop down to testing components in isolation where necessary anyway – follow the pain and all that).

By having a half decent suite of tests that describe the behaviour expected from the application, you’ll likely catch a lot of silly mistakes that would otherwise slip through to production. The tooling now exists to make testing easy so there is little excuse not to do it.

Use coffeescript

I actually don’t like Coffeescript all that much, and I don’t really like Ruby all that much either – the best languages are the ones with a few constructs to pick up and run with, instead of 15 million ways to avoid writing an if statement or for-loop.

That said, JS isn’t very terse and Coffeescript is – and I value that sometimes. I especially value it in my tests, where I don’t have the same problems of mapping between the languages as I would in the multitude of browsers that we have to support these days.

Consider this test:

Scenario("Basic Lobbying", function() {
  var context = new ManualContext()
    , bob = null
  
  Given("A server in a clean state", function(done) {
    context.start(done)
  })
  When("bob connects", function(done) {
    bob = context.add_client_called('bob', done)
  })
  Then("bob should have a canvas displayed", function() {
    bob.should_have_element('canvas')
  })
  And("bob should be told he is the only one", function() {
    bob.clientCount().should.include('only player')
  })
  And("bob should be waiting for other players", function() {
    bob.isWaiting().should.equal(true)
  })
  And("bob shouldn't see the text input controls", function() {
    bob.can_see_text_input().should.equal(false)
  })
  And("bob shouldn't see the paintbrush controls", function() {
    bob.can_see_paintbrushes().should.equal(false)
  })
  And("bob shouldn't see the time left", function() {
    bob.can_see_time_left().should.equal(false)
  })
})

Now written in Coffeescript

Scenario "Basic Lobbying", ->
  context = new ManualContext()
  bob = null
  
  Given "a server in a clean state", (done) ->
    context.start done
  When "bob connects", (done) ->
    bob = context.add_client_called 'bob', done
  Then "bob should have a canvas displayed", ->
    bob.should_have_element('canvas')
  And "bob should be told he is the only one", ->
    bob.clientCount().should.include('only player')
  And "bob should be waiting for other players", ->
    bob.isWaiting().should.equal(true)
  And "bob shouldn't see the text input controls", ->
    bob.can_see_text_input().should.equal(false)
  And "bob shouldn't see the paintbrush controls", ->
    bob.can_see_paintbrushes().should.equal(false)
  And "bob shouldn't see the time left", ->
    bob.can_see_time_left().should.equal(false)

There isn’t a huge amount of difference, but I really value the lack of noise in Coffeescript-driven tests, and it saves me from bothering with more hideous ways of doing things like Cucumber/etc.

Don’t use Coffeescript

Enough has been written about this in every single mailing list item that even dares to mention Coffeescript since the dawn of time, but basically it turns out that JS is nice enough in most cases and Coffeescript is just a nice way of keeping things terse and learning JS. It can cause a lot of problems itself (usually around debugging – even though browsers are beginning to implement source maps, that’s not going to be universal) and you end up having to learn JS and map it to the CS in your head as you write it (at least to begin with).

Whatever, I advise you learn JS and perhaps allow CS for places where you feel terseness really adds a pile more value.

Manage your dependencies

Before getting into a conversation about module systems in Javascript, it needs pointing out that having a web of dependencies between your own code files is a bad thing. Like a lot of the pains in JS, the best way to avoid the pain is to avoid creating the pain in the first place.

For example, if you find yourself missing ‘re-factoring across the entire solution’, you should question why you need that in the first place. If you have libraries which are in use in so many places across the code-base, and they’re not stable and shipped as separate packages with a sensible backwards compatibility story, then you should be questioning this instead of falling back on tooling to cover up a pain.

Thinking in terms of components can be really useful: a component within your project should have a single task to do and should hopefully just expose a single object to do that task. It should ideally be standalone, or at least declare its dependencies explicitly somewhere.

This is somewhat of a double-edged sword for Javascript: there is no in-built module/package system, which means a dozen competing ways of managing code across an application now exist. That competition means that (hopefully) the community has got this whole thing pretty much nailed.

CommonJS

CommonJS seems a bit over-cooked (like most specs) if you go to read the standard itself, but given the most popular implementation of CommonJS isn’t even an implementation of CommonJS (the NodeJS implementation) there isn’t much point in doing that.

What it effectively boils down to is that rather than relying on code files sticking things into the global namespace, and a build process/include order that takes the dependencies into account, the code files themselves declare what their dependencies are.

So, instead of

(function(exports) {
  var privateVariable = null
  function EventBus() {}
  exports.EventBus = EventBus
})(window)

and

(function(exports) {
  EventBus.publish('I am loaded now')
})(window)

And a build file that looks something like

cat eventbus.js consumer.js > app.js

We can have

var privateVariable = null
var EventBus = function() {}
module.exports = EventBus

and

var EventBus = require('./eventbus')
EventBus.publish('I am loaded now')

And a build script that simply does

browserify consumer.js -o app.js

There are actually a dozen tools implementing this; the one above is Browserify, which you can get by having nodejs and running

npm install -g browserify

This however comes with caveats, the main one being that your debugging experience in the browser starts being a bit lame when errors no longer correspond to actual lines of code in the editor (you have one giant file instead of all those little ones). The synchronous manner in which files are included in CommonJS-ish systems doesn’t lend itself to the web very well.

This is where AMD comes in

Asynchronous Module Definitions

The main implementation of this is RequireJS, and our initial use of it will look very much like the above, but with a bit more noise:

define(function() {
  var privateVariable = null
  var EventBus = function() {}
  return EventBus
})

and

define([ './eventbus' ], function(EventBus) {
  EventBus.publish('I am loaded')
})

The key visible difference here is that rather than requiring the files and assuming they’re instantly present, we instead declare the files we rely on and say “when you have them, please execute my callback”.

What this means is that in the browser world, we can do the following without any build process:

<script src="require.js" data-main="consumer"></script>

And the browser will download all of the original files individually.

In production, we can instead optimise by running a build command such as

r.js -o name=consumer out=app.js

And include that in a similar fashion

<script src="require.js" data-main="app"></script>

Therefore we get the advantages that having a single optimised and compiled file brings.

What I’m not going to do is say “use X” at this point – this really does depend on your application: whether you have an SPA, a website, a large project, a small project, etc. Just consider that if you have a build script that you need to manually keep in sync with your code dependencies, you might want to bring in *something* to help you.

Don’t fight Javascript

If you have code that looks like this

DefineNamespace('Company.Foo')
Company.Foo.Vegetable = DefineClass({
  init: function() {
    console.log('This is a ctor, honest')
  }
})


Company.Foo.Courgette = DefineClass({
  init: function() {
    this.super()
    console.log('This is also a ctor')
  }
},
Company.Foo.Vegetable)

Then consider that this is not quite how things were intended to be. Namespaces are a holdover from the times we didn’t have decent module systems in JS (and from traditional enterprise languages like C#, Java, etc), and having weird classical inheritance simulations means having to debug and trace through weird classical inheritance simulations.

var Vegetable = function() {
  console.log('This is a ctor, honest')
}
var Courgette = function() {
  Vegetable.call(this)
}
// wire up the prototype chain directly rather than copying methods around
Courgette.prototype = Object.create(Vegetable.prototype)
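With the prototype chain wired up like that, the language’s own tools behave as you’d expect:

var courgette = new Courgette() // logs 'This is a ctor, honest'
courgette instanceof Courgette  // true
courgette instanceof Vegetable  // also true, no framework required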

The only valid excuse for having weird other-language-simulations is if you’re writing a cross-compiler and this is code-genned code.

Pattern enforcing frameworks do not give you clean separation

MVVM, MVC, MV*, MUUUMY – leaving aside the bizarreness which is the enterprise developer’s love of data-binding, a reason oft-quoted for using a tool like Knockout is “it enforces separation between presentation and logic” and “it means our code will be organised and consistent”.

We’ve heard this before in the Silverlight world (yes, I have done my time there, even recently – it pays the bills), MVVM gives us testability and re-use and separation and and and…

Then you look in the code and you have a 4000 line view model which nobody is actually testing because it’s too hard. Separation between presentation and logic is arbitrary and often unhelpful – we’ve all heard the rants on that one (especially with respect to templating languages). Separation between logical units of functionality is much more useful, and only happens if you’re listening to the code/tests/pain/etc.

That’s not to say don’t use any of those things, but be mindful that you’re still going to have to work hard to keep your code meaningful and organised, and that’s not something you can download a library for.

Embrace the tool-chain

We can use nodejs – I don’t mean writing your application in nodejs, I mean installing nodejs and using some of the great tools that are available in NPM to drive your build and test processes. Just because you’re writing Java or .NET doesn’t mean you should restrict yourself to the often shoddy tools available in those platforms.

Already we’ve seen a few packages that can help us there (zombie, jshint, mocha, browserify, r.js), but there is a wealth of testing packages, asset management packages and other such tools that can form an essential part of your build chain when doing the JS front-end for a Java/.NET backend.

Writing tests in Javascript to test Javascript – without needing to fire up a browser or go through contortions getting .NET to exercise the front-end – has some real value and will save time, and hooking the tool-chain into the usual build process means getting the best of both worlds in the right places.

Adding ponies will help too, but mostly concentrating on keeping things small, understandable and idiomatic (whatever that means) will keep your JS from becoming a write-only mess.


Finding a balance with ASP.NET MVC

The other day I did a talk at DDD South West, cheekily entitled “ASP.NET MVC, coping with mediocrity”, pitched on the agenda as being fairly heavily negative about ASP.NET MVC, but in reality summarising that it’s okay so long as you understand its limitations and don’t spend all of your time being angry and fighting it.

I’ve been asked to structure some of the information into a blog post, so here is my attempt at that. Note: this is old ground for a lot of people, and is pitched at those asking the questions after my session.

Choosing ASP.NET MVC

There are several reasons we end up going with ASP.NET MVC, the loudest three seem to be that “it’s easy to get training on it“, “we can easily hire people who know it“, and “it’s from Microsoft and we’re a Microsoft Shop“. Two of these are valid and the other seems valid when you’re a manager on the outside, looking in on the other two.

Assuming that ASP.NET MVC is what you or somebody else has chosen, you’ve subscribed yourself to doing things the way it thinks, and whether or not you regard it as thinking like the class dunce, this remains true.

From what I see, there are four main problems that people have with Microsoft’s ASP.NET MVC Framework, in no particular order these seem to be, “Microsoft“, “ASP.NET”, “MVC” and “Framework“, some of these more so than others.

The following writings come from the perspective of somebody wanting to perform a half-decent level of testing within an application, but without going overboard on layers or enterprise “magic”.

ASP.NET

Microsoft gets a lot of flak for this stack, and rightfully so; it is for the most part a massive kludge-fest of train-wrecks and bizarre decisions that make it possible to completely blow your own foot off if you’re not careful to guard against it.

Keeping ASP.NET easily accessible from their MVC framework, so that people used to ASP.NET need not find themselves completely overwhelmed by something new, is one of the biggest causes of crap, untestable code in ASP.NET MVC applications.
However, this is easily managed by treating ASP.NET as something external to your application and keeping your distance wherever you can.

Consider the following example of ASP.NET MVC code:

        [HttpPost]
        public ActionResult DoSomethingCool()
        {
            if ((string)Session["water"] == "boiling")
            {
                Server.TransferRequest("~/418.aspx");
            }
            return View();
        }

Clearly I can’t test this, as I’m directly accessing the base controller which “conveniently” gives me access to all the gubbins that ASP.NET ships with.

Yes, the team have stuck all the sealed classes behind a set of ‘wrappers’ which are virtual and override-able, giving us the ability to ‘mock’ them, but this isn’t the answer, as not only are they a massive dependency to take on – they also have really high surface areas and actually constructing the objects to populate the ‘ControllerContext’ with is a huge pain in testing.


        [Test]
        public void When_The_Water_Boils_The_Teapot_Is_Brought_Out()
        {
            var controllerUnderTest = new LegacyController();
            var httpContext = new Mock<HttpContextBase>();
            var httpSession = new Mock<HttpSessionStateBase>();
            var httpServer = new Mock<HttpServerUtilityBase>();

            httpSession.SetupGet(x => x["water"]).Returns("boiling");
            httpContext.SetupGet(x => x.Session).Returns(httpSession.Object);
            httpContext.SetupGet(x => x.Server).Returns(httpServer.Object);

            controllerUnderTest.ControllerContext = new ControllerContext(
                httpContext.Object,
                new RouteData(),
                controllerUnderTest);

            controllerUnderTest.DoSomethingCool();

            httpServer.Verify(x => x.TransferRequest("~/418.aspx"), Times.Once());

        }

Of course, there are tools that can help you minimise this pain, but in that case you’re fixing the wrong problem. Another oft-seen solution is either creating interfaces that map onto HttpServer/etc and passing them into the controller, or just sticking HttpContextBase into the container itself – this makes things more testable but still doesn’t lead to especially good tests or readable code.


        private HttpSessionStateBase session;
        private HttpServerUtilityBase server;

        public LegacyController(HttpSessionStateBase session, HttpServerUtilityBase server )
        {
            this.session = session;
            this.server = server;
        }

        [HttpPost]
        public ActionResult DoSomethingCooler()
        {
            if ((string)this.session["water"] == "boiling")
            {
                server.TransferRequest("~/418.aspx");
            }
            return View();
        }

And the test


        [Test]
        public void When_The_Water_Boils_Etc()
        {
            var httpSession = new Mock<HttpSessionStateBase>();
            var httpServer = new Mock<HttpServerUtilityBase>();
            var controllerUnderTest = new LegacyController(httpSession.Object, httpServer.Object);

            httpSession.SetupGet(x => x["water"]).Returns("boiling");

            controllerUnderTest.DoSomethingCooler();

            httpServer.Verify(x => x.TransferRequest("~/418.aspx"), Times.Once());

        }

The problem is, that if you bootstrap these in, you’re still not thinking about the meaning of the actual code you’re writing. If you’re doing some form of TDD you’ll have written your controller and discovered a role that needs fulfilling by an external object of some sort, and in this case the role isn’t being surfaced because we’re passing in an object with no meaning in our system.

The test is also quite brittle, because it cares a bit too much about the ‘how’ rather than the ‘what’ – let’s see what I mean by this by refactoring the code slightly.
First off, when writing the test, we establish that we need something that boils water. This is a role we can create an interface for, looking something like this:

public interface IBoilWater
{
	bool WaterIsBoiling();
}

And when it comes to implementing this, we might then create a Kettle:

// Let's not get into the debate over whether this should simply be IBoilable and Water, this is a silly example
public class Kettle : IBoilWater
{
	private double waterTemperature;

	public bool WaterIsBoiling() {
		return this.waterTemperature >= 100;
	}
}

This could take in the HttpSessionStateBase directly in the constructor and return that value directly – that would make my Kettle a facade onto the external system – or we could also move to have an IContainSessionState which gets passed into this if we so choose. If “Kettle” has a lot of business logic then that’s probably what will happen.
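As a sketch of that facade option (the class name is invented here, and the session key is just carried over from the earlier example):

public class SessionBackedKettle : IBoilWater
{
    private readonly HttpSessionStateBase session;

    public SessionBackedKettle(HttpSessionStateBase session)
    {
        this.session = session;
    }

    public bool WaterIsBoiling()
    {
        // The ASP.NET detail stays in this one class; the rest of the app only sees IBoilWater
        return (string)this.session["water"] == "boiling";
    }
}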

The biggest change here is that semantically our code makes a bit more sense.

How about that server redirect? Well actually, we’re not using the ASP.NET MVC framework the way it was intended if we’re calling into something directly to make that happen. The result/output of the action being executed should be responsible for actually doing the hard work at the end of the request, and my test only cares about whether or not that happened.

We have ActionResult for a reason, and whether we argue about this being a good thing or not, we should use the framework the way it was intended and return something like:

public class TeapotResult : ActionResult
{
    public override void ExecuteResult(ControllerContext context)
    {
        HttpContext.Current.Server.TransferRequest("~/418.aspx");
    }
}

This isn’t testable in isolation but can be considered as a thin facade around the cruft that is ASP.NET.  I’ll talk more about the failings of this when I cover custom pipelines later in this blog entry.

Putting this together, we end up with:


        // waterBoiler is the IBoilWater dependency injected via the controller's constructor
        public ActionResult DoCoolStuff()
        {
            if (waterBoiler.WaterIsBoiling())
            {
                return new TeapotResult();
            }
            return View();
        }

With the test

        [Test]
        public void When_The_Water_Boils_Etc_Etc()
        {
            var kettle = FakeKettle.CreateBoilingKettle();
            var controller = new LegacyController(kettle);

            var result = controller.DoCoolStuff();

            Assert.That(result, Is.InstanceOf<TeapotResult>());
        }

Not only does the code make sense to anybody reading it (it’s very explicit), the test doesn’t care ‘how’ the teapot result works, it just cares that a teapot result comes back (the what).
The lesson we’ve learned here is that legacy ASP.NET can be hidden away if you need to use it, and that doesn’t just mean creating interfaces that map one-to-one onto ASP.NET.

Controller Bloat

The typical piece of demo code looks something like this:

        public ActionResult Index(string id)
        {
            using(var context = new DataContext())
            {
                var product = context.Load(id);
                return View(product);
            }
        }

        [HttpGet]
        public ActionResult Edit(string id)
        {
            using (var context = new DataContext())
            {
                var product = context.Load(id);
                return View(product);
            }
        }

        [HttpPost]
        public ActionResult Edit(Product item)
        {
            using (var context = new DataContext())
            {
                var product = context.Load(item.Id);
                this.UpdateModel(product);
                context.Save();
            }
            return View();
        }

Which is great, except in this case we’re not worrying about security, validation, business logic, or anything else that might creep into the development pipeline over time. Sometimes this might actually be the case – you’re just building a simple CRUD app, and testability isn’t likely to be a valid concern because you’re not likely to have a high maintenance burden on this code.

However, when business logic does end up happening, and validation ends up requiring a bit more than a pile of attributes can provide (not to mention everything else that can creep in), this is going to cause us issues.
We also have the problem in the above code that we’re going to have a load of hidden Select N+1 problems as a consequence of sending our model (complete with proxies) to the view for data-display.

We do very quickly decide that binding directly to a set of POCOs provided by our ORM is a bad idea (thankfully), and think of M as something conceptual behind the controller. The rest of our code often doesn’t change much however, for example:

    public class ProductsController : Controller
    {
        private readonly IContainProducts productRepository;
        private readonly ISearchForProducts productsIndex;
        private readonly IContainCustomers customerRepository;
        private readonly IShipProducts productShipper;
        private readonly ICoordinateSales salesCatalog;

        public ProductsController(
            IContainProducts productRepository,
            ISearchForProducts productsIndex,
            IContainCustomers customerRepository,
            IShipProducts productShipper,
            ICoordinateSales salesCatalog)
        {
            this.productRepository = productRepository;
            this.salesCatalog = salesCatalog;
            this.productShipper = productShipper;
            this.customerRepository = customerRepository;
            this.productsIndex = productsIndex;
        }

        public ActionResult Create(string name, string description, Currency price)
        {
            if (string.IsNullOrEmpty(name))
            {
                this.ModelState.AddModelError("name", "Name cannot be empty");
            }
            if (string.IsNullOrEmpty(description))
            {
                this.ModelState.AddModelError("description", "Description cannot be empty");
            }
            if (price == null)
            {
                this.ModelState.AddModelError("price", "Price cannot be empty");
            }

            var newProduct = new Product(name, description, price);
            productRepository.Add(newProduct);

            return View();
        }

        public ActionResult SingleItem(string id)
        {
            if (string.IsNullOrEmpty(id))
            {
                return RedirectToAction("Search");
            }

            var productView = productsIndex
                .Query()
                .Where(x => x.Id == id)
                .As<ProductView>()
                .First();

            return View(productView);
        }

        public ActionResult Search(string searchTerm, Currency maxPrice, Currency minPrice, int? limit)
        {
            var query = productsIndex.Query();

            if (!string.IsNullOrEmpty(searchTerm))
            {
                query.Where(x =>
                                    x.Description.Contains(searchTerm) ||
                                    x.Title.Contains(searchTerm));
            }
            if (maxPrice != null)
            {
                query.Where(x => x.Price.LessThan(maxPrice));
            }
            if (minPrice != null)
            {
                query.Where(x => x.Price.GreaterThan(minPrice));
            }

            var results = query
                .As<ProductView>()
                .Limit(limit ?? 100)
                .ToArray();

            return View(results);
        }

        public ActionResult WhatsNew()
        {
            var results = productsIndex.Query()
                .Where(x => x.DateAdded > DateTime.Now.Subtract(TimeSpan.FromDays(2)))
                .As<ProductView>()
                .ToArray();

            return View(results);
        }

        [Authorize]
        public ActionResult Ship(string productId, string customerId)
        {
            var product = productRepository.Get(productId);
            var customer = customerRepository.Get(customerId);

            if(product == null)
            {
                this.ModelState.AddModelError("productId", "Product must exist");
            }

            if (customer == null)
            {
                this.ModelState.AddModelError("customerId", "Customer must exist");
            }

            if(this.ModelState.IsValid)
            {
                productShipper.ShipProductToCustomer(product, customer);
            }
            return View();
        }

        [Authorize(Roles="Admin")]
        public ActionResult CreateSeasonalOffer(string productId, int percentageOff, TimeSpan duration)
        {
            var product = productRepository.Get(productId);

            if (product == null)
            {
                this.ModelState.AddModelError("productId", "Product must exist");
            }

            if (percentageOff <= 0)
            {
                this.ModelState.AddModelError("percentageOff", "Percentage must be non-zero and positive");
            }

            if (this.ModelState.IsValid)
            {
                salesCatalog.CreateSeasonalOffer(product, percentageOff, duration);
            }
            return View();
        }
    }

This isn’t actually the worst example I could come up with, being only a couple of hundred lines long, but scale it up across an entire code base with some more real business logic and workflow concerns in a more full-on application and we’re going to be feeling the pain pretty fast.
What typically gets us here is the default convention of the controller name dictating the route: a desire to keep all the above actions under the /Products route has given us a controller with too many actions and thus too many dependencies.

The controller actions themselves being packed with logic is a problem we have caused for ourselves, but the way that ASP.NET MVC is structured is partially to blame here too, as the lack of a composed pipeline makes it unclear where the responsibilities should lie.

Automated testing of the above would be fraught with danger and brittleness because of the multitude of concerns, dependencies and logic paths.

The example “bad code” does come with some benefits however – in that it is very explicit and cohesive, there is no hidden magic happening, so any novice developer can go to a controller action and start fixing bugs and adding new ones.

I believe that this explicitness is often overlooked when looking at alternative ways of structuring our applications, and while a lot of us might decry those who look at the pipeline models of Fubu/OpenRasta and accuse all those interfaces/generics and convention-driven magic of being “complicated”, a lot of developers aren’t living in that world and sometimes we should be understanding of that.

Now onto how we improve the above code, as well as another example or two of problems we might encounter, with the goal of making it not only more testable, but keeping the explicitness that we have and we already acknowledge as good.

A mantra oft-repeated by the Good Practices crowd (and rightfully so) is that controllers should have very little logic in them – that they should be coordinating and coordinating only. Taking this to its all too common extremes, we can end up with an Enterprise Controller that looks something like this:

    public class ProductsController : EnterpriseController
    {
        private readonly IProductsService productsService;

        public ProductsController(IProductsService productsService)
        {
            this.productsService = productsService;
        }

        [HttpGet]
        public ActionResult Index(string id)
        {
            var product = this.productsService.GetProduct(id);
            return View(product);
        }

        [HttpGet]
        public ActionResult Edit(string id)
        {
            var product = this.productsService.GetProduct(id);
            return View(product);
        }

        [HttpPost]
        public ActionResult Edit(Product item)
        {
            this.productsService.UpdateProduct(item.Id,item.Name, item.Price);
            return View();
        }
    }

Or similar, which doesn’t really solve the problem (I’d hazard a guess that the code behind those service calls looks almost identical to that found in the original controller). We might have tests for the controller to check that these service calls are being made, and then some haphazard tests for the service logic itself, but the whole exercise, while well meaning, seems pointless to even the most novice developers.

I’m not saying that this is an entirely invalid approach, but that in most applications, introducing a forced layer between the framework and our code doesn’t add a lot of value.
Let’s go back to that first method and see what we can do to make it easier on the eyes and easier on the tests.

        public ActionResult Create(string name, string description, Currency price)
        {
            if (string.IsNullOrEmpty(name))
            {
                this.ModelState.AddModelError("name", "Name cannot be empty");
            }
            if (string.IsNullOrEmpty(description))
            {
                this.ModelState.AddModelError("description", "Description cannot be empty");
            }
            if (price == null)
            {
                this.ModelState.AddModelError("price", "Price cannot be empty");
            }

            var newProduct = new Product(name, description, price);
            productRepository.Add(newProduct);

            return View();
        }

The first thing that stands out is the inline validation, the reason for which is clear – we have our parameters being passed in individually (perhaps because we decided not to bind to our model directly, because that causes so many problems). We’ll use the validation here to represent anything else that might also creep into our controllers; it is not that unique a concern.

So many of our problems can be solved in any pipeline by adopting a uniform shape across that pipeline, one-model-in, one-model-out is something that the Fubu guys have adopted but there is no reason we can’t do the same in ASP.NET MVC.

Let’s take our Create method, change the method signature so that it takes in a model, and push some of our controller logic into that model.

        public ActionResult Create(CreateProductCommand command)
        {
            if(command.ValidateInto(this.ModelState))
            {
                var newProduct = command.CreateProduct();
                productRepository.Add(newProduct);
            }
            return View();
        }

We can easily end up with something that looks like this – which has all the explicitness of the original example, but has achieved our goal. No, it’s not “technically perfect in every way”, but it has the advantage that the validation can be tested outside the controller, as can product creation – and that doesn’t stop us testing the lot in a more ‘outside’ feature-level approach, especially if the logic in those components is all very simple.
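For completeness, one possible shape for that command object – a sketch rather than gospel, with ValidateInto and CreateProduct being the only members the controller touches:

public class CreateProductCommand
{
    public string Name { get; set; }
    public string Description { get; set; }
    public Currency Price { get; set; }

    public bool ValidateInto(ModelStateDictionary modelState)
    {
        if (string.IsNullOrEmpty(Name))
            modelState.AddModelError("name", "Name cannot be empty");
        if (string.IsNullOrEmpty(Description))
            modelState.AddModelError("description", "Description cannot be empty");
        if (Price == null)
            modelState.AddModelError("price", "Price cannot be empty");

        return modelState.IsValid;
    }

    public Product CreateProduct()
    {
        return new Product(Name, Description, Price);
    }
}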

Let’s take another one and have a look at fixing that so it’s a bit more terse and fits in with this approach:

        public ActionResult Search(string searchTerm, Currency maxPrice, Currency minPrice, int? limit)
        {
            var query = productsIndex.Query();

            if (!string.IsNullOrEmpty(searchTerm))
            {
                query.Where(x =>
                                    x.Description.Contains(searchTerm) ||
                                    x.Title.Contains(searchTerm));
            }
            if (maxPrice != null)
            {
                query.Where(x => x.Price.LessThan(maxPrice));
            }
            if (minPrice != null)
            {
                query.Where(x => x.Price.GreaterThan(minPrice));
            }

            var results = query
                .As<ProductView>()
                .Limit(limit ?? 100)
                .ToArray();

            return View(results);
        }

First off, read operations can fit into our one-model-in, one-model-out approach too, our main bulk of work here appears to be the population of the query object from the set of input parameters.

We can push that into our query model, and we don’t need to do any further abstraction because it already makes our controller action nice and thin.

        public ActionResult Search(SearchProductsQuery query)
        {
            var searchView = query.From(productsIndex);
            return View(searchView);
        }

In this, our query model knows how to populate the query object and return the data from it – perhaps we might end up discovering that it’s best for it only to populate the query object and have our controller take care of modifying the query a bit more (a micro-pipeline if you will), but in this instance we’ve again maintained our explicitness and kept it readable.
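The query model itself is just the old controller body relocated – something like this, with ProductView being the read-model type assumed throughout these examples:

public class SearchProductsQuery
{
    public string SearchTerm { get; set; }
    public Currency MaxPrice { get; set; }
    public Currency MinPrice { get; set; }
    public int? Limit { get; set; }

    public ProductView[] From(ISearchForProducts productsIndex)
    {
        var query = productsIndex.Query();

        if (!string.IsNullOrEmpty(SearchTerm))
        {
            query.Where(x => x.Description.Contains(SearchTerm) ||
                             x.Title.Contains(SearchTerm));
        }
        if (MaxPrice != null)
        {
            query.Where(x => x.Price.LessThan(MaxPrice));
        }
        if (MinPrice != null)
        {
            query.Where(x => x.Price.GreaterThan(MinPrice));
        }

        return query
            .As<ProductView>()
            .Limit(Limit ?? 100)
            .ToArray();
    }
}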

Let’s take a look at one more before moving onto the next topic:

        public ActionResult SingleItem(string id)
        {
            if (string.IsNullOrEmpty(id))
            {
                return RedirectToAction("Search");
            }

            var productView = productsIndex
                .Query()
                .Where(x => x.Id == id)
                .As<ProductView>()
                .First();

            return View(productView);
        }

We’re only querying on a single property – why put it into a model? Well, for one thing it’ll potentially make our tests less brittle, as an addition to our query means changing only the model and the builders, leaving most of the tests themselves alone.

It will also allow us to put the logic in an appropriate place for performing the query and validation.

        public ActionResult SingleItem(SingleItemQuery query)
        {
            if (!query.MakesSense()) {
                return RedirectToAction("Search");
            }
            var productView = query.From(productsIndex);
            return View(productView);
        }

I think perhaps if we found ourselves doing this a lot, rather than the RedirectToAction(“Search”) we might choose instead to return a new RedirectToMainSearchPage() result, but in this case we decided this isn’t too harmful and nobody is going to get too hurt by leaving it in here.
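Again purely for illustration, that query model might look like this (same caveats as the earlier sketches):

public class SingleItemQuery
{
    public string Id { get; set; }

    public bool MakesSense()
    {
        return !string.IsNullOrEmpty(Id);
    }

    public ProductView From(ISearchForProducts productsIndex)
    {
        return productsIndex
            .Query()
            .Where(x => x.Id == Id)
            .As<ProductView>()
            .First();
    }
}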

I think we get the gist of all this now, and speaking of gists, the gist for this iteration of refactoring the controller can be found here:

Constructor bloat

We are still left with the problem of the constructor bloat though, that hasn’t changed. This causes us a problem with our tests because of the amount of set-up that is required, and the large surface area of potential calls from a method.

We already covered one possible solution to that (pushing all the logic into an IProductsService) and why we probably wouldn’t want to do that just to solve this problem. Clearly what we want to do is split our controller out a bit, but this then gives us a problem with our routing, as we originally wanted all these items under the /Products URL.

Question that – why do we want all the items under the Products URL? Products is not a responsibility, it’s a type in our system, so you’ll have a hard time trying to craft a controller around it.

Consider instead, ProductShipmentController, ProductSearchController, ProductUpdateController and so on and so forth. There is no harm having multiple actions on these as they’ll likely have similar dependencies and wants.

Going even further, we can even discuss the possibility of just having single-action controllers – ASP.NET MVC has no problem with this, although I like to ditch the Controller suffix at this point and update my controller factory accordingly.
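Updating the factory is less scary than it sounds – a deliberately naive sketch of one way it might look, falling back to the default convention when the suffix-less lookup finds nothing:

public class HandlerControllerFactory : DefaultControllerFactory
{
    protected override Type GetControllerType(RequestContext requestContext, string controllerName)
    {
        // Look for a type whose name matches the route value exactly (no Controller suffix),
        // otherwise defer to the usual {name}Controller convention
        var handlerType = typeof(HandlerControllerFactory).Assembly
            .GetTypes()
            .FirstOrDefault(t => typeof(IController).IsAssignableFrom(t) &&
                                 t.Name.Equals(controllerName, StringComparison.OrdinalIgnoreCase));

        return handlerType ?? base.GetControllerType(requestContext, controllerName);
    }
}

// Registered at application start:
// ControllerBuilder.Current.SetControllerFactory(new HandlerControllerFactory());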

Consider:

    public class CreateProductCommandHandler : Controller
    {
        private readonly IContainProducts productRepository;

        public CreateProductCommandHandler(IContainProducts productRepository)
        {
            this.productRepository = productRepository;
        }

        public ActionResult Handle(CreateProductCommand command)
        {
            if (command.ValidateInto(this.ModelState))
            {
                var newProduct = command.CreateProduct();
                productRepository.Add(newProduct);
            }
            return View();
        }
    }

or

    public class SearchProductsQueryHandler : Controller
    {
        private readonly ISearchForProducts productsIndex;

        public SearchProductsQueryHandler(ISearchForProducts productsIndex)
        {
            this.productsIndex = productsIndex;
        }

        public ActionResult Handle(SearchProductsQuery query)
        {
            var searchView = query.From(productsIndex);
            return View(searchView);
        }
    }

Easy to test, easy on the eyes, and it still has the explicitness of the original code. If you’re not comfortable with going this far, then don’t – I’m not advocating this as ‘best practice’ – and you’ll get almost as far if you simply think about the responsibility boundaries as first suggested.

Default conventions + file organisation

By doing this, however, we’ve unintentionally created more problems for ourselves in trying to make our code more readable – chiefly that we’ve now got a lot of files, and if we keep the default ASP.NET MVC project structure of grouping objects by their ‘type’ rather than by their ‘responsibility’, it’s going to be hard to find related files during development.
Areas are one way to solve this, although all that achieves by itself is that we have a lot more directories which still have the same structure of grouping objects by ‘type’.

Consider instead the possibility of grouping features together (this will of course mean amending the view engine and the controller factory – although this is very little bother in practice).

Doing this might also make you think again if you’d ruled out single-action controllers, as suddenly the project becomes a lot easier to navigate, even to those new to the project.

The limitations of a fixed request pipeline

Touched on briefly a few times so far is the fact that we’re making compromises to deal with the limitations of the fixed request pipeline present in ASP.NET MVC.
Well, as discussed by Jeremy in this post, frameworks like Fubu or OpenRasta work on the basis of there being some sort of request, and a series of actions that happen along that request. Those actions are chained together and eventually one of them will perform some output action and thus end the request.

This looks something like this:

    public interface IPipelineElement
    {
        void Handle(IRequest request);
    }

    public interface IRequest
    {
        void SetData<T>(T input);
        T GetData<T>();
    }

Where we might have:

ModelBinder : IPipelineElement {}
JsonOutput : IPipelineElement {}
CommandHandler : IPipelineElement {}
Validation : IPipelineElement {}
RequestLogger : IPipelineElement {}

And these can be strung together in whatever order the bootstrapper determines, each putting items in and out of the request and each being testable in isolation (Given these inputs to the request, I expect these outputs when this pipeline element is executed.)
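A toy bootstrapper makes that concrete – this is invented for this post, not any real framework’s API:

public class Pipeline
{
    private readonly List<IPipelineElement> elements = new List<IPipelineElement>();

    public Pipeline Then(IPipelineElement element)
    {
        elements.Add(element);
        return this;
    }

    public void Execute(IRequest request)
    {
        // Each element pulls what it needs from the request and pushes its results back in
        foreach (var element in elements)
        {
            element.Handle(request);
        }
    }
}

// new Pipeline()
//     .Then(new RequestLogger())
//     .Then(new ModelBinder())
//     .Then(new Validation())
//     .Then(new CommandHandler())
//     .Then(new JsonOutput())
//     .Execute(request);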

ASP.NET MVC does not work like this; instead you have a Controller, which is where your coordination occurs, an ActionResult, which is where your output occurs, and Filters, which is where code that should be executed before all of that goes.

With ASP.NET MVC3 these can be made global, which makes them much more powerful for applying global behaviours to the application in general.
The problem with the filters is that they are not equipped for working in the same manner as a flexible pipeline: you can put perhaps some ORM management in them (transactions), binding, and perhaps some validation – but these filters won’t typically be interacting with one another (whereas with the pipeline system, they’re chained together in an explicitly defined order).
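For example, the transaction management mentioned above might be a global filter in MVC3 – a sketch with the actual transaction details elided:

public class TransactionPerRequestAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // open the session/transaction here
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // commit on success, roll back if filterContext.Exception is set
    }
}

// In Global.asax Application_Start:
// GlobalFilters.Filters.Add(new TransactionPerRequestAttribute());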

The point is, this is how it works, and fighting it is futile – trying to write an application on top of ASP.NET MVC in a generic pipeline pattern just leads to pain and confusion.

In the above examples, rather than try to do this, I’ve used ActionResults for my output and tested that I received them; binding takes place in the filters, as do common things like disabling session, error handling and transaction management, but all the calls *into* my business logic go through the controller.

It’s explicit and it’s simple and it’s good enough for 90% of us.

Yes, you can split it out and set the controller to just fire off commands to your domain – that’s cool too if you need it; the rest of this article still applies in that case, except that the nomenclature of the models and controllers will change.

Summary

What do you consider your goals to be when writing code? Testability? Maintainability? Readability? Code beauty? A happy customer?

Obviously none of these are mutually exclusive, but often we find as developers we’re either sacrificing our code quality to keep the customer happy, or telling the customer we’re sorry we can’t deliver on time because we “have some refactoring to do”.

Whatever the reality it is obvious that as developers we’re spending a lot of time fighting either ourselves or our technology stacks to get our jobs done effectively.

I open with this question because most of the people who dislike ASP.NET MVC are having a go at it because it doesn’t allow them to work in a certain way – it gets in the way of meeting the above goals in the manner they desire.

When you choose a framework, you are subscribing yourself to some of the philosophies of the developer(s) who designed that framework – I am becoming more and more convinced of this over time – and while frameworks exist that are “highly customisable”, that doesn’t make this fact go away; indeed, because they generally have a higher barrier to entry than their more “on rails” counterparts, there are some genuinely good reasons for avoiding them.

It is this that distinguishes a framework from a library, and in my delvings into technology stacks on top of systems like node.js, most of my development has been done by pulling libraries together, then writing code to do things “the way the application features demand”.

In other words, frameworks are typically opinionated – and in my opinion, in order to be a good framework, those opinions need to be present, they need to be loud and clear, and the application being hosted in that framework needs to agree with those opinions.

I think one of the problems with ASP.NET MVC is that there are very few *strong* opinions surfacing in it, other than “it must be easy to demo at TechEd” and “it must be easy to get started with” – contrasted with, say, the focus in FubuMVC on testability, conventions and SOLID/DRY principles, and the focus in Rails on being able to deliver 90% of what the customer wants with ease.

If you are going to use ASP.NET MVC, then don’t try to write code in a manner it is not capable of supporting; keep things explicit, keep things simple, and keep pushing your code inwards to places where it isn’t directly touching the framework.

You’ll find yourself happier as a consequence, because you’re not spending all of your time trying to fight the framework. Where ASP.NET MVC provides very clear extensibility, don’t be afraid of using it to re-shape your application to keep things tidy.

There are good stacks available on top of .NET itself, from minimalist frameworks like NancyFX + Simple.Data etc all the way to full on opinionated beasts like Fubu, ASP.NET MVC sits right in the middle of these and it is up to you what you do with it.

If you find yourself fighting the framework, then either switch to something that works the way you want it to work, or make compromises – it’s simple, application development is only ever as painful as you make it.


Hello

Hi

I’ve been criticised in the past for launching straight into the talks that I give without giving enough of an introduction to the irrelevant details of who I actually am – so hi, I’m @robashton and I’ve joined CodeBetter to write some posts here on the subjects that are close to me and hopefully relevant to this audience.

An entire post about who I am? Yeah – well, I started off writing something of actual merit with a single-sentence introduction, but it didn’t seem quite right. I figured that a proper post detailing some of the technologies/subjects I am actually interested in might be a better idea, and if anybody is interested in anything in particular then I can push some of those posts out sooner.

RavenDB

This is probably what I am most known for (when ‘known’ at all), which is probably because I quite by accident did a large number of talks on it. I got involved with RavenDB in the middle of last year, and started pushing changes into it immediately (and frustrating Ayende with edge cases as I tried to do a lot of things that a document database is ill suited for).

My favourite personal contribution to the project is probably the dynamic indexes, which are now the primary way of accessing the read-side of the database,  although the code is now unrecognisable from my initial commit of that feature.

These days my contributions are minimal as I have wandered into other areas of learning, although I do like to watch Ayende fix the really hard edge cases and pretend I’m actually helping ;-).

I’ve got some posts on this and document databases in general lined up, I’m looking forward to re-visiting it and re-applying some other learnings that are orthogonal to this topic. I’ll be speaking about this subject at NDC2011 in two separate talks, hint hint.

Testing and OO

I like testing and OO in theory as well as in practice, and the GOOS book is one of the few books to have really grabbed my attention in the past few years. I won’t claim to be particularly good at it, but after a couple of years of practising and trying to get better, more efficient, more descriptive and less brittle, I think I’ve reached a point where I don’t find these things to be a complete minefield to negotiate. I practise TDD, or some form of it, in almost all my projects these days.

I’m not a very academic person (I use this as a euphemism for being a bit dim occasionally), and some of the language and discussion being bandied about goes way over my head at times – but I accept that at face value, and know that while those discussions will always take place, somebody will distil the end results into a digestible format, because we’re all learning at different levels at the end of the day.

I have a few posts about this area in the backlog, my thoughts on tell don’t ask, where my lines are drawn over test coverage, granularity and what I think of mocks and how I compose my builders. I think that the rest of the blogosphere has these angles well covered though, so I’m in no hurry to go documenting my thoughts on the subject. (Especially as it is ever changing and in 6 months I’d look at the posts and think “what a dolt”).

Data, domains, persistence, architecture, etc

Learn by theoretical extremes, and practise in moderation? I don’t try to do any one thing here, I just try to keep things simple and coherent whilst maintaining low coupling and striving for cohesiveness. I don’t think many code-bases really warrant much more than that, and I’m bored of seeing code-bases with layer upon layer of interfaces/generics that don’t add value and are promoted as being the pinnacle of good practice (because that makes it harder to persuade people that good practices don’t necessarily have to be difficult).

I’ve a few things I want to say about this; I think my first post might be one about removing all of those interfaces and learning a bit more about structural programming. I don’t think I’ll say anything particularly new here, but I’d be interested to see how my current thoughts stack up against everyone else’s.

ASP.NET MVC and Frenemies

I begrudgingly accepted a while ago that ASP.NET MVC will carry on being the de facto web framework for a time to come, and resolved that while I might use alternatives for my own projects (and some client projects where allowed and appropriate), our best bet for not having an awful time supporting those projects in the future is to learn from the best practices of other, ‘clearer thinking’ frameworks/libraries and help broadcast those learnings to the ASP.NET MVC users at large.

It is a lot better than what came before it, and some of the scaffolding and associated code going into it is going to be useful (and, you know what, quite exciting) for those people who are either learning and/or doing simple projects a la RoR – but beyond that, just like with anything else, there are major issues around maintenance and testability further down the road.

I have a few posts about these kinda things, although no doubt a lot of people will disagree :-).  I will be talking at DDDSW more extensively on this subject (again, hint hint).

The outside world

I don’t mean the place with trees, grass, sky and etc (although I do think some of us do need to get out more) – but lately I’ve been writing a lot of Javascript with node.js and playing with WebGL/related technologies whilst thinking about the ramifications of moving more of our application code to the browser itself.

If we treat the browser as just another runtime, like we do .NET and Java, and talk about it in that context, then there are some interesting implications for what we can do – although there are some big questions about whether we should.

I’m not one of those Ruby folk, although I do look on some of the ideas coming from that area with happiness; I like Mono and run Ubuntu on my own computers as my primary operating system for getting stuff done in my spare time.

Anyway

Hopefully this is a good enough introduction to who I am and the kind of posts you can look to expect from me over the coming months (And hopefully not in a bad way).

I’d like to thank Jeremy for initially inviting me to come join the collective, Ian C for pushing it along and Brendan for putting it into action – it’s an honour to be trusted enough with a platform of this size and I hope to learn a lot from being corrected as I carry on making assertions that get challenged by this audience.

Footnote

I’m actually blogging in three places now on wildly different topics, my aggregated RSS feed can be found here: http://feeds.feedburner.com/robashton
