Just to get this out of the way, here is my recap from the MVP summit. You will find something to argue about here, and that’s okay.
LINQ for Entities is NOT the O/R Mapper I want today, but might be if and when they…
As it stands right now, I would still choose NHibernate (or WilsonORMapper) over Linq for Entities as an O/R Mapping solution, especially since it looks very likely that we’ll have Linq support for NHibernate as well. That’s a little disappointing because there is a lot of promise to Linq for Entities.
At the MVP Summit, the Microsoft team building Linq for Entities very graciously spent some time with several of us to talk over some of the details around making LINQ for Entities more suitable for Domain Driven Design and evolutionary design. Specifically, we were largely concerned with the intrusiveness of Linq for Entities into your Domain Model classes, the general clumsiness of the configuration model as it stands right now, and the mechanisms for tracking object state. There are definitely some very cool things in Linq for Entities, but it’s a shame that the usability isn’t there yet.
What I don’t like:
- It does not support a Persistence Ignorant approach. This is fairly significant to me. I’m in the camp that really doesn’t like any infrastructure code in my business logic classes. It’s also a signal-to-noise problem: the only signal I want to care about is the business logic. Anything else is noise code.
- The configuration is too complex because it exposes a third conceptual model to the user, on top of the object model and the relational model.
- The “changed” state of the persisted objects is tracked in the objects themselves with a marker interface somewhat like the INotifyPropertyChanged interface. I really, really don’t like this. I don’t like the implicit black magic idea of managing the change state inside of the objects themselves. I think it adds noise code and makes the transaction boundaries less clear.
- As of now, Linq for Entities is optimized out of the box for a datacentric approach that calls for designing the database model upfront and then codegen’ing the object model from the database. This isn’t the way I want to work because this approach almost forces you into a heavier upfront design. We’ve already got a dozen+ workable solutions for datacentric application building. I wish they’d delivered a solution upfront for Domain Driven Design to differentiate it more from Linq to SQL.
- Attributes in the domain model classes. I’m not sure I have a hard opinion on using attributes for the mapping, but the attributes for Linq for Entities are a duplication of the Xml configuration. It’s a violation of the DRY principle of good design.
What I want:
All of these “wants” were promised to us in a post-Orcas release. I’d like to lay these out here to get more visibility for these “wants” to make sure they get a better place on the Linq for Entities roadmap.
The first thing I want is support for a pure Persistence Ignorant approach. No marker interfaces, no codegen, no partial classes. Just plain “POO.”
The next thing is support for the Unit of Work pattern. Transaction boundaries are important details. I want the contents of a transaction explicitly defined, and expressed in a way that is easy to test. Enter the Unit of Work:
public interface IUnitOfWork
{
    void Added(object target);
    void Deleted(object target);
    void Updated(object target);
}
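As a sketch of what explicit, testable transaction boundaries could look like with that interface (the interface is repeated so the sketch is self-contained; the RecordingUnitOfWork and InvoiceService types are hypothetical, purely for illustration):

```csharp
using System;
using System.Collections.Generic;

public interface IUnitOfWork
{
    void Added(object target);
    void Deleted(object target);
    void Updated(object target);
}

// A hypothetical test double that simply records what was registered,
// so a unit test can assert on the transaction contents with no database.
public class RecordingUnitOfWork : IUnitOfWork
{
    public readonly List<object> AddedObjects = new List<object>();
    public readonly List<object> DeletedObjects = new List<object>();
    public readonly List<object> UpdatedObjects = new List<object>();

    public void Added(object target) { AddedObjects.Add(target); }
    public void Deleted(object target) { DeletedObjects.Add(target); }
    public void Updated(object target) { UpdatedObjects.Add(target); }
}

// A hypothetical service that declares its transaction explicitly.
public class InvoiceService
{
    private readonly IUnitOfWork _unitOfWork;

    public InvoiceService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void ApproveInvoice(object invoice, object auditRecord)
    {
        // Business logic would go here; the point is that the
        // transaction boundary is spelled out and easy to assert on.
        _unitOfWork.Updated(invoice);
        _unitOfWork.Added(auditRecord);
    }
}
```

A test can hand InvoiceService a RecordingUnitOfWork and assert that exactly one object was updated and one was added, with no database or black magic in sight.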
The configuration model is somewhat divorced from the configuration format already (this was a huge lesson I learned with StructureMap), so in theory we should be able to write alternative configuration formats. The first thing I would do is create a simplified format that completely hides the conceptual model from the user. Next, maybe think about a “convention over configuration” approach à la ActiveRecord in Ruby. I think you could also find a way to generate DDL from an object model the way that you can with NHibernate.
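To make the convention-over-configuration idea concrete, here is a purely hypothetical sketch. This is not any real Linq for Entities or NHibernate API; the naming conventions (class name plus “s” for the table, property names for the columns) are assumptions made up for the example:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical convention: the class name + "s" is the table name and
// the public property names are the column names. No Xml, no attributes.
public static class ConventionMapper
{
    public static string TableNameFor(Type entityType)
    {
        return entityType.Name + "s";
    }

    public static string[] ColumnsFor(Type entityType)
    {
        return entityType
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Select(property => property.Name)
            .ToArray();
    }

    // Emit simple DDL from the object model, in the spirit of
    // NHibernate's schema export. The type mapping is deliberately naive.
    public static string CreateTableDdl(Type entityType)
    {
        var columns = entityType
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Select(p => "  " + p.Name + " " + SqlTypeFor(p.PropertyType));

        return "CREATE TABLE " + TableNameFor(entityType) + " (\n"
             + string.Join(",\n", columns.ToArray()) + "\n)";
    }

    private static string SqlTypeFor(Type type)
    {
        if (type == typeof(int)) return "INT";
        return "VARCHAR(255)"; // fallback for the sketch
    }
}

// A plain domain class: no attributes, no mapping file, no base class.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```

The appeal is that the plain Customer class carries zero persistence noise; the mapping and even the DDL fall out of the conventions.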
Evolutionary or Continuous Design
I’m a practicing XP’er and a believer in the benefits of using Continuous Design, but it doesn’t just come for free. The obvious objection to Continuous Design is the risk of churn and the cost of changing code, and we try to beat this by purposely choosing tooling that enables easy evolution of a software design. I’ll say this and let the arguing start: it’s significantly easier to evolve the design of the object model first, then let the database fall out of the object model (and be optimized afterward) only when the object model is solidified. I want to be able to quickly add properties, rename properties, and add methods to evolve little by little. I can lean very heavily on refactoring tools and fast-running unit tests to make small, evolutionary changes in middle-tier code. C# is soft. Even with the Agile Database Techniques described by Scott Ambler, the database structure is still more work and effort to change in comparison.
I don’t want to codegen my domain model classes. Having your classes split into a pair of partial classes or an abstract class with data elements and a subclass with logic is clumsy to me. I think that code is harder to understand because of the “CTRL-TAB” factor switching back and forth. I also don’t like having to fire up a modeling tool to make a change, then depend on the compiler to find all the other code I’ve busted by changing the signature of the model. Again, I want to lean on ReSharper and my unit tests for little changes.
What’s Cool about Linq for Entities?
As David Laribee pointed out, Linq for Entities is much more than an O/R Mapper. It potentially provides us with a unified data access strategy over heterogeneous data sources (web services, xml, non-relational databases, etc.). That’s great if it succeeds, but that strategy, IMO, has added quite a bit of complexity that’s fully exposed to the end users in the form of three-way mapping (object model, conceptual model, and relational model). The conceptual model only adds value for mapping non-relational data (I’d say it’s Gregor’s Canonical Data Model pattern). If it succeeds and they address the ease of use and POO issues, Linq for Entities could be really good.
- The underlying database mappers are emitted rather than using reflection
- It supported every reasonable mapping scenario I could think of to ask about
- I love the Linq query language to express queries in terms of the Domain Model with Intellisense
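To show the kind of query expression I mean, here is a Linq query written purely in terms of domain classes. This uses LINQ to Objects over a hypothetical Order class, just to demonstrate the query syntax rather than any real Linq for Entities mapping:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical domain class, purely illustrative.
public class Order
{
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public class OrderQueries
{
    // A query expressed in terms of the Domain Model. The compiler
    // checks the property names and Intellisense offers them as you
    // type, unlike string-based query languages like HQL.
    public static string[] BigSpenders(IEnumerable<Order> orders, decimal threshold)
    {
        var query = from order in orders
                    where order.Total > threshold
                    orderby order.CustomerName
                    select order.CustomerName;

        return query.Distinct().ToArray();
    }
}
```

Rename the Total property with a refactoring tool and every query that uses it is updated or flagged by the compiler; that is the payoff over embedded query strings.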
If Linq for Entities should fail, it’s going to be an object lesson. Instead of choosing one major use case first (O/R Mapping) or focusing on the user experience, they put a huge chunk of work into the foundational architecture. Fine, but at this point Linq for Entities is like a giant big block V8 engine mounted to a weak transmission. The raw power is there, but it’s clumsy to use it in my opinion.
What about LINQ for SQL/DLINQ?
By and large I actually like DLINQ, but it’s not really aimed at the same scenarios as a full-blown O/R Mapper. In my mind, DLINQ looks like an evolutionary improvement over strongly typed datasets, and applies to basically the same scenarios where a DataSet would be acceptable. DLINQ is specifically targeted at very data-centric development, so I don’t think you would want to try a rich domain model approach with it. I would consider using it for reporting applications, simple CRUD applications, data services maybe, and most immediately for writing automated tests against a database with FIT. In the end, I think I would describe DLINQ as putting Intellisense on top of the database model.
Mapping to Stored Procedures
Yes, you will be able to map entities to stored procedures, and I’m seeing people working through how to do this. I’m going to make a fearless prediction: mapping to stored procedures will take significantly more mechanical work than mapping entities directly to database tables and columns. I don’t want to rehash the sproc arguments yet again, but keep in mind the mechanical cost of using sprocs here, balanced against whatever value you perceive in them.
MonoRail is a Watershed Moment for the .Net Community
I think MonoRail is a huge, huge deal for the .Net community. Finally, we have an application framework that comes from the community that provides a great deal of value and enables a development style that Microsoft does not. If you’re not already familiar with it, MonoRail is an open source Model View Controller framework for web development. As you can probably tell, it’s somewhat influenced by Ruby on Rails.
First, let’s look at the simplified page cycle in MonoRail (and Rails):
- A web request arrives. A Front Controller takes the request first, then uses the request URL to choose the appropriate action method on a controller class.
- The action method is called on the controller.
- The controller makes any necessary updates.
- The controller builds a model of some sort.
- The model is passed into a view for rendering.
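The steps above can be sketched with a toy front controller in plain C#. To be clear, none of this is MonoRail’s actual API; the class and method names are made up purely to show the request flow:

```csharp
using System;
using System.Collections.Generic;

// A made-up controller base class. Real MonoRail controllers look
// different; this exists only to illustrate the dispatch flow.
public abstract class Controller
{
    // The "model" that gets handed to the view for rendering.
    public Dictionary<string, object> PropertyBag = new Dictionary<string, object>();
}

public class ProductController : Controller
{
    // An action method: make updates, build a model, done.
    public void List()
    {
        PropertyBag["products"] = new[] { "Apple", "Banana" };
    }
}

public static class FrontController
{
    // Map a url like "/product/list" to ProductController.List(),
    // invoke it, and return the model for a view to render.
    public static Dictionary<string, object> Dispatch(string url)
    {
        var parts = url.Trim('/').Split('/');
        var controllerName = char.ToUpper(parts[0][0]) + parts[0].Substring(1) + "Controller";
        var actionName = char.ToUpper(parts[1][0]) + parts[1].Substring(1);

        var controllerType = Type.GetType(controllerName);
        var controller = (Controller)Activator.CreateInstance(controllerType);

        controllerType.GetMethod(actionName).Invoke(controller, null);

        return controller.PropertyBag; // the view template would render this
    }
}
```

Notice how little ceremony there is between the request and the action method, and how easy it is to call an action directly from a unit test without any web runtime at all.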
There are a lot more details, and I’ve oversimplified a lot, but compare and contrast this to the WebForms page cycle of events. The MVC nature of MonoRail encourages and even enforces a consistent separation of concerns between the controller and view templates. MonoRail is also much, much easier to unit test than the equivalent pages in a WebForms application – even with a Model View Presenter structure. You can test a MonoRail application with less friction because it is much more decoupled from the runtime. The lifecycle of a MonoRail page is much more in tune with the reality of a web page, so there’s much less “leak” in its abstractions. That in turn makes TDD more valuable as a way to model the actual behavior of the code. You can MVP the life out of a WebForms page, but you still end up twiddling with the page events to get everything just right.
More, in no particular order:
- As an architecture, ASP.Net WebForms has some serious weaknesses in regards to testability and maintainability. The attempt to abstract web development as a stateful, forms-based model simply does not work very well. I thought WebForms was a brilliant idea in 2001, but in use I think it adds far too much heft to development tasks that used to be easy in classic ASP. It’s not just me that feels this way either; a number of the people I spoke to at the MVP Summit shared the same general feeling that the WebForms model just isn’t the right direction.
- It’s community driven. It’s being built by the very .Net developers that use it, without any assistance from Microsoft. Read that again. .Net developers, in the field, are building this thing to suit the way they want to work. We do not have to wait for Microsoft to do everything for us. I met a lot of smart people at Microsoft, but they’re just as human as you and I, and there’s not an infinite number of developers at Microsoft to build everything we could possibly want. MonoRail is not bound to the Orcas release cycle, so it can move at a much faster rate than something bundled up into the official .Net platform.
- It represents innovation from outside of Redmond, and we could always use more of that in the .Net community.
- My esteemed CodeBetter colleague, David Hayden, recently compared and contrasted MonoRail to the new Web Client Software Factory. The WCSF might reduce the mechanical cost of generating the initial code with WebForms and add some better practices like MVP and DI, but MonoRail will still have a potentially large advantage over any kind of WebForms architecture: testability and maintainability. Over any length of time, these two “ilities” lead to lower Total Cost of Ownership and a better Return on Investment. The software factory codegen features are cool, but ROI and TCO are sexy to management.
- As you might have read on Jeffrey’s blog post, Scott Guthrie is working on a new concept for a true MVC framework for ASP.Net. I liked what ScottGu demonstrated, but there’s absolutely nothing concrete planned at the moment. No expected dates, no commitment. What I’m getting at here is that there is no reason to bypass MonoRail in the near future if you want an MVC framework that provides a high degree of testability and productivity. Besides, would it really hurt to have some serious diversity in the tooling and approaches you can take for building dynamic websites in .Net?
- In comparing WebForms to MonoRail, think on this: MonoRail might need some things added to it (documentation, extra features, etc.) to catch up in some spots, but WebForms needs complexity ripped out. Guess which option is easier – adding or removing complexity?
The latest Hanselminutes podcast is an introduction to MonoRail with the Eleutian guys. I would highly recommend you give it a listen for some background (I was in the room while they recorded it. You can blame any background noise on me).
There, are you happy Hamilton? Anything big I missed?
- Big UML is Dead! Long live Little UML! – Nobody was talking about large scale UML designs anymore. Executable UML didn’t even come up in the talks that were skirting on Model Driven Architecture. I still think UML is useful, but only in a lightweight whiteboard modeling sense or as documentation after the fact. One thing that Sam said that I heartily agree with is to be somewhat precise about UML notation when you do use it to avoid misunderstandings. If you want to go fast, you need to be clear. I still think I can teach another developer everything they really need to know about UML in 15 minutes.
- Domain Specific Languages – This was a huge topic all week. I liked the talk we saw from Don Box on this subject. One thing he made clear was that DSL’s represent an attempt to raise the abstraction level for very specific problems. Another point I wish he’d made louder is that DSL’s do not automatically equate to new modeling dialects or custom Xml formats (coding in Xml, been there, done that). A lexical DSL written in near-English is arguably easier to understand, and probably to write. I didn’t get a chance to talk to him much, but Neal Ford was there representing the Ruby angle on DSL’s. Look for a book from him and some other fatbrain Thoughtworkers on embedding your own DSL’s in Ruby soon.
- Queries are a Business Concern – Ayende said it first, and many people I spoke to thought so too. There are a lot of business rules embedded into “where” clauses. I really like the idea of moving this business logic to business logic classes. I think it reduces the intellectual overhead of understanding a system by gathering related logic into a single place.
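As a small example of what pulling a “where” clause into the business logic can look like (the Account class and the “delinquent” rule here are hypothetical, just for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical domain class.
public class Account
{
    public decimal Balance { get; set; }
    public int DaysOverdue { get; set; }

    // The business rule lives on the domain model, defined in one
    // place, instead of being duplicated across "where" clauses
    // scattered through the data access code.
    public bool IsDelinquent()
    {
        return Balance > 0 && DaysOverdue > 30;
    }
}

public static class AccountQueries
{
    // The query expresses intent; the rule itself is defined once above.
    public static Account[] FindDelinquent(IEnumerable<Account> accounts)
    {
        return accounts.Where(account => account.IsDelinquent()).ToArray();
    }
}
```

When the definition of “delinquent” changes, there is exactly one method to update and one place to look when you’re trying to understand the system.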
I wasn’t gonna do this, but everyone else is, so why not? The MVP Summit was a great experience. The official content was so-so, but the people I got to interact with were tremendous. In no particular order, and I’m sure I left someone out:
I finally met more of the CodeBetter gang – Karl Seguin, Darrell Norton, Raymond Lewellan, Jeff Lynch, and Greg Young. Plus new CodeBetter addition Jean Paul Boodhoo. My old Austin friends Scott and Jeffrey were there, as was CodeBetter dean Sam Gentile. I spent a lot of time around the flower of Des Moines, Iowa development Nick Parker, Tim Gifford, and Javier Lozano. Somebody let a Google guy in. I finally met Scott Allen of OdeToCode fame. Ian Cooper came over from the UK. I had to go all the way to Seattle to finally meet Don Demsak and David Laribee. I spoke quite a while with TShak, and Mario Cardinal is a life long friend for asking me about StructureMap.