My employer, Finetix, added my blog to their main feed and my most recent post at the time just happened to be about Star Wars, so I thought I needed to put up my first "Grown Up" post of the year. Before you read on, I am assuming some reader familiarity with the ideas of loose coupling, cohesive code, and encapsulation.
Continuing where I left off from On Writing Maintainable Code, it's time to talk about how to write code with a high degree of Orthogonality. Applied to software design, Orthogonality is the ability to change a conceptual part of the software system while minimizing impact to other parts of the software system. An orthogonal design separates different concerns of the system (data access, security, business logic) in a way that enables changes to be made in isolated areas of the code without rippling out.
Over time, and even within the originating project, software systems will need to accommodate changes to both their behavior and infrastructure. Later in the lifecycle, old dog software systems will need to learn new tricks for unforeseen requirements as the business environment changes. Also, you stand a much higher chance of being able to reuse your existing code to perform this new functionality with a system that has been decomposed into orthogonal pieces. My favorite analogy for orthogonal code is the Lego sets you had as a child. You can use a small Lego brick to build almost anything. A big piece shaped like a pirate ship can only be used to make a pirate ship.
The basic concepts for writing Orthogonal code are loose coupling and high cohesion. These concepts are well known, but far too often ignored or not entirely appreciated. A few years ago, several of my "Architect" colleagues spent a week at an OOP bootcamp. For the next couple of weeks "Loosely Coupled" was worked into every single conversation we had. Two weeks after that they went back to designing web services that took SQL strings in their input messages and stuffing business logic into stored procedures. I think they largely missed the point.
You've heard the mantra your entire software development career, as epitomized by this quote from Scott Bellware:
Loosely-coupled, highly-cohesive, ya-da, ya-da, ya-da, blah, blah, blah – the [developer's] eyes roll back into his head and the archi-speak fades into a nice, lulling drone.
Focus those eyes back to the front of your head and pay attention, because orthogonal code, maintainable code, testable code, understandable code — all of these goals are most likely accomplished by a strong focus on loose coupling and high cohesion resting on the bedrock of Encapsulation. Achieve better cohesion by paying attention to "Tell, Don't Ask," the Law of Demeter, and the Single Responsibility Principle. Supercharge the loose coupling of your code by applying the Dependency Inversion Principle and the Open/Closed Principle. Get some testability into your code.
Don't write this post off as just academic hot air; this is about consistently leaving the office on time and protecting your company's code assets, because you'll be much more likely to sustain smooth forward progress with your existing code.
Real World Example
To gain an appreciation for orthogonal code, or the lack thereof, let's look at some poorly designed code that does not exhibit orthogonality. This isn't a completely contrived example either; it's largely taken, and exaggerated, from a preliminary version of a system that I later inherited. I chose this example because this code was hard to change, and major changes and extensions were necessary in its first year of life. Unfortunately, I've commonly run across code like this in my career. Microsoft developers get a bad rap for having an inadequate command of good development principles and practices, but I've seen this type of tightly coupled, uncohesive code in several different programming languages, environments, and organizations.
Here's the scenario: the architecture is a Pipes and Filters structure involving a series of message processors that take an Xml input, interpret that data, perform business validations and/or transform the input message into an outgoing message, and then put a serialized copy of the outgoing message onto an MSMQ queue. The entry point method of one of the "Filter" classes looked like this:
public void Transform(InputMessage input)
{
    // BusinessLogicClass needs some reference data before it can do the transform.
    // Use a derivation of the original Data Access Application Block from MS.
    // First, gotta look up the connection string.
    string connectionString = ConfigurationManager.AppSettings["ConnectionString"];
    DataSet dataSet =
        SqlHelper.ExecuteDataSet(connectionString, CommandType.Text,
            "select * from sometable where some_field = some_property_of_input");

    // Do a bunch of business logic processing to transform Input to Output. It's not shown here, but
    // the processMessage() function has calls to the data access classes interjected into the
    // business logic as well.
    OutputMessage outputMessage = this.processMessage(input, dataSet);

    // Then send the message on via MSMQ.
    // Look up the Queue name first.
    string queueName = ConfigurationManager.AppSettings["BusinessLogicClassQueue"];
    MessageQueue queue = new MessageQueue(queueName);
    string serializedOutput = serializeOutgoingMessage(outputMessage);
    queue.Send(serializedOutput);
}
Fine, that code was easy to write in the first pass, but how about easy to change? To understand? To extend? To test? In reality this code is probably several times bigger, because I omitted quite a bit of try/catch logic around the data access and queue management. Not to mention that the actual business validation and transformation rules were complex. We can easily spot some significant problems with this code as we walk through some of the possible changes to this code base. Let's talk about why a change could be difficult, and then in the sections below examine the application of orthogonal design principles to make these changes less painful.
- Change the queueing mechanism. Originally the system communicated between the logical filter classes running in multiple Windows service processes by putting the outgoing Xml message into an MSMQ queue. Reflecting this choice, our Transform() method directly uses the classes in the System.Messaging namespace to talk to MSMQ.
- Change configuration mechanisms. This method pulls its own configuration straight from the application config file. If the application needs some more sophisticated configuration infrastructure, this code, and every other piece of code that reads the config file directly, has to change.
- Change the database structure. Ok, it's the year 2007, and in the post-ASP-classic world we don't embed SQL directly in our business logic code anymore, but sometimes it's good to remember why we follow good design principles. Changing the database structure could easily break the business logic. Just as importantly, writing the business logic was harder than it had to be, because you had to slog through the data access machinery along the way. Even if the database structure doesn't change, you might want to take advantage of some sort of caching mechanism. If you don't have an encapsulated data access library, you might end up needlessly repeating a large amount of boilerplate code around connection management (get connection string, open connection, attach transaction object, try/catch/finally exception handling).
- Change the business logic. Let's call this one a near certainty. As I'll show later, reusing the business logic while leaving the database and messaging queue behind may be the important change that has to be made later.
- Implement better (any) auditing or security. Where would you put the security and auditing? Everything seems to be jumbled together here.
Encapsulation

I've often heard encapsulation described as "shy" code. Putting it another way, a lack of encapsulation is code walking around with its underwear on the outside. Classes that conceal their inner workings are easier to use. You don't have to do nearly as much hand holding with encapsulated classes because they know how to bootstrap themselves.
For a contrast in encapsulation, let's take a look at how the Filter.Transform() method performs its data access. The original system used a modified copy of the original Data Access Application Block (DAAB) class. The DAAB was an attempt to simplify data access by encapsulating some very common ADO.Net grunt work inside one monster-sized class called SqlHelper. The DAAB was popular, but I always thought it was lacking, mostly because it did not provide enough encapsulation of ADO.Net to form the core of a solid Data Access Layer. Let's look at the signatures of two of the most commonly used methods on SqlHelper, and then talk about where the DAAB just didn't go far enough.
public static int ExecuteNonQuery(string connectionString, CommandType commandType, string commandText)
public static int ExecuteNonQuery(SqlTransaction transaction, CommandType commandType, string commandText)
That's not so terrible, and it does, after all, encapsulate the creation and manipulation of SqlCommand and exception handling. Take a look at what you *do* have to tell SqlHelper. You have to go look up the connection string separately every single time. If you need to do transactional sql statements (and you do), you're going to have to keep the SqlTransaction object around yourself and remember to pass it into the ExecuteNonQuery() overload method on every single call. You also have to keep telling the SqlHelper what type of command you're using. In the real life example system, that led to an absurd amount of repeated, noise code like this:
SqlHelper.ExecuteNonQuery(ConfigurationClass.GetConnectionString(), CommandType.Text, "some sql statement");
Why should I have to tell my Data Access Layer (DAL) what its database connection string is? What possible value is it to me to have to specify CommandType.Text repeatedly if that's the only CommandType I care about? If I'm using transactions, I have to know about the Transaction class and shuttle it around between calls. In my opinion, the DAAB simply did not provide enough value. At most it was simply something that you could wrap further or use as sample code to write your own — and that's exactly what I did.
When I do have to use a classical DAL, I use a homegrown assembly that encapsulates all of my ADO.Net manipulation, mostly behind this core interface:
public interface IDataSession
{
    int ExecuteCommand(IDbCommand command);
    int ExecuteSql(string sql);
}
The major advantage over something like the DAAB is far more encapsulation of the underlying ADO.Net objects. Using this interface as my core, I can concentrate solely on the SQL being used inside my DAL instead of so much tedious ADO.Net code. Consumers of the IDataSession interface don't have to know how to fetch the connection string or store a Transaction class somewhere, because those details are encapsulated within the concrete implementations of IDataSession.
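To make the contrast concrete, here's a sketch of what a consumer of IDataSession might look like (InvoiceDataAccess and its table are hypothetical names, not part of StructureMap.DataAccess):

```csharp
public class InvoiceDataAccess
{
    private readonly IDataSession _session;

    // The IDataSession is pushed in from outside; this class never
    // sees a connection string, a transaction, or an IDbConnection
    public InvoiceDataAccess(IDataSession session)
    {
        _session = session;
    }

    public void DeleteAllInvoices()
    {
        // Nothing here but the actual SQL
        _session.ExecuteSql("delete from invoice");
    }
}
```

All of the connection and transaction management lives behind the IDataSession implementation, so this class has exactly one concern: the SQL.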
In case you're curious, and you have far too much time on your hands, my homegrown DAL assembly is now part of the StructureMap code, but you'll have to pull it down out of the Subversion trunk (http://sourceforge.net/projects/structuremap). Given a choice I'll use NHibernate for persistence instead, but I still use StructureMap.DataAccess for test automation and utility code. I have not looked at the descendant version of the DAAB in the Enterprise Library, but I would have to assume that it's much improved from the DAAB.
Tell, Don't Ask
One way to detect or prevent leakages in encapsulation is to follow the Tell, Don't Ask principle.
…you should endeavor to tell objects what you want them to do; do not ask them questions about their state, make a decision, and then tell them what to do.
Here's an example violation of Tell, Don't Ask:
public void SelectAPointByAskingDetails(DataPointSelector selector)
{
    // Ask DataPointSelector for all of its internal Point's
    Point[] points = selector.AllPoints;

    // Ask DataPointSelector what its SelectionType is,
    // then, depending on the SelectionType, pick the
    // Point out of the Point array and set the SelectedPoint
    // property on DataPointSelector
    if (selector.SelectionType == SelectionType.Minimum)
    {
        // Loop through all the points and
        // find the "minimum" point, whatever that means
        selector.SelectedPoint = findTheMinimumPoint(points);
    }
}
This "Asking" method knows a lot about the DataPointSelector, and it's making all of the decisions for DataPointSelector. Following Tell, Don't Ask would lead us to encapsulate the details of Point selection into DataPointSelector. After the encapsulation, the interaction with DataPointSelector is now:
public void SelectAPointByTelling(DataPointSelector selector)
{
    // DataPointSelector knows its own Points and SelectionType,
    // so just tell it to make the selection itself
    selector.SelectPoint();
}
You Will Follow the Law of Demeter!
I've always wanted to use that for a blog title, but I really don't think I can improve on the Wikipedia article on the Law of Demeter. I mention it here because it's an important tool for increasing encapsulation in your code. Roughly, following the Law of Demeter increases encapsulation by avoiding long messaging chains like TopClass.Child.Helper.Connection.Close(). Instead, open up a new Close() method on the TopClass and hide the entire chain of calls through its children. It's nothing but making sure that TopClass is wearing its underwear (Child.Helper.etc) under its clothes where it belongs.
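As a minimal sketch of that advice (the member names come straight from the chain above), TopClass delegates the call instead of letting callers burrow through its internals:

```csharp
public class TopClass
{
    private Child _child;

    // Callers used to write topClass.Child.Helper.Connection.Close().
    // Now the chain is hidden inside TopClass, so callers only need
    // to know about TopClass itself.
    public void Close()
    {
        _child.Helper.Connection.Close();
    }
}
```

If the internals change shape later, only this one method has to change, not every caller in the system.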
Cohesion

Code that is highly cohesive groups strongly related code into a common class, namespace, or module. Put another way, a highly cohesive class is one that has narrowly focused responsibilities. Code with low cohesion either scatters related code throughout the system, or leads to classes with unrelated responsibilities. High cohesion leads to code that is easier to understand, reduces duplication, enhances reusability, and generally makes code easier to maintain.
The Filter class flunks a test of cohesion, and also implies some greater problems throughout the codebase. To start fixing the Filter class, let's list the responsibilities of the Filter class.
- Fetching configuration
- Calling the database to get some reference data
- Performing the business validation and transformation logic
- Serializing the results into Xml
- Creating the messaging queue if it does not already exist
- Sending the output message to the outgoing messaging queue
Wait, you say. The Filter class is probably the implementation of a single use case or story. That's cohesive right? Well, no, because even inside this single use case there's a mixture of business logic and system infrastructure. Chances are very good that you will want to reuse pieces of the infrastructural code outside of this single use case. Maybe more importantly, you want to avoid duplicating system infrastructure code so that you can accommodate changes in infrastructure more readily.
Let's take the immediate refactoring step of moving the messaging infrastructure code into a separate class and simply let Filter delegate to this new class.
public class MessagingGateway
{
    public void SendMessage(OutputMessage outputMessage)
    {
        // Look up the Queue name first
        string queueName = ConfigurationManager.AppSettings["BusinessLogicClassQueue"];
        MessageQueue queue = new MessageQueue(queueName);

        string serializedOutput = serializeOutgoingMessage(outputMessage);
        queue.Send(serializedOutput);
    }

    private string serializeOutgoingMessage(OutputMessage message)
    {
        // serialize the OutputMessage object to an Xml string
    }
}
Now we've gotten the details of the messaging infrastructure away from the rest of the flow of the Filter code in a class (MessagingGateway) that can be reused across multiple Filter classes. If we wanted or needed to, we could change the serialization to a binary format, use a different security mechanism on the queues, easily add more auditing, or maybe switch to an entirely different technology. Switching gears entirely, maybe you want to throw out the idea of queueing and run all of the Filters in a single common AppDomain. Assuming that any new version of MessagingGateway encapsulates the details of its implementation, every single one of these changes could take place without altering the code in the Filter classes. That's orthogonality at its best.
While doing research for this section I came across this great article from Meilir Page-Jones on Cohesion. It's from an introductory textbook on Structured Programming of all things, but it's readily applicable to today's OOP and I would think SOA paradigms.
Single Responsibility Principle
We improved the cohesion of the original Filter class by extracting out a class for MessagingGateway, but there are still unrelated responsibilities left in the Filter.Transform() method that could be pulled out. The new MessagingGateway class itself could use some improvement. So how many responsibilities can a class take on before cohesion is compromised? Exactly one, and that's known as the Single Responsibility Principle.
A class should have only one reason to change.
Taking a closer look at the new MessagingGateway above, we see at least two reasons it may have to change. The queueing itself, obviously, but MessagingGateway also pulls the address of the queue from the configuration file. It's very conceivable that the configuration strategy of an application could change to something more sophisticated. Specifically, I can remember a project where we could have collapsed 75 servers, each of which was a single point of failure, into 12-14 redundant servers, except that the configuration code was scattered throughout the application instead of being encapsulated in a cohesive module of the system. Intermingling the code that retrieved configuration data with the rest of the system effectively eliminated a chance to remove a huge amount of support overhead while actually increasing uptime.
Off the top of my head, I can think of two ways to remove the details of the configuration storage from the MessagingGateway class. My strong preference is to simply poke the queue name, and whatever other configuration the MessagingGateway needs, into its constructor function (using StructureMap of course!). The other way, just to avoid branching off into a discussion on IoC/DI tools or builder classes, is to pull the access to the configuration into a centralized class. That class, and the new slightly simpler MessagingGateway, might look a bit like this:
public class ConfigurationManager
{
    public static string ConnectionString
    {
        get { /* read from wherever configuration lives */ }
    }

    public static string QueueName
    {
        get { /* read from wherever configuration lives */ }
    }
}

public class MessagingGateway
{
    public void SendMessage(OutputMessage outputMessage)
    {
        // Look up the Queue name, without knowing where it's stored
        string queueName = ConfigurationManager.QueueName;

        // create the queue, serialize, and send as before
    }
}
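For comparison, my preferred constructor injection approach is just as small a change (a sketch; the bootstrapping code, or an IoC tool like StructureMap, supplies the queue name):

```csharp
public class MessagingGateway
{
    private readonly string _queueName;

    // The queue name is pushed in from the outside, so MessagingGateway
    // has no idea where configuration is stored
    public MessagingGateway(string queueName)
    {
        _queueName = queueName;
    }

    public void SendMessage(OutputMessage outputMessage)
    {
        MessageQueue queue = new MessageQueue(_queueName);
        // serialize and send the message as before
    }
}
```

With this version MessagingGateway has exactly one reason to change: the queueing itself.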
Taking the Single Responsibility Principle knife to our original Filter.Transform() method leads to pushing some of the responsibilities out into these new classes:
public class DataAccessObjectForTheFilterClass
{
    public ReferenceData GetReferenceData(InputMessage input)
    {
        // go grab the connection string, call some sort of SQL,
        // and put the data into some sort of usable data structure
    }
}

public class BusinessValidationAndTransformationProcessor
{
    public OutputMessage ProcessValidations(InputMessage message, ReferenceData referenceData)
    {
        // do a lot of business processing here, but only business processing
    }
}
The new version of the Filter.Transform() method delegates to the new granular, cohesive classes:
public void Transform(InputMessage input)
{
    // Get the reference data
    DataAccessObjectForTheFilterClass dao = new DataAccessObjectForTheFilterClass();
    ReferenceData referenceData = dao.GetReferenceData(input);

    // Do the validation and transformation
    BusinessValidationAndTransformationProcessor processor =
        new BusinessValidationAndTransformationProcessor();
    OutputMessage outputMessage = processor.ProcessValidations(input, referenceData);

    // Send out the outgoing message
    MessagingGateway messagingGateway = new MessagingGateway();
    messagingGateway.SendMessage(outputMessage);
}
We pulled the responsibility for configuration, data access, business validation, and the messaging infrastructure out of the main Filter class into cohesive classes. At this point, the Filter class is merely a "Coordinator" stereotype from Responsibility Driven Design. Centralizing the coordination of the various pieces into a dedicated coordinator is a great way to enhance the orthogonality of the code.
One of the client developers on my team asked me how you know when you need to create a new class rather than add functionality to an existing class. I responded with "One class, one responsibility." That leads us into the question of just what *is* a responsibility. My advice is to look for pieces of a class that don't seem to be related to the rest of the class and pull them out. I think the class being discussed that day was the Application Controller class that manages the basic shell of our WinForms application. Part of the UI is a panel of tabs for the individual active views. Our ApplicationController class was starting to collect quite a few methods around managing the contents of the TabControl that weren't related to the rest of the ApplicationController. We pulled the functionality for managing the active view collection and the TabControl into a separate controller/view combination. Instead of one really big schizophrenic controller class, we had two smaller controller classes, both with a focused intent. There's a very important lesson in that. Many times you'll find that more classes lead to a simpler design by minimizing the complexity of any single class.
Coupling

Back to the original Filter.Transform() method: the business logic in the Filter class is tightly bound to the configuration of the system and the messaging infrastructure, and knows directly about the SQL that gets passed into SqlHelper. In this design, the business logic inside of Filter is coupled very tightly to the infrastructure. So what does that really mean?
Just to see if someone is reading my blog, here's another real world scenario. Let's say you built your core software system 5-6 years ago, before any of the Object/Relational Mapping persistence tools were considered mature. To fill the gap you wrote your own proprietary ORM, and everything is great — for a while. Fast forward to today, and you realize that Hibernate or the forthcoming LINQ tools are significantly easier to use and have some functionality that you would like to have that isn't readily possible with your own homegrown ORM tool. If your business logic, user interface, and service layer are tightly coupled to your homegrown ORM, changing to another tool may not be economical. If you interspersed too much of the persistence code with the business logic or service layer code, you're effectively looking at a rewrite just to use the better tool. That's not the end of the world, but you've just incurred an inefficiency that your competitors might not have by using the better tools.
You may absolutely not interpret this as a license to go build extra or unnecessary layers of abstraction on top of your persistence layer. When I use NHibernate I do encapsulate it behind thin wrapper interfaces, but anything beyond that might be getting into the realm of silly. Ted Neward recently skewered some folks for too many abstraction layers. All I want to say here is to keep as much distance between your persistence and business logic code as you possibly can.
I've talked quite a bit about the coupling between the business logic and persistence code, but tight coupling might be lurking in a lot more corners of your system.
- In the implementation of a web service or any other integration point, strongly coupling the functionality of that service to the transport mechanism or Xml format often turns out to have negative effects. To the SOA enthusiasts of the world, I say to you that loose coupling doesn't stop once you're underneath the initial SOAP message. Loose coupling is just as important, in the small, in the code within a service.
- Singletons and static classes are a very strong form of coupling
- Calls within a class to the constructor of another class are a subtle form of strong coupling
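The Singleton/static flavor is easy to spot in code. In a sketch like this (AuditLog is a hypothetical static class, not from the original system), every caller is hardwired to one concrete implementation, with no seam for substituting anything else during testing:

```csharp
public class Filter
{
    public void Transform(InputMessage input)
    {
        // Every Filter in the system is permanently welded to this
        // one concrete AuditLog class; there's no way to swap it out
        AuditLog.Write("Transforming an incoming message");

        // ... the rest of the transform logic
    }
}
```

Any change to the auditing strategy now ripples through every class that makes a static call like this one.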
Dependency Inversion Principle
Wait! You just said that a call to a constructor is a form of tight coupling? How? When we last left the Filter.Transform() method we had decomposed the responsibilities of the original scenario into cohesive classes, but the Filter class is creating the concrete helper classes directly.
MessagingGateway messagingGateway = new MessagingGateway();
In the code above, Filter knows exactly which class performs the messaging, and it's dependent upon that MessagingGateway. We've moved all of the details around interacting with MSMQ into MessagingGateway, so we could change the transport layer without modifying Filter, but what if we really do need to use Filter with both MSMQ and a web services transport layer? Simple: let's apply the Dependency Inversion Principle and extract an interface for MessagingGateway that is independent of any of the concrete implementations of messaging transport. The second step is to make Filter depend on the abstracted interface. You do have to have a way to get the different implementations of IMessagingGateway into Filter, so we just pass an instance of IMessagingGateway into Filter's constructor (Constructor Injection).
public interface IMessagingGateway
{
    void SendMessage(OutputMessage outputMessage);
}

public class MessagingGateway : IMessagingGateway
{
    // same implementation as before
}

public class Filter
{
    private readonly IMessagingGateway _messagingGateway;

    // Push in the IMessagingGateway in the constructor function
    // so Filter doesn't even have to know which concrete
    // class is in here
    public Filter(IMessagingGateway messagingGateway)
    {
        _messagingGateway = messagingGateway;
    }

    public void Transform(InputMessage input)
    {
        // Do the things that Filter.Transform() does,
        // then send the result through _messagingGateway.SendMessage()
    }
}
Now, we've got Filter receiving an abstracted IMessagingGateway from somewhere else, so it could work with completely different types of messaging infrastructure — or even work with no messaging. Don't just stop with the messaging either, abstract away the data access and the business validation and transformation code as well.
Open/Closed Principle

We've busted the original Filter.Transform() method into smaller cohesive pieces, and decreased the coupling between the pieces by extracting abstracted interfaces. Now we can talk about the ultimate principle for creating maintainable code: the Open/Closed Principle (OCP). Verbatim, this principle states that "Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification." I used to think this only referred to the practice of creating plugin extensibility points in your code, but that's only part of the OCP story. What I've learned since then is that the essence of the Open/Closed Principle is being able to add new functionality to an existing codebase by writing completely new code with minimal modification of existing code. One of the major costs of extending an existing system is the potential for regression bugs in existing functionality. The most reliable way to minimize the creation of regression bugs is to avoid modifying the existing code. I've consistently observed that I'm always faster when I'm building all new code versus making modifications to existing code.
I've found that the OCP is going to emerge in two basic ways:
- "Pluggability." This is the obvious usage, but maybe not the most effective. By "pluggability" I really just mean the ability to create a new class or group of classes to implement a new feature, while making only minimal changes to the existing code to call into the new code. For example, we can create new functionality in Filter.Transform() by injecting in all new implementations of IMessagingGateway. If we've abstracted the business rules, we can even change strictly the business rules by creating a new class and simply passing it in at the proper place. Following the OCP doesn't have to mean the full-blown ideal of runtime plugins (shameless plug: Dependency Injection tools like StructureMap make this much easier and more reliable).
- The second, and possibly more powerful, OCP usage is recombining existing classes or components to create new functionality. In the real world system behind my Filter.Transform() example, we had a need in a later project to use the transformation logic completely independently of the pipes and filters architecture in a brand new web service. If the transformation logic is decoupled from the rest of the Filter.Transform() method, we can simply take that cohesive business validation class and use it in the entirely new context. In real life that functionality was very strongly coupled to the pipes and filters architecture, and I spent several days refactoring just to expose that functionality.
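As a tiny illustration of the first style, supporting a web service transport in the Filter example means writing one brand new class (WebServiceMessagingGateway is a hypothetical name) and leaving Filter and the existing MessagingGateway untouched:

```csharp
// Brand new code; no existing class is modified
public class WebServiceMessagingGateway : IMessagingGateway
{
    public void SendMessage(OutputMessage outputMessage)
    {
        // serialize outputMessage and post it to the web service endpoint
    }
}

// The only change is in the bootstrapping code that decides which
// IMessagingGateway implementation to hand to Filter
Filter filter = new Filter(new WebServiceMessagingGateway());
```

All of the existing, tested code stays closed for modification; the new behavior arrives purely by extension.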
Think of this scenario as well from my new career in financial applications. Say you have an analysis engine inside your web application that pulls its transactional data from a database and it works great for quite awhile. Then one day your CEO comes in and says that they have a great opportunity if you can only mate the existing analysis engine to data entered into an Excel sheet so a trader can run simulations. Can you do that? If the analytic engine makes a lot of "pull" calls to the data access layer, or is tightly coupled in some other way to the database, you might miss out on that opportunity. On the other hand, if in your existing architecture you simply "push" all of the data into the analytics engine that it needs, it's going to be relatively easy to take the cohesive analytics engine and use it in a completely new context.
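A sketch of the "push" style (all of these names are hypothetical, not from a real system): the engine takes every bit of data it needs as input and performs nothing but calculation, so the same class works against a database, a web service, or an Excel sheet.

```csharp
public class AnalysisEngine
{
    // All of the transactional data is pushed in by the caller.
    // The engine holds no reference to any data access code at all,
    // so it has no opinion about where the transactions came from.
    public AnalysisResult RunSimulation(Transaction[] transactions)
    {
        // pure analytical logic only
    }
}
```

The CEO's Excel scenario then becomes a matter of writing a new adapter that reads the sheet and pushes the transactions in, rather than surgery on the engine itself.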
Testability

A while back, Scott Bellware vociferously wrote that there is no sustainability without testability (and, by inference, automated tests). I could not agree more. Part of modifying existing code is verifying both the correctness of the new code and the continued correctness of the old code. Regression testing is a major cost and a significant risk of modifying existing software. A high degree of test automation can alleviate these concerns, but an effective and efficient test automation strategy is only enabled by a good design, specifically an orthogonal structure that creates opportunities for focused whitebox testing.
There was a good discussion on the Test Driven Development group on Yahoo about the relationship between testability and good design. The discussion was kicked off by someone asking if there really is a linkage between testability and good design. I emphatically believe that designs that are not testable are bad designs. I'm not sure whether I'd say that testability is a first class goal of design, or simply a vital means toward making working code. Regardless, testability alone is enough of a reason to strive for orthogonal code structures. Code that is easy to test is easy to make work.
Testability largely coincides happily with the principles of the good design you were striving for anyway: loose coupling and high cohesion. Automated testing is much simpler when it's easy to isolate functionality (loose coupling) and when you can test one small thing at a time (cohesion). One of the very significant values of using Test Driven Development is that the technique is very effective at sniffing out coupling or cohesion problems. When it's hard to write the unit tests, it's usually indicative of low cohesion or tight coupling.
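To tie that back to the Filter example, the constructor injection of IMessagingGateway is exactly what makes a focused unit test possible. A sketch, using a hypothetical hand-rolled stub and NUnit-style assertions:

```csharp
public class StubMessagingGateway : IMessagingGateway
{
    public OutputMessage LastMessageSent;

    // Record the message instead of touching a real MSMQ queue
    public void SendMessage(OutputMessage outputMessage)
    {
        LastMessageSent = outputMessage;
    }
}

[Test]
public void TransformShouldSendTheOutputMessage()
{
    StubMessagingGateway stub = new StubMessagingGateway();
    Filter filter = new Filter(stub);

    filter.Transform(new InputMessage());

    // The test runs entirely in memory; no queues, no config files
    Assert.IsNotNull(stub.LastMessageSent);
}
```

Try writing that test against the original, tightly coupled Transform() method and you'll need a real queue, a real database, and a real config file just to get started.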
Putting it All Together
Alright, we know some of the principles of writing orthogonal code and we've taken a long look at the coupling and cohesion issues around a sample system. As I said earlier, my sample scenario was from a real project I was part of two years ago. Do your best Keith Jackson imitation here — in that real world system, the original team hit the "Granddaddy of Them All" — The Great Refactoring of Aught Five. The pipes and filters architecture was killing them with complexity. The code was slow, hard to deploy, difficult to troubleshoot, and nasty to test. I don't remember how long it took, but they stopped and ripped out some of the pipes and filters support to collapse the application into a single AppDomain. Performance went up, deployment got easier, and testing and troubleshooting improved somewhat.
All the same though, I would call the Great Refactoring a mild failure, because it could have been done more efficiently and left the code in a better state. They didn't completely remove the pipes and filters motif, largely because refactoring any farther would have been too difficult; the code was not orthogonal. The time spent doing the Great Refactoring was pure overhead, because not a single bit of business functionality was added. I say overhead because the refactoring could have been several times easier and quicker if they had religiously followed orthogonal design concepts from the very beginning. With an orthogonal structure they could have thrown away the messaging infrastructure and the filter coordination code, but reused the business logic and data access classes largely unchanged. And orthogonal code would have mitigated the risk of the refactoring; with less risk, they might have started it early instead of waiting until they were backed into a corner.
Lastly, on the Pipes and Filters Scenario
I apologize to the original team for doing this at their expense, but they provided a near perfect object lesson on a couple fronts and I bet they'd agree with many of my conclusions two years later. The original impetus for using the Pipes and Filters architecture being spread across multiple windows services was to be able to scale the system "vertically" across multiple systems.
From my perspective, the project was jeopardized by committing far too early to a complex upfront design without any significant deviation or adaptation until things got really bad. The entire skeleton of the system was laid out before doing any significant testing, or even unit testing. A large impetus for the XP YAGNI ideal is that it's considerably easier to start simple and add complexity as needed than it is to eliminate or change unnecessary complexity. I think that building good systems is very largely dependent upon getting feedback and verification on your design early and often, and then using those lessons. Some of the team was used to heavy RUP-style upfront design instead of an adaptive approach. They dumped the UML, but didn't replace it with an adaptive design approach. UML or not, pay attention to your design so you know when you're going off the rails.
Another lesson that we *all* need to take away from this is to keep learning and get some exposure to new ideas and different ways of thinking. Consciously or unconsciously, the team lead on the project chose the pipes and filters model because he had a deep background in mainframe applications and it was a comfortable style for him. Even after the big refactoring the pipes and filter model left a lot of "noise" code in the system that made the flow of the code very, very difficult to follow and generally bloated the codebase. I still think that a Domain Model + Service Layer organization would have been much more suitable and far simpler. This development motto from Bil Simser captures the myopia towards the comfortable for me:
"If all you do is what you know, you'll never grow."
The pipes and filters architecture included a lot of support for metadata driven composition of the filters along the way to make the addition of new workflows easier through configuration. In reality the extra metadata-driven pipeline infrastructure just turned into a great deal of intellectual overhead in understanding and troubleshooting the system. When it came time to build new functionality into the system, we struggled to reuse existing functionality because it was too tightly coupled to the pipeline extensibility mechanism. Ironically, I felt that one of the biggest impediments to adding a new workflow to the system was the very extensibility mechanism put into the code for new workflows.
Wait, there's more…
I've got much more to say on writing maintainable code in this ongoing series, but it's going to wait awhile. I'm all blogged out for now.