Legacy Code and the Life and Death of an Application

My colleague and I are making minor extensions to some legacy code (in C#) for the next month or so (and that generally implies that the Shade Tree Developer will feature quite a bit of ranting in the near future). After that my team will be making some large-scale architectural changes to an existing product. My job for the rest of the year is going to revolve around legacy code. We very frequently make customer-driven modifications to our application. We also have to keep the product ahead of the competition, and that means being agile (in the literal sense) with our development practices. Our existing codebase has some definite weak spots. The development organization has the beginnings of a long-term “Grand Unified Theory” to transmogrify our existing systems and development environment, extend the company’s investment in the current products, and prevent the downward spiral into “legacy code.” Rewriting the applications is completely out of the question. So first, here are the symptoms we’re trying to prevent and eliminate. As we get farther into the “Grand Unified Theory,” I’ll blog about our strategies for getting out of the legacy code trap.


Life and Death of an Application


In my finite experience and research, applications (or services if you’re of the SOA persuasion) reach the end of their useful lifespan when the cost of altering or extending an application is greater or riskier than throwing it away and starting over. I’ve seen a lot of people make a parallel to the concept of entropy. The natural state of any system is to tend toward entropy (in software terms, the big ball of mud). You have to exert energy on an application to reverse or stave off application rot (entropy). Most applications undergo a twilight period where the previously shiny, well-factored application descends into a morass of spaghetti code that becomes more and more difficult to work with. The applications are still providing business value, but they are starting to be more of an opportunity cost than an enabler because of the increasing friction incurred by further modifying the application. This twilight slide into retirement is what most people mean when they say “Legacy Code.” The code works, but no one wants to touch it for fear of bringing down the whole thing.


What is Legacy Code?


Here are a couple of takes: “Before we get started, do you know what ‘Legacy Code’ means? It’s code that works” — via Roy Osherove. Yeah, but this is true only if you leave it alone. Michael Feathers described legacy code as “code without tests.” What I think he’s driving at is code that can be safely modified and rapidly verified.
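To make "safely modified and rapidly verified" concrete, here's a minimal sketch (the class and names are hypothetical, invented for illustration): before touching untested legacy code, pin down its current behavior with a quick test, so later edits get verified in seconds instead of by a full manual regression pass.

```csharp
// Hypothetical legacy class whose behavior we want to preserve.
public class InvoiceFormatter
{
    public string Label(int invoiceNumber)
    {
        // "D6" pads the number with leading zeros to six digits.
        return "INV-" + invoiceNumber.ToString("D6");
    }
}

public static class InvoiceFormatterTests
{
    // This test documents what the code does today. If a later change
    // breaks the format, you find out at build time, not in production.
    public static void PinsCurrentLabelFormat()
    {
        var formatter = new InvoiceFormatter();
        if (formatter.Label(42) != "INV-000042")
            throw new System.Exception("behavior changed");
    }
}
```

Once a handful of these tests exist, the code starts to meet Feathers' bar: you can modify it and know within a build cycle whether you broke anything.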


I’m going to add a corollary to Mr. Feathers’ definition: code without tests is usually also code that is difficult to test. Twice in the last year I’ve transitioned from greenfield development projects that were written with TDD to working with brownfield code that had not been written with TDD. In almost startling contrast, the test-first code was vastly easier to extend with new unit tests than the code written test-last. To paraphrase the noted development sage Ferris Bueller, “the legacy code was so tightly coupled that if you put a chunk of coal between the classes you would get a diamond.” What was it about the legacy code that made writing unit tests so difficult?



  • Overuse of static methods. Static methods result in tight coupling. I ranted about this here. Keeping state in static fields makes it even worse.
  • Tight coupling to external configuration. It’s often impossible to run any piece of the code without an external configuration file. Dealing with configuration slows down unit testing. I’ll blog more on this soon, specifically ways to avoid the coupling.
  • Poorly factored code and tight coupling between concerns. Database, business logic, and presentation logic all mixed together.
  • Blobs. Huge classes that hoard responsibilities, so nothing can be instantiated or tested in isolation.
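The first two bullets tend to show up together, and the fix for both has the same shape. Here's a hedged sketch (all names here are hypothetical, not from any real codebase): hide the configuration-reading code behind an interface and pass it in, so a unit test can substitute a fake instead of touching static state or an external config file.

```csharp
// The "before" shape, roughly: a static method that reads external
// configuration directly, so no unit test can run without the file.
//
//   public static decimal GetTaxRate(string region)
//   {
//       var path = ConfigurationManager.AppSettings["TaxTablePath"];
//       // ...load the file, parse it, return a rate...
//   }
//
// One way out: express the dependency as an interface and inject it.

public interface ITaxRateSource
{
    decimal GetRate(string region);
}

public class TaxCalculator
{
    private readonly ITaxRateSource _rates;

    // The collaborator is passed in, so a test can hand over a fake
    // instead of wiring up the real configuration machinery.
    public TaxCalculator(ITaxRateSource rates)
    {
        _rates = rates;
    }

    public decimal TaxFor(decimal amount, string region)
    {
        return amount * _rates.GetRate(region);
    }
}

// A trivial stub for unit tests: no config file, no static state.
public class StubRates : ITaxRateSource
{
    public decimal GetRate(string region) { return 0.08m; }
}
```

A test can now exercise the calculation logic with `new TaxCalculator(new StubRates())` and never touch the file system, which is exactly the seam the static version denied you.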

Legacy Code isn’t Necessarily Old


Of course, many applications aren’t well-factored to begin with — instant legacy code. Entropy also creeps in with any gap in communication or participation between the designer’s vision and the development team. I’m dealing with one of those right now. The original vision and design was thoughtful and innovative, but it was thrown over the wall to a team of developers who had not participated in the original vision. It’s particularly bad when the original prototype is coded by a skilled architect and then handed off to the developers with instructions to “just finish it off.” One of the advantages to continuous design is that the rough edges are smoothed down over the course of development. Remove the designer from the coding and those rough edges will only become sharper over time. Shamefully, I think I’ve been responsible for a bit of this myself. One way or another the technical vision needs to be socialized and shared throughout the development team if you want to create a consistently structured application. Ideally, the development team should be creating the technical vision themselves.


The Development Environment Matters Too


Since much, if not most, of the cost of modifying an existing application is verification and regression testing, the ability to quickly migrate the application to a testing environment is paramount. Repeatable automated testing is such a great long-term investment in an application that it’s a shame so few projects invest upfront in test automation. At this point in my career, I would consider development infrastructure like build scripts, database scripts, and continuous integration servers to be a vital part of the application itself. If that infrastructure is lacking or nonexistent, the application is that much harder to modify. I’m working with legacy code right now, and the lack of an automated development build with unit tests is making me feel, well, like I’m naked in a hailstorm.

The worst case I know of was a very large VB6 application that had been accreted around a nice tidy core over the course of several years. Simply setting up a test environment manually could take up to 10 business days, and even then they were never very confident in the installation. That application was a core piece of the manufacturing system and changed often. Not only was the migration slow and hard, but the haphazard nature of the environment control allowed several pieces of bad code to get through testing and into production, leading to factory downtime (and take it from me, factory downtime is a bad, bad thing. When the billionaire tycoon founder himself comes looking for someone’s head in IT, run away and get some plausible deniability). The real gallows humor was that the application was customized somewhat for each factory and business line and therefore had to be regression tested for each and every configuration.

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.