Application Re-write Wackiness

To any colleagues who stumble across this, please don’t take any offense.  It was a bad, bad day around the Austin office. – Jeremy 10/26/2005


Have you ever had to do a re-write of an existing application?  If you have, you’ll know exactly what I’ve been going through much of this year.  The entire set of requirements is often “just make it work like <old thing>.”  Nobody really understands how the old thing works, so you have to go spelunking in the code to yank out business logic.  The fun part is that the old system is usually being rewritten specifically because the code is rotten.  There might be some documentation left over from the original project, but it’s a lie until proven truthful.  I’ve had some success with what Michael Feathers describes as “characterization” tests to verify that the new system replicates the old, but I feel so much better whenever there is some kind of real business requirement from real business people driving the development.
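The idea behind a characterization test is simple: treat the legacy code as the oracle and pin the rewrite to whatever it actually does, warts and all.  A minimal sketch of the shape (in Python for brevity; the discount functions and inputs are made up for illustration, and in real life the “legacy” side is a call into the old component or a replay of recorded outputs):

```python
def legacy_discount(order_total):
    # Stand-in for the old, poorly understood routine. Whatever it
    # returns is, by definition, the "correct" answer for the rewrite.
    if order_total > 1000:
        return round(order_total * 0.9, 2)
    return order_total

def new_discount(order_total):
    # The rewritten routine: it must reproduce legacy behavior exactly,
    # including any quirks you have not discovered yet.
    return round(order_total * 0.9, 2) if order_total > 1000 else order_total

def characterize(inputs):
    # Sweep a wide range of inputs, edge cases included, and report any
    # divergence. Each divergence is either a bug in the rewrite or an
    # undocumented rule in the original.
    return [(i, legacy_discount(i), new_discount(i))
            for i in inputs
            if legacy_discount(i) != new_discount(i)]
```

The value isn’t in the assertions themselves; it’s that the sweep of inputs flushes out the undocumented rules before the users do.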


My team built a subsystem in C# to replace a gnarly VB6 COM component, accurately code-named Marquis (de Sade; I’m not making this up), that was partially written by one of the company’s founders.  My early investigation of that code spawned a longish rant on absurdly large classes and methods.  Just to put it into perspective, one giant VB6 class module in the old world with a “Select Case from Hell” section literally became about a hundred little C# classes.  I’m guessing that when the original developers left the company, their keyboards exhibited excessive wear and tear on the “CTRL”, “C”, and “V” keys from all the copy/paste action.


The absolute worst development activity has to be data migration from one database structure to another.  It never goes smoothly, but you learn a *lot* about how the existing system actually works by getting your fingers burned on legacy data.  We’re reusing the database, but we’re translating the old configuration files to the new system.  Inevitably we’ve found a lot of obscure, undocumented quirks in the old configuration format that are causing me some heartburn.  What’s really been fun is that asking the three business experts on the existing system the same question leads to three different answers about how it’s supposed to work.


Yesterday and this morning I’ve been fixing “defects” in the new system that arose from regression testing that compares the results from the old system to the results from the new.  Some of the defects are just edge cases, so nobody gets too upset [Edit:  I don't mean to imply that the bugs aren't serious, just that these bugs are requirements that slipped through the net and odd combinations, not purely developer error.  I feel bad about bugs that are just a failure to unit test or developer sloppiness].  Several of them have been nothing but recreating a quirk in the old system.  In at least one case I’m pretty sure that we’re recreating a bug in the original system, dammit!  The original system had a very permissive XML format for rules configuration.  I’m having to recreate how the old system behaves with what is supposed to be invalid configuration (ugh).

The one thing that absolutely set me off yesterday was finding out that a legacy database table did not have any unique constraints.  Silly me for assuming the tables were designed intelligently.  Lesson re-learned: don’t play with strange databases.  Apparently an unhealthy obsession with T-SQL coding doesn’t go hand in hand with a healthy regard for database niceties like primary keys and unique constraints.  Both the old and new code simply look at the first row that comes up in the query.  Essentially, the old code uses a randomly selected row as the “official” data for a validation rule.  Strangely enough, this is generating odd bugs.  Go figure.  Since we can’t go clean up the data, we’re having to create an algorithm to pull the correct validation row out of a bag of rows.  So, just how do you recreate effectively indeterminate behavior?
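The least-bad answer I know of to “which duplicate row wins?” is to impose an explicit, deterministic tiebreaker instead of taking whatever the query happens to return first.  A sketch of the idea (Python, with hypothetical column names; the real tiebreaker columns depend on what the legacy table actually has):

```python
def pick_validation_row(rows):
    """Choose one canonical row from a bag of duplicates for the same rule.

    rows is a list of dicts sharing the same rule key. Preferring the most
    recently modified row, with the lowest surrogate id breaking any
    remaining tie, doesn't make the data any less wrong -- but it does make
    the selection repeatable from run to run, which is the whole point.
    """
    if not rows:
        return None
    return min(rows, key=lambda r: (-r["modified_on"], r["row_id"]))
```

The same trick works in the SQL itself: a `TOP 1` with an explicit `ORDER BY` on the tiebreaker columns turns “random row” into “documented, repeatable row.”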


Of course what I’m really whining about today is that the edge cases spring a leak in what was a nice, tidy Chain of Responsibility pattern abstraction.  Oh well.
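For anyone who hasn’t met the pattern, the basic shape is below (a Python sketch with made-up handler names; the real chain was C#).  The abstraction stays tidy exactly as long as each link can decide “handle or pass” in isolation; the edge cases spring the leak when a decision needs knowledge of two links at once.

```python
class Handler:
    # Each link either handles the request itself or passes it down the chain.
    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, request):
        if self.can_handle(request):
            return self.process(request)
        if self.successor is not None:
            return self.successor.handle(request)
        raise ValueError(f"no handler for {request!r}")

class ExactMatchRule(Handler):
    # Handles only the one case it knows about.
    def can_handle(self, request):
        return request == "known-case"

    def process(self, request):
        return "exact"

class FallbackRule(Handler):
    # Terminal link: handles anything that made it this far.
    def can_handle(self, request):
        return True

    def process(self, request):
        return "fallback"

chain = ExactMatchRule(successor=FallbackRule())
```

The appeal is that adding a new rule means adding a new link, not touching the others, which is precisely what the edge-case quirks refuse to respect.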


[Edit:  Removed some offensive wording.  My bad] Today’s exercise in silliness involved a NetMeeting session to write NFit tests for the edge case bugs.  The meeting went just fine and the NFit fixtures help to define the expected behavior, but when I looked at the list of screen names I saw “The Dragon Reborn”, “Yoda”, and “UmpaLumpa.”  Yes, we’re geeky.


On a re-write death march several years ago, a business analyst told me that she had completed the business requirement specification for the replacement in only a couple of weeks.  I think I hurt her feelings when I laughed at her.  I explained that there were a tremendous number of hard-coded business rules lurking in the code that didn’t appear in any of the old documentation.  The project team had had a very serious morale problem over the years and suffered a couple of rounds of complete team turnover.  Needless to say, there wasn’t any tribal knowledge to fill in the blanks of the incomplete documentation.  We either had to recreate all the functionality or do a lot of serious analysis to determine which rules were dead code (I suspect most of the hard-wired exception cases were long since obsolete).  The project was cancelled before we got a chance to screw it up.


My first client engagement as a consultant was one of the oddest projects I’ve ever been on.  The client (a diehard J2EE shop) had just purchased a homegrown ASP.NET application from one of their partners for an exorbitant fee.  The client had purchased the application sight unseen.  Our job was to take the purchased system and do whatever it took to put it into production.  That was about the time we discovered that the ASP.NET application could only support one user at a time*.  “Whatever it took” turned out to be reusing the custom graphics and rewriting everything else.  The idiots who purchased the system couldn’t admit to their management that they had screwed up and had to do a rewrite, so we couldn’t get any real support for requirements analysis or testing.  Our entire requirement specification was to “make it work like XYZ.”  One little problem: XYZ was a prototype that had never been finished.  Major modules didn’t function at all.  When we would reply that “XYZ didn’t do anything yet,” we were told to do what the mocked-up screen seemed to be doing.  The project was later declared a failure and our team was the scapegoat.  Sigh.


For that matter, every single project I did as a consultant was wacky.  Three projects, three strong teams of developers, three unsatisfying project outcomes.  One project was unnecessary architectural refactoring and the other two were severe cases of requirements mismanagement.


*Instead of using the nice built-in IPrincipal/IIdentity security stuff in .Net, they had stored their own custom user object in a static field on the Global class for the web application.  Any time a new user logged on, they effectively kicked everyone else off.  True story.  You can’t make this stuff up.
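To spell out why that’s fatal: a static field lives once per application process, not once per user, so every login clobbers the previous one.  The same mistake, sketched in Python (class and function names are illustrative, not the partner’s actual code):

```python
class GlobalApp:
    # One process-wide slot for "the" user -- shared by every request
    # hitting this web application, no matter who they are.
    current_user = None

def login(username):
    # Every login silently overwrites whoever was there before.
    GlobalApp.current_user = username

login("alice")
login("bob")  # alice has now effectively been kicked off
```

Per-user state belongs in per-request or per-session storage (in ASP.NET, the session or the `IPrincipal` attached to the request), never in process-wide statics on a web server.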



About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
This entry was posted in Ranting.

  • http://www.redgum.com.au Daniel F

    Oh man, that sounds familiar. We had to rewrite a very old legacy system that was written in Clipper, without the source code, the original developers, or anyone who really knew how the system was supposed to work. One of the reports we had to recreate was a fairly impressive-looking list of numbers, but they meant essentially nothing. No one really knew how they were generated; the only things we were told were “make it like the old one” and “no, that’s wrong, the numbers are different to the old one.” I’m fairly sure we ended up fudging the report so it had the same bugs as the old system.
    The other fun part of that project was getting the users to accept the new system. We were on a massively tight timeframe to get around Y2K issues, so we put in an interim Access system that was ugly as hell, but did the job till the real system was written. Problem was, the ugly, ugly MS Access system was better than the original system, and the users got so used to it that whenever we gave them the pretty new system to test, they hated it and wanted it more like “the new one.” Ya just can’t win with some projects.

  • windy

    Me too! I’m leaving a job because of the attitude that everything in the legacy is ‘self-documenting’ and works. Well, if it works so well, why am I here?

  • BlackTigerX

    I have re-written several old FoxPro/DMAC applications in Delphi. Now, that’s fun!!