What makes a system hard to work with?

I’m getting started (after I finish 2 others that are already late) on a new article roughly titled “What does ‘Good’ Design Mean?”  My whole thesis for the paper is really a full-blown refutation of the “quality doesn’t matter” folks.

I’ve moved around professionally quite a bit over the last 6-7 years, and I’ve gotten to work in quite a few different codebases.  What I’ve found is that some of those codebases were very easy to work with, and we were able to add or change functionality in those systems with great success and minimal friction.  Other codebases clearly displayed what Uncle Bob calls “Code Viscosity.”  Changing the code was hard and laborious.  Verifying the code changes was nasty.  Progress was fleeting.  I’ve even seen the exact same team of people succeed with one codebase and grind to a halt with another.  My point here is that I definitely believe there are certain “qualities” of a system’s design that either hinder or accelerate the progress of the system.  Software quality isn’t strictly an end in and of itself; it’s also the means of delivering value to the client.  I’m going to define “quality” in software design and code as the attributes that enable us developers to be efficient.

In rough order of severity, here are the things that I’ve found greatly retarded my teams’ progress.  I’m not counting people issues or artificial barriers from stupid processes or overly bureaucratic configuration management.  I’m also not counting requirements churn, but I’d argue that eliminating the problems below would do a lot to mitigate the damage done by requirements churn.

  1. God/Blob classes and bloated methods.  Four years later, I still think that long methods and classes are evil.  We talk about patterns and SOLID principles and TDD and DSL’s and whatnot, but the single biggest problem in the code I’ve seen is the usage of very messy procedural code.  Big classes that do too many unrelated things are also a symptom of procedural thinking.  I should qualify this by saying that procedural programming doesn’t automatically mean that it’s bad, but really bad procedural code seems to be the norm out there in the wild.  The structured programming guys of yore wouldn’t be happy with a lot of the code I’ve seen either.
  2. Poorly organized code.  It’s always going to take a while to understand a code structure that’s unusual or new to you, but as long as a codebase is consistent in its internal rules, you’re going to be fine eventually.  A codebase without internal consistency is a disaster.
  3. Over-engineering.  Bad abstractions that flat out don’t work, or that add extra weight both to writing new code and to understanding the system as it is.
  4. Inadequate or just flat out no build automation.  You can’t work on code until you can make it run on your box and actually test it.  Chad says it best here.  You can’t efficiently extend a system either if it takes days or even weeks just to migrate the code from development to testing to production.
  5. Tight coupling between disparate elements of the code structure.  I.e., changing one class inevitably breaks something else, or the fact that you can’t simply work in one area of the code at a time.  I want to work with business rules without wading through thread management code or UI code at the same time.  That isn’t just a testability issue, it’s also an understandability problem.  There’s also a significant tax to tightly coupling your application’s behavior to infrastructure concerns.
  6. Code that is hard to test.  Sometimes it’s just a side effect of the problem domain, other times it’s caused by poor cohesion and coupling qualities.  Sometimes the problem is just that it takes way too much work just to set up the system to test the scenario you’re interested in at that very moment.  Regardless of how you feel about or use TDD/BDD or even just plain unit testing, you have to have some sort of feedback mechanism to know that what you did actually worked without unforeseen side effects – even if it’s just an easy way to stuff some data into the database and hit F5 to look at the UI.
  7. Slow builds, slow compile times, and slow tests.  On my deathbed I really don’t want to tally up the time I’ve spent waiting for a build to complete.
  8. Crazy-ass database schemas.  See #6.  I’ve been absolutely hamstrung by database models that were almost impossible to load data into without using the existing UI.
  9. Putting important behavior in unusual or out of the way places.  Business logic in a database trigger.  Doing important things in a static initializer.  Programming in configuration files.  I don’t really need to explain this one, do I?
  10. No automated tests to know when you’ve broken existing code.  Related to #6 obviously, but I’m still calling it a separate problem.  The presence of #10 virtually guarantees the presence of #6 (an inconvenient fact for the “TAD” folks).
  11. Duplication.  Any time changing one part of the code almost always requires a second or third change in another area of the code, you’re in for trouble.  Duplication hinders change.
  12. Really nasty inheritance chains in the class structure.  There’s no possible way to make coupling any tighter than an inheritance relationship.
  13. Crazy attention paid to micro optimizations that cause more harm in code obfuscation than value in performance gains.  You know the line about premature optimization and the root of all evil.  My experience has been consistent with that old saying.
  14. Bad or misleading variable names.  I think most developers of any experience level do fine with this, but I did hit a system a couple years back that killed me with really bad variable names.  Bad variables names + very long methods and big classes is the kiss of death.
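Several of these smells are easiest to see side by side.  Here’s a quick sketch of #5 and #6 (in Python for brevity rather than .NET; the classes and the discount rule are invented for illustration): the first version welds a business rule to infrastructure, while the second isolates the rule behind a seam so it can be exercised without a database.

```python
# Tightly coupled: the business rule is welded to infrastructure.
# You cannot exercise the discount logic without a live database.
class OrderServiceCoupled:
    def total_due(self, order_id):
        import sqlite3
        conn = sqlite3.connect("orders.db")  # infrastructure concern
        row = conn.execute(
            "SELECT amount, is_preferred FROM orders WHERE id = ?",
            (order_id,)).fetchone()
        amount, is_preferred = row
        # business rule buried in the middle of data access
        return amount * 0.9 if is_preferred else amount


# Decoupled: the rule is a pure function; the repository is a seam.
def amount_due(amount, is_preferred):
    """Business rule only -- trivially testable in isolation."""
    return amount * 0.9 if is_preferred else amount


class OrderService:
    def __init__(self, repository):
        self._repository = repository  # any object with a find() method

    def total_due(self, order_id):
        amount, is_preferred = self._repository.find(order_id)
        return amount_due(amount, is_preferred)
```

In a test, the repository can be replaced with a stub, so the pricing rule runs in milliseconds with no database at all.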


Okay, by my count, software design can negatively or positively impact almost every item on the list.  Numbers 1, 2, 3 (usually a result of a designer having more intelligence and creativity than wisdom or experience; see my “second system syndrome” post for an example), 5, 6, 7 (definitely related to design decisions), 8, 9, 11, and 12 are completely a matter of the quality of the system’s design.  #4 is impacted somewhat by the design.  #10 can be made much cheaper with a good design, and far too expensive with a poor one.  #13 is both a problem in its outcome and an opportunity cost, as it usually results in wasted effort.  #14 is related to having quality coding standards.


Note that I didn’t mention anything about documentation.  I don’t think that good code requires much documentation (“Much” > none, and I think that a functioning build script and well written unit tests do a lot of good for documenting a system anyway), and all the documentation in the world won’t help with very bad code.


I have to save the real content for the actual article, so I’m more interested now in *your* comments.  I’ll write the obvious follow up in a couple days.

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
  • http://codewright.blogspot.com Kelly S. French

    Great piece. The difficulty we have to face is the human factor. The resistance to an absolute standard can be chalked up to developers not wanting their own code evaluated against some metric. Sometimes it is fear of being judged; sometimes it is a rational argument about the validity of the standard itself. Since software is compared with construction all the time, realize that not all buildings even attempt to be examples of “good design”: think nondescript office buildings or row houses.

    The continual striving to better oneself is a mark of a true Codewright and I believe Mr. Miller is one of those craftsmen.


  • http://www.thesoftwaredevotional.com/2009/04/zen-and-the-art-of-loose-coupling.html Michael S. Jones

    This is a good list of what not to do – I think all of us have seen and experienced these issues far too often. I referenced your blog in a recent post at The Software Devotional (http://www.thesoftwaredevotional.com).

  • http://2gbmemory.net Joannah

    I recently came across your blog and have been reading along. I thought I would leave my first comment. I don’t know what to say except that I have enjoyed reading. Nice blog. I will keep visiting this blog very often.


  • http://lunaverse.wordpress.com Tim Scott

    To your point about documentation, I have heard it said that documentation is a smell when it speaks to “what” rather than “why”.

  • Moges

    I was working on an existing web application, adding a new module and modifying existing ones. What I found most difficult maps to #3 and #12. Over-engineering: no standard pattern was implemented, everything is nested and tightly coupled, and the business logic is stored and implemented all over the place: in the database, configuration files, business classes, etc.

  • http://www.mycomputersky.com mycomputersky.com

    Thank you for submitting this cool story

  • http://bahadorn.blogspot.com Bahador

    Thanks for writing this great piece.
    About #7, at what point do you consider a complete build to be slow?
    Do you consider a ~50 second build (a web project with 12 projects) a slow one?
    I just want to get a feel for how slow you’re talking about, and whether or not our current build speed is as good as it gets with VS2008.
    Should we consider merging some projects into others?

  • http://hackingon.net Liam McLennan

    I’ve taken the first step to recovery from #1.


  • http://math.com Peter Goras

    Abbreviations are a killer, and working for a company that had its own ‘Abbreviation Dictionary’ has left me mentally scarred.
    I guess it’s #14, but bad enough to have its own category.

  • Steve Py

    heh, I felt like setting those up as a BINGO card but then found I’ve run into just about all of them. :)

    I’ve seen cases where mixtures of certain points make the combination more severe than the sum of its parts. Recently it was a mix of #12 + #14. It makes the “code” rather difficult to understand, and you easily get into a situation where the code becomes misleading. The single saving grace was that there was fairly extensive automated unit testing, though those tests took around 3-4 hours to run. :(

    I would comment that some documentation is important to avoid wasting time. Code should be self-documenting, but the one rule I live by is that code tells you what it *does*. Documentation (e.g., around what unit tests are testing, or around any “clever” code) describes what the code *should do*. I’ve wasted too many hours attempting to refactor a mess that in turn served some purpose whose “why” existed solely in another developer’s head. The result is broken unit tests if you’re lucky, or a bug if you’re not.
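That “what vs. why” distinction fits in a few lines (a hypothetical sketch; the function, the vendor agreement, and the incident number are all invented for illustration):

```python
def retry_delay(attempt):
    # WHY: the payment gateway rate-limits bursts; exponential backoff
    # capped at 30 seconds was agreed with the vendor after incident
    # #1423. Don't "simplify" this into a linear delay -- a linear
    # delay is what caused the incident.
    return min(2 ** attempt, 30)
```

The code already says *what* it does (exponential backoff with a cap); only the comment preserves *why* it must stay that way.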

  • http://computeristsolutions.com/blog/ josh

    Great post. Covers some things I’m working on getting to in a “{X} for humans” series, so I linked this.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    I absolutely agree. We’ve got a custom TeamCity build for pushing the latest to our demo server, and it’s paid for itself many times over.

  • http://martinsantics.blogspot.com MartinEvans

    I’d like to add to #4 a little. As well as a quick “path to live”, a quick “path to demo” for the sales team is also a must. Setting up data and scenarios for demonstrations can be as big a theft of time as anything else.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    Pick a different language — oh right, you said to take that off the list.

    I’m sorry, I don’t have anything specific to say. I will say that some of the most useful things I’ve done over the years have just been to create custom data loader thingies to make testing and simulation easier and cheaper. If you need to build a tool that makes you faster and saves time in the end, then that’s just what you need to do.
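One common shape for that sort of data loader is a test-data builder with sensible defaults, so a scenario only spells out what it actually cares about. A minimal sketch (in Python for brevity; the entity and its fields are invented):

```python
class CustomerBuilder:
    """Test-data builder: every field gets a sensible default, and
    each modifier returns self so calls can be chained."""

    def __init__(self):
        self._data = {
            "name": "Test Customer",   # default values a test can ignore
            "region": "TX",
            "is_preferred": False,
        }

    def preferred(self):
        self._data["is_preferred"] = True
        return self

    def in_region(self, region):
        self._data["region"] = region
        return self

    def build(self):
        return dict(self._data)  # copy, so builders can be reused


# a scenario setup reads like a sentence:
customer = CustomerBuilder().preferred().in_region("CA").build()
```

The same builder can feed a unit test, a demo database, or a simulation, which is where the tool starts paying for itself.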

  • ChrisWright

    If you’re using a programming language that is not terribly popular (or are restricted from using external libraries), you’re sometimes left with a decision: either implement your own tools to allow testing, or go without testing. This brings up one of Ayende’s posts: http://ayende.com/Blog/archive/2008/03/11/Field-Expedients-or-why-I-dont-have-the-tools-is.aspx

    Even if you implement your own tools, there might still be a lot of friction for testing. For instance, mock objects in C++ ( http://mockpp.sourceforge.net/ ) are high-friction: you have to implement your class once in real code, once in a header, and once for your mock objects.

    Any advice to situations like this? Besides getting a better programming language, I mean.
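For what it’s worth, the hand-rolled test double ChrisWright describes is only a few lines in a duck-typed language (a hypothetical sketch; the gateway and its methods are invented, and in C++ each class here would also need the matching header declaration he mentions):

```python
class PaymentGateway:
    """The real collaborator -- talks to a live service."""
    def charge(self, amount):
        raise NotImplementedError("not callable in a unit test")


class RecordingGatewayFake:
    """Hand-rolled test double: records calls instead of charging."""
    def __init__(self):
        self.charges = []

    def charge(self, amount):
        self.charges.append(amount)
        return True


def checkout(gateway, amounts):
    """Code under test: pushes each amount through the gateway seam."""
    return sum(1 for amount in amounts if gateway.charge(amount))


fake = RecordingGatewayFake()
assert checkout(fake, [10, 20]) == 2
assert fake.charges == [10, 20]
```

The friction in low-tooling environments is exactly that every collaborator needs one of these written and maintained by hand.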