Deconstructing “The Simplest Thing That Could Possibly Work”

I was reading Steve Eichert’s post on “The Simplest Thing != The Easiest Thing” and it struck a nerve with me.  The old KISS philosophy is formalized in Extreme Programming and its ilk as “The Simplest Thing That Could Possibly Work,” closely followed by “You Aren’t Gonna Need It.”  It’s
a great philosophy, but it’s awfully hard to follow when the simplest
thing isn’t obvious and the definition of “work” is subject to debate.  Following
a literal or simplistic interpretation of these principles can lead to
bad things.  I struggle quite a bit with these principles, and
expect to for the rest of my career.


Simple isn’t a License to Hack Away

One of the biggest questions with “The Simplest Thing” is what, exactly, counts as simple to you.  I like Martin Fowler’s discussion in “Is Design Dead?”  Kent Beck says a simple system:

  • Runs all the Tests
  • Reveals all the intention
  • No duplication
  • Fewest number of classes or methods

An anti-pattern that I’ve commonly seen in Agile development is to focus solely on the first bullet point and ignore the rest.  Duplicating code or just writing more code than is necessary is a cause of project inefficiency.  Spotting duplication isn’t always obvious.  It’s pretty easy to spot copy/paste code, but harder to see the patterns that are emerging in your code.  I
strongly feel that judicious usage of abstraction and design patterns
can drastically reduce the amount of code and make the code more
intention-revealing.  My opinion is that copious amounts of “IF/THEN/ELSE” constructs in code are nasty.  It’s easy to write this code (at least for a little while), but not simple to read and extend.
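To sketch what I mean (the shipping rules and names here are invented purely for illustration), compare a growing conditional with letting each case carry its own intention:

```python
# Style 1: dispatch on a type code.  Easy to write, but every new
# shipping type means editing this one ever-growing conditional.
def shipping_cost_conditional(order_type, weight):
    if order_type == "ground":
        return 5.0 + 0.5 * weight
    elif order_type == "air":
        return 12.0 + 1.5 * weight
    elif order_type == "overnight":
        return 25.0 + 2.0 * weight
    raise ValueError(order_type)

# Style 2: each pricing rule gets a name and a home of its own.
class GroundShipping:
    def cost(self, weight):
        return 5.0 + 0.5 * weight

class AirShipping:
    def cost(self, weight):
        return 12.0 + 1.5 * weight

class OvernightShipping:
    def cost(self, weight):
        return 25.0 + 2.0 * weight
```

Both compute the same numbers, but the second version reveals its intention, and extending it means adding a class rather than threading one more branch through a conditional.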


One attitude is to very purposely write code as crudely as possible.  Only after a pattern is obvious do you make any kind of refactoring to introduce an abstraction to eliminate rework.  The thinking is that any kind of premature abstraction is more harmful than writing a lot of simple code.  I think that this ends up being inefficient because you end up writing more code than you need to.  Introducing an abstraction late leads to rework that could have been avoided.  Something to keep in mind is that there really isn’t any such thing as a large refactoring task.  That’s just redesign or rework.  I
say you should pay attention to your code for cases where an
abstraction can help you and apply them as soon as you’re sure that
they’re necessary.  I guess it’s just a judgment call, but I tend to be more aggressive than some.  The
automated refactoring tools are changing this equation by making it mechanically simpler to create abstract superclasses or interfaces after the fact, and cheaper to do continuous design.


In case you haven’t caught this yet, I hate it when people equate simple code with writing butt-ugly code that doesn’t use anything more complicated than “IF/THEN.”


What do you think is Simple?


An issue that every
developer will run into during their career is that a technique or
trick that’s simple to you is incomprehensible to your peers, or vice
versa.  Case in point: I don’t have any significant background in Perl or Unix/Linux command line utilities like grep.  I never think to reach for regular expressions to solve string problems because they seem like black magic to me.  I’ll go a longer way around to solve the problem.


I’ve worked with
plenty of developers that were either ignorant of design patterns or
just extremely hesitant to use any kind of OO abstraction.  On a project last year I used a Composite* pattern to solve a related series of stories.  I
structured the database tables so I could easily use a built in Oracle
function for retrieving the “n-level” object tree by searching for any
leaf object.  I thought the solution was fairly elegant and certainly didn’t require very much code.  Another developer coming off a vacation threw a fit over the design and refused to have any part of it.  He even went so far as to say we should recode the whole thing, even if it meant writing more code.  In
the end it turned out that a lot of his objection was because he had no
familiarity with the Composite pattern and not much comfort level with
any kind of database coding.
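For the curious, here’s roughly what a Composite looks like, sketched in Python with invented names (the Oracle side of my actual solution was just the database’s built-in hierarchical query doing the tree retrieval):

```python
class Issue:
    """Leaf node: a single work item."""
    def __init__(self, name):
        self.name = name

    def count(self):
        return 1

class IssueGroup:
    """Composite node: holds leaves and other groups alike."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = list(children or [])

    def add(self, child):
        self.children.append(child)

    def count(self):
        # Treat every child uniformly, whether it's a leaf or another composite.
        return sum(child.count() for child in self.children)
```

Client code can walk an “n-level” tree without ever caring how deep the nesting goes, which is exactly why the pattern kept the code small.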


I frequently criticize what I consider to be foolish or wrong-headed usages of stored procedures for business logic.  On
the other hand, when you need some sort of complicated data correlation
I’d much rather use the database engine that’s been optimized for 20+
years to do just that than try to write my own code (LINQ might change
this equation).  I’ve seen developers write
several hundred lines of procedural code in a middle tier language to
avoid writing a dozen lines of stored procedure code. 
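Here’s a contrived illustration of that trade, with SQLite standing in for the real database engine and the table names invented:

```python
import sqlite3

# Made-up data: customers and their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")

# A few lines of declarative SQL let the engine do the correlation
# it was built for, instead of hand-rolled nested loops in the middle tier.
rows = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
```

The procedural equivalent in a middle-tier language means loading both tables, looping, grouping, and summing by hand, and it only gets worse as the correlation gets more complicated.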


Having a limited box of tricks, be they language idioms, technologies, or design patterns, is a lot like my tennis game.  My backhand swing is decidedly lacking, so I’ll run farther just to hit a forehand instead.  Hitting the forehand is the “simplest” solution for me, but if my backhand didn’t suck I could make the shots with less work.  It’s obviously not possible to know everything, but it’s an awfully good idea to not be too specialized.


The question I’ve
wrestled with over the years is whether or not you should change your
solution to accommodate other developers’ comfort and knowledge level
when it means writing more code.  I think the answer is the dreaded “It Depends.”  Personally, I’m usually pretty happy to have another developer show me an easier solution to a task.


What exactly does “Work” mean?


Here are three examples from an XP project last year where I wrestled with the “Simplest Thing” philosophy.  I definitely think I was right in one of these cases, wrong in another, and I’m still not sure about the third.  I’ll leave it to you to decide which one is which.  The
project team had a couple of vociferous XP zealots that cared a great
deal about doing things the XP way – so I was bludgeoned with XP
rhetoric and theory every single day.


I had a requirement in the system for a pessimistic offline lock on a business entity in a workflow system to prevent one user from editing an issue that was being addressed by another user.  One
of the technical issues with this pattern for concurrency checking is
what to do when the locks aren’t released in a timely fashion.  In this case a user could lock the issue and then walk out the door to go on a week long vacation.  When
I suggested that we build some sort of mechanism to release aged
locks, the PM and analysts cried YAGNI and told me that the customer
hadn’t asked for that functionality.  I felt that you most certainly did need some mechanism to release the lock.  Yes,
the pessimistic locking code would have passed all of the acceptance
tests, but in production it could have easily caused problems.
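A minimal sketch of the aged-lock idea, with an assumed 30-minute timeout policy and invented names (the real mechanism would live in the database rather than in memory):

```python
import time

LOCK_TIMEOUT_SECONDS = 30 * 60  # assumed policy: locks older than 30 minutes expire

class PessimisticLock:
    """In-memory sketch of a pessimistic offline lock with aged-lock release."""

    def __init__(self):
        self._locks = {}  # entity id -> (owner, acquired_at)

    def try_acquire(self, entity_id, user, now=None):
        now = time.time() if now is None else now
        held = self._locks.get(entity_id)
        if held is not None and now - held[1] < LOCK_TIMEOUT_SECONDS:
            # Lock is live: only the current owner may proceed.
            return held[0] == user
        # No lock, or an aged one left behind by the vacationing user:
        # take it over instead of blocking everyone for a week.
        self._locks[entity_id] = (user, now)
        return True
```

The timeout check is a handful of lines, but without it the lock that “passed all the acceptance tests” turns into a support call the first time someone locks an issue and walks out the door.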


Early on we worked under the theory that we would just pull user roles and role memberships out of Active Directory.  We’re
always told that AD is more efficient for reads and it seemed
silly to duplicate the user information and administrative
screens somewhere else.  The code did work, but it was causing developer pain and environmental dependencies on AD that were inconvenient.  At
some point we finally bit the bullet and created a little background
task to write the role membership information to a database table where
it was easier to deal with.  In an interview once I was challenged on why I didn’t use AD instead of the database for user permissions on a project.  I responded with something like “Yeah it might be faster, but every developer understands SQL.  How many developers know how to query AD?”  By the end of our AD adventure, I was right back into the “just use a database” camp.  I’m
not sure I really could have known upfront that using a database would
be simpler than AD turned out to be, but I bet I could have guessed.
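The background task amounted to something like this sketch, with the directory query stubbed out and SQLite standing in for the real database (table and column names invented):

```python
import sqlite3

def sync_role_memberships(conn, fetch_from_directory):
    """Copy role memberships from the directory into a plain table.

    fetch_from_directory stands in for the real Active Directory query;
    once the rows land here, everything downstream is ordinary SQL.
    """
    memberships = fetch_from_directory()  # [(user_name, role_name), ...]
    conn.execute(
        "CREATE TABLE IF NOT EXISTS role_membership (user_name TEXT, role_name TEXT)")
    conn.execute("DELETE FROM role_membership")  # full refresh on every run
    conn.executemany("INSERT INTO role_membership VALUES (?, ?)", memberships)
    conn.commit()

# Demo with a stubbed directory; the real task ran on a schedule.
conn = sqlite3.connect(":memory:")
sync_role_memberships(conn, lambda: [("alice", "approver"), ("bob", "viewer")])
```

The sync itself is tiny; the payoff is that developers and environments no longer need a live AD connection just to answer “who’s in what role?”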


Users edited a business entity through CRUD screens in a WinForms client that called web services to do the actual persistence.  I made the statement that we needed to perform validation on the server side.  An analyst jumped down my throat, screaming that the user experience would suffer if we did validation on the server.  A
couple days later I managed to get a few words in edge-wise to explain
that the validation could and probably should be done on the client
also, but that we absolutely had to do the validation at the server.  The
web services were only going to be used by that particular WinForms
client, but they were publicly exposed web services throughout the LAN.  Maybe
you call YAGNI on that, but I still think that you shouldn’t have left
those web services exposed without validation and security
authorization logic.  In our case we were able to do the validation both client side and server side with shared code, so no big deal.  The simplest thing to do would have been to make the web services dumb, but does that really work?  See why I don’t think following the “Simplest Thing…” is all that simple?
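The shared-code arrangement can be sketched like this (field names and rules invented for illustration):

```python
def validate_issue(fields):
    """Shared validation rules.  The client calls this for fast feedback,
    and the web service calls the very same function, so the rules
    can't be bypassed by any other caller on the LAN."""
    errors = []
    if not fields.get("title"):
        errors.append("title is required")
    if len(fields.get("description", "")) > 2000:
        errors.append("description is too long")
    return errors

def save_issue_service(fields, persist):
    # Server-side entry point: validate again before persisting,
    # because the service is reachable by more than the one WinForms client.
    errors = validate_issue(fields)
    if errors:
        raise ValueError("; ".join(errors))
    persist(fields)
```

With one function serving both sides, the “validate twice” argument costs almost nothing, which is why it was no big deal in practice.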


I think the key is to focus just as much on what “work” means as on what “simple” means.  It’s easy to get tunnel vision around just making the tests pass.  The issue for me is that the code can’t necessarily be said to “work” just because it passed all of the tests that you thought to write.  You always have to be thinking about non-functional requirements and the impact of your code.  The
requirements are typically defined by the customer or requirements
analysts, but it’s our job as developers and architects to keep the
rest of the team aware of the technical and production ramifications of
the requirements.  Things like supportability, maintainability, and scalability are our responsibility.  The customer isn’t going to be aware of these issues.  It might be that the answer is just to make sure that the production support folks are included in the projects as “customers.” 



The Hippy Corollary


I don’t know the
exact origin of this one, but I had a chance to pair a bit last year
with a very experienced XP developer named “Hippy.”  He
told me that you mentally add another clause to make the rule “the
simplest thing that could possibly work – that you can stand.”  It’s one thing to speculate on what you’ll need later and add things or hooks to the code that turn out to be wasted effort.  It’s another thing to create code that you know will cause problems later.  I’m
starting a little user story this afternoon that might be most easily
coded by adding an “after update” trigger to a database table.  Technically I’d say that that’s the “Simplest Thing” to do (at least mechanically), but it feels so wrong that I’m not going to do that.  I know from experience that database triggers are hidden maintenance traps for later developers.  Whatever path I do take won’t be the simplest thing, but it won’t keep me up at night feeling bad about the code.  I like the way Bob Koss put it in Refrigerator Code.  If you wouldn’t want another developer to see your code then the code is wrong.  The most effective judgment about the quality of your code may be how easy it is for the next guy to work with your code.
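To make that concrete with an invented audit-trail example (SQLite standing in for the real database): instead of hiding the side effect in an AFTER UPDATE trigger, put it in plain sight next to the update itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE issues (id INTEGER, status TEXT);
    CREATE TABLE issue_audit (issue_id INTEGER, status TEXT);
    INSERT INTO issues VALUES (1, 'open');
""")

def record_audit(conn, issue_id, status):
    conn.execute("INSERT INTO issue_audit (issue_id, status) VALUES (?, ?)",
                 (issue_id, status))

def update_issue(conn, issue_id, status):
    conn.execute("UPDATE issues SET status = ? WHERE id = ?", (status, issue_id))
    record_audit(conn, issue_id, status)  # explicit, where the next developer will see it

update_issue(conn, 1, "closed")
```

A trigger would do the same bookkeeping, but invisibly; six months from now, whoever debugs a mysterious extra write will thank you for the explicit call.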



* The “Composite”
pattern is a bit complex, but it occurs so often in software
development that you almost have to be familiar with it.  The
XML DOM model, the DHTML DOM model, the file system, any TreeView
control, and WinForms controls come to mind right off the bat.  You
might not build your own composite class structure very often, but when
you do need it, the composite can make otherwise difficult coding
simpler.  Either way, you will have to work with a composite pattern of some kind.

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer.
  • Dave Nicolette

    The examples you give have something in common – it sounds as if some people got carried away with ideas like “simplest possible” and forgot that XP doesn’t absolve us of using sound software engineering principles. Of course the customer didn’t specifically ask for a mechanism to deal with aged locks! Why would they? A business customer wouldn’t even be aware of such details. It’s the responsibility of software engineers to know about those things and handle them. It’s implied. It’s expected. It’s part of the definition of “works.”

  • Robert

    If you read the miller blog daily, you would have read the great american programming novel by the end of the year! I don’t know how you find the time for it. I don’t always agree with you, but I love your insight and the way you always manage to share your experiences in an informative and structured format. Keep it up.

  • RAE

    Man, I struggle with this one on a daily basis; both within myself, and within my organization.

    Recently I was tasked with adding some new features to a web service I am responsible for maintaining. The service needed some additional information to be present in the xml structure that is passed to the web service. To me, the simplest thing to do would be to add new elements to the xml for the necessary information (especially since the xml structure is passed in as a string so adding elements would not change the web service interface at all). However, I was EXPLICITLY instructed to NOT add elements to the schema, but instead, to overload the meaning of some of the existing elements, as that would be “simpler”.

    Needless to say, I do not consider overloading the semantic meaning of anything to be a simpler solution. But what can you do when two people disagree on the meaning of “simpler”?