Lots of Provocative Thoughts from Sam Gentile

Our CodeBetter-mate Sam Gentile gave a truly great talk this evening at our ADNUG meeting.  He covered a remarkable variety of topics related to Extreme Programming practices and tooling for .Net.  I had to leave early to take a sick child off my wife’s hands, but even so I jotted down about a dozen things I wanted to blog about.

Sam stated that most of his unit test cycles last no more than ten minutes from starting to write a unit test to seeing it pass.  If your unit test cycles are regularly taking longer than ten minutes, you probably need to rethink your approach and look for a different way.  Long unit testing cycles and/or leaning on the debugger quite a bit are indications that your code may not be cohesive or loosely coupled.  Being able to get into a rhythm where you write fine-grained unit tests in tight cycles is a good sign that your code and design are on the right track.  I’ve started work on “Jeremy’s Third Law of TDD:  Test small before testing big” that’ll expound on this topic, but for now my emphatic advice is to strive for fine-grained unit tests, both to make writing working tests easier and to enforce good coupling and cohesion.
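
To make “fine grained” concrete, here’s a minimal sketch of the kind of test I mean. It’s in Python purely for brevity (the talk was about .Net, but the idea is language-agnostic), and every name in it is hypothetical:

```python
# A fine-grained unit test exercises one small, cohesive behavior in isolation,
# so the write-test/make-it-pass cycle stays short. Names here are hypothetical.

def discount(subtotal: float, is_member: bool) -> float:
    """Apply a 10% member discount; non-members pay full price."""
    return round(subtotal * 0.9, 2) if is_member else subtotal

def test_member_gets_ten_percent_off():
    assert discount(100.0, is_member=True) == 90.0

def test_non_member_pays_full_price():
    assert discount(100.0, is_member=False) == 100.0
```

Each test pins down exactly one behavior, so a failure points straight at the problem without a debugger.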

Sam talked a bit about his struggle to reconcile the idea of being an architect with XP and creating an architecture with Continuous Design.  While I think that book is still being written, I emphatically agree with Sam that architects should code.  I certainly feel that my ability to positively impact a project is much greater when I’m coding.  Code is important.  Even if you don’t accept the XP “Code is Design” idea, we could still agree that the quality of a “design” is definitely reflected by the ease of the coding.  Good code, i.e. maintainable code that is cost effective over time, takes skill.  Bad code results in support costs and opportunity costs because of the difficulty in extending the code.  I think it’s nearly insane to say that anybody is “too important to code” as one of my managers used to berate me when I was an architect.  Invariably, much of the worst overengineered code I’ve had to deal with originated from an architect who never had to dogfood his design (the rest came from me in my Architect Hubris stage).

The simplest thing that could possibly work isn’t a license to hack.  Someone asked whether XP is more focused on business value, perhaps to the exclusion of technical quality (sorry if I misinterpreted the question).  The focus on delivering business value is probably the correct end, but good code, design, and software backed by good build and test automation is a great means to that end.  I buy into the whole “doing a good job is its own reward” thing a little bit because I think the “Software Craftsman” ideal is attractive.  It’s certainly more romantic than being a CMM-compliant, plug-compatible software engineer anyway.

There was some talk about an overdependence on Intellisense being a barrier to TDD adoption, with some reference to Rocky Lhotka’s TDD comments last week.  I certainly don’t like coding without Intellisense, but it’s not a compelling reason to forgo TDD.  If you’ll let me get away with the term, the “Reverse Intellisense” ability of ReSharper and other refactoring tools to automatically create stubs for missing methods offsets the lack of Intellisense for new code that’s being specified in a unit test.  Add in RhinoMocks to mock a brand new interface with strong typing and you have quite a bit of ability to model the interaction between classes in a unit test, then use ReSharper to generate the missing methods the unit test defines.  It may not sound like a big deal or even useful, but I’ve found it really makes TDD easier.  I like to think of this sort of TDD design as doing a UML sequence diagram in code.  Behavior Driven Development is taking this idea much, much further (but it looks like it almost requires a dynamic language like Ruby or Python).
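
The “sequence diagram in code” idea can be sketched like this. The post describes doing it with RhinoMocks and ReSharper in .Net; Python’s `unittest.mock` is a stand-in here, and all the class and method names are hypothetical:

```python
from unittest.mock import Mock

# Design the interaction first: the test "specifies" the collaborator's
# interface by how it uses the mock, before any real implementation exists.
# Alerter and its sender are hypothetical illustrations.

class Alerter:
    """The class under design: it should delegate sending to a collaborator."""
    def __init__(self, sender):
        self.sender = sender

    def raise_alert(self, message):
        # The sender's send(channel, message) signature is being specified
        # right here, in the test-driven interaction, not in a class diagram.
        self.sender.send("alerts", message)

def test_alerter_sends_through_its_sender():
    sender = Mock()                      # brand-new "interface", no real class yet
    Alerter(sender).raise_alert("disk full")
    sender.send.assert_called_once_with("alerts", "disk full")
```

In the .Net workflow the post describes, a refactoring tool would then generate the missing interface members from their usage in the test.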

Refactoring is a grossly abused term.  Refactoring is improving the structure of existing code while maintaining the same functionality, hopefully in a disciplined manner backed by automated tests.  You use small refactorings to remove little code smells as you go and bigger refactorings to remove duplication and enable new changes.  Telling management that you’re just doing some refactoring to cover the fact that you’re really doing an architectural rewrite is just plain lying.  Shame on all of us who’ve ever played a shell game with “refactoring.”  One more thing: the phrase “do it quick now and you can just refactor it later” makes my blood boil.
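
A tiny sketch of what “same functionality, better structure, backed by tests” means (Python for brevity; the function names are hypothetical):

```python
# Refactoring: the test pins down behavior; the change touches only structure.

def total_before(prices, tax_rate):
    # Tangled version: accumulation and tax math mixed together.
    subtotal = 0
    for p in prices:
        subtotal += p
    return subtotal + subtotal * tax_rate

# After an "Extract Method" style refactoring: same observable behavior.
def _subtotal(prices):
    return sum(prices)

def total_after(prices, tax_rate):
    return _subtotal(prices) * (1 + tax_rate)

def test_refactoring_preserved_behavior():
    # The same automated check passes before and after the change --
    # that equivalence is what separates refactoring from a rewrite.
    assert total_before([10, 20], 0.5) == total_after([10, 20], 0.5)
```

If the test would no longer pass, whatever you’re doing isn’t refactoring, however you bill it to management.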

When the topic of Pair Programming came up, someone asked if pairing can reduce the amount of time it takes to bring new team members up to speed on a project.  Pairing doesn’t negate Brooks’s Law by any means, but my experience in this regard has been very positive, especially when either the existing or the new team members are experienced with pair programming.  Pairing is actually a bit of a skill that some of us (me) have to work on more than others.

All in all, it was a great session.  Thanks for coming Sam and playing through pain.

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
This entry was posted in Pair Programming, Test Driven Development.

  • jmiller


    Some of that is just personal preference. If the business logic is complex, I start with the domain objects first and add persistence and service layer classes second. For screen work, I like to start with a sketch or picture of the view and work from the presenter/controller out with mocks for the view and services. In general, just think about which part of the functionality is the core that drives the rest of the app.

    I’d make exceptions for some sorts of reporting-heavy apps, but for the most part I find coding from the database up to be very inefficient.

    I talked about this a little bit here –> http://codebetter.com/blogs/jeremy.miller/archive/2005/11/26/135098.aspx
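
As a loose sketch of the presenter-first approach described in the comment above (Python stand-in for the .Net original; the view, presenter, and service names are all hypothetical):

```python
# Presenter-first: drive the presenter's logic with a fake view and a stubbed
# service, so the real UI and database can come later. Names are hypothetical.

class FakeCustomerView:
    """Records what the presenter asks it to display."""
    def __init__(self):
        self.shown = None
        self.error = None

    def show_customer(self, name):
        self.shown = name

    def show_error(self, message):
        self.error = message

class CustomerPresenter:
    def __init__(self, view, find_customer):
        self.view = view
        self.find_customer = find_customer   # injected service, easy to stub

    def display(self, customer_id):
        customer = self.find_customer(customer_id)
        if customer is None:
            self.view.show_error("not found")
        else:
            self.view.show_customer(customer)

def test_presenter_shows_customer():
    view = FakeCustomerView()
    CustomerPresenter(view, lambda _id: "Acme Corp").display(42)
    assert view.shown == "Acme Corp"
```

The presenter’s behavior is fully testable before a single screen or table exists, which is the inefficiency-of-database-first point in a nutshell.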

  • http://www.jeffreypalermo.com jpalermo

    Sam pointed out that the “simplest thing that will work” doesn’t mean the simplest thing that will compile. That drew quite a few laughs from the audience.

    Jayro, I’ll give my opinion on your question. I say opinion because I don’t believe there is a single objective answer to your question. My opinion is that it’s easiest to start TDD with new code. Maybe you are adding a new feature, so you have a bit of new code. I think this is easier than trying to refactor tightly-coupled code _while_ learning TDD.

  • Jayro

    Question: If you’re new to TDD and you’re beginning an ASP.Net application, what layer do you begin at? I have heard from 3 camps on this: top-down (UI first, with mocks simulating the back end), bottom-up (DB, then domain, then UI), and inside-out (build domain objects first, then let TDD lead you to create the back end). What is your opinion?