How do I know?: Descartes’ Rationalism versus Hume’s Empiricism

I really don’t have time for this, but I’m going to do it anyway.  I’m starting a new blog post series that’s roughly “How do I know what the best way to write software is?”  Before I get started on it, I wanted to talk just a little bit about how we know something. 

 

I took a couple of classes in college on Philosophy, and one particular set of lectures stands out to me to this day.  We spent a long time talking about epistemology, the (not so) simple subject of how we obtain knowledge.  The first approach is the Rationalism of René Descartes.  In this view, we “know” something only if we can know it to be infallibly true.  To a rationalist, “knowing” that Separation of Concerns is a good practice means that you understand exactly why and how Separation of Concerns leads to better results. 

The other view, promoted by David Hume and others, is Empiricism.  As the name implies, empiricists “know” something to be true if it has been observed to be true.  To an empiricist, the only way to know that Separation of Concerns is valuable is to have made and recorded observations that show improved quality and sustained productivity.   

The unfortunate trap is that we can be wrong in a variety of ways.  As an empiricist would say, our reasoning can be false and lead us to erroneous conclusions about what we should do.  Descartes himself fell victim to this with his own theory of light, which isn’t consistent with modern physics.  The Rationalists will tell us that we can’t trust our senses.  So even if hard numbers and measurable observations can lead us to “know” something, what does a study or even our own experience really tell us?  We can easily make false deductions from our observations, or fail to make the observations that are right in front of us. 

A study on the effectiveness of Test Driven Development was published recently and debated by both Phil Haack and Jacob Proffitt.  It seemed to suggest that using TDD leads to higher quality software, or maybe that it doesn’t do anything of the sort.  I’m not taking the study seriously either way because:

  1. My personal experience has led me to believe that TDD is effective and leads to better structural qualities in code and higher efficiency.  Over time, I’ve learned how to apply TDD effectively, and I feel like I understand why and how it works.
  2. The study was performed on college students.  TDD has a nontrivial learning curve, and it punishes developers who write tightly coupled or poorly cohesive code.  I don’t think I could have effectively utilized TDD as a college student.  Heck, I struggled in my first experience with TDD, and at the time I had 3+ years of lead experience under my belt.  Now, if I’d had a coach with TDD experience, maybe that would have been different.

Agile practices are criticized, and maybe rightly so, for a lack of empirical numbers to prove the claimed benefits of those practices.  I’d certainly worry if I saw multiple studies showing that Agile practices were creating worse results than traditional methods.  I won’t dispute the value of empirical data and methods like Six Sigma.  All the same, I think that rationalism is in some ways more important to us in software development, simply because it’s too hard to make decisions in software development on empirical data alone.  We don’t have enough history behind us to have accurate empirical proof for just about anything.  If nothing else, we need to apply a lot of analysis to any empirical study we do have in order to understand the causes behind the numbers.

One of the things that keeps software development from being a straightforward science is that our “laws of physics” are changing rapidly.  A couple of years ago I read some older authors questioning the effectiveness of TDD by quoting several studies performed in the late ’70s and ’80s about the ineffectiveness of COBOL developers doing unit testing.  The reality is very different for .Net or Java development today than it was for COBOL development.  TDD is viable with modern programming languages because the atomic units of code are much smaller, which allows isolated unit tests, and the compilers are fast enough to enable the TDD feedback loop.  I really don’t think those unit test studies on COBOL are all that relevant to me.
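
To make that feedback loop concrete, here’s a minimal sketch of a single red-green TDD cycle.  This is just an illustration, assuming JUnit 4 and a made-up Money class, not anything taken from the study; the point is only that the unit under test is tiny and the compile-and-test turnaround is seconds, not hours.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Step 1 (red): write a small, isolated test for behavior that doesn't exist yet.
    public class MoneyTest {
        @Test
        public void adding_two_amounts_returns_their_sum() {
            Money five = new Money(5);
            assertEquals(8, five.add(new Money(3)).amount());
        }
    }

    // Step 2 (green): write just enough code to make the test pass, then refactor.
    // The unit is small enough to test in complete isolation, which is exactly
    // what the COBOL-era studies couldn't assume.
    class Money {
        private final int amount;
        Money(int amount) { this.amount = amount; }
        Money add(Money other) { return new Money(this.amount + other.amount); }
        int amount() { return amount; }
    }

Run that loop dozens of times a day and the practice feels very different from writing tests after the fact against a big batch of code.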

It’s important for us as software developers to consider why and how a practice works, because that examination of the underlying forces will help us to better apply that practice.  Most of the benefits of the tools and practices that we use aren’t automatically delivered to us on our initial implementation.  The benefits are largely determined by the manner in which we apply the practices.  I think Agile practices are advantageous, and yet I’ve seen instances where they failed my team and me.  I feel like I can rationalize why the successes worked and the failures failed:  feedback cycles and reacting to that feedback, orthogonal architectures, clean code, good collaboration between team members, seamless handovers, and an environment that actively supported an adaptive approach.  Forgetting whatever you happen to call your approach or what your approach actually is, I think that being mindful of the first causes will point you in the direction of success.  In the next post I’ll kick off my discussion of just what I think those first causes are.

 

 

The Afterword

When I worked at ThoughtWorks, we had an Analyst with a background as a professor of philosophy.  I had a conversation with another developer about Descartes vs. Hume one day in the team room as we were pairing.  From the look on my analyst’s face, I don’t think I would have passed his philosophy class ;-)  I’m going to email him the link just so I can imagine how much he’d cringe over this post.  There’s a good discussion on Rationalism versus Empiricism from the Stanford Encyclopedia of Philosophy that explains the differences in much more detail.  Please feel free to correct my interpretation of Descartes and Hume in the comments, and I promise never to dip this shamelessly into pseudo-intellectualism ever again. 

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
  • http://nomorehacks.wordpress.com/ NoMoreHacks

    I think that often the problem with measurement is that people don’t even measure anything because they think there is no problem. If a team gets into the mindset that, no matter how fast stuff is going out to users, no matter how few bugs there are, they can be better, then they will be interested in the measure-tweak-rinse-repeat cycle. However, the problem is getting people to realise that their development process IS broken and it DOES need to be fixed. Teams get stuck in believing that their way is the only way, and everyone who claims that things can be better/faster must be a shyster consultant who is selling something.

    The other type who don’t want to measure are the rationalists who “know” that Big Design Up Front is best because “Agile is OK for simple GUIs but this system really is special and has to be designed properly”. These are the ones who think that Agile=hacking and don’t know that no one believes Agile makes programming painless, only a little less wasteful.

  • http://blog.troyd.net/Test%2bSupported%2bDevelopment%2bTSD%2bIs%2bNot%2bTest%2bDriven%2bDevelopment%2bTDD.aspx Troy DeMonbreun

    “Meden Agan” i.e. “Nothing in excess”

    Test Supported Development (TSD) is NOT Test Driven Development (TDD):

    http://blog.troyd.net/Test%2bSupported%2bDevelopment%2bTSD%2bIs%2bNot%2bTest%2bDriven%2bDevelopment%2bTDD.aspx

  • http://agilology.blogspot.com/ Jeff Tucker
  • http://www.e-Crescendo.com jdn

    I’m staying away from this one…

    jdn

  • http://lunaverse.wordpress.com Tim Scott

    Okay since you brought up epistemology, I gotta chime in. In my opinion, any empirical study of TDD is bound to be bad science and pretty much useless.

    In 1620, the prophet of the Scientific Revolution, Sir Francis Bacon, told us about the “Four Idols” that obstruct the acquisition of knowledge (http://www.whitworth.edu/Core/Classes/CO250/Readings/fr_baco.htm). The class of idols that gives us trouble here Bacon calls the “Idols Of The Market-place.” Bacon says that words often “…plainly force and overrule the understanding, and throw all into confusion…”

    I think that’s pretty much the case here. I would suggest that you cannot empirically study the “effectiveness of TDD” until you have fairly precisely defined both “TDD” and “effectiveness”. Even if we could agree on a core definition of TDD (and that’s a big if), there would still be tremendous variability in how it is applied by different developers, different teams, different projects, etc., would there not? That variability alone would lead to wildly different “effectiveness.”

    Regarding the word “effectiveness”: most of us developers could rattle off a list of words that we believe define the dimensions of software effectiveness. Heck, that list might even be similar to a list given by the person who pays for our work. The list would certainly contain the word “maintainability.” I wonder if this was a longitudinal study; I think five years would be a reasonable duration to study maintainability. But gosh, how do you even measure maintainability?

    Regarding the more arcane question of Rationalism vs. Empiricism, I turn again to Bacon! In 1620 he told us to be like the bee. I think this analogy applies to developers in many ways:

    “Those who have handled sciences have been either men of experiment or men of dogmas. The men of experiment are like the ant, they only collect and use; the reasoners resemble spiders, who make cobwebs out of their own substance. But the bee takes a middle course: it gathers its material from the flowers of the garden and of the field, but transforms and digests it by a power of its own. Not unlike this is the true business of philosophy; for it neither relies solely or chiefly on the powers of the mind, nor does it take the matter which it gathers from natural history and mechanical experiments and lay it up in the memory whole, as it finds it, but lays it up in the understanding altered and digested. Therefore from a closer and purer league between these two faculties, the experimental and the rational (such as has never yet been made), much may be hoped.”

  • http://chadthedeveloper.blogspot.com/ Chad Sturtz

    Looking forward to the series!

  • http://www.geekswithblogs.net/robz Robz

    Great post. I thought for a moment when I started reading that you were moving towards fallacy, but your post took a good turn towards asking the right questions. I completely agree that the same approach will not work for each project. I think being agile allows you to morph each project to what is working and what is not.

    Very interesting read. :D

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @The Other Steve,

    Would you ever say TDD isn’t the right approach? Yeah, when it’s too hard with a particular technology or codebase. The ability to do TDD is part of my thinking when I select a technology, but sometimes that isn’t under your control. You also take steps to isolate the stuff that’s hard to test. Some other things are too simple for you to care about. Other times you just need to spike something and work your way through it. You’ve just got to be disciplined enough to stop and rewrite it with tests or retrofit tests later.
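
    (A rough sketch of what I mean by isolating the hard-to-test stuff, using made-up names: push the awkward dependency behind a small interface so the interesting logic can still be unit tested. This is just an illustration, not code from any real project.)

        // Hypothetical example: hide a hard-to-test dependency (the system clock)
        // behind an interface so the logic that uses it stays unit-testable.
        interface Clock {
            long now();
        }

        class InvoiceService {
            private final Clock clock;
            InvoiceService(Clock clock) { this.clock = clock; }

            boolean isOverdue(long dueDateMillis) {
                return clock.now() > dueDateMillis;
            }
        }

        // A test can substitute a fixed clock instead of touching the real one.
        class FixedClock implements Clock {
            private final long time;
            FixedClock(long time) { this.time = time; }
            public long now() { return time; }
        }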

    @Christopher,

    Pure coincidence, and I left a comment to that effect on Billy’s post.

  • http://unhandledexceptions.spaces.live.com Steve Bohlen

    I often wonder how much of the software development community’s recurring focus on trying to gather metrics to ‘prove’ an approach to be valid is based on the industry’s roots in things with names like ‘computer science’ and ‘software engineering’ when the process of software development really shares quite little with any other ‘science’ or ‘engineering’ field.

    The essence of either of these fields tends to be that of ‘repeatability of results from a body of experiments’, and this just isn’t possible in software ‘engineering’ the same way it is in ‘bridge engineering’, for example — there are just too many variable inputs to the process to ever get the same predictable outputs, and that predictability is the essence of viable repeatable experiments. For every study shown to ‘prove’ some software engineering hypothesis, there is someone who says ‘yeah, but look @ this — this one variable being different in my situation means the study is worthless to me’.

    I can’t recall who said it (maybe Steve McConnell?), but someone important once said “There is nothing that cannot be measured in a way preferable to not measuring it at all.” Given this, I’m not at all averse to gathering metrics about project success and failure with different methodologies (if you cannot tell where you’ve been, you cannot tell where you are going), but for any process in software engineering to be ‘proven’ to work is going to take 100s (1000s?) of success ‘stories’ rather than one or two studies, to be sure.

    I think a lot of this kind of thing within our field is a really beneficial quest for an ever-better process, but so much of the ‘software engineering process’ is context-sensitive that this quest for finding a one-size-fits-all is (I think) a fool’s errand.

    I think that if TDD, Agile, XP, SCRUM, or even Waterfall has worked for you, stick with it and see if it will work twice. If your current process is failing you, try TDD, Agile, XP, SCRUM, or even Waterfall and see what results you get. But take note that either a success or a failure in your new-process trial won’t really help predict whether the same approach will help when you start your next project!

    Whether it’s the developer talent involved, the client involved, the budget, the schedule, people’s personalities, or some other variable you cannot foresee, my experience is that one or more of these will almost certainly conspire to prevent a repeat of the output (success) from your prior project just because you happened to pick the same approach as last time :D

  • http://www.bluespire.com/blogs Christopher Bennage

    Wow, Descartes is popular this weekend. Did you see Billy’s post on Devlicious? My professional life has certainly been affected by my study of philosophy.

  • The Other Steve

    Do you have any doubts? Are there times when TDD is not the right approach?

  • http://chadmyers.lostechies.com Chad Myers

    A philosophy professor as a systems analyst… now THAT had to make for some interesting requirements-gathering meetings. “But how do you really KNOW that this is the feature you need?”