First Causes in Software Development: How do I decide what is good?

Picking up from my previous post, How do I know?: Descartes’ Rationalism versus Hume’s Empiricism

I recently had a request for a post on the topic of “Persistence Ignorance” (I swear I’ll finish it soon, Ward).  While trying to formulate that post, I realized that I needed to start with an examination of the first causes that might lead you to Persistence Ignorance.  I thought that was a nearly mandatory way to start because Persistence Ignorance (PI) is only a means, not a goal in and of itself.  It’s possible that an examination of the goals behind Persistence Ignorance might lead to a perfectly good compromise that isn’t pure PI.  More interesting to me is whether there is really an underlying set of qualities that we can use in a concrete manner to reason about software development techniques.  I’d like to use this post as the kickoff for a short set of essays exploring what I think those first causes are.  Just warning you now: this is going to be long.


Debate is Good

We developers are an argumentative lot.  This tool/practice/process/technique versus that one.  We’re always debating the best way to build software — and it’s a good thing that we do, because software development is a young profession.  Our basic laws of physics change underneath us as hardware capabilities increase and multicore systems beckon.  New ideas, techniques, tools, and processes are popping up all the time.  Which to try?  What to bet my career on?  Can my tool really beat up his tool?

Inevitably we form emotional attachments to our tools or begin to see our identity in terms of a tool.  That’s unfortunate, because it blinds us to other ideas and leads us to hold retrograde positions for a reality that may have passed us by.  We need to stop and remember why we liked our current positions in the first place, both to better practice and understand those ideas, and to know when to jump to something better.

I personally believe that debate can be healthy for us as a means to compare and contrast ideas about how software ought to be built.  In any argument or discussion, however, somebody will eventually pull out this old nugget: “I just use whatever is best for the job at hand!”  Which is a nice sentiment, but nothing but hot air without some substance behind it.  What is best?  How do we decide what is best?  Is there a constant set of basic criteria that we can use to judge software techniques through time?  I say yes.


And why do you believe that?

Frans Bouma, in his exit from the ALT.NET list, included this little nugget that made me sit up and pay attention:

In general asking ‘Why’ wasn’t answered with: “research has shown that…” but with replies which were pointing at techniques, tools and patterns, not the reasoning behind these tools, techniques and patterns. Answering Why with pointing to techniques, tools and patterns is creating a cyclic debate: the Why question is asked to understand the reasoning why some tools/techniques/patterns/practises are used. Pointing back at tools/techniques/patterns/practises isn’t going to make you any wiser, as you then only learn tricks, because you can’t put any argument on the table why you use pattern X, use technique Y and practise Z.

Frans goes on to use the example of Separation of Concerns (SoC) as a concept that everybody says you should use, but nobody questions or explains.  I’ve called Separation of Concerns the “alpha and omega” of software design.  I obviously think it’s important too, but why?  What does it give us?  To Frans’s point, only by understanding those real goals can we better utilize SoC to write better software.

It’s important to understand why we espouse the things that we do, both to better understand how to apply something and to know whether we’re just acting out of Cargo Cult behavior.  What are the underlying first causes that make us think SoC is so important?  Let’s take up Frans’s gauntlet and start trying to answer the “why’s.”

But before we even talk about the “why’s,” I want to throw out some of…


My Choices

I’ve made and continue to make a certain set of choices for how I prefer to work and what I think works.  You’ve made your own set of choices.  Below are some of my choices and preferences.  I’m sure some of these are purely preference, but I think many of them are directly connected to my interpretation of the first causes and underlying forces that I’ll discuss in the next section.

  • Evolutionary design is much more natural to me, and I strongly prefer an environment that actively supports that.
  • I’m a huge proponent of Test Driven Development and Continuous Integration practices.  I’d say that creating an automated build script is the very first development task in any project.  My tool usage is very much impacted by my ability to do TDD with a given tool.  And yes, I think TDD provides a lot of benefits above and beyond simple unit testing (more on that later).
  • I design from the middle tier or user interface first, and the database last.  Again, some of that is just preference, but there are some very real reasons to work first with your objects and let the database fall out second.
  • I’m all for micro code generation in forms like ReSharper’s live templates, but I don’t care for full-blown code generation schemes that try to build a full application from a fully formed database.  Given a choice between a reflective approach and code generation, I’ll almost always pick the reflective approach.  We can’t do it in a mainstream .Net language yet, but I’ll take metaprogramming over code generation as well.
  • I’m interested in the little mechanical advantages of software factories, but Software Factories as a methodology sounds like a complete non-starter to me (can you say CASE 2.0?).
  • As far as the Domain Specific Language (DSL) experimentation goes, I’m betting on textual DSLs and language-oriented programming over graphical DSL tooling.  I’m much more interested in things like Ruby’s acts_as_state_machine than I am in Microsoft’s Workflow Foundation.
  • I’ll go out of my way to avoid using designers in Visual Studio for anything but laying out elements on a screen, and I’m looking hard at dynamic layout techniques for my system.  I’m going down a path of using my own fluent interface DSL to express data bindings and screen behavior in WinForms because I think it’s faster than using the design-time support in WinForms.
  • I favor explicit coding over magic.  I very rarely use custom events in my code because I think they obfuscate the code and make it harder to understand.  I prefer explicit observer classes as needed (there’s a small sketch of what I mean just after this list).  Call me strange on that one, though, because I’m close to a minority of one.
  • I prefer to understand code by examining and reading the code.  I’m not a big believer in external documentation (mostly because I’ve never found other people’s documentation to be very useful and I know that the documents that I labored diligently over in the past were never read by anyone but me).  Code readability is paramount to me (but of course we all have our different ideas of what that means).
  • I generally think that most (but not all) comments in the code are an apology for bad design rather than something helpful.  The XML documentation embedded in code makes code harder to read.
  • I like clean separation of concerns in my architectures.  I’m very iffy about Active Record approaches in statically typed languages.  I intensely dislike architectures like CSLA.Net that ignore Separation of Concerns — even though CSLA.Net obviously has plenty of fans.  I think orthogonality in a software design is a paramount concern that cannot be sacrificed for short-term productivity gains.
  • I absolutely detest intrusive frameworks, especially if they involve a lot of inheritance chains. 
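
Since that explicit observer preference tends to raise eyebrows, here’s a minimal sketch of what I mean.  Treat it as illustration only; every name in it (Order, IOrderObserver, OrderProcessor) is invented for this post rather than taken from any real codebase:

    public class Order
    {
        public bool IsValid() { return true; } // stand-in for real validation rules
    }

    // The notifications are an explicit, navigable contract instead of a set
    // of custom events that could be wired up from anywhere.
    public interface IOrderObserver
    {
        void OrderSubmitted(Order order);
        void OrderRejected(Order order, string reason);
    }

    public class OrderProcessor
    {
        private readonly IOrderObserver _observer;

        public OrderProcessor(IOrderObserver observer)
        {
            _observer = observer;
        }

        public void Process(Order order)
        {
            // "Go to Definition" on _observer shows exactly who gets notified
            // and what the full contract is; there's no event wiring to chase down.
            if (order.IsValid())
            {
                _observer.OrderSubmitted(order);
            }
            else
            {
                _observer.OrderRejected(order, "validation failed");
            }
        }
    }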

Ok, there’s what *I* like.  Now, let’s see if I can connect some of these choices to what I think the underlying forces of software development really are.  And of course, if I can’t make the connection, I need to reconsider these preferences. ;-)


First Causes and Values

Underlying everything we choose is a set of drivers.  At the very lowest level is the simple goal:  “provide as much customer value as possible as efficiently as possible.”  You might say that your first cause is to just get it done.  Great, but what does “done” really mean?  I’m going to take the XP line here: “Done, done, done” means that the code can ship.  The functionality is complete, it’s tested well enough, and the customer has approved the functionality.  It’s not good enough just to write code; we also have to have ways to make sure the code we’ve written is correct and that we’ve written the correct functionality in the first place.

Most of the effort in software development is deciding what to do.  The construction of software is nothing but typing.  I’ve seen it argued that the most important, and time-consuming, activity in all of software development is learning.  Learning what the customer wants, what they need, how our architecture performs, and learning that our initial design wasn’t that great when we actually tried to implement it.  I don’t think we can know everything upfront, so if we’re setting off on a project without knowing the exact path, we had better make regular course corrections with plenty of…

Feedback – I honestly think this is the single most important “First Cause.”  My experience is that productivity is most heavily impacted by our ability to quickly apply some sort of validation to anything we do.  I’ve just written a requirement / designed a screen layout / coded a requirement / sketched a UML diagram: is it right / valuable / usable / not really what the user intended?  I strongly think that a team’s or individual’s tools and practices should be chosen with a very strong regard for the quality and quantity of feedback that those tools and practices provide.


A couple weeks back I was in some discussions about the future Composite WPF guidance.  One of the participants made the remark that we should be careful not to sacrifice productivity for the sake of testability.  I think that’s a false dichotomy.  When we talk about getting to “done” we need to be careful to consider everything that it takes to get to done, and that includes the effort that we put into removing defects from the code.  We need to optimize the complete lifecycle of the code, not just the initial production of the code.  Being fast to code and hell to debug when anything is wrong is a net loss.  So let’s add…

Testability – The simple ability to test and verify a system piece by piece.  How easy and quick is it to validate that parts of the application work?  I feel like this is a specific case of Feedback, but it merits its own treatment.  I’ve written about this quite a bit in the past, but I want to use the essay on this topic to challenge my own views a little bit and see how things fall out in the post-TypeMock world.
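
To make that concrete, here’s what fast validation of one small piece looks like, reusing the hypothetical OrderProcessor sketch from earlier (NUnit syntax assumed; the test framework details are incidental):

    using NUnit.Framework;

    [TestFixture]
    public class OrderProcessorTester
    {
        // A hand-rolled recording stub; no database, no UI, nothing external.
        private class RecordingObserver : IOrderObserver
        {
            public Order Submitted;
            public void OrderSubmitted(Order order) { Submitted = order; }
            public void OrderRejected(Order order, string reason) { }
        }

        [Test]
        public void A_valid_order_is_submitted_to_the_observer()
        {
            RecordingObserver observer = new RecordingObserver();
            OrderProcessor processor = new OrderProcessor(observer);
            Order order = new Order();

            processor.Process(order);

            // The feedback loop is a second or two, not minutes of setup.
            Assert.AreSame(order, observer.Submitted);
        }
    }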


Software is complex, and the overall system may have hundreds or even thousands of variables.  The human mind can only hold so many variables at one time.  What we need is the ability to pick off logical parts of the system and work on them in isolation.  We need to eat the elephant one bite at a time.  That leads to iterative development processes for the team, and in the code we need…

Orthogonality – Simply put, the ability to divide and conquer by being able to work on a single concern of the whole system at a time.  Every major design technique that I’m aware of exists to provide a heuristic to take an amorphous blob of requirements and decompose it into a series of approachable coding tasks.  The ability to work on one thing at a time is crucial for productivity.  Designs that allow us to turn a complex system into a series of simple tasks are good.  Designs that organize the code to make the assignment of responsibilities predictable are good.  Designs that throw all the code into a giant DoWork() method aren’t helpful.
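
As a toy sketch of the difference (with invented names again, and reusing the Order class from the earlier sketch), contrast one giant method that does everything with code where each concern can be picked up and worked alone:

    // Instead of a single DoWork() that validates, prices, and saves an order
    // in one tangle, each concern gets a home of its own...
    public class OrderValidator
    {
        public void Validate(Order order) { /* one concern: validation */ }
    }

    public class PricingEngine
    {
        public void Price(Order order) { /* another concern: pricing */ }
    }

    public class OrderService
    {
        private readonly OrderValidator _validator = new OrderValidator();
        private readonly PricingEngine _pricing = new PricingEngine();

        // ...and the coordination itself reads as a short list of simple tasks.
        public void Submit(Order order)
        {
            _validator.Validate(order);
            _pricing.Price(order);
            // persistence would be a third, equally separate collaborator
        }
    }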


I’d also extend “I just want to get it done” to include being able to sustain ongoing efforts in the same code later.  Most systems I’ve worked on are continuously maintained and extended until they just become too expensive to maintain and extend.  Clearly, there’s huge economic value in making development on these systems sustainable.  Along the way, I’ve observed (empiricism) a very large disparity between my productivity in one codebase versus another.  For purely economic reasons, I definitely want to build in the ability to change a system over time.  And if I’m going to get in and change a system over time, I want a couple of qualities in my system to make that change safe.  Feedback loops, Testability, and Orthogonality all play major parts in making it safe and economical to change an existing system, but there’s another major factor in a software design’s ability to change over time:

Reversibility – The best example I can give for Reversibility is watching my 4-year-old son paint with water-soluble paint across the table from me as I write this.  I’d be a lot more worried about what he’s doing if I didn’t know that I can easily wash it all off later.  The same thing applies in software design.  If I know I can reverse a design decision later without serious consequences, I don’t have to be that worried about getting it perfect now.  That’s a hugely liberating feeling.  If I can build the system’s persistence scheme simply, and fit in caching later when the volumes demand it, I can build the system for the initial release without having to make those caching decisions upfront.  When we have Reversibility, continuous design is possible and analysis paralysis problems can be put to bed.  My design choices are directly informed by my wish for better Reversibility.
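
Here’s a hedged sketch of that caching example (all of these types are hypothetical stand-ins).  The point is that callers of the repository never have to change when caching finally arrives:

    using System.Collections.Generic;

    public class Product
    {
        public readonly string Sku;
        public Product(string sku) { Sku = sku; }
    }

    public interface ICatalogRepository
    {
        Product FindProduct(string sku);
    }

    // The simple persistence scheme we ship in the first release.
    public class DatabaseCatalogRepository : ICatalogRepository
    {
        public Product FindProduct(string sku)
        {
            // imagine a real database fetch here
            return new Product(sku);
        }
    }

    // Fitted in later, only when volumes demand it.  Because it wraps the same
    // interface, the caching decision was cheap to defer; that is Reversibility.
    public class CachingCatalogRepository : ICatalogRepository
    {
        private readonly ICatalogRepository _inner;
        private readonly Dictionary<string, Product> _cache = new Dictionary<string, Product>();

        public CachingCatalogRepository(ICatalogRepository inner)
        {
            _inner = inner;
        }

        public Product FindProduct(string sku)
        {
            Product product;
            if (!_cache.TryGetValue(sku, out product))
            {
                product = _inner.FindProduct(sku);
                _cache[sku] = product;
            }
            return product;
        }
    }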


Oh yeah, and people stuff too.  I wrote ad nauseam about teams and communication and collaboration last year because my client was self-destructive in this regard.  That’s all available at:  On Software Teams


The Series

If I’ve tallied things up correctly, I’d like to write some deep dives on:

  • Feedback
  • Orthogonality (mostly links though)
  • Testability
  • Reversibility – Its impact and importance for doing design.  What design choices give us more or less reversibility.  I also want to look at how external factors can improve or diminish reversibility.
  • Can you “see” the flow of the code?  Solubility, readability, whatever you call this quality of code.  Can I tune into the “signal” in the code that I care about without getting bogged down by unrelated “noise?”  I think I’m going to say that this is a specific manifestation of Orthogonality.
  • Why and How TDD Works – First and foremost, TDD is a design technique, and I’m going to treat it as such.  TDD is more than unit testing.  TDD encourages Orthogonality and provides a fast Feedback loop. 
  • Persistence Ignorance (PI) – Just applying the first causes to the problem of persistence and seeing where PI applies.  It’s just a means for better Orthogonality and Testability. 


The Fly in the Ointment

There are at least a couple of problems with my arguments in this post that I’m happy to acknowledge:

  1. Not every system or project is the same.  As I’ve said before, I strongly favor using a Domain Model architecture backed by an Object Relational Mapper that supports “Persistence Ignorance.”  I also said that I don’t care for the Linq to Sql or NetTiers type of approach to system building, but there are a lot of systems where the entire goal is to attach a user interface to a database in the fastest way possible.  I think data-centric approaches are perfectly applicable in those cases, even though I would never touch them in any of the systems I build that tend to be very rich in behavior and business logic.  All I mean to say is that our choices of “what is good” are very much influenced by the work that we do.  If you work primarily on CRUD or reporting applications that don’t change much throughout their lifecycle, then you won’t care about the same things that I do.  Then again, you’re going to get into serious trouble if you take those data-centric techniques and try to apply them to a system with rich domain logic.
  2. Opportunity Cost.  Just because your favored solution “works” doesn’t mean that there isn’t a better way out there that would give you better results.  If that’s the case, you’re actually losing time and money by using your current techniques, technologies, or processes.  The first example that comes to my mind is a talk I sat in on last spring.  The high-priced architect speaking was demonstrating his application framework.  The framework required you to write a *lot* of little factory methods and classes by hand.  I asked him why he didn’t just use an IoC tool to avoid all of that monotonous coding.  His response was that his way had always “worked” for him.  Yes, it worked, but I could have done the same thing with less code and mechanical effort.  I think that TDD works for me, but maybe there’s something totally different I could be doing instead or, more likely, a better way to do TDD.  Maybe Design by Contract could replace some parts of TDD for me at some point (I still see DbC as a small subset of TDD, not the other way around). 


I can make both an empirical statement that TDD works and provide the rationale for why it works, but there’s always the possibility that something better is out there.  Even if you “know” what’s right, you’d better keep a lookout for better things.

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.

  • http://www.dotnettricks.com Fregas

    @Frans. We should always look at ourselves before placing blame or making some kind of judgment on someone else. It’s funny, but I generally don’t have any of these problems with other vendors, bloggers, etc. However, I keep hearing over and over from people who have problems with you: on blogs, on public forums, on websites, and in at least one case from someone I knew in person. So that indicated to me that it wasn’t just me who clashed with you. I would not have even posted this had I not come across this post by Jeremy and found YET ANOTHER soul that didn’t get along too well with Frans.

    Again, I don’t hate you, but I do feel like you come across as harsh. In reference to using your product, I’m afraid NO, I don’t feel like you did everything you could have to help. Perhaps it was not your intention, but you seemed intent on making me feel stupid, like the fault was in the consumer and not in the product. Maybe some of this is, as Bryan said, a communication thing, but some of it seems to be your particular idiosyncrasy.

    Even on your own site you rail about how the agile people seem out to persecute you. Yet I know others who are much more outspoken and critical of the agile movement than you are but who do not generate as much angst.

  • http://Bryan.ReynoldsLive.com Bryan Reynolds

    @All

    Just wanted to say my experience with Frans has been and is very different than the experience a few of you have had.

    I have found him to be helpful to the extreme, patient, and knowledgeable.

    You could be confusing spirited debate with something else. 70% of all communication is done non-verbally. That means when you read something you imagine the tone and facial expressions of the writer. My guess is a lot of people get this wrong. I know I do.

  • http://weblogs.asp.net/fbouma FransBouma

    @Fregas:
    Hehe :) sorry, but after reading your post, I can only conclude here that it’s not me, but… you who’s attacking someone, as it’s you who wrote “In fact almost every time I see a post or comment from Frans, he is coming across as aggressive, egotistical, accusatory, victimized, pushy, defensive or attacking.”

    No offence, but I think you should look in the mirror as well, if you’re going out to bash someone in public with a post like you did above.

    I know we did everything we could to help you out, and you know that. I’m sorry you still feel a lot of hate after all these years that you have to post a piece of text on a public blog like you did above.

  • http://www.dotnettricks.com Fregas

    This may be a little late into the discussion, but I have to note: I don’t like the “online persona” of Frans Bouma either. This doesn’t mean I hate his guts or don’t like him personally (since we’ve never met) but he’s hard to deal with.

    I used his product several years back and contacted him to get some support and also to provide some feedback on some things that in my opinion were lacking or problematic. He was very defensive of his product, and basically told me I needed to read the documentation more and that my concerns were unfounded. Interestingly enough, he changed the behavior in the direction that I was asking for, but at the time he seemed to be doing his best to alienate me as a customer or make me feel stupid.

    I debated about ORMs on a public forum a little bit, and again he was very defensive of his product and as you put it “caustic”. His demeanor was personal and aggressive rather than objective and courteous.

    In fact almost every time I see a post or comment from Frans, he is coming across as aggressive, egotistical, accusatory, victimized, pushy, defensive or attacking. So no, I don’t care for it much either.

    Frans, if you keep wondering why people don’t like you, the only thing that stays the same in that situation…is you.


  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @All,

    Ward got a hasty reply. I’ll publish the real thing in early March.

  • WardBell

    I see you’re still clearing your throat instead of justifying Persistence Ignorance :-)

    [aside: Your readers may take nourishment from Robert Martin’s explanation and justification of the Single Responsibility Principle (SRP) in his great book, “Agile Principles, Patterns, and Practices in C#.” Can’t tell the difference between SRP and “Separation of Concerns” myself, so I’m supposing they’re the same thing.]

    But where’s my PI wisdom?

    Let me get you started with a few spoonfuls of petulant prose.

    I’ve been thumbing through my copy of Jimmy Nilsson’s “Applying Domain-Driven Design and Patterns”, the most oft-cited source for the 7 signs of the anti-PI beast (p. 183).

    Your persistence framework is not PI if you must:

    1) inherit from a base class
    2) instantiate via a provided factory
    3) use specifically provided datatypes, such as for collections
    4) implement a specific interface
    5) provide specific constructors
    6) provide mandatory specific fields
    7) avoid certain constructs

    At first I noted with amusement Frans’s ringing endorsement of this book, given that Frans is on record as suspicious of PI.

    However, upon re-reading, it is easy to guess why Frans signed on. Jimmy is no absolutist. Sure, he likes PI as he’s defined it. But he acknowledges repeatedly that PI purity is pricey. “PI or not PI — of course it’s not totally binary. … Anything extreme incurs high costs.”

    So much for those who would argue by thumping the Nilsson bible.

    One thing I know about you .. and it’s here in this post in abundance … is that you are always prepared to discuss principles in terms of their benefits and consequences. I like that about you. Let’s rumble.

    —-

    So what’s wrong if I’m less than PI? And can I mitigate those consequences without going PI?

    Here’s what I think Nilsson claims are the consequences of deviating from PI:

    * Distraction. Exposure to infrastructure distracts from the purpose of the code. If there is a (seemingly) unrelated member in the API, you could lose your focus.

    * Loss of design authority. As when an entity can only be instantiated by a provided factory.

    “Customer c = PersistenceFactory.CreateInstance(); ” instead of
    “Customer c = new Customer();”

    The framework is telling you how to write some of your code.

    * Testability. As when the persistence infrastructure forces you to establish a live database connection just to acquire and manipulate some entities involved in a test. As when it becomes hard – perhaps impossible – to mock an entity.

    * Inheritance lost. I can only inherit from one class and that’s gone if my entities must inherit from the persistence mechanism’s base entity class. If I have a domain model already written, I can’t simply make it persistable.

    Did I miss something?

    Because if I didn’t, this is weak beer.

    Join me, for a moment, in the assumption that my persistence infrastructure provides real value. I’m going to map to a data source. I’m going to want to maintain a Unit of Work which tracks the entities I’ve fetched, what I’ve done to them, and what’s waiting to be saved. I like entity navigation (customer.orders) and I want both lazy and eager loading. Moving data over the wire through firewalls? Maybe some caching and offline support? This is great stuff, yes? And not trivial to write either.

    So I’m willing to pay something … not everything but something for somebody else to provide all of this goodness.

    Distraction? Well no one forces you to look at the entity class members you don’t care about. Sure the API has more than you bargained for. And you’ve got an extra DLL reference or two you have to carry around. But this can’t be a fatal objection.

    Moreover, when are you going to count the distractions imposed by PI? What hoops do you have to jump through to keep track of which entities are dirty? How do you make it easy to implement Customer.Orders and remain PI?

    Loss of design authority? That’s a “not invented here” objection don’t you think? As long as construction of an entity makes sense, what’s the problem?

    Testability? Now we’re getting serious.

    I absolutely agree that the persistence infrastructure must enable testing of entity creation, modification, deletion, and save with no database in the mix. I think it’s great to have test data in a database … so you should be able to test with a database. But you should be able to do everything without access to a database.

    But the persistence infrastructure doesn’t have to be PI to satisfy this requirement. It could have a pluggable repository that takes an in-memory store, loaded however the tester pleased.
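
    Something along these lines is all I’m asking for (every name here is invented purely for the sake of the sketch):

        using System.Collections.Generic;

        public class Customer
        {
            public int Id;
        }

        public interface ICustomerRepository
        {
            Customer Find(int id);
            void Save(Customer customer);
        }

        // Production plugs in the database-backed implementation; a test plugs
        // in this one, loaded however the tester pleases.  No connection string
        // in sight, and the framework never had to be PI to allow it.
        public class InMemoryCustomerRepository : ICustomerRepository
        {
            private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

            public Customer Find(int id) { return _store[id]; }
            public void Save(Customer customer) { _store[customer.Id] = customer; }
        }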

    I appreciate too the desire to mock entities. I just can’t see that as essential to testability. If it is easy to create fake entities, that should be good enough.

    BTW, just what are you testing in those business objects? I make a good living claiming that business objects are more than just entity navigation and property bags for persistent data … the stuff of CRUD apps.

    But, to be honest, a small fraction of real world application business objects are ever more than entity navigation and property bags. Take inventory some time. You won’t find but a few of them that have calculations or workflow-like behaviors.

    [aside: I omit from this discussion the property-level change notification, validation, security, and localization logic that are essential to business objects. I omit them to spare you the embarrassment of arguing that these should be individually hand coded into the business objects.

    They should be delivered to the business objects and expressed in the entity properties in a systematic way. Perhaps they come from metadata. In any case, the particulars shouldn't be embedded in the classes themselves. They should be acquired from external sources and invoked automatically when an entity client gets or sets a property.

    And the most effective way to make that happen is if the entity code generator inscribes the appropriate interceptor logic into the generated properties. This is some seriously tedious crap code that you wouldn't want to write by hand ... perfect for automation.]

    I bring this up because I get the sense that people feel there is a great need for testing the business objects. Not in my experience. The tough testing jobs are all in the upper layers of the application: UI and Presentation.

    Inheritance Lost. This one hurts. I have no great answer. I’m backed into a corner on this one by .NET itself … by its commitment to single inheritance. Where are those mix-ins!

    On the other hand, we shouldn’t pretend that we write a domain model with absolutely no idea that we’re going to be using a persistence framework. It’s not like I woke up one day and said, “damn, I think I should persist this Foo thing … and I can’t .. because it inherits from Bar … and there’s no way I’m changing that.”

    I suppose it is possible that I will trip over something someone else wrote that I’d like to persist but can’t because it inherits from something. For example, I might want to persist the visual tree supporting a form so I can restore it when the user comes back. Clearly I can’t take the tree, as is, and make entities of it. But that wasn’t going to be easy anyway. Surely I was going to have to manipulate it in some fashion, if only to remember whose tree it was. And that’s my opportunity to translate its state into an entity that does inherit from the persistence framework’s base class.

    Well, I’m all out of other arguments at this late hour.

    If this inspires you to take on the defense of PI at last, I will be grateful and feel this dribble has served its purpose.

    I’m really looking forward to it!

  • Kerry MacLean

    This is good. I’ve been reading your posts for a while, but never felt bold enough to contribute.

    One interesting tie-in here is your philosophical rabbit-trail the other day, followed by the “why” posts. When we first embark on an agile path, or start to embrace any of the principles embodied within, we are doing it without knowledge: either empirical or rational. Often as well, we don’t have a good why – maybe we’ve just been introduced to the concepts by someone more experienced than us.

    Having somebody else’s “whys” to work with helps to get started, but they can’t be your own “whys” until you’ve come to own them, and that can sometimes be a difficult path.

    Keep on posting – I’m slowly becoming able to make more of your whys my own.

  • http://www.codebetter.com/blogs/glenn.block/ gblock

    Nice post, Jeremy. I particularly like the way you touched on what I perceive as the fallacies around code cranking, i.e. getting the job done faster. I used to be a code-cranker, so I feel obliged to speak. Cranking code may get you to market faster, but the time you didn’t spend up front doing it right, you spend in the long run cleaning up the mess, at a cost to the customer. At least this has been my personal experience.

  • http://www.fauberdemo.com/rankings.html David Fauber

    Frans’s quote is spot on. I don’t think there is anything more demoralizing on a project than having to make design decisions with someone whose only basis for putting forth ideas is that “[someone else] does it like this”.

  • http://anavish.livejournal.com Avish

    This really is a great direction you’re taking. I’ve been going deeper and deeper into “advanced programming” as someone above me called it, and though I’m very much convinced, it’s hard for me to explain to others what’s so good about what I preach.
    I had to build myself a set of “whys” in reverse, going from the technique and my intuitive gut feeling of “this is a Good Thing”, to the formal argument of why it is so.
    I’m really interested in seeing someone who’s experienced in these matters explain the rationale behind some of the fundamental concepts. Looking forward to the next posts.

  • http://www.kasiprasad.com Kasi Prasad

    I think Frans has a point in that ALT.NET should be about the why rather than the how, but if that’s the case, once the “why” is explained to some degree and understood, and if the “how” is achievable using currently available and mature tools, what’s the harm in asking those who know the tools (ALT.NETees) how to use them? I would figure the ALT.NET group to be a great source for this type of information in addition to the theory behind their design.

  • http://choosetheforce.blogspot.com Charles Rector

    Echoing jdn and Christopher, I’m eager to see where you go here as well. This is exactly the stuff I’m interested in. I’m relatively new to TDD (half a year or so now), but very much bought into it. I think the skeptics and even many practitioners take TDD to unnatural extremes — your blog gives me much needed perspective!

  • http://weblogs.asp.net/fbouma FransBouma

    I don’t recall calling anyone any name, or stating that a person disagreeing with me or my views has a disease (what’s attention deficit disorder?). I’m not a native English speaker and am very direct. But you’re also direct, as most of us geeks are. I think we all should look through that and focus on arguments. I don’t ask you to agree with my views or arguments, why should I? I just want to ask you to stop confusing disagreement with dislike.

    I mean, every time you refer to a blogpost of mine, you have to state you practically hate my guts. I find that a little weird if you like debate so much, as all I do is post views on technical points of interest. Oh well…

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @Frans,

    As you say, we’ve never met and we don’t know each other. As I said, I simply find your *online* demeanor to be off-putting, unpleasant, and generally negative.

    Debate is good, but being told that I have Attention Deficit Disorder for not agreeing with your method of design isn’t particularly my cup of tea.

  • mkuczara

    Speaking about ‘what is good’ – in our project we started implementing DI (we are using Spring). My question is: is there a general rule, so to speak, for what should be injected? I’m afraid of the IEverything symptom. Any advice?

  • Brad Mead

    Ditto on appreciation of the direction of these posts. To me, these hold the prospect of a fabulous adjunct to the books I am reading (Evans, Newkirk, Beck, Fowler, etc.) and fit nicely into an advanced programming noob’s personal self-justification for growing and honing that ‘recipe’ for constantly improving effort.

    I’ve said it before and I think it bears repeating: sometimes it’s very formative for a mentor/teacher/peer to say “put down your pencils and let’s talk about the ‘whys’ of our paradigm”.

    Thanks.

  • http://weblogs.asp.net/fbouma FransBouma

    A bit off topic but, as I’m involved in the subject:
    First you say “Debate is Good”. 120% agreement there.
    Then you say: “I frankly don’t like the caustic online persona that is Frans Bouma,”.
    Why is that? We never met, yet you don’t like me. Isn’t that a little weird? I mean: aren’t you confusing ‘disagreement’ with ‘dislike’? If you first say “debate is good”, isn’t that implying that there will be disagreement? (otherwise, no debate ;))

    Sure, if you fundamentally disagree with a person, the person’s expressions aren’t really worth reading, but as you did read it, I’m not sure where the dislike comes from.

    For the record, I don’t ‘dislike’ you as I never met you in person so I can’t judge if I like you or not. I might be in disagreement with some of your views, but that’s natural: we all disagree somewhere with one another.

  • http://www.bluespire.com/blogs Christopher Bennage

    Great post, Jeremy. I’ve been thinking a lot about the “why” myself. I like the direction you are going here.
    I have some thoughts on the process of learning, and how to communicate best practices, especially for beginners, but I’ll save those for a post.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @Jimmy,

    My dislike of events *is* partially preference. What I definitely dislike is depending on events for some sort of lifecycle. I’d much rather have specific code that drives the algorithm and delegates to members on a well known interface. I’m thinking here about the CAB screen activation lifecycle. The CAB relies on events being raised from the View to whatever Presenter is attached to tell the Presenter when to do things to bootstrap the view. In my world order, an ApplicationController tells the Presenter when to do the bootstrapping through calls to an IPresenter interface, and that drives the lifecycle. Which lifecycle do you think is easier to understand and implement?
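
    Roughly like this (and I should stress these are illustrative shapes I’m making up for this comment, not the real CAB types):

        // The lifecycle is driven by explicit calls against a well known contract...
        public interface IPresenter
        {
            void Activate();   // bootstrap the view
            void Deactivate();
        }

        public class ApplicationController
        {
            // ...so the bootstrapping order can be read straight out of the code
            // instead of being reconstructed from event wiring in the View.
            public void Show(IPresenter presenter)
            {
                presenter.Activate();
            }
        }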

    If you’re only coupled to an interface defining an Observer, is that coupling really something to be concerned with? I think that using an explicit observer makes for better, cleaner contracts and makes the code much easier to navigate. I’ve done MVP both with events and with direct view to presenter calls. I’ll contend until I turn blue in the face that the direct communication is cleaner, easier to understand, and less mechanical work.

    I’ve found multi-casting scenarios to be very rare. In that case I would probably retreat to events.

    “Swinging the pendulum the other way, I would see going to a bunch of fine-grained interfaces, so implementors would have to pick and choose what notifications they cared about.”

    In for a penny, in for a pound. My experience is that interest in events tends to run in packs, so one observer interface is enough.

  • http://jimmybogard.lostechies.com Jimmy Bogard

    I hate picking one orthogonal point to comment about, but now I’m really curious on your observer position. Aren’t events supposed to eliminate the coupling that the observer/subject relationship introduces? Plus, don’t you have to always hand-roll the multi-cast-iness that events support out of the box?

    The only advantage I can see is being able to encapsulate a set of “observations” through an interface, providing an explicit contract to implementors. Swinging the pendulum the other way, I would see going to a bunch of fine-grained interfaces, so implementors would have to pick and choose what notifications they cared about.

    Or maybe events are ideal for component-based architectures like WinForms, but they shouldn’t be the default notification pattern?

  • http://www.e-Crescendo.com jdn

    I’m not heckling, I’m serious. Think it requires a separate blog post….

    I know what ‘Separation of Concerns’ means (I think….read wikipedia, Fowler, etc.), but I think it often is simply used as an immediate response to the question ‘Why are you doing that?’ and I think Frans’ reference was a good one to read and see that SOC can and does have different meanings.

    It’s good to see someone actually raise the question ‘How do I decide what is good’ in this area.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @jdn,

    With Bellware gone, you’ve gotta have somebody to heckle, so it might as well be me.

    “Separation of Concerns” == the ability to work on, understand, read, test, and change one aspect of the code at a time. Turning the complexity of the whole into a collection of simple things. It became a buzzword for a reason.

  • http://www.e-Crescendo.com jdn

    I’m going to be interested to see where you go with this.

    I tend to think “Separation of Concerns” is mostly used as a buzzword, and I also tend to think that a lot of what is called ‘Best Practices’ in software development is simply personal preference, so I’ll be glad to hear anything that cuts through my own biases.

    jdn