
A Train of Thought: November 13th, 2007 Edition

I’m cheating here, because I’m writing this from my kitchen table.  My current client is one of the big investment banks.  While I’m not too enthusiastic about their culture, I can seriously get into having all the banking holidays off.  I went to part of the Westchester/Fairfield County Code Camp this weekend and came away with some topics for blogging.  It’s the same old discussions that just don’t go away, but I guess they don’t go away because we just don’t have the answers.


Windows Workflow Foundation

One of my colleagues, Jason Sliss, gave a talk about the Workflow Foundation.  I’ve been putting off a long look at WF for quite some time, so I was interested to see what Jason had to say.  I built a fairly complex workflow engine and system several years ago, and workflow has been a low level interest of mine ever since.  I think WF looks interesting, but I’m not sure that I’d be that tempted to use it.  Other people were complaining that WF didn’t have pre-canned solutions for many problems, but I actually like the fact that it doesn’t try to solve every problem for you.  I think that making WF more or less a library to consume within your application rather than some sort of big whitebox framework will make it far easier to consume and much more flexible.

More on WF scattered below.


People Design Software

One of the first thoughts I had while watching the demo of WF is that a developer could make one unholy mess of unmaintainable crap.  Unless you’re on top of things, applying the Single Responsibility Principle, and worrying about testability, I think the WF would tempt you into throwing everything but the kitchen sink into the workflow classes.  There’s nothing in the WF that I saw that leads to good separation of concerns.  In fact, I think the designer-centric nature of WF development will encourage bad design.

What’s my point?  It’s simply this:  people design software, not tools.  A good developer will use WF in a way that’s sustainable and maintainable (especially since business workflows *always* change).  A bad developer will unthinkingly use the designer to drop code willy nilly and create spaghetti.

Actually, let me put it this way.  Tools can create passable designs in the right circumstances.  People can make great designs.  People can recognize when the tools aren’t generating the appropriate designs for the circumstances and alter their approach.  Free will and all, you know.


Too much generalization

What’s the deal with so many frameworks trying to pass around Dictionary&lt;string, object&gt; hashes as the argument or state?  Put me down as hating this.  I’ve seen this approach cause absolutely nothing but trouble and confusion.  It’s a semantically weak contract.  When you’re trying to understand code it’s “mystery meat” because anything could be in there.  I’d rather write explicit one-off code than pass around mystery meat arguments just to conform to a generic framework.  Even if the explicit version takes longer to write upfront, I think the mystery meat would make you pay later.
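To make the contrast concrete, here’s a quick sketch in Ruby (all names hypothetical) of the same operation written against a mystery meat hash and against an explicit argument object:

```ruby
# Mystery meat: the signature says nothing about what the hash must contain
def settle(context)
  context["trade_id"]  # hope the key exists and holds the right type
end

# Explicit contract: the argument type documents exactly what the method needs
TradeRequest = Struct.new(:trade_id, :amount)

def settle_trade(request)
  request.trade_id
end
```

The second version costs a type definition upfront, but anyone reading settle_trade can see its inputs without spelunking through every caller.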

Shouldn’t generics have ended much of the Dictionary<string, object> abuse?  And if we’re gonna keep getting this stuff forced on us, can we get a language construct for defining hashtables inline like Ruby has?  And symbols too (compiler checked symbols might be cool if it’s feasible).  I know many people are worried about C# getting too big, but I think we could fit this little request into C# 4.0.  Who’s with me?
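For reference, this is the Ruby construct in question: a hash literal defined inline, keyed by symbols. (A sketch of the Ruby side only, not a proposal for C# syntax.)

```ruby
# An inline hash literal with symbol keys. Symbols are interned identifiers,
# but a typo just creates a new symbol at runtime, which is why compiler
# checked symbols would be an improvement.
options = { :retries => 3, :timeout => 30, :on_failure => :rollback }

options[:timeout]  # => 30
```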


Xaml is Ugly

Can somebody explain to the slow kids why or how Xaml is more readable and easier to work with than expressing the same things in code?  I’m not talking about using Xaml for user interface layout here; that I buy.


Language Oriented Programming vs Visual Tooling

Both at the Code Camp and on a podcast I listened to on WF, I heard several people state that WF is a better way to write workflows because the visual designer makes it easier to write and understand.  That statement was thrown out with zero justification, and that’s too bad, because the superiority of visualization is a whopping assumption to make unchallenged.  I think we need to give Language Oriented Programming a chance here as a cheaper solution that might just end up being easier to deal with.  I’ll have to come back later with some links, but I’ve seen some examples of using DSLs in Ruby for defining workflow that I think are quicker to develop and easier to read (and hence to understand and maintain) than the visual representation in WF.

I was thinking of the “acts_as_state_machine” plugin to Rails. This kind of stuff is why I’m intrigued by IronRuby.

From Micah Martin of ObjectMentor fame:  http://statemachine.rubyforge.org/
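To give a flavor of the approach, here is a minimal hand-rolled state machine in Ruby. To be clear, this is not the API of acts_as_state_machine or the statemachine gem linked above, just an illustration of how little code a readable workflow definition needs:

```ruby
# A tiny state machine: a transition table plus an event dispatcher.
class Workflow
  attr_reader :state

  def initialize(initial)
    @state = initial
    @transitions = {}  # { [from_state, event] => to_state }
  end

  def transition(from, event, to)
    @transitions[[from, event]] = to
  end

  def fire(event)
    to = @transitions[[@state, event]]
    raise "no transition for #{event} from #{@state}" unless to
    @state = to
  end
end

# The workflow definition reads almost like the business rule it encodes:
trade = Workflow.new(:drafted)
trade.transition :drafted, :submit,  :pending
trade.transition :pending, :approve, :booked
trade.transition :pending, :reject,  :drafted
```

That transition table is the whole definition, which is the Language Oriented Programming argument in miniature: the notation is close enough to the domain that you can review it without a visual designer.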


I prefer User Stories over Use Cases

I started a new little project for my client last week.  I just had a couple of massive Use Case documents dumped on me from an analyst on the other side of the Atlantic Ocean.  I was curious to see how it played out because my knowledge of Use Cases is strictly academic.  The Use Case I received was textbook, with actors and RUP verbiage and plenty of scenario descriptions in the Word doc.  After about half an hour of looking at the Use Case, I’m more than ready to say that I strongly prefer using User Stories for requirements.  You can try the Mike Cohn link at the end of this section for an expert’s opinion on the difference, but first here’s my reasoning on the advantages of User Stories versus Use Cases:

  • User Stories are generally finer grained, and I think that’s useful both from an ease of estimation standpoint and the prioritization standpoint.  I broke my use case document into about 20 different user stories for my own edification, then emailed that list back to the analyst.  When I look into the Use Case I see detailed specs for a lot of different development tasks in the new screen and workflow.  Some of these tasks are vital, and others are simply nice to do.  As I’m going to remind the client today, I’m rolling off at the end of the year and I simply can’t get to every single wrinkle in the Use Case.  The lower priority stuff isn’t going to make it into the first release.  If we were using fine grained stories we’d simply play the high priority stuff and let the lower priority stuff stay in the backlog.  User Stories are practically built for iterative development.  The Use Case throws everything into a big bucket, and my analyst spent time detailing out features that won’t get built.

  • User Stories let you get started sooner and that’s important.  With a tight schedule, I’m going to be regretting the couple days that I couldn’t spend on my new project because the analyst was perfecting the Use Case document.  The simple act of dividing things up into smaller chunks gets actionable requirements into the grubby little hands of the developers and testers earlier.  Putting off detail on stories that aren’t yet in play is also more efficient to me.  Why do detailed analysis on a requirement that might never get built?

  • The conversation thing.  A User Story is largely a project management device and a means to creating a common language for the team and customer about features in the system.  In one sense the User Story is simply a placeholder for the conversations and interactions between the business, analysts, and developers that make up the detailed requirements process.  One of my favorite sayings is “the design hat never comes off,” but it’s just as important to keep the requirements analysis hat on as well.  We always understand the requirements better as we proceed through the system.  A Use Case just gives you too much incentive to sign off and stop thinking.

Mike Cohn prefers User Stories too.


Patterns are more important than Tooling

I’m going through one of my periodic “maybe I should go give CAB another detailed look” phases.  My attitude hasn’t changed yet.  If you know the underlying design patterns behind the CAB, you’ll probably be able to build something simpler that’s more appropriate for your system without the CAB.  If you know the patterns behind the CAB, you’ll be more successful with the CAB.  The same thing applies to the WF.  If you simply understand the idea of a state machine, I really don’t think the WF holds much value for simpler workflows.  Both of these tools try to solve problems in a generic way.  That’s great from the reusability standpoint, but it’s a sacrifice.  My TradeCapture application is extended by a very specific mechanism that’s expressed in domain specific terms, not by “SmartParts” and “WorkItems.”

I’ve built 5-6 WinForms applications of significant complexity now.  Every single one of them had:

  • A main form called “ApplicationShell” that contained all of the other child forms and controls

  • An ApplicationController class that governed the activation of child Presenters and views

  • Some sort of IPresenter/Presenter layer supertype that all presenters had to implement

  • An IView interface

  • ICommand interface for the Command pattern

  • What I call a “ScreenCollection” to track the open screens

Aha! you say.  CAB has most of that stuff and you’re just giving in to your NIH tendencies!  I’d still say no, because every single application was different.  The patterns happened in every project, but the details differed considerably.  Even the ICommand interface has varied with additional security, validation, or menu-specific stuff between apps.  I personally like having all of these classes be customized for the application.  It makes them easier to use than a generic solution.
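As a rough sketch of how a couple of those pieces fit together (in Ruby for brevity, though the real applications were WinForms/C#; every name beyond ApplicationController and ScreenCollection is an assumption):

```ruby
# Tracks the open screens, keyed by some screen identifier.
class ScreenCollection
  def initialize
    @open = {}
  end

  def [](key)
    @open[key]
  end

  def add(key, presenter)
    @open[key] = presenter
  end
end

# Governs the activation of child presenters: reuse an open screen
# if one exists, otherwise build and register a new presenter.
class ApplicationController
  def initialize(screens)
    @screens = screens
  end

  def activate(key)
    presenter = @screens[key] || build_presenter(key)
    @screens.add(key, presenter)
    presenter.activate
    presenter
  end

  def build_presenter(key)
    TradePresenter.new(key)  # hypothetical app-specific presenter
  end
end

class TradePresenter
  attr_reader :key, :activations

  def initialize(key)
    @key = key
    @activations = 0
  end

  def activate
    @activations += 1
  end
end
```

The point of the sketch is the shape, not the details: each application would grow its own wrinkles on top of these same patterns.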

Here’s a side argument I’ve seen crop up a couple of times in the past week.  Is it better to learn design patterns and principles first, or to learn the same patterns and principles as a consequence of using a tool or framework that incorporates them?  After looking harder at the CAB, I’d say that it’s not a bad thing at all to study it for the design patterns it incorporates, but I’m still dubious that working with CAB is the best way to learn the MVP pattern.  I’m definitely in the camp that says it’s more important and effective to learn the patterns independently of a tool.  After all, how else can you really judge whether or not the tool is really what you need?

Invest in People before investing in Tools

I see sooooooo much effort and money going into producing or purchasing tooling that will “enable” bad or undertrained developers to write software with adequate results.  Software factories to tell them what to do next.  Methodologies that try to straitjacket developers into being spec programmers.  Tools that frankly have no power because the makers favor safety to keep developers from hurting themselves.  All-powerful frameworks that try to ease development by leaving developers very little choice or freedom.  Yes, the average developer might be underskilled and undertrained, and we generally need to do something about that to make them more effective.  My constant contention is that we’d be better off to raise the average developer skill level across the board.  In economic terms, I think it’s cheaper to invest in developing developers than in fancy tooling.

What is so wrong with our value system that we favor using tooling to make people interchangeable instead of investing in people to make them, and us, more effective?

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
  • http://www.usefulcases.com yoshi

    I’ve used use cases for a number of years, and I felt they were difficult to write and nobody wanted to read them. Unfortunately, I didn’t pay attention to the user stories methodology, and a few friends of mine and I decided to develop an online use case authoring tool, with some coaching elements in it, to solve the very problem of writing use cases.

    Now, I kinda feel foolish trying to solve the problem while another approach would have eliminated the problem altogether.

    I wonder if I should stop the online use cases tool…

  • http://www.adronbhall.name Adron

    “Invest in People before investing in Tools”

    That is the truth! Logic would dictate.

    In a contrary way, but agreeing with your thought on this.

    If a person isn’t trained they can’t use the tools. I’ve seen awesome tools put into place that are used in horrible ways and cause nothing but overhead. But when people are trained appropriately and provided an avenue to learn proper patterns, methods, and techniques for using the tools … then you end up with an absolute win-win. But if people aren’t trained, the tools are worthless.

    Simple logic. :) Amazingly it is not a chicken or the egg scenario as I’ve seen discussed on some other blogs.

    Keep up the great entries! A few people and myself commonly discuss random entries of yours in our various coffee, beer, or lunch sessions.

  • http://dotnet.agilekiwi.com/blog/2007/05/implementing-workflow-with-persistent.html John Rusk

    I like your comments about Language Oriented Programming. By the way, I’ve posted a C# sample on my blog, of a language oriented approach to workflow development. It lets you write methods that pause, waiting as long as necessary for external events, and then continue where they left off. This, or something like it, is essential if workflows are to be developed in an ordinary programming language.

    It works in .NET 2.0.

  • Matt

    Lots of interesting things to think about here. Nice article.

    I particularly agree with your last point about people over tools. I’ve been telling managers this for years, but no-one listens. As an example, I once worked at a place where another developer and I spent about three months building a CMS. Six man-months of mid-level developer wages to create a tool that was perfectly suited to the needs of the organisation, instead of a vastly inflated sum for an off-the-shelf solution that would probably create as many problems in conforming to its inbuilt workflow as it solved.

    Shortly after I left that job, the management did just that – bought an expensive off-the-shelf CMS and discarded the bespoke system. They used it for six months and then went back to the system we’d originally written. Point proved!

  • http://www.bizcoder.com Darrel Miller

    Regarding the tools versus people problem, we are not alone in the software development world. My company develops ERP software for small manufacturing companies. Prospective customers are frequently looking for our software to reduce their dependence on intelligent people. I try to explain that software works best when you try to enhance the productivity of intelligent people rather than trying to replace them with “interchangeable cogs”.
    Somehow we have to convince the population “at large” that software, like a musical instrument, in the wrong hands can be hideous. I don’t care how much money you spend on a violin, if you give it to me to play, you are never going to like what you hear.

  • Object Bigot

    Companies don’t want to invest in people. They want interchangeable cogs that they can fire on a whim. The thing is, most companies view developers as only being slightly above the janitor in the corporate hierarchy, so why invest in training them? As long as companies have a derisive view of developers, there will be continued emphasis on tooling or the next ‘silver bullet’.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    Thank you for the comment, and yes, I’m going to slow down a bit on blogging next year. Someone else can take up the slack.


    I was thinking about this just this morning. For the amount of money that a shop would spend on the full shebang of VSTS per developer, they could probably send everybody to one of JP’s Nothin’ but .Net bootcamps. I know which expenditure I think would pay off more.

  • http://mattcalla.com mattcalla

    Completely agree with your comments on ‘Invest in People before investing in Tools’.

    Throwing money at the ‘tools’ is the easy way out for companies – whereas like you said – putting the hard work in developing the actual people would be a much better investment. I have much more respect for companies that value their employees and take the time to help them become better at their craft.

  • Stephen Smith

    Jeremy, as always thanks for the generosity of your blogs. I hope you are blogging at a “sustainable pace” though.

    As regards to User stories and requirements gathering software development can be boiled down to two essentials:
    – Building the RIGHT thing; and
    – Building the thing RIGHT.

    Projects are voyages of discovery, both for the user community, discovering the real nature of their problem/opportunity and the optimal means of addressing it, and for the development community, determining the optimal means of building and delivering it. Users’ business needs also change in response to changes in their business domain/marketplace over the life of a project. I love the Agile/XP philosophy of “embracing change” because change represents what has been learnt and allows it to be fed back into the project development.

    As regards the tooling v people debate I find in the Microsoft world, at least here in Sydney and presumably it is characteristic of many other places, that instead of attempting to discover practices and tools for working smarter to address our development challenges, the majority of us are waiting for the next promised tool/ visual designer from Microsoft which will solve the problem. Development is based on logic, not magic. If this mentality is not just limited to Sydney, how can we change this thinking?

  • http://scottic.us ScottBellware

    > can we get a language construct for defining hashtables inline like Ruby has?
    > And symbols too (compiler checked symbols might be cool if it’s feasible).
    > I know many people are worried about C# getting too big, but I think we could
    > fit this little request into C# 4.0. Who’s with me?


    P.S.: I asked for this last year… when there appeared to be enough time to get it into C# 3. We’re going to need to really force this issue if it’s going to make it onto the C# team’s plate.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    Ruby is still cleaner, but C# 3 is a step in the right direction.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    Fair point about CAB since I use that argument as a pro-Rails statement.

    “how do you think we got VB 6 guys to do OOP? ;)” — If you know the secret to this one, I’d love to know it;) Making the leap from procedural programming to OOP seems to trip up a lot of intelligent folks.

  • http://www.OdeToCode.com/blogs/scott/ Scott


    You define an interface with events and methods, and inject a concrete implementation of that interface into the workflow runtime (actually into the ExternalDataExchange service of the runtime).

    A workflow can then invoke methods on the service to communicate with the outside world, and can listen to events for incoming messages and data.

    The upside is you have a very formal communication mechanism defined by an interface, and the inputs/outputs are explicitly modeled in the workflow. Events arrive at the workflow even if it is unloaded and sleeping in a persistence store.

    The downside is that the process can feel cumbersome, and on the surface it looks like a lot of abstraction if you just need to pass along a simple object. There are some constraints I left out (like all objects coming into the workflow via eventargs must be [Serializable]). Some people will avoid the overhead and drop down to a lower level mechanism, which is to just place objects into the WF queuing service for a workflow to pick up – easy and informal but prone to some of the same problems as a Dictionary.

  • http://weblogs.asp.net/kdente Kevin Dente

    As a long time “workflow guy”, I’ll add my 2 cents that visual representations of flows can be a great way to understand a complex, structured workflow. Part of the problem with WF is that the designer, in a word, sucks. It’s too primitive for anything complex, so it ends up only being useful in relatively simple cases. In other words, in the places where you can use it, you probably don’t need it. High-end BPM products generally have much richer modeling tools.

  • Mark Brackett

    Collection initializers (C# 3) almost fulfill your inline hashtable request:
    new Dictionary<string, object> { { "key1", obj1 }, { "key2", obj3 } };
    Yeah – it’s too many curly braces, but it’s not too bad.

    I agree with all your points – but it’s kind of iffy on the rolling your own WinForms framework as opposed to using CAB. There is a “network effect” of standardizing on framework(s), and there’s certainly value in that beyond the actual bits that make up CAB. How much that tips the scales in favor of CAB, I’m not sure (I personally don’t use it either…but I do feel bad about it sometimes). Also, the use of a framework encourages devs to live up to the expectations of that framework – how do you think we got VB 6 guys to do OOP? 😉

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    In terms of granularity, I think the comparison between Scenario and user story is about right, but I wouldn’t take the comparison any farther than that. It’s just a different way to do things.

    And btw, welcome.

  • Mark Nyon

    Re: User Stories over Use Cases

    The “User Stories” idea sounds like “Scenarios”, a concept that was attached to use cases when I was introduced to them awhile back. There’s some more detailed info in this book from IBM (http://www.amazon.com/Developing-Object-Oriented-Software-Experience-Based-Approach/dp/0137372485), but the idea is that the Use Case displays the actors. There’s a one-to-many relationship with Use Cases and Scenarios. A term used in that book was “Scenario Driven Development”.

    Hi, btw. my name is Mark Nyon. I just found this site a few weeks ago and am pleased to find an agile development community in .Net land.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    Cool. How hard is the strong contract stuff to use?

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    I’m going to get around to it in the next couple months. You can check the StoryTeller codebase for examples, but I don’t think that StoryTeller is all that representative.

  • http://www.OdeToCode.com/blogs/scott/ Scott

    Yeah – the mystery meat Dictionary going into WF is something people shy away from, generally. You mostly see it pop up in demos. There is an alternative mechanism for getting data into a workflow which is very formal (too formal at times, actually).

  • GC

    Do you have source available that demonstrates this ApplicationShell, ApplicationController? I’m curious how these are supposed to interact. I thought maybe you had a public project somewhere that included this.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    It’s easier in the end to just accept the fact that they’ll change something in the requirements as they go. I can’t speak for Use Cases, but I can say that it’s far, far easier to accept and track scope change with user stories than it is with a traditional “the system shall…” Word document. It’s so much easier to just track a brand new user story than it is to hunt down a change to a Word doc.

    Even better yet, by purposely delaying the detailed conversation until later it just doesn’t cost that much for a user to change their mind about a user story that has not yet been played in an iteration.

  • http://mygeeklife.net S.A.M.

    Great blog. I really enjoyed the part about use cases vs. user stories. I find the really hard part is managing the client, i.e., making sure they don’t try to change the whole product halfway through development.