I’m cheating here, because I’m writing this from my kitchen table. My current client is one of the big investment banks. While I’m not too enthusiastic about their culture, I can seriously get into having all the banking holidays off. I went to part of the Westchester/Fairfield County Code Camp this weekend and came away with some topics for blogging. It’s the same old discussions that just don’t go away, but I guess they don’t go away because we just don’t have the answers.
Windows Workflow Foundation
One of my colleagues, Jason Sliss, gave a talk about the Workflow Foundation. I’ve been putting off a long look at WF for quite some time, so I was interested to see what Jason had to say. I built a fairly complex workflow engine and system several years ago, and workflow has been a low-level interest of mine ever since. I think WF looks interesting, but I’m not sure that I’d be that tempted to use it. Other people were complaining that WF didn’t have pre-canned solutions for many problems, but I actually like the fact that it doesn’t try to solve every problem for you. I think that making WF more or less a library to consume within your application, rather than some sort of big white-box framework, will make it far easier to adopt and much more flexible.
More on WF scattered below.
People Design Software
One of the first thoughts I had while watching the demo of WF is that a developer could make one unholy mess of unmaintainable crap with it. Unless you’re on top of things, applying the Single Responsibility Principle, and worrying about testability, I think WF would tempt you into throwing everything and the kitchen sink into the workflow classes. Nothing in WF that I saw leads to good separation of concerns. In fact, I think the designer-centric nature of WF development will encourage bad design.
What’s my point? It’s simply this: people design software, not tools. A good developer will use WF in a way that’s sustainable and maintainable (especially since business workflows *always* change). A bad developer will unthinkingly use the designer to drop code willy-nilly and create spaghetti.
Actually, let me put it this way. Tools can create passable designs in the right circumstances. People can make great designs. People can recognize when the tools aren’t generating the appropriate designs for the circumstances and alter their approach. Free will and all that, you know.
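To make that concrete: the discipline I’d want around a workflow step is to keep it a thin shell over plain, testable domain code. Here’s a minimal sketch in Ruby (all names are hypothetical, and there’s nothing WF-specific about it) of what that separation looks like:

```ruby
# All the real business logic lives in a plain class that can be
# unit tested without any workflow machinery. Names are hypothetical.
class CreditChecker
  LIMIT = 5_000

  def approved?(order)
    order[:amount] <= LIMIT
  end
end

# The workflow step stays a thin shell: it sequences work and delegates
# every decision to the domain object.
class CreditCheckStep
  def initialize(checker = CreditChecker.new)
    @checker = checker
  end

  def execute(order)
    @checker.approved?(order) ? :approved : :rejected
  end
end

puts CreditCheckStep.new.execute(amount: 4_000)  # => approved
```

The payoff is that CreditChecker can be tested in isolation, and the workflow class has nothing in it worth breaking.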
Too much generalization
What’s the deal with so many frameworks trying to pass around Dictionary&lt;string, object&gt; hashes as the argument or state? Put me down as hating this. I’ve seen this approach cause absolutely nothing but trouble and confusion. It’s a semantically weak contract. When you’re trying to understand code, it’s “mystery meat”: anything could be in there. I’d rather write explicit one-off code than pass around mystery meat arguments just to conform to a generic framework. Even if the explicit version takes longer to write up front, I think the mystery meat makes you pay later.
Shouldn’t generics have ended much of the Dictionary&lt;string, object&gt; abuse? And if we’re gonna keep getting this stuff forced on us, can we get a language construct for defining hashtables inline like Ruby has? And symbols too (compiler-checked symbols might be cool if it’s feasible). I know many people are worried about C# getting too big, but I think we could fit this little request into C# 4.0. Who’s with me?
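For what it’s worth, here’s a quick Ruby sketch (hypothetical names) of the difference between a mystery-meat hash and an explicit contract; the same tradeoff applies to Dictionary&lt;string, object&gt; in C#:

```ruby
# Mystery meat: the reader has no idea what has to be in this hash
# without spelunking through the framework that fills it.
def settle_loose(state)
  state["amount"] * state["rate"]
end

# Explicit contract: a small value object spells out exactly what the
# operation needs, and a misspelled field fails loudly instead of
# silently handing back nil.
Trade = Struct.new(:amount, :rate, keyword_init: true)

def settle(trade)
  trade.amount * trade.rate
end

puts settle(Trade.new(amount: 100, rate: 1.5))  # => 150.0
```

The explicit version takes a few more lines, but the signature now *is* the documentation.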
Xaml is Ugly
Explain to the slow kids how, exactly, Xaml is more readable and easier than expressing the same thing in code. I’m not talking about using Xaml for user interface layout here; that I buy.
Language Oriented Programming vs Visual Tooling
Both at the Code Camp and on a podcast I listened to on WF, I heard several people state that WF is a better way to write workflows because the visual designer makes them easier to write and understand. That claim was thrown out with zero justification, and that’s too bad, because the superiority of visualization is a whopping assumption to leave unchallenged. I think we need to give Language Oriented Programming a chance here as a cheaper solution that might just end up being easier to deal with. I’ll have to come back later with some links, but I’ve seen examples of using DSLs in Ruby to define workflow that I think are quicker to develop and easier to read (and hence to understand and maintain) than the visual representation in WF.
I was thinking of the “acts_as_state_machine” plugin to Rails. This kind of stuff is why I’m intrigued by IronRuby.
From Micah Martin of ObjectMentor fame: http://statemachine.rubyforge.org/
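To give a flavor of what I mean, here’s a toy internal DSL in Ruby in the spirit of acts_as_state_machine. This is purely illustrative (it is not that plugin’s actual API), but notice how the definition block reads almost like the business rule it encodes:

```ruby
# A toy internal DSL for defining a workflow as a state machine.
class Workflow
  attr_reader :state

  def initialize(&definition)
    @transitions = {}
    instance_eval(&definition)  # run the DSL block against this instance
    @state = @initial
  end

  # --- DSL vocabulary ---

  def initial(name)
    @initial = name
  end

  def transition(event, from:, to:)
    @transitions[[event, from]] = to
  end

  # --- runtime ---

  def fire(event)
    to = @transitions[[event, @state]]
    raise "can't #{event} from #{@state}" unless to
    @state = to
  end
end

# The workflow definition, readable by a non-programmer.
order = Workflow.new do
  initial :submitted
  transition :approve, from: :submitted, to: :approved
  transition :ship,    from: :approved,  to: :shipped
end

order.fire(:approve)
order.fire(:ship)
puts order.state  # => shipped
```

Thirty-odd lines of plain code buys you the declarative definition; there’s no designer, no XML, and the whole thing is trivially testable.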
I prefer User Stories over Use Cases
I started a new little project for my client last week. I had a couple of massive Use Case documents dumped on me from an analyst on the other side of the Atlantic Ocean. I was curious to see how it played out, because my knowledge of Use Cases was strictly academic. The Use Case I received was textbook: actors, RUP verbiage, and plenty of scenario descriptions in the Word doc. After about a half hour of looking at it, I’m more than ready to say that I strongly prefer User Stories for requirements. You can try the Mike Cohn link at the end of this section for an expert’s opinion on the difference, but first here’s my reasoning on the advantages of User Stories over Use Cases:
- User Stories are generally finer grained, and I think that’s useful both for ease of estimation and for prioritization. I broke my use case document into about 20 different user stories for my own edification, then emailed that list back to the analyst. When I look at the Use Case I see detailed specs for a lot of different development tasks in the new screen and workflow. Some of those tasks are vital, and others are simply nice to do. As I’m going to remind the client today, I’m rolling off at the end of the year and I simply can’t get to every single wrinkle in the Use Case. The lower priority stuff isn’t going to make it into the first release. If we were using fine grained stories, we’d simply play the high priority stuff and let the lower priority stuff stay in the backlog. User Stories are practically built for iterative development. The Use Case throws everything into a big bucket, and my analyst spent time detailing features that won’t get built.
- User Stories let you get started sooner, and that’s important. With a tight schedule, I’m going to regret the couple of days I couldn’t spend on my new project because the analyst was perfecting the Use Case document. The simple act of dividing things up into smaller chunks gets actionable requirements into the grubby little hands of the developers and testers earlier. Putting off detail on stories that aren’t yet in play also strikes me as more efficient. Why do detailed analysis on a requirement that might never get built?
- The conversation thing. A User Story is largely a project management device and a means of creating a common language between the team and the customer about features in the system. In one sense the User Story is simply a placeholder for the conversations and interactions between the business, analysts, and developers that make up the detailed requirements process. One of my favorite sayings is “the design hat never comes off,” but it’s just as important to keep the requirements analysis hat on as well. We always understand the requirements better as we proceed through the system. A Use Case just gives you too much incentive to sign off and stop thinking.
Patterns are more important than Tooling
I’m going through one of my periodic “maybe I should go give CAB another detailed look” phases. My attitude hasn’t changed yet. If you know the underlying design patterns behind the CAB, you’ll probably be able to build something simpler that’s more appropriate for your system without the CAB. If you know the patterns behind the CAB, you’ll be more successful with the CAB. The same thing applies to WF. If you simply understand the idea of a state machine, I really don’t think WF holds much value for simpler workflows. Both of these tools try to solve problems in a generic way. That’s great from the reusability standpoint, but it’s a sacrifice. My TradeCapture application is extended by a very specific mechanism that’s expressed in domain specific terms, not by “SmartParts” and “WorkItems.”
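As a point of comparison, for a simple workflow a bare transition table is often all the “workflow engine” you need. A minimal Ruby sketch (the states and events are made up for illustration):

```ruby
# For many simple workflows, a frozen transition table is the whole
# "engine": (current state, event) maps to the next state.
TRANSITIONS = {
  [:draft,   :submit]  => :pending,
  [:pending, :approve] => :active,
  [:pending, :reject]  => :draft,
}.freeze

def next_state(state, event)
  # Hash#fetch with a block raises a descriptive error for any
  # (state, event) pair the table doesn't allow.
  TRANSITIONS.fetch([state, event]) do
    raise "illegal event #{event} in state #{state}"
  end
end

puts next_state(:draft, :submit)  # => pending
```

If you understand the state machine pattern, that handful of lines is easier to reason about, test, and version control than a generic engine wrapped around the same table.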
I’ve built 5-6 WinForms applications of significant complexity now. Every single one of them had:
- A main form called “ApplicationShell” that contained all of the other child forms and controls
- An ApplicationController class that governed the activation of child Presenters and views
- Some sort of IPresenter/Presenter layer supertype that all presenters implemented
- An IView interface
- ICommand interface for the Command pattern
- What I call a “ScreenCollection” to track the open screens
“Aha!” you say. “CAB has most of that stuff and you’re just giving in to your NIH tendencies!” I’d still say no, because every single application was different. The patterns showed up in every project, but the details differed considerably. Even the ICommand interface has varied, with additional security, validation, or menu-specific stuff between apps. I personally like having all of these classes customized for the application. It makes them easier to use than a generic solution.
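As an illustration of what I mean by a customized Command, here’s a Ruby sketch (hypothetical names, not actual code from any of those apps) of a command abstraction with the application-specific security check baked right in:

```ruby
# A command layer tailored to one application, instead of a generic
# framework interface. Names are hypothetical.
class AppCommand
  def initialize(user)
    @user = user
  end

  # The app-specific part: every command runs through the same
  # security check before its real work executes.
  def execute
    return :denied unless authorized?
    run
  end
end

class CancelTradeCommand < AppCommand
  def authorized?
    @user[:roles].include?(:trader)
  end

  def run
    :trade_cancelled  # stand-in for the real cancellation logic
  end
end

puts CancelTradeCommand.new(roles: [:trader]).execute  # => trade_cancelled
```

A generic framework’s command interface can’t know that *this* application authorizes by trading role; the customized version makes that rule impossible to forget.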
Here’s a side argument I’ve seen crop up a couple times in the past week. Is it better to learn design patterns and principles first, or learn the same patterns and principles as a consequence of using a tool or framework that incorporates them? After looking harder at the CAB, I’d say that it’s not a bad thing at all to study CAB for the design patterns it incorporates, but I’m still dubious that working with CAB is the best way to learn the MVP pattern. I’m definitely in the camp that says it’s more important and effective to learn the patterns independently of a tool. After all, how else can you really judge whether or not the tool is what you need?
Invest in People before investing in Tools
I see sooooooo much effort and money going into producing or purchasing tooling that will “enable” bad or undertrained developers to write software with adequate results. Software factories to tell them what to do next. Methodologies that try to straitjacket developers into being spec programmers. Tools that frankly have no power because the makers favor safety to keep developers from hurting themselves. All-powerful frameworks that try to ease development by leaving developers very little choice or freedom. Yes, the average developer might be underskilled and undertrained, and we generally need to do something about that to make them more effective. My constant contention is that we’d be better off raising the average developer skill level across the board. In economic terms, I think it’s cheaper to invest in developing developers than in fancy tooling.
What is so wrong with our value system that we favor using tooling to make people interchangeable instead of investing in people to make them, and us, more effective?