There’s a good discussion going on right now on the Yahoo XP board about Jim Shore’s post on design.
The pro-Agile and anti-Agile arguments can get pretty silly because both sides mostly argue against the worst implementations of either XP or heavy processes. Just like politics, I’d have to imagine that most of us live somewhere in the middle. On one end you have the argument that Agile development is nothing but cowboy coding without *any* design or architecture activity. After all, there’s no line item on an Agile project schedule that explicitly says “do design” or “publish the finalized Software Architecture Document.” I’ll freely admit that developers often call their process “Agile” as cover for a “code ’n fix” method. We’re trying to hire a QA analyst for my team, and most of the candidates (suck) make faces when they hear that we’re an Agile shop because their experiences have been negative.
There’s also a great deal of reasonable doubt that refactoring is cost effective. My view is a bit mixed. I think that refactoring is smart because you’re acting upon real knowledge from the code. On the other hand, I think you’d better be aggressively keeping up with the refactoring or you might just be building up a bunch of cruft. I think there’s a healthy mix of upfront and reflective design that’s probably a bit different for every situation. Experienced Agilists will do “Design All The Time.” They learn as they go and apply those lessons. They opportunistically create abstractions to eliminate duplication, but only after they understand where the abstractions should be. They constantly use refactoring to make small adjustments as they go. They do not allow technical debt to build up.
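To make “eliminate duplication only after you understand the abstraction” concrete, here’s a minimal sketch in Python (the report functions and names are hypothetical; on .NET this is the kind of extraction a tool like ReSharper automates):

```python
# Before: two hypothetical report functions that grew the same totaling logic.
def format_invoice(lines):
    total = sum(qty * price for qty, price in lines)
    return "Invoice total: $%.2f" % total

def format_quote(lines):
    total = sum(qty * price for qty, price in lines)
    return "Quote total: $%.2f" % total

# After: the duplication has shown up in real code, so the shared concept is
# extracted opportunistically. A small, safe refactoring, not a grand design.
def order_total(lines):
    return sum(qty * price for qty, price in lines)

def format_invoice_v2(lines):
    return "Invoice total: $%.2f" % order_total(lines)

def format_quote_v2(lines):
    return "Quote total: $%.2f" % order_total(lines)
```

The point is the timing: the abstraction is pulled out after the code has proven where it belongs, not guessed at upfront.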
The other extreme is the “Big Design Upfront” stereotype that is the boogeyman Agile advocates use to scare people away from traditional processes. BDUF is described by Agilists as an attempt to specify all of the design to an excruciating detail before allowing any coding. The coding is then a mindless implementation of this big bang design with no room for adaptation. The fear is that the design will be unnecessarily complicated and bad designs can’t be corrected midstream because developers don’t have the power to challenge the design or change the design specifications. There’s also the fear that BDUF leads to teams that will be complacent towards the workability of the design and the code.
It’s an awfully good thing that almost nobody really does BDUF. Some pointy-haired types might think that their development teams are doing BDUF, but under the covers the technical team is working incrementally and adapting as they learn, because that’s the only way to succeed. I really don’t think many teams will keep walking down an obvious path to failure unless management makes them. As one of my colleagues likes to say, “Agile is just telling ourselves the truth” and “waterfall is just a reporting tool for management.” I’ve been officially responsible for designing software for about five years now. I put a lot more effort into designing for testability and less into big pluggable frameworks than I used to, but otherwise I’d say that my approach to design isn’t much different today inside a Scrum process than it was five years ago at an MSF waterfall shop. I still do a little bit of design before a little bit of coding and constantly keep notes on longer-term design strategies. The only real difference is that now my design activities don’t clash with the officially prescribed process. Arguably I get to spend more energy on design today in an Agile process because I’m not wasting as much time complying with non-value-added process activities.
So What Are You Afraid Of?
To a large degree I think the arguments between continuous design and traditional upfront design are a waste of oxygen because we’re all talking past one another. Our assumptions and beliefs about software development are fundamentally different. The differences between advocates of heavy, prescriptive design processes and advocates of continuous design might be partially explained by our fears and assumptions.
Predictive Waterfall Mentality
- Coding rework is expensive. It’s dangerous and labor intensive to modify existing code.
- Upfront design is necessary and efficient because it eliminates extensive rework. The design should address all the requirements before coding.
- The best way to mitigate risks is to create and document the design approach early. If we can adequately nail down the requirements and design early, the rest of the project is just mechanics.
- Coordinating a team requires documentation.
- As soon as the analysis and design phases are over, scope and design changes should be minimized. Requirements changes after the initial phases will jeopardize the project.
- Developers can’t be trusted and need structure and process to do things well.
Continuous Design Mentality
- Simplicity in design is important. Overly complex designs lead to coding inefficiency and delay the delivery of code. The requirements of the project are assumed to be changing, so don’t spend any effort that does not directly relate to the immediate requirements within a project iteration.
- Heavy design before coding leads to unnecessarily complex designs.
- Technical and project risks can only be mitigated by proving a design in working code and getting customer feedback.
- It’s more efficient to evolve a design by getting and applying feedback from working code than to design predictively. Predictive design is too difficult to get right.
- Technical work is best coordinated by collaboration and communication.
- Scope change is inevitable. Maximize your ability to handle change by working iteratively and incrementally.
You could boil the Agile vs. everything-else argument down to each side’s view of the “Cost of Change Curve.” Personally, I do believe that the change curve can be flattened through a rigorous combination of Test Driven Development, Continuous Integration, and an investment in test automation in general. Part of that is my experience that TDD leads you to create code with high cohesion and loose coupling. Well-written code with good architectural qualities can be modified much more easily than poorly structured code with low cohesion and tight coupling.
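The TDD rhythm that flattens the curve is easy to show in miniature. A Python sketch (the pricing function is hypothetical; NUnit plays the test-harness role on .NET): write the test first, make it pass, and keep the test as the safety net for later changes.

```python
# Test written first; it pins down the behavior before any implementation
# exists. This is the safety net that makes later refactoring cheap.
def test_price_with_discount():
    assert price_with_discount(100.0, 0.00) == 100.0
    assert price_with_discount(100.0, 0.10) == 90.0
    assert price_with_discount(50.0, 0.50) == 25.0

# Simplest implementation that makes the test pass.
def price_with_discount(subtotal, discount_rate):
    return round(subtotal * (1 - discount_rate), 2)

test_price_with_discount()  # rerun after every change, ideally from CI
```

With Continuous Integration running this suite on every check-in, a design change that breaks existing behavior is caught in minutes instead of in QA or production, which is exactly what lowers the cost of change.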
For enterprise systems it’s basically a guarantee that the system will be changed after the initial project is finished, so you should build a system that can adapt to change anyway. There are two ways to go about that. You can speculatively build in lots of plugin spots and create metadata-driven code (been there, got the tee shirt). Some SOA enthusiasts/architects force application teams to expose web services on systems just in case they’re valuable later. I think this is a stupid waste of resources that doesn’t result in any immediate business value. The better approach in my mind is to keep the application code limber by rigorously adhering to good coding practices (separation of concerns, high cohesion, low coupling, blah, blah, blah) and back it up with TDD and CI. If you need to expose a web service later, you’ll be able to do it, when and exactly how you actually need to do it.
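Here’s a Python sketch of what “keep the code limber instead of speculating” looks like (all names are hypothetical, and the WSGI adapter stands in for whatever web-service stack you’d actually use): the domain logic knows nothing about transport, so an endpoint can be bolted on later only if it’s actually needed.

```python
from urllib.parse import parse_qs

# Hypothetical domain service kept free of transport concerns. Nothing here
# knows about HTTP, XML, or any particular host process.
class OrderService:
    def __init__(self, catalog):
        self._catalog = catalog  # dict of sku -> unit price

    def quote(self, sku, quantity):
        if sku not in self._catalog:
            raise KeyError("unknown sku: %s" % sku)
        return self._catalog[sku] * quantity

# When (and only when) a web service is actually needed, a thin adapter wraps
# the existing service. Sketched here as a bare-bones WSGI app.
def make_wsgi_app(service):
    def app(environ, start_response):
        params = parse_qs(environ.get("QUERY_STRING", ""))
        total = service.quote(params["sku"][0], int(params["qty"][0]))
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [str(total).encode("utf-8")]
    return app
```

Because the service class has no transport dependencies, the web-facing decision is deferred until it delivers real business value, and the adapter is a few lines instead of a rewrite.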
The choice of development platform and tools will greatly impact your view of the change curve as well. I think that it’s no coincidence that the Agile movement is originally a product of the Smalltalk community, a language and environment apparently well-suited to evolutionary development. If I were still developing server side code in VB6 with all the COM versioning and deployment baggage I’d still believe more in heavier upfront design. .Net is much more forgiving than COM for evolutionary development and we’ve got tools like TestDriven.Net, CruiseControl.Net, and NUnit to give us more rapid feedback. You also cannot discount the advantages of an automated refactoring tool like ReSharper. Being able to safely do refactorings quickly takes a great deal of the risk and overhead out of evolutionary approaches.
I’ve also observed a marked difference in attitudes toward the technical staff between Agile and non-Agile development organizations. Agile shops make an implicit assumption that the development team as a whole should be capable of thinking while doing. Agile processes explicitly and correctly call out the importance of having talented individuals on a project team. I’ve seen a lot of criticism of Agile based on the dependence on strong developers that have a very good design sense. There is certainly some truth to that. On the other hand, have you ever seen a big upfront specification from an architect or technical lead with weak design skills? I have and it’s not pretty (think of a UML class diagram with one box that has one method called “Execute” and you’re not very far off).
In my waterfall days my management truly believed that all you needed was one or two strong people who could create all of the direction for the rest of the developers. Preferably the direction would be formalized in specification documents so that the stronger developers could be quickly reallocated to another project. There seems to be a very strong desire to treat developers like a commodity (JustAProgrammer). I generally assign nefarious thinking to management in this case (outsourcing, low pay, interchangeable parts, keep the rabble down), but it’s probably an unfortunate reality for many big IT shops. I certainly feel that Agile processes appeal to stronger developers, and it’s easier to hire and retain strong developers in a good Agile shop. It’s also impossible for a bad developer to hide on an Agile team.