Jay Kimble, CodeBetter's resident AJAX guru, issued a little challenge to us TDD bloggers about using Test Driven Development to develop a custom extension to the MS Ajax ScriptManager control. In the comments, Jeff Perrin (you need to blog more, Jeff) proposes a solution that's about what I would have suggested: move most of the functionality into a "POO" (Plain Old Object) class that isn't coupled in any way to the ASP.Net runtime.
I think the most important things to know or think about upfront before transitioning to TDD are:
- There is a way to write an automated test for almost any piece of code, including the user interface itself (the idea that the GUI can't be tested is somewhat of a myth)
- The biggest part of driving code through the tests is thinking through *how* you're going to split up the responsibilities of the code so that unit tests are easy to write, *before* you commit to writing code. I know a lot of people blanch at the thought of doing things just for testing, but a lot of making code testable is simply a strong focus on separating responsibilities into cohesive and loosely coupled modules. Basically, it's just traditional good design.
- Assume that you will have to change the way you code in order to make writing the unit tests easier. Maybe you won't, but you should still have an open mind and be prepared to challenge the way that you write code.
- Specific to .Net, the "Visual Studio way" of creating code often detracts from testability. More on this below.
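To make the first two points concrete, here's a minimal sketch, in TypeScript rather than C# purely for brevity, of what "testable GUI logic" can look like: the screen behavior is pulled behind a humble view interface so a plain unit test can drive it without any UI framework. All of the names here are invented for illustration.

```typescript
// The view is reduced to a dumb interface; the logic behind it is plain code.
interface ISaveView {
  enableSaveButton(enabled: boolean): void;
}

// The presenter owns the UI rule, and it has no idea what toolkit renders it.
class SavePresenter {
  constructor(private view: ISaveView) {}

  // UI rule: save is only available when the form is dirty AND valid.
  formChanged(isDirty: boolean, isValid: boolean): void {
    this.view.enableSaveButton(isDirty && isValid);
  }
}
```

A test hands the presenter a recording fake view and asserts on what it was told to do. No browser, no web server, no pixel-level assertions, yet the interesting GUI behavior is under test.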
Laying down the Law
Back to Jay's issue, let's try to apply some of Jeremy's Laws of TDD.
- Isolate the Ugly Stuff. In Jay's case the ugly stuff is anything related to the ASP.Net runtime. Following law #1, boil the "meat" of the real functionality away from the "bones" of the ASP.Net runtime. Now, the functionality can be tested without all the overhead of a web server.
- Push, Not Pull. In Jay's control, divide the responsibilities into two pieces. The control itself is only responsible for handling the interaction with the ASP.Net runtime and delegating to the "POO" class that actually implements the functionality. If the "POO" class is given everything it needs to do the work, it's completely decoupled from its data source, and therefore easy to test. Known input, expected outcome. This is what I call the "Roadies and Aerosmith" analogy. The "POO" class (Aerosmith) is doing all the sexy work, but the web control (the Roadie) is responsible for giving everything to the "POO" class that it needs to do its work.
- Test Small Before Testing Big. Obviously, the whole thing, control and "POO" class, needs to be tested front to back in an integration test. Before you even attempt the big bang test, build and test the constituent pieces in isolation first. That strategy both leads to effective evolutionary design, and keeps the debugger turned off.
- Avoid a Long Tail (forthcoming, but soon because I need some content for my DevTeach talks). Jay's scenario is really too simple for this law to come into play, but think on this: if you need to pick up any piece of logic or behavior in your system and write a test for it, how much "stuff" has to come with it? For example, if you can't test some business logic without a database or Active Directory authentication, your TDD efforts are going to be much more difficult than they could be.
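Here's how laws #1 through #3 might look in miniature, sketched in TypeScript with invented names rather than Jay's actual control: the "POO" class is handed everything it needs (Push, Not Pull), knows nothing about the runtime (Isolate the Ugly Stuff), and can be unit tested in isolation before any integration test (Test Small Before Testing Big).

```typescript
// The "POO" class: no dependency on any web runtime whatsoever.
// Everything it needs is pushed in through its method arguments.
class ScriptIncludeBuilder {
  // Given already-resolved script paths, emit the tags the control will render.
  buildTags(scriptPaths: string[]): string {
    return scriptPaths
      .map(path => `<script type="text/javascript" src="${path}"></script>`)
      .join("\n");
  }
}

// The control (the "Roadie") lives at the ASP.Net boundary. Its only jobs are
// gathering state from the runtime and delegating. Sketched as a plain
// function over a stand-in runtime interface:
interface PageRuntime {
  resolveUrl(relative: string): string; // stand-in for Control.ResolveUrl
}

function renderScripts(runtime: PageRuntime, scripts: string[]): string {
  const resolved = scripts.map(s => runtime.resolveUrl(s));
  return new ScriptIncludeBuilder().buildTags(resolved);
}
```

A unit test feeds the builder known input and asserts the expected outcome; a second test fakes the runtime with an object literal. Neither needs a web server, which is exactly the point.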
More Classes? Yes!
Again referring to Jay's post, one of the commenters is concerned that the extra classes will create performance/scalability problems, and I would also guess there's an unvoiced concern over extra complexity. Two points:
- I vehemently oppose the attitude that more classes automatically equates to complexity. I think a much, much more pervasive problem in the wild is classes that are bloated in size with unrelated responsibilities. In a way, I'm personally more concerned with the complexity of any one single piece of code than with the whole. I'll take a dozen classes with cohesive responsibilities over a single 1000-line procedure any day of the week.
- I will happily stand up and shout this all damn day. In all but the most extreme cases, the performance overhead of creating well factored code (smaller classes and methods) is trivial in the grand scheme of things. Maintainability, of which testability is a crucial component, should never be sacrificed at the altar of performance without very good reason. I cringe every time I read somebody say to avoid creating more objects as a performance optimization. Sure, object creation has a cost, but how does it compare to a single inefficient database query or a chatty communication pattern?
The Underlying Problems of TDD in the .Net World
Now, the very real problem that Jay's little example exposes to broad daylight: in .Net development, and especially ASP.Net WebForms development, you often have to go out of your way to create testable code. To steal a phrase from Jeffrey Palermo, .Net does not help you "fall into success" with good separation of concerns and testability. In Jay's case, you can't simply write a small unit test for the control as a whole because any UserControl is tightly coupled to the ASP.Net runtime. You can fake the HttpContext and the page event cycle, but it's nontrivial and doesn't save you much time (and no, don't ask me exactly how. The examples I've seen relied on lots of Win32 calls and quite a bit of time with Reflector. My advice is to go to MonoRail or wrestle WebForms into an MVP structure). You *can* abstract away the HttpContext by surrounding it with abstracted interface wrappers, and I do recommend this, but the better question I should probably be asking, even as a fervent TDD advocate, is "why do I have to do all this extra stuff for testability?"
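The interface-wrapper approach looks roughly like this (a TypeScript sketch with hypothetical names; ASP.Net's real HttpContext has a far bigger surface than this, and you'd only wrap the slice you actually use):

```typescript
// A thin abstraction over just the pieces of the runtime context we need.
interface IRequestContext {
  getQueryValue(key: string): string | null;
  isAuthenticated(): boolean;
}

// Production code depends only on the interface, never the runtime itself.
class GreetingService {
  constructor(private context: IRequestContext) {}

  greet(): string {
    if (!this.context.isAuthenticated()) return "Hello, guest";
    const name = this.context.getQueryValue("name") ?? "friend";
    return `Hello, ${name}`;
  }
}

// In a unit test, a hand-rolled stub replaces the runtime entirely.
class StubContext implements IRequestContext {
  constructor(
    private values: Record<string, string>,
    private authed: boolean
  ) {}
  getQueryValue(key: string): string | null {
    return this.values[key] ?? null;
  }
  isAuthenticated(): boolean {
    return this.authed;
  }
}
```

The wrapper buys you testability, but notice the tax: every runtime touchpoint needs an interface, a production adapter, and a stub. That's the "extra stuff" the question above is complaining about.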
One of the most popular posts I've ever written was on using a Model View Presenter architecture with ASP.Net for testability. Half a year later I completely lost my enthusiasm for MVP with WebForms in a short morning of presentations from Prag Dave Thomas on Ruby on Rails. What I saw in only a couple hours was a far more elegant solution for testable presentation code than MVP/WebForms can possibly provide. At one point I even made Dave stop and repeat a sentence on how state and information was moved from controller to view because their technique eliminated most of the mechanical work we were doing to force ASPX pages into something testable.
One of the huge advantages of Ruby on Rails over WebForms is the inherent testability that is baked into Rails. The way you develop pages in Rails out of the box is inherently testable; in WebForms you have to apply an extra layer of stuff that isn't built in. Rails comes with built-in extensions to the Ruby xUnit tool to stub out the equivalent of the HttpContext at will to simulate user requests, plus more extensions with common assertions for controller-to-view interaction. You can even happily test the generated web page itself without starting up a web server, and that's a huge win in terms of the rapid feedback cycles you want with TDD.
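In spirit, a Rails functional test treats the controller action as a function from a simulated request to a view model, with no server anywhere in sight. A hand-rolled approximation of that idea, in TypeScript with invented names:

```typescript
// A simulated request: no sockets, no server, just data.
interface FakeRequest {
  path: string;
  params: Record<string, string>;
}

// What the controller hands to the view: a template name plus the
// variables "assigned" to it (Rails' @instance_variable idiom).
interface ViewResult {
  template: string;
  assigns: Record<string, unknown>;
}

// A "show" action: pull the id out of the request, hand data to the view.
function postsShowAction(req: FakeRequest): ViewResult {
  const id = req.params["id"];
  return {
    template: "show",
    assigns: { id, title: `Post ${id}` },
  };
}
```

A test builds a fake request, invokes the action, and asserts on which template was chosen and what was assigned to it. That's the controller-to-view assertion style Rails ships in the box, and it's what WebForms makes you build by hand.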
Rails straitjackets you into an MVC breakdown by default that separates responsibilities for easier testing. ASP.Net is all too happy to let you throw everything into the code behinds. I was extremely dubious about Software Factories at first, but there might be some room for a software factory for ASP.Net that leads to better practices. We'll see.
If Jay had been building this functionality with Ruby on Rails (which already has that functionality btw), he wouldn't have had to go out of his way one single bit because Rails is inherently testable, and that's my whole point in this section. We want ASP.Net to have the same level of testability that Rails does. We should be asking Microsoft to move ASP.Net in a direction that enables us to write testable code without fighting the framework.
I don't know how much say or influence I get with this MVP thingie, but the one single thing that I would beg Microsoft for in whatever is beyond Orcas is far more consideration for testability in the .Net framework. It's the one area in particular where I think the .Net framework needs a lot of improvement. ScottGu, since you're amazingly all seeing, are you out there? Can we talk about testability in ASP.Net? Can we jump testability over new whiz-bangy RAD capabilities in the priority list? Who's with me?
I really think the .Net community needs to reexamine and debate the merits and appropriateness of the Visual RAD approach. A lot of what makes ASP.Net WebForms hard to test is the extra complexity that's there to support design time controls, specifically the page event cycle. RAD is doing more damage than good because the negative impact on maintainability/testability overwhelms the "write once" advantages of RAD. Discuss.
My Stuff on Test Driven Development
- TDD Design Starter Kit: It’s All about Assigning Responsibilities
- TDD Design Starter Kit – State vs. Interaction Testing
- TDD Design Starter Kit – Dependency Inversion Principle
- TDD Design Starter Kit – Responsibilities, Cohesion, and Coupling
- TDD Design Starter Kit – Static Methods and Singletons May Be Harmful
- Succeed with TDD by designing with TDD
- Unit Testing Business Logic without Tripping Over the Database
- Haacked on TDD and Jeremy's First Rule of TDD
- Jeremy's Second Law of TDD: Push, Don't Pull
- Achieve Better Results by following Jeremy's Third Law of TDD: Test Small Before Testing Big – I think this is the most important lesson for making TDD work for you
- How much design before unit testing, and how much design knowledge before TDD?
- So How do You Introduce TDD into an Organization or Team?
- Mock Objects and Stubs: The Bottle Brush of TDD
- Why and When to Use Mock Objects
- Best and Worst Practices for Mock Objects
- Mock Objects are your Friend