TDD/BDD and the lean startup

A post at Hacker News caused a little storm on Twitter by questioning whether a lean startup needs to use Test First, or whether it is an obstacle to delivery.

Build the right thing, or avoid rework

The poster complains that test-first approaches don’t fit with market-driven evaluation of features:

Most of the code in a lean startup are hypotheses that will be tested by the market, and possibly thrown out (even harder to rewrite test cases with slightly different requirements over writing from the beginning).

First let’s look at a big reduction in the cost of delivering software – building the right thing in the first place. People who come from a background in organizations with little or no process often confuse Agile’s attack on valuing process over software with support for having no process. It is not. In many ways classical agile is best understood in the context of what went before: heavyweight process, often waterfall such as SSADM. In those cases up to a third of a project’s time was spent in the design phase, before code was written. We don’t want the waste associated with that, but we do want just enough process, and we want it at the last responsible moment.

Test-First approaches focus on ensuring that we think about the acceptance criteria for the software we are about to build. The big danger we are trying to avoid is rework; it is estimated that 50% of a project’s overrun costs come down to it. Creating a testable specification, through a Given-When-Then scenario or use case, helps us turn a woolly and vague requirement, such as “we need to show how late a tube train is running”, into something more concrete. We are forced to consider the edge cases and the failure conditions. It is cheaper to consider these before the code is written than to write code and adjust it afterwards. There will always be some rework, particularly around UX, but we can remove the obvious communication barriers by having a structured conversation around the requirements.
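To make that concrete, here is a minimal sketch – NUnit-style C#, with hypothetical names such as TubeSchedule and LatenessAt invented for illustration – of how the tube train requirement might be pinned down as executable Given-When-Then scenarios:

```csharp
using System;
using NUnit.Framework;

// Hypothetical domain code, inlined so the sketch is self-contained.
public class TubeSchedule
{
    private readonly TimeSpan _due;

    public TubeSchedule(TimeSpan due)
    {
        _due = due;
    }

    // A train that arrives ahead of schedule is simply not late.
    public TimeSpan LatenessAt(TimeSpan arrival) =>
        arrival > _due ? arrival - _due : TimeSpan.Zero;
}

[TestFixture]
public class WhenReportingHowLateATrainIsRunning
{
    [Test]
    public void GivenATrainDueAtTen_WhenItArrivesAtTenOhSeven_ThenItIsSevenMinutesLate()
    {
        var schedule = new TubeSchedule(due: new TimeSpan(10, 0, 0));

        Assert.That(schedule.LatenessAt(new TimeSpan(10, 7, 0)),
                    Is.EqualTo(TimeSpan.FromMinutes(7)));
    }

    [Test]
    public void GivenATrainDueAtTen_WhenItArrivesEarly_ThenItIsNotLate()
    {
        // The edge case the structured conversation forces us to
        // consider before any production code is written.
        var schedule = new TubeSchedule(due: new TimeSpan(10, 0, 0));

        Assert.That(schedule.LatenessAt(new TimeSpan(9, 58, 0)),
                    Is.EqualTo(TimeSpan.Zero));
    }
}
```

The early-arrival test is exactly the kind of edge case that the conversation surfaces up front, rather than through code written and adjusted afterwards.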

BDD and ATDD emphasize avoiding rework by ensuring that you have the acceptance criteria for a story defined before work begins. The process of defining those criteria, often through a story workshop, is much of the value: the creation of a shared understanding of how the product should work. Usually that workshop occurs in the preceding iteration, and takes about 10-15% of the time in that iteration. Whilst some folks consider it non-agile to work on the next iteration’s requirements, this really is the last responsible moment, because discussing the acceptance detail for a story may raise questions that take time to answer, and you don’t want to stall the line waiting for those answers mid-iteration.

So it’s important to understand that in this context Test-First is about reducing waste in the development cycle, not increasing it.

From Extreme Programming:

Creating a unit test helps a developer to really consider what needs to be done. Requirements are nailed down firmly by tests. There can be no misunderstanding a specification written in the form of executable code.

We still need to address the question of what will work in the marketplace; what if your feature is supposed to “test the water”? Usually the concern here is whether testing is wasted effort if you decide to throw the feature away. Some of that depends on what we think the cost of working test first is – and I’ll get to that next – but an alternative to waste reduction here is to look at incremental development approaches. Figure out what the minimum marketable feature set is, and release that to gain feedback.

In subsequent iterations, increment the feature set if feedback suggests it is succeeding. At the heart of this, however, is that a good product depends on a good vision, cleanly and simply executed.

Think about the great products you know: they tend to focus on doing one thing and doing it well, not on being a jack of all trades. To me the scatter-gun approach tends to indicate a lack of product vision, not an effective strategy for uncovering what your users want. Build something simple, and wait for feedback on additional capabilities. But your users have to buy your vision.

There are many emerging ideas on how to reduce the cost of discovering the market demand for new features without building them. The Pretotyping folks are gaining some publicity, but there is also the Design Fiction crowd. Both talk about the step before even prototyping, when you try to determine the viability of an idea. Recently at Huddle we have been practicing Documentation Driven Development in an effort to gain feedback on proposed API designs before we build them.

Feedback, and the cost of TDD

The poster on Hacker News complains of the time taken to write tests:

There always seems to be about 3-5x as much code for a feature’s respective set of tests as the actual feature (just takes a lot of damn time to write that much code).

It’s crazy talk to suggest that developers ship code without testing it in some way. Before TDD, the best practices I followed talked about spinning up the debugger and tracing every line of code you had written to make sure it behaved as you expected. Spinning up the debugger and tracing is expensive – far more expensive than unit testing, where you hope never to need a debugging session to implement a test (needing one is usually a sign, to use Kent Beck’s stick-shift analogy, that you are in the wrong gear in TDD and need to shift down by writing lower-level tests).

In addition, getting the code to the point where you can debug it may be hard, requiring a number of steps to put the system in the right state. Unit tests are isolated from these concerns, which makes their context easier to establish; you have no such luck with manual tracing.

This problem only gets worse as you enhance the code. Most folks break a problem down into parts to solve it, adding new capabilities incrementally, and that means repeating the debugger run-through again and again.

In addition, unit tests are not particularly new; they have been a recommended practice for years. Many developers wrote self-testing code – code that came with a simple harness to establish context, exercise the code, and assert on failure – before test first capitalized on the approach by switching to writing the tests first. Anyone who cares about their code being correct ought to be unit testing, and if you write the test last you are in the unenviable position of paying the cost of tracing the code while you develop as well as the cost of writing a unit test afterwards.
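To make the comparison concrete, here is a minimal sketch of that kind of self-testing code in NUnit-style C#; Pricing and its discount rule are hypothetical, invented for this example. Once written, re-running these checks is effectively free, where a tracing session must be repeated by hand after every change:

```csharp
using NUnit.Framework;

// Hypothetical production code: a rule we would otherwise verify by
// stepping through the debugger after every change.
public static class Pricing
{
    public static decimal Discounted(decimal price, int quantity) =>
        quantity >= 10 ? price * 0.9m : price;
}

[TestFixture]
public class PricingTests
{
    // Each test establishes its own context and asserts on the outcome,
    // so re-checking the behaviour after a change costs one keypress,
    // not another tracing session.
    [Test]
    public void BulkOrdersGetTenPercentOff() =>
        Assert.That(Pricing.Discounted(100m, quantity: 10), Is.EqualTo(90m));

    [Test]
    public void SmallOrdersPayFullPrice() =>
        Assert.That(Pricing.Discounted(100m, quantity: 9), Is.EqualTo(100m));
}
```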

So usually, any halfway responsible developer is doing more work to prove their code correct without test first than with it.

The problem is that we tend to measure time to write the code, which seems slower with up-front tests, instead of time to fashion working code, which is often much faster with tests. Because of this we feel that test first takes longer. But we need to look at all the effort we expend to deliver the story end-to-end, and seen that way, dropping test first looks like a local optimum.

In addition, even if test first offered zero gain, or cost a little more, the side-effect of gaining a test suite that you can re-run exceeds any likely gains from not doing it. So the bar that tests must clear to prove their value against the cost of developing a feature is low.

The cost of ownership of tests can cause some concerns. The poster at Hacker News complains that:

When we pivot or iterate, it seems we always spend a lot of time finding and deleting test cases related to old functionality (disincentive on iterating/pivoting, software is less flexible).

One problem with ‘expensive’ tests is often that the implementers have not looked into best practice. Test code has its own rules and is not a second-class citizen; you need to keep up to date with best practice to keep your test code in good shape. Too many teams fail to understand the importance of mastering the discipline of unit testing in order to make it cost effective. As a starter, I would look at xUnit Test Patterns, the RSpec book, and Growing Object-Oriented Software, Guided by Tests if you need more advice on current practice.

I may write a longer post on this at some point, but my approach is to use a Hexagonal, or ports and adapters, architecture with unit tests focused on exercising scenarios at the port, diving deeper only if I need to shift gears. Briefly though, one reason people find tests expensive to own is an over-reliance on Testcase Class per Class instead of Testcase Class per Feature or testcase per scenario. Remember that you care about putting your public API under test, which is usually the port level in a Hexagonal architecture, perhaps along with a number of top-level domain classes exercised by those ports. You may want to put tests around internals when you need to shift down a gear, in Kent Beck’s terminology, because you are unsure of the road ahead, but don’t succumb to the temptation to believe Test First is about testing every class in your solution. That only becomes an obstacle to refactoring, because changes to the internal structure of your application will break tests, which makes testing itself feel as though it makes the application more brittle.
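As a rough sketch of what exercising scenarios at the port can look like – IOrderingPort and its deliberately trivial implementation are hypothetical, with the real adapter wiring elided – note that the tests below touch only the public API, so reshaping the classes behind it does not break them:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// The port: the public API of the hexagon. Tests are written against
// this, not against the individual classes that implement it.
public interface IOrderingPort
{
    void PlaceOrder(string sku, int quantity);
    bool IsAccepted(string sku);
}

// A deliberately trivial implementation so the sketch compiles;
// internally this could be one class or twenty, and refactoring
// between those shapes breaks none of the tests below.
public class OrderingService : IOrderingPort
{
    private readonly HashSet<string> _accepted = new HashSet<string>();

    public void PlaceOrder(string sku, int quantity)
    {
        if (quantity > 0) _accepted.Add(sku);   // business rule stub
    }

    public bool IsAccepted(string sku) => _accepted.Contains(sku);
}

[TestFixture]
public class WhenACustomerPlacesAnOrder   // testcase per scenario
{
    [Test]
    public void AValidOrderIsAccepted()
    {
        IOrderingPort port = new OrderingService();
        port.PlaceOrder("SKU-42", quantity: 3);
        Assert.That(port.IsAccepted("SKU-42"), Is.True);
    }

    [Test]
    public void AnEmptyOrderIsRejected()
    {
        IOrderingPort port = new OrderingService();
        port.PlaceOrder("SKU-42", quantity: 0);
        Assert.That(port.IsAccepted("SKU-42"), Is.False);
    }
}
```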

Often, too many tests breaking is a sign that you have not encapsulated your implementation enough – you are exposing it all publicly and over-specifying it through tests.

Acceptance and Unit Tests

Let’s agree on definitions. Acceptance tests confirm that a story is done-done, whereas a unit test confirms that a part is correct. Or to put it another way: unit tests confirm that we have built the software right; acceptance tests confirm that we have built the right software.

A lot of the above discussion focuses on unit tests, but when we get to acceptance tests the situation is a little more complex. Acceptance testing may be automated, but it may also be manual, and the trade-off on when to automate is more difficult to make than with unit testing.

Now the clear win tends to come when you have well-defined business rules that you want to automate. Provided you have a hexagonal, or ports-and-adapters, architecture, you can usually test these at the port – i.e. subcutaneously, without touching the UX. So they tend to be similar to unit tests written at the same level; they just go end-to-end.

Automated tests tend to be a little more complex when they go end-to-end, because you have to worry about setting up data in the required state for the test to run. If you find your tests taking too long to write, you may need to break out common functionality, such as setting up context, into a TestDataBuilder. (The TestDataBuilder pattern is also useful in unit tests for setting up complex collaborating objects.)
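For those who have not met it, here is a minimal sketch of the TestDataBuilder pattern, with a hypothetical Customer type invented for illustration. The builder supplies sensible defaults, so each test spells out only the detail that matters to its scenario:

```csharp
// Hypothetical object that is tedious to set up by hand in every test.
public class Customer
{
    public string Name { get; set; }
    public string Country { get; set; }
    public bool IsVip { get; set; }
}

// The builder owns the boring defaults; tests override only what the
// scenario cares about, via a fluent interface.
public class CustomerBuilder
{
    private string _name = "Jane Doe";
    private string _country = "GB";
    private bool _isVip = false;

    public CustomerBuilder Named(string name) { _name = name; return this; }
    public CustomerBuilder From(string country) { _country = country; return this; }
    public CustomerBuilder AsVip() { _isVip = true; return this; }

    public Customer Build() =>
        new Customer { Name = _name, Country = _country, IsVip = _isVip };
}

// Usage in a test: only the relevant detail is spelt out.
// var customer = new CustomerBuilder().AsVip().Build();
```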

There is a misunderstanding that acceptance tests are about tooling – Fitnesse, Cucumber, or the like. They are really about agreeing acceptance criteria up front, the clarity that gives the story (which prevents rework), and automating the confirmation so that what is done stays done. The appropriate tooling depends on how you need to communicate the resulting tests. With a ports-and-adapters architecture you just need a test runner to execute code, and the tests themselves may be written in natural language or in code, depending on the audience.

Because much of the value of acceptance tests comes from defining done-done, one could suggest that automating them gives less value back than the process of agreeing the test plan. However, the suite of automated tests gives you quick regression-testing support, and that supports your ability to re-write instead of refactor. Unit tests support preserving behaviour whilst refactoring, but that does not extend to changing the public interface; to preserve the business rules while making large-scale changes, you need some acceptance tests.

It is common for teams that do hit the bar of effective unit testing to fail at the bar of acceptance testing.

My previous experience of working both with and without acceptance tests is that without them we often find bugs creeping into the software that our unit test suite does not catch. This is especially true if our acceptance tests are our only integration tests. If we do not enforce a strict policy that existing acceptance tests must be green before check-in, we often get builds that introduce regression issues, which may go uncaught for some period of time. It’s this long feedback loop, between an issue being created by a check-in and its discovery, that causes us significant trouble – because we do not know which change led to the regression.

So by avoiding the cost of acceptance tests we risk the cost of the work to fix those regression defects, and because the gap between a developer making a check-in and the defect being discovered is long, they may cost more to fix than writing the acceptance tests would have done.

In environments where I have enforced acceptance tests being green before check-in, the quality of the software improved, and when regression issues did emerge it was clear to the developer that their changes had caused the issue, leading to faster resolution (and preserving the current test environments).

Of course, this may depend on how long it takes to regression test your software. The equation seems to many even more marginal when we think about automating UI tests for acceptance. The effort of keeping tests in step with changes to the UI can often seem fruitless. Particularly in the early stages of a product, the cost of creating and maintaining UX tests can seem to exceed their value. You can end up in a scenario where a manual regression test run costs less than maintaining an automated test suite, and finds more defects.

This is especially true if you have a thin UI adapter layer over your ports, with acceptance tests against the ports catching your end-to-end issues. You already have plenty of behaviour-preserving coverage around your business rules, so the UI tests should really only be picking up UI defects. (Of course, if you have a Magic Pushbutton architecture all bets are off, as you can’t easily write tests without exercising the UI.)

The problem is that this equation holds true until it doesn’t. At a certain point the manual regression test costs become an obstacle to regular releases – simply put, it takes you too long to confirm the software, and the cost of releasing rises to the point where you often defer it, because there isn’t sufficient value in a release compared to the regression-testing effort.

At that point you will wish you had been building those automated UI tests all along.

I think there is a point where you need to begin UX automation, but it is hard to call exactly when. It may depend on your ability to create adequate coverage through acceptance tests written against your ports; it may depend on the complexity of the UI. Up to now I think I have always started too early, regretted it and abandoned the effort, then resumed too late as a result. I need to learn more here before I can make a call on getting this right. Again I suspect that understanding best practice around this type of testing would help, and I’d be grateful to anyone sharing links in the comments below.

TDD/BDD and the Lean Startup

Overall I think there are two dangers in the simplistic position taken by the original poster. The first is to mistake reducing the time taken to get to the point where you can run code for reducing the time taken to get to the point where you can release code. Once you appreciate that the two are not the same, and look at overall costs and benefits, the value of TDD/BDD becomes more compelling. The second is to mistake a lack of understanding of best practice in TDD for a failure of the process itself. We have come a long way in the last few years in learning how to reduce the costs of writing and owning tests, and they should be judged in that context.


Alt.Next

First off, apologies for the delay in writing this one; I can only plead the pressures of work at the end of last year. There are a number of posts in the pipe which I hope you can expect to see soon.

However, I had promised to update people on what happened at the last alt.net conference in the UK, specifically because we declared that it would be the last alt.net conference, feeling that the movement had run its course. That outcome was reported on Twitter, but I wanted to go into why, and to talk about some ideas for what is next.

Alt.Next took place on 19th/20th November 2010 and we were kindly hosted by ThoughtWorks. I gave the keynote on the condition of the Alt.Net movement as I saw it, something I have talked about here on CodeBetter before, but the history, as I saw it, is worth repeating here.

A brief history of how the MS software community learned new ideas

15 years ago, for the old hands amongst us, most of our deep understanding came from books or from peers. There were rock star developers we learned from. Just look at the Wrox publications of this period for the epitome of this rock star author movement, with its somewhat cringe-worthy photos of geek celebrities embossed on the front covers. I can remember poring over copies of MFC Internals and The Revolutionary Guide to MFC 4 Programming with Visual C++.

But the problem was that knowledge was ghettoized; most folks read the books they needed, and magazines like MSDN. There seemed to be very little cross-pollination of ideas beyond individuals moving jobs.

10 years ago, the Internet started to become a source of technical knowledge. There were BBS systems before this, but I don’t think they had the same reach. In my recollection, the real explosion corresponded with the emergence of Java. Java was cool, and even places like The WELL had Java forums where developers could gather and talk about this new language. Perhaps it was the paradigm shift caused by Java creating a new playing field, in which the old rock stars had to work to avoid looking like yesterday’s heroes, or the cross-fertilization as people from myriad former communities came together within Java; but somehow there seemed to be a greater sharing of knowledge. The masters were revealing their secrets to the journeymen and apprentices. It was a major shift in the availability of knowledge, and even books seemed to change to reflect it.

To my recollection this was the beginning of the first Windows developer diaspora. A lot of smart kids wrote leaving-MS emails and forum posts and moved to Java. Plus ça change. Eventually MS reacted and moved to stop the diaspora, first with their own extensions to the JVM, and following that the CLR.

But the result of this explosion of knowledge – and the peering over the fence by some MS developers, looking to see what all this Java noise was about – was that a few of them encountered new ideas: ORMs, IoC containers. And some of them realized that the similarities between Java and C# gave them the ability to port some of these OSS projects into the .NET space, to get a leg up on using them themselves.

In addition, as these MS developers looked to the Java community to understand best practices that might help them use their new frameworks, they stumbled across agile software engineering.

But this cross-fertilization did not seem to spread widely in the MS community. A lot of developers were so busy fighting a rearguard action around MFC and VB that they saw .NET itself as an invader, let alone ideas from other communities. I think they were afraid of the ‘promiscuous’ behavior of .NET developers who flirted with these new ideas; it threatened the rock star developers’ dominion. And that frustrated those ‘promiscuous’ developers, excited by the possibilities of these new ideas, who saw a better, faster way of working. But without widespread understanding of these ideas, adoption was always a struggle, and the ‘promiscuous’ developers became angry that no one was spreading the good news of the better way to their colleagues and managers, leaving them forced to work with impoverished tools and practices.

And I think it was in this context that, in 2008, a number of ‘promiscuous’ MVPs gathered at the MVP Summit. Many are folks you would recognize as current or alumni CodeBetter bloggers. They clustered together through the shared belief that the .NET community was blind and deaf to other practices, frameworks and patterns, and that MS was happy to let it remain in the dark. Their anger coalesced around the Entity Framework, which was seen as a poor alternative to NHibernate, or even LINQ to SQL, for folks who wanted to work in a new way.

Because they were associated with the EF rebellion they became known at first as the NHibernate mafia. But David Laribee gave them a new name, Alt.Net, and with it a manifesto drawn from hacker culture.

Some may feel the Alt.Net movement succeeded; some may feel it failed. However, awareness of agile software engineering ideas like SOLID or TDD is far greater, as are tools like ORMs and IoC containers. When MVC frameworks began their rise with Rails, the .NET community furnished MonoRail, and later MS produced ASP.NET MVC. Flawed or not, I think the movement promulgated a greater understanding of why these ideas were important. Now more and more people can work in environments where these ideas can be pushed and adopted. So I think the Alt.NET movement achieved something, and I don’t think it should be regretted.

But of late the movement seems stalled. The question is what that means for the future of Alt.NET.

Our Discussions

A lot of the conversation on the day at Alt.Next centered on two perceived problems with the Alt.NET movement:

  • Messianic.NET
  • The New Orthodoxy

Many attendees brought stories of developers for whom Alt.NET was a negative moniker, applied to someone pushy and arrogant, with the implication that their ideas were somehow ivory-tower, not for us, and impractical. There was a feeling that the ‘missionary’ nature of the Alt.NET community had done more to offend than inform, and had often been too messianic. No one likes to be made to feel stupid, and the Alt.NET community made a lot of people feel that way.

The danger of the new orthodoxy can be summed up as the belief that Alt.NET is really just a set of tools: a TDD framework, an IoC container, an ORM, and an MVC web framework. Hence the question from outside: given decent versions of all those, isn’t the problem over – can’t Alt.NET just go away? Deeper still, the problem seems to be that it’s not just generic versions of these tools but NHibernate over EF, anything over Unity, and so on, that are required to prove your credibility to the movement. There even seems to be an orthodoxy that anyone not using R# is a poor software developer who cares little for their craft. “By their works ye shall know them” seems to be the mantra of this witch-hunt.

I think that none of the attendees believed that this was what Alt.NET had originally meant. Indeed David’s proposition had been: “It’s not the tools, it’s the solution…When tools, practices, or methods become mainstream it’s time to get contrarian; time to look for new ways of doing things; time to shake it up”.

And my feeling, shared by many others, is that whilst Alt.NET should now be about NoSQL, JavaScript MVC frameworks, Node.js and the like, it seems mired in what was, not in what is next and what we need to improve to be better. And in turn, when those are done, it needs to move on again.

And so we decided to abandon the Alt.NET moniker for future open space events. We want to move away from waging a war towards spreading ideas. It is about cross-pollination with other communities and platforms: going out to listen to other conversations and bringing that back into the .NET space, and innovating and taking our ideas out of the .NET space.

We’re not even sure what the “new thing” should be called, or even whether naming it matters, or is a mistake. Perhaps Alt.NET is dead as a movement; others may still find the name useful to identify the like-minded. But whatever you want to call it, I think it’s important to embrace the manifesto that David outlined long ago over what the movement became in between:

“Ralph Waldo Emerson wrote “there are always two parties; the establishment and the movement.” If you’re ALT.NET, you’re in the movement. You’re shaking out the innovation. When the movement fails, stalls, or needs improving you’re there starting/finding/supporting that next leap forward.”

For our purposes though, Alt.NET is dead. I sing its praises, but it’s time to find the next leap forward.

The New Diaspora

One postscript is that once again we seem to be experiencing a diaspora. Once again ideas outside the community burn very brightly. Ideas that cannot simply be ascribed to tools but to approaches and practices. In this case I believe it is partly the OSS ecosystem that non-MS software communities support so much better. Perhaps our new movement will truly be born from a reaction to this crisis, as much as Alt.NET was to the last.


Lucene.NET could use the community’s support

I don’t usually do this kind of post, but following up on my comments in Diverse.NET about the state of OSS in the community, I thought it worth pointing out that the Lucene.NET project is looking for help. Anecdotally I know that a lot of people use Lucene.NET in London, so I suspect that situation is mirrored in the wider community. A lot of the love for Lucene.NET probably comes from its usage in NHibernate Search, and RavenDB uses Lucene.NET internally. Lucene (and related pieces like Solr) are ‘big beasts’ in the search field, much as Hibernate is in the ORM field, and having a working port gives us easy access to this technology in the same way NHibernate gave us a first-class ORM. So I think it’s an important piece of the ecosystem for us to support. For those looking for a little more, Scott Cowan gave a presentation on Lucene at Skills Matter that you can check out here.

It’s probably worth reading through the thread on the mailing list archives to get a feel for the conversation there as to the direction the project is headed, particularly this post.


Diverse.NET

Rob Conery has an interesting piece about adopting an open source project to encourage more OSS development in the .NET space. It is something we need. But I have one slight issue to pick with Rob’s comments. The .NET community didn’t wait 7 years for MS to implement it for us. Seb went out there and wrote OpenWrap. What happened? Well, it looks like folks outside the London .NET community were pretty much unaware of it. Seb tried hard to publicize it: there is an interview about it on InfoQ back in May, we hosted a session on it at LDNUG, and JetBrains sponsored an evening with a presentation at DevCon London. But it just never seemed to get the wider audience it needed to break out of those circles. Many people looked at NuProj as well, but even that project never seemed to cross out of a small circle. (EDIT: And before that was Horn – thanks Paul – and even CoApp.)

But MS entering the fray with NuPack has practically obliterated a ‘from the community’ solution and created the 300lb gorilla of an MS-sponsored one. That may not be the ‘fault’ of MS – their intent may be entirely good – but the road to hell is often paved with good intentions. By stepping in, MS may have felt it was helping the community to resolve the issue. All it did was remove the ability of the community to solve the issue. I understand that NuPack is set up as an OSS project, but by ‘doing it for us’ they removed the community’s responsibility to solve the problem, and so they have not aided the growth of a community that addresses these issues. One of the hardest lessons to learn when leading or mentoring is to let others ‘do it for themselves’, even if you could do it faster and sooner. Unless you give others the chance to try (and fail) you won’t get growth, and you will always be doing it yourself.

The stark reality is this – the package management issue could have been resolved without MS help, through OpenWrap (or NuProj). I don’t see a significant technical advantage for NuPack over OpenWrap (it might win some adoption through a UI). I don’t know NuProj well enough to comment.

That would have given the community the opportunity to understand that it could solve its own problems. And if MS had publicized those OSS solutions with ASP.NET MVC 3, it would have given enormous publicity to the community’s efforts to do so.

I don’t, before anyone suggests it, want to denigrate the effort put in by the team working on NuPack. I just wish it did not come with the weight of being the MS solution to this issue.

What MS could have done was to call for folks to tell them about OSS package managers, work with those projects to define some kind of open package standard, and then communicate to the wider development community what the standard was and who implemented it. For sure, if the community failed it might have helped to kickstart some projects with sample code, but by creating an MS solution it pushed aside existing community efforts. I would like to think that the community gets behind the best package manager, and that we choose between OpenWrap, NuPack, or whatever else based on what works.

There does seem to be a lack of willingness within the .NET community to engage with a diverse ecosystem over a monoculture. In the long term this is not good for us. It does not drive the kind of innovation a successful development platform needs. A monoculture is no more positive for MS than for others in this regard. I also think the problem is broad enough that it does not only impact MS, but a lot of commercial outfits in the .NET space. I am sure there are shops that don’t use R# or CodeRush simply because it’s not MS. There are plenty of commercial frameworks and libraries that would benefit from a widening of people’s purchasing choices. (And sure, I would like a lot of commercial tools and frameworks to go open source – free for private use, like RavenDB or NServiceBus, and licensed for commercial use – because I believe it is easier to make choices around something you can try before buying, and support if the vendor fails; but diversity is diversity regardless of the model.)

Somehow we need to encourage more diversity in the .NET space: in tools, languages, people. I understand that it’s a big challenge. We need IronRuby and IronPython on the platform for that diversity. We need folks to understand that they could look at Boo too. We need folks to understand there is not just ASP.NET MVC but OpenRasta, MonoRail, and FubuMVC. We need folks to know not just about Entity Framework and LINQ to SQL, but also NHibernate, SubSonic, or even XPO. It’s not about favoritism but about creating an ecosystem in which different solutions can compete.

If Alt.Net was yesterday’s war cry maybe Diverse.NET is tomorrow’s.

The tragedy is that only MS probably has the reach to the .NET community to encourage engagement with an ecosystem over a monoculture. And as MS own the monoculture, can they be convinced it lies in their best interest to support it?


BDD, Feature Injection (and the Whirlpool)

I have started to feel comfortable enough about our BDD practice to begin presenting on the techniques. Liz Keogh came to my last presentation at the London .NET Developers Group and suggested that I needed to look at Feature Injection as part of BDD practice now. Chris Matts’ comics are a good starting point for Feature Injection.

I’ve not tried Feature Injection, but I think it’s definitely worth a look for anyone using BDD. I’m keen to get a better understanding of how it might modify our ‘today’ practice. What follows is really just some early notes on what I know now.

Stakeholder Stories

Liz Keogh has a useful article over at InfoQ on how she sees the overall process of BDD when using Feature Injection. The shift in how user stories are phrased seems to be one key to understanding this. Business value is now recognized as a key driver for why we are delivering a piece of work. So we re-phrase the classic Role-Goal-Motivation user story:

As a <role>
I want to <goal>
So that <motivation>

as:

In order to <achieve some outcome which contributes to the vision>
As a <stakeholder>
I want <some other stakeholder> <to do, use or be restricted by something>

Liz calls these stakeholder stories to emphasize the difference from user stories.

Distilling the Vision

Notice how the outcome – the reference to the vision, why we are building this software – now comes first, before the goal. We emphasize that it is not enough to say what; we must also say why. How does this story help us meet the vision? When I talk about BDD I often shorthand it as “build the right software”, in comparison to TDD, which helps us “build the software right”. Feature Injection seems to take this one step further: it is not enough to restrict the developers to building only software that satisfies a specification; we must also ensure that specifications are authored only for those stories that deliver on the vision. So where does the vision come from? In DDD terms the vision results from the Distillation process: identify the core domain and produce a Core Domain Vision statement. So the technique of Feature Injection reinforces DDD’s Distillation technique. Both Liz’s article and Chris Matts’ comics identify some standard techniques for helping to distil what is core domain from what is not. Given that vision, the stakeholders then work to identify the broad features of what needs to be delivered. User stories come from decomposing these features.

Our software lifecycle tends to begin with Iteration -1, in which the Big Plan – how we think we might build some software to satisfy a vision – is taken to sponsors. At that point we have only the results of Distillation (what DDD calls a Distillation document) and enough understanding of the feature set to provide a rough costing (folks want to know the order of the costs, to understand whether the business value justifies them – software is an economic proposition). We then move to Iteration 0, or project chartering, where we might capture initial stories and produce what Crystal calls an Exploratory 360.

What I like most about Feature Injection at first reading is the explicitness with which it highlights what happens in Iteration -1 and Iteration 0. These are no longer ‘locked in the attic’ in agile process terms, but brought to the fore as a vital part of the methodology. I have come across too many teams that have no idea about Iteration -1 or Iteration 0, push straight into Iteration 1, and build the wrong thing. So any process that highlights the need for these to occur gets a thumbs up from me.

What is Business Value?

If we are suggesting that our focus is on business value, how do we assess value? Chris Matts’ admonition to focus on delivering business value over features is something about Feature Injection that I really warm to, as again and again we try to point to the fact that the most important metric for us is business value. In particular, Feature Injection suggests ‘popping the why stack’ until you arrive at a definition of business value that meets Andy Pols’ advice that ‘A project creates business value when it increases or protects revenue, or reduces costs, in alignment with the strategy of the organization’. (Why do we want to email statements? So that customers can see their balance without phoning us. Why does that matter? Because it reduces call-centre costs – and there we stop, because we have reached revenue or cost.) That’s pithy, because it not only identifies revenue or cost as the driver, but alignment with business strategy. For Chris, the place to look for business value is in the outputs of the system. It is the outputs that you can measure for business value. So once we identify an output, we ‘pop the stack’ to find its business value.

Whither Use Cases?

Of late I have been wondering about the correspondence between examples and use cases, and questioning whether, as we emphasize modelling from examples, use cases did not have it right all along. So it is interesting to see Liz’s article pick up the same question from Eric Evans, and respond that “You can’t ask a business person for a use case unless they’re already technical enough to know what a use case is. But you can ask them for an example”. That implies that they are two different approaches to the same goal, but with examples the user is placed in the driving seat. It’s a good explanation of the difference.

Intersection with DDD

Interestingly enough, at the Domain-Driven Design Exchange Gojko Adzic presented on the intersection of BDD and DDD, and Feature Injection was at the heart of how he saw the two corresponding. You can see Gojko’s presentation here. Gojko’s main assertion, as I understand it, is that having used Distillation (figuring out what is Core Domain, where you expend modelling effort, as opposed to a Supporting Domain, where you don’t) to decide what to build, we hold a Specification Workshop to drive out the scope using Feature Injection. We then model using the scenarios outlined for the stories to create a Ubiquitous Language. At the same time Eric Evans spoke about his perspective on Agile and DDD, looking at DDD’s need to produce a Ubiquitous Language and asking if that was in conflict with Agile’s preference for the “last responsible moment” over BDUF. Again the solution space seemed to me to focus on modelling from examples, with enough modelling up front, followed by further modelling as the model is stressed. You can see Eric’s talk here, which includes a prototypical discussion of his Whirlpool approach.

Is there too much focus on the UI here though?

One of my concerns about the process as written is its focus on the UI. I understand the principle that the user consumes the system through the UI and that the UI is therefore the only valid test of the system, but this overlooks the extent to which the UI is in flux during the early part of development. Indeed, the approach identifies that we need to release early to get feedback on the UI and revise it. Those revisions cause issues for tests that are focused on the UI, because as the UI changes, the tests break. Rick Mugridge’s advice in Fit for Developing Software is to test the business rules, not the workflow.

Now I don’t want folks to get the wrong impression here. We produce a Balsamiq mockup of our user interface as part of the discussion on any story, often as part of our describe step, and we iterate over it. So I am certainly not saying that the UI is not vital to successful delivery to the customer, or to understanding what is to be delivered. However, I am more cautious about automated testing that focuses on the UI as the driver for development. I would rather drive from the rules than from a UI-focused test. We still have story tests to help ‘build the right thing’, in Fitnesse (substitute Cucumber, Fit, Slim, SpecFlow, StoryTeller, etc.), but they are subcutaneous tests. Much of our UI testing is via exploratory testing, and automated UI suites check only stable areas and act as smoke tests for regression issues.

Summary

All in all, very interesting stuff that I will be looking into more as I try to refine my BDD practice.
