Seizing the BDD Nettle

Learning BDD

I have been trying for some time now to understand what the Behavior Driven Development (BDD) exponents are talking about. I’m sure that Scott and Colin Jack are sick of me asking questions on BDD.

This is pretty much a journal of my understanding. It might help others on a similar journey, hence the reason for blogging it. I know a number of people have been asking me about BDD recently at community events, so this is an attempt to share where I am in understanding so far. Be aware that there may well be misunderstandings here. I’m not claiming any kind of authority for BDD. Hopefully more experienced practitioners will help me to correct any mistakes. Please use the comments. As my understanding improves I’ll keep you posted.

For now be aware this is really a ‘notes to self’ blog.

The approach of this post is mostly historical. That is mostly because my current understanding of BDD is that it is an opinionated packaging of ideas from STDD, ATDD, and TDD. So I find it helpful to understand the ideas it is built on and where current skills form a part of the BDD mix.

A key resource for me in understanding BDD is Scott Bellware’s Code magazine article on the topic. The opinions here are my own though.

Behavior Driven Development in a historical context

On the current project we are practicing Industrial XP’s StoryTest-Driven Development. That article dates from 2004, but the practice and tools such as FIT have been around longer. For example, Customer Test-Driven Development, a prior name for the same idea, was in use as far back as 2002. STDD supplements the unit tests of TDD with acceptance tests (usually delivered in FIT or Fitnesse). The key principle in STDD is that you write new code only if an automated story test has failed.

It might seem obvious now, but the goal of agile engineering practices is to move the process of acceptance up the chain. When acceptance occurs at the end, it is expensive to fix mistakes. Developers have to re-acquire the depth of understanding they had when they first developed the feature. It leads to ‘never-ending’ projects as acceptance drags on and on with the customer submitting more and more changes. Pushing acceptance up to the point at which the code is written shortens the feedback loop, making it less expensive to fix issues. Pushing it ahead of writing the code prevents those defects from occurring in the first place, which is the ideal.

STDD helps us focus on creating the acceptance criteria first, so that we prevent defects instead of fixing them after they occur.

In addition, STDD helps focus on the communication between Customer and Developer, clarifying the requirement. The article quoted above makes reference to Domain Driven Design in its comment that it is important to share the customer’s language in these tests, the DDD idea of ubiquitous language.

One aspect of STDD to note is that we have a separation between the customer tests and the unit tests. Customer Tests remain the same regardless of how we choose to implement. Unit tests cover the implementation details.

Implicit in the name – story test-driven development – is that we are working with user stories and the acceptance criteria attached to them. Interestingly, this is often less explicit in descriptions of the technique, presumably because XP practitioners assumed everyone was working with stories and acceptance criteria.

We use a mixture of Fitnesse and Selenium for acceptance testing but equivalent tools exist. Acceptance testing tools remain woefully high friction and seem a clear area where improvements could be made.

Of note is whether you use an Outside-In or Inside-Out approach. Most descriptions of STDD are Inside-Out: although the developer has the acceptance tests as a target, they then build unit-tested code from the bottom up to implement the requirements. Often the developers use a whiteboard or CRC card session to design the implementation. Then, at the end, they hook up the acceptance test to see everything pass (or fix anything that is not as expected). This is generally how we work on my current project. BDD descriptions often focus on an Outside-In approach. Outside-In relies on greater use of Test Doubles to replace depended-on components (DOCs) that are not yet implemented. Adherents see a benefit in the design that emerges from being forced to make the DOC abstract.

Acceptance Test Driven Development

To me Acceptance Test Driven Development (ATDD) is just another name for Story Test-Driven development. Both ATDD and STDD emphasize the idea of the executable specification. We make it possible to demonstrate that we meet the specification and can keep meeting it. I guess someone may put me straight if the two differ in philosophy. I heartily recommend Gojko Adzic to learn more about ATDD, particularly the interaction between customer and developer that occurs as part of that process.

Unit tests are not enough

We have moved from a focus on unit tests and integration tests to adding test-first acceptance testing to our strategy. This can be hard, because you are used to loading testing at the back of the iteration (or even into the next iteration). Moving acceptance testing to the beginning of the story requires you to have a conversation about how you will accept the story – with the customer, and with test engineers if you have them – before you start the story.

In addition, you need to keep stories small so that acceptance happens throughout the iteration. It is no good defining scenarios up front if you end up confirming them all at the end of the iteration. There is the pragmatic issue of overloading your test team, but also the problem of late feedback. That can be a hard discipline, and we often end up with too much acceptance at the back end of the iteration because our stories are insufficiently granular. We’re still working on spreading our acceptance across the iteration, and it remains a real challenge – but one with an enormous payoff. It is also worth noting that our failures are not failures of the goal, but failures to break up epic stories well enough.

The value the approach adds is preventing defects rather than detecting them. It is also how stories were always supposed to be used – as a promise for a conversation. This seems to be much the same insight that pushed lean manufacturers to move QA into the production line, to prevent defects occurring rather than simply pointing them out once they had occurred. Testing ‘at the end of the line’ is an anti-pattern: it can only detect defects, not prevent them.

Behavior Driven Development

Dan North coined the term Behavior Driven Development (BDD) to replace TDD. Dan felt that the emphasis on testing in the names used to that point was a distraction from the idea of confirming the behavior of the system through executable specifications. Dan’s history of BDD suggests that it began as an attempt to use Behavior Specification in place of unit testing (see below). At first this was simply a question of naming test fixtures to indicate that they were assessing behavior, and naming tests to indicate the behavior under test. This led to the development of a tool, JBehave, as a clearer alternative to the unit testing tools that were prevalent. Dan then realized that the emphasis on behavior meant the technique could also encompass the process of defining acceptance criteria for stories. For me, one transition in BDD thinking that seems distinct from STDD and TDD is that it removes the idea of these being separate activities, pitching instead an idea of scenarios and specifications rather than different granularities of tests.

Acceptance Testing again

STDD, ATDD, and BDD all emphasize the same thing: unit testing is not enough, even if done test-first. The problem is that while unit testing tells us that we have built the software right, it does not tell us if we have built the right software. By analogy: you may be great at cutting down trees, but if you chop down the wrong forest, you’ve bought yourself nothing but trouble.

The classic model for development under BDD seems similar to that for STDD, but we move from acceptance tests and unit tests to scenarios and specifications. This terminology change helps us think about the work as automating executable requirements instead of testing, and re-focuses us on these as design activities. It also improves the maintainability of the artefacts produced, because the strategy is inherently contextual. Note that when I say developer here, you might have a specialized software developer in test (SDT); when I say customer, it might be a customer proxy such as a Business Analyst (BA).

  • Developer works with customer to find user stories of the form: As a <role> I want to <business activity> so that I <business value>
  • Developer works with customer to identify the acceptance criteria for that story
  • Developer writes a scenario that exercises part of the acceptance criteria for the story (in a Given-When-Then style). Tools like Fitnesse and Cucumber are often used to present these customer-facing tests. However, tools are not the issue here; some folks may use an xUnit framework even at this point.
  • Developer implements fixtures to create a failing scenario
  • The developer may do some design with colleagues on a whiteboard, CRC cards etc.
  • Developer writes a failing specification for a part of the implementation that will satisfy the story. This was traditionally the unit test, but BDD shifts the perspective to a specification to ensure we still think about executable specifications over tests (see below for more)
  • Developer writes the code to get the specification to pass
  • Developer keeps writing new specifications, until the scenario passes
  • Developer moves on to the next scenario for the acceptance criteria for the story
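The inner loop of that cycle can be sketched in code. This is a minimal illustration in plain Python standing in for whichever scenario tool you choose; the Account class, the story, and the scenario are hypothetical examples of mine, not from any particular project.

```python
# A hypothetical story: "As a customer I want to withdraw cash so that
# I can spend it". One Given-When-Then scenario for its acceptance
# criteria, written as an executable function rather than in any tool.

class Account:
    """Toy domain object standing in for the system under test."""

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def scenario_withdrawing_within_balance_reduces_the_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the customer withdraws 30
    account.withdraw(30)
    # Then the balance should be 70
    assert account.balance == 70


scenario_withdrawing_within_balance_reduces_the_balance()
```

The scenario stays readable as a statement of the acceptance criterion; the specifications (unit-level) written underneath it would drive out the Account implementation until this passes.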

It’s important not to over-emphasize tools here. It is not a question of using Fitnesse, Cucumber, or Selenium for your scenario and a particular xUnit for your specifications, but a question of thinking about how we accept software. You might, for example, use SpecUnit throughout; or you might use Fitnesse and NUnit with a home-rolled specification framework.

Behavior Specification

For some time we have been using the Four-Phase Test approach. We were often explicit about the steps, with in-line setup, exercise, verify and teardown sections within each test method. Some people were writing this as Arrange, Act, Assert (AAA) when working in-line, and we switched to using those monikers as they seemed to be more prevalent. The advantage of using a strict phase model is that it becomes easier to read tests – you know how any test will be structured, where to find the pre-conditions and post-conditions, and what is being exercised. The model also forced people to think about how they were highlighting the conditions important to the test, leading us to use ideas like Test Data Builders.
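As an illustration of those phases and of a Test Data Builder keeping the Arrange step focused, here is a sketch in Python; the Order and OrderBuilder names are my own invented examples, not from our codebase.

```python
# A Four-Phase (Arrange-Act-Assert) test using a Test Data Builder.
# The builder supplies safe defaults so the test only states the one
# condition that actually matters to it.

class Order:
    def __init__(self, customer, quantity, unit_price):
        self.customer = customer
        self.quantity = quantity
        self.unit_price = unit_price

    def total(self):
        return self.quantity * self.unit_price


class OrderBuilder:
    """Hides irrelevant setup detail behind sensible defaults."""

    def __init__(self):
        self._customer = "any customer"
        self._quantity = 1
        self._unit_price = 10

    def with_quantity(self, quantity):
        self._quantity = quantity
        return self

    def build(self):
        return Order(self._customer, self._quantity, self._unit_price)


def test_total_is_quantity_times_unit_price():
    # Arrange: only the quantity is relevant; the builder defaults the rest
    order = OrderBuilder().with_quantity(3).build()
    # Act
    total = order.total()
    # Assert
    assert total == 30


test_total_is_quantity_times_unit_price()
```

Because the builder owns the defaults, a reader can see at a glance that quantity is the condition under test and everything else is noise.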

BDD uses the phrase Given-When-Then to refer to the phases of the test model. The naming was a deliberate choice. The idea was to focus on a specification of what the System under test (SUT) should do instead of focusing on verifying its parts. Shifting the names takes us out of the world of tests and into the world of executable specifications. We stop thinking about setup and start thinking about the context in which the business activity is occurring. We stop thinking about calling our API and start thinking about the operation being performed. We stop thinking about testing and instead focus on outcomes.
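The shift is purely one of vocabulary and mindset – the same four phases written twice, once in test language and once in specification language. A small sketch, with a made-up ShoppingCart class of my own:

```python
# The same structure expressed in two vocabularies. The phases are
# identical; only the names (and therefore the way we think) change.

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


def test_add_appends_item():
    # Arrange
    cart = ShoppingCart()
    # Act
    cart.add("book")
    # Assert
    assert cart.items == ["book"]


def given_an_empty_cart_when_an_item_is_added_then_it_contains_that_item():
    # Given an empty cart
    cart = ShoppingCart()
    # When an item is added
    cart.add("book")
    # Then the cart should contain that item
    assert cart.items == ["book"]


test_add_appends_item()
given_an_empty_cart_when_an_item_is_added_then_it_contains_that_item()
```

The second version reads as a statement about behavior in a context, not as an exercise of an API.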

RSpec changed the convention here to focus on Examples or Specifications. Dave Astels speaks about Behavior Specification as an alternative mindset to testing. That approach was picked up by many others including Scott Bellware in SpecUnit.

By focusing on a specification, Behavior Specification psychologically shifts us from thinking about testing into writing an executable specification for our code. With a user story approach, the detail of the requirement is in the acceptance criteria, so this shift to seeing testing as specification is just the natural outcome of treating requirement specifications as the driver for testing, providing the notion of expected behavior.

Objects can be described as having a push-button API. Our calculator class has Multiply, Add, etc. The danger with TDD is that it can lead to testing buttons rather than operations. You write a test to enter ‘2’ and see it displayed on the screen. You write a test to press the Add button. But what we really want to test is that when I add 2 + 2 I get 4. TDD can produce tests that don’t cover the whole specification, only fragmentary parts of it. This makes them hard to ‘read’, and they do not form effective executable documentation for our system. We want to be able to read the tests and know what the code is supposed to be achieving, not just that the bits function. People who have delivered large systems using a classic approach to TDD have found that the tests are not expressive enough to act as clear documentation of the system. Specifications are much better in this regard. The skills are the same, but the focus of the approach shifts from testing to specifying. This ‘eureka’ moment of understanding the shift between unit tests and specifications seems to be one of those difficult-to-explain, but easy-to-appreciate areas of life.
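To make the calculator point concrete, here is a sketch contrasting a button-level test with a behavior-level specification. The Calculator class and its internals are hypothetical, invented purely for illustration:

```python
# A toy calculator with a push-button API.

class Calculator:
    def __init__(self):
        self.display = 0
        self._pending = None

    def enter(self, digit):
        self.display = digit

    def press_add(self):
        self._pending = self.display

    def equals(self):
        if self._pending is not None:
            self.display = self._pending + self.display
            self._pending = None


# Button-oriented test: it passes, but documents only a fragment of
# the protocol, not what adding means to the user.
def test_enter_shows_digit():
    calc = Calculator()
    calc.enter(2)
    assert calc.display == 2


# Behavior-oriented specification: readable as a statement of intent.
def when_i_add_2_and_2_the_result_should_be_4():
    calc = Calculator()
    calc.enter(2)
    calc.press_add()
    calc.enter(2)
    calc.equals()
    assert calc.display == 4


test_enter_shows_digit()
when_i_add_2_and_2_the_result_should_be_4()
```

The fragmentary tests verify that the buttons work; only the specification tells a reader what the calculator is for.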

There are additional benefits. The shift to specifications, for example, seems to reduce the problem of property-heavy APIs on a class. With unit tests, the calling code can end up containing the behavior – the workflow of calls required to complete an operation. A specification surrenders control of that back to the SUT. So a specification approach should drive the design of a better API, because we focus on what we want to achieve, not how we achieve it.

It’s turtles all the way down

It’s important to understand that this approach – writing a specification of how the system should behave when something happens within a context (Given-When-Then) – applies throughout the stack. So the traditional dichotomy of acceptance tests and unit tests fades, to be replaced by tests at different levels of granularity, potentially using different tools as appropriate.

That is enough for now

There is more to say, but this has been a long enough post for people to read, so I’ll defer anything else to subsequent posts on the topic. Those looking for examples might want to read Scott’s article for now; I’ll try to post some examples later. However, one key part of my current understanding is that simply using behavior specification xUnit frameworks is not ‘doing BDD’. BDD is a whole-lifecycle approach to verifying that you have met user requirements. In that context I suspect that focusing on individual frameworks may distract from, rather than enhance, the message.


About Ian Cooper

Ian Cooper has over 18 years of experience delivering Microsoft platform solutions in government, healthcare, and finance. During that time he has worked for the DTi, Reuters, Sungard, Misys and Beazley, delivering everything from bespoke enterprise solutions to 'shrink-wrapped' products for thousands of customers. Ian is a passionate exponent of the benefits of OO and Agile. He is test-infected and contagious. When he is not writing C# code he is also the founder of the London .NET user group.
  • Ashraful Alam

    Here is some more thoughts:
    3. In #BDD the term “Test” in TDD is replaced by “Should” in the method name, which is implemented using the concept coined as “ensure”.
    4. #BDD is about defining the different possible behaviours of a feature as given, event and outcomes template… i.e. Getting the words right!

  • zvolkov

    It’s all about educating your BAs / PMs.