
Customer, Developer, Tester

An interesting question that sometimes emerges for teams using XP practices is whether Testers (should you be lucky enough to have them) are Customers or Developers. There is a slight catch in the question itself. Customer and Developer are roles, not jobs or people. A person can adopt either role. However, a person should not adopt both roles at the same time. That said, there is a clear distinction between the activities performed by the two roles, which might best be defined as: the Customer determines ‘what’ and the Developer determines ‘how’.

In XP, Customers define the requirements (in the form of user stories and supporting detail), prioritize stories, and participate in functional testing. The Customer defines the scenarios which verify that the story is ‘done, done’. In contrast, Developers provide estimates and build the solution that passes the acceptance tests. Crossing the streams – Customers dictating how they want something built, or Developers defining what to build – always loses the insight each specialism provides and should be avoided. Trying to decide how and what at the same time also leads to poor solutions, often because the requirement is built around a proposed, and frequently non-optimal, solution, obscuring the true need. At other times we try to automate an existing process based on manual or legacy practices instead of defining what the need actually is. Try to separate defining what from how.

With this understanding testers naturally fit within the Customer role – because they work on defining tests for what the system does and have responsibility for confirming them. This means that for testers, understanding the domain of the system-under-test becomes a key responsibility to help them fulfill their role.

 

Posted in Uncategorized | 10 Comments

Where we are with acceptance testing and our BDD journey today

It’s a long post. Sorry about that. Get yourself a coffee
and a comfy chair.

I have spoken in the past about our path to Behavior-Driven Development (BDD)
through Story Test Driven Development (STDD) and Acceptance Test Driven Development (ATDD). Those
of you who have been paying attention will know that we essentially started
with a vanilla STDD approach to acceptance testing.

I wanted to talk a little bit about how we are authoring,
structuring, and organizing our tests today. Over the past couple of years I
have found that these are of greater concern than the technology you use. The
difference in technologies is perhaps mainly significant to the extent that
they help support authoring, structuring, or organization of story tests. I am
partially documenting this to help others who might find insight, but also to seek
feedback from others. I’m not trying to hold this up as exemplary. We certainly
have our weaknesses and failures which we need to address. However, I thought
some might find it useful to share my experience on this today.

From story to test

We use user stories to capture requirements. These do not
have enough detail to develop from and are always a placeholder for further
conversation prior to development.

Years ago, way before any testing TLAs, I delivered software for primary care to the National Health Service (NHS, the UK’s public health provider). Every year we received new requirements that we had to meet to get our software accredited as providing the functionality someone in primary care needed. We received this in two packages. One was a conventional list of requirements; the other was a test pack. The test pack had a set of scenarios, some end-to-end, some covering a specific feature, that needed to pass before you could get accreditation. Early on we learnt that the test pack was the best source of information on what we needed to build to be ‘done’. Requirements are vague and open to interpretation. Scenarios are concrete, specific, and rarely open to interpretation. Ever get that moment where you built something, and then had an argument with the person who wrote the requirement because they just told you that your implementation is nothing like their requirements? Ever had them point to some sentence that you overlooked, one that changes the whole perception of how you deliver the story? Scenarios avoid that problem because they call attention to the correct behavior of the software.

So the ideal is to capture the scenarios that the software
should support. Those give you the conditions for accepting the story as done.
Strictly speaking, the acceptance tests for a user story are story tests.

Now the composition of your team will dictate to some extent
who is responsible for the parts of this process. In most cases I have found it
unlikely that the users of your software will be able to express the scenarios
though sometimes you might get acceptance criteria from them. In our case we
have domain experts who act as proxies for real customers, so we ask them to
define acceptance criteria – usually just a list of things that a successful
implementation will satisfy. Our acceptance criteria look a little like
this:

  • I expect that the Follow Up Date entered can be prior to the date the Follow Up Date is entered
  • I expect that a Follow Up Date can be changed
  • I expect that the Follow Up Date format should follow the format dd-MMM-yyyy
  • I expect to be able to either enter a date or select from a calendar
  • I expect that activity history will record the previous Follow Up Date

In our case we then have a separate QA function, who turn these acceptance criteria into the scenarios that need testing. We rarely find that the end user looks at the tests, regardless of the medium used for them. The idea that Fitnesse or Cucumber supports the customer writing the tests is only really true when you understand Customer in the XP sense, as a role which includes BAs, QA, etc.

In an ideal world we would get all of these scenarios before beginning development, ideally by planning. That’s certainly the model outlined in Fit for Developing Software (which is a great introduction to Story Test Driven Development). In the actual world we get some, but for others we both work from the acceptance criteria and communicate about the scenario when we begin work on the story. The key though is that you are driving with the tests, either as criteria or as fully-fledged scenarios. The tests tell you what to build and when you are done.

Jim Shore refers to this communication as describe-demonstrate-develop. The emphasis is on reaching a shared understanding of what we should build and communicating that, over using a specific tool to test with.

Interestingly we occasionally fall down on getting scenarios written and end up developing off the acceptance criteria. The result is all too predictable. What gets developed is not quite right, or not quite what was intended, and we end up with re-work. We certainly need to get better at this discipline. A root cause analysis tends to suggest that what fails for us here is the communication about what we should build. The storytest-first approach forces the correct behavior of understanding the story before we build it.

Scenarios are where you begin to see the value of
Given-When-Then as an organizing principle. Given I have an application in this
state, when I make this change, then the new state of the application should
be…

So for the above acceptance criteria we might write a scenario that
looks something like this (don’t worry if you don’t get the domain):

Given that I have an Application subjectivity against a risk
  with a Follow Up Date of 11-MAR-09
When I set a Follow Up Date of 24-MAR-09
Then the Subjectivity Follow Up Date should be 24-MAR-09
  and the subjectivity should have an activity history record of New Follow Up Date
  with a Previous Follow Up Date of 11-MAR-09

What’s the difference between story and unit tests where I
have both?

In one sense there is no difference. You have a set of
acceptance criteria which you need to conform to. You have scenarios that
exercise these criteria. Each scenario sets up pre-conditions, exercises the
system under test (SUT) and confirms the post-conditions. Both story and unit
tests can confirm this. The difference is that the story checks the whole, but
I might wish to confirm a portion of the whole and that is when I need a unit
test. The unit test gives me greater defect localization. When the story test
fails I would hope that there would be a unit test failing somewhere that would
let me identify why the story test failed. If not I would look at the
interaction between the unit tested parts as being the source of the issue. So
a major difference between story and unit tests is granularity.
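
To make the granularity point concrete, here is a minimal sketch (using NUnit) of the kind of narrow unit test that localizes a defect which the story test can only report as ‘the scenario failed’. The Subjectivity type and its ChangeFollowUpDate method are illustrative stand-ins for our real model, not actual code from the system:

using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Illustrative domain types only - stand-ins for the real model.
public class FollowUpHistoryRecord
{
    public DateTime PreviousFollowUpDate { get; set; }
}

public class Subjectivity
{
    public DateTime FollowUpDate { get; private set; }
    public List<FollowUpHistoryRecord> History { get; private set; }

    public Subjectivity(DateTime followUpDate)
    {
        FollowUpDate = followUpDate;
        History = new List<FollowUpHistoryRecord>();
    }

    public void ChangeFollowUpDate(DateTime newDate)
    {
        History.Add(new FollowUpHistoryRecord { PreviousFollowUpDate = FollowUpDate });
        FollowUpDate = newDate;
    }
}

[TestFixture]
public class SubjectivityFollowUpDateTests
{
    [Test]
    public void Changing_the_follow_up_date_records_the_previous_date()
    {
        var subjectivity = new Subjectivity(new DateTime(2009, 3, 11));   // arrange: known state

        subjectivity.ChangeFollowUpDate(new DateTime(2009, 3, 24));       // act: one behavior only

        // assert: a failure here points straight at the class that is wrong,
        // rather than at 'somewhere in the whole scenario'
        Assert.AreEqual(new DateTime(2009, 3, 24), subjectivity.FollowUpDate);
        Assert.AreEqual(new DateTime(2009, 3, 11), subjectivity.History.Last().PreviousFollowUpDate);
    }
}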

There is definitely some pressure here, with folks occasionally feeling that we are ‘testing twice’ – once in the acceptance test and once in the unit test. Most developers would prefer just to write the unit test and not have to implement the acceptance test. I believe that acceptance testing comes into its own for ensuring we have built the right thing and during large-scale refactoring.

However, I think this feeling, that the acceptance tests are not adding value, occurs most often when you slide into post-implementation Fitnesse testing because your scenarios were not ready. A lot of the value-add comes from defining and understanding the scenario up-front. Much of the value in acceptance testing comes from agreeing what ‘done’ means. That is often less clear than people think, and defining a test usually reveals a host of disagreements and tacit assumptions. When we find ourselves questioning the Fitnesse tests it is often a symptom that we have slipped into ‘test after’. Interestingly, when Jim Shore posts about stopping acceptance testing, his alternative seems to be exactly the kind of example-driven approach that we would consider storytests to produce. Overall I think the direction people are heading is the same – focus on scenarios that can be automated by developers. Whenever we slide on acceptance tests and drift toward starting with unit tests we often find ourselves in re-work because we end up building the wrong thing. The unit tests mean the wrong thing works, but it’s still the wrong thing. [More than that, it’s just plain helpful. As I write this the acceptance tests are prompting me to make an aspect of the current story go red that I had forgotten about earlier; more proof that acceptance tests help you navigate towards done.]

It is hard to write unit tests as part of the definition of
the story. You don’t necessarily know how you will build the story at planning
or definition. The classes that implement the functionality probably don’t
exist, and you cannot exercise them from a test fixture at that point. Tools
like Cucumber and Fitnesse de-couple authoring a test from implementing the
test fixture for that test. This enables you to author the scenario in your
test automation tool, before you begin implementing the fixture that exercises
the SUT. This is the key to enabling a test-first approach where you need to communicate the scenario for confirmation prior to commencing work, such as at planning. The point of tools like Fitnesse or Cucumber is to provide an easy mechanism to communicate your scenarios. We find that Fitnesse allows customer and developer to communicate about the scenario before implementation. You can also see the results of executing it.

Because the conversation is more important than the tool, a
tool like Fitnesse or Cucumber may not be appropriate for the acceptance
test. The most likely case is that you do not have any business rules to put
under test. Don’t feel constrained to use a particular tool. The most important
thing is to define the acceptance test and then decide what tool is appropriate
to automate the test. We even have some cases where the value of automation is
too low. Where we are configuring the system for example, it may be easier to
test the rules that work with the configured system than test the act of
configuring the system itself.

One interesting aspect not really considered here though is
that the coverage provided by the more granular unit tests is still something
of a ‘blind spot’. I’m still not sure how to resolve that, or if the customer
is genuinely concerned at that level anyway. But acceptance testing is as much about communicating the intent of what is acceptable as it is about ‘testing’. Hence we separate test and fixture.

In addition, another difference for us is that our acceptance tests may be end-to-end tests of the system. There is a trade-off here. Our acceptance tests need to run within a sufficiently short time window that developers will exercise them. End-to-end tests often end up expensive in time to run, so folks stop running them. So we sometimes mock ‘expensive’ calls. Using an IoC container can help here, and we have some common code that allows us to choose which services to mock out when setting up a new acceptance test fixture. End-to-end tests also create a web of dependencies that may fail for setup reasons. That can become painful in configuration-heavy systems. At the same time, if the tests cannot simply set up and tear down, that may itself be a smell that you need to spend more effort on automatic build and deployment of the system. Going end-to-end here does give us confidence that the system works when the parts are hooked together. While we started with a lot of end-to-end tests we seem to be moving away from them toward more tests with ‘expensive’ collaborators mocked out. One trick here may be to separate between those acceptance tests that exercise business rules, which may make heavier use of mocks for components that do not influence the test, and a few tracer bullet tests that ensure the whole is working together.
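
For illustration, a minimal sketch of that idea, using Castle Windsor purely as an example container (the service and fake names here are hypothetical, not our actual code): the fixture registers an in-memory fake for the expensive collaborator before the normal production wiring runs, so the acceptance test exercises the business rules without paying for the slow call.

using Castle.MicroKernel.Registration;
using Castle.Windsor;

// Illustrative 'expensive' collaborator and its in-memory stand-in.
public interface IRatingService { decimal RateSubmission(string reference); }

public class FakeRatingService : IRatingService
{
    public decimal RateSubmission(string reference) { return 100m; }
}

public static class AcceptanceFixtureContainer
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();

        // Register the fake before the production wiring; Windsor resolves the first
        // component registered for a service, so the fake wins inside this fixture.
        container.Register(Component.For<IRatingService>().Instance(new FakeRatingService()));

        // ...then apply the shared production registrations (installers, modules, etc.)
        // so everything genuinely under test is wired up for real.

        return container;
    }
}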

Testing business rules not workflow

One piece of advice from Rick Mugridge in Fit for
Developing Software
is to test the business rules, not the workflow. Early
on we made a lot of mistakes with this. We tended to have verbose column
fixtures setting every single property on each of the entities involved in the
test. Part of the driving force for us on this road was that our system is
configuration heavy. So to use the system you need to set up products, set up user-defined fields, set up rules, and so on. This gives the system significant flexibility but creates an overhead to working with it. That overhead meant that when we authored many early tests we needed to set up a lot of configuration data. This was too much work to write every time, and we quickly appreciated that we needed a better strategy.

Our first approaches to this problem involved shared set up for our fixtures. However, given enough properties to set, this approach is still very fragile. Worse, as refactoring tools do not reach into the text-based fixtures, they have a tendency to break every time you refactor. Tests should enable refactoring, not become an obstacle to it. Any time the team stops refactoring because of the rigidity of the tests, you have a quick way to end any team’s love affair with storytest approaches.

So we needed a better way to factor out all of that setup
code. Our next approach was to provide tooling to allow the system to be set up
into a known state. Essentially we wrote support for exporting setup from a
system as XML and for importing it. This could then be used as part of the
setup of a scenario to put the system into a known state. This also had
benefits for deployment as it let us automate configuration of environments and
import/export records between systems. So, as is often the case, making it
simpler to test also gave us architectural improvements.
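
A rough sketch of that export/import idea follows; the SystemConfiguration type is purely illustrative (the real tooling covers products, user-defined fields, rules and so on), but plain XmlSerializer round-tripping is enough to show the shape of it.

using System.IO;
using System.Xml.Serialization;

// Illustrative stand-in for the configuration we snapshot from a known-good system.
public class SystemConfiguration
{
    public string[] Products { get; set; }
    public string[] UserDefinedFields { get; set; }
}

public static class ConfigurationPorter
{
    public static void Export(SystemConfiguration configuration, string path)
    {
        var serializer = new XmlSerializer(typeof(SystemConfiguration));
        using (var stream = File.Create(path))
        {
            serializer.Serialize(stream, configuration);
        }
    }

    // Used in scenario setup (and in deployment) to put a system into a known state.
    public static SystemConfiguration Import(string path)
    {
        var serializer = new XmlSerializer(typeof(SystemConfiguration));
        using (var stream = File.OpenRead(path))
        {
            return (SystemConfiguration)serializer.Deserialize(stream);
        }
    }
}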

However, our biggest win probably came from using test data
builders to configure the system.

Omitting all that setup code

Some time ago I wrote about how we switched from using Object Mother to Test Data Builder.
That gave us enormous benefits within the domain model, because it made it easy
to do state based testing. It is also very expressive, because I can highlight the variables that impact that test.
This makes the test much easier to read. You can think of this as using a stub
for those values you do not need to control and overriding those that you do
need to control for the test. So if we want to set up a new entity we might
have some code like the following:

submission = new SubmissionBuilder()
    .WithBrokerReference(new BrokerReference { BrokerName = "Flintstones", BranchName = "Flintstones – WA" })
    .WithMarket(new Market("Small Business"))
    .Build();
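
For anyone unfamiliar with the pattern, here is a simplified sketch of what a builder like this might look like. The real one has many more properties, each defaulted to a sensible ‘stub’ value so a test only has to mention the values it actually cares about; the Submission constructor shown is illustrative.

public class SubmissionBuilder
{
    // Defaults stand in for everything the test does not care about.
    private BrokerReference brokerReference = new BrokerReference { BrokerName = "Any Broker", BranchName = "Any Branch" };
    private Market market = new Market("Any Market");

    public SubmissionBuilder WithBrokerReference(BrokerReference brokerReference)
    {
        this.brokerReference = brokerReference;
        return this;                         // returning 'this' gives the fluent chaining above
    }

    public SubmissionBuilder WithMarket(Market market)
    {
        this.market = market;
        return this;
    }

    public Submission Build()
    {
        return new Submission(brokerReference, market);
    }
}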

Many acceptance tests can be handled in a similar fashion –
use defaults for anything in the setup of the fixture that does not have an
impact on what is under test. Essentially we choose to treat the Given
statements in the same way we handle our test data builders. The Given statements call out the specific variables that affect the test, but otherwise rely on default ‘stub’ values produced by an underlying builder. So the equivalent of the above code in an acceptance test would be something like:

Given a submission

With a broker named Flintstones and a branch named
Flintstones – WA

With a market of Small Business

When…

Then…

The advantage here is that we get a good signal-to-noise ratio when setting up the fixture to use with our test. We don’t just stub out
values that the test does not depend on, we highlight those that it does depend
on. In the fixture implementation we use the builder pattern to implement the
Given statements for the fixture. We tend to be able to re-use these builders
across a number of tests. Our fixture implementation for Fitnesse looks
something like:

public void GivenASubmission()
{
    // Start a fresh builder for this scenario, re-using anything a prior
    // build step in the test has already created.
    submissionBuilder = new SubmissionPersistentBuilder(ioc);
    if (lastCreatedAccount != null)
    {
        submissionBuilder.WithAccount(lastCreatedAccount);
    }
}

public void WhenTheBrokerNameIs(string brokerName)
{
    submissionBuilder.WithBrokerName(brokerName);
}

public void WhenTheBrokerBranchNameIs(string branchName)
{
    submissionBuilder.WithBrokerBranchName(branchName);
}

public void ThenASubmissionIsCreated()
{
    // Build everything the preceding steps described, so later steps can
    // exercise the system under test against a real submission.
    lastCreatedSubmission = submissionBuilder.Build();
}

Note that we tend to have an extra Then step to build all
the Given and With statements, before we exercise the SUT. The issue here is
that we wanted to have a statement that says, build the above, so that we could
create a base flow fixture that would allow re-use in derived fixtures of much
of the setup code.

Given a submission

With a broker named Flintstones and a branch named
Flintstones – WA

With a market of Small Business

Then a submission is created

When…

Then…

This is also the reason for often checking for Last Built
Foo – to see if a prior build step in the test constructed the object. If you
were only looking to re-use at the builder and not the fixture level, then this would not be an issue for you. We could probably clean this syntax up and I guess that is on the to-do list. The advantage of some re-use here is the speed
boost it gives you getting the Given portion of your fixture working, letting
you get to the red part of red-green-refactor all the sooner. That means you
can get past the grunt work and into the interesting part – implementing the
desired functionality.

It is worth noting that sometimes you get a parameterized
test, in that you want to test the same scenario repeatedly, but for a range of
values. A column-based test fixture can work really well here – the columns either represent the variable inputs or the expected outputs. Just because we tend to use the Given-When-Then style does not mean we should pass up the opportunity to keep the intent (Setup, Exercise, Verify, Teardown) clear without becoming overly verbose.
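
As a sketch of what that can look like (assuming the ColumnFixture base class from the .NET Fit libraries, and using the Follow Up Date format rule from the criteria above purely as an illustrative behaviour), the columns in the wiki table map onto the public field (input) and the method (expected output):

using System;
using System.Globalization;

public class FollowUpDateFormatRule : fit.ColumnFixture
{
    public string FollowUpDate;        // input column: the text the user typed

    // Output column: Fit calls this per row and compares the result
    // against the expected value in the table.
    public bool Valid()
    {
        DateTime parsed;
        return DateTime.TryParseExact(FollowUpDate, "dd-MMM-yyyy",
            CultureInfo.InvariantCulture, DateTimeStyles.None, out parsed);
    }
}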

How do I find my acceptance criteria in future?

One problem with acceptance tests is how to organize them.
When we started we organized them by story. Find the story, find its acceptance
tests. The stories were organized by iteration. Twenty-seven sprints in, that early plan looks naive. We have too many stories and sprints to find our acceptance tests by that organizing principle.

The first problem is that I often want to add to an existing
scenario for new functional requirements instead of always creating a new one.
This becomes hard if you couple story and acceptance tests, because you either
need to duplicate or add new acceptance criteria to the old story.

The second problem is that, as an executable specification of the code, our tests have a lot of value – but not much if we cannot find them. We should be able to discover from our tests what the behavior of the system is.

Our current solution is to organize the acceptance tests
along the same lines as the site. For a given webpage on the site, we have a corresponding page in our Fitnesse wiki that tells you what scenarios that page supports. The mapping cannot be entirely one-to-one in that sometimes we
pop up a dialog, or perform some other workflow that we are not interested in
simulating in business rule focused tests. But it is close enough to make finding
the definition of the behavior easy.

The conceit here is that you could have the site open on one
screen and the acceptance tests open in another and have effective and
executable documentation for the behavior of that page. It is a little bit
conceited, but it works well enough to be a useful organizing principle.

Old tests, cleaning up, and throwing out

Some of our acceptance tests get bent out-of-shape by
incremental change to the system. The best answer is often to throw them away
and start again. One advantage of the ‘reflect the site’ approach above is that
it gets easier to see when to re-write. With acceptance tests, even more than unit tests, it often seems better to understand the intent and re-write to reflect the new state of the site than to try to fix up old tests.

But what about the User Interface!

One obvious point here is that no user interface has been defined as yet. The problem with testing through the UI is that it depends on the UI being complete before you can define your tests. However, it can be worth mocking up the UI at the same time as you write the scenarios, using a tool like Balsamiq.

We had a struggle at the beginning of the project over responsibility for defining
how this should look before realizing that it needed to be co-operative. In the
end it all comes back to the Agile emphasis on communication. The Customer asks for features, but the Developers need to negotiate how the features will be implemented, bringing in their understanding of how hard or easy something is, and what opportunities or approaches present themselves.

The difficulty with many
traditional requirements documents is that they focus on the UI as a mechanism
for defining the requirements. In essence the UI and its behavior becomes the
requirement. By separating out acceptance testing we begin the journey of separating
what we want from how it will be implemented. That gives us more chance to
isolate the business rules. Once we understand what the user is trying to
achieve it is easier to discuss how we will achieve it. That tends to lead us
away from simply providing an electronic version of the paper or legacy system process they may use today, toward a process they can use tomorrow.

Some folks still like to work from the UI down, whereas I want them to work from the acceptance tests on down, and then hook up to the UI. It’s probably all tilting at windmills at some point, but for me, whilst both are essential to getting it right, in the majority of cases I see more cost in the rework from not getting the business rule right than in changing the UI.

Posted in ATDD, BDD, Behavior Specification, Fakes, STDD, TDD, xUnit | 12 Comments

In TDD Red is not ‘does not compile’

I see a bit of a growing meme that the red step in Test Driven Development‘s (TDD) red-green-refactor cycle means that your test will not compile because there is no code to implement it yet.

Bunkum!

The red step is necessary because of the possibility that your test itself has an error in it. We face something of a chicken-and-egg problem with a test. We understand that we should write a test before writing code, but we cannot write a test before writing our test code without hitting a recursion issue. So we say that we write a test before writing any production code. However, our test code is as valuable as production code, so we need to prove its correctness too.

How do we do that?

By making sure that our test fails as expected. The four-phase test model (setup, exercise, verify, teardown) and the arrange-act-assert pattern both say the same thing – put the system in a known state, exercise the change we are testing, and verify the new state of the application. So by stubbing the operation under test so that it leaves the application in an invalid state after it is called, we can prove that our test will fail. This check of the test itself makes the bar go red in our test runner. Hence the ‘red’ name for this step.
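
As a minimal sketch of that (the Account type is invented purely to illustrate): the operation under test exists but is deliberately stubbed to leave the system in the wrong state, so we can watch the assertion fail before writing the real implementation.

using NUnit.Framework;

// Illustrative class under test: Withdraw is stubbed so the test below goes red.
public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal openingBalance) { Balance = openingBalance; }

    public void Withdraw(decimal amount)
    {
        // Deliberately does nothing yet - the application ends up in an invalid state.
    }
}

[TestFixture]
public class AccountTests
{
    [Test]
    public void Withdrawing_reduces_the_balance()
    {
        var account = new Account(100m);        // arrange: put the system in a known state

        account.Withdraw(30m);                  // act: exercise the change we are testing

        Assert.AreEqual(70m, account.Balance);  // assert: red until Withdraw is really implemented
    }
}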

A corollary to this is that you should author your tests so that you can simply prove the success or failure of the test. Tests that have conditional logic, for example, are bad tests – you should know what your test is doing. Getting a red to appear easily is generally also a good marker that we are indeed testing a unit. If getting a red is really hard, there is probably an architectural smell.

There is a temptation to skip the red phase and go straight to getting to green. The risk here is that you have a false positive – your test would always pass. Kent Beck talks about the model of driving in a different gear when doing TDD. Skipping the red phase can work if you feel able to drive in a high gear – the road ahead is straight, wide open and flat and you want to eat miles as fast as you can. However, as soon as you hit a steep downhill gradient, bends, or traffic, you need to shift down again. Usually the indicator here is that you find, when you get to green, that your tests would probably always have succeeded. However, having regularly caught mistakes in my own tests thanks to the red step, I would not drive in high gear without thinking about it.

But remember: red is not ‘fails to compile’, it is ‘test fails as expected’.

Posted in Uncategorized | 13 Comments

MVC or WebForms: It’s more about client side vs server side

Scott Guthrie has posted on the growing debate over Webforms vs. MVC, a debate that seems to be raging everywhere (although there is more than one MVC framework out there too). I agree with Scott’s point about the nature of technical debates, and the acceptance that there are different schools of thought. However, that does not always imply that all schools are equally good solutions to a given problem.

The issue, though, is I think not solely about the MVC pattern; it is also about the shift from providing rich client behavior server-side to providing it client-side. In this area the renewed interest in AJAX and the emergence of Javascript libraries like Prototype, script.aculo.us and JQuery have been game changers. And for me those RIA paradigms dovetail better with MVC apps than with Webforms applications. At the same time the growth of REST has meant many have re-engaged with the HTTP protocol and its resource-oriented model, and have turned attention away from page-based approaches to web development toward resource-oriented ones. There is a correspondence here, because as the use of client-side scripting grows, so does the perspective of the server as a resource provider used by the client.

It is in many ways this shift of paradigm that MVC supports better, and why using MVP approaches with WebForms is not a ‘separate but equal’ answer to modern web development. It’s not simply about better conformance to the Single Responsibility Principle.

What our frameworks do for us

Our average web framework has some pretty basic requirements – handle requests for resources from the server by returning an appropriate response. In our case this is HTML. The simple nature of this model underlies the success of the web.

To support this model our framework needs to route to an appropriate handler in our code that can read the request. Now we could just interrogate that request to pull out post data or query strings, but writing a lot of that code would get old fast, so it helps if our framework can map form, parameter, and query string values onto part of our object model. That way we can manipulate the variables of the request more easily. Now we have to take an appropriate action in response to the request. Generally this is where most of our code plugs in. We get some data and compose a response, in HTML, to return. Now just using Response.Write() to output our HTML becomes hard to maintain, so most of us expect our framework to give us some sort of template so that we can separate the static and dynamic portions of our response.

The MVC pattern enters the fray here when frameworks separate the responsibilities of template to provide the view, model to provide the response to an action, and co-ordination between the two for a request. Criticism of Webforms as lacking effective separation of concerns arises because the aspx file is both responsible for rendering the HTML used as a response and for providing the controller for the request. It thus has two reasons to change: because the template changes, or because the co-ordination of handling a request changes. These may be orthogonal concerns, and that makes maintenance harder. Some patterns such as MVP try to alleviate this by enforcing separation of these two responsibilities within the framework. This criticism is valid but it is not the complete issue; the significance of ‘MVC’ frameworks is not solely their better separation of concerns.

Now for sure the devil is in the details, but that’s pretty much it.

Bare Bones

‘MVC’ frameworks such as ASP.NET MVC or Monorail tend to have a bare bones approach to meeting these requirements. The framework takes requests and calls an action to service them, mapping the request to action parameters. We then carry out the appropriate task and render HTML through a view engine. MVC frameworks add little on top of this paradigm. The big win here is that this makes them simple to learn and easy to use, because the surface area of the API is small. The surface area for ASP.NET MVC is smaller than that for Webforms and is easier for new developers to learn. It also works in harmony with HTTP and thus with expectations. The principle of least surprise is important to APIs because it makes them easier to discover. If you understand the problem, you are likely to be led to the solution.
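
As a sketch of how small that surface area is (OrderRepository here is an illustrative stand-in, not part of the framework): the framework routes the request to an action, binds the route and query string values to its parameters, and the action hands a model to the view engine.

using System.Web.Mvc;

// Illustrative data access stand-in.
public class OrderRepository
{
    public object FindById(int id) { return new { Id = id }; }
}

public class OrdersController : Controller
{
    private readonly OrderRepository repository = new OrderRepository();

    // GET /Orders/Details/42 - the '42' is mapped onto the 'id' parameter for us.
    public ActionResult Details(int id)
    {
        var order = repository.FindById(id);   // carry out the appropriate task...
        return View(order);                    // ...and render HTML through the view engine
    }
}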

Rich Internet Applications

Now ASP.NET MVC etc. allow you to return an HTML response, but HTML has its limitations in that it is static. You need to post a request to the server each time you want some new action to occur. In some cases this ‘complete form and submit’ model works, but we have become used, from smart clients, to a more dynamic model that provides feedback as we make selections. We expect choosing from one dropdown such as country to restrict choices such as US State vs. UK county or Post Code vs. Zip Code.

Out-of-the-box support in web browsers for dynamic behaviour came in the form of Javascript and manipulation of the DOM. The browser is the smart client in this approach. The issue here is that Javascript development was a challenge for many developers and consequently many development shops. Why? Because folks found that client-side behaviour did not just require switching to another language, but also learning the DOM, which was also sometimes uncomfortably close to the metal for developers weaned on smart client forms frameworks like Windows Forms, VB, or PowerBuilder. In addition the browser wars meant that the model for programming against the DOM was not the same from browser to browser, further complicating the code that was needed to implement a rich client in the browser. In truth it was expensive. Finally, the debugging experience was poor. We were back to tracing the state of variables in order to isolate and fix problems in the browser. For many MIS departments, writing sites with a rich client, heavy on Javascript, was out of reach in either cost or skills.

To solve this problem WebForms introduces the concept of server-side controls – essentially shifting the rich interaction to the server. The server-side control abstracts the html control from the client side. The benefit is that it allows you a development approach familiar to forms-based programmers. You set and get control properties and write handlers to react to events. But as the events are raised client-side, you need to effectively mirror all of the control state from the client side to the server. The problem here is that html does not share this with you on a post. You also need to record the event raised for the server side too, as a post is all you know about from http. This brings in ViewState, which captures all the state which a post is not giving you. But once it is implemented I can provide all that dynamic behaviour on the server, skipping and jumping over issues like knowledge of Javascript and the DOM, as well as browser compatibility issues.
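
A sketch of that programming model (the page and controls are invented for illustration): the ‘click’ happens in the browser, but the handler runs on the server after a postback, with ViewState restoring the control tree so its properties read as they did when the page was rendered.

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class OrderPage : Page
{
    // In a real project these are declared in the .aspx markup.
    protected TextBox QuantityTextBox;
    protected Label ResultLabel;

    // Wired to a server-side Button's Click event; the browser does a full
    // HTTP post, and the framework replays the event on the server.
    protected void SubmitButton_Click(object sender, EventArgs e)
    {
        // Get and set control properties as though this were a desktop form.
        ResultLabel.Text = "Ordered " + QuantityTextBox.Text + " items";
    }
}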

Of course we can quickly see that with any complex enough page ViewState will grow. As it is embedded in the page it increases the page weight, which can hamper performance. In addition, because we must do an http post to transfer the state of our application back to the server, and then display the response, events are handled by a page refresh. This has the consequence of making the system slower than one which handles the event locally, giving rise to user complaints of flicker or slow response. In addition, if you try to cope by adding some AJAX callbacks from your client, the question quickly becomes ‘who owns this state’, as you may have copies of the state both on the client and in ViewState being marshalled back and forth to the server.

Webforms is also a more complex API than MVC and thus harder for developers to learn. Most of us have been at an interview where we have asked, or been asked, the question: what is the Webforms lifecycle? Understanding the stages, and where ViewState is available and modifiable, is not an insignificant learning challenge.

Given that, for some time I used to recommend building applications using the WebForms server-side control model over building applications with client-side behaviour. Why? Because the cost and difficulty of building rich client behaviour in Javascript with the DOM outweighed the cost of learning Webforms.

What Changed?

MVC trades the complexity of server-side controls for a simpler programming model, supported by client-side controls. The emergence of the new breed of Javascript frameworks like Prototype, JQuery, and script.aculo.us made it far cheaper to code the client behaviour on the client. The elegance and simplicity of these frameworks, their abstraction of browser API differences, and their rich ecosystem of control providers lowered the barrier to entry for client-side development, providing solutions to common needs such as grids. The availability of many free controls lowers the cost of common UI widgets that have often been the preserve of expensive vendor solutions.

In addition the debugging experience for Javascript has significantly improved from the old trace and dump model with browser plugins such as Firebug and Visual Studio.

With a lower cost it has become commercially viable for more development shops to build rich client applications. Best of breed applications do not suffer from the postback flicker issue because they manipulate state locally on the browser and use AJAX calls to the server. With effective client side development, the user experience is akin to smart clients of yesterday, with none of the distribution problems for central IT departments. This rich experience available for the web lies at the heart of the SaaS revolution.

The MVC model truly shines with AJAX, where the target for an AJAX call is just an action on a controller. The affinity to http’s resource-based model allows for easy definition of the server-side support for rich clients. The advantage of MVC frameworks is the ease with which they dovetail with client code using frameworks like JQuery. Client-side programming is back, using modern Javascript libraries!
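
A sketch of that affinity (the controller and lookup are illustrative): the jQuery call on the client simply targets an action that returns JSON – no page lifecycle, no ViewState, just a resource on the server answering a request.

using System.Web.Mvc;

// Illustrative lookup - in a real application this would come from the domain.
public static class CountyLookup
{
    public static string[] For(string country)
    {
        return country == "UK"
            ? new[] { "Kent", "Surrey", "Hampshire" }
            : new[] { "WA", "OR", "CA" };
    }
}

public class AddressController : Controller
{
    // GET /Address/Counties?country=UK - called from the browser as the user changes
    // the country dropdown, repopulating the county/state list without a postback.
    // (Later versions of ASP.NET MVC require JsonRequestBehavior.AllowGet for GET requests.)
    public ActionResult Counties(string country)
    {
        return Json(CountyLookup.For(country));
    }
}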

It is this, IMO, which has shifted the balance in favour of MVC. I do not think it any accident that the ASP.NET team shipped JQuery with ASP.NET MVC. I think it was essential, when shipping ASP.NET MVC, that a modern Javascript client library was included to support rich client behaviour on the browser instead of on the server side.

From my cold dead hands

The attitude of many developers may be ‘you will pry Webforms from my cold dead hands’. The march of progress is threatening to do just that. Much of the resistance is based on prior experiences with the cost of client-side development using Javascript, without understanding that the new frameworks available have significantly lowered the barrier to entry there. In addition, Silverlight opens a second front against Webforms by offering rich client development using XAML. If Javascript does not replace Webforms, then XAML will. Server-side controls had their day. They built careers, companies, and fortunes, but they are legacy applications now. Client-side controls have returned with fresh offensives that will see their end. If you are building a new web project today you should be looking at an MVC framework.

For my part the question of Webforms vs. MVC is already an irrelevance. That is yesterday’s fight and victory has been declared despite a few units continuing to fight on. The next war is likely to be between Silverlight and Javascript for ownership of the smart client…

 

Posted in Uncategorized | 27 Comments

Whither Alt.Net?

Rage, rage against the dying of the light

The lack of visibility for the idea of “Alt.Net” of late has led me to ask myself the question: whither Alt.Net? Or perhaps, is alt.net withering? A couple of years ago the alt.net meme had real traction in blogs and events. It had become a cri de coeur for the disaffected members of the .NET community. For many who felt there was a ‘better’ way, that solid principles and practices that alleviate the pain of some of software development’s ills were available if you looked hard enough around, the knowledge that like-minded people were out there gave a sense of strength and purpose. We were not alone in the dark; others like us were out there. At the first altnetconf we ran in the UK, inspired by those in the US, what was overwhelming was the feeling of ‘community’ between like-minded people who suddenly found others they could communicate with about the ideas that burnt so very brightly within them.

At the time there were warnings and criticisms of the new community. Some reasoned it would become an ‘echo chamber’. Other critics suggested that it could be ignored as those involved would just burn out leaving only silence.

Today as I ponder the state we are in, I wonder if the critics have not triumphed in their soothsaying over the enthusiasm that once fired our souls. Blogs wane in fire and passion. We seem tired from the fight.

I for one say “Do not go gentle into that good night. Rage, rage against the dying of the light”.

What is in a name?

First I want to step aside from the question about the name. David Laribee’s original manifesto set out the stall of ideas clearly, whatever you want to call it. I think that all of these remain true, but perhaps the heart of the movement for me is that “You’re not content with the status quo. Things can always be better
expressed, more elegant and simple, more mutable, higher quality, etc.”

Call it what you will. The point is not the name, but the passion for better software development. Objecting to the tag should not blind you to agreement on the ideals. I accept people might not like some of the baggage associated with alt.net. But let’s move on and accept that we all agree on the need for a movement that expresses the desire to be ill-content with the status quo, to rebel against the orthodoxy, to rage against the machine.

“There’s a time
when the operation of the machine becomes so odious, makes you so sick
at heart, that you can’t take part, you can’t even passively take part,
and you’ve got to put your bodies on the gears and upon the wheels,
upon the levers, upon all the apparatus, and you’ve got to make it
stop! And you’ve got to indicate to the people who run it, to the
people who own it, that unless you’re free, the machine will be
prevented from working at all!” – Mario Savio

Is the movement needed any more, and should we just declare victory and bring the troops home?

Scott Bellware recently bemoaned on Twitter that .NET developers doing web development are still using webforms when they should be using MVC. He pointed out that attempts to pander to some developers by suggesting that WebForms and ASP.Net MVC were separate but equal were disingenuous: “As more good
web devs move to MVC, the WebForms camp looks ever more like
paint-by-numbers web development. The shifting demographics of web
development in .Net should be a clarion call for everyone to go back
into learning mode. The paint-by-numbers crowd is feeling the heat of
all the learning that Microsoft told them they didn’t have to pay
attention to. The amazing thing: rather than pour on the learning,
ungodly efforts are being spent on perpetuating uninformed
justifications. Dumb. No. Squarely. Definitively. ASP MVC and WebForms
are not equals. Here, I’ll say it: better web developers use MVC. The
ultimate destructive force in .Net development: Microsoft convincing
developers not to worry about deepening their skills”.

This is not academic. I regularly meet people through the London .NET user group who tell me this or that senior developer has blocked the use of ASP.NET MVC at this or that company because it would require learning something new. People are still struggling with this issue at development shops all around us. The ‘senior’ developer, invested in maintaining his position through better knowledge of a framework or tool rather than transferable skills like patterns & practices, shuts down the introduction of new technologies by junior developers out of fear that it might level the playing field and bring his authority into question. I hear from junior developers that entrenched senior developers, unwilling to learn, try to kill attempts to adopt TDD/BDD because they do not understand it and worry it will undermine their authority. Further hint: your authority has already been undermined with those junior developers, who have long ago worked out that it is your fear and ignorance keeping you from implementing. They already understand that it is not a genuine judgment that the practice is not needed in a given circumstance, or that you have adopted an alternative like Design By Contract.

I still hear people suggest that TDD is too expensive, and yet complain that developers are not producing quality code and they have too much re-work. Hint: the cost of re-work is greater than the cost of doing TDD to prevent re-work (or the cost of manual testing to prevent defects exceeds the cost of TDD by a large margin). I still hear people object to story tests but complain about developers building software that does not match customer expectations. Hint: the cost of adopting a BDD approach to building the right thing is less than the cost of re-work from incorrectly fulfilled requirements.

People talk about having adopted agile at an organization, but are often practicing cargo-cult agile, scrumerfall or waterscrum, or worse, agile as no process at all.

We have by no means won this struggle. It is still being played out every day, despite the fact that the solutions may seem obvious to us. In some ways we have only reached the point of witnessing the creation of evasions, half-truths, and lies as to why such practices are inappropriate to an organization, or covering up incorrect adoption.

Alt.Net was always about challenging these ‘comfortable lies’. As a community we cannot become content to live with
the ‘comfortable lies’.

Sometimes effecting change can seem like a war. There are a lot of
people who have established comfortable positions for themselves as
prophets of a technology and do not want that comfortable home
disrupted. Unfortunately we cannot accept the lure of that comfortable
position without our skills eroding and failing to keep pace with the
wider industry. This necessitates conflict between the prophets of the new and the saints of the old: “The tree of liberty must from time to time be refreshed with the blood of patriots and tyrants. It is its natural manure.” If you have not engaged with ASP.NET MVC you need to be doing so, now. If you have, you need to be exploring the new alternatives to understand their implicit criticism.

I don’t think we can declare victory in any sense. We may be tired of
the casualties, but walking away from the fight right now just lets the
other side know that they can outwait us. Someone told me a story recently of an IT department that went agile and became extremely productive. But when the thought leadership behind the change moved on, the nay-sayers moved in to crush the agile practices and return to their old, safe, comfortable world. There is a group that reckons it can outlast the adherents of change and return to mediocrity when those agents tire or move on. You just can’t quit the fight if you don’t want that to happen.

Did the movement simply become the new orthodoxy

One criticism raised against alt.net is that it would simply become
the new orthodoxy. It could be suggested that the decline in alt.net activity is because people have simply become comfortable with a new set of orthodoxies – Scrum, TDD, NHibernate, etc. – that they will not challenge.

However, nothing stands still. The orthodoxies of Agile development – Scrum, XP, and Crystal amongst others – are being challenged. Kanban grows rapidly as an alternative to agile storyboards, sometimes provoking debate. The notion that we have adopted Scrum or XP and can rest on our laurels is to fall into exactly that trap.

Many find limitations with ASP.NET MVC’s implementation, which has led to projects like FubuMVC and OpenRasta, let alone MonoRail.

Source control seems to be leaving behind the centralized model of CVS and SVN for the distributed model of Git.

Many of the people involved in championing these alternatives championed their predecessors. So I don’t think the movement is doomed to become the new orthodoxy, provided it continues to embrace investigating and adopting these alternatives as they emerge. The goal is to do the best we can, and what is ‘best’ will keep changing. Sure, alt.net might pause for a period, absorbing one revolution before seeing its shortcomings and starting another, but I don’t believe the community is incapable of moving forward, mired in some new orthodoxy.

Did we scratch the itch of ‘people like us’ and just move on to Twitter?

There is a danger that the biggest need we had was to feel less alone, and that once we found people who thought like us, we moved into an echo chamber where we could moan about everyone else. In some ways Twitter is the ultimate example of this echo chamber. We can follow like-minded people and hold conversations with them, ignoring everyone else.

I like Twitter, please don’t get me wrong. It has enabled the kind of informal social activity for altnetuk and the London .NET user group that I found difficult to support through mailing lists. But a lot of the discussion around practices, tools, ideas seems to have moved there. Twitter has become a great radar for seeing what folks think is important and interesting out there that merits further research. However, I think there is a danger for the community here for two reasons.

First Twitter is a passable medium for conversation, but not a good vehicle for a comprehensive discussion of ideas. The length of tweets is too limiting, the ability to construct argument too limited. I am as bad as anyone for having spread a reply or opinion across a half-dozen tweets. But we should recognize that Twitter is ineffective at explaining complex ideas. It’s great that snapshot comments have somewhere to go and don’t crowd the blogosphere, but we need to continue to blog or write articles about the ideas for clarity.

Second Twitter relies much more on you knowing about the person with the key ideas and following them. Blogs are easier to find by searching, or by following links. They have far more discoverability. The rise of blogs represented a phenomenal democratization of software development knowledge. Previously such knowledge had often been hoarded by cliques as an exploitable resource through books, training, or consultancy. Blogs shared knowledge in a way that meant best practice ideas were widely available. The danger is a drift back to cliques as bloggers, tired of criticism, return to narrow cliques to disseminate knowledge and practice. Ultimately this becomes self-defeating and prevents wider acceptance of standards and practices.

The danger of Twitter becoming an echo chamber is far greater than the danger of blogs becoming such. So I believe alt.net needs to continue to blog to reach out beyond the echo chamber. But more than this, alt.net represents comfort and support to other people making the journey – all those developers looking for someone else who says ‘I believe in being the best developer that I can be’ too. That moral support can mean a lot to people struggling with obstinate colleagues and managers. It can mean a lot to point to a group and say “but I’m not the only one that thinks this way”.

Was struggle replaced with co-operation?

We might be hopeful and consider that it is possible we moved from opposition and rebellion to cooperation and inclusion. Did we achieve our goals and simply become incorporated into the mainstream? Is the rise of ASP.NET MVC, MSTest, and POCO options for EF just a sign that the movement has been incorporated into the mainstream? Whilst I think engagement is positive – after all, it is the only way to succeed in driving change – I don’t think that engagement means there is no need for alt.net any more, because I think the movement was about more than just a few “poster-child” issues. I also don’t think it is about rebellion against co-operation with MS or anyone else. I think it is about driving forward the quality of our own practice.

Software Craftsmanship

Many of the ideals of Alt.NET seem to be shared with the Software Craftsmanship movement. In some ways Software Craftsmanship seems to be the generic form of the alt.net movement. For me the key texts in Software Craftsmanship are those like The Pragmatic Programmer, Software Craftsmanship: The New Imperative, Clean Code, Agile Principles, Patterns and Practices in C#, and Test Driven Development: By Example, to name just a few. Indeed the XP movement could be seen as a call to craftsmanship.

I talked about the value of looking back to craftsmanship for models when talking about master builders instead of architects back at the beginning of 2008. One quote from Goldthwaite is worth repeating:

“a mason could operate as an architect only if he became head of a
works staff. …he had to prove himself also as an administrator and
supervisor who could be responsible for procuring materials and
equipment, hiring and supervising workers, and, in general, directing
all the technical operations of the construction enterprise”.

This idea that a journeyman becomes a master by understanding the
whole enterprise is particularly appealing as an ideal. You cannot just
focus on code, you need to understand the whole enterprise of
construction.

I think that the manifesto is something that many who consider themselves alt.net would agree with. It is also an indication that the passion to improve on a community’s existing practices does not reflect unique problems in the .NET community, but is part of a wider agenda in the software development community to improve the standards of our industry.

I’m not proposing dropping the alt.net moniker in favor of Software Craftsmanship. The Software Craftsmanship movement has specific goals and practices which align with alt.net, but it is not alt.net. What I am suggesting is that the alt.net movement intersect with broader movements like Software Craftsmanship. I think that engagement with broad movements like Software Craftsmanship would help many of us to broaden our horizons to the initiatives for improvement within the wider software community. Attending dojos or katas for different languages would help us all. I don’t think alt.net engagement needs to be replaced by Software Craftsmanship engagement; I think membership of the two communities should not be orthogonal or exclusive in our minds at all.

Wither Alt.Net?

So I don’t think Alt.Net needs to wither or be replaced by something else. I think its spirit – “You’re not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc.” – remains alive. I think the value in a community that supports those who practice that within the .NET community remains, even if it needs to re-invent its passions on a regular basis.

Posted in Uncategorized | 27 Comments