More Tips for Getting Started with Agile

Welcome to my stream of consciousness series about topics that are popping up on a new team largely composed of folks who are new to Agile practices.

The "Build" Should Remove Project Friction

Part of a full-fledged Continuous Integration effort is an automated build script that can lay down all of a system's environmental dependencies.  In other words, running the build file on a local workstation should leave that workstation able to run the code.  When you're updating your copy of the code, especially for the first time or after it's gone stale, make a habit of running the build file before you begin coding to guarantee that your box is ready to execute the code.
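As a sketch of what that looks like, here's a hypothetical NAnt build file (every project, target, and path name below is invented for illustration) that chains environment setup ahead of compilation and testing, so "run the build" is the only step a developer needs:

```xml
<?xml version="1.0"?>
<!-- Hypothetical NAnt build file; names and paths are made up for the example -->
<project name="MyApp" default="all">

  <!-- "all" is what a developer runs after getting the latest code -->
  <target name="all" depends="setup-environment, compile, test" />

  <!-- Lays down everything the code needs to run on a fresh workstation -->
  <target name="setup-environment">
    <mkdir dir="C:\MyApp\logs" />
    <copy file="config\app.config.template"
          tofile="src\app.config"
          overwrite="true" />
  </target>

  <target name="compile" depends="setup-environment">
    <solution solutionfile="src\MyApp.sln" configuration="Debug" />
  </target>

  <target name="test" depends="compile">
    <exec program="tools\nunit\nunit-console.exe"
          commandline="src\bin\Debug\MyApp.Tests.dll" />
  </target>
</project>
```

Because `compile` and `test` depend on `setup-environment`, nobody can build without also picking up the latest environmental changes.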

On the flip side, be a responsible member of the team and add any new environmental setup from your work into the automated build script.  By putting that setup into the build file you've accomplished two things:

  1. You've effectively documented the environmental setup dependency
  2. You're propagating the environmental change to the other developers, the build server, and the test environments

Here's a story about an irresponsible developer who routinely made environmental changes to the code without adequately warning the other developers on the team.  I'll call him "Jeremy."  One day Jeremy came in early and created a new web service project for some of the new backend functionality.  Jeremy let Visual Studio.Net make the IIS virtual directory for him.  He then proceeded to write integration tests against the new web service, tied the user interface to the backend, checked in, and went to lunch.  The other hapless developers updated their code and quickly discovered that the application no longer functioned on their workstations.  Because they didn't know about the new virtual directory dependency that Jeremy had introduced, they spent a fair amount of time trying to troubleshoot the application failure.  At the same time the testers pulled down the new build, discovered that the application no longer worked, and were unable to work at all.  If Jeremy had simply added a task to the build script to set up the proper virtual directory (and used CruiseControl.Net), he could have saved the other developers' and the testers' time.
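For the record, the missing build step could have been as small as this hypothetical target (the virtual directory and path names are invented; `mkiisdir` comes from the NAntContrib task library, not core NAnt):

```xml
<!-- Hypothetical target: creates the IIS virtual directory the new
     web service project needs.  mkiisdir ships with NAntContrib. -->
<target name="setup-environment">
  <mkiisdir dirpath="C:\source\MyApp\src\MyApp.WebService"
            vdirname="MyAppService" />
</target>
```

One small target, and every other workstation, the build server, and the test environments would have picked up the new dependency on their next build.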

When Estimating, Remember the Whole

Story estimation is an important activity that feeds the project management types with the increasingly accurate information they need to plan the project timeline.  It's not fun, and it's definitely an "eat your broccoli" type of activity, but it's your best tool to maintain a sustainable pace.

In a typical Agile process you spend a fair amount of time estimating stories.  Contrary to some popular belief, Agile isn't (or at least isn't supposed to be) pure cowboy coding and unplanned chaos.  The core of Agile project management is adaptive planning.  The plan is constantly updated to reflect the latest estimates from the developers.  The crucial assumption is that developer estimates basically suck on day one and get progressively more accurate as the project proceeds.

A crucial part of better estimates is taking a broader view of the development work rather than focusing only on the code.  Making a story cross the line into the "done, done, done" category involves more than just the code.  Many, if not most, developers are consistently overly optimistic in their estimates.  I'm betting part of the culprit (beyond developer hubris) is failing to pay attention to the ancillary tasks that take place as part of development.  When you're estimating a story, you need to also account for the time it's going to take to extend the build automation, the test automation support, and other sorts of overhead.  For example, a user story might only require a couple hours of coding but spawn so many permutations of input that writing automated tests takes several times longer than the code itself.  You might even make a point of calling out these extra "hidden" tasks explicitly when you're estimating stories.

Sustainable Pace Requires Some Trail Blazing

One of the attractive tenets of Agile Development is Sustainable Pace.  I think everybody agrees that eliminating overtime is a good thing, but achieving that goal is a different animal altogether.  There's also a flip side to Sustainable Pace: the development team cannot afford to be idle or working ineffectively at any point because they're waiting on some other input like requirements or infrastructure.  To maintain a level of efficiency you need somebody out a little bit ahead of the developers making sure that the next batch of stories and environment needs are taken care of.  That developer abstraction layer is essential.

I don't have any particular recipes for solving this problem; my previous team never entirely solved it.  Ideally I'd like to have dedicated business analysts or even testers who stay just ahead of the developers to ensure that the developers are always fed with detailed requirements for the next iteration's stories.  Since we don't have dedicated BAs, I'm suggesting that my new team follow a practice of identifying analysis holes for iteration n+1 during planning for iteration n, working analysis stories an iteration ahead to maintain a steady path.  I'd like to prevent cases where the development team is completely idle waiting for requirements decisions to be made.  When I was very new to Agile development I sat in and listened to two different teams have an impassioned argument about whether or not analysis stories were a good thing.  I can't say that I understood the arguments on either side then, so we'll try it anyway and fix it later.

Long iteration kickoffs can hamstring a team.  If iteration planning becomes overly long, some pre-tasking meetings near the end of iteration n-1 might make planning for iteration n go smoother.  Getting to day 1 of an iteration and realizing the analysis work isn't baked or understood isn't a good feeling.

In the end, my goal is to develop a team and ecosystem that can continuously pull down stories for an iteration, finish off those stories within an iteration, and repeat without pause for the duration of active coding.  Meeting that goal means having half an eye cocked looking ahead and removing obstacles to the developers (and testers).

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
This entry was posted in Continuous Integration, Starting a new Project.
  • http://www.red-gate.com/products/SQL_Compare/index.htm richard collins

    Peter, Jeff et al,

    There are a variety of ways people keep schema development under version control. There's a whitepaper on the three most typical methods, written by Brian Harris, here: http://www.red-gate.com/products/SQL_Compare/technical_papers/improved_database_development.pdf

    And using SQL Compare 6, with two-way synch to SQL files, is probably the fastest and most reliable way of doing it: http://www.red-gate.com/products/SQL_Compare/index.htm


  • Josh


    Great site. Your thoughts on Agile reminded me of a recent article I read in Queue. Perhaps you two share similar opinions? See:



  • http://exold.com/ David

    We create our database objects in XML files; the build process generates the database creation / upgrade scripts (as well as various ancillary products, including C++ OLE DB accessors, front-end validation code, and the like), and our automated build script uses the newly-built scripts to create a database and run tests against it. It took an unfortunate amount of time and effort getting the codegen right, but the ability to keep the original source files under version control, to keep a myriad C++, C#, and T-SQL files in lockstep, to modify the functionality in dozens or hundreds of generated files with a single edit, and to recreate and run database scripts as part of the normal build process has made it well worth it.

  • http://codebetter.com/blogs/jeremy.miller jmiller

    Thanks for the tip, Bob.

  • Bob Archer


    As you said, we integrate the Db builds within NAnt by automating a tool we use called Db Ghost. Db Ghost is designed for db change management and does a great job. The Db schema changes are checked in and labeled with the source code. NAnt uses Db Ghost automation to update the Db schema to match the build version.


  • http://codebetter.com/blogs/jeremy.miller jmiller


    Databases are the final frontier of Continuous Integration, and I don't have a great answer for this.

    You might check out Promod and Scott Ambler’s Agile Database book.

    The next time I work on a system where my team controls the database, I’m going to try to use the Ruby on Rails Migration stuff. It’s got the most compelling story I’ve seen yet for handling graceful database evolution.

    Steve Eichert blogged about it here: http://www.emxsoftware.com/RubyOnRails/Ruby+on+Rails+Migrations+Explained

    I definitely want all of the database DDL scripts in Subversion. Ideally, I have the test and/or build database rebuilt as part of the CI build to guarantee that the version of the database schema is locked with the version of the code, ESPECIALLY IF THE DATABASE SCHEMA IS CHANGING. In other words, you want to create traceability between all elements of a codebase. Knowing a CC.Net build number should lead me to the exact version of the database schema.
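    One way to sketch that traceability (the database, server, and script names here are invented, and osql is just one era-appropriate choice of SQL Server command-line tool) is a build target that drops and recreates the test database from the DDL scripts versioned alongside the code:

    ```xml
    <!-- Hypothetical NAnt target: rebuild the test database from versioned
         DDL scripts so the schema stays in lockstep with the checked-out code -->
    <target name="rebuild-database">
      <exec program="osql"
            commandline="-E -S localhost -i database\DropAndCreateDatabase.sql" />
      <exec program="osql"
            commandline="-E -S localhost -d MyAppTest -i database\CreateSchema.sql" />
    </target>
    ```

    Wire that target into the CI build and every CC.Net build number maps to exactly one version of the schema scripts.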

    You definitely want the DBAs or database designers checking their changes into the same source control repository and working under the exact same Continuous Integration discipline as the developers, so that database/code synchronization or compatibility issues — i.e., a database change that breaks the code — are discovered immediately.

    Sorry I couldn’t really be much help.

  • Jeff

    I would also love to hear your recommendations for keeping your database or schema generation scripts under version control. Our company currently generates create scripts from SQL Server’s Management Studio, which is a tedious manual step. Our build script does not regenerate the database, but a separate rebuild script is available to be run on demand.

  • http://codebetter.com/blogs/jeremy.miller jmiller

    You're assuming that I, master coder extraordinaire, actually know how to upload an image to the blog? I'll try.

    In the meantime, you might go check out TreeSurgeon. I deviate only in superficial ways.

  • http://davidhayden.com/blog/dave/ dhayden


    Could you post an image with a directory snapshot of your typical file structure that shows how you organize your source code, tools, 3rd party components, build file, etc?

  • http://beardaventures.blogspot.com David Kemp

    Thanks for posting these articles. I realise it’s difficult to take time to reflect on things like this when you’ve just started a new job, but it gives a real insight into the agile process, which, for me, is pure gold.

  • Peter

    What is your best practice for version controlling your database schema?

  • Travis

    Man, what I'd give to work in a "truly" agile team.