Do you really know where that code has been?

Last week I did a talk at DevTeach called “Creating a Maintainable Software Ecosystem” (the slide deck is here).  On one of my slides I had the following bullet point:



NEVER build and deploy from a developer environment


Even though it showed up on the slide, I don’t think I got around to explaining why I don’t like creating deployment packages on a developer workstation.  Yesterday someone pinged me wanting to understand that very point: why should it be a standard that official builds never happen on a developer workstation?


I’ll start by telling some horror stories.  Five years ago I built my very last Windows DNA web application and left the company the following spring.  At the time our standard practice was to compile VB6 DLL’s on a development workstation and check the build products into source control.  I would then write up instructions for our configuration management group to manually deploy the COM DLL’s as part of any upcoming production push.  It worked exactly as well as you’d think.  Almost a year after I left the company and project, I got a phone call from a former colleague.  He needed to make some changes to one of the COM DLL’s, but when he downloaded the code out of source control a class was missing and the project wouldn’t compile.  He was screwed.  It was worse than he knew, because I think I made some last minute code changes in the actual production push that never made it into source control.


On another project I was asked to redesign a problematic system.  I asked for the source code for the existing system.  Nobody knew where it was.  The original developer was a victim of a recent “Reduction in Force” and wasn’t around to help.  Fortunately, someone thought to go check the original developer’s old cubicle and found an ancient looking notebook computer in an unlocked drawer that had a copy of the code on its hard drive.  We think it was the code anyway.


Sure, these are horror stories of incompetence, but if we’d had a policy forbidding production pushes originating from developer workstations and some level of repeatable builds directly from source control we could have headed off some serious problems.


 


First Causes


Before I even bother to talk about how and where I want the builds made, let’s put out the end goals as I see them:



  1. Repeatable builds.  If anything should go wrong, and it will, can I quickly reproduce the build?
  2. I want to know the provenance of a build.  Where did it come from?  What exact version of the code is in there?  Is it free of foreign bodies of code?  When it comes time to troubleshoot production issues, will I know what code to look at?
  3. I don’t want to screw up the production deployment.  Building is never as simple as just compiling, even for XCopy applications. 
  4. I want to know that the exact version of the code being deployed is the version of the code signed off by my QA folks.

If you build on the developer workstation, and especially if you build from your normal working directory, I think you run a significant risk of blowing all four points.  If you’re building from the source code on your box, there’s nothing to guarantee that all the code you’re compiling against is both in source control and the proper version of the code.  It’s just the code you happen to have on your workstation.  I don’t care how conscientious you and your team are, you will flub up with source control from time to time. 


 


Build on the Build Server


My teams do official builds to testing and production on the build server as part of the CruiseControl.Net build.  One way or another, I like to have the CruiseControl.Net build put together a deployable package on each and every successful build and stash it off to the side somewhere easy to get to.  Ideally, I like to have the test deploys be something that can be manually triggered to deploy a given version in a single step by the testers themselves.  Why is this important?  Let’s go back to our original list of first causes:
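One way to get that “stash it off to the side” behavior is CC.Net’s built-in buildpublisher; a minimal sketch, with the directory paths invented for illustration:

```xml
<!-- Sketch only: copies the build output to a share, one folder per CC.Net label -->
<publishers>
  <buildpublisher>
    <sourceDir>d:\work\StoryTeller\deploy</sourceDir>
    <publishDir>\\buildserver\packages\StoryTeller</publishDir>
    <useLabelSubDirectory>true</useLabelSubDirectory>
  </buildpublisher>
</publishers>
```

Each successful build then leaves a ready-to-deploy package keyed by the build number, somewhere easy for the testers to get to.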



  1. Repeatable builds.  CruiseControl.Net runs an automated script with no manual intervention.  If the script works, I say we have repeatability.

  2. I want to know the provenance of a build.  CruiseControl.Net is pulling the code directly out of the master source control repository.  If there’s anything missing, the build won’t complete.  Besides, we want the master source control repository to be the single authoritative source right? 
  3. I don’t want to screw up the production deployment.  All build steps should be automated.  Manual builds are time consuming, but more importantly, they can be error prone.  For example, several of my projects will ratchet up the tracing levels or inject stubs in development environments, but I’ll use the NAnt build to guarantee that the correct configuration is used in production.  I want that configuration step to be automated.
  4. I want to know that the exact version of the code being deployed is the version of the code signed off by my QA folks.  See the following sections:
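On point 3, the kind of environment-specific configuration step mentioned above might look something like this in NAnt (the target, property, and file names here are invented for illustration):

```xml
<!-- Sketch only: swap in the right config file for the target environment -->
<target name="configure">
  <!-- defaults to "dev" unless the build server passes -D:environment=prod -->
  <property name="environment" value="dev" overwrite="false" />
  <copy file="config\App.${environment}.config"
        tofile="build\App.config" overwrite="true" />
</target>
```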

 


Tracking Build Number with CruiseControl.Net and NAnt


One of the first targets I add to the NAnt script of a new project is versioning support.  In the StoryTeller.build file for my StoryTeller project I have the following target:


  <target name="version" description="mark AssemblyInfo builds with the build number">
    <property name="assembly-version" value="${project.version}.0000" />
 
    <if test="${property::exists('CCNetLabel')}">
      <property name="assembly-version" value="${project.version}.${CCNetLabel}" />
    </if>
 
    <echo message="I'm running"/>
    <echo file="version.xml" message="&lt;version&gt;${assembly-version}&lt;/version&gt;" />
 
    <echo message="Marking this build as version ${assembly-version}" />
    <asminfo output="src/CommonAssemblyInfo.cs" language="CSharp">
      …
 
      <attributes>
        <attribute type="AssemblyVersionAttribute" value="${assembly-version}" />
        …
      </attributes>
 
      …
    </asminfo>
  </target>

Pay attention to the “assembly-version” property and the “CCNetLabel” value.  When the NAnt script runs it will create a new file called CommonAssemblyInfo.cs that contains assembly attributes to identify the product name and build version.  This file is linked to each and every binary in my solution.  As long as this target runs before the compilation I can embed the contents of the “assembly-version” property into the version number of each binary.  The “assembly-version” is created at runtime by appending the current “CCNetLabel” property to the project version.  Assuming the build is successful, I get a batch of binaries that can be clearly traced to a known CruiseControl.Net build.  The “CCNetLabel” is supplied to NAnt from CruiseControl.Net itself and wouldn’t be available if you simply ran the StoryTeller.build file manually.  In fact, when I run the NAnt build manually all the binaries will be versioned (as of this writing) as “0.8.0.0000.”  Seeing the four zeros at the end of the build tells me this is a local developer build.  In a real project that tells me that these assemblies are not to be trusted because I don’t know where this build came from.
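For what it’s worth, the “linked to each and every binary” part is a file link in each .csproj rather than a physical copy; roughly this, with the relative path assumed to be one level up from the project:

```xml
<ItemGroup>
  <!-- Link, don't copy: every project compiles the same generated file -->
  <Compile Include="..\CommonAssemblyInfo.cs">
    <Link>CommonAssemblyInfo.cs</Link>
  </Compile>
</ItemGroup>
```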


Ok, that’s nice.  You know which CC.Net build the assemblies came from, but how can you know exactly which version of the code in source control made up the binaries?  Simple: we just have CC.Net tag the source control with the build number after successful builds.  Fortunately, the CC.Net + Subversion combo makes this very simple.  Here’s the snippet from the ccnet.config file for StoryTeller that does just this:


 


  <project name="StoryTeller" queue="StoryTeller">
    <workingDirectory>d:\work\StoryTeller</workingDirectory>
    <modificationDelaySeconds>10</modificationDelaySeconds>
 
    …
 
    <sourcecontrol type="svn">
      <trunkUrl>http://storyteller.tigris.org/svn/storyteller/trunk</trunkUrl>
      <workingDirectory>d:\work\StoryTeller</workingDirectory>
      <tagBaseUrl>http://storyteller.tigris.org/svn/storyteller/tags</tagBaseUrl>
      <tagOnSuccess>true</tagOnSuccess>
    </sourcecontrol>
 
    …
    </publishers>
 
  </project>

After each successful build, CC.Net will issue a command to Subversion to tag the current revision of the repository trunk as the CCNet build number.  At any time I can retrieve the exact version of the code matching the binaries by the tag number.  It’s now trivial to retroactively create a support branch from a given CC.Net build.  Any fights over what code made it into the official build are relatively easy to solve by simply going to the repository browser.


 


Removing Project Friction 


The previous part talked about the mechanics of the build, but now let’s talk through the workflow.  I want to be able to deploy code to testing and even production with minimal friction.  With a team of any size and embedded testers, I could easily be pushing code to the testers 3-5 times or more a day on average.  To do this effectively, I want to shut down the possibility of errors and miscommunication between myself and the testers.  Specifically, I’m concerned about these issues:



  1. We screwed up the push.  I hate getting bug reports that are caused by botched testing pushes.  100% automation of builds with environment tests baby!

  2. The testers tested the bug fix before the bug fix was deployed.  Don’t waste the testers’ time.  When you fix bugs or complete new stories, you always tell the testers which build number contains the fixes and new features.  On the other side, the testers have to be aware of the current build number.  Make the build number obvious in some way.  Either teach them to check the assembly versions themselves, put the build number in some kind of “About” screen, or put the build number into the title bar.  However you do it, strive to cut down on miscommunication problems betwixt you and the tester.  “I thought you had already pushed that code” or “oops, you don’t have the right code yet” comments need to become a thing of the past.

  3. I want to do the push fast and get right back to work.  By wanting to work iteratively in smaller steps I’m going to need to make far more pushes to testing.  I want them to be automated to spare my time.
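An “environment test” along the lines of point 1 can be as simple as a NAnt target that hits the freshly deployed app and fails the push when it doesn’t answer; a sketch with an invented URL and target names:

```xml
<!-- Sketch only: NAnt's <get> task fails the build on an HTTP error -->
<target name="environment-test" depends="deploy">
  <get src="http://testserver/storyteller/heartbeat.aspx"
       dest="build\heartbeat.html" />
</target>
```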

 


Closing Thoughts


I find it more than a little ironic that traditionalists like to label Extreme Programming as irresponsible hacking, yet I’ve learned far more about better configuration management practices from my XP experience than I ever did in a traditional software shop that was desperately chasing higher CMM levels. 


We all want to be safe, know where things are, and generally stay under control.  I also want to go fast and get stuff done.  I don’t want laborious configuration management practices holding me back and retarding my progress.  I never want the fear of the configuration management process to stop me from making the right technical decisions.  For all of these reasons and more, I strongly prefer using project automation wherever possible in place of fancy manual work processes.  The case in point for me was a situation last spring where we hacked up some code because it was just too freaking hard to go through the client’s database change management process.


I think this issue is another point in favor of blurring the lines between project roles.  In my old traditional environment we had a separate configuration management team and the developers had nothing to do with this process other than filling out paperwork.  As I discussed in my horror stories, I was able to color within the lines of their official process and still screw up the configuration management royally.  In a small Agile shop we developers were responsible for configuration management.  We implemented Continuous Integration and automated builds ourselves.  We actually had far, far better control over what code was where inside our environment because change management was baked into the day to day development.


 


The moral of the story: if you have a request for a blog topic, you’re not in a hurry, and I already have the content sitting around, I will happily write up a blog post for you.

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
This entry was posted in Continuous Integration, Featured, StoryTeller. Bookmark the permalink. Follow any comments here with the RSS feed for this post.
  • LiamD

    Should AssemblyInfo.cs not be placed in Version Control? If I want to go back to an old revision in the repository and try to build it, the build output will be different from that created by the CC build, since AssemblyInfo.cs is created on the fly based on “CCNetLabel”.

    So in order to create the exact same executable AssemblyInfo has to be in version control too?

  • Paul Hatcher

    Regarding the comment made about how “expensive” CI can be to set up, what I’ve done is to make a very simple common msbuild file and an additional msbuild file per solution that just lists the assemblies to be tested.

    This gets me the best of both worlds; I can build on my development machine before checkin to see if the unit tests pass and get the code coverage stats etc, but I know that when I check in, that the same build process will be executed on the build server.

    The trick is to put the tools (NUnit, NCover, etc) into source control as well and have that as the primary dependency for each project on the build server and also grab a copy onto each developer’s workstation – this ensures that most environmental build issues disappear.

  • flipdoubt

    I wrote my own silly little executable to update Assembly attributes by name. Thanks for all your help guys!

  • flipdoubt

    @Jeremy
    Thanks for all the very valuable help thus far. I’m having a hard time finding the MSBuildContrib project. I’ve searched both CodePlex and Sourceforge. Any ideas as to where I can find it?

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @flipdoubt,

    You’ve created a build script in either NAnt or MSBuild right? Before the “compile” step, you need to call some sort of “version” task to build a CommonAssemblyInfo. There’s an example of doing this in this post with the asminfo task in NAnt. I know there’s an MSBuild equivalent, but I think you have to go to MSBuildContrib for that.

    Once you can generate the CommonAssemblyInfo file, you just make every single project in your solution *link* to that file and remove everything but the assembly name from each AssemblyInfo.cs file created by Visual Studio.

  • flipdoubt

    @Jeremy,

    Thanks for all the advice.

    If my CCNET label makes a valid assembly version, how would I go about slipping that into the one and only SolutionInfo.cs file I have referenced by all 24 projects in my solution? Do I use some kind of CCNET task before the MSBuild task? Which one?

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @flipdoubt,

    You don’t need to convert the sln file to anything else. Either MSBuild or NAnt can happily compile just by feeding in the sln filepath and build target. I’m not necessarily recommending NAnt over MSBuild. I use NAnt because I’ve used it for quite a while and I don’t see anything compelling about MSBuild to make the switch. Either tool will get the job done.

    Of course I’m going to give a long, hard look at moving to Rake for the next adventure.

  • flipdoubt

    @Ian:
    I figured a .SLN file must not be an MSBuild file because a .SLN does not contain XML. I’m asking how to convert it to a true MSBuild file so that I can add some of these additional tasks you’re suggesting.

    @Jeremy:
    I know you’re not saying this outright, but it sounds like your general recommendation would be to switch from MSBuild to Nant. I’m asking about a tool to translate .SLN to either MSBuild or Nant because I want to keep this as simple as possible. I just want to build the assemblies and add a task to assign the SVN revision number as the assembly version.

  • Ian Wood

    Sorry, bit of a typo – this should be

    Also, as far as I know, .sln files are NOT strictly MSBuild files, the .csprojs are. However the MSBuild Task can take a .sln file as its Projects Property.

  • Ian Wood

    @flipdoubt

    You may find this article useful. Its a little old but answers some of the questions you have asked.

    http://blogs.msdn.com/mswanson/archive/2004/10/05/238423.aspx

    >So you’re saying you have $(CCNetLabel) in your AssemblyInfo.cs or SolutionInfo.cs?
    No, the PropertyGroup is in an MSBuild file, e.g. Build.pro.

    Also, as far as I know, .sln files are strictly MSBuild files, the .csprojs are. However the MSBuild Task can .sln file as its Projects Property.

    We version our assemblies using a Task file from GotDotNet ( http://www.gotdotnet.com/Community/UserSamples/Details.aspx?SampleGuid=5C455590-332C-433B-A648-E368B9515580 ) but instead of modifying the .csproj of each project it is done on the CI machine during its build. This way there is nothing for developers to do when a new project is added.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    Assuming that you’re on 2.0+, there isn’t anything to it. The .SLN file is an MSBuild file. In NAnt I just shell out to MSBuild to do the compile step like this:




    where ${compile.target} is either “debug” or “release”

  • flipdoubt

    Here is a general question: How do I turn my .SLN file into an MSBUILD file so I can add tasks? Maybe that is a silly question that simply show how little I know.

  • flipdoubt

    So you’re saying you have $(CCNetLabel) in your AssemblyInfo.cs or SolutionInfo.cs? If so, that’s pretty cool, but you obviously can’t compile and debug it in VS.NET that way. In what part of the process do you slip it in?

    I would love to version my deployment assemblies that way.

  • Ian Wood

    Dear flipdoubt,

    Sorry not be clearer.

    That is in the MSbuild file.

    The way I have set up our build is that all the work is done in the MSBuild files ( there are several ) rather than CCNET.

    I have done this so that the scripts are more reusable and to be able to isolate things when there are problems. For instance, there was the occasional problem due to the fact that CCNet runs as ASPNET
    rather than as the user of the box. As I was able to run the exact same script, I was able to tell the problem was due to it being run as ASPNET.

    As an aside we use the CCNETLabel to version our deployment assemblies. This is $(CCNetLabel) which is passed to MSBuild from CCNET “by magic”.

  • flipdoubt

    @Ian,

    Cool beans! But what kind of markup is that? Does it go in the MSBuild file, the CC.NET config, or what?

    Thanks, by the way.

  • Ian Wood

    It is easy to get the SVN Revision number. Here is how we use SVN checkout.


    LocalPath="..\Code">



    You can then use $(Revision) where ever you need it.

  • flipdoubt

    Nevermind. I figured out that I need to set the build argument to /p:Configuration=Release and leave the Targets property blank. Now I just have to figure out how to pull off your labeler magic without Nant or figure out how to get the SVN revision number.

  • flipdoubt

    Thanks for the offer of help!

    I spent some time wiring up a three project solution using an MSBuild task in CC.NET. If I use the solution file as the MSBuild file, I can’t set the Target property to Build. Does the solution file not contain any targets?

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    Feel free to send in questions, or better yet, post it here.

  • flipdoubt

    Hmmm, the Tomcat, JRE, and MySql of TeamCity seem kinda intimidating. I think I’ll go back to digging into CC.NET.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @flipdoubt,

    “One thing that bugs me about CI is that it requires sooooo much configuration.”

    How so? Vanilla CC.Net takes very little configuration to get up and going. I’ve set up a new CC.Net build from scratch in under an hour on several occasions. I’ve given TeamCity a couple tries, mostly to get a unified build monitor for both Java server and .Net client. My feeling was that I can get CC.Net up much faster than I can TeamCity, even with TeamCity’s GUI screens for configuration.

    JetBrains did open source it though.

    The licensed components on the build server are a bit of a nightmare though. The vendors should have a solution for licensing strictly on the build server.

  • flipdoubt

    Great post! One thing that bugs me about CI is that it requires sooooo much configuration.

    Right now, I work in a shop that does things the “horror story” way. Heck, I’m the only one that checks things into SVN because others find it mysterious. But even I don’t use Nant. I “use” MSBuild. Really, I have a .BAT file that builds my solution with DEVENV.EXE. Still, it all comes from my developer machine because we only have so many licenses of the components.

    Anyway, with all time you have to invest in Nant and CCNet, do you guys have any insights on the recently released TeamCity Professional Edition from JetBrains? It seems to do most of the same stuff, but I wonder whether someone with real experience has used it enough to give a true evaluation. I’m just afraid of going down one path, spending gobs of time configuring the environment, cajoling the component vendors to cough up licenses for the build server, and then find that I have a very fragile environment based on an inferior tool. I dunno.

  • alberto

    > Then I suggest you go read all the older stuff on this blog, that’s how I’ve learned about CI, testing and stuff.

    I’m doing it already. A lot. It’s not that I want you guys to solve all my newbie questions without reading a word.

    Thanks both for your insights, double to Jeremy for your blog, I’ve found a lot of very interesting stuff.

  • PascalL

    @alberto

    >I’ve just discovered CI and there’s a lot of stuff and good practices I want to learn about.

    Then I suggest you go read all the older stuff on this blog, that’s how I’ve learned about CI, testing and stuff.

    We got like 15 projects in production right now. About 10 of them use our core libraries and only 6 use our financial libraries. It’s not that hard to know which projects use what version of the libraries, and we have a tool to search the source code of every project in case we want to make a breaking change. We are also trying to keep the public API as thin as possible to avoid problems with the libraries’ consumers.

    Our code and DB scripts are in the same repository branch, so when I do an update and see that new scripts have been committed I run a macro to update my personal DB on the server. Then I know my code and DB are in sync.

  • alberto

    Ok, so you pull the libraries. That’s the approach i’m following right now. But I don’t have an easy way to get the inverse dependencies of the libraries (to see which projects are using the library). Do you?

    For the databases, it’s then again up to the developer when to update the db, while the code may not work if he doesn’t. That’s a smell for me. Anyway, do you update or drop and create the DB?

    I’m sorry if I am being too bore. I’ve just discovered CI and there’s a lot of stuff and good practices I want to learn about.

  • PascalL

    @alberto

    What we do for that is we branch the code for the libraries. We update the assembly reference only as needed. If we don’t need any new features we just don’t update the reference until after a release, so we can control behavior changes or bugs when using the libraries.

    The libraries have their own CI build and we update the DLLs in SVN only on a successful build. We also ask the libraries team to handle breaking changes in all the projects using them.

    For DB upgrading we write scripts and it’s the CI job to actually update our QA DB. Developers can also update their own DB every time they update the code.

  • alberto

    Just as a clarification.
    My concern with dependant home-made libraries is with a couple of libraries that I use in several projects. That’s why I don’t really like the idea of using a reference to the project, because when you introduce a breaking change, you are automatically breaking all the other projects.

  • alberto

    Could you deep on those two topics a little more?
    How do you keep changing databases (the tables themselves, not the data) in sync when changes are made?
    How do you keep the state of your databases consistent between tests?

    How do you automatically push a dependency?
    I have considered two approaches for my libraries (both of which I code).
    One is to build it and include it in SVN as any other 3rd party lib.
    The other is to build the project using a reference to the library project.
    A third approach would be to use something like Maven to manage dependencies, but I haven’t found anything specific for .Net.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @Alberto,

    You wrote: What do you mean by “One database per developer “AND ENVIRONMENT”?

    By environment I just meant that the build server should have a dedicated database and the test and/or UAT environments should also have their own dedicated databases. Sharing a database is tantamount to sharing needles as far as automated testing is concerned.

    “Propagating dependent libraries. Manual or Automatic?”

    If at all possible the mechanical act of propagating binary dependencies from one codebase to another is automated. The question really becomes: “does the dependency push happen automatically during the CI build, or is it kicked off manually?”

    If you own both codebases I say automated. If it’s a team handoff, I vote for manually kicked off scripts. Purely manual processes are bad no matter what in my book.

  • http://sharpbites.blogspot.com alberto

    My comment just got lost. :(

    I’ll try to write it again.

    I have a couple of questions about your slides.

    What do you mean by “One database per developer “AND ENVIRONMENT”?

    Whats your answer to your question:
    Propagating dependent libraries. Manual or Automatic?

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @AndyB,

    The tagging is done automatically with the build. I haven’t had any space issues with that yet. SVN at least makes soft copies, so it isn’t that expensive. For releases I generally just like to track the CC version in production. I can make retroactive branches at will if necessary.

  • http://thunkthing.blogspot.com/ AndyB

    Jeremy

    As a matter of practice, do you take tags for every single build or just release (incl. release-to-test) builds?

    The reason I ask is in a similar vein to that of Chris’s question – you will end up with a major amount of tags (e.g. I’m currently on build 1251 of the project I’m developing at the moment as we do our CI nightly).

    Also, if you have the build number and you always commit to the trunk of Subversion, then surely that will give you the snapshot of code you require?

    BTW, great post again Jeremy.

  • http://codeprairie.net/blogs/chrisortman/default.aspx chrisortman

    On one of my projects I’m on build 230. Do you periodically delete old tags?

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    Chris,

    I think I’d be fibbing if I gave you too much more than “that’s the way I learned to do it.” I do like using the CC.Net build because:

    * lots of things trip off SVN versioning besides code changes
    * I only want to care about versions that pass a CC.Net build anyway

    You can simply switch CC.Net to use the SVN number and make that visible throughout.

  • http://codeprairie.net/blogs/chrisortman Chris Ortman

    Could you explain why you prefer to tag the build on success rather than use the SVN revision as part of your assembly version?

  • http://brewder.blogspot.com/ Brian Brewder

    I completely agree. I’ve been responsible for automating the build for my team and it’s been a great success. Before I did it some of the devs were not too sure about it, but they have all been converted to believers :).

    If anybody is interested, I’ve got some posts on my blog on automated builds. http://brewder.blogspot.com/search/label/build. One of the posts includes links to an article I wrote on Code Project about how to compile projects (VB.Net and C# projects anyway) in the correct order without solution files or manual configuration (just standard project files).

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @Bil,

    It’s more fun to taunt you.

    If you want a comeback, you could try: “Hey Jeremy, having fun with legacy .Net 1.1 code today?”

  • http://weblogs.asp.net/bsimser Bil Simser

    Dude, don’t remind me about TFS->Subversion. I’ll get you on the phone to my boss and you can duke it out with him over it.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    Not me, thank Owen Rogers, Mike Roberts, and the CC.Net gang

    P.S. To you TFS using folks, did I mention how easy CC.Net and Subversion is? ;-)

  • rgdavies71

    That makes sense, I knew you’d have it covered :)

    We are just getting the build process automated with TFS, I’ll go and dig around myself and see what the story is …

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    I’d have to dig for a second, but I believe that CC.Net remembers the SVN revision number and uses that to create the tags for that very reason.

  • rgdavies71

    Good stuff as always.

    One point – by labelling the source post-build, is there not a danger of code being checked in between the build and the labeling? We prefer to label first, then get the source from that label.

  • http://bryantb.blogspot.com Bryant

    Great post! Long time reader, first time poster here. Your blog saves me a lot of time as a manager of a team of developers. We’re currently in the middle of a two+ year migration from an enterprise Oracle Forms application using a modified Scrum development process. Several of the developers are new to .NET so have a lot of questions. Many times I can just give them a URL to one of your posts and their questions are answered.

  • http://weblogs.asp.net/bsimser Bil Simser

    Thanks mate. This is perfect and what I was looking for. I encourage everyone to go check out the StoryTeller.build file as it contains some nice pieces for handling the build process. I need to figure out how to tag builds with TFS so I’ll probably blog that shortly as a companion to this post.