Parallel Pair Programming

I’ve spent a bit of time working with Les recently and it’s been quite interesting working out the best way for us to pair together. He’s working as a front end developer on the team, which means he’s best utilised on the CSS/JavaScript/HTML side of things.

Having said that, there are often features which require collaboration between the front end and the back end, and we’ve been trying to drive these features from the front end through to the back end rather than writing the back end code separately and hooking it all up to the front end with Les later on.

As a result we’ve found it most effective to prototype any back end code that we write while working together, writing just enough code to let us see that everything is hooked up correctly.

Once we’re happy that we’ve got that working correctly, we split up and work side by side, effectively parallel pairing as Vivek Vaid describes it.

At this stage I focus on driving out the back end code properly with a more test driven approach, while Les tidies things up from a front end perspective, ensuring that the feature’s look and feel is consistent across browsers and taking care of any other front end functionality.

The benefit we’ve seen from doing this is that we work together on the code where pairing adds the most value, and then split up for the work where we wouldn’t add as much value to each other.

Since we’re working directly together at the beginning of a story, we also get the benefit of talking through the potential approaches we could take, and I’ve often found that I end up with a better approach to a problem when working with a pair than when working alone.

The thing we have to be careful about is treading on each other’s toes, because we’re working around the same part of the code base and often in the same files.

This means there is a bit of merging to do, but as we’re sitting next to each other it hasn’t proved to be much of a problem. I think using a source control tool like Git or Mercurial, with their stronger merging capabilities, would probably make this a non-issue.

It seems to me that we’re still getting the benefits of pair programming with this approach, and I’m becoming more convinced that it could be useful in other situations as well.


node.js: First thoughts

I recently came across node.js via a blog post by Paul Gross and I’ve been playing around with it a bit over the weekend, trying to hook up some code which calls through to the Twitter API and returns the tweets on my friends timeline.

node.js gives us event driven I/O using JavaScript running server side on top of Google’s V8 JavaScript engine.
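To get a feel for the model, here’s a minimal sketch of the canonical example: a web server where the function we pass in runs once per incoming request rather than when the server starts. The method names on the response object have changed across early node.js versions (very early builds used names like sendHeader and sendBody rather than the writeHead and end shown here), so treat this as illustrative:

    var sys = require("sys"),
        http = require("http");

    // the function passed to createServer is a callback: it isn't run here,
    // it runs each time a request arrives
    http.createServer(function (request, response) {
        response.writeHead(200, {"Content-Type": "text/plain"});
        response.end("Hello from node.js\n");
    }).listen(8124);

    sys.puts("Server running at http://localhost:8124/");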

Simon Willison has part of a presentation on slideshare where he describes the difference between the typical thread per request approach and the event based approach to dealing with web requests, using the metaphor of bunnies. He also has a blog post where he describes this in more detail.

Another resource I found useful is a video from jsconf.eu where the creator of node.js, Ryan Dahl, explains the philosophy behind event driven I/O and gives several examples using node.js.

These are some of my thoughts so far:

  • I’m not used to having so many callbacks spread all around the code and I’m still getting used to the idea that they aren’t executed until the event actually happens!

    I often find myself looking at a piece of code and not understanding how it can possibly work because I’m assuming that the function passed in is executed immediately when in fact it isn’t.

  • If you make a web request the response comes back in chunks, so the callback we set up to capture the response will be called multiple times with different parts of the response message.

    For example I have this code to call Twitter and return all my friends’ status updates:

    var sys = require("sys"),
        http = require('http');

    exports.getTweets = function(callBack) {
        var twitter = http.createClient(80, "www.twitter.com");
        var request = twitter.request("GET", "/statuses/friends_timeline.json",
                                      {"host": "www.twitter.com",
                                       "Authorization" : "Basic " + "xxx"});

        request.addListener('response', function (response) {
            var tweets = "";

            // 'data' fires once per chunk, so we accumulate the body here
            response.addListener("data", function (chunk) {
                tweets += chunk;
            });

            // 'end' fires once the whole response has arrived
            response.addListener("end", function() {
                callBack.call(this, tweets);
            });
        });

        request.close();
    };
    

    I originally thought that the listener for ‘data’ would only be called once, but it sometimes gets called 8 times, so I’ve created the ‘tweets’ variable to accumulate the chunks until the ‘end’ event fires and we have the full response, at which point the callback is invoked.

    I’m not sure whether I’m missing the point a bit by doing this and I think I possibly need to get more used to designing functions which can deal with streams rather than expecting to have all of the data.

  • It seems like node.js would be perfect for a version of my colleagues Julio Maia and Fabio Lessa’s http-impersonator, a Java application used to record and replay requests/responses made across HTTP-based protocols.

    I haven’t quite worked out the best way to test the above code. Ideally I want to stub out the HTTP request so that the test doesn’t have to go across the wire. Micheil Smith pointed me towards fakeweb, which allows the faking of HTTP requests/responses in Ruby, so I’ll probably have a go at creating something similar. One way of making the code stubbable in the meantime is sketched just after this list.
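One way to make the code stubbable without a fakeweb-style library is to pass the HTTP client into getTweets so that a test can substitute a hand-rolled fake. This is only a sketch of the idea: the createFakeClient name and the canned chunks are invented for the example, and the fake implements just the methods that getTweets actually uses:

    var sys = require("sys");

    // a version of getTweets that takes the client as a parameter so that
    // a test can pass in a fake instead of opening a real connection
    var getTweets = function (client, callBack) {
        var request = client.request("GET", "/statuses/friends_timeline.json",
                                     {"host": "www.twitter.com"});

        request.addListener('response', function (response) {
            var tweets = "";
            response.addListener("data", function (chunk) { tweets += chunk; });
            response.addListener("end", function () { callBack(tweets); });
        });

        request.close();
    };

    // close() plays back the canned chunks as if they had come across the wire,
    // exercising the multiple 'data' events followed by 'end'
    var createFakeClient = function (chunks) {
        var requestListeners = {}, responseListeners = {};
        var fakeResponse = {
            addListener: function (event, listener) { responseListeners[event] = listener; }
        };
        return {
            request: function (method, path, headers) {
                return {
                    addListener: function (event, listener) { requestListeners[event] = listener; },
                    close: function () {
                        requestListeners["response"](fakeResponse);
                        for (var i = 0; i < chunks.length; i++) {
                            responseListeners["data"](chunks[i]);
                        }
                        responseListeners["end"]();
                    }
                };
            }
        };
    };

    // a real test would assert on the value; sys.puts keeps the sketch short
    getTweets(createFakeClient(['[{"text": "hello', ' world"}]']), function (tweets) {
        sys.puts("Received: " + tweets);
    });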

So far node.js seems really cool and writing code with it is great fun. I’m still not sure exactly where it will fit into some of the architectures I’ve worked on, but the model it encourages feels very natural to work with.


A reminder of the usefulness of Git


None of the projects I’ve worked on have used Git or Mercurial as the team’s main repository, so I keep forgetting how useful those tools can be even when they’re just being used locally.

Last week, having refactored a bit of code to introduce a constructor, I ran into a problem trying to work out why a Rhino Mocks expectation wasn’t working as I expected.

I wanted to include the Rhino Mocks source code in our solution before and after the refactoring and step through the code to see what was different in the way the expectations were being set up.

My initial thought was that I could just check out the repository again in another folder, include the Rhino Mocks source code there and step through it. Unfortunately we have all the projects set up to deploy to IIS, so Visual Studio wanted me to adjust all of those settings in order to load the solution from the new checkout location.

I probably could have gone and turned off that setting, but it seemed like too much effort and I realised that I could easily use Git to solve the problem instead.

I took a patch of the changes I’d made and then reverted the code before checking it into a local Git repository.

I updated the solution to include the Rhino Mocks code, then created a branch called ‘refactoringChanges’ and applied the patch I’d created with my changes.

It was then really easy to switch between the two branches and see the differences in the way that Rhino Mocks was working internally.
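In Git terms the workflow looked roughly like this. It’s a sketch rather than a transcript: the patch was taken and the working copy reverted with the team’s main version control tool first, and the file name and commit messages are invented:

    # create a local repository over the reverted working copy
    git init
    git add .
    git commit -m "pre-refactoring code, with the Rhino Mocks source included"

    # apply the refactoring on its own branch
    git checkout -b refactoringChanges
    git apply refactoring.patch
    git commit -am "refactoring changes"

    # flipping between the two versions is then trivial
    git checkout master
    git checkout refactoringChanges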

The actual problem eventually turned out to be the way that the code calls Castle DynamicProxy but I didn’t get the chance to look further into it – we had learnt enough to know how we could get around the problem.

I’m in the process of including the source code for all the third party libraries we use in the solution on a separate Git branch that I can switch to whenever I want to debug through that code.

Sometimes I end up having to close Visual Studio and re-open the solution when I switch to and from that branch, but apart from that it seems to work reasonably well so far.



Preventing systematic errors: An example

James Shore has an interesting recent blog post where he describes some alternatives to over-reliance on acceptance testing. One of the ideas he describes is fixing the process whenever a bug is found in exploratory testing, and he suggests two ways of preventing bugs from making it through to that stage:

  • Make the bug impossible
  • Catch the bug automatically

As he puts it:

Sometimes we can prevent defects by changing the design of our system so that type of defect is impossible. For example, if we find a defect that’s caused by a mismatch between UI field lengths and database field lengths, we might change our build to automatically generate the UI field lengths from database metadata.

When we can’t make defects impossible, we try to catch them automatically, typically by improving our build or test suite. For example, we might create a test that looks at all of our UI field lengths and checks each one against the database.

We had an example of the latter this week with some code which loads rules out of a database and then tries to map those rules to classes in the code through the use of reflection.

For example, a rule might refer to a specific property on an object, so if the name of the property in the database doesn’t match the name of the property on the object then we end up with an exception.

This hadn’t happened before because we hadn’t been making many changes to the names of those properties, and when we did, people generally remembered that changing the object meant changing the database script as well.

Having that sort of manual step always seems a bit risky to me since it’s prone to human error, so having worked out what was going on we wrote a couple of integration tests to ensure that every property referenced in the database matched up with one in the code.
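The project is .NET so the real tests use reflection over our C# classes, but the shape of the check is language-agnostic. Here’s a minimal sketch of the idea in JavaScript, with the database query replaced by a hard-coded list of property names since that detail isn’t shown here:

    // in the real test these names would come from the rules table in the database
    var rulePropertyNames = ["customerName", "orderTotal", "deliveryDate"];

    // a stand-in for the domain object that the rules get mapped onto
    var order = { customerName: "", orderTotal: 0, deliveryDate: null };

    // every rule in the database must map to a property on the object,
    // otherwise the rule engine would blow up at runtime
    rulePropertyNames.forEach(function (name) {
        if (!(name in order)) {
            throw new Error("Rule refers to '" + name + "' but no such property exists");
        }
    });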

We couldn’t completely eliminate this type of bug in this case because the business wanted to have the rules configurable on the fly via the database.

It perhaps seems obvious that we should write these types of tests to shorten the feedback loop and catch problems earlier than we otherwise would, but it’s easy to forget, so James’ post provides a good reminder!
