OpenRasta: Today and tomorrow

The OpenRasta community has experienced a bit of a renaissance. And so it should, for there is much to rejoice about, and really cool stuff is happening in the pipeline.

Because the time is right to reopen the communication channels we once had, the community is organising an OpenRasta MeetUp to talk about where we are and where we’re going, with great presentations, nifty additions to the OpenRasta family, and a few reveals of what’s coming next.

Come join us at the Just Eat offices for an amazing evening of tech, beer and probably some food, on August the 22nd.

The Future of OpenRasta

Friday, Aug 22, 2014, 6:30 PM

41 Developers Attending

Speakers are still being scheduled; if you would like to give a talk, please contact us.

Check out this Meetup →

Posted in Uncategorized | Leave a comment

Unit testing is out, Vertical Slice Testing is in

We have been doing testing for a long time. Some people are practicing TDD, but I think that’s only 46 people in the world and they all follow my twitter feed.


We all know the common issues people have with writing large test harnesses around systems: it seems to take a long time, existing code is difficult to test, subsequent changes to the code make many existing tests fail or become redundant, and the harnesses can often become brittle, in no small part because of the abusive use of mocking.

As a recovering TDD addict, I used to tell people, like many others do, that the issue with their TDD was that they didn’t do it right. If TDD is too hard, do more TDD! In other words, if it hurts, just bang that leg with that baseball bat a bit harder, eventually you will not hurt anymore.

I will come straight out with it: this approach to unit testing, integration testing and TDD has, by and large, failed. It’s over. It is time to move on. I have, and I now do Vertical Slice Testing.

A law of diminishing returns

Whatever your test coverage, there is a point where more tests don’t equate to more quality. Actually, let me be entirely honest here: I do not believe that a good test harness leads to software your users perceive as better quality.

The people you are trying to please with a good test harness are the ones using your system who are not on a high-bandwidth communication channel with you. Usually that would be your team. To reduce the risk of them experiencing issues, and to increase the chance of them liking your software, you write tests so that their experience is positive.

A long time ago, I realized that testing everything is not always feasible for me, and I practice TDD all the time. A quick mental review of the teams I have worked with over the 12 years I’ve been practicing TDD tells me that this is rather general.

Mocking considered evil

I’ve never been a big fan of mocking frameworks, for one simple reason: simple stubbing relies on assumptions about the behaviour of the mocked part of your system that rarely match reality, for no one is going to read all the documentation of an API and decompile the code before writing a one-line stub.
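To make this concrete, here is a minimal sketch (in Python for brevity, since the idea is language-agnostic; all the names are hypothetical): a one-line stub happily returns data for a path that does not exist, so the error path the real component would trigger is never exercised by the stub-based tests.

```python
import os
import tempfile

class StubFileReader:
    """A typical one-line stub: always succeeds, never errors."""
    def read(self, path):
        return "stubbed contents"

class RealFileReader:
    """The real component: raises FileNotFoundError, as the OS does."""
    def read(self, path):
        with open(path) as f:
            return f.read()

def first_line(reader, path):
    # Code under test: naively assumes read() always succeeds.
    return reader.read(path).splitlines()[0]

# A path guaranteed not to exist.
missing = os.path.join(tempfile.mkdtemp(), "missing.txt")

# Against the stub, the happy path passes...
assert first_line(StubFileReader(), missing) == "stubbed contents"

# ...but the real component raises on the very same call, a scenario
# the stub-based harness never exercised.
raised = False
try:
    first_line(RealFileReader(), missing)
except FileNotFoundError:
    raised = True
assert raised
```

The stub encodes an assumption ("read always returns a string") that the real system contradicts, which is exactly the divergence described above.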

As you progress in your implementation, you may learn about more scenarios you didn’t cover, and provided you are in that tiny little fraction of a percent of people who refactor their test harness as their implementation progresses, you may feed some of that knowledge back into the tests and test doubles you wrote earlier, while caressing your pet unicorn and enjoying a bit of sunshine on planet Equestria. Or, if you’re like the rest of us, you probably won’t.

Increasing returns for user interfaces

Admitting to myself that I could not fix those problems (you can’t fix stupid!), I started trying to understand how I could keep the TDD practices I enjoyed (fast feedback, repeatable tests, etc.) while increasing the perception of quality, leaving any theoretical or coverage notions behind me. Instead of starting from the components I want to put under test, I start with what my users, hum, use.

If I provide users with an API, I will start from the API I decide to publish, and consider everything else implementation details. I will only document and polish, a.k.a. test, what other people may use.

All the same, if my user interface is not an API, but some sort of UI, there is very little reason to care about all the scenarios a component may face that cannot be triggered from that user interface.

The process of discovering your user interface has an added advantage: the answer to most “but what if…” questions about such a top-down approach usually unveils an additional user interface you didn’t quite know you had (looking at you, JSON SPA AJAX <insert buzzword of the day> Frankenstein “web” apps).

This is already an understood concept, and is usually referred to as acceptance testing.

At warp speed, everything is relative

A common issue arises from using existing acceptance-driven tools. Automating the browser is slow, automating a DB is as well, so is the file system. Each of those may also fail for reasons that have nothing to do with your code.

That makes your tests brittle and slow, which inexorably leads to longer feedback cycles and harder-to-run tests, which brings us straight back to my introduction and why traditional approaches to TDD have failed.

Acceptance testing recommends, to avoid such problems, the creation of an alternative interface that removes the browser from the equation. This is no longer necessary. With tools such as zombie being available for free, you can run an in-memory browser that behaves like a real one and is incredibly fast. No more Selenium issues when running your automated test suite, no interaction with your operating system’s UI library; it’s all fast and beautiful. And if your user interface is an API, unit testing frameworks and test runners have provided those advantages for many, many years.

External-facing components are now in-memory, making executing our tests fast and reliable by not triggering external systems.

I now apply the same concept to internal-facing components. Instead of mocking out the inner-most elements of my system, such as the file system, the network or a database, on a per-method or per-component basis, I use libraries that give me, out of the box, an in-memory version of their system that is functionally equivalent to the real one.

It means an in-memory file system implements the same locking primitives as the real system, the same rules around reading, writing or seeking data, and is as close as the author of the library can make it to the real thing.
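As an illustrative sketch of what such a library-provided double could look like (Python for brevity, hypothetical names; a real VEST-friendly library would go much further, covering locking and seeking semantics), the in-memory version raises the very same error type the real file system would:

```python
class InMemoryFileSystem:
    """Functionally-equivalent stand-in: same error types and rules
    as the real file system, but no disk access."""
    def __init__(self):
        self._files = {}

    def write(self, path, data):
        self._files[path] = data

    def read(self, path):
        if path not in self._files:
            # Same error type the real file system raises.
            raise FileNotFoundError(path)
        return self._files[path]

fs = InMemoryFileSystem()
fs.write("notes.txt", "hello")
assert fs.read("notes.txt") == "hello"

# Missing files fail exactly as the real thing would.
raised = False
try:
    fs.read("missing.txt")
except FileNotFoundError:
    raised = True
assert raised
```

Because the double honours the real contract, code under test exercises realistic failure behaviour without ever touching the disk.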

In other words, a VEST-friendly library turns the unit tests of that library on its head. The component accessing external systems is developed alongside its test-double variant, and both are built to react, error out and validate calls in the same way. The test double can be shipped. I don’t write the mocks; they come as part of the package.

There are many advantages to such an approach. The author of the library intimately knows the contract they provided you as part of their API. The act of providing an in-memory version means this knowledge is now expressed explicitly, forcing error conditions (which are very much part of the user interface of an API) to be accounted for.

A VEST-friendly library will usually end up testing explicit contracts in its test harness, so we go one step further. A library author can ship the test harnesses that exercise the contract they expose, for all the conditions known to potentially exist. Once again, test code becomes shipping code: if the author has written a test harness and builds two components implementing an interface, the test harness for the explicit public interface can be shipped, as it’s probably already written.
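A sketch of what such a shipped contract harness might look like (again in Python, with hypothetical names): the author writes the tests once against the public contract, and any implementation, real, in-memory or third-party, can be run through them.

```python
class InMemoryFileSystem:
    """Minimal stand-in implementation so the sketch is runnable."""
    def __init__(self):
        self._files = {}

    def write(self, path, data):
        self._files[path] = data

    def read(self, path):
        if path not in self._files:
            raise FileNotFoundError(path)
        return self._files[path]

def filesystem_contract_tests(fs_factory):
    """Shipped with the library: exercises the documented contract
    against any implementation the factory produces."""
    fs = fs_factory()
    fs.write("a.txt", "data")
    assert fs.read("a.txt") == "data"

    fs = fs_factory()
    try:
        fs.read("nope.txt")
        assert False, "missing files must raise FileNotFoundError"
    except FileNotFoundError:
        pass

# The author runs the same harness over the real and in-memory
# implementations; a third-party replacement reuses it for free.
filesystem_contract_tests(InMemoryFileSystem)
```

The harness, not the caller, owns the contract; swapping implementations means rerunning it, not rewriting it.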

I believe this process to be recursive, leading to the pit of success, as any library built with VEST in mind will naturally feed the VEST approach.

Vertical Slice Testing

The VEST approach replaces all external system calls with functionally equivalent, stable and controlled ones, both at the outer-most layer (whatever exercises the user interface) and the inner-most one (whatever touches the low-level I/O APIs).

By using VEST, I can focus on delivering web pages or components that focus on user interaction, I can run the whole system on every test, and do it very quickly. I can change a lot of internal code without breaking existing test scenarios. And should I wish to implement components replacing the ones in my libraries, I can do that without writing tests, because the test harness is already there, testing the contract for me.
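Putting it together, a vertical slice test exercises the whole system from the user-facing output down, with the external system swapped for its functionally equivalent double. A minimal sketch (Python, hypothetical names):

```python
class InMemoryFileSystem:
    """Functionally-equivalent double for the storage layer."""
    def __init__(self):
        self._files = {}

    def write(self, path, data):
        self._files[path] = data

    def read(self, path):
        if path not in self._files:
            raise FileNotFoundError(path)
        return self._files[path]

def render_notes_page(fs):
    """The 'whole system' under test: reads storage, renders a page."""
    try:
        body = fs.read("notes.txt")
    except FileNotFoundError:
        body = "No notes yet."
    return f"<html><body>{body}</body></html>"

# Vertical slice: assert on the user-visible output, with the external
# system replaced by its in-memory double. Fast, repeatable, no disk.
fs = InMemoryFileSystem()
assert "No notes yet." in render_notes_page(fs)

fs.write("notes.txt", "Remember the milk")
assert "Remember the milk" in render_notes_page(fs)
```

The tests never mention internal components, so refactoring everything between the page and the storage contract leaves them untouched.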

Note: As I’m abroad and not available that much, don’t expect quick answers in the comments section but I’ll try my best. Subjects not covered but not forgotten: my team as a user, availability of VEST libraries, code samples, library author burden, brown field projects, generic crazy-talk, unicorns not being ponies, “you never worked in a team!”, “in an ideal world…”, etc.

Posted in Uncategorized | 41 Comments

Relaunching SerialSeb – Practical ReST and then some

Well. After 5 months of soul-searching, ups and downs, I’m back, more focused than ever (think laser beam focus). This week has seen the launch of several important projects for me, and I’m very excited to share them with you, a year after my last blog post! The hiatus is over and Seb is back.

Practical ReST – the book

I’ve been talking for years about writing a brain dump of what I know of ReST. The project has started, you can pledge to buy your copy today (in various editions) and get content as early as next month!

The response has been amazing, with three quarters of the minimum funding pledged in 48 hours. Of course, the more contributors there are, the more time I can allocate to the book and the quicker it will be in your hands, so don’t hesitate!

Writing a book in itself is a big challenge, and the scope is very ambitious. But it’s also a foray into self-publishing, and I expect it to be an amazing opportunity for learning.

Practical ReST – the class

I’ve been giving a ReST class for a while, but I’m rewriting it so it follows the same structure as the book, and to make it open to people from many technological backgrounds.

First date should be in May in London, and I expect Warsaw, Vilnius and other locations to pop up this year.

If you want to see a workshop organised in your city (or your company), pop me an email.

I’m also launching a referral program *and* group rebates, so if you’re a training company and you want to add the course to your books, you can too.

Hanselminutes podcast

Scott was nice enough to have me on his show this week, talking about Practical ReST. It’s published and ready to be listened to, and while I was half-way through an asthma attack and having microphone problems, Scott saved the day by being a great host.

Cancer Research UK

You may remember that I asked for your contributions to help me raise money for Cancer Research UK. I can now announce that we raised more than £700, and that both my non-drinking objectives were achieved. I cannot thank you all enough for your generosity.

It had to happen: I now have my own web site. You cannot imagine how difficult it is for me to put work and effort into talking about what I did rather than what I want to do. But you will find there a link to all my previous talks, with videos and slides (where applicable), as well as my future talks and where classes will be delivered next.

What next?

ReST will be my focus for the next few months. I have a couple of other projects up my sleeve, but this is enough announcements for now, don’t you think?

Posted in Uncategorized | 7 Comments

Will Seb shave his head?

Well, it has been quite a few months since I’ve blogged. There are many reasons why, and I may or may not break the hiatus in 2013, you’ll have to wait to find out.

In the meantime, I’m giving my drinking money to charity by giving up alcohol, and I’ve set up a challenge. If you help me raise £1,000 for Cancer Research UK, I will be shaving my head, and you’ll get a nice video of it too.

So if, like me, you feel that it’s your time to give back, but you’re attached to your Toni & Guy creation, or you just don’t want to give up that Pinot Gris, please help me raise funds by donating on

Posted in Uncategorized | Leave a comment

Microsoft and Open Source

I have been a vocal, and sometimes harsh, critic of Microsoft’s approach to Open Source Software. I call that activism, some call it whining, ranting or pisser dans la soupe. People are entitled to their opinion, and mine has been formed after working in the Microsoft sphere for more or less 14 years.

Over the years, the various component organizations within Microsoft have had various levels of involvement with Open Source, with different philosophies, and from what I have gathered over the years from post-conference drinks, internal conflicts that can be quite a challenge.

As this post is going to be long, I’m going to sum up the problem, and you can skip to the conclusion to read what I believe should be done differently.

Microsoft has no vision when it comes to Open Source, no strategy and no leadership. Some groups are seen as more progressive, and do tend towards a better approach to OSS. Some people consider those steps sufficient to stop pressuring Microsoft towards responsible OSS, and I disagree: it’s only some groups, and even within those groups, the advances are still very much those of an exporter. Recognizing Microsoft as an OSS-friendly player is like recognizing the People’s Republic of China under the Reagan administration; it’s a step in the direction of normalization, not a recognition of the human-rights record of Mao Zedong.

You know why I know Microsoft has no leadership and vision when it comes to Open Source? Because I asked Steve Ballmer when he was in London, and he replied with this: “I don’t know, but we won’t impose any view on our divisions. I’ll come back to you by email though.” Steve, my email is still the same, and I’m looking forward to your views.

Where we come from

The current state of affairs is not bad, considering where Microsoft was years ago. We’re talking about a company that positioned itself as the defender of private code, vilifying OSS licenses, having no visible OSS project going and providing its value as the integrated stack: it’s all MS or none. We have certainly come a long way.

Opening up slowly

I think that, under the influence of Scott Guthrie, an internal change started deep within the team ranks, a movement that certainly resulted in the recent announcement opening up the web stack as a true Open-Source project.

Now that’s commendable, and I applaud the move. Open-Sourcing and opening up to collaborative development for stuff you’ve built is great. Well done.

Not invented in Redmond syndrome

Let’s be clear: the goal of MVC was to build a better Monorail, as opposed to helping Monorail productize their framework for inclusion in Visual Studio.

The exact same thing happened with NuGet. With multiple projects with different approaches already in the works, Microsoft started their NPack project (I seem to recall someone at Oredev mentioning it was originally built as part of the WebPages tooling). It wasn’t meant to be open source to start with, which is why the existing OSS projects didn’t get the news until much later.

But, I hear you say, NuGet is an open-source project from Outercurve, and they combined their code with another project. The open-sourcing was decided very late in the process, once the investment in code was so large that the ship had already sailed and nothing would stop it. And on a side note, there was no merging of the code; the Outercurve foundation’s press release was inaccurate, just look at the history.

What really happens when a new project gets produced by Microsoft? It sucks the air out of a lot of community projects, often including all the best ideas of said projects, and Microsoft gives nothing back. If you’re in that boat, you either redraw your plans based on their new announcements (usually rewriting on top of the new shiny stuff), or you dissolve your group and join their OSS project (the Nuget / Nubular case).

Certain projects (like nhibernate and nunit) continue to happily go forward, and maybe the OSS alternatives that are impacted by Microsoft entering a space just were not good enough. I don’t know, we’ll see in a few years.

Adoption of existing projects

Of course, and here again I commend Microsoft for the move, it can be said that libraries such as jQuery, Modernizr and now Newtonsoft.Json are being used and supported by Microsoft, and shipped alongside the product.

This is also a great departure from times immemorial. Make no mistake, however: Microsoft is being pushed by market pressure to integrate with those frameworks. The change is only that they are slowly abandoning the idea of releasing their own versions until they gain market share, in those domains where Microsoft has historically been slow at releasing compared to alternative platforms.

This is only an opinion piece, but I’ll suggest that the push to gain web-development share has driven Microsoft towards dropping the projects it had failed to make popular (the AJAX stuff that got more or less killed when jQuery was adopted). And I think that’s great.

Pragmatism is also the reason why Newtonsoft.Json is being used by Web API. The current JSON providers on .NET are broken: there is no way to provide the customization people want, and fixing them cannot be done in the same release cycle as MVC, as some are part of the core framework (and may or may not ever be fixed, but certainly not in time for the next release).

It was not chosen because Microsoft wants to adopt more external OSS projects as their core vision. It was chosen because they couldn’t fix what was broken and rebuilding would have been silly.

On a side note, Twitter conversations started going in the direction of why Newtonsoft.Json was chosen over other libraries. I made the assertion that, provided it’s an internal implementation with no exposed surface, the choice of *any* JSON lib would’ve been fine. ScottHa said it was because the library was the most popular, and that testing was done on their part. That led me to the assertion that if the only library under consideration was chosen for popularity, and there is no exposed surface of that lib on public APIs, the choice is one of brand-surfing and gives extra marketing points to their OSS credentials. If some of the API is exposed, it was probably a reasonable choice. I don’t care which library you chose; I care about the process you used to choose it and the reasoning behind that process. I question (and by question, I mean discuss) the motives, not the solution.

[Edit: After looking for it (as the twitterverse didn’t reply to the specifics of the discussion), one object from the library *is* exposed on the JsonMediaTypeFormatter, so that is that and I’ll stick to my original analysis: it’s probably a reasonable choice.]

There is no vision, because there is no systematic approach to deciding between building and joining communities. Web API could quite happily have joined existing OSS projects that dealt with this problem area years ago, but it wasn’t to be. NuGet funding could have been spent on existing projects, instead of on a pure creation, but it wasn’t to be either. What is the decision process behind the rebuild-or-join decision?

And can anyone who has had the oxygen sucked out of their project, with nothing given back, be blamed for distrusting the company?

The corporation

When the Web API project was being kicked off, Glenn Block called me, asking whether I thought he should take the job and, if he did, whether I would help. I told him what I’m telling you now: yes, you can’t do worse than the previous attempts, and I’ll happily help to make sure the architecture and model fit the reality of what developers want; we’ll all win. I helped because I was asked. Ideally, I’d rather they had asked me to work on OpenRasta and create a new version with us, but it was not to be. In the end, the two frameworks may have very different code, but most of the concepts and conventions are exactly the same, so everyone will feel right at home on both platforms, and that’s in no small measure because the brilliant minds behind Web API respected HTTP as much as I and the other advisors did.

More recently, Drew Miller from the NuGet team has been very active in opening communication channels with OpenWrap, something that had never existed before, and he should be applauded for it: I can now discuss our interop story with someone who wants to make things better. Again, I’d rather the NuGet team had joined OpenWrap than communicated their imminent release to us way too late to get any positive outcome, but at least it’s a step in the right direction.

Those are individuals, not the company, and out of all the voices out there, they are a minority. You cannot judge the whole company on specific individuals, and teams cannot hide behind the excuse that “Microsoft is really many companies”. That is certainly true, but completely irrelevant to external agents judging how the company deals with OSS.

How to fix this

Three words: transparency, transparency, transparency.

When you have an idea for a product, and there are existing Open-Source projects out there that have adoption and could really benefit from the resources of a big vendor, try to justify the choice.

If you’re going to rebuild, pick up the phone, call the project leader, explain your plans, and don’t be a douche about it (telling someone “we realize we are stepping on toes here, but we’re big so it’ll happen” is not a particularly great way to deliver the news). Chances are, they’ll probably understand the reasoning. ISVs have had that communication channel with Microsoft for years, and get to know about competing products way in advance, but that’s not the case for OSS projects.

Be open about the process afterwards too. The policy of not mentioning anyone, or of pretending in public that you cannot look at existing projects at all, would make anyone think you have something to hide. It probably goes against some corporate policy somewhere, but make that change; you did it for the no-OSS policy.

And finally, value the people on your platform, the loud mouths like me and the small guys who abandon their projects because you started yours. We are on your platform because we like it, and we want to make it better. And if you’re going to step on people’s toes, try to find a band-aid for them to soften the blow, and find a way to collaborate on something else with them. Don’t lose the people who have been developing the ideas you’re going to be building on; it’s not good business.

If these things had happened in the last few years, I wouldn’t distrust Microsoft-the-company as much as I do now. But the bridges that are being built by bright individuals inside the company do go in the right direction.

Is that going to be enough? I sincerely don’t know, but until we’re there, I’ll continue shouting at the top of my voice for Microsoft to become the business partner my OSS projects want to be dealing with.

It’s called tough love, silly.

[Edit: added an additional paragraph to project competition]

[Edit 2: added a bit on the choice]

Posted in Uncategorized | 21 Comments