Microsoft and Open Source

I have been a vocal, and sometimes harsh, critic of Microsoft’s approach to Open Source Software. I call that activism; some call it whining, ranting or pisser dans la soupe (pissing in the soup). People are entitled to their opinion, and mine has been formed after working in the Microsoft sphere for more or less 14 years.

Over the years, the various component organizations within Microsoft have had varying levels of involvement with Open Source, with different philosophies and, from what I have gathered over post-conference drinks, internal conflicts that can be quite a challenge.

As this post is going to be long, I’m going to sum up the problem, and you can go to the conclusion bit to read what I believe should be done differently.

Microsoft has no vision when it comes to Open-Source, no strategy and no leadership. Some groups are seen as more progressive, and do tend towards a better approach to OSS. Some people consider those steps sufficient to stop pressuring Microsoft towards responsible OSS, and I disagree: it’s only some groups, and even within those groups, the advances are still very much those of an exporter, with code flowing out of Redmond rather than collaboration flowing in. Recognizing Microsoft as an OSS-friendly player is like recognizing the People’s Republic of China under the Nixon administration: it’s a step in the direction of normalization, not a recognition of the lifetime human-rights achievements of Mao Zedong.

You know why I know Microsoft has no leadership and vision when it comes to Open-Source? Because I asked Steve Ballmer when he was in London, and he replied with this: “I don’t know, but we won’t impose any view on our divisions. I’ll come back to you by email though.” Steve, my email is still seb@serialseb.com and I’m looking forward to your views.

Where we come from

The current state of affairs is not bad, considering where Microsoft was years ago. We’re talking about a company that positioned itself as the defender of private code, vilified OSS licenses, had no visible OSS project going and pitched its value as the integrated stack: it’s all MS or none. We have certainly come a long way.

Opening up slowly

I think that, under the influence of Scott Guthrie, an internal change started deep within the asp.net team ranks, a movement that resulted in the recent announcement of the asp.net webstack opening up as a true Open-Source project.

Now that’s commendable, and I applaud the move. Open-Sourcing and opening up to collaborative development for stuff you’ve built is great. Well done.

Not invented in Redmond syndrome

Let’s be clear. The goal of MVC was to build a better Monorail, as opposed to helping Monorail productize their framework for inclusion in Visual Studio.

The exact same thing happened with Nuget. With multiple projects with different approaches already in the works, Microsoft started their NPack project (I seem to recall someone at Oredev mentioning it was originally built as part of the WebPages tooling). It wasn’t to be open-source to start with, which is why the existing OSS projects didn’t get the news until much later.

But, I hear you say, Nuget is an open-source project from Outercurve, and they combined their code with another project. The open-sourcing was decided very late in that process, once the investment in code was so large that the ship had already sailed and nothing would stop it. And on a side-note, there was no merging of the code; the Outercurve Foundation’s press release was inaccurate, just look at the history.

What really happens when a new project gets produced by Microsoft? It sucks the air out of a lot of community projects, often including all the best ideas of said projects, and Microsoft gives nothing back. If you’re in that boat, you either redraw your plans based on their new announcements (usually rewriting on top of the new shiny stuff), or you dissolve your group and join their OSS project (the Nuget / Nubular case).

Certain projects (like NHibernate and NUnit) continue to happily go forward, and maybe the OSS alternatives that are impacted by Microsoft entering a space just were not good enough. I don’t know, we’ll see in a few years.

Adoption of existing projects

Of course it can be said, and here again I commend Microsoft for the move, that libraries such as jQuery, Modernizr and now Newtonsoft.Json are being used, supported by Microsoft and shipped alongside the product.

This is also a great departure from time immemorial. Make no mistake however: Microsoft is being pushed by market pressure to integrate with those frameworks. The change is only that they are slowly abandoning the idea of shipping their own alternatives until those gain market share, in those domains where Microsoft has historically been slow to release compared to alternative platforms.

This is only an opinion piece, but I’ll suggest that the move to gain web development share has pushed Microsoft towards dropping the projects they had and failed to make popular (the asp.net AJAX stuff that got more or less killed when jQuery was adopted). And I think that’s great.

Pragmatism is also the reason why newtonsoft.json is being used by Web API. The current Json providers on .net are broken: there is no way to provide the customization people want, and fixing them cannot be done in the same release cycles as MVC, as some are part of the core framework (and may or may not ever be fixed, but certainly not in time for the next release).

It was not chosen because Microsoft wants to adopt more external OSS projects as their core vision. It was chosen because they couldn’t fix what was broken and rebuilding would have been silly.

On a side-note, twitter conversations started going in the direction of why Newtonsoft.Json was chosen over other libraries. I made the assertion that, provided it’s an internal implementation with no exposed surface, the choice of *any* json lib would’ve been fine. ScottHa said it was because the library was the most popular, and testing was done on their part. That led me to the assertion that if the only library under consideration was chosen for popularity, and there is no exposed surface of that lib on public APIs, the choice is one of brand-surfing and gives extra marketing points to their OSS credentials. If some of the API is exposed, it was probably a reasonable choice. I don’t care which library you chose, I care about the process you used to choose that library and the reasoning behind it. I question (and by question, I mean discuss) the motives, not the solution.

[Edit: After looking for it (as the twitterverse didn’t reply to the specifics of the discussion), one object from json.net *is* exposed on the JsonMediaTypeFormatter, so that is that: going by my original analysis, it was probably a reasonable choice.]
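
For what it’s worth, here is a minimal sketch of what “exposed surface” means in practice. This is my own illustration, assuming the Web API bits as they shipped, where the formatter’s SerializerSettings property is typed as a Json.NET class; the member I stumbled on at the time may have been a different one.

```csharp
// Minimal sketch, assuming the released ASP.NET Web API surface: the Json.NET
// type JsonSerializerSettings shows up on JsonMediaTypeFormatter's public API,
// which is the kind of "exposed surface" discussed above.
using System.Net.Http.Formatting;
using Newtonsoft.Json;

class ExposedSurfaceExample
{
    static void Main()
    {
        var formatter = new JsonMediaTypeFormatter();

        // SerializerSettings is typed as Newtonsoft.Json.JsonSerializerSettings,
        // so the third-party library is part of the formatter's public contract
        // and swapping it out later would be a breaking change.
        formatter.SerializerSettings = new JsonSerializerSettings
        {
            NullValueHandling = NullValueHandling.Ignore
        };
    }
}
```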

There is no vision, because there is no systematic approach to deciding between building and joining communities.

Asp.net Web API could quite happily have joined existing OSS projects that dealt with this problem area years ago, but it wasn’t to be. Nuget funding could have been used on top of existing projects, instead of a pure creation, but it wasn’t to be either. What is the process behind the rebuild-or-join decision?

And can anyone that has had the oxygen sucked out of their project, with nothing given back, be blamed for distrusting the company?

The corporation

When the Web API project was being kicked off, Glenn Block called me to ask if I thought he should take the job and, if he did, whether I would help. I told him what I’m telling you now: yes, you can’t do worse than the previous attempts, and I’ll happily help to make sure the architecture and model fit the reality of what developers want; we’ll all win. I helped because I was asked. Ideally I’d rather they had asked me to work on OpenRasta and create a new version with us, but it was not to be. In the end, the two frameworks may have very different code, but most of the concepts and conventions are exactly the same, so everyone will feel right at home on both platforms, and that’s in no small measure because the brilliant minds behind Web API respected HTTP as much as I and the other advisors did.

More recently, Drew Miller from the NuGet team has been very active in opening communication channels with OpenWrap, something that had never existed before, and he should be applauded for it: I can now have a discussion about our interop story with someone that wants to make things better. Again, I’d rather the NuGet team had joined OpenWrap than told us about their imminent release way too late to get any positive outcome, but at least it’s a step in the right direction.

Those are individuals, not the company, and out of all the voices out there, they are a minority. You cannot judge the whole company on specific individuals, and teams cannot use the excuse that Microsoft is really many companies. That is certainly true, but completely irrelevant to external agents when judging how the company deals with OSS.

How to fix this

Three words: transparency, transparency, transparency.

When you have an idea for a product, and there are existing Open-Source projects out there that have adoption and could really benefit from the resources of a big vendor, try to justify the choice.

If you’re going to rebuild, pick up the phone, call the project leader, explain your plans, and don’t be a douche about it (telling someone “we realize we are stepping on toes here but we’re big so it’ll happen” is not a particularly great way to deliver the news). Chances are they’ll understand the reasoning. ISVs have had that communication channel with Microsoft for years, and get to know about competing products way in advance, but it’s not the case for OSS projects.

Be open about the process afterwards too. The policy of not mentioning anyone, or pretending in public that you cannot look at existing projects at all, would make anyone think you have something to hide. It probably is against some corporate policy somewhere, but make that change; you did it for the no-OSS policy.

And finally, value the people on your platform, the loud mouths like me and the small guys that abandon their projects because you started yours. We are on your platform because we like it, and we want to make it better. And if you’re going to step on people’s toes, try to find a band-aid for them to soften the blow, and find a way to collaborate on something else with them. Don’t lose the people that have been developing the ideas you’re going to be building on; it’s not good business.

If these things had happened in the last few years, I wouldn’t distrust Microsoft-the-company as much as I do now. But the bridges that are being built by bright individuals inside the company do go in the right direction.

Is that going to be enough? I sincerely don’t know, but until we’re there, I’ll continue shouting at the top of my voice for Microsoft to become the business partner my OSS projects want to be dealing with.

It’s called tough love, silly.

[Edit: added an additional paragraph on project competition]

[Edit 2: added a bit on the json.net choice]

HTTP caching is complicated for everyone

As you may know, one of the talks I’m giving this year is HTTP caching 101.

I feel it’s important to educate everyone on HTTP. One of the interesting things is the fact that the spec names things in a very, well, strange way.

Take my good friend Ayende, who complains about Silverlight not respecting the HTTP specification. Now I think everyone will agree that Oren is a brilliant person, so if he gets it wrong, it’s probably the spec that sucks. Point is, Silverlight is not necessarily wrong in this case.

See, a resource representation (the thing that is controlled by caching instructions) can be in three states: Up-to-date, fresh and stale.

Up-to-date representations are the ones that we know are current according to the origin server.

When a representation is sent back with no expiry and no caching instructions, a cache is allowed (and encouraged) to keep it for however long an implementation may decide; it’s left up to the developers. The spec encourages using heuristics for this. If you want something more reliable, you should provide a max-age or an Expires (or ideally, both).

Now, as long as the representation hasn’t expired, it’s considered by caches (proxies or your browser cache) as fresh.
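
To make the freshness rules concrete, here is a minimal sketch using the System.Net.Http types. This is my own illustration and has nothing to do with any particular server framework: a response that caches may serve for 60 seconds without ever going back to the origin server.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;

class FreshnessSketch
{
    // Builds a response that caches may treat as fresh for 60 seconds.
    static HttpResponseMessage BuildCacheableResponse()
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("{\"hello\":\"world\"}")
        };

        // Cache-Control: public, max-age=60
        // While the entry is fresh, caches can serve it without revalidating.
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromSeconds(60)
        };

        // An explicit Expires as well, for caches that only speak HTTP/1.0.
        response.Content.Headers.Expires = DateTimeOffset.UtcNow.AddSeconds(60);

        return response;
    }
}
```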

Now that’s where things become confusing. When you send back a must-revalidate with your response, it doesn’t mean that the client needs to revalidate at all times with the server. It means that a client needs to revalidate once the representation is stale, aka it has gone past its expiration date. In the case Ayende shows here, the client decided previously that the data was cacheable (I’d think that the response would’ve had a Last-Modified so heuristics could kick in), and so doesn’t see the point of asking the server to validate a fresh cache entry. This is why when Oren added an Expires header set in the past, the entry became stale on arrival, making must-revalidate work as expected.

So what if you want the client to revalidate always before serving a cached entry? In that case, you can send a no-cache, which of course doesn’t mean do not cache. It means that the client needs to revalidate both fresh and stale entries, and then may use the cached copy if it’s up to date.
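
Here is a small side-by-side sketch of the two directives, again my own illustration using System.Net.Http.Headers rather than anything OpenRasta-specific:

```csharp
using System;
using System.Net.Http.Headers;

class RevalidationSketch
{
    static void Main()
    {
        // must-revalidate: the entry can be served freely while it is fresh;
        // only once it goes stale must the cache check back with the origin server.
        var mustRevalidate = new CacheControlHeaderValue
        {
            MaxAge = TimeSpan.FromMinutes(5),
            MustRevalidate = true
        };

        // no-cache: the entry may still be stored, but every use, fresh or stale,
        // has to be revalidated with the origin server first.
        var noCache = new CacheControlHeaderValue
        {
            NoCache = true
        };

        // ToString() renders the directives, e.g. "must-revalidate, max-age=300"
        // and "no-cache" (the exact ordering depends on the framework version).
        Console.WriteLine(mustRevalidate);
        Console.WriteLine(noCache);
    }
}
```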

So to sum up: no-cache doesn’t mean do not cache, must-revalidate doesn’t mean always revalidate, and most people get HTTP caching wrong because it is not intuitive. Thankfully, if you come to my talk, or my ReST training course, or use the openrasta-caching plugin, you won’t have to care much about those problems.

The new OpenEverything organization

OpenRasta has been around for many years, and is the most widely used alternative framework for ReST applications.

OpenWrap has been around for a year and a half, and has some adoption among advanced users wanting a nicely-architected package manager that doesn’t take dependencies on lots and lots of other technologies and doesn’t require VS.

And we have sub-projects, such as OpenFileSystem, an IO abstraction library that ought to make using the file system easier, shipping with an in-memory version that should work the same way as your target operating system.

Now all those projects are at varying stages in their life, and the amount of work I used to give to OpenRasta alone has been vastly diluted, which in turn means that some users are starting to get quite annoyed with our dreadful homepage and our lack of binary downloads.

Add to this the competitive pressure on OpenRasta introduced by Nancy and the future OpenRasta copycat, Web API, from the photocopier machine that is Microsoft, and you’ll understand that the current trend is not sustainable.

This year I’ve decided to simplify my life and delegate much more of my work to a community that has been so far very supportive but very unwilling to contribute anything back. I also want to go back to enjoying working on OSS software to build great solutions, and not focus my energy on simply competing with whatever tool or framework Microsoft invents to duplicate existing OSS solutions. I have a vision and I want to deliver on it; OpenRasta and OpenWrap are two sides of that, and I want to start making them work together the way I want without being dragged into competitive races with Microsoft’s marketing dollars.

So we had a planning meeting in London with some of the companies that use OpenRasta in anger and want to step up their contributions (7digital and Huddle, we love you guys!), and decided that we were going to change how things are done in the OpenEverything projects.

Project organization

First and foremost, I’m going to contribute much less to the 2.1 branch of OpenRasta, and patches / merges / new builds are going to be handled by the contributors that are given write access to the git repository. This gives away a lot of my power.

I’m going to be focusing on the outstanding modules for OpenRasta that add a whole bunch of functionality that will be rolled back into 3.0: the caching module (soon to arrive in version 0.2), the new config API (with compact fluent API and convention-based config) and the dynamic / json out-of-the-box support. There’s also a new openrasta-diagnostics module that I’ve started building and want to get out there. I’ll also contribute some work towards the async support that has been requested by some.

OpenFileSystem 2.0 is going to be under Henrik Feldt’s supervision, and will be a merge of the amazing work he did on long-path and transactional file system support.

All of our versioning is going to be a strict application of http://semver.org 2.0 (which is now the only standard in OpenWrap), so breaking changes can no longer be introduced without anyone knowing: a bug fix takes 2.1.3 to 2.1.4, a backwards-compatible feature takes it to 2.2.0, and anything breaking means 3.0.0.

Of course, OSS is not a democracy, so I keep a right of veto on certain changes that would impact future versions, or architectural changes that do not fit with the vision I have for all those projects. This is give and take: I give away some of the control over more than 4 years’ worth of my development time, but I will keep the main architect role for the time being.

As for releases, anytime someone with commit access wants to create a new release, the process is very simple: they email the mailing list, and if there’s no majority opposition, we have a new build. Simples.

Orphaned projects

Sadly, we do have some orphaned projects in all this. Mostly, the container support is not up to date for StructureMap (tests are broken) and Unity (I have no understanding of the code, so while I *think* I fixed the latest build, I can’t try it out). At this stage, anyone with a vested interest in those projects should start contributing and come help out, or risk that code not being maintained anymore.

OSS is not just take, it’s give too

That leaves a real question for those companies that have used OpenRasta to make money and provide a killer architecture, and have contributed absolutely nothing back since the beginning.

I’ll let you in on a little secret. I give the code away on everything, so it is a company’s legal right to take and not give back. As a side-effect, it is entirely my right to ignore those that make money on our hard work and want to keep it all to themselves, and my responsibility not to prioritize the needs of the selfish over those of the people who contribute.

OSS is also about giving back, and this year that’s what is being enforced on the OpenEverything projects. Fostering contributions is what we want to get to. No more free lunch though, that’s for sure.

Conclusion

This new organization should let OpenRasta absorb patches and get through the 2.1 release cycle faster, and then on to 2.2, and so on. It will allow me to focus on what I want to build for 3.0, and it will also allow more contributors to own projects more directly than before.

If you have comments or feedback on all those new plans, please let us know on the now unified openeverything-dev mailing list.

Please don’t use it for user questions; we’ve moved all support for those frameworks to Stack Overflow, where Google can do its job and users can find their answers more quickly.

HTTP tour: Manchester, Edinburgh, Glasgow, Dundee and Aberdeen.

I’ll be touring the north of Britain from the 18th of January with two talks: HTTP caching 101 and Links, forms and Unicorns.

The details are at http://www.gep13.co.uk/blog/?p=609 so if you’re interested in HTTP and/or ReST, do come along!

Free OpenWrap workshop Saturday 10th December in Lier, Belgium

You get 4 hours and a lunch (complimentary from PeopleWare) to come and talk about package management and how it can change the way you build software. It’s completely free; you just need to register on Eventbrite.

As I highlighted before, if you’re anywhere in Europe where you know of a user group, I’ll be happy to come and share what I’ve learnt in the last two years building composite systems and how to build them using a package manager. Just email me at seb at serialseb dot com, and let’s make it happen!
