Katana License Lifts Windows-only Restriction

Over the past few months, a great deal of attention has been paid to the following clause used in most of the licenses associated with the NuGet packages that we and other teams at Microsoft ship.

"a. Distribution Restrictions. You may not distribute Distributable Code to run on a platform other than the Windows platform;"
In the case of the ASP.NET-related projects, including project Katana, this license (and the associated restriction) does not apply to the source code, but rather to the compiled binaries that are distributed via NuGet (the Katana source code is released under the Apache 2.0 license). This means that even today, it is perfectly reasonable to build the source code yourself and run it on Mono on whatever platform you choose.
 
But why should you have to jump through those hoops just to satisfy a clause in the license?
 
As Microsoft teams continue to take more and more components out of the traditional box products and deliver them as NuGet packages, the frustration over this restriction has grown proportionally. And in the particular case of project Katana, where one of the key selling points is portability, and where we want other OSS frameworks to build on top of us, it quickly became clear that the restriction fundamentally did not align with one of the core goals of the project.
 
So we changed the license.
 
With the release of Katana 2.0, which will accompany Visual Studio 2013, the Windows-only restriction will be removed from the Katana binary license. It’s important to note here that at this point, the exception applies only to the Katana packages.
 
While it may appear to be a small step, we’re very excited as we believe it to be a significant one in the right direction!

Help me understand my network

(preface: I love my wife who, upon reading what I had originally written, suggested that I calm down and edit out some of the intro to this post)

I'm in the midst of a home networking _situation_ right now, and I'm hoping that one of you who understands networking better than I do can help me understand what may be going on…

When we switched to cable-based Internet service a while back, I decided to go with an all-in-one router/gateway/DHCP server/cable modem – all in this tight little package. About that same time, I cancelled our phone service and switched to OOMA – nothing like having a single point of failure. However, I had never had a service failure – until yesterday.

Yesterday at around noon, our Internet service went down. I waited for a while because this was pretty common when we had DSL and it used to always come back up after a while. But the service didn’t come back up – so this morning I called Wave Broadband and they sent out a service tech. When the tech checked the ISP endpoint and my cable modem connection, everything checked out fine – but no service connection. We then reset the cable modem and noticed something interesting. I now had an Internet connection for one computer that had a wired connection to the router, but no others, and the wireless network was still completely down. The service tech said it must be a problem with my router and left.

So I did a little more digging. Running 'ifconfig' at the terminal, I noticed the following entry for interface 'en0' (which is my ethernet adapter):

inet 66.235.22.218 netmask 0xffffff00 broadcast 66.235.22.255

Uhh, that’s not an allowable IP address from _my_ DHCP server (I keep things nice and local with 192.168.100.1-50) – so where is it coming from?

Looking deeper with 'netstat -nr', I noticed the following:

Destination Gateway     Flags Refs Use Netif Expire
default     66.235.22.1 UGSc  60   0   en0

OK, well that's definitely not my gateway – so whose is it? Running 'whois "n 66.235.22.1"' confirmed what I suspected all along: my computer's network configuration had been assigned via DHCP not by my DHCP server, but directly by my ISP.

So the question is – why??

I called Motorola to see whether there were some firmware updates available. They told me that even if there were (there are), Motorola is not allowed to make them available as a download link because of some FCC rules stating that the ISPs must be the ones to distribute firmware updates – presumably to ensure compatibility with their networks. That said, the Motorola support folks were super-helpful – and led me to one other discovery: when I unplug the coax from the all-in-one, everything on my LAN works – my computer’s IP assigned from my DHCP server, and all my wireless devices can once again connect. However, the second I plug the coax in and the all-in-one downloads information from Wave Broadband, everything goes back to its screwed up state.

The Wave tech support people first told me that I didn’t really understand how DHCP worked and that everything was fine – and then later, after I pushed them on it, told me that there was a real problem, but that everything looked fine on the cable modem side of things and that because my device was not theirs, they refused to help me further troubleshoot the issue.

So what are some possible reasons for my network being so screwed up?? At the moment, only one computer (i.e. the one I'm writing this on) is capable of connecting to the Internet and my wireless network is down – presumably because it can't assign IP addresses to the devices that are trying to connect to it. I'm not sure what's going on or how to test to confirm, but I think that the configuration that my all-in-one pulled down from Wave is somehow disabling my DHCP server (and it looks like my router as well).
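For anyone who wants to dig in alongside me, the test I'm planning to try is to ask the DHCP client directly which server actually answered. This is just a sketch assuming OS X and that the Ethernet adapter is en0 – the field names in the output may differ slightly on other setups:

# show the last DHCP packet accepted on en0, including the server_identifier option
ipconfig getpacket en0

# force a fresh DHCP negotiation on en0, then inspect the packet again
sudo ipconfig set en0 DHCP
ipconfig getpacket en0

If the server identifier comes back as an ISP address rather than something in my 192.168.100.x range, that would confirm that the all-in-one is passing the ISP's DHCP responses straight through to the LAN.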

I don’t really have a ton of time to experiment, as I need to be able to get VOIP working again as well as ensure that my wife will have Internet access when I take my computer with me to work in the morning. However, I’m thinking that the best solution at this point is to separate my router/gateway/wireless unit from my cable modem. That’s historically been my philosophy and for whatever reason, I took a risk this time. Bleh. I’ve been looking at this unit for the router and will probably go with whatever cable modem is supported by Wave to ensure that they can’t as easily dismiss me in the future.

So, do any of you with a deeper understanding of networking 1) have an idea of what could be going wrong here, and 2) know how I could test for it?

Also, any general recommendations on my upcoming network setup redo?

Or, any similar horror stories you want to share to make me feel better? :)


Installing Neo4j in an Azure Linux VM

I've been playing with Neo4j a lot recently. I'll be writing a lot more about that later, but at a very, very high level, Neo4j is a graph database that, in addition to some language-specific bindings, has a slick HTTP interface. You can install it on Windows, Linux, and Mac OS X, so if you're more comfortable on Windows, don't read this post and think that you can't play with this awesome database unless you forget everything you know, replace your wardrobe with black turtlenecks, and write all your code in vi (though that is an option). For me, though, I hate installers and want the power of a package manager such as homebrew (OS X) or apt-get (Linux). So I'm going to take you through the steps that I went through to get Neo4j running on Linux. And just to have a little more fun with things, I'll host Neo4j on a Linux VM in Azure.

For the Azure tasks, I use the CLI tools for as much as humanly possible (and continue to look forward to the day when I can do everything through the CLI). Glenn has blogged quite a bit about the CLI, so I’m not going to go into any depth here, but in case you haven’t installed the CLI tools, you can get them through Node Package Manager with the following command:

npm install azure-cli

After the CLI tools are installed, just type ‘azure’ in your terminal window for some rockin’ ASCII art and a list of the available commands.
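One note before creating anything: the CLI needs to be connected to your Azure subscription first. If you haven't done that yet, the flow should look roughly like the following – I'm sketching from memory here, so check 'azure help account' if the commands have changed:

# points you at a .publishsettings download for your subscription
azure account download

# import the downloaded file so the CLI can authenticate (the path below is just a placeholder)
azure account import ~/Downloads/my-subscription.publishsettings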

So to create my new Linux VM, I'll run the following command:

azure vm create hldlnxneo b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_04_1-LTS-amd64-server-20121218-en-us-30GB howarddierking -e -l "West US"

While you can easily look up what all of this means, here's the quick rundown:

  • "hldlnxneo" is the DNS name of my VM (I'll be able to access it via hldlnxneo.cloudapp.net later)
  • the giant string is the name of an existing Ubuntu image – you can see the list of available images with the command 'azure vm image list', or you can upload your own if you prefer
  • 'howarddierking' is the default user account that's created – you're asked to specify the password later
  • the '-e' arg enables SSH on the default port (22)
  • the '-l' arg specifies the region (data center) where you want the VM created

So once I have my virtual machine created, I can easily access it with ssh (ssh is another big incentive for choosing a Linux Azure image):

ssh howarddierking@hldlnxneo.cloudapp.net

And I’m now in a remote shell session on the Azure VM. Insanely awesome.

Now, all I need to do is install Neo4j using apt-get. It took me a while to land on this page, so my hope is that this post at the very least helps elevate it in search results (or provides another avenue to reach it). Additionally, there was one error/difference in the instructions that I'll note here.

Enter the superuser/root shell with:

sudo -s

This is just a convenience to keep us from having to prefix many of the commands below with sudo.

Add Neo4j on Debian/Ubuntu

  1. Add the Neo4j repository to APT (see the note after this list if apt-get complains about package verification):
    echo 'deb http://debian.neo4j.org/repo stable/' > /etc/apt/sources.list.d/neo4j.list
  2. Update the package lists: apt-get update
  3. Install Neo4j: apt-get install neo4j (say "yes" to all questions)
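One hedge on step 1: depending on your image, apt-get may warn that the new source can't be verified. If that happens, you'll also need to import the repository's signing key – the key URL is listed on the official Neo4j Debian instructions page, so I'm using a placeholder below rather than quoting it from memory:

# import the Neo4j repository signing key so apt-get trusts the new source
wget -O - <neo4j signing key URL from the official instructions> | apt-key add -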

Confirm Neo4j is running locally

Note: at this point, you can exit the root shell with the ‘exit’ command.

curl http://localhost:7474

Make Neo4j accessible from the outside

Only do this if it is necessary, for instance when the services accessing Neo4j run on a different host. Make sure to secure the instance by enabling SSL and adding authentication (like the authentication-extension).

  1. Find and open: /etc/neo4j/neo4j-server.properties
    1. this is the location on my Azure VM where apt-get installed Neo4j – if you don't find the file here, you can always find out where it was installed with the command 'dpkg -L neo4j'
  2. Uncomment the line: #org.neo4j.server.webserver.address=0.0.0.0
  3. Check that SSL access is enabled: org.neo4j.server.webserver.https.enabled=true
  4. Restart the Neo4j server: sudo /etc/init.d/neo4j-service restart
    1. Note: this differs from the official instructions, where the service name is given as neo4j-server
  5. Navigate to the Azure portal and open the port that Neo4j runs on (7474 by default) – in my case, I'll map port 80 to port 7474 using the "add new endpoint" function (there's also a CLI alternative, sketched just after this list)
  6. Check that it is accessible from the outside: curl http://hldlnxneo.cloudapp.net
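If you'd rather stay in the terminal for step 5, the CLI also has endpoint management commands. I haven't verified this exact invocation against the version of the tools I'm running, so treat it as a sketch – the VM name and port mapping match my setup above:

# map public port 80 on hldlnxneo.cloudapp.net to port 7474 on the VM
azure vm endpoint create hldlnxneo 80 7474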

And there we go – we have a new Neo4j database instance just ready for new nodes and edges!
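As a quick smoke test that it really is usable from out there, you can create a node through the REST API with nothing but curl. Here's a minimal sketch against the default /db/data endpoint (no authentication is configured at this point, and the property value is obviously made up):

# create a node with a single 'name' property – the response includes the new node's URI
curl -i -X POST -H "Content-Type: application/json" -d '{"name":"howard"}' http://hldlnxneo.cloudapp.net/db/data/node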

Much more to come on those topics…


Versioning RESTful Services

I’ve talked about this in various venues and also cover it in my Pluralsight REST Fundamentals course, but the topic of how to version RESTful services has been popping up a bunch recently on some of the ASP.NET Web API discussion lists, and my friend Daniel Roth asked if I could serialize some of that presentation content into a blog post – so here goes.

First, note that while the focus here is on RESTful services and not just HTTP services, the same principles can potentially apply to HTTP services that are not fully RESTful (for example, HTTP services that do not use hypermedia as a state transition mechanism).

When talking about versioning, the most important question to ask is “what are you wanting to version?”  The logical extension to this question is, “what is the contract between your service and client?”  This is naturally important since the contract is the thing you want to version.

In the "old world" of Web services, the contract was the service. Service actions (and associated semantics) along with data formats and other metadata were covered by the definition of the service, which was exposed as a single URL (the service, that is – I'm grouping together all RMM L0 services here). As such, when it came to the question of how to version the service, the answer was generally pretty simple: if the contract is the service, and the service is exposed as a URL, then the solution is to version the URL. That's why you'll see a lot of this if you browse around –

http://nuget.org/api/v1/

http://nuget.org/api/v2/

(Not trying to pick on NuGet here – it just happens to be a service API that I’m pretty familiar with at the moment)

It doesn't take much imagination to see how unwieldy this can get after even a few iterations – especially when your clients are interacting with the service by generating strongly typed proxies and then pretending that there is no network (yes, I am picking on WCF here).

So how is this different for RESTful services?

Well, we should start by again asking the question, “What is the contract for a RESTful service?” The answer is, IMHO, the uniform interface.  The uniform interface is made up of 4 constraints –

  • Identification of resources
  • Manipulation through representations
  • Self-descriptive messages
  • Hypermedia as the engine of application state (HATEOAS)

While all of these constraints are important to understand and take into consideration for the overall design of a RESTful system, I’m going to highlight the first 2 with respect to versioning.

First, let’s look at resources. A resource is any named information or concept – this is nice from a philosophical perspective, but maybe a little less helpful in how it enables us to think about versioning, so let’s look at it another way (as Fielding describes). A resource is a 0..n mapping between an identifier and a set of entities that changes over time. Here’s a concrete example:

I have 3 daughters: Grace, Sarah, and Abigail. However, this was not always the case – in fact, during the time period before Grace was born, the resource map of my family looked like the following:

As you can see, I had defined (in my mind) a bunch of named concepts – and at that point in time, they didn’t map to any actual entities (e.g. kids). Now, when Grace was born, the resource map looked like the following:

As you can see here, all of my named concepts map to a single entity – Grace. But what about when Sarah came along?  Then the map changed to the following:

As you can now see, my “children” collection resource maps to multiple entities, and “youngest child” now maps to Sarah rather than Grace.

The point here is that the resource *concept* has not changed – and more importantly, though the specific entity mappings have changed over time, the service has done this in a way that preserves the meaning of the identified domain abstraction (e.g. children).

A representation, on the other hand, is an opaque string of bytes that is effectively a manifestation of a resource. Representations can come in many different formats and the process of selecting the best format for a given client-server interaction is called content negotiation. The self-descriptive message constraint of the uniform interface adds that the information needed to process a representation, regardless of format, is passed in the message itself.

I wanted to give this brief explanation of resources and representations because it’s important to have a clear understanding of what they are so that you can know when to version them. So let’s get back to versioning…

So, if the contract for a RESTful service is the uniform interface, then the answer to the question of how to version the service is “it depends on which constraint of the uniform interface you’re changing.”  In my experience, there are 3 common ways that you can version (I’m sure there are more, but these are the 3 that I’ve come across most regularly).

Versioning Strategy 1: Adding content to a representation

In the case where you're adding something to a representation – let's say that you're adding a new data field "SpendingLimit" to a customer state block as follows:

{
  "Name": "Some Customer",
  "SpendingLimit": "unbounded"
}

In this case, the answer to the versioning question is to just add it. Now, this assumes that your clients will ignore what they don't understand. If you've written clients in such a way that you can't make that assumption, then you should fix your clients :) – or perhaps you need to look at the next strategy…

Versioning Strategy 2: Breaking changes in a representation

In the case where you’re either removing or renaming content from an existing representation design, you will be breaking clients. This is because even if they are built to ignore what they don’t understand, by making this sort of change on the server, you’re changing what they already understand. In this case, you want to look at versioning your representation. HTTP provides a great facility for doing this using content negotiation.  For example, consider the following:

GET http://localhost:8800/bugs HTTP/1.1
User-Agent: Fiddler
Host: localhost:8800
accept: text/vnd.howard.bugs.v1+html

This request gives me the following response fragment – as you can see, I’m working from an HTML base media type:

HTTP/1.1 200 OK
Content-Length: 1107
Content-Type: text/vnd.howard.bugs.v1+html

<div id="links">
<h2>Navigation</h2>
<a href="/bugs" rel="index">Index</a><br/>
<a href="/bugs/backlog" rel="backlog">Backlog</a><br/>
<a href="/bugs/working" rel="working">Working</a><br/>
<a href="/bugs/qa" rel="qa">In QA</a><br/>
<a href="/bugs/done" rel="done">Done</a>
</div>

Now what if, for some reason, I needed to change the link relationship values? Remember that based on the hypermedia constraint of the uniform interface, my client needs to understand (i.e. have logic written against) those link relationship values, so renaming them would break existing clients. However, in this case, I'm not really changing the meaning of the resources or the entities that the resources map to. Therefore, I can version my representation and enable clients that know how to work with the newer version to request it using the HTTP accept header.

Therefore, this request:

GET http://localhost:8800/bugs HTTP/1.1
User-Agent: Fiddler
Host: localhost:8800
accept: text/vnd.howard.bugs.v2+html

Will now give me the new response format:

HTTP/1.1 200 OK
Content-Length: 1107
Content-Type: text/vnd.howard.bugs.v2+html

<div id="links">
<h2>Navigation</h2>
<a href="/bugs" rel="index">Index</a><br/>
<a href="/bugs/backlog" rel="backlog">Backlog</a><br/>
<a href="/bugs/working" rel="active">Active</a><br/>
<a href="/bugs/qa" rel="testing">Testing</a><br/>
<a href="/bugs/done" rel="closed">Closed</a>
</div>

One other thing that I want to mention here – you’ve probably noticed that I’m using the language of representation design and representation versioning as opposed to content type design/versioning.  This is deliberate in that many (most?) times, you’re going to design your representations completely on top of existing content types (e.g. xml/json/html/hal/etc).  Without wading into the custom media type debate in this post, my point is that when I’m talking about versioning the representation here, I’m talking about versioning the domain-specific aspects of your representation that your client needs to be aware of.

Versioning a representation over an existing media type will look slightly different from what's shown above in that you'll pass a standard media type identifier in the HTTP accept header along with an additional metadata element to identify your representation-specific aspects, and then do content negotiation based on the combined description. There are several different ways to add the representation-specific metadata, including media type parameters and custom HTTP headers.
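To make that a bit more concrete, here's a hypothetical sketch of both approaches – the parameter name, header name, and version values below are made up for illustration rather than taken from any standard:

# version carried as a media type parameter on a standard content type
curl -H "Accept: application/json; version=2" http://localhost:8800/bugs

# version carried in a custom request header alongside a standard content type
curl -H "Accept: application/json" -H "X-Api-Version: 2" http://localhost:8800/bugs

Either way, the server negotiates over the combination of the standard media type and the extra metadata.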

Versioning Strategy 3: Breaking the semantic map

In both of the prior strategies, all changes, breaking and non-breaking, have been related to the representations. This is a really good thing as it enables the resources (and more importantly, the URL resource identifiers) to remain stable over time. However, there may be occasions – hopefully rarely – when you need to break the meaning of a resource and therefore the mapping between the resource and its set of entities. As an example, as I get older and my kids grow up and leave, let’s say that I start returning my pets as children. In this case, I’ve changed the meaning of the “children” resource and thereby broken that aspect of the contract between my client and service. The solution then, in this case, is to version the resource identifier itself.

If I did a lot of this sort of resource versioning, it is very possible that I could end up with an ugly-looking URL space. But REST was never about pretty URLs and the whole point of the hypermedia constraint is that clients should not need to know how to construct those URLs in the first place – so it really doesn’t matter whether they’re pretty or ugly – to your client, they’re just strings.

So to summarize

This post ended up being longer than I had planned, so here’s the summary.

  • In REST, the contract between clients and services is the uniform interface
  • How you version depends on what part of the uniform interface you’re changing
    • If you’re adding only, go ahead and just add it to the representation. Your clients should ignore what they don’t understand
    • If you’re making a breaking change to the representation, version the representation and use content negotiation to serve the right representation version to clients
    • If you’re changing the meaning of the resource by changing the types of entities it maps to, version the resource identifier (e.g. URL)

New Web Optimization Pre-Release Package on NuGet

Today we pushed a new [pre]release of ASP.NET Web optimization to nuget.org – version 1.1.0-alpha1. This release includes some bug fixes and also a few new features that we wanted to share sooner rather than later. You may have also noticed that WebGrease 1.3.0 was published to NuGet just a few days ago – 1.3.0 addresses several bugs found in the JavaScript and CSS minification process, so regardless of whether you choose to update Web optimization, make sure to update your WebGrease package.

Here are the new features included in 1.1.0-alpha1 –

CDN fallback

In the first version of the library, we enabled you to specify an alternate CDN location for a bundle and then tell the framework whether or not that CDN path should be used instead of the local path when BundleTable.EnableOptimizations=true (or the ASP.NET debug configuration attribute is set to false when not explicitly setting the EnableOptimizations property).

However, this feature didn't address the case where you wanted to start with a CDN path and then fall back to the origin server in the case that the CDN was unavailable. In this latest release, we've added this support by way of a new property on the bundle, CdnFallbackExpression. For example, consider the case where we want to reference jQuery from a CDN location and then fall back to the local bundle if jQuery cannot be reached on the CDN. For this, we need the following entries in the BundleConfig class's RegisterBundles method:

bundles.UseCdn = true;
BundleTable.EnableOptimizations = true;

var jqb = new ScriptBundle("~/bundles/jquery", "foohost")
    .Include("~/Scripts/jquery-{version}.js");

jqb.CdnFallbackExpression = "window.jQuery";
bundles.Add(jqb);

In this code, we've enabled both optimizations mode and CDN support; we've also specified the CDN path as "foohost" (a known invalid location) for the jQuery bundle. We then simply need to add the fallback expression "window.jQuery". As you probably already know, this expression evaluates to the global jQuery object, and it will be used in the following script generated by the framework.

<script>
(window.jQuery)||document.write('<script src="/bundles/jquery"><\/script>');
</script>

This script checks whether the jQuery object exists (indicating that jQuery was successfully fetched from the CDN), and if not, writes a new script tag into the DOM with the src pointed at the jQuery bundle on my server.

Element template strings

It used to be that choosing to have full control over your HTML markup meant giving up debug/release support. We've eliminated that tradeoff in this latest release with the "RenderFormat" method. This helper method accepts, in addition to the usual list of bundle references, a format string which represents the HTML to render. For example, to render the jQuery bundle with HTML5 markup that takes advantage of the new async attribute, I can use the following:

@Scripts.RenderFormat("<script src='{0}' async></script>", "~/bundles/jquery")

I’m now able to see that the async attribute is present when optimizations are turned off…

<script src='/Scripts/jquery-1.7.1.js' async></script>

…And, of course, when they are turned on…

<script src='http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.1.min.js' async></script>

Virtual Path Provider

Finally, while it may not be one of those scenarios that is broadly used, supporting ASP.NET's virtual path provider (VPP) is absolutely necessary for certain application types – a good example is CMS applications that may store resources in a database or at a path that doesn't map directly to the file system. The new Web optimization framework release now supports VPP, and while I'm not going to show how to develop a custom VPP (that topic would probably take a post or 4 by itself), here's how you configure the Web optimization framework to use it.

BundleTable.VirtualPathProvider = HostingEnvironment.VirtualPathProvider;

Note that this particular code example is a no-op, since we default to using the hosting environment's VPP if you don't specify your own.

What’s Next?

As we continue towards the stable 1.1.0 release, the one additional feature that we’re working on, which has been pretty highly requested, is support for build-time optimization. There is some support for this today via the executable that is included in the WebGrease package – however, it’s not a first class feature, and that’s what we want to enable. More to come as that feature gets a little more baked…

So take a look at the new alpha release and let us know what you think at http://aspnetoptimization.codeplex.com/
