Moving Blog for a While…

I’m moving my blog off CodeBetter for a while at least…

It’s not because I have any issue with CodeBetter. The talent on this blog is top end, and everyone here has been nothing but awesome during my time here (in fact, I’ll be bouncing ideas off of Brendan as I conduct some of my latest experiments in cloud-hosting a blog).

This is really about my having some strong beliefs about cloud architecture and wanting to put them to the test in a practical way.

The new blog is at http://blog.howarddierking.com (the RSS feed is http://blog.howarddierking.com/rss.xml). It’s not very pretty at the moment, but I hope you won’t hold that against me. I’ll be improving it incrementally as I discover new services/techniques. Naturally, if you have suggestions, let me know.

So for now, goodbye CodeBetter. I’m sure we’ll see each other again one day…


Getting Started with OWIN and Katana

MSDN Magazine just published an article I wrote on getting started with OWIN and Katana. If you’re unfamiliar with these terms, think of OWIN (and the Katana project) as a way to compose different .NET-based frameworks and components together into a single, lightweight Web application (which can then be hosted in a variety of different ways). In fact, in the article I walk through building a single application using NancyFX, SignalR and Web API, as well as hosting it both in IIS and in an executable run from the terminal.
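
If you want a feel for the composition piece before reading the article, here’s a minimal sketch of a Katana Startup class that plugs SignalR and Web API into a single OWIN pipeline (the extension methods shown come from the Microsoft.AspNet.SignalR and Microsoft.AspNet.WebApi.Owin packages; the article covers the full NancyFX + SignalR + Web API example, so treat this as a rough outline rather than the article’s exact code):

```csharp
using Owin;                 // IAppBuilder
using System.Web.Http;      // HttpConfiguration, Web API routing

// A minimal Katana Startup class: each Map*/Use* call below adds another
// middleware component to the same lightweight OWIN pipeline.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // SignalR hubs (from the Microsoft.AspNet.SignalR package)
        app.MapSignalR();

        // Web API (from the Microsoft.AspNet.WebApi.Owin package)
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
        app.UseWebApi(config);
    }
}
```

The same Startup class can then be hosted in IIS (via the Microsoft.Owin.Host.SystemWeb package) or self-hosted from a console executable using Microsoft.Owin.Hosting’s WebApp.Start&lt;Startup&gt;(…) call, which is exactly the dual-hosting story the article walks through.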

Check it out!


The Secret to RESTful Services is RESTful Clients

I’m sitting in the airport waiting to head back to Seattle after several great days at RestFest here in Greenville, SC. Many thanks to Mike Amundsen and Benjamin Young for being such gracious hosts. I’m pretty tired at the moment, but I had a couple takeaways that I wanted to get out there – mainly so that if I forget, I have a reference point to return to.

First, there were a few technologies that caught my attention that I want to spend some time going deeper on. Some of them I had heard of but had not yet gotten around to looking at – some were completely new to me. At any rate, in no particular order, here’s a list of stuff to look at in more detail:

  • http://apiary.io – API documentation, testing, mocking
  • JSONt – A JSON template language
  • Siren – Another hypermedia media type for JSON
  • RDF(-a), SPARQL, Semantic Web, Linked Data – This is a pretty gigantic space, and so step 1 here is just understanding what all of the different pieces are. However, given the work I’ve been doing with graph databases, this space feels like a natural evolutionary path forward.

The other big takeaway for me, which is really the thrust of this post, is that we proponents of RESTful (or hypermedia, if you prefer) architecture, myself included, have spent an unhealthy amount of our attention on the server. In fact, I would argue that when most people talk about REST, they are talking about building RESTful services (and as a result, focus on things like URI design). Ironically, by focusing so much on the server, the hypermedia constraint, which IMO is really the key differentiator for REST, remains abstract: something that dogmatic service developers stick into responses with only a vague idea of how a client will (or should) actually use those hypermedia controls.

This disconnect between services and clients leads to the kind of exchange that happened during the lightning talks at RestFest this year.

  • [me, to a front-end developer] – “Having consumed lots of different APIs, what aspect of an API makes it either a pleasure or a pain to work with?”
  • [front-end developer] – “That would have to be API documentation.”
  • [presenter, a few talks later] – “We should be providing less documentation…client developers should be able to discover our API by ‘following their nose.’”

Ummm, hang on a second….

Sure, that sounds like a great idea…

But that’s not the reality that the front-end developer just told you that she lives in.

So where’s the disconnect?

I believe that the problem is that while we may intellectually understand that REST describes the entire system (and not just the server), we don’t practically understand what this looks like because we generally build only the server applications and then make a bunch of assumptions about how the clients will “call the server.”

What I think we need to do differently is adapt a practice from TDD – that is, start out by building a client first, and then build the server from the outside in. Building a client first will give us a really clear picture up front of just how usable hypermedia controls are in practice, and will hopefully cause us to think more carefully about which controls we use and where we put those controls (perhaps not everything should be in a header?).

At the moment, I’m testing this approach with RestBugs, and I hope to have something to show in the next week or so. But the big idea is this: I’m going to write my client to consume a series of static [JSON] files and will see how easy it is to drive my entire workflow through those resources. I’ll then use that experience (and the resulting representation design) to drive the design of my server, which will then drive the runtime execution of my client.

“recursion…see recursion”
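
To give a feel for what that first step might look like, here’s a rough sketch of a client that reads a static JSON file from disk and drives its workflow purely by following link relations. The file layout, link format and the “move-to-qa” relation name are all made up for illustration; this isn’t the actual RestBugs code:

```csharp
using System;
using System.IO;
using System.Linq;
using Newtonsoft.Json.Linq;   // Json.NET, just for easy JSON parsing

// Hypothetical sketch: the "server" is nothing more than a folder of
// hand-authored JSON files. The client never builds or parses URIs;
// it only looks for controls by their link relation names.
public static class StaticHypermediaClient
{
    public static void Main()
    {
        // Entry point representation, e.g. a hand-written list of bugs
        var bugs = JObject.Parse(File.ReadAllText("representations/bugs.json"));

        // Look for the control that moves a bug to QA by its relation name,
        // not by knowing anything about the shape of the URI.
        var moveToQa = bugs["links"]?
            .FirstOrDefault(l => (string)l["rel"] == "move-to-qa");

        if (moveToQa == null)
        {
            // If the control isn't in the representation, that workflow step
            // simply isn't available; exactly the kind of feedback we want
            // while designing the representations.
            Console.WriteLine("No 'move-to-qa' control in this representation.");
            return;
        }

        // In the static experiment, "following" the link just means loading
        // the next file named by the href.
        var next = JObject.Parse(File.ReadAllText((string)moveToQa["href"]));
        Console.WriteLine(
            $"Followed '{moveToQa["rel"]}' to {moveToQa["href"]}; " +
            $"next state exposes {next["links"]?.Count() ?? 0} controls.");
    }
}
```

If walking the whole workflow this way turns out to be painful, that pain shows up before any server code exists, which is the point of the exercise.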

I don’t know whether this approach will work yet, but I’m optimistic…

Succeed or fail, should be fun.

Your thoughts?

 


Streamlining Dependency Management in Visual Studio

I’ll start out with a disclaimer: what I’m going to talk about in this post represents some thoughts that are being batted around by a few of us. It does not represent any decided-on features for any future version of Visual Studio.

Now, with that out of the way, one aspect of Visual Studio that drives me absolutely crazy is the number of different experiences available for adding dependencies into my project. Let’s take a quick inventory based on Visual Studio 2013:

  • Add Reference – for assemblies, project references, COM references and SDKs
  • Manage NuGet packages – for…well…NuGet packages
  • Add Service Reference – for SOAP-based services
  • Add Connected Service – for Azure Mobile Services integration

In addition to this list, there are also plenty of ‘pseudo-dependency experiences’ – things that may be more specific than, say, ‘Add Reference’ but that have the same basic characteristics. A good example of this in Visual Studio 2013 would be the Add->Push Notification feature. Also, there are plenty of 3rd-party and future experiences that aren’t on this list and would each add yet another menu option and user experience.

Stepping back and looking at all of these different component types and related experiences, they all need to support the same basic operations (note that I said need here – some of the current experiences have a weak story, or no story at all, for some of these):

  • Enable discovery of components
  • Enable installation of components
  • Enable update of components
  • Enable removal of components

The differences lie not in what the experiences need to be, but rather in how they need to work for different component types. For example, consider the installation experience. There is a common semantic across all types of dependencies: “I want to add stuff to my solution to help me focus on writing code that adds value to my application.” While this forms the big idea for what adding a dependency should mean, it can manifest in very different ways based on the type of ‘stuff’ you’re adding. For example, adding a NuGet package dependency may add assembly references and drop some content files in your project, while adding an Azure Mobile Services dependency may provision a service in the cloud, add some assembly references, and scaffold out some code to make it easier for your code to communicate with the service. Unsurprisingly, these same kinds of differences crop up when looking at update and removal as well.
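
To make that a little more concrete, here’s one purely hypothetical way the shared semantics could be expressed: a provider contract that each dependency type (NuGet, Bower, NPM, connected services, plain assemblies) implements, with the discover/install/update/remove operations held in common and the mechanics left to the provider. This is a thought-experiment sketch, not a planned Visual Studio API:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical sketch only: a common contract for dependency providers.
// Each provider (NuGet, Bower, NPM, connected services, assemblies...)
// supplies its own mechanics behind the same four basic operations.
public interface IDependencyProvider
{
    string Name { get; }  // e.g. "NuGet", "Bower"

    // Discovery: feeds a shared, search-first experience.
    Task<IReadOnlyList<DependencyInfo>> SearchAsync(string query, ProjectContext context);

    // Installation: may add references, drop content, scaffold code,
    // or even provision a cloud service, depending on the provider.
    Task InstallAsync(DependencyInfo dependency, ProjectContext context);

    // Update and removal: the same semantic, provider-specific mechanics.
    Task UpdateAsync(DependencyInfo dependency, ProjectContext context);
    Task RemoveAsync(DependencyInfo dependency, ProjectContext context);
}

// Minimal supporting types for the sketch.
public class DependencyInfo
{
    public string Id { get; set; }
    public string Version { get; set; }
    public string Provider { get; set; }   // which provider it came from
}

public class ProjectContext
{
    public string ProjectPath { get; set; }
}
```

The interesting design work is all in what sits above a contract like this (search, ranking, contextual recommendations, a unified management UI), which is what the rest of this post is about.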

In terms of managing dependencies, one comment I’ve heard from more than a few people goes something like this: “Well, NuGet has emerged as the clear winner in the dependency management space – so why not just NuGet all the things?” Two responses there:

  1. Is NuGet really the clear winner in all contexts? I’m seeing a lot of folks starting to use NPM for other-than-Node applications, and I’m seeing a lot more chatter about (and use of) Bower for client-side dependencies in Web applications. Don’t get me wrong – I’m a big fan of NuGet. However, I think it’s dangerous to craft a strategy around the idea that a particular technology is, and will forever (or at least for a really long time) be, *the* technology. Besides, we’re all a little hipster when it comes to technology – once a tech becomes the established thing (dare I say, the ‘enterprise’ thing?), it’s no longer as exciting as the next up-and-coming thing :)
  2. By trying to make NuGet into the “one ring to rule them all”, we would end up destroying NuGet. Yes, I said destroy. I’ve been around for a while now, and nothing seems to kill a promising technology faster than initial success followed by a morphing of the technology to address scenarios that go far beyond its initial vision and goals. For NuGet to continue to evolve in a way that is useful, I believe that it needs to manage the scope of what it should do, and then focus on doing those things really, really well.

The solution is not to make one package manager the “one”, but rather to make it insanely easy (transparent, even) to use multiple package managers in Visual Studio through a single dependency management experience.

Oh, and did I mention that we should reduce the number of entries in the Solution Explorer context menu?

So at a high level, here’s what we’re thinking about:

  • Create a single point of entry/UX for managing dependencies in a Visual Studio project or solution – both Microsoft and 3rd party (e.g. NuGet, assemblies, NPM, Bower, etc…)
  • Dependency discovery is “search-first” by way of a common service for indexing and making contextual recommendations. This would be a pretty profound change from the traditional Microsoft developer tools world of enumerating lists and trees.
  • Different types of references can have their own user workflows. I’m not sure of the best way to design this yet (you’ll be relieved to know that we immediately ruled out the “10 layers of modal dialog” option :))
  • Dependency types are identified by name and/or iconography – but not physically separated by tabs, etc. The point is that search goes across providers and may show, for example, a Bower package and a NuGet package in the same list of results.
  • All dependencies are first class in Visual Studio. Practically, this means that they are all managed (e.g. update/remove) the same way in Visual Studio components such as Solution Explorer.

If we moved forward with this, it would be a pretty significant change to some pretty core Visual Studio experiences. So I want to get your thoughts. Here are a couple initial questions:

  • Do you currently use multiple dependency management tools and experiences when building apps – and if so, for what types of apps?
  • Do you currently end up leaving Visual Studio completely for certain types of dependencies – and if so, for what types of dependencies and for what types of apps?
  • If you’re currently switching between different experiences for dependency management, is this something that’s a point of frustration (or possibly a frustration that you’ve just gotten used to)?
  • Is a consolidated dependency management experience something that you would find helpful/valuable?
  • Moving to a search-first experience feels like a pretty big shift – is it realistic to attempt this or do you feel like you need to be able to browse/enumerate?

I’m sure I’ll have tons more questions as we continue to explore this space, but want to kick off the conversation and get your initial reactions/thoughts.

Update (2013-10-8): I came across the following article, which describes a solution to the same kind of problem that this approach would address: http://www.wired.com/wiredenterprise/2013/10/runnable/


Versioning RESTful Services v2

See what I just did there? ^^

Done :)

Since publishing both the REST Fundamentals course and my previous post on RESTful API versioning, I’ve had the opportunity to think more about versioning in practice as part of designing the next major revision to the NuGet API (v3). This has been very helpful for a couple of reasons, but the most important one is that NuGet is a real application with both non-trivial scale requirements and a number of different clients – many of which are not controlled by the NuGet core team. Going through this API design exercise has both validated some of my earlier thinking and revised some of it. The point of this post is to lay some of that out. The NuGet experience has also surfaced some other architectural concepts, which I’ll mention here but whose details I’ll save for a later post (part of why this post was so delayed is that I kept thinking I would cover both the new architectural principles and the API versioning revisions in the same post – I’ve now realized there’s no need to hold up one for the sake of the other).

Let’s start out by reviewing the principles and strategies I laid out in the previous post for versioning RESTful APIs:

The key versioning principle is this: how you version depends on what part of the uniform interface you’re changing

  1. If you’re adding only, go ahead and just add it to the representation. Your clients should ignore what they don’t understand
  2. If you’re making a breaking change to the representation, version the representation and use content negotiation to serve the right representation version to clients (a brief client-side sketch of this follows the list)
  3. If you’re changing the meaning of the resource by changing the types of entities it maps to, version the resource identifier (e.g. URL)
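
As a quick illustration of strategy #2 (this is just a sketch; the URL and the versioned vendor media type are made up for illustration, not taken from any real API), a client asking for a specific representation version via the Accept header might look like this:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch only: the media type name and URL below are hypothetical.
public static class VersionedClient
{
    public static async Task Main()
    {
        using (var http = new HttpClient())
        {
            var request = new HttpRequestMessage(
                HttpMethod.Get, "https://api.example.org/bugs");

            // Ask for version 2 of the bug representation; the server picks
            // the concrete variant via content negotiation.
            request.Headers.Accept.ParseAdd("application/vnd.example.bug.v2+json");

            var response = await http.SendAsync(request);
            Console.WriteLine(response.Content.Headers.ContentType);
        }
    }
}
```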

So where has my thinking changed? It’s primarily around strategy #2 and it relates to several of the comments from that earlier post. For example, Aaron comments:

Where this breaks down is that I cannot specify the accept type in the image tag, so how can I be sure that the server will default my request to image/jpg?

Further, If i wanted version 2, I cannot specify that either. So I feel like there might actually be some validity in using a version based uri scheme for these types of scenarios.

I agree.

In my original thinking, I was much more zealous about the promise of server-driven content negotiation than I am today. Based on the NuGet API v3 experience, there are a few reasons for my ideological shift away from the panacea of server-driven content negotiation (and really, it’s not just because Fielding avoids it – though I tend to agree with his reasoning):

  • Inconsistent support from caches
  • Different levels of support from clients with regard to their ability to manipulate the outgoing HTTP request
  • Lack of determinism for handling the response
  • Reliance on compute-bound service architectures

I think that the first 2 reasons are pretty well understood, so I’ll focus on the last 2.

Lack of Determinism in Responses

Even when I was promoting it for everything, I always had some level of discomfort with purely server-driven content negotiation, as it can put the client in a rather helpless position with regard to the responses it receives and what it can do with those responses. The client can certainly ask for a particular version or format of a representation, but at the end of the day, the server is in control of what the client gets. It’s a bit like when I cook dinner for my kids: “you get what you get and you don’t get upset”.

Or, put another way, it’s kind of like this…

It’s fine to take suggestions and give the client back what you think will work best. However, this should be for convenience/optimization. The ability of your server to correctly interpret a client’s preferences should not be the sole way to enable the client to work with different variations of server resources. Instead of the purely server-driven approach, I’m now building systems whereby any semantic change yields a new resource with a new resource identifier (URI).

Tactically, there are several approaches that can be considered. One would be to simply not use server-driven content negotiation and always return links for the different variations of a resource.

A more hybrid approach would use server-driven content negotiation to provide a “best guess” resource. That resource would use client-specified preferences (via metadata such as HTTP’s Accept header) either to issue a redirect to the correct concrete variation resource, or to return a response based on those preferences that includes links to the alternate variations as well as a link to the canonical resource for the response body itself. This link to the canonical resource is critical, as the response from the best-guess resource will vary by client and will likely change over time.

Given the example from the earlier post of the client-server interaction for getting a bug representation, this hybrid data flow might look something like this:

  1. client sends a GET request for /api/bugs
  2. server response looks like the following:
    1. contains a JSON-formatted body
    2. includes a Content-Location header with the value /api/bugs.v1.json
    3. includes links in the body to /api/bugs.v1.xml and /api/bugs.v2.json (and XML, naturally) – note that these need link relationship values that the client can make sense of; the client should not be required to parse or analyze the URI itself.
  3. client processes the JSON response and optionally bookmarks (saves) the /api/bugs.v1.json URI to use in future requests
  4. if the client would rather work with XML, it grabs the URI for the XML representation (from the supplied link relationship type) and follows it (a rough client-side sketch of this flow appears below).
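
To make steps 1 through 4 concrete, here is a minimal client-side sketch. The URIs follow the example above, but the link format in the body and the “alternate-xml” relation name are assumptions for illustration only:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

// Sketch only: assumes the server behaves as described in steps 1-4 above
// and that links appear in the body as { "rel": "...", "href": "..." } objects
// (the exact link format and relation names are assumptions).
public static class HybridNegotiationClient
{
    public static async Task Main()
    {
        using (var http = new HttpClient { BaseAddress = new Uri("https://api.example.org/") })
        {
            // Step 1: GET the "best guess" resource.
            var response = await http.GetAsync("/api/bugs");

            // Steps 2/3: bookmark the concrete variation the server actually served.
            var bookmark = response.Content.Headers.ContentLocation;  // e.g. /api/bugs.v1.json
            Console.WriteLine($"Bookmark for future requests: {bookmark}");

            // Step 4: if we'd rather have XML, follow the link by its relationship
            // type; we never inspect or parse the URI itself.
            var body = JObject.Parse(await response.Content.ReadAsStringAsync());
            var xmlLink = body["links"]?
                .FirstOrDefault(l => (string)l["rel"] == "alternate-xml");

            if (xmlLink != null)
            {
                var xml = await http.GetStringAsync((string)xmlLink["href"]);
                Console.WriteLine(xml.Substring(0, Math.Min(80, xml.Length)));
            }
        }
    }
}
```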

The key to all of this is – as is the case with nearly all RESTful interactions – that the client determines which URIs to follow based on some metadata (the link relationship type) rather than by any sort of examination or parsing of the URI itself.

One of the areas here that I’m still working through is how much to leverage HTTP’s controls for communicating some of the link semantics (the Content-Location header is a good example). While it would seem like the obvious right answer is to use every available HTTP control, our experience with the NuGet API design is suggesting that broader architectural goals may trump HTTP “purity”.

Using Non-Compute-Bound Architectures

One of the comments I made in the last module of the Pluralsight course is that when it comes to cloud computing, the benefit of a RESTful architecture is that it lets you leverage the infrastructure of the Web. Put another way, a RESTful design should scale more cheaply than a non-RESTful design. When looking at the cloud from an architectural point of view, all services basically fall into 2 buckets: compute and storage. Of these, storage is universally cheaper, less error-prone, and more scalable than compute-based cloud services. Examples of compute-based cloud services include things like:

  • Virtual machines
  • Web roles/sites/etc. (environments that run back end Web application code)
  • Cloud databases

Whereas examples of storage-based cloud services include things like:

  • Blob storage
  • Table storage

While it’s certainly possible to achieve high degrees of performance and reliability with compute-based services, doing so generally requires more investment than it does for services built on storage-based cloud services. As such, for services that have high availability and scale needs, moving as much of the “API” as possible to static, linked data in cloud storage is an appealing design. In such a design, changing the state of the data is the responsibility of many different agents that monitor for state conditions and then act on one or more pieces of the data. This topic warrants a post or 2 of its own, but I mention it here because these larger architectural considerations have had an impact on how I’ve been thinking about versioning and content negotiation (e.g. how do you version if every resource needs to be stored as a linked blob?). For example, cloud storage providers and CDNs have limitations with respect to which HTTP headers they can serve. Also, dynamic content negotiation is not an option without introducing a compute-based intermediary. Again, the point of mentioning this architectural thinking is only to show how it impacts the tactical choices I might make when deciding if and how to use content negotiation for versioning.

Ramifications

The most notable impact of this revised approach to versioning (and content negotiation in general) is that it results in a very large number of URIs to manage. For the server, this isn’t a terribly difficult problem, as we have lots of intelligent libraries and infrastructural services for managing HTTP resources. For the client, however, it further underscores the need to write clients in such a way that they treat URIs as opaque strings and simply follow links based on link relationship type metadata. By extension, this also means that the path of resources consumed by a client will be heavily directed by which bookmark or entry point resource the client starts from – so make sure that your RESTful service provides a good entry point resource with a set of choices (links) for alternate resources. These could be different versions and/or different formats. Also, it’s a good idea to provide similar links with each resource consumed by a client (as appropriate), since you never know which resource will be bookmarked by a client and later used as a starting point.
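
As an illustration of what such an entry point resource might look like (this is a shape I made up for illustration, not a representation from NuGet or any other real API), a single bookmarkable document can advertise the available versions and formats purely through links:

```json
{
  "links": [
    { "rel": "bugs",        "href": "/api/bugs.v2.json" },
    { "rel": "bugs-v1",     "href": "/api/bugs.v1.json" },
    { "rel": "bugs-xml",    "href": "/api/bugs.v2.xml" },
    { "rel": "active-bugs", "href": "/api/bugs/active.v2.json" }
  ]
}
```

A client that starts here and follows links by relation name never needs to know (or care) about the versioning convention baked into those URIs.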

Revised Summary

So, in my revised thinking, the overarching principle remains the same:

how you version depends on what part of the uniform interface you’re changing

But I’ll add another principle (well, borrowed it from Fielding actually):

all important resources must have URIs

This ends up impacting versioning strategies as follows:

  1. If you’re adding only, go ahead and just add it to the representation. Your clients should ignore what they don’t understand
  2. If you’re making a breaking change to the representation or changing the meaning of the underlying resource, create a new resource with a new name (URI)
  3. Use content negotiation in such a way that it can provide an optimized path to resources, but always give the client control (by way of links) to make different choices

I hope that this discussion continues to prove useful. And should I continue to revise my thinking, I will try and be a bit more timely in publishing my thoughts.

Finally, many thanks to Aaron and Cort for continuing to nudge me to get this post written!
