I recently blogged about a feature I worked on: custom search commands for Splunk in our Python SDK. Custom search commands allow you to extend Splunk’s search language with new commands that can apply custom filtering, perform complex mathematical calculations not available out of the box, or generate events dynamically from an external data source, such as an API. Currently we only support Python, but we’ll be adding more languages in the future.
In the first post of the series I talk about how to build a simple generating command.
A few weeks ago, I read a great post from Kin Lane about the Open API movement in government. One of the things he discussed was the momentum around APIs and the adoption of emerging standards for API documentation. In particular, he mentioned Swagger (which I was familiar with), Blueprint, and RAML.
Seeing this traction, and being a general fan of helping move the API adoption ball forward, I thought it might be interesting to explore using one of these formats for Splunk’s API. So I decided to wade into each of the formats and share my initial thoughts in the best of all places: Twitter! Flash forward a bunch of tweets, and that turned into a very engaging Twitter discussion (or more like a tweetacane) between Kin, myself, and the authors of the different standards, namely Tony Tam (@fehguy) of Swagger, Uri Sarid (@usarid) of RAML, and lastly Jakub Nesetril (@jakubnesetril) of API Blueprint, as well as others who weighed in, like Rob Bihun (@robbihun). This then led to a bunch of us heading to JabbR, where we could stop spamming the rest of the twitterverse and carry on a real conversation.
It was a really interesting conversation about the goals of each format. Being the hypermedia guy that I am, I was very interested in the level of (or lack of) hypermedia support in these formats.
After all the discussion, it turns out there is a lot of commonality between the different formats.
One key thing that came out of the conversation was that all of these standards seem pretty intent on not repeating the mistakes of WSDL. That is, the documentation is a guide that describes a contract; the description is there to help you, not a language you must speak to get in the door. And this, my friends, is a very GOOD thing!
The main difference seems to revolve around the types of capabilities the formats offer, which does impact the conceptual overhead for working with that format. Swagger stands out as trying to be as minimalist as possible, but there is a tradeoff there in how far you can go with it.
Another key difference appeared to be around URIs. Swagger appears very much to be centered on the notion of URIs as APIs, while the others take a more resource-oriented approach.
Blueprint and RAML want to offer more value beyond just docs. Blueprint, for example, supports Dredd for testing your server API. Blueprint even aims to help you implement your server, using the Blueprint doc as a guide for generating a server implementation.
None of the formats (at that time) had any real first class support for hypermedia.
Kin, being the awesome guy he is, captured the entire web-2.0ish (if there’s such a thing) discussion, from Twitter to JabbR, in a recent blog post here. It’s an interesting experiment, as it is basically a capture of a really large brain dump. It’s like being in the room with us; everything is there.
One positive thing that came out of this discussion is that it helped reignite some momentum around first-class hypermedia support in API Blueprint and RAML. For Blueprint, there’s an active discussion going on now about a new Resource Blueprint on GitHub here.
Another piece of progress: Kin has had some great further discussions with the creators on the motivations behind their libraries, which he’s written up here.
I am really thankful to Kin, Tony, Uri, Jakub and Rob for the great convo!
What are your experiences with API documentation? Have you used any of these formats? Are you using others?
Update: Phil Haack just posted his own reflections on Duck Typing, which, by the way, has a much cooler title than mine!
Another update: A coworker on my team just passed me an awesome video from Daniel Spiewak describing the fundamentals of type inference. As part of the talk, he describes how structural typing works in Scala. I didn’t realize how much type inference ties in here, but it appears to relate very strongly. Near the end he also talks about Duck Typing. If you are a language geek or just want to understand the nuts and bolts, watch it: http://screencasts.chariotsolutions.com/uncovering-the-unknown-principles-of-type-inference-
Recently Eric Lippert (one of the fathers of C#) had a nice post entitled “What is Duck Typing?”. In the post, Eric looks at Duck Typing and tries to clarify what it is or isn’t, in particular critiquing the Wikipedia article on the topic. His conclusion is that Duck Typing is not a special type system; it is either another name for late binding or completely meaningless. You should read it!
I’ve been working with Duck Typing for a while, especially over the past few years as I’ve been working mostly with dynamic languages. I agree completely with Eric that it’s not a special type system. It’s not meaningless, though; it is something!
And that leads to this post. I’m going to share my own thoughts on Duck Typing, what it is and what it isn’t, and the different types, and give examples across several different stacks/languages to back up my points. I’ll also talk about how Duck Typing is evolving in newer stacks like Scala and Go.
What is it?
Duck Typing is a programming technique where a contract is established between a function and its caller, requiring the parameters passed in by the caller to have specific members present.
Update: I found this definition which seems much more accurate than the Wikipedia version: “In a language that supports DuckTyping an object of a particular type is compatible with a method or function, if it supplies all the methods/method signatures asked of it by the method/function at run time.”
There are two types of contracts, Implicit Contracts and Explicit Contracts, which will both be explored further in this post.
Note: In traditional object-oriented, statically typed languages, Duck Typing is commonly substituted with interfaces and inheritance, though you can do Duck Typing in many modern static languages.
Is Duck Typing the same as Late Binding?
Duck Typing differs from late binding, which is a general term for deferring the lookup of an object’s members from compile time to runtime.
In an implicit contract, the contract is defined implicitly, based solely on which members of the caller’s arguments the function’s code actually accesses.
The contract here is only in the code. The fact that an execute function is invoked in executeRule() means it will fail unless execute() is present.
This kind of code is very easy and common to write in JS due to the very fact that it is dynamic. Similarly you can easily do the same in other dynamic languages like Ruby and Python.
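Since the same pattern is just as easy in Python, here is a minimal sketch of that implicit contract (the executeRule/execute names mirror the hypothetical example discussed above):

```python
# The contract lives only in this code: execute_rule() requires nothing of
# its argument except that an execute() member is present and callable.
def execute_rule(rule):
    return rule.execute()

class LoggingRule:
    """Satisfies the implicit contract simply by having execute()."""
    def execute(self):
        return "logged"

class Widget:
    """Has no execute(), so it breaks the contract at runtime."""

print(execute_rule(LoggingRule()))  # prints "logged"

try:
    execute_rule(Widget())
except AttributeError as err:
    print("contract not satisfied:", err)  # the failure only happens when called
```

Nothing is checked until the call actually happens; that is the essence of the implicit contract.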
In some stacks these implicit contracts are even formalized as part of the core libraries. For example, in Node.js, streams are implicit contracts from a stream implementer’s perspective. All functions that accept streams are technically using Duck Typing.
What about Method Missing?
Please feel free to disagree with me on this 😉
Ok, now let’s see Duck Typing in statically typed languages, in particular .NET.
VB.NET has supported Duck Typing for a long time. As long as your objects are declared as type Object, you can party on them in a similar fashion.
C# with dynamic
In C# thanks to the dynamic keyword we can do Duck Typing. What’s nice about this from a C# perspective is you can pass either a dynamic object or a statically typed object and get the same result as long as the required members are present.
Boo is a really cool statically typed .NET language. Years before dynamic ever existed, Boo introduced the duck type specifically for supporting Duck Typing. Its behavior and dynamic’s are very similar.
.NET/Java with Reflection
You can even use reflection to achieve the same, but it is far less natural. Basically, you use the reflection APIs and invoke members. As Krzysztof Cwalina mentioned in one of his earlier posts, the .NET Framework uses this approach for enumerables. Using the previous example, here it is done with reflection in C# (yes, this code can be refactored to be more performant).
The same approach can work in Java or any statically typed language that has its own reflection APIs.
The downside of this approach is that it requires you to do the reflection heavy lifting. Above I am calling a single method, but imagine if I were accessing multiple members; the code starts to get ugly quickly. Is it still Duck Typing, though? Technically, probably yes, but it is definitely not leveraging language features; rather, it is utilizing runtime introspection hacks to get the Duck Typing behavior.
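For comparison, here is a rough Python analogue of the reflection approach, looking the member up by name with getattr much as the C#/Java versions do with their reflection APIs (the rule names are hypothetical, carried over from the earlier example):

```python
def execute_rule(rule):
    # Reflection-style lookup: fetch the member by name, then invoke it.
    execute = getattr(rule, "execute", None)
    if not callable(execute):
        raise TypeError("object does not satisfy the contract: no execute()")
    return execute()

class LoggingRule:
    def execute(self):
        return "logged"

print(execute_rule(LoggingRule()))  # prints "logged"
```

In Python this buys you little, since plain attribute access is already resolved at runtime, but it mirrors the lookup-and-invoke dance the reflection-based static-language versions have to do by hand.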
Interfaces in traditional static languages
Now the question becomes: is using an interface in C#/Java achieving Duck Typing? Suppose I introduce an IRule interface, and ExecuteRule now expects an IRule instance.
I would say the answer is no, it is not Duck Typing. The reason is that the object getting passed in must explicitly implement an interface in order to be compatible. It is not enough for it to have the Execute method on its surface, as in the previous cases.
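To make the contrast concrete, here is a sketch that emulates that nominal, interface-based behavior in Python with an abstract base class (the IRule name is hypothetical, mirroring the interface discussed above): an object that merely has an execute() method is rejected unless it explicitly opts in.

```python
from abc import ABC, abstractmethod

class IRule(ABC):
    @abstractmethod
    def execute(self):
        ...

def execute_rule(rule):
    # Nominal check: the type must explicitly declare IRule, not just
    # happen to have an execute() method on its surface.
    if not isinstance(rule, IRule):
        raise TypeError("rule must explicitly implement IRule")
    return rule.execute()

class LoggingRule(IRule):
    """Explicitly opts in to the contract by inheriting IRule."""
    def execute(self):
        return "logged"

class DuckRule:
    """Quacks like a rule, but never declares IRule."""
    def execute(self):
        return "quacked"

print(execute_rule(LoggingRule()))  # prints "logged"

try:
    execute_rule(DuckRule())        # rejected despite having execute()
except TypeError as err:
    print(err)
```

This is exactly the behavior the C#/Java interface version exhibits: membership in the contract is declared by name, not inferred from structure.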
Explicit Contracts via Structural Typing
In all of the previous examples that were mentioned, we saw two common traits:
The contract is established implicitly (in the code)
There is no compile time safety.
In recent years, a new player has appeared on the block called Structural Typing, which provides a way to do Duck Typing in a compiler-safe manner that works with static types. It is exciting to see these advancements!
Structural Typing has been around for a while in languages like C++ (kudos to Stephen Cleary) and OCaml (kudos to @PickedUsername for that ref). More recently it has been making an appearance in newer languages like Scala (which Eric mentions in his post) and Go. Similar to Duck Typing, the presence or absence of members on the parameters passed in by the caller determines whether the contract is satisfied. The difference, however, is that the contract is specified explicitly, in a manner that can be used at compile time to ensure compatibility.
In the case of Scala this is done by inlining metadata in the function definition which describes the expected contract.
In Go you can do the same using interfaces. What’s special (and beautiful) about the Go interface, which differs from other languages, is that it retroactively applies to any object that has the members present in the interface.
In either case the behavior is the same: if executeRule (ExecuteRule in Go) is called passing an object that does not have an execute()/Execute() function, it will fail at compile time. The object itself has no knowledge of the contract; the contract is explicitly declared externally.
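As an aside, Python has since grown a comparable feature in typing.Protocol (PEP 544): the contract is declared explicitly and externally, any object with the right members satisfies it retroactively (much like a Go interface), and static checkers such as mypy verify it at analysis time rather than at compile time proper. A small sketch, again reusing the hypothetical rule names:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Rule(Protocol):
    """The contract, declared externally; no class needs to inherit it."""
    def execute(self) -> str: ...

def execute_rule(rule: Rule) -> str:
    # mypy checks the structural contract statically; at runtime this is
    # still ordinary duck-typed dispatch.
    return rule.execute()

class LoggingRule:
    """Never mentions Rule, yet satisfies it structurally."""
    def execute(self) -> str:
        return "logged"

print(isinstance(LoggingRule(), Rule))  # True: structural match
print(execute_rule(LoggingRule()))      # prints "logged"
```

The runtime_checkable decorator additionally allows isinstance checks that test only for member presence, which is as close as Python gets to Go’s retroactive interfaces.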
Both Duck Typing and its newer cousin Structural Typing are techniques that provide a contract to the caller based on the structure of the parameters getting passed in. The contract itself can be established implicitly in code, sacrificing compile-time checking, or explicitly, using advancements in newer statically typed languages to get compile-time safety.
Parting thought: now that newer languages are treading the path for achieving Duck Typing in statically typed languages with compiler safety, I hope we see other languages follow suit. I for one would love for C#’s interfaces to work the way they do in Go. The first time I saw that the language supported this, I was literally drooling. Can you hear me, Anders?
[Update: I removed the names I had listed below, as there is no way I could capture all the people I worked with and I don’t want to miss anyone]
Two weeks ago I left Microsoft and joined Splunk. After 8 wonderful years this was not an easy decision. Why did I leave, then? One reason: I am ready for a change. I looked within myself and felt the time is right. I am ready to take these years of learning and experience and apply them outside Microsoft, in a smaller, growing company.
Splunk offers an amazing product which helps IT pros and developers analyze machine-generated data in real time. It is an invaluable resource (from what customers tell me) in diagnosing problems that occur at the hardware, system, or application level. It has many uses far beyond that, but everything revolves around operational intelligence.
I am really excited to have joined Splunk and to be working on such a cool, powerful, and innovative product. I really like the team and the vision. I am looking forward to helping to take the developer story to the next level and to working with you to make it happen.
Microsoft and moving on
Almost 8 years ago I received my blue badge and started at Microsoft. I came after 10 years of working in the industry, and about 10 years of wanting to work at Microsoft. I was so excited to have made it, I remember thinking “You have arrived”. I was passionate and driven to help change the world through technology. I was looking forward to working with amazing people and on important projects. Flash forward, I can say for sure that I found what I was looking for, and much more than I would have imagined.
Microsoft really was a land of opportunity for me. I joined at a time when developers were being very critical of Microsoft’s platform. The common criticism, which echoed very loudly from groups like ALT.NET, was that the MS platform/.NET was too heavy, did not embrace good software development practices that the community cared about like TDD, and did not play well with open source.
I found myself in a place where there was an opportunity to make a difference. It somehow became my personal mission to help change the way we build the platform and look at open source. I worked toward this throughout most of my time at Microsoft in various teams like Prism (patterns & practices), MEF, Web API and Windows Azure/Node SDK. I was not alone. I had a long list of wonderful people (too large to do justice to) I worked with internally toward this goal.
That mission brought me to places I never imagined. I got to work across the company with various teams, helping them to adopt a newer approach. I got to travel the world and talk about the work we were doing. I met with (and worked with) fantastic people both inside and outside of Microsoft. I learned a lifetime. I couldn’t have hoped for better!
I am really happy with the progress that has been made at Microsoft. The DNA has changed in much of the dev platform. Microsoft did a complete 180 on its attitude toward OSS and supporting OSS tools. NuGet has been a big help in making open source much easier to consume in .NET. Several SDK projects are on GitHub, Apache 2 licenses for Microsoft projects are everywhere, CodePlex supports git, and many platform projects are taking pull requests from the community. In addition, Microsoft is proactively supporting other open source projects like Node and jQuery.
With all this change, there are new critiques. In .NET there is the concern that Microsoft’s OSS efforts overshadow other efforts of the community. Some are concerned that Microsoft is investing too much in other stacks (like Node, PHP, and recently Java). Others are worried about the future of .NET itself. I understand the concerns, but for me personally, things are much better than they were.
As to my experience at the company, was it always rosy? No, there were plenty of challenges and I had plenty of my own tough moments. Look, it’s not a perfect company (I am not sure there is one) by any means, and it’s definitely got its problems. I had my own particular style and passions that didn’t always gel with my management. I definitely had my frustrations and felt plenty of times like a square peg trying to fit in a round hole. Regardless, the pros definitely outweighed the cons. I couldn’t have dreamed such an experience would have been possible. I would do it again.
I want to thank everyone (too many to name) who has worked with me over the years and supported my efforts at Microsoft, both inside and outside the community. You made my work possible and I learned a lifetime from you.
Thank you to all my managers and mentors both internally and externally who’ve invested their personal time and energy to help me, you all know who you are!
One person I need to mention, Scott Guthrie. The things you’ve done with .NET and Windows Azure have been fantastic and I look forward to the future. You’ve been a personal force in my career and it’s fair to say that I would have left Microsoft a long time ago had I not believed in your leadership. Thank You!
Script Packs are a really cool extensibility point we added to scriptcs. A pack delivers a bundle of functionality that makes frameworks more palatable to consume from script. They are available as NuGet packages, making them very easy to consume.
For example, if you look at our Web API sample, you’ll see there’s a bunch of friction if you just try to get Web API working from scratch.
You need to add using statements for each namespace you want to use. This is a lot more painful than one might think when you don’t have IntelliSense.
You need to configure Web API; this involves creating a host, defining default routes, etc. All the object creation and such starts to make the script pretty hairy. Not impossible, but painful when there’s no template.
You need to teach Web API how to resolve controllers in script by implementing a custom controller resolver.
Now pull in the Web API script pack (scriptcs -install scriptcs.webapi), use the Require<WebApi>() function, and the boilerplate code evaporates.
The script pack does all of the following to make the experience better:
Removes the need for using statements for common namespaces. The script pack provides those, which is why you don’t have to add the Web API namespaces in the example above.
Adds DLL and NuGet package references that bring in the dependencies the framework needs.
Removes general boilerplate code. In the previous sample you needed to create a host, define routes, etc., as I mentioned. In this case the script pack creates the host for you and configures it with the default routes. You can customize it if you need to.
Provides APIs to fill gaps that prevent the framework from working well in script, such as supporting dynamically emitted assemblies. The Web API script pack brings in and configures a custom controller resolver for you.
We’re just getting started with script packs, but they are a really nice extensibility point and really take advantage of NuGet as a delivery mechanism. The community has been rising to the occasion and building out quite the gallery as well.
There are some great posts about script packs, covering topics like how to build them or even use them from the REPL, that you should really check out.