Windows Server Containers are coming whether you like it or not

Shortly after I posted giddily about Docker in the Windows world, Microsoft released Windows Server 2016 Technical Preview 3 with container support. I’ve had a chance to play with it a little, so let’s see where this goes…

It’s a preview

Like movie previews, this is equal parts exciting and frustrating. Exciting because you get a teaser of things to come. Frustrating because you just want it to work now. And extra frustration points for various technical issues I’ve run into that, I hope, are due to the “technical preview” label.

For example, installing container support into an existing VM is mind-numbingly slow. Kudos to the team for making it easy to install but at the point where you run ContainerSetup.ps1, be prepared to wait for, by my watch, at least 45 minutes without any visual indication that something is happening. The only reason I knew something was happening is because I saw the size of the VM go up (slowly) on my host hard drive. This is on a 70Mbps internet connection so I don’t think this can be attributed to “island problems” either.

I’ve heard tell of issues setting up container support in a Hyper-V VM as well. That’s second-hand info as I’m using Fusion on a Mac rather than Hyper-V. If you run into problems setting it up on Hyper-V, consider switching to the instructions for setting up containers on non-Hyper-V VMs instead.

There’s also the Azure option. Microsoft was gracious enough to provide an Azure image for Windows Server 2016 pre-configured with container support. This works well if you’re on Azure, and I was able to run the nginx tutorial on it with no issues. I had less success with the IIS 10 tutorial, even locally: I could get it running, but I wasn’t able to create a new image based on the container I had.

It’s also a start

Technical issues aside, I haven’t been this excited about technology in Windows since…ASP.NET MVC, I guess, if my tag cloud is to be believed. And since this is a technical preview designed to garner feedback, here’s what I want to see in the Windows container world:

Docker client and PowerShell support

I love that I can use the Docker client to work with Windows containers. I can leverage what I’ve already learned with Docker in Linux. But I also love that I can spin up containers with PowerShell so I don’t need to mix technologies in a continuous integration/continuous deployment environment if I already have PowerShell scripts set up for other aspects of my process.
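For example, the same Windows Server Core container can be spun up either way. Treat this as a sketch: the docker image name is the one TP3 ships with, but the PowerShell cmdlet names and parameters below are from my memory of the quick-start docs, so verify them against your build.

    # Docker client: launch a cmd shell in a new container from the TP3 base image
    docker run -it --name demo windowsservercore cmd

    # PowerShell Containers module: roughly the equivalent (cmdlet names as I recall them)
    $container = New-Container -Name "demo" -ContainerImageName "WindowsServerCore"
    Start-Container $container
    Enter-PSSession -ContainerId $container.ContainerId -RunAsAdministrator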

Support for legacy .NET applications

I can’t take credit for this one. I’ve been talking with Gabriel Schenker about containers a lot lately, and it was he who suggested that Windows containers need to support .NET 4, .NET 3.5, and even .NET 2.0. It makes sense, though. There are a lot of .NET apps out there, and it would be a shame if they couldn’t take advantage of containers.

Smooth local development

Docker Machine is great for getting up and running fast on a local Windows VM. To fully take advantage of containers, devs need to be able to work with them locally with no friction, whether that means a Windows Container version of Docker Machine or the ability to work with containers natively in Windows 10.
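For reference, the friction-free path on the Linux side currently looks something like this (the machine name is arbitrary); I’d like the Windows container story to feel this effortless:

    # Create a local Linux Docker host in VirtualBox and point the client at it
    docker-machine create --driver virtualbox dev
    eval "$(docker-machine env dev)"

    # From here, containers run against that local VM
    docker run -d -p 80:80 nginx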

ARM support

At Western Devs, we have a PowerShell script that will spin up a new Azure Linux virtual machine, install Docker, create a container, and run our website on it. It goes without saying (even though I’m saying it) that I’d like to do the same with Windows containers.

Lots of images out of the gate

I’d like to wean myself off VMs a little. I picture a world where I have one base VM and use various containers for the different pieces of the app I’m working on: a SQL Server container, an IIS container, an ElasticSearch container, possibly even a Visual Studio container. I pick and choose which containers I need to build up my dev environment and use just one VM (or a small handful).

In the meantime, I’m excited enough about Windows containers that I hope to incorporate a small demo of them into my talk at MeasureUP in a few scant weeks, so if you’re in the Austin area, come on by to see it.

It is a glorious world ahead in this space and it puts a smile on this hillbilly’s face to see it unfold.

Kyle the Barely Contained


Docker on Western Devs

In a month, I’ll be attempting to hound my share of glory at MeasureUP with a talk on using Docker for people who may not think it impacts them. In it, I’ll demonstrate some uses of Docker today in a .NET application. As I prepare for this talk, there’s one thing we Western Devs have forgotten to talk about. Namely, some of us are already using Docker regularly just to post on the site.

Western Devs uses Jekyll. Someone suggested it, I tried it, it worked well, decision made. Except that it doesn’t work well on Windows. It’s not officially supported on the platform, and while there’s a good guide on getting it running, we haven’t been able to do so ourselves. Some issue with a gem we’re using and Nokogiri and libxml2 and some such nonsense.

So in an effort to streamline things, Amir Barylko created a Docker image. It’s based on the Ruby base image (version 2.2). After grabbing the base image, it will do the following (see the Dockerfile sketch after this list):

  • Install some packages for building Ruby
  • Install the bundler gem
  • Clone the source code into the /root/jekyll folder
  • Run bundle install
  • Expose port 4000, the default port for running Jekyll
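I haven’t dissected the actual Dockerfile, but based on the steps above it would look roughly like the following; the repository URL and the package list are placeholders, not the real values:

    FROM ruby:2.2

    # Packages typically needed to build native gems (placeholder list)
    RUN apt-get update && apt-get install -y build-essential nodejs

    RUN gem install bundler

    # Clone the site source into /root/jekyll (placeholder URL)
    RUN git clone https://github.com/example/western-devs.git /root/jekyll
    WORKDIR /root/jekyll

    RUN bundle install

    # Expose the default Jekyll port
    EXPOSE 4000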

With this in place, Windows users can run the website locally without having to install Ruby, Python, or Jekyll. The command to launch the container is:

docker run -t -p 4000:4000 -v //c/path/to/code:/root/jekyll abarylko/western-devs:v1 sh -c 'bundle install && rake serve'

This will:

  • create a container based on the abarylko/western-devs:v1 image
  • expose port 4000 on the host VM
  • map the path to the source code on your machine to /root/jekyll in the container
  • run bundle install && rake serve to update gems and launch Jekyll in the container

To make this work 100%, you also need to expose port 4000 in VirtualBox so that the port is forwarded from the VM to the host (one way to do that is shown below). Also, I’ve had trouble getting a container working with my local source located anywhere except C:\Users\myusername. There’s a permission issue somewhere in there: the container appears to successfully map the drive but can’t actually see the contents of the folder, which manifests itself in an error message saying the Gemfile can’t be found.
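If your Docker host is a VirtualBox VM (as it is with Docker Machine or boot2docker), one way to forward the port is a NAT rule added to the running VM; the VM name here is a placeholder for whatever yours is called:

    # Forward host port 4000 to guest port 4000 on the running VM
    VBoxManage controlvm "default" natpf1 "jekyll,tcp,,4000,,4000"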

Now, Windows users can navigate to localhost:4000 and see the site running locally. Furthermore, they can add and make changes to their posts, save them, and the changes will be reflected in the browser. Eventually, that is. I’ve noticed a 10-15 second delay between the time you press Save and the time the changes actually show up. Haven’t determined a root cause for this yet. Maybe we just need to soup up the VM.

So far, this has been working reasonably well for us. To the point where fellow Western Dev Dylan Smith has automated the deployment of the image to Azure via a PowerShell script. That will be the subject of a separate post. Which will give me time to figure out how the thing works.



Docker is coming whether you like it or not

I’m excited about Docker. Unnaturally excited, one might say. So much so that I’ll be talking about it at MeasureUP this September.

For the time being, I have to temper my enthusiasm because Docker is still a Linux-only concern. Yes, you can run Docker containers on Windows, but only Linux-based ones. So no SQL Server and no IIS.

But you can’t stop a hillbilly from dreaming of a world of containers. So with a grand assumption that you know what Docker is roughly all about, here’s what this coder of the earth meditates on, Docker-wise, before going to sleep.


Microservices

Microservices are a hot topic these days. We’ve talked about them at Western Devs already, and Donald Belcham has a good and active list of resources. Docker is such an eerily natural fit for microservices that one might think it was created specifically to facilitate the architecture. You can package your entire service into a container and deploy that container as a single unit to your production server.

I don’t think you can overstate the importance of a technology like Docker when it comes to microservices. Containers are so lightweight and portable that you naturally gravitate toward the pattern through normal use of them. I can see a time in the near future when it’s almost negligent not to use microservices with Docker. At least in the Windows world; this might already be the case in Linux.

Works On My Machine

Ah, the crutch of the developer and the bane of DevOps. You set it up so nicely on your machine, with all your undocumented config entries and custom permissions and the fladnoogles and the whaztrubbets and everything else required to get everything perfectly balanced. Then you get your first bug from QA: can’t log in.

But what if you could test your deployment on the exact same image that you deployed to? Furthermore, what if, when a bug came in that you can’t reproduce locally, you could download the exact container where it was occurring? NO MORE EXCUSES, THAT’S WHAT!

Continuous Integration Build Agents

On one project, we had a suite of UI tests that took nigh on eight hours to run in TeamCity. We optimized as much as we could and shaved the time down, but parallelizing the tests would have taken a lot of effort to set up the appropriate shared resources and configurations. Eventually, we set up multiple virtual machines so that the entire parallel test run could finish in about an hour and a half. But the total test time of all those runs, if executed sequentially, is now almost ten hours, and my working theory is that the difference comes from the overhead of running the VMs on the host machine. Build agents running in containers instead of full VMs could cut much of that overhead.

Offloading services

What I mean here is kind of like microservices applied to the various components of your application. You have an application that needs a database, a queue, a search component, and a cache. You could spin up a VM and install all those pieces on it. Or you could run a Postgres container, a RabbitMQ container, an ElasticSearch container, and a Redis container, and leave your machine solely for the code.
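On the Linux side, this is already just a handful of one-liners against the official Docker Hub images (the container names are simply what I’d pick):

    # One container per backing service, each on its default port
    docker run -d --name db     -p 5432:5432 postgres
    docker run -d --name queue  -p 5672:5672 rabbitmq
    docker run -d --name search -p 9200:9200 elasticsearch
    docker run -d --name cache  -p 6379:6379 redis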

When it comes right down to it, Docker containers are basically practical virtual machines. I’ve used VMs for many years. When I first started out, it was VMware Workstation on Windows. People that are smarter than me (including those that would notice that I should have said “smarter than I”) told me to use them. “One VM per client,” they would say. To the point that their host OS was limited to checking email and Twitter clients.

I tried that and didn’t like it. I didn’t like waiting for the boot process on both the host and each client and I didn’t like not taking full advantage of my host’s hardware on the specific client I happened to be working on at that moment.

But containers are lightweight. Purposefully so. Delightfully so. As I speak, the overworked USB drive that houses my VMs is down to 20 GB of free space. I cringe at the idea of having to spin up another one. But the idea of a dozen containers I can pick and choose from, all under a GB? That’s a development environment I can get behind.

Alas, this is mostly a future world I’m discussing. Docker is Linux-only and I’m in the .NET space. So I have to wait until either a) ASP.NET is ported over to Linux, or b) Docker supports Windows-based containers. And it’s a big part of my excitement that BOTH of those conditions will likely be met within a year.

In the meantime, who’s waiting? Earlier, I mentioned Postgres, Redis, ElasticSearch, and RabbitMQ. Those all work with Windows regardless of where they’re actually running. Furthermore, Azure already has pre-built containers with all of these.

Much of this will be the basis of my talk at the upcoming MeasureUP conference next month. So…uhhh….don’t read this until after that.


Microservices, or “How to spread the love”

For some time, people have been talking about microservices. I say “some time” for two reasons: 1) It’s a good opening line, and 2) I have no clue how long people have been talking about them. I just heard the term for the first time about four months ago. So if I start talking about them now, while I still know virtually nothing, I can get at least two more future posts on the subject talking about how I was doing it wrong in the beginning.

In the meantime, I have been talking about them quite a bit recently. We’ve been using them on a project at Clear Measure and I’d like to think it’s been successful but it’s too soon to tell. I feel good about what we’ve done which, historically, has always been a good metric for me.

The topic has been covered at a technical and architectural level pretty well by Martin Fowler, so much so that he’s even collected his discussions into a nice little Microservices Resource Guide. In it, he and other ThoughtWorkians define them (to the extent that anything in software containing the word “services” can be defined), point out pros and cons compared to monolithic applications, describe testing strategies, and cover off the major success stories in the space.

That doesn’t leave much ground for me to cover which, from a marketing standpoint, is almost surely the point. But I would like to add my voice if for no other reason than to plug the podcast on the subject.

Check out the Western Devs podcast on microservices

One of the more interesting links in Fowler’s Resource Guide is tucked away at the bottom. It’s a series of posts on how SoundCloud is migrating from a monolith to microservices. Part 1 discusses how they stopped working on the monolith and performed all new work in new microservices, and part 2 is on how they split the monolith up into microservices. There were challenges in both cases, leading to other architectural decisions like event sourcing.

The arguments for and against are, predictably, passionate and academic. “Overkill!” you say. “Clean boundaries!” sez I. “But…DevOps!” you counter. “Yes…DevOps!” I respond. But SoundCloud’s experience, to me, is the real selling point of microservices. Unlike Netflix and Amazon, it’s a scale that is still relatable to many of us. We can picture ourselves in the offices there making the same decisions they went through and running up against the same problems. These guys have BEEN THERE, man! Not moving to microservices because they have to but because they had a real problem and needed a solution.

Now if you read the posts, there’s a certain finality to them. “We ran into this problem so we solved it by doing X.” What’s missing from the narrative is doubt. When they ran into problems that required access to an internal API, did anyone ask if maybe they defined the boundaries incorrectly? Once event sourcing was introduced, was there a question of whether they were going too far down a rabbit hole?

That’s not really the point of these posts, which is merely to relay the decision factors to see if it’s similar enough to your situation to warrant an investigation into microservices. All the same, I think this aspect is important for something still in its relative infancy, because there are plenty of people waiting to tell you “I told you so” as soon as you hit your first snag. Knowing SoundCloud ran into the same doubt can be reassuring. Maybe I’m just waiting for Microservices: The Documentary.

Regardless, there are already plenty of counter-arguments (or, more accurately, counter-assumptions) to anecdotal evidence like this. Maybe the situation isn’t the same. They have infrastructure. They have money and time to rewrite. They have confident, “talented” developers who always know how to solve architectural problems the right way.

So now I’ve more or less done what I always do when I talk microservices, which is talk myself into a corner. Am I for ‘em or agin ‘em? And more importantly, should you, reader, use them?

The answer is: absolutely, of course, and yes. On your current project? That’s a little murkier. The experience is there and microservices have been done successfully. It’s still a bit of a wild west which can be exciting if you ain’t much for book learnin’. But “exciting” isn’t always the best reason to decide on an architecture if someone else is paying the bills. As with any architectural shift, you have to factor in the human variables in your particular project.

From my limited experience, I like them. They solve one set of problems nicely and introduce a new set of problems that are not only tractable but fun, in this hillbilly’s opinion.

And why else did you get into the industry if not to have fun?


Outside the shack, or “How to be a technology gigolo”

Almost four years ago, I waxed hillbilly on how nice it was to stick with what you knew, at least for side projects. At the time, my main project was Java and my side projects were .NET. Now, my main project is .NET and for whatever reason, I thought it would be nice to take on a side project.

The side project is Western Devs, a fairly tight-knit community of developers of similar temperament but only vaguely similar backgrounds. It’s a fun group to hang out with online and in person, and at one point someone thought, “Wouldn’t it be nice to build ourselves a website and have Kyle manage it while we lob increasingly ridiculous feature requests at him from afar?”

Alas, I suffer from an unfortunate condition I inherited from my grandfather on my mother’s side called “Good Idea At The Time Syndrome” wherein one sees a community in need and charges in to make things right and damn the consequences on your social life because dammit, these people need help! The disease is common among condo association members and school bus drivers. Regardless, I liked the idea and we’re currently trying to pull it off.

The first question: what do we build it in? WordPress was an option we came up with early so we could throw it away as fast as possible. Despite some dabbling, we’re all more or less entrenched in .NET so an obvious choice was one of the numerous blog engines in that space. Personally, I’d consider Miniblog only because of its author.

Then someone suggested Jekyll hosted on GitHub Pages due to its simplicity. “Simplicity” wasn’t a word I usually associated with hosting a blog, especially one in .NET, so I decided to give it a shot.

Cut to about a month later, and the stack consists of:

  • GitHub Pages
  • Jekyll
  • Octopress
  • Markdown
  • Slim
  • SASS
  • the Minimal Mistakes theme
  • Rake
  • Travis

Of these, the one and only technology I had any experience with was Rake, which I used to automate UI tests at BookedIN. The rest, including Markdown, were foreign to me.

And Lord Tunderin’ Jayzus, I cannot believe how quickly stuff came together. With GitHub Pages and Jekyll, infrastructure is all but non-existent. Octopress means no database, just file copying. Markdown, Slim, and SASS have allowed me to scan and edit content files more easily than with plain HTML and CSS. The Minimal Mistakes theme added so much built-in polish that I’m still finding new features in it today.

The most recent addition, and the one that prompted this post, was Travis. I’m a TeamCity guy and have been for years. I managed the TeamCity server for CodeBetter for many moons, and on a recent project I had 6 agents running a suite of UI tests in parallel. So when I finally got fed up enough with our deploy process (one can type `git pull origin source && rake site:publish` only so many times), TeamCity was the first hammer* I reached for.

One thing to note: I’ve been doing all my development so far on a MacBook. My TeamCity server is on Windows. I’ve done Rake and Ruby stuff on the CI server before without too much trouble, but I still cringe inwardly whenever I have to set up builds involving technology where the readme says “Technically, it works on Windows”. As it is, I have an older version of Ruby on the server that’s still required for another project; on Windows, Jekyll requires Python (but not the latest version); I needed to install a later version of DevKit; etc., etc., and so on and so forth.

A couple of hours later, I had a build created and running with no infrastructure errors. Except that it hung somewhere. No indication why in the build logs, and at that moment my 5-year-old said, “Dad, let’s play hockey”, which sounded less frustrating than having to set up a local Windows environment to debug the problem.

After a rousing game where I schooled the kid 34-0, I left him with his mother to deal with the tears and sat down to tackle the CI build again. At this point, it occurred to me I could try something non-Windows-based. That’s where Travis came in (on a suggestion from Dave Paquette, who I want to say is also the one who suggested Jekyll, but I might be wrong).

Fifteen minutes. That’s how long it took to get my first (admittedly failing) build to run. It was frighteningly easy. I just had to hand over complete access to my GitHub repo, add a config file, and it virtually did the rest for me.

Twenty minutes later, I had my first passing build, which only built the website. Less than an hour after that, our dream of continuous deployment was realized. No mucking with gems, no installing frameworks over RDP. I updated a grand total of four files: .travis.yml, _config.yml, Gemfile, and rakefile. And now, whenever someone checks into the `source` branch, I am officially out of the loop. I had to do virtually nothing on the CI server itself, including setting up the Slack notifications.
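I won’t reproduce our exact config, but a minimal .travis.yml for a Jekyll site along these lines might look like the following; the rake task name and the Slack account/token are placeholders rather than our real values:

    language: ruby
    rvm:
      - 2.2
    branches:
      only:
        - source
    script: bundle exec rake build   # placeholder task name
    notifications:
      slack: your-account:your-token   # placeholder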

This is a long-winded contradiction of my post of four years ago where my uncertainty with Java drove me to the comfort of .NET. And to keep perspective, this isn’t exactly a mission critical, LOB application. All the same, for someone with 15-odd years of .NET experience under his obi, I’d be lying if I said I wasn’t amazed at how quickly one can put together a functional website for multiple authors with non-Microsoft technology you barely have passing knowledge of.

To be clear, I’m fully aware of what people say about these things. I know Ruby is a fun language and I feel good about myself whenever I do anything substantial with it. And I know Markdown is all the rage with the kids these days. It’s not really one technology on its own that made me approach epiphaniness. It’s the way all the tools and libraries intermingle so well. Which has this optimistic hillbilly feeling like his personal life and professional life are starting to mirror each other.

Is there a lesson in here for others? I hope so, as it would justify me typing all this out and clicking publish (er, committing to the repository). But mostly, like everything else, I’m just happy to be here. As I’ve always said, if you’ve learned anything, that’s your fault, not mine.

Kyle the Coalescent

* With credit to Brendan Enrick’s and Steve Smith’s Software Craftsmanship Calendar 2016
