Save your DNS and your SANITY when using VPN on a Mac (without rebooting)

There was a time when using my Mac was bliss from a DNS perspective. I never had to worry about my routing tables getting corrupted, I could always rely on hosts getting resolved, and life was good! And then a combination of things happened, and well, those good old days are gone :-(

  • The networking stack on OS X went downhill.
  • I joined Splunk.
  • I started using a VPN on my Mac (we use Juniper SSL VPN).
  • I started having to deal with a now-recurring nightmare: my DNS suddenly failing, generally after using the VPN.

If you use a VPN on a Mac, I am sure you’ve seen it. Suddenly you type “github.com” in your browser, and you get a 404. “Is Github down?” you ask your co-workers. “Nope, works perfectly fine for me.” “Is HipChat down?” “Nope, I am chatting away.”

Meanwhile, your browser looks something like this:

[Screenshot: Screen Shot 2015-10-01 at 7.08.31 AM]

So you reboot, and then you find out that Github was up all along. The problem was that your routing tables somehow got screwed up by the VPN; either that, or the DNS demons have taken over your machine!

After dealing with this constantly, you start to seriously lose your sanity! And it will, of course, always happen at the most inopportune time, like when you are about to present to your execs or walk on stage!

But my friends, there is hope: I have a cure! It's one I learned from the wise ninjas at my office (thank you, Danielle and Itay!), and it is a little bash alias that will save you AND your DNS. Drop it in your .bash_profile and open a new terminal.

alias fixvpn="sudo route -n flush && sudo networksetup -setv4off Wi-Fi && sudo networksetup -setdhcp Wi-Fi"
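
For the curious, here's the alias broken into its three steps (this assumes your wireless service is named "Wi-Fi"; run networksetup -listallnetworkservices to check what yours is called):

sudo route -n flush                  # flush the routing table, including stale routes left behind by the VPN
sudo networksetup -setv4off Wi-Fi    # drop the IPv4 configuration on the Wi-Fi service
sudo networksetup -setdhcp Wi-Fi     # re-enable DHCP, pulling fresh routes and DNS servers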

Next time the DNS demons come to get you, run this baby from the shell. It will excommunicate those demons, but quick.

[Screenshot: Screen Shot 2015-10-01 at 7.09.48 AM]

Wait a few seconds, and bring up that webpage again.

[Screenshot: Screen Shot 2015-10-01 at 7.09.57 AM]

Your DNS and sanity are restored!


Windows Server Containers are coming whether you like it or not

Shortly after I posted giddily about Docker in the Windows world, Microsoft released Windows Server 2016 Technical Preview 3 with container support. I’ve had a chance to play with it a little, so let’s see where this goes…

It’s a preview

Like movie previews, this is equal parts exciting and frustrating. Exciting because you get a teaser of things to come. Frustrating because you just want it to work now. And extra frustration points for various technical issues I’ve run into that, I hope, are due to the “technical preview” label.

For example, installing container support into an existing VM is mind-numbingly slow. Kudos to the team for making it easy to install, but at the point where you run ContainerSetup.ps1, be prepared to wait, by my watch, at least 45 minutes without any visual indication that something is happening. The only reason I knew something was happening is that I saw the size of the VM go up (slowly) on my host hard drive. This is on a 70 Mbps internet connection, so I don’t think this can be attributed to “island problems” either.

I’ve heard tell of issues setting up container support in a Hyper-V VM as well. That’s second-hand info, as I’m using Fusion on a Mac rather than Hyper-V. If you run into problems setting it up on Hyper-V, consider switching to the instructions for setting up containers in non-Hyper-V VMs instead.

There’s also the Azure option. Microsoft was gracious enough to provide an Azure image for Windows Server 2016 pre-configured with container support. This works well if you’re on Azure, and I was able to run the nginx tutorial on it with no issues. I had less success with the IIS 10 tutorial, even locally: I could get it running but was not able to create a new image based on the container I had.

It’s also a start

Technical issues aside, I haven’t been this excited about technology in Windows since…ASP.NET MVC, I guess, if my tag cloud is to be believed. And since this is a technical preview designed to garner feedback, here’s what I want to see in the Windows container world.

Docker client and PowerShell support

I love that I can use the Docker client to work with Windows containers. I can leverage what I’ve already learned with Docker in Linux. But I also love that I can spin up containers with PowerShell, so I don’t need to mix technologies in a continuous integration/continuous deployment environment if I already have PowerShell scripts set up for other aspects of my process.
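
For instance, the same Windows Server Core container can be started from either toolchain. A minimal sketch on the Docker side (the windowsservercore image name comes from the TP3 preview and may well change):

docker run -it --name demo windowsservercore cmd    # interactive cmd.exe inside a Windows Server Core container

On the PowerShell side, the preview ships a Containers module (New-Container and friends) that covers the same ground, which is what makes the mix-and-match tooling story possible.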

Support for legacy .NET applications

I can’t take credit for this one. I’ve been talking with Gabriel Schenker about containers a lot lately, and it was he who suggested they need to support .NET 4, .NET 3.5, and even .NET 2.0. It makes sense, though. There are a lot of .NET apps out there, and it would be a shame if they couldn’t take advantage of containers.

Smooth local development

Docker Machine is great for getting up and running fast on a local Windows VM. To fully take advantage of containers, devs need to be able to work with them locally with no friction, whether that means a Windows Container version of Docker Machine or the ability to work with containers natively in Windows 10.
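
For reference, this is how low the friction already is on the Linux side with Docker Machine (a sketch; assumes VirtualBox is installed):

docker-machine create --driver virtualbox dev    # create a small Linux VM that runs the Docker daemon
eval "$(docker-machine env dev)"                 # point the local docker client at that VM

Something equally simple for Windows containers would go a long way.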

ARM support

At Western Devs, we have a PowerShell script that will spin up a new Azure Linux virtual machine, install docker, create a container, and run our website on it. It goes without saying (even though I’m saying it) that I’d like to do the same with Windows containers.
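
For context, an alternative to our PowerShell script on the Linux side is Docker Machine’s azure driver (a hedged sketch; the flags are from the 2015-era driver and the values are placeholders):

docker-machine create --driver azure \
    --azure-subscription-id "<subscription-id>" \
    --azure-subscription-cert mycert.pem \
    westerndevs-vm

I’d like the Windows-container equivalent of that one-liner.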

Lots of images out of the gate

I’d like to wean myself off VMs a little. I picture a world where I have one base VM and I use various containers for the different pieces of the app I’m working on: e.g., a SQL Server container, an IIS container, an ElasticSearch container, possibly even a Visual Studio container. I pick and choose which containers I need to build up my dev environment and use just one VM (or a small handful).
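
In docker terms, the daydream looks something like this (every windows/* image name here is hypothetical; none of them exist yet):

docker run -d --name sql    windows/sqlserver      # hypothetical SQL Server image
docker run -d --name web    windows/iis            # hypothetical IIS image
docker run -d --name search elasticsearch          # exists today, Linux-based
docker run -d --name vs     windows/visualstudio   # hypothetical Visual Studio image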

In the meantime, I’m excited enough about Windows containers that I hope to incorporate a small demo of them in my talk at MeasureUP in a few scant weeks. So if you’re in the Austin area, come on by to see it.

It is a glorious world ahead in this space and it puts a smile on this hillbilly’s face to see it unfold.

Kyle the Barely Contained


Code the Town! Investing in the next generation of programmers in Austin, TX

Austin, TX is a hot-bed for technology. You can find a user group for just about any technology and purpose meeting almost any day of the week.

And now there is a group at the intersection of giving back to the community and helping the next generation of programmers. Code the Town is a group that does just that. Clear Measure, and other companies, are sponsors of the group. The official description is:

“This is a group for anyone interested in volunteering to teach Hour of Code in the Austin and surrounding area school districts. The goal is to get community volunteers to give the age appropriate Hour of Code to every student at every grade level. We want to have our own community prepare students for a technology-based workforce. We also want to build a community of professionals and students that have a passion for coding and teaching. We want to begin the Hour of Code in the high schools first. High school students would then be prepared to teach the younger students.  Once this group has momentum, it will be able to form motivated teams and use software projects done for local non-profit organizations to not only reinvest in our community but also to help our youth gain experience in software engineering.  Whether you are a student, parent, educator, or software professional, please join our Meet Up! This will be fun! And it will have a profound impact on the next generation.”

The long term vision is to create a sustainable community of professionals, educators, parents, and students that continually gives back to local community organizations through computers and technology while continually pulling the next generation of students into computer programming.

It all starts with some volunteers to teach students the basics of computer programming. In the 1990s, the web changed the world. Now we have hand-held smartphones and other devices (TVs, bathroom scales, etc.) that are connected to computer systems via the internet. In the next decade, almost every machine will be connected to computer systems, and robotics will be a merging of mechanical engineering and computer science. Those who know how to write computer code will have a big advantage in a workforce where the divide between those who build/create and those who service what is created may grow even bigger than it already is.

Code the Town will focus on introducing students to computer programming and then pulling them together with their parents, their teachers, and willing community professionals to work on real software projects for local non-profits. In this fashion, everyone gets something, everyone gives something, and everyone benefits. If you are interested in this vision, please come to the first meeting of Code the Town by signing up for the Meetup group.


Docker on Western Devs

In a month, I’ll be attempting to hound my share of glory at MeasureUP with a talk on using Docker for people who may not think it impacts them. In it, I’ll demonstrate some uses of Docker today in a .NET application. As I prepare for this talk, there’s one thing we Western Devs have forgotten to talk about. Namely, some of us are already using Docker regularly just to post on the site.

Western Devs uses Jekyll. Someone suggested it, I tried it, it worked well, decision done. Except that it doesn’t work well on Windows. It’s not officially supported on the platform, and while there’s a good guide on getting it running, we haven’t been able to do so ourselves. Some issue with a gem we’re using and Nokogiri and libxml2 and some such nonsense.

So in an effort to streamline things, Amir Barylko created a Docker image. It’s based on the Ruby base image (version 2.2). After grabbing the base image, it will do the following (reconstructed as a Dockerfile sketch after this list):

  • Install some packages for building Ruby
  • Install the bundler gem
  • Clone the source code into the /root/jekyll folder
  • Run bundle install
  • Expose port 4000, the default port for running Jekyll
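
Pieced together from those steps, the Dockerfile presumably looks something like this (a sketch only; the repository URL is a placeholder, not the real one):

FROM ruby:2.2

# Packages typically needed to compile native gem extensions
RUN apt-get update && apt-get install -y build-essential

# Bundler manages the site's gem dependencies
RUN gem install bundler

# Pull the site source into the container (replace with the real Western Devs repo URL)
RUN git clone <western-devs-repo-url> /root/jekyll
WORKDIR /root/jekyll
RUN bundle install

# Jekyll's default port
EXPOSE 4000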

With this in place, Windows users can run the website locally without having to install Ruby, Python, or Jekyll. The command to launch the container is:

docker run -t -p 4000:4000 -v //c/path/to/code:/root/jekyll abarylko/western-devs:v1 sh -c 'bundle install && rake serve'

This will:

  • create a container based on the abarylko/western-devs:v1 image
  • export port 4000 to the host VM
  • map the path to the source code on your machine to /root/jekyll in the container
  • run bundle install && rake serve to update gems and launch Jekyll in the container

To make this work 100%, you also need to expose port 4000 in VirtualBox so that it’s visible from the VM to the host, as shown below. Also, I’ve had trouble getting a container working with my local source located anywhere except C:\Users\myusername. There’s a permission issue somewhere in there where the container appears to successfully map the drive but can’t actually see the contents of the folder. This manifests itself in an error message that says the Gemfile can’t be found.
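
That port forward can also be added from the command line (a sketch; assumes your VirtualBox VM is named "default" and is running):

VBoxManage controlvm "default" natpf1 "jekyll,tcp,127.0.0.1,4000,,4000"   # forward host port 4000 to guest port 4000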

Now, Windows users can navigate to localhost:4000 and see the site running locally. Furthermore, they can add and change their posts, save them, and the changes will get reflected in the browser. Eventually, that is. I’ve noticed a 10-15 second delay between the time you press Save and the time the changes actually get reflected. I haven’t determined a root cause for this yet. Maybe we just need to soup up the VM.

So far, this has been working reasonably well for us. To the point where fellow Western Dev Dylan Smith has automated the deployment of the image to Azure via a PowerShell script. That will be the subject of a separate post, which will give me time to figure out how the thing works.



Docker is coming whether you like it or not

I’m excited about Docker. Unnaturally excited, one might say. So much so that I’ll be talking about it at MeasureUP this September.

In the meantime, I have to temper my enthusiasm because Docker is still a Linux-only concern. Yes, you can run Docker containers on Windows, but only Linux-based ones. So no SQL Server and no IIS.

But you can’t stop a hillbilly from dreaming of a world of containers. So with a grand assumption that you know what Docker is roughly all about, here’s what this coder of the earth meditates on, Docker-wise, before going to sleep.


Microservices

Microservices are a hot topic these days. We’ve talked about them at Western Devs already, and Donald Belcham has a good and active list of resources. Docker is such an eerily natural fit for microservices that one might think it was created specifically to facilitate the architecture. You can package your entire service into a container and deploy it as a single package to your production server.

I don’t think you can overstate the importance of a technology like Docker when it comes to microservices. Containers are so lightweight and portable that you naturally gravitate to the pattern through normal use of containers. I can see a time in the near future where it’s almost negligent not to use Docker for microservices. At least in the Windows world; this might already be the case in Linux.

Works On My Machine

Ah, the crutch of the developer and the bane of DevOps. You set it up so nicely on your machine, with all your undocumented config entries and custom permissions and the fladnoogles and the whaztrubbets and everything else required to get everything perfectly balanced. Then you get your first bug from QA: can’t log in.

But what if you could test your deployment on the exact same image that you deployed to? Furthermore, what if, when a bug came in that you can’t reproduce locally, you could download the exact container where it was occurring? NO MORE EXCUSES, THAT’S WHAT!
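
A sketch of what that workflow could look like (the registry and tag names here are hypothetical):

docker pull registry.example.com/myapp:build-1423            # grab the exact image QA tested against
docker run -it registry.example.com/myapp:build-1423 bash    # poke around inside the environment where the bug occurred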

Continuous Integration Build Agents

On one project, we had a suite of UI tests which took nigh-on eight hours in TeamCity. We optimized as much as we could but only shaved off so much. Parallelizing them would have meant a lot of effort to set up the appropriate shared resources and configurations. Eventually, we set up multiple virtual machines so that the entire parallel test run could finish in about an hour and a half. But the total test time of all those runs, sequentially, is now almost ten hours, and my working theory is that it’s due to the overhead of the VMs on the host machine. Build agents in lightweight containers could give us the same parallelism without that VM tax.

Offloading services

What I mean here is kind of like microservices applied to the various components of your application. You have an application that needs a database, a queue, a search component, and a cache. You could spin up a VM and install all those pieces. Or you could run a Postgres container, a RabbitMQ container, an ElasticSearch container, and a Redis container, and leave your machine solely for the code.
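
The nice part is that this half of the dream works today; official images exist for all four (each mapped to its default port here):

docker run -d --name db     -p 5432:5432 postgres        # PostgreSQL
docker run -d --name queue  -p 5672:5672 rabbitmq        # RabbitMQ
docker run -d --name search -p 9200:9200 elasticsearch   # ElasticSearch
docker run -d --name cache  -p 6379:6379 redis           # Redis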

When it comes right down to it, Docker containers are basically practical virtual machines. I’ve used VMs for many years. When I first started out, it was VMware Workstation on Windows. People that are smarter than me (including those that would notice that I should have said “smarter than I”) told me to use them. “One VM per client,” they would say. To the point that their host was limited to checking email and Twitter clients.

I tried that and didn’t like it. I didn’t like waiting for the boot process on both the host and each client and I didn’t like not taking full advantage of my host’s hardware on the specific client I happened to be working on at that moment.

But containers are lightweight. Purposefully so. Delightfully so. As I speak, the overworked USB drive that houses my VMs is down to 20 GB of free space. I cringe at the idea of having to spin up another one. But the idea of a dozen containers I can pick and choose from, all under a GB? That’s a development environment I can get behind.

Alas, this is mostly a future world I’m discussing. Docker is Linux only and I’m in the .NET space. So I have to wait until either: a) ASP.NET is ported over to Linux, or b) Docker supports Windows-based containers. And it’s a big part of my excitement that BOTH of those conditions will likely be met within a year.

In the meantime, who’s waiting? Earlier, I mentioned Postgres, Redis, ElasticSearch, and RabbitMQ. Those all work with Windows regardless of where they’re actually running. Furthermore, Azure already has pre-built containers with all of these.

Much of this will be the basis of my talk at the upcoming MeasureUP conference next month. So…uhhh….don’t read this until after that.
