
Why can’t you just communicate properly?

Online communication bugs me. Actually, bugs isn’t accurate. Maybe saddens and fatigues. When volleying with people hiding behind their keyboard shield and protected by three timezones, you have to make a conscious effort to remain optimistic. It’s part of the reason I haven’t taken to Twitter as much as I probably should.

I’ve talked on this subject before and it’s something I often have in the back of my mind when reading comments. It’s come to the forefront recently with some conversations we’ve had at Western Devs, which led to our most recent podcast. I wasn’t able to attend so here I am.

There are certain phrases you see in comments that automatically seem to devolve a discussion. They include:

  • “Why don’t you just…”
  • “Sorry but…”
  • “Can’t you just…”
  • “It’s amazing that…”

Ultimately, all of these phrases can be summarized as follows:

I’m better than you and here’s why…

In my younger years, I could laugh this off amiably and say “Oh this wacky world we live in”. But I’m turning 44 in a couple of days and it’s time to start practicing my crotchetiness, even if it means complaining about people being crotchety.

So to that end: I’m asking, nay, begging you to avoid these and similar phrases. This is for your benefit as much as the reader’s. These phrases don’t make you sound smart. Once you use them, it’s very unlikely anyone involved will feel better about themselves, let alone engage in any form of meaningful discussion. Even if you have a valid point, who wants to be talked down to like that? Have you completely forgotten what it’s like to learn?

“For fuck’s sake, Mom, why don’t you just type the terms you want to search for in the address bar instead of typing WWW.GOOGLE.COM into Bing?”

Now I know (from experience) it’s hard to fight one’s innate sense of superiority and the overwhelming desire to make it rain down on the unwashed heathen. So take it in steps. After typing your comment, remove all instances of “just” (except when just means “recently” or “fair”, of course). The same probably goes for “simply”. It has more of a condescending tone than a dismissive one. “Actually” is borderline. Rule of thumb: Don’t start a sentence with it.

Once you have that little nervous tic under control, it’s time to remove the negatives. Here’s a handy replacement guide to get you started:

  • “Can’t you” → “Can you”
  • “Why don’t you” → “Can you”
  • “Sorry but” → no replacement; delete the phrase
  • “It’s amazing that…” → delete your entire comment and have a dandelion break

See the difference? Instead of saying “Sweet Zombie Jayzus, you must be the stupidest person on the planet for doing it this way”, you’ve changed the tone to “Have you considered this alternative?” In both instances, you’ve made your superior knowledge known but in the second, it’s more likely to get acknowledged. More importantly, you’re less likely to look like an idiot when the response is: “I did consider that avenue and here are legitimate reasons why I decided to go a different route.”

To be fair, sometimes the author of the work you’re commenting on needs to be knocked down a peg or two themselves. I have yet to meet one of these people who respond well to constructive criticism, let alone the destructive type I’m talking about here. Generally, I find they feel the need to cultivate an antagonistic personality but in my experience, they usually don’t have the black turtlenecks to pull it off. Usually, it ends up backfiring and their dismissive comments become too easy to dismiss over time.

Kyle the Inclusive

Originally posted to: http://www.westerndevs.com/communication/Why-can-t-you-just/

Migrating from Jekyll to Hexo

WesternDevs has a shiny new look thanks to graphic designer extraordinaire, Karen Chudobiak. When implementing the design, we also decided to switch from Jekyll to Hexo. Besides giving us the opportunity to learn NodeJS, the other main reason was Windows. Most of us use it as our primary OS and Jekyll doesn’t officially support it. There are instructions available from people who were obviously more successful at it than we were. And there are even simpler ones that I discovered during the course of writing this post and that I wish had existed three months ago.

Regardless, here we are and it’s already been a positive move overall, not least because the move to Node means more of us are available to help with the maintenance of the site. But it wasn’t without its challenges. So I’m going to outline the major ones we faced here, in the hopes that your decision will be more informed than ours was.

To preface this, note that I’m new to Node and in fact, this is my first real project with it. That said, I’m no expert in Ruby either, which is what Jekyll is written in. The short version of my first impressions: Jekyll feels more like a real product but I had an easier time customizing Hexo once I dug into it. Here’s the longer version:

DOCUMENTATION/RESOURCES

You’ll run into this very quickly. Documentation for Hexo is decent but incomplete. And once you start Googling, you’ll discover many of the resources are in Chinese. I found very quickly that there is a posts collection and that each post has a categories collection. But as to what these objects look like, I couldn’t tell. They aren’t arrays. And you can’t JSON.stringify them because they have circular references in them. util.inspect works but it’s not available everywhere.

MULTI-AUTHOR SUPPORT

By default, Hexo doesn’t support multiple authors. Neither does Jekyll, mind you, but we found a pretty complete theme that does. In Hexo, there’s a decent package that gets you partway there. It lets you specify an author ID on a post and it will attach a bunch of information to it. But you can’t, for example, get a full list of authors to list on a Who We Are page. So we created a separate data file for the authors. But we also haven’t figured out how to use that file to generate a .json file for the Featured section on the home page. So at the moment, we have author information in three places. Our temporary solution is to disallow anyone from joining or leaving Western Devs.

CUSTOMIZATION

If you go with Hexo and choose an existing theme, you won’t run into the same issues we did. Out of the box, it has good support for posts, categories, pagination, and even things like tags and aliases with the right plugins.

But we started from a design and were migrating from an existing site with existing URLs, and we had to make it all work. I’ve mentioned the challenge of multiple authors already. Another one: maintaining our URLs. Most of our posts aren’t categorized. In Jekyll, that means they show up at the root of the site. In Hexo, that’s not possible, at least at the moment; I suspect this is a bug. We eventually had to fork Hexo itself to maintain our existing URLs.

Another challenge: excerpts. In Jekyll, excerpts work like this: check the front matter for an excerpt. If one doesn’t exist, take the first few characters from the post. In Hexo, excerpts are empty by default. If you add a <!--more--> tag in your post, everything before it is considered the excerpt. If you specify an excerpt in your front matter, it’s ignored because there is already an excerpt property on your posts.
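For illustration, here’s how the more tag works in practice (a made-up post):

---
title: A Hypothetical Post
---

This opening paragraph becomes the excerpt.

<!--more-->

Everything below the tag appears only on the full post page.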

Luckily, there’s a plugin to address the last point. But that still left all the posts with no explicit excerpt, where we had relied on Jekyll pulling from the contents of the post.

So if you’re looking to veer from the scripted path, be prepared. More on this later in the “good parts” section.

OVERALL FEELING OF RAWNESS

This is more a culmination of the previous issues. It just feels like Hexo is a work-in-progress whereas Jekyll feels more like a finished product. There’s a strong community behind Jekyll and plenty of help. Hexo still has bugs that suggest it’s just not used much in the wild, like rendering headers with links in them. It makes the learning process a bit challenging because with Jekyll, if something didn’t work, I’d think “I’m obviously doing something wrong”. With Hexo, it’s “I might be doing something wrong, or there might be a bug”.


THE GOOD PARTS

I said earlier that the move to Hexo was positive overall and not just because I’m optimistic by nature. There are two key benefits we’ve gained just in the last two weeks.

GENERATION TIME

Hexo is fast, plain and simple. Our old Jekyll site took six seconds to generate. Doesn’t sound like much but when you’re working on a feature or tweaking a post, then saving, then refreshing, then rinsing, then repeating, that six seconds adds up fast. In Hexo, a full site generation takes three seconds. But more importantly, it is smart enough to do incremental updates while you’re working on it. So if you run hexo server, then see a mistake in your post, you can save it, and the change will be reflected almost instantly. In fact, it’s usually done by the time I’ve switched back to the browser.
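For reference, the day-to-day commands (standard Hexo CLI):

hexo clean     # wipe the generated files and cache
hexo generate  # full site generation; about three seconds for us
hexo server    # serve locally, regenerating incrementally as you save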

CONTRIBUTORS

We had logistical challenges with Jekyll, to the point where we had two methods for Windows users who wanted to contribute (i.e. add a post). One involved a Docker image and the other Azure ARM. Neither was ideal, as they took anywhere from seconds to minutes to refresh when you made changes. Granted, both methods furthered our collective knowledge of Docker and Azure, but they both kinda sucked for productivity.

That meant that realistically, only the Mac users really contributed to the maintenance of the site. And our Docker/Azure ARM processes were largely ignored as we would generally just test in production. I.e. create a post, check it in, wait for the site to deploy, make necessary changes, etc, etc.

With the switch to Hexo, we’ve had no fewer than five contributors to the site’s maintenance already. Hexo just works on Windows. And on Mac. Best of both worlds.

CUSTOMIZATION

This is listed under the challenges but ever the optimist, I’m including it here as well. We’ve had to make some customizations for our site, including forking Hexo itself. And for me personally, once I got past the “why isn’t this working the way I want?” stage, it’s been a ton of fun. It’s crazy simple to muck around in the node modules to try stuff out. And just as simple to fork something and reference it in your project when the need arises. I mentioned an earlier issue rendering links in headers. No problem: we just swapped out the markdown renderer for another one. And if that doesn’t work, we’ll tweak something until it does.


I want to talk more on specific conversion issues we ran into as a guide for those following in our footsteps. But there are enough of them to warrant a follow-up post without all this preamble. For now, we’re all feeling the love for Hexo. So much so that no fewer than three other Western Devs are in the process of converting their personal blogs to it.

Originally posted to: http://www.westerndevs.com/jekyll/hexo/Migrating-from-Jekyll-to-Hexo/

Testing with Data

It’s not a coincidence that this is coming off the heels of Dave Paquette’s post on GenFu and Simon Timms’ post on source control for databases in the same way it was probably not a coincidence that Hollywood released three body-swapping movies in the 1987-1988 period (four if you include Big).

I was asked recently for some advice on generating data for use with integration and UI tests. I already have some ideas but asked the rest of the Western Devs for some elucidation. My tl;dr version is the same as what I mentioned in our discussion on UI testing: it’s hard. But manageable. Probably.

The solution needs to balance a few factors:

  • Each test must start from a predictable state
  • Creating that predictable state should be as fast as possible
  • Developers should be able to figure out what is going on by reading the test

The two options we discussed both treat the first factor as non-negotiable. That means you either clean up after yourself when the test is finished or you wipe out the database and start from scratch with each test. Cleaning up after yourself might be faster but has more moving parts: if the test fails partway through, cleaning up might mean different things depending on which step you failed at.
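For the wipe-and-recreate route, the skeleton is simple enough. A sketch with NUnit (the TestDatabase helper is hypothetical, standing in for whatever schema and seeding mechanism your project has):

using NUnit.Framework;

[TestFixture]
public class OrderProcessingTests
{
    [SetUp]
    public void ResetDatabase()
    {
        // Runs before every test: drop everything, rebuild the schema,
        // and load baseline data so each test starts from a known state.
        TestDatabase.Drop();
        TestDatabase.Create();
        TestDatabase.SeedBaselineData();
    }

    [Test]
    public void Can_create_an_order()
    {
        // ...
    }
}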

So given that we will likely re-create the database from scratch before each and every test, there are two options. My current favourite solution is a hybrid of the two.

Maintain a database of known data

In this option, you have a pre-configured database. Maybe it’s a SQL Server .bak file that you restore before each test. Maybe it’s a GenerateDatabase method that you execute. I’ve done the latter on a Google App Engine project, and it works reasonably well from an implementation perspective. We had a class for each domain aggregate and used dependency injection. So adding a new test customer to accommodate a new scenario was fairly simple. There are a number of other ways you can do it, some of which Simon touched on in his post.
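Roughly, that setup looked like this (illustrative types, not the actual project code):

using System.Collections.Generic;

// One generator per domain aggregate; adding a new test customer
// means adding a new class and registering it with the container.
public interface IDataGenerator
{
    void Generate();
}

public class CustomerGenerator : IDataGenerator
{
    public void Generate()
    {
        // Insert the well-known test customers, e.g. "Christmas Town"
    }
}

public class DatabaseGenerator
{
    private readonly IEnumerable<IDataGenerator> _generators;

    public DatabaseGenerator(IEnumerable<IDataGenerator> generators)
    {
        _generators = generators;
    }

    public void GenerateDatabase()
    {
        foreach (var generator in _generators)
        {
            generator.Generate();
        }
    }
}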

We also had it set up so that we could create only the customer we needed for that particular test if we needed to. That way, we could use a step like Given I'm logged into 'Christmas Town' and it would set up only that data.

There are some drawbacks to this approach. You still need to create a new class for a new customer if you need to do something out of the ordinary. And if you need to do something only slightly out of the ordinary, there’s a strong tendency to use an existing customer and tweak its data ever so slightly to fit your test’s needs, other tests be damned. With these tests falling firmly in the long-running category, you don’t always find out the effects of this until much later.

Another drawback: it’s not obvious in the test exactly what data you need for that specific test. You can accommodate this somewhat with a naming convention. For example, Given I'm logged into a company from India, if you’re testing how the app works with rupees. But that’s not always practical. Which leads us to the second option.

Create an API to set up the data the way you want

Here, your API contains steps to fully configure your database exactly the way you want. For example:

Given I have a company named "Christmas Town" owned by "Jack Skellington"
And I have 5 product categories
And I have 30 products
And I have a customer
...


You can probably see the major drawback already. This can become very verbose. But on the other hand, you have the advantage of seeing exactly what data is included, which is helpful when debugging. If your test data is wrong, you don’t need to go mucking about in your source code to fix it. Just update the test and you’re done.
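If you’re in .NET, something like SpecFlow can back these steps (an assumption on my part; any BDD framework binds the same way):

using TechTalk.SpecFlow;

[Binding]
public class DataSetupSteps
{
    [Given(@"I have a company named ""(.*)"" owned by ""(.*)""")]
    public void GivenIHaveACompanyNamed(string companyName, string ownerName)
    {
        // Create the company and its owner through your data API
    }

    [Given(@"I have (\d+) products")]
    public void GivenIHaveProducts(int productCount)
    {
        // Generate productCount products for the company in context
    }
}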

Also note the lack of specifics in the steps. Whenever possible, I like to be very vague when setting up my test data. If you have a good framework for generating test data, this isn’t hard to do. And it helps uncover issues you may not account for using hard-coded data (as anyone named D’Arcy O’Toole can probably tell you).
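GenFu, from Dave Paquette’s post mentioned up top, is one such framework. A minimal sketch (the Customer class here is made up):

using System.Collections.Generic;
using GenFu;

public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

public static class TestData
{
    public static List<Customer> SomeCustomers()
    {
        // 25 customers with plausible, randomized names and emails.
        // The test says "I have a customer", not which one, and the
        // occasional O'Toole will flush out your escaping bugs for free.
        return A.ListOf<Customer>(25);
    }
}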


Loading up your data with a granular API isn’t realistic, which is why I like the hybrid solution. By default, you pre-load your database with some common data, like lookup tables with lists of countries, currencies, product categories, etc. Stuff that needs to be in place for the majority of your tests.

After that, your API doesn’t need to be that granular. You can use something like Given I have a basic company, which will create the company, add an owner, and maybe some products, and use that to test the process for creating an order. Under the hood, it will probably use the specific steps.
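In the SpecFlow sketch above, the composite step would just delegate to the granular ones:

[Given(@"I have a basic company")]
public void GivenIHaveABasicCompany()
{
    // The scenario doesn't care about these details, so we pick
    // reasonable defaults here instead of in the test text.
    GivenIHaveACompanyNamed("Acme", "Jane Doe");
    GivenIHaveProducts(5);
}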

One reason I like this approach: it hides only the details you don’t care about. When you say Given I have a basic company and I change the name to "Rick's Place", that tells me, “I don’t care how the company is set up but the company name is important”. Very useful for narrowing the focus of the test when you’re reading it.

This approach will understandably lead to a whole bunch of different methods for creating data of various sizes and coarseness. And for that you’ll need to…

Maintain test data

Regardless of your method, maintaining your test data will require constant vigilance. In my experience, there is a tremendous urge to take shortcuts when it comes to test data. You’ll re-use a test company that doesn’t quite fit your scenario. You’ll alter your test to fit the data rather than the other way around. You’ll duplicate a data setup step because your API isn’t discoverable.

Make no mistake, maintaining test data is work. It should be treated with the same respect and care as the rest of your code. Possibly more so, since the underlying code (in whatever form it takes) technically won’t be tested. Shortcuts and bad practices should not be excused because “it’s just test data”. Fight the urge to let things slide. Call it out as soon as you see it. Refactor mercilessly once you see opportunities to do so.

Don’t be afraid to flip over a table or two to get your point across.

– Kyle the Unmaintainable


Running a .NET app against a Postgres database in Docker

Some days/weeks/time ago, I did a presentation at MeasureUP called “Docker For People Who Think Docker Is This Weird Linux Thing That Doesn’t Impact Me”. The slides for that presentation can be found here and the sample application here.

Using the sample app with PostgreSQL

The sample application is just a plain ol’ .NET application. It is meant to showcase different ways of doing things. One of those things is data access. You can configure the app to access the data from SQL storage, Azure table storage, or in-memory. By default, it uses the in-memory option so you can clone the app and launch it immediately just to see how it works.

PancakeProwler

Quick summary: Calgary, Alberta hosts an annual event called the Calgary Stampede. One of the highlights of the 10-ish day event is the pancake breakfast, whereby dozens/hundreds of businesses offer up pancakes to people who want to eat like the pioneers did, assuming the pioneers had pancake grills the size of an Olympic swimming pool.

The sample app gives you a way to enter these pancake breakfast events and, each day, will show that day’s breakfasts on a map. There’s also a recipe section for sharing pancake recipes, but we won’t be using that here.

To work with Docker, we need to set the app up to use a data access mechanism that will work in a container. The sample app supports Postgres so that will be our database of choice. Our first step is to get the app up and running locally with Postgres, without Docker. So, assuming you have Postgres installed, find the ContainerBuilder.cs file in the PancakeProwler.Web project and comment out the following near the top of the file:

// Uncomment for InMemory Storage
builder.RegisterAssemblyTypes(typeof(Data.InMemory.Repositories.RecipeRepository).Assembly)
       .AsImplementedInterfaces()
       .SingleInstance();

And uncomment the following later on:

// Uncomment for PostgreSQL storage
builder.RegisterAssemblyTypes(typeof(PancakeProwler.Data.Postgres.IPostgresRepository).Assembly)
    .AsImplementedInterfaces().InstancePerRequest().PropertiesAutowired();

This configures the application to use Postgres. You’ll also need to do a few more tasks:

  • Create a user in Postgres
  • Create a Pancakes database in Postgres
  • Update the Postgres connection string in the web project’s web.config to match the username and database you created

The first two steps can be accomplished with the following script in Postgres:

CREATE DATABASE "Pancakes";

CREATE USER "Matt" WITH PASSWORD 'moo';

GRANT ALL PRIVILEGES ON DATABASE "Pancakes" TO "Matt";

Save this to a file. Change the username/password if you like but be aware that the sample app has these values hard-wired into the connection string. Then execute the following from the command line:

psql -U postgres -a -f "C:\path\to\sqlfile.sql"

At this point, you can launch the application and create events that will show up on the map. If you changed the username and/or password, you’ll need to update the Postgres connection string first.

You might have noticed that you didn’t create any tables yet but the app still works. The sample is helpful in this regard because all you need is a database. If the tables aren’t there yet, they will be created the first time you launch the app.
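Presumably this is a create-if-missing pass at startup. A minimal sketch of that pattern (not the sample’s exact code; it assumes the Npgsql driver, which the sample’s connection string names as its provider, and a hypothetical Events table):

using Npgsql;

public static class SchemaBootstrapper
{
    public static void EnsureTables(string connectionString)
    {
        using (var connection = new NpgsqlConnection(connectionString))
        {
            connection.Open();

            // Idempotent: a no-op on every launch after the first.
            const string sql = @"CREATE TABLE IF NOT EXISTS ""Events"" (
                                     ""Id""        uuid PRIMARY KEY,
                                     ""Title""     text NOT NULL,
                                     ""EventDate"" date NOT NULL
                                 );";

            using (var command = new NpgsqlCommand(sql, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}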

Note: recipes rely on having a search provider configured. We won’t cover that here but I hope to come back to it in the future.

Next, we’ll switch things up so you can run this against Postgres running in a Docker container.

Switching to Docker

I’m going to give away the ending here and say that there is no magic. Literally, all we’re doing in this section is installing Postgres on another “machine” and connecting to it. The commands to execute this are just a little less click-y and more type-y.

The first step, of course, is installing Docker. At the time of writing, this means installing Docker Machine.

With Docker Machine installed, launch the Docker Quickstart Terminal and wait until you see an ASCII whale:

[Screenshot: the Docker Quickstart Terminal with its ASCII whale]

If this is your first time running Docker, just know that a lightweight Linux virtual machine has been launched in VirtualBox on your machine. Check your Start screen and you’ll see VirtualBox if you want to investigate it but the docker-machine command will let you interact with it for many things. For example:

docker-machine ip default

This will give you the IP address of the default virtual machine, which is the one created when you first launched the Docker terminal. Make a note of this IP address and update the Postgres connection string in your web.config to point to it. You can leave the username and password the same:

<add name="Postgres" connectionString="server=192.168.99.100;user id=Matt;password=moo;database=Pancakes" providerName="Npgsql" />

Now we’re ready to launch the container:

docker run --name my-postgres -e POSTGRES_PASSWORD=moo -p 5432:5432 -d postgres

Breaking this down:

  • docker run: Runs a Docker container from an image
  • --name my-postgres: The name we give the container to make it easier for us to work with. If you leave this off, Docker will assign a relatively easy-to-remember name like “floral-academy” or “crazy-einstein”. You also get a less easy-to-remember identifier which works just as well but is…less…easy-to-remember
  • -e POSTGRES_PASSWORD=moo: The -e flag passes an environment variable to the container. In this case, we’re setting the password of the default postgres user
  • -p 5432:5432: Publishes a port from the container to the host. Postgres runs on port 5432 by default, so we publish this port so we can interact with Postgres directly from the host
  • -d: Runs the container in the background. Without this, the command will sit there waiting for you to kill it manually
  • postgres: The name of the image you are creating the container from. We’re using the official postgres image from Docker Hub.

If this is the first time you’ve launched Postgres in Docker, it will take a few seconds at least, possibly even a few minutes. It’s downloading the Postgres image from Docker Hub and storing it locally. This happens only the first time for a particular image. Every subsequent postgres container you create will use this local image.
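You can confirm the image is now cached locally:

docker images postgres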

Now we have a Postgres container running. Just like with the local version, we need to create a user and a database. We can use the same script as above and a similar command:

psql -h 192.168.99.100 -U postgres -a -f "C:\path\to\sqlfile.sql"

The only difference is the addition of -h 192.168.99.100: use whatever IP address you got above from the docker-machine ip default command. For me, it was 192.168.99.100.

With the database and user created, and your web.config updated, we’ll need to stop the application in Visual Studio and re-run it. The application won’t recognize that we’ve changed databases, so we need to “reboot” it to trigger the creation of the initial table structure.

Once the application has been restarted, you can now create pancake breakfast events and they will be stored in your Docker container rather than locally. You can even launch pgAdmin (the Postgres admin tool) and connect to the database in your Docker container and work with it like you would any other remote database.

Next steps

From here, where you go is up to you. The sample application can be configured to use Elastic Search for the recipes. You could start an Elastic Search container and configure the app to search against that container. The principle is the same as with Postgres. Make sure you open both ports 9200 and 9300 and update the ElasticSearchBaseUri entry in web.config. The command I used in the presentation was:

docker run --name elastic -p 9200:9200 -p 9300:9300 -d elasticsearch

I also highly recommend Nigel Poulton’s Docker Deep Dive course on Pluralsight. You’ll need access to Linux either natively or in a VM but it’s a great course.

There are also a number of posts right here on Western Devs, including an intro to Docker for OSX, tips on running Docker on Windows 10, and a summary or two on a discussion we had on it internally.

Other than that, Docker is great for experimentation. Postgres and Elastic Search are both available pre-configured in Docker on Azure. If you have access to Azure, you could spin up a Linux VM with either of them and try to use that with your application. Or look into Docker Compose and try to create a container with both.

For my part, I’m hoping to convert the sample application to ASP.NET 5 and see if I can get it running in a Windows Server Container. I’ve been saying that for a couple of months but I’m putting it on the internet in an effort to make it true.


Windows Server Containers are coming whether you like it or not

After posting giddily on Docker in the Windows world recently, Microsoft released Windows Server 2016 Technical Preview 3 with container support. I’ve had a chance to play with it a little so let’s see where this goes…

It’s a preview

Like movie previews, this is equal parts exciting and frustrating. Exciting because you get a teaser of things to come. Frustrating because you just want it to work now. And extra frustration points for various technical issues I’ve run into that, I hope, are due to the “technical preview” label.

For example, installing container support into an existing VM is mind-numbingly slow. Kudos to the team for making it easy to install but at the point where you run ContainerSetup.ps1, be prepared to wait for, by my watch, at least 45 minutes without any visual indication that something is happening. The only reason I knew something was happening is because I saw the size of the VM go up (slowly) on my host hard drive. This is on a 70Mbps internet connection so I don’t think this can be attributed to “island problems” either.

I’ve heard tell of issues setting up container support in a Hyper-V VM as well. That’s second-hand info as I’m using Fusion on a Mac rather than Hyper-V. If you run into problems setting it up on Hyper-V, consider switching to the instructions for setting up containers on non-Hyper-V VMs instead.

There’s also the Azure option. Microsoft was gracious enough to provide an Azure image for Windows Server 2016 pre-configured with container support. This works well if you’re on Azure and I was able to run the nginx tutorial on it with no issues. I had less success with the IIS 10 tutorial even locally. I could get it running but was not able to create a new image based on the container I had.

It’s also a start

Technical issues aside, I haven’t been this excited about technology in Windows since…ASP.NET MVC, I guess, if my tag cloud is to be believed. And since this is a technical preview designed to garner feedback, here’s what I want to see in the Windows container world:

Docker client and PowerShell support

I love that I can use the Docker client to work with Windows containers. I can leverage what I’ve already learned with Docker in Linux. But I also love that I can spin up containers with PowerShell so I don’t need to mix technologies in a continuous integration/continuous deployment environment if I already have PowerShell scripts set up for other aspects of my process.
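To illustrate the parity, here’s a sketch of both routes (the PowerShell cmdlet names come from the TP3 preview as I understand it and may well change before release):

# Docker client, same muscle memory as on Linux
docker run --name demo -it windowsservercore cmd

# PowerShell Containers module (preview cmdlets)
$image = Get-ContainerImage -Name WindowsServerCore
$container = New-Container -Name demo -ContainerImage $image
Start-Container -Name demo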

Support for legacy .NET applications

I can’t take credit for this. I’ve been talking with Gabriel Schenker about containers a lot lately and it was he who suggested they need to have support for .NET 4, .NET 3.5, and even .NET 2.0. It makes sense though. There are a lot of .NET apps out there and it would be a shame if they couldn’t take advantage of containers.

Smooth local development

Docker Machine is great for getting up and running fast on a local Windows VM. To fully take advantage of containers, devs need to be able to work with them locally with no friction, whether that means a Windows Container version of Docker Machine or the ability to work with containers natively in Windows 10.

ARM support

At Western Devs, we have a PowerShell script that will spin up a new Azure Linux virtual machine, install docker, create a container, and run our website on it. It goes without saying (even though I’m saying it) that I’d like to do the same with Windows containers.

Lots of images out of the gate

I’d like to wean myself off VMs a little. I picture a world where I have one base VM and I use various containers for the different pieces of the app I’m working on. E.g. A SQL Server container, an IIS container, an ElasticSearch container, possibly even a Visual Studio container. I pick and choose which containers I need to build up my dev environment and use just one (or a small handful) of VMs.


In the meantime, I’m excited enough about Windows containers that I hope to incorporate a small demo with them in my talk at MeasureUP in a few scant weeks so if you’re in the Austin area, come on by to see it.

It is a glorious world ahead in this space and it puts a smile on this hillbilly’s face to see it unfold.

Kyle the Barely Contained
