Yeah, I’m a week late with this recap, but I’ve had writer’s block. The Seattle ALT.NET Open Spaces event was awesome. The open spaces format is far and away my favorite format now for developer gatherings. I love the interactive aspect of it. We jumped right into a great set of topics without getting hung up in useless, angsty “What is ALT.NET all about?” discussions. I love catching up with people I’ve met at other conferences and meeting more of my blogroll.
Ready or Not, Here Comes the Polyglot Programming Reality
A major topic in Seattle was the Polyglot Programming trend (I know it was a big topic because I have a lot of notes I can’t read under the heading “Polyglot *********”). Right now there seems to be an explosion of new programming languages, or at least of interest in different programming languages.
There was a lot of discussion about doing some or all project work with alternative languages like F#, or IronRuby when it ships. The general idea is that you would probably write infrastructure code in C#, then use a more dynamic language like IronRuby for business logic and “glue” code in the upper layers. More on this later, but I think it’s somewhat comical that we’re calling C# the infrastructure language when a few short years ago it was C++ for infrastructure and C# was the high-productivity language in the upper layers.
Would you mix languages on the same project, and if so, is that a good thing? There were a lot of mixed opinions on this subject. On one side is the fear that multiple languages will make a system harder to understand and require more skilled people to work on the system. Then there’s the other side that thinks we’ll get more productivity by having multiple languages that are specialized for certain tasks. Personally, I know I’d like to be able to easily mix and match C# and IronRuby on projects.
Complicating the discussion is the idea of little Domain Specific Languages that aren’t even remotely Turing complete. To this point, I’ve been mostly interested in internal DSLs using Fluent Interfaces in C#. I look at that technique as merely a stopgap until we get IronRuby and its far superior language support for DSLs. The next bit of evolution is dipping a toe into creating an external DSL. I’ve always been leery of trying out a parser generator à la YACC or ANTLR, but the Language Workbench idea coming from the JetBrains MPS product is very intriguing. Imagine being able to write your own focused mini-language for test automation or configuration without losing Intellisense or basic refactoring. That’s what the Language Workbench products will eventually do for us.
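To make the internal-DSL-by-Fluent-Interface idea concrete, here’s a minimal sketch in Python (IronPython being one of the languages in this conversation). The `RuleBuilder` class and its method names are hypothetical, not from any real framework; the point is just that each method returns the object itself, so calls chain into a readable, sentence-like configuration.

```python
# Hypothetical fluent interface for declaring a validation rule.
# Each method returns self, so calls chain into one "sentence."

class RuleBuilder:
    def __init__(self):
        self.field = None
        self.required = False
        self.max_len = None

    def for_field(self, name):
        self.field = name
        return self

    def is_required(self):
        self.required = True
        return self

    def max_length(self, n):
        self.max_len = n
        return self

# Reads almost like a mini-language, but it's just ordinary method calls.
rule = RuleBuilder().for_field("Name").is_required().max_length(50)
```

The chaining trick is the whole technique; a language with more flexible syntax (like Ruby) can drop even more of the ceremony, which is why I call this a stopgap.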
There’s a rumor that Microsoft might be interested in creating some sort of Language Workbench as part of their DSL efforts. Thumbs up from me, but please don’t ruin it by making us use XAML.
Before you panic about the difficulty in learning new languages, think about this: is a tightly focused mini-language DSL going to be any harder to learn than the equivalent new API in C# or VB.NET? In many cases I’d argue that an internal DSL is preferable to an old-fashioned API in C#. Web programming with HTML and CSS was brought up as an antipattern because of the 3+ languages you have to know to develop. I’d turn that around and say that web programming is a great example of polyglot programming. Try to imagine what the C# equivalent would be like for replacing HTML and CSS. I think I’ll take HTML and CSS, thank you.
These should be on Twitter but I just feel weird shooting these things out in 140 characters at a time.
Code Generation versus Code Synthesis. I think that one of the big debates in software development over the next couple of years is going to be over pursuing better productivity through external tools like software factories and large-scale code generation, or by using different programming languages (i.e. not Java/C#/VB.NET) that have facilities to write much less code. In this corner is Kathleen Dollard on Code Generation. And in the far corner is Venkat Subramaniam on Code Synthesis (using metaprogramming and language oriented programming to pack more punch with less code). I’m with Venkat on this one. Why do I want to muck with a separate model outside of code, then run the code generator, if I can work at the same high level directly in the code itself? Of course, for this to be a loud debate, the two sides of the issue would have to know that the other side even exists. Today, code generation probably seems like the default choice in .NET development because we don’t have programming languages that support Venkat’s vision of Code Synthesis. When IronRuby, IronPython, and F# are ready for production, Code Synthesis in the .NET space can be a reality.
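Here’s a small sketch of what I mean by Code Synthesis, in Python since IronPython is one of the languages in question. The `with_fields` decorator and the `Customer` class are hypothetical; the point is that the boilerplate a code generator would emit as source is instead built by the language itself at runtime.

```python
# Sketch of "code synthesis": rather than running an external tool to
# generate property boilerplate into source files, use the language's
# metaprogramming to synthesize the same members at class-creation time.

def with_fields(*names):
    def decorate(cls):
        def make_property(name):
            attr = "_" + name
            def getter(self):
                return getattr(self, attr, None)
            def setter(self, value):
                setattr(self, attr, value)
            return property(getter, setter)
        for n in names:
            setattr(cls, n, make_property(n))
        return cls
    return decorate

@with_fields("name", "city")   # one declarative line replaces the codegen step
class Customer:
    pass

c = Customer()
c.name = "Venkat"
```

There’s no separate model to maintain and no generation step to rerun; change the field list and the class changes with it.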
Microsoft employees are a little too cloistered for their (our) own good. Not all mind you, but many of the blue badges I interacted with didn’t have very much prior exposure to many of the topics that come up frequently in ALT.NET settings. I get it, it’s a big campus full of smart people and you’re probably heads down on your own project. Case in point, in ALT.NET circles we seem to be much more enthusiastic about Language Oriented Programming with textual DSL’s (or at least that’s my position, preference, and observation). In Redmond, DSL == graphical tooling backed up by code generation.
Why are teams still writing persistence code by hand? There are a bazillion persistence tools out there that all save development costs in some way over writing data access code by hand. With very few exceptions, I’d say at this point that if you’re writing ADO.NET code or SQL by hand, you’re stealing money from your employer.
I wasn’t there, but there was a session roughly titled “Are Auto Mocking Containers Evil?” I had this argument a little bit with Scott Bellware the other night, but it came down to a problem with wording. He heard the word “container” and had awful images of XML configuration hell. All an auto mocking container (AMC) does for you is look at the constructor of a concrete class and use Dependency Injection to stuff in test doubles for all of the parameters of that constructor. I’d urge you to approach an AMC as simply a tool to reduce the mechanical cost of setting up interaction tests. I initially resisted the AMC idea because I thought it would obscure the unit tests, but I’m sold on the AMC technique after using it for a while on a project. A class with a lot of collaborators is a code smell, but remember that a code smell doesn’t automatically mean there’s a problem. Some Controller-type classes do absolutely nothing but coordinate other classes. Unit testing those classes will require two or more mocks in a single test.
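A bare-bones AMC really is that small. Here’s a sketch in Python, assuming plain constructor injection; `AutoMocker`, `OrderController`, and the collaborator names are all hypothetical, not any particular framework.

```python
import inspect
from unittest.mock import MagicMock

# Minimal sketch of an auto mocking container: reflect over a concrete
# class's constructor and inject a test double for every parameter.

class AutoMocker:
    def __init__(self):
        self.mocks = {}

    def create(self, cls):
        params = inspect.signature(cls.__init__).parameters
        args = {}
        for name in params:
            if name == "self":
                continue
            self.mocks[name] = MagicMock()   # one fresh double per dependency
            args[name] = self.mocks[name]
        return cls(**args)

# Hypothetical Controller-type class that only coordinates collaborators.
class OrderController:
    def __init__(self, repository, mailer):
        self.repository = repository
        self.mailer = mailer

    def place_order(self, order):
        self.repository.save(order)
        self.mailer.send_confirmation(order)

mocker = AutoMocker()
controller = mocker.create(OrderController)   # no manual mock wiring
controller.place_order("order-1")
mocker.mocks["mailer"].send_confirmation.assert_called_once_with("order-1")
```

That’s the whole trick: the test no longer has to construct every collaborator by hand, so adding or reordering a constructor parameter doesn’t break every test setup.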
I seriously think you should seldom use your IoC container inside unit tests. The vast majority of interaction unit tests should be easy to set up with pure DI. I say this for two reasons. First, the IoC container is infrastructure code, and you want as little code as possible directly coupled to the container. Second, using a container inside of a unit test greatly ratchets up the risk of state from a previous unit test polluting the results of a later one. Unit tests need to be isolated to be effective, and using an IoC container in unit tests can easily break that isolation.
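Pure DI in a unit test looks like this sketch in Python (the `InvoiceService` class and its collaborator are hypothetical): the test constructs the object under test directly with fresh test doubles, so nothing is resolved from, or shared through, a container.

```python
from unittest.mock import MagicMock

# Pure DI: the test builds the object under test by hand with fresh
# doubles. No container, so no state can leak between tests.

class InvoiceService:
    def __init__(self, repository):
        self.repository = repository

    def total_for(self, customer):
        return sum(self.repository.invoices_for(customer))

def test_total_for():
    repository = MagicMock()                       # fresh double per test
    repository.invoices_for.return_value = [10, 20]
    service = InvoiceService(repository)           # direct construction
    assert service.total_for("acme") == 30

test_total_for()
```

Every run starts from a clean slate, which is exactly the isolation property a container-based setup puts at risk.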