…as long as we’re writing code in a statically typed language anyway. Dependency Injection the pattern isn’t going away, but it’s high time to reconsider Dependency Injection / Inversion of Control tools.
A couple weeks ago there was quite a bit of backlash against the use of Dependency Injection (DI) and Inversion of Control (IoC) tools. The pantheon of No Fluff, Just Stuff programming heroes made a game out of mocking Dependency Injection here and here. My CodeBetter mate, Scott Bellware, even had some negative things to say on Dependency Injection tools (which I think is partially caused by him cutting his teeth on Spring.Net 😉 ). I didn’t respond because I was deep into StoryTeller and didn’t quite know what my response would be anyway. I’ll admit that the practice of pulling interfaces out and opening up constructors just to slip in mock objects is beginning to feel clunky in this era of TypeMock and duck-typed languages. One of the things I concentrated on with StructureMap 2.0 was making the configuration easier to use by adding programmatic alternatives, reducing the Xml overhead, and making StructureMap less intrusive, but StructureMap is still there in your code. I don’t find it hard to use or distracting in the slightest, but maybe I need to start seeing it that way.
Give me Ruby or Python and I probably won’t feel as much need for a StructureMap. That being said, I still think there are plenty of good reasons to use dependency injection besides the stereotypical unit testing scenarios (you’ve never needed an IoC tool just to attach test doubles anyway). I’ve used my own StructureMap DI tool continuously for almost 4 years now and I’ve probably pushed the technique about as far as anybody (except for Aspect Oriented Programming (AOP) where I happily lag the curve). Here’s a list of usages for DI as a pattern with and without an IoC tool that I’ve found beneficial:
- Flexible deployment options. For example, last year I worked on a system that ran components on 3 different servers with a couple different remoting connections. When we first got the system you couldn’t run the full stack without deploying the code to a Windows service and an ASP.Net application. Feedback was slow. We hid the service points behind abstracted interfaces and started pulling the dependencies with StructureMap. Once we made it to that point we simply made a different StructureMap profile that ran both the web client and server code in a single AppDomain. The feedback cycle got quicker. Debugging was easier. We were able to write functional tests against the business rules engine because we could easily activate the code in process at will. In case you’re wondering, we did have some automated tests fully end-to-end as well to make sure that the real communication and deployment configuration worked.
- Stubbing out inconvenient services during development. My last project was a .Net client in front of a balky Java web service. The web service development often lagged the .Net code and was frankly a pain in the neck to install even when it was ready. We used a “Stubbed” Profile in our StructureMap configuration to switch between a fully connected mode and a mode that talked to an in-memory repository. While you can make the switch with command line arguments, we redirected the application with a pair of batch files called “ConnectMe” and “StubMe” just to switch the default profile in StructureMap. No hard wired stubbing or conditional compilation necessary.
- I think using DI leads to more self-documenting code. I like being able to understand the dependencies of a class just by looking at the arguments to its constructor function. I think it makes the code more transparent by pointing to the class’s dependencies and configuration. As pointed out originally by Jacob Profitt, this can be construed as a massive violation of encapsulation that can make a class harder to use. That doesn’t concern me particularly because I use StructureMap to chain all the dependencies together in this case so that the client of an interface doesn’t need to know anything about the constructor functions of its dependencies. Very few concrete classes should end up being directly exposed to the IoC tool. Oddly enough, I’m generally not too concerned about the binding to StructureMap since I can pop into the code at any time to add features or fix problems.
- “Push” configuration is better in many ways than “Pull” configuration. I use an IoC tool for all custom configuration needs. Configured properties are just “pushed” into the classes that need them by the IoC tool. The configuration now “knows” about the code but not the other way around. My service classes are now completely usable in whole new settings because they aren’t coupled to a configuration file. More on configuration management with an IoC tool at the StructureMap homepage.
- Reuse. I’ve found that focusing on composing a system into cohesive classes with loose coupling to each class’s dependencies will allow for much simpler reuse. If you can replace the dependencies for a class at will you can take that class and use it in an entirely new context when the need arises. I’ve been able to take big chunks of a rules engine that ran in a transaction processing pipeline and use that same rules engine inside a web service. We didn’t have to change a thing in the rules engine, just push in different implementations of its dependencies. Having the engine decoupled from its configuration also made it more reusable when we needed to pull the configuration from an alternative source in the new web service.
- Easy to change the behavior of the system by swapping out implementations of some of the services or adding new components to the system. One of the common things to do with an IoC tool is to define Chain of Responsibility handlers in the IoC tool so that it’s easy to plug in new handlers or reorder the existing handlers to add or change behavior.
- Built in Environment Tests. I don’t know if any of the other IoC tools besides StructureMap have them, but I’ve used them to great effect in the past. It’s especially valuable for distributed systems development and for highly iterative development that requires very frequent testing deployments.
- If you want pluggability for your own services, the Provider model sucks compared to any of the common IoC tools. Specifically, with an IoC tool you don’t need to inherit from a particular base class, you have more freedom to compose dependencies rather than build one gigantic, overweight class, and you have more options for configuration storage and runtime pluggability than the Provider model supports.
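The “Stubbed” versus “Connected” profile trick above can be sketched without any particular container. Everything below — `ICatalogService`, `ServiceRegistry`, the profile names — is made up for illustration; it is not StructureMap’s actual API, just the shape of the idea:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical service interface standing in for the balky web service.
public interface ICatalogService
{
    string FindTitle(int id);
}

// Used by the "Connected" profile; would call the real web service.
public class RemoteCatalogService : ICatalogService
{
    public string FindTitle(int id)
    {
        throw new InvalidOperationException("Would call the remote web service");
    }
}

// Used by the "Stubbed" profile: an in-memory stand-in for development.
public class StubbedCatalogService : ICatalogService
{
    private readonly Dictionary<int, string> _titles =
        new Dictionary<int, string> { { 1, "Refactoring" } };

    public string FindTitle(int id)
    {
        return _titles[id];
    }
}

// A toy registry: each named profile maps the interface to a factory.
public static class ServiceRegistry
{
    private static readonly Dictionary<string, Func<ICatalogService>> Profiles =
        new Dictionary<string, Func<ICatalogService>>
        {
            { "Connected", () => new RemoteCatalogService() },
            { "Stubbed",   () => new StubbedCatalogService() }
        };

    // A "StubMe"/"ConnectMe" batch file would just rewrite this default.
    public static string DefaultProfile = "Connected";

    public static ICatalogService GetCatalogService()
    {
        return Profiles[DefaultProfile]();
    }
}
```

The point is that the application code only ever asks for `ICatalogService`; flipping the default profile redirects the whole system with no conditional compilation anywhere.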
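The self-documenting quality of constructor injection looks like this in practice — the class and interface names here are invented for the example:

```csharp
// Hypothetical dependencies, made up for illustration.
public interface IOrderRepository { void Save(string order); }
public interface IEmailSender { void Send(string to, string body); }

// The constructor *is* the documentation: a reader (or the container
// wiring the object graph) sees every dependency at a glance, with no
// hidden calls out to singletons or static gateways inside the methods.
public class OrderProcessor
{
    private readonly IOrderRepository _repository;
    private readonly IEmailSender _emails;

    public OrderProcessor(IOrderRepository repository, IEmailSender emails)
    {
        _repository = repository;
        _emails = emails;
    }

    public void Process(string order, string customerEmail)
    {
        _repository.Save(order);
        _emails.Send(customerEmail, "Your order was received");
    }
}
```

Because the IoC tool chains the dependencies together, the client never touches this constructor directly — which is why the “encapsulation violation” complaint doesn’t bite in practice.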
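The “push” versus “pull” configuration distinction is easiest to see side by side. The `FtpPoller` class is a made-up example, not from any real project:

```csharp
// "Pull" style couples the class to a config file, something like:
//   int timeout = int.Parse(ConfigurationManager.AppSettings["timeout"]);
//
// "Push" style lets the IoC tool (or any caller) hand the values in, so
// the class knows nothing about where its configuration came from and is
// usable in whole new settings.
public class FtpPoller
{
    public string Host { get; private set; }
    public int TimeoutSeconds { get; private set; }

    public FtpPoller(string host, int timeoutSeconds)
    {
        Host = host;
        TimeoutSeconds = timeoutSeconds;
    }
}
```

With push configuration the same class runs unchanged in production, in an integration test, or in a console harness — only the values pushed in differ.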
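The Chain of Responsibility arrangement mentioned above can be sketched like this (handler and pipeline names are hypothetical):

```csharp
using System.Collections.Generic;

public interface IMessageHandler
{
    // Returns true when this handler consumed the message.
    bool Handle(string message);
}

public class PingHandler : IMessageHandler
{
    public bool Handle(string message) { return message == "ping"; }
}

public class FallbackHandler : IMessageHandler
{
    public bool Handle(string message) { return true; }
}

// The pipeline just walks whatever handlers it was given, in order.
// With an IoC tool supplying the list, adding or reordering handlers
// is a registration change, not a code change to the pipeline itself.
public class MessagePipeline
{
    private readonly IList<IMessageHandler> _handlers;

    public MessagePipeline(IList<IMessageHandler> handlers)
    {
        _handlers = handlers;
    }

    public IMessageHandler Dispatch(string message)
    {
        foreach (var handler in _handlers)
        {
            if (handler.Handle(message)) return handler;
        }
        return null;
    }
}
```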
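For readers who haven’t seen Environment Tests, the gist is a set of named checks (can I reach the database? is the remote service up?) run at deployment time instead of letting the system limp along against a broken environment. The runner below is invented to show the idea and is not StructureMap’s own API:

```csharp
using System;
using System.Collections.Generic;

public static class EnvironmentTestRunner
{
    // Runs every named check; a check "passes" by returning and
    // "fails" by throwing. Returns the failure descriptions so a
    // deployment script can halt and report them.
    public static List<string> Run(IDictionary<string, Action> tests)
    {
        var failures = new List<string>();
        foreach (var pair in tests)
        {
            try { pair.Value(); }
            catch (Exception e) { failures.Add(pair.Key + ": " + e.Message); }
        }
        return failures;
    }
}
```

In a distributed system doing frequent test deployments, running these checks right after a deploy tells you in seconds whether the configuration is wired to a working environment.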
So, is Dependency Injection still good, or just baggage from yesterday that needs to be cleared away so we can continue to progress? It seems perverse to me to be wrestling with this question since Dependency Injection is still relatively new and hasn’t even come close to crossing the chasm inside the .Net community. Heck, Microsoft is just starting to support DI strategies in some of the Patterns and Practices work and Acropolis (but clumsily)! It’s a harder question for me since I’ve invested quite a bit of time over the last 4 years building and using a DI/IoC tool.
I think it’s fair and accurate to say that some of the goals of using Dependency Injection in a statically typed language can be realized through simpler mechanisms in dynamically typed languages. I still think you want to use Dependency Injection in Ruby because the rules of OOP still apply, but maybe the fancy IoC tools largely go away. I could use much more experimentation, but Ruby style metaprogramming does seem much simpler in mechanics and readability to me than using DI engines and AOP in C# to enrich behavior. Despite how many words I just used above to extol the benefits of IoC/DI that I’ve already enjoyed, it is time to reconsider this technique’s place in my toolbox as we move forward.