When it comes to software, I think of a sunk cost as a piece of existing infrastructure that’s already been completed or purchased. It’s effort, time, or money that you can’t get back.
More germane to making architectural decisions, read this section from the Wikipedia article:
- Having paid the price of the ticket and having suffered watching a movie that he does not want to see; or
- Having paid the price of the ticket and having used the time to do something more fun.
Choice #1 is using a bad architecture, framework, or component in the next software project because you’ve already bought it, paid for it, or built it. Choice #2 is jettisoning that same architecture, framework, or component and using or building something that does work for the next project. Obviously this choice is complicated by a third option: sinking more time, money, or resources into improving the existing architecture, framework, or component to make it feasible to use. The learning curve of moving in a new direction is also a rational consideration. One way or another, the decision to keep an existing platform or do something else has to boil down to economics, and economics only.
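The economics-only framing can be made concrete with a toy comparison. Here’s a minimal sketch in Python, with entirely made-up numbers: whatever was already spent on the old platform is gone either way, so it appears in neither option; only forward-looking costs count.

```python
# Toy sunk-cost comparison with entirely made-up numbers.
# The money already spent on the old platform is gone no matter what,
# so it appears in neither option -- only costs from today forward count.

def forward_cost(migration_cost, annual_cost, years):
    """Total forward-looking cost of an option over a planning horizon."""
    return migration_cost + annual_cost * years

# Option 1: keep the existing framework (no migration, high carrying cost).
keep = forward_cost(migration_cost=0, annual_cost=120_000, years=3)

# Option 2: replace it (rewrite and learning curve up front, cheaper to run).
replace = forward_cost(migration_cost=150_000, annual_cost=50_000, years=3)

print(keep, replace)  # 360000 300000 -> replacing wins despite the sunk cost
```

Note that the learning curve shows up as part of the migration cost, which is the rational place for it; the purchase price of the old framework shows up nowhere.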
Throw it Away!
I’m as bad as anyone in the world about forming emotional attachments to my technical ideas and frameworks. Some pride in your workmanship is a very good thing, but our past work can often cloud our judgement in new situations. Some of the worst design choices I’ve made have resulted from taking a design or library from project A and using it on project B. Can you walk away from your current framework or architecture when a better idea comes along? Will you allow yourself to consider other solutions? Are you going to shoehorn your pet approach into every new situation? More pertinently, is there a newer solution or technology out there that is better for your next project? Because if there is, you might be better off throwing away your existing technology. At a minimum, it’s harmful for us to put on blinders about our choice of technical direction. Venkat Subramaniam and Andrew Hunt did a great podcast on .NET Rocks that addressed this very subject. Paraphrasing Andy Hunt, sometimes the smartest thing you can do is to throw your old code away.
Some of the problem is political
Wait, but it makes sense to use what we know! Yes it does, absolutely. But you’ve also got to factor in what the rest of the development world knows. Niche or proprietary technologies come with some cost in terms of team composition. Your ability to bring in new people is much better if you use more common technologies and techniques — especially for commodity development roles.
In another real-world scenario, let’s say you discover that your data access strategy simply can’t keep up with scalability requirements anymore. I walked into a political buzzsaw once over a system that was known to have very sluggish performance and scalability concerns with its database access. I looked at the system and suggested that we ditch the awful CSLA persistence implementation in favor of something simpler and more efficient (they were doing a *lot* of string concatenation with VB6’s “&” operator and inefficient Xml manipulation with the MSXML 1.0 library). Even if we had had the manpower to do it, the political heat would have been too intense to take on that rework, because adopting CSLA in the first place had been the previous answer to performance problems. How do you get up in front of management and tell them that the thing you need to do most is rip out and replace the very thing that your immediate predecessor sold as the silver bullet? At that point the software development team has very poor credibility (you must protect your credibility with the business). We didn’t really have any choice but to keep going with the code the way it was.
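As an aside on why that kind of code is slow: repeated concatenation in a loop conceptually copies the whole accumulated string on every pass, so the cost grows quadratically with the number of pieces. Here’s a sketch of the same shape in Python (the VB6 system itself obviously isn’t available to show):

```python
# The shape of the slow VB6 "&"-in-a-loop code, sketched in Python.
# Each `s += p` conceptually copies everything accumulated so far,
# so n pieces cost O(n^2) character copies in the worst case.
pieces = [f"<item>{i}</item>" for i in range(10_000)]

s = ""
for p in pieces:
    s += p  # quadratic pattern

# Linear pattern: accumulate the pieces and join once at the end.
s2 = "".join(pieces)

assert s == s2  # same result, very different cost profile
```

(Modern CPython happens to optimize some in-place string appends, but the general lesson holds across languages: build a list and join once, or use a string builder.)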
From the same employer, here are two more examples of sunk costs. In the first example, the sunk cost played too heavily in their decision making. In the second example, the sunk cost was ignored in our decision making.
- A very large inventory management system was purchased, but not implemented for a couple of years. The cost of purchasing that software was “sunk”; there was no way to get that money back out. Deploying and implementing that system for real involved a huge effort (200 people at its peak). People slept on cots in the team room. The team had to customize the hell out of that application to make it fit the way the business worked. Two years later we were still struggling with how to integrate other applications with the inventory system because it didn’t have any kind of service layer or extensibility model. The upgrade path was completely shot because of the customization. Support was a nightmare because of those very same customizations. I still think they would have been better off writing off the cost of purchasing the software and looking for a different solution that better fit the business.
- A very large and well-known consultancy (no names, but you *know* who I’m talking about) walked in and implemented a 3rd party supply chain system for fulfilling factory requests for material from logistics partners. Again, the 3rd party system did not reflect the way the manufacturing company ran its supply chain (we needed to support a “pull to order” model). The consultants brought in the script kiddies and hacked together some Perl scripts around the 3rd party system to kind of make it work. Within 6 weeks of going live it was obvious that the system could not support the business transaction volume and lacked functionality the business wanted. The business partner and I hashed out a concept for a 100% replacement built from scratch. We felt the existing system was a complete wash and just needed to be thrown away altogether. An 8-person team implemented the new system in a little under 6 months. At one point I measured the throughput of the new system at almost 2 orders of magnitude better than its 3rd party predecessor (simply by not being stupid about the database access). To this day it’s still the most successful project I’ve ever done (and no, it was not an Agile project; more of a “going through the motions” waterfall process with me working every weekend).
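The “not being stupid about the database access” part of wins like that usually comes down to batching work instead of paying a round trip per row. Here’s a minimal sketch using an in-memory SQLite database and a made-up schema (the real system’s schema isn’t described above); against a real database server the difference is dominated by network round trips:

```python
import sqlite3

# Made-up schema; the point is the access pattern, not the data model.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE shipments (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER)"
)
rows = [(f"SKU-{i}", i % 10) for i in range(1_000)]

# Slow shape: one statement (and, against a real server, one network
# round trip) per row.
for sku, qty in rows:
    conn.execute("INSERT INTO shipments (sku, qty) VALUES (?, ?)", (sku, qty))
conn.rollback()  # discard, so the fast version starts from an empty table

# Fast shape: one batched call inside a single transaction.
conn.executemany("INSERT INTO shipments (sku, qty) VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM shipments").fetchone()[0])  # 1000
```

Both shapes load the same rows; the batched version simply amortizes per-statement and per-transaction overhead across the whole set.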
In the previous two cases there were non-technical political hurdles. In the first example the political costs of changing direction were too high and the organization blundered ahead. Some of the managers running the project had been responsible for choosing the 3rd party software in the first place. It’s near impossible to make rational decisions inside a “blame” culture, where political considerations get far too much weight compared to costs and benefits. You’ve got to be able to stand in front of management and say “the choice we made was wrong, we need to go another direction” without fear. I look at it from the standpoint of an “Individual Contributor” and worry about trusting management enough to do that without getting my legs taken out from underneath me. From management’s side, they need transparency to make rational decisions. Politics are definitely bad for rational decision making.
Speculative Design is a leading cause of Sunk Costs
I think a huge source of waste is speculative design and construction of infrastructure. Building foundational infrastructure too far ahead of business need greatly increases the chance that you’re building the wrong things. The answer is YAGNI at an organizational level. Don’t buy software you don’t need right now. Don’t build frameworks you’re not going to use right away. Efficiency of effort, and a reduction in unnecessary or hurtful sunk costs, can be achieved by only creating code or infrastructure in response to an immediate business requirement. Infrastructure you don’t end up using is waste. Infrastructure code that doesn’t end up solving the eventual business problems efficiently is waste.
One last example to illustrate this point about speculative design. A couple of years ago I was on a goofy little project to retrofit a very large ASP.Net application with the company’s new standard security framework. The new security framework had been built by a centralized team of architects and senior developers working outside of any business-driven project. You know what’s coming next. The framework didn’t really meet our needs out of the box, and I suspect it took longer to write code around the framework in order to use it than it would have taken to build the security enhancements the business actually wanted directly into the existing application. You can argue that a standardized framework could potentially reduce support costs across the family of applications, but the real reason we used it was that it was the “standard” that had been built.
My contention is that they would have been much better off building the security framework specifically for that one application in response to its requirements first, then trying to harvest reuse later. Centralized framework teams can get themselves and the organization as a whole into trouble fast if their work doesn’t solve the actual business problems. The fact that the centralized team often has political might behind it to enforce the usage of their tools can only make the situation worse.