This post isn't really a rant, more of a warning. I really, really think that testability should be a first-class consideration in software design. From experience, retrofitting automated testing onto an existing codebase is problematic, and one of the worst culprits has consistently been dealing with application state.
Yeah, I know that I've written this post before, but one more time: static state in your application can cripple your ability to do automated testing. The only way to create reliable automated tests is to completely control the inputs going into the test. Cached data needs to be controlled as well, to keep one test from being skewed by a previous test. To be reliable and maintainable, automated tests should be isolated from one another and independent of the order in which they are executed.
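The failure mode is easy to demonstrate in miniature. This is just an illustrative sketch (the post is about .NET, but the pattern is language-agnostic); `get_price` and `reset_price_cache` are made-up names standing in for any code that hides a static cache:

```python
# Hypothetical example: a module-level cache that leaks state between tests.
_price_cache = {}

def get_price(sku, fetch):
    """Return a cached price, calling fetch (and caching the result) on a miss."""
    if sku not in _price_cache:
        _price_cache[sku] = fetch(sku)
    return _price_cache[sku]

def reset_price_cache():
    """Test hook: flush the cache so each test starts from a known state."""
    _price_cache.clear()

# The first "test" primes the cache...
assert get_price("A1", lambda s: 10) == 10
# ...and without a reset, a later test silently sees stale data --
# its fetch function is never even called:
assert get_price("A1", lambda s: 99) == 10
# Flushing the cache between tests restores isolation:
reset_price_cache()
assert get_price("A1", lambda s: 99) == 99
```

The second assertion is exactly the kind of "skewed by a previous test" confusion this post is warning about.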
Yes, you most certainly have legitimate needs for static members, singletons, and cross-request state. Just remember to leave yourself an easy way to flush that state or replace those singletons on demand inside automated tests, so that you can reliably establish a known starting state with known inputs. It might mean a little more code to write, and maybe even additional complexity, but a little more coding in exchange for easier testing can lead to shipping that same code earlier. It's an awfully good idea to keep your eyes on the bigger picture.
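The two escape hatches described above — flush the state, or replace the singleton on demand — can be sketched like this (a minimal illustration, not any particular framework's API; the class and method names are made up):

```python
class ServiceLocator:
    """A minimal singleton with the two hooks automated tests need:
    replace the instance with a fake, and reset to a clean slate."""

    _instance = None

    @classmethod
    def instance(cls):
        # Lazily build the real instance on first access.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    @classmethod
    def replace(cls, fake):
        # Test hook: substitute a stub or fake in place of the real thing.
        cls._instance = fake

    @classmethod
    def reset(cls):
        # Test hook: the next caller gets a freshly built instance.
        cls._instance = None
```

A test can then call `ServiceLocator.replace(my_fake)` in its setup and `ServiceLocator.reset()` in its teardown, guaranteeing a known starting state regardless of what earlier tests did.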
O/R mappers can be particularly problematic because they often implement some sort of stateful cache under the covers. We got burned today because we were resetting the state of some domain objects directly in the database in the SetUp of a FitNesse suite; the service still had a cached version from the previous test, and confusing errors ensued. At my previous job we had a wrapper around our NHibernate access with a "ResetSession()" method that we could call between tests to tear down NHibernate's stateful caches and guarantee a clean slate for each test. We also left ourselves a grammar in a FitNesse fixture that we could use to flush all of the cached data between test runs. I would strongly recommend leaving yourself some sort of hook to do exactly that.
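The shape of that "ResetSession()" hook can be sketched generically — this is not the real NHibernate wrapper, just an assumed illustration of the idea: the wrapper owns the O/R mapper session and can throw it away on demand, so the next access rebuilds it (and its caches) from scratch:

```python
class SessionWrapper:
    """Illustrative wrapper that owns an O/R mapper session and exposes
    a reset hook so tests can tear down its stateful caches."""

    def __init__(self, session_factory):
        self._factory = session_factory  # builds a new session on demand
        self._session = None

    @property
    def session(self):
        # Lazily open a session; reuse it for subsequent calls.
        if self._session is None:
            self._session = self._factory()
        return self._session

    def reset_session(self):
        """Called between tests: close the current session so the next
        test starts with a clean slate instead of stale cached objects."""
        if self._session is not None:
            self._session.close()
            self._session = None
```

In the FitNesse scenario above, the suite's SetUp would call `reset_session()` before touching the database directly, so the service can't hand back a cached version of a row the test just rewrote.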
I've never employed this tactic much, but randomizing test data might be a good idea. In our case, I think we could make our problems largely go away by using a bit fancier FitNesse machinery to grab new surrogate key values inside our tests to avoid test data collisions. The test mechanics get a little harder, but I'll gladly take a bit more coding in return for tests that can run back to back.
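The surrogate-key idea reduces to one rule: every test run grabs keys that no earlier run could have used, so inserts can never collide with leftover rows. A trivial sketch (the function name is invented; a real suite might pull the next value from a database sequence instead):

```python
import itertools

# Monotonically increasing counter; in a real suite this would typically
# be backed by a database sequence or identity column.
_next_id = itertools.count(1000)

def new_surrogate_key():
    """Hand each test a fresh, never-before-used key so its inserts
    can't collide with rows left behind by a previous test run."""
    return next(_next_id)
```

Each test that needs a row calls `new_surrogate_key()` instead of hard-coding an id, which is exactly what lets the same suite run back to back without a cleanup step in between.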
*A* way to get around the Singleton problem without trashing your testability and creating strong coupling: http://structuremap.sourceforge.net/SingletonInjection.htm