I’m working to revamp our test automation strategy this week and next. By test automation here I’m referring to full or partial stack tests that exercise the application from website to database and back again rather than just the normal TDD/BDD testing. As I work today I’m going to jot down notes here and dump it out later. Stream of consciousness here we come:
- Test automation is hard. You’ve got to be committed to using it to really make it work. I don’t think you can succeed by only doing automated testing at the end of the project.
- Integrated testing will catch different types of errors than unit tests. You still want to do unit testing first though.
- I don’t see how you can possibly make test automation work at a large scale without very close participation from the actual development team.
- I look at automated testing as a way to find, remove, and prevent problems in the codebase. It’s not an end-all, be-all mechanism by itself to prove that the application code works correctly.
- My very strong opinion is that whitebox tests provide a much, much higher reward-to-effort ratio than blackbox tests. I think the people who only accept the results of blackbox tests are treating tests solely as a way to verify the correctness of the system, rather than as one more mechanism for removing problems from it.
- The software design matters a lot. Qualities like good separation of concerns open up many more opportunities for whitebox testing, and using an IoC container is a great enabler. For example, we have a custom test harness for our new rules engine. Ordinarily, the full application runs as the website plus 2-3 Windows services communicating via queuing. Automated testing of the entire stack in its normal configuration isn’t efficient, so we do our functional tests as whitebox tests. Since all services in the entire system are wired up via StructureMap, we can set up the entire application in one AppDomain. We swap out our normal NHibernate-based Repository for an InMemoryRepository and use in-memory queues in place of our normal queuing mechanisms. By removing the database, the queues, and the services from the test automation equation, one person was able to write 700+ automated tests in a week that thoroughly exercise this rules engine and run, all together, in under 4 minutes.
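To make the shape of that wiring concrete, here’s a minimal sketch in Python (the real stack is StructureMap and NHibernate in C#; every class and key name below is a hypothetical stand-in, not our actual code). The point is only the pattern: production registers the database-backed repository and real queues, while the test harness registers in-memory doubles against the same keys, so the whole “application” runs in one process.

```python
class InMemoryRepository:
    """Stand-in for the database-backed repository: just a dict."""
    def __init__(self):
        self._items = {}

    def save(self, key, entity):
        self._items[key] = entity

    def find(self, key):
        return self._items.get(key)


class InMemoryQueue:
    """Stand-in for the real queuing mechanism between services."""
    def __init__(self):
        self._messages = []

    def send(self, message):
        self._messages.append(message)

    def receive(self):
        return self._messages.pop(0) if self._messages else None


class Container:
    """Toy IoC container: maps a service key to a factory."""
    def __init__(self):
        self._registrations = {}

    def register(self, key, factory):
        self._registrations[key] = factory

    def resolve(self, key):
        return self._registrations[key]()


# The test harness wires in the in-memory doubles; production wiring
# would register the real repository and queues under the same keys.
container = Container()
container.register("repository", InMemoryRepository)
container.register("queue", InMemoryQueue)

repo = container.resolve("repository")
repo.save("rule-1", {"name": "discount rule"})
```

Because the application code only ever asks the container for “a repository” or “a queue,” the tests never need the database or the messaging infrastructure running at all, which is where the speed comes from.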
- I guess I don’t need to tell you that our whitebox testing strategy would never work if we’d used the TypeMock style for interaction tests in the unit tests instead of applying Dependency Injection and the Dependency Inversion Principle between classes and their dependencies. Remember that there’s more to the TypeMock vs. Dependency Injection decision than just crowbarring mock objects into your unit tests. The DI approach grants you much more flexibility in your design going forward. Maybe you don’t care about the flexibility that DI affords, but at least be aware of the trade-off.
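The difference is easiest to see in a tiny constructor-injection sketch (Python here for brevity; the class names are hypothetical, not from our codebase). Because the dependency arrives through the constructor, a test can hand in a fake with no mocking framework at all, and the same seam is what lets a test harness swap whole subsystems:

```python
class FakeMailSender:
    """Hand-rolled test double: records what would have been sent."""
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))


class OrderProcessor:
    # Dependency Inversion: OrderProcessor depends on "something that
    # can send," injected through the constructor, not on a concrete
    # mail gateway it news up internally.
    def __init__(self, mail_sender):
        self._mail = mail_sender

    def complete(self, order_id, customer_email):
        self._mail.send(customer_email, f"Order {order_id} is complete")
        return True


fake = FakeMailSender()
processor = OrderProcessor(fake)
processor.complete(42, "customer@example.com")
```

A TypeMock-style intercept could fake the mail call too, but it leaves the hard-wired dependency in place; the injected version is the one that scales up to swapping repositories and queues wholesale.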
- Consistency is key for test automation against user interfaces. Choose a standard for naming elements. All of our HTML elements inside forms are named after the property the element edits. We enforce this standard by creating all textboxes and the like with expressions like:
<%=TextBoxFor(x => x.Site.PrimaryAddress.City)%> and never with raw strings. That allows us to script out automated tests with the exact same expressions, for some good old-fashioned compiler safety.
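The same “derive the element name from the property expression” trick can be sketched outside of C# expression trees with a little attribute-recording proxy (Python below; `name_for` and `_PathRecorder` are hypothetical illustrations, not our helpers). The view and the test both compute the name from the same expression, so a renamed property breaks both at once instead of silently breaking only the test:

```python
class _PathRecorder:
    """Records each attribute access as a step in a property path."""
    def __init__(self, path=()):
        self._path = path

    def __getattr__(self, name):
        # Every attribute access returns a new recorder with the
        # name appended, so chained access builds up the full path.
        return _PathRecorder(self._path + (name,))


def name_for(expression):
    """Turn a lambda like `x: x.Site.PrimaryAddress.City` into the
    element name 'Site.PrimaryAddress.City'."""
    recorder = expression(_PathRecorder())
    return ".".join(recorder._path)


print(name_for(lambda x: x.Site.PrimaryAddress.City))
# prints: Site.PrimaryAddress.City
```

In the real C# version the compiler checks the expression for you; in a dynamic language the shared helper at least guarantees the view and the test can never disagree about the name.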
- Race conditions are horrible for testing. Use things like WatiN’s WaitUntilContainsText() method instead of Thread.Sleep(). Remember to have some mechanism for “timing out” tests that hang.
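A generic polling wait in the spirit of WatiN’s WaitUntilContainsText() is only a few lines (sketched here in Python; the helper name is my own, not a WatiN API): re-check a condition on a short interval and fail loudly after a deadline, instead of guessing a Thread.Sleep() duration that is always either too short or too slow.

```python
import time


def wait_until(condition, timeout=5.0, poll_interval=0.05):
    """Poll `condition` until it returns True or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(poll_interval)
    # The built-in timeout is the "mechanism for timing out tests
    # that hang" -- a stuck page fails the test instead of the build.
    raise TimeoutError(f"condition not met within {timeout} seconds")


# Usage: wait for a fake "page" to show its text. In a real test the
# text would appear asynchronously after a postback or AJAX call.
page = {"text": "Saved successfully"}
wait_until(lambda: "Saved" in page["text"], timeout=1.0)
```

The key property is that a fast page costs you one poll interval, not a full worst-case sleep, and a hung page fails with a clear timeout message.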
- Avoid push-button testing APIs. I’ve screwed this up badly in the past. Don’t use test grammars like “Click Save”; favor grammars like “Clicking Save should be successful” or “Clicking Save should fail with these validation errors.” Hide details of the UI workflow that are pure UI ceremony, like having to click a tab before you can edit the fields on it. Say “Enter CC list as [BLAH]” instead of “Click the CC Tab” then “Enter [BLAH] for CC.” This cleans up the test representation a bit and also helps decouple the testing representation from the actual UI. That’s key for us because we’re trying to get automated test coverage on our UI now, while we wait on interaction and layout recommendations from a design firm that will probably change our basic UI layout quite a bit.
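The “hide the ceremony” idea boils down to putting a driver object between the test grammar and the browser (a Python sketch below; `MessageScreenDriver`, `enter_cc_list`, and the element names are hypothetical, and the fake browser stands in for a real driver like WatiN). The test speaks in intent; the driver absorbs the clicks and tab switches:

```python
class FakeBrowser:
    """Stand-in for a real browser driver; just records actions."""
    def __init__(self):
        self.actions = []

    def click(self, element):
        self.actions.append(f"click {element}")

    def type_into(self, element, text):
        self.actions.append(f"type '{text}' into {element}")


class MessageScreenDriver:
    """Screen driver: one place that knows the UI's mechanics."""
    def __init__(self, browser):
        self._browser = browser

    def enter_cc_list(self, cc_list):
        # The tab click is ceremony the test never mentions. If the
        # design firm moves the CC field off the tab, only this method
        # changes -- not every test that enters a CC list.
        self._browser.click("CCTab")
        self._browser.type_into("CC", cc_list)


browser = FakeBrowser()
screen = MessageScreenDriver(browser)
screen.enter_cc_list("team@example.com")
```

When the redesign lands, the test grammar (“Enter CC list as [BLAH]”) survives untouched and only the driver internals get rewritten.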
- Make the expression of the tests be as declarative as possible. This will help make the tests more readable as executable specifications, but it’ll also help keep the tests from being brittle in the face of application changes.
- You need a lot of quiet assertions in the test automation code to provide more context on why a test failed. One example that’s bitten me in the past is saving form data that fails validation rules and therefore cancels the form post. The test automation code needs to write out the validation messages instead of failing silently when you check the database after the fact.
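A sketch of that kind of quiet assertion (Python; `save_and_verify`, `SaveRejected`, and the toy validation rule are all hypothetical illustrations): the save helper checks for validation errors at the moment of the save and fails with the messages attached, rather than returning silently and letting a database assertion fail mysteriously three steps later.

```python
class SaveRejected(AssertionError):
    """Raised when a save was cancelled by validation."""


def validate(form):
    """Toy validation rule: 'City' is required."""
    return [] if form.get("City") else ["City is required"]


def save_and_verify(form):
    """Save the form, surfacing validation messages on failure."""
    errors = validate(form)
    if errors:
        # Fail here, with context, instead of silently returning and
        # letting a later check against the database fail with no clue
        # about why the row never showed up.
        raise SaveRejected("Save failed with validation errors: "
                           + "; ".join(errors))
    return True


save_and_verify({"City": "Austin"})
```

The diagnostic payoff is in the failure message: “Save failed with validation errors: City is required” tells you immediately why the record never reached the database.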
That’s enough. I’m sure it’s not coherent, but I’ll answer any questions in the comments.