Absolutely nothing. Or so the story goes…
I had a mini-exchange with my pal Glenn Block last night on Twitter. He was recycling the somewhat old meme that code coverage is a cockamamie metric that is, as he puts it, "a false security blanket on quality."
Measurements (other than net profit, of course) are merely indicators. They give us insight into what we're doing: red flags that tell us when to follow up and drill in with more qualitative, hands-on tools. This is, more or less, the idea behind Autonomation, or "automation with a human touch."
For example: while code coverage isn't proof that our testing efforts will yield higher maintainability, it does tell us something about a team's commitment to its testing practice.
I tend to believe the most interesting indicators are composite metrics. What happens when we start combining code coverage with cyclomatic complexity? I'd read an inverse relationship over time, coverage rising while complexity falls, as a good indicator that we're doing simple design.
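One way that composite reading could be operationalized, as a minimal sketch: record a (coverage, complexity) snapshot per build and flag the healthy pattern. The function names and data shape here are my own illustration, not any standard tool's API.

```python
def trend(values):
    """Net change across a series of snapshots, oldest first."""
    return values[-1] - values[0]

def simple_design_signal(snapshots):
    """snapshots: list of (coverage_pct, avg_complexity) tuples, one per
    build, oldest first. Returns True when coverage is trending up while
    average cyclomatic complexity is trending down -- the pattern the
    post suggests reading as evidence of simple design."""
    coverage = [c for c, _ in snapshots]
    complexity = [x for _, x in snapshots]
    return trend(coverage) > 0 and trend(complexity) < 0

# Hypothetical example: four builds over time
builds = [(52.0, 9.1), (58.5, 8.7), (63.0, 8.2), (70.2, 7.4)]
print(simple_design_signal(builds))  # rising coverage, falling complexity
```

In practice the raw numbers would come from whatever coverage and complexity tools your stack already runs; the point is only that the two series are read together rather than in isolation.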
The important thing to remember about metrics is that they are single-purpose tools which, correctly applied, offer us small tidbits of insight. It's up to you to find meaningful and creative uses for the various metrics and their combinations. Once you start doing that, measurements like code coverage can certainly help you achieve your immediate goals.