My team had a conversation about code statistics at lunch the other day, specifically about Cyclomatic Complexity numbers. The gist of the conversation was whether or not it was important or useful to regularly run these metrics on your code. My coworker's point, which I *mostly* agree with, is that he would refactor a class well before it got to a high CC number by simple inspection. As in, he doesn't need a tool to "know" when a class is getting too big or taking on too many responsibilities.
I would agree completely (and I think you should be able to spot bad code visually anyway), except for the old parable about the frog in boiling water. Drop a frog into boiling water and he jumps right out. Put a frog in cool water, then gradually heat the water to boiling, and he won't jump out. Classes can be the same way as they slowly accrete yet one more function. A CC number is an awfully quick warning that you're going out of bounds. A run of NDepend against StructureMap pretty well confirmed some weak spots in the class structure for me and spurred me to make some refactorings that cleaned up the internal structure.
Frank Kelly has a post that pretty well expresses my opinions on metrics. But in brief, even though I routinely forget to run NDepend, I think:
- Metrics should only inform, never take the place of a constant attention to code structure and visual awareness of that structure
- Metrics are very useful when you inherit someone else's codebase
Btw, this post and conversation were spawned by a codebase I was looking at (no names) that had some monster classes. When I ran some CC numbers I came up with a half dozen classes over 200 in Cyclomatic Complexity. In case you're not familiar with the CC number, 20 is the common rule of thumb for the upper threshold of a class. 200 screams for refactoring.
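In case the number itself seems mysterious: cyclomatic complexity is just one plus the count of independent decision points (branches, loops, short-circuit boolean operators). Here's a minimal sketch in Python that approximates it by walking a syntax tree — the `classify` function is a made-up example, not from the codebase above:

```python
import ast
import textwrap

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: one plus the number of decision points."""
    tree = ast.parse(textwrap.dedent(source))
    decisions = 0
    for node in ast.walk(tree):
        # each branch, loop, or exception handler adds one independent path
        if isinstance(node, (ast.If, ast.IfExp, ast.For, ast.While,
                             ast.ExceptHandler)):
            decisions += 1
        # each extra operand in an `and`/`or` chain adds a short-circuit path
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1
    return decisions + 1

src = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for d in (2, 3, 5):
        if n % d == 0 and n != d:
            return "composite"
    return "other"
"""
print(cyclomatic_complexity(src))  # -> 6
```

Tools like NDepend do something far more thorough, but the principle is the same: every extra branch is another path a reader (and a test suite) has to account for, so the count climbing past 20 for a single class is the water starting to boil.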