Code Rules are not just about Clean Code

Like any developer-tool vendor, we at NDepend eat our own dog food. In other words, we use NDepend to develop NDepend. Most of the default code rules are activated in our development, and they prevent all sorts of problems daily.

Rules like…

…are actually helping much more than it appears at first glance. It is not so much about keeping the code clean for the sake of it. More often than not, a green rule that suddenly gets violated sheds light on a non-trivial bug, one that is much more harmful than just…

  • 5% of a type uncovered by tests
  • a public method that could be declared internal
  • an immutable type whose instances can now see their state change
  • a type not used anymore that can be removed
  • an unsealed class that could be declared sealed
  • a constructor calling a virtual method
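The last item in the list above hides a classic trap that a rule catches far more reliably than a reviewer does. A minimal C# sketch (class names are invented for illustration, not taken from the NDepend code base): when a base constructor calls a virtual method, the derived override runs before the derived constructor body has finished initializing its state.

```csharp
using System;

class Base
{
    protected Base()
    {
        // Danger: this virtual call dispatches to the derived override,
        // which runs before Derived's constructor body has executed.
        Describe();
    }

    protected virtual void Describe() { }
}

class Derived : Base
{
    readonly string _name;
    public string SeenDuringConstruction;

    public Derived()
    {
        _name = "widget"; // too late: Describe() already ran
    }

    protected override void Describe()
    {
        SeenDuringConstruction = _name ?? "(null)";
    }
}

static class Program
{
    static void Main()
    {
        var d = new Derived();
        Console.WriteLine(d.SeenDuringConstruction); // prints "(null)"
    }
}
```

The program compiles without any warning, yet the override observes a half-constructed object, which is exactly the kind of bug a green-turned-red rule surfaces immediately.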

It is all about regression. It is not intrinsically about the rule being violated, but about what happened recently that provoked a green code rule to become violated.

This morning, thanks to a violated code rule, I stumbled on a bug that would otherwise almost certainly have infiltrated the next public release. The rule Potentially dead Methods, which is usually green, detected an unused method. The method deemed dead (i.e. uncalled) is actually a Windows Forms event handler. In other words, I learned that this handler is no longer invoked when its associated LinkLabel control is clicked. I figured out that, after a few refactoring sessions with the Windows Forms designer, the designer had removed the call-back wiring to this handler.
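To make the failure mode concrete, here is a hedged, plain-event sketch with no Windows Forms dependency (all names are invented for illustration). The handler compiles fine and the compiler emits no warning, yet if the `+=` subscription line disappears, for instance lost during a designer refactoring, the handler is simply never invoked:

```csharp
using System;

class LinkLabelLike
{
    public event EventHandler LinkClicked;
    public void SimulateClick() => LinkClicked?.Invoke(this, EventArgs.Empty);
}

class SettingsForm
{
    readonly LinkLabelLike _link = new LinkLabelLike();
    public int HandlerCalls;

    public SettingsForm(bool wireHandler)
    {
        if (wireHandler)
            _link.LinkClicked += OnLinkClicked; // the line a designer refactoring can silently drop
    }

    // Without the subscription above, this method is dead code:
    // no compiler error, no warning, just a feature that stopped working.
    void OnLinkClicked(object sender, EventArgs e) => HandlerCalls++;

    public void Click() => _link.SimulateClick();
}

static class Program
{
    static void Main()
    {
        var wired = new SettingsForm(wireHandler: true);
        wired.Click();
        Console.WriteLine(wired.HandlerCalls); // prints 1

        var broken = new SettingsForm(wireHandler: false);
        broken.Click();
        Console.WriteLine(broken.HandlerCalls); // prints 0
    }
}
```

This is why a dead-method rule is effectively a regression detector here: the orphaned handler is the only visible symptom of the missing wiring.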

The screenshot below shows the situation. It also shows that ReSharper detected that this handler method is no longer called.

Dead Method

I remember a lesson learned when I was a junior C++ programmer: a compilation warning should be treated as an error. The same advice remains judicious for the code rules a team decides to respect.

In any sufficiently complex system, we simply cannot rely on human skills to detect such regressions. It is not about keeping the code clean just for the pleasure of developing in a clean environment. It is about automatically gathering meaningful warnings as soon as possible, in order to identify harmful regressions and fix them.

In this context, the meaning of each code rule evolves significantly.

  • Checking regularly that classes are 100% covered by tests is not about being obsessive about quality. Empirically, experience shows that portions of code that are hard to cover with tests, or changes that a developer forgot to cover with tests, are typically highly error-prone. Poorly testable code is highly error-prone.
  • Checking regularly that classes that used to be immutable don’t become mutable is essential. Uncontrolled side effects typically cause some of the trickiest bugs. Clients that consume an immutable class certainly rely on this characteristic (you actually rely on string immutability every day, maybe without even noticing it).
  • Checking regularly that there is no dead code is not only about making sure we don’t maintain extra, unnecessary code. When a code element suddenly becomes unused, a developer might have forgotten to remove it during a refactoring session. But as we’ve seen, it is quite possible that a newly unused code element signals a regression bug.
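The immutability point can be illustrated with a minimal sketch (the `Temperature` type is invented for illustration): every field is readonly, so a “modification” returns a fresh instance, the same way string operations do. If someone later adds a setter, every client that cached an instance assuming it could not change is silently at risk, and an immutability rule turning red is the earliest signal.

```csharp
using System;

// An immutable type: all fields readonly, no method mutates state.
public sealed class Temperature
{
    public readonly double Celsius;

    public Temperature(double celsius) { Celsius = celsius; }

    // "Mutation" yields a new instance; the receiver is untouched.
    public Temperature Plus(double delta) => new Temperature(Celsius + delta);
}

static class Program
{
    static void Main()
    {
        var mild = new Temperature(20.0);
        var warm = mild.Plus(5.0);
        Console.WriteLine(mild.Celsius); // the original instance is unchanged: 20
        Console.WriteLine(warm.Celsius); // the new instance carries the change: 25
    }
}
```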
  • PatrickSmacchia

    Amir, you can start by browsing the NDepend standard rules set: http://www.ndepend.com/DefaultRules/webframe.html. Since the rules are written in C#, they contain comments with all sorts of information about the advantages of each rule and why it exists.

  • amir yousefi

    I’m looking for resources (a book, blog post, or …) that can tell me about the advantages of these rules, or answer the question of why these rules should exist.

    Can anyone help me?

  • http://www.opwernby.com/ Dan Sutton

    The most important thing you’ve said in this article is, “A compilation warning should be treated as an error.” This is a vital thing to learn: it’s hard to make many programmers understand that a warning is a warning *for a reason* — just because you can predict how it’ll resolve in the compiled code with this iteration of compiler/OS/your software/runtime environment/width of chip’s data bus, that doesn’t mean you can always predict it… and this is true in any language. Nice article!