Layering, the Level metric and the Discourse on Method

When we discuss the architecture of a code base, we often qualify a piece of code with terms such as high level and low level. This is common vocabulary and we all intuitively know what it means. A piece of code A (whether a method, a class, a namespace or an assembly) is considered higher level than a piece of code B if A uses B while B doesn't know about A. From this simple definition, we can order the pieces of code in our program as shown in the following diagram:

We can say that third-party code that our application uses but that we don't maintain has no level. Indeed, such code doesn't use our application and it doesn't belong to our code base.

Then we can say that our code that doesn't use anything has a level of 0. Our code that uses only third-party code or code of level 0 has a level of 1. And so on.

We have actually just defined a code metric named Level. To be computed, the Level metric just needs a dependency graph. If you consider the graph of dependencies between the methods of your program, you can assign a Level to each of your methods. The same applies to the graph of dependencies between your types, between your namespaces and between your assemblies.
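To make this concrete, here is a minimal C# sketch of how such a Level could be computed from a dependency graph. The Component class and its fields are invented for the illustration, this is not how NDepend computes the metric. Notice that third-party code gets no level, and (as we will see below) neither does code involved in, or relying on, a dependency cycle:

using System;
using System.Collections.Generic;

class Component
{
    public string Name;
    public bool IsThirdParty;                         // third-party code has no level
    public List<Component> Dependencies = new List<Component>();
}

static class LevelMetric
{
    // Returns the Level of a component, or null if it has no level
    // (third-party code, or code involved in / relying on a cycle).
    public static int? GetLevel(Component c,
                                Dictionary<Component, int?> memo,
                                HashSet<Component> inProgress)
    {
        if (c.IsThirdParty) return null;
        if (memo.TryGetValue(c, out int? known)) return known;
        if (!inProgress.Add(c)) { memo[c] = null; return null; }  // cycle detected

        int level = 0;                                // uses nothing => level 0
        foreach (Component dep in c.Dependencies)
        {
            // Using only third-party code already gives level 1 (see definition above).
            if (dep.IsThirdParty) { level = Math.Max(level, 1); continue; }
            int? depLevel = GetLevel(dep, memo, inProgress);
            if (depLevel == null) { memo[c] = null; inProgress.Remove(c); return null; }
            level = Math.Max(level, depLevel.Value + 1);
        }
        inProgress.Remove(c);
        memo[c] = level;
        return level;
    }
}

// Usage:
//   var memo = new Dictionary<Component, int?>();
//   int? level = LevelMetric.GetLevel(someComponent, memo, new HashSet<Component>());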

We have thus obtained four Level code metrics, defined respectively on methods, types, namespaces and assemblies. These metrics are supported by the tool NDepend. For example, you can easily order your namespaces with this simple Code Query Language (CQL) query:

SELECT NAMESPACES ORDER BY NamespaceLevel DESC 

What is cool is that knowing about Level amounts to a sort of automatic reverse-engineering. With this metric, one doesn't need to ask the code base expert where the high level code and the low level code are. One can just check for oneself with the tool.

But why would a developer want to know about high level and low level code? There is actually a very good reason: at any time, a developer should make sure that the low level code she is writing doesn't accidentally rely on higher level code.

Maybe the most important design rule in making software is divide and conquer. Taking care of levels is so important in software design because it is the primary way to divide and conquer:

  • We divide our code into components (i.e. pieces of code, whether classes, namespaces or assemblies).
  • We conquer by making sure that there are no dependency cycles between our components.

If A uses B, which uses C, which in turn uses A, then A, B and C cannot be developed and tested independently. In other words, A, B and C are indivisible: they are no longer 3 components but instead form one single super-component, globally more complex than the 3 smaller components would be.
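As a contrived C# illustration (the class and method names are made up), here is such a cycle; none of the three classes can be compiled, extracted or tested without the two others:

// A uses B, B uses C, and C uses A again: a dependency cycle.
// The three classes only make sense together: one super-component.
class A { public void Run(B b)  { b.Step(new C()); } }
class B { public void Step(C c) { c.Finish(); } }
class C { public void Finish()  { A a = new A(); /* C relies on A */ } }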

Level and dependency cycles

Interestingly enough, the Level metric cannot be computed when the dependency graph contains cycles, as shown in the following picture.

Not only can a component involved in a cycle not have a level, but a component that directly or indirectly uses a component involved in a cycle cannot have a level either. The Level metric can thus become a proper indicator of whether your components are layered correctly: if it cannot be computed for some components, there are cycles somewhere. Translated into NDepend and CQL terms, if you consider for example that namespaces are components, you just need to write the following CQL constraint to be warned when your code is not layered correctly:

WARN IF Count > 0 IN SELECT NAMESPACES WHERE !HasLevel AND !IsInFrameworkAssembly

The condition !IsInFrameworkAssembly is just there to avoid matching third-party code, which has no level anyway.

Level and abstraction

The Level metric can also help detect poor use of abstraction. Indeed, if you only have concrete components that depend on each other, you will necessarily end up with components with high Level values. On the other hand, if you separate your abstractions from your implementations and link them at run-time by using the plug-in pattern, the Level values of your components will decrease significantly. The following picture illustrates this fact. The 2 red arrows show the direction in which the high level components reside. We can see that the Level values don't increase in the case of loose coupling between implementations.
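Here is a minimal C# sketch of the idea; the ILogger abstraction and the class names are invented for the illustration. Both implementations depend only on the abstraction, so neither one stacks on top of the other and the Level values stay low:

using System;
using System.Collections.Generic;

// The abstraction uses nothing: level 0.
public interface ILogger
{
    void Log(string message);
}

// Each implementation uses only the abstraction: level 1.
// Adding more plug-ins never raises the level of the others.
public class ConsoleLogger : ILogger
{
    public void Log(string message) => Console.WriteLine(message);
}

public class InMemoryLogger : ILogger
{
    public List<string> Messages { get; } = new List<string>();
    public void Log(string message) => Messages.Add(message);
}

// The client also uses only the abstraction, so it too sits at level 1.
// The concrete plug-in is chosen and injected at run-time.
public class OrderProcessor
{
    private readonly ILogger _logger;
    public OrderProcessor(ILogger logger) { _logger = logger; }
    public void Process(string orderId) { _logger.Log("Processed " + orderId); }
}

Had OrderProcessor used ConsoleLogger directly, and ConsoleLogger in turn some other concrete component, the Level values would keep stacking up with each new concrete dependency.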

As a consequence, if your code base contains types, namespaces or assemblies with a high Level value (> 10), this might be an indication that you need to create more abstractions to reduce the coupling between implementations; such components can be listed with a query like the one shown below. If you are interested in knowing more about how to use NDepend for layering, you can read an article I wrote, available here. It is about how to get a clear and maintainable architecture by controlling the dependencies between your components.
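For example, assuming by analogy with the NamespaceLevel metric used above that the type-level metric is named TypeLevel (check the metric names in your NDepend version), such types could be listed with:

SELECT TYPES WHERE TypeLevel > 10 ORDER BY TypeLevel DESC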

Origin of the Level metric

As far as I know, the Level metric was first introduced by John Lakos in his great book Large-Scale C++ Software Design (Addison-Wesley, 1996). Although this book is well-known, IMHO the principles it states should have become mainstream, but they actually didn't. Maybe the book targeted too small a population of developers and architects, those responsible for very large projects. Maybe the industry was not ready for these concepts because the C++ language doesn't make it easy either to write static analyzers or to componentize code properly.

Rambling on Divide and Conquer

Finally, I would like to digress a bit on the divide and conquer principle. It was famously formulated by the French philosopher and mathematician René Descartes in 1637 in his Discourse on Method. Descartes is widely considered a major contributor to modern science and mathematics, and it is worth quoting the four tenets that summarize this discourse:

  • The first was never to accept anything as true if I did not have evident knowledge of its truth; that is, carefully to avoid precipitate conclusions and preconceptions, and to include nothing more in my judgement than what presented itself to my mind so clearly and distinctly that I had no occasion to doubt it.
  • The second, to divide each of the difficulties I examined into as many parts as possible and as may be required in order to resolve them better.
  • The third, to direct my thoughts in an orderly manner, by beginning with the simplest and most easily known objects in order to ascend little by little, step by step, to knowledge of the most complex, and by supposing some order even among objects that have no natural order of precedence.
  • And the last, throughout to make enumerations so complete, and reviews so comprehensive, that I could be sure of leaving nothing out.

I think these tenets apply remarkably well to software development. Here is my personal interpretation:

  • The first tenet can be seen as writing your tests at the same time as (or even before) you write your code, to make sure that you are actually writing code that undoubtedly works (I had no occasion to doubt it).
  • The second tenet can be seen as separating concerns, in order to divide the difficulties and code each concern better (divide each of the difficulties I examined).
  • The third tenet can be seen as writing low level components first, in order to raise (step by step) the complexity of the code. It also pinpoints that a total order between components should exist, meaning that dependency cycles are unacceptable.
  • The fourth tenet can be seen as writing as many integration tests as needed to reach a high test coverage ratio, ideally 100% (I could be sure of leaving nothing out).

Undoubtedly, if Descartes were alive today, he would be a great developer :o)

  • Alexander Rubinov

In order to support and enforce a programming style following the concepts discussed here, I developed an approach and wrote a tool (Java) called CODERU.

    Take a look at coderu.org.