Ayende’s post contradicts everything I know about software. Ayende argues against the idea that maintainability can be measured by how easily someone unfamiliar with the code can pick it up, understand it, and make changes to it (as David mentioned in the comments, we must assume that this someone has the appropriate skills for the job).
I am about to present my point of view. I have organized my thoughts as a mathematical demonstration. If you disagree with something and wish to comment, please refer to the exact step you disagree with.
I have learnt to forget about all the code I have written and all the code I have been familiar with. One cannot claim to be familiar with a sufficiently large code base (I would estimate anything with more than 100 K LoC). This is an axiom: if there were no upper threshold beyond which one cannot be familiar with a code base, one could be virtually familiar with every line of code written on earth.
A code base is maintainable if any developer can quickly and efficiently work on any part of it. We never know where in the code the next bug will pop up.
From the axiom above, to be able to work on any part of an application, one must be able to easily discover the code to work on (because we cannot assume that anyone is familiar with the whole code base). This is possible only if the code is easy to learn.
Thus maintainability and learnability of a code base represent the same thing: to be maintainable, code must be learnable. But what makes code learnable?
Code is learnable when its implementation details can be quickly mastered. Details are mainly the algorithms in method bodies, the interactions between classes, and the evolution of state at run-time.
The human brain can only focus on a small number of details at a time. Thus code must be partitioned in such a way that discovering one detail does not lead to an escalation of detail discovery.
To avoid such escalation, the code base must be partitioned and the elements of the partition must be abstracted from each other. Elements of the partition are called components. Concretely, a component is a group of types that interact strongly with one another (this is referred to as high cohesion inside a component). If one uses one type of a component, one uses all of them; if one changes one type of a component, one changes all of them.
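As a minimal sketch of what high cohesion looks like in practice (the component and type names below are hypothetical), here is a small “orders” component whose three types only make sense together:

```python
from dataclasses import dataclass, field


@dataclass
class OrderLine:
    product: str
    quantity: int
    unit_price: float


@dataclass
class Order:
    lines: list[OrderLine] = field(default_factory=list)

    def add_line(self, line: OrderLine) -> None:
        self.lines.append(line)

    def total(self) -> float:
        # Order is meaningless without OrderLine: using one means using both.
        return sum(l.quantity * l.unit_price for l in self.lines)


class OrderValidator:
    """Checks an Order; changing OrderLine forces a change here too."""

    def validate(self, order: Order) -> bool:
        return all(l.quantity > 0 and l.unit_price >= 0 for l in order.lines)
```

Using any one of these types drags in the others, and a change to OrderLine ripples through Order and OrderValidator; the point is that the ripple stops at the component boundary.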
Two components are abstracted from each other if, when learning the details of component A, one does not need to learn the details of component B. Implementation details in different components thus do not relate to each other; this is known as low coupling between components. Still, a certain amount of coupling between components is needed to make them work together. At the very least, to communicate, components must agree on interfaces (or, even better and more restrictive, components must agree on interfaces + contracts). These interfaces are the only points where components statically depend on each other. Having abstract component boundaries solves well the problem of the escalation of details to discover. But another problem is lurking: the complexity of the graph of dependencies between components.
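To make the agreement on interfaces concrete, here is a minimal sketch (with hypothetical names): a Checkout component that statically depends only on the PaymentGateway abstraction, never on the concrete component implementing it.

```python
from abc import ABC, abstractmethod


class PaymentGateway(ABC):
    """The agreed-upon interface: the only thing the two components share."""

    @abstractmethod
    def charge(self, amount: float) -> bool: ...


class FakeGateway(PaymentGateway):
    """One component's implementation; its details stay behind the interface."""

    def __init__(self) -> None:
        self.charged: list[float] = []

    def charge(self, amount: float) -> bool:
        self.charged.append(amount)
        return True


class Checkout:
    """The other component: coupled to PaymentGateway, not to FakeGateway."""

    def __init__(self, gateway: PaymentGateway) -> None:
        self._gateway = gateway

    def pay(self, amount: float) -> bool:
        return self._gateway.charge(amount)
```

Learning Checkout requires reading PaymentGateway but none of FakeGateway’s details; that is the abstraction between components at work.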
If the graph of dependencies between components is complex, developers cannot get the big picture of the code base. This big picture is needed to determine which component will be touched to achieve a particular maintenance/evolution task. This big picture is also needed to anticipate future major evolutions.
One way to make the graph of dependencies between components easy to understand is to avoid cyclic dependencies in it. If a cycle exists between some components, they cannot be developed, refactored, or tested in isolation. This is what Robert C. Martin calls the ‘morning after syndrome’ in his fantastic paper on the Acyclic Dependencies Principle. The ‘morning after syndrome’ says that when you break a component, the escalation of problems becomes dramatic if the broken component is involved in a cycle with other components. When breaking a component, you are breaking the components that depend on it. Without cycles, this set of components can be clearly identified. With cycles, this set of components represents virtually the whole code base.
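This difference can be made concrete with a small graph utility (a sketch, independent of any particular tool): given a map from each component to the components it depends on, compute everything broken by a change, i.e. the transitive dependents.

```python
def broken_by(changed: str, deps: dict[str, set[str]]) -> set[str]:
    """Return every component transitively depending on `changed`.

    `deps` maps a component to the set of components it depends on.
    """
    impacted: set[str] = set()
    frontier = {changed}
    while frontier:
        # Direct dependents of anything already known to be impacted.
        frontier = {c for c, d in deps.items()
                    if d & frontier and c not in impacted}
        impacted |= frontier
    return impacted


# Without a cycle, the broken set is clearly bounded:
acyclic = {"UI": {"Domain"}, "Domain": {"Core"}, "Core": set()}
print(sorted(broken_by("Core", acyclic)))  # ['Domain', 'UI']

# Add one back-edge (Core -> UI) and a change to Core now breaks everything:
cyclic = {"UI": {"Domain"}, "Domain": {"Core"}, "Core": {"UI"}}
print(sorted(broken_by("Core", cyclic)))   # ['Core', 'Domain', 'UI']
```

With the cycle, even the changed component ends up in its own broken set: the whole code base must be re-verified after any change.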
Having no cycles between components means that the components live in a DAG (Directed Acyclic Graph). In other words, it means that the components are layered.
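Assuming the same kind of dependency map as before (component names hypothetical), the layers of that DAG can be computed by repeatedly peeling off the components that no longer depend on anything, a rough sketch of what is often called levelization:

```python
def layers(deps: dict[str, set[str]]) -> list[set[str]]:
    """Split a dependency DAG into layers: layer 0 depends on nothing,
    layer n depends only on components in the layers below it."""
    remaining = {c: set(d) for c, d in deps.items()}
    result: list[set[str]] = []
    while remaining:
        level = {c for c, d in remaining.items() if not d}
        if not level:
            # Only components locked in cycles are left: no layering exists.
            raise ValueError("cycle detected: components cannot be layered")
        result.append(level)
        remaining = {c: d - level
                     for c, d in remaining.items() if c not in level}
    return result


deps = {"UI": {"Domain"}, "Domain": {"Core"}, "Core": set()}
print(layers(deps))  # [{'Core'}, {'Domain'}, {'UI'}]
```

The ValueError branch is the whole argument in one line: the moment a cycle appears, the notion of layers stops existing.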