On Programmer Productivity

This morning I stumbled on How to destroy Programmer Productivity by George Stocker, and Fire And Motion by Joel Spolsky. These posts talk about programmer productivity, and especially about the patterns that kill it. So I thought I'd share my own positive productivity practices.


The number one productivity trick is to be passionate about the project(s) you are working on daily. You must believe in them. You must imagine how cool the result will be one, two, three years from now. On several occasions I have taken a while to stop and think: hey, NDepend (my project) has evolved so much during the last two years. Two years ago, imagining it would have all these new features and improvements was science fiction; today it is reality, a reality that has been delivered to real-world users. How cool!

Patience and Confidence

So not only are passion, and even love, required, but also patience. Measuring all the work achieved during the last X years makes me confident that it will keep going: one feature, one improvement, one bug fix, one line of code at a time. Each past win nurtures future wins, and will motivate you to be productive every single hour, because there is no more important (professional) thing than that.

Today it seems I lost my whole day fixing a pesky bug; had I not been so dumb I could have fixed it in half an hour! No stress: it has happened to me tons of times in the past, and looking back at all those years I can see it is the way to go. This is confidence.

Getting Started

One central facet of passion for a project is to have a good idea of where we want to go. Short term (days, weeks), mid term (months), long term (years).

Short term, to me, means having my code base filled with prioritized TODO comments, TODO5 being more urgent than TODO4. This sounds like a very rudimentary way of listing tasks, but TODOs have a strong advantage.

TODOs are located where the coding action must take place. Hence starting work on a TODO eliminates the prior step of finding where to start in the code. And the act of getting started (or actually not getting started) is a productivity killer. Hence everything that can possibly help you get started writing code is a productivity asset.

Another advantage of TODOs is that when all TODO(n) have been done, the job is done. No more TODO(n) is a very simple Definition of Done, since all tasks surrounding coding (writing tests, code review, validation…) can and must be TODOed.
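To make this concrete, here is a minimal sketch of what prioritized TODO comments can look like in practice (the class and feature names below are made up for illustration only):

```csharp
class GraphExporter
{
    // TODO3: handle the empty-graph case (most urgent)
    // TODO2: stream the SVG to disk instead of building it all in memory
    // TODO1: add an option to embed the CSS inside the SVG file
    public void ExportAsSvg(string filePath)
    {
        // ...
    }
}
```

A simple text search for "TODO3" then leads straight to the spot where the most urgent work must happen.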

Getting Focused

Of course at some point, TODOs must be defined, and then the code must be crawled to find the best possible location to write them. Typically this happens when starting work on an item (what Agile and Scrum folks call a PBI, Product Backlog Item). At this point a PBI gets transformed into prioritized TODOs, a few or many depending on the scale of the PBI. It can be a one-hour bug fix or a feature that will take 3 months of dev.

There shouldn’t be more than 10 top-priority TODOs at a time, because one important productivity trick is to stay focused on what to do. Hence a feature can be divided into 90 TODO1s and 10 TODO2s. Once there are no more TODO2s, we can pick the next 10 tasks to be done from the 90 TODO1 set. And if the current TODO2 to work on is too coarse, transform it into 10 TODO3s, and so on.

This way, every morning you just have to search for TODO(n) in your workspace to instantly know what to do. You don’t have to think about what to do today; just get started and stay focused.


Imagining mid-term (months) and long-term (years) goals is what can nurture passion, and then day-to-day productivity. Two TODO lists must be maintained for that. Since NDepend follows Visual Studio releases, our sprint duration is around 12 to 18 months. So we have one mid-term list of what to do for the next major version that will be released N months from now. The mid-term list is driven both by ROI (Return On Investment, the ratio of feature impact to dev effort) and by the User Voice, which offers a good idea of each future feature’s impact. The same applies to the long-term list, except that this list is not limited by the sprint duration.


So far I have mostly covered project management and organisation practices that are designed to boost productivity by keeping the flame of passion intact. But Passion, Patience, Confidence, Getting Started, Getting Focused, defining Goals, are all about psychology. And there are several classic everyday psychology routines that are essential to sustaining high productivity.

Having a sane and joyful life is of course essential. Personally, I need something other than coding in my life: friends, family, kids and hobbies. Every day, several hours must be dedicated to something other than coding. One must sleep enough and work regular hours. You must identify the hours when you are the most focused (typically early morning or late night) and fight to be able to work during this privileged time. This might sound obvious to you (and to me), but super-geeky individuals must be reminded of it. You just can’t dive into code 14 hours a day and keep up productivity in the long term.

I have found exercise to be the number one creativity booster in my programmer life. Running often is, for me, a productivity routine. Not only do new ideas come to me gently, they just pop into my mind, but endurance sport also provokes an endorphin rush that can relieve most pain and stress. Runners actually develop an addiction to this, which is a pretty cool addiction. I consider sport hours to be part of work hours. Doing so is pretty practical for never missing a session. By the way, I recently found out that Alan Turing got many of his super-brilliant ideas while running (he almost qualified for the 1948 Olympic marathon).

In addition to sport, I also practice daily meditation, especially mindfulness-based stress reduction (MBSR). This just works to make me more quiet, peaceful and focused. MBSR is a very simple activity that can be practiced several times a day in sessions of a few minutes. It consists of developing the ability to feel the present moment, by listening to the breath, the body’s sensations, the environment. The key is to develop the ability not to be emotionally disturbed by the flow of thoughts that constantly comes to mind. We all know that to develop muscle one must train, but few realize that the brain can and must be trained as well. MBSR, and meditation in general, is training to develop brain and cognitive capacity.

When it comes to psychology and productivity we often hear of the notion of flow. Being in the flow means being totally focused on the current activity. The conditions for reaching the flow, which is the Holy Grail of productivity, are being passionate about the activity and having solid skills. We often hear that to be skilled in a domain, 10,000 hours of practice are needed. Flow is more for seasoned programmers. Another condition for the flow magic to happen is to be challenged enough by the task. Flow cannot be reached when doing simple stuff. The good news is that programming (well) is more often difficult than simple!

Of course, avoiding interruption is also an important part of productivity. There are many interruptions you can control, like treating emails in batch mode twice a day, not arguing on the web, not periodically checking Twitter or Facebook… This is where the benefits of meditation and sport come into play, because focus is increased. Then there are the uncontrollable interruptions, whether colleagues, meetings, or kids (if you work at home). There is no alternative but to cope with them and be able to get back to focus as quickly as possible.

There is this theory of reward that has never worked for me. Like: if I code 3 hours in a row, then I can go to the beach for an hour. This just doesn’t work for me. I consider the reward to be getting the job done, and if I decided to go to the beach for an hour today, I will go anyway. The theory of reward may work for those who are not passionate about their work. It is your number one professional responsibility to find a job you are passionate about.

I hope sharing my productivity practices can and will help you get more productive! Now it’s time to get back to code (and so will I!).

Posted in Productivity | 11 Comments

Running NDepend Rules through NDepend.API

When developing a tool for developers, at some point comes the need to offer users an API to consume the product features programmatically. NDepend.API was released almost two years ago with NDepend v4.0. This API is pretty extensive and offers the possibility to achieve most common operations, including creating/loading a project, running an analysis, browsing the code model and all dependencies, metrics, attributes… and compiling/running CQLinq code queries and code rules.

When it comes to Continuous Integration, it is pretty convenient to run an analysis, run code rules, and then report code rule violations, programmatically. While a lot of design effort has been spent on the API, I realized that using it to achieve these operations in sequence might not be obvious at first glance. Hence here is the code excerpt for that:
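The original embedded snippet is lost here, so the following is only a sketch of the sequence (the NDepend.API member names below are written from memory and may not match the shipped API exactly; the downloadable RulesRunner solution is the authoritative version):

```csharp
// Sketch: load a project, run an analysis, execute every active code rule,
// and return a non-zero exit code if any rule is violated (so CI can fail).
// API names are approximate; see the downloadable solution for the real code.
using NDepend;
using NDepend.Path;
using NDepend.Project;

class RulesRunnerSketch
{
    static int Main(string[] args)
    {
        var servicesProvider = new NDependServicesProvider();
        var projectManager = servicesProvider.ProjectManager;
        var project = projectManager.LoadProject(args[0].ToAbsoluteFilePath());

        // Run the analysis; this produces the code model the rules run against.
        var analysisResult = project.RunAnalysis();
        var codeBase = analysisResult.CodeBase;

        int nbRulesViolated = 0;
        foreach (var query in project.CodeQueries.CodeQueriesSet.AllQueriesRecursive)
        {
            if (!query.IsActive) { continue; }
            var result = query.Compile(codeBase).Execute();
            if (result.IsARuleViolated) { nbRulesViolated++; }
        }
        return nbRulesViolated;
    }
}
```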

You can download the VS solution here. You just need to unzip it into $NDependInstallDir$\RulesRunner and then compile it to obtain RulesRunner.exe. Notice that since NDepend.API.dll is stored in $NDependInstallDir$\Lib, the source code uses the class AssemblyResolverHelper to redirect assembly resolution to the \Lib sub-dir.


Posted in Code Query, Code Rule, NDepend | Leave a comment

.NET Developer Tooling: The Roslyn Revolution

Unless you’ve been on holiday for the last week on an island without internet, you’ve probably heard that Microsoft announced that Roslyn is now available as open source.

Roslyn is the next C# and VB.NET compiler, developed in these languages themselves. It is not just a compiler, but also provides an extensive set of APIs to do plenty of interesting stuff on source code, including intellisense, refactoring, static analysis, semantic analysis… This paradigm is called compiler as a service.

Until now (I mean, until Roslyn gets integrated into Visual Studio), the offer proposed by Visual Studio in these areas was pretty limited. If you need more than the basics (in other words, if you are a seasoned pro .NET developer who values his time), you are certainly using JetBrains ReSharper or DevExpress CodeRush daily. But with Roslyn finally becoming a reality, the teams behind these two well-established tools are facing a dilemma: to use Roslyn or not to use it?!

A risky decision must be taken

  • Don’t use Roslyn, and take the risk that the Roslyn revolution will challenge their leading position. Another risk, more subtle but very noticeable, is that Visual Studio will host the Roslyn infrastructure in memory. So not using Roslyn means asking users to host two compiler/analysis infrastructures in their VS process. This will have a crucial impact on both memory consumption and performance. Not using Roslyn is the choice of the R# team, and I am a bit worried since I am a big fan of R#.
  • Do use Roslyn, and take the risk of spending a massive refactoring effort for at least one release cycle, effort that could have been spent on new features to increase competitiveness. Also, while Roslyn has been developed by some of the smartest brains on earth, it might not be suitable for everything. R# developers claim that their solution-wide analysis is not really doable with Roslyn. But investing in such a huge refactoring is the choice of the CodeRush team. One important advantage of relying on Roslyn is that CodeRush will get future language features almost for free. This can pay off over the long term.

While my preference today goes to R#, it is likely that five years from now there will be one leading tool built upon Roslyn to achieve all these goodies (refactoring, analysis…). Will it be:

  • A Visual Studio extension developed by Microsoft. I have no idea if such a project exists, but it would make sense. At least a partial product is probably being developed, I guess. A partial product that doesn’t try to implement all that CodeRush or R# can do, but at least substantially improves on the basics provided by VS out of the box today.
  • CodeRush
  • A Visual Studio extension developed by a third-party company we haven’t heard about yet.

Also, we must take into account that CodeRush and R# are already super-mature products. Both teams are taking high risks for the future, and it is not clear that another product (developed by the VS team or any other team) will one day achieve such a level of maturity.

Concerning NDepend

NDepend has never been meant to compete with CodeRush or R#. A few features overlap, but many NDepend users also use CodeRush or R# (like me, actually).

NDepend is super-optimized for macro-analysis. It checks hundreds of solution-wide code rules per second, while CodeRush and R# can do solution-wide checks, but more like in a matter of minutes on a real-world large code base.

CodeRush and R# are brilliant at micro-analysis, like showing the user within the code editor that something can be improved in a method body. NDepend touches some of these areas, but a tool like CodeRush or R# proves invaluable for obtaining smart advice in all situations. And they come with the wonder of automated refactoring.

NDepend offers the necessary ten-thousand-foot perspective needed to take the most relevant refactoring decisions, like what to do to make a component more cohesive, more reusable, less coupled, less spaghetti-like. CodeRush and R# are excellent at actually achieving the refactoring once a decision has been taken.

Also, NDepend lets you write custom code rules through LINQ queries. I believe our API, for that matter, is close to optimal. To rule that a base class should not use its derivatives, it is as easy as writing:
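The query itself was embedded in the original post and is lost here; reproduced from memory (so the exact CQLinq conditions may differ slightly from the shipped rule), it looks something like:

```csharp
// <Name>Base class should not use derivatives</Name>
warnif count > 0
from baseClass in Application.Types
where baseClass.IsClass && baseClass.NbChildren > 0
let derivedClassesUsed = baseClass.TypesUsed.Where(t => t.DeriveFrom(baseClass))
where derivedClassesUsed.Any()
select new { baseClass, derivedClassesUsed }
```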

No need to create a VS project. No need to click a compile or run button. The result is well formatted, easily browsable, and live-updated if the rule gets edited. Such a rule executes in a few milliseconds, even against a very large code base. And could the syntax be more concise? (Thanks LINQ, thanks fluent APIs.)

In addition, NDepend has many other useful features out of the box, including code visualization (matrix, graph, treemap), code diff/baseline, code trending, reporting, code metrics including code coverage, tons of smaller-scale facilities to perform all sorts of convenient actions… and many other features are in the development pipe. None of them really overlap with what the other tools offer.

I went through all this description to say: NDepend lives in its own niche. So far it cohabits gently with CodeRush and ReSharper, and it doesn’t consume much extra memory or performance. My opinion is that NDepend will cohabit well with Roslyn, without needing to be rebuilt upon it. If you don’t really agree with this, we’d be glad to hear your stance.

Posted in CodeRush, Compiler Service, LINQ, NDepend, Resharper, Roslyn | Leave a comment

NDepend.Path OSS Project

Almost 7 years ago I uploaded the OSS project NDepend.Helpers.FileDirectoryPath to CodePlex. Today I uploaded its successor, NDepend.Path, to GitHub. NDepend.Path represents a major step for the path project. It can now handle pretty much all the path scenarios users have asked for over the years, including:

  • Prefixing a path with an environment variable (this could be the most requested feature! As one user put it: “My number one issue with NDepend still persists in version 5 – you still can’t use environment variables in framework folder paths.”)
  • Using custom variables/macros in path as in $(SolutionDir)
  • Support for UNC paths (Universal Naming Convention)

In numbers, NDepend.Path is 2,032 logical lines of C# code, 100% covered by 552 unit tests, spread across 77 types. This is not a tiny project, and I hope this work will be useful to others in charge of applications that need to handle real-world, complex path management.

Here is a code excerpt showing what can be done with the NDepend.Path API and how to use it.
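The original embedded excerpt is lost here, so the following is a sketch only (the extension-method and type names are written from memory and may not match the published NDepend.Path API exactly):

```csharp
using NDepend.Path;

// Absolute paths are strongly typed; no string juggling.
var absoluteFile = @"C:\Dir\File.txt".ToAbsoluteFilePath();
var dir = absoluteFile.ParentDirectoryPath;           // C:\Dir

// A relative path can be resolved against an absolute directory.
var relative = @"..\File.txt".ToRelativeFilePath();
var resolved = relative.GetAbsolutePathFrom(dir);     // C:\File.txt

// A path prefixed with an environment variable stays unresolved
// until the variable is looked up on the current machine.
var envVarPath = @"%MY_TOOL_DIR%\Config.xml".ToEnvVarFilePath();
IAbsoluteFilePath configFile;
if (envVarPath.TryResolve(out configFile))
{
    // configFile now points to a concrete location on this machine
}
```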

In NDepend v5.2, just released, we’ve added a new Paths Referenced tab in the Project Properties panel that lets the user manage all the paths referenced by a project, with a glimpse of how paths are resolved on the current machine. The implementation is entirely based on the NDepend.Path OSS project.


Posted in C#, NDepend, NDepend.Path | 1 Comment

Code Contracts is the next coding practice you should learn and use

Last week I presented a session at BuildStuff Lithuania 2013 covering some of the practices and tooling we use to build NDepend. One of the essential practices we use is Code Contracts, and I was really surprised when I asked who in the audience was using Code Contracts: not even 5 hands went up among some 150 pro developers, who are among the developer++ since they attend professional conferences.

So I really do hope this blog post convinces you, dear reader, to use Code Contracts in order to drastically reduce the number of bugs that hit production, and to cut the overall maintenance cost of the code base you are working on by something like an order of magnitude.

I’d like to underline that Code Contracts is a coding practice, not an implementation. In .NET there are two main ways to define a contract: through the Microsoft Code Contracts framework and through System.Diagnostics.Debug.Assert(). These are the two main alternatives, and I’ll discuss the pros and cons of both later on.

What is a contract

  • A contract is an assertion in the code. 
  • If a contract is violated (hence if the asserted condition is false), we consider that there must be a bug.

The important thing here is that if a contract is violated, you (the developer) should somehow be notified of it, so you can be aware of the bug and gather information to fix it.

Another important thing to note, often not well understood, is that if a contract is violated, you shouldn’t try to recover your application. In such a situation the application state is certainly corrupted, and there is nothing to be done about it, except being agile and humble and trying to fix the bug as soon as possible for future executions.

A very common contract is asserting that a method parameter is not null. With the MS Code Contracts syntax it looks like:
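The original snippet is lost here; it was along these lines (the method name Append is just for illustration):

```csharp
using System.Diagnostics.Contracts;

void Append(string str)
{
   Contract.Requires(str != null);
   // ... the body can now safely dereference str
}
```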

With Debug.Assert() it looks like:
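That lost snippet was the plain-assertion equivalent, along these lines (same illustrative Append method):

```csharp
using System.Diagnostics;

void Append(string str)
{
   Debug.Assert(str != null);
   // ... the body can now safely dereference str
}
```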

Do you notice how a contract is also a form of documentation? Every time I look at a method without contracts, I wonder which values are acceptable. For example, with the method below I can certainly imagine that whoever wrote it didn’t expect str to be null, but this reasoning is just a guess: maybe the null value is acceptable, or maybe the method just has a bug.
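The method shown in the original post is lost; here is a made-up contract-less example of the same kind, where nothing tells the reader whether null is a legal input:

```csharp
static class Parsing
{
    // No contract: is a null str acceptable here? The reader can only guess.
    public static string FirstWord(string str)
    {
        var index = str.IndexOf(' ');  // throws NullReferenceException if str is null
        return index < 0 ? str : str.Substring(0, index);
    }
}
```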

I admit I am a bit of an extremist: if a null value is acceptable, I put a comment explicitly stating that null is valid. But since I think null values should be banished, this is pretty rare.

But I digress. While contracts in C# are, I’d estimate, 66% about avoiding NullReferenceException, there are plenty of situations where you want to assert something else about your state:

  • str.Length > 2
  • str[0] == '.'
  • i > 10
  • validate an invariant state inside a loop

What is NOT a contract

Often developers do something like the following and say it is a contract:
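The original snippet is lost; it showed an argument check that throws, along these lines (same illustrative Append method):

```csharp
void Append(string str)
{
   if (str == null) { throw new ArgumentNullException("str"); }
   // ...
}
```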

But personally, I don’t consider throwing a common exception to be a contract. I have several reasons for that:

  • It is perfectly legal to throw an exception without asserting that there is a bug.
  • An exception can be caught, and hence you might not be notified of it.
  • Moreover, when an exception is caught, the application can try to recover, and it can be pretty dangerous to work with data potentially corrupted by a bug.

In a word, an exception is a behavior of your program; it is not a contract! A contract is not a behavior, it is an artifact around your code, like a unit test for example. But wait…

Unit-Tests vs. Contracts

…a unit test is made of assertions, right? Basically a unit test defines a state, throws this state at a portion of your code, and then asserts something on the result. If a unit-test assertion is violated, you can consider that there is a bug (in the application or in the unit test). And certainly, if a unit-test assertion is violated, you’d like to be aware of it. You might even want to fail a build for that, right?

For all these reasons, I consider contracts and unit tests to be pretty much the same thing. Both are assertions about the states of your code, obtained while running it. Who cares when and how the code is executed (debugging time, unit-test time, smoke-test time, production…)? Remember, the important thing is to be notified of assertion violations, because they mean there is a bug.

Notice a corollary: when a unit test exercises code that contains contracts, if a contract is violated you should be notified. Such a situation means there is a bug, either in the exercised code or in the test code, but there is a bug somewhere, and tests must not violate contracts! For example, this means that if your method asserts that its first argument is not null, then you shouldn’t write a test passing a null reference for that argument.

In 2013 every educated developer writes unit tests, but almost none of them write contracts; hence this post! We are very, very few claiming that contracts and unit tests are the same thing. I really do hope the situation will evolve in the future, because the benefits of using both massively are outstanding!

Benefits of Contracts

Remember, when a unit test fails or when a contract is violated, it means there is a bug! Frankly, the opposite is almost true as well. After using contracts and unit tests massively for years, I can say that the absence of assertion violations pretty much means you have no bugs. Of course there are still a few minor pesky bugs. But by using contracts and unit tests massively (with very high coverage), maintenance cost is cut by an order of magnitude, really.

Another point that few realize is that unit testing is all about the code covered by tests. You can write 1,000 unit tests and still cover only 10% of the code to be exercised. The number of unit tests is meaningless. What matters is the percentage of code covered by tests, and also the number of assertions verified. Covering code full of contracts with your unit tests is a unique way to multiply the number of assertions checked. So a benefit of contracts also lies in the fact that they greatly increase the number of assertions checked at unit-test time. By using contracts you make unit tests much more effective. And this is cool, because while unit tests take time to write, contracts take no time to write once you’ve acquired the habit.

At NDepend we also use contracts to test the UI. We have a single huge integration test that triggers everything that can be triggered in the NDepend UI (and I hope we can agree it is a sophisticated UI). This integration test takes 30 seconds to run. It checks literally millions of assertions, and yet the integration test itself contains just a few assertions. So in practice, contracts are also an excellent and original way to test a UI automatically.

And finally, we saw that contracts are powerful documentation.

I do really hope these benefits sound good to you! Yes, code contracts work, and they work really well!

Contracts and tooling

Contracts make the job of static analyzer tools easier. For example, below, a simple assert makes ReSharper clever enough to notify you of dead code, or of a lurking NullReferenceException.
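The original post showed a screenshot at this point; the code was along these lines (a reconstruction, not the original snippet):

```csharp
using System.Diagnostics;

static class Sample
{
    public static int Count(string str)
    {
        Debug.Assert(str != null);
        // Given the assert above, a contract-aware analyzer such as ReSharper
        // can flag this null check as dead code:
        if (str == null) { return 0; }
        return str.Length;
    }
}
```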


This benefit isn’t anecdotal. I believe that one of the next decade’s big things for developers will be static analyzers detecting non-trivial bugs in the IDE. As you can see, ReSharper already does it pretty well. But it still doesn’t try to check states across the chain of caller methods and the chain of called methods (yes, it does it for .NET Fx methods and a few other cases, which is pretty cool, but it is still limited). Contracts are what will make such tools clever enough. Why? Because with contracts you drastically reduce the number of states the static analyzer has to check, and reducing numbers in an exponential algorithm makes all the difference between something that works practically and something that works only theoretically. Hence the tool can focus on and report only real buggy cases.

In the same vein, there is also the great Pex tool. I have huge hopes for it. For now there is just a lite version for VS2012 and VS2013, but I guess much more is coming.

Also, the C# and VB.NET compilers will soon be opened up thanks to the Roslyn project. It will be easier than ever to write static analyzers, because we’ll get all the syntax analysis for free. In other words, Roslyn will let tool vendors focus on semantic analysis, and this sounds very promising to me.

As a side note, one especially cool situation where I use contracts daily is when switching over an enumeration with, say, 3 values. Instead of having 3 cases, one for each value, I like to have 2 cases and one default case, asserting the third value in the default case. The benefit is that the switch flow is free from any ambiguity. Both the C# compiler and ReSharper understand it, and if all cases return, they won’t ask for a return statement after the switch! Notice that the compiler magic isn’t coming from the contract itself. But the cool benefit of the contract is that in the default block we can be certain of the status argument’s value.
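Here is a sketch of that pattern with a hypothetical 3-value enumeration (the enum and method names are made up for illustration):

```csharp
using System.Diagnostics;

enum Status { Ok, Warning, Error }

static class StatusHelper
{
    public static string Describe(Status status)
    {
        switch (status)
        {
            case Status.Ok:      return "OK";
            case Status.Warning: return "warning";
            default:
                // Contract: the only remaining legal value is Status.Error.
                Debug.Assert(status == Status.Error);
                return "error";
        }
        // Every path returns, so no return statement is needed after the switch.
    }
}
```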


MS Code Contract vs. Debug.Assert()

So what should you use? What do we use at NDepend after years of practice? Actually we use both technologies, dealing with their pros and cons. MS Code Contracts has huge advantages over Debug.Assert():

  • The framework is nicely done, with many well-named methods to handle all scenarios (Requires(), Ensures(), Invariant(), ValueAtReturn()…).
  • The framework lets you put contracts on interface members.
  • The framework works nicely with documentation tooling, and there are facilities to include contract assertions in the documentation itself.
  • Some checks can be made at static-analysis time, to automatically chase pesky bugs, like finding cases where a null value can be passed to a method argument that has a contract.

But in practice, MS Code Contracts has one significant drawback that keeps it from being competitive with Debug.Assert() in most scenarios: it doubles or even triples compilation time, because with MS Code Contracts, assemblies get rewritten after being compiled by the C# or VB.NET compiler. This rewrite is needed because MS Code Contracts does unnatural things to code (like executable code on an interface declaration, which needs to be weaved into the implementation methods).

Because a long compilation time is not acceptable in practice, what I’d recommend is to use MS Code Contracts on the public API surface of your application (like NDepend.API for us, just one small assembly), and Debug.Assert() everywhere else! Apparently the C# team is considering compiling MS Code Contracts directly in C# 6 (source: Wesner Moise). This would be really cool, and it would certainly make me change my mind and forget about Debug.Assert() for good.

You’ll notice that in the list of advantages I didn’t include the fact that with MS Code Contracts, assertions can be checked in production. Indeed, Debug.Assert() assertions are removed from assemblies compiled in Release mode. Personally, I consider this an advantage because:

  1. Keeping thousands of assertion checks in production code can, and actually does, significantly impact performance.
  2. I don’t consider clients to be testers. But clients still experience bugs, crashes and exception reports (typically reported automatically). Our job is to prevent the client from experiencing any bug, and from my experience, a well-detailed exception report is as good as a contract report in 80% of situations.

By the way, one detail I didn’t mention is that a violated MS Code Contracts assertion throws a ContractException (which is not a public type) that can hardly be caught. And if you think about it, it makes sense. Jon Skeet explains it well in his StackOverflow answer.

As you certainly know, a violated Debug.Assert() assertion triggers a scary blocking dialog that lets you attach a debugger. It is especially convenient for flagging a bug and starting an investigation when running code compiled in DEBUG mode.

A bit of History

DbC, or Design by Contract, was invented by Dr. Bertrand Meyer in 1986 when he specified the Eiffel language. So contracts are not something new, and I regret so much that when Microsoft built .NET at the end of the ’90s, they didn’t consider including contracts in the IL and the Common Type System, especially given that Mr. Meyer was working closely with the .NET team at that time.

As we saw, contracts are closely related to null reference checks, and null references still account for something like half of the bugs on earth! We know that the null reference concept was deemed a billion-dollar mistake by its inventor, Tony Hoare. Here also I regret that the .NET Common Type System doesn’t come with a way to statically enforce non-null references. I’ve read somewhere that this was Anders Hejlsberg’s biggest regret (it is here, thanks Antonio for the link :) ). And I remember, back in 2004 when I was a C# MVP, trying to convince Anders to enhance the CTS with non-null references. The one and only argument against it wasn’t so much about feasibility, but about the fact that it would create a tsunami of breaking changes in the .NET Framework 1.1, and one very cool thing about .NET is that breaking changes are pretty limited!


Pff… this was a much longer post than I expected, but if you are reading this conclusion, I really do hope you’ve learned something, and that you are now curious enough about Code Contracts to try them and see how much benefit they can bring.

Posted in .NET Fx, Code, Code Contract, Code Coverage, NDepend, Resharper, UI, Unit Test | 18 Comments