The concept of User Voice

Like many others, I guess, I discovered the concept of User Voice through the Visual Studio User Voice page. We just set up a User Voice page for NDepend this morning. It was surprisingly straightforward; the UserVoice.com team did a great job of providing a clean UI to get started!

Like every ISV, certainly, we have a sense of what users are missing, and since the beginning almost a decade ago we have done our best to implement the features users were suggesting. As Jason Cohen summarized recently: if you don’t listen to what customers are telling you to do, you’ll build a product no one wants to use. This sounds obvious, but being reminded of obvious things periodically is welcome.

With the concept of User Voice, my guess is that we won’t see ideas for features we’ve never thought of or heard about. No, my hope is to be able to quantify finely, to measure, the need for each missing feature. I am really wondering whether we will see a set of features emerging. This would mean, to some extent, that we had failed at listening to user feedback. But it would also mean that by offering a User Voice page we improved our listening skills.

Exciting times :)

Posted in NDepend, Users Voice | 1 Comment

Measuring Development Trends

NDepend version 5 has just gone RTM today! Big milestone! Last week I talked about the UI face-lift we did, but v5 also comes with several flagship features. One of these new features is about measuring and visualizing development trends. Basically, NDepend can log a variety of application-wide code metrics at analysis time. The tool can then plot historic metric values on charts, and trends suddenly appear:

NDepend Trend

Trend charts are shown both in the new dashboard panel (nested in VS) and in reports built at analysis time. Time is measured both in terms of months and years, and in terms of versions.

We were pretty impatient and curious to apply this trending feature to the NDepend code base itself! But the feature was developed only a few months ago, while the trend chart above shows trends for roughly the past 7 years. To fetch the past values, we wrote a short program based on NDepend.API that:

  • Checks out past versions of our code base
  • Compiles them
  • Runs the unit tests and gathers code coverage
  • Analyzes the result with NDepend
  • Logs trend metrics (thanks to types in the NDepend.Trend namespace)
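
The actual program isn’t reproduced here, but a rough sketch of the loop looks like this. It is a hypothetical illustration only: version tags, paths, tool names and the way each step is invoked (shelling out to git, MSBuild, a test script and NDepend.Console.exe rather than calling NDepend.API directly) are placeholders for the real thing.

```csharp
// Hypothetical sketch of the backfill loop described above. The real program
// drives NDepend.API (including the NDepend.Trend types) directly; this sketch
// simply shells out to external tools, and every path, tag and tool name is a
// placeholder.
using System.Diagnostics;

class TrendBackfill
{
    static void Main()
    {
        // Hypothetical list of past release tags to replay.
        string[] versions = { "v2.0", "v3.0", "v4.0", "v5.0" };

        foreach (var version in versions)
        {
            Run("git", "checkout " + version);                      // check out a past version
            Run("msbuild", "NDepend.sln /p:Configuration=Release"); // compile it
            Run("RunTestsWithCoverage.bat", "");                    // run unit tests, gather coverage
            // Analyze the result and log trend metrics: the .ndproj project is
            // configured to import the coverage files and to log trend values.
            Run("NDepend.Console.exe", "C:\\Dev\\NDepend.ndproj");
        }
    }

    static void Run(string exe, string args)
    {
        using var p = Process.Start(new ProcessStartInfo(exe, args) { UseShellExecute = false });
        p?.WaitForExit();
    }
}
```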

Several observations can be drawn from this trend chart:

  • The Lines of Code progression is pretty linear in the long term. This suggests that, in the long run, Lines of Code is a reliable way to measure progress and estimate future schedules.
  • This also suggests that our development shop doesn’t suffer from the diseconomy-of-scale syndrome described so well by Steve McConnell in his legendary book Software Estimation: Demystifying the Black Art: “People naturally assume that a system that is 10 times as large as another system will require something like 10 times as much effort to build. But the effort for a 1,000,000 LOC system is more than 10 times as large as the effort for a 100,000 LOC system.”
  • As a side note, I also quote this Steve McConnell idea: “My personal conclusion about using lines of code for software estimation is similar to Winston Churchill’s conclusion about democracy: the LOC measure is a terrible way to measure software size, except that all the other ways to measure size are worse.”
  • My thoughts on high code coverage ratio are increasingly reflected in the NDepend codebase. With v5 we reached almost 80% overall code coverage by tests. We can see a large effort to jump from 50% to 80% code coverage over the last months, and we will continue this effort. What this chart doesn’t show is that code contracts are also used massively, since I consider automatic tests and code contracts to be two sides of the same practice (see the sketch after this list). Once the product reaches a certain critical size, it is not humanly possible to sustain constant productivity unless you have a good automatic testing and code quality plan, coupled with a solid code structure! My hope is that, with everything achieved so far, we will increase productivity. This will be an interesting trend to measure!
  • The ratio of # Lines of Code to # Lines of Comments seems pretty constant; nothing surprising, but it is good to visualize it.
  • The lines of code that fall into the NotMyCode category didn’t increase much. This can be explained by the fact that the good old CQL implementation is mostly generated code, and since then we haven’t used much generated code.
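
To make the “two sides of the same practice” point concrete, here is a minimal hypothetical example (the class, method and NUnit-style test names are illustrative): the contract guards an invariant from inside the code at every run, while the automatic test exercises the same invariant from the outside.

```csharp
// Hypothetical illustration: the assertion acts as a code contract inside the
// method, and the automatic test checks the same behavior from the outside.
using System.Diagnostics;

static class PriceCalculator
{
    public static decimal ApplyDiscount(decimal price, decimal rate)
    {
        // Contract-style precondition: the caller must pass a rate in [0,1].
        Debug.Assert(rate >= 0m && rate <= 1m, "rate must be in [0,1]");
        return price * (1m - rate);
    }
}

// NUnit-style automatic test (illustrative):
// [Test]
// public void ApplyDiscount_HalfRate_HalvesPrice() =>
//     Assert.AreEqual(50m, PriceCalculator.ApplyDiscount(100m, 0.5m));
```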

That’s a lot of observations for just 4 trend metrics logged. But there are almost 50 default trend metrics, defined as Code Query LINQ queries, covering code size, code coverage by tests, max and average values, and third-party usage (see them listed in the Trend Metrics group). Some extra out-of-the-box trend metrics, related to the number of code rules violated and the number of code rule violations, are also available.

Defining a trend metric can be as simple as:

TrendLoC

Or more sophisticated like:

Creating your own trend metric is easy if you know a bit of LINQ. For example, to count the number of types implementing IDisposable, one can just write:
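
Something along these lines does the job (a sketch only: the TrendMetric header attributes are illustrative, and I assume here that a query returning a sequence of code elements is logged as its element count):

```csharp
// <TrendMetric Name="# Types implementing IDisposable" Unit="types" />
from t in Application.Types
where t.Implement("System.IDisposable")
select t
```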

With the richness of the code model API and the flexibility of LINQ, the possibilities are pretty wide.

Finally, a dialog is available to create and customize trend charts:

TrendChartEdit

I hope you’ll enjoy this new feature as much as I do!

Posted in Code, Code metrics, NDepend, Trend | Leave a comment

Rich UI enhancement : A case study

NDepend version 5 Release Candidate has just been released! Just in time to be sim-shipped with Visual Studio 2013. This release represents a big achievement for us, because it comes with several flagship features, but also because the UI has been thoroughly revamped.

In this post I’d like to focus on some of these UI enhancements. During the last 2 years, new UI standards have emerged in the Visual Studio and .NET ecosystem, inspired by the Windows 8 style. With NDepend v5 we followed this trend and took the opportunity to make most features easier to discover, learn and use.

Menu

First we redesigned the main menu. The motto was simple: one feature, one (and only one) word. Icons have been removed from the main menu, not only because the VS API forbids icons on parent menus, but because they would be information overload: the words chosen to identify features should be meaningful enough to any professional developer. When several words made sense for a feature, the shortest one was chosen (Diff/Change/Compare, Trend/Progress/Evolution…).

NDepend v5 main menu

In feature sub-menus, it is OK to use icons and multi-word captions. Icons are inspired by the VS icon style, or even copy-pasted from the VS image library. I had a problem with the black-and-white VS2012 icons, but thankfully VS2013 re-introduces color into the icons.

Below is the Rules sub-menu. We tried as much as possible to limit ourselves to one sub-level of menus. For the Rules sub-menu we bypassed this guideline to let the user discover and access the range of default rules, organized hierarchically. Each menu ends with an options entry (to open the related options directly), and one or several links to the related online documentation.

v5Menus

Branding and Logo

Clearly NDepend deserved better branding and a nicer logo. We are a team of software engineers, and I should have admitted earlier that we needed this work to be done by a professional designer. Fortunately it is never too late to do things well, and we asked the designers of Atomic Duo to do some work for us. I am pretty happy with their results; here are the new NDepend logos:

v5Logos

It is clear that in the Windows task bar it looks much nicer!

v5WndTaskbar

With this new logo and the new VS2013 guidelines, it was easy to make a cleaner Start Page! Atomic Duo is also working on our web-site redesign. Here is a preview; the site will be updated soon.

v5StartPage

Ergonomics and User Experience

All dialogs and menus have been refactored for a smoother user experience, compliant with the new VS standards. For example, the dialog for choosing the assemblies to analyze has new icons, a new button style and new features.

It is important that the assembly-choosing experience be near optimal, so the user can spend more time browsing analysis results and discovering interesting facts about the code base. In prior versions, the user was already able to add assemblies from VS solutions, from folders (recursive or not) and by drag-and-dropping from Windows Explorer. Now we’ve added the possibility to filter assemblies with one or several positive or negative naming filters. Targeting an exact subset of assemblies to analyze, among hundreds or thousands of assemblies, is now an immediate task.

v5AsmSelect

The VS style has been applied to all panels. Below is the rule editor panel. Not only do we gain a few vertical pixels, but the overall impression is cleaner. Presenting the same amount of data with less screen area while improving the experience: wow, thanks to the VS designers!

We also implemented the idea of a temporary panel with a violet header. This is a handy addition in VS 2012 that improves the experience when browsing multiple documents in general, and multiple rules in our case. The NDepend skinning now adapts to the VS2012 and VS2013 skins, and it is updated automatically when the VS skin changes. Extra colored skins are also supported (VS 2012 only, but I guess they’ll be supported by VS 2013 soon).

v5RuleEdit

In the Queries and Rules Explorer panel, the large VS2012/2013 status icons are clearly handy for summarizing the rules’ status.

v5RuleExplorer

Here too we’ve added a useful UI facility: the user can now choose to list all violated rules (or all rules that don’t compile, or all rules that are OK, or any subset of rules and queries depending on various other criteria):

v5RuleSelector

The shortcuts dialog that appears when hovering over the bottom-right progress circle has also been enhanced with the new VS 2012/2013 status icons. Listing all rules or queries matching a certain criterion is a matter of a single click on one of these shortcuts:

v5Control

All panel menus also feature new icons. New sub-menus have been added to give direct access to some advanced features that used to require several steps and were only mastered by experienced users.

v5MatrixMenu

These advanced-feature sub-menus are also available from the main menu. Here are some examples of advanced actions, like exporting namespace dependency cycles to the matrix. We expect that with this ergonomics strategy, some users will discover existing features they didn’t know about.

v5MatrixMenu

New Report Experience

The report design has also been enhanced by the Atomic Duo designers. It is not just a skinning enhancement: new features have also appeared, with a detailed Dashboard that summarizes the diff, and some Trend Charts.

v5Report

Over the almost-decade that NDepend has existed, we have received feedback and advice concerning the UI. We noted it all, and v5 hopefully addresses most of it. Nevertheless, we keep in mind that ergonomics trends evolve quickly nowadays, and we are committed to keeping the product up to date while adding new features.

Posted in NDepend, UI, User Experience, UX | 2 Comments

On Hungarian notation for instance vs. static fields naming

If there is one topic on which I disagree with most fellow programmers, it is instance and static field naming guidelines. Call me old-school, but I prefix them with m_ and s_; in other words, I use Hungarian notation for field naming. But clearly, the global consensus can be read in the MS Developer static field naming guideline:

  • Do not use a Hungarian notation prefix on static field names.

On the Wikipedia Hungarian notation entry, there are two lists of advantages and disadvantages. I read them carefully, but I still cannot understand why the masses prefer to avoid prefixes that differentiate static and instance fields. Despite all the progress made in IDEs and text editors, consuming an instance field does not look obviously different from consuming a static field. But in terms of behavior, a static field is something completely different from an instance field, so I need to tell them apart at a glance.

When I review code that doesn’t follow any form of field prefixing, I find myself constantly checking, and this costs time and adds friction. Sure, one can prefix every usage of an encapsulated instance field with the this reference, but I don’t understand: why follow a guideline that applies to the N usages of a field, instead of following a guideline that applies to a single declaration?

Things can get worse if you have non-encapsulated fields whose names are not prefixed. Callers then don’t know whether they are using a field or a property accessor. And since having non-encapsulated fields is a very bad practice (I won’t debate this here), prefixing fields also makes calls like obj.m_State = 3 ugly enough to make you feel that something is wrong with the API, and to make you want to fix it by encapsulating the field.

In many code bases, I see local variable naming following the same sort of camelCase naming as instance and static fields. For me as a programmer, it is pretty much a nightmare to distinguish between the 3 scopes of each piece of state when reading a sufficiently complex method body!
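
Here is a small hypothetical snippet that shows why the distinction matters to me: with the prefixes, the scope of each identifier is visible right at the usage site, without hovering or jumping to the declaration.

```csharp
// Hypothetical example: the m_/s_ prefixes make each identifier's scope
// readable at a glance inside the method body.
class OrderProcessor
{
    static int s_ProcessedCount;   // static field: state shared by all instances
    int m_PendingCount;            // instance field: per-object state

    void Process(int batchSize)
    {
        int remaining = m_PendingCount - batchSize;  // local variable: method-scoped
        m_PendingCount = remaining;
        s_ProcessedCount += batchSize;
        // Without prefixes, the local, the instance field and the static field
        // would all read as indistinguishable camelCase identifiers here.
    }
}
```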

As a last note, everybody follows the naming guideline for interfaces: prefix them with an upper-case I. This is actually Hungarian notation, and it makes it easy to distinguish classes from interfaces. Really, I don’t see any conceptual difference from field prefixing.

I understand it is a matter of habit for everyone, me included. Habit means that we rely on passion instead of reasoning. But when Hungarian notation is accepted for interface naming, I wonder: why all this hate for Hungarian notation in field naming, which similarly offers such significant advantages? Am I missing something?

At least, I am glad to make Ayende laugh with default NDepend rules :)

Posted in API usage, C#, Naming Conventions, NDepend | 21 Comments

Code Rules are not just about Clean Code

Like any developer tool vendor, we at NDepend eat our own dog food. In other words, we use NDepend to develop NDepend. Most of the default code rules are activated in our development, and they save us daily from all sorts of problems.

Rules like…

…are actually helping much more than it appears at first glance. It is not so much about keeping the code clean for the sake of it. More often than not, a green rule that suddenly gets violated sheds light on a non-trivial bug, one that is much more harmful than just…

  • 5% of a type left uncovered by tests
  • a public method that could be declared internal
  • an immutable type whose instances’ state can now change
  • a type that is no longer used and could be removed
  • an unsealed class that could be declared sealed
  • a constructor calling a virtual method

It is all about regression. It is not intrinsically about the rule violated, but about what happened recently that caused a previously green code rule to be violated.

This morning, thanks to a violated code rule, I stumbled on a bug that would otherwise certainly have slipped into the next public release. The rule Potentially dead Methods, which is usually always green, detected an unused method. The method deemed dead (i.e. uncalled) is actually a Windows Forms event handler. Hence I learnt that this handler is no longer called when clicking its associated LinkLabel control. I figured out that, after a few refactorings with the Windows Forms designer, the designer had removed the callback wiring to this handler.

The screenshot below shows the situation. It also shows that ReSharper detected that this handler method is no longer called.

Dead Method
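
For illustration, here is a hypothetical reconstruction of the situation (control and method names are invented): the handler is still declared, but the designer-generated line that subscribes it to the LinkLabel’s event has been lost, so the method is now effectively dead.

```csharp
// Hypothetical reconstruction: without the subscription line, the handler is
// never invoked and gets flagged by the "Potentially dead Methods" rule.
using System;
using System.Windows.Forms;

class AboutForm : Form
{
    private readonly LinkLabel m_WebSiteLink = new LinkLabel();

    public AboutForm()
    {
        Controls.Add(m_WebSiteLink);
        // This wiring was dropped during a designer refactoring; the handler
        // below is now dead code:
        // m_WebSiteLink.LinkClicked += linkLabelWebSite_LinkClicked;
    }

    private void linkLabelWebSite_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
    {
        // ... open the web site ...
    }
}
```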

I remember the lesson I learnt as a junior C++ programmer: a compilation warning should be treated as an error. The same advice remains judicious for the code rules a team decides to respect.

In any sufficiently complex system, we just cannot rely on human skills to detect such regressions. It is not about keeping the code clean just for the pleasure of developing in a clean environment. It is about automatically gathering meaningful warnings as soon as possible, to identify harmful regressions and fix them.

In this context, the meaning of each code rule evolves significantly.

  • Checking regularly that classes remain 100% covered by tests is not about being obsessive about quality. Empirically, experience shows that portions of code that are hard to cover by tests, or changes that a developer forgot to cover by tests, are typically highly error-prone. Poorly testable code is highly error-prone.
  • Checking regularly that classes that used to be immutable haven’t become mutable is essential (a sketch of such a rule follows this list). Uncontrolled side-effects typically cause some of the trickiest bugs. Clients that consume an immutable class certainly rely on this characteristic (you actually rely on string immutability every day, maybe without even noticing it).
  • Checking regularly that there is no dead code is not just about making sure we don’t maintain unnecessary extra code. When a code element suddenly becomes unused, a developer might have forgotten to remove it during a refactoring session. But as we’ve seen, it is quite possible that a code element becoming unused is caused by a regression bug.
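
As a sketch of what the second point looks like as a rule, written in the spirit of the default CQLinq rules (the exact predicate names in the shipped rule may differ):

```csharp
// Sketch of a CQLinq rule that warns when a type that was immutable in the
// baseline has become mutable in the current build (predicate names approximate).
warnif count > 0
from t in Application.Types
where t.IsPresentInBothBuilds()
   && t.OlderVersion().IsImmutable
   && !t.IsImmutable
select t
```
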
Posted in Code, code organization, Code Query, Code Rule, CQL, CQL rule, CQLinq, Dead Code | 3 Comments