Rich UI enhancement: A case study

NDepend version 5 Release Candidate has just been released, just in time to be sim-shipped with Visual Studio 2013. This release represents a big achievement for us, not only because it comes with several flagship features, but also because the UI has been thoroughly revamped.

In this post I’d like to focus on some of these UI enhancements. Over the last two years, new UI standards have emerged in the Visual Studio and .NET ecosystem, inspired by the Windows 8 guidelines. With NDepend v5 we followed this trend and took the opportunity to make most features easier to discover, learn and use.

Menu

First, we redesigned the main menu. The motto was simple: one feature, one (and only one) word. Icons have been removed from the main menu, not only because the VS API forbids icons on parent menus, but because they would be information overload: the words chosen to identify features should be meaningful enough to any professional developer. When several words make sense for a feature, the shortest one was chosen (Diff/Change/Compare, Trend/Progress/Evolution…).

NDepend v5 main menu

In feature sub-menus, it is OK to use icons and multi-word captions. Icons are inspired by the VS icon style, or even copied from the VS image library. I had a problem with the black-and-white VS2012 icons, but fortunately VS2013 reintroduces color into the icons.

Below is the Rules sub-menu. We tried to limit ourselves as much as possible to one sub-level of menus. For the Rules sub-menu we bypassed this guideline to let the user discover and access the range of default rules, organized hierarchically. Each menu ends with an Options entry (to directly open the feature-related options) and one or several links to the related online documentation.

v5Menus

Branding and Logo

Clearly NDepend deserved better branding and a nicer logo. We are a team of software engineers, and I should have admitted earlier that we needed this work to be done by a professional designer. Fortunately it is never too late to do things well, and we asked the designers at Atomic Duo to do some work for us. I am pretty happy with the results; here are the new NDepend logos:

v5Logos

It clearly looks much nicer in the Windows taskbar!

v5WndTaskbar

With this new logo and the new VS2013 guidelines, it was easy to make a cleaner Start Page! Atomic Duo is also working on our website redesign. Here is a preview; the site will be updated soon.

v5StartPage

Ergonomics and User Experience

All dialogs and menus have been reworked for a smoother user experience, compliant with the new VS standards. For example, the dialog for choosing the assemblies to analyze has new icons, a new button style and new features.

It is important that choosing assemblies is close to effortless, so that the user can spend more time browsing analysis results and discovering interesting facts about the code base. In prior versions, the user was already able to add assemblies from VS solutions, from folders (recursively or not) and by drag-and-dropping from Windows Explorer. Now we’ve added the possibility to filter assemblies with one or several positive or negative naming filters. Targeting an exact subset of assemblies to analyze, among hundreds or thousands, is now an immediate task.
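To illustrate the idea, here is a minimal hypothetical sketch (invented names, not the actual NDepend implementation) of how positive and negative naming filters can be combined to narrow down a list of assemblies:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

static class AssemblyFilter
{
    // Keep an assembly if it matches at least one positive filter (or there is none)
    // and does not match any negative filter. Matching here is a simple
    // case-insensitive substring test on the file name.
    public static IEnumerable<string> Apply(
        IEnumerable<string> assemblyPaths,
        IList<string> positiveFilters,
        IList<string> negativeFilters)
    {
        return assemblyPaths.Where(path =>
        {
            var name = Path.GetFileNameWithoutExtension(path);
            bool kept = positiveFilters.Count == 0 ||
                        positiveFilters.Any(f => name.IndexOf(f, StringComparison.OrdinalIgnoreCase) >= 0);
            bool excluded = negativeFilters.Any(f => name.IndexOf(f, StringComparison.OrdinalIgnoreCase) >= 0);
            return kept && !excluded;
        });
    }
}
```

For example, with the positive filter "MyProduct" and the negative filter "Test", only the production assemblies of the product would be kept out of a folder containing hundreds of DLLs.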

v5AsmSelect

The VS style has been applied to all panels. Below is the rule editor panel. Not only do we gain a few vertical pixels, but the overall impression is cleaner. Presenting the same amount of data in less screen area while improving the experience: wow, thanks to the VS designers!

We also implemented the idea of a temporary panel with a violet header. This is a handy VS 2012 addition that improves the experience when browsing multiple documents in general, and multiple rules in our case. The NDepend skinning now adapts to the VS2012 and VS2013 themes and is updated automatically when the VS theme changes. Extra colored themes are also supported (VS 2012 only, but I guess they’ll be supported by VS 2013 soon).

v5RuleEdit

In the Queries and Rules Explorer panel, the large VS2012/2013 status icons are especially handy for summarizing the rules’ status.

v5RuleExplorer

Here too we’ve added a useful UI facility: the user can now choose to list all violated rules (or all rules that don’t compile, or all rules that are OK, or any subset of rules and queries matching other criteria):

v5RuleSelector

The shortcuts dialog that appears when hovering over the bottom-right progress circle has also been enhanced with the new VS 2012/2013 status icons. Listing all rules or queries matching a certain criterion is a matter of a single click on one of these shortcuts:

v5Control

All panel menus also get new icons. New sub-menus have been added to provide direct access to some advanced features that used to require several steps and were only mastered by experienced users.

v5MatrixMenu

These advanced-feature sub-menus are also available from the main menu. Here are some examples of advanced actions, such as exporting namespace dependency cycles to the matrix. We expect that, with this ergonomics strategy, some users will discover existing features they didn’t know about.

v5MatrixMenu

New Report Experience

The report design has also been enhanced by the Atomic Duo designers. It is not just a skinning enhancement: new features have appeared as well, such as a detailed Dashboard that summarizes the diff, and some Trend Charts.

v5Report

In the almost ten years that NDepend has existed, we have received plenty of feedback and advice concerning the UI. We noted it all, and hopefully v5 addresses most of it. Nevertheless, we keep in mind that ergonomics trends are evolving quickly nowadays, and we are committed to keeping the product up to date while adding new features.

Posted in NDepend, UI, User Experience, UX | 2 Comments

On Hungarian notation for instance vs. static fields naming

If there is one topic on which I disagree with most fellow programmers, it is instance and static field naming guidelines. Call me old-school, but I prefix them with m_ and s_; in other words, I use Hungarian notation for field naming. But clearly, the global consensus can be read in the MS Developer static field naming guideline:

  • Do not use a Hungarian notation prefix on static field names.

The Wikipedia entry on Hungarian notation lists advantages and disadvantages. I read them carefully, but I still cannot understand why most developers prefer to avoid prefixes that differentiate static and instance fields. Despite all the progress made in IDEs and text editors, a use of an instance field does not look obviously different from a use of a static field. Yet in terms of behavior, a static field is something completely different from an instance field, so I need to tell them apart at a glance.

When I review code that doesn’t follow any form of field prefixing, I find myself constantly checking, and this costs time and friction. Sure, one can prefix every usage of an encapsulated instance field with the this reference, but I don’t understand: why follow a guideline that applies to the N usages of a field, instead of one that applies to a single declaration?

Things get worse if you have non-encapsulated fields whose names are not prefixed. Callers then don’t know whether they are using a field or a property accessor. And since having non-encapsulated fields is a very bad practice (I won’t debate this here), prefixing fields also makes calls like obj.m_State = 3 ugly enough to make you feel that something is wrong with the API, and to make you want to fix it by encapsulating the field.

In many code bases, I see that local variables follow the same sort of camelCase naming as instance and static fields. For me as a programmer, it is pretty much a nightmare to distinguish between the three scopes of each piece of state when reading a sufficiently complex method body!
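To illustrate with a small hypothetical C# snippet (invented types, just a sketch of the convention): with the m_ and s_ prefixes, the scope of each identifier inside a method body is visible at a glance, with no need to check declarations or hover in the IDE:

```csharp
using System.Collections.Generic;

class Order { public void Ship() { } }

class OrderProcessor
{
    // s_ marks static state, shared by all instances.
    static int s_ProcessedCount;

    // m_ marks per-instance state.
    readonly List<Order> m_PendingOrders = new List<Order>();

    public void Process()
    {
        // Plain camelCase is immediately recognizable as a local variable.
        int processedNow = 0;

        foreach (var order in m_PendingOrders)  // instance state
        {
            order.Ship();
            processedNow++;                     // local variable
            s_ProcessedCount++;                 // static state: a very different beast
        }
    }
}
```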

As a last note, everybody follows the naming guideline for interfaces: prefix them with an uppercase I. This is actually Hungarian notation, and it makes it easy to distinguish classes from interfaces. Really, I don’t see any conceptual difference from field prefixing.

I understand it is a matter of habit for everyone, me included. Habit means that we rely on passion instead of reasoning. But when Hungarian notation is accepted for interface naming, I wonder: why all this hate for Hungarian notation in field naming, which offers similarly significant advantages? Am I missing something?

At least I am glad to have made Ayende laugh with the default NDepend rules :)

Posted in API usage, C#, Naming Conventions, NDepend | 21 Comments

Code Rules are not just about Clean Code

Like any developer tool vendor, we at NDepend eat our own dog food. In other words, we use NDepend to develop NDepend. Most of the default code rules are activated in our development, and they protect us daily from all sorts of problems.

Rules like…

…actually help much more than it appears at first glance. It is not so much about keeping the code clean for its own sake. More often than not, a green rule that suddenly gets violated sheds light on a non-trivial bug: a bug that is much more harmful than just…

  • 5% of a type left uncovered by tests
  • a public method that could be declared internal
  • an immutable type whose instances can now see their state change
  • a type that is not used anymore and can be removed
  • an unsealed class that could be declared sealed
  • a constructor calling a virtual method

It is all about regression. It is not intrinsically about the violated rule, but about what happened recently that caused a previously green code rule to become violated.

This morning, thanks to a violated code rule, I stumbled on a bug that would otherwise certainly have slipped into the next public release. The rule Potentially dead Methods, which is usually always green, detected an unused method. The method deemed dead (i.e. uncalled) is a Windows Forms event handler. Hence I learned that this handler is no longer called when clicking its associated LinkLabel control. I figured out that, after some refactoring with the Windows Forms designer, the designer had removed the subscription to this handler.
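To make the failure mode concrete, here is a minimal hypothetical sketch (invented names, not the actual NDepend code). The designer-generated subscription line is the only thing tying the handler to the control, so if a designer refactoring drops that line, the handler silently becomes dead code while everything still compiles:

```csharp
using System;
using System.Windows.Forms;

class AboutForm : Form
{
    readonly LinkLabel m_DocLink = new LinkLabel { Text = "Online documentation" };

    public AboutForm()
    {
        Controls.Add(m_DocLink);

        // This subscription is normally generated by the Windows Forms designer.
        // If a designer refactoring drops it, linkLabelDoc_LinkClicked still compiles
        // but is never called: exactly what a "potentially dead method" rule catches.
        m_DocLink.LinkClicked += linkLabelDoc_LinkClicked;
    }

    void linkLabelDoc_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
    {
        System.Diagnostics.Process.Start("https://example.com/docs");
    }
}
```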

The screenshot below shows the situation. It also shows that Resharper detected that this handler method is no longer called.

Dead Method

I remember the lesson I learned as a junior C++ programmer: a compilation warning should be treated as an error. The same advice applies to the code rules a team decides to respect.

In any sufficiently complex system, we simply cannot rely on human skills to detect such regressions. It is not about keeping the code clean just for the pleasure of developing in a clean environment. It is about automatically gathering meaningful warnings as soon as possible, in order to identify harmful regressions and fix them.

In this context, the meaning of each code rule evolves significantly.

  • Checking regularly that classes remain 100% covered by tests is not about being obsessive about quality. Empirically, experience shows that portions of code that are hard to cover by tests, or changes that the developer forgot to cover by tests, are typically highly error prone. Poorly testable code is highly error prone.
  • Checking regularly that classes that used to be immutable haven’t become mutable is essential. Uncontrolled side effects typically cause some of the trickiest bugs. Clients that consume an immutable class certainly rely on this characteristic (you actually rely on string immutability every day, maybe without even noticing it); see the sketch after this list.
  • Checking regularly that there is no dead code is not just about avoiding the maintenance of unnecessary code. When a code element suddenly becomes unused, a developer might have forgotten to remove it during a refactoring session. But as we’ve seen, it is quite possible that a no-longer-used code element is the symptom of a regression bug.
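As a hypothetical illustration of the immutability point (invented types, nothing from the NDepend code base): an innocent-looking change can silently make an immutable class mutable, which is exactly the kind of regression such a rule flags.

```csharp
// Before: instances of Money never change after construction,
// so callers can freely share and cache them.
sealed class Money
{
    readonly decimal m_Amount;
    readonly string m_Currency;

    public Money(decimal amount, string currency)
    {
        m_Amount = amount;
        m_Currency = currency;
    }

    public decimal Amount { get { return m_Amount; } }
    public string Currency { get { return m_Currency; } }
}

// The regression: one field loses its readonly modifier and a setter appears.
// The type is no longer immutable, and every client that shared or cached an
// instance is now exposed to uncontrolled side effects, with no compiler error.
sealed class MoneyAfterRegression
{
    decimal m_Amount;                 // readonly dropped
    readonly string m_Currency;

    public MoneyAfterRegression(decimal amount, string currency)
    {
        m_Amount = amount;
        m_Currency = currency;
    }

    public decimal Amount
    {
        get { return m_Amount; }
        set { m_Amount = value; }     // the innocent-looking change
    }
    public string Currency { get { return m_Currency; } }
}
```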
Posted in Code, code organization, Code Query, Code Rule, CQL, CQL rule, CQLinq, Dead Code | 3 Comments

A typical effect of setting CopyLocal = true

If you have read me in the past, you certainly know that I have a problem with the Visual Studio default option that sets CopyLocal = true on project reference assemblies. I mean this option:

NancyCopyLocal

I’ve already explained the problem on my blog, and also in the white books available on the NDepend website (see the white book Partitioning code base through .NET assemblies and Visual Studio projects, page 4, CopyLocal = true is evil).

CopyLocalEvil

For example, in 2009, by fixing the problem on the NUnit code base, I was able in less than an hour to shrink the compilation time from 32 seconds to 12 seconds.

So what is the consequence of the Nancy framework relying on CopyLocal = true? A huge number of cloned copies of Nancy.dll, one for each VS project referencing it. After a recompilation, I count 56 occurrences of Nancy.dll weighing 842 KB each, which means more than 46 MB wasted. Apparently it doesn’t affect the VS compilation duration that much; I guess the VS team made VS 2012 smarter, in the sense that if it has already parsed a reference assembly for one project, it doesn’t parse it again for the others, which is a good thing.

NancyDuplication

I tried to post something on the Nancy user group, without success; hence this post. Fixing this issue is as simple as:

  • making sure that each project compiles into a common ..\bin\Debug and ..\bin\Release folder
  • referencing, from the VS projects, the DLLs in the ..\bin\Debug directory, and setting CopyLocal = false for each reference

Additionally, I found ..\bin to be a good folder to host test assemblies. This way they can reference the application assemblies that live in ..\bin\Debug, and the indirection can be handled with an App.config file along these lines:
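```xml
<configuration>
  <runtime>
    <!-- Reconstructed example of a typical probing configuration: it lets the test
         assemblies in ..\bin resolve the application assemblies compiled into
         ..\bin\Debug without copying them locally. -->
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <probing privatePath="Debug" />
    </assemblyBinding>
  </runtime>
</configuration>
```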

On Nancy code structure

One very good point is that Nancy.dll is a single DLL that depends only on .NET Framework assemblies, so it is optimally packaged.

NancyRefs

On the other hand, this large DLL, made of more than 7,000 lines of code and almost 300 classes, doesn’t abide by any kind of layered architecture. The consequence is that each namespace depends on all the other namespaces.
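Here is a minimal, hypothetical illustration (invented types, nothing to do with Nancy’s actual classes) of the kind of mutual namespace dependency that prevents layering: each namespace needs the other, so neither can sit below the other.

```csharp
namespace App.Requests
{
    using App.Responses;

    public class RequestDispatcher
    {
        // Requests depends on Responses...
        public ResponseWriter Writer { get; set; }
    }
}

namespace App.Responses
{
    using App.Requests;

    public class ResponseWriter
    {
        // ...and Responses depends back on Requests: a dependency cycle.
        public RequestDispatcher Dispatcher { get; set; }
    }
}
```

With such a cycle, App.Requests cannot be compiled, tested or reasoned about without App.Responses, and vice versa; multiply this across almost 300 classes and no layering remains.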

NancyGraph

As a consequence, classes are not layered at all; there are no high-level or low-level classes.

Let’s have a look at the dependency matrix below. It is made of the 287 Nancy.dll classes, in indirect dependency mode. A blue cell means that the class in the column depends (directly or indirectly) on the class in the row, a green cell means the opposite, and a black cell means that the two classes depend on each other. A perfectly layered set of classes (not something to aim for exactly, but something to approach) would show an all-blue lower triangle and an all-green upper triangle. Here, blue and green cells are mixed equally in both the lower and upper triangles, which is another way to see that the classes are not layered.

NancyMatrix

It means that it must be difficult to unit-test a cohesive set of classes (i.e. a component) in isolation from the others (even with mocks, which don’t help in the case of re-entrant dependencies). It also means that maintenance and evolution must be impaired by this absence of code structure, because touching a single class can virtually have an unexpected effect anywhere in the code base.

Posted in Acyclic componentization, Code Dependency, code structure, Code visualization, CopyLocal syndrome, Dependency Cycle, Dependency Graph, Dependency Matrix, graph, Graph of Dependencies, Lines of Code, Maintainability, Partitioning, Visual Studio, VS, VStudio | 22 Comments

The joy of being a programmer

I have been programming since I was 10, and I am now 38. Today I can measure how much good programming has brought to my life, directly and indirectly. I’d like to give credit to the aspects I love about my job. Hopefully some young people will read this and consider doing one of the most wonderful jobs on earth.

Getting in the flow: According to Wikipedia, flow is “the mental state of operation in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity. In essence, flow is characterized by complete absorption in what one does.” Focus, immersion, concentration, involvement… being in the flow every day by coding for hours and hours contributes a lot to a solid, positive state of mind; it is a bit like meditation. These are moments where one can completely forget about minor everyday annoyances, but also forget for a while the more serious problems in life everyone has to face. Being in the flow is the condition for solving challenging problems and creating beautiful pieces of engineering. Being in the flow can lead to addiction, but it is not addiction. It is essential to control when to check in to the flow and when to check out, making sure not to be disturbed in the meantime.

Being creative: Being a software engineer is one of the most mainstream ways of being paid to be creative. Writing software is often deemed an artistic activity. A programmer has to be humble, because this is a kind of art not understood by the masses. But being humble is a chance to become wiser and increase self-confidence. Also, knowing you are going to be creative for a while is an excellent motivation to overcome the initial effort of jumping into the flow. The truth is that for every passionate programmer there is a background thread in the mind in charge of creativity (often running at sleep time), which makes it so that, in the morning, the urge to create what you have in mind is too strong to resist.

Becoming an expert: It is common to hear that a programmer must know numerous technologies, and that their real skill is learning how to learn new technologies. I disagree, because what makes me really happy is to completely master a technology and exercise my expertise daily. I used to master all the tricky details of C++ and COM, and it was fun. Before that I mastered some assembly-level programming, and it was fun; I wasn’t even getting paid for it. In 2002/2003 and then in 2005 I wrote two editions of a 1,000-page book on .NET and C#, and writing it was one of the most blessed moments of my career. Since then I have capitalized on this knowledge every single hour of coding, which lets me focus my thoughts on the problems to solve, and not on all the non-trivial things a complex platform like .NET is actually performing under my feet. Of course I am constantly discovering new details about surrounding technologies, like functional programming through the prism of the functional paradigms introduced in .NET languages. But I know what my core knowledge is, both in terms of technologies and in terms of program design skills. And as long as I am not forced to change my core skills, relying on my expertise to express my creativity and make a living on top of it is a source of personal achievement.

Meet inspiring people sharing the same passion: I imagine meeting peers is a source of happiness for every expert in any field. This is another reason why investing in a solid, particular set of programming skills is a positive thing. Not only do you earn the respect of other programmers, it also lets you have great exchanges with smart people as passionate as you are. The importance of flow, underlined above, comes with the disadvantage of often being alone with your thoughts. Most programmers enjoy working alone anyway, but for those who need a bit more social activity, having expertise in something is also a great way to become partly a teacher (in professional or academic spheres), partly an architect who contributes to important decisions, partly a team leader responsible for a project’s progress, partly a consultant, and to share your knowledge in a pleasant social environment. I put the word partly in italics because if your social activity means you no longer write code meant to run in production, you shouldn’t consider yourself a programmer anymore, and you’ll lose a great deal of the points I am mentioning here. If you need social interactions all day long, programming is not for you. This is also why (sadly) there are so few women in software: evolution designed them to be much more social beings than men.

Being involved in something that makes sense: Here too, my position might be a bit different from what is widely accepted. I agree that for juniors it is important to multiply the opportunities to work in several different teams and companies, to get an idea of what they like and what they don’t, and to be influenced by several inspiring mentors. But once you become senior, working on the same application for the long term, one you feel good programming in, polishing it day after day, seeing it evolve across the years, maintaining it in a clean state by adopting modern practices like automatic tests, DbC or relentless refactoring, and having your say in strategic decisions: personally, I find this a great source of daily happiness and a great motivation to commit myself! In addition, working hard to regularly reach important milestones is an excellent way to give meaning to your professional career, which is (much) more the exception than the rule.

Work wherever, whenever you like: A 2 kg laptop with the proper tool set installed and a few hours of electricity every 24 hours: that is all a programmer needs to do his job well. A decent internet connection is often appreciated, and today there are only a few spots on earth where the internet is not available at all in some form. Programming is probably the least demanding working activity in terms of time and space requirements. Getting into the flow on a 12-hour flight across the planet, scheduling half a year to live and work on a paradise tropical island, avoiding traffic jams by working from home (in your pajamas), shifting programming to the night or early morning to take care of the kids and their education: all this is not only possible but actually pretty common. Most serious software companies let some of their skilled engineers work wherever they prefer. Did you know that many of the great minds behind Visual Studio didn’t actually move with their families to Seattle, but still live in their preferred location, sometimes far away from the US?

Make a decent living doing something you like: Last but not least, everywhere, skilled programmers are paid above the average salary of their country, and if we take the example of a developing country like India, good programmers earn a much, much higher income than the average. On top of that, a skilled programmer has pretty close to zero chance of remaining unemployed for long. This also means that if you don’t like your current position, it is easy to find another, better-suited one. This situation is possible because, less than two decades ago, modern civilization realized that IT is the condition for its development. It is a fact that many of the richest people in the world were originally programmers, and every motivated programmer has the potential to create their own ISV business. A programmer can also decide to make more money by coding for the financial industry, or even bet on a startup and potentially make millions in a few years.

All the next big things will consume even more IT: human genome analysis for the masses, the whole of medicine that will be deeply impacted by it, more prevalent portable devices, more sophisticated entertainment, augmented reality, robots and automated machines to perform surgery and to assist the growing number of elderly people, artificial intelligence in the long term, and certainly everything nobody has imagined yet. After all, 20 years ago nobody anticipated the impact of Google, and 10 years ago nobody anticipated the impact of Facebook and smartphones.

OK, enough getting into the flow of writing about my passion for programming; I have some code to write before the weekend :)

Posted in Code, Programming | 5 Comments