The occasional attraction of TypeMock & Duck Typing


// virtual for self-mocking
public virtual IMeasureCalculation CreateCalculation(string productType, IMeasurable measures)


Look at the virtual keyword. It's noise in the code for no other reason than to let an opportunistic usage of self-mocking slip through the blasted compiler.
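For context, here is a minimal sketch of the self-mocking pattern that lone virtual exists to serve. Every name except CreateCalculation's signature is hypothetical, invented for illustration:

public interface IMeasurable { }

public interface IMeasureCalculation
{
    decimal Execute();
}

// Hypothetical class under test: the real work happens in RunCalculation,
// which delegates creation of its collaborator to a virtual factory method.
public class CalculationRouter
{
    public decimal RunCalculation(string productType, IMeasurable measures)
    {
        IMeasureCalculation calculation = CreateCalculation(productType, measures);
        return calculation.Execute();
    }

    // virtual for self-mocking
    public virtual IMeasureCalculation CreateCalculation(string productType, IMeasurable measures)
    {
        throw new System.NotImplementedException(); // real creation logic elided
    }
}

// In the test code, a subclass (or a partial mock generated by a mocking
// tool) overrides the factory method to slip in a canned collaborator:
internal class SelfMockedRouter : CalculationRouter
{
    public IMeasureCalculation Stub;

    public override IMeasureCalculation CreateCalculation(string productType, IMeasurable measures)
    {
        return Stub;
    }
}

Without the virtual keyword, the override in SelfMockedRouter is illegal and the compiler blocks the test outright. That is the occasional attraction the title alludes to: TypeMock-style profiler-based interception, or duck typing in a dynamic language, doesn't need the keyword at all.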

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.

  • http://devauthority.com/blogs/devprime/default.aspx DevPrime

    @cmyers : Last I read from Anders, the CLR never inlines virtual methods. The problem with a system that dynamically loads assemblies at runtime into a single process space is that during compile time, you never know if someone down the line will eventually use another assembly that overrides something you just wrote. Inlining on virtual members is possible in “pure” C++ because you have one single giant monolithic build output and *everything* is known about the usage of the class at compile time. Not so with a system like .NET.

    As for the rest, I guess it depends on your frame of view :-) For ages, in all sorts of different OO languages, the problem of fragile base classes has existed – to the point that academic research produced something like 5 or so classic examples of things you can do to make base classes fragile. Most of them involve overriding members in a way the original author didn’t intend. A lot of the practices since then revolve around “safe” ways to allow extension or specialization on derived classes, and a lot of that involves extremely selective usage of virtual members.

    With regards to Colin’s comment, which touches the same type of issue, it isn’t necessarily a matter of making public components. You can have in-house components that some other member of the team inadvertently extends the wrong way (if everything is virtual), and suddenly you end up with some extremely weird bugs where expected results don’t show up, the object occasionally gets into a bad state it can’t get out of, or you end up with unexpected endless recursion (sketched at the end of this comment). Those are not necessarily easy things to debug.

    I think the fundamental issue is that some languages are very open to loose interpretation (especially the more dynamic ones). The goal of those languages is to make things as easy as possible for the developer, so that they spend more time being “productive,” as it were, rather than spending their time thinking about non-business-related technical issues. On the complete opposite side of the spectrum, you have languages that are “safer” or more technically rigid. Their goal is to keep the programmer from shooting himself (or the app/OS) in the foot. They need more input from the programmer because they will do precisely what is asked of them.

    Too much of the latter, and programmers feel frustrated because they spend all their time working to make the compiler happy, but the code always works as expected. Too much of the former, and the programmer is initially happier because he or she can write reams of business-related code without the compiler screaming at them every 5 seconds. However, the runtime results may not be what the programmer expected, because the runtime is making interpretive decisions, on small pieces of code, that the programmer would not have made but also didn’t have to think about at coding time.

    I guess there’s a happy middle ground somewhere in between, and I’m personally OK with how C# handles it.
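    To make the endless-recursion scenario above concrete, here is a minimal sketch with hypothetical names:

    using System.Collections.Generic;

    // The base author's implementation detail: SaveAll funnels through Save.
    public class DocumentStore
    {
        public virtual void Save(string doc) { /* write one document */ }

        public virtual void SaveAll(IEnumerable<string> docs)
        {
            foreach (string doc in docs) Save(doc);
        }
    }

    // A teammate extends it in a way that looks perfectly reasonable...
    public class AuditedStore : DocumentStore
    {
        public override void Save(string doc)
        {
            // ...but "a batch of one" re-enters SaveAll, which dispatches back
            // into this override for each item: Save -> SaveAll -> Save -> ...
            SaveAll(new[] { doc });
        }
    }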

  • http://colinjack.blogspot.com Colin Jack

    @cmyers : I agree it is annoying when a framework misses out important virtual members, but then a lot of the code we write isn’t intended to go into a publicly available framework, and in those cases having to make everything virtual seems a shame.

  • http://community.hdri.net/blogs/chads_blog cmyers

    @DevPrime: I think many of the problems with VMT dereferencing and such are largely solved, and the JIT does lots of speculative inlining and such (you take the hit only when/if you actually override something, which is usually rare). I could be wrong here, and I’m sure a CLR guru or two will correct me, but from what I’ve read about Java’s VM and the .NET JIT, it sounds like they handle most of these situations pretty well.

    As to your other point, you’re right and that’s another big argument for having optional virtualization.

    Working with the .NET BCL, log4net, NHibernate, and a number of other libraries, I find myself constantly cursing any attempt to limit functionality or overridability, or to wall me off from doing what I need to do (like being able to hand NHibernate a DataReader and have it hydrate an object for me; pre-1.2 you couldn’t do it, due to internal and private methods and the inability to override things without rewriting 90% of the mapper functionality).

    It seems to me that the Java model encourages private and hidden functionality to prevent accidental or harmful overriding, whereas with C#/.NET you can be a little more willy-nilly about exposing things in a protected fashion without sabotaging yourself.

    There’s a balance between internal implementation and allowing points of implementation divergence, and it seems the C# model might be more conducive to that (one such seam is sketched after this comment).

    Of course Jeremy will remind me that with dynamic languages you can do just about anything you want any time and only fools waste time with compilers… maybe he’s right.
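    One concrete sketch of the seam being described: a library can keep its plumbing non-virtual while publishing a single protected override point. The names here are hypothetical and deliberately simplified, not NHibernate’s actual API:

    using System.Data;

    public class Hydrator
    {
        // The invariant plumbing stays non-virtual, and therefore safe.
        public object Hydrate(IDataReader reader)
        {
            object entity = CreateEntity(reader);
            // ...identity-map bookkeeping, caching, etc. would live here...
            return entity;
        }

        // The one sanctioned extension point, exposed "in a protected fashion."
        protected virtual object CreateEntity(IDataReader reader)
        {
            return new object(); // placeholder for the real mapping logic
        }
    }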

  • http://devauthority.com/blogs/devprime/default.aspx DevPrime

    The problem with virtual is that you end up with a bunch of method pointers in the class’s virtual method table, because any derived class can replace the entry with another method pointer. This forces the compiler to defer to a runtime lookup to dereference the method location, rather than just jumping to the method. Inlining is just a bystander casualty of this problem, and not the only issue with regards to performance; the extra dereferencing itself accounts for most of it.

    The other problem is an issue of architecture more than performance. When you create a class, you are creating a contract. Anything you expose as public is assumed to be callable by anyone, and you must ensure that it is safe to do so. Similarly, when you mark something as virtual (or Overridable), you are stating an implied contract that it is safe to insert a custom implementation of the behavior. The problem is that this is often not a safe thing to do. You can *easily* create a fragile base class if you don’t control state management, among many other things, and you cannot begin to control that if you make everything overridable by default. So in essence, you are creating a class whose contract says it’s OK to have all behaviors overridden, when it probably isn’t really safe to do so. That, to me, is a bigger reason not to have everything automatically virtual. The perf hit from dereferencing the VMT is significant only in large quantities, really, and not enough to make me balk at the issue. The fragile base class issue, on the other hand, is pretty significant to me.
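    DevPrime’s implied-contract point has a classic demonstration in the fragile-base-class literature: the add/addAll trap. A miniature version with hypothetical names:

    using System.Collections.Generic;

    public class ItemSet
    {
        private readonly List<string> _items = new List<string>();

        public virtual void Add(string item) { _items.Add(item); }

        // Undocumented implementation detail: AddRange funnels through Add.
        public virtual void AddRange(IEnumerable<string> items)
        {
            foreach (string item in items) Add(item);
        }

        public int Count { get { return _items.Count; } }
    }

    public class CountingSet : ItemSet
    {
        public int Added;

        public override void Add(string item)
        {
            Added++;
            base.Add(item);
        }

        public override void AddRange(IEnumerable<string> items)
        {
            foreach (string item in items) Added++; // counted once here...
            base.AddRange(items);                   // ...and again inside Add
        }
    }

    // new CountingSet().AddRange(new[] { "a", "b" }) leaves Added at 4, not 2:
    // the derived class silently depended on a detail the base never promised.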

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller

    @Phil,

    When I was doing support in my first job I had a manager type that gave my group a terrible time. I caught him cooking the books one time and showed it to my boss.

    For some reason that guy never gave my group any trouble ever again. ;-)

  • http://haacked.com/ Haacked

    Why make it non-virtual? If you need to override it for testing purposes, might you not want to allow overriding it for other purposes? For example, fixing the books? ;)

  • http://community.hdri.net/blogs/chads_blog cmyers

    IIRC, in Java everything with at least a certain visibility is automatically virtual (though that may no longer be the case).

    This was one of the major differences in C#. I believe the argument was better performance (if everything is virtual, inlining becomes a big problem).

    Also, IIRC, the Java VM dorks figured out how to inline virtual functions and essentially negated the argument, so we’ve come full circle.

    Is a virtual differentiator needed any longer in C#? Can the CLR dorks figure out how to do inlining of virtual methods?
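    To make the difference between the two models concrete: in Java, instance methods dispatch virtually unless marked final, while C# makes the author opt in, and a non-virtual member can only be hidden, never overridden. A minimal sketch with hypothetical names:

    public class BaseReport
    {
        public virtual string Title() { return "base title"; }  // opt-in dispatch
        public string Footer() { return "base footer"; }        // statically bound
    }

    public class CustomReport : BaseReport
    {
        public override string Title() { return "custom title"; } // a true override
        public new string Footer() { return "custom footer"; }    // hiding, not overriding
    }

    // BaseReport r = new CustomReport();
    // r.Title()  -> "custom title"  (dispatched through the virtual method table)
    // r.Footer() -> "base footer"   (bound to the declared type at compile time)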