I have been asking this question of many developers after having heard the viewpoints of many others on the alt.net list. The common viewpoint seems to be that Debug.Assert is evil and shows that a unit test is missing. Many also commented that if I am using a Debug.Assert, I should have matching production behaviour immediately after it. Some quotes from the discussions I have had:
Asserts are evil, you should write a unit test to show the failure
Asserts are antiquated
Asserts used to be useful in the C/C++ days because you would be testing conditions that should never be false but could be, because of direct memory manipulation etc. They have no real use in a managed environment.
I think that all of these viewpoints have really missed the point of why I would want to use an Assert. To me the main reason to Assert is that:
I want this condition checked when in Debug mode, but I don’t actually WANT to run the check in production. If the condition were to fail in production, then let it fail or let it return incorrect information.
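A quick note on the mechanics: Debug.Assert is marked with [Conditional("DEBUG")], so in a build where the DEBUG symbol is not defined (a typical Release build) the compiler drops the call entirely and the condition is never evaluated. A minimal sketch (the Divide method is just an illustration):

```csharp
using System.Diagnostics;

class Demo
{
    static int Divide(int numerator, int denominator)
    {
        // Fires in a Debug build if the condition is false; in a Release
        // build (no DEBUG symbol) the whole call is compiled out.
        Debug.Assert(denominator != 0, "denominator must be non-zero");
        return numerator / denominator;
    }
}
```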
At the outset using an Assert this way may sound like TDD sacrilege, but perhaps going through an example will clear it up. Let’s say that, for whatever reason, I am re-implementing Array.BinarySearch. One of the preconditions of BinarySearch is that the thing I am searching is sorted. This is a canonical example of where I would want to use an Assert. Writing a unit test for this condition (or expecting a runtime failure from it) is ridiculous, as it would take an O(log n) operation and make it O(n + log n). In other words it would take me longer to ensure that the thing is actually sorted than it would take me to actually search it.
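Here is a rough sketch of the idea, assuming a hypothetical re-implementation (the IsSorted helper and the exact signature are just illustrative):

```csharp
using System.Diagnostics;

public static class SearchExample
{
    // Hypothetical re-implementation of Array.BinarySearch over an int array.
    // The Debug.Assert documents the "input must be sorted" precondition and
    // checks it in Debug builds only; the O(n) scan disappears from Release
    // builds, so the production cost stays O(log n).
    public static int BinarySearch(int[] items, int value)
    {
        Debug.Assert(IsSorted(items), "BinarySearch requires a sorted array");

        int low = 0, high = items.Length - 1;
        while (low <= high)
        {
            int mid = low + (high - low) / 2;
            if (items[mid] == value) return mid;
            if (items[mid] < value) low = mid + 1;
            else high = mid - 1;
        }
        return ~low; // not found: complement of the insertion point
    }

    // O(n) helper used only by the assert above.
    private static bool IsSorted(int[] items)
    {
        for (int i = 1; i < items.Length; i++)
            if (items[i - 1] > items[i]) return false;
        return true;
    }
}
```

The point is exactly the one above: in Debug the expensive precondition check runs; in production it costs nothing, and a violated precondition simply yields a wrong answer.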
Spec# currently does not support this type of scenario either. To support it I would need the ability to say that this condition applies only to the prover and that no runtime code should be emitted for it. Hopefully this is something that will be added in the near future.