Old Code, New Testing Tricks – breaking old habits

I ran into a variation on an old threading problem the other day that I found nearly impossible to unit test.  When I say impossible, I mean that getting my test scenario to succeed required guaranteeing that two threading primitives were set before a WaitHandle evaluation occurred.  Getting that to happen reliably every time requires some fancy test code, which is where I stopped and said to myself – “there has to be a better approach.”


The resolution is extremely simple, but because this is a solution I’ve coded up many times in different situations, it took a little extra thought to break old habits and get it right.  It’s that ‘ah-ha’ moment that made this one enjoyable. 


Getting to the Testability Problem…


The problem itself is specific to a threading situation, and it arises from combining two constraints: you can’t just let an unhandled exception escape from a thread, and you want to give precedence to the abort event.  In the case where both the work and abort events below are signaled before the .WaitAny() call is evaluated, the AbandonedWorkItemException must be raised to let the client know that the target work item was not handled.


[Code image: BadCode – the original ThreadProc implementation]
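The original screenshot (with its numbered lines) isn’t reproduced here, so here is a rough sketch of the kind of code it showed, reconstructed from the description above – the event fields, RaiseAbandonedWorkItem, and DoWorkItem are my assumptions, not the post’s:

// Rough reconstruction of the "BadCode" ThreadProc: thread-state handling and
// WaitHandle synchronization are tangled together in one method.
private void ThreadProc()
{
    // Abort is listed first so that WaitAny gives it precedence when both handles are signaled.
    WaitHandle[] handles = new WaitHandle[] { _abortEvent, _workEvent };

    int signaled = WaitHandle.WaitAny(handles);

    if (signaled == 0)
    {
        // The hard-to-test branch: both events were set before WaitAny was evaluated,
        // so the pending work item was abandoned and the client must be notified.
        if (_workEvent.WaitOne(0, false))
            RaiseAbandonedWorkItem();   // hypothetical helper that surfaces AbandonedWorkItemException
        return;
    }

    DoWorkItem();   // hypothetical worker call
}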


As you can see, this makes testing line #29 of the original code – the branch that raises the AbandonedWorkItemException – extremely difficult.


Stepping Back for a Minute…


So I sat back, went back to basics and asked myself – what is this class’s responsibility?  The answer was simple – maintain thread state and synchronize thread access.  The ‘and’ points out the problem – one of these responsibilities needs to be refactored out. As I mentioned earlier, the solution is quite simple – I first refactored the threading work into another class and then extracted its members to an interface to support injection. The interface looks like this:


[Code image: the extracted synchronization interface]
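The interface image isn’t reproduced either; judging from the mock expectations later in the post, its shape is roughly the following – the name ISynchronizationHandler and the choice of method versus property are my guesses:

public interface ISynchronizationHandler
{
    // Waits on the abort/work handles and runs the work item when appropriate.
    // Returns false when the abort event won and the work item was not executed.
    bool ExecuteWorkerItem();

    // True when the work event had also been signaled by the time the abort won.
    bool WorkerEventSignaled { get; }
}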


This results in a much more readable ThreadProc function as shown below.


[Code image: GoodCode – the refactored ThreadProc]
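Again as a sketch (the screenshot isn’t reproduced): with the synchronization work pushed behind the injected handler, the refactored ThreadProc reduces to something like this, where RaiseAbandonedWorkItem is the same hypothetical helper as above:

private void ThreadProc()
{
    // _synchHandler is the injected ISynchronizationHandler.
    if (!_synchHandler.ExecuteWorkerItem() && _synchHandler.WorkerEventSignaled)
    {
        // Abort won while work was still pending: the work item was abandoned.
        RaiseAbandonedWorkItem();
    }
}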


Turning attention back to the ThreadProc function, the old line #29 becomes line #25 in the new code, and it is now easily tested with a hand-rolled mock class – or I can simply code up my interface mock expectations with something like RhinoMocks:


Expect.Call(synchHandler.ExecuteWorkerItem).Return(false);
Expect.Call(synchHandler.WorkerEventSignaled).Return(true);
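In context, a complete record/replay test might look something like the sketch below – the [Test] attribute, the WorkerThread class and its RunOnce method are assumptions standing in for the real class under test, and ExecuteWorkerItem is written as a method call per the interface sketch above:

[Test]
public void AbortWithPendingWork_IsReportedAsAbandoned()
{
    MockRepository mocks = new MockRepository();
    ISynchronizationHandler synchHandler = mocks.StrictMock<ISynchronizationHandler>();

    // Record the two expectations, then switch the mock to replay mode.
    Expect.Call(synchHandler.ExecuteWorkerItem()).Return(false);
    Expect.Call(synchHandler.WorkerEventSignaled).Return(true);
    mocks.ReplayAll();

    new WorkerThread(synchHandler).RunOnce();   // hypothetical class/method under test

    mocks.VerifyAll();
}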


CSLA and Telerik radGrid- a collection of posts

I am busy finishing up some knowledge transfer and found the need to gather together various postings I have done here and elsewhere on getting CSLA and Telerik playing nice.  A few of these topics deal with generic binding issues (CSLA aside).  Here are the items…


 


Conversations on CSLA and Telerik radGrid


Conversation: Using the radGrid AddNew functionality with CSLA
Code: Databound (IBindableList/IReportTotalRowCount) Paging and Sorting to a control 
Step-by-Step: Hierarchical data binding with CSLA and Telerik’s radGrid Control
Code: Setting up CSLA databinding on a webpage using an @Register tag  


Creating a Horizontal GridSplitter in WPF – for real

I ran into a number of articles on the web describing how to create a horizontal grid splitter control in WPF – most of them wrong. There are a couple of “Walkthru” articles on MSDN that show the proper way to do it, but they waste time poking around the Properties window (who codes like that anyway?). They aren’t consistent either – the first article I found calls for setting properties that are not listed in the second and that appear to be extraneous.


So here is a concise description focusing on the XAML alone – I haven’t found a decent XAML code beautifier on the web yet, so bear with me.  I am not inheriting any styles, as you can see below.


First, create a Grid with an additional row to host the horizontal splitter.  Note that in this example the Grid has two columns to show the column-spanning behavior:


<Grid VerticalAlignment="Stretch" HorizontalAlignment="Stretch">
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>
    <Grid.RowDefinitions>
        <RowDefinition Height="*" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="*" />
    </Grid.RowDefinitions>


Next, create a GridSplitter in row #1 (remember that row indices are 0-based).


<GridSplitter
    ResizeDirection="Rows"
    Grid.Column="0"
    Grid.ColumnSpan="2"
    Grid.Row="1"
    Width="Auto"
    Height="3"
    HorizontalAlignment="Stretch"
    VerticalAlignment="Stretch"
    Margin="0"/>
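For completeness, here is how the two pieces might fit together – the Border elements are just placeholder content for the top and bottom rows and are not part of the technique itself:

<Grid VerticalAlignment="Stretch" HorizontalAlignment="Stretch">
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>
    <Grid.RowDefinitions>
        <RowDefinition Height="*" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="*" />
    </Grid.RowDefinitions>

    <!-- Placeholder content above and below the splitter -->
    <Border Grid.Row="0" Grid.ColumnSpan="2" Background="LightGray" />
    <Border Grid.Row="2" Grid.ColumnSpan="2" Background="Gainsboro" />

    <GridSplitter
        ResizeDirection="Rows"
        Grid.Column="0"
        Grid.ColumnSpan="2"
        Grid.Row="1"
        Width="Auto"
        Height="3"
        HorizontalAlignment="Stretch"
        VerticalAlignment="Stretch"
        Margin="0"/>
</Grid>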


And there you have it.


End of an Era – my favorite Favre story

Brett Favre called it a career today.  That got me thinking about my favorite Favre story… 


His first regular season game was the day after my birthday in 1992, and I went to a sorority house to watch it.  Two of the women were wearing #4 jerseys (keep in mind, Majkowski was starting and people were still trying to figure out where the ‘V’ and ‘R’ went in Favre).  I had to give them a hard time, so I said, “Who is #4, the backup punter?”


They went on to explain that they had met Brett, along with other players, that summer at a restaurant outside of Green Bay, and that Favre was the only one who took the time to talk to them.  He told them how excited he was for a fresh start in Green Bay and about moving his family there.  They were obviously impressed with how genuine he was.


When Majkowski went down that day, they were jumping up and down cheering for Favre to go in – and I thought to myself, “there goes the season.”


What can I say:  When you’re wrong, you’re wrong.


I went to my first game at Lambeau on December 30th against the Lions – his last regular season game. 


The guy was a class act from day one and kept it going his entire career. Thanks Brett – can’t wait to see the HoF induction!


 


Linq to Objects – measuring performance implications (part 1)

After working with Linq to Objects, I started thinking about how this tool could behave in the wrong hands.  At its simplest, a seemingly elegant query could easily turn into a CPU hog if the underlying data structure isn’t organized well.  And at its best, how well will these queries actually make use of the underlying data structures?


What to do?  Write some code and get some numbers to compare.


What I found was interesting and eye-opening, and it opened up more questions for another day. I set out to measure Linq performance over a reasonably well-optimized data structure, varying the where-clause composition, and to compare that against code going directly after the data structure in typical pre-Linq fashion.


What I learned was fascinating – let’s take a look at the code.


Given a dictionary defined as:
private Dictionary<int, string> _lookupList = new Dictionary<int, string>();


I populate it using a loop where the upper limit is a const I can vary:
for (int x = 0; x < _listSize; x++)
    _lookupList.Add(x, string.Format("item#{0}", x));


I can then use Linq to query the item dictionary with something like this:


string[] myResult = (from i in _lookupList
                     where i.Key >= lowerLimit && i.Key <= upperLimit && i.Value.Contains("1")
                     select i.Value).ToArray();


Let’s take three variations as follows:


1) where i.Key >= lowerLimit && i.Key <= upperLimit && i.Value.Contains("1")

2) where i.Value.Contains("1") && i.Key >= lowerLimit && i.Key <= upperLimit

3) Hand-code the query to go against the data structure to compare performance.


To make sure I varied the access across different areas of the dictionary, the query was performed against 10 discrete key values at the beginning, middle and end of the dictionary. Performance was consistent across each area, showing that Linq was utilizing the data structure to some degree – as expected.  The .Contains check further reduced the result set to 1–2 items, depending on where the midpoint fell. These test sets were run 10 times and the averages taken.
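The timing harness itself isn’t shown in the post; here is a minimal sketch of what the measurement loop might look like, assuming a Stopwatch-based timer and an Action delegate standing in for whichever variation is being measured:

// Hypothetical measurement helper – not the original harness.
// Requires System.Diagnostics for Stopwatch.
private static long MeasureAverageMilliseconds(Action runQuery, int iterations)
{
    Stopwatch timer = new Stopwatch();
    for (int run = 0; run < iterations; run++)
    {
        timer.Start();
        runQuery();     // e.g. the Linq query or HandCodedAccessorFunction over the chosen key ranges
        timer.Stop();
    }
    return timer.ElapsedMilliseconds / iterations;
}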


The results:


Item#1 – averaged 2643 ms
Item#2 – averaged 671 ms
Item#3 – averaged <1 ms


Observations: 


The difference between #1 and #2 is roughly a factor of 4.  I am not sure exactly how this performed internally, but I suspect that item #2 scanned the entire contents of the dictionary.  It is hard to tell, since a test that eliminates the where clause entirely results in a very different memory allocation pattern and therefore inherently larger timing numbers.


The difference between #3 and the other items is staggering.  Keep in mind, I tried to handicap this with a try/catch block around the indexer into the dictionary, stored the matching values in a List<string> and then performed a List<string>.ToArray() just for good measure. 


That’s eye-popping – it’s more than two orders of magnitude faster than the best-performing Linq query and more than three orders of magnitude faster than the worst. 


Bottom line – despite my best efforts I just couldn’t make my hand-written code perform as poorly as Linq. 


Summary:


So what did I learn? 


- Variations on where-clause construction have an implied order when considering performance.  What are these implied rules exactly?  I’d love to know.

- An entirely different test would have to be constructed to understand join performance.

- How does the [Indexable] attribute play into optimization choices, and what are the performance considerations of that attribute?

- When doing intensive object querying, Linq to Objects needs to be eyed very carefully in the context of the application.


+++  Addition per Comment


Here is the code that Jimmy Bogard requested.  As noted in my post, I added exception handling and chose to use a List<string> to gather the matching values and then convert them to an array of strings, just as the Linq version does. If anything, I am taking the most conservative path here.  The caller of this function is responsible for the time measurement.


private void HandCodedAccessorFunction(int lowerLimit, int upperLimit)
{
    List<string> result = new List<string>();
    int y = 0;  // count of matching items; printed below
    for (int x = lowerLimit; x <= upperLimit; x++)
    {
        try
        {
            if (_lookupList[x].Contains("1"))
            {
                result.Add(_lookupList[x]);
                y++;
            }
        }
        catch (Exception)
        {
            Console.WriteLine("Exception thrown");
        }
    }
    string[] ar = result.ToArray();  // mirror the Linq version's .ToArray() for a fair comparison
    Console.WriteLine(y);
}
