
Unit Testing Business Logic without Tripping Over the Database

A fairly common topic with TDD practitioners, both newbie and experienced, is how the heck to unit test business logic with a database hanging around.  I’ve had several conversations about this lately, so I thought I’d write a post about it.  Before I talk about some strategies, here are a couple of good reasons to avoid data access calls inside unit tests for business logic:


  • Tests with database calls execute significantly slower than tests that stay within an AppDomain.  Fast feedback cycles are an absolute necessity for effective TDD, and slow tests directly hurt a development team’s velocity.  When I was first out of college I was doing large engineering calculations on an old 486.  Each calculation run could take 5 minutes plus.  When we moved up to P5-100’s our productivity jumped significantly.  Slow unit tests are like working with 486’s.  I do miss napping in my cube though.

  • Tests with database calls are more labor intensive to write.  You have to create SQL statements to put the database into a known state before you run your tests.  Checking the test results probably involves scraping data back out of the database and coercing it into .Net types just to do the assertions.  If you can stay inside the world of strongly typed .Net or Java code, then Intellisense and code completion can speed you along.

  • Damned if you do, damned if you don’t. 

    • Embedding a bunch of SQL statements into the test fixture classes can make the tests harder to read and understand (trust me on this one)

    • Putting the database setup in another location can make the unit tests harder to understand and troubleshoot because the test data is external to the test fixture class.  “ALT-TAB Hell.”

  • The intent of the unit tests can be unclear.  Verifying a successful unit test by checking a status column in the database isn’t always the height of clarity.


You might not agree with the above, but if you do, the rest of the post is about the various ways I’ve used or observed to isolate the testing of the business logic from persistence infrastructure.


Mock the Database, but Where?


One of the best ways to get the database out of your way is to hide data access behind abstracted interfaces that can be mocked when testing business logic.  All you’re doing here is treating data access as yet another service invoked by your application.  Take all of your data access code and put it into some sort of Gateway pattern class.  Then use some sort of Dependency Injection with the business class to substitute a mock for the real data access implementation.  In the case below, you would create a mock object for IDataAccessGateway.


      public class BusinessClass
      {
            private IDataAccessGateway _gateway;

            public BusinessClass(IDataAccessGateway gateway)
            {
                  _gateway = gateway;
            }

            public void PerformSomeSortOfAction(DataSet dataSet)
            {
                  // Manipulate the DataSet in some way

                  _gateway.SaveSomething(dataSet);
            }
      }

      public interface IDataAccessGateway
      {
            void SaveSomething(DataSet dataSet);
      }

Do not mock happy, fun ADO.Net.  By necessity ADO.Net is a low-level API.  As a general rule I would advise anyone to avoid mocking a low-level API under almost any circumstances.  Mocking even a simple ADO.Net call would involve several steps and objects for getting a connection, creating a command object, attaching said command to connection, creating a bunch of parameters, etc.  Just don’t go there.  I noticed a junior-junior pair having some trouble with a coding task last year.  When I finally looked over their shoulder I discovered that they were trying to write a unit test by using NMock to create dynamic mocks for the IDb* interfaces.  The unit test code was about 9 parts NMock.Expect() calls and 1 part performing the actual test.  They changed testing strategies and their work started to move again.
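To contrast, mocking the gateway interface takes only a few lines, and often a hand-rolled stub is all you need.  Here’s a sketch of what such a test might look like, assuming the BusinessClass and IDataAccessGateway from above (the RecordingGateway class and the NUnit fixture are my own illustration, not part of the original code):

```csharp
using System.Data;
using NUnit.Framework;

// A hand-rolled stub that simply records what the business class
// asked it to save -- no connection strings, no SQL scripts.
public class RecordingGateway : IDataAccessGateway
{
    public DataSet SavedDataSet;

    public void SaveSomething(DataSet dataSet)
    {
        SavedDataSet = dataSet;
    }
}

[TestFixture]
public class BusinessClassTester
{
    [Test]
    public void PerformSomeSortOfActionSavesTheManipulatedData()
    {
        RecordingGateway gateway = new RecordingGateway();
        BusinessClass business = new BusinessClass(gateway);

        DataSet dataSet = new DataSet();
        business.PerformSomeSortOfAction(dataSet);

        // The whole verification happens in memory.
        Assert.AreSame(dataSet, gateway.SavedDataSet);
    }
}
```

The entire test runs inside the AppDomain; compare that to the “9 parts NMock.Expect()” mess of trying to fake the IDb* interfaces.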


This same stricture applies to any version of Microsoft’s Data Access Application Block (it’s all static methods anyway) or Enterprise Library.  These tools are still just thin veneers over ADO.Net and suffer from the same sort of mocking overhead that raw ADO.Net does.  Our internal analogue to EntLib has a dedicated static mock mechanism for unit testing low level data access code.  We’ve barely used it because it’s still not that convenient.


To put this bluntly as a rule of thumb: a business or even service layer class should never have any reference to any of the System.Data.* namespaces.  I would allow an obvious exception for DataSets, with the caveat that DataSets aren’t the best choice for business entities and not particularly suitable for Data Transfer Objects either  (go ahead and argue, I’ll just sic Bellware on you).  I’d be really uncomfortable about referencing an IDataReader in business or service code too.  That smells awfully wrong to me.
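One way to honor that rule of thumb is to write the gateway in terms of domain types, so the System.Data plumbing is trapped entirely inside the gateway implementation.  A rough sketch (Invoice and IInvoiceGateway are hypothetical names of my own, not from the post):

```csharp
// The business layer sees only domain types; the ADO.Net plumbing
// lives entirely inside the implementation of this interface.
public interface IInvoiceGateway
{
    Invoice FindInvoice(long id);
    void Save(Invoice invoice);
}

// A plain domain class -- no DataSet, no IDataReader, no connection.
public class Invoice
{
    public long Id;
    public decimal Amount;
}
```

Business code written against IInvoiceGateway never needs a reference to System.Data.* at all, which is exactly the smell test described above.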


Invert the Control


The section above talks about mocking data access when your business logic follows a Transaction Script pattern.  If you’re starting from scratch on a system that is business logic intensive, you’re probably better off organizing your business logic as a Domain Model anyway.  In this case your domain classes (business logic layer) are completely independent of any kind of persistence mechanism.  For persistence I like to use the Data Mapper pattern to load and persist the business objects (this is what tools like NHibernate do under the covers).  The mapper classes are aware of the business domain classes instead of the business classes calling the data access classes (Inversion of Control).  A Service Layer class of some sort would be responsible for calling the mappers.  Here’s a sample of what I mean:


      public class OrderServiceClass
      {
            public void SendPendingOrder(SendOrderMessage sendOrderMessage)
            {
                  OrderMapper mapper = new OrderMapper();

                  // Find the correct Order object and call Send(Destination)
                  Order order = mapper.FindOrder(sendOrderMessage.OrderId);
                  order.Send(sendOrderMessage.Destination);

                  // Persist the new Order state
                  mapper.Save(order);
            }
      }


The reason this is advantageous for unit testing is that you can test the business logic behind sending an order without any database interaction.  The business logic can be verified by checking the state of the Order object and its children before and after the call to the Send(Destination) method.
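A state-based unit test along those lines might look like this.  Everything here other than the Send(Destination) call is invented for illustration — a toy Order with a Status field standing in for whatever state the real domain class carries:

```csharp
using NUnit.Framework;

// Hypothetical minimal domain classes for illustration only.
public enum OrderStatus { Pending, Sent }

public class Destination
{
    public string Name;
    public Destination(string name) { Name = name; }
}

public class Order
{
    public OrderStatus Status;
    public Destination SentTo;

    public void Send(Destination destination)
    {
        // The business rule under test: sending marks the order as sent.
        Status = OrderStatus.Sent;
        SentTo = destination;
    }
}

[TestFixture]
public class OrderSendTester
{
    [Test]
    public void SendingAPendingOrderMarksItAsSent()
    {
        // Build the Order directly in memory -- no mapper, no database.
        Order order = new Order();
        order.Status = OrderStatus.Pending;

        order.Send(new Destination("Austin"));

        // Pure state-based verification of the business logic.
        Assert.AreEqual(OrderStatus.Sent, order.Status);
        Assert.AreEqual("Austin", order.SentTo.Name);
    }
}
```

Because the domain class knows nothing about persistence, the test never touches OrderMapper at all; the mapper gets its own integration tests against a real database.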


I Do Not Like the Active Record Approach


Another way to handle persistence of business domain classes is the Active Record pattern.  In this case each domain class is responsible for its own persistence.  Each domain class will have a signature like this:


      public class Order
      {
            public void Save() {}
            public void Delete() {}
            public void Insert() {}
            public void Load(long id) {}
      }


This came up recently at the Austin Agile lunches because one of our members was evaluating Rocky Lhotka’s CSLA.Net framework for one of his projects.  I’ll admit that I’m very biased against CSLA because the single worst codebase I’ve ever seen was a VB6 monster that used CSLA.  I could be wrong, but I still would not recommend something like CSLA.Net for TDD projects because I think the Active Record style of domain objects binds the business logic too tightly to the database.  I think that this style of data access makes unit testing without the database harder. 


Several of the O/R tools for .Net are really Active Record patterns (usually with code generation).  Testability has to be a major concern when you choose a persistence strategy.  I definitely prefer a Domain Model approach with external mapping, but I’ve spoken with people who swear by Active Record classes with internal mapping.


Point of View – the Database is the Application vs. the Database is Merely the Persistence Mechanism


How you personally answer this question largely determines how you lay out software systems.  At one extreme, database-centric folks obsess over relational models and treat the middle tier code as merely a conduit to get data in and out of the database.  This point of view seems to be much more prevalent in the Microsoft world, and definitely among older developers and requirements analysts (flame away, but you know it’s true).  I’ve often seen requirement specifications from business analysts that amounted to “get data from this table and go insert it over here in a different way.”  This kind of thing leads to some nasty design and architecture smells: gross duplication of code, zero encapsulation, scalability issues, and nightly batch jobs that run for 30 hours at a time locking transactional tables left and right (true story).  In one instance it also led to a PM making a waterfall schedule that devoted 75% of the development man-hours to logical and physical data modeling on a system that probably had only 4-5 database entities but a complex user interface and a bunch of integrations to legacy systems (thar be dragons).


The other issue with a database-centric development philosophy is that automated unit tests are less effective.  Toss in largish stored procedures and you’ve got a mess.  Procedural code can be more difficult to unit test than well-factored object-oriented code because it’s harder to isolate pieces of functionality.  Embedding that procedural logic within a database just compounds the problem.  Yes, there are xUnit toolsets for databases now, but they’ll never be as easy to use as NUnit or JUnit.


No Business Logic in Stored Procedures!


I make exceptions for cases where set-based logic is easier to express with declarative SQL than with procedural middle tier code.  Otherwise I think business logic in stored procedures is evil.

About Jeremy Miller

Jeremy is the Chief Software Architect at Dovetail Software, the coolest ISV in Austin. Jeremy began his IT career writing "Shadow IT" applications to automate his engineering documentation, then wandered into software development because it looked like more fun. Jeremy is the author of the open source StructureMap tool for Dependency Injection with .Net, StoryTeller for supercharged acceptance testing in .Net, and one of the principal developers behind FubuMVC. Jeremy's thoughts on all things software can be found at The Shade Tree Developer at http://codebetter.com/jeremymiller.
This entry was posted in Database and Persistence, Design Patterns, Ranting, Test Driven Development.
  • http://focuswindows.blogspot.com WADE


    In my opinion, DB unit testing is the DB developer’s business, not the application developer’s. At least that is true in my organization; we believe that typical application developers are not good at database design, especially in the modern OO era.

    We employed a powerful DB unit testing tool named AnyDbTest (http://www.anydbtest.com). With this tool we need NOT write Java/.NET DB unit testing code any more. We configure one XML file to say what we want to test and what our assertions are, and AnyDbTest does the rest.

    Additionally, we can version-control the XML-style test cases just like code files. It is very handy.

  • Chris McGinn

    I used to feel this way too. If you are dealing with a really big domain it may still apply, but enter the “Local Database”. I think developers often ignore the obvious. The ADO.Net DataSet and its associated designer have been, for me, the easiest way to quickly design a domain layer for testing or designing business logic. The code behind of the DataSet (which takes 5 minutes to set up) is a wonderful place to perform validation when rows change (rows meaning data, business data that has rules). The DataSet also maintains versions of the data, which makes it a breeze to compare old data before business logic and new data after. I started down the path of using mocks, NHibernate, the Spring.NET framework, etc., all to make data access code and testing easier; then I went back to my .NET 1.0 days and started playing with DataSets again. I quickly realized it was time to start taking them seriously.

  • http://codebetter.com/blogs/jeremy.miller Jeremy D. Miller


    “you can unit test business logic” is different from “it’s easy to unit test the business logic.” I shouldn’t have to mock out data access methods and all that data portal jazz just to unit test business logic at all. It’s work that I wouldn’t have to do with other architectures.

  • ejames

    Using TDD for CSLA objects is no different from TDD using plain ordinary C# objects. Just because CSLA uses the Active Record pattern doesn’t mean you cannot use dependency injection to inject the data access code.

    CSLA defines four (give or take some) Methods to be overridden for data access: Create, Fetch, Update and Delete. The implementation of such methods is left for the business object developer, leaving the door wide open for use of data-access interfaces, dependency injection, and database isolation.

  • http://dudesdoingdotnet.blogspot.com thedotnetdude

    Hmm.. still searching for an awesome VSTS unit testing complete success stories.. Do you have any? Mind to share it here..

  • http://codebetter.com/blogs/jeremy.miller jmiller


    “How do I prevent the UI programmer from doing this:”

    Um, make sure the UI developer understands why that might not be what the team wants to do? Not just a dictate from above, but a genuine understanding and agreement about why the team wants to code this way. It’s all about socializing your design.

    There’s just about zero chance of getting good software out of sloppy or low skill developers.


  • Morri

    Shouldn’t your example come to resemble something like this in the UI code

    public void ButtonSaveData(…)
    {
        MyDAG gateway = new MyDAG();
        BusinessClass bc = new BusinessClass(gateway);
        bc.SaveData( UIControl.GetValue() );
    }

    How do I prevent the UI programmer from doing this:

    public void ButtonSaveData(…)
    {
        MyDAG gateway = new MyDAG();
        gateway.SaveData( UIControl.GetValue() );
    }

    If he has to use the data access class anyway (so that it can be passed to the business logic), and he sees it has a method for saving data (and that’s what he wants to do), at some point or another he’s going to do it.

  • http://geekswithblogs.net/wojan Scott Wojan

    First off, great article!
    I love TDD with DDD and I love databases as well, so I don’t understand why it’s constantly presented as if you should be on one side or the other.
    “At one extreme, database-centric folks obsess over relational models and treat the middle tier code as merely a conduit to get data in and out of the database.”
    You’re designing an end-to-end solution, so it’s important that you are involved from end to end, and the database is quite often at one end. You HAVE to be concerned with the relational model! If the schema is a mess, it can render a tool like NHibernate almost useless, it can cause poor performance, and it will probably contain bad data from lack of referential integrity. Sure, the database can merely be used for persistent storage, but that still should be treated as a huge concern and not an afterthought.

  • Wash (back from the dead)

    I agree, but I also disagree.

    Here is why(stick with me – I will get back to the point):

    Our application (Jeremy has heard of it) has two ways to do a simple action: decrease the total dollar amount of a given business object and persist it back to the data layer, while making sure that all the parent objects take the adjusted total into account.

    Currently the app has two methods to accomplish this. One is direct access to a business object which implements an Active Record pattern, except it pushes all the business logic to a sproc (yes, not good). The other is an API, which actually uses the same business object, but was an attempt at separating the ugly implementation stuff so that integration developers did not have to deal with it.

    So, I don’t like this. We want to move the adjustment logic to a single interface using a Service Layer pattern. One of my colleagues referred to a similar layer that he used on a previous project that they called the Crap Layer. So, I renamed ours to the S.L. (You figure out what the S means.)

    The idea of the S.L. is to keep all the ugly code underneath it. Yes, the ugly stored procedures will stay, for now.

    Now, I will get back on the point. We are going to do this under TDD. I want tests written for the new actions that are going to be added to the S.L. I also want the underlying implementation to begin using the legacy objects described above.

    I want to do it this way for a few reasons. One, I know the current logic works (bad or good implementation aside). Two, I don’t have time to change the underlying logic. Three, this falls under a pattern similar to Michael Feathers’ characterization tests. Four, it just feels right (never underestimate your gut).

    So, here is my problem. How can I write tests under this scenario without touching the database? I can’t. It does not ensure that I did not break anything. (One could argue that these are not units tests, but integration tests. I just wanted to get some good dialogue on your blog since we never get a chance to do it face to face – that often.)

    Here is where I agree:

    Once I get the S.L. under test, we are going to go back and convert the business objects to use a Domain Model with an O/R mapper. We will write unit tests at this level which will not use the DB layer, but now I have characterization tests to guarantee that I am still under the original behavior of the app.

  • Jeremy D. Miller


    Just to be pedantic, DataSet == “Table Module,” not Active Record. The point being as I’m sure you well know is that DataSets don’t equate to a domain model approach. I got my fill of that back in the days when VB6 apps used the old ADODB.Recordset as a Golden Hammer. It’s awfully nice to actually write strongly-typed OO code where the logic lives with the data.

    For that matter, DataSets are a pain in the ass in unit test code anyway.

    DataSets are nice for reporting apps with no business logic, but DataSets & lots of business logic == ugly. You could even argue that the “tooling” around DataSets is nothing but rope for Mort to hang himself with “write-only” code.

    If you haven’t already, check out http://neo.codehaus.org for a weird/clever mingling of Dataset and Domain Model. It took them a hell of a lot of code to be able to write a domain model and still take advantage of the DataSet persistence.

    RoR can only help us anyway by putting heat on the ASP.Net team.

  • http://codebetter.com/blogs/darrell.norton/ Darrell Norton

    If you want to use the Active Record pattern, just use a DataSet. :) Creating a framework around it is kind of pointless, since the DataSet was built for that type of work.

    And the Ruby on Rails crowd talks about how great the pattern is. Ummm… .NET already HAS the Active Record pattern, and has for 3 years now. With a whole lot more tooling around it. :)

  • http://danbunea.blogspot.com Dan Bunea


    I have been reading your blog for a while and I am very happy with what you write. I have managed to reach the same conclusions as you did. My view on sprocs is that they are just too hard to maintain. If your customer changes his mind often, the stored procedures and the business logic in them will take more and more time to update, and over time your velocity will decrease much more than with a code-based business logic approach.

    Unit tests have a few very important properties:

    – small – this means they will be clear, easy to understand and modify/maintain, and can serve as documentation for the code
    – isolated – achieving this is the hardest, but it is well worth it and there are methods to do it
    – fast – as you say, you need feedback from your code. The faster you get it, the better. I call my tests my ‘break detectors’.

    When we speak about the 3 properties above and unit testing code that may depend on the database, I have come to the conclusion that ORMs and business objects in C# are usually best. Combined with mocking, you can test the business logic of an application easily, and it is highly recommended to do it like that.

    However, there might be cases when you really need to test the database access, and for those cases you need to use a database. In some cases, like reports, the balance might tip toward database-dependent tests because setting up data can be easier. If you do a timesheet report and you need a few hundred rows in the database to test it, it might be easier to avoid writing the code for a few hundred objects and just make a reset-database stored procedure. We use a great tool for this: Thycotic Database Scripter (http://www.thycotic.com/dotnet_dbscripter.html). It exports the data in a database as a stored procedure. Needs change, you regenerate it. And it is very fast. However, the temptation to do this is big, and it brings the big stored-procedure maintenance risk if not used right. I have found it useful in areas that are very hard to test, but I try to keep away from it as much as possible until I see the real implications of it.

    I don’t like the ActiveRecord as well. The business objects seem like expert classes, too smart. Plus you can’t separate the layers very well, and it is hard to work with transactions. We use NHibernate. I like a very separated approach on tiers, like:


    Dan Bunea

  • http://www.bloggingabout.net/blogs/dennis/ Dennis van der Stelt

    “You do still have to run some tests against the database though.”

    How much is “some tests”?
    And when do I execute these tests? In a daily build? How do I go about excluding these tests on my own machine and still be able to write them and deploy them solely on the daily-build machine? Or am I thinking in the wrong direction?

    Because when I implement some new functionality and also have to do some database testing every time…

    Or do you mean that you just write some tests to see if your database connection works and you can get a few rows from your database?

  • Jeremy D. Miller

    Dennis –

    Glad to help. You do still have to run some tests against the database though. I meant to say something about testing the actual database manipulation code against the database instead of trying to mock ADO.Net itself. That’s the way of pain.

  • Jeremy D. Miller

    BlackTigerX (great screen name btw),

    “Business logic. It’s easier to write and test in C#, period.” — That’s not enough by itself? Ease of testing == ease of modification == longer application life.

    Business logic in sproc’s can also lead to way too much logic duplication and that’s not good either. There’s also the scalability issue and for us the database engine portability thing.

    I know you can make an argument for sproc’s being faster in some cases, but I wouldn’t buy that in very many cases.

    If you’d seen the T-SQL code that I was reading that day you’d understand my antipathy toward sproc’s.

  • BlackTigerX

    in your article “Good and Evil in the Garden of Stored Procedures”, you state:

    “Business logic. It’s easier to write and test in C#, period.”

    so is that all why you think Business Logic in stored procedures is evil? because is easier to write and test in C#?

  • http://www.bloggingabout.net/blogs/dennis/ Dennis van der Stelt

    Great, great article! Thanks so much!

    I’m preparing a presentation I’m giving in a few weeks. I was just looking for this exact info, to show my colleagues that they should not access the database from within their tests.

    Thanks again

  • John Huang

    Great article. I have saved most of your articles. Somehow this one is too wide on IE at 1024×768 screen resolution, and hard to read. Please correct this if possible.